Video was still just a signal then. Today we work with "digital" signals, meaning the picture is encoded as electrical "on" and "off" pulses, like all computer data. Back then it was an "analog" signal: a continuously varying voltage carried as radio waves, magnetic patterns on tape, and so on.
If you spend a lot of time studying how digital and analog video work, you'll find they have a lot in common. Each does things a little differently, but in the end the goal is the same: produce a series of images to be played on a TV.
So, then and now, we can build a machine that takes the signal from the camera but *adds* something new to that signal. If you don't plug a camera into its input, the output would be text or whatever on a black image. All the machine is really doing is analyzing the signal, figuring out what a "frame" looks like, "drawing" text on top of it, then outputting a new signal where the text sits on top of the input image.
That’s “it”. They had machines that were made to add text and other graphics to a video signal and output a new signal with both things combined. Back then those machines were pretty darn expensive, but now we can use plain old computers to do it.
It’s called *chroma keying.*
The graphic was created/shot on another video feed. Everything that wasn't the graphic was a solid primary colour (usually blue back then; now they use green).
The video mixer would overlay everything from the graphic feed that wasn’t that primary color, essentially removing one color from the graphic video feed. Graphics were designed to look good with only two colors to facilitate chroma keying.
On black-and-white TV, the graphic background was black, and everything that wasn't black became part of the overlay.
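The keying idea described above can be sketched in code. This is a rough digital approximation, assuming frames are 8-bit RGB NumPy arrays; the function name, key colour, and tolerance threshold are made up for illustration. A real analog keyer worked on the signal itself and softened the edges of the mask.

```python
import numpy as np

def chroma_key(background, graphic, key_color=(0, 255, 0), tolerance=60):
    """Composite `graphic` over `background`, treating graphic pixels
    close to `key_color` (green here) as transparent.
    Both inputs are H x W x 3 uint8 arrays of the same shape."""
    bg = background.astype(int)
    gfx = graphic.astype(int)
    # How far each graphic pixel is from the key colour.
    dist = np.abs(gfx - np.array(key_color)).sum(axis=-1)
    mask = dist > tolerance  # True where the graphic should show through
    out = np.where(mask[..., None], gfx, bg)
    return out.astype(np.uint8)

# Tiny demo: a grey "program" frame and an all-green graphic frame
# with one white "text" pixel in the corner.
bg = np.full((2, 2, 3), 128, dtype=np.uint8)
gfx = np.zeros((2, 2, 3), dtype=np.uint8)
gfx[..., 1] = 255            # solid green everywhere (transparent)
gfx[0, 0] = (255, 255, 255)  # the white lettering
out = chroma_key(bg, gfx)    # white pixel kept, green replaced by grey
```

The white pixel survives because it is far from the key colour; every green pixel falls within the tolerance and is replaced by the background feed.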
The earliest, simplest BW television titling was done by simply adding a synchronized camera output to the signal before broadcast. The camera would be pointed at a black board with white lettering. Since "black" is 0 volts, you just see the program feed; white is 1 volt, and adding it to the feed gives more than 1 volt, which gets clipped back to 1 volt, which is "white".
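The add-and-clip trick above can be sketched numerically. This is an illustrative toy, assuming the signal is modeled as a list of voltages on a 0.0-to-1.0 scale (0 = black, 1 = white); the function name and the sample scanline are invented for the example.

```python
def add_title(program, title):
    """Add the title-camera signal to the program signal sample by
    sample, clipping anything above 1 volt back to 1 volt (white)."""
    return [min(p + t, 1.0) for p, t in zip(program, title)]

# One scanline: a mid-grey program image (0.5 V everywhere), and a
# title camera seeing a black card (0 V) except one white letter (1 V).
program = [0.5, 0.5, 0.5, 0.5]
title   = [0.0, 1.0, 0.0, 0.0]
line = add_title(program, title)  # [0.5, 1.0, 0.5, 0.5]
```

Where the title card is black, adding 0 V changes nothing and the program shows through; where the lettering is white, the sum exceeds 1 volt and clips to pure white.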
The complex part was having the title camera's output synchronized to the cameras recording the event. An NTSC signal has timing and calibration pulses mixed in with the actual image. Getting all of the cameras to generate those pulses at exactly the same time is required to stop the picture "glitching" when switching cameras, and to have the title text show up where you want it over the image.
This synchronization is done by Phase Lock Loop, a topic for another ELI5.