How does a digital camera turn light from a lens into a series of 0s and 1s?


The sensor in a camera is made up of millions of tiny capacitors that can hold an electrical charge.

Light from the lens is turned into a charge in the capacitor, with more charge building up the more intense the light is.

Circuitry in the camera is then responsible for shifting that charge off the array, where it is converted into a voltage. Those voltages are then sampled and stored as a sequence of 0s and 1s for later display.

Essentially, the sensor uses some variant of the photoelectric effect: the incoming light hits specially designed materials, essentially ‘knocking’ electrons out of the atoms so they can move freely. Because of the way the circuit is designed, those freed electrons prefer moving in one direction rather than the other, so the light hitting the sensor directly translates to a difference in charge between one end of the sensor and the other.

This difference in charge can be detected and amplified into a voltage, which can then be chopped up into a series of fixed levels by comparing it against reference voltages. Those fixed levels can then be expressed mathematically as a series of 1s and 0s and saved/transmitted digitally.
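The "comparing it against reference voltages" step can be sketched in code. This is a toy model of a 3-bit converter (all voltage values are invented for illustration): it counts how many reference thresholds the input voltage exceeds, much like the comparator ladder in a flash ADC, and writes that level out in binary.

```python
# Toy 3-bit analog-to-digital conversion (illustrative values only).
V_MAX = 1.0          # assumed full-scale voltage
N_BITS = 3
LEVELS = 2 ** N_BITS

# Evenly spaced reference voltages to compare against
thresholds = [V_MAX * k / LEVELS for k in range(1, LEVELS)]

def quantize(voltage):
    """Count how many reference voltages the input exceeds,
    then express that fixed level as 1s and 0s."""
    level = sum(voltage > t for t in thresholds)
    return format(level, f"0{N_BITS}b")

print(quantize(0.0))   # dim pixel   -> '000'
print(quantize(0.55))  # mid pixel   -> '100'
print(quantize(0.99))  # bright pixel-> '111'
```

A real converter uses more bits (often 12–14 in cameras), but the principle is the same: a continuous voltage becomes one of a fixed number of levels, and each level has a binary name.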

There’s a pretty good diagram and explanation here. Basically you put red, green, and blue colour filters in front of photosensitive material (the sort of element that allows electricity to flow when light shines on it, like in a nightlight), and when they’re arranged in a grid pattern you can start to assemble a 2D image from them.
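As a rough sketch of that filter arrangement (the 2x2 layout here is a simplified Bayer-style pattern; real sensors vary), each photosite only sees the one colour its filter lets through, and the pattern repeats across the grid:

```python
# Simplified repeating colour-filter pattern over a sensor grid
# (illustrative layout, not any specific sensor's design).
FILTER_PATTERN = [
    ["R", "G"],
    ["G", "B"],
]

def filter_at(row, col):
    """Which colour filter sits over photosite (row, col)."""
    return FILTER_PATTERN[row % 2][col % 2]

print(filter_at(0, 0))  # 'R'
print(filter_at(3, 2))  # 'G'
print(filter_at(1, 1))  # 'B'
```

The camera later interpolates the missing two colours at each photosite from its neighbours to produce a full-colour pixel.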

Fun fact: you can also do this with a single photodiode by scanning it left, right, up, and down. Fax machines and scanning electron microscopes essentially work like this, and old CRT TVs work in reverse (a single electron beam scans left to right, row by row, not to sense light but to project it onto a phosphorescent screen). Scanning just takes more time, so if you can pack a bunch of photosensors into a tiny grid, that’s better. Even then, a grid of digital sensors usually gets read out as a single stream of data, row by row, for simplicity and cost savings.
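That row-by-row readout into a single stream is easy to sketch (sensor values here are made up):

```python
# Toy raster readout: a 2D sensor grid flattened into one stream,
# top to bottom, left to right (invented sample values).
grid = [
    [0, 3, 1],
    [2, 0, 3],
]

stream = []
for row in grid:        # top to bottom
    for value in row:   # left to right within each row
        stream.append(value)

print(stream)  # [0, 3, 1, 2, 0, 3]
```

The receiving end only needs to know the grid width to fold the stream back into rows.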

Imagine a grid of squares: if light falls on one you colour it black (“1”), and if no light falls on it you colour it white (“0”). With a large number of small squares in a grid, zooming out gives you a black and white image.
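That one-bit grid can be sketched directly (the light intensities and the 0.5 cutoff are invented for illustration):

```python
# One-bit image: each square becomes 1 if enough light falls on it, else 0.
light = [
    [0.9, 0.1, 0.8],
    [0.2, 0.7, 0.0],
]

THRESHOLD = 0.5  # assumed cutoff between "light" and "no light"
bitmap = [[1 if x > THRESHOLD else 0 for x in row] for row in light]

for row in bitmap:
    print(row)
# [1, 0, 1]
# [0, 1, 0]
```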

If you want more distinction than black or white, you could use a scale from, e.g., 1 to 8 for no light to lots of light, and represent each number in binary (i.e. 1s and 0s) on the grid. For more detail you could increase the scale from 1–8 to 1–16, or make the squares smaller so you could fit more in. Colour would work the same way, except you’d essentially have three grids, one for each colour.
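The scale idea above maps directly to bits: an 8-level scale needs 3 bits per square, a 16-level scale needs 4, and a colour pixel is just three such levels side by side. A small sketch (levels chosen arbitrarily):

```python
# Writing a brightness level as binary; a wider scale means more bits.
def to_bits(level, n_bits):
    """Express a level on a 2**n_bits scale as a fixed-width bit string."""
    return format(level, f"0{n_bits}b")

print(to_bits(5, 3))   # 8-level scale  -> '101'
print(to_bits(5, 4))   # 16-level scale -> '0101' (one extra bit of detail)

# A colour pixel: one level per channel (red, green, blue), invented values
pixel_rgb = (6, 2, 7)
print("".join(to_bits(c, 3) for c in pixel_rgb))  # '110010111'
```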