Obviously it was a thing for film cameras, but now that everything is digital, something like “just make the picture darker” seems extremely easy to do with software
quick edit, I know what ND filters are for and how to use them, no need to explain. it just seems to me that it could be engineered in a way that doesn’t require them, which is what I’m asking about
Digital cameras work by using special material (CMOS semiconductors) that produces an electric signal when hit by light, sort of but not exactly like a bunch of small solar panels. This signal is then amplified (similar to volume on a headphone) and converted into a whole number so it can be stored on computers. The more light that hits the camera in a given spot the higher the number, which creates a brighter region in the image.
But there is a limit: at some point you max out the signal, so even if more light hits the sensor the image won't get any brighter. There is also a minimum amount of light. This is similar to how music gets distorted if you turn a speaker all the way up; the speaker is simply not capable of being louder, so it cuts off sounds in the music.
You can edit digital images, but you can’t get back anything that isn’t recorded to the image in the first place. If the signal gets maxed out, you will lose all detail in that part of the image. This cannot simply be corrected with software, because the detail was never in the digital image to begin with.
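A toy sketch of that clipping in code (the 8-bit maximum of 255 and the brightness numbers are made up for illustration): once two differently bright regions both hit the cap, no amount of software darkening can tell them apart.

```python
def record(true_light, max_value=255):
    """Quantize incoming light to a whole number, clipping at the sensor's maximum."""
    return min(round(true_light), max_value)

bright = record(300)         # too bright: clipped to 255
really_bright = record(900)  # much brighter, but also clipped to 255

# Darkening in software scales both by the same amount, so the
# difference between them is gone for good.
darkened = [value // 2 for value in (bright, really_bright)]
print(bright, really_bright, darkened)  # 255 255 [127, 127]
```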
There are a few main ways to adjust your image to get the right amount of light, but they can also have other side effects:
1. Exposure Time and Shutter Speed. The exposure time is how long the camera collects light that hits it. Each pixel basically “adds up” all the light over this time. Most cameras (though not all) also regulate this with a small screen called the shutter, which blocks the sensor and scans across the image; the timing of this can also be adjusted, allowing for very short exposure times.
You can turn down the exposure time if the image is too bright, but this has 2 possible unwanted effects:
The less time the camera has to gather light, the less likely it is to detect it. This is because the camera only has some probability to actually detect light hitting it (due to physics). Low exposures can create random noise on the image if some pixels detect more light by chance (called shot noise), but with longer exposures the probability evens out across the image and you get less noise.
Long exposures create motion blur, which can be stylistic. For example, a long exposure of a river is likely to smooth out the appearance of the water and make it more artistic than photorealistic. Short exposures lose this.
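The shot-noise effect follows from Poisson statistics: the signal grows with exposure time while the noise only grows with its square root. A small sketch (the photon rate is a made-up number):

```python
import math

def shot_noise_snr(photons_per_second, exposure_seconds):
    """For Poisson-distributed light, signal = rate * t and the noise
    (standard deviation) = sqrt(rate * t), so the signal-to-noise
    ratio improves with the square root of the exposure time."""
    signal = photons_per_second * exposure_seconds
    return signal / math.sqrt(signal)

print(shot_noise_snr(1000, 0.001))  # 1 ms exposure:   SNR = 1.0
print(shot_noise_snr(1000, 0.1))    # 100 ms exposure: SNR = 10.0
```

A 100x longer exposure only improves SNR 10x, which is why very dim scenes need disproportionately long exposures.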
—
2. ISO/gain. This is the electronic amplification applied to the signal, basically spreading out the pixel values so that when you digitize them into whole numbers you retain the most information. The catch is that there is a minimum gain – no amplification at all – which for very bright scenes may not be low enough.
Also, if the image has very bright but also very dark spots, low gain gives poor dynamic range (how spread out the pixel values are in the final digital image), so the darker areas may be undersaturated and lose detail.
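A hypothetical sketch of that dynamic-range point: the analog signal is multiplied by the gain and then rounded to a whole number (8-bit here), so with too little gain, slightly different shadow tones collapse into the same value. The brightness values and gains are invented for illustration.

```python
def digitize(analog, gain, bits=8):
    """Amplify an analog reading, then round it into the sensor's integer range."""
    return min(round(analog * gain), 2**bits - 1)

shadows = [0.8, 1.1, 1.4, 1.7]  # slightly different dark tones

print([digitize(v, gain=1) for v in shadows])   # [1, 1, 1, 2]     -- detail collapses
print([digitize(v, gain=40) for v in shadows])  # [32, 44, 56, 68] -- tones stay distinct
```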
—
3. F number/numerical aperture. This is a more complicated setting, but basically it is a measurement of what directions your camera captures light from. A low F number captures light from a wide range of angles, and a high F number captures light from a narrower range – meaning less light overall and a darker image.
However, F number also affects the focus of the camera. Low F numbers have a narrow range of focus, so the background will be blurred. High F numbers have a large range of focus, so both the foreground and background can be seen.
In addition, this number influences the resolution of the camera, meaning changing the F number can affect the size of details you can clearly see. Exactly how it does this depends heavily on the lens, but know that changing F number may blur or sharpen certain details even when in focus.
Also, some lenses may not have a variable F number, or may only offer a limited range, so you are not always free to change this.
So, most photographers will choose F number based on artistic style, the lens they have, and other technical requirements for the desired image. Changing this to alter light levels is usually not preferred.
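For the light-gathering side of this, the standard relationship is that the light reaching the sensor scales as 1/N², which is why each full stop multiplies the F number by about √2. A quick sketch:

```python
def relative_light(f_number):
    """Light admitted relative to a hypothetical f/1 lens: proportional to 1/N^2."""
    return 1.0 / f_number**2

# Stepping through the standard full stops roughly halves the light each time.
for n in (1.4, 2.0, 2.8, 4.0, 5.6):
    print(f"f/{n}: {relative_light(n):.3f}x the light of f/1")
```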
—
ND filters on the other hand just uniformly reduce the brightness of light coming in. You can still keep a longer exposure, use enough gain to get a good dynamic range, and pick an F number based on image requirements while not over exposing the image. This maintains the artistic capabilities of the camera. Of course, in some situations ISO or exposure time is enough and is preferred to an ND filter. It all depends on what the image calls for and what tools you have on hand.
Another factor, especially with very bright subjects like solar photography, is that the light can be intense enough to physically damage the camera. In that situation an ND filter is absolutely required.
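The arithmetic photographers use for ND filters is simple: a filter is rated by how much it divides the light (an ND8 passes 1/8 of it, i.e. 3 stops), and the shutter can then be lengthened by the same factor to keep the exposure. A sketch (the base shutter speed is just an example):

```python
import math

def nd_stops(nd_factor):
    """Stops of light removed by an ND filter: log base 2 of its factor."""
    return math.log2(nd_factor)

def equivalent_shutter(base_seconds, nd_factor):
    """Shutter time needed to keep the same exposure behind the filter."""
    return base_seconds * nd_factor

print(nd_stops(8))                   # 3.0 stops
print(equivalent_shutter(1/125, 8))  # 0.064 s, roughly 1/15 s
print(round(nd_stops(100000), 1))    # 16.6 stops -- solar-grade ND100000
```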
For those who don’t know:
In photography and optics, a neutral-density filter, or ND filter, is a filter that reduces or modifies the intensity of all wavelengths, or colors, of light equally, giving no changes in hue of color rendition. It can be a colorless (clear) or grey filter. The purpose of a standard photographic neutral-density filter is to reduce the amount of light entering the lens. Doing so allows the photographer to select combinations of aperture, exposure time and sensor sensitivity that would otherwise produce overexposed pictures.
About as ELI5 as I can make it:
Digital camera sensors effectively collect light, transform it into electrons, and collect those electrons in a “bucket” for a certain time (the exposure time) waiting for the image to be read out. If you have a very low light level, and you have an almost empty bucket, you can amplify the signal and the limit is set by the noise, which is greater the more you amplify. But if you have a lot of light, the limit is actually the size of the bucket: once it’s full, it “overflows” and you read it as a “clipped white”.
If you want a long exposure (for creativity or aesthetic reasons: maybe you want some motion blur), but the scene is too bright, you have to use an ND filter to reduce the amount of light hitting the sensor to avoid overflowing the bucket (or in some cases, you could also take multiple pictures and merge them in software).
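The bucket analogy can be sketched with a toy full-well model (the capacity and electron rate are made-up numbers): one long exposure overflows and clips, while several short frames summed in software keep the full value.

```python
FULL_WELL = 1000  # hypothetical bucket capacity, in electrons

def expose(electrons_per_second, seconds):
    """Fill the bucket for a while; anything past the brim is simply lost."""
    return min(electrons_per_second * seconds, FULL_WELL)

rate = 500  # electrons per second in a bright part of the scene

single = expose(rate, 4)                         # 2000 wanted, clipped to 1000
merged = sum(expose(rate, 1) for _ in range(4))  # four 1 s frames: 4 * 500 = 2000
print(single, merged)  # 1000 2000
```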
Now, why can’t they make the bucket bigger? A single pixel contains both the bucket and the photodiode (which converts light into electrons to fill the bucket), so for a given pixel size, if you want a bigger bucket, your photodiode becomes smaller and your pictures will be worse when the scene is very dim.
In reality, sometimes the photodiode IS also the bucket, but things can get pretty complicated pretty quickly…
It’s all to do with the amount of light reaching the sensor inside the camera and how much light the sensor can handle!
The sensor has a maximum and minimum amount of light it can deal with. Any light beyond its maximum is recorded as 100%; the sensor doesn’t have the range to tell the difference between bright and really, really bright. Once something has been recorded at 100%, there’s nothing you can do in software: if you try to make the image darker, the bright and really bright bits get darkened by the same amount, because they both got measured at 100%.
There are five ways you can change how much light hits your sensor. Three of them are known as the exposure triangle:
Aperture – an expanding and contracting ring that physically blocks light. This also has the effect of changing the “depth of field”, which is basically the depth of the area in focus; the smaller this is, the more cinematic the look of the photo/video.
Shutter speed – essentially how long the shutter is open and allowing light into the camera. This also affects the motion blur of the image: if the shutter is open longer you capture more “time”, and if the thing you’re capturing moves in this time you get a kind of ghosting effect called motion blur!
ISO/Gain – this is basically how much the sensor’s signal is amplified. You can turn the gain down, which has the effect of making the sensor less sensitive to light, so it can “see” brighter things and tell them apart. Turning it up has the effect of increasing noise, like the white noise on old TVs. This is because there’s always random electronic interference in all things, and as you amplify the signal you amplify this interference along with the actual data.
So each of those three settings has a trade-off. In an ideal world you want minimum noise, a depth of field that suits your look (normally narrow and cinematic), and a shutter speed that gives a natural amount of motion blur (our eyes have motion blur too, so an image with none at all can look very unnatural and fake; too much and you can’t see what’s going on).
When you have those settings correct and the exposure/brightness still isn’t what you want, you have two more options: more lights! And ND filters, to just block x% of the light.
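The bookkeeping behind that exposure-triangle balancing act is often done in “exposure value” units, EV = log2(N²/t): settings with the same EV admit the same total light, so a one-stop change in aperture can be traded against a one-stop change in shutter speed. A sketch (the settings are arbitrary examples):

```python
import math

def exposure_value(f_number, shutter_seconds):
    """EV = log2(N^2 / t); a higher EV means less light reaches the sensor."""
    return math.log2(f_number**2 / shutter_seconds)

# Closing the aperture one stop while doubling the shutter time
# keeps the exposure (nearly) unchanged.
print(round(exposure_value(2.0, 1/60), 2))
print(round(exposure_value(2.8, 1/30), 2))
```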
ND filters are often more relevant in film than photography as in photography you can use shutter speed much more and the motion blur is less relevant.
I take photos of the Sun’s surface with my camera, which has an ND100000 filter installed. It’s an extreme example, but I don’t think it would be practical to engineer a sensor to photograph the Sun directly without reducing the light entering it in the first place. It gets extremely hot and would fry the sensor quite quickly. It’s like using a magnifying glass to burn paper.
Exposure time, film speed, aperture are all ways to control the amount of light coming in. This affects more than just how dark or light an image is. Aperture for example is used to control depth of field. Exposure time controls motion blur. If there’s a specific depth of field and exposure time you want to use but this would over expose the image, ND filters can be used to lower the amount of light coming in.