Why do cameras need to change brightness so drastically to take clear photos, but our eyes don't need to?

So for example, if you try to capture a bright area with a camera, the surrounding area can become too dark because the camera adjusts ("dilates") so that the bright area isn't too bright, but our eyes can look at the same scene just fine.

Our eyes dilate too, but not nearly as much as a camera adjusts.

6 Answers

Anonymous 0 Comments

Our eyes have a greater dynamic range than photographs, i.e., eyes can more comfortably deal with bright and dim objects together. The issue is mostly with our display devices; cameras can just about match the eye for capturing images. Printed photos, especially those printed with inks as opposed to developed in a darkroom, have a limited range of contrast.

The situation is improving with HDR (high dynamic range) photography and movies. One of the most important upgrades to TV standards is the addition of HDR, and the latest TV screen technology can display it much better; it's a much more noticeable improvement than the jump in resolution to 4K.

Anonymous 0 Comments

Brightness range. Our eyes can perceive a wide range of brightness levels: we can process an "image" where some parts are literally thousands of times brighter than others. Monitors need to compress this to 256 values from "completely black" to "completely white". Most cheap cameras also cannot capture such a wide range, so they need to lower or raise the total amount of light that gets inside them.

However, modern cameras can capture a wider dynamic range than monitors can display, so some compression is already performed before you ever see the image.
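
To make that compression concrete, here's a rough sketch of squeezing a wide brightness range into the 256 values a monitor can show; the luminance values are made up for illustration:

```python
import numpy as np

# Hypothetical scene luminances (arbitrary units): deep shadow, an indoor
# wall, and a sunlit window -- roughly a 10,000:1 range.
scene = np.array([0.5, 50.0, 5000.0])

# Naive linear scaling into 0-255 crushes the shadow and the wall toward 0.
linear_8bit = np.clip(scene / scene.max() * 255, 0, 255)

# A simple log "tone map" squeezes the same range into 0-255 while keeping
# the darker values distinguishable from each other.
log_8bit = np.log1p(scene) / np.log1p(scene.max()) * 255

print(linear_8bit.round().astype(int))  # dark values collapse to ~0
print(log_8bit.round().astype(int))     # dark values keep usable separation
```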

Anonymous 0 Comments

Dynamic range is the max difference in brightness in a scene where you can make out detail in the brightest and darkest bits simultaneously. Dynamic range is measured in f-stops, where one f-stop is a factor of two in brightness. A great camera can resolve about 15 f-stops difference (~32000x), with a phone somewhere in the 10-14 f-stop range.

The human eye can resolve about 14 f-stops of dynamic range, which is similar to a top-of-the-range camera.

However, the eye *and brain* working together can exceed 24 f-stops (~16 million times brightness difference)!
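
For anyone who wants the f-stop arithmetic spelled out, each stop is a doubling, so the ratios quoted above work out like this (quick sketch):

```python
# Each f-stop is a doubling of brightness, so n stops = a 2**n contrast ratio.
for stops in (10, 14, 15, 24):
    print(f"{stops} f-stops = {2 ** stops:,}x brightness range")

# 10 f-stops =     1,024x   (low end of the phone range)
# 14 f-stops =    16,384x   (roughly the eye, or a good phone)
# 15 f-stops =    32,768x   (the "~32000x" for a great camera)
# 24 f-stops = 16,777,216x  (the "~16M" figure for eye + brain together)
```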

How does this work? Well, your eye actually has a very narrow sharp field of view: most of your vision is quite blurry and only the centre is sharp. Subconsciously you move your eyes around all the time to scan the whole scene, and as your eye looks at different things it dynamically adjusts to their brightness. Your brain is able to maintain this brightness information as you look around, meaning you perceive a very high dynamic range.

A fairer comparison would be filming a scene by scanning across it through a zoom lens: as the camera looks at the shaded and bright areas, it dynamically adjusts its exposure (pretty poorly compared to the eye, but it works nonetheless). If you then post-processed the video, you could make an image where both the bright and dark bits had detail.

A camera exposes the whole scene at once and needs to balance the exposure to try and preserve the most detail.
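
The "post-process the video" idea is basically exposure bracketing and fusion. Here's a toy sketch of that; the scene values, exposure factors, and thresholds are all invented for illustration:

```python
import numpy as np

# Toy exposure fusion: merge several exposures of the same scene so both the
# bright and dark regions keep detail. All numbers here are invented.
scene = np.array([0.02, 0.3, 6.0, 80.0])    # "true" scene luminance
exposures = [8.0, 1.0, 0.01]                # long, normal, and short exposure

def capture(scene, exposure, sensor_max=1.0):
    """Simulate one frame: scale by the exposure, then clip at the sensor limit."""
    return np.clip(scene * exposure, 0.0, sensor_max)

frames = [capture(scene, e) for e in exposures]

# Trust pixels that are neither crushed to black nor clipped to white, and
# average the well-exposed estimates back into scene units.
weights = [np.where((f > 0.05) & (f < 0.95), 1.0, 1e-6) for f in frames]
estimates = [f / e for f, e in zip(frames, exposures)]
merged = sum(w * est for w, est in zip(weights, estimates)) / sum(weights)

print(merged)   # recovers something close to the original scene values
```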

Anonymous 0 Comments

Every light-sensitive cell in our retina is quite complex on its own and is able to adjust its sensitivity individually. They're like their own one-pixel cameras.

Each pixel in a DSLR sensor is much simpler, so they all have to be exposed by the same amount.

Thus your eyes can see a mix of light and dark areas clearly, while (with current technology anyway) a camera can't.
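
A toy way to see the difference between one global exposure and per-cell adaptation (all numbers invented, and the "eye" model here is very loose):

```python
import numpy as np

scene = np.array([0.05, 1.0, 400.0])   # dark corner, mid-tone, bright window
noise_floor = 0.001                    # anything below this is lost in noise

# Camera-style: one global exposure, chosen so the bright window doesn't clip.
camera = scene / scene.max()
print(camera > noise_floor)            # the dark corner falls below the floor

# Eye-style (very loosely): each "one-pixel camera" scales its own input,
# so every region stays comfortably above the noise floor.
eye_like = scene / scene
print(eye_like > noise_floor)          # everything stays usable
```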

Anonymous 0 Comments

There are two parts to how you see (eyes and brain) and two parts to how the iPhone sees (hardware and software).

In both cases, the hardware opens and closes a hole (your pupil, the camera's aperture) that physically lets in more or less light.

The software goes: hey, this is low light, let's switch to dark mode (rods in your eye, and some other stuff in a camera) and open the aperture, but let's also notify the processing side (brain or algorithm, which is essentially the same thing, since your brain has computational ability) to fill in the blanks of the grainy picture and say, hey, that is a door, even though the edges aren't there and it's pixelated.

If the light is too bright, your light sensors (cones in people, and basically the same thing in cameras) bleach.

And all the calculations that let you see contrast and make adjustments are overloaded and can’t work.

If you get enough of that, you permanently damage the physical sensors (retina, film, chip), as with snow blindness.

The iPhone camera is a bit worse than your brain at dealing with variations in light. So if there is a bright spot, like a super-bright reflection, it may adjust to the brightest (or darkest) thing less well than your eyes do.

But there is a limit to what your eyes can do also.

Reflections of headlights are small and still mess up your vision. People who work with lasers deal with amounts of light that are small but powerful, so they wear protection.