eli5 Why do camera lenses need to focus on something? Why can’t they just render an image in which everything is clear?


Or maybe only some types of lenses work like that?



Anonymous 0 Comments

It has to do with the camera’s “aperture.”

The aperture is the hole that lets light through, controlled by [those little blades inside of a camera](https://upload.wikimedia.org/wikipedia/commons/f/f8/Lenses_with_different_apertures.jpg) that change how big the hole is. Some cameras, like smartphone cameras, might not have moving blades and instead just have a fixed-size hole that light can pass through.

The way camera focus works is by [taking light from multiple directions and focusing it into a single point](https://www.androidauthority.com/wp-content/uploads/2020/04/Camera-Lens-Focusing.jpg) where it hits the film or digital sensor.

Changing the size of the aperture has an [interesting side effect: it changes how many directions the light can come from](https://www.abc.net.au/science/askanexpert/img/aperture.jpg). Therefore, while a smaller aperture lets in less light (which has to be compensated for with longer exposure times or by boosting the brightness after the fact), more things will be in focus at once. The numbers listed as “f/X.X” in the following image are the aperture settings, with **lower numbers meaning a wider hole**, and the other fraction is how long the image was exposed for: [aperture comparison](http://1.bp.blogspot.com/-a4WUCHdyIzM/UbdpplzzCXI/AAAAAAAAAeY/PSXQKVRPhSg/s1600/equivalent%2Bexposure.jpg)
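
If you want to see that trade-off in numbers, here is a small back-of-the-envelope Python sketch (not taken from the linked image, just the standard idealization that light gathered scales with aperture area, which goes as 1/N² for f-number N, multiplied by shutter time):

```python
# Rough "equivalent exposure" arithmetic for an idealized lens.
# Light collected scales with aperture area (proportional to 1/N^2,
# where N is the f-number) multiplied by the shutter time.

def relative_light(f_number: float, shutter_s: float) -> float:
    """Relative amount of light reaching the sensor (arbitrary units)."""
    return (1.0 / f_number**2) * shutter_s

settings = [
    (2.8, 1 / 500),   # wide aperture, short exposure
    (4.0, 1 / 250),
    (5.6, 1 / 125),
    (8.0, 1 / 60),    # narrow aperture, long exposure
]

for n, t in settings:
    print(f"f/{n:<4} at 1/{round(1 / t):>3} s -> {relative_light(n, t):.6f}")

# All four come out nearly the same: same total light collected.
# The narrower apertures just need more time, and in exchange more
# of the scene ends up in focus at once.
```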

So depending on the aperture size, the cone of light converging toward the sensor will be wider or narrower. And depending on how far away the object you want in focus is, that convergence point may need to be adjusted so it actually lands on the sensor instead of coming together too early or too late and creating a blurry image.
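
To see why the convergence point has to be adjusted at all, here is a minimal Python sketch using the ideal thin-lens equation, with an assumed 50 mm focal length chosen purely for illustration:

```python
# Thin-lens equation: 1/f = 1/d_o + 1/d_i
# For a fixed focal length f, the distance d_i at which the cone of
# light converges behind the lens depends on how far away the subject
# is (d_o).  The sensor sits at one fixed plane, so the lens has to be
# refocused so that the convergence point lands on it.

FOCAL_LENGTH_MM = 50.0  # assumed focal length, for illustration only

def image_distance_mm(subject_distance_mm: float) -> float:
    """Where light from a subject converges behind an ideal thin lens."""
    return 1.0 / (1.0 / FOCAL_LENGTH_MM - 1.0 / subject_distance_mm)

for subject_m in (0.5, 1, 3, 10, 1000):
    d_i = image_distance_mm(subject_m * 1000)
    print(f"subject at {subject_m:>6} m -> converges {d_i:.2f} mm behind the lens")

# The convergence point drifts from ~55.6 mm (close subject) toward
# 50 mm (very distant subject).  Only one of those can coincide with
# the sensor at a time, which is why the lens has to focus.
```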

Anonymous 0 Comments

There are several good answers, but I want to give a literal ELI5.

All lenses have to focus on something, even your own eye. Try this: hold your finger about 15 centimeters (about one banana’s length, if you’re American) from your eyes. Close one eye and look at the finger. You can see the grooves on your finger while everything else looks blurry, but if you look at something farther away while still holding the finger in the same place, the finger becomes blurry.

That’s just how seeing works.

Anonymous 0 Comments

They can; it all depends on the optical design. Some lenses keep everything from a few inches to infinity in focus, and some have a razor-thin depth of field.
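
As a rough illustration (not from the original answer), here is a short Python sketch of the standard hyperfocal-distance approximation, with assumed focal lengths, f-numbers, and a 0.03 mm circle of confusion: a stopped-down wide-angle lens keeps nearly everything from about a meter to infinity acceptably sharp, while a fast telephoto does not.

```python
# Hyperfocal distance: focus there, and everything from roughly H/2 to
# infinity is acceptably sharp.  H ~ f^2 / (N * c) + f, where f is the
# focal length, N the f-number, and c the acceptable circle of
# confusion.  The values below are illustrative assumptions
# (c = 0.03 mm is a common figure for a full-frame sensor).

CIRCLE_OF_CONFUSION_MM = 0.03

def hyperfocal_mm(focal_length_mm: float, f_number: float) -> float:
    return focal_length_mm**2 / (f_number * CIRCLE_OF_CONFUSION_MM) + focal_length_mm

for f_mm, n in [(24, 11), (50, 5.6), (200, 2.8)]:
    h_m = hyperfocal_mm(f_mm, n) / 1000
    print(f"{f_mm:>3} mm lens at f/{n:<4}: hyperfocal ~{h_m:7.1f} m "
          f"(sharp from ~{h_m / 2:6.1f} m to infinity)")
```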

Anonymous 0 Comments

Fundamentally, depth of field exists because lenses have a size. That’s why pinhole cameras, which have an aperture of effectively zero size, have an infinite depth of field. The front of a lens sees the world from a range of slightly different points of view, from the left, right, top, and bottom edges of the lens and all the points in between. Each point of view has a slightly different perspective on the world so each one sees a slightly different image. Combining different images together gives a blurry result. It’s possible to adjust the alignment of the many images so that objects at some distances do align—that’s what lens focusing does—but it can’t work for all subject distances. This is a principle of geometry that even perfect lenses can’t overcome.
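
Here is a small sketch of that geometry, assuming an ideal 50 mm thin lens focused at 2 m and a single out-of-focus point at 5 m (all values chosen purely for illustration): the blur disc on the sensor scales with the physical aperture diameter, and shrinking the aperture toward a pinhole shrinks the blur toward zero.

```python
# Blur-disc size from simple similar-triangle geometry (ideal thin lens).
# The lens is focused at FOCUS_M, so the sensor sits where light from
# that distance converges.  Light from a point at another distance
# converges in front of or behind the sensor, and the blob it leaves
# on the sensor scales with the physical aperture diameter.
# Aperture diameter -> 0 (a pinhole) means blur -> 0.

F_MM = 50.0        # assumed focal length
FOCUS_M = 2.0      # distance the lens is focused at
SUBJECT_M = 5.0    # an out-of-focus point behind the focus plane

def image_distance_mm(subject_mm: float) -> float:
    return 1.0 / (1.0 / F_MM - 1.0 / subject_mm)

sensor_mm = image_distance_mm(FOCUS_M * 1000)       # where the sensor sits
converge_mm = image_distance_mm(SUBJECT_M * 1000)   # where this point converges

for aperture_mm in (25.0, 12.5, 6.3, 1.0, 0.1):     # shrinking toward a pinhole
    blur_mm = aperture_mm * abs(sensor_mm - converge_mm) / converge_mm
    print(f"aperture {aperture_mm:>5.1f} mm -> blur disc {blur_mm:.3f} mm")
```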

To experiment, look at a scene where near and far objects overlap. Now try covering each eye in turn and see how the scene changes. There’s no way to combine both of the views into a single, sharp image, and a lens large enough to cover both of your eyes has exactly the same problem.

Anonymous 0 Comments

A pinhole camera has everything equally in focus, since all the light passes through a single point, no matter how far away the subject is.

But this is also something worth googling; there are lots of detailed articles with illustrative pictures.

Anonymous 0 Comments

Picture a completely black room with only a single point of light, sending out rays in all directions.

Somewhere else in the room is a camera, pointed at that point of light. Now if you think about the front element of the lens of that camera, and all the light rays leaving that point of light hitting it, you can imagine that there’s a solid cone of light going from the point of light to that lens (all the other light is being lost, let’s say).

What happens to that light once it hits the front element of the lens? In a compound lens like an SLR camera lens, the rays that make up that cone enter the lens and get refracted and reshaped, perhaps into a column, perhaps through several stages inside the lens. At some point they pass through the aperture (whose size corresponds to whatever f-stop is set), and then they come out of the back element.

The lens is designed so that the disc of light hitting the back element comes out and forms another solid cone of light. (Of course, where the light converges to a point at the tip of this cone, it just continues on through, so it spreads out again into another cone that just disperses, growing infinitely with no base…at least until it hits something.)

Now, imagine that floating point of light in front of the camera moving up in the frame: what happens to the tip of the cone behind the lens? It moves down. If the point in front moves left, the point coming out the back moves right.

Where is the sensor in all this? Well, the sensor sits behind the lens and collects whatever light falls on it. You could position it so that it cuts the cone off somewhere, and you would get a blob of light on it. But remember, you’re trying to make an image of what’s in front of the camera, which is a single point of light. So you should position the sensor right where that cone of light converges to a single point, and boom, you have an “in focus” image of your point of light!

However, in a real camera you can’t move the sensor closer to or farther from the back of the lens. So, to make the cone coming out the back converge right where the sensor is, you could move the point of light closer to or farther from the front element. Just like moving it left/right or up/down, moving it closer or farther changes the cone coming out the back.

Or, if you actually want people to buy your camera, you design the lens so that it can reshape that cone coming out the back, letting you focus without moving the sensor or the point of light in front of the camera. That is exactly how real lenses work.

Now that you can picture how a single point of light is focused, you can easily imagine how two points of light would work. Notice that if you focus on one of them, unless the other one is in the same focal plane, the cone it corresponds to coming out the back of the lens will converge either behind or in front of the sensor—it will be out of focus.

Finally, just realize that when you have a real scene in front of the camera, that’s just a whole lot of single points of light jammed really close together, each one at a different distance from the lens. When you focus the lens, you’re choosing one focal plane out in front of the lens, and only the points of light in that focal plane will converge to a point on the sensor. All of the points that are farther or closer to the lens will have their corresponding light cones hit the sensor before they converge, or after, and they’ll appear as out of focus blobs.
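
Here is a minimal Python sketch of that last idea, under assumed values (an ideal 50 mm thin lens at f/2.8 focused at 3 m, with 0.03 mm taken as the “acceptably sharp” blur limit): only points near the chosen focal plane land on the sensor as blobs small enough to look sharp.

```python
# For a lens focused at one distance, compute the blur disc left on the
# sensor by points at other distances, and flag which of them stay
# within an "acceptably sharp" limit (the circle of confusion).
# Focal length, f-number, focus distance, and the 0.03 mm limit are
# assumed values chosen for illustration.

F_MM = 50.0
F_NUMBER = 2.8
APERTURE_MM = F_MM / F_NUMBER
FOCUS_M = 3.0
COC_MM = 0.03  # acceptable circle of confusion

def image_distance_mm(subject_mm: float) -> float:
    return 1.0 / (1.0 / F_MM - 1.0 / subject_mm)

sensor_mm = image_distance_mm(FOCUS_M * 1000)

for subject_m in (1.5, 2.5, 2.9, 3.0, 3.2, 4.0, 10.0):
    converge_mm = image_distance_mm(subject_m * 1000)
    blur_mm = APERTURE_MM * abs(sensor_mm - converge_mm) / converge_mm
    verdict = "sharp" if blur_mm <= COC_MM else "blurry"
    print(f"point at {subject_m:>4} m -> blur {blur_mm:.3f} mm ({verdict})")

# Only the points close to the 3 m focal plane come out "sharp";
# everything nearer or farther leaves a blob bigger than the limit.
```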