So what is our goal in making a camera? When we point a camera at something, we want to gather the light from that object onto our camera sensor or a strip of film. Sounds easy enough, right? It’s not that simple, though: we want each point on the object to have its light fall on one point, and only one point, on the sensor. If it falls on other points that also have other light landing on them, the two meld together and the result is a blurry image. Each point of an object radiates light in every direction, so if we just hold a strip of film up to the object it’s not gonna work; everything is going to be blurry.
To solve this, the earliest cameras were what we call pinhole cameras: you just take a box and put a tiny hole in it, and it works. No lens, no anything, something like [this](https://upload.wikimedia.org/wikipedia/commons/thumb/3/3b/Pinhole-camera.svg/800px-Pinhole-camera.svg.png).
This works because two points define one line and only one line. For each point on the object you’re photographing, there is exactly one direction a ray of light can travel to pass through the pinhole and land on the film (or whatever is catching light) on the other side.
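The one-ray-per-point idea is just similar triangles: the ray through the hole carries the point straight onto the film, flipped and scaled by the ratio of the distances on either side of the hole. Here’s a tiny sketch of that geometry; the function name and the specific distances are made up for illustration.

```python
# Toy pinhole projection: each object point maps to exactly one film point,
# because the only ray that counts is the straight line through the pinhole.
# All distances in meters; the negative sign is the image being upside down.

def project_through_pinhole(point_height, object_distance, film_distance):
    """Similar triangles: the image is inverted and scaled by film/object distance."""
    return -point_height * (film_distance / object_distance)

# A point 1 m up on an object 2 m from the hole, with the film 0.1 m behind it,
# lands 5 cm below the axis on the film.
print(project_through_pinhole(1.0, 2.0, 0.1))  # -0.05
```

Note there’s no focusing to do at all: every object distance projects the same way, which is why a pinhole camera has everything in focus at once.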
The only downside is that this single ray is the only light we get to use. That’s not a lot of light, and neither our sensors nor our film are very good at picking up so little. We can try long exposures or flash, but it doesn’t help much.
So we turn to lenses. Lenses bend light: we take the light that radiates out from a point and bend it back onto a single point on our sensor. This way we can use a lot more light, and we can actually open the hole up wider to let even more in.
The only problem with this is that light from objects at different distances comes into the lens at different angles; [here’s not an amazing diagram, but it shows it](https://inst.eecs.berkeley.edu/~ee198-4/fa07/images/week12_dofdistance.gif). Our goal in this image is to make the two rays from each distance meet on the green line, which is the sensor (there are a lot more rays in reality, but it uses two to reduce clutter). As you can see, the light from different distances arrives at different angles, and the lens is only really adjusted to correct for one of them. The result is that things in the foreground and background are blurry; only the stuff at the focused distance is sharp.
This image also shows another fact: by moving the lens forward and back, we can adjust the distance from the lens to the sensor and focus on different distances, which is exactly what cameras do.
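The relationship doing the work here is the thin-lens equation, 1/f = 1/d_o + 1/d_i: for a fixed focal length f, an object at distance d_o forms a sharp image at distance d_i behind the lens. A quick sketch (the 50 mm focal length and object distances are just example numbers) shows why a closer subject means racking the lens farther from the sensor:

```python
# Thin-lens equation: 1/f = 1/d_o + 1/d_i. For a fixed focal length f,
# the closer the object, the farther the lens must sit from the sensor
# for its image to land exactly on the sensor plane. Distances in mm.

def image_distance(focal_length, object_distance):
    """Where the lens forms a sharp image of an object at the given distance."""
    return 1.0 / (1.0 / focal_length - 1.0 / object_distance)

f = 50.0  # example focal length, mm
for d_o in (10_000.0, 2_000.0, 500.0):  # object 10 m, 2 m, 0.5 m away
    print(f"object at {d_o / 1000:.1f} m -> lens-to-sensor distance {image_distance(f, d_o):.2f} mm")
```

Running this, the lens-to-sensor distance grows from roughly 50 mm for a far subject toward 56 mm for one half a meter away, which is the focusing motion you feel when a lens barrel extends.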
How bad the blur gets depends on the aperture. Our pinhole camera is already so focused, thanks to its tiny aperture, that it needs no lens; but the wider we open the hole, the less the camera approximates that pinhole ideal. More light is let in at different angles, a lot of which doesn’t go where it’s supposed to.
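You can put a number on that: an out-of-focus point doesn’t land as a point but as a disc (the "circle of confusion") whose diameter grows in direct proportion to the aperture diameter. This is a simplified geometric sketch, reusing the thin-lens relation, with all the specific numbers chosen just for illustration:

```python
# Blur-circle sketch: light from an out-of-focus point converges to a point
# in front of or behind the sensor, so it crosses the sensor plane as a disc.
# The disc diameter scales with the aperture diameter, which is why
# stopping down sharpens the image. Simplified geometry, distances in mm.

def image_distance(focal_length, object_distance):
    return 1.0 / (1.0 / focal_length - 1.0 / object_distance)

def blur_diameter(aperture, sensor_distance, sharp_image_distance):
    """Width of the converging cone of light where it crosses the sensor plane."""
    return aperture * abs(sharp_image_distance - sensor_distance) / sharp_image_distance

f = 50.0
sensor = image_distance(f, 2_000.0)  # lens focused on a subject 2 m away
stray = image_distance(f, 500.0)     # a point 0.5 m away focuses behind the sensor
for aperture in (25.0, 3.0):         # wide open vs. stopped down, mm
    print(f"aperture {aperture} mm -> blur circle {blur_diameter(aperture, sensor, stray):.2f} mm")
```

Shrink the aperture toward zero and the blur circle shrinks toward a point for every distance at once; that limit is exactly the pinhole camera we started with.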
At least if we don’t want it to go there. I find bokeh pretty; it’s a limitation we have to deal with, but it makes things a bit more fun in my mind.