When scrolling through galleries of photos for residential (and some commercial) properties, it always seems like the perspective is off. But each picture seems a little differently off – almost like a fisheye lens, but not. What are they using to do that and why has that “look” become industry standard, even when we all know the pictures aren’t true to life?
Most phones nowadays come with several cameras: normal (wide), wide-angle, ultra-wide-angle, macro, bokeh, etc. Most of these names come from the lenses (except bokeh, which is an effect, not a lens). Different lenses have different purposes and effects.
What they are using is an ultra-wide-angle lens, which simply fits more of the room into a single photo without needing to get farther back (sometimes you can't, because you're already against the opposite wall or corner of the room). You might also use an ultra-wide-angle lens to fit more friends into a group photo or to get more of a panoramic view into one picture; that's why your phone comes with that lens (and you're missing out if you aren't taking advantage of this option when needed/appropriate xD)
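To put rough numbers on that: how far back do you need to stand to fit a wall into the frame? A quick back-of-the-envelope sketch in Python (the 4 m wall and the field-of-view values are just illustrative guesses, not any particular camera):

```python
import math

def min_distance_to_frame(width_m, fov_deg):
    # Distance needed to fit something width_m meters wide into the
    # frame of a lens with a horizontal field of view of fov_deg degrees.
    # Plain trigonometry, ignoring lens distortion.
    return (width_m / 2) / math.tan(math.radians(fov_deg) / 2)

wall = 4.0  # a hypothetical 4 m wide wall
for fov in (65, 90, 120):  # roughly: normal, wide, ultra-wide
    print(f"{fov:>3}° lens: stand {min_distance_to_frame(wall, fov):.1f} m back")
```

With a 120° ultra-wide you only need about 1.2 m of clearance instead of over 3 m, which is exactly why it wins in a cramped room.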
Ultra-wide-angle lenses have “that look”. It's like seeing with your eye both to the left and to the right at the same time and fitting all of that into a single picture. They can “view” roughly 90° to 120° (depending on the lens). The corners look kind of stretched and deformed while the center looks compressed. This is especially noticeable on faces, human bodies, and circular objects, because you know how a face, a body, or a circle should look, so the deformation toward the edges jumps out at you. Fisheye lenses usually go from about 120° to 200°, so they're kind of the same thing but more extreme.
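That stretching isn't random; it falls straight out of the projection math. A regular (rectilinear) lens keeps straight lines straight by mapping an angle t off-center to a radius of f·tan(t) on the sensor, which blows up near the edges; an (equidistant) fisheye maps t to f·t instead, so nothing stretches radially but straight lines bend. A toy sketch of those textbook formulas:

```python
import math

# Radial stretch of a rectilinear lens at angle t off-center:
# r = f*tan(t), so dr/dt = f / cos(t)^2, i.e. a stretch of 1/cos(t)^2.
# An equidistant fisheye maps r = f*t, so its radial scale is constant.
for deg in (0, 20, 40, 60):
    t = math.radians(deg)
    stretch = 1 / math.cos(t) ** 2
    print(f"{deg:>2}° off-center: rectilinear stretch x{stretch:.2f}, fisheye x1.00")
```

At 60° off-center (the corner of a 120° lens) things come out stretched about 4x compared to the middle, which is why bodies and circles near the edges look so wrong.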
Also, they sometimes use HDR to recover detail from the darkest/brightest parts of the room (this usually comes enabled on your phone by default, but there's a setting for it in your camera app too). HDR means the phone takes 3 or more pics: one very bright, one very dark, and some in the middle. Then it combines them: from the brightest pics it keeps the shadowy areas of the room (which are well exposed there) and discards the washed-out highlights; from the darkest pics it keeps the bright areas, like windows and lamps, that are blown out everywhere else. All of this produces a single pic that looks balanced, without pure-black or pure-white patches. This is also why some low/mid-range phones take a while to shoot even in normal light: aligning and merging several frames costs a lot of computation for a mid-range chip.
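A stripped-down sketch of that merging idea in Python (this is a toy exposure fusion, not how any particular phone's pipeline actually works; there's no frame alignment, ghost removal, or tone mapping here):

```python
import numpy as np

def fuse_exposures(frames):
    # frames: list of float images in [0, 1], all the same shape.
    # Weight each frame per pixel by how well exposed it is there
    # (close to mid-gray), then take the weighted average.
    stack = np.stack(frames)
    weights = np.exp(-((stack - 0.5) ** 2) / (2 * 0.2 ** 2))
    weights /= weights.sum(axis=0, keepdims=True)
    return (weights * stack).sum(axis=0)

# Toy usage: three fake "exposures" of the same gradient scene.
scene = np.linspace(0.0, 1.0, 256).reshape(1, -1)
dark, mid, bright = scene * 0.4, scene, np.clip(scene * 2.5, 0, 1)
fused = fuse_exposures([dark, mid, bright])
```

Each pixel ends up dominated by whichever exposure rendered it best, which is the whole trick.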
Finally, they may boost the colors quite a bit to get a more vivid look. Kind of like travel photos, but a tad less intense.
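That boost is basically a saturation bump. A minimal sketch, again assuming simple RGB float images in [0, 1] (the 1.3 factor is just an example):

```python
import numpy as np

def boost_saturation(rgb, amount=1.3):
    # Push each pixel away from its own gray (luma) value.
    # amount > 1 makes colors more vivid; 1.0 leaves them alone.
    luma = (rgb @ np.array([0.299, 0.587, 0.114]))[..., None]
    return np.clip(luma + amount * (rgb - luma), 0.0, 1.0)
```

Real-estate shots often get exactly this kind of nudge, so the grass looks greener and the sky bluer than they ever were.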