Two reasons. The first is that digital devices store colors as numbers, with only a limited set of values and no way to display colors “between” two of them. So if you have an object that's textured in many very close shades of white (or black, or whatever), the picture can't capture the smooth gradients between those colors, and they get flattened into visible bands. It’s sort of like how a topographic map makes a hill look like a series of steps instead of, you know, a hill.
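You can sketch that banding effect in a few lines of Python. This is just an illustration, not how any particular camera works: the 0.95–0.99 brightness range and the 8-bit (256-level) storage are made-up numbers chosen to show the idea.

```python
# Sketch of gradient banding: an 8-bit channel only has 256 levels,
# so many subtly different real-world shades collapse to the same
# stored number, turning a smooth gradient into steps.

def quantize(value, levels=256):
    """Round a brightness in [0.0, 1.0] to the nearest of `levels` steps."""
    return round(value * (levels - 1)) / (levels - 1)

# A gradient of 1000 subtly different near-white shades...
gradient = [0.95 + 0.04 * i / 999 for i in range(1000)]

# ...survives as only a handful of distinct stored values ("steps").
stored = {quantize(v) for v in gradient}
print(len(gradient), "input shades ->", len(stored), "stored values")
```

A thousand distinct input shades end up as only about a dozen stored values, which is exactly the staircase effect described above.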
The second is that our eyes see the moon as much larger than it actually is due to an optical illusion. I don’t remember quite how it works, but bright objects against darkness just look bigger to us, probably so we could pick out eyes reflecting light in the dark while hunting. When you look at a photo instead, that effect is lost, so the moon appears at its actual size.