My favorite way to conceptualize a log scale is to imagine a brick wall, where the position of a brick from left to right corresponds to some value you’re plotting.
If you look at the brick wall straight on, it looks like [this](https://encrypted-tbn0.gstatic.com/images?q=tbn:ANd9GcRn-wd2YsRsF-sTytPPbOzLnjpIuY627CS-6Q&s). This is like plotting values on a linear scale: the relative width of a brick in the picture is the same everywhere. This is good for comparing values of similar size, but if our dataset contains both tiny and huge numbers, like 10, 20 and 34512, we’d need to extend the brick wall hugely from left to right in order to show all the values at once. From such a zoomed-out view, 10 and 20 would be so close together that you couldn’t tell them apart, even though 20 is twice as big as 10.
If you look at the brick wall at an angle, it looks like [this](https://www.shutterstock.com/image-photo/brick-wall-angled-view-260nw-462391810.jpg). This is like plotting on a log scale. Now, the further to the right in the image you look, the more bricks fit into the same space on screen. This way, you can keep both bricks that are close to you (low values) and bricks that are far from you (high values) in your view at the same time.
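To see the same effect in numbers rather than bricks, here's a minimal Python sketch using the example values 10, 20 and 34512 from above. It normalizes each value's position on a linear axis and on a log10 axis, showing that 10 and 20 land practically on top of each other linearly but get clearly separated on the log scale:

```python
import math

values = [10, 20, 34512]

# Linear axis: position is proportional to the value itself.
linear_pos = [v / max(values) for v in values]

# Log axis: position is proportional to log10 of the value.
logs = [math.log10(v) for v in values]
log_pos = [x / max(logs) for x in logs]

# On the linear axis, 10 and 20 are separated by a tiny fraction
# of the axis; on the log axis, the gap is easily visible.
print("linear gap between 10 and 20:", linear_pos[1] - linear_pos[0])
print("log    gap between 10 and 20:", log_pos[1] - log_pos[0])
```

The linear gap works out to roughly 0.0003 of the axis width (invisible), while the log gap is about 0.07 of the axis width, which is the "bricks near you look wider" effect from the analogy.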
I have no idea if this makes sense to anyone other than me.