How do pixel dimensions relate to resolution?


I’m a professional graphic designer. 4 years of college and 3 years of job experience have not been able to explain this.

Why is it that two images with the same pixel dimensions can be different resolutions? As I understand it, a pixel is one dot of color in a larger image, and resolution is a measurement of pixel density, so two images displayed at the same size with the same number of pixels should always be the same resolution.

I created an image for an email signature at about 1200 pixels wide. When implemented, the computer scaled it down to 299 pixels wide so it would fit, and it looked perfectly crisp and clear. This part makes sense to me.

To minimize the load time and storage for the image, I scaled it down to 299 pixels wide in Photoshop – exactly the same size it was in the signature – but it came out far lower resolution than it was on the email signature.

How can this be? This isn’t the first time I’ve encountered this issue. In fact, I run into it so often that I normally avoid using pixels as a measurement for images entirely. I’ve googled it many times with no solid answer. I struggle to understand why we even bother measuring images in pixels if the measurement doesn’t mean anything.


6 Answers

Anonymous

When you say lower resolution, do you mean it looked blurry? I take it the size in pixels was the same.

Depending on how the downscaling is performed, the software may not simply throw away pixels. Instead, it can take the data from the higher-resolution image and compute the best approximation of it at the lower pixel count, adjusting how each remaining pixel is displayed, rather than just decreasing the resolution of the original in image editing software.

You have more information to work with, so which pixel ends up which colour (or an average of multiple colours, if interpolation is going on) will be different from just reducing the resolution. With a good downscaling algorithm, you start from more information than you need and approximate something that looks good at a smaller number of pixels; if you simply reduce the actual resolution, you are removing that information outright. As a hypothetical case, say you've got red and green next to each other in the image: a downscaling algorithm may make some pixels some shade of yellow, or darker/lighter shades, so that the two parts look better together.
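To make the red/green hypothetical concrete, here's a minimal sketch in plain Python (treating an "image" as one row of RGB tuples, which is an assumption for illustration, not how any real editor stores images). It contrasts dropping pixels outright with averaging neighbouring pixels, the simplest form of interpolation:

```python
def nearest(row):
    """Halve the row by keeping every other pixel; discarded data is just lost."""
    return row[::2]

def box_filter(row):
    """Halve the row by averaging each pair of pixels, blending their colours."""
    return [tuple((a + b) // 2 for a, b in zip(p, q))
            for p, q in zip(row[::2], row[1::2])]

RED, GREEN = (255, 0, 0), (0, 255, 0)
row = [RED, GREEN, RED, GREEN]

print(nearest(row))     # keeps only the red pixels: [(255, 0, 0), (255, 0, 0)]
print(box_filter(row))  # blends red and green into a dark yellow: [(127, 127, 0), (127, 127, 0)]
```

The averaged version preserves a hint of both original colours in each output pixel, which is why a well-interpolated downscale can look smoother than a crude one even at the same pixel count.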

It may be doing something similar to ClearType for text, but for images: https://learn.microsoft.com/en-us/typography/cleartype/ It gives a good example of how you can use not just the image you want to display, but also information about how a monitor is built, to improve how something looks at the same "resolution".

Side note: most monitors have an RGB subpixel layout, and ClearType is optimized for that layout. A few monitors are BGR, and if ClearType is enabled on those, text actually looks worse with it. Pixel-interpolation downscaling can also sometimes go a tad awry, and the result doesn't look as good.

ETA: Different downscaling algorithms will have results of varying quality depending on the image.
