What is the difference between spectral bands and spatial bands in sensors?

I’m learning about hyperspectral imaging but I don’t understand spectral bands vs. spatial bands.

Spectral has the same origin as the words speculate and spectacle: it’s about looking. It’s related to telescopes and lenses because an image appears when you look through a spectacle (an old word for an eyeglass lens). Nowadays, we use spectral to mean “of the spectrum of electromagnetic radiation.” That encompasses a *lot,* but to keep things simple, it’s how fast these things we call photons wiggle, which determines the color we see. **Faster = more blue. Slower = more red.**
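If you want to see the faster/slower idea in actual numbers, here’s a tiny back-of-the-envelope sketch (the 700 nm and 450 nm values are just rough picks for red and blue light):

```python
# Light's wavelength and frequency are tied together by the speed of light: c = wavelength * frequency.
C = 3.0e8  # speed of light in m/s (rounded)

def wavelength_nm_to_frequency_hz(wavelength_nm: float) -> float:
    """Return how fast light of a given wavelength (in nm) 'wiggles', in Hz."""
    return C / (wavelength_nm * 1e-9)

print(f"Red  (~700 nm): {wavelength_nm_to_frequency_hz(700):.2e} Hz")  # ~4.3e14 Hz, the slower wiggle
print(f"Blue (~450 nm): {wavelength_nm_to_frequency_hz(450):.2e} Hz")  # ~6.7e14 Hz, the faster wiggle
```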

Spatial bands are more confusing at first, but I promise there’s an easy way to think about them: Have you ever met someone with no spatial awareness? Someone who’s always bumping into things? They’ve got no idea of the **space** they take up.

So when you hear spectral, think colors/frequencies of light. When you hear spatial, think distances (we’ll come back to *which* distances in a moment). This might seem obvious, but it’s an important foundation, and hey, it’s ELI5.

So back before imaging used electro-optical sensors (fancy computers that see light), we used film. Think old black-and-white photos. Those photos could not finely distinguish between all of the **spectral** data that existed, because all they understood was dark or light. Later, we got films that could represent colors by layering them together, effectively taking three color-tinted photos and combining them. That’s basically how imaging works today, except now we can get *really* fine detail. You can think of it like us taking hundreds or even thousands of ‘images’, each at its own very specific shade of color. We’ll often lump several of them together into bands and describe them by their wavelength, usually in nanometers.
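If it helps to see what that “stack of hundreds of images” looks like as data, here’s a minimal sketch of a hyperspectral cube, with completely made-up dimensions (100 x 100 pixels, 200 bands covering 400–2500 nm) and random numbers standing in for real measurements:

```python
import numpy as np

# A hyperspectral image is basically a stack of grayscale images, one per narrow spectral band.
rows, cols, n_bands = 100, 100, 200                  # made-up sensor geometry
band_centers_nm = np.linspace(400, 2500, n_bands)    # each band gets a center wavelength in nanometers
cube = np.random.rand(rows, cols, n_bands)           # fake data standing in for measured brightness

one_band_image = cube[:, :, 42]        # one 'image' at one very specific shade of color, shape (100, 100)
one_pixel_spectrum = cube[50, 50, :]   # one pixel's brightness across every band, shape (200,)
print(one_band_image.shape, one_pixel_spectrum.shape)
```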

Spatial resolution is measured in meters and describes the sensor’s ability to *resolve* an object, that is, to distinguish between separate light sources. That’s a little complex, but put simply: given equal ‘zoom ability’, the further away you are from something, the harder it is to tell things apart. A high-quality satellite photo might be able to see buildings, but how finely can you distinguish them? How much blurring happens between the edges? 10 meters? 1 meter?
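One common way to put a number on this is ground sample distance (GSD): roughly how much ground one detector pixel covers. Here’s a rough sketch of the usual small-angle approximation; the altitude, pixel size, and focal length below are made-up example values:

```python
def ground_sample_distance_m(altitude_m: float, pixel_pitch_m: float, focal_length_m: float) -> float:
    """Approximate ground distance covered by one pixel (small-angle approximation)."""
    return altitude_m * pixel_pitch_m / focal_length_m

# Made-up example: a satellite at 500 km altitude, 10-micron pixels, and a 5 m focal length telescope.
gsd = ground_sample_distance_m(altitude_m=500_000, pixel_pitch_m=10e-6, focal_length_m=5.0)
print(f"GSD is about {gsd:.1f} m per pixel")  # about 1 m: things closer than that blur into one pixel
```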

So for the best-quality photo, you’d ideally want to get close, either physically or with a really long focal-length lens. But if you’re a weather satellite, getting close is a *bad* thing, because now you can’t see the forest for the trees. You want to see the whole atmosphere moving, not just someone’s house. You sacrifice detail for field of regard.
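That trade-off falls straight out of the same geometry: with the same detector, a longer focal length gives you finer pixels on the ground but a narrower swath (the strip of ground you can actually see). A quick sketch using the same made-up numbers as above:

```python
# Same small-angle geometry as the GSD sketch, now also tracking swath width.
altitude_m = 500_000        # 500 km orbit (made up)
pixel_pitch_m = 10e-6       # 10-micron pixels (made up)
detector_width_m = 0.05     # a 5 cm wide detector array (made up)

for focal_length_m in (0.5, 5.0):  # short lens vs. long lens
    gsd_m = altitude_m * pixel_pitch_m / focal_length_m        # ground covered by one pixel
    swath_m = altitude_m * detector_width_m / focal_length_m   # ground covered by the whole array
    print(f"focal length {focal_length_m} m -> GSD {gsd_m:.1f} m, swath {swath_m / 1000:.0f} km")
# Short lens: coarse 10 m pixels but a 50 km swath. Long lens: sharp 1 m pixels but only a 5 km swath.
```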

So finally we get to the root of the question: what’s the difference? A sensor’s ability to detect things is limited by its physical hardware. We could get into the really cool science behind *how* these detectors work, but in short: you can’t build one sensor that does both things perfectly, because it requires different materials. So you’ve got to compromise. A *sensor platform* might carry several sensors, each built to prioritize either spatial resolution or spectral resolution. And you can often filter what you receive to tune that data. Example: I only want to see light that is 620 nanometers in wavelength.
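To make that 620 nm example concrete, here’s a minimal sketch of picking out the band closest to a target wavelength from a hyperspectral cube (same kind of made-up cube as earlier):

```python
import numpy as np

# Made-up cube: 100 x 100 pixels, 200 bands spanning 400-2500 nm, random numbers for data.
band_centers_nm = np.linspace(400, 2500, 200)
cube = np.random.rand(100, 100, 200)

# "I only want to see light that is 620 nanometers in wavelength":
target_nm = 620
band_index = int(np.argmin(np.abs(band_centers_nm - target_nm)))
red_image = cube[:, :, band_index]  # a plain grayscale image of just that narrow slice of color

print(f"Closest band to {target_nm} nm is band {band_index} at {band_centers_nm[band_index]:.0f} nm")
```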

Hopefully that helps. It’s definitely a difficult subject. I don’t want to go too deep on examples like LANDSAT; I just wanted to make sure I was in the right ballpark regarding your question. Feel free to ask if you’ve got more questions!