How does multispectral remote sensing work compared to hyperspectral remote sensing?


I think I understand hyperspectral better than multispectral (please correct me if I'm wrong): several hundred narrow-bandwidth reflectance measurements of whatever is being observed are captured and consolidated into a "data cube", which provides a full "spectral signature" that is specific enough to identify the material.

How does multispectral imaging differ?
Do the relatively broad signatures that are captured still enable some level of material differentiation once compiled? Or is it more along the lines of traditional optical imaging, simply capturing R/G/B/IR bands and overlaying them to create an "image"?

I realize my wording of this request is atrocious… My deepest apologies and thanks for the help in advance!


Hyperspectral imaging IS multispectral imaging, but multispectral imaging is not necessarily hyperspectral imaging.

Hyperspectral just uses many more spectral bands, each with a much narrower bandwidth. How many more? There is no hard definition for where one becomes the other.

It’s like cutting a pie into 4 pieces (multisliced) instead of 100 pieces (hypersliced).
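The pie analogy can be sketched in a few lines of code. This is just an illustration: the wavelength range (400–1000 nm, a common visible/near-infrared span) and the band counts (4 vs. 100) are example numbers, not a definition of either sensor type, and real instruments rarely have perfectly contiguous, equal-width bands.

```python
def make_bands(start_nm, end_nm, n_bands):
    """Split the range [start_nm, end_nm] into n_bands contiguous,
    equal-width bands. Returns a list of (band_start, band_end) tuples in nm."""
    width = (end_nm - start_nm) / n_bands
    return [(start_nm + i * width, start_nm + (i + 1) * width)
            for i in range(n_bands)]

# "Multisliced": a handful of broad bands, ~150 nm wide each
multispectral = make_bands(400, 1000, 4)

# "Hypersliced": many narrow bands, ~6 nm wide each
hyperspectral = make_bands(400, 1000, 100)

print(len(multispectral), multispectral[0])  # 4 (400.0, 550.0)
print(len(hyperspectral), hyperspectral[0])  # 100 (400.0, 406.0)
```

Both sensor types cover the same overall spectral range; the hyperspectral version just resolves it finely enough that the resulting per-pixel spectrum can act as a material fingerprint.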