why is a telescope array treated as equivalent to a single giant telescope the same size?


I understand that the bigger your telescope’s dish/mirror, the more information you receive and the better your image/data. But why does an array of relatively small dishes spread out over a mile (for example) get treated as having similar data collection powers as a single dish a mile across?

Is this lazy reporting? Me misunderstanding? Some complex concept that I don’t grasp?

Logically (to me) three twenty-meter dishes should only collect three times the data of a single twenty-meter dish. What am I missing?


3 Answers

Anonymous

Generally that only works for radio telescopes. Whatever we get isn’t a visible-light image anyway, but rather an image we translate from radio waves.

For example, when we got the images of the black hole using radio telescopes all over the Earth, we were trying to simulate a telescope the size of the Earth.

When trying to distinguish between light sources: the farther away they are, the smaller they appear, and once they appear too small we can’t tell them apart. The theoretical best angular resolution of a telescope is θ = 1.22λ/D, where θ is the angular separation (in radians) of the features you’re trying to resolve, λ is the wavelength of the light, and D is the diameter of the telescope.

To put angular resolution into perspective, holding your thumb at arm’s length is about 1° wide in your field of view, the moon is about 1/2°.
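To see what that formula implies, here’s a minimal sketch (the wavelength and dish sizes are example values I’m assuming, not from the answer above) comparing a single 20 m dish with an Earth-sized baseline at a short radio wavelength:

```python
import math

def angular_resolution(wavelength_m, diameter_m):
    """Diffraction limit: theta = 1.22 * lambda / D, in radians."""
    return 1.22 * wavelength_m / diameter_m

# Assumed example values: a 1.3 mm radio wavelength (short millimeter-band),
# a single 20 m dish, and a baseline roughly the diameter of the Earth.
theta_dish = angular_resolution(1.3e-3, 20)          # single 20 m dish
theta_earth = angular_resolution(1.3e-3, 12.742e6)   # ~Earth-diameter baseline

# Convert the single-dish result to degrees for comparison with the
# thumb (~1 degree) and Moon (~0.5 degree) benchmarks above.
theta_dish_deg = math.degrees(theta_dish)
```

The Earth-sized baseline comes out hundreds of thousands of times sharper than the lone dish, which is exactly why spreading telescopes across the planet pays off for resolution.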

So we want a big telescope, but we can’t actually cover the Earth with one, so we use the smaller telescopes as data points, apply our knowledge of how light propagates, and fill in the blanks between them until we get one image.
