What is information?

We use it all the time, it’s communicated all the time, but I can’t quite understand what it really is, or the nature of it.

4 Answers

Anonymous 0 Comments

It’s just facts and knowledge, often derived from observation or data, stored in people’s brains (or in books or web pages or a variety of other places). Nothing magical about it.

Anonymous 0 Comments

I’m gonna take a risk and try to ELI5 Claude Shannon’s [information theory](https://en.wikipedia.org/wiki/Information_theory). If you’re up for some ELI18 math, check out [his paper](https://people.math.harvard.edu/~ctm/home/text/others/shannon/entropy/entropy.pdf).

First, we have a **message**. A message is literally anything that goes from a sender to a receiver: electrical signals, radio waves, or colored seashells. Usually a message has **meaning**: it’s connected to some property of the real world. To convey information (which I haven’t defined yet), messages need a **coding**: there must be several different possible messages, each corresponding to a different property of the real world. Each distinct message is a **symbol**.

But none of these — message, meaning, or coding — are information. We’ll get to that.

To keep this definite, let’s pick seashells as my *symbols*. I will mail you colored seashells as a message: their *meaning* is connected to the weather. I’ll send you a red one if it’s sunny here, and a blue one if it’s rainy. That’s my *coding*.
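If you like code better than shells, a coding is really just a lookup table. Here’s a tiny Python sketch of the seashell scheme (the names `coding` and `encode` are mine, not anything standard):

```python
# A toy "coding": each weather state (a property of the real world)
# maps to one symbol (a colored seashell).
coding = {"sunny": "red", "rainy": "blue"}

def encode(weather_log):
    """Turn a sequence of weather observations into shell symbols."""
    return [coding[day] for day in weather_log]

print(encode(["sunny", "rainy", "sunny"]))  # ['red', 'blue', 'red']
```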

Suppose I send you this sequence of shells: red, red, red, red, red, red… forever. Maybe it never rains where I live, or maybe I’m lying, or maybe I just suck at this. Have I sent you information? No, not really. After the first few red shells, you’re pretty sure you can predict what the next one will be, so my sending you another red shell doesn’t change your understanding of the weather on my end.

Suppose I send you: red, blue, red, blue, red, blue… and so on. Maybe the weather alternates every day where I live, or maybe I’m just messing with you; either way, if it was red yesterday you can guess it’ll be blue today. You’re not surprised.

But if I send red, blue, blue, blue, red, blue, blue, red, blue, red, something changes on your end. You’re surprised. The shells change your view of what’s happening on my end. You have received information.

And so, information is an **unexpected** message, which **causes a change** in the recipient. It is a literal **element of surprise**. It must be **unpredictable**, and therefore, statistically **random**.
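Shannon made “surprise” quantitative: a symbol that shows up with probability p carries log2(1/p) bits of surprise, and the average surprise over all symbols is called the **entropy**. Here’s a rough Python sketch (caveat: it only counts frequencies, not order, so it would overrate the predictable red/blue/red/blue pattern):

```python
import math
from collections import Counter

def entropy_bits(symbols):
    """Average surprise (Shannon entropy) of the symbol frequencies,
    in bits per symbol: H = sum over symbols of p * log2(1/p).
    Frequency-only: it can't see patterns like strict alternation."""
    counts = Counter(symbols)
    n = len(symbols)
    return sum((c / n) * math.log2(n / c) for c in counts.values())

print(entropy_bits(["red"] * 10))         # 0.0 -- totally predictable, no information
print(entropy_bits(["red", "blue"] * 5))  # 1.0 -- up to one bit of surprise per shell
```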

Claude Shannon developed this concept mathematically, using the laws of statistics to find the limits on how much information (surprise) can be transmitted by messages with a certain complexity. He showed the link between randomness and information, and showed how not-entirely-random messages (like the words I’m writing now) still convey information, but it’s less densely packed than you’d expect. He also showed how information could be transmitted reliably even if there are oc3asionfl probdems with the me2sages.
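That last part is the idea behind error-correcting codes. The crudest possible sketch is a repetition code: say every bit three times and take a majority vote on the receiving end. (Real codes are far cleverer; this is just to show the idea.)

```python
def repeat_encode(bits, n=3):
    """Crude repetition code: send each bit n times."""
    return [b for bit in bits for b in [bit] * n]

def repeat_decode(received, n=3):
    """Majority vote over each group of n repeats."""
    return [int(sum(received[i:i + n]) > n // 2)
            for i in range(0, len(received), n)]

sent = repeat_encode([1, 0, 1])  # [1,1,1, 0,0,0, 1,1,1]
noisy = sent[:]
noisy[1] = 0                     # the channel flips one repeat
print(repeat_decode(noisy))      # [1, 0, 1] -- original message recovered
```

As long as fewer than half the repeats in any group get flipped, the vote recovers the original bit.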

One interesting thing about Shannon’s theory of information: in his view, information is totally separate from *meaning*. My unpredictable seashell messages convey just as much information whether I actually check the weather, or just send you a random one that has nothing to do with the real world. This seems dumb, but it turns out to be useful: it’s just as difficult for a computer to transmit a bunch of meaningless random data as it is to send something meaningful. I mean, just look at Reddit!
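You can actually see this meaning-blindness with any compressor: a predictable message squeezes down to almost nothing, while random bytes are already near maximum information density and barely shrink at all. A quick sketch using Python’s zlib (exact sizes will vary a little run to run):

```python
import os
import zlib

# Highly predictable "message": almost no surprise, so almost no information.
boring_data = b"red" * 3334          # ~10,000 bytes of repetition
# Random bytes: maximal surprise per byte, meaningful or not.
random_data = os.urandom(10_000)

print(len(zlib.compress(boring_data)))  # tiny: the redundancy compresses away
print(len(zlib.compress(random_data)))  # ~10,000: nothing left to squeeze out
```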
