Is there a limit to how much you can learn? Is there a point where a human brain won’t be able to store any more new information?

Obviously this doesn’t happen currently, but assuming there were some techniques or devices for learning more, would there be a limit to how much the neural networks in our brains can store?

Anonymous 0 Comments

[Bekenstein bound](https://en.wikipedia.org/wiki/Bekenstein_bound) limit from a physics perspective:

Scroll to just before the footnotes at the end…
This means that the number O = 2^I of states of the human brain (where I is its information content in bits) must be **less than** ≈10^(7.8*10^41).

Note: This is 10 raised to the 7.8*10^41 power; for comparison, a terabyte is 8*10^(12) bits and a googol is 10^(100). Or in long form, 10^(780,000,000,000,000,000,000,000,000,000,000,000,000,000) states.
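
If you want to sanity-check that comparison, here’s a minimal Python sketch. It assumes nothing beyond the quoted exponent 7.8*10^41; the ~2.6*10^42-bit information bound it backs out is consistent with the figure in the linked Wikipedia article.

```python
# Sketch of the arithmetic behind the Bekenstein-bound comparison above.
# Only input: the quoted exponent, i.e. the brain has fewer than ~10^(7.8e41) states.
import math

exponent = 7.8e41                      # O < ~10^exponent distinguishable states
bits = exponent / math.log10(2)        # O = 2^I  =>  I = exponent / log10(2)

print(f"implied information bound: about {bits:.1e} bits")    # ~2.6e42 bits
print(f"for scale: a terabyte is {8e12:.0e} bits, and a googol is 10^100")
print("even the *exponent* here (7.8e41) dwarfs a googol's exponent (100)")
```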

Anonymous 0 Comments

In terms of short-term memory, yes. In terms of long-term memory, yes. In terms of instinctual memory, yes.

Anonymous 0 Comments

I’ll just say that there’s a very popular case, reposted all over the internet, about a guy who lost 90% of his brain and didn’t even notice.

One thing I do know is that when dying neurons destroy a neural pathway, the brain just creates another one, which suggests we have lots of spare capacity: whatever you learn is, objectively, just another neural pathway.

Anonymous 0 Comments

While the exact neural network topology used in mammalian brains for something like memory storage isn’t fully known, you can take an area of the brain like the hippocampus in isolation and see that its behavior should follow at least some models of artificial neural nets (ANNs). (ANNs can be a good model for many well-studied systems within brains, and this is knowable because connectomes — complete neuron wiring maps, and thus neuron-by-neuron simulations — are now available for some animals.) ([There are several kinds of memories stored in several areas of the brain](https://qbi.uq.edu.au/brain-basics/memory/where-are-memories-stored), so one must always be cautious about over-generalizing.)

So let’s look at an ANN designed explicitly around memory storage — the [modern Hopfield net](https://en.wikipedia.org/wiki/Hopfield_network#Dense_associative_memory_or_modern_Hopfield_network), which is a generalized version of the classic model from computational neuroscience. If you push it to the limits of storage capacity, with or without error tolerance (discussed in ref #10), the number of memories you can store scales exponentially with the number of (feature, i.e. input-output) nodes/neurons (*N*) as

[*Max number of memories*] ≈ 2^(*N*/2)

Of course, there are trade-offs when you optimize a network for storage like this — of particular concern is the reliability and error tolerance of recall, which drops significantly in the modern Hopfield net as you push for more capacity. (However, it doesn’t drop to zero, which is the point.) This is in extreme contrast to what is taught (or at least what I was taught) for the “typical” neural net (multilayer perceptron), which is to expect a capacity somewhere between log *N* and *N*^2 (iirc, and I’m skipping details); with that architecture, the number of neurons in the hippocampus may not be enough to explain the number of memories we humans seem to consistently and accurately recall.

(For a super-silly estimate, [the human hippocampus contains about 40 million neurons](https://pubmed.ncbi.nlm.nih.gov/2358525/) — though only parts are used in memory storage, it’s still gonna be millions, so let’s say *N* = 1 million. If the network architecture for storage is some sort of perceptron (the typical ANN I mentioned) then that would maybe allow 1 trillion memories. If it’s a modern Hopfield net, however, then the 2^(500k) memory capacity overflows a basic big-number calculator — it is effectively infinite.)
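
(As a quick back-of-the-envelope check on those two numbers, here’s a small Python sketch; *N* = 1 million is the assumed neuron count from the estimate above, and both formulas are just the ballparks mentioned, not measured capacities.)

```python
# Ballpark capacity comparison for an assumed N = 1,000,000 storage neurons.
import math

N = 1_000_000

# Perceptron-style ballpark from above: on the order of N^2 memories.
perceptron_ballpark = N ** 2                     # 10^12, i.e. roughly a trillion

# Modern-Hopfield ballpark: ~2^(N/2) memories.  Rather than materializing that
# integer (it has ~150,000 decimal digits), just count its digits.
hopfield_digits = math.floor((N / 2) * math.log10(2)) + 1

print(f"perceptron-style estimate: about {perceptron_ballpark:,} memories")
print(f"modern Hopfield estimate: 2^(N/2), a number with {hopfield_digits:,} digits")
```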

One issue with applying optimized models like these to biological systems is that a neural network in the brain is limited to the functionality its cells and glia come with. Changing the update function in something like a modern Hopfield net is a single line of code (with no retraining/relearning), while an analogous change for neurons in the brain would probably require creating a new kind of neuron altogether for that purpose. (Keep in mind that there are many different types of neurons in the brain and periphery, and many more types of glial cells, all with different response “functions” to incoming signals.)
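
To make that “single line of code” point concrete, here’s a rough NumPy sketch of a dense associative memory (modern Hopfield) update, loosely following the formulation in the linked Wikipedia section. The pattern sizes, the random seed, and the specific choices of the separation function F are purely illustrative.

```python
import numpy as np

def update_neuron(i, s, patterns, F):
    """One asynchronous update of neuron i in a dense associative memory.
    patterns: (K, N) array of stored +/-1 patterns; s: current +/-1 state;
    F: separation function defining the energy E(s) = -sum_mu F(x_mu . s)."""
    partial = patterns @ s - patterns[:, i] * s[i]   # overlaps excluding neuron i
    # energy difference between setting s_i = +1 and s_i = -1
    drive = np.sum(F(partial + patterns[:, i]) - F(partial - patterns[:, i]))
    return 1 if drive >= 0 else -1

# Swapping the update behavior really is one line -- just change F:
F = lambda x: x ** 2            # quadratic: recovers classic-Hopfield-like behavior
# F = lambda x: np.exp(x)       # exponential: the variant with ~2^(N/2) capacity

# Tiny usage example: store a few random patterns, then recall one from a noisy cue.
rng = np.random.default_rng(0)
N, K = 64, 5
patterns = rng.choice([-1, 1], size=(K, N))
s = patterns[0].copy()
s[:8] *= -1                                   # corrupt 8 of the 64 bits
for _ in range(3):                            # a few full asynchronous sweeps
    for i in range(N):
        s[i] = update_neuron(i, s, patterns, F)
print("recovered stored pattern 0:", bool(np.array_equal(s, patterns[0])))
```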
