Law of Increasing Functional Information

Disclosure: I’m a scientist/biologist, but I’ve never heard of this law before this question. I did some brief reading and will try to explain what I understand.

First, this is an *extremely* recent concept, so calling it a “law” is a bit ambitious. While the paper that introduced it had some brilliant authors and went through peer-review, I think it’s worth seeing how this law gets incorporated into other research before making any large claims or conclusions.

This law is as much philosophy as it is science, because it seeks to describe universal principles of all “macroscopic systems”. A “system” is any situation where two or more distinct parts/things interact. It could be two electrons, a star and a planet, an animal and its environment, or a bacterium and an immune cell – all of these are “systems”.

The researchers were particularly interested in systems that change, or “evolve”. They laid out three principles that define what they call “evolving systems”: 1) The system has a lot of pieces that can rearrange in a lot of different ways; 2) The system actually *does* rearrange and change and make new arrangements; and 3) The system is subject to some selection based on function; that is, arrangements that accomplish a particular goal better than other arrangements will tend to persist.
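A toy sketch of those three principles (my own illustration, not the paper's formalism): bitstrings stand in for arrangements, random bit flips for rearrangement, and a simple scoring function for functional selection.

```python
import random

random.seed(42)
N_BITS, POP, GENERATIONS = 16, 50, 200

def fitness(s):
    # The "function" being selected for: how many bits are set to 1.
    return s.count("1")

def mutate(s):
    # Principle 2: the system actually *does* rearrange (flip one bit).
    i = random.randrange(len(s))
    return s[:i] + ("1" if s[i] == "0" else "0") + s[i + 1:]

# Principle 1: many pieces with many possible arrangements (2**16 bitstrings).
population = ["".join(random.choice("01") for _ in range(N_BITS))
              for _ in range(POP)]

for _ in range(GENERATIONS):
    offspring = [mutate(s) for s in population]
    # Principle 3: arrangements that perform the function better persist.
    population = sorted(population + offspring, key=fitness, reverse=True)[:POP]

print(fitness(max(population, key=fitness)))  # climbs toward the maximum of 16
```

Nothing here is biology-specific, which is exactly the point the paper makes: any system with variation plus functional selection drifts toward better-performing arrangements.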

That last principle is extremely similar to Darwin’s theory of natural selection. The key innovation here is that these researchers intentionally used broad/vague language so that this could describe almost *any* type of system. It could be genes in our DNA that determine our evolutionary fitness, like in Darwinian evolution; but it could also be the different combinations of minerals that make up rocks, the different elements found in stars, the specific hyperparameters chosen by a deep learning AI model, etc.

So that’s the setup to the law. What these researchers wanted to do is, with this broadly applicable definition of “evolving system”, see if they could identify unifying principles or properties that would apply to *all* such systems.

They identified three “universal concepts”, or patterns that ALL macroscopic evolving systems subject to functional selection exhibit:

– “Static persistence”: the particular arrangement of a system is stable enough to stay in its current state at least long enough to give rise to further states. It might even settle into this “equilibrium” for a while. This could be a star like our Sun that has settled into hydrogen fusion (for now), an animal that fits its niche well enough to remain genetically stable for a while, etc.

– “Dynamic persistence”: there is enough energy input or some other driver that the system can generate many different rearrangements and remain “stable” in this highly varied state. This might be volcanic conditions that can give rise to hundreds of different combinations of minerals, late-stage fusion in stars that starts to form heavier elements, or high genetic diversity in a plant, animal, or bacterial species.

– “Novelty generation”: these systems will generate brand new configurations/arrangements given enough time. This is arguably the most interesting one.

The researchers propose that ALL macroscopic evolving systems exhibit these three functions. When you combine them, you arrive at the conclusion that such systems must inevitably become “more functional” over time. They are stable enough to exist for a meaningful length of time, dynamic enough to have variety, and will generate new combinations that might, every once in a while, be even “better” (in relation to the functional selection) than previous ones (and possibly lead to some highly unexpected outcomes).

One of the “big deals” of this proposed law is that it is “time-asymmetric”; that is, it specifically states that the “functional information” (how well the system functions given its selective pressure) goes in one direction (up) as time goes in one direction, and would reverse if time reversed. The authors note that the only other universal law with this property is the second law of thermodynamics (the entropy of an isolated system must increase over time), so, if this law pans out, it could, in theory, help us understand and predict a lot of different phenomena.
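“Functional information” has a concrete definition from earlier work by some of the same authors (Hazen et al., 2007): for a function level E, the functional information is I(E) = −log2(F(E)), where F(E) is the fraction of all possible configurations that achieve function ≥ E. A minimal sketch, using a toy “function” (number of set bits in a short bitstring) that is my own illustrative choice, not the paper's:

```python
from itertools import product
from math import log2

N = 12  # short enough that we can enumerate all 2**12 configurations

def function_level(bits):
    # Toy "function": how many bits are set. Rarer levels carry more information.
    return sum(bits)

configs = list(product((0, 1), repeat=N))

def functional_information(threshold):
    # I(E) = -log2(fraction of configurations with function >= E)
    achieving = sum(1 for c in configs if function_level(c) >= threshold)
    return log2(len(configs) / achieving)

print(functional_information(0))   # 0.0 bits: every configuration qualifies
print(functional_information(12))  # 12.0 bits: exactly one configuration qualifies
```

In these terms, the law's claim is that evolving systems under functional selection tend to climb this information measure over time.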

Not only do living things evolve, but many natural systems evolve to be more functional.

Essentially, you can use the rough equivalent of biological evolutionary concepts (though the authors wish you wouldn’t) to describe the tendency of certain systems towards evolving more functionality.

These systems require many interacting pieces generating many configurations, some of which are more stable or more functional than others. Essentially, Darwinism for non-living things.

The example they give is stellar nucleosynthesis: stars start by burning hydrogen, which creates helium; they then burn helium, which helps create carbon; and in the most massive stars this process continues up the periodic table to iron (elements heavier than iron are forged in events like supernovae rather than by ordinary fusion). Basically, the star keeps generating new, functional configurations of matter.

The nature of the proposed law is that the universe tends towards more functionality under certain conditions.

The conditions they identify are: static persistence (stuff stays in the configuration for a while rather than breaking down quickly), dynamic persistence (the system actively recreates configurations), and novelty generation (the system creates new functions).

The easiest way to understand all of this is that biological evolution is only one example of a larger law: that certain systems tend towards more functionality. Or, to flip it around, even non-living systems can evolve in a way similar to biological evolution.

The simple, real-world example I like to use when talking about this: throw some headphone cables into a bag and shake it. What happens? The cables become more tangled and rarely become less tangled. Why is that?

Personal commentary: if this law is true… that means that life is really not that special; it’s just one of many kinds of systems that evolve functionality.

Random comment: when I was getting my degree in Computer Science, there was a small field of research called “Artificial Life.” The idea was to create computer systems that generated more functionality by themselves. The most famous example was “Conway’s Game of Life,” a cellular automaton in which a few simple rules take random grid configurations and tend to produce complex, persistent structures. It would be fascinating to study Artificial Life again through the lens of this new proposed law!
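For the curious, the Game of Life fits in a few lines. A minimal sketch of its rules (a live cell survives with 2 or 3 live neighbours; a dead cell is born with exactly 3), demonstrating the famous “glider” – a configuration that propels itself diagonally across the grid:

```python
from collections import Counter

def step(live):
    """One Game of Life generation; `live` is a set of (x, y) cells."""
    # Count the live neighbours of every cell adjacent to a live cell.
    counts = Counter((x + dx, y + dy)
                     for x, y in live
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    # Birth on exactly 3 neighbours; survival on 2 or 3.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

# The classic glider: after 4 steps it is the same shape, shifted diagonally.
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
after4 = glider
for _ in range(4):
    after4 = step(after4)
print(after4 == {(x + 1, y + 1) for x, y in glider})  # True: the glider moved
```

The glider is a nice micro-example of “static persistence” (the pattern keeps its shape) coexisting with “dynamic persistence” (the cells making it up are constantly replaced).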

Given that the elements of our material and biological systems are not infinite, we might interpret the proposed law in combinatorial terms, with an intentional twist of focus on how systems evolve.

It seems amazing that we don’t yet have a theorem or law for understanding how systems rearrange themselves based on how functional each arrangement is to their own evolution, instead of merely solving a given system.

It may be early days, but I think this could be a promising platform for thinking about our knowledge imperatives in a different way: in terms of progressive evolution and function, as opposed to hard-solved answers to our problems.

It feels more like “the solution itself is the selected method for the advancement of the system”, instead of “this solves that”.

Tickles the brain in a great way.

From a philosophical viewpoint, I think the ideas presented are sound, but their application less so, due to the huge disparity in complexity (which the authors acknowledge) between living and non-living systems. That disparity comes from the sophisticated information storage and retrieval in living systems – a quantum leap in generating and preserving new functions. All this life and variety flourishes on Earth, while the rest of the universe remains relatively barren.

The paper is another in a long line that extrapolates Darwin to explain levels of organization beyond speciation, but I applaud the novelty of the application of evolution to non-living systems, whether ultimately useful or not.