Does the Heisenberg uncertainty principle kill the scope of unification of the sciences? Is there any scope in the future for this principle to be contradicted?


This means that I can’t model a human brain through physics and make predictions about it thanks to this principle. I can’t predict chemical products through computer simulation.


6 Answers

Anonymous 0 Comments

> This means that I can’t model a human brain through physics and make predictions about it thanks to this principle. I can’t predict chemical products through computer simulation.

No it doesn’t.

Why do you think it implies that?

Anonymous 0 Comments

Looks like, given your knowledge of the subject, this might be the wrong sub. I would answer if I knew how.

Anonymous 0 Comments

So we know the chance of a coin flip is 0.5 for heads or tails. But we also know that if you flip a coin repeatedly, you could get a run of nothing but heads. We understand this is possible because each flip is an independent event that doesn’t depend on the previous flips, so there’s nothing wrong with seeing heads, heads, heads, heads, heads. It can happen.

We also understand that a run is much less likely to continue the longer it gets, because each time you flip the coin you halve your chance of continuing. So you’re much more likely to get 3 heads in a row than 5 heads in a row.

By the time you get to 100 heads in a row, we understand this is still possible, but we’ve halved our chance of continuing so many times that it is now extremely unlikely. You would basically never see it happen, even if you spent your whole life flipping coins.
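Here’s a rough sketch of how fast that chance collapses (my numbers, not anything from the original answer), assuming a fair coin:

```python
# Probability of flipping N heads in a row with a fair coin is (1/2)**N,
# so every extra flip halves the chance of the run continuing.
for n in (3, 5, 10, 100):
    print(f"{n} heads in a row: probability {0.5 ** n:.2e}")
```

A hundred in a row works out to roughly 8 × 10⁻³¹, which is why you’d never expect to see it.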

The larger the number of random decisions made, the closer the actual average of these decisions will be to the predicted average. Meaning for very large numbers of random decisions it is *extremely* likely the aggregate of these random decisions will be almost exactly equal to the value predicted by mathematics.

Once you have a system that involves a sufficiently large number of these decisions, you can start ignoring the fact that they’re random: you’re sampling so many of these random decisions that the probability they will be anything other than the expected value (in aggregate) is so close to zero we can call it zero.
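As a quick illustration (my own sketch, not part of the original answer), you can watch the average of simulated fair coin flips settle onto the expected value as the sample grows:

```python
import random

# Average of many fair coin flips (heads = 1, tails = 0): the more flips,
# the closer the observed average sits to the expected value of 0.5.
random.seed(0)
for n in (10, 1_000, 1_000_000):
    heads = sum(random.randint(0, 1) for _ in range(n))
    print(f"{n:>9} flips: average = {heads / n:.4f} (expected 0.5)")
```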

Exactly where this threshold lies is an interesting question, but a bit of an academic one. Chemistry, for example, can reach back down to the point where quantum effects are genuinely important to how the chemistry works (I believe photosynthesis relies on probabilistic quantum effects), so we don’t actually need a hard boundary; we only need to understand which effects are probabilistic and work them into the theory.

Then anything that builds on chemistry, such as medicine, now has a firm foundation, because the point at which these random effects coalesce around predictable values by sheer weight of numbers was already passed back in chemistry. Anything above this layer is working with millions and trillions of atoms, so the probabilistic uncertainty in the lower layers has smoothed out to the point where it’s truly irrelevant.

Anonymous 0 Comments

It seems pretty fundamental so far, but I don’t get why you’re so bummed out over it.

It doesn’t mean you can’t simulate or predict anything. It just means you can’t do it with 100% accuracy. You should be just fine with that, because all the math your simulation uses was confirmed by experiments whose measurements were themselves subject to the uncertainty principle.

Anonymous 0 Comments

You can build predictive models that respect the uncertainty principle – quantum mechanical models do – you just have to factor the uncertainties in somewhere.

So if you want to know where something will end up, a classical physics model will give you an exact point. A quantum mechanical model will instead give you a probability distribution for where it could be, with an associated likelihood for each possible location.

It is still a predictive model, just with a random element to it.
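As a toy illustration (mine, with an assumed momentum spread, not anything from the original answer), here is what “a probability distribution instead of an exact point” can look like for a minimum-uncertainty Gaussian state:

```python
import math

# Classical model: predicts a single position x0.
# Quantum model: predicts a probability density for the position. Here the
# density is a Gaussian whose width sigma_x is the tightest allowed by the
# uncertainty principle, sigma_x * sigma_p >= hbar / 2.
hbar = 1.054571817e-34   # reduced Planck constant, J*s
sigma_p = 1e-24          # assumed momentum spread, kg*m/s (illustrative only)
sigma_x = hbar / (2 * sigma_p)

def position_density(x, x0=0.0):
    """Probability density of finding the particle near x, centred on x0."""
    return math.exp(-((x - x0) ** 2) / (2 * sigma_x ** 2)) / (sigma_x * math.sqrt(2 * math.pi))

print(f"minimum position spread sigma_x = {sigma_x:.2e} m")
print(f"density at the classical prediction x0: {position_density(0.0):.2e} per metre")
```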

Anonymous 0 Comments

The uncertainty principle usually isn’t the limiting factor that prevents you from being able to reduce complicated systems down to quantum mechanics and make predictions. The biggest obstacle you face is simply that it’s very difficult to do precise quantum mechanical calculations involving large numbers of particles. Even the model of the hydrogen atom, which consists of just one proton and one electron, is fairly elaborate, and each time you add a new particle things get much more complicated.

Fortunately there are often very good approximations you can make to deal with large systems, for example with atoms you can often ignore the interactions within the nucleus and treat it as a single particle (if you’re not interested in things like radioactive decay), and you can use a simplified model of the electron-electron interactions. But the errors that you introduce by using these approximations tend to be vastly larger than the uncertainty that comes from the uncertainty principle. And a lot of the time there aren’t any particularly brilliant approximations available – even some aspects of the behaviour of large nuclei are quite poorly understood, let alone brains.
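To give a feel for why exact calculations blow up (a back-of-the-envelope sketch of my own, with an assumed number of basis states per particle), the amount of information you need grows exponentially with the particle count:

```python
# If each particle needs d basis states to describe, an exact quantum description
# of N interacting particles needs on the order of d**N complex amplitudes.
d = 10  # assumed basis states per particle (illustrative only)
for n_particles in (1, 2, 5, 10, 20):
    print(f"{n_particles:>2} particles: ~{float(d ** n_particles):.1e} amplitudes to track")
```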

> This means that I can’t model a human brain through physics

The human brain is extremely complicated and poorly understood even on a large scale. For example, there is a lot of uncertainty about what roles are played by the different large structures in the brain. The uncertainty principle is nowhere near the limiting factor in understanding how brains work.