# the decision problem

I was watching [this](https://youtu.be/eqvBaj8UYz4) video and even though I was trying very hard, there are some things that I still don’t understand. I get what a Turing machine is, but I don’t understand what happens later. There’s some kind of paradox: if you have a machine that decides whether a program will halt at some point (solve the problem) or just loop forever (never solve it), and then you give this information to another machine, it’ll do the opposite thing. So if the machine says “this program will halt”, it’ll then loop forever, and if it says it’ll loop forever, then it’ll just halt.

I know it’s a paradox, so it isn’t supposed to make sense, but well… I think I’m missing something here. Can someone explain it to me in the easiest way possible?


Basically he shows that if it were possible to create a machine that could look at an arbitrary program and determine whether that program will eventually stop or loop forever, then we could use that as the basis for building another machine (the “opposite” machine), which results in the paradox you describe. By inverting the output of the halting machine and then feeding its own code into itself, it can’t halt OR loop. Because if it halted, then by its own code it should loop forever. But if it looped forever, then by its own code it should halt. So it can’t do either.

But a program has to either halt or loop forever; there isn’t a third option. That means such a program can’t exist, and since such a program is built on the notion of a halting machine, a halting machine can’t exist either.

Here’s a way that might help you wrap your head around it. Instead of what Turing was doing with programs and halting, let’s say we’re making a machine (call it A) that takes a statement and determines whether it is true or false. Then you attach a second machine (call it B) to A, which gives the opposite response of A. Then you group these two together into a single machine called C. So I could input “2 plus 2 is 4” into C: A would return true, B would flip that and return false, and ultimately C returns false.

Now take C and pass in the statement “this statement will be evaluated as true”. If A thought it would be true and returned that, then B would return false. But that means A should have returned false, since the statement won’t be evaluated as true; and then B would reverse that and return true. So it is impossible for A to return the correct answer.
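The A/B/C setup can be sketched in a few lines of Python. This is just a toy sketch: the `A` here only knows a couple of hard-coded arithmetic facts, standing in for a hypothetical machine that could decide the truth of *any* statement.

```python
def A(statement):
    # Stand-in for a hypothetical truth-deciding machine.
    # This toy version only knows a few hard-coded facts.
    facts = {"2 plus 2 is 4": True, "2 plus 2 is 5": False}
    return facts[statement]

def B(answer):
    # B simply flips A's answer.
    return not answer

def C(statement):
    # C chains A and B together.
    return B(A(statement))

print(C("2 plus 2 is 4"))  # A says True, B flips it, so C prints False
```

The paradox only shows up when you hand C a statement that refers to C’s own answer, which no lookup table like this can capture. That self-reference is the whole point.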

That is essentially what Turing is describing with his example, but instead of machines, they are programs. Program A determines whether an input program will eventually finish; if it will, A returns true. Program B takes that response: if it gets true, it goes into an infinite loop that never finishes. And vice versa: if program A decides the input program wouldn’t finish, it returns false, and program B takes that, returns a result, and finishes. These two programs are put together into a single program C. Then program C is given as input to program C, to determine whether it will halt or not.

Program A will determine whether program C will finish. If it decides it will, it returns true, but then program B goes into an infinite loop, and C doesn’t finish. But if C doesn’t finish, A should return false; then B gets that, returns true, and program C does finish. So then program A should return true, but then…

So Program C can’t ever evaluate whether or not Program C will finish.
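The whole A/B/C construction can be sketched in Python. A real `halts` decider is hypothetical (it can’t exist, which is the point), so this sketch builds program C from any *claimed* decider and checks that the claim comes out wrong either way. The string `"loops forever"` stands in for an actual infinite loop so the check can run:

```python
def make_C(halts):
    # Build program C from a claimed halting decider (program A),
    # wiring in the inverting step (program B).
    def C():
        if halts(C):
            return "loops forever"   # stands in for: while True: pass
        else:
            return "halts"
    return C

# Two candidate deciders: one that always answers "halts" (True),
# one that always answers "loops forever" (False).
for halts in (lambda p: True, lambda p: False):
    C = make_C(halts)
    prediction = halts(C)   # True means "C halts"
    behavior = C()          # what C actually does
    # Whenever the decider predicts "halts", C loops, and vice versa:
    # the decider is always wrong about its own C.
    assert prediction == (behavior == "loops forever")
```

Any other candidate decider you plug in fails the same way on the C built from it, which is why no decider can be correct on every program.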

> if you have a machine that decides whether a program will halt at some point (solve the problem) or just loop forever (never solve it), and then you give this information to another machine, it’ll do the opposite thing. So if the machine says “this program will halt”, it’ll then loop forever, and if it says it’ll loop forever, then it’ll just halt.

Yes. That’s the “opposite machine”. You give it any machine (or program; on an abstract level, there’s no difference between the two) as input, and it does the opposite of what that machine does.

The paradox comes in the next step: what if the machine you give the opposite machine as input is *the opposite machine itself*? Then, if the opposite machine halts, it goes into a loop, and if it goes into a loop, it halts. Since a machine can’t halt *and* go into a loop at the same time, it follows that such a machine simply cannot exist. And since the opposite machine could easily be constructed from a decision machine, a decision machine cannot exist either.
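Those two cases can even be written out as a tiny exhaustive check (plain Python standing in for the case analysis, no real machines involved): whatever verdict the decision machine gives about the opposite machine, the opposite machine does the reverse, so no consistent verdict exists.

```python
# h is the decider's verdict on the opposite machine: True = "it halts".
for h in (True, False):
    actually_halts = not h  # the opposite machine does the opposite of the verdict
    # In both cases the verdict contradicts the actual behavior,
    # so there is no verdict the decider could correctly give.
    assert actually_halts != h
```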

Imagine that something impossible wasn’t impossible. Then get confused, because you just imagined that something impossible wasn’t impossible. Dressing it up as logic doesn’t change anything.

Say you can choose red or blue, and I claim that I can always correctly guess what color anyone will choose if I know their steps, even if they ask me a question before deciding their color. Then you can come along, and your steps are: “show me these steps, ask what color I’d predict if I follow them, then pick the opposite.” Obviously my answer to that would be wrong, so there is at least one person/set of steps that I can’t get right or can’t give an answer to. Therefore my claim that I can always guess correctly is false.
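The guessing game above is easy to simulate. In this sketch, `guess` stands in for my claimed always-correct guesser, and `contrarian_steps` builds your “ask, then pick the opposite” strategy against it; whatever fixed strategy I use, your steps beat it.

```python
def contrarian_steps(guess):
    # Build the "ask what color I'd predict, then pick the opposite"
    # strategy against a given guesser.
    def steps():
        return "blue" if guess(steps) == "red" else "red"
    return steps

# Whatever the guesser answers, the contrarian picks the other color.
for guess in (lambda s: "red", lambda s: "blue"):
    steps = contrarian_steps(guess)
    assert steps() != guess(steps)  # the guess is always wrong
```

This is the same diagonal trick as the opposite machine: the strategy consults the predictor about itself and then contradicts it.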