Computers run code in binary; if we read it directly, even translated into proper text, it would be unreadable garbage. But writing code that way is impractical for the same reason, so we write in a programming language (C, Lua, etc.) and then run it through a compiler so the computer can read it and do what we want.
You can see what’s running. But what’s running is binary code. Decompilation is the process of turning that binary code back into higher-level source code.
It is generally not possible to recover the original source code, as information such as variable names is usually lost during compilation. But the decompiled source is generally much easier to read than raw machine code.
0110100001100101011011000110110001101111
Can you read that?
How about this?
01101000 01100101 01101100 01101100 01101111
Surely you can read this one:
h e l l o
Computers operate using 1s and 0s, carefully arranged to tell the computer what to do and in what order.
Most humans can’t easily read the binary directly, so we write code in a human-understandable programming language, then compile it into something the computer understands.
For another example, a car engine runs by injecting fuel into the cylinders, firing spark plugs, and opening and closing valves in a specific order with specific timing. A human could do all those tasks, but not nearly fast enough or consistently enough.
Instead, the human just pushes down on the accelerator pedal, and lets the car handle all the working details.
A mechanic might better understand the small details, and can use that information when troubleshooting, but a driver just needs to know the big-picture inputs: press the gas to go, release the gas and press the brake to slow down. Make those controls consistent, and an experienced driver can drive almost any car without worrying about what the engine is doing.
The problem with computers is that they run things at the limit of human understanding. Even with source code written by humans, for humans, to be as readable as possible, understanding what is going on can be hard.
Also, humans use a layered approach to do even more. You can create a windowing layer where programs can request windows to draw on, and your layer simplifies the whole thing so the programmer creating those windows can just think of windows as things the computer understands.
Compiling a program into ones and zeroes for the CPU throws away every bit of effort put into readability and instead just squeezes out efficiency. All the different abstraction layers get collapsed into one, so all that separation of tasks is gone when you try to read the result.
You can try to read those ones and zeroes. People sometimes do it. But it’s a very human-unfriendly business compared to reading source code in a higher-level language.
And the cool thing is, sometimes this source code can be partially recovered. So if you can do that, it’s gonna make your experience much, much easier.