First, ARM has been around since the late 70s. But at the time there were basically two main types of CPU architecture: complex instruction set computers (CISC) and reduced instruction set computers (RISC).
The idea behind CISC was to provide a lot of different instructions to do common tasks that software might want. RISC took the opposite approach: implement the bare minimum set of instructions, and use that simplicity to make the CPU run those instructions really fast.
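To make that concrete, here’s a minimal sketch in C. The assembly in the comments is roughly what a compiler might emit for each architecture (exact output depends on the compiler and flags, so treat it as illustrative):

```c
#include <stdio.h>

/* One C statement: add x to the value stored at p. */
void add_in_place(long *p, long x) {
    /* A CISC machine (x86-64) can do the whole read-modify-write in
       a single instruction that operates directly on memory, e.g.:
           add %rsi, (%rdi)
       A RISC machine (AArch64) only does arithmetic on registers, so
       the same statement becomes three simple instructions, e.g.:
           ldr x2, [x0]      ; load the value from memory
           add x2, x2, x1    ; add in registers
           str x2, [x0]      ; store the result back
       Each RISC instruction does less, which keeps the hardware
       simpler and lets it run those instructions fast. */
    *p += x;
}

int main(void) {
    long v = 40;
    add_in_place(&v, 2);
    printf("%ld\n", v);  /* prints 42 */
    return 0;
}
```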
Intel went down the CISC route, and ARM (called Acorn at the time) chose RISC. Over time, Intel kept adding more and more instructions and had to keep supporting all of them, whether or not they were widely used, or risk breaking compatibility with older software. ARM didn’t really have this problem, since they rarely added new instructions. Eventually, this bloat caught up with Intel/AMD.
One of the side effects of having a simple CPU instruction set is that the circuits to implement those instructions are simple too. Simple circuits need fewer transistors, which ultimately means less power consumption. While that may not be important on a desktop computer, it’s becoming more important as more of our devices are battery powered. And on the data center side, cutting power consumption at scale leads to huge savings in both power and cooling.
And x86 compatibility is becoming less of an issue these days, as it’s super easy to port software to other architectures, or even run a whole x86 emulation layer in software. And the power savings are dramatic, as evidenced by the new ARM-powered Macs (M1/M2/etc.).
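One reason porting is often easy: most code never mentions the CPU architecture at all, so recompiling is frequently all it takes. A small sketch, using the architecture macros that GCC and Clang predefine:

```c
#include <stdio.h>

int main(void) {
    /* Everything here is architecture-neutral C; only code that
       truly depends on the CPU needs compile-time checks like these. */
#if defined(__x86_64__)
    printf("Built for x86-64\n");
#elif defined(__aarch64__)
    printf("Built for 64-bit ARM\n");
#else
    printf("Built for some other architecture\n");
#endif
    return 0;
}
```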