What is the difference between ARM architecture and the typical architecture in CPUs, and why is it so revolutionary?

14 Answers

Anonymous 0 Comments

Every type of computing chip comes with a set of instructions that engineers can use to build software. Not instructions like those in a Lego set, but a list of things the chip can be *instructed* to do when told, such as arithmetic, logic operations, and moving data around in memory. ALL software of any kind is built from a few basic functions like “add these two numbers”, “tell me if two numbers are the same”, and “remember this number”.
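A tiny sketch of that idea, in Python rather than real machine instructions (the function name is made up for illustration): even “multiply” can be assembled from exactly those three primitives.

```python
# Toy example: building "multiply" out of only the three primitives
# mentioned above — add, compare, and remember a number.

def multiply(a, b):
    total = 0                # "remember this number"
    count = 0
    while count != b:        # "tell me if two numbers are the same"
        total = total + a    # "add these two numbers"
        count = count + 1    # "add these two numbers" again
    return total

print(multiply(6, 7))  # 42
```

Real chips do the same thing, just in hardware and billions of times per second.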

ARM is a type of chip architecture that focuses on being physically small and using as little electricity as possible to do the work it is given. The most important way it does this is by keeping the list of available instructions limited, or *reduced*. This approach is called Reduced Instruction Set Computing (RISC).

In contrast, the x86 architecture, based on Intel’s work, focuses primarily on performance. Its use of Complex Instruction Set Computing (CISC) means lots more power and speed, but at the cost of energy consumption. And until recently it was so popular and so much better documented that it was the go-to choice, because you could build more powerful software faster.

ARM and x86 have both been around for decades, so neither is all that revolutionary. Our increased desire and ability to make small battery-powered electronics just means ARM is now preferred for many applications.

Anonymous 0 Comments

Imagine two robots – one is smarter but slower, let’s call him CISC, and the second one is less smart but works faster, let’s call him RISC.

You can say to CISC: “clean the room” …and he will do it all by himself, but it will take the whole day.

Or you can have RISC clean the room, but he is not that smart, so you will need to tell him: “take this broom and sweep with it like this”, then “take this duster and do so and so”, and “take this mop and do so and so”. However, RISC is faster and will clean the room in an hour or two – if you tell him exactly what to do.

CISC is smarter, so you can easily tell it what to do, but it has a big brain and so it is slow.
RISC is not that smart, and you have to explain things to it more, but it is fast.

The CPU is the brain of a computer – it is a robot, but instead of cleaning rooms it works with information: it calculates and decides what to display on the screen of your computer or mobile phone. A CPU can also be CISC or RISC. Again, CISC (x86) is smarter but slower, and RISC (ARM) is less smart but faster (and cheaper).

…it’s probably a bit too advanced for a 5 year old… :))

Anonymous 0 Comments

Back in the 90s, during the “MHz (and eventually GHz) wars”, chip manufacturing generally focused on pushing faster clock speeds above all else. Intel was pretty much the king of chip making back then, and for years its x86 architecture design was tailored toward pure clock speed above all other priorities. Its chips chewed through more and more electricity and generated higher temperatures, but that wasn’t too hard to deal with in PCs.

ARM, on the other hand, focused more on efficiency, which wasn’t in very high demand for PCs, but was useful for various embedded devices and other types of hardware.

Fast forward to the 2000s: phones – and eventually smartphones and other mobile devices – became a much more significant market for chips. Those form factors really benefited from power-efficient, cool-running chips, so x86 chips were not well suited to this emerging market, but ARM was.

And so ARM became a foundation that a lot of today’s chips have built upon.

Anonymous 0 Comments

Well, given the enormous number of mobile devices, I’d say ARM *is* a typical architecture today. There is less old cruft kept around for historical reasons in an ARM CPU compared to an Intel CPU, and the ARM CPU makers have learned a lot from building mobile devices that combine low power consumption with high performance.

Anonymous 0 Comments

At one time, the x86 architecture was all about performance and the ARM architecture was all about low power. In the last decade or so, they have converged as phones, tablets, and laptops became competitive. One represents “CISC” and the other “RISC”, but these are merely marketing terms today. ARM has adopted lots of complex instructions while x86 took on RISC internally, squeezing out differences of substance. In the end, you see ARM in low-power devices and x86 in servers and supercomputers. The fanbois and marketeers will yell at me, but today the meaningful differences are at the software level, and both ARM and x86 happily run a lot of the same types of software.

Anonymous 0 Comments

ARM is not that revolutionary, and has been around for almost 40 years. As an example, lots of early-2000s Nokia phones used ARM-based processors.

ARM is a chip architecture, which basically means a set of rules for how software (like Windows, iOS, or Linux) can use the processor (hardware). The competing architecture is x86. ARM was originally designed to be “reduced” in complexity to be more power efficient, while x86 focused on pure performance. So until recently, ARM was the choice for low-power, battery-driven devices (cell phones, smart gadgets, etc.), while x86 dominated PCs, workstations, and servers. But ARM has since evolved significantly (accelerated by the wide adoption of and investment in smartphones in the 2010s), to the point that it can compete with x86 on performance while still being energy efficient. So now it’s a question of who rewrites their software to run on ARM first and puts it on a product. That’s not an easy task, because you also need to convince all the third parties to rewrite their software for ARM. Apple did it successfully first on its computer lineup (macOS), and its computers (especially laptops) are now way ahead of any x86 offering in terms of power efficiency.

Anonymous 0 Comments

Think of an instruction set as the “language” that is hard-coded on the chip. At one time a few decades ago it was true that ARM was a simplified language and x86 was much more complex. Today that’s not really the case.

ARM is not revolutionary. There is no fundamental reason inherent to the instruction set for one to be more efficient than another.

For decades, x86 chipmakers focused on peak performance. Intel engineers were excellent at that, and consequently x86 dominated the market. Because of the shift to mobile computing, the market now values efficiency over peak performance, and that left x86 chipmakers – who had spent decades ignoring power consumption in their fanatical quest to make the fastest chips – in a bad spot.

It could have been different. If, in the early days of mobile computing, Intel’s mobile offering, the “Atom” processors, had been even close to competitive in the power/performance balance, it’s entirely possible that most mobile devices today would run on x86 instead of ARM. History turned out the way it did because Intel was not ready and able to compete in the mobile processor space when it emerged. Now mobile computing is mature and x86 has been largely shut out.

Because desktop computing is now a much smaller segment of the market, it is possible to imagine the dominant mobile computing architecture, ARM, breaking into the desktop market.

TL;DR: Intel used to make the best chips. They missed the memo that everything was going mobile. So now they don’t make the best chips anymore.

Anonymous 0 Comments

ARM isn’t really revolutionary. It’s been around for almost 40 years. It was originally used on the BBC Micro and other computers, in the era of the Commodore 64. Then its makers started working with Apple to develop a new version for Apple’s portable Newton digital assistant, and that’s when the company came to be called ARM.

From then on, ARM focused on portable systems where performance isn’t as important as efficiency. It was used in a few computers early on, but the real focus went to portables, where energy efficiency and low heat are important. x86 and others like DEC Alpha went the opposite route, maximizing performance (far faster than any ARM) but also running very hot and using a lot of electricity.

And there ARM stayed for years, with incremental performance improvements aimed at low-power applications such as embedded systems and portables. ARM licensed its architecture to whoever wanted it, and they could build chips based on it. This was called a “core” license: you make your chip and slap a standard ARM core into it. The first iPhone used an off-the-shelf ARM chip made by Samsung.

Then Apple came back again and got a license not only to build chips based on ARM, but to create its own wildly different chips that modified the core – an “architecture license.” Apple bought some chip design companies and put far more effort into making ARM fast than ARM themselves ever did. Eventually it had a phone that could compete with lower-end laptops. Then it decided to make a laptop/desktop version to compete with x86.

TL;DR: x86 came from the position of not caring about power consumption, and later tried to build some power savings in. ARM came from the position of caring about power consumption while being fast enough, and only later worried about performance. And now that our world is centered around portables that need batteries, ARM’s philosophy has become very popular.

Anonymous 0 Comments

I don’t quite understand the question. ARM is a fairly typical architecture, and it’s certainly not revolutionary. What makes current ARM architecture particularly suitable for building high-performance CPUs is that ARM instructions are carefully designed to be easier to work with in hardware. This means it can be easier (and more energy-efficient) to build an ARM CPU that executes a lot of instructions simultaneously. Also, ARM instructions tend to pack commonly occurring functionality together, which again improves their efficiency.
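One concrete piece of that “easier to work with in hardware” point: classic 32-bit ARM instructions are all the same width (4 bytes), while x86 instructions vary from 1 to 15 bytes. A toy Python sketch (the byte lengths below are made up, not real encodings) of why fixed width helps a decoder find several instruction boundaries at once:

```python
# With fixed-width instructions, the start of instruction i is simply i * 4,
# so a wide decoder can locate many boundaries in parallel.
fixed_width_code = bytes(16)  # pretend this is four 4-byte instructions
fixed_starts = [i * 4 for i in range(len(fixed_width_code) // 4)]

# With variable-width instructions, each boundary depends on having decoded
# the previous instruction's length first — an inherently sequential scan.
variable_lengths = [1, 3, 2, 5, 4]  # made-up lengths of five instructions
variable_starts, pos = [], 0
for length in variable_lengths:
    variable_starts.append(pos)
    pos += length

print(fixed_starts)     # [0, 4, 8, 12] — computable all at once
print(variable_starts)  # [0, 1, 4, 6, 11] — each depends on the previous
```

Real x86 decoders work around this with extra hardware, which is part of the energy cost mentioned above.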

Anonymous 0 Comments

All microchips work in a similar way: they have a set of things they can be told to do, using instructions. There are two ways we can make a microchip do complicated things: build complex instructions into the hardware (CISC), or use a reduced set of instructions (RISC) and build the complicated thing in software.

Basically, an ARM CPU can be built with fewer physical components, which means it can run on less power, but the list of instructions needed to make it do some work is a little longer. A more traditional desktop CPU from Intel or AMD needs more components, which require more power to run, but it needs a shorter list of instructions to achieve the same work.
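A toy sketch of that trade-off (plain Python with invented instruction names, nothing like real ARM or x86 encodings): the same “add 1 to a memory cell” takes one complex instruction in the CISC style, but three simple ones in the RISC style.

```python
memory = {"counter": 41}
regs = {}

# CISC-style: one instruction reads memory, adds, and writes back —
# the hardware does all three steps itself.
def run_cisc(program):
    for op, addr, n in program:
        if op == "ADD_MEM":
            memory[addr] += n

# RISC-style: only simple instructions exist; the "complex" work
# is spelled out as a longer sequence in software.
def run_risc(program):
    for instr in program:
        if instr[0] == "LOAD":       # LOAD reg, addr
            regs[instr[1]] = memory[instr[2]]
        elif instr[0] == "ADDI":     # ADDI reg, n
            regs[instr[1]] += instr[2]
        elif instr[0] == "STORE":    # STORE reg, addr
            memory[instr[2]] = regs[instr[1]]

run_cisc([("ADD_MEM", "counter", 1)])     # one instruction: counter -> 42

run_risc([("LOAD", "r0", "counter"),      # three instructions for the
          ("ADDI", "r0", 1),              # same effect: counter -> 43
          ("STORE", "r0", "counter")])
```

The RISC program is longer, but each instruction needs less circuitry to execute, which is exactly the component-count trade-off described above.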

There is a lot of extra detail here, but in the end they’re about the same in terms of capability. As mentioned elsewhere, ARM has been trending toward a more complex set of instructions, and for a long time x86 (Intel/AMD) has used something like a RISC CPU inside its processors, providing the complex instruction set through internal translation.