Why was the M1 chip so revolutionary? What did it do to combine power with efficiency so well, in a way that couldn’t be done before?


I ask this because when M1 Macs came out, I felt we were entering a new era of portable PCs: fast, lightweight, and with long-awaited good battery life.

I just saw the announcement of the Snapdragon X Plus, which looks like a response to the M-series chips, and I’m seeing a lot of buzz around it, so I ask: what was so special about the M1?


18 Answers

Anonymous 0 Comments

As others have said, the M1 isn’t that revolutionary… Rather, Intel’s chips are able to carry around over 3 decades of technical debt without being awful.

Anonymous 0 Comments

It’s a few things that happened all at once:
– Shared high-performance memory – shared between the CPU, graphics, and other components (Apple calls this unified memory architecture). Big power and performance benefit.
– RISC – better performance per watt. But also, Apple, thanks to its expertise in A-series mobile chips, was able to execute on this extremely well. Only now is Qualcomm catching up. Maybe.
– Control over the hardware and OS – that allowed them to build Rosetta 2, which let legacy Intel x86 apps keep working during the transition.
– Intel chips were struggling – with smaller process nodes / EUV – which made the performance difference more stark.
– Developer network – Apple worked with many top vendors to optimize the performance of their apps on M1.

Anonymous 0 Comments

My i7-6600U outlasts my friend’s M1 with similar performance in the real world. Seems like 100% Apple-manufactured hype.

Anonymous 0 Comments

It’s not that it’s revolutionary as such; it’s that Apple could extract per-core performance comparable to the fastest Intel processors while using 4–5 times less power. How did they do it? Spending more money per core helped, but most of all it was because their design prioritized low power consumption.

Anonymous 0 Comments

ELI5: older computer chips do math 100 different ways. However, if we program our code a little differently, it turns out we can do pretty much all of the same stuff with only 10 kinds of math.

This allows us to devote more of the computer chip to doing those 10 types of math really fast and efficiently.
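To make that concrete, here’s a toy Python sketch (nothing to do with real silicon, just the idea): a “complex” operation like multiply can be rebuilt from a handful of simpler operations – add, shift, compare – which is the spirit of a reduced instruction set.

```python
# Toy illustration: a "complex" multiply can be rebuilt from a small set
# of simple operations, the way RISC chips lean on a small instruction set.

def multiply_with_simple_ops(a: int, b: int) -> int:
    """Multiply two non-negative ints using only add, shift, and compare."""
    result = 0
    while b > 0:          # compare
        if b & 1:         # test the lowest bit
            result += a   # add
        a <<= 1           # shift left (doubles a)
        b >>= 1           # shift right (halves b)
    return result

print(multiply_with_simple_ops(6, 7))  # 42
```

The chip then only needs fast hardware for the small set of simple operations, instead of dedicated circuitry for every complex one.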

Anonymous 0 Comments

It’s been technically possible for a long time, but x86 was deeply entrenched. Everyone was sort of afraid of breaking stuff, and you needed someone with enough nerve to force a paradigm shift. The biggest ace up Apple’s sleeve was Rosetta 2, which was not only very well done but made the transition far less painful. Arguably, that was the most revolutionary part of all.

As for what Apple did to get it so fast, they had a few things going on. ARM is a much cleaner instruction set than x86. There isn’t always a clear winner between CISC and RISC, and if you had a crystal ball and could start over, you could create a good CISC instruction set that is power-efficient and generates less heat – but x86 is a large, old, crufty spec. Jim Keller (architect of a bunch of AMD and Intel processors) talked about this recently. Modern CISC chips use micro-instructions that are very RISC-like, so a lot of modern speed gains in CPU design come from bringing things closer to the CPU and from techniques like prediction. One argument still in favour of RISC is that it’s a simpler instruction set to implement – I’m not a computer engineer, but I imagine that’s a lot easier to reason about and iterate on when designing a chip.

Another thing was building a bunch of stuff on a single chip, which manufacturers have been doing for a while, but Apple took it a step further. Memory and graphics are tightly integrated onto the same package as the CPU, and the memory between CPU and GPU are unified.
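A rough sketch of why unified memory matters, with invented function names standing in for real graphics APIs: with a discrete GPU, data has to be copied to the card and back; with unified memory, the CPU and GPU work on the same buffer.

```python
# Conceptual sketch (hypothetical API, not a real graphics library):
# discrete GPUs have separate memory, so data crosses the bus twice;
# unified memory lets both sides touch the same buffer.

def discrete_gpu_compute(cpu_buffer):
    gpu_buffer = list(cpu_buffer)             # copy to GPU memory (costs time/power)
    gpu_buffer = [x * 2 for x in gpu_buffer]  # "GPU kernel" runs
    return list(gpu_buffer)                   # copy the result back (costs again)

def unified_memory_compute(shared_buffer):
    # CPU and GPU see the same memory: mutate in place, no copies at all.
    for i, x in enumerate(shared_buffer):
        shared_buffer[i] = x * 2
    return shared_buffer

data = [1, 2, 3]
print(discrete_gpu_compute(data))    # [2, 4, 6], after two extra copies
print(unified_memory_compute(data))  # [2, 4, 6], zero copies
```

Same answer either way; the unified version just skips the round trips, which is where the power and performance benefit comes from.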

One downside of this kind of design, though, is that it breaks with the more open and modular PC designs of the past. A lot of people like being able to upgrade and customize their hardware, and these kinds of chip designs don’t really allow for that.

Anonymous 0 Comments

There’s nothing revolutionary about the chip – it’s essentially the same thing as the ones found in iPads and iPhones. The developer kit, which is pre-release test hardware for a new product, was literally iPad internals in a Mac Mini shell.

The revolutionary thing about it is that someone (Apple) bothered to run a desktop operating system on it and made the transition completely seamless.

Microsoft tried the same thing for years, and released commercial products attempting it well before Apple did.

The product was called Windows RT (released in 2012!) and it ran on computers with ARM processors, the same type the M1 is.

The problem Microsoft faced was that Windows RT wouldn’t run most apps written for normal Windows installations on Intel/AMD chips, which use the x86 architecture (i.e. not ARM).

Even where Windows RT itself ran fine, the problem was that people couldn’t use their apps with it, because those apps weren’t compatible with the processor – they speak different languages.

That’s what Apple got right. Their software, Rosetta 2, acted as a translation layer so that all the apps Mac users already had would run on the new chip, without them having to do anything at all. Yes, some apps were buggy and needed quick fixes, but that’s far from the “absolutely would not run at all” you’d find in Windows RT.
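Here’s a toy sketch of the idea (the instruction names below are invented; the real Rosetta 2 translates actual x86-64 machine code): rewrite a program from one instruction set into another, then run the translated version and get the same answer.

```python
# Toy model of a translation layer in the spirit of Rosetta 2.
# A made-up "x86-like" op that does several things at once...
X86_PROGRAM = [("add_mem", "acc", 5), ("add_mem", "acc", 7)]

def translate(x86_program):
    """Rewrite each CISC-ish op into several simpler 'ARM-like' ops."""
    arm_program = []
    for op, reg, value in x86_program:
        if op == "add_mem":                     # one complex op becomes:
            arm_program.append(("load", reg))   #   load the register,
            arm_program.append(("add", value))  #   add the value,
            arm_program.append(("store", reg))  #   store it back
    return arm_program

def run_arm(arm_program):
    """Execute the translated program on our pretend ARM machine."""
    registers, scratch = {"acc": 0}, 0
    for instr in arm_program:
        if instr[0] == "load":
            scratch = registers[instr[1]]
        elif instr[0] == "add":
            scratch += instr[1]
        elif instr[0] == "store":
            registers[instr[1]] = scratch
    return registers["acc"]

print(run_arm(translate(X86_PROGRAM)))  # 12 - same result the x86 program would give
```

The user never sees the translation step; they just see their old program produce the same answer on the new chip.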

Because of the success and widespread adoption of the M1, it encouraged app developers to make native ARM-compatible versions of their apps.

And now that tons of app developers have ARM-compatible versions of their software, Microsoft is finally confident enough to revive its dormant Windows-on-ARM projects. They know app developers will support them now (clearly they didn’t before), because half the development work needed (making ARM versions of apps) was already done thanks to Apple.

That means Microsoft is now more confident about announcing the use of an updated ARM chip from Qualcomm, so Windows users will finally be able to have an ARM-based computer that runs all their apps.

And take note, this Qualcomm partnership isn’t new – Microsoft has been releasing Surface computers with Snapdragon chips for years now, alongside the M1 – except those still don’t run all the apps.

The Snapdragon X Elite isn’t the revolution in the Windows space; it’s Windows finally catching up so that apps actually run on the chips they’ve been pushing since 2012.

Anonymous 0 Comments

This is a hard ELI5. I’ll try.

The preceding paradigm of Intel processors and Windows and HP making PCs and all that was one structured as a heterogeneous supplier/customer market. Computer processors are designed for a broad market – because Intel (or AMD, etc) is selling to a wide range of customers (OEMs like HP) who are selling to a wide range of customers – businesses, students, gamers, etc. who all have diverse needs and may even be running different operating systems (Windows, Linux). So you design to a kind of generalized need – kind of like how every car is a crossover SUV. It checks most of everyone’s boxes, but it’s maybe a bit too big for some, or small for others, or heavy or unable to tow, etc. but it’s ‘close enough’ for the most people.

Apple’s situation is different. They design processors only for themselves, in computers they design. Those processors only run Apple’s own operating system, which is written in one of two of Apple’s own languages and compiled with Apple’s own compilers. Apps on the platform use Apple’s APIs, and so on. Apple has control from top to bottom over the design of the overall computer. They don’t need to design to the generalized need – they can be much more targeted: this design for the iPad and MacBook Air, this other design for the iMac and MacBook Pro, and so on. And when Intel finds a performance benefit, there is only one place they can put it – the processor they make. Apple can put it *anywhere*. They can change how the compiler works and how the processor works, together. They can put it on a support chip, because they are the OEM – they make the actual computer, not just the chip. And they don’t need to optimize the design for the 100 different programming languages you typically use with PCs; they can focus on the two they designed, which are used for 99% of the apps on the platform.

So, when the M1 came out, it could decode 8 instructions at a time, versus 6 on the fastest x86 chip – a function of ARM/Apple Silicon instructions versus x86 ones. It could release an object 5x faster than on x86 – a function of the languages, compilers, and silicon design together, something nobody has end-to-end control over in the PC space. If Apple increased a buffer here, they could change APIs over there to work optimally with that buffer size; again, there’s too much diversity in the PC space to do that. Apple removed support for 32-bit instructions because they had eliminated them years before on iOS – less silicon, no need to handle both 32- and 64-bit instructions and addresses. Breaking that on the PC would destroy so much legacy software.
Add a specialized core for a certain kind of computation that was otherwise slow, with support for it across the whole system, because they also write the OS and the APIs. And on and on and on.
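The “release an object” point above refers to reference counting, which Apple’s runtime does constantly; here’s a toy Python model of the bookkeeping involved (not Apple’s actual implementation):

```python
# Toy model of reference counting: every retain bumps a counter, every
# release drops it, and the object is freed when the counter hits zero.
# A runtime does this so often that making it fast pays off everywhere.

class RefCounted:
    def __init__(self):
        self.count = 1        # creating the object is the first reference
        self.freed = False

    def retain(self):
        self.count += 1       # another owner now holds the object

    def release(self):
        self.count -= 1       # one owner is done with it
        if self.count == 0:
            self.freed = True # stand-in for actually freeing the memory

obj = RefCounted()
obj.retain()      # a second owner appears
obj.release()     # the first owner is done; object survives
obj.release()     # the last owner is done; object is freed
print(obj.freed)  # True
```

Because this counter traffic happens on nearly every object operation, speeding it up in silicon is exactly the kind of cross-stack win described above.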

Each of these was a few percent faster here, a few percent faster there, but it all compounds. Apple can strip back parts of the processor that Intel still needs to support for legacy purposes, which cuts power and improves speed. They can coordinate the best way to do certain things across the OS, language, compiler, and silicon groups, and distribute that work most effectively. And they can simply not worry about performance costs for obscure cases that might actually be big PC markets. So instead of a crossover SUV that checks the most boxes for the most people, they make something more like a sports car: narrowly tailored to specific use cases, stripped of a lot of stuff those users don’t need, so it can run faster or with less power. And of course, Apple is spending more money to have these chips fabricated than Intel – so they are on the absolute cutting-edge process, and they can buy up the entire first production run so that even TSMC’s other customers have a hard time getting components on those processes. It adds up.

So, as to Snapdragon – there’s realistically no way they can do what Apple has done. They can get closer, no question, but the Qualcomm/Microsoft space still lacks the kind of ‘put everyone in the same room and work toward one goal’ ability that Apple has, which took Apple *decades* to build. And Microsoft is not going to be as cutthroat about cutting support for old things to improve performance. Apple is uniquely aggressive on that front (they took the headphone jack out of a quarter billion iPhones while everyone was still utterly dependent on it – there was no grace period; you *will* adapt to this change), while Microsoft is very deferential to backward compatibility – just the opposite. Microsoft is also unlikely to just kill off the ability to boot Linux on their hardware. They do make their own compilers, but it’s still a very diverse development environment where Apple’s is very uniform. Microsoft’s customers just won’t tolerate the kinds of things that Apple’s customers will, so their ability to keep pace with Apple is pretty limited.