I feel like this is a pretty good ELI5 comparison, sorta.
I have a network rack at home and a router I wanted to mount into it for space saving. The brackets purchased from the company were $100, so I decided to 3d print my own.
I started taking measurements to get a rough idea of what I wanted it to be. Designed it in Fusion360, then 3d printed it. It fit, but was a bit weaker than I wanted, so I increased the thickness of everything. Printed it again. It fit well, but the holes were a little off from where I wanted them aesthetically/functionally. Fixed the locations. Printed it again. Everything fit great, but the router is a little back-heavy, so the brackets were twisting slightly; fine for now, but maybe not long term. So I designed a small addition to the bracket that goes underneath and supports the router. Printed it. Everything fits great, and now it’s in use on the rack.
This is essentially research & development.
Idea, design, produce, test, fix, test, fix, test, fix, test, fix, production, release.
Mine is very minimal compared to a chip with all the intricacies involved, so scale it up 100,000 times and bam, you’ve got $10 billion in R&D.
Remember R&D is pretty broad… That includes all salaries, benefits for the employees, the office space they work in, the massive equipment they use, all the hardware/software, etc.
And I’d also like to point out that I doubt it’s $10 billion *only* on this chip. A lot of those people, that equipment, and that research also went toward other projects within the company.
When we hear “research and development” we often think of only folks in a lab working on cutting edge stuff. But almost any engineer at all in a tech company is considered to be part of R&D.
For example, someone at Reddit working on a bug in their text formatting tool is part of their R&D team, even if they spend 3 days trying to figure out why the ‘bold’ function doesn’t work in some weird circumstance.
I don’t work for Nvidia (I wish I was paid as well), but I do work on chip design, and I can tell you I cost my company more in computers and software licenses than what they pay me.
Making something on silicon is a ton of work, and it also takes a lot of time from when you send your design to the fab to when you get the first batches back. And if they don’t work like you expect, you’re out of luck; you can’t open that shit up and plug in an oscilloscope to get some traces of what’s happening.
So what a lot of their engineers do is run a lot of simulations, using software from companies nobody outside this field has heard of, like Cadence and Synopsys. You give the tool a design and some program you want it to run, it simulates the whole thing, and you check that you’re getting what you want. It’s a pretty long process, but at least you get feedback quickly enough (minutes for small subsystems, typically days for whole-system-scale tests) to fix your design before sending it to TSMC or another foundry.
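To make that concrete, here’s a toy sketch of that simulate-and-check loop in plain Python. It’s nothing like the actual Cadence/Synopsys tools; the “design” is just a little functional model of an adder, and the stimulus and checks are made up for illustration.

```python
# Toy sketch of the simulate-and-check loop, in plain Python.
# Real flows use RTL simulators from Cadence/Synopsys; here the "design"
# is a functional model of an 8-bit adder, and the "program" is random
# stimulus checked against a golden reference.
import random

def adder_block(a: int, b: int) -> tuple[int, int]:
    """Functional model of the design under test: 8-bit adder with carry-out."""
    total = a + b
    return total & 0xFF, (total >> 8) & 0x1

def run_testbench(num_cases: int = 1000) -> None:
    random.seed(0)                                            # reproducible stimulus
    for _ in range(num_cases):
        a, b = random.randrange(256), random.randrange(256)   # drive random inputs
        s, carry = adder_block(a, b)                          # "simulate" the design
        assert s + (carry << 8) == a + b, f"mismatch on {a} + {b}"
    print(f"{num_cases} cases passed")

run_testbench()
```

The real thing simulates actual RTL against enormous stimulus suites, but the shape of the loop is the same: drive inputs, simulate, compare against a reference.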
For a GPU, you’d typically start with simulations of a single CUDA core with fake connections to the outside world, so you can check the core is doing what you want. Then you start putting a few together, adding the interconnects with memory, making sure no core gets stalled because it can’t get data fast enough, stuff like that. Then you move on to the fun stuff: simulating how it heats up and all the dynamic frequency scaling to tweak the performance.
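Here’s a minimal sketch of the “fake connections to the outside world” part: a toy core hooked to a stubbed memory with a made-up latency, and a test that just measures how much of the time the core sits stalled waiting for data. The class names, instruction mix, and latencies are all invented for illustration, not anything Nvidia actually uses.

```python
# Toy sketch: a "core" model with a stubbed memory interface, used to check
# how badly the core stalls when data doesn't arrive fast enough.
# All names and numbers are made up for illustration.

class FakeMemory:
    """Stands in for the outside world: every read takes a fixed latency."""
    def __init__(self, latency_cycles: int):
        self.latency = latency_cycles

class ToyCore:
    """Executes one op per cycle, but each load blocks for the memory latency."""
    def __init__(self, memory: FakeMemory):
        self.memory = memory
        self.busy_cycles = 0
        self.stall_cycles = 0

    def run(self, ops: list[str]) -> None:
        for op in ops:
            if op == "load":
                self.stall_cycles += self.memory.latency
            self.busy_cycles += 1

def stall_fraction(latency: int, ops: list[str]) -> float:
    core = ToyCore(FakeMemory(latency))
    core.run(ops)
    return core.stall_cycles / (core.busy_cycles + core.stall_cycles)

program = ["load", "mul", "mul", "add"] * 250      # made-up instruction mix
for latency in (1, 4, 16):
    print(f"memory latency {latency:>2} cycles -> "
          f"{stall_fraction(latency, program):.0%} of the time stalled")
```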
Simulations are done at different levels: you start with something further from reality but pretty fast, then move to simulations that are more precise but really slow, until you believe you got it right (or the higher-ups tell you to get it done because the deadline is here) and pray the silicon does what you expect.
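As a rough illustration of that fast-but-approximate versus slow-but-precise trade-off, here are two Python models of the same 32-bit adder: a behavioral one that’s just a native add, and a ripple-carry one that simulates every bit and carry. Real abstraction levels (C models, RTL, gate-level netlists, analog/SPICE) differ far more, but the trade-off has the same shape.

```python
# Toy sketch of two abstraction levels for the same 32-bit adder:
# a fast behavioral model vs. a slower bit-by-bit ripple-carry model.
import time

WIDTH = 32
MASK = (1 << WIDTH) - 1

def behavioral_add(a: int, b: int) -> int:
    """Fast, 'further from reality': one native addition."""
    return (a + b) & MASK

def ripple_carry_add(a: int, b: int) -> int:
    """Slower, 'closer to the hardware': simulate each full adder and its carry."""
    carry, result = 0, 0
    for i in range(WIDTH):
        ai, bi = (a >> i) & 1, (b >> i) & 1
        result |= (ai ^ bi ^ carry) << i
        carry = (ai & bi) | (carry & (ai ^ bi))
    return result

def time_model(add_fn, cases: int = 20000) -> float:
    start = time.perf_counter()
    for i in range(cases):
        a, b = i * 2654435761 & MASK, i * 40503 & MASK   # arbitrary test inputs
        assert add_fn(a, b) == behavioral_add(a, b)      # cross-check the two models
    return time.perf_counter() - start

print(f"behavioral model:   {time_model(behavioral_add):.3f} s")
print(f"ripple-carry model: {time_model(ripple_carry_add):.3f} s")
```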
Research: it’s very expensive to make something that hasn’t been made before. You need to prove every component works individually, then prove that they all work together.
From there you need to provide the infrastructure for compatibility with existing digital systems, then you need to figure out how to industrialize it on a mass scale.
Which means new machines, new factories, and new employees, plus teaching them, training them, and allocating them to the work.
Then you need enough raw materials and eventually processing to actually make the things.
All of these aspects need to be stable during this process, so you pay a premium for employees, and it needs to happen quickly so you pay them even more.
Most microchip companies do a big part of their research at an external site, like IMEC in Belgium. There, they pay large amounts of money to have their wafers processed on the various tools available there, or even larger amounts of money to use the tools themselves.
This alone costs more money than I expected.
On top of that, there is a large team of people researching and developing. And they all need to get paid.
And the research can take years and years before it gets released.
Your kindergarten teacher has activities for you. Did they just make them up when they got to work that day? Maybe, but probably not. They planned their lessons. You may be the one producing the colored page, but the teacher selected that activity, and likely some education professor taught them good activities for kindergartners to do. Staying in the lines is difficult; designing a page that a kindergartner can successfully color takes a professional. Also, making something small is more expensive than making it larger. Laptops are more expensive to build than desktops.