Most electronics are now designed with Hardware Description Languages (HDLs) like [Verilog](https://en.wikipedia.org/wiki/Verilog), often prototyped on [FPGA](https://en.wikipedia.org/wiki/Field-programmable_gate_array)s (Field Programmable Gate Arrays). You write code to tell the tools what the system should do; an FPGA can emulate the required electronics for testing, and the same code is then synthesised into the logic that goes into fabrication.
It gets a little more interesting when engineers then make alterations by hand so their silicon is more efficient and their designs are much harder to copy. Apple are particularly well known for doing this with their M series of processors.
There are hierarchical layers of building blocks.
A silicon team designs a handful of different transistors for a given process.
One level up, someone is hand-designing building blocks like AND and OR gates and D flip-flops out of those transistors.
Once you have enough of these building blocks, with their attributes characterised in enough detail, you can put them together into a library of gates.
Synthesis tools can then compile higher-level languages like Verilog and VHDL into a netlist that uses those predefined building blocks.
Next layer up people will design reusable things like DMA controllers or CPUs in their chosen HDL.
Especially useful blocks like 10-gigabit Ethernet logic are probably hand-designed and laid out from discrete transistors to make sure they perform at top speed.
Most of the design work is done at the top-level module level and compiled down through the layers.
Very few transistors in the chip are hand designed.
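The layering described above can be sketched in software. Below is a toy Python model, not real EDA output: a single primitive (here a NAND) stands in for the hand-designed transistor cell, and every higher-level gate in the "library" is composed from it.

```python
# Toy model of the cell-library hierarchy: one "hand-designed" primitive,
# with every other gate composed from it (names and structure are illustrative).

def nand(a: int, b: int) -> int:
    """The lone primitive, standing in for a hand-designed transistor cell."""
    return 0 if (a and b) else 1

# Next layer up: familiar building blocks built only from the primitive.
def not_(a):
    return nand(a, a)

def and_(a, b):
    return not_(nand(a, b))

def or_(a, b):
    return nand(not_(a), not_(b))

def xor_(a, b):
    return or_(and_(a, not_(b)), and_(not_(a), b))

# A "library of gates" that a synthesis layer could target.
CELL_LIBRARY = {"NAND": nand, "NOT": not_, "AND": and_, "OR": or_, "XOR": xor_}
```

Real standard-cell libraries also carry timing, power and area data for each cell, which is what lets the synthesis tools pick between them.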
A CPU is all digital circuitry, and it's built using code. The code describes inputs and outputs: you say, for example, "I want these inputs to add", and the tools create the gates that do that, and likewise for more complex operations. Also, the chip is not made of entirely different pieces; it uses instances that are identical, just placed in different spots. For example, the multiple cores in an Intel chip were not each created separately; they're the same instance placed multiple times. This is how you are able to create chips with billions of transistors.
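Both ideas in that answer can be illustrated with a toy Python sketch (not actual synthesis output): a one-bit full adder described as gates, then the same adder "instanced" four times to make a 4-bit adder, the same reuse-by-instancing idea as duplicated CPU cores.

```python
# Illustrative sketch: "I want these inputs to add" becomes gates,
# and one instance is reused per bit, like identical cores on a chip.

def full_adder(a, b, cin):
    """One-bit full adder expressed with XOR/AND/OR gate operations."""
    s = a ^ b ^ cin                   # sum bit: two XOR gates
    cout = (a & b) | (cin & (a ^ b))  # carry-out: AND and OR gates
    return s, cout

def ripple_adder(a_bits, b_bits):
    """Add two little-endian bit lists by placing the same full-adder per bit."""
    carry, out = 0, []
    for a, b in zip(a_bits, b_bits):
        s, carry = full_adder(a, b, carry)
        out.append(s)
    return out, carry

# 3 + 5 = 8, as little-endian 4-bit values.
bits, carry = ripple_adder([1, 1, 0, 0], [1, 0, 1, 0])
```

The same pattern scales up: describe a block once, then stamp out as many instances as the design needs.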
Microelectronics engineer here.
Most chips (or ICs, Integrated Circuits) will have memory or storage on them. Each bit of storage is a repetition of the same circuit, so on your Intel or AMD chip huge parts of it are just copies of the same set of transistors, laid out the same way, to form SRAM. The more benign parts of the circuit can use standard building blocks (think Lego, with holes in and out for power and ground and for digital input and output signals), so we can simplify the design.
The reality comes back to performance. The faster and more specialised the chip, and the higher its frequency, the more hand-designed parts there are. We have tools that are very efficient at building parts of the chip, and there are software libraries to accelerate this process. Languages like Verilog and VHDL help, but often they form a starting point prior to hand optimisation of the critical parts of a chip.
Ultimately, manufacturing chips is quite expensive, so unless you need the performance, using more general chips and writing software for them is going to save time and money. Hence why you mostly see Intel, AMD and Arm processors inside circuits. When the chips are custom, they often still have a general-purpose Arm processor at their heart.
Obviously I can’t know for sure because I am not working at Intel, but I have done fairly large designs in state-of-the-art design software, and most of the time when I make something, we make a symbol out of it and from then on we treat that entire circuit as a black box. That’s it: design reuse is the answer here. At some point they designed those circuits, a lot of manpower went into them, and most likely they are simply building on top of those well-designed modules. Keep in mind that needing to change something in an inner module after the upper circuits have already been designed is not a big problem; the layout will need some adjustments, but it can be done.
CAD (Computer Aided Design), a broad term encompassing many things, has been nothing short of a revolutionary step in human technological progress, and it allows for the relatively easy repetition and modification of a successful design.
When applied to ICs and circuit boards, it, along with CAM (Computer Aided Manufacturing), again a very broad term covering miraculous advancements in the precision and productivity of our ability to mass-produce extremely complex things, has resulted in formerly inconceivable levels of communication and information sharing. That sharing is made possible by vast arrays of tiny machines capable of billions of “for-next” loops and magnificent volumes of binary calculations, which can be programmed to execute functions that help humans alter their own perceptions of the very universe and planet that birthed their existence.
Whew. Too much Tequila with friday night fajita taco salad.
Transistors are just switches, as I’m sure you know, and switches perform logical operations when combined in certain ways.
Once you know what the small number of individual components can do to a bit of data, you can use abstract design principles to lay out more and more complex structures. You can literally write a simulation of a processor in many computer languages, because at the heart of a processor microchip, you’re just manipulating data in much the same way that you do with any other program that runs ON the processor.
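To make that concrete, here is a miniature simulation of a processor in Python. The three-register machine and its four instructions are entirely made up for illustration; real ISAs are vastly larger, but the fetch-decode-execute loop is the same idea.

```python
# Toy processor simulator: a hypothetical 3-register machine with
# four instructions (LOAD, ADD, JNZ, HALT) and a program counter.

def run(program):
    regs = [0, 0, 0]   # three general-purpose registers
    pc = 0             # program counter: index of the next instruction
    while pc < len(program):
        op, *args = program[pc]
        if op == "LOAD":                  # LOAD rd, imm  -> rd = imm
            regs[args[0]] = args[1]
        elif op == "ADD":                 # ADD rd, rs1, rs2 -> rd = rs1 + rs2
            regs[args[0]] = regs[args[1]] + regs[args[2]]
        elif op == "JNZ":                 # jump to target if register is non-zero
            if regs[args[0]] != 0:
                pc = args[1]
                continue
        elif op == "HALT":
            break
        pc += 1
    return regs

# Compute 2 + 3 and leave the result in register 2.
result = run([("LOAD", 0, 2), ("LOAD", 1, 3), ("ADD", 2, 0, 1), ("HALT",)])
```

A hardware CPU does the same loop, except the decoding and arithmetic are wired up out of the gates described in the answers above.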
Now if we are talking about the FIRST powerful processors, they very much were designed by engineers and scientists by hand, drawn out, broken down into distinct regions of the processor, and worked on and tested etc. The great thing about this process is that once you find one way that works, it will always work – it just might become outdated if they find a better way to do it. If you design one successful microprocessor you’ve just completed not a stair step, but an elevator that can assist your future designs and make them ten times easier to complete. It’s an exponential progression because of how powerful microprocessors can be.
Modern chips are designed by scientists and engineers, but mostly through a lot of theoretical research and small-scale testing to see if it is possible to adapt small, but meaningful, new ideas to the development of the next generation of processors. Each new processor in a line of products isn’t usually a discrete product; it’s actually more likely to be the same exact design that did not hold up to benchmarking standards and is then marketed as a slower or less powerful processor. This is not always the case, but it’s common.
There is no good ELi5 answer to this question, because every small step of designing and creating a microprocessor is a huge topic in itself, one that entire books are written about and that people take years of schooling to understand: optical lithography, chemical etching, vapor deposition. Even the “simple” topic of how electrons flow through a transistor requires a very real understanding of quantum mechanics, and in fact we are nearing the limit of how small we can make a transistor before our electrons start disappearing and reappearing in places they aren’t supposed to, due to the “quantum tunneling” effect of subatomic particles.
It’s a vast topic, one better left to baby steps upward toward a career in the field, or at least a high-level discussion with an expert via YouTube or Ask Science on Reddit; this one is definitely not best suited for an ELi5. That being said, I hope my answer and the many others helped you get a small grasp on the topic, and an understanding of how microprocessor technology has advanced further in the last 50-60 years than arguably any other human scientific endeavor in the history of the world.