Cut and paste. You are either laying out multiple copies that do the same thing (memory, cache, cores, etc…) or you are copying an architecture from another design whose logic has already been worked out and tweaking it for a new size.
Chips are also designed by teams, not individuals. It’s no different from any other complex machine, like a space shuttle or an aircraft carrier.
In short, yes. There’s a lot of repetition in the design, even if it doesn’t seem that way. Think of a book with a thousand pages in it. It could have a million letters, each “placed” by the author, but the author is really prescribing words, not letters. The computer that the author uses can handle indentation, capitalization, page numbering, keeping all the lines straight, etc. A really repetitive book could be written quickly with clever use of copy and paste. It could also be written faster by multiple authors working on different chapters at the same time. It can be made even faster by using speech-to-text software, or a shorthand like court stenographers use. The final product is still a million letters in a unique, designed order, neatly placed and spaced on a thousand pages, but it was not the labor of a single person over a decade to achieve that.
If the book is published in braille, then the single braille dot is like a single transistor. Each identical, in exactly its proper place, but largely handled by machine.
The answer is abstraction. The design scales exponentially, not linearly as you suggest. Combine several transistors into a logic device such as an AND or XOR gate. Then combine those logic devices into higher-abstraction devices, and so on, until you get to an ALU.
Designing a CPU is not linear – you don’t spend the same amount of time per transistor. Design is exponential – you spend, say, the first hour designing a transistor device, then another hour designing a logic device that consists of tens of transistors, then in the next hour you combine multiple logic devices into a device at the next abstraction level that consists of hundreds of transistors. For the purposes of this example you’ve spent three hours but are already into hundreds of transistors, despite initially spending a whole hour on one transistor. In this way you get to billions of transistors quite quickly. Of course, in industry CPU design takes years, but the same concept applies.
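That abstraction ladder can be sketched in ordinary Python (illustrative only: the function names and transistor counts below are my own, and real design happens in an HDL, not Python):

```python
# Each level is built only from the level below it, so effort per
# level stays roughly constant while transistor counts multiply.

# Level 0: a NAND gate. In CMOS this is 4 transistors.
def nand(a, b):
    return 0 if (a and b) else 1

# Level 1: logic gates composed purely from NANDs.
def not_(a):    return nand(a, a)
def and_(a, b): return not_(nand(a, b))
def or_(a, b):  return nand(not_(a), not_(b))
def xor_(a, b):
    n = nand(a, b)
    return nand(nand(a, n), nand(b, n))

# Level 2: a full adder composed from gates (roughly 9 NANDs).
def full_adder(a, b, cin):
    s1 = xor_(a, b)
    total = xor_(s1, cin)
    carry = or_(and_(a, b), and_(s1, cin))
    return total, carry

# Level 3: an n-bit ripple-carry adder, the core of an ALU,
# composed from full adders. For n = 32 this is already on the
# order of a thousand transistors.
def adder(a_bits, b_bits):
    carry, out = 0, []
    for a, b in zip(a_bits, b_bits):   # least significant bit first
        s, carry = full_adder(a, b, carry)
        out.append(s)
    return out, carry

print(adder([1, 1, 0, 0], [1, 0, 1, 0]))   # 3 + 5, bits LSB first
# -> ([0, 0, 0, 1], 0)  i.e. 8
```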
Digital design engineer here.
The short answer is yes.
The long answer is: we describe the chip in a hardware description language (HDL) like VHDL or Verilog. It looks like software, but it’s actually describing hardware. Tools figure out how to turn this HDL description into standard cells (pre-defined building blocks supplied by your chip manufacturer: things like AND, OR, NOT, buffers, flip-flops, etc.) and place and route them on the silicon.
Take for example this piece of code:
```verilog
if (b && c)
    a <= 1;
else
    a <= 0;
```
It’s just an AND gate with b and c at the inputs. You could also write it as `a <= b & c;`. So the tool knows it needs an AND gate. It goes into its standard cell library and picks an AND gate. It knows it has to connect it to the b and c inputs so it places it close to those. This process works even for very complex logic. You can turn pretty much everything automatically into simple standard cells.
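As a toy illustration of that mapping step, here is a miniature “technology mapper” sketched in Python. The cell names (`AND2_X1` etc.) and the netlist format are invented for this example; real cell libraries and netlists carry far more information (timing, area, power, pin geometry):

```python
# Toy technology mapping: boolean operators in the HDL are matched
# to cells from a standard cell library.

# The "standard cell library": operator -> cell name (made-up names).
CELL_LIB = {"&": "AND2_X1", "|": "OR2_X1", "~": "INV_X1"}

def synthesize(assignments):
    """Map assignments like ('a', '&', 'b', 'c') to a netlist of
    (cell, output_net, input_nets) tuples."""
    netlist = []
    for out, op, *ins in assignments:
        netlist.append((CELL_LIB[op], out, tuple(ins)))
    return netlist

# 'a <= b & c;' becomes one AND2 cell driving net a:
design = [("a", "&", "b", "c"),
          ("y", "~", "a")]          # y <= ~a;
print(synthesize(design))
# -> [('AND2_X1', 'a', ('b', 'c')), ('INV_X1', 'y', ('a',))]
```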
The standard cells are designed by hand and differ from manufacturer to manufacturer and from technology to technology (e.g. TSMC vs. Intel, 28 nm vs. 14 nm). Memories are also designed by hand and then just scaled up.
On a larger scale you tell the tools where to put things on the silicon by assigning areas (partitions). You can copy&paste partitions. This is how you create identical CPU cores etc.
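That copy &amp; paste of partitions can be pictured like this (a minimal sketch; the cell names, coordinates and API are made up for illustration):

```python
# Replicating a laid-out partition: the cell placements inside one
# CPU core are worked out once, then stamped down N times at
# different offsets.

core = {"alu_0": (10, 20), "regfile_0": (40, 20)}   # cell -> (x, y)

def stamp(partition, copies, pitch_x):
    """Place identical copies of a partition side by side."""
    chip = {}
    for i in range(copies):
        for name, (x, y) in partition.items():
            chip[f"core{i}/{name}"] = (x + i * pitch_x, y)
    return chip

quad = stamp(core, copies=4, pitch_x=100)
print(quad["core3/alu_0"])   # -> (310, 20)
```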
Short answer, because the detailed one has a lot of scope creep: yes, it is all automated… but there are a lot of things you need to tell the programs to get things right.
The very simplified process (skipping a lot of steps) is as follows.
Some dudes built a set of “standard cells” (think LEGO blocks) for a specific technology (loosely a “these are the transistors” set; there are many, with different tradeoffs). Each of these cells has defined characteristics, the most notable of which is its function (i.e. “what does it do”); these are all saved in a big fat database for later use. This is a gigantic simplification because you can focus on *what* to do (albeit in extreme detail) instead of “how should I dope my silicon to do what I need”.
The guys designing a digital portion of a chip write code in specialized languages that describes in high detail, but still at a higher abstraction than standard cells provide, what the block should do. They (or someone else) then take the description and feed it to a synthesiser that tries to figure out which standard cells to use and how to connect them to do what the code requires, and thus spits out a “netlist” of cells and the wires between them. This step is already unfeasible by hand even for small blocks; small chips can easily have some tens of thousands of cells. Notice that this is somewhat like the map of a metro system: it shows how stuff is connected, but it does not really hold up as a blueprint for construction (“the tunnel should bend there, then avoid that place, then get bigger because reasons…”) and needs to be refined further into an actual 3D model.
The final step in defining how exactly the chip will look in the real world is taking the synthesised netlist and loading it into a layout program, which is tasked with fitting all those cells in a defined space and actually making them work. You might need to have some close to each other to limit the propagation time of some signals, or the opposite in order to avoid some nasty things that can happen when calculations are ready too soon (specifics on this are out of scope). Some wires running parallel to each other might crosstalk and need to be spaced further apart or shielded. Connecting the giant bundle of wires going all over the place without having them touch each other is hard. Sometimes the program is not able to get everything right and some dude has to step in and sort things out manually, if at all possible (speaking from experience, some things spook the hell out of the layout tool, and you get a couple of obvious “just move this wire there” solutions that the computer cannot see just by looking at it).
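One of the things the layout tool juggles can be shown with a toy metric: the total wire length between connected cells. Everything named here is invented for illustration, and real placers optimize timing, congestion and crosstalk on top of wire length, over millions of cells:

```python
# Measuring the wire length of a candidate placement. Placing
# connected cells close together keeps wires (and delays) short.

def wire_length(placement, nets):
    """Sum of Manhattan distances over all two-pin nets."""
    total = 0
    for cell_a, cell_b in nets:
        (xa, ya), (xb, yb) = placement[cell_a], placement[cell_b]
        total += abs(xa - xb) + abs(ya - yb)
    return total

nets = [("and1", "ff1"), ("ff1", "buf1")]   # who connects to whom

near = {"and1": (0, 0), "ff1": (1, 0), "buf1": (2, 0)}
far  = {"and1": (0, 0), "ff1": (9, 5), "buf1": (2, 0)}
print(wire_length(near, nets), wire_length(far, nets))   # -> 2 26
```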
If anyone is interested in how digital design works I’ll be glad to share my experience. I think I am particularly suited to provide a broad overview due to me working on small and simple chips, thus maybe missing some more advanced techniques but doing everything (RTL, simulation, linting, synthesis, formal equivalence checks, layout, timing checks, signoff, power simulation, test patterns… name it, I have done it) by myself.
The main idea used is “design hierarchy”. You would design a transistor, then use that design to build a set of logic gates that match a set of requirements. Then you would use those designs to build larger and larger devices until you have built the entire device. There are specific CAD tools used to do layout (such as Cadence) and human-readable languages for doing the top-level design (Verilog and VHDL), so there is some automation in the process. However, you would also use those tools to test your design to see if it performs as required, then manually adjust the parts that are not performing as required until you get a design that works.
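The “build hierarchically, then test against the requirements” loop can be sketched like this (illustrative Python, not a real CAD flow): a 4-bit adder is composed from 1-bit full adders, then checked exhaustively against the behavioral requirement, which is just plain integer addition. The design is small enough for brute force; real chips use simulation and formal tools instead:

```python
# A structural design built from smaller blocks...
def full_adder(a, b, cin):
    s = a ^ b ^ cin
    carry = (a & b) | ((a ^ b) & cin)
    return s, carry

def adder4(a, b):
    """4-bit adder built from full_adder instances."""
    result, carry = 0, 0
    for i in range(4):
        s, carry = full_adder((a >> i) & 1, (b >> i) & 1, carry)
        result |= s << i
    return result + (carry << 4)

# ...then "does it perform as required?": compare against the spec.
assert all(adder4(a, b) == a + b for a in range(16) for b in range(16))
```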