How do you move from it being a collection of transistors to it being a fully functional computer?

12 Answers

Anonymous 0 Comments

A transistor is basically a voltage-controlled switch. Depending on the voltage you apply to its gate terminal, it will either allow or prevent current from flowing between its source and drain terminals. This means we can use electrical signals to control the behaviour of other electrical devices. The signal that turns a device on is usually referred to as high or 1, while the signal that turns a device off is low, 0, or ground. Most important for our discussion here are logic gates and latches.

A logic gate is a configuration of transistors that performs a logic operation: NOT, AND, OR, or XOR. A NOT gate takes a single signal and inverts it, turning a 1 into a 0 and vice versa. AND takes 2 inputs, and if they are both 1, it outputs a 1; otherwise it outputs 0. OR takes 2 inputs, and if either of them is 1, it outputs 1, while an XOR (exclusive or) gate outputs a 1 only if exactly one of its 2 inputs is 1. From these, you can build what is referred to as combinational logic: based entirely on the current input, the combination of logic gates produces a consistent output. These kinds of devices include multiplexers, encoders, decoders, and circuits that perform mathematical operations.
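As a rough sketch (not a real circuit, just Python functions standing in for gates), the four gates and a multiplexer built out of them might look like:

```python
# Logic gates modeled as functions on single bits (0 or 1).
def NOT(a):    return 1 - a
def AND(a, b): return a & b
def OR(a, b):  return a | b
def XOR(a, b): return a ^ b

# Combinational logic: the output depends only on the current inputs.
# A 2-to-1 multiplexer built from the gates above: it forwards input
# `a` when `sel` is 0 and input `b` when `sel` is 1.
def MUX(sel, a, b):
    return OR(AND(NOT(sel), a), AND(sel, b))

print(MUX(0, 1, 0))  # 1 (selects a)
print(MUX(1, 1, 0))  # 0 (selects b)
```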

A latch is something more complex. To see why feedback matters, imagine feeding the output of a single NOT gate back into its own input. The output will flip back and forth between 0 and 1 as quickly as the internal transistors can charge and discharge, because the current output depends entirely on the previous state. If we instead put two NOT gates in sequence and feed the output of the second back to the input of the first, we get a device that maintains its output value until it is acted on by an external signal. This is a fairly basic kind of memory. Add a bit more circuitry to control when and how the stored value gets changed, and you get a register, the most basic kind of memory used in your CPU. There is generally a clock pulse used to synchronize all the devices in a system. Any kind of system that has a feedback loop in it is sequential logic.
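A sketch of the "feedback loop plus control circuitry" idea in Python (the class and names are made up for illustration; a real latch is a continuous circuit, and this only models its stable states):

```python
def NOT(a):    return 1 - a
def AND(a, b): return a & b
def OR(a, b):  return a | b

class DLatch:
    """Stores one bit. While `enable` is 1 the output follows `d`;
    while `enable` is 0 the feedback loop holds the last value."""
    def __init__(self):
        self.q = 0  # the stored state (the value circling the feedback loop)

    def update(self, d, enable):
        # q = (enable AND d) OR (NOT enable AND q): load when enabled, else hold
        self.q = OR(AND(enable, d), AND(NOT(enable), self.q))
        return self.q

latch = DLatch()
latch.update(d=1, enable=1)  # enable high: load a 1
latch.update(d=0, enable=0)  # enable low: input is ignored, value is held
print(latch.q)               # 1
```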

Ok, now we have enough knowledge to explain finite state machines. A finite state machine is made of two parts: a memory and the control logic. The memory stores which state you are currently in. The control logic decides which state you should go to next based on the current input and the current state, and also produces the output of the machine. Let's say we want to design a machine that outputs 1 if the last 2 inputs are different. Its control logic would need to implement the following table:

| State | Input | Output | Next State |
|-------|-------|--------|------------|
| 0 | 0 | 0 | 0 |
| 0 | 1 | 1 | 1 |
| 1 | 0 | 1 | 0 |
| 1 | 1 | 0 | 1 |

As logic functions:

Output = State XOR Input

Next State = Input

Now this is a trivial example, but at its heart the CPU operates in fundamentally the same way. It has a table like this that describes how each operation it can perform is done, and the outputs are connected to each individual device to enable or disable them as needed. The inputs that the CPU sees come from instructions in memory, and one of the devices that the CPU controls is the “program counter”, which stores the address in memory of the current instruction.
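The "last two inputs differ" machine from the table above can be sketched in a few lines of Python:

```python
# One clock tick of the state machine: apply the two logic functions
# from the table to get the output and the next state.
def step(state, inp):
    output = state ^ inp  # Output = State XOR Input
    next_state = inp      # Next State = Input
    return output, next_state

state = 0
for bit in [0, 1, 1, 0]:
    out, state = step(state, bit)
    print(out)  # prints 0, 1, 0, 1: a 1 whenever the input changed
```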

Anonymous 0 Comments

There are several abstraction levels between a transistor and a computer.

You combine transistors to make simple logical devices like NOT, OR, XOR, AND.

You combine simple logical devices to create slightly more complicated devices like multiplexers.

You combine those devices to create even more complicated devices like ALUs.

You make a processor from ALU, memory and other parts.

You make a computer from a processor, memory, disk storage, and buses to connect it all, and voilà.

It may seem like the complexity grows with each level, but the beautiful thing about abstraction is that from each level to the next, the complexity of the design actually doesn't increase. Why's that? Because when you take a device from the previous abstraction level, you don't need to concern yourself with its internal structure; you care only about the input and output rules of that device. In this way one department can design simple logic devices, while another department can just take those and build the next level.

Anonymous 0 Comments

Logic gates, is the short answer.

I'm sure you can look up how they work on YouTube. Then you just need to build a processor out of them that handles data, and that is the very tricky part.

Anonymous 0 Comments

A transistor is like a door. It lets stuff through or it doesn't. One door can help you sort into two piles. Two doors sort into three piles, you get the idea.

60 billion doors, arranged properly, lets you do a bunch of cool stuff.

Anonymous 0 Comments

It’s actually super interesting to see it recreated physically. Essentially you replace the electricity with something physical like a marble rolling down a track or water in a pipe and the transistors with physical gates or valves. If we fill one of these paths with water or a marble then that is a 1 and an empty path is a 0. We have now physically represented the electrical components of a computer. From there we form simple “if/then” logic statements.

For example “IF both paths are in the same state THEN this gate will read as 1 (filled) and pass it down the line” or “IF either path is filled THEN this gate will read as filled as well and pass it down the line”.

From there we just have to represent numbers and mathematical functions in 1s and 0s. And eventually we’ll get an output we can read as the result we intended. Place a ton of these gates together and you can form more complex functions like determining the color and brightness of a single pixel on a monitor. With the speed of electricity you can do that function hundreds of times a second.

Anonymous 0 Comments

By arranging the transistors in the right patterns, you can use them to do logic on single-bit values (1s and 0s). These circuits are called **[logic gates](https://en.wikipedia.org/wiki/Logic_gate)**. For instance, an AND gate will output the value 1 only if *both* of its inputs are 1; whereas a NOR gate will output a 1 only if *neither* of its inputs are 1.

Once you have logic gates, you can arrange those in the right patterns and use them to do binary arithmetic. For instance, by combining an AND gate with an XOR gate, you can make a **half adder** that computes the sum of two bits. A half adder can only do a tiny amount of addition: it can only do 0+0, 0+1, 1+0, and 1+1.
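A quick sketch of that half adder in Python, using `^` for XOR and `&` for AND:

```python
# Half adder: the sum of two bits is their XOR, and the carry is their AND.
def half_adder(a, b):
    s = a ^ b      # sum bit
    carry = a & b  # carry bit
    return s, carry

print(half_adder(1, 1))  # (0, 1): 1 + 1 = binary 10
```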

To deal with numbers bigger than a single bit, adders and other logic are connected together in bigger patterns. When you have circuits to do a lot of different arithmetic and logic operations, they can be bundled together into an **arithmetic and logic unit (ALU)**. This lets you input two numbers (each made of many bits) but also has a third input that selects what operation you want to do on them — adding, subtracting, and so on.
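A toy ALU along those lines might look like this in Python (the opcode values here are invented for illustration, not from any real chip):

```python
# A toy 8-bit ALU: two data inputs plus a third input (the opcode)
# that selects which operation to perform.
def alu(op, a, b):
    ops = {
        0b00: (a + b) & 0xFF,  # add, wrapping to 8 bits
        0b01: (a - b) & 0xFF,  # subtract, wrapping to 8 bits
        0b10: a & b,           # bitwise AND
        0b11: a | b,           # bitwise OR
    }
    return ops[op]

print(alu(0b00, 200, 100))  # 44: the sum 300 wraps around in 8 bits
```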

With a simple ALU, a small memory circuit (a **register**), and some circuits to control the display, you can make an old-school pocket calculator. This isn’t quite a computer yet, because it doesn’t run code; it just responds to button presses by doing one arithmetic operation and displaying the result.

A computer has a few things a pocket calculator doesn’t. A pocket calculator can only remember one or two numbers at a time. A computer has a **memory** that allows it to store data in numbered locations. So instead of saying *“add 3 and 5 and show me the result”*, you can tell it to *“take the number in memory cell #1 and the number in memory cell #2, add them together, and store the sum in memory cell #3.”* This is where **variables** come from.

A computer can also have a list of instructions stored in its memory: a **stored program**. This is *code* in the simplest sense. These instructions are stored as numbers, and each instruction is in a numbered cell in memory. The computer also has a special memory register called the **program counter (PC)** to keep track of which instruction to do next. And it also has instructions to modify the PC, which allows it to skip around in the list of instructions — which makes it possible to do branches and loops in code.
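Putting the stored program and the program counter together, a minimal sketch in Python (the instruction names here are invented for illustration):

```python
# A minimal stored-program machine: instructions live in a list (the
# "memory"), the program counter (pc) picks the next one, and a jump
# instruction modifies the pc to make loops possible.
def run(program):
    acc, pc = 0, 0
    while pc < len(program):
        op, arg = program[pc]
        pc += 1
        if op == "LOAD":
            acc = arg
        elif op == "ADD":
            acc += arg
        elif op == "JNZ":              # jump if accumulator is not zero
            pc = arg if acc != 0 else pc
        elif op == "HALT":
            break
    return acc

# Count down from 3 to 0: ADD -1, then jump back to it while acc != 0.
prog = [("LOAD", 3), ("ADD", -1), ("JNZ", 1), ("HALT", 0)]
print(run(prog))  # 0
```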

Anonymous 0 Comments

A computer is basically a Rube Goldberg machine of transistors.

In the old days you pick up a newspaper and get dozens of bits of news.

Today, to get news, it's relayed from one device to another dozens of times around the world. It's processed by multiple big companies and goes through incredibly convoluted processes.

A typical Rube Goldberg machine might have hundreds, thousands, or tens of thousands of parts.

A modern computer chip takes that several orders of magnitude higher by having billions, sometimes tens of billions, of transistors.

Often these are insanely complicated – and there are glitches that come up. They can be difficult to solve because of all the steps involved. This can result in viruses, hacks, and more.

Anonymous 0 Comments

tl;dr, you organize those transistors into logic gates (AND, OR, NOT, etc) so that you can process data through them. The more circuits you have, basically, the more data and the more complex data you can generally process…

ETA, people have actually demonstrated this using redstone components in Minecraft. See: [Logic gates with redstone](https://www.youtube.com/watch?v=VEcmaXwjwuY). Some crazies have even built *working computers* inside the game. They’re huge and ridiculous, but they work. If you have a few hours, it’s fascinating stuff, and it’s pretty much exactly how computer hardware is developed. It might help you understand the real-world process.

Anonymous 0 Comments

You interconnect them in the correct configuration to perform the logical functions that a computer is built from. Today the interconnections and transistors are all made at the same time in an integrated circuit, but the first transistor-based computers used individual transistors interconnected with wires.

To get how it works, I would look at the videos at [https://eater.net/8bit/](https://eater.net/8bit/) where Ben Eater builds a simple 8-bit computer from individual chips. He does not use individual transistors, but simple logic chips. There are introduction videos that explain how semiconductors and transistors work and demonstrate how to build simple logic circuits with them.

It is simpler to understand how it works from videos like those than from any text on Reddit, because video can use drawings to show what is happening.

If you watch all the videos, you see a simple 8-bit computer being built, and how every step works is explained. Computers today are a lot more complex, but the basic idea of how they work is the same.

Anonymous 0 Comments

You can arrange transistors so that they behave in the same manner as logic gates (interpreting high and low voltage as true or false). And you can arrange logic gates to mimic mathematical operations (interpreting true and false as 1 or 0).
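For example, chaining full adders (each built only from AND, OR, and XOR operations) gives multi-bit addition; a Python sketch:

```python
# Full adder: sums two bits plus a carry-in, using only gate operations.
def full_adder(a, b, cin):
    s = a ^ b ^ cin
    cout = (a & b) | (cin & (a ^ b))
    return s, cout

def add4(a_bits, b_bits):
    """Add two 4-bit numbers given as lists of bits, least significant first.
    Each stage's carry 'ripples' into the next one."""
    carry, out = 0, []
    for a, b in zip(a_bits, b_bits):
        s, carry = full_adder(a, b, carry)
        out.append(s)
    return out, carry

# 5 + 3 = 8; bits are listed least significant first.
print(add4([1, 0, 1, 0], [1, 1, 0, 0]))  # ([0, 0, 0, 1], 0)
```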