Eli5: von Neumann and Harvard architectures


The video I’m watching about the two architectures says that Harvard architectures “can fetch instructions at the same time as reading/writing data”. This makes sense, but how is this different from pipelining? Can’t von Neumann architectures do this as well? It also says that von Neumann architectures follow a linear fetch, decode, execute cycle. Again, this doesn’t make sense to me. Surely pipelining means that they don’t have to do this, right?


8 Answers

Anonymous 0 Comments

If the execute stage of one instruction also accesses memory, it conflicts with the fetch of the next instruction. Modern processors fetch new instructions on every cycle, so they are all modified Harvard.

Anonymous 0 Comments

I’m not exactly sure, but judging from the datasheet of one Harvard-architecture microcontroller, it has a separate bus and memory for code (flash memory) and for data (RAM). This allows it to read instructions at the same time as it accesses RAM. (Reading constant data from flash is “slower” because of this, so constant globals get copied to RAM by the startup code.)
Others use a single bus, storing both code and data in RAM.
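A toy model of why that startup copy pays off (a sketch with made-up cycle costs, not numbers from any real datasheet):

```python
# Hypothetical access costs for a Harvard microcontroller's two memories.
FLASH_READ_CYCLES = 3   # code/constant reads go over the slower flash bus
RAM_READ_CYCLES = 1     # data reads go over the fast RAM bus

def read_cost(n_reads, from_flash):
    """Total cycles spent reading a constant n_reads times."""
    return n_reads * (FLASH_READ_CYCLES if from_flash else RAM_READ_CYCLES)

# Reading a lookup table 1000 times straight from flash:
direct = read_cost(1000, from_flash=True)

# Copying it to RAM once at startup (ignoring the RAM write cost),
# then doing all 1000 reads from RAM:
copied = read_cost(1, from_flash=True) + read_cost(1000, from_flash=False)

print(direct, copied)  # 3000 1003
```

The one-time copy pays for itself after a handful of reads, which is why compilers for such chips copy initialized globals into RAM before `main` runs.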

Anonymous 0 Comments

von Neumann has a single memory with a single data bus for both the instructions and the data.

Harvard has two separate memory systems: one for the program and one for the data.

Let’s ignore cache memory and systems with multiple memory buses to a unified memory for now.

A von Neumann machine can’t read an instruction from memory at the same time as it reads or writes data. There is only a single memory subsystem. A Harvard architecture can always do that. Pipelining does not change this: the two accesses can’t happen at the same moment because there is only one memory bus.

Even if you have two memory buses, the instructions and the data can still end up on the same bus, so you can only sometimes read instructions and data at the same time.
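The bus contention can be sketched with a toy cycle count for a run of load instructions (hypothetical one-cycle memory accesses, ignoring everything else the pipeline does):

```python
# Toy model: each load instruction needs one fetch access and one data
# access, each costing 1 bus cycle (a made-up number for illustration).
def von_neumann_cycles(n_loads):
    # One shared bus: the instruction fetch and the data access
    # must take turns, so the costs add up.
    return n_loads * (1 + 1)

def harvard_cycles(n_loads):
    # Separate instruction and data buses: the fetch of the next
    # instruction overlaps the data access of the current one.
    return n_loads * max(1, 1)

print(von_neumann_cycles(100), harvard_cycles(100))  # 200 100
```

Same instruction stream, roughly half the bus cycles on the Harvard machine, because the two kinds of access never have to queue behind each other.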

The advantage of von Neumann is that it is cheaper. The memory is also shared, so the split between instructions and data can change depending on what you need right now. A Harvard architecture can run out of data memory while a lot of the instruction memory sits empty, because you can’t use the instruction memory for data.

The advantage of Harvard is that it will be faster.

If you look at the computer you use right now, it might look like a pure von Neumann machine. That is not the case, because when you look at the cache memory, the level 1 cache is typically split. First-generation AMD Zen had a 32 KB L1 data cache and a 64 KB L1 instruction cache per core. It also has a unified 512 KB L2 and 2048 KB of unified L3 cache.

So as long as the instructions and/or data are in the L1 cache, it works as a Harvard machine. But when it needs to go to main memory, it is a von Neumann machine.

So desktop computers are hybrid systems: Harvard in regards to the L1 cache, and von Neumann in regards to the higher cache levels and main memory.

If you look at microcontrollers, a Harvard architecture is more common. The instruction memory is often flash memory and the data is in RAM. By splitting them and integrating them on the same chip, you can avoid needing cache memory.

Because the application in a microcontroller is fixed, the model you use will in part depend on the amount of memory it needs. There are often multiple models with different amounts of memory at different price points, and you usually select the cheapest model that can do the task you require.

Anonymous 0 Comments

von Neumann has a single memory for everything, data and code, so it can’t fetch code and read/write data at the same time. Harvard has separate memories for data and code, so it can fetch code at the same time as it’s reading/writing data.

Pipelining is a separate concept; both can be pipelined, but in the case of von Neumann the fetch will have to wait if the execute stage needs to access data.

Anonymous 0 Comments

Harvard has an entirely separate bus and storage to fetch instructions.

Von Neumann uses the same memory for code and data.

Anonymous 0 Comments

Harvard cleanly separates instruction memory from data memory, allowing access to instruction memory and data memory simultaneously. von Neumann does not have this clean separation, just a common memory, so only one memory access happens at a time. This is commonly known as the von Neumann bottleneck.

Anonymous 0 Comments

The big ELI5 difference between von Neumann and Harvard is that Harvard stores instructions separately from data, and treats them as two separate things that never mix. In von Neumann, instructions are just another type of data.

This means that on a true Harvard architecture, some of the things we take for granted on a standard desktop computer are impossible, like downloading a game from the Internet (as data) and then playing it (as code)!
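In a high-level language on a von Neumann machine, “instructions are just another type of data” can be made quite literal. A small sketch (the string and the `greet` function name are made up for the illustration):

```python
# A "program" arriving as plain data -- it could just as well have been
# downloaded bytes. Turning it into runnable code only works because,
# von Neumann style, code and data live in the same memory; a pure
# Harvard machine has no path from data memory to instruction memory.
downloaded = "def greet(name):\n    return 'hello, ' + name\n"

namespace = {}
exec(compile(downloaded, "<downloaded>", "exec"), namespace)  # data becomes code
print(namespace["greet"]("world"))  # prints: hello, world
```

The same idea underlies OS program loaders and JIT compilers, which is why general-purpose machines have to at least behave like von Neumann designs.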

When you get down into the guts of modern microprocessors, there’s sometimes weird semi-Harvardish stuff going on, but all modern general-purpose PCs and phones at least *behave* like von Neumann architectures. The main place you’ll find Harvard thinking is in microcontrollers (Arduino, for instance), which are programmed once to do one job that never changes.

Anonymous 0 Comments

No, pipelining cannot fetch instructions at the same time as data.

What pipelining actually does is reduce waste:

* it reuses idle bus cycles for fetching. Idle cycles can appear if an instruction requires some internal calculation; multiplication, division, and read-modify-write instructions can be donors of idle cycles. Pipelining turns `memory_time + calculation_time` into `max(memory_time, calculation_time)`.
* it can also allow for a fractional fetch time, if the instruction’s machine code is shorter than the bus width. For example, a 1-byte instruction on a 2-byte bus can be fetched (on average) in 0.5 bus cycles.

However, pipelining cannot do two memory accesses at the same time. No miracles here. Von Neumann will always have `memory_time = fetch_time + data_time`, while Harvard can do `max(fetch_time, data_time)`.
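Both formulas side by side, with hypothetical per-access times (the cycle numbers are made up for illustration):

```python
# What pipelining CAN hide: internal calculation behind the fetch.
memory_time = 2        # cycles to fetch the instruction
calculation_time = 5   # cycles for, say, a multi-cycle multiply

unpipelined = memory_time + calculation_time    # stages run back to back
pipelined = max(memory_time, calculation_time)  # fetch hides under the multiply
print(unpipelined, pipelined)  # 7 5

# What pipelining CANNOT hide: two accesses fighting over one bus.
fetch_time, data_time = 2, 3
von_neumann = fetch_time + data_time      # single bus: accesses serialize
harvard = max(fetch_time, data_time)      # two buses: accesses overlap
print(von_neumann, harvard)  # 5 3
```

The first pair improves with pipelining on either architecture; the second pair is fixed by the number of buses, which is exactly the von Neumann/Harvard distinction.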

As for a linear fetch, decode, execute cycle – no, that doesn’t seem right: a pipelined von Neumann machine overlaps those stages across different instructions.