If we have SSDs that exceed the 2 TB mark, why are RAM modules still limited in size?


I understand that even the fastest NVMe drive is way slower than RAM, but surely that isn’t the only reason we don’t see 1 TB memory modules on the market. I was only able to find 512 GB modules in my searching, and that was for servers.

In: Technology

8 Answers

Anonymous 0 Comments

Hi,

I am quite frankly aghast at the troll answers to this question, because it is a REALLY good one, to which I do not know all the answers.

So I’ll say a few things first. Many people do not know the difference between RAM and storage. Many people don’t know the difference between SSD and HDD. Many people don’t know the difference between volatile and non-volatile. Most people don’t understand the complexities of creating non-volatile solid-state storage.

And that’s the crux of it. RAM and STORAGE are two VERY different requirements served by two very different technologies. The former exists so that the CPU (and its cache / DMA friends) can reach the program code and data and run it as fast as possible. The latter is for long-term (that is to say, out-of-memory) storage for later retrieval by the CPU.

The reason that memory is faster is that it doesn’t have to latch, that is, to remember its previous state once the power goes away.

The reason SSDs are slower per GB is that they HAVE to be able to remember, even with the power off.

The chips used for SSD / M.2 / PCIe drives have fairly well flatlined now – the progress that is happening is in the interfaces and in the ways of combining those chips into something reliable (variations of RAID and ECC, if you will).

So RAM is faster because it has far fewer responsibilities (when working!) than storage, it sits closer to the CPU (in literal mechanical terms and in logical access terms), and it is made from smaller, simpler components than storage (no need to remember anything without power), so it can run at a much higher clock rate and deliver greater throughput.

NOW THEN. With greater density comes more heat to manage and greater potential for loss. You can’t just stack RAM the way you can storage (both were tried, both failed). Because RAM has to run close to the speed of the signals feeding the CPU, it needs to be an expensive bit of wafer.

I’m gonna leave you to google ‘Wafer’, ‘CPU Wafer’ or ‘news about electronic wafers’

Hope this helps dude. Don’t let the other bastards put you off. You can’t have too much RAM or too much correctly tiered storage.

Anonymous 0 Comments

There’s an interesting (and terrible) thing known as IOWait: this is your drives (SSD or HDD) not being able to respond fast enough (it commonly happens under load), so the CPU ends up… waiting. This can cause real-life issues, such as a webpage timing out or a program failing to run. RAM is simply required to avoid these latency issues.
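If you’re on Linux you can watch this happen yourself. Here’s a minimal sketch (an illustration, not a monitoring tool) that samples /proc/stat twice and reports roughly what fraction of CPU time went to iowait over one second:

```python
# Minimal sketch (Linux only): estimate time spent in iowait by sampling the
# aggregate "cpu" line of /proc/stat twice. Field order per proc(5):
# user, nice, system, idle, iowait, irq, softirq, ...
import time

def read_cpu_times():
    with open("/proc/stat") as f:
        fields = f.readline().split()[1:]   # drop the leading "cpu" label
    return [int(x) for x in fields]

before = read_cpu_times()
time.sleep(1)
after = read_cpu_times()

deltas = [b - a for a, b in zip(before, after)]
total = sum(deltas)
iowait = deltas[4]                           # 5th field is iowait
print(f"iowait: {100.0 * iowait / total:.1f}% of CPU time in the last second")
```

If that number climbs while your disk is busy, that’s the CPU sitting around waiting on storage.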

Anonymous 0 Comments

RAM is that much more expensive to manufacture. The higher the density of the chips, the more they cost. It didn’t make sense to allow higher capacities in the DDR4 standard: it would have added demands on CPU and motherboard hardware (= more cost) for something that hardly anyone would be buying.

The new DDR5 standard quadruples the single-stick capacity limit.
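A rough back-of-the-envelope sketch of where that factor of four comes from, assuming the maximum monolithic die densities of the two standards (16 Gbit for DDR4, 64 Gbit for DDR5) and an illustrative dual-rank module built from eight x8 dies per rank:

```python
# Toy capacity arithmetic. The die and rank counts are illustrative
# assumptions, not a description of any particular module.
def module_capacity_gb(die_gbit, dies_per_rank, ranks):
    # capacity = density per die * dies per rank * ranks, converted bits -> bytes
    return die_gbit * dies_per_rank * ranks / 8

ddr4 = module_capacity_gb(die_gbit=16, dies_per_rank=8, ranks=2)   # -> 32 GB
ddr5 = module_capacity_gb(die_gbit=64, dies_per_rank=8, ranks=2)   # -> 128 GB
print(f"DDR4-style stick: {ddr4:.0f} GB, DDR5-style stick: {ddr5:.0f} GB")
```

Same layout, denser dies, four times the stick.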

Anonymous 0 Comments

Because memory modules are so much faster, the latency of the communication lines between the processor and the memory actually matters. So even long after hard drives moved to serial interfaces, memory modules still use a low-latency parallel interface. This is why there are so many pins: each command is sent all at once, one bit per pin. However, this means there is a limited address space. If the processor wants to reach data beyond 512 GB, it needs to use an address bit that is not there – there just are not enough pins on the memory module.

That pin does exist on the processor side, but instead of being routed to the memory module it goes to the motherboard, which makes sure only one of the memory modules on the channel is enabled. So by using multiple modules it is possible to have more than 512 GB of memory.
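To put numbers on that simplified picture (this is just toy arithmetic, not how DRAM addressing is actually wired on real modules):

```python
# How many address bits does it take to pick one byte out of a module of a
# given size, in the simplified "one bit per pin" model from the answer above?
import math

def address_bits(capacity_bytes):
    return math.ceil(math.log2(capacity_bytes))

GB = 2**30
for size_gb in (512, 1024):
    print(f"{size_gb} GB needs {address_bits(size_gb * GB)} address bits")

# 512 GB fits in 39 bits; 1 TB needs a 40th bit, which in this simplified
# picture is the select signal the motherboard uses to pick which module
# on the channel is active.
```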

Anonymous 0 Comments

For most users, the amount of RAM they need is relatively small. Even 64 GB is way more than most personal computer users will need at this point in time.

Think of it like batteries for your power tools. You’re not going to be using more than a couple at once, and you can have a couple more charging. (Not counting loaning them to friends who are helping you – there’s just the one simultaneous user, just like there’s only one person using a PC at a time.)

Storage, however, may need a lot. If you’re a photographer, or do some video editing, it’s pretty easy to eat up a lot of storage.

Edit- added “way” for clarity

Anonymous 0 Comments

Because there is no need for it.

If you aren’t actively using all of your RAM, you gain literally nothing by adding more to your system. There is no incentive to manufacture huge 1 TB RAM modules at scale.

Anonymous 0 Comments

Because we don’t need them yet. That’s it, really.

Also, there are certain advantages to having multiple memory modules versus one big one. Multi-channel mode is the key example.
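As a rough illustration of why that matters (the DDR4-3200 figure here is just an assumed example speed), each populated channel adds its own slice of peak bandwidth:

```python
# Peak-bandwidth arithmetic for multi-channel memory, assuming DDR4-3200
# (3200 million transfers/s on a 64-bit, i.e. 8-byte, channel).
transfers_per_sec = 3200e6
bus_bytes = 8

per_channel = transfers_per_sec * bus_bytes / 1e9   # GB/s per channel
for channels in (1, 2, 4):
    print(f"{channels} channel(s): ~{per_channel * channels:.1f} GB/s peak")
```

One giant stick on a single channel tops out at the single-channel figure; two smaller sticks on two channels can, in theory, double it.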

Anonymous 0 Comments

In short, we don’t need as much RAM as we do proper storage. There’s hardly any incentive at the moment to put terabytes of RAM in one module, since most people only need 16 GB of RAM total.