Eli5: How the hell does memory / storage work? Flash drives, hard drives, floppy drives, etc.


I barely understand floppies (somethin with magnets and a strip with magnetically sensitive stuff on it, I think) and after that I’m just lost entirely.

I’ve seen the inside of a mechanical hard drive and have seen the platters and the head. I think I know that the data is on the platters, but have no idea how it gets there, how it’s read, or how it’s erased.

And flash memory just is a whole ‘nother universe. How the hell does a [piece of plastic smaller than my thumbnail](https://shop.westerndigital.com/products/memory-cards/sandisk-extreme-uhs-i-microsd#SDSQXA1-1T00-AN6MA) store 8,000,000,000,000 ones and zeros?

In: Technology

3 Answers

Anonymous 0 Comments

Magnetic memory: Made up of small crystal-like structures called magnetic domains. With a strong enough magnetic field, you can get all the elementary magnets (i.e., the magnetic fields caused by the atoms) in a domain to align, and each domain can thus hold one bit of information. The smaller you can make the domains (while still being able to read/write them), the higher the density.

Flash storage: Each bit is a floating-gate transistor. It's basically a normal microchip transistor, but with an extra gate in between that isn't connected to anything.
To write a bit, you apply a high voltage to the control gate, causing electrons to "tunnel" into the floating gate, where they get trapped. To read, you apply a low voltage to the gate and check whether the transistor "opens". If it does, the bit was 0; if it doesn't (because the trapped charge on the floating gate gets in the way), it was 1.
And of course these transistors can be made really small, on the scale of nanometers, with lithography. That's why terabyte capacities are possible.
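The write/read logic described above can be sketched as a toy model. This is just an illustration of the logic, not real device physics; the class and method names (`FlashCell`, `program`, `erase`, `read`) are invented for this example:

```python
# Toy model of one flash cell (a floating-gate transistor).
# Illustrative only: names and structure are made up for this sketch.

class FlashCell:
    def __init__(self):
        # An erased cell has no charge trapped on the floating gate.
        self.floating_gate_charged = False

    def program(self):
        """High voltage on the control gate -> electrons tunnel into
        the floating gate and stay trapped (this writes the bit)."""
        self.floating_gate_charged = True

    def erase(self):
        """An opposite high voltage pulls the trapped electrons out again."""
        self.floating_gate_charged = False

    def read(self):
        """Low voltage on the control gate: if the transistor conducts
        ("opens"), the bit was 0; if trapped charge blocks it, it was 1."""
        return 1 if self.floating_gate_charged else 0

cell = FlashCell()
print(cell.read())   # 0 - erased, transistor opens
cell.program()
print(cell.read())   # 1 - trapped charge keeps it from opening
```

The nice part is that the trapped charge stays put with no power applied, which is what makes flash *persistent* storage.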

Anonymous 0 Comments

It’s just segmented into pieces that hold either “some magnetic charge” or “no magnetic charge”.

Capacity comes down to how small you can make those segments while keeping them detectable, and we are getting very good at that. A single-bit memory segment on those media is really, really small. Often, those media have multiple layers of storage as well.

It IS impressive at times.

Anonymous 0 Comments

Imagine you have a light switch in your kitchen (it doesn’t even need to be wired to anything, the switch itself is what’s important). Imagine you have a problem where you wake up in the morning and can’t remember whether the dishes in the dishwasher have been washed or not. So you devise a code. Before you go to bed, if the dishes are clean, you turn the switch ON. If they’re dirty, you leave it OFF. In the morning, when you wake up, you can tell whether the dishes are clean or dirty by looking at the switch. Voila! You’ve written an **encoded binary message** (on or off as an encoding for clean or dirty) to **persistent storage**.

The problem with the storage solution above is that you only have 1 bit (**b**inary dig**it**) of space, and also your encoding scheme isn’t very extensible (it can only store one kind of message about one kind of data). With some creativity though, you could easily create more complex encodings, capable of holding more complex messages, and you could add more switches to your wall to get more bits of storage. With 2 switches (bits) you have 4 binary states you can remember (off/off, off/on, on/off, on/on). With N switches (bits), you have 2^N states. Now you can remember all sorts of details about your kitchen!
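The switches-as-bits idea above can be shown in a few lines of code. Everything here is invented for the example (the switch names, the `encode`/`decode` helpers); it just demonstrates that N on/off switches give you 2^N distinguishable states:

```python
# Three "light switches" (bits) can remember 2**3 = 8 distinct kitchen facts.
# The switch names below are made up for this example.
switches = ["dishes_clean", "coffee_ready", "oven_off"]

def encode(state):
    """Pack a dict of True/False facts into one integer (one bit per switch)."""
    value = 0
    for i, name in enumerate(switches):
        if state[name]:
            value |= 1 << i
    return value

def decode(value):
    """Read each bit back out into a dict of facts."""
    return {name: bool(value >> i & 1) for i, name in enumerate(switches)}

state = {"dishes_clean": True, "coffee_ready": False, "oven_off": True}
v = encode(state)
print(f"{v:03b}")            # '101' - the three switch positions
assert decode(v) == state    # reading the switches recovers the facts
print(2 ** len(switches))    # 8 - number of distinct states
```

Add a fourth switch and you double the number of states you can tell apart; that doubling is the whole story behind "2^N".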

Computer storage works exactly the same way. The encodings used are far more versatile than the toy encoding devised above (i.e. `on == clean` and `off == dirty`), and they’ve been standardized so everybody in the world knows what they mean (thus allowing software and hardware to interoperate on standard platforms). The switches used are **waaaaaay** smaller than a light switch, and they are flipped via mechanically controlled magnets (in an HDD) rather than human fingers.

My description above is mostly centered around computer science, which makes modern computing **conceptually and logically** possible. The switches themselves are more in the realm of microelectronics, but they are the tiny heroes that make modern computing **physically** possible (and even I don’t know much about exactly how they work, because it’s not really relevant to a software engineer).

RAM is similar but also quite different, and it’s hard to cover it all in a post like this, but the key difference is that it’s a tradeoff compared to persistent storage, like so many design elements in engineering. RAM is far faster than persistent storage, but it’s not… persistent. So in the [von Neumann architecture](https://en.wikipedia.org/wiki/Von_Neumann_architecture) on which most modern computers are based… both RAM and storage have their own roles to play.

If you really want to learn, I suggest checking out Petzold’s ***[CODE](https://www.amazon.com/Code-Language-Computer-Developer-Practices-ebook/dp/B00JDMPOK2)***. It’s perfectly accessible to somebody without a CS background and is really enjoyable even as a casual read. I kind of want to read it again now… it’s been a long time since my first reading.