I was thinking about how to design a filesystem from scratch.
Let’s say every cell in our flash chip survives about 100,000 write/erase cycles before it dies, and let’s imagine a simple flat filesystem.
I imagine the filesystem splits a file’s data across multiple chunks and links them into a linked list, where every chunk holds a pointer to the next chunk. It would be a copy-on-write filesystem, so modifying a file would write new chunks elsewhere instead of rewriting old ones in place. Would this theoretically slow down the wear on the cells? I’m not really sure how to achieve efficient wear leveling.
But I really don’t know how the filesystem table should work. Changing, creating, or deleting file data itself would be cheap and wouldn’t do too much damage to the flash cells, but I imagine you’d need to update the filesystem table with every one of these operations, so the table’s cells would wear out much faster than everything else.
My question is: how do you get around this? Do you just accept that when the cells holding the filesystem table fail, the whole filesystem fails? I guess you could somehow store multiple copies of the filesystem table and compare them against each other on every write operation? And if one of the copies failed, you’d just create a new one somewhere else?
And so you would also need to store the addresses of bad sectors somewhere, so you never attempt to write to them. But you would presumably need multiple copies of that table too?