Hacker News

> Also, it is not possible for an SSD to decrease your usable storage space over time.

It totally could be possible. For Windows/Linux, you could have a daemon that creates an 'ssd_wear' file which grows in size as the SSD wears out, and tells the SSD which physical blocks it occupies. macOS could obviously do a deeper integration directly into the OS.
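A minimal sketch of the host-side half of that daemon, assuming the wear fraction comes from somewhere like a SMART wear-leveling indicator (the part that reports the file's physical blocks back to the SSD would need a vendor-specific interface, so it's omitted here):

```python
import os

def reserve_worn_space(path: str, total_bytes: int, wear_fraction: float) -> int:
    """Grow a placeholder file so it covers the capacity lost to wear.

    `wear_fraction` would come from the drive's wear indicator in a real
    daemon; here it's just a parameter. The file only ever grows,
    mirroring the fact that wear is one-way.
    """
    target = int(total_bytes * wear_fraction)
    current = os.path.getsize(path) if os.path.exists(path) else 0
    if target > current:                 # wear only increases
        with open(path, "ab") as f:
            f.truncate(target)           # extend without writing data
    return max(target, current)
```

A real daemon would loop, polling the wear indicator and calling something like this each time it ticks up.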

> all of your blocks are on the verge of failure

A failed block still stores some information (in an information-entropy sense). It's totally possible to store one block's worth of information across two, four, or eight failed blocks (with extra error correction information).
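The idea can be shown with the crudest possible code, plain replication with a per-bit majority vote (a real drive would use something like Reed-Solomon for far better efficiency, but the recovery principle is the same):

```python
def spread(block: bytes, copies: int = 3) -> list:
    """Store one block's worth of data redundantly across several
    unreliable blocks. Here: plain replication."""
    return [bytes(block) for _ in range(copies)]

def recover(blocks: list) -> bytes:
    """Majority-vote each bit across copies, so the data survives even
    when every copy has (different) bit errors."""
    out = bytearray(len(blocks[0]))
    for i in range(len(out)):
        for bit in range(8):
            votes = sum((b[i] >> bit) & 1 for b in blocks)
            if votes * 2 > len(blocks):
                out[i] |= 1 << bit
    return bytes(out)
```

Flip a different bit in each of the three copies and `recover` still returns the original block, because no single bit position is wrong in a majority of copies.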

Combining these two techniques, an SSD should never fail outright. Instead it slows down and gets smaller. Sadly, as far as I'm aware, no consumer drive vendor has implemented these.



You should read up on Zoned Storage for NVMe SSDs, and some of the more recently added error handling features like Get LBA Status. The gist is that it is only practical for the SSD to be in charge of determining what has failed, not the host, and that retrofitting NAND retirement into the traditional block device/LBA model is not worth the trouble when there are also other reasons to migrate to a different, more flash-friendly abstraction.


I used to work on SSD firmware. I did implement something like the above for a custom caching solution. When the flash storage runs low on space, it reduces the reported storage capacity and notifies the host, which then needs to trim some data to bring usage below the new reported capacity. Striping data across multiple blocks has also been implemented. You could fail an entire NAND die and the drive would still function, though a little slower, assuming you still had enough spare blocks.
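The capacity-shrink handshake described above boils down to a small state machine. A toy model of it (the names are illustrative, not any real NVMe interface):

```python
class ShrinkingDrive:
    """Toy model of firmware that shrinks reported capacity as spare
    blocks run out, asking the host to trim when usage overflows."""

    def __init__(self, capacity: int, used: int):
        self.capacity = capacity  # capacity currently reported to host
        self.used = used          # bytes the host has stored

    def retire_blocks(self, lost: int) -> int:
        """Retire `lost` bytes of worn flash. Returns how many bytes
        the host must trim to fit under the new capacity (0 if none)."""
        self.capacity -= lost
        return max(0, self.used - self.capacity)

    def host_trim(self, freed: int) -> None:
        """Host's side of the handshake: free the requested space."""
        self.used -= freed
```

Usage mirrors the firmware behaviour: the drive retires blocks, reports the shortfall, and the host trims exactly that much to get back under the limit.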



