I’m not sure if you’ve noticed, but in the last 30 years, it’s gotten cheaper to store things. I can recall my first computer back in 1997 with its whopping 1.5 GB of space. Two years later, we bought a second hard drive with 10 GB. These days, you can buy 12 TB hard drives (and 100 TB solid state drives, if you feel like spending both arms, both legs, and maybe your firstborn), all for roughly the same price as a 1 GB hard drive around 1995. What changed? What allows manufacturers to cram more bits and bytes into those small boxes?
As discussed here, the development of computers is largely exponential: every year, computers get smaller, faster, and cheaper to manufacture. The same is true for hard drives. Every year, hard drive manufacturers shrink the read/write head that hovers over the disk, and pack the magnetic spots on the spinning platter smaller and more tightly together, so the same physical space can hold more data. In solid state drives, manufacturers are finding new and interesting ways to stack and cram more transistor cells into a chip, leading to the same net increase in data storage.
But in both cases, there is a physical limit to how much can be crammed into one space. At a certain point, the magnetic fields of neighboring spots on a disk begin to interfere with each other, and transistors can only be made so small before electrons will simply tunnel right through them. So manufacturers need a second option.
Precision and Error Correction
As anyone who’s used a stove knows, it stays hot even after you turn the heat off. But it doesn’t stay hot forever: that energy eventually dissipates into the surrounding environment. The same is true for storage. In a hard drive, the magnetic fields gradually weaken until they can’t be read. In a solid state drive, the charge stored in a cell leaks away until it can’t be read either. Manufacturers are very aware of this.
When you store a 0 or 1 on a hard drive, it isn’t recorded as simply “magnetic field” and “no magnetic field,” because even with very precise instruments it becomes difficult to tell “was that two empty spaces or three?” Solid state drives similarly don’t store things as “charge” or “no charge.” Both store values along a continuous range. The drive then reads the analog value at a particular spot or cell, compares it to the ones immediately surrounding it, and makes a best guess at what the stored value is.
These error correction algorithms are very good. So good, in fact, that hard drive manufacturers now cram spots so close together that the magnetic field of each spot affects several of its neighbors. The drive is constantly looking at neighboring spots and reconstructing its own best idea of what the data is. They’ve gotten so good that drives no longer store just one bit in each spot; they now distinguish several distinct levels per spot. Solid state drives go further, storing eight distinct charge levels (0 through 7) in a single cell, three bits at a time, using the same guessing to figure out which level it is. That’s a lot of guessing!
Both hard drives and solid state drives also have measures in place to rewrite data whenever its value drifts too close to the boundary of another value. This isn’t foolproof: if a file or section of the drive goes unread for long enough, its values can drop below the point where they can be correctly determined, leading to corruption. The drive may retry the read numerous times before it gives up. If you’ve noticed your system taking longer and longer to boot and fully start up, this may be why (and why rebooting at least once a week is a good idea).
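That rewrite-on-drift behavior can be sketched the same way: scan each cell, and if its value has drifted within some safety margin of the decision boundary between two levels, rewrite it back to its nominal value. The margin and levels below are invented for illustration; real drives do this with firmware-level refresh and scrubbing logic that is far more involved.

```python
# Toy "refresh" pass: cells whose value has drifted within a safety
# margin of a decision boundary get rewritten to their nominal level.
# All thresholds are invented for illustration.

NOMINAL = {0: 0.0, 1: 1.0}
BOUNDARY = 0.5   # midpoint between the two levels
MARGIN = 0.15    # how close to the boundary before we rewrite

def refresh(cells):
    """Return a new list where drifting cells are snapped back."""
    out = []
    for value in cells:
        bit = 0 if value < BOUNDARY else 1
        if abs(value - BOUNDARY) < MARGIN:  # dangerously close: rewrite
            out.append(NOMINAL[bit])
        else:
            out.append(value)                # still healthy: leave it
    return out

# 0.58 and 0.41 sit inside the margin, so they get rewritten:
print(refresh([0.05, 0.92, 0.58, 0.41]))  # -> [0.05, 0.92, 1.0, 0.0]
```

The catch, as the paragraph above notes, is that this pass can only run on data the drive actually reads; a region that sits untouched can drift past the boundary before anything snaps it back.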
It’s impossible to know how we’ll store data in the future (my money’s on holograms), but it’s likely that manufacturers will squeeze every possible bit wherever they can, and still somehow make it all work.