Advances in Data Density

A recent article in the Economist, "Atoms and the voids," describes a data storage breakthrough in which individual chlorine atoms were arranged on a sheet of copper to encode the zeros and ones of conventional binary storage.

While the advance is expected to bring improvements in speed and simplicity, the project was costly and the technical requirements were severe. A scanning tunneling microscope was operated at liquid-nitrogen temperatures of -196°C, and the process was very slow, with read and write speeds of 1-2 minutes per 64 bits.
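To put that speed in perspective, here is a minimal sketch of how long one kilobyte would take at the quoted rate, assuming an 8,192-bit kilobyte (the experiment's exact block size may differ):

```python
# Rough estimate of how long writing one kilobyte would take at the quoted rate.
# The 8,192-bit kilobyte is an assumption for illustration.
BITS_PER_KILOBYTE = 8 * 1024

for minutes_per_64_bits in (1, 2):
    blocks_of_64 = BITS_PER_KILOBYTE / 64
    total_minutes = blocks_of_64 * minutes_per_64_bits
    print(f"{minutes_per_64_bits} min per 64 bits -> "
          f"{total_minutes:.0f} min (~{total_minutes / 60:.1f} h) per kilobyte")
```

At the quoted rate, a single kilobyte takes on the order of two to four hours to write.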

Not surprisingly, the very idea of storing data in individual atoms can be challenging to grasp, so an animation by Stijlbende explains the process the scientists used to store one kilobyte of data in an area 100 nm x 100 nm in size.
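As a sanity check, a kilobyte packed into a 100 nm x 100 nm square implies a density of roughly 8 x 10^13 bits per square centimeter, which is in the same ballpark as the figure quoted below. A minimal sketch, assuming an 8,192-bit kilobyte:

```python
# Back-of-envelope: areal density implied by one kilobyte in a 100 nm x 100 nm square.
# Assumes an 8,192-bit kilobyte; the experiment's exact figures may differ slightly.
bits = 8 * 1024
side_cm = 100e-7          # 100 nm in centimeters (1 nm = 1e-7 cm)
area_cm2 = side_cm ** 2   # 1e-10 cm^2

print(f"Implied density: {bits / area_cm2:.1e} bits per square centimeter")  # ~8.2e13
```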

But the density achieved was quite impressive: “78 trillion bits per square centimeter, which is hundreds of times better than the current state of the art for computer hard drives.”
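Where do "hundreds of times" come from? A rough comparison against a typical hard-drive areal density makes it concrete; the 1.3 terabits per square inch used below is an assumed ballpark for drives of that era, not a figure from the article:

```python
# Compare the quoted atomic-memory density with an assumed hard-drive areal density.
# 1.3 terabits per square inch is an assumed ballpark, not a figure from the article.
atomic_bits_per_cm2 = 78e12                    # 78 trillion bits / cm^2, as quoted
hdd_bits_per_in2 = 1.3e12                      # assumed state of the art
hdd_bits_per_cm2 = hdd_bits_per_in2 / 6.4516   # 1 in^2 = 6.4516 cm^2

print(f"Atomic memory: roughly {atomic_bits_per_cm2 / hdd_bits_per_cm2:.0f}x denser")
# prints ~387x, i.e. "hundreds of times better"
```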

And that’s the part we find very interesting: not the raw density, but how it compares with today’s storage.

Data Density in Perspective

Thirty years ago, a typical home PC came with a 10 MB disk drive. Today, it’s a 1 TB drive. In roughly the same physical package, that’s a 100,000-fold increase in density.
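The arithmetic behind that factor, in decimal units:

```python
# Capacity growth from a 10 MB drive to a 1 TB drive, using decimal units.
old_bytes = 10e6     # 10 MB
new_bytes = 1e12     # 1 TB
print(f"{new_bytes / old_bytes:,.0f}x")  # 100,000x
```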

If arranging individual atoms is only hundreds of times more space-efficient than today’s storage technologies, are we approaching limits where density will no longer grow so quickly? One thing is certain: the rate at which we create data shows no sign of slowing.

The future will be interesting. Could this point to the need for larger storage systems, larger servers, larger data centers?