TINY DATA

Google is using AI to compress photos, just like on HBO’s Silicon Valley

Google’s neural network compression could come right out of an episode of HBO’s Silicon Valley.
Image: HBO

It’s not middle-out compression, but it’s the next best thing.

Researchers at Google are working on a way to use neural networks, the building blocks of modern artificial intelligence, to make our picture files smaller without sacrificing quality. To consumers, smaller files mean more available space on phones, tablets, and computers, but for tech companies like Google that offer unlimited photo storage, smaller photos could reduce server load and power consumption, and improve transfer speeds. This sort of idea has made its way into pop culture thanks to HBO’s Silicon Valley, where the fictional compression startup Pied Piper uses neural networks to optimize how it shrinks files. (Dropbox has actually used the startup’s middle-out idea for its own photo compression.)

Google’s work teaches neural networks how to scrimp and save data by showing them examples of how standard compression handles random images from the internet, according to a technical paper published on arXiv. The Google team reports that their neural networks can beat JPEG compression on standard benchmarks. However, that doesn’t mean the technique is ready to be implemented in Google products.

The network is trained by breaking 6 million randomly selected, previously compressed photos into tiny 32×32 pixel pieces, and then selecting the 100 pieces with the least effective compression to learn from. Effectiveness here is gauged by how little each piece shrinks when compressed into a PNG: the pieces that stay largest are the ones that resist compression. By training on these tougher problems, the researchers theorize, the neural nets will be better prepared to take on the easy patches. The network then predicts how the image will look after compression, and generates that image. The big differentiator in this research is that the neural networks can decide the best way to variably compress separate patches of a given photo, and how those patches fit together, rather than treating the whole image as one big piece.
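The patch-selection step described above can be sketched in a few lines. This is a minimal illustration, not Google's actual pipeline: the function names are hypothetical, it works on a flat grayscale byte buffer rather than real photos, and it uses zlib's deflate (the same algorithm underlying PNG compression) as a stand-in for a full PNG encoder to score how well each patch compresses.

```python
import zlib
from typing import List

def split_into_patches(pixels: bytes, width: int, height: int,
                       patch: int = 32) -> List[bytes]:
    """Split a flat row-major grayscale buffer into patch x patch tiles."""
    patches = []
    for py in range(0, height - patch + 1, patch):
        for px in range(0, width - patch + 1, patch):
            rows = []
            for y in range(py, py + patch):
                start = y * width + px
                rows.append(pixels[start:start + patch])
            patches.append(b"".join(rows))
    return patches

def hardest_patches(patches: List[bytes], k: int = 100) -> List[bytes]:
    """Score each patch by its deflate-compressed size (bigger output =
    the patch resists compression) and keep the k hardest as training
    examples, mirroring the selection criterion described in the article."""
    scored = [(len(zlib.compress(p)), p) for p in patches]
    scored.sort(key=lambda item: item[0], reverse=True)
    return [p for _, p in scored[:k]]
```

As a sanity check, a buffer whose top half is a flat color and whose bottom half is random noise yields four 32×32 patches, and the two noise patches rank as the hardest, since deflate can barely shrink random bytes.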

Google published research on the same topic earlier this year, but that previous work was never shown to scale beyond tiny 64×64 pixel images. The new approach is not limited by image size.

While it’s easy to think of the best compression as the one that makes the file smallest, subjective human perception plays a huge part. If something looks weird to an end user, the compression failed. The Google team points out that there’s no standardized metric or test for this (unlike Silicon Valley’s fictional Weissman score), which makes it difficult to measure the network’s effectiveness.

So far it’s nothing close to Pied Piper’s compression, which theoretically makes files so small that their size is negligible, but Google’s work suggests the show’s idea isn’t so far-fetched.