Why Cellphones Make Humans Sound Like the Speak & Spell Robot

Beep boop.

When I wrote about why we should blame our hatred of phone calls on equipment and infrastructure rather than human habits, I talked about the digital-switching infrastructure built around pulse code modulation (PCM), which began rolling out in the 1960s.

But there’s another factor I didn’t discuss: Digital cellphones compress your voice even before it reaches the PCM exchange. Using techniques based on linear predictive coding (LPC), cellphones crunch voice data down well below PCM’s 64 kilobits per second, both to reduce how much data each call requires and to manage traffic between your handset and the cellular tower. Different service providers make different choices about which LPC-style codec to use and how aggressively it compresses.
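To make the idea concrete, here is a minimal sketch of LPC analysis in Python with NumPy. It illustrates the general principle, not any carrier’s actual codec (real mobile codecs add pitch prediction, quantization tables, and error protection on top): each sample in a short frame of speech is predicted as a weighted sum of the samples just before it, so only a handful of filter weights plus a small prediction error need to be encoded. The test frame and predictor order below are illustrative values.

```python
import numpy as np

def lpc_coefficients(frame, order=10):
    """Fit a linear predictor to one frame of speech using the
    autocorrelation method (Levinson-Durbin recursion)."""
    n = len(frame)
    # Autocorrelation values r[0..order]
    r = [float(np.dot(frame[:n - k], frame[k:])) for k in range(order + 1)]
    a = [1.0]              # predictor polynomial; a[0] = 1 by convention
    error = r[0]           # prediction-error energy
    for i in range(1, order + 1):
        acc = r[i] + sum(a[j] * r[i - j] for j in range(1, i))
        k = -acc / error   # reflection coefficient
        a = [1.0] + [a[j] + k * a[i - j] for j in range(1, i)] + [k]
        error *= (1.0 - k * k)
    return np.array(a), error

# A stand-in "voiced" frame: 20 ms at 8 kHz, built from a few harmonics of a
# 120 Hz pitch. (Hypothetical test signal; a real codec sees microphone samples.)
fs = 8000
t = np.arange(160) / fs
frame = sum(np.sin(2 * np.pi * 120 * h * t) / h for h in range(1, 5))

order = 10
a, _ = lpc_coefficients(frame, order=order)

# The residual is what remains after the predictor guesses each sample from
# the samples before it. It carries far less energy than the frame itself,
# which is why it can be coded with far fewer bits.
residual = np.zeros_like(frame)
for i in range(len(frame)):
    history = frame[max(0, i - order):i][::-1]   # previous samples, newest first
    residual[i] = frame[i] + np.dot(a[1:len(history) + 1], history)

print("frame energy:   ", round(float(np.sum(frame ** 2)), 3))
print("residual energy:", round(float(np.sum(residual ** 2)), 3))
```

That energy gap is the whole trick: the part of the signal the predictor can’t guess is much smaller than the signal itself, and smaller things take fewer bits to send over the air.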

LPC can also run in the other direction, synthesizing speech by driving a simple model of the vocal tract with an artificial stand-in for the vocal cords. If you had a Texas Instruments Speak & Spell toy in the late ’70s or early ’80s, you already know what LPC-synthesized voices sound like (that is, like robots). So there’s another important reason cellphone conversations sound terrible: They turn human voices into (slightly better versions of) Speak & Spell robots.
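Here, in the same spirit, is roughly what the synthesis side looks like: push a buzzy pulse train (a crude stand-in for the vocal cords) through an all-pole filter standing in for the vocal tract. The pitch, resonance frequencies, and bandwidths below are made-up illustrative numbers, not values from TI’s speech chip.

```python
import numpy as np

fs = 8000                      # sample rate in Hz
n = fs // 2                    # half a second of audio

# Source: a 120 Hz impulse train, a crude stand-in for vocal-cord pulses.
excitation = np.zeros(n)
excitation[::fs // 120] = 1.0

# Filter: an all-pole "vocal tract" with two resonances placed roughly where
# the first two formants of an "ah" vowel sit (illustrative values only).
poles = []
for freq, bandwidth in [(700, 130), (1220, 70)]:
    radius = np.exp(-np.pi * bandwidth / fs)
    theta = 2 * np.pi * freq / fs
    poles += [radius * np.exp(1j * theta), radius * np.exp(-1j * theta)]
a = np.real(np.poly(poles))    # denominator coefficients of 1/A(z); a[0] = 1

# Run the pulse train through the filter: y[i] = x[i] - a[1]y[i-1] - a[2]y[i-2] - ...
y = np.zeros(n)
for i in range(n):
    acc = excitation[i]
    for k in range(1, len(a)):
        if i >= k:
            acc -= a[k] * y[i - k]
    y[i] = acc

# y is now a flat, buzzy monotone vowel -- the raw material of a Speak & Spell
# voice. Write it to a WAV file (for example, with the standard-library wave
# module) to hear the robot.
```

A fixed pitch and a coarse, slowly changing filter are exactly what make the result sound mechanical; roughly speaking, a cellphone codec rebuilds your voice from the same kinds of parts, just updated many times a second and with a richer excitation signal.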

Ian Bogost is a contributing writer at The Atlantic.