How To Talk To Strangers Where No One Can See You
Today, I’m writing about something that was first used by the Ancient Mesopotamians. It used to be illegal to export under weapons trafficking treaties, and it is frequently bemoaned by law enforcement. It’s also a critical component of almost every electronic device, and without it, the global economy would come to a screeching halt. I’m talking, of course, about encryption algorithms.
Encryption is technically a subset of cryptography, which is the study of how to communicate securely in the presence of an adversary (who might try to eavesdrop, edit, or disrupt said communication). To encrypt something is to take some information, like “my password is 1234,” and combine it with a “key” (a chunk of hopefully random data) via some encryption algorithm such that it becomes unreadable gibberish. Said gibberish is only decipherable by providing an identical key, or a key that’s mathematically related to the original key in a complicated way.
The kind of encryption you’re most likely familiar with is symmetric encryption: encrypt a message with one key, and decrypt it with the same key. Symmetric encryption dates back to ancient times; Julius Caesar invented (or popularized) a cipher where the letters of the alphabet were simply shifted some number of places down (e.g. A becomes C, B becomes D, and so on). But cryptography didn’t really take off until the early 20th century, with the advent of technologies like radio, which enabled longer-range communication at the cost of being trivially easy to eavesdrop on. World War II saw the use of the Enigma Machine, a fascinatingly complicated electromechanical device whose messages were only decoded by the Allies after a Herculean effort. These days, though, encryption generally refers to modern computerized algorithms like AES.
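A Caesar shift is simple enough to sketch in a few lines of Python (a toy illustration of the idea, not something to protect real secrets with):

```python
# Toy Caesar cipher: shift each letter a fixed number of places.
# (Trivially breakable -- illustration only.)
def caesar(text, shift):
    result = []
    for ch in text:
        if ch.isalpha():
            base = ord('A') if ch.isupper() else ord('a')
            result.append(chr((ord(ch) - base + shift) % 26 + base))
        else:
            result.append(ch)  # leave spaces and punctuation alone
    return "".join(result)

ciphertext = caesar("ATTACK AT DAWN", 2)  # "CVVCEM CV FCYP"
plaintext = caesar(ciphertext, -2)        # shifting back decrypts
```

Note that decryption is just encryption with the opposite shift: the same key (the shift amount) works in both directions, which is what makes this a symmetric cipher.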
AES is short for Advanced Encryption Standard and has been the standardized encryption method for the U.S. government’s classified information since 2002. It’s also used for most web traffic, disk encryption on iOS and macOS, password managers, end-to-end encrypted chat applications, and a zillion other things. In fact, most modern processors have specific hardware components just to encrypt and decrypt AES data. But symmetric encryption still has a flaw: you need both parties to have the same key for them to talk to each other. How does that happen when there’s no secure communication channel, like when accessing a website over the Internet? It would obviously be impractical for every computer to come pre-programmed with what would be millions or billions of different encryption keys for everything. So, we need a way for two parties, communicating solely over an insecure communication channel, to have a conversation that’s impervious to eavesdropping. It sounds impossible, but as it turns out, it’s perfectly achievable with a bit of sorcery known broadly as “asymmetric-key encryption.”
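AES itself is far too intricate to reproduce here, but the defining property of symmetric encryption, that one shared key both encrypts and decrypts, can be shown with a toy XOR cipher (essentially a one-time pad; this is emphatically not AES, just the same key symmetry):

```python
import secrets

# Toy symmetric cipher: XOR the message with a random key.
# NOT AES -- just an illustration that the SAME key both
# encrypts and decrypts.
def xor_cipher(data: bytes, key: bytes) -> bytes:
    return bytes(d ^ k for d, k in zip(data, key))

message = b"my password is 1234"
key = secrets.token_bytes(len(message))  # both parties need this key

ciphertext = xor_cipher(message, key)          # unreadable gibberish
assert xor_cipher(ciphertext, key) == message  # same key decrypts
```

The catch described above is sitting right there in the middle line: both parties somehow need the same key before they can talk.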
It’s hardly an exaggeration to say that without asymmetric-key encryption, the Internet wouldn’t exist in anything like its current form. It would be impossible to transmit sensitive information like credit card numbers, passwords, or private email, unless you obtained an encryption key offline (which rather defeats the entire point). It’s hard to speculate on exactly what an Internet in this world would look like, or whether it would exist at all, but I can safely say that it would be a lot worse than it is now.
So how does asymmetric-key encryption work? The first hint is in a more common name for it: public-key cryptography. Instead of one key that encrypts and decrypts (symmetric encryption), asymmetric encryption uses a pair of keys: a public key and a private key. If you encrypt a message with the public key, it can only be decrypted with the corresponding private key, and there’s no feasible way to derive the private key from the public key alone. So, you can make the public key as public as you want: transmit it over a public WiFi network, give it to your friends, even post it on social media. Meanwhile, the private key stays known only to you. If someone wants to send you a message, all they have to do is encrypt it with your freely available public key and transmit it to you through any channel, even an insecure one. You can think of public-key cryptography as a safe with two separate keys—one key can only lock, and the other one can only unlock. You can duplicate the locking key as much as you want, and anyone can use it to put stuff in the safe and then lock it. But to access what’s inside after the safe has been locked, you’d need your secret unlocking key.
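Here’s roughly what that looks like with textbook RSA and deliberately tiny numbers (real keys use primes hundreds of digits long; this is a sketch of the concept, not a secure implementation):

```python
# Textbook RSA with tiny primes (insecure; illustration only).
p, q = 61, 53
n = p * q                          # 3233: part of both keys
e = 17                             # public exponent
d = pow(e, -1, (p - 1) * (q - 1))  # private exponent, kept secret

def encrypt(message, public_key=(e, n)):
    exp, mod = public_key
    return pow(message, exp, mod)

def decrypt(ciphertext, private_key=(d, n)):
    exp, mod = private_key
    return pow(ciphertext, exp, mod)

c = encrypt(42)          # anyone can do this with the public key...
assert decrypt(c) == 42  # ...but only the private key reverses it
```

The public key (e, n) can be shouted from the rooftops; without d, turning the ciphertext back into the message requires factoring n, which for toy numbers is instant and for real ones is hopeless.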
But the lock analogy breaks down at a certain point, because there’s no difference in principle between the public and private key. If I encrypt something with my private key, it can only be decrypted with the corresponding public key. This is useful for identity verification through something called a digital signature. If I take a message, encrypt it (or “sign” it) with my private key, and publish the encrypted and original messages together, then anyone can decrypt the signed copy with my public key and check that it matches the original. If it does, they know the message came from someone holding my private key: me. (This concept, by the way, is key to the security of blockchain-based cryptocurrency: transfers out of a specific account are only accepted by the rest of the network if they have a valid digital signature proving that whoever submitted the transfer possesses the private key for said account.)
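With the same sort of toy RSA numbers, a digital signature is just the operation run in reverse (tiny, insecure parameters for illustration; real signatures also sign a hash of the message rather than the message itself):

```python
# Toy RSA signature: "encrypt" with the PRIVATE key, verify with
# the PUBLIC key. (Tiny insecure numbers; real code signs a hash.)
p, q = 61, 53
n, e = p * q, 17
d = pow(e, -1, (p - 1) * (q - 1))   # private signing key

message = 123
signature = pow(message, d, n)  # only the private key can make this

# Anyone holding the public key (e, n) can check it:
assert pow(signature, e, n) == message   # signature verifies
```

Tampering with either the message or the signature makes the check fail, which is exactly the property a cryptocurrency network relies on when validating transfers.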
The specific mathematical underpinning behind public-key cryptography is sort of complicated and varies based on the specific algorithm. For many algorithms, we rely on the fact that multiplying very large numbers together is relatively easy, while finding the factors of a very large number is very, very hard. (For performance reasons, some modern algorithms rely on different hard math problems, such as ones involving elliptic curves, but we can safely ignore that here.) Either way, though, the mathematical details of public-key cryptography are somewhat less interesting than the fact that it exists and you can do things with it.
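You can see the asymmetry yourself: multiplying two modestly large primes is a single step, while recovering them from the product takes about a million trial divisions even at this small scale (a sketch using two well-known primes near one million; real RSA moduli are unimaginably further out of reach):

```python
# Multiplying two primes is trivial...
p, q = 999983, 1000003   # both prime
n = p * q

# ...but recovering them from n takes ~a million trial divisions,
# and the work grows explosively as the primes get longer.
def factor(n):
    i = 2
    while i * i <= n:
        if n % i == 0:
            return i, n // i
        i += 1
    return None  # n is prime

assert factor(n) == (999983, 1000003)
```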
Almost everything you do on the Internet nowadays relies on public-key cryptography. If you’re reading this on a computer, the webpage was transmitted via the HTTPS protocol, which (to simplify things) means your computer transmitted an encrypted request using The Phoenix website’s public key, which lets your computer talk to the server through a private channel. If you’re reading this in the print edition, then public-key cryptography was still involved—I send in these articles via email, which involves my computer making a secure connection to my mail server using its public key via the same method. (Technically, asymmetric encryption is generally used just to secretly transmit a key for symmetric encryption, since symmetric encryption is considerably faster.)
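That last parenthetical, the hybrid handshake, can be sketched with the same kind of toy numbers (illustration only; a real TLS handshake involves considerably more machinery):

```python
import secrets

# Hybrid encryption sketch: use (toy) RSA once to share a symmetric
# session key, then use that key for the fast bulk encryption.
p, q = 61, 53                        # server's toy RSA keypair
n, e = p * q, 17
d = pow(e, -1, (p - 1) * (q - 1))

# 1. Client invents a random symmetric session key (one byte for
#    the toy; real sessions use 128- or 256-bit AES keys).
session_key = secrets.randbelow(256)

# 2. Client sends it wrapped in the server's PUBLIC key.
wrapped = pow(session_key, e, n)

# 3. Server unwraps it with its PRIVATE key; both sides now share
#    a secret over a channel an eavesdropper could watch the whole time.
assert pow(wrapped, d, n) == session_key
```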
But why does any of this matter, aside from it being really cool and interesting? Well, I’ve previously written about why HTTPS makes paying for a VPN somewhat unhelpful for a lot of people. Today, though, I’m going to cover end-to-end encryption, which is a fascinating application of cryptography and an interesting thing to be aware of in your own life.
When you send an email, it’s (usually) encrypted in transit via the methods I talked about above: if someone is eavesdropping on your Internet traffic, they can’t read your mail. But once it reaches your mail server, it’s decrypted and is readable by your mail provider (e.g. Gmail). It’s important to note that “readable” doesn’t mean someone at Google is regularly snooping through your mail to learn all your secrets, it just means that Google’s systems can process the plain contents of messages. This can be for innocuous reasons: checking whether messages are spam, for instance, or automatically adding a booking to your calendar based on a confirmation email. But there’s nothing technically stopping Google from scanning your email to target advertisements. (Google explicitly says that they don’t do this. Sometimes it might seem like they do, but those are often cases where, e.g., you search for “winter coats,” spend an hour browsing winter-coat-related websites, and then see an ad for winter coats next to an email you sent to a friend asking about coat recommendations.)
But the fact that Google could read your email if they wanted to is more important in a different way: if Google can theoretically do it, then the government can too. If you’re worried about government surveillance (from any government), then you don’t care what a company says they will or won’t look at; you want a cryptographic guarantee that they can’t provide your data to anyone, even if compelled by a court order, subpoena, or police raid. This is where end-to-end encryption comes in.
End-to-end encryption is when your data stays encrypted all the way from you to the person or people you’re talking to. Most commonly, this is in the context of chat applications like WhatsApp or iMessage. To secure your messages, instead of the server publishing its public key, everyone on the service publishes a public key. The private keys never leave each person’s device. If you want to send a message to your friend, you ask the server for your friend’s public key and use it to encrypt the message. The server here just passes encrypted messages back and forth, so all it can possibly know is when you send messages and who you send them to. (Through a little bit more cryptography, it’s actually possible to obscure even the fact that you’re sending the messages—kind of like dropping a letter in a mailbox without writing a return address.) Real-world end-to-end encryption also layers a bit more on top of the public/private key business, via something called a double ratchet: the two parties constantly change their public and private keys using an agreed-upon method. This means that if a private key is compromised, an attacker can only view a few messages before the keys are regenerated.
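A heavily simplified flavor of the ratchet idea (a toy hash ratchet, nowhere near the full double ratchet; the shared starting secret is assumed to come from a public-key exchange like the one just described):

```python
import hashlib

# Toy hash ratchet: both parties start from a shared secret and
# derive a fresh key for every message by hashing the previous one.
# (Real protocols like Signal's double ratchet are far more involved.)
def next_key(key: bytes) -> bytes:
    return hashlib.sha256(key).digest()

def xor_encrypt(data: bytes, key: bytes) -> bytes:
    return bytes(d ^ k for d, k in zip(data, key))

alice_key = bob_key = b"shared-secret-from-key-exchange"
for message in [b"hi bob", b"how are you?"]:
    alice_key = next_key(alice_key)   # ratchet forward before sending
    ciphertext = xor_encrypt(message, alice_key)
    bob_key = next_key(bob_key)       # Bob ratchets in lockstep
    assert xor_encrypt(ciphertext, bob_key) == message
```

Because hashing only runs forward, stealing today’s key doesn’t reveal yesterday’s: each old key has already been hashed away.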
Full end-to-end encryption (or at least, end-to-end encryption that didn’t suck) was pioneered in 2013 by what would eventually become Signal. Signal was the first end-to-end encrypted messaging app that tried to be usable by non-computer-nerds while still being secure, and as a result has seen extensive use among whistleblowers, journalists, and any social movement you care to name. But what’s had an even bigger impact is the Signal Protocol that the Signal app was built on. The protocol defines a standardized, secure method for sending text and other communication between two or more parties. In 2016, WhatsApp, the most popular messaging application in the world, switched over to the Signal Protocol for all of its messages and data. This means that every text sent on WhatsApp is unreadable to WhatsApp, its parent company Facebook, or anyone else, except the intended recipients. (Unlike Signal, however, WhatsApp does collect and use data about when and to whom messages were sent, and might use that information to target advertisements.)
So, if end-to-end encryption is so easy to use, why isn’t it used for everything? Mostly because not having a readable copy of your data stored on a company’s servers turns out to be annoying from a usability standpoint for anything more complicated than simple text chat. You may have experienced this yourself if you’ve ever been added to a WhatsApp group chat in progress: since previous messages were only encrypted with the previous participants’ keys, you can’t read them, and you miss any context from before you got there. End-to-end encryption also means that mirroring messages or conversations between multiple devices is difficult: since only your phone holds the keys to decrypt the messages, keeping chat records consistent between your laptop and phone requires awkward relay setups. Finally, it’s sort of pointless for public-facing things like social media, where everyone is supposed to be able to read it anyway.

Notice that in this article I haven’t really talked about any possibility of “breaking” a key. That’s because modern encryption algorithms are, for all intents and purposes, unbreakable: cracking a single 256-bit AES key with every computer on the planet would take about 14 thousand trillion trillion trillion trillion, or 14,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000, years.
It’s impossible to even begin to give a perspective on how big that number is. If you try to express it in terms of multiples of the age of the universe, another mind-bogglingly big number, you get another number that’s still too big to properly express. (About 900 thousand trillion trillion trillion times the age of the universe, if you’re wondering.) But the fact that properly implemented AES encryption is effectively impossible to break via computational brute force doesn’t mean that your secrets are necessarily safe from, say, regular brute force (as a classic xkcd comic illustrates). One of the fundamental lessons of encryption (and indeed of all computer security) is that the humans that use encryption algorithms are almost always more vulnerable to deception, persuasion, or blunt force trauma than the algorithms themselves. It doesn’t matter how big your encryption key is if the password used to generate said key is just the word “password.”
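For the curious, the back-of-the-envelope arithmetic behind estimates like these is straightforward (the device count and guess rate below are assumptions I picked for illustration; different assumptions shift the answer by a few orders of magnitude, which at this scale barely matters):

```python
# Rough brute-force estimate for a 256-bit key (assumed rates).
total_keys = 2**256
devices = 10**10                # assume ~10 billion computers
guesses_per_second = 10**9      # assume a billion guesses each/second
seconds_per_year = 3.15 * 10**7

years = total_keys / (devices * guesses_per_second * seconds_per_year)
print(f"about {years:.0e} years")  # on the order of 10**50 years
```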
This article was originally published in the Swarthmore Phoenix.