Privacy and Technology
A World of Strong Privacy
There has been a lot of concern in recent years about the end of privacy. As we will see in the next two chapters, there is reason for such fears; the development of improved technologies for surveillance and data processing does indeed threaten our ability to restrict other people’s access to information about us. But a third and less familiar technology is working in precisely the opposite direction. If the arguments of this chapter are correct we will soon be experiencing in part of our lives – an increasingly important part – a level of privacy that human beings have never known before. It is a level of privacy that not only scares the FBI and the National Security Agency, two organizations whose routine business involves prying into other people’s secrets, it sometimes even scares me.
We start with an old problem: how to communicate with someone without letting other people know what you are saying. There are a number of familiar solutions. If you are worried about eavesdroppers, check under the eaves before saying things you do not want the neighbors to hear. To be safer still, hold your private conversation in the middle of a large, open field or a boat in the middle of a lake. The fish are not interested and nobody else can hear.
That approach no longer works. Even the middle of a lake is within range of a shotgun mike. The eaves do not have to contain eavesdroppers – just a microphone and a transmitter. If you check for bugs, someone can still bounce a laser beam off your windowpane and use it to pick up the vibration from your voice. I am not sure that satellite observation is good enough yet to read lips from orbit – but if not, it soon will be. Much of our communication is now indirect, over phone wires, airwaves, the internet. Phone lines can be tapped, cordless or cell phone messages intercepted. An email bounces through multiple computers on its way to its destination – anyone controlling one of those computers can, in principle, save a copy.
A different set of old technologies was used for written messages. A letter sealed with the sender’s signet ring could not protect the message but at least it let the recipient know if it had been opened – unless the spy was very good with a hot knife. A letter sent via a trusted messenger was safer still, provided he deserved the trust.
A more ingenious approach was to protect not the physical message but the information it contained, by scrambling the message and providing the intended recipient with the formula for unscrambling it. A simple version was a substitution cipher, in which each letter in the original message was replaced by a different letter. If we replace each letter with the next one in the alphabet, we get “mjlf uijt” from the words “like this.”
“Mjlf uijt” does not look much like “like this,” but it is not very hard, if you have a long message and patience, to deduce the substitution and decode the message. More sophisticated scrambling schemes rearrange the letters according to an elaborate formula, or convert letters into numbers and do complicated arithmetic with them to convert the message (plaintext) into its coded version (ciphertext). Such methods were used, with varying degrees of success, by both sides in World War II.
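For readers who like to see such things made concrete, here is a minimal sketch of the shift-by-one substitution cipher just described, with lowercase letters shifted and everything else left alone:

```python
# A toy substitution cipher: replace each lowercase letter with the next
# one in the alphabet, wrapping 'z' around to 'a'.  Spaces and
# punctuation pass through unchanged.
def shift_by_one(text):
    result = []
    for ch in text:
        if "a" <= ch <= "z":
            result.append(chr((ord(ch) - ord("a") + 1) % 26 + ord("a")))
        else:
            result.append(ch)
    return "".join(result)

print(shift_by_one("like this"))  # -> mjlf uijt
```

A cipher this simple is exactly as easy to break as the text suggests: with a long enough message, letter frequencies give the substitution away.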
There were two problems with this way of keeping secrets. The first was that it was slow and difficult – it took a good deal of work to convert a message into its coded form or to reverse the process. It was worth doing if the message was the order telling your fleet when and where to attack, but not for casual conversations among ordinary people.
That problem has been solved. The computers most of us have on our desktops can scramble messages, using methods that are probably unbreakable even by the NSA, faster than we can type them. They can even scramble – and unscramble – the human voice as fast as we can speak. Encryption is now available not merely to the Joint Chiefs of Staff but to you and me for our ordinary conversation.
The other problem is that in order to read my scrambled message you need the key – the formula describing how to unscramble it. If I do not have a safe way of sending you messages, I may not have a safe way of sending you the key either. If I sent it by a trusted messenger but made a small mistake as to who was entitled to trust him, someone else now has a copy and can use it to decrypt my future messages to you. This may not be too much of a problem for governments, willing and able to send information back and forth in briefcases handcuffed to the wrists of military attachés, but for the ordinary purposes of ordinary people that is not a practical option.
About twenty-five years ago, this problem was solved. The solution was public key encryption, a new way of scrambling and unscrambling messages that does not require a secure communication channel for either the message or the key. The software to implement that solution is now widely available.
Public key encryption works by generating a pair of keys – call them A and B – each a long number that can be used to unscramble what the other has scrambled. If you encrypt a message with A, someone who possesses only A cannot decrypt it – that requires B. If you encrypt a message with B, you have to use A to decrypt it. If you send a friend key A (your public key) while keeping key B (your private key) secret, your friend can use A to encrypt messages to you and you can use B to decrypt them. If a spy gets a copy of key A, he can send you secret messages too. But he still cannot decrypt the messages from your friend. That requires key B, which never leaves your possession.
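The key-pair relationship can be shown in a few lines of code. The sketch below uses the RSA algorithm with deliberately tiny primes; real keys use numbers hundreds of digits long, and this version is only an illustration of the A/B relationship, not anything secure:

```python
# A toy RSA key pair.  What one key scrambles, only the other unscrambles.
p, q = 61, 53
n = p * q                    # 3233, part of both keys
phi = (p - 1) * (q - 1)      # 3120
e = 17                       # public exponent: key A is (e, n)
d = pow(e, -1, phi)          # private exponent: key B is (d, n)

message = 65
ciphertext = pow(message, e, n)    # anyone with key A can encrypt
plaintext = pow(ciphertext, d, n)  # only key B recovers the message
print(plaintext == message)        # -> True
```

Note that knowing key A and the ciphertext is not enough: recovering `d` from `e` and `n` requires factoring `n`, trivial here but hopeless at realistic sizes.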
How can one have the information necessary to encrypt a message yet be unable to decrypt it? How can it be possible to produce two keys with the necessary relationship but not, starting with one key, to calculate the other? The answer to both questions depends on the fact that there are some mathematical processes that are much easier to do in one direction than another.
Most of us can multiply 293 by 751 reasonably quickly, using nothing more sophisticated than pencil and paper, and get 220,043. Starting with 220,043 and finding the only pair of three-digit numbers that can be multiplied together to give it takes a lot longer. The most widely used version of public key encryption depends on that asymmetry – between multiplying and factoring – using much larger numbers. Readers who are still puzzled may want to look at Appendix I of this chapter, where I describe a very simple form of public key encryption suited to a world where people know how to multiply but have not yet learned how to divide, or check one of the webbed descriptions of the mathematics of the ElGamal and RSA algorithms, the most common forms of public key encryption.
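The asymmetry is easy to feel in code: multiplying 293 by 751 is a single operation, while recovering the factors from 220,043 takes hundreds of trial divisions, and the gap widens explosively as the numbers grow:

```python
# Multiplying is easy; undoing it is slow.  Trial division recovers the
# two primes from 220,043, but scaled up to numbers hundreds of digits
# long the same search would outlast the universe.
def factor(n):
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d, n // d
        d += 1
    return n, 1

print(293 * 751)       # one multiplication: 220043
print(factor(220043))  # hundreds of trial divisions: (293, 751)
```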
When I say that encryption is unbreakable, what I mean is that it cannot be broken at a reasonable cost in time and effort. Almost all encryption schemes, including public key encryption, are breakable given an unlimited amount of time. If, for example, you have key A and a message 1,000 characters long encrypted with it, you can decrypt the message by having your computer create every possible 1,000-character message, encrypt each with A, and find the one that matches. Alternatively, if you know that key B is a number 100 digits long, you could try all possible 100-digit numbers, one after another, until you found one that correctly decrypted a message that you had encrypted with key A.
Both of these are what cryptographers describe as “brute-force” attacks. To implement the first of them, you should start by providing yourself with a good supply of candles – the number of possible 1,000-character sequences is so astronomically large that, using the fastest computers now available, the sun will have burned out long before you finish. The second is workable if key B is a sufficiently short number – which is why people who are serious about protecting their privacy use long keys, and why people who are serious about violating privacy try to pass laws restricting the length of the keys that encryption software uses.
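The arithmetic behind the candles is worth a moment. A back-of-the-envelope sketch, assuming a generously fast attacker (a billion billion guesses per second) and a round figure of five billion years for the sun's remaining lifetime:

```python
# Why brute force fails against long keys: exhausting a 100-digit key
# space outlasts the sun by an absurd margin.  The guess rate and the
# sun's lifetime are assumed round numbers, not measurements.
keys = 10 ** 100                      # possible 100-digit keys
rate = 10 ** 18                       # guesses per second (generous)
seconds = keys / rate
years = seconds / (60 * 60 * 24 * 365)
sun_years = 5 * 10 ** 9               # sun's remaining life, roughly
print(years > sun_years)              # -> True
```

The search takes on the order of 10^74 years; the exact assumptions hardly matter.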
Encryption Conceals …
Imagine that everyone has an internet connection and suitable encryption software, and that everyone’s public key is available to everyone else – published in the phone book, say. What follows?
What I Say
One obvious result is that we can have private conversations. If I want to send you a message that nobody else can read, I first encrypt it with your public key. When you respond, you encrypt your message with my public key. The FBI, or my nosy neighbor, is welcome to tap the line – everything he gets will be gibberish to anyone who does not have the corresponding private key.
Even if the FBI does not know what I am saying, it can learn a good deal by watching who I am saying it to – known in the trade as traffic analysis. That problem too can be solved, using public key encryption and an anonymous remailer, a site on the internet that forwards email. When I want to communicate with you I send the message to the remailer, along with your email address. The remailer sends it to you.
If that was all that happened, someone tapping the net could follow the message from me to the remailer and from the remailer to you. To prevent that, the message to the remailer, including your email address, is encrypted with the remailer’s public key. When he receives it he uses his private key to strip off that layer of encryption, revealing your address, and forwards the decrypted message. Our hypothetical spy sees 1,000 messages go into the remailer and 1,000 go out, but he can neither read the email addresses on the incoming messages – they are hidden under a layer of encryption – nor match up incoming and outgoing messages.
What if the remailer is a plant – a stooge for whoever is spying on me? There is a simple solution. The email address he forwards the message to is not actually yours – it is the email address of a second remailer. The message he forwards is your message plus your email address, the whole encrypted with the second remailer’s public key. If I am sufficiently paranoid, I can bounce the message through ten different remailers before it finally gets to you. Unless all ten are working for the same spy, there is no way anyone can trace the message from me to you. (Readers who want a more detailed description of how remailers work will find it in Appendix II.)
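The layered structure is easy to sketch. In the toy below each remailer's "encryption" is just a reversible encoding standing in for public key encryption, and the addresses are invented for illustration; the point is only the shape of the onion, built from the inside out and peeled one layer per hop:

```python
# A sketch of remailer chaining.  Real remailers use public key
# encryption; base64 here is a stand-in showing the layered structure.
import base64
import json

def wrap(payload, next_hop):
    """Seal payload plus the next address into one remailer's layer."""
    envelope = json.dumps({"to": next_hop, "body": payload})
    return base64.b64encode(envelope.encode()).decode()

def unwrap(blob):
    """What a remailer does: strip its layer, read the next address."""
    envelope = json.loads(base64.b64decode(blob))
    return envelope["body"], envelope["to"]

# Build the onion inside-out: the innermost layer is for the last hop.
message = "meet at noon"
blob = wrap(message, "you@example.org")      # layer for remailer 2
blob = wrap(blob, "remailer2@example.net")   # layer for remailer 1

# Remailer 1 peels one layer and forwards; remailer 2 peels the next.
blob, hop1 = unwrap(blob)    # hop1: remailer2@example.net
body, hop2 = unwrap(blob)    # hop2: you@example.org
print(body)                  # -> meet at noon
```

Neither remailer ever sees both my address and yours in the clear, which is the whole trick.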
We now have a way of corresponding that is doubly private – nobody can know what we are saying and nobody can find out whom we are saying it to. But there is still a problem.
Who I Am
When interacting with other people, it is helpful to be able to prove your identity – which can be a problem online. If I am leading a conspiracy to overthrow an oppressive government, I want my fellow conspirators to be able to tell which messages are coming from me and which from the secret police pretending to be me. If I am selling my consulting services online, I need to be able to prove my identity in order to profit from the reputation earned by past consulting projects and make sure that nobody else free rides on that reputation by masquerading as me.
That problem too can be solved by public key encryption. In order to digitally sign a message, I encrypt it using my private key instead of your public key. I then send it to you with a note telling you whom it is from. You decrypt it with my public key. The fact that what comes out is a message and not gibberish tells you that it was encrypted with the matching private key. Since I am the only one who has that private key, the message must be from me.
My digital signature not only demonstrates that I sent the signed message, it does so in a form that I cannot later disavow. If I try to deny having sent it, you point out that you have a copy of the message encrypted with my private key – something that nobody but I could have produced. Thus a digital signature makes it possible for people to sign contracts that they can be held to – and does so in a way much harder to forge than an ordinary signature.
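In code, signing is encryption run backwards. The sketch below uses the RSA algorithm with tiny, insecure numbers purely for illustration; real signature schemes sign a hash of the message, but the principle is the same:

```python
# Digital signature sketch: scramble with the private key, and anyone
# holding the public key can unscramble and check.  Toy RSA numbers,
# far too small for real use.
p, q = 61, 53
n, phi = p * q, (p - 1) * (q - 1)
e = 17                        # public key (e, n), given to everyone
d = pow(e, -1, phi)           # private key (d, n), never shared

message = 1234
signature = pow(message, d, n)     # only the private key can make this
recovered = pow(signature, e, n)   # anyone with the public key checks
print(recovered == message)        # -> True
```

Because producing the signature requires `d`, a valid signature is evidence that cannot later be disavowed: nobody else could have made it.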
And Whom I Pay
If we are going to do business online we need a way of paying for things. Checks and credit cards leave a paper trail. What we want is an online equivalent of currency – a way of making payments that cannot later be traced, either by the parties themselves or anyone else.
The solution, discussed in some detail in a later chapter, is anonymous ecash. Its essential feature is that it permits people to make payments to each other by sending a message, without either party having to know the identity of the other and without any third party having to know the identity of either of them. One of the many things it can be used for is to pay for the services of an anonymous remailer, or a string of anonymous remailers, thus solving the problem of how to keep remailers in business without sacrificing their customers’ anonymity. Another, as we will see later, is to help us eliminate one of the chief minor nuisances of modern life – spam email.
Combine and Stir
Combine public key encryption, anonymous remailers, digital signatures, and ecash, and we have a world where individuals can talk and trade with reasonable confidence that no third party is observing them.
A less obvious implication is the ability to combine anonymity and reputation. You can do business online without revealing your real-world identity – your true name.1 You prove you are the same person who did business yesterday, or last year, by digitally signing your messages. Your online persona is defined by its public key. Anyone who wants to communicate with you privately uses that key to encrypt his messages; anyone who wants to be sure that you are the person who sent a message uses it to check your digital signature.
With the exception of fully anonymous ecash, all of these technologies already exist, implemented in software that is currently available for free. At present, however, they are mostly limited to the narrow bandwidth of email – sending private text messages back and forth. As computers and computer networks get faster, that will change.
Twice in the past month I traveled several hundred miles – once by car, once by air – in order to give a series of talks. With only mild improvements in current technology I could have given them from my office. Both my audience and I would have been wearing virtual reality goggles – glasses with the lenses replaced by tiny computer screens. My computer would be drawing the view of the lecture room as seen from the podium – including the faces of my audience – at sixty frames a second. Each person in the audience would have a similar view, from his seat, drawn by his computer. Earphones would take care of sound. The result would be the illusion, for all of us, that we were present in the same room seeing and hearing each other.
Virtual reality not only keeps down travel costs, it has other advantages as well. Some lecture audiences expect a suit and tie – and not only do I not like wearing ties, all of the ones I own possess a magnetic attraction for foodstuffs in contrasting colors. To give a lecture in virtual reality I do not need a tie, or even a shirt. My computer can add both to the image it sends out over the net. It can also remove a few wrinkles, darken my hair, and cut a decade or so off my apparent age.
As computers get faster they can not only create and transmit virtual reality worlds, they can also encrypt them. That means that any human interaction involving only sight and sound can be moved to cyberspace and protected by strong privacy.
Handing Out the Keys: A Brief Digression
In order to send an encrypted message to a stranger or check the digital signature on a message from a stranger, I need his public key. Earlier in the chapter, I assumed that problem away by putting everyone’s public key in the phone book. Although that is a possible solution, it is not a very good one.
A key published in the phone book is only as reliable as whoever is publishing it. If our hypothetical bad guy can arrange for his public key to be listed under my name, he can read messages intended for me and sign bogus messages from me with a digital signature that checks against my supposed key. A phone book is a centralized system, vulnerable to failures at the center, whether due to dishonesty or incompetence. There is, however, a simple decentralized solution; as you might guess, it too depends on public key encryption.
Consider some well-known organization, such as American Express, which many people know and trust. American Express arranges to make its public key very public – posted in the window of every American Express office, printed – and magnetically encoded – on every American Express credit card, included in the margin of every American Express ad. It then goes into the identity business.
To take advantage of its services, I use my software to create a public key/private key pair. I then go to an American Express office, bringing with me my passport, driver’s license, and public key. After establishing my identity to their satisfaction, I hand them a copy of my public key and they create a message saying, in language a computer can understand, “The public key of David D. Friedman, born on 2/12/45 and employed by Santa Clara University, is 10011011000110111001010110001101000… .” They digitally sign the message using American Express’s private key, copy the signed message to a floppy disk, and give it to me.
To prove my identity to a stranger, I send him a copy of the digital certificate from American Express. He now knows my public key – allowing him to send encrypted messages that only David Friedman can read and check digital signatures to see if they are really from David Friedman. Someone with a copy of my digital certificate can use it to prove to people what my public key is but cannot use it to masquerade as me – because he does not possess the matching private key.
So far this system has the same vulnerability as the phone book; if American Express or one of its employees is working for the bad guy, they can create a bogus certificate identifying someone else’s public key as mine. But nothing in a system of digital certificates requires trust in any one organization. I can email you a whole pack of digital certificates – one from American Express, one from the U.S. Post Office, one from the Catholic Church, one from my university, one from Microsoft, one from Apple, one from AOL – and you can have your computer check all of them and make sure they all agree. It is unlikely that a single bad guy has infiltrated all of them.2
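A certificate, reduced to essentials, is just a signed statement binding a name to a public key. A minimal sketch, again using toy RSA numbers for the certifying authority and signing a hash of the statement (the statement text and key digits are illustrative, not real):

```python
# A digital certificate sketch: the certifying authority signs a hash
# of a statement binding a name to a public key.  Toy RSA numbers,
# insecure, for illustration only.
import hashlib

p, q = 61, 53
n, phi = p * q, (p - 1) * (q - 1)
e = 17                        # the authority's public exponent
d = pow(e, -1, phi)           # the authority's private exponent

def digest(statement):
    # Hash the statement, reduced mod n so the toy RSA can sign it.
    return int.from_bytes(hashlib.sha256(statement.encode()).digest(), "big") % n

statement = "The public key of David D. Friedman is 10011011..."
certificate = (statement, pow(digest(statement), d, n))   # authority signs

def verify(cert, ca_e, ca_n):
    statement, sig = cert
    return pow(sig, ca_e, ca_n) == digest(statement)

print(verify(certificate, e, n))   # -> True
```

Checking a whole pack of certificates is just running `verify` once per authority, each against that authority's well-known public key.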
So far I have been assuming that real-world identities are unique – each individual has only one. But each of us has, in a very real sense, multiple identities – different things about us are relevant identifiers to different people. What my students need to know is that a message really came from the professor teaching the course they are taking. What my daughter needs to know is that it really came from her father. One can imagine circumstances where it is important to keep multiple real-world identities separate – to conceal from some of the people you are interacting with identifying features that you want to be able to reveal to others. A system of multiple certifying authorities makes that possible, provided you remember which certificates to send to which correspondent. Sending your superior in the criminal organization you are infiltrating the certificate identifying you as a police officer might be hazardous.
A World of Strong Privacy
One of the attractive features of the world created by these technologies is free speech. If I communicate online under my own name using encryption, I can be betrayed only by the person I am communicating with. If I do it using an online persona, with reputation but with no link to my realspace identity, not even the people I communicate with can betray me. Thus strong privacy creates a world that is, in important ways, safer than the one we now live in – a world where you can say things other people disapprove of without the risk of punishment, legal or otherwise.
This brings me to another digression – one directed especially at my friends on the right wing of the political spectrum.
The Virtual Second Amendment
The Second Amendment to the U.S. Constitution guarantees Americans the right to bear arms. A plausible interpretation of its history views it as a solution to a problem of considerable concern to eighteenth-century thinkers – the problem of standing armies. Everyone knew that professional armies beat amateur armies. Everyone also knew – with Cromwell’s dictatorship still fairly recent history – that a professional army posed a serious risk of military takeover.
The Second Amendment embodied an ingenious solution to that problem. Combine a small professional army under the control of the Federal government with an enormous citizen militia – every able-bodied adult man. Let the Federal government provide sufficient standardization so that militia units from different states could work together but let the states appoint the officers – thus making sure that the states and their citizens maintained control over the militia. In case of foreign invasion, the militia would provide a large, if imperfectly trained and disciplined, force to supplement the small regular army. In case of an attempted coup by the Federal government, the Federal army would find itself outgunned a hundred to one.
The beauty of this solution is that it depends not on making a military takeover illegal but on making it impossible. In order for that takeover to occur, it would first be necessary to disarm the militia. But until the takeover had occurred the Second Amendment prevented the militia from being disarmed, since any such attempt would be seen as a violation of the Constitution and resisted with force.
It was an elegant solution 200 years ago, but I am less optimistic than some of my friends about its relevance today. The United States has a much larger professional military, relative to its population, than it did then; the states are much less independent of the Federal government than they were; and the gap between civilian and military weaponry has increased enormously.
Other things have changed as well over 200 years. In a world of broad-based democracy and network television, conflicts between the U.S. government and its citizens are likely to involve information warfare, not guns. A government that wants to do bad things to its citizens will do them by controlling the flow of information in order to make them look like good things.
In that world, widely available strong encryption functions as a virtual Second Amendment. As long as it exists, the government cannot control the flow of information. And once it does exist, eliminating it, like disarming an armed citizenry, is extraordinarily difficult – especially for a government that cannot control the flow of information to its citizens about what it is doing.
If You Work for the IRS, Stop Here
Freedom of speech is something most people, at least in this country, favor. But strong privacy will also reduce the power of government in less obviously desirable ways. Activities that occur entirely in cyberspace will be invisible to outsiders – including ones working for the Federal government. It is hard to tax or regulate things you cannot see.
If I earn money selling services in cyberspace and spend it buying goods in realspace, the government can tax my spending. If I earn money selling goods in realspace and spend it buying services in cyberspace, they can tax my income. But if I earn money in cyberspace and spend it in cyberspace they cannot observe either income or expenditure and so will have nothing to tax.
Similarly for regulation. I am, currently, a law professor but not a member of the State Bar of California, making it illegal for me to sell certain sorts of legal services in California. Suppose I wanted to do so anyway. If I do it as David D. Friedman I am likely to get in trouble. But if I do it as Legal Eagle Online, taking care to keep the true name – the real-world identity – of Legal Eagle a secret, there is not much the State Bar can do about it.
In order to sell my legal services I have to persuade someone to buy them. I cannot do that by pointing potential customers to my books and articles because they were all published under my own name. What I can do is to start by giving advice for free and then, when the recipients find that the advice is good – perhaps by checking it against the advice of their current lawyers – raise my price. Thus over time I establish an online reputation for an online identity guaranteed by my digital signature.
Legal advice is one example; the argument is a general one. Once strong privacy is well established, governmental regulation of information services can no longer be enforced. Governments may still attempt to maintain the quality of professional services by certifying professionals – providing information as to who they believe is competent. But it will no longer be possible to force customers to act on that information – to legally forbid them from using uncertified providers, as they currently are legally forbidden to use unlicensed doctors or lawyers who have not passed the Bar.3
The Downside of Strong Privacy
Reducing the government’s ability to collect taxes and regulate professions is in my view a good thing, although some will disagree. But the same logic also applies to government activities I approve of, such as preventing theft and murder. Online privacy will make it harder to keep people from sharing stolen credit card numbers or information on how to kill people, or from organizing plots to steal things or blow things up.
This is not a large change; the internet and strong encryption merely make it somewhat easier for criminals to do things they are doing already. A more serious problem is that, by making it possible to combine anonymity and reputation, strong privacy makes possible criminal firms with brand-name reputation.
Suppose you very much want to have someone killed. The big problem is not the cost; so far as I can gather from public accounts, hiring a hit man costs less than buying a car, and most of us can afford a car. The big problem – assuming you have already resolved any moral qualms – is finding a reliable seller of the service you want to buy. That problem, in a world of widely distributed strong encryption, we can solve. Consider my four-step business plan for Murder Incorporated:
1. Arrange for mystery billboards on major highways. Each contains a single long number and the message “write this down.” Display ads with the same message appear in major newspapers.
2. Put a full-page ad in the New York Times, apparently written in gibberish.
3. Arrange a multiple assassination with high-profile targets, such as film stars or major sports figures – perhaps a bomb at the Academy Awards.
4. Send a message to all major media outlets telling them that the number on all of those billboards is a public key. If they use it to decrypt the New York Times ad they will get a description of the assassination, published the day before it happened.
You have now made sure that everyone in the world has, or can get, your public key – and knows that it belongs to an organization willing and able to kill people. Once you have taken steps to tell people how to post messages where you can read them, everyone in the world will know how to send you messages that nobody else can read and how to identify messages that can only have come from you. You are now in business as a middleman selling the services of hit men. Actual assassinations still have to take place in realspace, so being a hit man still has risks. But the problem of locating a hit man – when you are not yourself a regular participant in illegal markets – has been solved.
Murder Incorporated is a particularly dramatic example of the problem of criminal firms with brand-name reputations, operating openly in cyberspace while keeping their realspace identity and location secret, but there are many others. Consider “Trade Secrets Inc. – We Buy and Sell.” Or an online pirate archive, selling other people’s intellectual property in digital form, computer programs, music, and much else, for a penny on the dollar, payable in anonymous digital cash.
Faced with such unattractive possibilities, it is tempting to conclude that the only solution is to ban encryption. A more interesting approach is to find ways of achieving our objectives – preventing murder, providing incentives to produce computer programs – that are made easier by the same technological changes that make the old ways harder.
Anonymity is the ultimate defense. Not even Murder Incorporated can assassinate you if they do not know who you are. If you plan to do things that might make people want to kill you – publish a book making fun of the prophet Mohammed, say, or revealing the true crimes of Bill (Gates or Clinton) – it might be prudent not to do it under a name linked to your realspace identity. That is not a complete solution – the employer of the hit man might, after all, be your wife, and it is hard to conduct a marriage entirely in cyberspace – but it at least protects many potential victims.
Similarly for the more common, if less dramatic, problem of protecting intellectual property online. Copyright law will become largely unenforceable, but there are other ways of protecting property. One – using encryption to provide the digital equivalent of a barbed wire fence protecting your property – will be discussed at some length in Chapter 8.
Why It Will Not Be Stopped
For the past two decades powerful elements in the U.S. government, most notably the National Security Agency and the FBI, have been arguing for restrictions on encryption designed to maintain their ability to tap phones, read seized records, and in a variety of other ways violate privacy for what they regard as good purposes. After my description of the downside of strong privacy, readers may think there is a good deal to be said for the idea.
There are, however, practical problems. The most serious is that the cat is already out of the bag – has been for more than twenty-five years. The mathematical principles on which public key encryption is based are public knowledge. That means that any competent computer programmer with an interest in the subject can write encryption software. Quite a lot of such software has already been written and is widely available. And, given the nature of software, once you have a program you can make an unlimited number of copies. It follows that keeping encryption software out of the hands of spies, terrorists, and competent criminals is not a practical option. They probably have it already, and if they don’t they can easily get it.
Banning the production and possession of encryption software is not a practical option, but what about banning or restricting its use? To enforce such a ban, law enforcement agencies would have to randomly monitor a substantial fraction of all communications, taking advantage of the massive wiretapping capacity that current law requires the phone companies to provide them and expanding the legal requirements to apply to other communication providers as well. Any message that looked like gibberish and could not be shown to be the result of a legal form of encryption would lead to legal action against its author.
One practical problem is the enormous volume of information flowing over computer networks. A second problem is that while it is easy enough to tell whether a message consists of text written in English, it is much harder – in practice, impossible – to identify other sorts of content well enough to be sure that they do not consist of, or contain, encrypted messages.
Consider a three-million-pixel digital photo. It is made up of three million colored dots, each described by three numbers – intensity of red, intensity of green, intensity of blue.4 Each of those numbers is, from the standpoint of the computer, a string of ones and zeros. Changing the rightmost digit – the “least significant bit” – from one to zero or zero to one will have only a tiny effect on the appearance of the dot, just as changing the rightmost digit in a long decimal number, say 9,319,413, has only a very small effect on its size.
To conceal a million-character-long encrypted message in my digital photo, I simply replace the least significant bit of each of the numbers in the photo with one bit of the message. The photo is now a marginally worse picture than it was – but there is no way an FBI agent, or a computer working for an FBI agent, can know precisely what the photo ought to look like. This is a simple example of steganography – concealing messages.
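The least-significant-bit trick can be sketched in a few lines; here a short list of color intensities stands in for the photo, and the pixel values and message bits are made up for illustration:

```python
def embed(pixels, message_bits):
    """Hide one message bit in the least significant bit of each color value."""
    stego = list(pixels)
    for i, bit in enumerate(message_bits):
        stego[i] = (stego[i] & ~1) | bit  # clear the LSB, then set it to the message bit
    return stego

def extract(pixels, n_bits):
    """Recover the hidden bits by reading each value's LSB."""
    return [p & 1 for p in pixels[:n_bits]]

# A few color intensities (0-255) standing in for part of a photo.
photo = [200, 201, 117, 54, 90, 33, 250, 128]
secret = [1, 0, 1, 1, 0, 1, 0, 0]

stego = embed(photo, secret)
assert extract(stego, len(secret)) == secret
# Each value changes by at most 1 - an invisible difference:
assert all(abs(a - b) <= 1 for a, b in zip(photo, stego))
```

Since no one but the sender knows what the photo “ought” to look like, the altered bits are indistinguishable from ordinary sensor noise.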
It is not practical for law enforcement to keep sophisticated criminals, spies, or terrorists from possessing and using strong encryption software. What is possible is to put limits on the encryption software publicly marketed and publicly used – to insist, for example, that if AOL or Microsoft builds encryption into their programs it must contain a back door permitting properly authorized persons – a law enforcement agent with a court order, say – to read the message without the key.
The problem with such an approach is that there is no way of giving law enforcement what it wants without imposing very high costs on the rest of us. In order to deal with crimes in progress, police have to be able to decrypt encrypted information they have obtained reasonably quickly; it does little good to read the intercepted message an hour after the bomb has gone off.5 The equivalent in realspace would be legal rules that let properly authorized law enforcement agents open any lock in the country in half an hour. That includes not only the lock on your front door but the locks protecting bank vaults, trade secrets, lawyers’ records, lists of contributors to unpopular causes, and much else.
While access would be nominally limited to those properly authorized, it is hard to imagine any system flexible enough to do the job that was not vulnerable to misuse. If being a police officer gives you access to locks with millions of dollars behind them, in cash, diamonds, or information, some cops will become criminals and some criminals will become cops. Proper authorization presumably means a court order – but not all judges are honest, and half an hour is not long enough for even an honest judge to verify what the officer applying for the court order tells him.6
Encryption provides the locks for cyberspace. If nobody has strong encryption, everything in cyberspace is vulnerable to a sufficiently sophisticated private criminal. If people have strong encryption but it comes with a mandatory back door accessible in half an hour to any police officer with a court order, then everything in cyberspace is vulnerable to a private criminal with the right contacts. Those locks have billions of dollars’ worth of stuff behind them – money in banks, trade secrets in computers.
One could imagine a system for accessing encrypted documents so rigorous that it required written permission from the President, Chief Justice, and Attorney General and only got used once every two or three years. Such a system would not seriously handicap online dealings. But it would also be of no real use to law enforcement, since there would be no way of knowing which one communication out of the billions crisscrossing the internet each day they needed to crack.
In order for regulation to be useful, it has to either prevent the routine use of encryption or make it reasonably easy for law enforcement agents to access encrypted messages. Doing either will seriously handicap the ordinary use of the net. Not only will it handicap routine transactions, it will make computer crime easier by restricting the technology best suited to defend against it. And what we get in exchange is protection, not against the use of encryption by sophisticated criminals and terrorists – there is no way of providing that – but only against the use of encryption by ordinary people and unsophisticated criminals.
Readers who have followed the logic of the argument may point out that even if we cannot keep sophisticated criminals from using strong encryption, we may be able to prevent ordinary people from using it to deal with sophisticated criminals – and doing so would make my business plan for Murder Incorporated unworkable. While it would be a pity to seriously handicap the development of online commerce, some may think that price worth paying to avoid the undesirable consequences of strong privacy.
To explain why I do not expect that to happen requires a brief economic digression.
Property Rights and Myopia
You are thinking of going into the business of growing trees – hardwoods that mature slowly but produce valuable lumber. It will take forty years from planting to harvest. Should you do it? The obvious response is not unless you are confident of living at least another forty years.
Like many obvious responses, it is wrong. Twenty years from now you will be able to sell the land, covered with twenty-year-old trees, for a price that reflects what those trees will be worth in another twenty years. Following through the logic, it is straightforward to show that if what you expect the trees to sell for will more than repay your investment, including forty years of compound interest, you should do it.
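The arithmetic can be checked directly; the interest rate, planting cost, and harvest value below are hypothetical numbers chosen only to illustrate the logic:

```python
# Present-value check for the forty-year tree investment (hypothetical figures).
r = 0.05           # annual interest rate
cost = 1_000       # planting cost per acre today
harvest = 8_000    # expected lumber value per acre in forty years

pv = harvest / (1 + r) ** 40   # what the year-40 harvest is worth today
assert pv > cost               # if so, planting beats leaving the money at interest

# The seller at year 20 loses nothing: the land then sells for the harvest
# discounted only twenty years, which already reflects the remaining growth.
price_at_20 = harvest / (1 + r) ** 20
assert abs(price_at_20 / (1 + r) ** 20 - pv) < 1e-9
```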
This assumes a world of secure property rights. Suppose we assume instead that your trees are quite likely, at some point during the next forty years, to be stolen – legally via government confiscation or illegally by someone driving into the forest at night, cutting them down, and carrying them off. In that case you will only be willing to go into the hardwood business if the return from selling the trees is enough larger than the ordinary return on investments to compensate you for the risk.
Generalizing the argument, we can see that long-run planning depends on secure property rights.7 If you are confident that what you own today you will still own tomorrow – unless you choose to sell it – you can afford to give up benefits today in exchange for greater benefits tomorrow, or next year, or next decade. The greater the risk that what you now own will be taken out of your control at some point in the future, the greater the incentive to limit yourself to short-term projects.
Politicians in a democratic society have insecure property rights over their political assets; Bill Clinton could rent out the White House but he could not sell it. One consequence is that in such a system government policy is dominated by short-run considerations – most commonly the effect of current policy on the outcome of the next election. Very few politicians will accept political costs today in exchange for benefits ten or twenty or thirty years in the future, because they know that when the benefits arrive someone else will be in power to enjoy them.
Preventing the development of strong privacy means badly handicapping the current growth of online commerce. It means making it easier for criminals to hack into computers, intercept messages, defraud banks, steal credit cards. It is thus likely to be politically costly, not ten or twenty years from now but in the immediate future.
What do you get in exchange? The benefit of encryption regulation – the only substantial benefit, since it cannot prevent the use of encryption by competent criminals – is preventing the growth of strong privacy. From the standpoint of governments, and of people in a position to control governments, that may be a large benefit, since strong privacy threatens to seriously reduce government power, including the power to collect taxes. But it is a long-run threat, one that will not become serious for a decade or two. Defeating it requires the present generation of elected politicians to do things that are politically costly for them – in order to protect the power of whoever will hold their offices ten or twenty years from now.
The politics of encryption regulation so far fits the predictions of this analysis. Support for regulation has come almost entirely from long-lived bureaucracies such as the FBI and NSA. So far, at least, they have been unable to get elected politicians to do what they want when doing so involved any serious political cost.8
If this argument is right, it is unlikely that serious encryption regulation, sufficient to make things much easier for law enforcement and much harder for the rest of us, will come into existence, at least in the United States. So there is a reasonable chance that we will end up with something along the lines of the world of strong privacy described in this chapter.
In my view that is a good thing. The attraction of a cyberspace protected by encryption is that it is a world where all transactions are voluntary: You cannot get a bullet through a T1 line. It is a world where the technology of defense has finally beaten the technology of offense. In the world we now live in, our rights can be violated by force or fraud; in a cyberspace protected by strong privacy, only by fraud. Fraud is dangerous, but less dangerous than force. When someone offers you a deal too good to be true, you can refuse it. Force makes it possible to offer you deals you cannot refuse.
Truth to Tell
In several places in this chapter I have simplified the mechanics of encryption, describing how something could be done but not how it is done. Thus, for example, public key encryption is usually done not by encrypting the message with the recipient’s public key but by encrypting the message with an old-fashioned single-key encryption scheme, encrypting the single key with the recipient’s public key, and sending both encrypted message and encrypted key. The recipient uses his private key to decrypt the encrypted key and uses that to decrypt the message. Although this is a little more complicated than the method I described, in which the message itself is encrypted with the public key, it is also significantly faster.
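The hybrid scheme can be sketched as follows. Both ciphers here are toys: the “public key” operation is modular multiplication and the single-key cipher is a hash-derived XOR stream, stand-ins for real algorithms such as RSA and AES; all the key values are invented for illustration.

```python
import hashlib

P = 2**61 - 1           # a prime modulus for the toy public-key scheme

# Toy key pair: multiplying by X "encrypts", multiplying by its modular
# inverse "decrypts" (stand-ins for a real public/private key pair).
X = 123456789
X_inv = pow(X, -1, P)

def pk_encrypt(n, key):
    """Toy public-key operation on a number n < P."""
    return (n * key) % P

def sym_encrypt(message: bytes, key: int) -> bytes:
    """Single-key cipher: XOR with a keystream derived from the session key.
    Applying it twice with the same key gets the plaintext back."""
    stream = hashlib.sha256(key.to_bytes(8, 'big')).digest()
    while len(stream) < len(message):
        stream += hashlib.sha256(stream).digest()
    return bytes(m ^ s for m, s in zip(message, stream))

# Sender: pick a session key, encrypt the message with it, encrypt the
# session key with the recipient's public key, send both.
session_key = 987654321
packet = (sym_encrypt(b"Attack at dawn", session_key),
          pk_encrypt(session_key, X))

# Recipient: recover the session key with the private key, then the message.
ciphertext, wrapped_key = packet
recovered_key = pk_encrypt(wrapped_key, X_inv)
assert sym_encrypt(ciphertext, recovered_key) == b"Attack at dawn"
```

Only the short session key goes through the slow public-key step; the bulky message goes through the fast single-key cipher, which is the point of the hybrid design.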
Similarly, a digital signature is actually calculated by using a one-way hash function to create a message digest of the original message and encrypting the digest with your private key, then sending both message and digest. The recipient decrypts the digest, creates a second digest from the message using the same hash function, and compares them to make sure they are identical, as they will be if the message has not been changed and the public and private keys match.
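The sign-and-verify round trip looks like this in miniature. The hash step uses a real one-way function, but the keys are toys: modular multiplication again stands in for a real signature algorithm, and the key values are invented.

```python
import hashlib

P = 2**61 - 1
PRIVATE = 271828182845
PUBLIC = pow(PRIVATE, -1, P)   # the matching public key

def digest(message: bytes) -> int:
    """One-way hash of the message, reduced to a number below P."""
    return int.from_bytes(hashlib.sha256(message).digest(), 'big') % P

def sign(message, private_key):
    """Encrypt the digest with the private key."""
    return (digest(message) * private_key) % P

def verify(message, signature, public_key):
    """Decrypt the signature and compare it with a fresh digest of the message."""
    return (signature * public_key) % P == digest(message)

msg = b"I agree to the contract."
sig = sign(msg, PRIVATE)
assert verify(msg, sig, PUBLIC)
# Any change to the message makes the digests disagree:
assert not verify(b"I agree to a different contract.", sig, PUBLIC)
```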
Such complications make describing the mechanics of encryption more difficult and are almost entirely irrelevant to the issues discussed here, so I ignored them.
A second set of complications, also ignored but more important, concerns indirect ways in which cryptographically protected anonymity might be attacked. One example is textual analysis. A perceptive reader or sufficiently sophisticated software might recognize stylistic similarities between the books of David Friedman and the written legal advice of Legal Eagle. The odds that the same person has read work by both identities closely enough to identify them as the same may not be very high – but software designed for textual analysis could create a database linking a very large number of known authors to stylistic identifiers for their writing. A simple one for me would be the overuse of “hence.”
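A crude sketch of how such textual analysis might work: compare the rates of a few telltale words across texts. The marker words and sample texts below are invented for illustration; real stylometry uses far richer features.

```python
from collections import Counter
import math

def style_vector(text, markers=('hence', 'thus', 'therefore', 'however')):
    """Rate of a few telltale words per 1,000 words - a crude stylistic fingerprint."""
    words = text.lower().split()
    counts = Counter(words)
    return [1000 * counts[m] / len(words) for m in markers]

known = "hence the result follows hence we conclude thus it holds " * 5
anon  = "hence this clause binds hence the party agrees thus liability " * 5
other = "the court however found otherwise and however ruled against " * 5

# The anonymous text's fingerprint sits closer to the known author's.
assert math.dist(style_vector(known), style_vector(anon)) < \
       math.dist(style_vector(known), style_vector(other))
```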
Another problem is that most of what I have described depends on your having complete control over your computer – or at least over a smart card containing your private key and enough software to use it to encrypt and decrypt. If someone else can get at your private key by either a physical or virtual intrusion, all bets are off. If someone else can get control of your computer, even without access to your private key, he can use that control to mislead you in a variety of ways – for instance, by falsely reporting that a message has a valid digital signature. As Mark Miller puts it, “People don’t sign, computers sign.” And encrypt, decrypt, and check signatures. So a crucial element of strong privacy is the ability of individuals to control the computers they use. In practice, a secure system is likely to include provisions for publicly canceling private keys that may have fallen into the wrong hands.
An alternative approach is to memorize your private key. A 128-bit key can be represented as a string of about 20 numbers, letters, and punctuation marks, which is not that difficult to memorize. Alternatively, the key can be derived from a passphrase, a procedure that is less secure but easier for the user. In either case, you still have the problem of making sure that your computer can be trusted to forget the key as soon as it has used it.
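Deriving a 128-bit key from a passphrase can be sketched with a standard key-derivation function; the passphrase, salt, and iteration count here are illustrative:

```python
import hashlib

# PBKDF2 stretches a passphrase into a fixed-length key; the salt and high
# iteration count make guessing passphrases expensive for an attacker.
key = hashlib.pbkdf2_hmac('sha256', b'correct horse battery staple',
                          b'per-user salt', 100_000, dklen=16)
assert len(key) * 8 == 128   # 16 bytes = 128 bits
```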
PUBLIC KEY ENCRYPTION: A VERY ELEMENTARY EXAMPLE
Imagine a world in which people know how to multiply numbers but not how to divide them. Further imagine that there exists some mathematical procedure capable of generating pairs of numbers that are inverses of each other: X and 1/X. Finally, assume that the messages we wish to encrypt are simply numbers.
I generate a pair X, 1/X. To encrypt the number M using the key X, I multiply X times M. We might write
[M,X] = MX,
meaning “Message M encrypted using the key X is M times X.”
Suppose someone has the encrypted message MX and the key X. Since he does not know how to divide, he cannot decrypt the message and find out what the number M is. If, however, he has the other key, 1/X, he can multiply it times the encrypted message to get back the original M:
MX(1/X) = M(X/X) = M
Alternatively, one could encrypt a message by multiplying it by the other key, 1/X, giving us
[M,1/X] = M/X.
Someone who knows 1/X but does not know X has no way of decrypting the message and finding out what M is. But someone with X can multiply it times the encrypted messages and get back M:
(M/X) X = M
So in this world, multiplication provides a primitive form of public key encryption: a message encrypted by multiplying it with one key can only be decrypted with the other.
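The toy scheme can be written out directly. Exact fractions keep the arithmetic clean; key generation is the one step allowed to use division, which the eavesdropper by assumption cannot do:

```python
from fractions import Fraction

# Key generation: a pair X and 1/X (the generation procedure may divide;
# the eavesdropper may not).
X = Fraction(355, 113)
X_inv = 1 / X

M = Fraction(42)                 # the message is just a number

# Encrypt with one key, decrypt by multiplying with the other:
assert (M * X) * X_inv == M      # [M,X] = MX, then MX(1/X) = M
assert (M * X_inv) * X == M      # [M,1/X] = M/X, then (M/X)X = M
```

Someone holding only X and the ciphertext M/X is stuck: recovering M would require the division he does not know how to do.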
Public key encryption in the real world depends on mathematical operations that, like multiplication and division in my example, are much easier to do in one direction than the other. The RSA algorithm, for example, at present the most widely used form of public key encryption, depends on the fact that it is easy to generate a large number by multiplying together several large primes but much harder to start with a large number and factor it to find the primes that can be multiplied together to give that number. The keys in such a system are not literally inverses of each other, like X and 1/X, but they are functional inverses, since either one can undo (decrypt) what the other does (encrypts).
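A toy RSA key pair, with deliberately tiny primes so the steps are visible, shows both directions of the functional inverse (real keys use primes hundreds of digits long):

```python
# Toy RSA with tiny primes - for illustration only.
p, q = 61, 53
n = p * q                 # 3233: easy to publish, hard (at real sizes) to factor
phi = (p - 1) * (q - 1)   # computable only if you know the factors of n
e = 17                    # public exponent
d = pow(e, -1, phi)       # private exponent, the functional inverse of e mod phi

M = 65                                # a message, encoded as a number below n
C = pow(M, e, n)                      # encrypt with the public key
assert pow(C, d, n) == M              # decrypt with the private key
assert pow(pow(M, d, n), e, n) == M   # the keys also work in the other order
```

The second assertion is the basis of digital signatures: what the private key “encrypts,” the public key undoes.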
CHAINING ANONYMOUS REMAILERS
M is my actual message; [M,K] means “message M encrypted using key K.” Kr is the public key of the intended recipient of my message, Er is his email address. I am using a total of three remailers; their public keys are K1, K2, K3, and their email addresses are E1, E2, E3. What I send to the first remailer is:
[([([([M,Kr] + Er),K3] + E3),K2] + E2),K1]
The first remailer uses his private key to strip off the top layer of encryption, leaving him with:
[([([M,Kr] + Er),K3] + E3),K2] + E2
He can now read E2, the email address of the second remailer, so he sends the rest of the message to that address. The second remailer receives:
[([([M,Kr] + Er),K3] + E3),K2]
and uses his private key to strip off a layer of encryption, leaving him with:
[([M,Kr] + Er),K3] + E3
He then sends to the third remailer:
[([M,Kr] + Er),K3]
The third remailer strips the third layer of encryption off, giving him:
[M,Kr] + Er
and sends [M,Kr] to the intended recipient at Er – who then uses his private key to strip off the last level of encryption, giving him M, the original message.
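The layering above can be sketched directly. “Encryption” here is just a tagged wrapper, a stand-in for real public key encryption; the keys and addresses are the hypothetical ones from the notation.

```python
# Toy "encryption": [M,K] becomes a tuple tagged with the key.
def encrypt(payload, key):
    return ('locked', key, payload)

def decrypt(packet, key):
    tag, k, payload = packet
    assert tag == 'locked' and k == key, "wrong private key"
    return payload

M = "meet at noon"
Kr, Er = 'Kr', 'recipient@example'   # hypothetical recipient key and address

# Build the onion from the inside out:
# [([([([M,Kr] + Er),K3] + E3),K2] + E2),K1], sent to E1.
packet = (encrypt(M, Kr), Er)
for key, addr in [('K3', 'E3'), ('K2', 'E2'), ('K1', 'E1')]:
    packet = (encrypt(packet, key), addr)
onion, first_hop = packet            # the sender mails `onion` to first_hop (E1)

# Each remailer strips one layer, learning only the next address.
hop, addr, route = onion, first_hop, []
for key, my_addr in [('K1', 'E1'), ('K2', 'E2'), ('K3', 'E3')]:
    assert addr == my_addr           # the message arrived at this remailer
    hop, addr = decrypt(hop, key)
    route.append(addr)

assert route == ['E2', 'E3', 'recipient@example']
assert decrypt(hop, Kr) == M         # the recipient strips the last layer
```

No single remailer sees both the sender and the final recipient, which is the point of the chain.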
1 The earliest sketch of these ideas that I have seen appeared in a science fiction story by a computer scientist – “True Names” by Vernor Vinge (Vinge, 1987). The point of the title was that just as, in traditional fantasy, a sorcerer must protect his true name to keep others from using magic against him, so in an online world individuals must protect their real-world identities to keep others from acting against them in realspace.
2 A still more decentralized version was proposed and implemented by Philip Zimmermann, creator of the widely used public key program PGP – a web of trust. Every time you correspond with someone, he provides you a list of all the public keys he knows about, along with the identities of their owners. Your software keeps track of the information. The more people have told you that a particular public key belongs to a particular person, the more confident you are that it is true. In effect, everyone becomes a certifying authority for everyone else.
3 The use of professional licensing to reduce competition is discussed in Milton Friedman, Capitalism and Freedom, Chapter 9.
4 There are, of course, lots of other ways in which the image might be encoded.
5 I think there is a quote somewhere from the FBI director about wanting the ability to decrypt in half an hour; if any reader is aware of that or something similar, please let me know.
6 The raid on Steve Jackson Games, which resulted in the Secret Service being found liable for tort damages for its violation of federal law, provides one striking example. According to news stories in 2007, in 2005 the chief judge of the Foreign Intelligence Surveillance Court complained to the Justice Department that the FBI had repeatedly provided the court with inaccurate information in order to get surveillance warrants: “Records show that the FISA court approves almost every application for the warrants, which give agents broad powers to electronically monitor and surveil people who they allege are connected to terrorism or espionage cases. The number of requests rose from 886 in 1999 to 2,074 in 2005. The court did not reject a single application in 2005 but ‘modified’ 61, according to a Justice Department report to Congress.”
8 This passage was first written before the September 11th attack on the World Trade Center. That event strengthened the hand of the supporters of encryption regulation, but I think the long-term prediction still holds.