[This is based on the final draft on my hard disk, and may differ in detail from the published version]
Originally published in Social Philosophy & Policy, volume 13, number 2 (Summer 1996), published by Cambridge University Press. Copyright 1996 by the Social Philosophy and Policy Foundation. Reprinted with the permission of the Social Philosophy and Policy Foundation. Any reproduction, copying, downloading, or use of any kind of this material is a violation of copyright, and such uses must first receive written permission by contacting the Social Philosophy and Policy Center, Bowling Green State University, Bowling Green, Ohio 43403.
A major theme in discussions of the influence of technology on society has been the computer as a threat to privacy. It now appears that the truth is precisely the opposite. Three technologies associated with computers-public key encryption, networking, and virtual reality-are in the process of giving us a level of privacy never known before. The U.S. government is currently intervening in an attempt, not to protect privacy, but to prevent it.
Part I of this article is an explanation of the technologies, intended to demonstrate that current developments, if they continue, will produce a world of strong privacy, a world in which large parts of our lives are technologically protected from the observation of others. Part II is a discussion of the likely consequences, attractive and unattractive, of that change. Part III is a brief account of attempts by the U.S. government to prevent or control the rise of privacy.
Two people wish to communicate privately. One way to do so is to make sure that nobody can intercept the communication. If you are worried about eavesdroppers, check under the eaves-or hold your confidential conversations in the middle of large open spaces. Send letters only by trusted messengers.
This approach has become more difficult over time, and in some contexts, such as cellular phone calls, very nearly impossible. Broadcast signals can be intercepted. A complicated switching network, such as the phone system, can be tapped. EMail goes from one computer to another through a series of intermediates; someone controlling any of the intermediate machines can intercept the message. How, in such a world, can we preserve privacy?
One approach is by legal restrictions on the interception of messages and the use of information. Tapping phone lines is, under most circumstances, illegal. The use of even legally obtained information is restricted by legislation such as the Fair Credit Reporting Act.
An alternative, and in many contexts superior, approach is to encrypt the message, so that even if it is intercepted it cannot be read. Parents spell out messages they do not want their small children to understand; when the children learn to spell, the parents switch to using a foreign language. The military and intelligence services rely on more sophisticated applications of the same principle to protect sensitive communications.
Until quite recently, encryption suffered from two serious handicaps. Encrypting and decrypting were slow and laborious processes, making encryption of anything save highly sensitive information more trouble than it was worth. And in order for B to decrypt the message received from A, B had to get from A the information necessary for decryption: the key. If the key was intercepted or stolen, the encrypted message could be read.
Both problems have been solved. With modern computers, written messages can be encrypted and decrypted faster than they can be typed. It is becoming possible to encrypt and decrypt even spoken messages as fast as they are spoken.
The problem of transmitting keys was solved by the invention of public key cryptography. A public key encryption scheme involves two keys, each of which functions as an inverse of the other: If key 1 is used to encrypt a message, key 2 is required to decrypt it, and vice versa.
This sounds puzzling; how can someone have the information necessary to encrypt a message yet be unable to decrypt it? A description of how actual public key algorithms work would require a level of mathematics unsuited to this journal, but it is possible to describe a form of public key encryption that would work in a world more mathematically primitive than our own.
Public Key Encryption: A Very Elementary Example
Imagine a world in which people know how to multiply numbers but not how to divide them. Further imagine that there exists some mathematical procedure capable of generating pairs of numbers that are inverses of each other: X and 1/X. Finally, assume that the messages we wish to encrypt are simply numbers.
I generate a pair X, 1/X. To encrypt the number M using the key X, I multiply X times M. We might write

E(M, X) = M×X = MX

meaning "Message M encrypted using the key X is M times X."
Suppose someone has the encrypted message MX and the key X. Since he does not know how to divide, he cannot decrypt the message and find out what the number M is. If, however, he has the other key 1/X, he can multiply it times the encrypted message to get back the original M:

MX × (1/X) = M
Alternatively, one could encrypt a message by multiplying it by the other key, 1/X, giving us

M × (1/X) = M/X
Someone who knows 1/X but does not know X has no way of decrypting the message and finding out what M is. But someone with X can multiply it times the encrypted message and get back M:

(M/X) × X = M
So in this world, multiplication provides a primitive form of public key encryption: a message encrypted by multiplying it with one key can only be decrypted with the other.
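The toy scheme can be sketched in a few lines of code, using exact fractions so the arithmetic is lossless. The sample key and message are, of course, merely illustrative, and since in our world division is easy, this provides no real security:

```python
from fractions import Fraction

# Toy key pair: X and its inverse 1/X.  In the imagined world nobody
# knows how to divide, so knowing one key does not reveal the other.
X = Fraction(355, 113)
X_inv = 1 / X          # here we "cheat" and divide to construct the pair

M = 42                 # the message is simply a number

# Encrypt with one key, decrypt with the other -- in either order.
ciphertext = M * X
assert ciphertext * X_inv == M   # decrypting with the inverse recovers M

ciphertext2 = M * X_inv
assert ciphertext2 * X == M      # and vice versa
```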
In the real world, of course, we know how to divide. Real public key encryption depends on mathematical operations which, like multiplication and division in my example, are very much easier to do in one direction than the other. The RSA algorithm, for example, which is at present the most widely used form of public key encryption, depends on the fact that it is easy to generate a large number by multiplying together several large primes, but much harder to start with the number and factor it to find the primes that can be multiplied together to give that number. The keys in such a system are not literally inverses of each other, like X and 1/X, but they are functional inverses, since a message encrypted with either key can be decrypted with the other.
Using Public Key Encryption
I wish to make it possible for other people to send me messages that only I can read. I publish one of my two keys, called my public key, where anyone can read it-in the phone book or its future equivalent. The other key, my private key, I keep secret. Anyone who wishes to send me a message encrypts it using my public key. Decrypting it requires my private key, which only I have. The private key cannot be deduced from the public key at any reasonable cost in computing time.
I wish to send someone a message and prove that it comes from me. I encrypt the message with my private key. The recipient decrypts it with my public key. The fact that he ends up with a message and not gibberish implies that it was encrypted with my private key-which only I have.
Public key encryption thus solves two problems at once. It provides secure communications-messages that can only be read by the intended recipient. And it provides the digital equivalent of a signature, a way of proving the origin of a message. By encrypting a message with both the intended recipient's public key and the author's private key, one can produce a message that is both secure and signed. Only the author could have created it, since it was encrypted with the author's private key; only the intended recipient can read it, since it must be decrypted with the intended recipient's private key.
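As an illustration of how the two operations combine, here is a toy sketch of signing and then encrypting with textbook RSA. The primes and keys are tiny and purely hypothetical; real keys are hundreds of digits long, and real systems add padding and other safeguards:

```python
# Hypothetical author key pair (textbook RSA with tiny primes).
p, q = 61, 53
n_a = p * q                        # modulus 3233
e_a, d_a = 17, 2753                # public / private exponents

# Hypothetical recipient key pair.
n_r, e_r, d_r = 89 * 97, 5, 5069   # modulus 8633

M = 65                             # the message, as a small number

# Author signs with his private key, then encrypts the signed message
# with the recipient's public key.
signed = pow(M, d_a, n_a)
ciphertext = pow(signed, e_r, n_r)

# Recipient decrypts with her private key, then verifies with the
# author's public key; recovering M proves the author sent it.
recovered_sig = pow(ciphertext, d_r, n_r)
assert pow(recovered_sig, e_a, n_a) == M
```

Signing first matters: the signature then travels inside the encrypted envelope, readable only to the intended recipient.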
Public key encryption as it now exists, implemented in readily available computer programs such as PGP, provides a secure and verifiable way of transmitting EMail. It thus provides strong privacy, privacy that depends neither on secure communication channels nor on legal protection, for a small but increasingly important class of communications. It permits individuals to send messages across the internet with reasonable confidence that if intercepted, even by the FBI or the NSA, they cannot be read.
Several technological developments already in progress will greatly increase the importance of encryption. One is the increasing power of computers, which makes it possible to encrypt more and more complicated signals. Currently it is easy to encrypt text and possible to encrypt speech in real time (as fast as it is typed or spoken); in the near future it should be possible to do the same for speech plus video.
Increasing computer power not only makes it easier to encrypt, it also makes it easier to break encrypted messages. As computers become more powerful, users must lengthen their keys if they do not wish their messages to become easier to crack. It appears to be the case, however, that increasing the key length increases the computer time needed to break an encrypted message by more than it increases the time needed to encrypt it, so the net effect of more powerful computers is to favor encryption over cracking.
A related development is the increased use and bandwidth of computer networks. At present, a few tens of millions of people have access to the internet. The number is growing rapidly; it seems likely that in another decade or so, a majority of the population of the developed world will have access to the internet or something similar. At present, internet connections go through channels of varying bandwidth, from modems up to fiber optic cables. The result is a transmission rate adequate for text messages for essentially all users, for transmission of still pictures for many users, and for transmission of real time audio-video signals for very few. Changes currently in progress should result, over a decade or two, in a network with sufficient bandwidth to support real time audio-video for most users.
One reason such capacity is important is another technology: virtual reality. The year is 2010. From the viewpoint of an observer, I am alone in my office, wearing goggles and earphones. From my viewpoint I am at a table in a conference room with a dozen other people. The other people are real-seated in offices scattered around the world. The table and the room exist only in the mind of a computer. The scene is being drawn, at a rate of sixty frames a second, on my goggles-a little differently for each eye, to give three dimensional vision. The meeting is virtual but, to my sight and hearing, it might as well be real. It is sufficiently real for the purposes of a large fraction of human interactions-consulting, teaching, meeting. There is little point to shuttling people around the world when you can achieve the same effect by shuttling electrical signals instead. As wide band networks and sufficiently powerful computers become generally available, a large part of our communication will shift to cyberspace.
Encryption makes the content of messages private. But even if the content is private, the mere fact that A is communicating, or doing business, with B provides information to observers-especially if B is a criminal or a supporter of unpopular political positions. That raises two problems for a technology of privacy: how to make cash transactions private and how to prevent monitoring of who is talking to whom. There are solutions to both.
Digital Cash
The solution to the first problem is an idea called digital cash, invented by cryptographer David Chaum. It is a procedure that uses encryption to permit payments in which neither payer nor payee can identify the other, and the creator of the private money they are using can identify neither. A less sophisticated equivalent would be transactions using discreet banks in a trustworthy jurisdiction-perhaps a nation that makes its living in part through banking privacy. With digital cash, payments can be made by simply sending messages--without either party knowing the identity or physical location of the other. The cash can pass from one person to another through a long and anonymous chain, before the final recipient returns it to the issuing bank to be redeemed for (say) dollars.
And Anonymous Remailers
One solution to the second problem already exists: anonymous remailers. An anonymous remailer is a site on the internet which receives messages, each with the address of its destination attached, and then resends them to that address. An observer sees a thousand messages come into the remailer and a thousand come out, but even if he knows the source of each incoming message and the destination of each outgoing, he does not know which sender is communicating with which recipient.
Anonymous remailers can use public key encryption to prevent a spy from either reading destination addresses on intercepted incoming messages or matching incoming and outgoing messages by comparing them. The original sender encrypts the message with the recipient's public key, then encrypts both that encrypted message and the destination using the remailer's public key. The remailer uses his private key to decrypt, leaving him with a message he can not read and a destination that he can.
One potential weakness of this way of maintaining privacy is that it depends on the reliability of the remailer; what if he is secretly working for the observer? The solution is to relay a message through multiple remailers. As long as at least one is honest, the observer cannot match up sender and receiver. With sufficiently high speed networks and computers, the whole process occurs fast enough to introduce no noticeable lag. Using digital cash, anonymous remailers can function as private businesses, selling privacy at a few cents an hour.
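The layering can be sketched as follows. A keyed XOR stream stands in for real public key encryption, and the remailer names, keys, and addresses are hypothetical; the point is only that each remailer, peeling its own layer, learns nothing but the next hop:

```python
import hashlib
import json

def keystream(key: bytes, length: int) -> bytes:
    """Deterministic keystream from a key (a stand-in for real crypto)."""
    out, counter = b"", 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

def encrypt(key: bytes, data: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

decrypt = encrypt  # XOR is its own inverse

# Hypothetical remailer keys (in reality these would be public keys,
# with only each remailer holding the matching private key).
remailer_keys = {"r1": b"key-one", "r2": b"key-two"}

# The sender wraps the message in layers, innermost first.
inner = json.dumps({"to": "alice", "body": "meet at noon"}).encode()
layer2 = encrypt(remailer_keys["r2"], inner)
middle = json.dumps({"to": "r2", "body": layer2.hex()}).encode()
layer1 = encrypt(remailer_keys["r1"], middle)

# Each remailer peels one layer and learns only the next destination.
hop1 = json.loads(decrypt(remailer_keys["r1"], layer1))
assert hop1["to"] == "r2"                 # r1 sees only "forward to r2"
hop2 = json.loads(decrypt(remailer_keys["r2"], bytes.fromhex(hop1["body"])))
assert hop2["to"] == "alice"              # only r2 sees the final address
```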
A more sophisticated solution to the problem of concealing who is talking with whom (sometimes referred to as the "Dining Cryptographers" protocol) has been proposed by David Chaum. It is a procedure by which a group of individuals jointly generate a signal in such a way that no subgroup can identify the contribution of any single member. If all save one member of the group have nothing to say, that one member is the source of the message-but no member or group of members (short of everyone else) can tell which one it is.
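A minimal sketch of one round of such a protocol: each pair of participants shares a secret random bit, each participant announces the XOR of her shared bits (the sender also folds in the message bit), and the XOR of all announcements is the message--yet the announcements reveal nothing about who sent it:

```python
import secrets

def dc_round(sender_index: int, message_bit: int, n: int = 3) -> int:
    """One round of a dining-cryptographers net with n participants."""
    # Each pair (i, j) shares a secret random bit.
    shared = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            shared[i][j] = shared[j][i] = secrets.randbits(1)

    # Each participant announces the XOR of her shared bits; the
    # sender additionally XORs in the message bit.
    announcements = []
    for i in range(n):
        bit = 0
        for j in range(n):
            if j != i:
                bit ^= shared[i][j]
        if i == sender_index:
            bit ^= message_bit
        announcements.append(bit)

    # Every shared bit appears in exactly two announcements, so all of
    # them cancel in the total XOR, leaving only the message bit.
    result = 0
    for b in announcements:
        result ^= b
    return result

assert dc_round(sender_index=1, message_bit=1) == 1
assert dc_round(sender_index=2, message_bit=0) == 0
```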
Consider a world in which the technologies I have described exist and are widely available. What will it look like?
In that world, any transaction that can be carried out in virtual reality over a network is secure-it cannot be observed by private snoops, the FBI, or the IRS. Not only is the content of the transaction private, so is the identity of the parties. I can plot a crime, or address a mass meeting, anonymously.
One disadvantage of anonymity at present is that an anonymous agent has no reputation to protect and therefore cannot be trusted. Digital signatures solve that problem. A firm can go into business by publishing its public key. Thereafter, anyone who sends messages encrypted with that public key knows he is dealing with that firm, since nobody else can read them. Anyone who receives messages that make sense when decrypted with that key knows they are from the firm. Thus a firm or an individual can have an identity and a reputation, despite the fact that nobody outside the firm knows where it is located or who controls it.
This world has both advantages and disadvantages in comparison with ours. One obvious advantage is freedom of speech that does not depend on Congress or the courts. What cannot be observed cannot be controlled.
A more ambiguous consequence will be a severe restriction on the ability of governments to tax. In a world of strong privacy, I can sell my services as teacher, lawyer, or business consultant without anyone, even my customers, knowing who I am. My business name and attached reputation are defined by my public key. A large and growing part of the economy will no longer be taxable.
The IRS has the alternative of deducing income from expenditures, a traditional approach to dealing with those engaged in illegal professions. But in a world of strong privacy, a substantial part of my expenditure will also be invisible, spent as digital cash to buy information and services over the net. In such a world taxes, whether of production or consumption, will shift away from information goods and services and towards goods that can be physically observed: food, housing, fuel.
Another consequence of strong privacy will be to make certain sorts of legal regulation impractical. In many ways, this will be a good thing. Political censorship, for example, will become enormously more difficult. Many professions, such as medicine, will no longer be able to use professional licensing or trade barriers to restrict competition. Other consequences are less obviously attractive. In a world of strong privacy, violation of copyright becomes easy. A pirate publisher, operating anonymously, can set up a commercial archive of copyrighted books, music, or programs and sell them over the net just like a legitimate dealer. His customers will be able to communicate with him and he with them, but neither party to the transaction need know the physical location or true identity of the other.
One can imagine the use of privacy for more serious criminal enterprises as well. Buying and selling of trade secrets, purchasing embarrassing information for purposes of blackmail, even hiring a contract killer, become easier in a world where businesses can operate, and establish reputations, without revealing their physical location or proprietors.
One way to prevent these threats is by preventing the world I have described from coming into existence; in part III I discuss that possibility. A more interesting approach is to find ways of using the technologies that create the problems to solve them, or at least to reduce their costs.
Consider the case of copyright. In a world of strong privacy, intellectual property law is unenforceable. Contract law, however, is still enforceable because parties choose who to contract with. I can insist on contractual partners providing adequate guarantees of performance, whether by revealing their identity (in a jurisdiction that enforces contracts), posting a bond with a reputable bonding agency, or simply having a reputation in cyberspace (attached to their public key--which is what defines their identity in cyberspace) that they do not want to lose. This suggests the possibility of using contracts to make up for the unenforceability of intellectual property law.
I have created an item of intellectual property, say a book. Imagine that there is a way to label each separate copy of the book, so that if a pirate copy appears I can prove which original it is from. I can then replace the protection of copyright with the protection of contract by requiring purchasers to agree that they will be liable for substantial damages if a pirate copy made from their original is offered for sale.
There are two obvious problems with this approach: ability to label and willingness of purchasers to assume liability. Let us start with the first.
Suppose I am writing a book. From time to time I come to a sentence that I could write in either of two equally good ways. Instead of choosing one I record both. When I am done, I have a book with a hundred such pairs of variant sentences. Every time I sell a copy, my computer chooses at random which variant of each sentence to use, creates a copy using those choices, and records the copy and the buyer. With a hundred variant sentences there are 2^100 possible versions of the book-roughly a thousand billion billion billion. If a pirate copy appears, I compare it with the record of the copies sold and sue the purchaser of the copy on which it was based.
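The bookkeeping involved can be sketched as follows (the buyer names are hypothetical): each sold copy gets a random pattern of variant choices, recorded against the buyer, and a verbatim pirate copy can then be traced back:

```python
import secrets

VARIANTS = 100  # number of pairs of interchangeable sentences

def sell_copy(buyer: str, records: dict) -> tuple:
    """Choose a random variant for each sentence pair and record
    the resulting pattern against the buyer."""
    pattern = tuple(secrets.randbits(1) for _ in range(VARIANTS))
    records[pattern] = buyer
    return pattern

def trace(pirate_pattern: tuple, records: dict) -> str:
    """Identify which buyer's original a pirate copy was made from."""
    return records.get(pirate_pattern, "unknown")

records = {}
p1 = sell_copy("buyer-1", records)
p2 = sell_copy("buyer-2", records)

# With 2^100 possible patterns, two copies virtually never collide,
# so a verbatim pirate copy identifies its source.
assert trace(p1, records) == "buyer-1"
```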
Unfortunately, a clever pirate can defeat this form of labelling. The pirate buys ten copies instead of one. His computer compares the ten versions in order to identify the variant sentences, then randomly chooses one sentence from each pair to create an eleventh version-which he offers for sale.
While this form of labelling does not work on something as simple as a book, it can work for more complicated forms of intellectual property. For a simple example of why it can work, consider an arithmetic text in which both questions and answers exist in variant versions. A pirate who buys ten copies and picks alternatives at random will produce a copy where questions and answers no longer match.
That particular example is too simple; the pirate would recognize the problem and vary questions and answers together. For a workable example, consider a computer program. A program can be varied in a multitude of ways that are irrelevant to how it works-provided that everything varies together. It does not matter whether a particular variable is located at memory location 2000 or 3000-provided that every reference to that variable points to the right location. To produce labelled copies, the programmer writes the program in his preferred source language, then uses a variant compiler to produce ten thousand different machine language versions, all functionally equivalent. A pirate can, if he wishes, buy ten versions and combine them to make an eleventh-but its probability of running will be very close to zero.
So the problem of labelling copies is soluble for complicated forms of intellectual property, such as computer programs, although not for simple forms, such as books. For complicated property, it should be possible to prove which purchaser is responsible for letting his copy be pirated. We are left with the problem of persuading purchasers to agree to be liable if pirate copies are made from their original.
Whether they are willing to agree to accept such liability depends on how likely it is that their copy will get out without their permission. In our world, where my master copies of programs are located on disks in an unlocked drawer of my office, I would be reluctant to agree to be liable for thousands, perhaps hundreds of thousands, of dollars of damages if they were stolen. But we are considering a world where strong encryption is in common use. In such a world, my copy of a program exists inside my computer system, encrypted with a key that only I have, so is unlikely to get out without my cooperation. While there is some risk, it should be less than the risk I now take, every time I drive, of getting in an accident and finding myself liable for a large damage payment.
My conclusion is that contract law will provide an effective way of protecting some intellectual property. Other intellectual property will be protected by other technological means. Producers of intellectual property that cannot be protected will have to make money from their work in less direct ways.
There are many different indirect ways available--although they are not a perfect substitute for intellectual property protection. A children's movie might, as some do, collect large revenues from the sale of toys based on it-even in a world of strong privacy, intellectual property law is still enforceable when what is being sold is a physical object (a toy) rather than information (the text of a book). Other approaches are observed in the context of publicly distributed software-shareware, freeware, and the like. Some programmers ask for-and sometimes get-payments for the use of their shareware programs, based on moral suasion or tie-ins with less easily pirated goods, such as support. Philip Zimmermann received no royalties for writing PGP, but the effect on his reputation may have greatly increased his future earning power.
In the context of intellectual property, the new technology produces both new problems and new solutions. The same should be true in other contexts, such as the problem posed by criminal firms with brand name reputation. Law will provide less protection than before, since it will be harder to enforce, but potential victims will be more able than before to defend themselves through privacy. It is hard to have a competitor killed, or even to steal his trade secrets, if you have no idea what he looks like or where on the globe he lives. The larger the fraction of one's activities that take place in cyberspace, the more practical protection through anonymity will be.
As the example of intellectual property suggests, while strong privacy makes more difficult the enforcement of laws imposed on unwilling parties, it permits enforcement of agreed-upon rules and greatly facilitates freedom of association. One implication is the possibility of replacing, for a considerable range of human activities, politically generated law with market generated law.
Consider a simple, and real, example: a mailgroup. Individuals with EMail access wish to hold a conversation on some topic of mutual interest. One of them creates an EMail address for the group and sets up his computer so that EMail incoming to that address is relayed back to everyone on the list.
Such a mail group is, among other things, a proprietary community. The list administrator controls the list of addresses from which EMail is accepted for the group and to which EMail is echoed. He can make any rules he likes about what other members must do to remain in the group. In practice, this often means rules defining the level of courtesy required and excluding conversations on subjects unrelated to the purpose of the group. Thus such a group exhibits a simple form of private law.
The list administrator is a dictator, but not a monopolist. If others find his rules unsatisfactory, they are free to set up their own groups. Given the widespread availability of computers, the cost of establishing such a group is low. The list administrator, like the proprietor of a firm in a competitive market, can do whatever he pleases-but if he does not please his customers, he will soon have none. Thus one way of looking at a mail group is as a mechanism for privately producing law on a competitive market.
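The mechanism is simple enough to sketch. The addresses and the single rule here are hypothetical; the point is that the administrator's only sanction, expulsion, is enough to enforce whatever rules he sets:

```python
class MailGroup:
    """A mail group as a proprietary community: the administrator
    sets the rules, and the ultimate sanction is expulsion."""

    def __init__(self, admin: str):
        self.admin = admin
        self.members = {admin}

    def join(self, address: str) -> None:
        self.members.add(address)

    def expel(self, address: str) -> None:
        self.members.discard(address)

    def post(self, sender: str, text: str) -> list:
        """Echo a message to every member except the sender;
        mail from non-members is silently dropped."""
        if sender not in self.members:
            return []
        return [(member, text) for member in self.members if member != sender]

g = MailGroup("admin@example.org")
g.join("a@example.org")
g.join("b@example.org")
assert len(g.post("a@example.org", "hello")) == 2   # echoed to the others

g.expel("b@example.org")                            # the ultimate sanction
assert g.post("b@example.org", "hi") == []          # expelled: no one hears
```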
So far I have been describing mailgroups as they now exist, private communities held together by a very narrow band-width communication: occasional text messages. Consider the same institution as it would exist in a world of virtual reality and high bandwidth networks. In this world, the mailgroup becomes a virtual community, whose members can see and hear each other, gather in (virtual) living rooms, interact in many of the ways possible to real communities.
In part III of Anarchy, State and Utopia, Robert Nozick described a utopian vision-a world of communities, each set up under its own rules, with members free to move among communities or start their own. Within the limits of cyberspace, that vision already exists in the form of mail groups and will exist, within a few decades, in the form of virtual communities. Each community will have its own rules, enforced by a single ultimate sanction: expulsion. The result will be a world defined by a single rule: freedom of association. Encryption is the essential defensive technology for such a world, the technology that gives individuals the power to set up and maintain virtual communities inhabited by willing citizens, whether or not other individuals, or governments, approve. Think of it as crypto-anarchy.
This is an attractive vision, at least to those committed to the idea of individual freedom. The problems occur on the interface between cyberspace and physical space-when, for example, the anonymity of crypto-anarchy is used to protect a firm that assassinates real bodies.
So far I have presented the world of strong privacy as an inevitable result of current technological developments. This raises an obvious question: can it be stopped?
In one sense, the answer is obviously yes. If, for example, a thermonuclear war returns us to the technological level of the stone age, the problems and promises of strong privacy will cease to be of much concern. If, to take a less extreme case, governments decide to entirely prohibit private use of computer networks, one of the key technologies will be eliminated.
I do not think it likely that many developed countries will adopt such policies. The advantages of networks are too enormous to be forgone, even if they bring with them long term risks. It is particularly unlikely given that governments tend to make their decisions primarily in terms of short term costs and benefits.
A more plausible strategy is to permit networks but forbid encryption. This restriction is also costly, although less costly than banning networks entirely, since encryption is important for many uses of networks that governments have no wish to prevent, such as banking services. Even for activities where security is not essential, it is still of considerable value in a context where messages are easy to intercept. It therefore seems unlikely that attempts to ban encryption will be very successful, and more likely that attempts to prevent the developments discussed here will take the form of policies designed either to slow the spread of encryption into general use or to control it. That has in fact been the pattern so far in the U.S.
The government has tried to impede the development and spread of encryption by the use of export controls. The International Traffic in Arms Regulation (ITAR) defines cryptographic devices, including software, as munitions; exporting them requires permission from the State Department, and such permission is generally not available for software embodying strong protection.
This policy makes little sense as a way of keeping foreign governments from learning how to protect their secrets and steal ours, since it does not prevent the domestic sale of such products; it is easy enough to smuggle floppy disks out in a diplomatic pouch. It makes more sense as a way of slowing the spread of encryption into general use. An American company that wishes to include encryption capabilities in its software products must either get permission to export them, or create and maintain a different, encryption-free, version for its foreign customers.
Two recent legal controversies center on export controls of encryption. One involves Philip Zimmermann, whose program PGP is widely used, here and abroad, for the public key encryption and decryption of EMail. He is reported to be under investigation by a grand jury for violating export controls by making PGP available on servers from which it was possible for foreigners to download it; so far no charges have been filed. Most servers used to make documents available on the internet are accessible to foreign as well as domestic users, so the implication of the investigation is that making encryption software freely available on the internet without prior State Department approval is illegal. The other legal controversy is a recent lawsuit by a graduate student in mathematics at the University of California at Berkeley named Dan Bernstein, supported by the Electronic Frontier Foundation, seeking to have the present system of export controls declared unconstitutional, chiefly on first amendment grounds.
The outcome of these two cases may affect how rapidly encryption comes into use. But even if the government wins both of them, that will only delay these developments, not prevent them. Since encryption software is useful, reasonably easy to write and, like any software, easy to copy and transmit, it is hard to see how such restrictions can permanently prevent its spread.
A second approach is the Clipper initiative, an attempt to establish an encryption standard designed to be vulnerable to a law enforcement agent with a court order but to nobody else. The Clipper Chip was announced under the Clinton administration, but represents the outcome of a National Security Agency research program going back many years. The essential features of the chip and the proposed policy are:
1. Every Clipper Chip has two keys built into it; possession of both is needed in order to decrypt messages encrypted by that chip. Two escrow agencies will be established, one to hold a database containing the first key for every chip produced, one to hold a database containing the second key for every chip produced. A law enforcement agent with a court order for a wiretap takes the court order and the serial number of the chip to be tapped to the escrow agencies and obtains the keys.
2. The government asserts that the Clipper Chip provides secure encryption against anyone not possessing the keys. The encryption algorithm, however, will not be made public, and the chip has been designed to prevent discovery of the algorithm by reverse engineering.
3. The government is not at present either requiring telephone companies to use the Clipper Chip or forbidding the use of other forms of encryption, although the original announcement implied that the latter possibility had not been ruled out. The intention is to get it voluntarily adopted as a standard. AT&T has announced that it will use the Clipper Chip in forthcoming encryption devices.
4. The government will use the Clipper chip itself for sensitive but not for classified communications. It is hoped that government use will encourage its adoption by private parties who wish to be able to hold encrypted conversation with government agencies, and therefore help make it a standard.
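The essential idea behind point 1, splitting a key between two escrow agencies so that neither alone can decrypt anything, can be illustrated with a simple sketch. The actual Clipper mechanism (the classified Skipjack algorithm and its law-enforcement access field) is more elaborate and not public; what follows is only a toy demonstration, using XOR secret sharing, of how a key can be divided so that both shares are required to reconstruct it:

```python
import secrets

def split_key(chip_key: bytes) -> tuple[bytes, bytes]:
    """Split a chip key into two escrow shares (XOR secret sharing).

    Either share alone is a uniformly random string, statistically
    independent of the key; only the XOR of both shares recovers it.
    """
    share1 = secrets.token_bytes(len(chip_key))              # held by escrow agency 1
    share2 = bytes(a ^ b for a, b in zip(chip_key, share1))  # held by escrow agency 2
    return share1, share2

def recover_key(share1: bytes, share2: bytes) -> bytes:
    """Recombine the two escrowed shares into the chip key
    (what a law enforcement agent with a court order would do)."""
    return bytes(a ^ b for a, b in zip(share1, share2))

chip_key = secrets.token_bytes(16)   # the key built into one chip
s1, s2 = split_key(chip_key)
assert recover_key(s1, s2) == chip_key   # both shares together work
assert s1 != chip_key and s2 != chip_key # neither share is the key
```

The point of the construction is that compromising a single escrow agency reveals nothing at all; an attacker needs both databases, which is why the proposal uses two agencies rather than one.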
The Clipper Chip was announced by the Clinton Administration as a way of preventing terrorists, drug dealers, and foreign spies from using modern technology to make themselves impervious to wiretaps, while providing ordinary citizens with the benefit of electronic privacy. This raises an obvious problem. It hardly seems likely that a malefactor sophisticated enough to use encryption would deliberately choose a form of encryption specifically designed to be vulnerable to a law enforcement agent with a court order. So achieving the declared purpose of the Clipper Chip seems to require that other forms of encryption be prevented.
It is not clear whether this is either desirable or practical. Even if every telephone in the country is equipped with a Clipper Chip, a criminal may still be able to use his own hardware to pre-encrypt a message before it gets to the phone-in which case the law enforcement agent at the other end, after decrypting the Clipper encryption, will still hear gibberish. Thus the effect of widespread adoption of the standard may be to permit law enforcement to tap the phones of everyone except the sophisticated criminals whose deeds the Clipper is supposed to prevent.
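The pre-encryption evasion can be made concrete with a sketch. The cipher below is a deliberately insecure toy (a SHA-256-based XOR keystream), and the key names are invented for illustration; the point is only the layering: strip the outer Clipper layer with the escrowed key, and the inner layer is still gibberish.

```python
import hashlib

def toy_cipher(key: bytes, data: bytes) -> bytes:
    """Toy XOR stream cipher (illustration only, not secure).
    XOR is its own inverse, so the same call encrypts and decrypts."""
    stream = b""
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, stream))

message = b"meet at midnight"
criminal_key = b"private pre-encryption key"   # known only to the criminals
clipper_key = b"escrowed chip key"             # recoverable via the escrow agencies

# The criminal pre-encrypts the message; the phone then applies Clipper encryption.
on_the_wire = toy_cipher(clipper_key, toy_cipher(criminal_key, message))

# A wiretapper with the escrowed key strips only the outer layer...
after_escrow = toy_cipher(clipper_key, on_the_wire)
assert after_escrow != message                            # ...and still hears gibberish
assert toy_cipher(criminal_key, after_escrow) == message  # only the inner key recovers it
```

Nothing in the escrow system prevents this layering, which is why mandating the Clipper as the only legal cipher would still not expose sophisticated criminals to wiretaps.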
This suggests that the supporters of the Clipper chip may be doing a poor job of describing what it is good for. It is of limited usefulness as a way of maintaining the ability of law enforcement agents to intercept communications between sophisticated criminals, both because sophisticated criminals can evade it and because wiretaps represent a fairly small part of law enforcement efforts. But if Clipper can be established as a standard, it might be very useful as a way of preventing unsophisticated customers from dealing with sophisticated criminal firms-from hiring assassins, for example.
The information available about the Clipper leaves considerable doubt as to how good the protection it provides will be-a particularly serious issue if alternative forms of encryption are forbidden or discouraged. The general view of the cryptographic community is that the only practical way of testing the security of a new algorithm is for lots of clever people to spend lots of time trying to see if they can find a way of breaking it. By refusing to make the Clipper's algorithm public, the government prevents such a test.
This raises further questions concerning the reason for keeping the algorithm secret. One much discussed possibility is that it contains a back door, a deliberate weakness that can be exploited, presumably by the NSA, to decrypt messages. A second possibility is that the algorithm is being kept secret not because it has a back door but because it might have one. Keeping the algorithm secret makes it harder for cryptographers to figure out how to break it. The disturbing thing about this explanation is that the NSA itself knows the algorithm and employs able cryptographers. So this explanation implies that the NSA may eventually be able to read conversations encrypted by the Clipper chip without the formality of a court order.
Even if the Clipper chip provides adequate security against anyone who does not have the key, there remain two problems inherent in the idea of key-escrow encryption. The first is the security of the agencies controlling the keys. It does no good to have a technologically secure system if it is vulnerable to any private detective with the right contacts. The second is that foreign countries are unlikely to trust and adopt an encryption system created, kept secret, and controlled by the U.S. government. So even if Clipper could be established as a standard within the U.S., it is unlikely to be established as a standard for the world-which, in a worldwide network, defeats much of the point of having a standard.
For these reasons and others, the Clipper chip proposal has met heavy opposition from large parts of the computer and communications industry. While it is possible that this administration or the next will push the proposal through despite that opposition, it seems unlikely that it will be able to convert the Clipper into a mandatory standard by banning alternative forms of encryption. If it does not, it seems unlikely that it will be adopted sufficiently widely to prevent the spread of other forms of encryption, here and abroad. It follows that the future I have been describing, a future of strong privacy, is, although not certain, probable.
September 26, 1995