Legal Issues of the 21st Century
May 12, 2000
Intelligent robotic assistants have been a theme in science fiction for decades, from Lost in Space to The Jetsons. Having willing help that understands complex instructions, available twenty-four hours a day, would eliminate menial labor from the average day. Housework would no longer be a chore. Impervious minions could do dangerous tasks at no risk to human life, and we would be allowed more leisure time for family, friends, and hobbies. But to create these machines will require tremendous computing and mechanical power.
Vast advances have already been made in computing power and continue to be made, and we have almost reached a level of computing sophistication that allows intelligent programs. Deep Blue defeated Kasparov at chess only narrowly, but before the end of Kasparov’s lifetime there may be a descendant of Deep Blue that can defeat even the most complicated gambit. These advances provide the groundwork to assume that sometime before the end of the 21st century, computing power will be sufficient and programming will be sophisticated enough to produce actual intelligence, defined by the American Heritage Dictionary as “the capacity to acquire and apply knowledge.” These machines, as with many technological advances, will probably be intended to improve the standard of living, allowing us to avoid menial or dangerous tasks and providing more leisure time. Unlike common science fiction images, these machines need not be humanoid in form, and indeed some may remain only software: sentient programs capable of sweeping computer networks for security breaches and unwanted or illegal materials. Others may be humanoid and used for housework or customer service positions. The possibilities are as great as the human capacity for invention, and as the capacity of software to acquire and apply knowledge expands, and our ability to create new, efficient forms of machines grows, the number of tasks to which machines powered by artificial intelligence can be applied will expand as well. Robotic mining platforms, firefighters, and nannies will replace current, less sophisticated applications of software, such as robotic arms used on assembly lines and programs that read for the blind. Intelligent machines can supplant humans in increasing numbers of dangerous or even fatal tasks, as well as menial ones.
Product demand and intended application will dictate the amount of “intelligence” applied to each machine built. Scientists and engineers will be able to control the capacity of each product as required, allowing creation of everything from sophisticated robotic soldiers that automatically differentiate between hostile targets and civilian non-combatants to comparatively “dumb” household appliances that order necessary groceries or repair work. However, with highly intelligent machines, a second definition of intelligence may eventually come into play. In addition to signifying the capacity to acquire and apply knowledge, intelligence encompasses “the faculty of thought and reason.” Machines that reach this definition of intelligence more accurately match the conception of artificial intelligence found in science fiction. HAL, the computer in 2001: A Space Odyssey, could not have caused trouble for its crew without the ability to reason, nor could Rosie, from The Jetsons, help Elroy with his science projects. But reason suggests the capacity to analyze one’s own thoughts and environment in order to make determinations, and this in turn suggests sentience, or self-awareness.
Currently, we do not know what actual connections exist between intelligence and self-awareness, but with sufficient experimentation in Artificial Intelligence, the assumption that sentient machines will be created seems reasonable. With the ability to study intelligence from its first stages through the creation of artificial intelligence, we have an opportunity to discover the connections and eventually create truly self-aware machines. In doing so, we will also make great gains in understanding human intelligence, since we may need to understand human thought to recreate it in a computer program. Whether sentience will be achieved by design or accident is uncertain, however. Without knowing or understanding the connections between intelligence and self-awareness, it may be that truly sentient machines will be a byproduct of experimentation, not intention. However, whether accidental or intentional, sentient machines are likely to be developed by humans, not to occur spontaneously as an act of nature, and presumably development by humans in a scientific environment, even if accidental, will allow eventual understanding and replication of the mechanics of sentience. This may disappoint the writers of catastrophe stories and B-grade science fiction movies who delight in tales of comets causing machines to revolt, but it does give humanity control over its own creations through rationed production and replication of sentient machinery. This control permits limitations on the number of sentient machines and their level of intelligence, as well as their physical capabilities.
However, though the intelligence of machines can be controlled by humans, the effects of sentience most likely cannot until we fully understand how sentience functions. Therefore, any emotional responses, ethical proclivities, or “human nature” that occurs as a result of sentience in machines will not be easily manipulated in the way that production or level of intelligence may be. These elements begin to mirror reactions that we consider to be “human,” and therefore may trigger our ethical and moral considerations concerning the proper treatment of people. Sentient technology provides the capacity to expand our society to include machines, or to create a servant or slave class within that society, composed entirely of technological beings, designed from their inception to function as the social inferiors of humans. Additionally, this class could be designed for specified levels of intelligence and responses to social situations. Such a class may be a welcome relief from many tasks and may be adopted eagerly by some, but the existence of a servant class does not conform to American legal principles and therefore creates a number of legal issues, in addition to the more complicated ethical and moral issues. Most importantly, are machines persons for purposes of law?
Moral and ethical concerns aside, there are no legal issues if sentient machines are not considered to be persons at law. If machines are not legal persons, they have no legal rights, and most legal issues resulting from machine sentience are immediately moot. However, if sentient machines are persons, and therefore have rights, they cannot legally be treated as slaves and can only be treated as servants if they are compensated. Additionally, law allows control of the potential effects of sentience through the obligations of law, which may be the only way these effects can be controlled.
In the conventional sense of “persons,” machines do not qualify. The general definition of person is “a living human being.” However, the legal definition of person is not so limited by biological criteria. At law, for purposes of constitutional rights and application of legal rights and penalties, a person is defined as 1) “a human being” or 2) “an entity (such as a corporation) that is recognized by law as having the rights and duties of a human being.” The second type of person is considered an “artificial person,” defined as “an entity ... created by law, and given certain legal rights and duties of a human being; a being, real or imaginary, who for the purpose of legal reasoning is treated more or less as a human being.” These entities are considered persons because the law regards them as capable of rights and duties. Anyone capable of rights and duties, human or not, is a legal person, and anyone incapable is not. This standard is fairly low, recognizing the capacity for limited rights in children and even fetuses, and imposing legal obligations on parties with the intelligence of an average seven-year-old child.
As self-aware beings, with the intelligence to recognize obligations, machines are no less capable of rights and duties than any human being. Even if their intelligence is severely limited, it is unlikely that any machine produced with true intelligence or sentience would be intended for a task requiring less intelligence than that of the average seven-year-old child, suggesting that even the most limited machines would be capable of rights and duties. At this point it may be helpful to differentiate between a sentient machine and a “smart” machine. A smart machine need not be intelligent in the sense that would support self-awareness. Smart machines are those that are created for a specific task and respond only to specific pre-set situations. We currently use smart bombs, which can self-correct their trajectories, and we may soon use home appliances that detect the need for service and contact a repairperson. This does not mean that these machines are persons at law. Refrigerators will not rise up to claim their rights and cars will not demand hazard pay. Smart machines are merely “smart” in the sense that they respond to certain limited stimuli and situational criteria, and can distinguish the necessary action from a limited number of possible variables.
By contrast, the level of intelligence which might give rise to sentience would accompany complex programming that can independently react to new stimuli for which it does not have a programmed response. It would analyze all situational elements and make a reasoned determination based on those elements. These programs would be used for complex tasks, such as childcare, military work, or other tasks that require quick responses to novel, unexpected situations with large numbers of variable criteria. These machines would be created to function without human guidance, rather than merely responding to user input and data. Because of this human-like ability to interact with their surrounding environment and respond to changing circumstances, a truly sentient machine would display something much more akin to “true intelligence” than any mere “smart machine.” It is also this interactive ability that leads to awareness of its environment, existence, and thoughts, and to classification as a “person” for the purpose of legal rights. The ability to reason, problem-solve, and recognize its own position in society would allow a sentient machine to recognize its duties at law and comprehend the difference between right and wrong. Therefore, a sentient machine would be a person as a matter of law, even if not viewed as such by general society.
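The distinction drawn here between a pre-set “smart” machine and a program that reasons about novel stimuli can be caricatured in a few lines of code. This is a purely illustrative sketch; the class and stimulus names are hypothetical and no real system is implied.

```python
# Purely illustrative sketch; all names are hypothetical. A "smart" machine
# maps pre-set stimuli to pre-set responses, while a reasoning program must
# construct a response even to stimuli its designer never enumerated.

class SmartAppliance:
    """Responds only to situations enumerated at design time."""
    RESPONSES = {
        "filter_worn": "order replacement filter",
        "door_ajar": "sound alert",
    }

    def react(self, stimulus):
        # An unanticipated stimulus produces no action at all -- no
        # reasoning occurs, only a lookup.
        return self.RESPONSES.get(stimulus, "no action")


class ReasoningAgent:
    """Caricature of open-ended response: every stimulus is analyzed."""

    def __init__(self):
        self.experience = []  # knowledge acquired and applied to later choices

    def react(self, stimulus):
        self.experience.append(stimulus)
        # A real system would weigh goals, context, and consequences; the
        # point is only that the response is constructed, not looked up.
        return f"assess '{stimulus}' against {len(self.experience)} past events"


fridge = SmartAppliance()
agent = ReasoningAgent()
print(fridge.react("power_surge"))  # -> no action
print(agent.react("power_surge"))   # a constructed response, even to novelty
```

The refrigerator in the sketch can never be surprised into thought; the agent, however crude, accumulates experience and applies it, which is the behavior the legal analysis above treats as the threshold of personhood.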
Granting legal personhood raises myriad issues concerning how to apply current law to machines. Covering every possible legal issue, let alone the ethical or moral considerations, could fill a multi-volume treatise. Some of the most immediate issues are addressed below.
As persons, sentient machines will have the legal ability to enjoy and enforce the rights recognized by the Constitution. This poses immediate complications in using sentient machines for utilitarian purposes, as machines are normally intended. Sentient machines would most likely be created with the intention of use and ownership, not in contemplation of creating persons with legal rights. However, if a sentient machine were a legal person, the 13th amendment would prevent ownership of machines or forced performance of tasks. Machines would be constitutionally entitled to select their labor of choice, if they have the capacity to desire different tasks from those initially programmed. But would preprogramming selected tasks constitute involuntary servitude? Many people know from a young age what job they wish to perform when they are adults, but if a human being were somehow programmed from birth, society would be outraged at the invasion of personal choice. A sentient machine, however, does not have the power to choose its own course without a computer program. Without the guidance of an initial program the machine cannot function. Programming the machine for a specific task from inception would not be unlike a parent guiding a child to a specific career choice. A rights violation could only be considered to occur if a machine “changes its mind” and humans then prevent that choice through involuntary service in another task.
Without a clearer understanding of how sentience functions, or any knowledge of the interaction between sentience and complex computer programming, the ability of a machine to alter its own programming is unpredictable. Self-awareness seems to suggest an ability to consider one’s position in society and rationally change it. This would allow machines to change their tasks, but does not require the desire to do so. Emotion creates aspirations; sentience only creates the capacity for choice. Therefore, machines may perform their programmed tasks voluntarily. It may be that we can avoid the legal issues of interference with choice without losing the intended result, and whether that result amounts to servitude or slavery is left as a question of ethics, not law.
Presuming a machine can alter its program and choose its career, and applying the analogy of raising a child, programming a sentient machine’s function would probably not violate the 13th amendment because the program is merely guidance, not a forced result. However, if the machine cannot learn to choose, or is not designed with choice from the initiation of function, a set program may be involuntary. But the machine cannot function without a program. Therefore, it is either involuntarily assigned a task or denied “life.” This is a situation where the “more or less” enters into the treatment of an artificial person as a person at law. We cannot treat a sentient machine exactly as a human in this situation; a rough approximation is used instead. Initial programming should be allowed, lest the machine be denied a chance at “life,” but the right to choose must be protected against tampering or denial. Additionally, the right to a “life” implicates the 14th amendment. Since sentient machines are not living beings, but do have awareness, whether they are considered to be “alive” becomes complicated. If they are alive, the 14th amendment gives a right not to be deprived of that life. Therefore, presuming for the sake of argument that sentient machines have life of which they can be deprived, can we create and produce sentient machines without violating their rights?
Unlike humans, the intelligence of a machine, and presumably its sentience as well, will be within human control from the time of first creation. Human intelligence can be harmed by medical error, physical accident, or health defects, and helped by nutrition, education, and exercise, but we do not control intelligence through these mechanisms. Innate capacity for intelligence is currently out of human hands. Machine intelligence, however, is a product of human invention, and therefore can be documented, manipulated, and controlled precisely through the manufacturing process. With such control over the level of intelligence a machine possesses, can humans legally determine the level of intelligence a sentient machine will have, or is it our duty to create each machine with a certain minimal level of intelligence?
As an initial basis to answer these questions, a comparison to neonatal medical care may be helpful. A doctor has a duty to avoid harm where possible and to inform the mother of possible danger. If the doctor fails in this duty to mother and child, he may be liable for the child’s loss of enjoyment of life. However, this does not extend an action to the child beyond the medical malpractice claim. There is no action for “wrongful life” where the child’s mental capacity is permanently damaged by the failure of the physician. It is presumed that any life, even if severely deformed or mentally incapacitated, is better than none. By analogy, the creator of a sentient machine does not have a duty beyond preservation of what the machine would naturally be capable of, and such capacity would be dictated by the intended function of the machine.
As with 13th amendment concerns about involuntary servitude and interfering with the capacity to choose, this situation is one where machines are treated “more or less” as humans. Humans do not choose to be born. That is a decision of nature, and some parental choice. Machines, likewise, do not choose to be created, although there is much more control over whether, and how, they are created. Therefore, for purposes of law, we may choose to treat machines like humans and deny a “wrongful life” type claim. However, because humans can control the level of intelligence of a sentient machine, whether there is a duty to provide a certain minimal level of intelligence depends on how we determine to assign duties.
Duties are generally assigned based on either an objective, often monetary, loss occurring in the absence of enforcement, or a subjective, emotional loss, such as pain and suffering. If a sentient machine lacks emotion, then a duty to provide a minimal level of intelligence arises only if the machine would otherwise suffer monetary loss. Presumably a machine would not be created unless it had a task and “living accommodations.” Therefore, low intelligence would not cause loss of capacity to work, or inability to find shelter or other necessities of “life.” The only pecuniary loss would occur if the machine were unable to obtain repair and maintenance. However, with an assigned task, a sentient machine would have steady income. The true loss would occur if the lack of intelligence caused obsolescence at an advanced rate, leading to an inability to obtain repair and maintenance prior to the end of the machine’s intended life span. Therefore, there would be a duty either to provide enough intelligence to retain “employment” as necessary, or to provide guaranteed care and maintenance for the machine’s expected life span.
On the other hand, if machines are capable of emotion, there may be a duty to provide at least average intelligence to avoid emotional harm damages. However, as long as sufficient intelligence is provided to perform its intended task, there may be no emotional harm because there is no loss to their intended life.
Just as humans may have a number of duties toward machines based on our control over their lives, presuming that machines are persons, all rights and obligations at law descend upon them as well. This would require sentient machines to be held to the same rule of law as other living beings, including civil and criminal liabilities. This presents a number of potential problems because current law is focused on biological entities with certain physical limitations. Machines, especially computers with network capacity, do not have the same physical limitations as humans. They also do not have the same physical function, which may alter the duty of care for “medical treatments” or change the elements of murder. Current definitions are insufficient to encompass these problems.
Are machines capable of functioning within current legal parameters? For the purposes of most laws, especially those applying to “artificial persons”, the answer is yes. However, there are laws that cannot currently be applied to sentient machines in the fashion they are applied to humans. For example, copyright law currently provides that calling a program into the Random Access Memory (RAM) of a computer constitutes infringement of copyrights by making an unauthorized copy of a protected work. Sentient machines, aside from their own personal operating systems, may need to call other information into their RAM to “think” about a topic. Therefore, when their “thought process” requires information, the machine may be committing copyright infringement.
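The mechanics of this problem can be shown concretely. The following sketch is purely illustrative (the program and file are hypothetical stand-ins): any program that consults an outside work must first reproduce that work in memory, and it is that reproduction which, under the doctrine described above, could itself constitute infringement.

```python
# Illustrative only: to "think about" an outside work, a program must first
# copy it into RAM -- the very act the copyright doctrine above treats as
# making an unauthorized copy. The file here is a hypothetical stand-in
# for a copyrighted work outside the machine's own programming.
import os
import tempfile

# Create the stand-in "copyrighted work" on disk.
with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as f:
    f.write("Protected text the machine wishes to reason about.")
    work_path = f.name

def think_about(path):
    # Reading the file necessarily materializes a second copy in memory;
    # no "thought" about the work can proceed without that copy.
    with open(path) as fh:
        in_memory_copy = fh.read()
    return len(in_memory_copy.split())  # a trivial stand-in for "reasoning"

word_count = think_about(work_path)
os.unlink(work_path)  # the disk original is gone, but a copy already existed in RAM
print(word_count)  # -> 8
```

Note that the in-memory copy exists the moment the work is read, before any “reasoning” at all takes place, which is why the doctrine reaches even purely internal thought processes of a machine.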
This seems somewhat laughable when applied to human thought, but creates a significant legal problem when applied to sentient machines. Copyright law essentially prevents a sentient machine from reasoning through a problem using information that was not part of its initial programming, where that information is subject to the protection of copyright. Since the purpose of creating a sentient machine is to allow the machine to use outside information, rather than relying solely on the preprogrammed possibilities within the foresight of its creator, it is counterproductive to hold a machine liable for using outside information. Copyright law would have to be modified to exempt sentient machines from liability for “thinking.” This is a relatively simple modification, however. Fair use rules could easily be changed to create such an exemption.
The differences in function between human and machine present much greater complications in general tort law, both in relation to liability of machines and duties owed to machines. Although most tort liability concerning duties towards other people would apply fairly easily to a sentient machine, such as negligence or defamation, other actions present difficulties because of the ability to ignore physical boundaries. Harm to a person by a machine can be reasonably determined, but less tangible harms are harder to measure. Application of privacy laws and trespass actions is of particular concern to any machine with the ability to work through a computer network. Since sentient machines are not limited to humanoid form, it may be possible that the Internet is someday modified to be self-aware or that sentient programs will patrol its networks. This would allow intelligent screening of online materials for criminal content or other controlled information based on country of origin or country of request, but the Internet also reaches into every home with a computer. Would the Internet, or any sentient network or computer program, be liable for trespass or a privacy violation because of the ability to reach into homes?
So long as the program was invited there is no legal harm, but when the program enters a system uninvited and causes harm by taking protected information or causing damage, liability would attach. Current law relating to the use of robots and spider programs on the Internet provides the basis for liability, as well as laws such as the Computer Fraud and Abuse Act. What constitutes proper use of robots and other means to obtain or connect information on the Internet has been the subject of recent debate. At this point, the possibility of a trespass action for use of robots and other programs to collect information has been recognized, and it seems a reasonable conclusion that if a program under the control of a reasoning person can be considered an act of trespass, then the actions of a reasoning program would also be trespass. Treating a sentient program like a living being, the “posting” of an online “No Trespassing” or “Private Property” sign on a network should act as it would in physical property cases. A reasonable person would be aware they were in violation of trespass law.
As to duties owed to machines, some are easily analogized from duties towards living beings. For example, the duty of a doctor to use reasonable care in treating a patient analogizes to a duty to use reasonable care in the repair and maintenance of a sentient machine. Other duties are much more difficult. For example, can a sentient machine be murdered? How would a sentient computer program prove battery or other “physical” harm?
Murder of a human occurs when the victim is rendered non-functional to the point that restoration of even minimal function is impossible. This may take more extensive injury for a machine, but is still possible. A machine may be deleted or dismantled, or the application of magnets to its memory could render it incapable of repair. Battery is much harder. If a program is accosted and one line of code is rewritten, there may not be harm; the result may even be improved function. If there were a proven intent to harm, then a beneficial effect would not be relevant so long as some harm could be proven. However, if the intent was benevolent, such as a programmer wishing to upgrade an obsolete command or remove a virus, is there still battery? By analogy to current law, if a person is assaulted and suffers no physical harm or emotional trauma, they have no basis to pursue a claim; therefore, there is no battery.
These tort problems are only a few of the possible complications in applying law to machines. However, many of these complications can be settled through analogy to current legal practices. Intellectual property law presents more difficult issues. Patents and copyrights have the potential to disrupt and complicate a machine’s life in ways that have no human analogy. Additionally, there may be conflict in allowing patents and copyrights on sentient machines while denying them on analogous elements of human function.
Patents are based in the desire to encourage invention by providing economic reward to inventors. Inventors of utilitarian devices gain twenty years, measured from the filing date, of control over the production and sale of their invention. The first inventor of intelligent machines will likely use patent rights to protect the ability to market such machines, and will undoubtedly become wealthy doing so. However, once a machine gains sentience and becomes a legal person, if the machine can no longer be sold or used for its intended servile purposes without violating its constitutional rights, there will no longer be any economic incentive to create more machines of its like.
Therefore, unless the machine itself decides to make duplicates (and can obtain the information required to do so), we may never have more than one sentient machine, or perhaps a handful of them, because there is no economic value in the production of a machine that cannot be sold as property. However, even if lack of economic incentive prevents construction of more than a few sentient machines, that relatively small number of machines still raises complications with current patent law. At the moment, “anything under the sun made by man” can be the subject of a valid patent, provided that it meets the requirements of novelty, non-obviousness, and utility, including biological materials, microbes, and software. When applied to sentient machines, patent protection would be available for their design, parts, function, and all software that allows them to function. Man makes machines, and a sentient machine would be likely to have many elements in its internal parts and software that would be new, useful, and non-obvious.
Limitations might exist if the machine uses self-modifying software. If the internal software of the machine were self-modifying in some way, patent protection might be limited to the initial form of the programming, prior to any modification. Only the elements that can be explicitly stated in the claims of the patent document can gain protection. To allow protection to the modifications of a self-altering program would prevent any certainty of non-infringement, since modifications may be unpredictable. Allowing patent protection in modifications would be similar to allowing protection of mutations of biological material on the basis that they are a form of the original patented materials.
So long as the software does not replicate and is necessary to the function and creation of the machine, a utility patent should apply. However, is it acceptable to patent the parts of a sentient being? This is easily answered in the affirmative. We currently grant patents on biological materials that are necessary to human function. These patents do not interfere with our ability to live, and therefore, by analogy, patents on the internal mechanisms of sentient machines also would not interfere with their lives. But if all the parts of a sentient machine were patented, would the machine violate patent law by way of regular maintenance and occasional replacement of parts? Patent law forbids making or using a patented process or machine without permission of the patent holder, so by making and replacing parts that require a patented process to be created, removed, or replaced, would patent rights be violated? There are already safeguards in patent law for this sort of problem, precisely because so much complex machinery requires maintenance and is subject to patent protection. First, the legitimate owner of an individual embodiment of a machine may use it regardless of the patent. Presuming that, as a legal person, a sentient machine can be considered its own owner, it would have the right to use its own embodiment. It may also be implied that, by creating a machine with sentience, the maker grants it an implied license to function.
Also, regular maintenance is permitted without violation of patent law. This is true both of maintenance by the owner of the patented item and by third parties who perform repairs. Where the maintenance involves parts that commonly wear out, or otherwise require regular replacement due to normal use and wear, there is no violation of patent law to replace those parts, provided there is no violation in the production of those parts. There is only risk of patent infringement when the extent of the work is such that the patented item is virtually reconstructed. Therefore, basic maintenance on a sentient machine would not conflict with patent law unless there was a need to rebuild a portion of the machine’s architecture that was protected by patent. In that case, patent infringement can be avoided by seeking repair from the holder of the patent or an approved repair group.
Alternatively, a fair use exception to patents could be sought based on the right of the sentient machine to pursue life and liberty under the constitution. As a legal person, a sentient machine would have the right to repair damaged parts or processes where such damage impinges on its ability to “live” or function. Finding patent infringement in the case of a machine that attempts to repair itself, or have itself repaired by a third party, would impinge on this right to life and liberty by denying the ability to function as intended. An exception would not undermine patent law. As noted above, the economic incentive to create sentient machines is already diminished by the inability to use them for any purpose against their will, but economic incentive persists in the ability to make money off the production and installation of replacement parts and the creation of improvements. Just as replacement parts are available for other machines, distributors and repair services could sell replacement parts for sentient machines.
Using biological patents, patent protection for sentient machines can be justified, but if the internal components of sentient machines can be patented, would an analogy to the human genome be equally possible? Probably not. The analogy to biological patents works because it permits the patenting of elements of a sentient being, not because it permits protection of naturally occurring elements. The human genome, however, is a product of nature, not man. Therefore it is not properly subject to patent protection, although medical applications through gene therapy may qualify as a patentable process. Likewise, other elements of human beings or nature that are not currently patentable would probably remain unpatentable. This may not be true of copyright, however.
Aside from the problems posed regarding copyright infringement for a sentient machine’s “thoughts,” there are other possible changes to copyright law that might be suggested by the existence of a sentient machine that relies on software to “think” and reason. Drawing a comparison between complex software that permits a computer to exhibit intelligence and rational thought, and the use of the human brain to carry out similarly complex mental tasks, there may be several new applications of copyright law that are both legally interesting and morally disturbing.
Copyright protects original works of authorship, embodied in a physical medium that can be perceived either by the naked eye or with the aid of any technology currently in existence or created in the future. This encompasses all digital works, including software, but does not include ideas or transient works. The software allowing a computer to show reason and intelligence can be copyrighted. Therefore, if the “thoughts” of the machine are part of that program, and thus part of the copyrighted work, a sentient machine could prevent the duplication of its memories because they are fixed on a computer disk or other tangible medium. A human’s thoughts and common thought processes are not protected by copyright at this time because they cannot be “perceived” for purposes of copyright law. However, if we can create the technology to give a machine sentience, we may also be able to create the technology to watch brain waves and neurochemical changes in humans, and to read regular changes to the brain like computer code to show which changes elicit which responses. If this were the case, thought processes might be “fixed in a tangible medium” at the time we think them, because they would be set down in the physical matter of the brain and could be “perceived with a machine or device.” Further, if those changes were stored in brain tissue, the permanent effects might be mapped and “read,” much like the memories of a sentient machine.
However, copyright law, like patent law, protects economic incentives and therefore would not apply to thoughts without a showing that there is an economic use for a copyright on them. Most of the potential uses of copyright protection for human thought relate to technology that is just as speculative as sentient machines, and in some cases more so. Copyright protection might prevent the use of thoughts for purposes other than those intended by their creator, such as a possible “download” to a computer. If the human mind can be downloaded to a computer, and a machine can be sentient, an exact duplicate of an individual’s mental processes could be made and moved to a machine. If copyright protection were extended to thoughts and mental processes, such a download could not be conducted without consent. Similar uses might include preventing unauthorized cloning of a person to the extent that a clone has the same mental patterns, or providing protection against theft of ideas that may be of value to the person who originated them.
Similarly, if the copyright on a sentient machine’s program could be analogized to specific DNA sequences, a parent who creates a genetically altered child might prevent cloning of that child. This would not prevent cloning a non-genetically altered human without that person’s permission, since such a person would not have created their own DNA code and therefore would not have authorship rights for copyright purposes, but it might violate the copyright interest of the person’s parents. Also, organ banks could gain copyright protection on specific, laboratory-created organs that match certain blocks of donor criteria, providing an economic advantage to the bank that holds rights in organs with the lowest failure rate on transplant.
These are only a few of the potential applications of copyright law. Others would no doubt exist, but it is hard to predict what technology will be available at the time a sentient machine is created. Many of these applications have moral and ethical complications as well, and some may be too attenuated to apply, but they raise interesting concerns.
These are only a handful of the potential issues arising from the creation of truly sentient machines, and they may not cover all the fundamental ones. Many others will arise as attempts are made to fit sentient machines into our legal and social structure. Social change is never easy, and sentient machines will undoubtedly face opposition just as many minorities have. This will raise issues of bias, hate crimes, adoption rights, family law, and many other areas, but for now, intellectual property and the initial application of law to sentient machines seem to present the most immediate concerns. Other legal issues will become more pressing as machines become part of society and the courts and legislatures attempt to control or facilitate this merger. New issues will also be created depending on how closely machines resemble humans once they are created, but these initial problems will have to be handled first.
 American Heritage Dictionary, digital version, www.dictionary.com
 American Heritage Dictionary, digital version, www.dictionary.com
 Sentient, adj. 1. Having sense perception; conscious. Conscious, adj. 1.a. Having an awareness of one’s environment and one’s own existence, sensations, and thoughts. American Heritage Dictionary, digital version, www.dictionary.com
 American Heritage Dictionary, digital version, www.dictionary.com
 Black’s Law Dictionary, 4th Edition.
 Black’s Law Dictionary, 4th Edition.
 “any being whom the law regards as capable of rights and duties. Any being that is so capable is a person, whether a human being or not, and no being that is not so capable is a person, even though he be a man. Persons are the substances of which rights and duties are the attributes. It is only in this respect that persons possess juridical significance, and this is the exclusive point of view from which personality receives legal recognition.” John Salmond, Jurisprudence 318 (Glanville L. Williams, ed., 10th ed. 1947)
 Abortion rulings have turned on when the state has an interest in protecting the 14th amendment rights of a child, and currently that interest is acknowledged from conception. Roe v. Wade, 410 U.S. 113 (1973); Planned Parenthood v. Casey, 505 U.S. 833 (1992).
 The 13th amendment prevents the ownership of persons or use of them as slaves. “Neither slavery nor involuntary servitude, except as punishment for crime ... shall exist within the United States.” Amendment 13, United States Constitution.
 See note 6
 “No State ... shall deprive any person of life, liberty, or property, without due process of law.” Amendment 14, United States Constitution.
 Sentient machines can be programmed to recognize legal duties and obligations. In a sense, machines have the capacity to be much more law abiding than humans because they can be designed to remember and recognize rights in a way that human memory cannot. Each sentient machine could be instilled with the knowledge of all legal obligations and their ramifications. Even with the necessity of constant upgrades as legal obligations shift, they would be superior to humans for purposes of obedience to law.
 MAI Systems Corp. v. Peak Computer, Inc., 991 F.2d 511 (9th Cir. 1993), cert. denied, 510 U.S. 1033 (1994).
 18 USC 1030(a)(5)(A)
 CompuServe Inc. v. Cyber Promotions, Inc., 962 F. Supp. 1015 (S.D. Ohio 1997).
 35 USC §154
 Diamond v. Chakrabarty, 447 U.S. 303 (1980).
 35 USC §§101–103
 Amgen Inc. v. Chugai Pharmaceutical Co., 927 F.2d 1200 (Fed. Cir. 1991).
 See note 16.
 Diamond v. Diehr, 450 U.S. 175 (1981).
 35 USC §112
 Genentech v. Wellcome, 29 F.3d 1555 (Fed. Cir. 1994).
 35 USC §154
 Dana Corp. v. American Precision Co., 829 F.2d 43 (Fed. Cir. 1987)
 35 USC §102(b)
 17 USC §102(a)
 17 USC §102(b)
 17 USC §102
 17 USC §102
 17 USC §102(a)