Artificial Intelligence: Legal Research

Contents

I.     Assumptions

II.    Legal Issues

A.    Legal Rights

B.    Liability

III.   Defining Artificial Intelligence

A.    Intelligent Entity

B.    Software Agents

C.    Objectives of Artificial Intelligence

IV.   Legal Rights and Status

V.    Legal Liability

A.    Agency Law

B.    Criminal Law

i.     The Perpetration-by-Another Liability Model: AI Robots as Innocent Agents

ii.    The Natural-Probable-Consequence Liability Model: Foreseeable Offenses Committed by AI Robots

iii.   The Direct Liability Model: AI Robots as Direct Subject of Criminal Liability

C.    Tort Law

VI.   Endnotes

 


 

    I.         Assumptions

This research assumes that eventually a form of artificial intelligence (AI) will exist that is on par with human intelligence. Further, we should assume that:

      i.         We will employ intelligent agents specifically for their creative intelligence and corresponding judgment;

     ii.         These agents will make decisions, and

   iii.         Some of the resultant decisions will be "pathological"; that is, not only will they be unpredictable and surprising, which may be desirable, but they will also have unintended and destructive consequences both for networks and for the substantial portions of our infrastructure connected to those networks.[1]

This research contemplates the anticipated legal issues that will arise if AI does reach such a level, and then attempts to develop theories of legal liability for offenses committed by an AI.

 II.         Legal Issues

A.   Legal Rights

•      If an entity is aware of life enough to assert its rights by protesting its dissolution, is it entitled to the protection of the law?[2]

•      Should a self-conscious computer facing the prospect of imminent unplugging have standing to bring a claim of battery?[3]

•      If we agree that a machine could potentially be a candidate for rights, we still must answer: Which machines and which rights? What would a computer have to do to deserve legal or moral personhood?[4]

B.   Liability

•      With complex computer systems consisting of a combination of overlapping programs created by different coders, it is often difficult to know who should bear moral blame or legal liability for a computer action that produces an injury.[5]

•      Computers often play major roles in writing their own software; what if one created a virus and sent it around the world?[6]

•      Computers now help operate on us and help us handle our investments; should we hold them as accountable as we do our surgeons and financial analysts when they screw up?[7]

•      Can society impose criminal liability upon robots? If so, how do you punish an AI robot?[8]

•      Does the growing intelligence of AI robots subject them to legal social control, just as any other legal entity?[9]

•      How can AI entities fulfill the two requirements of criminal liability (i.e., actus reus and mens rea)?[10]

•      If an AI entity can be held criminally liable for an offense, can it raise a defense of insanity in relation to a malfunctioning AI algorithm, when its analytical capabilities have become corrupted as a result of that malfunction?[11] May it assert a defense of being under the influence of an intoxicating substance (similar to humans being under the influence of alcohol or drugs) if its operating system is infected by an electronic virus?[12]

III.         Defining Artificial Intelligence

A.   Intelligent Entity

There are five attributes that one would expect an intelligent entity to have:[13]

1.     Communication.

•      The easier it is to communicate with an entity, the more intelligent the entity seems.

2.     Mental knowledge.

•      An intelligent entity is expected to have some knowledge about itself.

3.     External knowledge.

•      An intelligent entity is expected to know about the outside world, to learn about it, and to utilize that information.

4.     Goal-driven behavior.

•      An intelligent entity is expected to take action in order to achieve its goals.

5.     Creativity.

•      An intelligent entity is expected to have some degree of creativity. In this context, creativity means the ability to take alternate action when the initial action fails.

B.   Software Agents

Software agents are rules-based software products which may aid with internet searches, filter incoming electronic mail, find the appropriate area of a help program in online documentation, watch for news on topics you have specified, and suggest changes to your stock portfolio.[14]

Advances in the area of artificial intelligence are producing "learning algorithms" that will soon produce software agent products that are based on neural networks. These products, the "second generation of intelligent software agents," will be able to learn from their experiences and adapt their behavior accordingly. Software programs with this ability to learn will, consequently, be capable of decision-making, resulting in software that may take actions that neither the licensor nor the licensee anticipated.[15]
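The difference between these two generations can be illustrated with the short sketch below. It is a hypothetical example, not drawn from the cited sources; the class names, rules, and thresholds are invented for clarity. The point is that a rules-based filter acts only on rules its developer wrote, while a learning filter adjusts its own weights from user feedback, so its eventual decisions may be ones that neither the licensor nor the licensee anticipated.

# Hypothetical illustration only: a fixed, rules-based agent versus a simple
# learning agent. Names and thresholds are invented and are not taken from
# the sources cited in this memorandum.

class RuleBasedFilter:
    """Flags mail using rules fixed in advance by the developer; its behavior never changes."""

    BLOCKED_WORDS = {"lottery", "winner", "prize"}

    def is_unwanted(self, message: str) -> bool:
        # Every decision traces directly back to the developer's rule list.
        return any(word in message.lower() for word in self.BLOCKED_WORDS)


class LearningFilter:
    """Adjusts word weights from user feedback, so later decisions may differ
    from anything the original programmer specified."""

    def __init__(self) -> None:
        self.weights: dict[str, float] = {}  # word -> learned score

    def is_unwanted(self, message: str) -> bool:
        score = sum(self.weights.get(word, 0.0) for word in message.lower().split())
        return score > 1.0

    def give_feedback(self, message: str, unwanted: bool) -> None:
        # Reinforce or weaken each word's weight based on the user's judgment.
        delta = 0.5 if unwanted else -0.5
        for word in message.lower().split():
            self.weights[word] = self.weights.get(word, 0.0) + delta


if __name__ == "__main__":
    learner = LearningFilter()
    learner.give_feedback("claim your prize now", unwanted=True)
    learner.give_feedback("claim your prize now", unwanted=True)
    learner.give_feedback("meeting moved to noon", unwanted=False)
    # True: this judgment emerged from feedback rather than from rules the programmer wrote.
    print(learner.is_unwanted("claim your prize"))
    # False: everyday mail scores below the threshold.
    print(learner.is_unwanted("meeting at noon"))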

C.   Objectives of Artificial Intelligence

The objectives of AI scientists may be grouped into two broad categories: the development of computer-based models of intelligent behavior, and the pursuit of machines capable of solving problems normally thought to require human intelligence.[16]

IV.         Legal Rights and Status

Through empathy or arrogance, or perhaps through an instinct for ethical consistency, we tend to seek rights for things that appear to be like us and to deny rights to things that don't.[17]

The list of threshold characteristics proposed for a computer to have legal or moral personhood is exhaustive: the ability to experience pain or suffering, to have intentions or memories, and to possess moral agency or self-awareness. None of these characteristics is well-defined, though, and this is especially the case with the most oft-cited of the lot: consciousness.[18]

It is most likely that a machine that has the ability to interact with humans in the world will be the first candidate for rights.[19]

One possibility would be to treat A.I. machines as valuable cultural artifacts, to accord them landmark status with stipulations about their preservation and disassembly.[20]

  V.         Legal Liability

A.   Agency Law

Intelligent software agents will be capable of causing harm. Unlike earlier software agents, they will be capable of finding their own sources of information and making commitments – possibly unauthorized commitments.[21]

Where software is capable of unplanned – rather than predictable – behavior, the law will therefore need to be applied without factual precedent to these situations.[22]

Agency law provides a suitable framework in which to find a solution for harms committed by the next generation of intelligent software; an agency relationship is formed when the software licensee installs and then executes the software program. Accordingly, intelligent software agents should be regulated under agency law.[23]

Agency law provides an adequate framework because, in general, a software licensee will be activating software for some purpose. The intelligent software agent will then use its learning, mobility and autonomous properties to accomplish specific tasks for the licensee. Thus, we see the software agent in the legal role of the "agent," and the software licensee in the legal role of the "principal."[24] This relationship of agent/principal has been formed whether or not the parties themselves intended to create an agency or even think of themselves as agent and principal.[25]

Once intelligent software agents are viewed as legal agents within an agency relationship, it follows that liability can be attributed to the actions of the software agents, binding the software licensee (principal) to legal duties.[26]

It appears that the courts are willing to consider machines as participants in ordinary consumer transactions.[27]

By extending this previous court treatment of transactions conducted by autonomous machines, or by using the liability theories available under contract (where the actions of an agent bind the principal to third parties), or tort (where the principal may be vicariously liable for the actions of the agent), intelligent software agents can be treated as "legal agents."[28]

Once intelligent software agents are viewed as having legal status with formation of the agency relationship, it follows that liability can be attributed to the actions of the software agents and the licensees will consequently be held responsible. Certain legal duties vest in the licensees as principals in the newly formed agency relationship. Therefore, any harm committed by the intelligent software agent that breaches the licensee's legal duties will cause the licensee to be liable.[29]

B.   Criminal Law

In order to impose criminal liability upon a person, two main elements must exist. The first is the factual element, i.e., criminal conduct (actus reus), while the other is the mental element, i.e., knowledge or general intent in relation to the conduct element (mens rea). If one of them is missing, no criminal liability can be imposed.[30]

The actus reus requirement is expressed mainly by acts or omissions. Sometimes, other factual elements are required in addition to conduct, such as the specific results of that conduct and the specific circumstances underlying that conduct.[31]

The mens rea requirement has various levels of mental elements. The highest level is expressed by knowledge, while sometimes it is accompanied by a requirement of intent or specific intention. Lower levels are expressed by negligence (a reasonable person should have known), or by strict liability offenses.[32]

Gabriel Hallevy has proposed three possible models under which AI entities can fulfill the two requirements of criminal liability: (i) the Perpetration-by-Another liability model; (ii) the Natural-Probable-Consequence liability model; and (iii) the Direct liability model.

      i.         The Perpetration-by-Another Liability Model: AI Robots as Innocent Agents

This model does not consider the AI robot as possessing any human attributes. Instead, the model theorizes that AI entities are akin to mentally limited persons, such as children, and therefore do not have the criminal state of mind to commit an offense.[33] The AI robot is viewed as an intermediary that is used as an instrument, while the party orchestrating the offense is the real perpetrator (hence the name, perpetration-by-another).[34] The person controlling the AI, or the perpetrator, is regarded as a principal in the first degree and is held accountable for the conduct of the innocent agent (the AI).[35] The perpetrator's liability is determined on the basis of that conduct and his own mental state.[36] The AI robot is an innocent agent.[37]

This model would likely be implemented in scenarios where programmers have programmed an AI to commit an offense, or where a person controlling the AI has commanded it to commit an offense.[38] This model would not be suitable when the AI robot decides to commit an offense based on its own accumulated experience or knowledge.

    ii.         The Natural-Probable-Consequence Liability Model: Foreseeable Offenses Committed by AI Robots

This model of criminal liability assumes deep involvement of the programmers or users in the AI robot's daily activities, but without any intention of committing an offense via the AI robot.[39] For instance, one scenario would be when an AI robot commits an offense during the execution of its daily tasks. This model is based upon the ability of the programmers or users to foresee the potential commission of offenses; a person might be held accountable for an offense if that offense is a natural and probable consequence of that person's conduct.[40]

Natural-probable-consequence liability seems to be legally suitable for situations where an AI robot committed an offense, but the programmer or user had no knowledge of it, had not intended it, and had not participated in it.[41] The natural-probable-consequence liability model only requires the programmer or user to be in a mental state of negligence, not more.[42] Programmers or users are not required to know about any forthcoming commission of an offense as a result of their activity, but are required to know that such an offense is a natural, probable consequence of their actions.[43]

Liability may be predicated on negligence and would be appropriate in a situation where a reasonable programmer or user should have foreseen the offense and prevented it from being committed by the AI robot.[44]

  iii.         The Direct Liability Model: AI Robots as Direct Subject of Criminal Liability

The third model does not assume any dependence of the AI robot on a specific programmer or user; rather, it focuses on the AI robot itself.[45] This model essentially treats AI robots like human actors, and accordingly, if an AI robot fulfills the factual element (actus reus) and mental element (mens rea), it will be held criminally liable on its own.[46] The premise is that if an AI robot is capable of fulfilling the requirements of both the factual element and the mental element, and in fact actually fulfills them, there is presumptively nothing to prevent criminal liability from being imposed on that AI robot.[47]

The criminal liability of an AI robot does not replace the criminal liability of the programmers or the users, if criminal liability is imposed on the programmers and users by any other legal path.[48] Criminal liability is not to be divided, but rather, added; the criminal liability of the AI robot is imposed in addition to the criminal liability of the human programmer or user.[49]

Not only may positive factual and mental elements be attributed to AI robots; all negative fault elements should be attributable to them as well. Most of these elements are expressed by the general defenses in criminal law (e.g., self-defense, necessity, duress, or intoxication).[50]

C.   Tort Law

The most obvious theory of tort liability that seems applicable to injuries caused by artificially intelligent entities is products liability.[51] Products liability is the area of law in which manufacturers, distributors, suppliers, retailers, and others who make products available to the public are held responsible for the injuries those products cause.[52] Artificially intelligent entities will presumably be manufactured by a company, and accordingly the company may be held liable when an AI goes awry.

A manufacturer may be held liable under a negligence cause of action when an AI causes an injury that was reasonably foreseeable to the manufacturer. The typical prima facie negligence claim requires an injured plaintiff to show (i) that the manufacturer owed a duty to the plaintiff, (ii) that the manufacturer breached that duty, (iii) that the breach was the cause in fact of the plaintiff's injury (actual cause), (iv) that the breach proximately caused the plaintiff's injury, and (v) that the plaintiff suffered actual quantifiable injury (damages).

Alternatively, a manufacturer may be strictly liable for injuries caused by its product. Strict liability does not require a showing of negligence; a manufacturer may be liable even if it exercised reasonable care. The focus of strict liability will therefore primarily be on whether a defect in the manufacturer's product was a cause of the plaintiff's injury.

Injuries caused by an AI raise unique issues under both theories of products liability. Specifically, we may not be able to determine if the harm was caused by human or natural agencies; rather, the congruence of human, natural, and technical agencies caused the ultimate harm.[53] Which one do we pick out as the legally responsible cause? Or should we blame all the causal vectors?


 

VI.         Endnotes



[1] Curtis E.A. Karnow, Liability for Distributed Artificial Intelligences, 11 Berkeley Tech. L.J. 147, 173 (1996).

[2] Benjamin Soskis, Man and the Machines: It's Time to Start Thinking about How We Might Grant Legal Rights to Computers, 36 Legal Aff. 37, 37 (2005).

[3] Id. at 38.

[4] Id. at 39.

[5] Id. at 38.

[6] Id.

[7] Id.

[8] Gabriel Hallevy, I, Robot – I, Criminal – When Science Fiction Becomes Reality: Legal Liability of AI Robots Committing Criminal Offenses, 22 Syracuse Sci. & Tech. L. Rep. 1, 1 (2010).

[9] Id. at 2.

[10] Id. at 9.

[11] Id. at 24.

[12] Id. at 26.

[13] Id. at 4-5; see also Roger C. Schank, What is an AI, Anyway?, The Foundations of Artificial Intelligence 3 (Derek Partridge & Yorick Wilks eds., 2006).

[14] Suzanne Smed, Intelligent Software and Agency Law, 14 Santa Clara Computer & High Tech. L.J. 503, 503 (1998).

[15] Id.

[16] Steven J. Frank, Tort Adjudication and the Emergence of Artificial Intelligence Software, Suffolk U. L. Rev. 623, 625 (1987).

[17] Soskis, supra note 2, at 41.

[18] Id. at 39.

[19] Id. at 40.

[20] Id. at 41.

[21] Smed, supra note 14, at 504.

[22] Id. at 505.

[23] Id. at 504.

[24] Id. at 505; see also Restatement (Second) of Agency § 1 (1958).

[25] Smed, supra note 14, at 505; see also Restatement (Second) of Agency § 1, cmt. b (1958).

[26] Smed, supra note 14, at 505.

[27] See McEvans v. Citibank, 408 N.Y.S.2d 870 (N.Y. County Civ. Ct. 1978) (bank responsible for money lost by automated teller machine); see also State Farm Mutual Automobile Ins. Co. v. Bockhorst, 453 F.2d 533, 535-536 (10th Cir. 1972) (insurance company was forced to pay a claim that occurred during a lapse period of the customer's policy due to computer error); see also Allen v. Beneficial Fin. Co., 531 F.2d 797 (7th Cir. 1976) (bank did not comply with truth in lending regulations because its computer-generated explanation regarding loan terms was not clear enough for an ordinary borrower to understand).

[28] Smed, supra note 14, at 506.

[29] Id. at 506-507.

[30] Hallevy, supra note 8, at 7.

[31] Id. at 7.

[32] Id. at 7-8.

[33] Id. at 9-10.

[34] Id. at 10.

[35] Id. at 10.

[36] Hallevy, supra note 8, at 10.

[37] Id. at 11.

[38] Id. at 11.

[39] Id. at 13.

[40] Id. at 14.

[41] Id. at 14.

[42] Hallevy, supra note 8, at 14.

[43] Id. at 14-15.

[44] Id. at 15-16.

[45] Id. at 18.

[46] Id. at 19.

[47] Id. at 19.

[48] Hallevy, supra note 8, at 25.

[49] Id.

[50] Id. at 26.

[51] See Restatement (Third) of Torts: Products Liability § 1 et seq. (2011).

[52] Products Liability, Wikipedia (April 8, 2012), http://en.wikipedia.org/wiki/Product_liability.

[53] Karnow, supra note 1, at 175-176.