This research assumes that eventually a form of artificial intelligence (AI) will exist that is on par with human intelligence. Further, we should assume that:
i. We will employ intelligent agents specifically for their creative intelligence and corresponding judgment;
ii. These agents will make decisions; and
iii. Some of the resultant decisions will be “pathological”; that is, not only unpredictable and surprising, which may be desirable, but also having unintended and destructive consequences both for networks and for the substantial portions of our infrastructure connected to those networks.
This research contemplates the anticipated legal issues that will arise if AI does reach such a level, and then attempts to develop theories of legal liability for offenses committed by an AI.
• If an entity is aware of life enough to assert its rights by protesting its dissolution, is it entitled to the protection of the law?
• Should a self-conscious computer facing the prospect of imminent unplugging have standing to bring a claim of battery?
• If we agree that a machine could potentially be a candidate for rights, we still must answer: Which machines and which rights? What would a computer have to do to deserve legal or moral personhood?
• With complex computer systems consisting of a combination of overlapping programs created by different coders, it is often difficult to know who should bear moral blame or legal liability for a computer action that produces an injury.
• Computers often play major roles in writing their own software; what if one created a virus and sent it around the world?
• Computers now help operate on us and help us handle our investments; should we hold them as accountable as we do our surgeons and financial analysts when they screw up?
• Can society impose criminal liability upon robots? If so, how do you punish an AI robot?
• Does the growing intelligence of AI robots subject them to legal social control, just as any other legal entity?
• How can AI entities fulfill the two requirements of criminal liability (i.e., actus reus and mens rea)?
• If an AI entity can be held criminally liable for an offense, can it raise a defense of insanity in relation to a malfunctioning AI algorithm, when its analytical capabilities have become corrupted as a result of that malfunction? May it assert a defense of being under the influence of an intoxicating substance (similar to humans being under the influence of alcohol or drugs) if its operating system is infected by an electronic virus?
There are five attributes that one would expect an intelligent entity to have:
1. Communication.
• The easier it is to communicate with an entity, the more intelligent the entity seems.
2. Mental knowledge.
• An intelligent entity is expected to have some knowledge about itself.
3. External knowledge.
• An intelligent entity is expected to know about the outside world, to learn about it, and to utilize that information.
4. Goal-driven behavior.
• An intelligent entity is expected to take action in order to achieve its goals.
5. Creativity.
• An intelligent entity is expected to have some degree of creativity. In this context, creativity means the ability to take alternate action when the initial action fails.
Software agents are rules-based software products that may aid with internet searches, filter incoming electronic mail, find the appropriate area of a help program in online documentation, watch for news on topics you have specified, and suggest changes to your stock portfolio.
Advances in artificial intelligence are producing “learning algorithms” that will soon yield software agent products based on neural networks. These products, the “second generation” of intelligent software agents, will be able to learn from their experiences and adapt their behavior accordingly. Software programs with this ability to learn will, consequently, be capable of decision-making, resulting in software that may take actions that neither the licensor nor the licensee anticipated.
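The distinction drawn above can be illustrated with a minimal sketch. This is purely hypothetical code for illustration; the function names and the naive keyword-counting "learning" rule are assumptions of this sketch, not any actual product. The point is structural: the first agent's behavior is fixed by its author's rules, while the second agent's behavior depends on the examples it happens to have seen, which is why its actions may surprise both licensor and licensee.

```python
# Hypothetical sketch: first- vs. second-generation software agents.

def rules_based_filter(subject: str) -> str:
    """First generation: behavior fixed in advance by author-supplied rules."""
    if "winner" in subject.lower() or "free" in subject.lower():
        return "spam"
    return "inbox"

class LearningFilter:
    """Second generation: behavior adapts to accumulated experience,
    so its decisions are not fully predictable from the code alone."""

    def __init__(self) -> None:
        self.spam_counts: dict[str, int] = {}
        self.ham_counts: dict[str, int] = {}

    def learn(self, subject: str, label: str) -> None:
        # Record word frequencies from each labeled example.
        counts = self.spam_counts if label == "spam" else self.ham_counts
        for word in subject.lower().split():
            counts[word] = counts.get(word, 0) + 1

    def classify(self, subject: str) -> str:
        # Decision depends entirely on prior experience, not fixed rules.
        words = subject.lower().split()
        spam_score = sum(self.spam_counts.get(w, 0) for w in words)
        ham_score = sum(self.ham_counts.get(w, 0) for w in words)
        return "spam" if spam_score > ham_score else "inbox"
```

Two licensees running identical copies of `LearningFilter` on different mail streams will end up with agents that behave differently, which is the legal crux: the licensor shipped one program, but each licensee's agent acts on experience the licensor never saw.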
The objectives of AI scientists may be grouped into two broad categories: the development of computer-based models of intelligent behavior, and the pursuit of machines capable of solving problems normally thought to require human intelligence.
Through empathy or arrogance, or perhaps through an instinct for ethical consistency, we tend to seek rights for things that appear to be like us and to deny rights to things that don’t.
The list of threshold characteristics proposed for a computer to have legal or moral personhood is extensive: the ability to experience pain or suffering, to have intentions or memories, and to possess moral agency or self-awareness. None of these characteristics is well-defined, though, and this is especially the case with the most oft-cited of the lot: consciousness.
It is most likely that a machine that has the ability to interact with humans in the world will be the first candidate for rights.
One possibility would be to treat A.I. machines as valuable cultural artifacts, to accord them landmark status with stipulations about their preservation and disassembly.
Intelligent software agents will be capable of causing harm. Unlike earlier software agents, they will be capable of finding their own sources of information and making commitments – possibly unauthorized commitments.
Where software is capable of unplanned – rather than predictable – behavior, the law will therefore need to be applied without factual precedent to these situations.
Agency law provides a suitable framework in which to find a solution for harms committed by the next generation of intelligent software; an agency relationship is formed when the software licensee installs and then executes the software program. Accordingly, intelligent software agents should be regulated under agency law.
Agency law provides an adequate framework because, in general, a software licensee will be activating software for some purpose. The intelligent software agent will then use its learning, mobility and autonomous properties to accomplish specific tasks for the licensee. Thus, we see the software agent in the legal role of the “agent,” and the software licensee in the legal role of the “principal.” This relationship of agent/principal has been formed whether or not the parties themselves intended to create an agency or even think of themselves as agent and principal.
Once intelligent software agents are viewed as legal agents within an agency relationship, it follows that liability can be attributed to the actions of the software agents, binding the software licensee (principal) to legal duties.
It appears that the courts are willing to consider machines as participants in ordinary consumer transactions.
By extending this previous court treatment of transactions conducted by autonomous machines, or by using the liability theories available under contract (where the actions of an agent bind the principal to third parties) or tort (where the principal may be vicariously liable for the actions of the agent), intelligent software agents can be treated as “legal agents.”
Once the agency relationship is formed and the intelligent software agent is accorded this legal status, certain legal duties vest in the licensee as principal. Any harm committed by the intelligent software agent that breaches those duties will therefore render the licensee liable.
In order to impose criminal liability upon a person, two main elements must exist. The first is the factual element, i.e., criminal conduct (actus reus), while the other is the mental element, i.e., knowledge or general intent in relation to the conduct element (mens rea). If one of them is missing, no criminal liability can be imposed.
The actus reus requirement is expressed mainly by acts or omissions. Sometimes, other factual elements are required in addition to conduct, such as the specific results of that conduct and the specific circumstances underlying that conduct.
The mens rea requirement has various levels of mental elements. The highest level is expressed by knowledge, while sometimes it is accompanied by a requirement of intent or specific intention. Lower levels are expressed by negligence (a reasonable person should have known), or by strict liability offenses.
Gabriel Hallevy has proposed that AI entities can fulfill the two requirements of criminal liability under three possible models of criminal liability: (i) the Perpetration-by-Another liability model; (ii) the Natural-Probable-Consequence liability model; and (iii) the Direct liability model.
This model does not consider the AI robot as possessing any human attributes. Instead, the model theorizes that AI entities are akin to mentally limited persons, such as children, and therefore do not have the criminal state of mind to commit an offense. The AI robot is viewed as an intermediary that is used as an instrument, while the party orchestrating the offense is the real perpetrator (hence the name, perpetration-by-another). The person controlling the AI, or the perpetrator, is regarded as a principal in the first degree and is held accountable for the conduct of the innocent agent (the AI). The perpetrator’s liability is determined on the basis of that conduct and his own mental state. The AI robot is an innocent agent.
This model would likely be implemented in scenarios where programmers have programmed an AI to commit an offense, or where a person controlling the AI has commanded it to commit an offense. This model would not be suitable when the AI robot decides to commit an offense based on its own accumulated experience or knowledge.
This model of criminal liability assumes deep involvement of the programmers or users in the AI robot’s daily activities, but without any intention of committing an offense via the AI robot. For instance, one scenario would be when an AI robot commits an offense during the execution of its daily tasks. This model is based upon the ability of the programmers or users to foresee the potential commission of offenses; a person might be held accountable for an offense if that offense is a natural and probable consequence of that person’s conduct.
Natural-probable-consequence liability seems to be legally suitable for situations where an AI robot committed an offense, but the programmer or user had no knowledge of it, had not intended it and had not participated in it.  The natural-probable-consequence liability model only requires the programmer or user to be in a mental state of negligence, not more. Programmers or users are not required to know about any forthcoming commission of an offense as a result of their activity, but are required to know that such an offense is a natural, probable consequence of their actions.
Liability may be predicated on negligence and would be appropriate in a situation where a reasonable programmer or user should have foreseen the offense and prevented it from being committed by the AI robot.
The third model does not assume any dependence of the AI robot on a specific programmer or user. The third model focuses on the AI robot itself. This model essentially treats AI robots like human actors, and accordingly, if an AI robot fulfills the factual element (actus reus) and mental element (mens rea), it will be held criminally liable on its own. The premise is that if an AI robot is capable of fulfilling the requirements of both the factual element and the mental element, and in fact, it actually fulfills them, there is presumptively nothing to prevent criminal liability from being imposed on that AI robot.
The criminal liability of an AI robot does not replace the criminal liability of the programmers or the users, if criminal liability is imposed on the programmers and users by any other legal path. Criminal liability is not to be divided, but rather, added; the criminal liability of the AI robot is imposed in addition to the criminal liability of the human programmer or user.
Not only may the positive factual and mental elements be attributed to AI robots; all of the negative fault elements should be attributable to them as well. Most of these elements are expressed by the general defenses in criminal law (e.g., self-defense, necessity, duress, or intoxication).
The most obvious theory of tort liability that seems applicable to injuries caused by artificially intelligent entities is products liability. Products liability is the area of law in which manufacturers, distributors, suppliers, retailers, and others who make products available to the public are held responsible for the injuries those products cause. Artificially intelligent entities will presumably be manufactured by a company, and accordingly the company may be held liable when an AI goes awry.
A manufacturer may be held liable under a negligence cause of action when an AI causes an injury that was reasonably foreseeable to the manufacturer. The typical prima facie negligence claim requires that an injured plaintiff show (i) that the manufacturer owed a duty to the plaintiff, (ii) that the manufacturer breached that duty, (iii) that the breach was the cause in fact of the plaintiff's injury (actual cause), (iv) that the breach proximately caused the plaintiff's injury, and (v) that the plaintiff suffered actual quantifiable injury (damages).
Alternatively, a manufacturer may be strictly liable for injuries caused by its product. Strict liability does not require a showing of negligence, and a manufacturer may therefore be liable even if it exercised reasonable care. The focus of strict liability will thus primarily be on whether a defect in the manufacturer’s product caused the plaintiff’s injury.
Injuries caused by an AI raise unique issues under both theories of products liability. Specifically, we may not be able to determine if the harm was caused by human or natural agencies; rather, the congruence of human, natural and technical agencies caused the ultimate harm. Which one do we pick out as the legally responsible cause? Or should we blame all the causal vectors?
 Curtis E.A. Karnow, Liability for Distributed Artificial Intelligences, 11 Berkeley Tech. L.J. 147, 173 (1996).
 Benjamin Soskis, Man and the Machines: It’s Time to Start Thinking about How We Might Grant Legal Rights to Computers, 36 Legal Aff. 37, 37 (2005).
 Id. at 38.
 Id. at 39.
 Id. at 38.
 Gabriel Hallevy, I, Robot – I, Criminal – When Science Fiction Becomes Reality: Legal Liability of AI Robots Committing Criminal Offenses, 22 Syracuse Sci. & Tech. L. Rep. 1, 1 (2010).
 Id. at 2.
 Id. at 9.
 Id. at 24.
 Id. at 26.
 Id. at 4-5; see also Roger C. Schank, What is an AI, Anyway?, The Foundations of Artificial Intelligence 3 (Derek Partridge & Yorick Wilks eds., 2006).
 Suzanne Smed, Intelligent Software and Agency Law, 14 Santa Clara Computer & High Tech. L.J. 503, 503 (1998).
 Steven J. Frank, Tort Adjudication and the Emergence of Artificial Intelligence Software, Suffolk U. L. Rev. 623, 625 (1987).
 Soskis, supra note 2, at 41.
 Id. at 39.
 Id. at 40.
 Id. at 41.
 Smed, supra note 14, at 504.
 Id. at 505.
 Id. at 504.
 Id. at 505; see also Restatement (Second) of Agency § 1 (1958).
 Smed, supra note 14, at 505; see also Restatement (Second) of Agency § 1, cmt. b (1958).
 Smed, supra note 14, at 505.
 See McEvans v. Citibank, 408 N.Y.S.2d 870 (N.Y. County Civ. Ct. 1978) (bank responsible for money lost by automated teller machine); see also State Farm Mutual Automobile Ins. Co. v. Bockhorst, 453 F.2d 533, 535-36 (10th Cir. 1972) (insurance company was forced to pay a claim that arose during a lapse period of the customer’s policy due to computer error); see also Allen v. Beneficial Fin. Co., 531 F.2d 797 (7th Cir. 1976) (bank did not comply with truth-in-lending regulations because its computer-generated explanation of loan terms was not clear enough for an ordinary borrower to understand).
 Smed, supra note 14, at 506.
 Id. at 506-507.
 Hallevy, supra note 8, at 7.
 Id. at 7.
 Id. at 7-8.
 Id. at 9-10.
 Id. at 10.
 Id. at 10.
 Hallevy, supra note 8, at 10.
 Id. at 11.
 Id. at 11.
 Id. at 13.
 Id. at 14.
 Id. at 14.
 Hallevy, supra note 8, at 14.
 Id. at 14-15.
 Id. at 15-16.
 Id. at 18.
 Id. at 19.
 Id. at 19.
 Hallevy, supra note 8, at 25.
 Id. at 26.
 See Restatement (Third) of Torts: Products Liability § 1 et seq. (2011).
 Products Liability, Wikipedia (April 8, 2012), http://en.wikipedia.org/wiki/Product_liability.
 Karnow, supra note 1, at 175-176.