
Sunday, August 19, 2012

ROBOT ETHICS



Morals and the machine

In the classic science-fiction film “2001”, the ship's computer, HAL, faces a dilemma. His instructions require him both to fulfil the ship's mission (investigating an artefact near Jupiter) and to keep the mission's true purpose secret from the ship's crew. To resolve the contradiction, he tries to kill the crew.




As robots become more autonomous, the notion of computer-controlled machines facing ethical decisions is moving out of the realm of science fiction and into the real world. Society needs to find ways to ensure that they are better equipped to make moral judgments than HAL was.
A bestiary of robots
Military technology, unsurprisingly, is at the forefront of the march towards self-determining machines (see Technology Quarterly). Its evolution is producing an extraordinary variety of species. The Sand Flea can leap through a window or onto a roof, filming all the while. It then rolls along on wheels until it needs to jump again. RiSE, a six-legged robo-cockroach, can climb walls. LS3, a dog-like robot, trots behind a human over rough terrain, carrying up to 180kg of supplies. SUGV, a briefcase-sized robot, can identify a man in a crowd and follow him. There is a flying surveillance drone the weight of a wedding ring, and one that carries 2.7 tonnes of bombs.
Robots are spreading in the civilian world, too, from the flight deck to the operating theatre. Passenger aircraft have long been able to land themselves. Driverless trains are commonplace. Volvo's new V40 hatchback essentially drives itself in heavy traffic. It can brake when it senses an imminent collision, as can Ford's B-Max minivan. Fully self-driving vehicles are being tested around the world. Google's driverless cars have clocked up more than 250,000 miles in America, and Nevada has become the first state to regulate such trials on public roads. In Barcelona a few days ago, Volvo demonstrated a platoon of autonomous cars on a motorway.
As they become smarter and more widespread, autonomous machines are bound to end up making life-or-death decisions in unpredictable situations, thus assuming—or at least appearing to assume—moral agency. Weapons systems currently have human operators “in the loop”, but as they grow more sophisticated, it will be possible to shift to “on the loop” operation, with machines carrying out orders autonomously.
As that happens, they will be presented with ethical dilemmas. Should a drone fire on a house where a target is known to be hiding, which may also be sheltering civilians? Should a driverless car swerve to avoid pedestrians if that means hitting other vehicles or endangering its occupants? Should a robot involved in disaster recovery tell people the truth about what is happening if that risks causing a panic? Such questions have led to the emergence of the field of “machine ethics”, which aims to give machines the ability to make such choices appropriately—in other words, to tell right from wrong.
One way of dealing with these difficult questions is to avoid them altogether, by banning autonomous battlefield robots and requiring cars to have the full attention of a human driver at all times. Campaign groups such as the International Committee for Robot Arms Control have been formed in opposition to the growing use of drones. But autonomous robots could do much more good than harm. Robot soldiers would not commit rape, burn down a village in anger or become erratic decision-makers amid the stress of combat. Driverless cars are very likely to be safer than ordinary vehicles, as autopilots have made planes safer. Sebastian Thrun, a pioneer in the field, reckons driverless cars could save 1m lives a year.
Instead, society needs to develop ways of dealing with the ethics of robotics—and get going fast. In America states have been scrambling to pass laws covering driverless cars, which have been operating in a legal grey area as the technology runs ahead of legislation. It is clear that rules of the road are required in this difficult area, and not just for robots with wheels.
The best-known set of guidelines for robo-ethics is the “three laws of robotics” coined by Isaac Asimov, a science-fiction writer, in 1942. The laws require robots to protect humans, obey orders and preserve themselves, in that order. Unfortunately, the laws are of little use in the real world. Battlefield robots would be required to violate the first law. And Asimov's robot stories are fun precisely because they highlight the unexpected complications that arise when robots try to follow his apparently sensible rules. Regulating the development and use of autonomous robots will require a rather more elaborate framework. Progress is needed in three areas in particular.
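As a toy illustration (ours, not Asimov's or the article's), the strict priority ordering of the three laws can be rendered as a lexicographic comparison: no amount of obedience or self-preservation can outweigh harm to a human. The Python sketch below uses invented action attributes and scores.

    # Toy sketch: Asimov's three laws as a lexicographic ordering.
    # Law 1 (harm to humans) dominates Law 2 (disobeying orders),
    # which dominates Law 3 (damage to the robot itself).
    def asimov_key(action):
        return (action["harm_to_humans"],
                action["orders_disobeyed"],
                action["self_damage"])

    candidates = [
        {"name": "retreat", "harm_to_humans": 0, "orders_disobeyed": 1, "self_damage": 0},
        {"name": "advance", "harm_to_humans": 1, "orders_disobeyed": 0, "self_damage": 0},
    ]

    best = min(candidates, key=asimov_key)
    print(best["name"])  # "retreat": obeying an order cannot justify harming a human

Even this trivial sketch hints at the trouble the article describes: everything hangs on how an attribute like "harm_to_humans" is estimated in the first place, which is precisely where Asimov's stories find their complications.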
Three laws for the laws of robotics
First, laws are needed to determine whether the designer, the programmer, the manufacturer or the operator is at fault if an autonomous drone strike goes wrong or a driverless car has an accident. In order to allocate responsibility, autonomous systems must keep detailed logs so that they can explain the reasoning behind their decisions when necessary. This has implications for system design: it may, for instance, rule out the use of artificial neural networks, decision-making systems that learn from example rather than obeying predefined rules.
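What such a log might look like is an open question; the following is a minimal sketch in Python, in which the structure and field names are our own assumptions rather than any standard. The idea is that an autonomous system appends one record per decision, capturing the inputs, the alternatives considered and the rule that selected the action, so that responsibility can be reconstructed afterwards.

    from dataclasses import dataclass, field
    from datetime import datetime, timezone
    from typing import List

    @dataclass
    class DecisionRecord:
        timestamp: datetime
        sensor_inputs: dict           # observations the system acted on
        candidate_actions: List[str]  # alternatives that were considered
        chosen_action: str
        rule_applied: str             # the predefined rule justifying the choice

    @dataclass
    class DecisionLog:
        records: List[DecisionRecord] = field(default_factory=list)

        def record(self, sensor_inputs, candidate_actions, chosen_action, rule_applied):
            self.records.append(DecisionRecord(
                timestamp=datetime.now(timezone.utc),
                sensor_inputs=sensor_inputs,
                candidate_actions=candidate_actions,
                chosen_action=chosen_action,
                rule_applied=rule_applied,
            ))

    # Hypothetical example: a driverless car logging an emergency-braking decision.
    log = DecisionLog()
    log.record(
        sensor_inputs={"obstacle_distance_m": 4.2, "speed_kmh": 38},
        candidate_actions=["brake", "swerve_left", "continue"],
        chosen_action="brake",
        rule_applied="imminent-collision rule: brake when obstacle is within stopping distance",
    )

Note that the "rule_applied" field presupposes decisions that follow inspectable, predefined rules, which is exactly why a logging requirement of this kind could rule out opaque learning systems.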
Second, where ethical systems are embedded into robots, the judgments they make need to be ones that seem right to most people. The techniques of experimental philosophy, which studies how people respond to ethical dilemmas, should be able to help. Last, and most important, more collaboration is required between engineers, ethicists, lawyers and policymakers, all of whom would draw up very different types of rules if they were left to their own devices. Both ethicists and engineers stand to benefit from working together: ethicists may gain a greater understanding of their field by trying to teach ethics to machines, and engineers need to reassure society that they are not taking any ethical short-cuts.
Technology has driven mankind's progress, but each new advance has posed troubling new questions. Autonomous machines are no different. The sooner the questions of moral agency they raise are answered, the easier it will be for mankind to enjoy the benefits that they will undoubtedly bring.


Robot Ethics: The Ethical and Social Implications of Robotics
Robots will be all over the place in a couple of decades, not to destroy us in Terminator fashion but to clean our houses, take care of our elderly or sick, play with and teach our children, and yes, have sex with us. If you wonder about the implications of such scenarios, read this book. It contains careful reflections -- sometimes enthusiastic, sometimes cautious -- about the many psychological, ethical, legal and socio-cultural consequences of robots engineered to play a major role in war and security, research and education, healthcare and personal companionship in the foreseeable future. The book contains contributions from many of the key participants in the discussions about robot ethics that began in the twenty-first century. Their papers are significant in their own right, but they gain further value from the clear organization of the book, which presents a succinct overview of the primary strands of the field. In each of its eight parts, consisting of three chapters apiece, the reader is introduced to a specific topic and then confronted with some of the current issues, positions and problems that have arisen.
The first three chapters provide the reader with a general introduction to robotics, ethics and the various specificities of robot ethics. Together with the second section, on the design and programming of robots, they provide the necessary background for those unfamiliar with the particulars of robot ethics. Especially relevant is Colin Allen and Wendell Wallach's chapter, which nicely summarizes the main points of their seminal book Moral Machines (2008)[1]. Allen and Wallach suggest that a 'functional morality', i.e., machines with the capacity to assess and respond to moral challenges, is not only possible but required. In order to perform their complex tasks in everyday environments, robots will need a considerable degree of autonomy. They approvingly quote (on p. 56) Rosalind Picard: "The greater the freedom of a machine, the more it will need moral standards".[2] They go beyond their summary in categorizing the different critiques their book encountered and addressing them in the remainder of the chapter in a refreshingly honest and constructive way. For instance, they plead 'guilty as charged' in response to the criticism that they may have contributed to the illusion that there is a technological fix to the dangers AI poses: "We should have spent more time thinking about the contexts in which (ro)bots operate and about human responsibility for designing those contexts." (p. 65).
There is a similar constructive openness in the other two chapters, which explore the close connections between religion and morality. James Hughes draws lessons from a Buddhist framework for the project of creating morally responsible machines, but it seems fair to say that his chapter remains quite general, and much hinges on the still distant possibility of creating conscious, self-aware machine minds. In contrast, Selmer Bringsjord and Joshua Taylor become very specific and technical in their discussion of a 'divine-command computational logic', a computational natural-deduction proof theory "intended for the ethical control of a lethal robot on the basis of perceived divine commands." (p. 93).
The sentiments of Noel Sharkey that "robots will change the way that wars are fought" (p. 111), coupled with news reports from around the world of 'Predator drones attacking foreign soil,' set an ominous tone from the outset of Section 3 on military robots. In addition to surveying a number of current technologies, such as MAARS (Modular Advanced Armed Robotics System) and SWORDS (Special Weapons Observation Reconnaissance Detection System), and the push (mainly by the US military) towards fully autonomous robotic weapons, Sharkey in Chapter 7 identifies a number of ethical issues, such as the proportionality of force and how robotic weapons might fit within current ethical frameworks. One particularly striking ethical issue is whether a robot should be allowed autonomously to identify and kill (suspected) enemy combatants. For us at least an inner conflict arises. On the one hand, the idea of robots replacing soldiers could be commended from the standpoint of someone who does not want to see their fellow countrymen killed in combat. On the other hand, the idea of robots making life-and-death decisions seems extremely risky, particularly (though not only) when we consider the ethical implications of a robot making a mistake and killing a civilian.
The issue of combatant identification is further developed in Chapter 8 by Marcello Guarini and Paul Bello, who note that it is exacerbated by today's counter-insurgency brand of warfare. Gone are the days of 'total war', when enemies faced each other en masse on the battlefield in clearly defined uniforms. In today's theatres of war combatants can blend in with non-combatants, so identifying who the enemy is becomes a tricky task. As a result, soldiers are forced to make snap judgments about a person from their behavior, ascribing mental states to them (p. 131). In this type of warfare, where intuitions are key, the ultimate question is whether a robot could be as good as a human at sensing and evaluating a situation and acting on intuitions.
The problems noted by Sharkey, Guarini and Bello regarding the ethical implications of mistakes made in the theatre of war culminate naturally in the issue of responsibility, which Gert-Jan Lokhorst and Jeroen van den Hoven tackle in Chapter 9. They provide a rigorous account and overview of responsibility and consider where the line might lie between the responsibility of designers and that of the robot itself.
Section 4 covers a wide range of issues regarding the law and governance of robotics. In Chapter 10 Richard O'Meara gives us an insight into how current legal infrastructures, such as the Geneva Conventions, could be extended to robots (as a starting point) in order to create a framework for their governance that accounts for their growing sophistication and ever larger deployment in the theatre of war. In Chapter 11 Peter Asaro considers how a number of crucial legal concepts, such as responsibility, culpability, causality and intentionality, might be applied to new cases of tele-operated, semi-autonomous and fully autonomous robots. The coverage of all three levels of robot autonomy is particularly impressive. In Chapter 12 Ryan Calo surveys a number of issues churned up by robots and their implications for privacy. Calo focuses mainly on how the increased risk of hacking, as more robots enter our lives, potentially opens the door for hackers to covertly view and participate in our private lives. He then moves on to how the increasing surveillance potential of robots affects constitutional protections under the Fourth Amendment against unreasonable government intrusions into the private sphere.
Unlike Section 3, where it is easy to find a golden thread running through the chapters, the chapters in Section 4 seem a little more disjointed from one another. While all the chapters are linked by the idea of governance and regulation, the variety of legal subject matter is very broad. Moving from the governance of military robots (Chapter 10) to the extension of jurisprudential concepts to robots (Chapter 11), and then jumping to robots and privacy (Chapter 12), attempts to cover too much material, leaving no opportunity for a substantive discussion of any one area. This should not be taken as a criticism of the texts themselves; they are all well written, engaging and, in the space the authors have available, very good. But even if it was not the editors' intention to draw connections between the chapters, the lack of such connections robs the section of 'oomph', making it seem a little watered down.
Emotional and sexual relationships between humans and robots are the topic of Section 5. Matthias Scheutz clearly identifies the danger that robots specifically designed to elicit human emotions and feelings could lead to emotional dependency or even harm. Several experiments are discussed that show that humans are affected by a robot's presence in a way "that is usually only caused by the presence of another human." (p. 210). In the case of human-robot interaction, however, the emotional bonds are unidirectional and could be exploited by, e.g., companies that make their robots "convince the owner to purchase products the company wishes to promote." (p. 216). David Levy looks at the issue of future robot prostitutes. After discussing the reasons why (especially) men pay (mostly) women for sex, Levy considers five aspects of the ethics of robot prostitution. Unfortunately these aspects receive a rather cursory treatment. For instance, he compares sexbots to vibrators and argues from the widespread acceptance of the latter that objecting to the former would be 'anomalous' (p. 227).
In doing so, however, he seems to ignore the unidirectional emotional bonds discussed in the preceding chapter by Scheutz. What makes sexbots genuinely different is their ability to tap into our social interaction capacities, sensitivities and vulnerabilities. The importance of this also comes to the fore in Levy's discussion of the ethics of using robot prostitutes vis-à-vis one's partner. He speaks of "the knowledge that what is taking place is nothing 'worse' than a form of masturbation" (p. 228), thereby again missing the fact that these sexbots will have particular looks and behavioral styles that may have emotional consequences for both the user and his or her partner. One would expect a discussion of such delicate issues to focus on the potential differences between robotic and standard sex toys (or at least to argue that there are none), rather than to rest on the assumption that they will be similar in most relevant respects.
Blay Whitby directly addresses Levy when he considers how social isolation might drive people to robots for love and affection. Whitby says that "peaceful, even loving, interaction among humans is a moral good in itself", and that "we should distrust the motives of those who wish to introduce technology in a way that tends to substitute for interaction between humans." (p. 238). He therefore suggests that robot lovers and caregivers are political topics rather than merely technological ones. Whatever one may think of the particular positions and arguments presented in this section, the discussion itself, though possibly distasteful to some, will remain with us for a long time to come.
Section 6 brings us back to, as its introduction states, the "more serious interaction" (p. 249) between robots and humans, in the form of companionship and medical care. Jason Borenstein and Yvette Pearson examine whether robot caregivers will lead to a reduction in human contact for members of society who tend to be marginalized as a result of their impairments. Specifically, they analyze robot care and robot-assisted care from the perspective of human flourishing and Mark Coeckelbergh's differentiation between shallow (routine), deep (reciprocity of feelings) and good (respecting human dignity) care.[3] They express concern about whether human beings will still be meaningfully in the loop as robot caregivers become more pervasive (p. 262).
The care of the vulnerable -- young children and the elderly -- is the main topic of the chapter by Noel and Amanda Sharkey. Robot supervision could lead to a loss of privacy and liberty. For instance, in the case of a young child playing, the problem lies in trusting the robot's capacity to determine what constitutes a dangerous activity (p. 272). How do we keep robot care from becoming overly restrictive? Another issue is that robot care might come to replace human contact. Studies have been done with robot pets, such as Paro, that respond interactively. Although positive effects have been reported, the authors rightly warn, "These outcomes need to be interpreted with caution, as they depend on the alternatives on offer." (p. 277).
To probe our intuitions concerning robot servants, Steve Petersen suggests considering a 'Person-o-Matic' machine, not unlike the food replicator in Star Trek, that can make an artificial person to just about any specification, from plastics, metals or organic matter, with potentially any kind of programmable behavior. What would we find allowable or unacceptable about creating 'artificial servants' this way, or about the kinds of servants that could be created? Petersen considers several possibilities and concludes, "Sometimes I can't myself shake the feeling that there is something ethically fishy here. I just do not know if this is irrational intuition . . . or the seeds of a better objection." (p. 295).
The introduction to Section 7, 'Rights and Ethics', is engaging and well framed, asking the reader the provocative question of whether we could one day see a robotic 'emancipation proclamation.' In Chapter 19 Rob Sparrow considers whether a robot could be a person, which he believes would guarantee it moral consideration. Sparrow notes that our conception of personhood has been anthropomorphized to the point that being human has become the condition for being a person. He challenges this view and attempts to demonstrate, through a test he calls the 'Turing Triage Test', how a robot could be a person. In Chapter 20 Kevin Warwick provides a fascinating thought experiment built on research in the field of neuromorphics: he asks us to consider whether a robot with a human brain could deserve personhood. We felt that Warwick's refusal to take the typical physicalist-functionalist approach to the psychological capabilities required for personhood made the article a refreshing read, distinguishing it from the abundance of articles that dogmatically restate the physicalist-functionalist claim that such capabilities consist in functional neural activity rather than being tied to a specific biological state.[4]
To finish off Section 7, Anthony Beavers (Chapter 21) turns a metaethical lens on the field and considers the implications that robotic (non-biological) technologies have for an ethics derived from biological agents, and specifically the strain that robots place on these biologically derived ethical concepts. What we found enjoyable about this section is how forward-looking it is. No reasonable person would currently argue that any robot deserves rights or should be considered a person, but this section takes a futuristic approach to these issues, teasing us with questions of the 'what if?' variety. It pushes the envelope of robot ethics by going beyond the 'state of the art' of today's issues to ask how we might reach the point of a rights-bearing robot and what the conditions would be for that robot to hold rights.
The Epilogue (Section 8), by Gianmarco Veruggio and Keith Abney, is an excellent overview of the book. They not only condense many of the hurdles facing robot ethics that are raised throughout the book, but also come close to setting out a research agenda for some of the key areas and questions that need to be asked and answered within the field. One particularly interesting question they raise, which many writers allude to but none really seems to have properly solved, is 'when does a machine become a moral agent?' (p. 353). No one would reasonably underestimate the difficulty of finding the point at which a machine becomes a moral agent, but it does seem clear that once an answer is given, many of the surrounding ethical issues, such as moral and legal responsibility, personhood and rights, will be far easier to answer (or at least easier to justify a position on). We would no longer be asking whether a robot should have rights or whether the robot is a person: if one can argue that a robot is a moral agent, the answers to such questions will hopefully be more straightforward. On the other hand, of course, answering the question of a robot's moral agency itself requires a clear and consistent position on many if not all of the other issues mentioned. This book does us a great service in bringing together so many of the right kinds of questions. They are difficult, but if robot ethics is to meet the demands of coming developments in robot technology, we need to begin tackling tomorrow's ethical questions today.
Patrick Lin, Keith Abney, and George A. Bekey (eds.)

Reviewed by Pim Haselager, Donders Institute for Brain, Cognition, and Behaviour, Radboud University Nijmegen, and David Jablonka, University of Bristol



[1] Wallach, W., & Allen, C. (2008). Moral Machines: Teaching Robots Right from Wrong. New York: Oxford University Press.
[2] Picard, R. (1997). Affective Computing. Cambridge, MA: MIT Press, p. 19.
[3] Coeckelbergh, M. (2010). Health care, capabilities, and AI assistive technologies. Ethical Theory and Moral Practice, 13 (2), 181-190.
[4] Lewis, D. (1986). On the Plurality of Worlds. Oxford: Blackwell.
SOURCE: http://ndpr.nd.edu
