SPRINGER BRIEFS IN ETHICS
Christoph Bartneck
Christoph Lütge
Alan Wagner
Sean Welsh
An Introduction to Ethics in Robotics and AI
SpringerBriefs in Ethics
Springer Briefs in Ethics envisions a series of short publications in areas such as
business ethics, bioethics, science and engineering ethics, food and agricultural
ethics, environmental ethics, human rights and the like. The intention is to present
concise summaries of cutting-edge research and practical applications across a wide
spectrum.
Springer Briefs in Ethics are seen as complementing monographs and journal
articles with compact volumes of 50 to 125 pages, covering a wide range of content
from professional to academic. Typical topics might include:
• Timely reports on state-of-the-art analytical techniques
• A bridge between new research results, as published in journal articles, and a
contextual literature review
• A snapshot of a hot or emerging topic
• In-depth case studies or clinical examples
• Presentations of core concepts that students must understand in order to make
independent contributions
Christoph Bartneck
HIT Lab NZ
University of Canterbury
Christchurch, New Zealand

Christoph Lütge
Institute for Ethics in Artificial Intelligence
Technical University of Munich
München, Germany
This Springer imprint is published by the registered company Springer Nature Switzerland AG
The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland
Fig. 1 The logo of the EPIC project
This book was made possible through the European Project “Europe’s ICT
Innovation Partnership With Australia, Singapore & New Zealand (EPIC)” under
the European Commission grant agreement Nr 687794. The project partners in this
consortium are:
• eutema GmbH
• Intersect Australia Limited (INTERSECT)
• Royal Melbourne Institute of Technology (RMIT)
• Callaghan Innovation Research Limited (CAL)
• University of Canterbury (UOC)
• National University of Singapore (NUS)
• Institute for Infocomm Research (I2R)
From February 2–6, 2019 we gathered at the National University of Singapore.
Under the guidance of Laia Ros from Book Sprints we wrote this book in an
atmosphere of mutual respect and with great enthusiasm for our shared passion:
artificial intelligence and ethics. We have backgrounds in different disciplines and
the synthesis of our knowledge enabled us to cover the wide spectrum of topics
relevant to AI and ethics.
This book was written using the BookSprint method (http://www.booksprints.net).
Contents
11 Military Uses of AI
11.1 Definitions
11.2 The Use of Autonomous Weapons Systems
11.2.1 Discrimination
11.2.2 Proportionality
11.2.3 Responsibility
11.3 Regulations Governing an AWS
11.4 Ethical Arguments for and Against AI for Military Purposes
11.4.1 Arguments in Favour
11.4.2 Arguments Against
11.5 Conclusion
12 Ethics in AI and Robotics: A Strategic Challenge
12.1 The Role of Ethics
12.2 International Cooperation
References
Index
Chapter 1
About the Book
This book provides an introduction into the ethics of robots and artificial intelligence.
The book was written with university students, policy makers, and professionals in
mind but should be accessible for most adults. The book is meant to provide balanced
and, at times, conflicting viewpoints as to the benefits and deficits of AI through the
lens of ethics. As discussed in the chapters that follow, ethical questions are often not cut and dried. Nations, communities, and individuals may have unique and important
perspectives on these topics that should be heard and considered. While the voices
that compose this book are our own, we have attempted to represent the views of the
broader AI, robotics, and ethics communities.
1.1 Authors
Christoph Lütge holds the Peter Löscher Chair of Business Ethics at Technical University of Munich (TUM). He has a background in business informatics and philosophy and has held visiting positions at Harvard, Taipei, Kyoto and Venice. He was awarded a Heisenberg Fellowship in 2007. In 2019, Lütge was
appointed director of the new TUM Institute for Ethics in Artificial Intelligence.
Among his major publications are: “The Ethics of Competition” (Elgar 2019),
“Order Ethics or Moral Surplus: What Holds a Society Together?” (Lexington
2015), and the “Handbook of the Philosophical Foundations of Business Ethics”
(Springer 2013). He has commented on political and economic affairs in Times Higher Education, Bloomberg, the Financial Times, the Frankfurter Allgemeine Zeitung, La Repubblica and numerous other media. Moreover, he has been a member of the
Ethics Commission on Automated and Connected Driving of the German Federal
Ministry of Transport and Digital Infrastructure, as well as of the European AI
Ethics initiative AI4People. He has also done consulting work for the Singapore
Economic Development Board and the Canadian Transport Commission.
Alan R. Wagner is an assistant professor of aerospace engineering at the Pennsylvania State University and a research associate with the university's ethics institute. His research interests include the development of algorithms that allow a robot to create categories of models, or stereotypes, of its interactive partners; creating robots with the capacity to recognize situations that justify the use of deception and to act deceptively; and methods for representing and reasoning about trust. Application areas for these interests range from military to healthcare. His research has won several awards, including selection for the Air Force Young Investigator Program. His research on deception has attracted significant media attention, resulting in articles in the Wall Street Journal, New Scientist, and the journal Science, and it was described as the 13th most important invention of 2010 by Time Magazine. His research has also won awards within the human-robot interaction community, such as the best paper award at RO-MAN 2007.
Sean Welsh holds a PhD in philosophy from the University of Canterbury and is
co-lead of the Law, Ethics and Society working group of the AI Forum of New
Zealand. Prior to embarking on his doctoral research in AI and robot ethics he
worked as a software engineer for various telecommunications firms. His arti-
cles have appeared in The Conversation, the Sydney Morning Herald, the World
Economic Forum, Euronews, Quillette and Jane’s Intelligence Review. He is the
author of Ethics and Security Automata, a research monograph on machine ethics.
1.2 Structure of the Book
This book begins with introductions to both artificial intelligence (AI) and ethics.
These sections are meant to provide the reader with the background knowledge nec-
essary for understanding the ethical dilemmas that arise in AI. Opportunities for
further reading are included for those interested in learning more about these topics. The sections that follow focus on how businesses manage the risks, rewards,
and ethical implications of AI technology and their own liability. Next, psychologi-
cal factors that mediate how humans and AI technologies interact and the resulting
impact on privacy are presented. The book concludes with a discussion of AI appli-
cations ranging from healthcare to warfare. These sections present the reader with
real world situations and dilemmas that will impact stakeholders around the world.
The chapter that follows introduces the reader to ethics and AI with an example that
many people can try at home.
Open Access This chapter is licensed under the terms of the Creative Commons Attribution 4.0
International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing,
adaptation, distribution and reproduction in any medium or format, as long as you give appropriate
credit to the original author(s) and the source, provide a link to the Creative Commons license and
indicate if changes were made.
The images or other third party material in this chapter are included in the chapter’s Creative
Commons license, unless indicated otherwise in a credit line to the material. If material is not
included in the chapter’s Creative Commons license and your intended use is not permitted by
statutory regulation or exceeds the permitted use, you will need to obtain permission directly from
the copyright holder.
Chapter 2
What Is AI?
Siri is not the only virtual assistant that will struggle to answer this question
(see Fig. 2.1). Toma et al. (2008) showed that almost two thirds of people provide
inaccurate information about their weight on dating profiles. Ignoring, for a moment, what motivates people to lie on their dating profiles, why is it so difficult, if not
impossible, for digital assistants to answer this question?
To better understand this challenge it is necessary to look behind the scenes and to see how this question is processed by Siri. First, the phone's microphone needs
to translate the changes in air pressure (sounds) into a digital signal that can then be
stored as data in the memory of the phone. Next, this data needs to be sent through
the internet to a powerful computer in the cloud. This computer then tries to classify
the sounds recorded into written words. Afterwards, an artificial intelligence (AI)
system needs to extract the meaning of this combination of words. Notice that it
even needs to be able to pick the right meaning for the homonym "lie": Chris does not want to lie down on his dating profile; he is wondering whether he should put inaccurate information on it.
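The pipeline just described can be sketched as a chain of functions. This is a toy illustration only: the transcription stub stands in for a real cloud speech-recognition service, and the word-sense step uses a few invented cue words rather than a genuine language model.

```python
# Toy sketch of the stages described above: capture -> transcription ->
# meaning extraction. Every function here is a stub invented for
# illustration; none of this is an actual assistant API.

def transcribe(audio: bytes) -> str:
    """Stand-in for cloud speech recognition (audio bytes -> text)."""
    return "should i lie about my weight on my dating profile"

def disambiguate_lie(text: str) -> str:
    """Guess the sense of 'lie' from nearby words (invented cue list)."""
    words = set(text.split())
    if "lie" not in words:
        return "n/a"
    recline_cues = {"down", "bed", "sofa"}
    return "recline" if recline_cues & words else "deceive"

def understand(audio: bytes) -> dict:
    text = transcribe(audio)
    return {"text": text, "lie_sense": disambiguate_lie(text)}

result = understand(b"")  # raw microphone samples would go here
print(result["lie_sense"])  # -> deceive
```

Even this toy version hints at why the step is hard: the correct sense depends entirely on context words that may or may not be present.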
While the above steps are difficult and utilise several existing AI techniques,
the next step is one of the hardest. Assuming Siri fully understands the meaning
of Chris’s question, what advice should Siri give? To give the correct advice, it
would need to know what a person’s weight means and how the term relates to their
attractiveness. Siri needs to know that the success of dating depends heavily on both
participants considering each other attractive—and that most people are motivated
to date. Furthermore, Siri needs to know that online dating participants cannot verify
the accuracy of information provided until they meet in person. Siri also needs to
know that honesty is another attribute that influences attractiveness. While deceiving
potential partners online might make Chris more attractive in the short run, it would
have a negative effect once Chris meets his date face-to-face.
But this is not all. Siri also needs to know that most people provide inaccurate
information on their online profiles and that a certain amount of dishonesty is not
likely to impact Chris’s long-term attractiveness with a partner. Siri should also be
aware that women select only a small portion of online candidates for first dates and
that making this first cut is essential for having any chance at all of convincing the
potential partners of Chris’s other endearing qualities.
There are many moral approaches that Siri could be designed to take. Siri could
take a consequentialist approach. This is the idea that the value of an action depends
on the consequences it has. The best known version of consequentialism is the clas-
sical utilitarianism of Jeremy Bentham and John Stuart Mill (Bentham 1996; Mill
1863). These philosophers would no doubt advise Siri to maximise happiness: not
just Chris's happiness but also the happiness of his prospective date. So, on the consequentialist approach, Siri might give Chris advice that would maximise his chances not only of having many first dates, but also of finding true love.
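The consequentialist idea can be rendered as a toy calculation: score each candidate piece of advice by the summed happiness of everyone affected and pick the maximum. The options and the happiness numbers below are invented purely for illustration.

```python
# Toy utilitarian choice: the value of each action is the sum of the
# expected happiness it produces for Chris and his prospective date.
# All options and numbers are invented for illustration.

candidate_advice = {
    "state weight honestly":      {"chris": 0.6, "date": 0.9},
    "understate weight slightly": {"chris": 0.7, "date": 0.6},
    "understate weight a lot":    {"chris": 0.2, "date": 0.1},
}

def utility(outcomes: dict) -> float:
    """Classical utilitarian sum over everyone affected."""
    return sum(outcomes.values())

best = max(candidate_advice, key=lambda a: utility(candidate_advice[a]))
print(best)  # -> state weight honestly
```

The hard part, of course, is not the maximisation but estimating those happiness numbers in the first place.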
2.1 Introduction to AI
The field of artificial intelligence (AI) has evolved from humble beginnings to a
field with global impact. The definition of AI and of what should and should not be
included has changed over time. Experts in the field joke that AI is everything that
computers cannot currently do. Although facetious on the surface, there is a sense
that developing intelligent computers and robots means creating something that does
not exist today. Artificial intelligence is a moving target.
Indeed, even the definition of AI itself is volatile and has changed over time.
Kaplan and Haenlein define AI as “a system’s ability to correctly interpret external
data, to learn from such data, and to use those learnings to achieve specific goals and
tasks through flexible adaptation” (Kaplan and Haenlein 2019). Poole and Mackworth
(2010) define AI as “the field that studies the synthesis and analysis of computational
agents that act intelligently.” An agent is something (or someone) that acts. An agent
is intelligent when:
1. its actions are appropriate for its circumstances and its goals
2. it is flexible to changing environments and changing goals
3. it learns from experience, and
4. it makes appropriate choices given its perceptual and computational limitations.
Russell and Norvig define AI as “the study of [intelligent] agents that receive percepts from the environment and take action. Each such agent is implemented by
a function that maps percepts to actions, and we cover different ways to represent
these functions, such as production systems, reactive agents, logical planners, neural
networks, and decision-theoretic systems” Russell and Norvig (2010, p. viii).
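Common to these definitions is the picture of an agent as a function from percepts to actions. A minimal, hypothetical reflex agent makes the idea concrete (the thresholds are invented):

```python
# A reflex agent in the Russell-Norvig sense: a function from percepts
# (here, a temperature reading) to actions. Thresholds are illustrative.

def thermostat_agent(temperature: float) -> str:
    if temperature < 18.0:
        return "heat_on"
    if temperature > 24.0:
        return "heat_off"
    return "no_op"

print(thermostat_agent(15.0))  # -> heat_on
```

Such an agent satisfies the first of Poole and Mackworth's criteria (actions appropriate to circumstances and goals) but none of the others: it cannot adapt to changing goals or learn from experience.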
Russell and Norvig also identify four schools of thought for AI. Some researchers
focus on creating machines that think like humans. Research within this school of
thought seeks to reproduce, in some manner, the processes, representations, and
results of human thinking on a machine. A second school focuses on creating
machines that act like humans. It focuses on action, what the agent or robot actually
does in the world, not its process for arriving at that action. A third school focuses on
developing machines that act rationally. Rationality is closely related to optimality.
These artificially intelligent systems are meant to always do the right thing or act
in the correct manner. Finally, the fourth school is focused on developing machines
that think rationally. The planning and/or decision-making that these machines will
do is meant to be optimal. Optimal here is naturally relative to the problem that the system is trying to solve.
We have provided three definitions. Perhaps the most basic element common to
all of them is that AI involves the study, design and building of intelligent agents that
can achieve goals. The choices an AI makes should be appropriate to its perceptual
and cognitive limitations. If an AI is flexible and can learn from experience as well
as sense, plan and act on the basis of its initial configuration, it might be said to
be more intelligent than an AI that just has a set of rules that guides a fixed set of
actions. However, there are some contexts in which you might not want the AI to
learn new rules and behaviours, during the performance of a medical procedure, for
example. Proponents of the various approaches tend to stress some of these elements
more than others. For example, developers of expert systems see AI as a repository of
expert knowledge that humans can consult, whereas developers of machine learning
systems see AI as something that might discover new knowledge. As we shall see,
each approach has strengths and weaknesses.
In 1950 Alan Turing (see Fig. 2.2) suggested that it might be possible to determine
if a machine is intelligent based on its ability to exhibit intelligent behaviour which
is indistinguishable from an intelligent human’s behaviour. Turing described a con-
versational agent that would be interviewed by a human. If the human was unable
to determine whether or not the machine was a person then the machine would be
viewed as having passed the test. Turing’s argument has been both highly influen-
tial and also very controversial. For example, Turing does not specify how long the
human would have to talk to the machine before making a decision. Still, the Turing
Test marked an important attempt to avoid ill-defined vague terms such as “thinking”
and instead define AI with respect to a testable task or activity.
John Searle later divided AI into two distinct camps. Weak AI is limited to a single,
narrowly defined task. Most modern AI systems would be classified in this category.
These systems are developed to handle a single problem, task or issue and are gen-
erally not capable of solving other problems, even related ones. In contrast to weak
AI, Searle defines strong AI in the following way: “The appropriately programmed
computer with the right inputs and outputs would thereby have a mind in exactly the
same sense human beings have minds” (Searle 1980). In strong AI, Searle chooses to
connect the achievement of AI with the representation of information in the human
mind. While most AI researchers are not concerned with creating an intelligent agent
that meets Searle’s strong AI conditions, these researchers seek to eventually create
machines for solving multiple problems which are not narrowly defined. Thus one
of the goals of AI is to create autonomous systems that achieve some level of general
intelligence. No AI system has yet achieved general intelligence.
There are many different types of AI systems. We will briefly describe just a few.
Knowledge representation is an important AI problem that tries to deal with how
information should be represented in order for a computer to organise and use this
information. In the 1960s, expert systems were introduced as knowledge systems that
can be used to answer questions or solve narrowly defined problems in a particular
domain. They often have embedded rules that capture knowledge of a human expert.
Mortgage loan advisor programs, for example, have long been used by lenders to
evaluate the creditworthiness of an applicant. Another general type of AI system is the planning system. Planning systems attempt to generate and organise a series
of actions which may be conditioned on the state of the world and unknown uncer-
tainties. The Hubble telescope, for example, utilised an AI planning system called
SPIKE.
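An expert system of the kind mentioned above can be caricatured in a few lines: expert knowledge is captured as explicit if-then rules that a human can read and audit. The rules and thresholds below are invented, not those of any real lending system.

```python
# Toy rule-based "mortgage advisor": domain knowledge encoded as
# explicit, human-readable rules. All thresholds are invented.

def assess_applicant(income: float, debt: float, years_employed: int) -> str:
    if debt > 0.5 * income:
        return "decline"          # rule: debt-to-income ratio too high
    if years_employed < 2:
        return "refer to human"   # rule: employment history too short
    return "approve"

print(assess_applicant(80_000, 20_000, 5))  # -> approve
```

The strength of this style is transparency; its weakness is that every rule must be anticipated and written down by a human expert.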
Computer vision is a subfield of AI which focuses on the challenge of converting
data from a camera into knowledge representations. Object recognition is a common
task often undertaken by computer vision researchers. Machine learning focuses
on developing algorithms that allow a computer to use experience to improve its
performance on some well-defined task. Machine learning is described in greater
detail in the sections below.
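The contrast with rule-based systems can be seen in a miniature learner: a 1-nearest-neighbour classifier whose behaviour is determined entirely by stored experience, so adding an example changes its answers without any rule being rewritten. The data points are invented.

```python
# Machine learning in miniature: a 1-nearest-neighbour classifier.
# Its "knowledge" is just stored experience; no rules are hand-written.

def nearest_label(examples: list, x: float) -> str:
    """Return the label of the stored example closest to x."""
    return min(examples, key=lambda e: abs(e[0] - x))[1]

experience = [(1.0, "small"), (10.0, "large")]
print(nearest_label(experience, 4.0))  # -> small

experience.append((5.0, "large"))      # new experience...
print(nearest_label(experience, 4.0))  # -> large  (...changes the answer)
```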
AI currently works best in constrained environments, but has trouble with open
worlds, poorly defined problems, and abstractions. Constrained environments include
simulated environments and environments in which prior data accurately reflects
future challenges. The real world, however, is open in the sense that new challenges
arise constantly. Humans use solutions to prior related problems to solve new prob-
lems. AI systems have limited ability to reason analogically from one situation to
another and thus tend to have to learn new solutions even for closely related prob-
lems. In general, they lack the ability to reason abstractly about problems and to use
common sense to generate solutions to poorly defined problems.
Reinforcement learning uses feedback in the form of a reinforcement function to label states of the world as more or less desirable with respect to some goal. Consider, for example, a robot attempting to
move from one location to another. If the robot’s sensors provide feedback telling
it its distance from a goal location, then the reinforcement function is simply a
reflection of the sensor’s readings. As the robot moves through the world it arrives
at different locations which can be described as states of the world. Some world
states are more rewarding than others. Being close to the goal location is more
desirable than being further away or behind an obstacle. Reinforcement learning learns a policy: a mapping from states of the world to the actions expected to yield the most reward. Hence, the policy tells the system how to act in order to achieve the reward.
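The navigation example can be made concrete with a toy sketch. The goal position, the reinforcement function, and the greedy one-step "policy" below are all invented for illustration; a real reinforcement learner would estimate expected rewards from experience rather than compute them directly from a known goal.

```python
# Toy version of the example above: states are integer positions on a
# line, and the reinforcement function makes states near the goal more
# desirable. The policy greedily moves to the more rewarding neighbour.

GOAL = 5

def reward(state: int) -> float:
    """Reinforcement function: closer to the goal is more desirable."""
    return -abs(GOAL - state)

def policy(state: int) -> int:
    """Pick the neighbouring state with the higher reward."""
    return max((state - 1, state + 1), key=reward)

state = 0
while state != GOAL:
    state = policy(state)
print(state)  # -> 5
```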
2.3.1 Sense-Plan-Act
A robot’s embodiment offers some advantages in that its experiences tend to be with
real objects, but it also poses a number of challenges. Sensing in the real world is
extremely challenging. Sensors such as cameras, laser scanners, and sonar all have
limitations. Cameras, for example, suffer from colour shifts whenever the amount
of light changes. Laser scanners have difficulty perceiving transparent objects. Con-
verting sensor data into a usable representation is challenging and can depend on the
nature and limitations of the sensor. Humans use a wide array of integrated sensors
to generate perceptions. Moreover, the number of these sensors is (at least currently) much higher than the number of sensors on any robot. The vast number of sensors available to a human is advantageous in terms of reducing uncertainty in perception. Humans also use a number of different brain structures to encode information, to perform experience-based learning, and to relate this learning to other knowledge
and experiences. Machines typically cannot achieve this type of learning.
Planning is the process by which the robot makes use of its perceptions and
knowledge to decide what to do next. Typically, robot planning includes some type
of goal that the robot is attempting to achieve. Uncertainty about the world must be
dealt with at the planning stage. Moreover, any background or historical knowledge
that the system has can be applied at this stage.
Finally, the robot acts in the world. The robot must use knowledge about its own
embodiment and body schema to determine how to move joints and actuators in a
manner dictated by the plan. Moreover, once the robot has acted it may need to then
provide information to the sensing process in order to guide what the robot should
look for next.
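The sense-plan-act cycle described above can be written as a control loop. Each stage here is a stub with invented percept names and actions; in a real robot each would be a substantial subsystem in its own right.

```python
# Hypothetical sense-plan-act loop. Each stage is a stub; the percept
# keys and action names are invented for illustration.

def sense() -> dict:
    """Read sensors and build a usable representation of the world."""
    return {"distance_to_goal": 3.0, "obstacle_ahead": False}

def plan(percepts: dict) -> str:
    """Use percepts (and any background knowledge) to choose an action."""
    if percepts["obstacle_ahead"]:
        return "turn"
    return "forward" if percepts["distance_to_goal"] > 0.5 else "stop"

def act(action: str) -> None:
    """Drive joints and actuators according to the plan."""
    pass

action = plan(sense())
act(action)
print(action)  # -> forward
```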
It should be understood that AI agents and robots have no innate knowledge
about the world. Coming off the factory production line a robot or AI is a genuine
“blank slate” or to be more exact an unformatted drive. Babies, on the other hand,
enter the world “pre-programmed” so to speak with a variety of innate abilities
and knowledge. For example, at birth babies can recognise their mother’s voice. In
contrast, AI agents know nothing about the world that they have not been explicitly
programmed to know. Also in contrast to humans, machines have limited ability to
generate knowledge from perception. The process of generating knowledge from
information requires that the AI system creates meaningful representations of the
knowledge. As mentioned above, a representation is a way of structuring information
in order to make it meaningful. A great deal of research and debate has focused
on the value of different types of representations. Early in the development of AI,
symbolic representations predominated. A symbolic representation uses symbols,
typically words, as the underlying representation for an object in the world. For
example, the representation of the object apple would be little more than “Apple.”
Symbolic representations have the value of being understandable to humans but are
otherwise very limiting because they have no precise connection to the robot’s or
the agent’s sensors. Non-symbolic representations, on the other hand, tend not to be
easily understood, but tend to relate better to a machine’s sensors.
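The contrast can be shown in miniature. A symbolic representation is a human-readable token with no connection to sensing, while a non-symbolic one, such as a feature vector, can be compared against sensor readings. The features and values below are invented for illustration.

```python
# Symbolic vs non-symbolic representations of "apple" (invented values).

symbolic_apple = "Apple"  # readable to humans, opaque to sensors

# e.g. mean red/green/blue intensities plus size in centimetres
nonsymbolic_apple = [0.81, 0.12, 0.09, 7.2]

def looks_like(features: list, prototype: list, tol: float = 1.0) -> bool:
    """Sensor-style comparison that only the non-symbolic form supports."""
    return sum((a - b) ** 2 for a, b in zip(features, prototype)) < tol

print(looks_like([0.80, 0.15, 0.10, 7.0], nonsymbolic_apple))  # -> True
```

No comparable operation exists for the string "Apple": the symbol carries meaning for us, but nothing the robot's sensors can check.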
In reality, to develop a working system capable of achieving real goals in the real
world, a vast array of different systems, programmes and processes must be integrated
to work together. System integration is often one of the hardest parts of building a
working robotic system. System integrators must deal with the fact that different
information is being generated by different sensors at different times. The different
sensors each have unique limitations, uncertainties, and failure modes, and the actu-
ators may fail to work in the real world. For all of these reasons, creating artificially
intelligent agents and robots is extremely challenging and fraught with difficulties.
The sections above have hinted at why AI is hard. It should also be mentioned that
not all software is AI. For example, simple sorting and search algorithms are not considered intelligent. Moreover, a lot of non-AI software is smart. For example, control
algorithms and optimisation software can handle everything from airline reservation
systems to the management of nuclear power plants. But they only take well-defined
actions within strictly defined limits. In this section, we focus on some of the major
challenges that make AI so difficult. The limitations of sensors and the resulting lack
of perception have already been highlighted.
AI systems are rarely capable of generalising across learned concepts. Even when a classifier has been trained on closely related problems, its performance typically drops substantially when the data is generated from other sources or in other ways. For example, face recognition classifiers may obtain excellent results when faces are viewed straight on, but performance drops quickly as the view of the face changes to, say, a profile view. Considered another way, AI systems lack robustness when dealing
with a changing, dynamic, and unpredictable world. As mentioned, AI systems lack
common sense. Put another way, AI systems lack the enormous amount of experi-
ence and interactions with the world that constitute the knowledge that is typically
called common sense. Not having this large body of experience makes even the most
mundane task difficult for a robot to achieve. Moreover, lack of experience in the
world makes communicating with a human and understanding a human’s directions
difficult. This idea is typically described as common ground.
Although a number of software systems have claimed to have passed the Turing
test, these claims have been disputed. No AI system has yet achieved strong AI, but
some may have achieved weak AI based on their performance on a narrow, well-
defined task (like beating a grandmaster in chess or Go, or experienced players in
Poker). Even if an AI agent is agreed to have passed the Turing test, it is not clear
whether the passing of the test is a necessary and sufficient condition for intelligence.
AI has been subject to many hype cycles. Often even minor advancements have
been hailed as major breakthroughs, with predictions of soon-to-come autonomous intelligent products. These advancements should be considered with respect to the
narrowness of the problem attempted. For example, early types of autonomous cars
capable of driving thousands of miles at a time (under certain conditions) were already
being developed in the 1980s in the US and Germany. It took, however, another 30+
years for these systems to just begin to be introduced in non-research environments.
Hence, predicting the speed of progression of AI is very difficult—and in this regard,
most prophets have simply failed.
Artificial Intelligence and robotics are frequent topics in popular culture. In 1968, the
Stanley Kubrick classic “2001” featured the famous example of HAL, a spacecraft’s
intelligent control system which turns against its human passengers. The Terminator
movies (since 1984) are based on the idea that a neural network built for military
defense purposes gains self-awareness and, in order to protect itself from deactivation by its human creators, turns against them. Steven Spielberg's movie “A.I.”
(2001), based on a short story by Brian Aldiss, explores the nature of an intelligent
2.5 Science and Fiction of AI 15
robotic boy (Aldiss 2001). In the movie “I, Robot” (2004), loosely based on a book by Isaac Asimov, intelligent robots originally meant to protect humans turn into a menace. A more recent example is the TV show “Westworld” (since
2016) in which androids entertain human guests in a Western theme park. The guests
are encouraged to live out their deepest fantasies and desires.
For most people, the information provided through these shows is their first expo-
sure to robots. While these works of fiction draw a lot of attention to the field and
inspire our imagination, they also set a framework of expectations that can inhibit
the progress of the field. One common problem is that the computer systems or
robots shown often exhibit levels of intelligence equivalent or even superior to
that of humans, far beyond that of current systems. The media thereby contributes to setting
very high expectations in the audience towards AI systems. When confronted with
actual robots or AI systems, people are often disappointed and have to revise their
expectations. Another issue is the frequent repetition of the “Frankenstein Complex”
as defined by Isaac Asimov. In this trope, bands of robots or an AI system achieve
consciousness and enslave or kill (all) humans. While history is full of examples of
colonial powers exploiting indigenous populations, it does not logically follow that
an AI system will repeat these steps. A truly intelligent system will (hopefully) have
learned from humanity’s mistakes. Another common and rather paradoxical trope is
the assumption that highly intelligent AI systems desire to become human. Often the
script writers use the agent’s lack of emotions as the missing piece of the puzzle
that would make them truly human.
It is important to distinguish between science and fiction. The 2017 recommen-
dation to the European Parliament to consider the establishment of electronic per-
sonalities (Delvaux 2017) has been criticised by many as a premature reflex to the
depiction of robots in the media.1 For example, granting the robot “Sophia” Saudi
Arabian citizenship in October 2017 can in this respect be considered more as a
successful public relations stunt (Reynolds 2018) than as a contribution to the field
of AI or its ethical implications. Sophia’s dialogues are based on scripts and cannot
therefore be considered intelligent. The robot neither learns nor adapts to
unforeseen circumstances. Sophia’s presentation at the United Nations was an
unconvincing demonstration of artificial intelligence. People do anthropomorphise robots
and autonomous systems, but this does not automatically justify the granting of per-
sonhood or other forms of legal status. In the context of autonomous vehicles, it may
become practical to consider such a car a legal entity, similar to how we consider an
abstract company to be a legal person. But this choice would probably be motivated
more out of legal practicality than out of existential necessity.
1 http://www.robotics-openletter.eu/.