Artificial Intelligence and Robotics

This document provides a table of contents for a report on the ethics of artificial intelligence and robotics. The table of contents lists an abstract, research methodology, limitations, literature review, conclusion, and references. The literature review section discusses the history and development of AI, defines AI and robotics, examines their main purposes and applications, and outlines some of the major debates in AI ethics around issues of privacy, data use, surveillance, and the tradeoff between service affordability and privacy.

Uploaded by Saahil Bc

TABLE OF CONTENTS

SL NO.   CONTENTS                          PAGE NO.

1        Abstract                          1
2        Research Methodology              2
3        Limitations                       2
4        Literature and Learning Review    3-14
5        Conclusion                        15
6        References                        16
ABSTRACT

The ethics of artificial intelligence (AI) and robotics is a very young field
within applied ethics. Despite this, it has seen significant press coverage in
recent years, which supports insightful research. AI and robotics are digital
technologies that will have a profound impact on humanity in the near future.
This has raised fundamental questions about what we should do with these
systems, what the systems themselves should do, what risks they involve, and
what underlying ethical issues they raise, issues that can seem innocent yet
may have deleterious consequences. The purpose of this report is therefore to
define, understand and analyse these ethical issues.

RESEARCH METHODOLOGY

We conducted a basic* and exploratory* investigation into the ethical issues of
AI and robotics, elucidating the main aspects of this under-researched field
and its existing arguments. We used secondary information from highly
accredited sources and publications from 2013-2019, making our research more
reliable, honest and relevant in the present-day context. Collectively, we felt
that secondary data was the best option, as it was plentiful and allowed
careful condensation. A qualitative method of data analysis was used, including
observation and thematic analysis*.

LIMITATIONS

1. With more time to carefully evaluate additional research papers, we could
have developed an even more comprehensive understanding.

2. There is no established framework for the ethics of robotics and AI, so we
had to think outside the box and provide our own viewpoint.

3. We had no access to accredited research journals, except on a seven-day
free-trial basis.

*Basic research aims to develop and understand knowledge and theories.

*Exploratory research investigates an under-researched topic.

*Thematic analysis is a kind of qualitative analysis that involves reading various texts to develop themes.
LITERATURE AND LEARNING REVIEW

INTRODUCTION

Background
The field of artificial intelligence (AI) officially started in 1956, launched
by a small but now-famous summer conference at Dartmouth College in Hanover,
New Hampshire. From where we stand today, the Dartmouth conference is
memorable chiefly because the term "artificial intelligence" was coined there.
Although the term made its debut at the 1956 conference, the field of AI was
certainly in operation before then. Historically, it is worth noting that the
term "AI" was widely used between 1950 and 1975, fell into disrepute during
the "AI winter" (1975-1995), and then narrowed in scope. As a result, areas
such as "machine learning", "natural language processing" and "data science"
were often not labelled as "AI". Since 2010 the usage has broadened again, and
at times nearly all of computer science, and indeed high tech, is lumped under
"AI". It is now a label to be proud of, a booming industry with massive
venture capital.

AI & Robotics
The notion of artificial intelligence is understood broadly as any kind of
artificial computational system that shows intelligent behaviour, i.e.,
complex behaviour that is conducive to reaching goals.

AI somehow gets closer to our skin than other technologies, perhaps because
the project of AI is to create machines that have a feature central to how we
humans see ourselves, namely being feeling, thinking, intelligent beings. The
main capabilities of an artificially intelligent agent involve sensing,
modelling, planning and action, but current AI applications also include
perception, text analysis, logical reasoning, game playing, decision support
systems, data analytics and predictive analytics, as well as autonomous
vehicles and other forms of robotics.

While an AI can be purely a computer program, robots are physical machines
that move. Robots are subject to physical impact, typically through "sensors",
and they exert physical force on the world, typically through "actuators",
such as a gripper or a turning wheel. Accordingly, autonomous cars or planes
are robots, and only a small fraction of robots is "humanoid" (human-shaped),
as in the movies.

Some robots use AI, and some do not: ordinary industrial robots blindly follow
completely defined scripts with minimal sensory input and no learning or
reasoning. It is probably fair to say that while robotic systems cause more
concern among the public, AI systems are more likely to have a greater impact
on humanity. Also, AI or robotics systems designed for a narrow set of tasks
are less likely to raise novel issues than systems that are more flexible and
autonomous.
Robotics and AI can thus be seen as covering two overlapping sets of systems:
systems that are only AI, systems that are only robotics, and systems that are
both. We are interested in all three; the scope of this report is thus not only the
intersection, but the union, of both sets. The main debates are as follows:

MAIN DEBATES

Privacy 
Privacy has several well-recognised aspects, for example "the right to be let
alone”, information privacy, privacy as an aspect of personhood, control over
information about oneself, and the right to secrecy. Privacy studies have
historically focused on state surveillance by secret services but now include
surveillance by other state agents, businesses, and even individuals.
As more data is digitised and more information is shared online, data privacy
is becoming ever more important. Data privacy denotes how information should
be managed based on its perceived importance. It isn't just a
business concern; individuals have a lot at stake when it comes to the privacy of
their data. The more you are aware of it, the better you’ll be able to shield
yourself from multiple risks. In this digital age, the concept of data privacy is
mainly applied to critical personal information, also referred to as personally
identifiable information (PII) and personal health information (PHI). This
typically includes financial data, medical and health records, social security
numbers, and even basic yet sensitive information like birthdates, full names,
and addresses.
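To make the idea of PII concrete, the sketch below masks two of the fields mentioned above before a record is shared. The field names and regex patterns are illustrative assumptions for this example, not part of any standard; real systems use far more robust detection.

```python
import re

# Illustrative patterns for two kinds of PII mentioned above;
# these simple regexes are assumptions, not production-grade detectors.
PII_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),        # e.g. 123-45-6789
    "birthdate": re.compile(r"\b\d{4}-\d{2}-\d{2}\b"),  # e.g. 1990-07-01
}

def redact_pii(text: str) -> str:
    """Replace recognised PII substrings with a placeholder tag."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label.upper()} REDACTED]", text)
    return text

record = "Name: J. Doe, DOB 1990-07-01, SSN 123-45-6789"
print(redact_pii(record))
# Name: J. Doe, DOB [BIRTHDATE REDACTED], SSN [SSN REDACTED]
```

The point of the sketch is that protecting data privacy is an active processing step, not something that happens by default when data is collected.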

It's a Data-Driven Economy


User data is an extremely valuable asset in this information age. It not only
helps organisations understand their customers, but also enables them to ‘track’
customers and target them with ‘relevant’ ads. Marketing is just one of the ways
companies leverage user data to strengthen their position in the market and
increase their revenues. There are other more harmful ways. In 2018, Facebook
founder Mark Zuckerberg was called to testify before the United States
Congress following the Cambridge Analytica scandal. Questioning during the
hearings unearthed several details of a data-privacy crisis for companies like
Facebook that depend on data harvesting and manipulation.

The Service Affordability Tradeoff


Many in the tech industry are disinclined to support privacy regulations
because of their potential to hold back innovation. Mark Zuckerberg defended
his company's
advertising-based model by pointing out that it enabled its services to “be
affordable to everyone”. “Instead of charging users, we charge the advertisers”,
he added. Google’s Senior VP for Global Affairs, Kent Walker, echoed the
same sentiment by saying that ads allow them to deliver search to users of all
income levels across the globe for free. However, both executives also
acknowledged that security and privacy have to be a principal consideration, even
if it impacts profitability. It’s impossible to ignore the fact that all this personal
data can lead to interferences and intrusions with people’s private lives. This
can have a damaging and distressing effect on individuals.
Data Privacy Should be a Basic Human Right
The right to privacy safeguards an individual’s dignity by protecting their
personal information from public scrutiny. This right is typically protected
by statutory law.

Surveillance 
Digital surveillance is the monitoring of computer activity, data stored on a hard
drive, or data being transferred through computer networks. It is usually done
surreptitiously and can be carried out by anyone: governments, corporations
and even individuals. At the most basic level, surveillance is a way
of accessing data. Surveillance implies an agent who accesses personal data.
Surveillance is not a new phenomenon, yet for a large stretch of human history
the practice remained limited. Information was confined to specific locations
and went largely unshared and unrecorded. An inherent lack of manpower and
access to adequate processing tools further restricted the ability of organisations
to analyse large amounts of data — a defining characteristic of contemporary
systems of surveillance.

The Rationale for Surveillance


The primary justification for the expansion of surveillance rests on an undefined
and ambiguous concept. A fundamental belief that the attainment of “security”
is achievable if all aspects of lived experience are tracked, rationalised and
regulated drives the advancement of increasingly complex and opaque systems.

Unconscious Workers of Surveillance Systems


While the security rationale sketched above justifies the development and
implementation of increasingly complex and opaque systems of surveillance,
such systems are reinforced by a creeping sense of technological determinism. It
may be the growing unease that our technology rules us rather than the opposite
that has resulted first in our acquiescence towards, and now active participation
in, contemporary systems of surveillance.
No longer can we consider ourselves to be the passive subjects of omnipresent
and controlling systems imposed on us from above. We have become active
agents that each contribute towards increasingly sophisticated systems of
scrutiny.

Psychological manipulation is a type of social influence that aims to change the
behaviour or perception of others through indirect, deceptive, or underhanded
tactics. By advancing the interests of the manipulator, often at another's
expense, such methods could be considered exploitative and devious. Internet
manipulation refers to the co-optation of digital technology, such as social
media algorithms and automated scripts, for commercial, social or political
purposes. Internet manipulation is sometimes also used to describe selective
Internet censorship or violations of net neutrality. The ethical issues of AI in
surveillance go beyond the mere accumulation of data and direction of
attention: They include the use of information to manipulate behaviour, online
and offline, in a way that undermines autonomous rational choice. Of course,
efforts to manipulate behaviour are ancient, but they may gain a new quality
when they use AI systems.

A more specific issue is that machine learning techniques in AI rely on
training with vast amounts of data. This means there will often be a trade-off
between privacy and rights to data on the one hand and the technical quality
of the product on the other. This influences any consequentialist evaluation
of privacy-violating practices.

Opacity of AI Systems
Opacity and bias are central issues in what is now sometimes called "data
ethics" or "big data ethics". While automated decision systems have the
potential to bring more efficiency, consistency and fairness, they also open
up the possibility of new forms of discrimination which may be harder to
identify and address. The opaque nature of machine learning algorithms, and
the many ways human biases can creep in, challenge "our ability to understand
how and why a decision has been made". Bias in decision systems and data sets
is exacerbated by this opacity, so in cases where there is a desire to remove
bias, the analysis of opacity and bias go hand in hand, and a political
response has to tackle both issues together.
Many AI systems rely on machine learning techniques in neural networks that
extract patterns from a given dataset, with or without correct solutions
provided (i.e., supervised, semi-supervised or unsupervised learning). With
these techniques, the learning captures patterns in the data that appear
useful to the decisions the system makes, while the programmer does not really
know which patterns the system has used. In fact, the programs keep evolving:
as new data comes in, or new feedback is given, the patterns used by the
learning system change. This means that the outcome is not transparent to the
user or the programmers; it is opaque. The quality of the program depends
heavily on the quality of the data provided, so if the data already involved a
bias, the program will reproduce that bias.
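A minimal sketch of this last point, using made-up historical hiring data: a simple frequency-based "model" trained on biased labels reproduces that bias for new inputs. The data, the neighbourhood labels and the majority-vote model are all illustrative assumptions, not a real system.

```python
from collections import Counter, defaultdict

# Hypothetical historical decisions: (neighbourhood, hired?) pairs.
# The data itself is biased: area "B" applicants were rarely hired.
history = [("A", True)] * 80 + [("A", False)] * 20 \
        + [("B", True)] * 20 + [("B", False)] * 80

# "Training": record the outcome counts per neighbourhood.
outcomes = defaultdict(Counter)
for area, hired in history:
    outcomes[area][hired] += 1

def predict(area: str) -> bool:
    """Predict by majority vote over the historical data."""
    return outcomes[area].most_common(1)[0][0]

print(predict("A"))  # True  - the model favours area A applicants
print(predict("B"))  # False - the historical bias is codified and automated
```

Nothing in the trained model announces that it is biased; the skew is only visible if someone inspects the data or audits the predictions, which is exactly the opacity problem described above.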
There are several technical activities that aim at explaining AI. A mechanism
for elucidating and articulating the power structures, biases, and influences
that computational artifacts exercise in society is sometimes called
algorithmic accountability reporting.

Bias in Decision Systems
Analytics and decision aids such as decision support systems (DSS) are
intended to improve the quality of decisions made, for example, using
communications technologies, acquiring and processing data, assisting in
analyzing data and documents, using quantitative models to identify and solve
problems, completing decision process tasks, and guiding decision making.
Traditionally, technologically-oriented decision tools have supported only part
of an organizational or individual decision process due to the complexity and
uncertainty inherent in semi-structured and unstructured decision tasks. The
user is expected to interact with the system in some way to provide input or
data, make choices about processing, interpret results, or come to a decision. In
short, the system should help the decision maker think rationally.
Cognitive bias is closely connected with human decision-making, because people
learn and develop predictable thinking patterns. The human cognitive system is
generally prone to various kinds of cognitive bias. Confirmation bias, for
instance, leads to poor decision-making by preventing us from looking at a
situation objectively. A further form of bias is present in data itself when
it exhibits systematic error, e.g., statistical bias. Any given dataset will
only be unbiased for a single kind of issue, so the mere creation of a dataset
involves the danger that it may be used for a different kind of issue and then
turn out to be biased for that kind. Machine learning on the basis of such
data would then not only fail to recognise the bias, but would codify and
automate the historical bias. There are significant technical efforts to
detect and remove bias from AI systems, but it is fair to say that these are
in early stages.

Human-Robot Interaction (HRI)


Artificial intelligence in the midst of human beings can be very disruptive,
especially if these machines are made to look and act like us and to insinuate
themselves into our daily lives. This would change not only the way people
interact with such machines but also how people interact with each other. For
example, by designing robots that look like human beings or animals, we may
develop a liking, attraction or emotional affinity towards them; as humans, we
very easily attribute mental properties to objects that resemble ourselves. We
might just end up giving more emotional importance to AIs and robots than they
deserve. Such robots could therefore be used as instruments of deception. A
few examples of potentially problematic robots are Hanson Robotics' most
advanced human-like robot, "Sophia", and Hiroshi Ishiguro's remote-controlled
Geminoids.
Research and innovation in healthcare robotics has seen significant growth in
recent years. For example, care robots have been developed to support elderly
people living at home, robotic nurses have been developed to assist with care
tasks, surgical robots have been designed for use in hospitals, and there are
also robots given to patients for their comfort, such as the Paro robot seal.
Such innovations have led to the question of whether these robots are created
to automate tasks, essentially replacing people, or to help humans work more
effectively and efficiently. For someone to be cared for, there has to be an
intention to care, which robots lack; a system that pretends to care would
merely deceive people into believing that they are cared for.
Another key innovation in the field of robotics is sex robots. 2010 saw the
release of ROXXXY, the "love robot". This raises the question of whether it is
acceptable to abuse a sex robot. It is a machine that merely looks like us; it
does not feel anything, nor does it enjoy any rights or freedoms, so is it
ethical to treat it as a mere sex slave? People can end up forming deep
emotional attachments to objects. Companionship or friendship with an android
is quite possible, but is this not simply a way of deceiving humans, since it
is the people who would be sharing their most vulnerable selves with machines
that cannot reciprocate or care for such feelings and emotions? Worse still,
people would be paying for this deception. Another important factor is
consent: would human behaviour not be influenced by such experiences, and
would people not end up treating other people as mere objects of desire, or
even as recipients of abuse? The "Campaign Against Sex Robots" argues that
these devices are a continuation of slavery and prostitution
[Richardson 2016].

Automation and Employment
Advancement in the field of Artificial intelligence and robotics has had a huge
impact on manual labour. Intelligent systems have vastly improved the
productivity of many office jobs ranging from clerical to professional jobs.
Artificial intelligence technologies have removed several repetitive tasks,
thus helping humans to complete their work faster and more effectively. This
has allowed them to focus on and give more time to value-added activities.
However,
automation has also resulted in several job losses. This brings up several
questions: has artificial intelligence put people's jobs at risk, or has it
made those jobs better and easier to perform? Can we consider it ethical to
take away people's livelihoods in an attempt to increase productivity? And
finally, can we call automation ethical if it is going to cause automation
anxiety? We can also clearly state that automation affects people of lower-
and middle-income groups the most. It brings social injustice: the rich get
richer and the poor get poorer. Automation would lead to an unjust
distribution of wealth, and this is indeed a development that we can already
see.

Taking the farming sector as an example: over 60% of the workforce in Europe
and North America was employed in farming during the 1800s, but by 2010 only
about 5% worked in agriculture. India is another example: the country is
presently going through a job crisis, unemployment is at a 45-year high, and
since the pandemic the situation has become much worse. So, should we as a
country still head towards automation, knowing how many roles and
responsibilities would be eliminated? Can we consider it ethical to take away
potential job opportunities that would have benefited a huge number of people?

However, we can say that as long as automation is an assistant or a tool and not
a replacement, we can call it ethical.

Autonomous Systems
Autonomous systems are systems that are independent and can complete a given
task or objective all by themselves, with minimal or no human participation.
These systems are structured in such a manner that they are able to anticipate
the future while being fully aware of their present environment. The primary
objective of autonomous systems is to reduce or eliminate human labour.
Artificial intelligence, in turn, has the eventual goal of giving machines the
ability to operate on their own with the least human effort involved.

Autonomous systems can count as a part of artificial intelligence, but they
can be more complicated, since we expect machines to work like humans with no
errors. One philosophical debate concerns a strong notion of autonomy, under
which responsibility and personhood act as its basis. In this view,
responsibility points directly towards autonomy, but the relation is not the
same for personhood and autonomy: personhood relates more to the legalities
attached to an autonomous system. There is always doubt when it comes to these
systems and their adaptability, and technical concerns about them are always
present.
For further advancement, there is also "verifiable artificial intelligence",
which is developed for the safety and security of these systems. Major bodies
such as the Institute of Electrical and Electronics Engineers (IEEE) and the
British Standards Institution (BSI) have created technical standards covering,
for example, transparency and the security of data. Autonomous systems are
built to work on land, on and under water, in the air, and so on. The
following is a brief description of such systems, with two particular
examples.

1. Autonomous vehicles:
Autonomous vehicles are one example of autonomous systems. They hold the
potential of minimising the notable losses that occur due to human error while
driving. Still, issues such as the behaviour of the vehicle, its adaptability
on roads, and the risks involved are being questioned. One well-known problem
with regard to these vehicles is the "trolley problem", which concerns the
right decision to be taken at the right time by such systems; so far, only
theoretical tools have been used to investigate this problem and test its
ethical intuitions.

The general ethical problems of automated vehicles, such as speed, rules,
overtaking and maintaining distance, are handled by taking into account the
basic legal rules of driving. The vehicle is programmed in a way that ensures
"maximum utility" and the "safety of its occupants" while following the legal
driving regulations.
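As a toy illustration of the "maximum utility within legal rules" idea described above, the sketch below filters out illegal manoeuvres before maximising utility. The manoeuvre names, utility scores and legality flags are invented for this example and do not reflect any real vehicle's software.

```python
# Candidate manoeuvres with hypothetical utility scores (higher is better).
# Illegal options are filtered out before the utility comparison.
options = {
    "brake":            {"utility": 0.90, "legal": True},
    "swerve_left":      {"utility": 0.70, "legal": True},
    "cross_solid_line": {"utility": 0.95, "legal": False},  # best utility, but illegal
}

def choose_manoeuvre(options: dict) -> str:
    """Pick the highest-utility option among those that obey traffic law."""
    legal = {name: o for name, o in options.items() if o["legal"]}
    return max(legal, key=lambda name: legal[name]["utility"])

print(choose_manoeuvre(options))  # brake
```

Note how the legality constraint dominates: the highest-utility manoeuvre is rejected because it breaks a driving rule, which mirrors the design described in the paragraph above.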

2. Autonomous Weapons:

The concept of autonomous weapons is not recent; such weapons have been in use
for years. The term covers weapons used during war, remotely piloted weapons,
guided missiles and similar instruments. The major advantage of such weapons
is that they can easily identify and target the enemy during war. However,
they have not received much support, as they are directed towards human
killing and destruction. A "risk analysis" is conducted before making use of
such weapons: it indicates who is mainly exposed to risk by their use and who
benefits from them, and decisions about using the weapons are taken
accordingly.

Machine ethics
Machine ethics concerns the behaviour of machines towards humans and other
machines. It asks whether a particular action by a machine is acceptable and
safe for its users. Social, legal and ethical factors, among many others,
should be accounted for when designing the abilities of a machine. It has also
been shown that a robot designed to behave ethically can very easily be
altered to behave unethically. Criteria such as taking responsibility,
exhibiting transparency and supporting auditability also come into the picture
when we talk about machine ethics.
Artificial intelligence as an emerging trend also faces a few issues, stated
below:
 Loss of jobs: AI has the ultimate goal of eliminating human effort or
involvement and letting machines take over entire responsibilities. This
automation will inevitably result in job losses in many organisations,
leading to unemployment.
 Biased AI: although there is no argument about the functioning, speed and
ability of AI to perform tasks better than humans, we cannot ignore the fact
that these systems are created by humans. There is no evidence proving that
their creators were not judgmental or biased in some form while building
such systems; therefore, there is a chance of these systems being biased.
 Security issues: as technology grows, the security of systems and data must
grow with it. This applies to robots and other autonomous systems as well;
cyber security is always an issue when using AI machines.
 Stupidity of machines: AI machines are trained before use, and to update or
advance a system, the next level of training has to be provided. Even so,
there is always a possibility that these machines will commit mistakes.

By applying the principles of AI ethics, we can help guarantee that safety and
other ethical concerns are advanced and developed along with the systems.
Ethics and safety should be a top priority while structuring and designing
these systems, and the necessary checks should be in place to avoid misuse or
negative consequences of such AI systems.

Artificial Moral Agents


An Artificial Moral Agent (AMA) is a virtual agent (software) or physical
agent (robot) capable of engaging in moral behavior or at least of avoiding
immoral behavior. This moral behavior may be based on ethical theories such as
teleological ethics, deontology, and virtue ethics, but not necessarily. One of the
objectives in the field of artificial intelligence for some decades has been the
development of artificial agents capable of coexisting in harmony with people
and other systems. The computing research community has made efforts to
design artificial agents capable of doing tasks the way people do, tasks requiring
cognitive mechanisms such as planning, decision-making, and learning. The
application domains of such software agents are evident nowadays. Humans are
experiencing the inclusion of artificial agents in their environment as unmanned
vehicles, intelligent houses, and humanoid robots capable of caring for people.
However, there are still crucial challenges in the development of true AMAs.
Responsibility for Robots
Since robots are systems constructed with different
components, including hardware, software, and cloud services that may be
provided by different companies, and they perform tasks with a degree of
autonomy, it might be uncertain who is responsible if something goes wrong.
Any technology subject to uncertainty and with a potentially high impact on
human society is expected to be handled cautiously, and intelligent systems
surely fall into this category. Thus, preventing harm and having the burden of
proof of harmlessness is something that producers of intelligent systems are
responsible for.
Rights for Robots
As AI and robotics systems act and look increasingly like humans, it might be
necessary to grant them extensive rights, at least in order to avoid
"violating the rights of humans due to misidentification". "Robot rights" is
the concept that
people should have moral obligations towards their machines, similar to human
rights or animal rights. It has been suggested that robot rights, such as a right to
exist and perform its own mission, could be linked to robot duty to serve
human, by analogy with linking human rights to human duties. These could
include the right to life and liberty, freedom of thought and expression and
equality.

Singularity
The technological singularity, hyperintelligence, or superhuman intelligence
refers to a hypothetical agent that possesses intelligence far surpassing that
of the brightest and most gifted human minds. "Superintelligence" may also
refer to the form or degree of intelligence possessed by such an agent.
As computers increase in power, it becomes possible for people to build a
machine that is more intelligent than humanity; this superhuman intelligence
possesses greater problem-solving and inventive skills than current humans are
capable of. This superintelligent machine then designs an even more capable
machine, or re-writes its own software to become even more intelligent; this
(even more capable) machine then goes on to design a machine of yet greater
capability, and so on.
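The runaway loop described above can be caricatured with a toy model. The numbers, and the assumption of a fixed capability gain per design generation, are purely illustrative and carry no predictive weight.

```python
def generations_to_surpass(human_level: float,
                           start: float,
                           gain: float) -> int:
    """Count design generations until capability exceeds human_level,
    assuming each machine builds a successor `gain` times as capable."""
    capability, generations = start, 0
    while capability <= human_level:
        capability *= gain
        generations += 1
    return generations

# Starting at 1.0 and doubling each generation, a level of 100 is
# surpassed after only 7 generations (2**7 = 128 > 100).
print(generations_to_surpass(100.0, 1.0, 2.0))  # 7
```

The toy model's only purpose is to show why multiplicative self-improvement is discussed as explosive: under compounding growth, even a large capability gap closes in very few generations.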
Existential Risk from Superintelligence
Superintelligent AI systems could pose grave risks if they are not designed
and used carefully. One reason is that, unlike a natural pandemic, which may
never occur at all because it is a lottery-type risk, the arrival of
superintelligence is argued to be almost certain eventually. A second reason
it is so dangerous is that it may happen much earlier than the risk most
talked about in recent years, the climate catastrophe: the risk from
superintelligence is more likely to materialise in the next 50 years than in
the next century. On the other hand, if we manage to deliver a so-called
"friendly" superintelligence, then instead of becoming the biggest risk, it
could itself help us reduce other anthropogenic risks, such as climate change.
Controlling Superintelligence
Capability control proposals aim to reduce the capacity of AI systems to
influence the world, in order to reduce the danger that they could pose.
However, capability control would have limited effectiveness against a
superintelligence with a decisive advantage in planning ability, as it could
conceal its intentions and manipulate events to escape control.

CONCLUSION

As stated at the beginning, AI has raised fundamental questions about what we
should do with these systems, what the systems themselves should do, what
risks they involve, and what the major underlying ethical issues are. More
importantly, these systems also challenge the human view of humanity as the
intelligent and dominant species on Earth, and that scares people.

After the background and history, the main themes of this report are: ethical
issues that arise with AI systems as objects made and used by humans,
including privacy and manipulation, opacity and bias, human-robot interaction,
employment, and the effects of autonomy; then the ethics of the AI systems
themselves, covered under machine ethics and artificial moral agency; and
finally the problem of a possible future AI superintelligence leading to a
"singularity".

To sustain the progress of AI, a rational and harmonious interaction is
required between application-specific projects and visionary research ideas,
and even more so with the ethical challenges, to ensure that society as a
whole benefits from the evolution of AI.

In short, over the very next decade robots will become vital components in a
number of applications, and robots paired with AI will be able to perform
complex actions and learn from humans. We must never forget the underlying
ethical consequences of this endeavour as it progresses. With great
innovation comes great responsibility.

REFERENCES

1. Abowd, John M., 2017, "How Will Statistical Agencies Operate When All Data
Are Private?", Journal of Privacy and Confidentiality, 7(3): 1-15.

2. Amoroso, Daniele and Guglielmo Tamburrini, 2018, "The Ethical and Legal
Case Against Autonomy in Weapons Systems", Global Jurist, 18(1).

3. Anderson, Janna, Lee Rainie, and Alex Luchsinger, 2018, Artificial
Intelligence and the Future of Humans, Washington, DC: Pew Research Center.

4. Anderson, Michael and Susan Leigh Anderson (eds.), 2011, Machine Ethics,
Cambridge: Cambridge University Press. doi:10.1017/CBO9780511978036

5. Baldwin, Richard, 2019, The Globotics Upheaval: Globalisation, Robotics
and the Future of Work, New York: Oxford University Press.

6. Baum, Seth D., Stuart Armstrong, Timoteus Ekenstedt, Olle Häggström,
Robin Hanson, Karin Kuhlemann, Matthijs M. Maas, James D. Miller, Markus
Salmela, Anders Sandberg, Kaj Sotala, Phil Torres, Alexey Turchin, and Roman
V. Yampolskiy, 2019, "Long-Term Trajectories of Human Civilization",
Foresight, 21(1): 53-83.

7. Bertolini, Andrea and Giuseppe Aiello, 2018, "Robot Companions: A Legal
and Ethical Analysis", The Information Society, 34(3): 130-140.

8. Bostrom, Nick, 2012, "The Superintelligent Will: Motivation and
Instrumental Rationality in Advanced Artificial Agents", Minds and Machines,
22(2): 71-85.

9. Bostrom, Nick, 2013, "Existential Risk Prevention as Global Priority",
Global Policy, 4(1): 15-31. doi:10.1111/1758-5899.12002

10. Bryson, Joanna J., 2019, "The Past Decade and Future of AI's Impact on
Society", in Towards a New Enlightenment: A Transcendent Decade, Madrid:
Turner/BBVA.

11. Devlin, Kate, 2018, Turned On: Science, Sex and Robots, London:
Bloomsbury.

12. Dignum, Virginia, 2018, "Ethics in Artificial Intelligence: Introduction
to the Special Issue", Ethics and Information Technology, 20(1): 1-3.
