Artificial Intelligence and Robotics
CONTENTS

SL NO.   CONTENTS                   PAGE NO.
1        ABSTRACT                   1
2        RESEARCH METHODOLOGY       2
3        LIMITATIONS                2
5        CONCLUSION                 15
6        REFERENCES                 16
ABSTRACT
The ethics of artificial intelligence (AI) and robotics is a very young field within
applied ethics. Despite this, it has seen significant press coverage in recent years,
which supports insightful research. Artificial intelligence and robotics are digital
technologies that will have a profound impact on humanity in the near future. This
has raised fundamental questions about what we should do with these systems, what
the systems themselves should do, what risks they involve, and the major underlying
ethical issues that could appear innocent yet have deleterious consequences. The
purpose of this report is to define, understand and analyze these ethical issues.
RESEARCH METHODOLOGY

* Basic research aims to develop and understand knowledge and theories.
* Thematic analysis is a kind of qualitative analysis; it helps in understanding and reading various texts in order to develop themes.

LIMITATIONS
LITERATURE AND LEARNING REVIEW
INTRODUCTION
Background
The field of artificial intelligence (AI) officially started in 1956, propelled by a
small but now-famous DARPA-sponsored summer conference at Dartmouth College
in Hanover, New Hampshire. From where we stand now, at the start of the new
millennium, the Dartmouth conference is significant because the term ‘artificial
intelligence’ was coined there. Although the term made its appearance at the 1956
conference, the field of AI was certainly in operation before 1956. Historically, it is
worth noticing that the term “AI” was widely used between 1950 and 1975, then fell
into disrepute during the “AI winter” (1975–1995), and its use narrowed. As a result,
areas such as “machine learning”, “natural language processing” and “data science”
were frequently not labelled as “AI”. Since 2010, the usage has broadened once
more, and at times nearly all of computer science, and indeed high tech, is lumped
under “AI”. It is now a title to be proud of: a booming industry with massive
venture-capital investment.
AI & Robotics
The notion of artificial intelligence is understood broadly as any kind of artificial
computational system that shows intelligent behaviour, i.e., complex behaviour that
is conducive to reaching goals.
Some robots use AI, and some do not: typical industrial robots blindly follow fully
specified scripts with minimal sensory input and no learning or reasoning. It is
probably fair to say that while robotic systems cause more concern among the
public, AI systems are more likely to have a greater impact on humankind. Also, AI
or robotics systems built for a narrow set of tasks are less likely to cause new issues
than systems that are more flexible and autonomous.
Robotics and AI can thus be seen as covering two overlapping sets of systems:
systems that are only AI, systems that are only robotics, and systems that are
both. We are interested in all three; the scope of this report is thus not only the
intersection, but the union, of both sets. The main debates are as follows:
MAIN DEBATES
Privacy
Privacy has several well-recognised aspects, examples being “the right to be let
alone”, information privacy, privacy as an aspect of personhood, control over
information about oneself, and the right to secrecy. Privacy studies have
historically focused on state surveillance by secret services but now include
surveillance by other state agents, businesses, and even individuals.
As more data becomes digitized and more information is shared online, data
privacy is becoming more important. Data privacy denotes how information
should be handled based on its perceived importance. It isn’t just a
business concern; individuals have a lot at stake when it comes to the privacy of
their data. The more you are aware of it, the better you’ll be able to shield
yourself from multiple risks. In this digital age, the concept of data privacy is
mainly applied to critical personal information, also referred to as personally
identifiable information (PII) and personal health information (PHI). This
typically includes financial data, medical and health records, social security
numbers, and even basic yet sensitive information like birthdates, full names,
and addresses.
income levels across the globe for free. However, both executives also
acknowledged that security and privacy have to be a principal consideration, even
if this impacts profitability. It is impossible to ignore the fact that all this personal
data can lead to interference with and intrusion into people’s private lives. This
can have a damaging and distressing effect on individuals.
Data Privacy Should be a Basic Human Right
The right to privacy safeguards an individual’s dignity by protecting their
personal information from public scrutiny. This right is typically protected by
statutory law.
Surveillance
Digital surveillance is the monitoring of computer activity, of data stored on a hard
drive, or of data being transferred through computer networks. It is usually done
surreptitiously and can be carried out by anyone: governments, corporations and
even individuals. At the most basic level, surveillance is a way of accessing data;
it implies an agent who accesses personal data.
Surveillance is not a new phenomenon, yet for a large stretch of human history
the practice remained limited. Information was confined to specific locations
and went largely unshared and unrecorded. An inherent lack of manpower and
access to adequate processing tools further restricted the ability of organisations
to analyse large amounts of data — a defining characteristic of contemporary
systems of surveillance.
tactics. By advancing the interests of the manipulator, often at another's
expense, such methods could be considered exploitative and devious. Internet
manipulation refers to the co-optation of digital technology, such as social
media algorithms and automated scripts, for commercial, social or political
purposes. Internet manipulation is sometimes also used to describe selective
Internet censorship or violations of net neutrality. The ethical issues of AI in
surveillance go beyond the mere accumulation of data and direction of
attention: They include the use of information to manipulate behaviour, online
and offline, in a way that undermines autonomous rational choice. Of course,
efforts to manipulate behaviour are ancient, but they may gain a new quality
when they use AI systems.
Opacity of AI Systems
Opacity and bias are central issues in what is now sometimes called “data
ethics” or “big data ethics”. While automated decision systems have the
potential to bring more efficiency, consistency and fairness, they also open up the
possibility of new forms of discrimination, which may be harder to identify and
address. The opaque nature of machine learning algorithms, and the many ways
human biases can creep in, challenge “our ability to understand how and why a
decision has been made”. Bias in decision systems and data sets is exacerbated
by this opacity. So, in cases where there is a desire to remove bias, analyses of
opacity and bias go hand in hand, and political responses have to tackle both
issues together.
Many AI systems rely on machine learning techniques in neural networks that
extract patterns from a given dataset, with or without correct solutions provided;
i.e., supervised, semi-supervised or unsupervised learning. With these techniques,
the learning captures patterns in the data, and these are labelled in a way that
appears useful to the decision the system makes, while the programmer does not
really know which patterns in the data the system has used. In fact, the programs
keep evolving: when new data comes in, or new feedback is given, the patterns
used by the learning system change. This means that the outcome is not
transparent to the user or the programmers; it is opaque. The quality of the
program depends heavily on the quality of the data provided, so if the data
already involved a bias, then the program will reproduce that bias.
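This last point can be illustrated with a minimal sketch. The data and the “model” below are hypothetical and deliberately simplistic: a system that learns decision patterns from biased historical decisions ends up codifying and automating that bias.

```python
# A minimal sketch (hypothetical data) of how a learning system
# reproduces a bias already present in its training data.

def train(records):
    """'Learn' the approval rate for each group from historical decisions."""
    stats = {}
    for group, approved in records:
        yes, total = stats.get(group, (0, 0))
        stats[group] = (yes + approved, total + 1)
    return {g: yes / total for g, (yes, total) in stats.items()}

def predict(model, group):
    """Approve whenever the learned approval rate for the group is at least 50%."""
    return model[group] >= 0.5

# Historical decisions encode a bias: group A was approved far more often.
history = [("A", 1)] * 80 + [("A", 0)] * 20 + [("B", 1)] * 30 + [("B", 0)] * 70

model = train(history)
print(predict(model, "A"))  # True: the historical bias is codified
print(predict(model, "B"))  # False: and automated
```

Nothing in the program inspects why the groups differ; it simply extracts the pattern, which is exactly the opacity problem described above.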
There are several technical efforts that aim at explaining AI. A mechanism for
elucidating and articulating the power structures, biases, and influences that
computational artifacts exercise in society is sometimes called algorithmic
accountability reporting.
Bias in Decision Systems
Analytics and decision aids such as decision support systems (DSS) are
intended to improve the quality of decisions made, for example, using
communications technologies, acquiring and processing data, assisting in
analyzing data and documents, using quantitative models to identify and solve
problems, completing decision process tasks, and guiding decision making.
Traditionally, technologically-oriented decision tools have supported only part
of an organizational or individual decision process due to the complexity and
uncertainty inherent in semi-structured and unstructured decision tasks. The
user is expected to interact with the system in some way to provide input or
data, make choices about processing, interpret results, or come to a decision. In
short, the system should help the decision maker think rationally.
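The kind of quantitative model such a system might apply can be sketched as follows. The criteria, weights and options here are hypothetical; the point is that the system scores and ranks alternatives while the final choice stays with the human decision maker.

```python
# A minimal sketch (hypothetical criteria and weights) of a decision
# support model: each option is scored against weighted criteria and the
# ranking is presented to the decision maker, not enforced.

def score(option, weights):
    """Weighted sum of an option's criterion scores."""
    return sum(weights[c] * v for c, v in option["scores"].items())

weights = {"cost": 0.5, "quality": 0.3, "speed": 0.2}

options = [
    {"name": "Supplier A", "scores": {"cost": 0.9, "quality": 0.6, "speed": 0.4}},
    {"name": "Supplier B", "scores": {"cost": 0.5, "quality": 0.9, "speed": 0.8}},
]

# Rank options for the decision maker; the human interprets the result.
ranked = sorted(options, key=lambda o: score(o, weights), reverse=True)
for o in ranked:
    print(o["name"], round(score(o, weights), 2))
```

Note that the choice of criteria and weights is itself a human judgement, which is one of the places where bias can enter such a system.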
Cognitive bias is closely connected with human decision-making because
people learn and develop predictable thinking patterns. The human cognitive
system is generally prone to various kinds of cognitive biases. Confirmation
bias, for instance, is a type of cognitive bias that leads to poor decision-making:
it prevents us from looking at a situation objectively. Another form of bias is
present in data when it exhibits systematic error, e.g., statistical bias. Any given
dataset will only be unbiased for a single kind of issue, so the mere creation of a
dataset involves the danger that it may be used for a different kind of issue, and
then turn out to be biased for that kind. Machine learning on the basis of such
data would then not only fail to recognise the bias, but codify and automate the
historical bias. There are significant technical efforts to detect and remove bias
from AI systems, but it is fair to say that these are in early stages.
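As one concrete example of what such an early-stage bias check can look like, the sketch below (with hypothetical data) applies the well-known “four-fifths rule”, which flags disparate impact when one group’s favourable-outcome rate falls below 80% of another group’s.

```python
# A minimal sketch of the "four-fifths rule" check for disparate impact,
# applied to hypothetical decision data (1 = favourable, 0 = unfavourable).

def positive_rate(outcomes):
    """Fraction of favourable outcomes in a group."""
    return sum(outcomes) / len(outcomes)

def four_fifths_ratio(group_a, group_b):
    """Ratio of the lower favourable-outcome rate to the higher one."""
    ra, rb = positive_rate(group_a), positive_rate(group_b)
    return min(ra, rb) / max(ra, rb)

group_a = [1] * 60 + [0] * 40   # 60% favourable
group_b = [1] * 30 + [0] * 70   # 30% favourable

ratio = four_fifths_ratio(group_a, group_b)
print(round(ratio, 2))  # 0.5, well below the 0.8 threshold
print(ratio < 0.8)      # True: the dataset shows disparate impact
```

A check like this only detects one narrow, measurable kind of bias, which illustrates why such efforts remain at an early stage.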
Therefore, such robots could be used as instruments of deception. A few
examples of problematic robots are Hanson Robotics’ advanced human-like
robot “Sophia” and Hiroshi Ishiguro’s remote-controlled Geminoids.
(Image: Hiroshi Ishiguro and his Geminoid)

Research and innovation in healthcare robotics has seen significant growth in
recent years. For example, care robots have been developed to care for and
support elderly people living at home, robotic nurses have been developed to
assist with care tasks, surgical robots have been designed for use in hospitals,
and there are also robots, such as the Paro robot seal, that are given to patients
for their comfort. Such innovations have led to the question of whether these
robots are created to automate tasks, essentially replacing people, or to help
humans work more effectively and efficiently. For someone to be cared for,
there has to be an intention to care, which robots lack; a system that pretends to
care would simply deceive people into believing that they are cared for.
(Image: ROXXXY the “love robot”)

Another key innovation in the field of robotics is sex robots. 2010 saw the
release of ROXXXY, the “love robot”. This raises the question of whether it is
perfectly alright to abuse a sex robot. It is a machine that merely looks like us;
it does not feel anything, nor does it enjoy any rights or freedom, so is it ethical
to treat such machines as mere sex slaves? People can end up forming deep
emotional attachments to objects. Companionship or friendship with an android
is highly possible, but isn’t this just a way of deceiving humans, since it is the
people who would be sharing their most vulnerable selves with machines that
cannot reciprocate or care for such feelings and emotions? And the worst part
is that people would be paying for this deception. Another important factor to
be taken into consideration is consent: wouldn’t human behaviour be
influenced by such experiences, and would people not end up treating other
people as mere objects of desire, or even as recipients of abuse? The
“Campaign Against Sex Robots” argues that these devices are a continuation
of slavery and prostitution [Richardson 2016].
Automation and Employment
Advancement in the fields of artificial intelligence and robotics has had a huge
impact on manual labour. Intelligent systems have vastly improved the
productivity of many office jobs, ranging from clerical to professional work.
Artificial intelligence technologies have eliminated much repetitive work,
helping humans complete their tasks faster and more effectively and allowing
them to focus more time on value-added activities. However, automation has
also resulted in many job losses. This raises several questions: has artificial
intelligence put people’s jobs at risk, or has it made jobs better and easier to
perform? Can we consider it ethical to take away people’s livelihoods in an
attempt to increase productivity? And finally, can we call automation ethical if
it is going to cause automation anxiety? We can also clearly state that
automation affects people in lower- and middle-income groups the most. It
ends up producing social injustice: the rich get richer and the poor get poorer.
Automation would lead to an unjust distribution of wealth, and this is indeed a
development that we can already see.
Taking the farming sector as an example, over 60% of the workforce in Europe
and North America was employed in farming during the 1800s, but by 2010
only about 5% was working in agriculture. Another example is India: presently,
India is going through a job crisis, with unemployment at a 45-year high, and
since the pandemic the situation has become much worse. So, should we as a
country still head towards automation, knowing how many roles and
responsibilities would be eliminated? Can we consider it ethical to take away
potential job opportunities that would have benefited a large number of people?

However, we can say that as long as automation is an assistant or a tool and not
a replacement, we can call it ethical.
Autonomous Systems
Autonomous systems are systems that are independent and can complete a
given task or objective entirely by themselves, with no or minimal human
participation. These systems are structured in such a manner that they are able
to anticipate the future while being fully aware of their present environment.
The primary objective of autonomous systems is to reduce or eliminate human
labour. Artificial intelligence, in turn, has as its eventual goal the ability of
machines to operate on their own with the least human effort involved.
Autonomous systems can be regarded as a part of artificial intelligence, but
they can be more complicated, since we expect the machine to work like a
human with no error. One philosophical debate concerns a strong notion of
autonomy, under which responsibility and personhood act as its basis. In this
scenario, responsibility points directly towards autonomy, but the relation is not
the same in the case of personhood and autonomy: personhood is more related
to the legalities attached to an autonomous system. There is always doubt
when it comes to these systems and their adaptability, and technical concerns
about them are always present.
For further advancement, there is also “verifiable artificial intelligence”, which
is aimed at the safety and security of these systems. Major bodies such as the
Institute of Electrical and Electronics Engineers (IEEE) and the British
Standards Institution (BSI) have created technical standards concerning, for
example, transparency and the security of data. Autonomous systems are built
to work on land, on and under water, in the air, and so on. The following is a
brief description of such systems, with two particular examples.
1. Autonomous vehicles:
Autonomous vehicles are one example of autonomous systems. They hold the
potential to minimize the notable losses that can occur due to human fault while
driving. It should still be noted that issues such as the behaviour of the vehicle,
its adaptability on roads, the risks involved and many other factors are still
being questioned. There are still problems with regard to these vehicles, often
framed as the “trolley problem”, which basically concerns the right decision
being taken at the right time by such systems. So far, only theoretical tools have
been used to investigate this problem and to check its ethical intuitions.

All the general ethical problems of automated vehicles, such as speed, rules,
overtaking and maintaining distance, are addressed by keeping in account the
basic legal rules of driving. The vehicle is programmed in a way that ensures
“maximum utility” and the “safety of its occupants” by following the legal
driving regulations.
2. Autonomous Weapons:
The concept of autonomous weapons is not recent; it has been in use for years.
It refers to weapons used during war: remotely piloted weapons, guided
missiles and other such instruments. The major advantage of such weapons is
that they can easily identify and target the enemy during war, but they have not
found much support, as they are directed towards human killing and
destruction. A “risk analysis” is conducted before making use of such weapons:
it indicates who is mainly exposed to risk, who benefits from the weapons, and
accordingly informs the decision of whether to use them.
Machine ethics
Machine ethics refers to the behaviour of machines towards humans and other
machines. It includes whether a particular action by the machine is acceptable
and safe for its users or not. Designers of artificial intelligence should account
for many other factors, such as social, legal and ethical factors, while designing
the abilities of a machine. It has also been seen that a robot designed to behave
ethically can very easily be altered to behave in an unethical manner. Criteria
such as taking responsibility, exhibiting transparency and auditability also
come into the picture when we talk about machine ethics.
Artificial intelligence as an emerging trend also faces several issues, which are
stated below:

Loss of jobs: AI has the ultimate goal of eliminating human effort or
involvement and letting machines take over entire responsibilities. This
automation will result in the loss of jobs in many organisations, leading to
unemployment.

Biased AI: Although there is no argument about the functioning, speed and
ability of AI to perform tasks better than humans, we cannot ignore the fact
that these systems are created by humans. There is no evidence proving that
these humans are not judgmental or biased in some form while creating such
systems; therefore, there is a chance of the systems themselves being biased.

Security issues: As technology grows, the security of systems and data must
grow with it. This applies to robots and other autonomous systems as well;
cyber security is always an issue while using AI machines.

Stupidity of machines: AI machines are trained before they are used, and in
order to update or advance a system, the next level of training has to be
provided. In such cases, there is always a possibility that these machines will
commit mistakes.
structuring and designing these systems. Necessary actions will also be needed
to avoid the wrong use or negative consequences of such AI systems.
Singularity
The technological singularity is the hypothetical point at which a
superintelligence arises: an agent, also called a hyperintelligence or superhuman
intelligence, that possesses intelligence far surpassing that of the brightest and
most gifted human minds. “Superintelligence” may also refer to the form or
degree of intelligence possessed by such an agent.
As computers increase in power, it becomes possible for people to build a
machine that is more intelligent than humanity; this superhuman intelligence
possesses greater problem-solving and inventive skills than current humans are
capable of. This superintelligent machine then designs an even more capable
machine, or re-writes its own software to become even more intelligent; this
(even more capable) machine then goes on to design a machine of yet greater
capability, and so on.
Existential Risk from Superintelligence

AI systems could also pose risks if they are not designed and used carefully.
This risk is almost certain to materialize, unlike natural pandemics, which may
not happen at all, since they are a lottery-type risk. The second reason why it is
so dangerous is that it may happen much earlier than the risk most talked about
in recent years, the climatic catastrophe: the risk coming from superintelligence
is more likely to arise in the next 50 years than in the next century. On the other
hand, I believe that if we manage to deliver a so-called “friendly”
superintelligence, then instead of becoming the biggest risk, it will itself help
us reduce other anthropogenic risks, such as climate change.
Controlling Superintelligence

Capability control proposals aim to reduce the capacity of AI systems to
influence the world, in order to reduce the danger they could pose. However,
capability control would have limited effectiveness against a superintelligence
with a decisive advantage in planning ability, as it could conceal its intentions
and manipulate events to escape control.
CONCLUSION
After the background and history, the main themes of this report are, first, the
ethical issues that arise with AI systems as objects made and used by humans,
including privacy and manipulation, opacity and bias, human-robot interaction,
employment, and the effects of autonomy; then the ethics of the AI systems
themselves, in machine ethics and artificial moral agency; and finally the
problem of a possible future AI superintelligence leading to a “singularity”.
To sustain the progress of AI, a rational and harmonious interaction is required
between application-specific projects and visionary research ideas, and, even
more so, attention to the ethical challenges, to ensure that society as a whole
benefits from the evolution of AI.

In short, in the next decade robots will become vital components in a number
of applications, and robots paired with AI will be able to perform complex
actions and learn from humans. As this endeavour progresses, we must never
forget its underlying ethical consequences. With great innovation comes great
responsibility.
REFERENCES
1. Abowd, John M, 2017, “How Will Statistical Agencies Operate When All
Data Are Private?”, Journal of Privacy and Confidentiality, 7(3): 1–15.
7. Bertolini, Andrea and Giuseppe Aiello, 2018, “Robot Companions: A
Legal and Ethical Analysis”, The Information Society, 34(3): 130–140.
11. Devlin, Kate, 2018, Turned On: Science, Sex and Robots, London:
Bloomsbury.