Ethics and AI
UNIT I
INTRODUCTION
Definition of morality and ethics in AI- Impact on society- Impact on human psychology-
Impact on the legal system- Impact on the environment and the planet- Impact on trust
INTRODUCTION
What is AI – and what is intelligence?
'Artificial Intelligence (AI) refers to systems that display intelligent behaviour by
analysing their environment and taking actions – with some degree of autonomy – to achieve
specific goals.
AI-based systems can be purely software-based, acting in the virtual world (e.g. voice
assistants, image analysis software, search engines, speech and face recognition systems) or AI
can be embedded in hardware devices (e.g. advanced robots, autonomous cars, drones or
Internet of Things applications).'
How do we define intelligence?
A straightforward definition is that intelligent behaviour is 'doing the right thing at the right time'. Definitions of intelligence commonly identify three features. Intelligence is:
(1) a property that an individual agent has as it interacts with its environment or environments;
(2) related to the agent's ability to succeed or profit with respect to some goal or objective; and
(3) dependent on how able the agent is to adapt to different objectives and environments.
These definitions point out that intelligence involves adaptation, learning and understanding. At its simplest, then, intelligence is 'the ability to acquire and apply knowledge and skills and to manipulate one's environment'.
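To make these three features concrete, here is a minimal, purely illustrative sketch (in Python) of an agent that interacts with an environment, acts toward a goal and adapts from feedback. The environment, the reward signal and the update rule are assumptions invented for illustration; they do not describe any particular AI system.

```python
import random

class SimpleAdaptiveAgent:
    """Illustrative agent: senses an environment, acts toward a goal,
    and adapts by learning which actions earn the most reward."""

    def __init__(self, actions):
        self.actions = actions
        self.value = {a: 0.0 for a in actions}   # learned reward estimate per action
        self.counts = {a: 0 for a in actions}

    def act(self, explore=0.1):
        # Occasionally explore; otherwise exploit what has worked so far.
        if random.random() < explore:
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.value[a])

    def learn(self, action, reward):
        # Adapt: update the running average reward for the chosen action.
        self.counts[action] += 1
        self.value[action] += (reward - self.value[action]) / self.counts[action]

# Hypothetical environment: action "b" succeeds more often than "a".
def environment(action):
    return 1.0 if (action == "b" and random.random() < 0.8) else 0.0

agent = SimpleAdaptiveAgent(["a", "b"])
for _ in range(500):
    a = agent.act()
    agent.learn(a, environment(a))
print(agent.value)   # the agent adapts toward the more successful action
```

Even this toy agent exhibits the three features above: it is coupled to an environment, it is judged by how much reward it collects, and it adapts its choices as the feedback changes.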
Examples of a physical robot's environment are:
• Human environment (for social robots)
• A city street (for an autonomous vehicle)
• A care home or hospital (for a care or assisted living robot)
• A workplace (for a workmate robot)
Examples of the 'environment' of a software AI are:
• a clinical setting (for a medical diagnosis AI)
• a public space (for face recognition in airports, for instance), or a virtual space (for face recognition in social media)
Types of artificial intelligence
• weak AI
• strong AI
Weak AI
Weak (or narrow) AI refers to systems built to perform a specific task, such as recognising images, translating language or playing a game; they can appear intelligent within that task but have no general understanding.
Strong AI
Strong (or general) AI refers to a hypothetical system whose intelligence matches or exceeds human intelligence across tasks, including reasoning, learning and understanding in unfamiliar domains.
Definition of Morality and Ethics in AI
Ethics are moral principles that govern a person's behaviour or the conduct of an
activity. As a practical example, one ethical principle is to treat everyone with respect.
AI ethics are the study of the moral and ethical considerations involved in developing and
using Artificial Intelligence. The field of AI ethics does not only focus on what is morally right
or wrong for a specific machine but also on how to approach important questions such as: How
can we make sure that autonomous machines act in accordance with our values? How can we ensure that they are less likely to harm humans than other technologies? What is our responsibility as designers and users of ethical AI systems?
Principles for AI ethics are a set of rules and guidelines that are meant to help protect society
from the negative effects of Artificial Intelligence. These principles aim to protect people, the
environment, and the economy.
1. Safety:
This refers to how well an AI can avoid harming humans. This includes things like not causing
physical harm or using offensive language. It also includes things like protecting intellectual
property rights and privacy.
2. Security:
This refers to how well an AI can prevent other systems from attacking it or taking advantage
of it in some way. It also refers to how well an AI can protect itself from being hacked or
manipulated by humans who want to use it for nefarious means (like stealing money).
3. Privacy:
This refers to how much information an AI system knows about you, where it gets its data from, how it stores that information and what kinds of analysis it performs on that data; in short, how your personal information is used and shared by the technology companies involved.
4. Fairness:
This refers to whether or not your rights as a consumer are being protected when interacting
with a company’s services/products.
AI systems should be designed and operated to be safe, secure, and private, and the responsibility for upholding these principles rests with the designers and builders of intelligent autonomous systems.
Challenges in AI Ethics
As a new field, AI ethics is still in the process of being developed, and it raises many open ethical questions and risks. There are no clear rules or guidelines for AI ethics because the field is relatively new. As such, it can be challenging to determine whether a given program has acted ethically when there are no established protocols for determining what constitutes ethical behavior.
In fact, many people believe that some form of regulation may be necessary before Artificial Intelligence becomes so widespread that we no longer even realise there is anything wrong with our creations' behaviour patterns. These individuals fear that, without proper oversight by experts versed both in technology development and in ethics-related research fields such as philosophy, political science and economics, society will suffer greatly from irresponsible uses of Artificial Intelligence, such as autonomous cars driving around streets full of pedestrians who might not understand what they are witnessing.
The same concern applies across many industries where autonomous machines are becoming commonplace, including manufacturing plants where robots perform tasks once done by humans so efficiently that they are affecting unemployment rates worldwide.
Impact on Society
“These things could get more intelligent than us and could decide to take over, and we
need to worry now about how we prevent that happening,” said Geoffrey Hinton, known as the
“Godfather of AI” for his foundational work on machine learning and neural
network algorithms. In 2023, Hinton left his position at Google so that he could “talk about the
dangers of AI,” noting a part of him even regrets his life’s work.
Whether it’s the increasing automation of certain jobs, gender and racially biased algorithms or
autonomous weapons that operate without human oversight (to name just a few), unease
abounds on a number of fronts. And we’re still in the very early stages of what AI is really
capable of.
AI and deep learning models can be difficult to understand, even for those who work directly with the technology. This leads to a lack of transparency about how and why AI reaches its conclusions, and a lack of explanation of what data AI algorithms use or why they may make biased or unsafe decisions. These concerns have given rise to the use of explainable AI, but there is still a long way to go before transparent AI systems become common practice.
“The reason we have a low unemployment rate, which doesn’t actually capture people
that aren’t looking for work, is largely that lower-wage service sector jobs have been pretty
robustly created by this economy,” futurist Martin Ford told Built In. With AI on the rise,
though, “I don’t think that’s going to continue.”
As AI robots become smarter and more dexterous, the same tasks will require fewer
humans. And while AI is estimated to create 97 million new jobs by 2025, many employees
won’t have the skills needed for these technical roles and could get left behind if companies
don’t upskill their workforces.
“If you’re flipping burgers at McDonald’s and more automation comes in, is one of
these new jobs going to be a good match for you?” Ford said. “Or is it likely that the new job
requires lots of education or training or maybe even intrinsic talents — really strong
interpersonal skills or creativity — that you might not have? Because those are the things that,
at least so far, computers are not very good at.” Even professions that require graduate degrees
and additional post-college training aren’t immune to AI displacement.
As technology strategist Chris Messina has pointed out, fields like law and accounting
are primed for an AI takeover. In fact, Messina said, some of them may well be decimated. AI
already is having a significant impact on medicine. Law and accounting are next, Messina said,
the former being poised for “a massive shakeup.”
“Think about the complexity of contracts, and really diving in and understanding what
it takes to create a perfect deal structure,” he said in regards to the legal field. “It’s a lot of
attorneys reading through a lot of information — hundreds or thousands of pages of data and
documents. It’s really easy to miss things. So AI that has the ability to comb through and
comprehensively deliver the best possible contract for the outcome you’re trying to achieve is
probably going to replace a lot of corporate attorneys.”
Social manipulation also stands as a danger of artificial intelligence. This fear has
become a reality as politicians rely on platforms to promote their viewpoints, with one
example being Ferdinand Marcos, Jr., wielding a TikTok troll army to capture the votes of
younger Filipinos during the Philippines’ 2022 election.
TikTok, which is just one example of a social media platform that relies on AI
algorithms, fills a user’s feed with content related to previous media they’ve viewed on the
platform. Criticism of the app targets this process and the algorithm’s failure to filter out
harmful and inaccurate content, raising concerns over TikTok’s ability to protect its users from
misleading information.
Online media and news have become even murkier in light of AI-generated images and
videos, AI voice changers as well as deepfakes infiltrating political and social spheres. These
technologies make it easy to create realistic photos, videos, audio clips or replace the image of
one figure with another in an existing picture or video. As a result, bad actors have another
avenue for sharing misinformation and war propaganda, creating a nightmare scenario where it
can be nearly impossible to distinguish between credible and false news.
“No one knows what’s real and what’s not,” Ford said. “So it really leads to a situation
where you literally cannot believe your own eyes and ears; you can’t rely on what, historically,
we’ve considered to be the best possible evidence... That’s going to be a huge issue.”
In addition to its more existential threat, Ford is focused on the way AI will adversely
affect privacy and security. A prime example is China’s use of facial recognition technology in
offices, schools and other venues. Besides tracking a person’s movements, the Chinese
government may be able to gather enough data to monitor a person’s activities, relationships
and political views.
If you’ve played around with an AI chatbot or tried out an AI face filter online, your
data is being collected — but where is it going and how is it being used? AI systems often
collect personal data to customize user experiences or to help train the AI models you’re using
(especially if the AI tool is free). Data may not even be considered secure from other users
when given to an AI system, as one bug incident that occurred with ChatGPT in 2023 “allowed
some users to see titles from another active user’s chat history.” While there are laws in place to protect personal information in some cases in the United States, there is no explicit federal law that protects citizens from data-privacy harms caused by AI.
6. Biases due to AI
Various forms of AI bias are detrimental too. Speaking to the New York Times,
Princeton computer science professor Olga Russakovsky said AI bias goes well beyond gender
and race. In addition to data and algorithmic bias (the latter of which can “amplify” the
former), AI is developed by humans — and humans are inherently biased.
“A.I. researchers are primarily people who are male, who come from certain racial
demographics, who grew up in high socioeconomic areas, primarily people without
disabilities,” Russakovsky said. “We’re a fairly homogeneous population, so it’s a challenge to
think broadly about world issues.”
If companies refuse to acknowledge the inherent biases baked into AI algorithms, they may
compromise their DEI initiatives through AI-powered recruiting. The idea that AI can measure
the traits of a candidate through facial and voice analyses is still tainted by racial biases,
reproducing the same discriminatory hiring practices businesses claim to be eliminating.
Widening socioeconomic inequality sparked by AI-driven job loss is another cause for
concern, revealing the class biases of how AI is applied. Blue-collar workers who perform
more manual, repetitive tasks have experienced wage declines as high as 70 percent because of
automation. Meanwhile, white-collar workers have remained largely untouched, with some
even enjoying higher wages.
Sweeping claims that AI has somehow overcome social boundaries or created more jobs fail to
paint a complete picture of its effects. It’s crucial to account for differences based on race,
class and other categories. Otherwise, discerning how AI and automation benefit certain
individuals and groups at the expense of others becomes more difficult.
Along with technologists, journalists and political figures, even religious leaders are
sounding the alarm on AI’s potential socio-economic pitfalls. In a 2019 Vatican meeting titled,
“The Common Good in the Digital Age,” Pope Francis warned against AI’s ability to
“circulate tendentious opinions and false data” and stressed the far-reaching consequences of
letting this technology develop without proper oversight or restraint.
The rapid rise of generative AI tools like ChatGPT and Bard gives these concerns more
substance. Many users have applied the technology to get out of writing
assignments, threatening academic integrity and creativity.
“The mentality is, ‘If we can do it, we should try it; let’s see what happens,’” Messina said. “‘And if we can make money off it, we’ll do a whole bunch of it.’ But that’s not unique to technology. That’s been happening forever.”
As is too often the case, technological advancements have been harnessed for the
purpose of warfare. When it comes to AI, some are keen to do something about it before it’s
too late: In a 2016 open letter, over 30,000 individuals, including AI and robotics researchers,
pushed back against the investment in AI-fueled autonomous weapons.
“The key question for humanity today is whether to start a global AI arms race or to
prevent it from starting,” they wrote. “If any major military power pushes ahead with AI
weapon development, a global arms race is virtually inevitable, and the endpoint of this
technological trajectory is obvious: autonomous weapons will become the Kalashnikovs of
tomorrow.”
This prediction has come to fruition in the form of Lethal Autonomous Weapon
Systems, which locate and destroy targets on their own while abiding by few regulations.
Because of the proliferation of potent and complex weapons, some of the world’s most
powerful nations have given in to anxieties and contributed to a tech cold war.
Many of these new weapons pose major risks to civilians on the ground, but the danger
becomes amplified when autonomous weapons fall into the wrong hands. Hackers have
mastered various types of cyber attacks, so it’s not hard to imagine a malicious actor
infiltrating autonomous weapons and instigating absolute armageddon. If political rivalries
and warmongering tendencies are not kept in check, artificial intelligence could end up being
applied with the worst intentions.
The finance industry has also embraced AI-driven algorithmic trading. While AI algorithms aren’t clouded by human judgment or emotions, they also don’t take into account contexts, the interconnectedness of markets and factors like human trust and
fear. These algorithms then make thousands of trades at a blistering pace with the goal of
selling a few seconds later for small profits. Selling off thousands of trades could scare
investors into doing the same thing, leading to sudden crashes and extreme market volatility.
This isn’t to say that AI has nothing to offer to the finance world. In fact, AI algorithms
can help investors make smarter and more informed decisions on the market. But finance
organizations need to make sure they understand their AI algorithms and how those algorithms
make decisions. Companies should consider whether AI raises or lowers their
confidence before introducing the technology to avoid stoking fears among investors and
creating financial chaos.
There also comes a worry that AI will progress in intelligence so rapidly that it will
become sentient, and act beyond humans’ control — possibly in a malicious manner. Alleged
reports of this sentience have already been occurring, with one popular account being from a
former Google engineer who stated the AI chatbot LaMDA was sentient and speaking to him
just as a person would. As AI’s next big milestones involve making systems with artificial
general intelligence, and eventually artificial superintelligence, cries to completely stop these
developments continue to rise.
AI still has numerous benefits, like organizing health data and powering self-driving
cars. To get the most out of this promising technology, though, some argue that plenty of
regulation is necessary.
“There’s a serious danger that we’ll get [AI systems] smarter than us fairly soon and
that these things might get bad motives and take control,” Hinton told NPR. “This isn’t just a
science fiction problem. This is a serious problem that’s probably going to arrive fairly soon,
and politicians need to be thinking about what to do about it now.”
AI regulation has been a main focus for dozens of countries, and now the U.S. and
European Union are creating more clear-cut measures to manage the spread of artificial
intelligence. Although this means certain AI technologies could be banned, it doesn’t prevent
societies from exploring the field.
Preserving a spirit of experimentation is vital for Ford, who believes AI is essential for
countries looking to innovate and keep up with the rest of the world.
“You regulate the way AI is used, but you don’t hold back progress in basic
technology. I think that would be wrong-headed and potentially dangerous,” Ford said. “We
decide where we want AI and where we don’t; where it’s acceptable and where it’s not. And
different countries are going to make different choices.”
The key is deciding how to apply AI in an ethical manner. On a company level, there are many
steps businesses can take when integrating AI into their operations. Organizations can develop
processes for monitoring algorithms, compiling high-quality data and explaining the findings
of AI algorithms. Leaders could even make AI a part of their company culture, establishing
standards to determine acceptable AI technologies.
“The creators of AI must seek the insights, experiences and concerns of people across
ethnicities, genders, cultures and socio-economic groups, as well as those from other fields,
such as economics, law, medicine, philosophy, history, sociology, communications, human-
computer-interaction, psychology, and Science and Technology Studies (STS).”
“I think we can talk about all these risks, and they’re very real,” Ford said. “But AI is also
going to be the most important tool in our toolbox for solving the biggest challenges we face.”
Impact on Human Psychology
Here we discuss how people relate to robots and autonomous systems from a psychological point of view, and the psychological factors that affect the ethical design and use of AIs and robots. Humans tend to anthropomorphise machines and form unidirectional relationships with them, and the trust in these relationships is the basis for persuasion and manipulation that can be used for good and for evil.
It is critical to understand that humans will attribute desires and feelings to machines even if the machines have no ability whatsoever to feel anything. That is, people who are unfamiliar with the internal states of machines will assume that machines have internal states of desires and feelings similar to their own. This is called anthropomorphism, and various ethical risks are associated with it. Robots and AIs might be able to use "big data" to persuade and manipulate humans to do things they would rather not do. Due to unidirectional emotional bonding, humans might have misplaced feelings towards machines or trust them too much. In the worst-case scenarios, "weaponised" AI could be used to exploit humans.
Problems of Anthropomorphisation
Humans interact with robots and AI systems as if they are social actors, an effect that has been called the "Media Equation" (Reeves and Nass 1996). People treat robots with politeness and apply social norms and values to their interaction partner (Broadbent 2017). Through repeated interaction, humans can form friendships and even intimate relationships with machines. This anthropomorphisation is arguably hard-wired into our minds and might have an evolutionary basis (Zlotowski et al. 2015). Even if the designers and engineers did not intend the robot to exhibit social signals, users might still perceive them. The human mind is wired to detect social signals and to interpret even the slightest behaviour as an indicator of some underlying motivation. This is true even of abstract animations: humans can project "theory of mind" onto abstract shapes that have no minds at all (Heider and Simmel 1944). It is therefore the responsibility of the system's creators to carefully design the physical features and social interaction the robots will have, especially if they interact with vulnerable users, such as children, older adults and people with cognitive or physical impairments.
To accomplish such good social interaction skills, AI systems need to be able to sense
and represent social norms, the cultural context and the values of the people (and other agents)
with which they interact (Malle et al. 2017). A robot, for example, needs to be aware that it
would be inappropriate to enter a room in which a human is changing his/her underwear.
Being aware of these norms and values means that the agent needs to be able to sense relevant
behaviour, process its meaning and express the appropriate signals. A robot entering the
bedroom, for example, might decide to knock on the door prior to entering. It then needs to
hear the response, even if only non-verbal utterance, and understand its meaning. Robots might
not need to be perfectly honest. As Oscar Wilde observed “The truth is rarely pure and never
simple.” White lies and minor forms of dishonesty are common in human-human interaction
(Feldman et al. 2002; DePaulo et al. 1996).
Emotional bonds can develop even when the interactions between the robot and the human are largely unidirectional, with the human providing all of the emotion.
A group of soldiers in Iraq, for example, held a funeral for their robot and created a
medal for it (Kolb 2012). Carpenter provides an in-depth examination of human-robot
interaction from the perspective of Explosive Ordnance Disposal (EOD) teams within the military (Carpenter 2016). Her work offers a glimpse of how naturally and easily people
anthropomorphise robots they work with daily. Robinette et al. (2016) offered human subjects
a guidance robot to assist them with quickly finding an exit during an emergency. They were
told that if they did not reach the exit within the allotted 30 s then their character in the
environment would perish. Those that interacted with a good guidance robot that quickly led
them directly to an exit tended to name the robot and described its behaviour in heroic terms.
Much research has shown that humans tend to quickly befriend robots that behave socially.
2. Misplaced Trust in AI
Users may also trust the robot too much. Ever since the Eliza experiments of the 1960s,
it has become apparent that computers and robots have a reputation of being honest. While
they rarely make mistakes in their calculations, this does not mean that their decisions are
smart or even meaningful. There are examples of drivers blindly following their navigation
devices into even dangerous and illegal locations. Robinette et al. (2016) showed that
participants followed an obviously incompetent robot in a fire evacuation scenario. It is
therefore necessary for robots to be aware of the certainty of their own results and to
communicate this to the users in a meaningful way.
Persuasive AI
By socially interacting with humans for a longer period, relationships will form that
can be the basis for considerable persuasive power. People are much more receptive to
persuasion from friends and family compared to a car salesperson. The first experiments with
robotic sales representatives showed that the robots do have sufficient persuasive power for the
job (Ogawa et al. 2009). Other experiments have explored the use of robots in shopping malls
(Shiomi et al. 2013; Watanabe et al. 2015). This persuasive power can be used for good or evil.
The concern is that an AI system may use, and potentially abuse, its powers. For
example, it might use data, such as your Facebook profile, your driving record or your credit
standing to convince a person to do something they would not normally do. The result might
be that the person’s autonomy is diminished or compromised when interacting with the robot.
Imagine, for example, encountering the ultimate robotic car salesperson who knows everything about you and can use virtually imperceptible micro-expressions to game you into making the purchase it prefers. The use of these "superpowers" for persuasion can limit a
person’s autonomy and could be ethically questionable.
Persuasion works best with friends. Friends influence us because they have intimate
knowledge of our motivations, goals, and personality quirks. Moreover, psychologists have
long known that when two people interact over a period of time they begin to exchange and take on each other's subtle mannerisms and uses of language (Brandstetter et al. 2017). This is
known as the Michelangelo phenomenon. Research has also shown that as relationships grow,
each person’s uncertainty about the other person reduces fostering trust. This trust is the key to
a successful persuasion. Brandstetter and Bartneck (2017) showed that it takes only 10% of the members of a community owning a robot for changes in the use of language of the whole community to take place.
The emotional connection between the robot or AI system and its user might be
unidirectional. While humans might develop feelings of friendship and affection towards their
silicon friends and these might even be able to display emotional expressions and emit signals
of friendship, the agent might still be unable to experience any “authentic” phenomenological
friendship or affection. The relationship is thereby unidirectional which may lead to even more
loneliness (Scheutz 2014). Moreover, tireless and endlessly patient systems may accustom
people to unrealistic human behaviour. In comparison, interacting with a real human being
might become increasingly difficult or plain boring.
For example, already in the late 1990s, phone companies operated flirt lines. Men and
women would be randomly matched on the phone and had the chance to flirt with each other.
Unfortunately, more men called in than women and thus not all of the men could be matched
with women. The phone companies thus hired women to fill the gap and they got paid by how
long they could keep the men on the line. These professional talkers became highly trained in
talking to men. Sadly, when a real woman called in, men would often not be interested in her
because she lacked the conversational skill that the professional talkers had honed. While the
phone company succeeded in making a profit, the customers failed to find dates or actual relationships, since the professional women would always, for some unforeseeable reason, be unavailable for meetings.
to be our companion. Idealised interactions with these might become too much fun and thereby
inhibit human-human interaction.
These problems could become even more intense when considering intimate
relationships. An always available amorous sex robot that never tires might set unrealistic if
not harmful and disrespectful expectations. It could even lead to undesirable cognitive
development in adolescents, which in turn might cause problems. People might also make
robotic copies of their ex-lovers and abuse them (Sparrow 2017).
Even if robots appear to show interest, concern, and care for a person, they cannot truly have these emotions. Nevertheless, naive humans tend to believe that the robot
does in fact have emotions as well, and a unidirectional relationship can develop. Humans tend
to befriend robots even if they present only a limited veneer of social competence. Short et al.
(2010) found that robots which cheated while playing the game rock, paper, scissors were
viewed as more social and got more attributions of mental state compared to those that did not.
People may even hold robots as morally accountable for mistakes. Experiments have shown
that when a robot incorrectly assesses a person’s performance in a game, preventing them from
winning a prize, people hold the robot morally accountable (Kahn et al. 2012).
Perhaps surprisingly, even one’s role while interacting with a robot can influence the bond
that develops. Kim, Park, and Sundar asked study participants to either act as a caregiver to a
robot or to receive care from a robot. Their results demonstrate that receiving care from a robot
led participants to form a more positive view of the robot (Kim et al. 2013). Overall, the
research clearly shows that humans tend to form bonds with robots even if their interactions
with the robot are one-directional, with the person providing all of the emotion. The bond that
the human then feels for the robot can influence the robot’s ability to persuade the person.
Impact on the Legal System
Artificial intelligence refers to computers or robots that can perform tasks that normally require human intelligence. It helps people get rid of routine tasks: it approximates thinking at a human level and enables people to focus on tasks that computers cannot accomplish. It is the science of making computers that reason, learn, imagine, communicate and make choices as humans do. It has both good and bad effects for people, since it helps us work effectively and efficiently, but it may, on the other hand, take over thousands of individuals' jobs.
Artificial intelligence and law are combined with computational and mathematical methods to make the law more rational, convenient, useful, practical and predictable. Artificial intelligence enables tasks such as contract review and due-diligence analysis, recognising changes in e-mail tone, and even automated drafting, where the computer knows what to draft and produces the document.
Indian legal practice is very traditional and manual, and practitioners have been somewhat reluctant to adopt artificial intelligence in law. No doubt lawyers now use laptops and computers rather than typewriters, send letters through fax machines and use online portals for legal research (such as Manupatra and SCC Online). It is equally true, however, that people need time to adopt new instruments. Some lawyers are nevertheless altering the way law companies and law firms operate by shifting their focus to artificial intelligence. But artificial intelligence is still at an early stage in India and will need some time to be deployed correctly.
Advances in legal technology have certainly increased legal professionals' duties and may be an important factor in changing the way lawyers work and the way law is perceived in India. Various businesses that deal with artificial intelligence and law have long sought new ways to extend the technology to improve the speed and accuracy of the legal profession. Even ordinary people may thus readily access the law.
Artificial intelligence in India is finding ways to enhance the quality of legal work. In practice, computers and robots cannot replace the function of the lawyer in court, but they can carry out research and draft papers, so the routine workload of lawyers may be significantly reduced as AI-based technologies assist in drafting different legal documents. The Indian legal system is huge and its Constitution is the longest in the world; a lawyer must perform many tasks, such as drafting documents and providing varied support to clients. With the help of artificial intelligence, advocates can complete such work in seconds.
The research carried out by lawyers consumes many working hours and lowers profits. Artificial intelligence can rebalance this for the whole legal community, since research work then takes only seconds. It saves drafting time and frees lawyers to spend more time on substantive work, and it supports due diligence and research by providing additional insights and analytical shortcuts.
There are several areas in which law practitioners already use artificial intelligence technology. During the pandemic, technology paved the way for multifunctional tools that made life simpler, faster and better; it is a tool we can no longer ignore, and it has become part of everyday life.
Artificial intelligence is expected to have very good scope, since it is useful in many areas of legal practice:
• Due diligence
Due diligence requires many hours, since litigators need to review large numbers of documents. It covers the examination of contracts, legal research and electronic discovery, and is extremely difficult to organise in a short period of time. Such tedious work can be done much more easily using artificial intelligence technology.
• Research work
Research work is extremely complicated and needs many hours of human time and attention. With artificial intelligence technology, legal researchers can finish their work efficiently, since the relevant material is supplied with a single click. This optimises legal research and allows lawyers to spend their time on legal analysis, negotiations and strategy rather than on routine tasks, since computers can complete those tasks far faster than even a highly trained human.
• Technology prediction
Artificial-intelligence software can forecast the probable result of an upcoming matter or a new case brought before the court. Machine-learning systems can aggregate large amounts of data from past cases and use it to prepare forecasts, and such forecasts can be more trustworthy than legal experts' predictions. The software also helps legal professionals discover previous law and judgments relevant to their current case (a minimal sketch of this kind of outcome prediction appears after this list).
• Automated billing
Artificial-intelligence software helps generate attorneys' invoices in line with the work they have performed. Law firms and lawyers can thus account precisely for the billable work carried out under them, which enables lawyers to spend more time on client matters.
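As a purely illustrative sketch of the outcome-prediction idea described under "Technology prediction" above, the following Python code trains a simple classifier on hypothetical, hand-made features of past cases. The feature names, the data and the choice of a scikit-learn logistic regression are assumptions for illustration, not a description of any real legal-tech product.

```python
from sklearn.linear_model import LogisticRegression

# Hypothetical encoded features of past cases:
# [claim_amount_in_lakhs, years_pending, has_documentary_evidence, prior_rulings_in_favour]
past_cases = [
    [10, 2, 1, 3],
    [50, 6, 0, 0],
    [5,  1, 1, 2],
    [80, 8, 0, 1],
    [20, 3, 1, 4],
    [60, 7, 0, 0],
]
outcomes = [1, 0, 1, 0, 1, 0]   # 1 = claim upheld, 0 = claim dismissed (illustrative labels)

model = LogisticRegression().fit(past_cases, outcomes)

# Forecast a new matter and report a probability, not a verdict.
new_case = [[30, 4, 1, 2]]
probability_upheld = model.predict_proba(new_case)[0][1]
print(f"Estimated probability the claim is upheld: {probability_upheld:.2f}")
```

A real system would use far richer features extracted from judgments and pleadings, and its output should be treated as a probability that informs strategy, not as a verdict.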
Outcome-based fee arrangements, in which clients pay once they reach their goals, are client-friendly, and the professional connection between clients and law firms is reinforced by such terms.
Revenue focus to higher profit
Law firms currently focus on increasing revenue, but with competition between firms continuously growing and demand for legal services stagnating, revenue growth is becoming very challenging. Law firms of the future will therefore focus on greater profits and margins rather than on revenue.
Making Technology the basis of growth
In recent years we have seen the launch of important new IT-based solutions that enhance the efficiency and client-friendliness of the legal sector. Various legal-tech companies have been founded to improve the life of a lawyer or a firm, offering automation solutions ranging from e-discovery to contract drafting and trademark search. Legal solutions based on artificial intelligence help law firms become more efficient, potentially lower costs and earn more profit. In addition to these technologies, the law firm of the future will work in synergy with other businesses to provide AI-based solutions that may further improve the legal sector.
High brand value focus
In tomorrow's law firm, brand presence will become a major focus. Sloppy or irresponsible counsel from just a few people can quickly harm a firm's image, so a law firm that values its brand must rely on AI-based legal solutions and platforms with technologically knowledgeable lawyers. Law firms must also arrange more conferences and take part in cross-border workshops and seminars.
Artificial intelligence’s contribution to human productivity: Boon or Bane
Lawyers and law firms are wrong to assume that artificial intelligence or machine learning is a danger to their livelihoods or that Artificial Intelligence will replace lawyers. Evidence suggests that artificial intelligence will only let lawyers and law firms do more with less and be much more productive, as it has done in other sectors and verticals such as e-commerce, healthcare and accountancy. Artificial intelligence will arguably start with what is traditionally known as the "bar" and eventually reach the "bench", where judges may use NLP-based summarisation to collect the substance of both sides' arguments and rapidly determine whether a claim has merit under the applicable Acts and statutes and the current law on the subject of the dispute.
Based on the preceding arguments, we see no reason to expect Artificial Intelligence to take over the employment of professionals. Indeed, AI will enhance professionals' productivity, effectiveness, accuracy and the quality of their outcomes.
Impact on the Environment and the Planet
Artificial Intelligence (AI) has the potential to have a significant impact on the
environment, both positive and negative. The development and implementation of AI have
revolutionized many aspects of our lives, including the way we interact with the environment.
With its ability to analyze vast amounts of data, learn from patterns, and make decisions in
real-time, AI can be used to improve energy efficiency, reduce waste, and enhance sustainable
practices. However, the negative environmental impact of AI is also a cause for concern.
The positive environmental impact of AI can be seen in several areas. One of the most
significant benefits of AI is its ability to optimize energy consumption and reduce waste. For
example, machine learning algorithms can analyze data from smart grids to optimize energy
consumption in real-time, reducing the need for fossil fuel-based energy generation. This can
lead to a reduction in greenhouse gas emissions and help mitigate the effects of climate
change.
AI can also be used to develop and implement sustainable practices in industries such
as agriculture, forestry, and transportation. Precision agriculture, for example, can help farmers
reduce the use of fertilizers and pesticides, leading to healthier crops and less environmental
contamination. Similarly, AI-powered forestry management can help ensure that forests are
sustainably managed, with minimal impact on the surrounding ecosystem. In transportation, AI
can help optimize routes and reduce fuel consumption, leading to lower emissions and
improved air quality.
Another area where AI can have a positive impact on the environment is through the
development of new, sustainable materials. AI can be used to design new materials with
specific properties, such as increased strength or reduced weight, that can be used in
everything from construction to aerospace. These materials can be made from renewable
resources, reducing our reliance on fossil fuels and minimizing the environmental impact of
manufacturing.
In addition, AI can also be used to monitor and predict environmental changes, helping
us to better understand and address environmental issues. For example, AI can be used to
monitor and predict weather patterns, allowing us to better prepare for extreme weather events
and reduce their impact on the environment and society. AI can also be used to monitor and
analyze environmental data, such as air and water quality, to identify areas of concern and
develop targeted solutions.
Despite the many positive impacts of AI on the environment, there are also concerns
about the potential negative environmental impact of AI. One of the most significant concerns
is the amount of energy required to train and operate AI algorithms. Training an AI model can
require significant amounts of computational power, which in turn requires a large amount of
energy. This energy is often generated using fossil fuels, leading to an increase in greenhouse
gas emissions.
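The scale of this energy use can be illustrated with simple back-of-the-envelope arithmetic: multiply the hardware's power draw by the training time and a data-centre overhead factor, then by the carbon intensity of the electricity grid. All numbers in the sketch below are illustrative assumptions, not measurements of any particular model.

```python
# Back-of-the-envelope estimate of training energy and emissions.
# Every input here is an illustrative assumption.
num_gpus = 256             # accelerators used for training
power_per_gpu_kw = 0.3     # average draw per accelerator, in kilowatts
training_hours = 24 * 14   # two weeks of continuous training
pue = 1.5                  # data-centre overhead (power usage effectiveness)
grid_kg_co2_per_kwh = 0.4  # carbon intensity of the electricity grid

energy_kwh = num_gpus * power_per_gpu_kw * training_hours * pue
emissions_tonnes = energy_kwh * grid_kg_co2_per_kwh / 1000

print(f"Energy: {energy_kwh:,.0f} kWh")            # about 38,700 kWh
print(f"Emissions: {emissions_tonnes:.1f} t CO2")  # about 15.5 tonnes
```

Under these assumed numbers, a single training run uses roughly 38,700 kWh and emits around 15 tonnes of CO2; real figures vary enormously with model size, hardware efficiency and the energy mix of the grid.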
Finally, there are concerns about the ethical implications of using AI to manage the
environment. AI algorithms are only as good as the data they are trained on, and biases in this
data can lead to biased decision-making. For example, if an AI algorithm is trained on data that
prioritizes economic growth over environmental protection, it may make decisions that
prioritize short-term economic gain over long-term environmental sustainability.
Here are some of the obvious and not-so-obvious ways in which robotics can affect the environment. One of the most direct is electricity consumption.
Back in 2017, it was found that industrial and manufacturing robots use over 21,000 kWh of electricity annually on average. Additionally, the use of robotics to replace human-powered tasks, boost workplace productivity and facilitate human-robot collaboration are factors that increase electricity usage over time.
Examples of automation replacing human workers include robotic vacuum cleaners, floor sweepers, delivery vehicles and forklifts, whereas examples of human-machine collaboration include personal robot assistants with emotional intelligence and surgical robots for invasive surgeries in hospitals. While some of these robotic applications may be frugal in the way they use electricity, using them relentlessly on a daily basis increases overall power usage.
Resolving such issues requires countries to invest in the development of green robotics-
based technologies for automation to reduce resource consumption. Implementing green
robotics can be a challenge for businesses. Overcoming inequality is harder still, with the need
for world bodies and governments to work in unison over several years to fix the widespread
issue. The resolution of such problems promises to be the answer to many of the negative
environmental impacts of AI.
Impact on Trust
With artificial intelligence (AI) tools increasing in sophistication and usefulness, people
and industries are eager to deploy them to increase efficiency, save money, and inform human
decision making. But are these tools ready for the real world? As any comic book fan knows:
with great power comes great responsibility. The proliferation of AI raises questions about
trust, bias, privacy, and safety, and there are few settled, simple answers.
As AI has been further incorporated into everyday life, more scholars, industries, and
ordinary users are examining its effects on society. The academic field of AI ethics has grown
over the past five years and involves engineers, social scientists, philosophers, and others. The
Caltech Science Exchange spoke with AI researchers at Caltech about what it might take to
trust AI.
To trust a technology, you need evidence that it works in all kinds of conditions, and
that it is accurate. "We live in a society that functions based on a high degree of trust. We have
a lot of systems that require trustworthiness, and most of them we don't even think about day
to day," says Caltech professor Yisong Yue. "We already have ways of ensuring
trustworthiness in food products and medicine, for example. I don't think AI is so unique that
you have to reinvent everything. AI is new and fresh and different, but there are a lot of
common best practices that we can start from."
Today, many products come with safety guarantees, from children's car seats to
batteries. But how are such guarantees established? In the case of AI, engineers can use
mathematical proofs to provide assurance. For example, the AI that a drone uses to direct its
landing could be mathematically proven to result in a stable landing.
This kind of guarantee is hard to provide for something like a self-driving car because
roads are full of people and obstacles whose behavior may be difficult to predict. Ensuring the
AI system's responses and "decisions" are safe in any given situation is complex.
One feature of AI systems that engineers test mathematically is their robustness: how
the AI models react to noise, or imperfections, in the data they collect. "If you need to trust
these AI models, they cannot be brittle. Meaning, adding small amounts of noise should not be
able to throw off the decision making," says Anima Anandkumar, Bren Professor of
Computing and Mathematical Sciences at Caltech. "A tiny amount of noise—for example,
something in an image that is imperceptible to the human eye—can throw off the decision
making of current AI systems." For example, researchers have engineered small imperfections
in an image of a stop sign that led the AI to recognize it as a speed limit sign instead. Of
course, it would be dangerous for AI in a self-driving car to make this error.
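A minimal sketch of this kind of robustness check is shown below. It assumes a hypothetical model object with a predict(image) method and an image stored as a NumPy array with pixel values in [0, 1]; the test adds small random noise and verifies that the predicted label does not change.

```python
import numpy as np

def is_robust_to_noise(model, image, noise_scale=0.01, trials=20):
    """Check that small random perturbations do not change the model's prediction.
    `model` is assumed to expose a predict(image) -> label method (hypothetical)."""
    original_label = model.predict(image)
    for _ in range(trials):
        noise = np.random.normal(0.0, noise_scale, size=image.shape)
        perturbed = np.clip(image + noise, 0.0, 1.0)   # keep pixel values valid
        if model.predict(perturbed) != original_label:
            return False   # a tiny perturbation flipped the decision
    return True
```

Random noise is a weaker test than adversarial perturbations, which are optimised specifically to flip the decision, but it illustrates the basic idea of probing how brittle a model is.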
When AI is used in social situations, such as the criminal justice or banking systems,
different types of guarantees, including fairness, are considered.
Clear Instructions
Though we may call it "smart," today's AI cannot think for itself. It will do exactly
what it is programmed to do, which makes the instructions engineers give an AI system
incredibly important. "If you don't give it a good set of instructions, the AI's learned behavior
can have unintended side effects or consequences," Yue says.
For example, say you want to train an AI system to recognize birds. You provide it
with training data, but the data set only includes images of North American birds in daytime.
What you have actually created is an AI system that recognizes images of North American
birds in daylight, rather than all birds under all lighting and weather conditions. "It is very
difficult to control what patterns the AI will pick up on," Yue says.
Instructions become even more important when AI is used to make decisions about
people's lives, such as when judges make parole decisions on the basis of an AI model that
predicts whether someone convicted of a crime is likely to commit another crime.
Instructions are also used to program values such as fairness into AI models. For
example, a model could be programmed to have the same error rate across genders. But the
people building the model have to choose a definition of fairness; a system cannot be designed
to be fair in every conceivable way because it needs to be calibrated to prioritize certain
measures of fairness over others in order to output decisions or predictions.
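As a minimal illustration of one such fairness measure, the sketch below computes a model's error rate separately for each group and reports the gap between the groups. The predictions, labels and group attribute are made-up data for illustration.

```python
def error_rate_by_group(predictions, labels, groups):
    """Compute a model's error rate separately for each group."""
    rates = {}
    for g in set(groups):
        idx = [i for i, grp in enumerate(groups) if grp == g]
        errors = sum(predictions[i] != labels[i] for i in idx)
        rates[g] = errors / len(idx)
    return rates

# Made-up predictions, true labels and a group attribute for eight people.
preds  = [1, 0, 1, 1, 0, 1, 0, 0]
labels = [1, 0, 0, 1, 0, 1, 1, 1]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

rates = error_rate_by_group(preds, labels, groups)
gap = max(rates.values()) - min(rates.values())
print(rates, "gap:", gap)   # group A: 0.25, group B: 0.5, gap: 0.25
```

Equal error rates is only one of many possible fairness criteria and, as noted above, the different criteria cannot in general all be satisfied at once.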
"Scientifically, we don't know why the neural networks are working as well as they
are," says Caltech professor Yaser Abu-Mostafa. "If you look at the math, the data that the
neural network is exposed to, from which it learns, is insufficient for the level of performance
that it attains." Scientists are working to develop new mathematics to explain why neural
networks are so powerful.
Uncertainty Measures
Another active area of research is designing AI systems that are aware of and can give
users accurate measures of certainty in results. Just like humans, AI systems can make
mistakes. For example, a self-driving car might mistake a white tractor-trailer truck crossing a
highway for the sky. But to be trustworthy, AI needs to be able to recognize those mistakes
before it is too late. Ideally, AI would be able to alert a human or some secondary system to
take over when it is not confident in its decision-making. This is a complicated technical task
for people designing AI.
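A minimal sketch of such a deferral mechanism is given below, assuming a hypothetical model that returns a probability for each possible label: if the top probability falls below a chosen threshold, the decision is handed to a human or a backup system.

```python
def decide_or_defer(probabilities, threshold=0.9):
    """Return the model's decision only when it is confident enough;
    otherwise defer to a human or a secondary system.
    `probabilities` maps each possible label to the model's estimated probability."""
    best_label = max(probabilities, key=probabilities.get)
    if probabilities[best_label] >= threshold:
        return best_label
    return "DEFER_TO_HUMAN"

# Illustrative outputs from a perception model:
print(decide_or_defer({"truck": 0.97, "sky": 0.03}))   # confident -> "truck"
print(decide_or_defer({"truck": 0.55, "sky": 0.45}))   # uncertain -> "DEFER_TO_HUMAN"
```

In practice the probabilities must be well calibrated, since modern models can be confidently wrong; calibration and out-of-distribution detection are active research areas.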
Adjusting to AI
When people encounter AI in everyday life, they may be tempted to adjust their
behavior according to how they understand the system to work. In other words, they could
"game the system." When AI is designed by engineers and tested in lab conditions, this issue
may not arise, and therefore the AI would not be designed to avoid it.
Take social media as an example: platforms use AI to recommend content to users, and
the AI is often trained to maximize engagement. It might learn that more provocative or
polarizing content gets more engagement. This can create an unintended feedback loop in
which people are incentivized to create ever more provocative content to maximize
engagement—especially if sales or other financial incentives are involved. In turn, the AI
system learns to focus even more on the most provocative content.
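This feedback loop can be illustrated with a toy simulation; every number and behavioural rule below is an assumption made purely for illustration. A recommender repeatedly promotes the items with the highest predicted engagement, creators imitate whatever was promoted, and the average "provocativeness" of the content pool drifts upward.

```python
import random

# Toy content pool: each item has a "provocativeness" score in [0, 1].
pool = [random.random() for _ in range(100)]

def engagement(provocativeness):
    # Assumption: more provocative content gets more engagement, plus noise.
    return provocativeness + random.gauss(0, 0.1)

for round_number in range(10):
    # The recommender promotes the top items by predicted engagement.
    promoted = sorted(pool, key=engagement, reverse=True)[:10]
    # Creators imitate promoted content, nudging it to be slightly more extreme.
    new_items = [min(1.0, p + 0.05) for p in promoted]
    # New content displaces random older content in the pool.
    for item in new_items:
        pool[random.randrange(len(pool))] = item
    print(f"round {round_number}: average provocativeness = {sum(pool)/len(pool):.2f}")
```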
Similarly, people may have an incentive to misreport data or lie to the AI system to
achieve desired results. Caltech professor of computer science and economics Eric
Mazumdar studies this behavior. "There is a lot of evidence that people are learning to game
algorithms to get what they want," he says. "Sometimes, this gaming can be beneficial, and
sometimes it can make everyone worse off. Designing algorithms that can reason about this is
a big part of my research. The goal is to find algorithms that can incentivize people to report
truthfully."
Misuse of AI
"You can think of AI or computer vision as basic technologies that can have a million
applications," says Pietro Perona, Allen E. Puckett Professor of Electrical Engineering at
Caltech. "There are tons of wonderful applications, and there are some bad ones, too. Like
with all new technologies, we will learn to harvest the benefits while avoiding the bad uses.
Think of the printing press: For the last 400 years, our civilization benefited tremendously, but
there have been bad books, too."
AI-enabled facial recognition has been used to profile certain ethnic groups and target
political dissidents. AI-enabled spying software has violated human rights, according to the
UN. Militaries have used AI to make weapons more effective and deadly.
"When you have something as powerful as that, people will always think of malicious
ways of using it," Abu-Mostafa says. "Issues with cybersecurity are rampant, and what
happens when you add AI to that effort? It's hacking on steroids. AI is ripe for misuse given
the wrong agent."
Questions about power, influence, and equity arise when considering who is creating
widespread AI technology. Because the computing power needed to run complex AI systems
(such as large-language models) is prohibitively expensive, only organizations with vast
resources can develop and run them.
Bias in Data
For a machine to "learn," it needs data to learn from, or train on. Examples of training
data are text, images, videos, numbers, and computer code. In most cases, the larger the data
set, the better the AI will perform. But no data set is perfectly objective; each comes with
baked-in biases, or assumptions and preferences. Not all biases are unjust, but the term is most
often used to indicate an unfair advantage or disadvantage for a certain group of people.
While it may seem that AI should be impartial because it is not human, AI can reveal
and amplify existing biases when it learns from a data set. Take an AI system that is trained to
identify resumes of candidates who are the most likely to succeed at a company. Because it
learns from human resources records of previous employee performance, if managers at that
company previously hired and promoted male employees at a higher rate, the AI would learn
that males are more likely to succeed, and it would select fewer female candidate resumes.
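One common way to surface this kind of bias is to compare selection rates across groups, for example with the "four-fifths rule" used in US employment-discrimination guidance. The sketch below uses made-up screening results purely for illustration.

```python
def selection_rates(selected, groups):
    """Fraction of candidates selected within each group."""
    rates = {}
    for g in set(groups):
        members = [s for s, grp in zip(selected, groups) if grp == g]
        rates[g] = sum(members) / len(members)
    return rates

# Illustrative screening results: 1 = resume advanced, 0 = rejected.
selected = [1, 1, 1, 0, 1, 0, 1, 0, 0, 0, 1, 0]
groups   = ["M", "M", "M", "M", "M", "M", "F", "F", "F", "F", "F", "F"]

rates = selection_rates(selected, groups)
ratio = min(rates.values()) / max(rates.values())
print(rates)                   # selection rate per group: M about 0.67, F about 0.33
print("impact ratio:", ratio)  # about 0.5 here; below 0.8 suggests possible adverse impact
```

A low ratio does not by itself prove discrimination, but it flags a disparity that the system's designers should investigate before deployment.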
In this way, AI can encode historical human biases, accelerate biased or flawed
decision-making, and recreate and perpetuate societal inequities. On the other hand, because
AI systems are consistent, using them could help avoid human inconsistencies and snap
judgments. For example, studies have shown that doctors diagnose pain levels differently for
certain racial and ethnic populations. AI could be a promising alternative to receive
information from patients and give diagnoses without this type of bias.
When people think about the dangers of AI, they often think of Skynet, the fictional,
sentient, humanity-destroying AI in the Terminator movies. In this imagined scenario, an AI
system grows beyond human ability to control it and develops new capabilities that were not
programmed at the outset. The term "singularity" is sometimes used to describe this situation.
Experts continue to debate when—and whether—this is likely to occur and the scope of
resources that should be directed to addressing it. University of Oxford professor Nick
Bostrom notably predicts that AI will become superintelligent and overtake humanity. Caltech
AI and social sciences researchers are less convinced.
"People will try to investigate the scenario even if the probability is small because the
downside is huge," Abu-Mostafa says. "But objectively knowing the signs that I know, I don't
see this as a threat."
"On one hand, we have these novel machine-learning tools that display some autonomy
from our own decision-making. On the other, there's hypothetical AI of the future that
develops to the point where it's an intelligent, autonomous agent," says Adam Pham, the
Howard E. and Susanne C. Jessen Postdoctoral Instructor in Philosophy at Caltech. "I think it's
really important to keep those two concepts separate, because you can be terrified of the latter
and make the mistake of reading those same fears into the existing systems and tools—which
have a different set of ethical issues to interrogate."
Others explore the idea of building AI with "break glass in case of emergency"
commands. But superintelligent AI could potentially work around these fail-safes.
While perfect trustworthiness in the view of all users is not a realistic goal, researchers
and others have identified some ways we can make AI more trustworthy. "We have to be
patient, learn from mistakes, fix things, and not overreact when something goes wrong,"
Perona says. "Educating the public about the technology and its applications is fundamental."
"The issue is taking data sets from the lab directly to real-world applications,"
Anandkumar says. "There is not enough testing in different domains."
"You basically have to audit algorithms at every step of the way to make sure that they
don't have these problems," Mazumdar says. "It starts from data collection and goes all the
way to the end, making sure that there are no feedback loops that can emerge out of your
algorithms. It's really an end-to-end endeavor."
While AI technology itself only processes and outputs information, negative outcomes
can arise from how those answers are used. Who is using the AI system—a private company?
government agency? scientist?—and how are they making decisions on the basis of those
outputs? How are "wrong" decisions judged, identified, and handled?
Quality control becomes even more elusive when companies sell their AI systems to
others who can use them for a variety of purposes.
"Whatever biases AI systems may have, they mirror biases that are in society, starting
with those built into our language," Perona says. "It's not easy to change the way people think
and interact. With AI systems, things are easier: We are developing methods to measure their
performance and biases. We can be more objective and quantitative about the biases of a
machine than the biases of our institutions. And it's much easier to fix the biases of an AI
system once you know that they are there."
To further test self-driving cars and other machinery, manufacturers can use AI to
generate unsafe scenarios that couldn't be tested in real life—and to generate scenarios
manufacturers might not think of.
Researchers from Caltech and Johns Hopkins University are using machine learning to
create tools for a more trustworthy social media ecosystem. The group aims to identify and
prevent trolling, harassment, and disinformation on platforms like Twitter and Facebook by
integrating computer science with quantitative social science.
OpenAI, the creator of the most advanced non-private, large-language model, GPT-3,
has developed a way for humans to adjust the behaviors of a language model using a small
amount of curated "values-based" data. This raises the question: who gets to decide which
values are right and wrong for an AI system to possess?
The U.S. National Institute of Standards and Technology (NIST) says it "increasingly
is focusing on measurement and evaluation of technical characteristics of trustworthy AI."
NIST periodically tests the accuracy of facial-recognition algorithms, but only when a
company developing the algorithm submits it for testing.
In the future, certifications could be developed for different uses of AI, Yue says. "We
have certification processes for things that are safety critical and can harm people. For an
airplane, there are nested layers of certification. Each engine part, bolt, and material meets
certain qualifications, and the people who build the airplane check that each meets safety
standards. We don't yet know how to certify AI systems in the same way, but it needs to
happen."
"You have to basically treat all AI like a community, a society," says Mory Gharib,
Hans W. Liepmann Professor of Aeronautics and Bioinspired Engineering at Caltech. "We
need to have protocols, like we have laws in our society, that AI cannot cross to make sure that
these systems cannot hurt us, themselves, or a third party."
"These are no longer just engineering problems. These algorithms interact with people
and make decisions that affect people's lives," Mazumdar says. "The traditional way that
people are taught AI and machine learning does not consider that when you use these
classifiers in the real world, they become part of this feedback loop. You increasingly need
social scientists and people from the humanities to help in the design of AI."
"Having diverse teams is so important because they bring different perspectives and
experiences in terms of what the impacts can be," said Anandkumar on the Radical AI podcast.
"For one person, it's impossible to visualize all possible ways that technology like AI can be
used. When teams are diverse, only then can we have creative solutions, and we'll know issues
that can arise before AI is deployed."