AI Ethics

Artificial intelligence systems raise important ethical issues regarding bias, fairness, privacy, and accountability. AI ethics provides principles to help address these issues, such as respecting individuals, doing no harm, and distributing benefits justly. Companies should establish governance processes to manage AI development according to ethical standards and promote explainability to build trust. The purpose of AI is to augment rather than replace human intelligence by supporting workers and protecting data privacy.


AI ETHICS

ETHICS

• A set of moral principles which help us discern between right and wrong
• Cognitive Biases in Human Behaviours

• Human beings come with all sorts of cognitive biases, such as recency and confirmation bias, and those inherent biases are exhibited in our behaviors and, subsequently, our data.
BIG DATA
• Data is the foundation for all machine learning algorithms.
• With the emergence of big data, companies have increased their focus on driving automation and data-driven decision-making across their organizations.
• Companies are experiencing unforeseen consequences in some of their AI applications, particularly due to poor upfront research design and biased datasets.
• As instances of unfair outcomes have come to light, new guidelines have emerged, primarily from the research and data science communities, to address concerns around the ethics of AI.
• Lack of diligence in this area can result in reputational, regulatory and legal exposure, including costly penalties. As with all technological advances, innovation tends to outpace government regulation in new, emerging fields.
• As the appropriate expertise develops within government and industry, we can expect more AI protocols for companies to follow, enabling them to avoid infringements on human rights and civil liberties.
AI ETHICS

• AI ethics is a set of guidelines that advise on the design and outcomes of artificial intelligence.
PRINCIPLES OF AI ETHICS
1. RESPECT FOR PERSONS

• This principle primarily touches on the idea of consent.
• Individuals should be aware of the potential risks and benefits of any experiment that they’re a part of, and they should be able to choose to participate or withdraw at any time before and during the experiment.
2. BENEFICENCE

• This principle takes a page out of healthcare ethics, where doctors take an oath to “do no harm.” The idea applies readily to artificial intelligence, where algorithms can amplify biases around race, gender, political leanings, et cetera, despite the intention to do good and improve a given system.
3. JUSTICE
• This principle deals with issues, such as fairness and equality. Who should reap the
benefits of experimentation and machine learning? The Belmont Report offers five
ways to distribute burdens and benefits, which are by:
Equal share
Individual need
Individual effort
Societal contribution
Merit
ETHICAL ISSUES OF AI
1. TECHNOLOGICAL SINGULARITY

• Use of strong AI and superintelligence
• Eg: Self-driving cars
• Nick Bostrom defines superintelligence as “any intellect that vastly outperforms the best human brains in practically every field, including scientific creativity, general wisdom, and social skills.” Although strong AI and superintelligence are not imminent, the idea raises interesting questions as we consider the use of autonomous systems, like self-driving cars.
• It’s unrealistic to think that a driverless car would never get into an accident, but who is responsible and liable under those circumstances? Should we still pursue fully autonomous vehicles, or should we limit this technology to semi-autonomous vehicles that promote safer driving?
2. AI IMPACT ON JOBS

• Shift of job roles due to market demand
• Eg: The shift from a fuel economy to an electric one; AI will similarly shift job demand to other areas
• AI generates new areas of market demand
• With every disruptive new technology, we see that the market demand for specific job roles shifts. For example, in the automotive industry, many manufacturers, like GM, are shifting to focus on electric vehicle production to align with green initiatives.
• The energy industry isn’t going away, but the source of energy is shifting
from a fuel economy to an electric one. Artificial intelligence should be
viewed in a similar manner, where artificial intelligence will shift the demand
of jobs to other areas.
• There will need to be individuals to help manage these systems as data grows
and changes every day. There will still need to be resources to address more
complex problems within the industries that are most likely to be affected by
job demand shifts, like customer service.

• The important aspect of artificial intelligence and its effect on the job market
will be helping individuals transition to these new areas of market demand.
3. DATA PRIVACY
PII (Personally Identifiable Information)

• Recent legislation has forced companies to rethink how they store and use personally identifiable information (PII).
• As a result, investments within security have become an increasing priority
for businesses as they seek to eliminate any vulnerabilities and opportunities
for surveillance, hacking, and cyberattacks.
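As a concrete illustration of the point above, here is a minimal sketch of redacting PII before text is stored or logged. The regex patterns, labels, and example text are illustrative assumptions; production systems need far more robust detection than two patterns.

```python
import re

# Minimal sketch: redact common PII patterns before storing or logging text.
# The patterns below (email, US-style phone) are illustrative, not exhaustive.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}

def redact_pii(text):
    # Replace each match with a bracketed label so the text stays readable.
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

redacted = redact_pii("Contact Bella at bella@example.com or 555-123-4567.")
# redacted == "Contact Bella at [EMAIL] or [PHONE]."
```

The design choice here is redaction at ingestion time, so raw PII never reaches downstream storage, reducing the attack surface for the surveillance and hacking risks noted above.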
4. BIAS AND DISCRIMINATION

• Algorithms
• Facial recognition
• From facial recognition to social media algorithms
• AI can be biased by its training data or its algorithms, for example when screening job candidates.
• AI can be biased in facial recognition.
• In their effort to automate and simplify a process, Amazon unintentionally biased potential job candidates by gender for open technical roles, and they ultimately had to scrap the project.
• As events like these surface, Harvard Business Review has raised other pointed questions around the use of AI within hiring practices, such as what data you should be able to use when evaluating a candidate for a role.
• Bias and discrimination aren’t limited to the human resources function either; they can be found in a number of applications, from facial recognition software to social media algorithms.
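One way hiring bias like the Amazon case surfaces in practice is through unequal selection rates across groups. Below is a minimal sketch of a demographic-parity check; the decisions, group labels, and threshold are illustrative assumptions, not real hiring data.

```python
from collections import defaultdict

def selection_rates(decisions):
    """Selection rate per group; decisions is a list of (group, hired) pairs."""
    totals, hires = defaultdict(int), defaultdict(int)
    for group, hired in decisions:
        totals[group] += 1
        hires[group] += int(hired)
    return {g: hires[g] / totals[g] for g in totals}

# Illustrative outcomes from a hypothetical screening model.
decisions = [
    ("A", True), ("A", True), ("A", False), ("A", True),
    ("B", True), ("B", False), ("B", False), ("B", False),
]
rates = selection_rates(decisions)  # {"A": 0.75, "B": 0.25}

# Four-fifths rule of thumb: flag any group selected at < 80% of the top rate.
flagged = [g for g, r in rates.items() if r < 0.8 * max(rates.values())]
```

The 80% cutoff follows the "four-fifths rule," a common heuristic from US employment guidelines; a failed check is a signal to audit the model and its data, not proof of discrimination by itself.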
5. ACCOUNTABILITY

• Construction and distribution of AI models

• The current incentive for companies to adhere to these guidelines is the negative repercussion of an unethical AI system on the bottom line. To fill the gap, ethical frameworks have emerged as part of a collaboration between ethicists and researchers to govern the construction and distribution of AI models within society.
ESTABLISHING AI ETHICS
1. GOVERNANCE

• Companies can leverage their existing organizational structure to help manage ethical AI.
• If a company is collecting data, it has likely already established a governance system to
facilitate data standardization and quality assurance. Internal regulatory and legal teams
are likely already partnering with governance teams to ensure compliance with
government entities, and so expanding the scope of this team to include ethical AI is a
natural extension of its current priorities.
• This team can also steward organizational awareness and incentivize stakeholders to act in accordance with company values and ethical standards.
2. EXPLAINABILITY

• Machine learning models, particularly deep learning models, are frequently called “black box models” because it’s usually unclear how a model arrives at a given decision. Explainability seeks to eliminate this ambiguity around model assembly and model outputs by generating a “human understandable explanation that expresses the rationale of the machine.”
• This type of transparency is important for building trust in AI systems and ensuring that individuals understand why a model arrives at a given decision. If we can better understand the why, we will be better equipped to avoid AI risks, such as bias and discrimination.
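One simple, model-agnostic way to approach the “human understandable explanation” described above is permutation importance: shuffle one feature at a time and measure how much the model’s error grows. The toy data and stand-in model below are illustrative assumptions, not a specific production technique from these slides.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: the target depends strongly on feature 0, weakly on feature 1.
X = rng.normal(size=(500, 2))
y = 3.0 * X[:, 0] + 0.1 * X[:, 1]

def model(X):
    # Stand-in for any trained black-box model (illustrative).
    return 3.0 * X[:, 0] + 0.1 * X[:, 1]

def permutation_importance(predict, X, y, n_repeats=10):
    """Error increase when each feature's link to the target is broken."""
    base_error = np.mean((predict(X) - y) ** 2)
    scores = []
    for j in range(X.shape[1]):
        errors = []
        for _ in range(n_repeats):
            Xp = X.copy()
            Xp[:, j] = rng.permutation(Xp[:, j])  # shuffle one feature
            errors.append(np.mean((predict(Xp) - y) ** 2))
        scores.append(np.mean(errors) - base_error)
    return scores

scores = permutation_importance(model, X, y)
# Feature 0 should score far higher, matching how y was constructed.
```

Because it only needs predictions, this kind of check can be run against any black-box model, which is what makes it useful for the governance and explainability processes above.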
AI ETHICS- PURPOSE
1. THE PURPOSE OF AI IS TO AUGMENT
HUMAN INTELLIGENCE

•Support Manpower
•Promote Skills and Training
• This means that we do not seek to replace human intelligence with AI, but to support it, since every new technological innovation involves changes to the supply and demand of particular job roles.
• Eg: IBM is committed to supporting workers in this transition by investing in global initiatives to promote skills training around this technology.
2. DATA AND INSIGHTS BELONG TO THEIR
CREATOR

• Protect privacy of clients
• Protect data

• IBM clients can rest assured that they, and they alone, own their data. IBM has not and
will not provide government access to client data for any surveillance programs, and it
remains committed to protecting the privacy of its clients.
3.AI SYSTEMS MUST BE TRANSPARENT AND
EXPLAINABLE

• Disclose the algorithms used

Technology companies need to be clear about who trains their AI systems, what data was used in that training and, most importantly, what went into their algorithms’ recommendations.
4.FAIRNESS

•Equality
•Inclusivity

This refers to the equitable treatment of individuals, or groups of individuals, by an AI system. When properly calibrated, AI can assist humans in making fairer choices, countering human biases, and promoting inclusivity.
5.ROBUSTNESS

• Minimum Security Risks

• AI-powered systems must be actively defended from adversarial attacks, minimizing security risks and enabling confidence in system outcomes.
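The “adversarial attacks” referenced above can be made concrete with a minimal sketch of a fast-gradient-sign-style perturbation against a toy linear scorer. The weights, input, and epsilon below are illustrative assumptions; real attacks and defenses target neural networks and are far more involved.

```python
import numpy as np

# Weights of a toy linear scorer score(x) = w @ x (purely illustrative).
w = np.array([1.0, -2.0, 0.5])

def score(x):
    return float(w @ x)

def fgsm_perturb(x, eps):
    # For a linear score, the gradient with respect to the input is just w.
    # Stepping against the sign of the gradient lowers the score the most
    # per unit of L-infinity-bounded perturbation (the FGSM idea).
    return x - eps * np.sign(w)

x = np.array([0.5, -0.2, 1.0])
x_adv = fgsm_perturb(x, eps=0.1)
# A tiny, bounded change to the input measurably drops the model's score.
```

Defenses such as adversarial training work by exposing the model to perturbed inputs like `x_adv` during training, which is what “actively defended” amounts to in practice.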
6.TRANSPARENCY

• To reinforce trust, users must be able to see how the service works, evaluate its functionality, and comprehend its strengths and limitations.
7.PRIVACY

• AI systems must prioritize and safeguard consumers’ privacy and data rights and provide explicit assurances to users about how their personal data will be used and protected.
