CCS345 - ETHICS & AI - NOTES
UNIT V
CHALLENGES
Research on the ethical challenges facing AI has identified six types of concerns that can be
traced to the operational parameters of decision-making algorithms and AI systems.
The map reproduced and adapted in Figure 1 takes into account that decision-making algorithms
(1) turn data into evidence for a given outcome (henceforth, conclusion), and that this
outcome is then used to
(2) trigger and motivate an action that (on its own, or when combined with other
actions) may not be ethically neutral. This work is performed in ways that are complex
and (semi-)autonomous, which complicates the attribution of responsibility for the effects
of algorithmically driven actions.
The proposed types of concerns can cause failures involving multiple human,
organisational, and technological agents.
This mix of human and technological actors leads to difficult questions concerning
how to assign responsibility and liability for the impact of AI behaviours.
These difficulties are captured under traceability as a final, overarching type of concern.
Unjustified actions
Much algorithmic decision-making and data mining relies on inductive knowledge
and correlations identified within a dataset.
Even if strong correlations or causal knowledge are found, this knowledge may only
concern populations, while actions with significant personal impact are directed
towards individuals.
Opacity
Opacity in machine learning algorithms is a product of the high dimensionality of data,
complex code and changeable decision-making logic [2]. Transparency is often naively
treated as a panacea for ethical issues arising from new technologies.
Bias
The automation of human decision-making is often justified by an alleged lack of bias in AI
and algorithms. This belief is unsustainable: AI systems unavoidably make biased
decisions, because development is not a neutral, linear path. Inclusiveness and equity in both
the design and usage of AI are thus key to combating implicit biases. Bias can enter a system
through (1) pre-existing social values found in the “social institutions, practices and attitudes”
from which the technology emerges, (2) technical constraints of the system itself, and
(3) emergent aspects of the context in which the system is used.
Discrimination
Discrimination against individuals and groups can arise from biases in AI systems.
The goals of equality law (e.g., formal and substantive equality) and the appropriate
thresholds for the distribution of outcomes across groups remain contested. In this context,
embedding considerations of non-discrimination and fairness into AI systems is particularly
difficult.
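To make "thresholds for the distribution of outcomes across groups" concrete, the following
is a minimal Python sketch of the "four-fifths rule" sometimes used as a screening heuristic
for disparate impact; the group data and the 0.8 threshold are illustrative assumptions, not a
statement of any particular legal standard.

    # Hedged sketch: screening for disparate impact with the "four-fifths rule".
    # The outcome lists and threshold below are invented for illustration.

    def selection_rate(outcomes):
        """Fraction of favourable (1) outcomes in a group."""
        return sum(outcomes) / len(outcomes)

    def disparate_impact_ratio(protected, reference):
        """Protected group's selection rate relative to the reference group's."""
        return selection_rate(protected) / selection_rate(reference)

    group_a = [1, 0, 1, 1, 0, 1, 0, 1]  # reference group (hypothetical decisions)
    group_b = [0, 0, 1, 0, 1, 0, 0, 1]  # protected group (hypothetical decisions)

    ratio = disparate_impact_ratio(group_b, group_a)
    print(f"Disparate impact ratio: {ratio:.2f}")
    if ratio < 0.8:  # common screening threshold, not a legal verdict
        print("Potential adverse impact; further review needed")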
Autonomy
Value-laden decisions made by algorithms can also pose a threat to autonomy.
Personalisation of content by AI systems, such as recommender systems, is
particularly challenging in this regard.
Different information, prices, and other content can be offered to profiling groups or
audiences within a population defined by one or more attributes,
for example the ability to pay, which can itself lead to discrimination. Personalisation
reduces the diversity of information users encounter by excluding content deemed
irrelevant or contradictory to the user's beliefs or desires.
Informational privacy
In a healthcare setting, the parties with access to personal data could include insurers,
remote care providers (e.g., chatbot and triage service providers), consumer technology
companies, and others. Opaque decision-making inhibits oversight and informed
decision-making concerning data sharing.
Data subjects cannot define privacy norms to govern all types of data generically
because the value or insightfulness of data is only established through processing.
Moral responsibility
Blame can only be justifiably attributed when the actor has some degree of control
and intentionality in carrying out the action.
Traditionally, developers and software engineers have had “control of the behaviour
of the machine in every detail” insofar as they can explain its overall design and
function to a third party.
Automation bias
A related problem concerns the diffusion of feelings of responsibility and
accountability for users of AI systems, and the related tendency to trust the outputs of
systems on the basis of their perceived objectivity, accuracy, or
complexity. Delegating decision-making to AI can shift responsibility away from
human decision-makers.
Similar effects can be observed in mixed networks of human and information systems
as already studied in bureaucracies, characterised by reduced feelings of personal
responsibility and the execution of otherwise unjustifiable actions.
Algorithms involving stakeholders from multiple disciplines can, for instance, lead to
each party assuming others will shoulder ethical responsibility for the algorithm's
actions. Machine learning adds an additional layer of complexity between designers
and actions driven by the algorithm, which may justifiably weaken blame placed upon
the former.
Useful distinctions exist between errors of design (types) and errors of operation
(tokens), and between the failure to operate as intended (dysfunction) and the
presence of unintended side-effects (misfunction). Misfunctioning is distinguished
from mere negative side effects by 'avoidability', or the extent to which comparable
types of systems or artefacts accomplish the intended function without the effects in
question.
Ethical auditing
How best to operationalise and set standards for testing of these ethical challenges
remains an open question, particularly for machine learning.
For all types of AI, auditing is a necessary precondition to verify correct functioning.
For systems with foreseeable human impact, auditing can create an ex post procedural
record of complex automated decision-making to unpack problematic or inaccurate
decisions, or to detect discrimination or similar harms.
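As an illustration of such an ex post procedural record, here is a minimal Python sketch that
logs every automated decision to an append-only file; the model callable, field names, and
file path are illustrative assumptions rather than a prescribed auditing standard.

    # Hedged sketch: an append-only audit trail for automated decisions.
    import json
    import time
    import uuid

    def audited_decision(model, features, log_path="decision_audit.jsonl"):
        """Run a decision and append an auditable record of inputs and output."""
        decision = model(features)
        record = {
            "id": str(uuid.uuid4()),   # unique reference for later review
            "timestamp": time.time(),
            "inputs": features,
            "decision": decision,
        }
        with open(log_path, "a") as f:
            f.write(json.dumps(record) + "\n")  # the ex post procedural record
        return decision

    # A toy scoring rule standing in for a deployed model (hypothetical).
    toy_model = lambda x: "approve" if x["score"] > 0.5 else "refer_to_human"
    print(audited_decision(toy_model, {"score": 0.42}))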
OPPORTUNITIES IN AI ETHICS
AI applications in areas such as healthcare, education, environmental protection, and disaster
response can improve people's lives and address societal challenges. For instance, IBM's
"Watson for Oncology" uses AI to assist doctors in cancer treatment decisions, enhancing
accuracy and efficiency in diagnoses.
Interdisciplinary Collaboration:
Engaging experts from various fields, including ethics, law, sociology, psychology, and
philosophy, can lead to more holistic and nuanced approaches to AI ethics. Collaborative
efforts among diverse stakeholders can help identify and address
complex ethical challenges effectively. The Partnership on AI is an example of a multi-
stakeholder organization that fosters collaboration among industry, academia, and civil
society to address AI challenges responsibly.
Artificial intelligence (AI) provides many new and exciting capabilities. We see AI in our
daily lives in the form of virtual assistants, instructional programs and autonomous
operations.
In the United States, AI regulation is decentralized, which can create uncertainty about the
legal implications of using artificial intelligence. While some rules regulate the outcomes of
AI use, there is often confusion around the actual operational usage of AI tools.
11 AI Ethical Issues
Artificial intelligence has the potential to make your business more efficient. That's a win.
But increasing your output could come at a cost regardless of any savings. Making the ethics
of AI a focal point will help ensure your business remains in good standing from an
operational, regulatory and reputational standpoint. Here are 11 ethical issues you should
know about when it comes to AI.
Issue 1: Job Displacement
Job displacement is a concern that is frequently cited in discussions surrounding AI. There is
fear that automation will replace certain aspects of jobs, or entire job roles, causing
unemployment rates to spike across industries. According to CompTIA's Business Technology
Adoption and Skills Trends report, 81% of U.S. workers have recently seen articles that focus
on the replacement of workers with AI. The same report found that 3 out of 4 workers are very
or somewhat concerned about how automated technologies will impact the workforce.
Issue 2: Privacy
Training of AI models requires massive amounts of data, some of which includes PII. There
is currently little insight into how this data is being collected, processed and stored, which
raises concerns about who can access your data and how they can use it. There are other
privacy concerns surrounding the use of AI in surveillance. Law enforcement agencies use AI
to monitor and track the movements of suspects. While highly valuable, many are worried
about the misuse of those capabilities in public spaces, infringing upon individual rights to
privacy.
Issue 3: Bias
There is another ethical concern surrounding AI bias. Although AI does not inherently come
with bias, systems are trained using data from human sources and deep learning techniques,
which can lead to the propagation of human biases through the technology. For instance, an
AI hiring tool could
omit certain demographics if the data sets used to train the algorithm contained a bias against
a particular group. This could also have legal implications if it leads to discriminatory
practices.
Issue 4: Security
Security remains a top priority when it comes to AI (and really any branch of computer
science). Lax security can have a wide-ranging impact. For example, AI is susceptible to
malicious attacks which can compromise outcomes. The Cybersecurity and Infrastructure
Security Agency (CISA) references documented instances of attacks leading to misbehaviors
in autonomous vehicles and the hiding of objects in security camera footage. Experts and
governmental entities are urging for more security measures to limit potentially negative
effects.
Issue 5: Explainability
It's not enough to simply put AI tools out into the world and watch them work. It can be
particularly important to understand the decision-making process with certain AI
applications. In some cases, it can be difficult to understand why an AI tool arrived at its
conclusions. This can have sizeable implications, especially in industries such as healthcare
or law enforcement where influencing factors must be considered, and real human lives are at
stake.
Issue 6: Accountability
The increasing prevalence of AI in all industries means that we use AI tools to make
decisions daily. In cases where those decisions lead to negative outcomes, it can be difficult
to identify who is responsible for the results. Are companies on the hook for validating the
algorithms of a tool they buy? Or do you look to the creator of an AI tool? The quest for
accountability can be a deep rabbit hole, making it difficult to hold people and companies
accountable.
Issue 7: Deepfakes
The usage of deepfakes creates ethical concerns. Deepfakes are now able to circumvent voice
and facial recognition which can be used to override security measures. One study even
showed that a Microsoft API was tricked more than 75% of the time using easily generated
deepfakes. Other ethical challenges arise when it comes to impersonation. The usage of
deepfakes to sway public opinion in political races can have far-reaching implications. There
is also concern over whether deepfakes could be used to influence the stock market if a CEO
were believed to be making decisions or taking actions that were considered questionable.
With no oversight and easy access to the software, the abuse of deepfakes presents a
significant security gap.
Issue 8: Misinformation
Misinformation has a way of creating social divides and perpetuating untrue opinions to the
detriment of organizations and others. A topic that gained scrutiny in the context of the
political upheaval seen in recent years, misinformation can affect public opinion and cause
severe reputational damage. Once misinformation becomes widely shared on social media, it
can be difficult to determine where it originated and challenging to combat. AI tools have
been used to spread misinformation, making it appear as though the information is legitimate,
when it is in fact not.
Issue 9: Intellectual Property
A recent lawsuit involving several popular writers, who claim ChatGPT made illegal use of
their copyrighted work, has brought attention to the issue of AI exploitation of intellectual
property. Several authors, including favorites such as Jodi Picoult and John Grisham, recently
sued OpenAI for infringing on copyright by using their content to train its algorithms. The
lawsuit further claims that this type of exploitation will endanger the ability of authors to
make a living from writing. This kind of exploitation has owners of intellectual property
concerned about how AI will continue to impact their livelihoods.
Issue 10: The Race to Innovate
New technologies present companies, tech giants and startups alike, with a particular
challenge because there is a constant race to innovate. Often, success is determined by a
company's ability to be the first to release a particular technology or application. When it
comes to AI systems, companies often aren't taking the time to ensure their technology is
ethically designed or that it contains stringent security measures.
The application of artificial intelligence (AI) in medicine holds great promise for improving
healthcare outcomes, but it also raises several societal issues that need careful consideration.
Here are some key societal issues concerning the application of AI in medicine:
Access to Technology: Not all healthcare facilities or regions have equal access to AI
technologies. This could lead to unequal access to the benefits of AI-driven healthcare,
creating a "digital divide."
Patient Data Protection: AI systems rely heavily on patient data, raising concerns about
privacy. How this data is collected, stored, and used must align with strict regulations like
GDPR and HIPAA.
Data Bias: Biases in healthcare data, such as historical disparities in treatment, can be
inadvertently perpetuated by AI systems, leading to unequal treatment.
Black Box Problem: Many AI algorithms are complex "black boxes" where it's challenging to
understand how they arrive at decisions. This lack of transparency raises questions about
accountability and the ability to challenge or appeal decisions made by AI.
Impact on Healthcare Jobs: As AI automates certain tasks, there's concern about the potential
displacement of healthcare workers. This includes administrative roles as well as some
clinical tasks.
Training and Education: Healthcare professionals need training to effectively use AI tools.
There's a need to ensure that healthcare workers are equipped with the skills to work
alongside AI systems.
Legal Frameworks: Existing medical liability frameworks may not be well-suited for cases
involving AI errors. New legal frameworks may be needed to determine liability when AI is
involved.
Ensuring Safe Use: Ensuring that AI systems are rigorously tested and validated is essential
to minimize the risk of errors leading to malpractice claims.
Algorithmic Bias: AI algorithms can inherit biases present in the data used to train them,
which can lead to discriminatory outcomes. This is particularly concerning in healthcare,
where biased algorithms could perpetuate disparities in diagnosis and treatment.
Interoperability: As AI systems are integrated into healthcare systems, ensuring they can
work together and share data seamlessly is crucial for maximizing their potential benefits.
Overreliance on Technology: Clinicians and patients may come to depend too heavily on AI
outputs, deferring to the system instead of exercising independent judgment.
Impact on Communication: The introduction of AI into healthcare settings may change the
dynamics of doctor-patient relationships. Patients may feel alienated if they perceive AI as
replacing human care and empathy.
Trust: Building and maintaining patient trust in AI systems is crucial for their acceptance and
effective use in healthcare.
Advancements in artificial intelligence (AI) can help with the decision-making process
by evaluating data and variables in complex situations. This enables companies and
organizations to make faster, more well-informed decisions than when humans tackle
the problems without assistance.
The purpose of AI in decision making is not complete automation. Rather, the goal is
to help humans make quicker and better decisions through streamlined processes and
effective use of data.
Here are some of the most common benefits individuals and companies see when they
incorporate AI into their business decision making and problem-solving:
Enhanced accuracy. AI can use advanced algorithms and data science and
analysis to provide accurate and objective insights repeatably, reducing the
likelihood of human error and bias.
Faster decision making. AI can process vast amounts of data at incredible
speeds, enabling quick analysis and generating insights in real time. This
ultimately leads to faster and more efficient decision-making processes,
especially when you're able to incorporate automation in many components of
the process.
Improved efficiency. AI automates time-consuming and repetitive tasks in
decision-making processes, freeing up valuable human resources to focus on
more complex and strategic aspects.
Better risk assessment and mitigation. AI can assess and analyze various risk
factors, helping decision makers identify potential risks and devise effective
mitigation strategies.
Data-driven insights. AI leverages large volumes of data to uncover patterns,
trends, and correlations that may go unnoticed by humans. Understanding data
can be a complicated endeavor, but incorporating the computer science of AI
into your analysis can simplify the process.
Predictive analytics
AI uses predictive analytics to analyze historical data, identify patterns, and make
accurate predictions.
As big data systems continue to grow, companies will have larger sets of data to work
from, which should increase the accuracy of predictive analytics.
Predictive analytics enables decision makers to anticipate future outcomes and make
proactive decisions in various domains, such as sales forecasting and demand planning.
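As a minimal illustration, the following Python sketch fits a trend to hypothetical monthly
sales figures and projects it forward; it assumes scikit-learn is available, and the data are
invented.

    # Hedged sketch: predictive analytics on historical data (invented figures).
    import numpy as np
    from sklearn.linear_model import LinearRegression

    months = np.arange(1, 13).reshape(-1, 1)           # months 1..12
    sales = np.array([100, 104, 110, 115, 118, 125,    # hypothetical monthly sales
                      130, 133, 140, 146, 150, 158])

    model = LinearRegression().fit(months, sales)      # learn the historical trend
    forecast = model.predict(np.array([[13], [14]]))   # anticipate future outcomes
    print("Forecast for months 13-14:", forecast.round(1))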
Risk assessment takes time and careful planning to ensure a company is aware of and
protected against potential threats. Effective risk management relies on the proper
analysis of data; situations can become problematic if the data that's used is incomplete
or inaccurate.
AI algorithms can assess and analyze complex risk factors, such as credit risk or
cybersecurity threats. Since an AI-powered tool can quickly analyze large sets of data
and detect anomalies, it can flag emerging risks earlier than manual review. These
findings can support decision makers in evaluating risks, identifying vulnerabilities,
and devising effective mitigation strategies, minimizing potential negative impacts.
Risk managers and auditors can use AI tools to ensure they are using a larger range of
available data, and not just the evidence they have detected on their own.
Example:
Banks can use risk AI assessment and mitigation for fraud prevention.
Health care systems may apply this approach for patient-specific disease
prevention or community epidemic prevention.
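A credit-risk screen of the kind described above might look like the following Python sketch;
the features (income and debt-to-income ratio), the data, and the choice of logistic regression
via scikit-learn are all illustrative assumptions.

    # Hedged sketch: AI-assisted credit risk scoring on invented applicant data.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Hypothetical applicants: [annual income (thousands), debt-to-income ratio]
    X = np.array([[30, 0.60], [85, 0.10], [45, 0.45], [70, 0.20],
                  [25, 0.70], [90, 0.15], [50, 0.50], [65, 0.25]])
    y = np.array([1, 0, 1, 0, 1, 0, 1, 0])  # 1 = defaulted, 0 = repaid

    model = LogisticRegression().fit(X, y)

    applicant = np.array([[55, 0.40]])
    risk = model.predict_proba(applicant)[0, 1]   # estimated default probability
    print(f"Estimated default risk: {risk:.2f}")  # informs, not replaces, the decision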
Here are a few of the main ways NLP can help with decision making (a short code sketch follows the list):
Sentiment analysis. NLP can provide insight into the sentiment (or emotional
tone) of textual documents and data in addition to analyzing the actual
information presented.
Text classification. NLP can sort text into predefined labels or classes. This
can help you organize large amounts of data into preset categories, making the
information easier to understand and utilize.
Information extraction. By extracting relevant information, you can better
identify trends and patterns during the decision-making process.
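As a small illustration of sentiment-oriented text classification, the following Python sketch
trains a classifier on a tiny invented corpus; the TF-IDF plus logistic regression pipeline
(via scikit-learn) is one common choice among many.

    # Hedged sketch: sentiment/text classification on an invented corpus.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    texts = ["great product, works perfectly", "terrible support, very slow",
             "love the new interface", "awful experience, refund please",
             "excellent value for money", "disappointing and buggy"]
    labels = ["positive", "negative", "positive", "negative",
              "positive", "negative"]

    clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
    clf.fit(texts, labels)  # learn word patterns associated with each label

    print(clf.predict(["the update is great but support is slow"]))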
Example:
Marketing organizations are already using this approach for managing programs across
channels to optimize revenue. Individuals can use these generative AI tools for wide-
ranging decision-making in activities such as planning trips, determining who to vote
for, or simply creating menus from available ingredients.
These systems use machine learning models and operational data to develop insights
and access real-time information. Since this involves nonstop data processing, systems
must be equipped to quickly analyze and process the data consistently.
However, as mentioned above, critical thinking is necessary to ensure that the data
being used is accurate and trustworthy. Make sure you feel confident about where the
system is pulling the data from and how it is using all available information for the
validation of conclusions.
Recommender systems
This approach is helpful because it reveals insights companies may not have been able
to identify on their own. The findings can equip decision makers in areas such as
product recommendations, content suggestions, or personalized marketing campaigns to
deliver effective campaigns and advertisements tailored to the user's specific taste.
Example: Streaming services and e-commerce platforms use recommender systems to suggest
movies or products based on a user's viewing and purchase history.
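A minimal sketch of the collaborative-filtering idea behind such recommendations, using only
NumPy and an invented user-item rating matrix:

    # Hedged sketch: user-user collaborative filtering on invented ratings.
    import numpy as np

    # Rows = users, columns = items; 0 means "not yet rated".
    ratings = np.array([[5, 4, 0, 1],
                        [4, 5, 1, 0],
                        [1, 0, 5, 4],
                        [0, 1, 4, 5]], dtype=float)

    def cosine(u, v):
        return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

    target = 0                                            # recommend for user 0
    sims = np.array([cosine(ratings[target], ratings[u])  # similarity to each user
                     for u in range(len(ratings))])
    sims[target] = 0                                      # ignore self-similarity

    scores = sims @ ratings                               # similarity-weighted ratings
    scores[ratings[target] > 0] = -np.inf                 # skip items already rated
    print("Recommend item:", int(np.argmax(scores)))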
Resource allocation
Using AI, teams can better allocate their resources by quickly analyzing availability,
utilization, and performance. This data will enable you to identify potential bottlenecks
and ensure that all team members are working on the most important tasks.
Route optimization
Many supply chain managers are using AI to improve their route optimization. They can
automatically create the most efficient routes for their drivers by inputting a list of
stops. The system will consider factors such as traffic and consumer demand to
determine what routes will be the most efficient and cost-effective.
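As an illustration of the underlying idea, this Python sketch orders stops with a simple
nearest-neighbour heuristic over straight-line distances; real routing systems also weigh
traffic, demand, and time windows, and the coordinates here are invented.

    # Hedged sketch: route ordering with a nearest-neighbour heuristic.
    import math

    stops = {"depot": (0, 0), "A": (2, 3), "B": (5, 1), "C": (1, 6), "D": (4, 4)}

    def dist(p, q):
        return math.dist(stops[p], stops[q])

    route, remaining = ["depot"], set(stops) - {"depot"}
    while remaining:
        nxt = min(remaining, key=lambda s: dist(route[-1], s))  # closest next stop
        route.append(nxt)
        remaining.remove(nxt)

    print(" -> ".join(route))
    print(f"Total distance: {sum(dist(a, b) for a, b in zip(route, route[1:])):.1f}")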
Fraud detection
AI algorithms can analyze large volumes of data and detect anomalies and patterns
associated with fraudulent activities. The findings can assist decision makers in fraud
detection and prevention efforts, mitigating financial losses and protecting businesses
and consumers.
A current example is American Express, which has developed an AI-based system that
can analyze billions of transactions in real time to identify patterns of fraudulent
activity. This platform employs machine learning algorithms and big data analytics to
effectively detect potential fraudulent transactions.
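A minimal sketch of transaction anomaly detection in this spirit, using scikit-learn's
IsolationForest on invented data (this is not American Express's actual system):

    # Hedged sketch: flagging anomalous transactions for human review.
    import numpy as np
    from sklearn.ensemble import IsolationForest

    # Hypothetical transactions: [amount (USD), hour of day]
    X = np.array([[25, 13], [40, 10], [18, 15], [32, 11], [22, 14],
                  [35, 12], [28, 16], [4200, 3], [30, 13], [26, 12]])

    detector = IsolationForest(contamination=0.1, random_state=0).fit(X)
    flags = detector.predict(X)  # -1 marks a likely anomaly

    for row, flag in zip(X, flags):
        if flag == -1:
            print("Flag for review:", row)  # a human analyst makes the final call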
Human judgment and critical thinking should still be used to evaluate the findings produced
by AI and to check for any potential errors or mistakes.
As the technology behind AI continues to progress beyond expectations, policy initiatives are
springing up across the globe to keep pace with these developments.
As artificial intelligence (AI) advances across economies and societies, policy makers and AI
actors around the world seek to move from principles to practice.
To harness the benefits of AI while mitigating the risks, governments are investing in AI
R&D; leveraging AI in specific industries such as transportation and healthcare; building
human capacity on AI; ensuring a fair labour market transformation; reviewing and adapting
relevant policy and regulatory frameworks and developing standards; and co-operating
internationally.
This Going Digital Toolkit note provides an overview of the various AI policy initiatives
undertaken by governments and analyses these initiatives throughout the AI policy cycle:
1) policy design;
2) policy implementation;
3) policy intelligence.
These resources provide a baseline to map countries' AI policy initiatives according to the
recommendations to governments contained in the OECD AI Principles (Box 1).
AI policy design
Some countries, such as Canada and Finland, developed their national AI strategies as
early as 2017, closely followed by Japan, France, Germany and the United Kingdom in 2018.
Other countries, such as Brazil, Egypt, Hungary, Poland and Spain, launched a
national AI strategy more recently. Several countries are currently in AI policy consultation
and development processes.
The WEF has also announced plans to develop an 'AI toolkit' to help businesses best
implement AI and create their own ethics councils, which will be released at the 2020
Davos conference (Vanian, 2019).
National Strategies on AI
The first national strategy on AI was launched by Canada in March 2017, followed soon after
by technology leaders Japan and China. In Europe, the European Commission put forward a
communication on AI, initiating the development of independent strategies by Member
States. An American AI initiative is expected soon, alongside intense efforts in Russia to
formalise their 10-point plan for AI.
These initiatives differ widely in terms of their goals, the extent of their investment,
and their commitment to developing ethical frameworks, reviewed here as of May 2019.
Europe
The European Commission's Communication on Artificial Intelligence (European
Commission, 2018a), released in April 2018, paved the way for the first international strategy
on AI. The document outlines a coordinated approach to maximise the benefits, and address
the challenges, brought about by AI.
The EU's High-Level Expert Group on AI shortly afterwards released a further set of policy
and investment guidelines for trustworthy AI (European Commission High-Level Expert
Group on AI, 2019b), which include a number of important recommendations around
protecting people, boosting uptake of AI in the private sector, expanding European research
capacity in AI, and developing ethical data management practices.
Finland was the first Member State to develop a national programme on AI (Ministry
of Economic Affairs and Employment of Finland, 2018a). The programme is based on two
reports, Finland's Age of Artificial Intelligence and Work in the Age of Artificial Intelligence
(Ministry of Economic Affairs and Employment of Finland, 2017, 2018b). Policy objectives
focus on investment for business competitiveness and public services. Although
recommendations have already been incorporated into policy, Finland's AI steering group
will run until the end of the present Government's term, with a final report expected
imminently.
Singapore: 'AI.SG'
In June 2018, the government announced three new initiatives on AI governance and
ethics. The new Advisory Council on the Ethical Use of AI and Data will help the
Government develop standards and governance frameworks for the ethics of AI.
Saudi Arabia
In September 2019, King Salman issued a royal decree to establish an artificial intelligence
(AI) center to enhance the drive toward innovation and digital transformation in Saudi Arabia.
The establishment of the center aligns with the Kingdom's Vision 2030 program. The
Government of Saudi Arabia is now drafting a national AI strategy that aims to build an
innovative and ethical AI ecosystem in the country by 2030.
Australia
Australia does not yet have a national strategy on AI. It does, however, have a 'Digital
Economy Strategy' (Australian Government, 2017), which discusses empowering Australians
through 'digital skills and inclusion', listing AI as a key emerging technology. A report on
'Australia's Tech Future' further details plans for AI, including using AI to improve public
services, increase administrative efficiency and improve policy development (Australian
Government, 2018).
UAE
The UAE Strategy for Artificial Intelligence was announced in October 2017. The
UAE became the first country in the world to create a Ministry of Artificial Intelligence and
the first in the Middle East to launch an AI strategy. The strategy is the first initiative of the UAE
Centennial 2071 Plan and its main objective is to enhance government performance and
efficiency. The government will invest in AI technologies in nine sectors: transport, health,
space, renewable energy, water, technology, education, environment, and traffic. In doing so,
the government aims to diversify the economy, cut costs across the government and position
the UAE as a global leader in the application of AI.
United States
In February 2019, the United States launched the American AI Initiative, in the form
of an executive order. This “whole-of-government strategy” aims at focusing federal
government resources for investing in AI research, unleashing AI resources, setting AI
governance standards, building the AI workforce and protecting the US AI advantage.
India
NITI Aayog, the government think tank that wrote India's national AI strategy report, calls
this approach #AIforAll. The strategy, as a result, aims to (1) enhance and empower Indians with the skills
to find quality jobs; (2) invest in research and sectors that can maximize economic growth
and social impact; and (3) scale Indian-made AI solutions to the rest of the developing world.