CCS345 - ETHICS & AI - NOTES

UNIT V

AI AND ETHICS - CHALLENGES AND OPPORTUNITIES

Challenges - Opportunities - Ethical issues in artificial intelligence - Societal issues concerning the application of artificial intelligence in medicine - Decision-making role in industries - National and international strategies on AI.

CHALLENGES

Work on the ethical challenges facing AI has identified six types of concern that can be traced to the operational parameters of decision-making algorithms and AI systems.

The map reproduced and adapted in Figure 1 reflects the observation that decision-making algorithms:

(1) turn data into evidence for a given outcome (henceforth conclusion), and that this outcome is then used to

(2) trigger and motivate an action that (on its own, or when combined with other actions) may not be ethically neutral. This work is performed in ways that are complex and (semi-)autonomous, which

(3) complicates apportionment of responsibility for the effects of actions driven by algorithms.


• The proposed types of concerns can cause failures involving multiple human, organisational, and technological agents.
• This mix of human and technological actors leads to difficult questions concerning how to assign responsibility and liability for the impact of AI behaviours.
• These difficulties are captured in traceability as a final, overarching type of concern.

Unjustified actions
• Much algorithmic decision-making and data mining relies on inductive knowledge and correlations identified within a dataset.
• Correlations based on a 'sufficient' volume of data are often seen as sufficiently credible to direct action without first establishing causality or gaining deeper knowledge.
• Even if strong correlations or causal knowledge are found, this knowledge may only concern populations, while actions with significant personal impact are directed towards individuals.

Opacity
Opacity in machine learning algorithms is a product of the high dimensionality of data, complex code and changeable decision-making logic.[1] Transparency and comprehensibility are generally desired because algorithms that are poorly predictable or interpretable are difficult to control, monitor and correct.[2] Transparency is often naively treated as a panacea for ethical issues arising from new technologies.

Bias
The automation of human decision-making is often justified by an alleged lack of bias in AI and algorithms. This belief is unsustainable; AI systems unavoidably make biased decisions, because development is not a neutral, linear path. Inclusiveness and equity in both the design and usage of AI is thus key to combating implicit biases. Bias can arise from:

(1) pre-existing social values found in the "social institutions, practices and attitudes" from which the technology emerges,

(2) technical constraints, and

(3) emergent aspects of a context of use.

Discrimination
• Discrimination against individuals and groups can arise from biases in AI systems.
• Discriminatory analytics can contribute to self-fulfilling prophecies and stigmatisation in targeted groups, undermining their autonomy and participation in society.


• Fairness requires weighing the goals of equality law (e.g., formal and substantive equality) against appropriate thresholds for the distribution of outcomes across groups. In this context, embedding considerations of non-discrimination and fairness into AI systems is particularly difficult.

Autonomy
• Value-laden decisions made by algorithms can also pose a threat to autonomy. Personalisation of content by AI systems, such as recommender systems, is particularly challenging in this regard.
• Personalisation can be understood as the construction of choice architectures which are not the same across a sample.
• Different information, prices, and other content can be offered to profiling groups or audiences within a population defined by one or more attributes, for example the ability to pay, which can itself lead to discrimination.
• Personalisation reduces the diversity of information users encounter by excluding content deemed irrelevant or contradictory to the user's beliefs or desires.

Informational privacy and group privacy


• Algorithms also transform notions of privacy.
• Responses to discrimination, personalisation, and the inhibition of autonomy due to opacity often appeal to informational privacy, or the right of data subjects to "shield personal data from third parties."
• Informational privacy concerns the capacity of an individual to control information about herself, and the effort required by third parties to obtain this information.
• In a healthcare setting this could include insurers, remote care providers (e.g., chatbot and triage service providers), consumer technology companies, and others. Opaque decision-making inhibits oversight and informed decision-making concerning data sharing.
• Data subjects cannot define privacy norms to govern all types of data generically, because the value or insightfulness of data is only established through processing.

Moral responsibility and distributed responsibility


• When a technology fails, blame and sanctions must be apportioned.
• Blame can only be justifiably attributed when the actor has some degree of control and intentionality in carrying out the action.


• Traditionally, developers and software engineers have had "control of the behaviour of the machine in every detail" insofar as they can explain its overall design and function to a third party.
• This traditional conception of responsibility in software design assumes the developer can reflect on the technology's likely effects and potential for malfunctioning, and make design choices to choose the most desirable outcomes according to the functional specification.

Automation bias
• A related problem concerns the diffusion of feelings of responsibility and accountability for users of AI systems, and the related tendency to trust the outputs of systems on the basis of their perceived objectivity, accuracy, or complexity. Delegating decision-making to AI can shift responsibility away from human decision-makers.
• Similar effects can be observed in mixed networks of human and information systems, as already studied in bureaucracies, characterised by reduced feelings of personal responsibility and the execution of otherwise unjustifiable actions.
• Algorithms involving stakeholders from multiple disciplines can, for instance, lead to each party assuming others will shoulder ethical responsibility for the algorithm's actions. Machine learning adds an additional layer of complexity between designers and actions driven by the algorithm, which may justifiably weaken blame placed upon the former.

Safety and resilience


• The need to apportion responsibility is acutely felt when algorithms malfunction. Unethical algorithms can be thought of as malfunctioning software artefacts that do not operate as intended.
• Useful distinctions exist between errors of design (types) and errors of operation (tokens), and between the failure to operate as intended (dysfunction) and the presence of unintended side-effects (misfunction). Misfunctioning is distinguished from mere negative side effects by 'avoidability', or the extent to which comparable types of systems or artefacts accomplish the intended function without the effects in question.
• Machine learning in particular raises unique challenges, because achieving the intended or "correct" behaviour does not imply the absence of errors or harmful actions and feedback loops.

Ethical auditing


• How best to operationalise and set standards for testing of these ethical challenges remains an open question, particularly for machine learning.
• Merely rendering the code of an algorithm transparent is insufficient to ensure ethical behaviour. One possible path to achieve interpretability, fairness, and other ethical goals in AI systems is via auditing carried out by data processors, external regulators, or empirical researchers, using ex post audit studies, reflexive ethnographic studies in development and testing, or reporting mechanisms designed into the algorithm itself.
• For all types of AI, auditing is a necessary precondition to verify correct functioning. For systems with foreseeable human impact, auditing can create an ex post procedural record of complex automated decision-making to unpack problematic or inaccurate decisions, or to detect discrimination or similar harms.
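
To make the idea of an ex post procedural record concrete, here is a minimal sketch in Python (illustrative only: the log fields, group labels and numbers are invented, and the 0.8 threshold simply echoes the "four-fifths" rule used in US employment discrimination testing). It replays a hypothetical decision log and flags any group whose approval rate falls well below that of the best-served group:

```python
from collections import defaultdict

# Hypothetical ex post audit log: each record captures one automated
# decision (model version, protected group, outcome) for later review.
audit_log = [
    {"model": "loan-v2", "group": "A", "approved": True},
    {"model": "loan-v2", "group": "A", "approved": True},
    {"model": "loan-v2", "group": "B", "approved": False},
    {"model": "loan-v2", "group": "B", "approved": True},
    {"model": "loan-v2", "group": "B", "approved": False},
]

def approval_rates(log):
    """Compute the per-group approval rate from the decision record."""
    totals, approvals = defaultdict(int), defaultdict(int)
    for record in log:
        totals[record["group"]] += 1
        approvals[record["group"]] += record["approved"]
    return {group: approvals[group] / totals[group] for group in totals}

rates = approval_rates(audit_log)
print(rates)  # {'A': 1.0, 'B': 0.333...}

# Flag groups whose rate falls far below the best-served group's rate.
best = max(rates.values())
flagged = {g: r for g, r in rates.items() if r / best < 0.8}
print("groups needing review:", flagged)  # {'B': 0.333...}
```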


OPPORTUNITIES IN AI ETHICS

Ethical Design and Development:


Integrating ethical considerations into the design and development of AI systems presents a
significant opportunity for promoting responsible AI. By embedding ethical principles from
the outset, developers can create AI technologies that align with human values and respect
users' rights. A notable example is Google's "Ethical AI Principles", which guide the
development and deployment of AI technologies to ensure they are designed with ethical
considerations in mind, including transparency, privacy, and fairness.

Leveraging AI for Social Good:


Leveraging AI for positive social impact offers great potential. AI applications, powered by technologies like natural language processing and machine learning, in areas such as healthcare, education,


environmental protection, and disaster response can improve people's lives and address
societal challenges. For instance, IBM's "Watson for Oncology" uses AI to assist doctors in
cancer treatment decisions, enhancing accuracy and efficiency in diagnoses.

Robust Ethical Frameworks:


Developing comprehensive ethical frameworks for AI can guide policymakers, developers,
and users in making responsible choices. These frameworks provide a set of guiding
principles and standards to ensure the ethical use of AI. An example of this is the "Asilomar
AI Principles", a set of 23 principles proposed by AI researchers to ensure the safe and
beneficial development of AI technologies.

Public Engagement and Awareness:


Raising public awareness about AI ethics fosters informed discussions and ensures that
ethical considerations are at the forefront of AI adoption. Organizations like the AI Now
Institute hold symposia that bring together experts and the public to discuss AI's social impact and ethical implications, promoting an inclusive and transparent conversation about AI ethics.

Interdisciplinary Collaboration:
Engaging experts from various fields, including ethics, law, sociology, psychology, and
philosophy, can lead to more holistic and nuanced approaches to AI ethics. Collaborative
efforts among diverse stakeholders can help identify and address complex ethical challenges effectively. The Partnership on AI is an example of a multi-stakeholder organization that fosters collaboration among industry, academia, and civil society to address AI challenges responsibly.

ETHICAL ISSUES IN ARTIFICIAL INTELLIGENCE

Artificial intelligence (AI) provides many new and exciting capabilities. We see AI in our
daily lives in the form of virtual assistants, instructional programs and autonomous
operations:

• Self-driving cars? Check.
• Instantaneous translation of phrases into another language? Check.
• Writing code? Check.


Legal Considerations for AI

In the United States, AI regulation is decentralized, which can cause uncertainty about the legal implications of using artificial intelligence. While we do have some rules that regulate outcomes, there is often confusion around the actual operational usage of AI tools.

Here are some legal considerations:

• Violations of intellectual property rights
• Data privacy issues that violate the General Data Protection Regulation (GDPR)
• Data privacy issues that violate the California Consumer Privacy Act (CCPA)
• Violations of employment regulations
• Inappropriate usage of copyrighted data
• Disputes concerning contract law when generative AI is used
• Consumer confidentiality and issues with personally identifiable information (PII)
• Inaccurate usage of generative AI output

11 AI Ethical Issues

Artificial intelligence has the potential to make your business more efficient. That's a win. But increasing your output could come at a cost, regardless of any savings. Making the ethics of AI a focal point will help ensure your business remains in good standing from an operational, regulatory and reputational standpoint. Here are 11 ethical issues you should know about when it comes to AI.

Issue 1: Job Displacement

Job displacement is a concern that is frequently cited in discussions surrounding AI. There is fear that automation will replace certain aspects of jobs, or entire job roles, causing unemployment rates to spike across industries. According to CompTIA's Business Technology Adoption and Skills Trends report, 81% of U.S. workers have recently seen articles focusing on the replacement of workers with AI. The same report found that 3 out of 4 workers are very or somewhat concerned about how automated technologies will impact the workforce.

Issue 2: Privacy

Training AI models requires massive amounts of data, some of which includes PII. There is currently little insight into how this data is collected, processed and stored, which raises concerns about who can access your data and how they can use it. There are other privacy concerns surrounding the use of AI in surveillance. Law enforcement agencies use AI to monitor and track the movements of suspects. While such capabilities are highly valuable, many are worried


about the misuse of those capabilities in public spaces, infringing upon individual rights to
privacy.

Issue 3: Bias

There is another ethical concern surrounding AI bias. Although AI does not inherently come with bias, systems are trained using data from human sources and deep learning, which can lead to the propagation of biases through technology. For instance, an AI hiring tool could screen out certain demographics if the data sets used to train the algorithm contained a bias against a particular group. This could also have legal implications if it leads to discriminatory practices.
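
One way to make this concrete is to profile the historical data before training: both under-representation of a group and skewed outcome labels can be surfaced with a few lines of analysis. A minimal pandas sketch, in which every column name and number is invented for illustration:

```python
import pandas as pd

# Hypothetical historical hiring data used to train a screening model.
# Column names and figures are invented for illustration only.
df = pd.DataFrame({
    "group": ["A"] * 80 + ["B"] * 20,
    "hired": [1] * 40 + [0] * 40 + [1] * 4 + [0] * 16,
})

# 1) Representation: is one group barely present in the training set?
print(df["group"].value_counts(normalize=True))  # A: 0.8, B: 0.2

# 2) Label skew: historical hire rates per group. A model fitted to
# reproduce these labels will tend to reproduce the disparity.
print(df.groupby("group")["hired"].mean())  # A: 0.50, B: 0.20
```

A model trained to reproduce these labels would tend to reproduce the 0.50 vs 0.20 hire-rate gap, which is exactly the propagation effect described above.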

Issue 4: Security

Security remains a top priority when it comes to AI (and really any branch of computer science). Lax security can have a wide-ranging impact. For example, AI is susceptible to malicious attacks which can compromise outcomes. The Cybersecurity and Infrastructure Security Agency (CISA) references documented instances of attacks leading to misbehaviors in autonomous vehicles and the hiding of objects in security camera footage. Experts and governmental entities are urging more security measures to limit potentially negative effects.

Issue 5: Explainability

It's not enough to simply put AI tools out into the world and watch them work. It can be particularly important to understand the decision-making process of certain AI applications. In some cases, it can be difficult to understand why an AI tool came to its conclusions. This can have sizeable implications, especially in industries such as healthcare or law enforcement, where influencing factors must be considered and real human lives are at stake.
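
Explainability does not always require opening the model up. One widely used, model-agnostic technique is permutation importance: shuffle one input feature at a time and measure how much the model's accuracy drops. A minimal scikit-learn sketch on synthetic data (the feature names are invented stand-ins, not a real clinical dataset):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

# Train an opaque model on synthetic data; the feature names are
# invented stand-ins for a real operational or clinical dataset.
X, y = make_classification(n_samples=500, n_features=4, random_state=0)
feature_names = ["age", "dose", "history", "lab_value"]
model = RandomForestClassifier(random_state=0).fit(X, y)

# Permutation importance: shuffle one feature at a time and measure how
# much accuracy drops; a large drop means the model relies on that feature.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, score in sorted(zip(feature_names, result.importances_mean),
                          key=lambda pair: -pair[1]):
    print(f"{name}: {score:.3f}")
```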

Issue 6: Accountability

The increasing prevalence of AI in all industries means that we use AI tools to make decisions daily. In cases where those decisions lead to negative outcomes, it can be difficult to identify who is responsible for the results. Are companies on the hook for validating the algorithms of a tool they buy? Or do you look to the creator of the AI tool? The quest for accountability can be a deep rabbit hole, making it difficult to hold people and companies accountable.


Issue 7: Deepfakes

The usage of deepfakes creates ethical concerns. Deepfakes are now able to circumvent voice and facial recognition, which can be used to override security measures. One study even showed that a Microsoft API was tricked more than 75% of the time using easily generated deepfakes. Other ethical challenges arise when it comes to impersonation. The usage of deepfakes to sway public opinion in political races can have far-reaching implications. There is also concern that deepfakes could be used to influence the stock market, for example if a CEO were falsely shown making questionable decisions or taking questionable actions. With no oversight and easy access to the software, the abuse of deepfakes presents a significant security gap.

Issue 8: Misinformation

Misinformation has a way of creating social divides and perpetuating untrue opinions to the
detriment of organizations and others. A topic that gained scrutiny in the context of the
political upheaval seen in recent years, misinformation can affect public opinion and cause
severe reputational damage. Once misinformation becomes widely shared on social media, it
can be difficult to determine where it originated and challenging to combat. AI tools have
been used to spread misinformation, making it appear as though the information is legitimate,
when it is in fact not.

Issue 9: Exploitation of Intellectual Property

A recent lawsuit against ChatGPT involving several popular writers who claim the platform made illegal use of their copyrighted work has brought attention to the issue of AI exploitation of intellectual property. Several authors, including favorites such as Jodi Picoult and John Grisham, recently sued OpenAI for infringing on copyright by using their content to train its algorithms. The lawsuit further claims that this type of exploitation will endanger the ability of authors to make a living from writing. This kind of exploitation has owners of intellectual property concerned about how AI will continue to impact their livelihoods.

Issue 10: Loss of Social Connection

While AI has the potential to provide hyper-personalized experiences by customizing search engine content based on your preferences and enhancing customer service through the use of chatbots, there is concern that this could lead to a lack of social connection, empathy for others and general well-being. If all you see on social media are opinions that reinforce your own, you're unlikely to develop a mindset that allows you to empathize with others and engage in actions for social good.


Issue 11: Balancing Ethics With Competition

New technologies present companies, tech giants and startups alike, with a particular challenge because there is a constant race to innovate. Often, success is determined by a company's ability to be the first to release a particular technology or application. When it comes to AI systems, companies often aren't taking the time to ensure their technology is ethically designed or that it contains stringent security measures.

Societal Issues Concerning the Application of Artificial Intelligence in Medicine

Medicine is becoming an increasingly data-centred discipline and, beyond classical statistical approaches, artificial intelligence (AI) and, in particular, machine learning (ML) are attracting much interest for the analysis of medical data.

The application of artificial intelligence (AI) in medicine holds great promise for improving
healthcare outcomes, but it also raises several societal issues that need careful consideration.
Here are some key societal issues concerning the application of AI in medicine:

1. Equity and Access

Healthcare Disparities: There is a risk that AI applications may inadvertently exacerbate existing healthcare disparities if not implemented carefully. For example, if AI algorithms are trained primarily on data from certain demographics, they may not perform as well for others.


Access to Technology: Not all healthcare facilities or regions have equal access to AI
technologies. This could lead to unequal access to the benefits of AI-driven healthcare,
creating a "digital divide."

2. Data Privacy and Security

Patient Data Protection: AI systems rely heavily on patient data, raising concerns about
privacy. How this data is collected, stored, and used must align with strict regulations like
GDPR and HIPAA.

Data Bias: Biases in healthcare data, such as historical disparities in treatment, can be
inadvertently perpetuated by AI systems, leading to unequal treatment.

3. Transparency and Accountability

Black Box Problem: Many AI algorithms are complex "black boxes" where it's challenging to
understand how they arrive at decisions. This lack of transparency raises questions about
accountability and the ability to challenge or appeal decisions made by AI.

Responsibility for Errors: When errors occur in AI-driven diagnosis or treatment recommendations, it's crucial to define who is responsible: the developer, the healthcare provider, or the AI system itself.

4. Job Displacement and Training

Impact on Healthcare Jobs: As AI automates certain tasks, there's concern about the potential
displacement of healthcare workers. This includes administrative roles as well as some
clinical tasks.

Training and Education: Healthcare professionals need training to effectively use AI tools.
There's a need to ensure that healthcare workers are equipped with the skills to work
alongside AI systems.

5. Medical Liability and Malpractice

Legal Frameworks: Existing medical liability frameworks may not be well-suited for cases
involving AI errors. New legal frameworks may be needed to determine liability when AI is
involved.

Ensuring Safe Use: Ensuring that AI systems are rigorously tested and validated is essential
to minimize the risk of errors leading to malpractice claims.

6. Bias and Discrimination

Algorithmic Bias: AI algorithms can inherit biases present in the data used to train them,
which can lead to discriminatory outcomes. This is particularly concerning in healthcare,
where biased algorithms could perpetuate disparities in diagnosis and treatment.


7. Regulation and Standardization

Regulatory Oversight: Governments and regulatory bodies need to develop frameworks to ensure the safe and ethical use of AI in healthcare. This includes standards for data quality, algorithm transparency, and patient consent.

Interoperability: As AI systems are integrated into healthcare systems, ensuring they can
work together and share data seamlessly is crucial for maximizing their potential benefits.

8. Overreliance on Technology

Human Oversight: There's a risk of overreliance on AI systems, leading to a decrease in critical thinking or decision-making by healthcare professionals. AI should be seen as a tool to enhance, not replace, human expertise.

9. Cost and Resource Allocation

Financial Barriers: Implementing AI in healthcare can be costly, which could create disparities in access based on the financial resources of healthcare organizations.

Resource Allocation: Determining where to allocate resources for AI implementation (whether in research, development, or patient care) raises complex ethical questions.

10. Changing Doctor-Patient Relationships

Impact on Communication: The introduction of AI into healthcare settings may change the
dynamics of doctor-patient relationships. Patients may feel alienated if they perceive AI as
replacing human care and empathy.

Trust: Building and maintaining patient trust in AI systems is crucial for their acceptance and
effective use in healthcare.

Addressing these societal issues requires collaboration among policymakers, healthcare providers, AI developers, ethicists, patients, and other stakeholders. Ethical guidelines, transparency in AI systems, ongoing education, and robust regulatory frameworks are essential to ensure that AI in medicine benefits society as a whole while minimizing potential harms.

DECISION-MAKING ROLE IN INDUSTRIES

Advancements in artificial intelligence (AI) can help with the decision-making process by evaluating data and variables in complex situations. This enables companies and organizations to make faster, more well-informed decisions than when humans tackle the problems without assistance.

The purpose of AI in decision making is not complete automation. Rather, the goal is
to help humans make quicker and better decisions through streamlined processes and
effective use of data.


Below are some of the most common challenges and opportunities individuals and companies face when they incorporate AI into their business decision making and problem-solving.

Importance of AI in decision making


AI can play a significant role in data-driven decision making, providing benefits such
as:

• Enhanced accuracy. AI can use advanced algorithms, data science and analysis to provide accurate and objective insights repeatedly, reducing the likelihood of human error and bias.
• Faster decision making. AI can process vast amounts of data at incredible speeds, enabling quick analysis and generating insights in real time. This ultimately leads to faster and more efficient decision-making processes, especially when you're able to incorporate automation into many components of the process.
• Improved efficiency. AI automates time-consuming and repetitive tasks in decision-making processes, freeing up valuable human resources to focus on more complex and strategic aspects.
• Better risk assessment and mitigation. AI can assess and analyze various risk factors, helping decision makers identify potential risks and devise effective mitigation strategies.
• Data-driven insights. AI leverages large volumes of data to uncover patterns, trends, and correlations that may go unnoticed by humans. Understanding data can be a complicated endeavor, but incorporating AI into your analysis can simplify the process.

How AI Is Used in Decision-Making Processes in Industries


• Predictive analytics

AI uses predictive analytics to analyze historical data, identify patterns, and make accurate predictions.

As big data systems continue to grow, companies will have larger sets of data to work
from, which should increase the accuracy of predictive analytics.

Predictive analytics enables decision makers to anticipate future outcomes and make
proactive decisions in various domains, such as sales forecasting and demand planning.

Several types of predictive analytics exist. In addition to using predictive analytics to imagine what the future could look like, the same technology can be helpful when trying to understand what happened in the past and what events led to a certain result.

Example: Predictive analytics is applied in the management of equipment maintenance. Historical breakdown analysis is combined with real-time process metrics and operational schedules to determine the most cost-effective times to shut equipment down for necessary maintenance.
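
A minimal sketch of that workflow, assuming synthetic sensor readings in place of real breakdown history (every feature and value below is invented): train a classifier on past readings labelled with subsequent failures, then score current readings to decide which machines to service first.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-in for breakdown history: each row is one machine's
# sensor reading (e.g. temperature, vibration, load); the label marks
# whether the machine failed soon afterwards. All values are invented.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 3))
y = (0.8 * X[:, 0] + 1.2 * X[:, 1]
     + rng.normal(scale=0.5, size=1000) > 1).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier().fit(X_train, y_train)
print("held-out accuracy:", model.score(X_test, y_test))

# Score current readings: high failure probabilities suggest scheduling
# maintenance in the next planned window instead of waiting for failure.
current_readings = rng.normal(size=(5, 3))
print("failure risk:", model.predict_proba(current_readings)[:, 1].round(2))
```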


• Risk assessment and mitigation

Risk assessment takes time and careful planning to ensure a company is aware of and protected against potential threats. Effective risk management relies on the proper analysis of data; situations can become problematic if the data that's used is incomplete or inaccurate.

AI algorithms can assess and analyze complex risk factors, such as credit risk or cybersecurity threats, since an AI-powered tool can quickly analyze large sets of data and detect anomalies. This data can support decision makers in evaluating risks, identifying vulnerabilities, and devising effective mitigation strategies, minimizing potential negative impacts. Risk managers and auditors can use AI tools to ensure they are using a larger range of available data, and not just the evidence they have detected on their own.

Example:

• Banks can use AI risk assessment and mitigation for fraud prevention.
• Health care systems may apply this approach for patient-specific disease prevention or community epidemic prevention.

• Natural language processing (NLP)

Natural language processing (NLP) refers to a computer's ability to automatically analyze and process language in a conversational manner. Conversational chatbots such as ChatGPT use NLP to analyze human prompts and questions to produce a coherent response. NLP techniques enable AI systems to analyze human language in ways that facilitate decision-making processes that involve text data, such as sentiment analysis, contract review, or customer feedback analysis.

Here are a few of the main ways NLP can help with decision making:

• Sentiment analysis. NLP can provide insight into the sentiment (or emotional tone) of textual documents and data in addition to analyzing the actual information presented.
• Text classification. NLP can sort text into predefined labels or classes. This can help you organize large amounts of data into preset categories, making the information easier to understand and utilize.
• Information extraction. By extracting relevant information, you can better identify trends and patterns during the decision-making process.


• Summarization. NLP can help you condense long documents into summaries so that you can have the relevant information without going through all the material yourself.
• Question answering. You can use NLP systems to ask questions about various documents and datasets to find answers quickly.

Example:

Marketing organizations are already using this approach for managing programs across
channels to optimize revenue. Individuals can use these generative AI tools for wide-
ranging decision-making in activities such as planning trips, determining who to vote
for, or simply creating menus from available ingredients.
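
As an illustration of the text-classification use listed above, here is a minimal scikit-learn sketch that sorts customer feedback into predefined labels (the snippets and labels are invented, and a real system would need far more training data):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny labelled corpus of customer feedback, invented for illustration.
texts = [
    "the delivery was late and the package was damaged",
    "support never answered my emails",
    "great product, arrived early and works perfectly",
    "friendly staff and a smooth checkout experience",
]
labels = ["complaint", "complaint", "praise", "praise"]

# TF-IDF features plus logistic regression: a simple baseline that
# sorts free text into the predefined classes.
clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(texts, labels)

new_feedback = ["support never answered", "great product, works perfectly"]
print(clf.predict(new_feedback))  # expected: ['complaint', 'praise']
```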

• Decision support systems

AI-powered decision support systems assist decision makers by providing relevant information, data analysis, and insights in real time, empowering them to make well-informed decisions across various domains, including healthcare, logistics, or supply chain management.

These systems use machine learning models and operational data to develop insights
and access real-time information. Since this involves nonstop data processing, systems
must be equipped to quickly analyze and process the data consistently.

However, as mentioned above, critical thinking is necessary to ensure that the data
being used is accurate and trustworthy. Make sure you feel confident about where the
system is pulling the data from and how it is using all available information for the
validation of conclusions.

• Recommender systems

AI-based recommender systems analyze user preferences, historical behavior, and contextual data to provide personalized recommendations. These systems use big data to analyze relevant information such as past purchases, demographic information, and other factors that help companies learn about customers' preferences.

This approach is helpful because it reveals insights companies may not have been able to identify on their own. The findings can equip decision makers in areas such as product recommendations, content suggestions, or personalized marketing campaigns to deliver effective campaigns and advertisements tailored to the user's specific taste.

Example:


Netflix currently includes a recommender system as a part of its algorithm. The platform uses your past viewing history to predict what might interest you in the future, based on the history of similar consumers. The purpose of this system is to eliminate the time and frustration that may arise when you're deciding what to watch next.
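
A minimal sketch of the underlying idea, user-based collaborative filtering on a toy rating matrix (all numbers are invented; this is not Netflix's actual algorithm):

```python
import numpy as np

# Hypothetical user-item rating matrix (rows: users, columns: titles);
# 0 means "not yet watched". All numbers are invented for illustration.
ratings = np.array([
    [5, 4, 0, 1],
    [4, 5, 1, 0],
    [1, 0, 5, 4],
], dtype=float)

def cosine(u, v):
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-9)

def recommend(user, ratings):
    """Score unseen items by the ratings of similar users."""
    sims = np.array([cosine(ratings[user], other) for other in ratings])
    sims[user] = 0.0                      # ignore self-similarity
    scores = sims @ ratings               # similarity-weighted ratings
    scores[ratings[user] > 0] = -np.inf   # only recommend unseen items
    return int(np.argmax(scores))

print("recommend item:", recommend(0, ratings))  # item 2 for user 0
```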

• Optimization and resource allocation

AI optimization algorithms enable decision makers to allocate resources efficiently, optimize processes, and solve complex optimization problems. This can help in areas such as workforce scheduling, supply chain optimization, or route planning.

Using AI, teams can better allocate their resources by quickly analyzing availability,
utilization, and performance. This data will enable you to identify potential bottlenecks
and ensure that all team members are working on the most important tasks.

Many supply chain managers are using AI to improve their route optimization. They can
automatically create the most efficient routes for their drivers by inputting a list of
stops. The system will consider factors such as traffic and consumer demand to
determine what routes will be the most efficient and cost-effective.
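
Many such allocation problems reduce to the classic assignment problem, which can be solved exactly with the Hungarian algorithm. A minimal SciPy sketch, with an invented cost matrix standing in for real scheduling data:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Hypothetical cost matrix: cost[i][j] is the cost (hours, fuel, money)
# of assigning worker i to task j. All numbers are invented.
cost = np.array([
    [4, 1, 3],
    [2, 0, 5],
    [3, 2, 2],
])

# The Hungarian algorithm finds the assignment minimising total cost.
workers, tasks = linear_sum_assignment(cost)
for w, t in zip(workers, tasks):
    print(f"worker {w} -> task {t} (cost {cost[w, t]})")
print("total cost:", cost[workers, tasks].sum())  # 5 for this matrix
```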

• Fraud detection and prevention

AI algorithms can analyze large volumes of data and detect anomalies and patterns
associated with fraudulent activities. The findings can assist decision makers in fraud
detection and prevention efforts, mitigating financial losses and protecting businesses
and consumers.

A current example is American Express, which has developed an AI-based system that
can analyze billions of transactions in real time to identify patterns of fraudulent
activity. This platform employs machine learning algorithms and big data analytics to
effectively detect potential fraudulent transactions.
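
A common building block for this kind of anomaly detection is an isolation forest, which flags points that are easy to separate from the bulk of the data. A minimal scikit-learn sketch on synthetic transactions (all values are invented; this is not American Express's actual system):

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Synthetic transactions (amount, hour of day): mostly routine daytime
# purchases, plus a few large late-night outliers. Values are invented.
rng = np.random.default_rng(0)
normal = np.column_stack([rng.normal(50, 15, 500), rng.normal(14, 3, 500)])
odd = np.array([[2500.0, 3.0], [1800.0, 2.0], [3000.0, 4.0]])
X = np.vstack([normal, odd])

# An isolation forest isolates points that are easy to separate from
# the rest; predict() returns -1 for suspected anomalies, 1 otherwise.
detector = IsolationForest(contamination=0.01, random_state=0).fit(X)
labels = detector.predict(X)
print("flagged transactions:")
print(X[labels == -1])  # should include the three injected outliers
```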

• Cognitive decision making

AI technologies, such as cognitive computing and machine learning, can facilitate decision-making processes by analyzing vast amounts of data, recognizing patterns, and recommending optimal solutions. This can help decision makers in complex scenarios, such as medical diagnosis or strategic planning.

Remember, this information should be used to inform the human decision-making process rather than replace it entirely. While the data produced by AI technologies can be helpful, it may sometimes have fallacies or errors. Human discernment should be

used to evaluate the findings produced by AI and check for any potential errors or
mistakes.

Applications of AI in decision making


Let's introduce a few prominent companies already using AI to help with their decision making.

• Google. Google uses a deep learning system to better understand search prompts and provide personalized results.
• IBM. IBM has optimized its decision making to solve complex problems in a fraction of the time it once required. This innovation has saved customers significant time and money.
• Microsoft. Microsoft believes AI can help individuals tackle life's biggest challenges with ease. Their philosophy is that AI can provide people with a wider range of information, but humans ultimately must make the decisions.
• Deloitte. Deloitte's team is working on creating automated processes that improve human decision making by predicting and simulating future outcomes.
• Salesforce. Salesforce incorporates AI to gain further insight into customer behavior and buying patterns. The company has improved its decision making by forecasting sales trends, which enables it to quickly respond to an ever-changing market.

NATIONAL AND INTERNATIONAL STRATEGIES ON AI.

As the technology behind AI continues to progress beyond expectations, policy initiatives are
springing up across the globe to keep pace with these developments.

The first national strategy on AI was launched by Canada in March 2017, followed soon after
by technology leaders Japan and China. In Europe, the European Commission put forward a
communication on AI, initiating the development of independent strategies by Member
States.

An American AI initiative is expected soon, alongside intense efforts in Russia to formalise their 10-point plan for AI.

These initiatives differ widely in terms of their goals, the extent of their investment, and their
commitment to developing ethical frameworks, reviewed here as of May 2019.


An overview of national AI strategies and policies

As artificial intelligence (AI) advances across economies and societies, policy makers and AI
actors around the world seek to move from principles to practice.

To harness the benefits of AI while mitigating the risks, governments are investing in AI
R&D; leveraging AI in specific industries such as transportation and healthcare; building
human capacity on AI; ensuring a fair labour market transformation; reviewing and adapting
relevant policy and regulatory frameworks and developing standards; and co-operating
internationally.

This Going Digital Toolkit note provides an overview of the various AI policy initiatives
undertaken by governments and analyses these initiatives throughout the AI policy cycle:

1) policy design;

2) policy implementation;

3) policy intelligence; and

4) approaches for international and multi-stakeholder co-operation on AI policy.

The development of national policies and strategies focusing specifically on AI is a relatively new phenomenon. To track these initiatives, the OECD (Organisation for Economic Co-operation and Development) AI Policy Observatory (OECD.AI) compiles over 620 national AI policies from over 60 countries and the European Union (EU).


These resources provide a baseline to map countries' AI policy initiatives according to the recommendations to governments contained in the OECD AI Principles.

AI policy design

Countries are at different stages of the development and implementation of national AI strategies and policies.

Some countries, such as Canada and Finland, developed their national AI strategies as
early as 2017, closely followed by Japan, France, Germany and the United Kingdom in 2018.


Other countries, such as Brazil, Egypt, Hungary, Poland and Spain, launched a
national AI strategy more recently. Several countries are currently in AI policy consultation
and development processes.

Effective implementation of national AI initiatives hinges on coordination

Countries pursue different national governance models to co-ordinate the implementation of their national AI policies across government, offering regulatory and ethical oversight. Models include:

• Assigning oversight of strategy development and implementation to an existing ministry, department or body. Among existing ministries or agencies tasked with developing or implementing an AI strategy, the following tend to drive the creation of AI strategies most often:

1) information technology and communications ministries;

2) economics or finance ministries; or

3) education, science (and technology) and innovation ministries.

• Creating a new governmental or independent AI co-ordination entity.

• Establishing AI expert advisory groups. These are generally multistakeholder groups comprising AI experts tasked with identifying and reporting on current and future opportunities, risks and challenges arising from the use of AI in society. These AI councils also provide recommendations to the government.

• Setting up oversight and advisory bodies for AI and data ethics.


World Economic Forum


The World Economic Forum (WEF) formed a Global AI Council in May 2019, co-chaired by
speech recognition developer Kai-Fu Lee, previously of Apple, Microsoft and Google, and
current President of Microsoft Bradford Smith. One of six 'Fourth Industrial Revolution'
councils, the Global AI Council will develop policy guidance and address governance gaps,
in order to develop a common understanding among countries of best practice in AI policy
(World Economic Forum, 2019a).
In October 2019, the WEF released a framework for developing a national AI strategy, to guide governments that have yet to develop, or are currently developing, a national strategy for AI. The WEF describes it as a way to create a 'minimum viable' AI strategy that includes four main stages:
1) Assess long-term strategic priorities
2) Set national goals and targets
3) Create plans for essential strategic elements
4) Develop the implementation plan


The WEF has also announced plans to develop an 'AI toolkit' to help businesses best implement AI and create their own ethics councils, to be released at 2020's Davos conference (Vanian, 2019).

Government Readiness for AI

A report commissioned by Canada's International Development Research Centre (Oxford Insights, 2019) evaluated the 'AI readiness' of governments around the globe in 2019, using a range of data including not only the presence of a national AI strategy, but also data protection laws, statistics on AI startups and technology skills.
Singapore was ranked number 1 in their estimation, with Japan as the only other Asian nation in the top 10. Sixty percent of countries in the top 10 were European, with the remainder from North America.

National Strategies on AI


Europe
The European Commission's Communication on Artificial Intelligence (European
Commission, 2018a), released in April 2018, paved the way to the first international strategy


on AI. The document outlines a coordinated approach to maximise the benefits, and address
the challenges, brought about by AI.

The EU's High-Level Expert Group on AI shortly afterwards released a further set of policy and investment guidelines for trustworthy AI (European Commission High-Level Expert Group on AI, 2019b), which include a number of important recommendations around protecting people, boosting uptake of AI in the private sector, expanding European research capacity in AI and developing ethical data management practices.

Finland was the first Member State to develop a national programme on AI (Ministry
of Economic Affairs and Employment of Finland, 2018a). The programme is based on two
reports, Finland's Age of Artificial Intelligence and Work in the Age of Artificial Intelligence
(Ministry of Economic Affairs and Employment of Finland, 2017, 2018b). Policy objectives
focus on investment for business competitiveness and public services. Although
recommendations have already been incorporated into policy, Finland's AI steering group
will run until the end of the present Government's term, with a final report expected
imminently.


Denmark's National Strategy for Artificial Intelligence (The Danish Government, 2019) was released in March 2019 and follows its 'Strategy for Digital Growth' (The Danish Government, 2018). This comprehensive framework lists objectives including establishing a responsible foundation for AI, providing high quality data and overall increasing investment in AI (particularly in the agriculture, energy, healthcare and transport sectors).

Germany's AI Strategy was adopted soon after, in November 2018 (Die Bundesregierung, 2018), and makes three major pledges: to make Germany a global leader in the development and use of AI, to safeguard the responsible development and use of AI, and to integrate AI in society in ethical, legal, cultural and institutional terms. Individual objectives include developing Centres of Excellence for research, the creation of 100 extra professorships for AI, establishing a German AI observatory, funding 50 flagship applications of AI to benefit the environment, developing guidelines for AI that are compatible with data protection laws, and establishing a 'Digital Work and Society Future Fund' (De.digital, 2018).

Sweden's approach to AI (Government Offices of Sweden, 2018) has less specific terms, but provides general guidance on education, research, innovation and infrastructure for AI. Recommendations include building a strong research base, collaboration between sectors and with other countries, developing efforts to prevent and manage risk, and developing standards to guide the ethical use of AI. A Swedish AI Council, made up of experts from industry and academia, has also been established to develop a 'Swedish model' for AI, which they say will be sustainable, beneficial to society and promote long-term economic growth (Swedish AI Council, 2019).

Singapore. 'AI.SG'


AI Singapore is a five-year, S$150 million national program launched in May 2017 to enhance Singapore's capabilities in AI. Its goals are to invest in the next wave of AI research, address major societal and economic challenges, and broaden adoption and use of AI within industry.

In June 2018, the government announced three new initiatives on AI governance and
ethics. The new Advisory Council on the Ethical Use of AI and Data will help the
Government develop standards and governance frameworks for the ethics of AI.

Saudi Arabia

King Salman issued a royal decree in September 2019 to establish an artificial intelligence (AI) center to enhance the drive toward innovation and digital transformation in Saudi Arabia. The establishment of the center aligns with the Kingdom's Vision 2030 program. The Government of Saudi Arabia is now drafting a national AI strategy that aims to build an innovative and ethical AI ecosystem in the country by 2030.


Australia does not yet have a national strategy on AI. It does, however, have a 'Digital Economy Strategy' (Australian Government, 2017) which discusses empowering Australians through 'digital skills and inclusion', listing AI as a key emerging technology. A report on 'Australia's Tech Future' further details plans for AI, including using AI to improve public services, increase administrative efficiency and improve policy development (Australian Government, 2018).

UAE

The UAE Strategy for Artificial Intelligence was announced in October 2017. The UAE became the first country in the world to create a Ministry of Artificial Intelligence and the first in the Middle East to launch an AI strategy. The strategy is the first initiative of the UAE Centennial 2071 Plan and its main objective is to enhance government performance and efficiency. The government will invest in AI technologies in nine sectors: transport, health, space, renewable energy, water, technology, education, environment, and traffic. In doing so, the government aims to diversify the economy, cut costs across the government and position the UAE as a global leader in the application of AI.


United States.

In February 2019, the United States launched the American AI Initiative, in the form
of an executive order. This “whole-of-government strategy” aims at focusing federal
government resources for investing in AI research, unleashing AI resources, setting AI
governance standards, building the AI workforce and protecting the US AI advantage.

Following the American AI Initiative, the US issued the National Artificial Intelligence Research and Development Strategic Plan: 2019 Update, calling for developing shared public datasets and environments for AI training and testing. The Initiative was also considered in the development of the US's new Federal Data Strategy and associated Action Plan, which includes an action item to "improve data resources for AI research and development".


India. 'Social Inclusion and AI Garage'

India's National Strategy for Artificial Intelligence focuses on using technology to drive social growth and inclusion, and on positioning the country as a global leader in AI. Strategically, the government also seeks to establish India as an "AI Garage," incubating AI that can be applicable to the rest of the developing world.

NITI Aayog, the government think tank that wrote the report, calls this approach
#AIforAll. The strategy, as a result, aims to (1) enhance and empower Indians with the skills
to find quality jobs; (2) invest in research and sectors that can maximize economic growth
and social impact; and (3) scale Indian-made AI solutions to the rest of the developing world.


G7 Common Vision for the Future of AI


At the 2018 meeting of the G7 in Charlevoix, Canada, the leaders of the G7 (Canada, France,
Germany, Italy, Japan, the United Kingdom and the United States) committed to 12
principles for AI, summarised below:
1. Promote human-centric AI and the commercial adoption of AI, and
continue to advance appropriate technical, ethical and technologically neutral
approaches.
2. Promote investment in R&D in AI that generates public trust in new technologies and supports economic growth.
3. Support education, training and re-skilling for the workforce.
4. Support and involve underrepresented groups, including women.
5. Facilitate multi-stakeholder dialogue on how to advance AI innovation to
increase trust and adoption.
6. Support efforts to promote trust in AI, with particular attention to
countering harmful stereotypes and fostering gender equality. Foster initiatives
that promote safety and transparency.
7. Promote the use of AI by small and medium-sized enterprises.
8. Promote active labour market policies, workforce development and training
programmes to develop the skills needed for new jobs.
9. Encourage investment in AI.
10. Encourage initiatives to improve digital security and develop codes of
conduct.
11. Ensure the development of frameworks for privacy and data protection.
12. Support an open market environment for the free flow of data, while
respecting privacy and data protection.
(G7 Canadian Presidency, 2018).
