EAI Notes 1
M COLLEGE OF ENGINEERING
TRICHY
i)Software/Methodology:
ii)Embodied:
1)Voice Assistants
Digital assistants like Siri, Google Home, and Alexa use AI-backed
Voice User Interfaces (VUIs) to process and interpret voice
commands. AI frees these applications from relying solely on the
voice command itself: they can also draw on vast databases held on
cloud storage platforms, parsing thousands of lines of data per
second to complete tasks and return tailored search engine results.
3)Smart Keyboard Apps
With the help of AI, these apps can efficiently correct mistakes, help
users switch between languages, and predict the next word in a non-
intrusive manner. Using machine learning algorithms such as the
random forest, AI programmers are teaching these apps to
understand the context of the message being typed and to make
accurate predictions.
Apps like Typewise and SwiftKey now support over 300 languages
and dialects, and features such as real-time translation and
integrated search engines have recently been added.
5)Navigation and Travel
6)Gamified Therapy
AI has had a place in gaming since classics such as Pac-Man and
Pong, where it was used for intuitive universe-building. Until now,
however, innovations in gaming AI have focused on presenting more
interesting challenges to the gamer, not on gauging the gamer's
mindset.
7)Self-driving Vehicles
10)Internet of Things
The simple answer to this question is that we need ethics, in the sense
of a logic of right action, built into AI so that an agent can achieve optimal actions.
Let’s take, for example, the trucking industry, where millions of people
are employed in the United States alone. If Tesla’s Elon Musk delivers on
his promise of offering true self-driving cars (and by extension, delivery
trucks) and they become widely available within the next decade, then
what’s going to happen to those millions of people? But self-driving
trucks do seem like an ethical option when we consider their ability to
lower our accident rates.
This challenge requires human raters to use text input to chat with an
unknown entity.
4. How Do We Guard Against Possible Detrimental Mistakes?
AI can process information with a speed and capacity that far exceed
human capabilities; however, because it is designed and trained by humans, it
cannot always be trusted to be neutral and fair.
The more powerful the technology, the more it can be used for good as
well as for nefarious purposes.
Question 4-
Artificial intelligence (AI) has been rapidly advancing in recent years, and with it comes a growing
concern for the ethical implications of its use. As a result, there have been several initiatives aimed
at addressing these concerns and promoting responsible AI development. Here are some
examples:
1. The Partnership on AI: This is a collaboration between major tech companies such as Google,
Facebook, and Microsoft, as well as non-profit organizations and academic institutions. The
Partnership aims to promote responsible AI development and ensure that AI benefits society as a
whole.
2. The IEEE Global Initiative for Ethical Considerations in AI and Autonomous Systems: This
initiative is focused on developing standards and guidelines for the ethical use of AI and
autonomous systems. It brings together stakeholders from academia,
government, and industry.
3. The European Union's Ethics Guidelines for Trustworthy AI: These guidelines were developed by
the European Commission's High-Level Expert Group on AI and provide a framework for ensuring
that AI is developed and used in an ethical manner.
4. The Montreal Declaration for Responsible AI: This declaration was signed by over 1,000 AI
researchers and aims to promote the development of AI that is beneficial to society and respects
human rights.
5. The AI Now Institute: This is a research institute at New York University focused on studying the
social implications of AI. Its work includes analyzing the impact of AI on labor, civil rights, and
democracy.
These are just a few examples of the many initiatives underway to promote ethical AI
development. As AI continues to advance, it will be important to continue these efforts to ensure
that it is used in a way that benefits society as a whole.
Question 5-
Ethical issues with our relationship with Artificial entities
As we develop more advanced artificial intelligence (AI) systems, there are several ethical issues
that arise in our relationship with these entities. Some of the key ethical concerns include:
1. Bias and Discrimination: AI systems can be biased based on the data they are trained on,
leading to discrimination against certain groups. For example, facial recognition software has been
shown to be less accurate for people with darker skin tones, which can lead to discriminatory
outcomes.
2. Privacy: AI systems can collect and analyze vast amounts of personal data, raising concerns
about privacy and surveillance. This is particularly concerning when AI systems are used by
governments or corporations with a lot of power.
3. Accountability: As AI systems become more autonomous, it becomes more difficult to hold them
accountable for their actions. This raises questions about who is responsible when an AI system
makes a mistake or causes harm.
4. Transparency: It can be difficult to understand how AI systems make decisions, which can lead
to a lack of transparency and accountability. This is particularly concerning when AI systems are
used in high-stakes decision-making contexts such as healthcare or criminal justice.
5. Job Displacement: As AI systems become more advanced, they have the potential to displace
human workers, leading to job loss and economic disruption.
6. Weaponization: AI systems can be used to develop autonomous weapons, which raises concerns
about the ethics of using machines to make life-and-death decisions.
7. Human-AI Interaction: As we interact more with AI systems, there are concerns about how this
will affect our relationships with each other and with the world around us. For example, some worry
that we will become overly reliant on AI systems and lose important skills and abilities.
These are just a few of the ethical issues that arise in our relationship with artificial entities. As AI
continues to advance, it will be important to address these issues in order to ensure that we use
these technologies in an ethical and responsible manner.
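Several of the concerns above, bias in particular, can be made measurable. A first practical step is to compare an AI system's error rates across demographic groups, as in this minimal sketch (the data and group labels are hypothetical, for illustration only):

```python
def error_rate_by_group(records):
    """records: iterable of (group, predicted, actual) triples.
    Returns the fraction of wrong predictions per group."""
    totals, errors = {}, {}
    for group, predicted, actual in records:
        totals[group] = totals.get(group, 0) + 1
        if predicted != actual:
            errors[group] = errors.get(group, 0) + 1
    return {g: errors.get(g, 0) / totals[g] for g in totals}

# Hypothetical face-matching results: (group, model output, ground truth)
records = [
    ("group_a", 1, 1), ("group_a", 0, 0), ("group_a", 1, 1), ("group_a", 0, 0),
    ("group_b", 1, 0), ("group_b", 0, 0), ("group_b", 1, 0), ("group_b", 1, 1),
]
print(error_rate_by_group(records))  # {'group_a': 0.0, 'group_b': 0.5}
```

A large gap between groups, as in this toy data, is exactly the pattern reported for facial recognition systems and is a signal to retrain on more representative data.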
Unit-2
Framework and Models
One key aspect of human-rights-centered design in AI governance is ensuring that AI systems are
developed and deployed in a way that is transparent, accountable, and participatory. This means
involving a diverse range of stakeholders in the development process, including those who may be
impacted by the technology, and ensuring that there are mechanisms in place for ongoing
monitoring and evaluation.
Overall, human-rights-centered design in AI governance is essential for ensuring that AI systems are
developed and used in a way that promotes human flourishing and well-being. By prioritizing
human rights and dignity in the development of AI technologies, we can harness their potential for
positive impact while mitigating any negative consequences.
Normative models in ethics of Artificial Intelligence (AI) refer to the frameworks and principles that
guide the development and use of AI systems in a way that aligns with ethical values and norms.
There are several normative models in ethics of AI, including consequentialism, deontological
ethics, virtue ethics, and care ethics.
Consequentialism is a normative model that evaluates the morality of an action based on its
consequences. In the context of AI, consequentialism would prioritize the development and use of
AI systems that produce the greatest good for the greatest number of people. This model may be
used to justify the use of AI for tasks such as medical diagnosis, disaster response, and
environmental monitoring.
Deontological ethics is a normative model that emphasizes the importance of following moral rules
and duties. In the context of AI, deontological ethics would prioritize the development and use of AI
systems that respect human rights and dignity, regardless of their potential consequences. This
model may be used to justify the prohibition of certain uses of AI, such as autonomous weapons or
mass surveillance.
Virtue ethics is a normative model that focuses on developing virtuous character traits, such as
compassion, honesty, and courage. In the context of AI, virtue ethics would prioritize the
development and use of AI systems that promote these character traits in individuals and
communities. This model may be used to justify the use of AI for tasks such as education, mental
health support, and social services.
Care ethics is a normative model that emphasizes the importance of relationships and empathy in
moral decision-making. In the context of AI, care ethics would prioritize the development and use
of AI systems that foster human connection and social cohesion. This model may be used to justify
the use of AI for tasks such as elder care, childcare, and community building.
Overall, normative models in ethics of AI provide a framework for evaluating the moral implications
of AI development and use. By considering the ethical principles and values that underlie these
models, we can ensure that AI systems are developed and used in a way that aligns with our
ethical commitments and promotes human flourishing and well-being.
Professional norms play a crucial role in the ethics of artificial intelligence (AI) by providing
guidance and standards for the development and use of AI systems. These norms are developed
and enforced by professional organizations, such as the Association for Computing Machinery
(ACM) and the Institute of Electrical and Electronics Engineers (IEEE), which have established codes
of ethics for their members.
For example, the principle of transparency requires that AI systems be designed and operated in a
way that is understandable and explainable to users and stakeholders. This principle is important
for ensuring that AI systems are not used in a way that is discriminatory or biased, as it allows for
the identification and correction of such issues.
Similarly, the principle of accountability requires that individuals and organizations responsible for
the development and use of AI systems be held responsible for their actions. This principle is
important for ensuring that AI systems are used in a way that is ethical and legal, as it provides a
mechanism for holding individuals and organizations accountable for any harm caused by their
actions.
Overall, professional norms play a critical role in the ethics of AI by providing guidance and
standards for the development and use of AI systems. By adhering to these norms, professionals in
the field of AI can ensure that their work aligns with ethical values and norms, promoting human
flourishing and well-being.
1. Developing ethical frameworks: It is essential to develop ethical frameworks that guide the
behavior of AI systems. These frameworks should be based on ethical principles and values that
are widely recognized in society, such as fairness, transparency, and accountability.
4. Collaborating with ethicists: Collaboration with ethicists can help to ensure that AI systems are
developed and used in a way that aligns with ethical principles and values. Ethicists can provide
guidance on ethical issues related to AI development and use.
5. Conducting regular audits: Regular audits of AI systems can help to identify any ethical issues or
biases in the system's behavior. This can help to ensure that the system's actions align with ethical
principles and values.
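One concrete audit that can be run regularly is a selection-rate comparison between groups, often judged against the "four-fifths rule" from US employment practice. A sketch with hypothetical loan-approval data:

```python
def selection_rate(outcomes):
    """Fraction of favourable (1) decisions in a list of 0/1 outcomes."""
    return sum(outcomes) / len(outcomes)

def disparate_impact_ratio(group_a, group_b):
    """Ratio of the lower selection rate to the higher one.
    Values below 0.8 are commonly flagged for review (four-fifths rule)."""
    rate_a, rate_b = selection_rate(group_a), selection_rate(group_b)
    return min(rate_a, rate_b) / max(rate_a, rate_b)

# 1 = favourable decision (e.g. loan approved), 0 = unfavourable
group_a = [1, 1, 1, 0, 1]   # 80% approved
group_b = [1, 0, 0, 0, 1]   # 40% approved
ratio = disparate_impact_ratio(group_a, group_b)
print(ratio)  # 0.5 -> below the 0.8 threshold, so the audit flags the system
```

The point of automating such a check is that it can run after every retraining, so drifts in the system's behavior are caught before deployment rather than after harm is done.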
Example
One example of teaching AI machines to be moral is in the development of autonomous vehicles.
In situations where a collision is unavoidable, the AI system must make a decision about who to
prioritize for safety - the passengers in the vehicle or pedestrians on the road. By incorporating
moral reasoning and ethical principles into the decision-making process, the AI system can make a
decision that aligns with societal values and reduces harm to all parties involved. This involves
developing ethical frameworks, building in feedback mechanisms, and collaborating with ethicists
to ensure that the system's actions align with ethical principles and values.
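The decision process described above can be caricatured in a few lines: hard deontological constraints rule actions out, and a consequentialist harm estimate ranks what remains. Everything here (action names, harm scores) is an illustrative stand-in for real perception and prediction models:

```python
def choose_action(actions, forbidden, estimated_harm):
    """Pick the permitted action with the lowest estimated harm.
    `forbidden` encodes hard ethical constraints (deontological rules);
    `estimated_harm` plays the consequentialist role."""
    permitted = [a for a in actions if a not in forbidden]
    if not permitted:
        return None  # no acceptable option: fail safe / hand control back
    return min(permitted, key=lambda a: estimated_harm[a])

actions = ["brake_hard", "swerve_left", "swerve_right"]
forbidden = {"swerve_right"}  # e.g. would cross into oncoming traffic
estimated_harm = {"brake_hard": 2, "swerve_left": 5, "swerve_right": 1}
print(choose_action(actions, forbidden, estimated_harm))  # brake_hard
```

Note that "swerve_right" has the lowest harm score yet is still rejected, which is precisely how a deontological rule overrides a purely consequentialist choice.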
Unit – 3
Concepts and issues
There are several ways in which accountability can be built into computer systems. One approach
is to ensure that the system is transparent, meaning that its decision-making process is visible and
understandable to users and stakeholders. This can be achieved through techniques such as
explainable AI, which allows users to understand how a system arrived at a particular decision.
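For simple scoring models, this kind of transparency can be achieved directly by reporting each feature's contribution to the final score. A sketch for a hypothetical linear loan-scoring model (the weights and feature names are invented for illustration):

```python
def explain_decision(weights, features):
    """Break a linear score into per-feature contributions so users
    can see why the decision came out the way it did."""
    contributions = {name: weights[name] * value
                     for name, value in features.items()}
    score = sum(contributions.values())
    decision = "approve" if score >= 0 else "deny"
    return decision, contributions

weights = {"income": 0.5, "debt": -1.0, "late_payments": -2.0}
features = {"income": 10.0, "debt": 2.0, "late_payments": 1.0}
decision, why = explain_decision(weights, features)
print(decision, why)  # approve {'income': 5.0, 'debt': -2.0, 'late_payments': -2.0}
```

For complex models the same idea is approximated after the fact with attribution tools such as SHAP or LIME, which assign each input feature a share of responsibility for the prediction.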
Another approach is to incorporate feedback mechanisms into the system, allowing users to
provide input on the system's performance and decision-making. This can help to identify and
address potential biases or errors in the system.
In addition to technical solutions, accountability also requires a legal and regulatory framework.
This includes laws and regulations that govern the use of AI and other advanced technologies, as
well as mechanisms for holding individuals and organizations accountable for their actions.
Overall, accountability is critical for ensuring that computer systems are used responsibly and
ethically, and that they align with societal values and goals. By building accountability into these
systems, we can help to mitigate the risks and maximize the benefits of these technologies for
individuals and society as a whole.
Question 2- Elaborate the following concepts of Artificial intelligence ethics
Answer-
1. Accountability:
a) Importance: As AI systems become more prevalent in our daily lives, it is important to ensure
that individuals and organizations are held responsible for their actions and decisions. This includes
ensuring that AI systems are designed and used in an ethical and responsible manner, and that
individuals and organizations are accountable for any negative impacts of these systems.
b) Challenges: Holding individuals and organizations responsible for the actions of AI systems can
be challenging as it can be difficult to determine who is ultimately responsible for the system's
actions. Additionally, there may be a lack of regulations or guidelines governing the use of AI
systems.
c) Solutions: Regulations and guidelines can help ensure that AI systems are designed and used in
an ethical and responsible manner. Additionally, organizations can be encouraged to develop
internal policies and procedures for the use of AI systems, including measures for accountability
and transparency.
3. Race and gender:
a) Importance: AI systems have the potential to perpetuate or even amplify existing biases and
discrimination in society. For example, facial recognition technology has been shown to have
higher error rates for people of color and women. It is important to consider issues of race and
gender in the design and use of AI systems, and to ensure that these systems are not perpetuating
or amplifying existing biases and discrimination.
b) Challenges: Addressing issues of race and gender in AI systems can be challenging as biases
can be difficult to identify and address. Additionally, there may be a lack of diversity in the
development teams creating AI systems, which can lead to blind spots in the system's design.
c) Solutions: Increasing diversity in development teams can help address blind spots in the design
of AI systems. Additionally, testing and auditing can be used to identify and address biases in AI
systems. Finally, regulations can be put in place to ensure that AI systems are designed and used
in a way that does not perpetuate or amplify existing biases and discrimination.
4. AI as a moral right-holder:
a) Importance: The concept of AI as a moral right-holder suggests that AI systems may have moral
rights similar to those of humans. This concept is important as it raises questions about how we
should treat AI systems and whether they should be protected from harm and exploitation.
c) Solutions: Further research and debate are needed to determine whether AI systems can truly
be considered moral right-holders. Additionally, regulations and guidelines can be put in place to
ensure that AI systems are treated with respect and dignity, regardless of whether they are
considered moral right-holders.