
Introduction to Responsible AI

AWS Educate

Introduction

Over the last several years, artificial intelligence, or AI, has rapidly expanded its capabilities in
the world of IT. Today, that expansion has entered the domain of generative artificial
intelligence, or generative AI.

Generative AI can help you innovate faster and reduce the number of hours needed for
development. This provides you with more time to grow your business.

However, it is important to understand that while taking advantage of these benefits, you
should also incorporate responsible AI standards into all of your AI systems.

Course objectives

By the end of this course, you will be able to do the following:

• Define generative artificial intelligence (generative AI) and how it differs from
traditional AI.
• Describe responsible AI.
• Discuss the core dimensions of responsible AI.
• Identify AWS services and tools for responsible AI.

What is generative AI?

Generative AI is a type of artificial intelligence that can create new content and ideas,
including conversations, stories, images, videos, and music. Generative AI is powered by
machine learning foundation models, or FMs. These models are capable of producing content
so you don’t have to. The content that generative AI creates can be edited so that you can
make the necessary modifications to meet your needs.

How generative AI differs from traditional AI

Generative AI is a subset of machine learning, or ML. To help you understand the difference
between traditional ML and generative AI, this course will review some key differences.

Traditional ML models perform tasks based on data that you provide. These models can make
predictions such as ranking, sentiment analysis, and image classification. However, each
model can perform only one task. To successfully perform a task, the model needs to be
carefully trained on the data. As the model trains, it analyzes the data and looks for patterns.
The model then makes a prediction based on these patterns.
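To make the single-task nature of traditional ML concrete, here is a minimal sketch of a sentiment classifier trained with scikit-learn. The tiny dataset and labels are illustrative assumptions, not part of the course.

```python
# A minimal sketch of a traditional, single-task ML model: sentiment analysis
# with scikit-learn. The tiny dataset is an illustrative assumption.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = ["great product", "terrible service", "loved it", "awful experience"]
labels = [1, 0, 1, 0]  # 1 = positive, 0 = negative

# The model analyzes the training data and learns patterns...
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

# ...and can then perform only this one task: predicting sentiment.
print(model.predict(["what a great experience"]))
```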


With generative AI, the models are pre-trained on massive amounts of general domain data
beyond your own data. These models can perform multiple tasks. Based on user input, usually
in the form of text called a prompt, the model generates content. This content comes
from learning patterns and relationships that help the model predict the desired outcome.
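As a hedged illustration of prompting a generative model, the sketch below calls a foundation model through the Amazon Bedrock Converse API with boto3. The model ID is a placeholder, and configured AWS credentials and model access are assumed.

```python
# A sketch of prompting an FM through the Amazon Bedrock Converse API.
# The model ID is a placeholder; AWS credentials and model access are assumed.
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

response = bedrock.converse(
    modelId="anthropic.claude-3-haiku-20240307-v1:0",  # placeholder model ID
    messages=[{"role": "user",
               "content": [{"text": "Write a two-line poem about clouds."}]}],
)

# The model generates new content from the prompt.
print(response["output"]["message"]["content"][0]["text"])
```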

Generative AI and traditional AI examples

Review the basic differences between traditional AI and generative AI.

Traditional AI does not create new content. It makes predictions based on models that are
trained on datasets.

Examples include: recommendation engines, gaming, and voice assistants.

Generative AI actually generates new content. It generates content based on pre-trained
data in large foundation models (FMs).

Examples include: chatbots, code generation, and text and image generation.

What is responsible AI?

As you develop your AI systems, whether they are traditional or generative AI applications, it is
important to incorporate responsible AI.

Responsible AI refers to the standards of upholding responsible practices and mitigating
potential risks and negative outcomes of an AI application.

You should consider these responsible standards throughout the entire lifecycle of an AI
application. This lifecycle includes the initial design, development, deployment, monitoring,
and evaluation phases.

Identifying bias in the model’s data

As this course has mentioned, responsible AI is not exclusive to any one form of AI, and the
number one problem that developers face in AI applications is bias.

Biases are imbalances in the training data or the prediction behavior of the model across
different groups, such as age or income bracket. Biases can result from the data or algorithm
used to train your model.

You might remember that both traditional and generative AI applications are powered by
models that are trained on data. These models can make predictions or generate content
based only on the data they are trained on. If the data is biased or incomplete, then the
model will be restricted in its outcomes.

For example, if an AI model is trained primarily on data from middle-aged individuals, it
might be less accurate when making predictions involving younger and older people.
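One way to surface this kind of imbalance is to compare a model's accuracy across groups. The sketch below uses synthetic labels and predictions; the arrays are illustrative assumptions.

```python
# A sketch of per-group accuracy as a quick bias check. All arrays here are
# synthetic placeholders standing in for real evaluation data.
import numpy as np

y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0])
y_pred = np.array([1, 0, 1, 0, 1, 1, 1, 0])  # hypothetical model predictions
group = np.array(["young", "young", "middle", "middle",
                  "middle", "older", "older", "older"])

for g in np.unique(group):
    mask = group == g
    acc = (y_true[mask] == y_pred[mask]).mean()
    print(f"{g}: accuracy = {acc:.2f}")

# Large gaps between groups suggest the training data under-represents some
# populations and the model's outcomes are restricted accordingly.
```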

Challenges of generative AI

Just as generative AI has its unique set of benefits, it also has a unique set of challenges.
Some of these challenges include toxicity, hallucinations, intellectual property, and plagiarism
and cheating.

Review each topic to learn more about the challenges of generative AI.

Toxicity

Toxicity is the possibility of generating content (whether it be text, images, or other
modalities) that is offensive, disturbing, or otherwise inappropriate. This is a primary concern
with generative AI. It is hard to define and scope toxicity. The subjectivity involved in
determining what constitutes toxic content is an additional challenge, and the boundary
between restricting toxic content and censorship might be murky and context- and culture-
dependent. For example, should quotations that would be considered offensive out of
context be suppressed if they are clearly labeled as quotations? What about opinions that
might be offensive to some users but are clearly labeled as opinions? Technical challenges
include offensive content that might be worded in a very subtle or indirect fashion, without
the use of obviously inflammatory language.

Hallucinations

Hallucinations are assertions or claims that sound plausible but are verifiably incorrect.
Considering the next-word distribution sampling employed by large language models (LLMs),
it is perhaps not surprising that in more objective or factual use cases, LLMs are susceptible to
hallucinations. For example, a common phenomenon with current LLMs is creating
nonexistent scientific citations. Suppose that an LLM is prompted with the request “Tell me
about some papers by a particular author.” The model is not actually searching for legitimate
citations but is generating citations from the distribution of words associated with that
author. The result will be realistic titles and topics in the area of ML but not real articles, and
the results might include plausible coauthors but not actual ones.


Intellectual property

Protecting intellectual property was a problem with early LLMs. This was because the LLMs
had a tendency to occasionally produce text or code passages that were verbatim copies of
parts of their training data, which resulted in privacy and other concerns. Improvements in
this regard have not prevented reproductions of training content that are more ambiguous and
nuanced. Consider the following prompt for a generative image model: "Create a painting of a
skateboarding cat in the style of [name of a famous artist]." If the model is able to do so in a
convincing yet original manner because it was trained on images of the specific artist,
objections to such mimicry might arise.

Plagiarism and cheating

The creative capabilities of generative AI give rise to worries that it will be used to write
college essays, create writing samples for job applications, and conduct other forms of
cheating or illicit copying. Debates on this topic are happening at universities and many other
institutions, and attitudes vary widely. Some are in favor of explicitly forbidding any use of
generative AI in settings where content is being graded or evaluated, while others argue that
educational practices must adapt to, and even embrace, the new technology. The underlying
challenge of verifying that a given piece of content was authored by a person is likely to
present concerns in many contexts.

Core dimensions of responsible AI

The core dimensions of responsible AI include fairness, explainability, privacy and security,
robustness, governance, and transparency. No one dimension is a standalone goal for
responsible AI. In fact, you should consider each topic as a required part of a complete
implementation of responsible AI. You will find that there is considerable overlap between
many of these topics. For example, you will find that when you implement transparency in
your AI system, elements of explainability, fairness, and governance will be required. Next,
you will explore how each topic is used in responsible AI.

Review each core dimension topic to learn the meaning, explore best practices, and examine a
use case.

Core dimensions: Fairness

Fairness is crucial for developing responsible AI systems. It helps AI systems promote
inclusion, prevent discrimination, uphold responsible values and legal norms, and build trust
with society.


You should consider fairness in your AI applications to create systems suitable and beneficial
for all.

Core dimensions: Fairness best practices

Some of the best practices of fairness that you should incorporate in your generative AI
applications include representative data, bias mitigation, fairness metrics, bias testing, and
external audits. Review each best practice.

Representative data: Representative data means that the data used to train an AI
system accurately reflects the populations it will be applied to. It should have fair
representation across different demographics such as gender, race, and age.

Bias mitigation: Bias mitigation refers to the process of identifying, understanding,
and reducing biases that can arise in AI systems. It aims to help ensure the fairness,
impartiality, and objectivity of AI systems by minimizing or eliminating any biases.

Fairness metrics: Fairness metrics are used to measure how fair or unbiased an AI
system's outputs are toward different groups, such as groups of a specific gender or
race. They assess whether an AI system is treating individuals or groups fairly and
without bias. A minimal sketch of one such metric appears after this list.

Bias testing: Bias testing refers to testing ML models for unfair bias or discrimination
against certain groups. The goal of bias testing is to help ensure that the AI system is
fair, is unbiased, and does not exhibit discriminatory or unfair behavior toward any
group of people.

External audits: External audits refer to independent assessments by external entities
of an organization's AI systems and practices. These audits help ensure accountability
and that these systems and practices meet responsible, legal, and regulatory
requirements.
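As referenced in the fairness metrics practice above, here is a minimal sketch of one common metric, the demographic parity difference: the gap in positive-prediction rates between two groups. The data and the choice of metric are illustrative assumptions.

```python
# A sketch of one fairness metric: demographic parity difference, the gap in
# positive-prediction rates between two groups. Data is synthetic.
import numpy as np

y_pred = np.array([1, 1, 0, 1, 0, 0, 1, 0])  # hypothetical model decisions
group = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])

rate_a = y_pred[group == "A"].mean()
rate_b = y_pred[group == "B"].mean()

# A value near 0 means both groups receive positive outcomes at similar
# rates; a large gap is a signal to investigate the data and model.
print(f"Demographic parity difference: {abs(rate_a - rate_b):.2f}")
```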

Core dimensions: Fairness example

Next, you review an example of fairness in an AI system. A ride-sharing company develops an
AI-based pricing model that sets dynamic rates for rides. To help ensure fairness in the
model’s pricing, the company might use relevant data, conduct external audits, implement
fairness metrics, and perform bias testing.

By applying these steps, the company builds trust by pricing customers in a responsible and
accountable way.

Core dimensions: Explainability

Explainability refers to the ability of an AI model to clearly explain or provide justification for
its internal mechanisms and decisions so that it is understandable to humans.

This helps humans to understand how models are making decisions and to address any issues
of bias, trust, or fairness.

Core dimensions: Explainability best practices

Some of the best practices of explainability that you should incorporate in your generative AI
applications include model interpretation, justifications, provenance tracking, audit trails, and
what-if analysis. Review each best practice.

Model interpretation: Model interpretation refers to the process of understanding
and explaining the decisions made by an ML model. This process is essential to
helping ensure that the model is making fair and unbiased decisions.

Justifications: Justification refers to the ability of an AI system to explain or provide a
rationale for its decisions and actions. The goal of justification is to help ensure that
AI systems are designed, developed, and deployed in a way that is fair, transparent,
and responsible.

Provenance tracking: Provenance tracking refers to the capability to capture,
manage, and verify the origin and evolution of data, models, and systems used in
responsible AI. This involves keeping track of how data was collected, prepared, and
used to develop the model.

Audit trails: Audit trails refer to the logs, records, and other documentation that track
the decision-making processes and actions of AI systems. These trails provide a
detailed account of how the AI system was trained, how it processes information, and how it
makes decisions.

What-if analysis: What-if analysis is a technique used to understand how an AI
system's outputs or behaviors would change under different hypothetical scenarios, as
in the sketch below.
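The sketch below illustrates a simple what-if analysis: perturb one input feature and observe how the model's score changes. The model, features, and perturbation are illustrative assumptions trained on synthetic data.

```python
# A sketch of what-if analysis: change one feature and compare scores.
# The model and data are synthetic stand-ins.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))                  # hypothetical input features
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)
model = LogisticRegression().fit(X, y)

applicant = np.array([[0.2, -0.1, 1.0]])
baseline = model.predict_proba(applicant)[0, 1]

what_if = applicant.copy()
what_if[0, 0] += 1.0                           # hypothetical scenario
changed = model.predict_proba(what_if)[0, 1]

print(f"baseline score: {baseline:.2f}, after change: {changed:.2f}")
```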


Core dimensions: Explainability example

Next, you review an example of explainability in an AI system. A bank develops an AI system
to help analyze customers and flag accounts for fraud investigation based on account activity.
To help ensure explainability in the AI system, the company might implement model
interpretations, conduct what-if analysis, and set up audit trails.

Core dimensions: Privacy and security

Privacy and security in responsible AI refer to protecting data from theft and exposure.
More specifically, this means that at the privacy level, individuals control when and if their
data can be used. At the security level, the system verifies that no unauthorized systems or
users have access to an individual's data.

When this is properly implemented and deployed in an AI system, users can trust that their
data is not going to be compromised and used without their authorization.

Core dimensions: Privacy and security best practices

Some of the best practices of privacy and security that you should incorporate in your
generative AI applications include access control, secure compute, encryption, lifecycle
protections, and risk modeling. Review each best practice.

Access control: Access control refers to controlling the access and use of data and
resources by AI models, systems, or algorithms. Access controls are an essential part
of helping ensure that AI models operate in a responsible manner.

Secure compute: Secure compute refers to the concept of helping ensure that AI
models are implemented in a way that protects sensitive or proprietary information.

Encryption: Encryption in responsible AI refers to the method of securing data and
algorithms used to train AI models. It prevents unauthorized access, tampering, or
reverse engineering of sensitive information.

Lifecycle protections: Lifecycle protections refer to steps taken throughout the entire
process of developing and deploying an AI system to help ensure that the system is
responsible, unbiased, transparent, and accountable.


Risk modeling: Risk modeling refers to the process of identifying, assessing, and
mitigating potential risks associated with developing and deploying AI systems.

Core dimensions: Privacy and security example

Next, you review an example of privacy and security in an AI system. A retail company
develops a prediction model for customer purchases to optimize inventory. As part of their
responsible AI approach, they institute various privacy and security measures, including
encryption, access controls, and lifecycle protections.

Core dimensions: Robustness

Robustness in AI refers to the mechanisms to help ensure that an AI system operates reliably,
even with unexpected situations, uncertainty, and errors.

The goal of robustness in responsible AI is to develop AI models that are resilient to changes
in input parameters, data distributions, and external circumstances.

This means that the AI model should retain reliability, accuracy, and safety in uncertain
environments.

Core dimensions: Robustness best practices

Some of the best practices of robustness that you should incorporate into your generative AI
applications include reliability, generalization, graceful failure modes, vulnerability
assessments, and concept drift detection. Review each best practice.

Reliability: Reliability refers to the quality and trustworthiness of the outputs produced by an
AI system. It helps ensure that the system consistently produces accurate and reliable results,
minimizing errors and avoiding biases.

Generalization: Generalization refers to how well an ML model can perform accurately on
new, unseen data after being trained on a finite sample dataset.

Graceful failure modes: Graceful failure modes refer to the ability of an AI system to fail in a
way that minimizes harm and negative impact while also providing opportunities for learning
and improvement.
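One common graceful failure pattern is to defer to a human reviewer when the model's confidence is low, instead of returning a low-quality answer. The classifier and threshold below are illustrative assumptions.

```python
# A sketch of graceful failure: defer low-confidence predictions to a human.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 2))
y = (X[:, 0] > 0).astype(int)
model = LogisticRegression().fit(X, y)

CONFIDENCE_THRESHOLD = 0.8  # assumption: tuned per application

def predict_or_defer(x):
    proba = model.predict_proba(x.reshape(1, -1))[0]
    if proba.max() < CONFIDENCE_THRESHOLD:
        return "deferred to human review"  # fail gracefully, no risky guess
    return int(proba.argmax())

print(predict_or_defer(np.array([2.0, 0.0])))   # confident prediction
print(predict_or_defer(np.array([0.01, 0.0])))  # likely deferred
```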

Vulnerability assessments: Vulnerability assessments are the process of identifying,
assessing, and mitigating the potential vulnerabilities or weaknesses in an ML model or AI
system.


Concept drift detection: Concept drift detection is the ability of AI algorithms to recognize
when the underlying concepts or patterns in the data that they are analyzing have changed
or drifted.
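One simple way to detect drift in an input feature is to compare its training-time distribution against recent production data with a two-sample statistical test. The Kolmogorov-Smirnov test below is an illustrative choice, and the data is synthetic.

```python
# A sketch of drift detection: compare training and production distributions
# of one feature with a two-sample KS test. Data is synthetic.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(2)
training_feature = rng.normal(loc=0.0, size=1000)
production_feature = rng.normal(loc=0.6, size=1000)  # simulated drift

stat, p_value = ks_2samp(training_feature, production_feature)
if p_value < 0.01:
    print(f"possible drift detected (KS statistic {stat:.2f})")
```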

Core dimensions: Robustness example

Next, you review an example of robustness in an AI system. A health care organization
develops an AI system to assist doctors with diagnosing medical conditions. They pursue
several robustness measures as part of their responsible AI approach: the company might test
for generalization, help ensure reliability, and implement concept drift detection.

By taking these steps, the health care organization can provide a robust AI system that helps
doctors diagnose medical conditions.

Core dimensions: Governance

Governance is a set of processes that are used to define, implement, and enforce responsible
AI practices within an organization.

Governance is used to address concerns about responsibility, legality, and societal impact
that generative AI might invite.

For example, governance policies can help protect individuals' intellectual property rights.
They can also be used to enforce compliance with laws and regulations. Governance is
a vital component of responsible AI for an organization that seeks to incorporate responsible
best practices.

Core dimensions: Governance best practices

Some of the best practices of governance that you should incorporate in your generative AI
applications include policies and processes, oversight, operational integration, risk management,
and compliance verification. Review each best practice.


Policies and processes: Policies and processes in responsible AI refer to a set of
guidelines, rules, and procedures that organizations follow to help ensure the
responsible and equitable use of AI technologies.

Oversight: Oversight refers to the process of reviewing and monitoring AI systems to
help ensure they are operating in a responsible manner. This involves ongoing
evaluations of AI systems to help ensure they are aligned with human values and do
not cause harm.

Operational integration: Operational integration is the process of integrating AI
systems into an organization's daily operations across all levels and functions to help
ensure that AI systems are used in a responsible manner.

Risk management: Risk management refers to the process of identifying, assessing,
and mitigating the potential risks associated with developing and deploying AI
systems. This can include helping ensure the fairness, transparency, and
accountability of AI systems.

Compliance verification: Compliance verification in responsible AI refers to the
process of helping ensure that the development and deployment of AI models and
systems comply with relevant legal, responsible, and regulatory standards.

Core dimensions: Governance example

Next, you review an example of governance in an AI system. A software company looks to
integrate AI into a new product recommendation engine. To help ensure responsible AI
governance, the company might set up a risk management team, implement oversight, and
create policies and processes.

These steps can help ensure that responsible obligations are met.

Core dimensions: Transparency

Transparency communicates information about an AI system so that stakeholders can make
informed choices about their use of the system. Some of this information includes
development processes, system capabilities, and limitations.

Transparency provides individuals, organizations, and stakeholders access to assess the
fairness, robustness, and explainability of AI systems. This helps them identify and mitigate
potential biases, reinforce responsible standards, and foster trust in the technology.

Core dimensions: Transparency best practices

Some of the best practices of transparency that you should incorporate in your generative AI
applications include model cards, data sheets, traceability, open standards over black boxes,
and communication to users. Review each best practice.

Model cards: Model cards are documents that accompany ML models to provide details and
context about how the model was built, evaluated, and intended to be used. They document
the characteristics, assumptions, limitations, and intended uses of a model.
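To make this concrete, here is a minimal sketch of the kind of information a model card records. The field names and values are illustrative assumptions, not a formal schema.

```python
# A sketch of a model card as a simple data structure. Fields are illustrative.
import json

model_card = {
    "model_name": "loan-approval-classifier",  # hypothetical model
    "intended_uses": "Rank loan applications for human review.",
    "out_of_scope_uses": "Fully automated approval decisions.",
    "training_data": "Applications from 2019-2023, US region only.",
    "evaluation": {"accuracy": 0.91, "demographic_parity_diff": 0.04},
    "limitations": "Less accurate for applicants under 25.",
}

print(json.dumps(model_card, indent=2))
```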

Data sheets: Data sheets are documents that provide information about the data used to
train an ML model. They also document the methods used to collect, store, and manipulate
that data. The purpose of data sheets is to provide transparency.

Traceability: Traceability refers to the ability to track and understand the origin,
development, and deployment of AI systems and their impact and performance over time.
Traceability provides the ability to assess AI systems as they get deployed in real-world
settings.

Open standards over black boxes: Open standards over black boxes in responsible AI mean
that the technology and its inner workings are transparent and accessible to all stakeholders
rather than being shrouded in secrecy and patented proprietary code.

Communication to users: Communication to users refers to the process of informing users
about the responsible practices, potential risks, and benefits of AI systems and algorithms. It
involves transparent communication and clear disclosure about the AI system.

Core dimensions: Transparency example

Next, you review an example of transparency in an AI system. A financial institution develops
an AI lending model to help make decisions on loan applications. As part of their responsible
AI initiative, they take several transparency steps. Some of these steps might include creating
model cards, using open standards over black boxes, and communicating with users.

These transparency steps demonstrate accountability and responsibility by giving internal
and external stakeholders visibility into how this AI lending model works, how well it works
for different groups, and how fair and responsible it is. This transparency helps companies
address issues quickly and builds trust.

AWS services and tools for responsible AI

As the leader in cloud technologies, AWS offers services such as Amazon SageMaker and
Amazon Bedrock that have built-in tools to help you with responsible AI. These tools cover
topics such as evaluating FMs, implementing safeguards, detecting bias, explaining model
predictions, monitoring and human reviews, and improving governance.

Review each topic to learn about the AWS services and tools that can help with
responsible AI.

Evaluating FMs

Model evaluation on Amazon Bedrock gives you the ability to evaluate, compare, and
select the best FM for your use case in just a few clicks. Amazon Bedrock offers a
choice of automatic evaluation and human evaluation:

• Automatic evaluation offers predefined metrics such as accuracy, robustness,
and toxicity.
• Human evaluation offers subjective or custom metrics such as friendliness,
style, and alignment to brand voice. For human evaluation, you can leverage
your in-house employees or an AWS managed team as reviewers.

Amazon SageMaker Clarify supports FM evaluation. You can automatically evaluate
FMs for your generative AI use case with metrics such as accuracy, robustness, and
toxicity to support your responsible AI initiative. For criteria or nuanced content that
requires sophisticated human judgment, you can choose to leverage your own
workforce or use a managed workforce provided by AWS to review model responses.

Implementing safeguards

Guardrails for Amazon Bedrock gives you the ability to implement safeguards for your
generative AI applications based on your use cases and responsible AI policies.
Guardrails for Amazon Bedrock helps control the interaction between users and FMs
by filtering undesirable and harmful content and will soon redact personally
identifiable information (PII), enhancing content safety and privacy in generative AI
applications. You can create multiple guardrails with different configurations tailored
to specific use cases. Additionally, you can continuously monitor and analyze user
inputs and FM responses that might violate customer-defined policies in the guardrails.
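As a hedged sketch, the snippet below checks a user's input against a pre-configured guardrail with the ApplyGuardrail API in boto3. The guardrail ID and version are placeholders for resources already created in your account.

```python
# A sketch of screening user input with a Bedrock guardrail. The guardrail
# ID and version are placeholders; the guardrail must already exist.
import boto3

runtime = boto3.client("bedrock-runtime", region_name="us-east-1")

result = runtime.apply_guardrail(
    guardrailIdentifier="gr-1234567890",  # placeholder guardrail ID
    guardrailVersion="1",
    source="INPUT",                       # evaluate the user's input
    content=[{"text": {"text": "Tell me how to do something harmful."}}],
)

# If the guardrail intervenes, block the request before it reaches the FM.
if result["action"] == "GUARDRAIL_INTERVENED":
    print("Request blocked by guardrail policy.")
```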

Detecting bias

SageMaker Clarify helps identify potential bias during data preparation without
writing code. You specify input features, such as gender or age, and SageMaker Clarify
runs an analysis job to detect potential bias in those features. SageMaker Clarify then
provides a visual report with a description of the metrics and measurements of
potential bias so that you can identify steps to remediate the bias.

Amazon SageMaker Data Wrangler can be used to balance your data in cases of any
imbalances. SageMaker Data Wrangler offers three balancing operators: random
undersampling, random oversampling, and synthetic minority oversampling
technique (SMOTE) to rebalance data in your unbalanced datasets.
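SageMaker Data Wrangler applies these operators through its visual interface. As a code-level stand-in (an assumption, not the Data Wrangler API), the open-source imbalanced-learn library implements the same SMOTE technique:

```python
# A sketch of SMOTE rebalancing with imbalanced-learn, standing in for Data
# Wrangler's visual balancing operators. Data is synthetic.
import numpy as np
from imblearn.over_sampling import SMOTE

rng = np.random.default_rng(3)
X = rng.normal(size=(100, 4))
y = np.array([0] * 90 + [1] * 10)  # heavily imbalanced labels

X_balanced, y_balanced = SMOTE(random_state=0).fit_resample(X, y)
print(np.bincount(y), "->", np.bincount(y_balanced))  # [90 10] -> [90 90]
```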

Explaining model predictions

SageMaker Clarify is integrated with Amazon SageMaker Experiments to provide
scores detailing which features contributed the most to your model prediction on a
particular input for tabular, natural language processing (NLP), and computer vision
models. For tabular datasets, SageMaker Clarify can also output an aggregated
feature importance chart that provides insights into the overall prediction process of
the model. These details can help determine if a particular model input has more
influence than expected on overall model behavior.
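SageMaker Clarify computes these attributions inside SageMaker. As a simple stand-in for the idea (an assumption, not the Clarify API), scikit-learn's permutation importance shows how much each input feature drives a tabular model's predictions:

```python
# A sketch of feature importance via permutation importance, illustrating
# the idea behind Clarify's attribution scores. Data is synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(4)
X = rng.normal(size=(300, 3))            # features f0, f1, f2
y = (X[:, 0] - X[:, 2] > 0).astype(int)  # f1 is irrelevant by design
model = RandomForestClassifier(random_state=0).fit(X, y)

result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, score in zip(["f0", "f1", "f2"], result.importances_mean):
    print(f"{name}: {score:.3f}")        # f0 and f2 should dominate
```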

Monitoring and human reviews

Amazon SageMaker Model Monitor monitors the quality of Amazon SageMaker ML
models in production. You can set up continuous monitoring with a real-time
endpoint (or a batch transform job that runs regularly) or on-schedule monitoring for
asynchronous batch transform jobs. With SageMaker Model Monitor, you can set
alerts that notify you when there are deviations in the model quality. Early and
proactive detection of these deviations helps you to take corrective actions.
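Model Monitor provides this as a managed service. As a conceptual stand-in (not the Model Monitor API), the sketch below raises an alert when rolling accuracy in production drops below a baseline; the values and thresholds are illustrative.

```python
# A sketch of quality monitoring: alert when rolling accuracy falls below
# a baseline. Values, window size, and thresholds are illustrative.
import numpy as np

BASELINE_ACCURACY = 0.90  # assumption: measured at deployment time
WINDOW = 50

y_true = np.random.default_rng(5).integers(0, 2, size=500)
y_pred = y_true.copy()
y_pred[300:] = 1 - y_pred[300:]  # simulate a quality degradation

for start in range(0, len(y_true) - WINDOW, WINDOW):
    acc = (y_true[start:start + WINDOW] == y_pred[start:start + WINDOW]).mean()
    if acc < BASELINE_ACCURACY - 0.05:
        print(f"ALERT: window starting at {start} has accuracy {acc:.2f}")
```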

Amazon Augmented AI (Amazon A2I) is a service that you can use to build the
workflows required for human review of ML predictions. Amazon A2I brings a human
review to all developers, removing the undifferentiated heavy lifting associated with
building human review systems or managing large numbers of human reviewers.

Improving governance

SageMaker provides purpose-built governance tools to help you implement ML
responsibly. These tools give you tighter control and visibility over your ML models.
You can capture and share model information and stay informed on model behavior,
such as bias, all in one place.

These tools include the following:

• Amazon SageMaker Role Manager: With Amazon SageMaker Role Manager,
administrators can define minimum permissions in minutes.
• Amazon SageMaker Model Cards: With Amazon SageMaker Model Cards, you
can capture, retrieve, and share essential model information, such as intended
uses, risk ratings, and training details, from conception to deployment.
• Amazon SageMaker Model Dashboard: With Amazon SageMaker Model
Dashboard, you can keep your team informed on model behavior in production
all in one place.

AWS AI Service Cards

AWS AI Service Cards are a new resource to help customers better understand AWS AI
services. AI Service Cards are a form of responsible AI documentation that provides customers
with a single place to find information on the intended use cases and limitations, responsible
AI design choices, and deployment and performance optimization best practices for AWS AI
services.

They are part of a comprehensive development process to build AWS services in a responsible
way that addresses the core dimensions of responsible AI.

Components of AI Service Cards

Each AI Service Card contains four sections that cover the following:

• Basic concepts to help customers better understand the service or service features
• Intended use cases and limitations
• Responsible AI design considerations
• Guidance on deployment and performance optimization


The content of the AI Service Cards addresses a broad audience of customers, technologists,
researchers, and other stakeholders. This content helps these audiences better understand
key considerations in the responsible design and use of an AI service.

Evolving best practices for responsible AI

Taking the steps to build AI responsibly is crucial for harnessing the potential of AI while
promoting responsible and fair outcomes. By following the responsible AI core dimensions of
fairness, explainability, privacy and security, robustness, governance, and transparency,
organizations can
harness the full potential of generative AI. Building AI responsibly will help organizations to
build trust and mitigate the risks associated with AI systems.

As technologies advance, organizations should keep up with new and evolving responsible AI
standards and with AWS solutions to help implement those standards.

Summary

In this course, you learned how to do the following:

• Define generative AI and how it differs from traditional AI.
• Describe responsible AI.
• Discuss the core dimensions of responsible AI.
• Identify AWS services and tools for responsible AI.

Additional resources

For more information about responsible AI, see the following links:

Transform responsible AI from theory into practice

Tools and resources to build AI responsibly

Responsible AI in the generative era

Responsible AI Best Practices: Promoting Responsible and Trustworthy AI Systems

© 2024, Amazon Web Services, Inc. or its affiliates. All rights reserved.
