CLF-C02 Updated Dumps - AWS Certified Cloud Practitioner
2. Which AWS service or tool does AWS Control Tower use to create resources?
A. AWS CloudFormation
B. AWS Trusted Advisor
C. AWS Directory Service
D. AWS Cost Explorer
Answer: A
Explanation:
AWS Control Tower uses AWS CloudFormation to create resources in your landing zone. AWS
CloudFormation is a service that helps you model and set up your AWS resources using
templates. AWS Control Tower supports creating AWS::ControlTower::EnabledControl
resources in AWS CloudFormation. Therefore, the correct answer is A. You can learn more
about AWS Control Tower and AWS CloudFormation in the AWS documentation.
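To make the CloudFormation mechanism concrete, here is a minimal boto3 (Python) sketch that
creates a stack from a template; the stack name and template file are hypothetical placeholders
and are not part of AWS Control Tower itself:

import boto3

cloudformation = boto3.client("cloudformation")

# Read a template that models the desired resources (placeholder file name).
with open("landing-zone-example.yaml") as f:
    template_body = f.read()

# Create a stack; CloudFormation provisions every resource the template declares.
response = cloudformation.create_stack(
    StackName="example-landing-zone-stack",  # placeholder name
    TemplateBody=template_body,
    Capabilities=["CAPABILITY_NAMED_IAM"],  # needed when the template creates IAM resources
)
print(response["StackId"])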
3. A company is running and managing its own Docker environment on Amazon EC2 instances.
The company wants an alternative to help manage cluster size, scheduling, and environment
maintenance.
Which AWS service meets these requirements?
A. AWS Lambda
B. Amazon RDS
C. AWS Fargate
D. Amazon Athena
Answer: C
Explanation:
AWS Fargate is a serverless compute engine for containers that works with both Amazon
Elastic Container Service (Amazon ECS) and Amazon Elastic Kubernetes Service (Amazon
EKS). AWS Fargate allows you to run containers without having to manage servers or clusters
of Amazon EC2 instances. With AWS Fargate, you only pay for the compute resources you use
to run your containers, and you don’t need to worry about scaling, patching, securing, or
maintaining the underlying infrastructure.
AWS Fargate simplifies the deployment and management of containerized applications, and
enables you to focus on building and running your applications instead of managing the
infrastructure.
References: AWS Fargate, What is AWS Fargate?
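As an illustration of the serverless container model, the following boto3 (Python) sketch runs an
existing ECS task definition on Fargate; the cluster, task definition, and subnet IDs are assumed
placeholders:

import boto3

ecs = boto3.client("ecs")

# Run one task on Fargate; there are no EC2 instances or clusters of servers to manage.
response = ecs.run_task(
    cluster="example-cluster",        # placeholder cluster name
    taskDefinition="example-task:1",  # placeholder task definition
    launchType="FARGATE",
    count=1,
    networkConfiguration={
        "awsvpcConfiguration": {
            "subnets": ["subnet-0123456789abcdef0"],  # placeholder subnet ID
            "assignPublicIp": "ENABLED",
        }
    },
)
print(response["tasks"][0]["taskArn"])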
4. Which AWS service or feature identifies whether an Amazon S3 bucket or an IAM role has
been shared with an external entity?
A. AWS Service Catalog
B. AWS Systems Manager
C. AWS IAM Access Analyzer
D. AWS Organizations
Answer: C
Explanation:
AWS IAM Access Analyzer is a service that helps you identify the resources in your
organization and accounts, such as Amazon S3 buckets or IAM roles, that are shared with an
external entity. This lets you identify unintended access to your resources and data, which is a
security risk. IAM Access Analyzer uses logic-based reasoning to analyze the resource-based
policies in your AWS environment. For each instance of a resource shared outside of your
account, IAM Access Analyzer generates a finding. Findings include information about the
access and the external principal granted to it.
References: Using AWS Identity and Access Management Access Analyzer, IAM Access
Analyzer - Amazon Web Services (AWS), Welcome - IAM Access Analyzer
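A minimal boto3 (Python) sketch of reading Access Analyzer findings, assuming an analyzer
already exists in the account and Region:

import boto3

accessanalyzer = boto3.client("accessanalyzer")

# Use the first analyzer in the account (assumes one has been created).
analyzer_arn = accessanalyzer.list_analyzers()["analyzers"][0]["arn"]

# Each finding describes a resource that is shared outside the account.
findings = accessanalyzer.list_findings(analyzerArn=analyzer_arn)
for finding in findings["findings"]:
    print(finding["resource"], finding["status"])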
5. Which activity is a customer responsibility in the AWS Cloud according to the AWS shared
responsibility model?
A. Ensuring network connectivity from AWS to the internet
B. Patching and fixing flaws within the AWS Cloud infrastructure
C. Ensuring the physical security of cloud data centers
D. Ensuring Amazon EBS volumes are backed up
Answer: D
Explanation:
The AWS shared responsibility model describes how AWS and the customer share
responsibility for security and compliance of the AWS environment. AWS is responsible for the
security of the cloud, which includes the physical security of AWS facilities, the infrastructure,
hardware, software, and networking that run AWS services. The customer is responsible for
security in the cloud, which includes the configuration of security groups, the encryption of
customer data on AWS, and the backup and management of the customer's own data and
applications. One of the customer responsibilities is to ensure that Amazon EBS volumes are
backed up.
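For example, a customer could back up an EBS volume with a point-in-time snapshot; this
minimal boto3 (Python) sketch uses a placeholder volume ID:

import boto3

ec2 = boto3.client("ec2")

# Creating snapshots is the customer's job under the shared responsibility model.
snapshot = ec2.create_snapshot(
    VolumeId="vol-0123456789abcdef0",  # placeholder volume ID
    Description="Nightly backup example",
)
print(snapshot["SnapshotId"])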
6. A company wants to securely store Amazon RDS database credentials and automatically
rotate user passwords periodically.
Which AWS service or capability will meet these requirements?
A. Amazon S3
B. AWS Systems Manager Parameter Store
C. AWS Secrets Manager
D. AWS CloudTrail
Answer: C
Explanation:
AWS Secrets Manager is a service that helps you protect access to your applications, services,
and IT resources. This service enables you to easily rotate, manage, and retrieve database
credentials, API keys, and other secrets throughout their lifecycle. Amazon S3 is a storage
service that does not offer automatic rotation of credentials. AWS Systems Manager Parameter
Store is a service that provides secure, hierarchical storage for configuration data management
and secrets management, but it does not offer automatic rotation of credentials. AWS
CloudTrail is a service that enables governance, compliance, operational auditing, and risk
auditing of your AWS account, but it does not store or rotate credentials.
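A minimal boto3 (Python) sketch of storing a secret and enabling rotation; the secret name,
credentials, and rotation Lambda ARN are placeholders (AWS publishes rotation function
templates for Amazon RDS):

import boto3

secretsmanager = boto3.client("secretsmanager")

# Store database credentials as a secret.
secret = secretsmanager.create_secret(
    Name="example/rds/credentials",  # placeholder secret name
    SecretString='{"username": "admin", "password": "example-password"}',
)

# Turn on automatic rotation every 30 days via a rotation Lambda function.
secretsmanager.rotate_secret(
    SecretId=secret["ARN"],
    RotationLambdaARN="arn:aws:lambda:us-east-1:123456789012:function:example-rotator",
    RotationRules={"AutomaticallyAfterDays": 30},
)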
7. Which AWS service or tool provides a visualization of historical AWS spending patterns and
projections of future AWS costs?
A. AWS Cost and Usage Report
B. AWS Budgets
C. Cost Explorer
D. Amazon CloudWatch
Answer: C
Explanation:
AWS Cost Explorer provides a visualization of historical AWS spending patterns and allows
users to project future costs based on past usage. It offers advanced filtering and grouping
features, enabling users to analyze costs and usage at a granular level. The AWS Cost and
Usage Report provides detailed AWS cost and usage data but does not offer visualization or
future cost projections. AWS Budgets is used for setting custom cost and usage budgets and
receiving alerts. Amazon CloudWatch is for monitoring AWS resources and applications, not for
cost management.
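For illustration, Cost Explorer's data is also available programmatically through the Cost
Explorer API; this boto3 (Python) sketch uses placeholder dates (a forecast window must start
in the future):

import boto3

ce = boto3.client("ce")  # Cost Explorer API

# Historical spend by month.
history = ce.get_cost_and_usage(
    TimePeriod={"Start": "2024-01-01", "End": "2024-04-01"},  # placeholder dates
    Granularity="MONTHLY",
    Metrics=["UnblendedCost"],
)

# Projected spend for the next month.
forecast = ce.get_cost_forecast(
    TimePeriod={"Start": "2024-04-01", "End": "2024-05-01"},  # placeholder dates
    Metric="UNBLENDED_COST",
    Granularity="MONTHLY",
)
print(forecast["Total"]["Amount"])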
8. What is a customer responsibility when using AWS Lambda according to the AWS shared
responsibility model?
A. Managing the code within the Lambda function
B. Confirming that the hardware is working in the data center
C. Patching the operating system
D. Shutting down Lambda functions when they are no longer in use
Answer: A
Explanation:
According to the AWS shared responsibility model, AWS is responsible for the security of the
cloud, while customers are responsible for the security in the cloud. This means that AWS is
responsible for the physical servers, networking, and operating system that run Lambda
functions, while customers are responsible for the security of their code and for the IAM
permissions that control access to the Lambda service and within their functions. Customers
need to manage the code within the Lambda function, such as writing, testing, debugging,
deploying, and updating the code, as well as ensuring that the code does not contain any
vulnerabilities or malicious code that could compromise the security or performance of the
function.
References: AWS Lambda - Amazon Web Services (AWS), AWS Lambda Documentation
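The code the customer owns can be as small as a single handler. A minimal Python sketch of a
Lambda function (the logic is purely illustrative):

import json

def lambda_handler(event, context):
    # The customer writes, tests, and updates this code;
    # AWS operates the servers and operating system underneath it.
    return {
        "statusCode": 200,
        "body": json.dumps({"received": event}),
    }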
10. Which AWS services and features are provided to all customers at no charge? (Select
TWO.)
A. Amazon Aurora
B. VPC
C. Amazon SageMaker
D. AWS Identity and Access Management (IAM)
E. Amazon Polly
Answer: B, D
Explanation:
The AWS services and features that are provided to all customers at no charge are VPC and
AWS Identity and Access Management (IAM). VPC is a service that allows you to launch AWS
resources in a logically isolated virtual network that you define. You can create and use a VPC
at no additional charge, and you only pay for the resources that you launch in the VPC, such as
EC2 instances or EBS volumes. IAM is a service that allows you to manage access and
permissions to AWS resources. You can create and use IAM users, groups, roles, and policies
at no additional charge, and you only pay for the AWS resources that the IAM entities access.
Amazon Aurora, Amazon SageMaker, and Amazon Polly are not free services; they charge
based on the usage and features that you choose.
11. A company wants its AWS usage to be more sustainable. The company wants to track,
measure, review, and forecast polluting emissions that result from its AWS applications.
Which AWS service or tool can the company use to meet these requirements?
A. AWS Health Dashboard
B. AWS customer carbon footprint tool
C. AWS Support Center
D. Amazon QuickSight
Answer: B
Explanation:
AWS customer carbon footprint tool is a tool that helps customers measure and manage their
carbon emissions from their AWS usage. It provides data on the carbon intensity, energy
consumption, and estimated emissions of AWS services across regions and time periods. It also
enables customers to review and forecast their emissions, and compare them with industry
benchmarks. AWS Health Dashboard is a service that provides personalized information about
the health and performance of AWS services and resources. AWS Support Center is a service
that provides access to AWS support resources, such as cases, forums, and documentation.
Amazon QuickSight is a service that provides business intelligence and analytics for AWS data
sources.
12. A company has data lakes designed for high performance computing (HPC) workloads.
Which Amazon EC2 instance type should the company use to meet these requirements?
A. General purpose instances
B. Compute optimized instances
C. Memory optimized instances
D. Storage optimized instances
Answer: B
Explanation:
For high performance computing (HPC) workloads, compute resources play a critical role in
delivering the necessary processing power and efficiency. HPC workloads are typically
computationally intensive, often requiring a large number of CPU cycles to solve complex
problems. These workloads benefit most from instances that provide powerful processors and
high clock speeds, which is why Compute optimized instances (Answer B) are the best choice in
this scenario.
Why Compute Optimized Instances (C Instances)?
Designed for Compute-Intensive Tasks: Compute optimized instances in Amazon EC2, such as
the C6i or C5 series, are designed to offer high compute performance, low cost, and consistent
CPU power. These instances are ideal for workloads like HPC, which require a high level of
processing per second.
High Performance CPUs: The compute optimized instance family typically uses the latest-
generation processors, such as AWS Graviton2 or Intel Xeon Scalable processors, which
provide a higher number of virtual CPUs (vCPUs) and increased clock speeds compared to
other instance types. This matches the need for HPC workloads to maximize throughput and
minimize compute times.
Use Case Alignment: HPC workloads such as genomic research, computational fluid dynamics
(CFD), financial modeling, and 3D rendering involve heavy CPU-bound processing. Compute
optimized instances provide the best CPU-to-memory ratio to handle these tasks efficiently,
leading to faster processing times and cost efficiency.
Comparison with Other Instance Types:
A. General Purpose Instances: These are versatile and balanced instances (e.g., T3 or M6i)
that are suitable for various workloads but do not provide the specialized compute performance
required for HPC. They offer a balanced mix of compute, memory, and networking but are not
optimal for HPC workloads where computational power is critical.
C. Memory Optimized Instances: While these instances (e.g., R5, X1) are ideal for memory-
intensive workloads such as in-memory databases (e.g., SAP HANA) or real-time data
analytics, they do not provide the specialized compute power necessary for HPC tasks that
require heavy CPU processing.
D. Storage Optimized Instances: These instances (e.g., I3, D3) are designed for workloads that
need high disk throughput, like big data or transactional databases. While these are excellent
for storage-heavy applications, they are not optimized for compute-intensive HPC workloads.
Amazon EC2 Compute Optimized Family (C Instances):
C6i Instances: Based on 3rd Gen Intel Xeon Scalable processors, C6i instances offer up to 15%
better price/performance compared to previous generation C5 instances. These are ideal for
high compute and HPC workloads.
C5 Instances: These are built for compute-intensive workloads like batch processing, distributed
analytics, and high-performance web servers. They offer a high level of sustained CPU
performance.
AWS Reference Links:
Amazon EC2 Instance Types
Amazon EC2 Compute Optimized Instances
HPC on AWS
In conclusion, Compute optimized instances (B) are the best choice for HPC workloads due to
their high compute performance, optimized CPU architecture, and suitability for computationally
intensive tasks.
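For example, launching a compute optimized instance is a one-call operation; this boto3
(Python) sketch uses a placeholder AMI ID, and the instance size is only an assumption:

import boto3

ec2 = boto3.client("ec2")

# Launch one compute optimized instance for a CPU-bound workload.
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder AMI ID
    InstanceType="c6i.4xlarge",       # compute optimized family
    MinCount=1,
    MaxCount=1,
)
print(response["Instances"][0]["InstanceId"])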
15. What is a benefit of moving to the AWS Cloud in terms of improving time to market?
A. Decreased deployment speed
B. Increased application security
C. Increased business agility
D. Increased backup capabilities
Answer: C
Explanation:
Increased business agility is a benefit of moving to the AWS Cloud in terms of improving time to
market. Business agility refers to the ability of a company to adapt to changing customer needs,
market conditions, and competitive pressures. Moving to the AWS Cloud enables business
agility by providing faster access to resources, lower upfront costs, and greater scalability and
flexibility. By using the AWS Cloud, companies can launch new products and services,
experiment with new ideas, and respond to customer feedback more quickly and efficiently. For
more information, see [Benefits of Cloud Computing] and [Business Agility].
16. An ecommerce company has deployed a new web application on Amazon EC2 Instances.
The company wants to distribute incoming HTTP traffic evenly across all running instances.
Which AWS service or resource will meet this requirement?
A. Amazon EC2 Auto Scaling
B. Application Load Balancer
C. Gateway Load Balancer
D. Network Load Balancer
Answer: B
Explanation:
An Application Load Balancer (ALB) is the best choice for distributing incoming HTTP/HTTPS
traffic evenly across multiple Amazon EC2 instances. It operates at the application layer (Layer
7 of the OSI model) and is specifically designed to handle HTTP and HTTPS traffic, which is
ideal for web applications.
Here is why the ALB is the correct choice:
Layer 7 Load Balancing: The ALB works at the application layer and provides advanced routing
capabilities based on content. It can inspect the incoming HTTP requests and make decisions
on how to route traffic to various backend targets, which include Amazon EC2 instances,
containers, or Lambda functions. This is particularly useful for web applications where you need
to make routing decisions based on HTTP headers, paths, or query strings.
HTTP and HTTPS Support: The ALB natively supports HTTP and HTTPS protocols, making it
the ideal load balancer for web-based applications. It can efficiently manage and route these
types of traffic and handle tasks such as SSL/TLS termination.
Health Checks: The ALB can continuously monitor the health of the registered EC2 instances
and only route traffic to healthy instances. This ensures high availability and reliability of the web
application.
Path-based and Host-based Routing: The ALB can route traffic based on the URL path or host
header. This feature allows the same load balancer to serve multiple applications hosted on
different domains or subdomains.
Integration with Auto Scaling: The ALB integrates seamlessly with Amazon EC2 Auto Scaling.
As the number of EC2 instances increases or decreases, the ALB automatically includes the
new instances in its traffic distribution pool, ensuring even distribution of incoming requests.
WebSocket Support: The ALB also supports the WebSocket and HTTP/2 protocols, which are
essential for modern web applications that require real-time, bidirectional communication.
Why other options are not suitable:
A. Amazon EC2 Auto Scaling: This service is used to automatically scale the number of EC2
instances up or down based on specified conditions. However, it does not provide load
balancing capabilities. It works well with load balancers but does not handle the distribution of
incoming traffic by itself.
C. Gateway Load Balancer: This is designed to distribute traffic to virtual appliances like
firewalls, IDS/IPS systems, or deep packet inspection systems. It operates at Layer 3 (Network
Layer) and is not ideal for distributing HTTP/HTTPS traffic to EC2 instances.
D. Network Load Balancer: This load balancer operates at Layer 4 (Transport Layer) and is
designed to handle millions of requests per second while maintaining ultra-low latencies. It is
best suited for TCP, UDP, and TLS traffic but does not provide the advanced Layer 7 routing
features required for HTTP/HTTPS traffic.
References:
AWS Application Load Balancer Documentation
Comparison of Elastic Load Balancing Options
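To sketch the setup, the following boto3 (Python) snippet creates an ALB, a target group for the
EC2 instances, and an HTTP listener; all names and IDs are placeholders:

import boto3

elbv2 = boto3.client("elbv2")

# Create an internet-facing ALB across two subnets (placeholder IDs).
alb = elbv2.create_load_balancer(
    Name="example-web-alb",
    Subnets=["subnet-0123456789abcdef0", "subnet-0fedcba9876543210"],
    Type="application",
    Scheme="internet-facing",
)

# Register a target group for the EC2 instances and forward HTTP traffic to it.
target_group = elbv2.create_target_group(
    Name="example-web-targets",
    Protocol="HTTP",
    Port=80,
    VpcId="vpc-0123456789abcdef0",  # placeholder VPC ID
    TargetType="instance",
)
elbv2.create_listener(
    LoadBalancerArn=alb["LoadBalancers"][0]["LoadBalancerArn"],
    Protocol="HTTP",
    Port=80,
    DefaultActions=[{
        "Type": "forward",
        "TargetGroupArn": target_group["TargetGroups"][0]["TargetGroupArn"],
    }],
)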
17. A company needs to use standard SQL to query and combine exabytes of structured and
semi-structured data across a data warehouse, operational database, and data lake.
Which AWS service meets these requirements?
A. Amazon DynamoDB
B. Amazon Aurora
C. Amazon Athena
D. Amazon Redshift
Answer: D
Explanation:
Amazon Redshift is the service that meets the requirements of using standard SQL to query
and combine exabytes of structured and semi-structured data across a data warehouse,
operational database, and data lake. Amazon Redshift is a fully managed, petabyte-scale data
warehouse service that allows you to run complex analytic queries using standard SQL and
your existing business intelligence tools. Amazon Redshift also supports Redshift Spectrum, a
feature that allows you to directly query and join data stored in Amazon S3 using the same SQL
syntax. Amazon Redshift can scale up or down to handle any volume of data and deliver fast
query performance.
18. Which AWS service can a company use to securely store and encrypt passwords for a
database?
A. AWS Shield
B. AWS Secrets Manager
C. AWS Identity and Access Management (IAM)
D. Amazon Cognito
Answer: B
Explanation:
AWS Secrets Manager is an AWS service that can be used to securely store and encrypt
passwords for a database. It allows users to manage secrets, such as database credentials,
API keys, and tokens, in a centralized and secure way. It also provides features such as
automatic rotation, fine-grained access control, and auditing. AWS Shield is an AWS service
that provides protection against Distributed Denial of Service (DDoS) attacks for AWS resources
and services. It does not store or encrypt passwords for a database. AWS Identity and Access
Management (IAM) is an AWS service that allows users to manage access to AWS resources
and services. It can be used to create users, groups, roles, and policies that control who can do
what in AWS. It does not store or encrypt passwords for a database. Amazon Cognito is an
AWS service that provides user identity and data synchronization for web and mobile
applications. It can be used to authenticate and authorize users, manage user profiles, and sync
user data across devices. It does not store or encrypt passwords for a database.
19. Which AWS feature provides a no-cost platform for AWS users to join community groups,
ask questions, find answers, and read community-generated articles about best practices?
A. AWS Knowledge Center
B. AWS re:Post
C. AWS IQ
D. AWS Enterprise Support
Answer: B
Explanation:
AWS re:Post is a no-cost platform for AWS users to join community groups, ask questions, find
answers, and read community-generated articles about best practices. AWS re:Post is a
community-driven question-and-answer service that connects AWS users with each other and
with AWS experts. Users can
create posts, comment on posts, follow topics, and join groups related to AWS services,
solutions, and use cases. AWS re:Post also features live event feeds, community stories, and
AWS Hero profiles. AWS re:Post is a great way to learn from the AWS community, share your
knowledge, and get inspired.
References: AWS re:Post
Join the Conversation
21. A company has an application that produces unstructured data continuously. The company
needs to store the data so that the data is durable and easy to query.
Which AWS service can the company use to meet these requirements?
A. Amazon RDS
B. Amazon Aurora
C. Amazon QuickSight
D. Amazon DynamoDB
Answer: D
Explanation:
Amazon DynamoDB is a fully managed NoSQL database service that provides fast and
predictable performance with seamless scalability. It is designed to handle unstructured data,
offers high durability, and provides easy querying capabilities using key-value or document data
models, making it suitable for applications that continuously produce unstructured data.
Why other options are not suitable:
A. Amazon RDS: A relational database service that is more suited for structured data and SQL
queries.
B. Amazon Aurora: A MySQL- and PostgreSQL-compatible relational database, also more
suited for structured data.
C. Amazon QuickSight: A business intelligence service for data visualization, not for storing
data.
References:
Amazon DynamoDB Documentation
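A minimal boto3 (Python) sketch of writing and querying such data; the table name and key
schema are assumptions for illustration:

import boto3
from decimal import Decimal
from boto3.dynamodb.conditions import Key

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("example-events")  # placeholder table name

# Store a semi-structured item; attributes outside the key need no fixed schema.
table.put_item(Item={
    "device_id": "sensor-42",             # assumed partition key
    "timestamp": "2024-01-01T00:00:00Z",  # assumed sort key
    "payload": {"temperature": Decimal("21.5"), "status": "ok"},
})

# Query every item for one device.
response = table.query(KeyConditionExpression=Key("device_id").eq("sensor-42"))
print(response["Items"])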
22. A company has deployed an application in the AWS Cloud. The company wants to ensure
that the application is highly resilient.
Which component of AWS infrastructure can the company use to meet this requirement?
A. Content delivery network (CDN)
B. Edge locations
C. Wavelength Zones
D. Availability Zones
Answer: D
Explanation:
Availability Zones are components of AWS infrastructure that can help the company ensure that
the application is highly resilient. Availability Zones are multiple, isolated locations within each
AWS Region. Each Availability Zone has independent power, cooling, and physical security,
and is connected to the other Availability Zones in the same Region via low-latency, high-
throughput, and highly redundant networking. Availability Zones allow you to operate production
applications and databases that are more highly available, fault tolerant, and scalable than
would be possible from a single data center.
23. In which of the following AWS services should database credentials be stored for maximum
security?
A. AWS Identity and Access Management (IAM)
B. AWS Secrets Manager
C. Amazon S3
D. AWS Key Management Service (AWS KMS)
Answer: B
Explanation:
AWS Secrets Manager is the AWS service where database credentials should be stored for
maximum security. AWS Secrets Manager helps to protect the secrets, such as database
credentials, passwords, API keys, and tokens, that are used to access applications, services,
and resources. AWS Secrets Manager enables secure storage, encryption, rotation, and
retrieval of the secrets. AWS Secrets Manager also integrates with other AWS services, such as
AWS Identity and Access Management (IAM), AWS Key Management Service (AWS KMS), and
AWS Lambda. For more information, see [What is AWS Secrets Manager?] and [Getting
Started with AWS Secrets Manager].
24. A developer wants to use an Amazon S3 bucket to store application logs that contain
sensitive data.
Which AWS service or feature should the developer use to restrict read and write access to the
S3 bucket?
A. Security groups
B. Amazon CloudWatch
C. AWS CloudTrail
D. ACLs
Answer: D
Explanation:
ACLs are an AWS service or feature that the developer can use to restrict read and write
access to the S3 bucket. ACLs are access control lists that grant basic permissions to other
AWS accounts or predefined groups. They can be used to grant read or write access to an S3
bucket or an object. Security groups are virtual firewalls that control the inbound and outbound
traffic for Amazon EC2 instances. They are not a service or feature that can be used to restrict
access to an S3 bucket. Amazon CloudWatch is a service that provides monitoring and
observability for AWS resources and applications. It can be used to collect and analyze metrics,
logs, events, and alarms. It is not a service or feature that can be used to restrict access to an
S3 bucket. AWS CloudTrail is a service that provides governance, compliance, and audit for
AWS accounts and resources. It can be used to track and record the API calls and user activity
in AWS. It is not a service or feature that can be used to restrict access to an S3 bucket.
25. A retail company has recently migrated its website to AWS. The company wants to ensure
that it is protected from SQL injection attacks. The website uses an Application Load Balancer
to distribute traffic to multiple Amazon EC2 instances.
Which AWS service or feature can be used to create a custom rule that blocks SQL injection
attacks?
A. Security groups
B. AWS WAF
C. Network ACLs
D. AWS Shield
Answer: B
Explanation:
AWS WAF is a web application firewall that helps protect your web applications or APIs against
common web exploits that may affect availability, compromise security, or consume excessive
resources. AWS WAF gives you control over how traffic reaches your applications by enabling
you to create security rules that block common attack patterns, such as SQL injection or cross-
site scripting, and rules that filter out specific traffic patterns you define. You can use AWS
WAF to create a custom rule that blocks SQL injection attacks on your website.
26. Which tasks are customer responsibilities, according to the AWS shared responsibility
model? (Select TWO.)
A. Configure the AWS provided security group firewall.
B. Classify company assets in the AWS Cloud.
C. Determine which Availability Zones to use for Amazon S3 buckets.
D. Patch or upgrade Amazon DynamoDB.
E. Select Amazon EC2 instances to run AWS Lambda on.
Answer: A, B
Explanation:
According to the AWS shared responsibility model, the customer is responsible for security in
the cloud, which includes the tasks of configuring the AWS provided security group firewall and
classifying company assets in the AWS Cloud. A security group is a virtual firewall that controls
the inbound and outbound traffic for one or more EC2 instances. The customer must configure
the security group rules to allow or deny traffic based on protocol, port, or source and
destination IP address. Classifying company assets in the AWS Cloud means identifying the
types, categories, and sensitivity levels of the data and resources that the customer stores and
processes on AWS. The customer must also determine the applicable compliance requirements
and regulations that apply to their assets, and implement the appropriate security controls and
measures to protect them.
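For example, configuring the security group firewall comes down to explicit rules like this boto3
(Python) sketch, where the group ID and CIDR range are placeholders:

import boto3

ec2 = boto3.client("ec2")

# Allow inbound HTTPS only from a specific address range.
ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",  # placeholder security group ID
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 443,
        "ToPort": 443,
        "IpRanges": [{"CidrIp": "203.0.113.0/24", "Description": "Corporate network"}],
    }],
)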
27. In which categories does AWS Trusted Advisor provide recommended actions? (Select
TWO.)
A. Operating system patches
B. Cost optimization
C. Repetitive tasks
D. Service quotas
E. Account activity records
Answer: B, D
Explanation:
AWS Trusted Advisor is a service that provides real-time guidance to help you provision your
resources following AWS best practices. AWS Trusted Advisor provides recommended actions
in five categories: cost optimization, performance, security, fault tolerance, and service quotas.
Cost optimization helps you reduce your overall AWS costs by identifying idle and underutilized
resources. The service quotas category helps you monitor and manage your usage of AWS
service quotas and request quota increases. Operating system patches, repetitive tasks, and
account activity
records are not categories that AWS Trusted Advisor provides recommended actions for.
Source: [AWS Trusted Advisor]
28. Which AWS service or feature captures information about the network traffic to and from an
Amazon EC2 instance?
A. VPC Reachability Analyzer
B. Amazon Athena
C. VPC Flow Logs
D. AWS X-Ray
Answer: C
Explanation:
The correct answer is C because VPC Flow Logs is an AWS service or feature that captures
information about the network traffic to and from an Amazon EC2 instance. VPC Flow Logs is a
feature that enables customers to capture information about the IP traffic going to and from
network interfaces in their VPC. VPC Flow Logs can help customers to monitor and
troubleshoot connectivity issues, such as traffic not reaching an instance or traffic being rejected
by a security group. The other options are incorrect because they are not AWS services or
features that capture information about the network traffic to and from an Amazon EC2
instance. VPC Reachability Analyzer is an AWS service or feature that enables customers to
perform connectivity testing between resources in their VPC and identify configuration issues
that prevent connectivity. Amazon Athena is an AWS service that enables customers to query
data stored in Amazon S3 using standard SQL. AWS X-Ray is an AWS service that enables
customers to analyze and debug distributed applications, such as those built using a
microservices architecture.
Reference: VPC Flow Logs
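A minimal boto3 (Python) sketch of enabling flow logs for a VPC, publishing to CloudWatch
Logs; the VPC ID, log group, and IAM role ARN are placeholders:

import boto3

ec2 = boto3.client("ec2")

# Capture accepted and rejected traffic for every network interface in the VPC.
ec2.create_flow_logs(
    ResourceIds=["vpc-0123456789abcdef0"],  # placeholder VPC ID
    ResourceType="VPC",
    TrafficType="ALL",
    LogGroupName="example-vpc-flow-logs",   # placeholder log group
    DeliverLogsPermissionArn="arn:aws:iam::123456789012:role/example-flow-logs-role",
)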
29. A company wants to monitor its workload performance. The company wants to ensure that
the cloud services are delivered at a level that meets its business needs.
Which AWS Cloud Adoption Framework (AWS CAF) perspective will meet these requirements?
A. Business
B. Governance
C. Platform
D. Operations
Answer: D
Explanation:
The Operations perspective helps you monitor and manage your cloud workloads to ensure that
they are delivered at a level that meets your business needs. Common stakeholders include the
chief operations officer (COO), cloud director, cloud operations manager, and cloud operations
engineers. The Operations perspective covers capabilities such as workload health monitoring,
incident management, change management, release management, configuration management,
and disaster recovery.
The Business perspective helps ensure that your cloud investments accelerate your digital
transformation ambitions and business outcomes. Common stakeholders include the chief
executive officer (CEO), chief financial officer (CFO), chief information officer (CIO), and chief
technology officer (CTO). The Business perspective covers capabilities such as business case
development, value realization, portfolio management, and stakeholder management.
The Governance perspective helps you orchestrate your cloud initiatives while maximizing
organizational benefits and minimizing transformation-related risks. Common stakeholders
include the chief transformation officer, CIO, CTO, CFO, chief data officer (CDO), and chief risk
officer (CRO). The Governance perspective covers capabilities such as governance framework,
budget and cost management, compliance management, and data governance.
The Platform perspective helps you build an enterprise-grade, scalable, hybrid cloud platform,
modernize existing workloads, and implement new cloud-native solutions. Common
stakeholders include the CTO, technology leaders, architects, and engineers. The Platform
perspective covers capabilities such as platform design and implementation, workload migration
and modernization, cloud-native development, and DevOps.
References:
AWS Cloud Adoption Framework: Operations Perspective
AWS Cloud Adoption Framework: Business Perspective
AWS Cloud Adoption Framework: Governance Perspective
AWS Cloud Adoption Framework: Platform Perspective
30. A company processes personally identifiable information (PII) and must keep data in the
country where it was generated. The company wants to use Amazon EC2 instances for these
workloads.
Which AWS service will meet these requirements?
A. AWS Outposts
B. AWS Storage Gateway
C. AWS DataSync
D. AWS OpsWorks
Answer: A
Explanation:
AWS Outposts is an AWS service that extends AWS infrastructure, services, APIs, and tools to
virtually any datacenter, co-location space, or on-premises facility. AWS Outposts enables you
to run Amazon EC2 instances and other AWS services locally, while maintaining a consistent
and seamless connection to the AWS Cloud. AWS Outposts is ideal for workloads that require
low latency, local data processing, or data residency. By using AWS Outposts, the company can
process personally identifiable information (PII) and keep data in the country where it was
generated, while leveraging the benefits of AWS
32. When designing AWS workloads to be operational even when there are component failures,
what is an AWS best practice?
A. Perform quarterly disaster recovery tests.
B. Place the main component on the us-east-1 Region.
C. Design for automatic failover to healthy resources.
D. Design workloads to fit on a single Amazon EC2 instance.
Answer: C
Explanation:
Designing for automatic failover to healthy resources is an AWS best practice when designing
AWS workloads to be operational even when there are component failures. This means that you
should architect your system to handle the loss of one or more components without impacting
the availability or performance of your application. You can use various AWS services and
features to achieve this, such as Auto Scaling, Elastic Load Balancing, Amazon Route 53, and
AWS CloudFormation.
33. A company has a compliance requirement to record and evaluate configuration changes, as
well as
perform remediation actions on AWS resources.
Which AWS service should the company use?
A. AWS Config
B. AWS Secrets Manager
C. AWS CloudTrail
D. AWS Trusted Advisor
Answer: A
Explanation:
AWS Config is a service that enables you to assess, audit, and evaluate the configurations of
your AWS resources. AWS Config continuously monitors and records your AWS resource
configurations and allows you to automate the evaluation of recorded configurations against
desired configurations. With AWS Config, you can review changes in configurations and
relationships between AWS resources, dive into detailed resource configuration histories, and
determine your overall compliance against the configurations specified in your internal
guidelines. This can help you simplify compliance auditing, security analysis, change
management, and operational troubleshooting.
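As a sketch, deploying one of the AWS managed Config rules takes a single call; this boto3
(Python) example uses the managed ENCRYPTED_VOLUMES rule, and the rule name is a
placeholder:

import boto3

config = boto3.client("config")

# Evaluate whether attached EBS volumes are encrypted.
config.put_config_rule(
    ConfigRule={
        "ConfigRuleName": "example-encrypted-volumes",  # placeholder name
        "Source": {
            "Owner": "AWS",
            "SourceIdentifier": "ENCRYPTED_VOLUMES",  # AWS managed rule
        },
    }
)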
34. A company is configuring its AWS Cloud environment. The company's administrators need
to group users together and apply permissions to the group.
Which AWS service or feature can the company use to meet these requirements?
A. AWS Organizations
B. Resource groups
C. Resource tagging
D. AWS Identity and Access Management (IAM)
Answer: D
Explanation:
The AWS service or feature that the company can use to group users together and apply
permissions to the group is AWS Identity and Access Management (IAM). AWS IAM is a service
that enables users to create and manage users, groups, roles, and permissions for AWS
services and resources. Users can use IAM groups to organize multiple users that have similar
access requirements, and attach policies to the groups that define the permissions for the users
in the group. This simplifies the management and administration of user access.
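A minimal boto3 (Python) sketch of the group workflow; the group name, user name, and the
choice of the ReadOnlyAccess managed policy are illustrative assumptions:

import boto3

iam = boto3.client("iam")

# Create a group, attach a permissions policy once, then add users to it.
iam.create_group(GroupName="example-developers")
iam.attach_group_policy(
    GroupName="example-developers",
    PolicyArn="arn:aws:iam::aws:policy/ReadOnlyAccess",  # AWS managed policy
)
iam.add_user_to_group(GroupName="example-developers", UserName="example-user")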
35. Which AWS service supports a hybrid architecture that gives users the ability to extend
AWS infrastructure,
AWS services, APIs, and tools to data centers, co-location environments, or on-premises
facilities?
A. AWS Snowmobile
B. AWS Local Zones
C. AWS Outposts
D. AWS Fargate
Answer: C
Explanation:
AWS Outposts is a service that delivers AWS infrastructure and services to virtually any on-
premises or edge location for a truly consistent hybrid experience. AWS Outposts allows you to
extend and run native AWS services on premises, and is available in a variety of form factors,
from 1U and 2U Outposts servers to 42U Outposts racks, and multiple rack deployments. With
AWS Outposts, you can run some AWS services locally and connect to a broad range of
services available in the local AWS Region. Run applications and workloads on premises using
familiar AWS services, tools, and APIs. AWS Outposts is the only AWS service that supports a
hybrid architecture that gives users the ability to extend AWS infrastructure, AWS services,
APIs, and tools to data centers, co-location environments, or on-premises facilities.
References: On-Premises Infrastructure - AWS Outposts Family
36. A company wants to deploy an application that must remain highly available. How should
the company deploy the application to meet these requirements?
A. In a single Availability Zone
B. On AWS Direct Connect
C. On Reserved Instances
D. In multiple Availability Zones
Answer: D
Explanation:
Deploying the application in multiple Availability Zones is the best way to ensure high availability
for the application. Availability Zones are isolated locations within an AWS Region that are
engineered to be fault-tolerant from failures in other Availability Zones. By deploying the
application in multiple Availability Zones, the company can reduce the impact of outages and
increase the resilience of the application. Deploying the application in a single Availability Zone,
on AWS Direct Connect, or on Reserved Instances does not provide the same level of high
availability as deploying the application in multiple Availability Zones. Source: Availability Zones
37. A company wants to build a new web application by using AWS services. The application
must meet the on-demand load for periods of heavy activity.
Which AWS services or resources provide the necessary workload adjustments to meet these
requirements? (Select TWO.)
A. Amazon Machine Image (AMI)
B. Amazon EC2 Auto Scaling
C. Amazon EC2 instance
D. AWS Lambda
E. EC2 Image Builder
Answer: B, D
Explanation:
Amazon EC2 Auto Scaling helps you ensure that you have the correct number of Amazon EC2
instances available to handle the load for your application. You create collections of EC2
instances, called Auto Scaling groups. You can specify the minimum number of instances in
each Auto Scaling group, and Amazon EC2 Auto Scaling ensures that your group never goes
below this size. You can specify the maximum number of instances in each Auto Scaling group,
and Amazon EC2 Auto Scaling ensures that your group never goes above this size. AWS
Lambda lets you run code without provisioning or managing servers. You pay only for the
compute time you consume. With Lambda, you can run code for virtually any type of application
or backend service - all with zero administration. Just upload your code and Lambda takes care
of everything required to run and scale your code with high availability. You can set up your
code to automatically trigger from other AWS services or call it directly from any web or mobile
app.
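For illustration, an Auto Scaling group that adjusts between a floor and a ceiling can be created
as in this boto3 (Python) sketch; the launch template and subnet IDs are placeholders:

import boto3

autoscaling = boto3.client("autoscaling")

# Keep between 2 and 10 instances, starting with 2.
autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="example-web-asg",
    LaunchTemplate={"LaunchTemplateName": "example-web-template", "Version": "$Latest"},
    MinSize=2,
    MaxSize=10,
    DesiredCapacity=2,
    VPCZoneIdentifier="subnet-0123456789abcdef0,subnet-0fedcba9876543210",
)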
38. Which pillar of the AWS Well-Architected Framework focuses on the return on investment of
moving into the AWS Cloud?
A. Sustainability
B. Cost optimization
C. Operational excellence
D. Reliability
Answer: B
Explanation:
Cost optimization is the pillar of the AWS Well-Architected Framework that focuses on the return
on investment of moving into the AWS Cloud. Cost optimization means that users can achieve
the desired business outcomes at the lowest possible price point, while maintaining high
performance and reliability. Cost optimization can be achieved by using various AWS features
and best practices, such as pay-as-you-go pricing, right-sizing, elasticity, reserved instances,
spot instances, cost allocation tags, cost and usage reports, and AWS Trusted Advisor.
References: [AWS Well-Architected Framework], AWS Certified Cloud Practitioner - aws.amazon.com
39. Which factors affect costs in the AWS Cloud? (Select TWO.)
A. The number of unused AWS Lambda functions
B. The number of configured Amazon S3 buckets
C. Inbound data transfers without acceleration
D. Outbound data transfers without acceleration
E. Compute resources that are currently in use
Answer: D, E
Explanation:
Outbound data transfers without acceleration and compute resources that are currently in use
are the factors that affect costs in the AWS Cloud. Outbound data transfers without acceleration
refer to the amount of data that is transferred from AWS to the internet, without using any
service that can optimize the speed and cost of the data transfer, such as AWS Global
Accelerator or Amazon CloudFront. Outbound data transfers are charged at different rates
depending on the source and destination AWS Regions, and the volume of data transferred.
Compute resources that are currently in use refer to the AWS services and resources that
provide computing capacity, such as Amazon EC2 instances, AWS Lambda functions, or
Amazon ECS tasks. Compute resources are charged based on the type, size, and configuration
of the resources, and the duration and frequency of their usage.
40. An administrator observed that multiple AWS resources were deleted yesterday.
Which AWS service will help identify the cause and determine which user deleted the
resources?
A. AWS CloudTrail
B. Amazon Inspector
C. Amazon GuardDuty
D. AWS Trusted Advisor
Answer: A
Explanation:
AWS CloudTrail is a service that enables governance, compliance, and operational and risk
auditing of your AWS account. With CloudTrail, you can log, continuously monitor, and retain
account activity related to actions across your AWS infrastructure. CloudTrail logs provide a
history of AWS API calls for your account, including those made by the AWS Management
Console, AWS SDKs, command-line tools, and other AWS services. In this case, AWS
CloudTrail will help the administrator identify which user deleted the resources by reviewing the
event history that records details such as which user performed the action, the time of the
action, and which resources were affected.
B. Amazon Inspector: Incorrect, as it is a security assessment service that helps identify
vulnerabilities and deviations from best practices, not for tracking user activity.
C. Amazon GuardDuty: Incorrect, as it is a threat detection service that monitors malicious
activity and unauthorized behavior, not specifically for tracking changes made by users.
D. AWS Trusted Advisor: Incorrect, as it provides best practices and guidance for cost
optimization, security, fault tolerance, and performance, not for logging user actions. AWS
Cloud References:
AWS CloudTrail
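A minimal boto3 (Python) sketch of searching the CloudTrail event history; the event name used
as a filter is only an example:

import boto3

cloudtrail = boto3.client("cloudtrail")

# Find recent delete-style API calls and print who made them.
events = cloudtrail.lookup_events(
    LookupAttributes=[
        {"AttributeKey": "EventName", "AttributeValue": "TerminateInstances"},  # example filter
    ],
    MaxResults=50,
)
for event in events["Events"]:
    print(event["EventTime"], event.get("Username"), event["EventName"])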
41. Which of the following is a recommended design principle of the AWS Well-Architected
Framework?
A. Reduce downtime by making infrastructure changes infrequently and in large increments.
B. Invest the time to configure infrastructure manually.
C. Learn to improve from operational failures.
D. Use monolithic application design for centralization.
Answer: C
Explanation:
The correct answer is C because learning to improve from operational failures is a
recommended design principle of the AWS Well-Architected Framework. The AWS Well-
Architected Framework is a set of best practices and guidelines for designing and operating
reliable, secure, efficient, and cost-effective systems in the cloud. The AWS Well-Architected
Framework consists of five pillars: operational excellence, security, reliability, performance
efficiency, and cost optimization. Each pillar has a set of design principles that describe the
characteristics of a well-architected system. Learning to improve from operational failures is a
design principle of the operational excellence pillar, which focuses on running and monitoring
systems to deliver business value and continually improve supporting processes and
procedures. The other options are incorrect because they are not recommended design
principles of the AWS Well-Architected Framework. Reducing downtime by making
infrastructure changes infrequently and in large increments is not a design principle of the AWS
Well-Architected Framework, but rather a source of risk and inefficiency. A well-architected
system should implement changes frequently and in small increments to minimize the impact
and scope of failures. Investing the time to configure infrastructure manually is not a design
principle of the AWS Well-Architected Framework, but rather a source of human error and
inconsistency. A well-architected system should automate manual tasks to improve the speed
and accuracy of operations. Using monolithic application design for centralization is not a design
principle of the AWS Well-Architected Framework, but rather a source of complexity and rigidity.
A well-architected system should use loosely coupled and distributed components to enable
scalability and resilience.
Reference: [AWS Well-Architected Framework]
42. A company wants to move its on-premises databases to managed cloud database services
by using a simplified migration process.
Which AWS service or tool can help the company meet this requirement?
A. AWS Storage Gateway
B. AWS Application Migration Service
C. AWS DataSync
D. AWS Database Migration Service (AWS DMS)
Answer: D
Explanation:
AWS Database Migration Service (AWS DMS) is a cloud service that makes it possible to
migrate relational databases, data warehouses, NoSQL databases, and other types of data
stores. You can use AWS DMS to migrate your data into the AWS Cloud or between
combinations of cloud and on-premises setups. With AWS DMS, you can discover your source
data stores, convert your source schemas, and migrate your data. AWS DMS supports
migration between 20-plus database and analytics engines, such as Oracle to Amazon Aurora
MySQL-Compatible Edition, MySQL to Amazon Relational Database Service (RDS) for MySQL,
Microsoft SQL Server to Amazon Aurora PostgreSQL-Compatible Edition, MongoDB to Amazon
DocumentDB (with MongoDB compatibility), Oracle to Amazon Redshift, and Amazon Simple
Storage Service (S3). You can perform one-time migrations or replicate ongoing changes to
keep sources and targets in sync. AWS DMS automatically manages the deployment,
management, and monitoring of all hardware and software needed for your migration. AWS
DMS is a highly resilient, secure cloud service that provides database discovery, schema
conversion, data migration, and ongoing replication to and from a wide range of databases and
analytics systems.
References: Database Migration - AWS Database Migration Service - AWS
What is AWS Database Migration Service? - AWS Database Migration Service
43. A company needs a repository that stores source code. The company needs a way to
update the running software when the code changes.
Which combination of AWS services will meet these requirements? (Select TWO.)
A. AWS CodeCommit
B. AWS CodeDeploy
C. Amazon DynamoDB
D. Amazon S3
E. Amazon Elastic Container Service (Amazon ECS)
Answer: A, B
Explanation:
A and B are correct because AWS CodeCommit is the AWS service that provides a fully
managed source control service that hosts secure Git-based repositories, and AWS
CodeDeploy is the AWS service that automates code deployments to any instance, including
Amazon EC2 instances and servers running on premises. These two services can be used
together to store source code and update the running software when the code changes. C is
incorrect because Amazon DynamoDB is the AWS service that provides a fully managed
NoSQL database service that supports key-value and document data models. It is not related
to storing source code or updating software. D is incorrect because Amazon S3 is the AWS
service that provides object storage through a web service interface. It can be used to store
source code, but it does not provide source control features or update software. E is incorrect
because Amazon Elastic Container Service (Amazon ECS) is the AWS service that allows users
to run, scale, and secure Docker container applications. It can be used to deploy containerized
software, but it does not store source code or update software.
44. A company is running applications on Amazon EC2 instances in the same AWS account for
several different projects. The company wants to track the infrastructure costs for each of the
projects separately. The company must conduct this tracking with the least possible impact to
the existing infrastructure and with no additional cost.
What should the company do to meet these requirements?
A. Use a different EC2 instance type for each project.
B. Publish project-specific custom Amazon CloudWatch metrics for each application.
C. Deploy EC2 instances for each project in a separate AWS account.
D. Use cost allocation tags with values that are specific to each project.
Answer: D
Explanation:
The correct answer is D because cost allocation tags are a way to track the infrastructure costs
for each of the projects separately. Cost allocation tags are key-value pairs that can be attached
to AWS resources, such as EC2 instances, and used to categorize and group them for billing
purposes. The other options are incorrect because they do not meet the requirements of the
question. Use a different EC2 instance type for each project does not help to track the costs for
each project, and may impact the performance and compatibility of the applications. Publish
project-specific custom Amazon CloudWatch metrics for each application does not help to track
the costs for each project, and may incur additional charges for using CloudWatch. Deploy EC2
instances for each project in a separate AWS account does help to track the costs for each
project, but it impacts the existing infrastructure and incurs additional charges for using multiple
accounts.
Reference: Using Cost Allocation Tags
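For example, tagging an instance with a project-specific value is a single call; this boto3
(Python) sketch uses a placeholder instance ID, and the tag key must still be activated as a cost
allocation tag in the Billing console before it appears in cost reports:

import boto3

ec2 = boto3.client("ec2")

# Tag resources per project; costs can then be grouped by this key.
ec2.create_tags(
    Resources=["i-0123456789abcdef0"],  # placeholder instance ID
    Tags=[{"Key": "project", "Value": "project-alpha"}],
)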
45. A systems administrator created a new IAM user for a developer and assigned the user an
access key instead of a user name and password.
What is the access key used for?
A. To access the AWS account as the AWS account root user
B. To access the AWS account through the AWS Management Console
C. To access the AWS account through a CLI
D. To access all of a company's AWS accounts
Answer: C
Explanation:
An access key is a pair of long-term credentials that consists of an access key ID and a secret
access key. An access key is used to sign programmatic requests to the AWS CLI or AWS API
(directly or using the AWS SDK). An access key allows a user to access the AWS account
through a CLI, which is a tool that enables users to interact with AWS services using commands
in a terminal or a script.
The other options are not correct, because:
To access the AWS account as the AWS account root user, a user needs the email address
and password associated with the account. The root user has complete access to all AWS
resources and services in the account. However, it is not recommended to use the root user for
everyday tasks.
To access the AWS account through the AWS Management Console, a user needs a user
name and password. The console is a web-based interface that allows users to manage their
AWS resources and services using a graphical user interface.
To access all of a company's AWS accounts, a user needs to use AWS Organizations, which is
a service that enables users to centrally manage and govern multiple AWS accounts. AWS
Organizations allows users to create groups of accounts and apply policies to them.
References:
Managing access keys for IAM users - AWS Identity and Access Management
What Is the AWS Command Line Interface? - AWS Command Line Interface
AWS account root user - AWS Identity and Access Management
What Is the AWS Management Console? - AWS Management Console
What Is AWS Organizations? - AWS Organizations
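To illustrate programmatic access, the following Python sketch builds a boto3 session from an
access key pair; the key values are placeholders, and in practice keys belong in
~/.aws/credentials or environment variables rather than in source code:

import boto3

# Programmatic credentials: an access key ID plus a secret access key.
session = boto3.Session(
    aws_access_key_id="AKIAEXAMPLEEXAMPLE",      # placeholder
    aws_secret_access_key="example-secret-key",  # placeholder
    region_name="us-east-1",
)

# Verify which identity the keys belong to.
print(session.client("sts").get_caller_identity()["Arn"])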
46. A company is planning a migration to the AWS Cloud and wants to examine the costs that
are associated with different workloads.
Which AWS tool will meet these requirements?
A. AWS Budgets
B. AWS Cost Explorer
C. AWS Pricing Calculator
D. AWS Cost and Usage Report
Answer: C
Explanation:
The AWS tool that will meet the requirements of the company that is planning a migration to the
AWS Cloud and wants to examine the costs that are associated with different workloads is AWS
Pricing Calculator. AWS Pricing Calculator is a tool that helps customers estimate the cost of
using AWS services based on their requirements and preferences. The company can use AWS
Pricing Calculator to compare the costs of different AWS services and configurations, such as
Amazon EC2, Amazon S3, Amazon RDS, and more. AWS Pricing Calculator also provides
detailed breakdowns of the cost components, such as compute, storage, network, and data
transfer. AWS Pricing Calculator helps customers plan and optimize their cloud budget and
migration strategy. AWS Budgets, AWS Cost Explorer, and AWS Cost and Usage Report are
not the best tools to use for this purpose. AWS Budgets is a tool that helps customers monitor
and manage their AWS spending and usage against predefined budget limits and thresholds.
AWS Cost Explorer is a tool that helps customers analyze and visualize their AWS spending
and usage trends over time. AWS Cost and Usage Report is a tool that helps customers access
comprehensive and granular information about their AWS costs and usage in a CSV or Parquet
file. These tools are more useful for tracking and optimizing the existing AWS costs and usage,
rather than estimating the costs of different workloads.
47. A company wants to integrate natural language processing (NLP) into business intelligence
(BI) dashboards. The company wants to ask questions and receive answers with relevant
visualizations.
Which AWS service or tool will meet these requirements?
A. Amazon Macie
B. Amazon Rekognition
C. Amazon QuickSight Q
D. Amazon Lex
Answer: C
Explanation:
Amazon QuickSight Q is a natural language query feature that lets you ask questions about
your data using everyday language and get answers in seconds. You can type questions such
as “What are the total sales by region?” or “How did marketing campaign A perform?” and get
answers in the form of relevant visualizations, such as charts or tables. You can also use Q to
drill down into details, filter data, or perform calculations. Q uses machine learning to
understand your data and your intent, and provides suggestions and feedback to help you refine
your questions.
50. Which AWS service or feature provides log information of the inbound and outbound traffic
on network interfaces in a VPC?
A. Amazon CloudWatch Logs
B. AWS CloudTrail
C. VPC Flow Logs
D. AWS Identity and Access Management (IAM)
Answer: C
Explanation:
VPC Flow Logs is a feature that enables you to capture information about the IP traffic going to
and from network interfaces in your VPC. Flow log data can be published to the following
locations: Amazon CloudWatch Logs, Amazon S3, or Amazon Kinesis Data Firehose. You can
use VPC Flow Logs to monitor network traffic, diagnose security issues, troubleshoot
connectivity problems, and perform network forensics.
References: Logging IP traffic using VPC Flow Logs - Amazon Virtual Private Cloud
51. A company wants to discover, prepare, move, and integrate data from multiple sources for
data analytics and machine learning.
Which AWS serverless data integration service should the company use to meet these
requirements?
A. AWS Glue
B. AWS Data Exchange
C. Amazon Athena
D. Amazon EMR
Answer: A
Explanation:
AWS Glue is a serverless data integration service designed to discover, prepare, move, and
integrate data from multiple sources for data analytics and machine learning purposes. It
provides a managed ETL (Extract, Transform, Load) service that is ideal for preparing and
transforming data for analytics. AWS Data Exchange is used for finding and subscribing to third-
party data, Amazon Athena is for querying data stored in Amazon S3 using SQL, and Amazon
EMR is for big data processing using Apache Hadoop and Spark, but AWS Glue is specifically
designed for data integration and preparation tasks.
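A minimal boto3 (Python) sketch of the discovery step: a Glue crawler that catalogs data sitting
in S3; the crawler name, IAM role, database, and S3 path are placeholders:

import boto3

glue = boto3.client("glue")

# Crawl an S3 prefix and populate the Glue Data Catalog with the discovered schema.
glue.create_crawler(
    Name="example-crawler",
    Role="arn:aws:iam::123456789012:role/example-glue-role",  # placeholder role
    DatabaseName="example_catalog_db",
    Targets={"S3Targets": [{"Path": "s3://example-data-bucket/raw/"}]},
)
glue.start_crawler(Name="example-crawler")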
52. Which AWS service is a highly available and scalable DNS web service?
A. Amazon VPC
B. Amazon CloudFront
C. Amazon Route 53
D. Amazon Connect
Answer: C
Explanation:
Amazon Route 53 is a highly available and scalable DNS web service. It is designed to give
developers and businesses an extremely reliable and cost-effective way to route end users to
Internet applications by translating domain names into the numeric IP addresses that computers
use to connect to each other. Amazon Route 53 also offers other features such as health
checks, traffic management, domain name registration, and DNSSEC.
53. A company has a social media platform in which users upload and share photos with other
users. The company wants to identify and remove inappropriate photos. The company has no
machine learning (ML) scientists and must build this detection capability with no ML expertise.
Which AWS service should the company use to build this capability?
A. Amazon SageMaker
B. Amazon Textract
C. Amazon Rekognition
D. Amazon Comprehend
Answer: C
Explanation:
Amazon Rekognition is the AWS service that the company should use to build the capability of
identifying and removing inappropriate photos. Amazon Rekognition is a service that uses deep
learning technology to analyze images and videos for various purposes, such as face detection,
object recognition, text extraction, and content moderation. Amazon Rekognition can help users
detect unsafe or inappropriate content in images and videos, such as nudity, violence, or drugs,
and provide confidence scores for each label. Amazon Rekognition does not require any
machine learning expertise, and users can easily integrate it with other AWS services.
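For illustration, moderation is a single API call with no model training required; this boto3
(Python) sketch uses a placeholder bucket and object key:

import boto3

rekognition = boto3.client("rekognition")

# Ask Rekognition whether an uploaded photo contains unsafe content.
response = rekognition.detect_moderation_labels(
    Image={"S3Object": {"Bucket": "example-photo-bucket", "Name": "uploads/photo.jpg"}},
    MinConfidence=80,
)
for label in response["ModerationLabels"]:
    print(label["Name"], label["Confidence"])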
54. A company needs to centralize its operational data. The company also needs to automate
tasks across all of its Amazon EC2 instances.
Which AWS service can the company use to meet these requirements?
A. AWS Trusted Advisor
B. AWS Systems Manager
C. AWS CodeDeploy
D. AWS Elastic Beanstalk
Answer: B
Explanation:
AWS Systems Manager is a service that enables users to centralize and automate the
management of their AWS resources. It provides a unified user interface to view operational
data, such as inventory, patch compliance, and performance metrics. It also allows users to
automate common and repetitive tasks, such as patching, backup, and configuration
management, across all of their Amazon EC2 instances1. AWS Trusted Advisor is a service that
provides best practices and recommendations to optimize the performance, security, and cost of
AWS resources2. AWS CodeDeploy is a service that automates the deployment of code and
applications to Amazon EC2 instances or other compute services3. AWS Elastic Beanstalk is a
service that simplifies the deployment and management of web applications using popular
platforms, such as Java, PHP, and Node.js4.
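A minimal sketch of automating a task with Systems Manager Run Command (this assumes the
target instances run the SSM Agent and have an instance profile that allows Systems Manager;
the tag values are hypothetical):

import boto3

ssm = boto3.client("ssm")

# Run a shell command on every instance tagged Environment=Production.
ssm.send_command(
    Targets=[{"Key": "tag:Environment", "Values": ["Production"]}],
    DocumentName="AWS-RunShellScript",  # an AWS-managed Systems Manager document
    Parameters={"commands": ["sudo yum update -y"]},
)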
55. A developer needs to build an application for a retail company. The application must provide
real-time product recommendations that are based on machine learning.
Which AWS service should the developer use to meet this requirement?
A. AWS Health Dashboard
B. Amazon Personalize
C. Amazon Forecast
D. Amazon Transcribe
Answer: B
Explanation:
Amazon Personalize is a fully managed machine learning service that customers can use to
generate personalized recommendations for their users. It can also generate user segments
based on the users’ affinity for certain items or item metadata. Amazon Personalize uses the
customers’ data to train and deploy custom recommendation models that can be integrated into
their applications. Therefore, the correct answer is B. You can learn more about Amazon
Personalize and its use case.
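A minimal sketch of requesting real-time recommendations with boto3 (this assumes a campaign
has already been trained and deployed; the campaign ARN and user ID are hypothetical):

import boto3

personalize = boto3.client("personalize-runtime")

# Fetch the top five recommended items for one user from a deployed campaign.
response = personalize.get_recommendations(
    campaignArn="arn:aws:personalize:us-east-1:123456789012:campaign/retail-recs",
    userId="user-42",
    numResults=5,
)
for item in response["itemList"]:
    print(item["itemId"])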
56. A company wants to migrate to AWS and use the same security software it uses on
premises. The security software vendor offers its security software as a service on AWS.
Where can the company purchase the security solution?
A. AWS Partner Solutions Finder
B. AWS Support Center
C. AWS Management Console
D. AWS Marketplace
Answer: D
Explanation:
AWS Marketplace is an online store that helps customers find, buy, and immediately start using
the software and services that run on AWS. Customers can choose from a wide range of
software products in popular categories such as security, networking, storage, machine
learning, business intelligence, database, and DevOps. Customers can also use AWS
Marketplace to purchase software as a service (SaaS) solutions that are integrated with AWS.
Customers can benefit from simplified procurement, billing, and deployment processes, as well
as flexible pricing options and free trials. Customers can also leverage AWS Marketplace to
discover and subscribe to solutions offered by AWS Partners, such as the security software
vendor mentioned in the question.
References: AWS Marketplace, [AWS Marketplace: Software as a Service (SaaS)], [AWS Cloud
Practitioner Essentials: Module 6 - AWS Pricing, Billing, and Support]
58. A company needs to design a solution for the efficient use of compute resources for an
enterprise workload. The company needs to make informed decisions as its technology needs
evolve.
Which pillar of the AWS Well-Architected Framework do these requirements represent?
A. Operational excellence
B. Performance efficiency
C. Cost optimization
D. Reliability
Answer: B
Explanation:
Performance efficiency is the pillar of the AWS Well-Architected Framework that represents the
requirements of designing a solution for the efficient use of compute resources for an enterprise
workload and making informed decisions as the technology needs evolve. It focuses on using
the right resources and services for the workload, monitoring performance, and continuously
improving the efficiency of the solution. Operational excellence is the pillar of the AWS Well-
Architected Framework that represents the ability to run and monitor systems to deliver
business value and to continually improve supporting processes and procedures. Cost
optimization is the pillar of the AWS Well-Architected Framework that represents the ability to
run systems to deliver business value at the lowest price point. Reliability is the pillar of the
AWS Well-Architected Framework that represents the ability of a system to recover from
infrastructure or service disruptions, dynamically acquire computing resources to meet demand,
and mitigate disruptions such as misconfigurations or transient network issues.
59. A company wants to migrate its on-premises workloads to the AWS Cloud. The company
wants to separate workloads for chargeback to different departments.
Which AWS services or features will meet these requirements? (Select TWO.)
A. Placement groups
B. Consolidated billing
C. Edge locations
D. AWS Config
E. Multiple AWS accounts
Answer: B, E
Explanation:
Consolidated billing is a feature of AWS Organizations that enables customers to consolidate
billing and payment for multiple AWS accounts. With consolidated billing, customers can group
multiple AWS accounts under one payer account, making it easier to manage billing and track
costs across multiple accounts. Consolidated billing also offers benefits such as volume
discounts, Reserved Instance discounts, and Savings Plans discounts. Consolidated billing is
offered at no additional cost. Multiple AWS accounts is a feature of AWS Organizations that
enables customers to create and manage multiple AWS accounts from a central location. With
multiple AWS accounts, customers can isolate workloads for different departments, projects, or
environments, and apply granular access controls and policies to each account. Multiple AWS
accounts also helps customers improve security, compliance, and governance of their AWS
resources56.
References: 5: Consolidated billing for AWS Organizations - AWS Billing, 6: Understanding
Consolidated Bills - AWS Billing, 7: AWS Consolidated Billing: Tutorial & Best Practices, 8:
Simplifying Your Bills With Consolidated Billing on AWS - Aimably, 9: AWS Consolidated Billing
- W3Schools
60. A company hosts a large amount of data in AWS. The company wants to identify if any of
the data should be considered sensitive.
Which AWS service will meet the requirement?
A. Amazon Inspector
B. Amazon Macie
C. AWS Identity and Access Management (IAM)
D. Amazon CloudWatch
Answer: B
Explanation:
Amazon Macie is a fully managed service that uses machine learning and pattern matching to
help you detect, classify, and better protect your sensitive data stored in the AWS Cloud1.
Macie can automatically discover and scan your Amazon S3 buckets for sensitive data such as
personally identifiable information (PII), financial information, healthcare information, intellectual
property, and credentials1. Macie also provides you with a dashboard that shows the type,
location, and volume of sensitive data in your AWS environment, as well as alerts and findings
on potential security issues1.
The other options are not suitable for identifying sensitive data in AWS. Amazon Inspector is a
service that helps you find security vulnerabilities and deviations from best practices in your
Amazon EC2 instances2. AWS Identity and Access Management (IAM) is a service that helps
you manage access to your AWS resources by creating users, groups, roles, and policies3.
Amazon CloudWatch is a service that helps you monitor and troubleshoot your AWS resources
and applications by collecting metrics, logs, events, and alarms4.
References:
1: What Is Amazon Macie? - Amazon Macie
2: What Is Amazon Inspector? - Amazon Inspector
3: What Is IAM? - AWS Identity and Access Management
4: What Is Amazon CloudWatch? - Amazon CloudWatch
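A minimal sketch of retrieving Macie findings with boto3 (this assumes Macie is already enabled
in the account):

import boto3

macie = boto3.client("macie2")

# List recent sensitive-data findings, then fetch their details.
finding_ids = macie.list_findings(maxResults=10)["findingIds"]
if finding_ids:
    for finding in macie.get_findings(findingIds=finding_ids)["findings"]:
        print(finding["type"], finding.get("severity", {}).get("description"))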
62. Which AWS service will help a company identify the user who deleted an Amazon EC2
instance yesterday?
A. Amazon CloudWatch
B. AWS Trusted Advisor
C. AWS CloudTrail
D. Amazon Inspector
Answer: C
Explanation:
The correct answer is C because AWS CloudTrail is a service that will help a company identify
the user who deleted an Amazon EC2 instance yesterday. AWS CloudTrail is a service that
enables users to track user activity and API usage across their AWS account. AWS CloudTrail
records the details of every API call made to AWS services, such as the identity of the caller,
the time of the call, the source IP address of the caller, the parameters and responses of the
call, and more. Users can use AWS CloudTrail to audit, monitor, and troubleshoot their AWS
resources and actions. The other options are incorrect because they are not services that will
help a company identify the user who deleted an Amazon EC2 instance yesterday. Amazon
CloudWatch is a service that enables users to collect, analyze, and visualize metrics, logs, and
events from their AWS resources and applications. AWS Trusted Advisor is a service that
provides real-time guidance to help users follow AWS best practices for security, performance,
cost optimization, and fault tolerance. Amazon Inspector is a service that helps users find
security vulnerabilities and deviations from best practices in their Amazon EC2 instances.
Reference: AWS CloudTrail FAQs
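A minimal sketch of the lookup with boto3, filtering the last 24 hours of management events for
TerminateInstances calls:

import boto3
from datetime import datetime, timedelta, timezone

cloudtrail = boto3.client("cloudtrail")

now = datetime.now(timezone.utc)
events = cloudtrail.lookup_events(
    LookupAttributes=[{"AttributeKey": "EventName", "AttributeValue": "TerminateInstances"}],
    StartTime=now - timedelta(days=1),
    EndTime=now,
)
for event in events["Events"]:
    # Username identifies the IAM identity that made the call.
    print(event.get("Username"), event["EventTime"])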
63. Which AWS Well-Architected Framework pillar focuses on structured and streamlined
allocation of computing resources?
A. Reliability
B. Operational excellence
C. Performance efficiency
D. Sustainability
Answer: C
Explanation:
Understanding Performance Efficiency: This pillar of the AWS Well-Architected Framework
focuses on using computing resources efficiently to meet system requirements and maintain
that efficiency as demand changes and technologies evolve.
Key Aspects of Performance Efficiency:
Selection: Choose the right resources for the job. This includes using the most appropriate
instance types, storage options, and database services.
Review: Regularly review your architecture to take advantage of the latest AWS services and
features, and to ensure you're using the best possible resource for your needs.
Monitoring: Continuously monitor your system performance, gather metrics, and use those
metrics to make informed decisions about scaling and performance optimization.
Trade-offs: Understand the trade-offs between various performance-related aspects, such as
cost, latency, and durability, and make decisions that align with your business goals.
How to Implement Performance Efficiency:
Use Auto Scaling: Implement Auto Scaling to automatically adjust the number of resources
based on the demand.
Choose Appropriate Storage Options: Select the right storage solution (e.g., S3, EBS, or EFS)
based on performance and access patterns.
Optimize Networking: Utilize Amazon CloudFront, AWS Global Accelerator, and VPC to
optimize your network performance.
Regular Review and Testing: Regularly review your architecture, test performance under
various loads, and adjust configurations as needed.
References:
AWS Well-Architected Framework
Performance Efficiency Pillar
64. An application is running on multiple Amazon EC2 instances. The company wants to make
the application highly available by configuring a load balancer with requests forwarded to the
EC2 instances based on URL paths.
Which AWS load balancer will meet these requirements and take the LEAST amount of effort to
deploy?
A. Network Load Balancer
B. Application Load Balancer
C. AWS OpsWorks Load Balancer
D. Custom Load Balancer on Amazon EC2
Answer: B
Explanation:
The correct answer is B because Application Load Balancer is an AWS load balancer that will
meet the requirements and take the least amount of effort to deploy. Application Load Balancer
is a type of Elastic Load Balancing that operates at the application layer (layer 7) of the OSI
model and routes requests to targets based on the content of the request. Application Load
Balancer supports advanced features, such as path-based routing, host-based routing, and
HTTP header-based routing. The other options are incorrect because they are not AWS load
balancers that will meet the requirements and take the least amount of effort to deploy. Network
Load Balancer is a type of Elastic Load Balancing that operates at the transport layer (layer 4)
of the OSI model and routes requests to targets based on the destination IP address and port.
Network Load Balancer does not support path-based routing. AWS OpsWorks Load Balancer is
not an AWS load balancer, but rather a feature of AWS OpsWorks that enables users to attach
an Elastic Load Balancing load balancer to a layer of their stack. Custom Load Balancer on
Amazon EC2 is not an AWS load balancer, but rather a user-defined load balancer that runs on
an Amazon EC2 instance. Custom Load Balancer on Amazon EC2 requires more effort to
deploy and maintain than an AWS load balancer.
Reference: Elastic Load Balancing
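A minimal sketch of adding a path-based routing rule to an existing listener with boto3 (the
listener and target group ARNs are hypothetical):

import boto3

elbv2 = boto3.client("elbv2")

# Forward requests whose path starts with /api/ to a dedicated target group.
elbv2.create_rule(
    ListenerArn="arn:aws:elasticloadbalancing:us-east-1:123456789012:"
                "listener/app/my-alb/50dc6c495c0c9188/f2f7dc8efc522ab2",
    Priority=10,
    Conditions=[{"Field": "path-pattern", "PathPatternConfig": {"Values": ["/api/*"]}}],
    Actions=[{
        "Type": "forward",
        "TargetGroupArn": "arn:aws:elasticloadbalancing:us-east-1:123456789012:"
                          "targetgroup/api-servers/73e2d6bc24d8a067",
    }],
)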
65. A company has an application that runs periodically in an on-premises environment. The
application runs for a few hours most days, but runs for 8 hours a day for a week at the end of
each month.
Which AWS service or feature should be used to host the application in the AWS Cloud?
A. Amazon EC2 Standard Reserved Instances
B. Amazon EC2 On-Demand Instances
C. AWS Wavelength
D. Application Load Balancer
Answer: B
Explanation:
Amazon EC2 On-Demand Instances are instances that you pay for by the second, with no long-
term commitments or upfront payments4. This option is suitable for applications that have
unpredictable or intermittent workloads, such as the one described in the question. Amazon
EC2 Standard Reserved Instances are instances that you purchase for a one-year or three-year
term, and pay a lower hourly rate compared to On-Demand Instances. This option is suitable for
applications that have steady state or predictable usage. AWS Wavelength is a service that
enables developers to build applications that deliver ultra-low latency to mobile devices and
users by deploying AWS compute and storage at the edge of the 5G network. This option is not
relevant for the application described in the question. Application Load Balancer is a type of
load balancer that operates at the application layer and distributes traffic based on the content
of the request. This option is not a service or feature to host the application, but rather to
balance the traffic among multiple instances.
66. Which AWS service provides threat detection by monitoring for malicious activities and
unauthorized actions to protect AWS accounts, workloads, and data that is stored in Amazon
S3?
A. AWS Shield
B. AWS Firewall Manager
C. Amazon GuardDuty
D. Amazon Inspector
Answer: C
Explanation:
Amazon GuardDuty is a service that provides intelligent threat detection and continuous
monitoring for your AWS accounts, workloads, and data. Amazon GuardDuty analyzes and
processes data sources, such as VPC Flow Logs, AWS CloudTrail event logs, and DNS logs, to
identify malicious activities and unauthorized actions, such as reconnaissance, instance
compromise, account compromise, and data exfiltration. Amazon GuardDuty can also detect
threats to your data stored in Amazon S3, such as API calls from unusual locations or disabling
of preventative controls. Amazon GuardDuty generates findings that summarize the details of
the detected threats and provides recommendations for remediation. AWS Shield, AWS Firewall
Manager, and Amazon Inspector are not the best services to meet this requirement. AWS
Shield is a service that provides protection against distributed denial of service (DDoS) attacks.
AWS Firewall Manager is a service that allows you to centrally configure and manage firewall
rules across your accounts and resources. Amazon Inspector is a service that assesses the
security and compliance of your applications running on EC2 instances.
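A minimal sketch of reading GuardDuty findings with boto3 (this assumes GuardDuty is already
enabled, so a detector exists in the Region):

import boto3

guardduty = boto3.client("guardduty")

# Findings are grouped under a detector (one per account per Region).
detector_id = guardduty.list_detectors()["DetectorIds"][0]
finding_ids = guardduty.list_findings(DetectorId=detector_id, MaxResults=10)["FindingIds"]
for finding in guardduty.get_findings(DetectorId=detector_id, FindingIds=finding_ids)["Findings"]:
    print(finding["Type"], finding["Severity"])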
67. In the AWS shared responsibility model, which tasks are the responsibility of AWS? (Select
TWO.)
A. Patch an Amazon EC2 instance operating system.
B. Configure a security group.
C. Monitor the health of an Availability Zone.
D. Protect the infrastructure that runs Amazon EC2 instances.
E. Manage access to the data in an Amazon S3 bucket
Answer: C, D
Explanation:
According to the AWS shared responsibility model, AWS is responsible for the security of the
cloud, which includes the tasks of monitoring the health of an Availability Zone and protecting
the infrastructure that runs Amazon EC2 instances. An Availability Zone is a physically isolated
location within an AWS Region that has its own power, cooling, and network connectivity. AWS
monitors the health and performance of each Availability Zone and notifies customers of any
issues or disruptions. AWS also protects the infrastructure that runs AWS services, such as
Amazon EC2, by implementing physical, environmental, and operational security measures.
AWS is not responsible for patching an Amazon EC2 instance operating system, configuring a
security group, or managing access to the data in an Amazon S3 bucket. These are the
customer’s responsibilities for security in the cloud. The customer must ensure that the
operating system and applications on their EC2 instances are up to date and secure. The
customer must also configure the security group rules that control the inbound and outbound
traffic for their EC2 instances. The customer must also manage the access permissions and
encryption settings for their S3 buckets and objects2
68. Which option is an advantage of AWS Cloud computing that minimizes variable costs?
A. High availability
B. Economies of scale
C. Global reach
D. Agility
Answer: B
Explanation:
One of the advantages of AWS Cloud computing is that it minimizes variable costs by
leveraging economies of scale. This means that AWS can achieve lower costs per unit of
computing resources by spreading the fixed costs of building and maintaining data centers over
a large number of customers. As a result, AWS can offer lower and more predictable prices to
its customers, who only pay for the resources they consume. Therefore, the correct answer is B.
You can learn more about AWS pricing and economies of scale
69. A company is hosting an application in the AWS Cloud. The company wants to verify that
underlying AWS services and general AWS infrastructure are operating normally.
Which combination of AWS services can the company use to gather the required information?
(Select TWO.)
A. AWS Personal Health Dashboard
B. AWS Systems Manager
C. AWS Trusted Advisor
D. AWS Service Health Dashboard
E. AWS Service Catalog
Answer: A, D
Explanation:
AWS Personal Health Dashboard and AWS Service Health Dashboard are two AWS services
that can help the company to verify that underlying AWS services and general AWS
infrastructure are operating normally. AWS Personal Health Dashboard provides a personalized
view into the performance and availability of the AWS services you are using, as well as alerts
that are automatically triggered by changes in the health of those services. In addition to event-
based alerts, Personal Health Dashboard provides proactive notifications of scheduled
activities, such as any changes to the infrastructure powering your resources, enabling you to
better plan for events that may affect you. These notifications can be delivered to you via email
or mobile for quick visibility, and can always be viewed from within the AWS Management
Console. When you get an alert, it includes detailed information and guidance, enabling you to
take immediate action to address AWS events impacting your resources3. AWS Service Health
Dashboard provides a general status of AWS services, and the Service health view displays the
current and historical status of all AWS services. This page shows reported service events for
services across AWS Regions. You don’t need to sign in or have an AWS account to access
the AWS Service Health Dashboard - Service health page. You can also subscribe to RSS
feeds for specific services or regions to receive notifications about service events4.
References: Getting started with your AWS Health Dashboard - Your account health,
Introducing AWS Personal Health Dashboard
70. Which AWS service or feature can be used to create a private connection between an on-
premises workload and an AWS Cloud workload?
A. Amazon Route 53
B. Amazon Macie
C. AWS Direct Connect
D. AWS PrivateLink
Answer: C
Explanation:
AWS Direct Connect is a service that establishes a dedicated network connection between your
on-premises network and one or more AWS Regions. AWS Direct Connect can be used to
create a private connection between an on-premises workload and an AWS Cloud workload,
bypassing the public internet and reducing network costs, latency, and bandwidth issues. AWS
Direct Connect can also provide increased security and reliability for your hybrid cloud
applications and data transfers.
References:
AWS Direct Connect
What is AWS Direct Connect?
AWS Direct Connect User Guide
71. A company uses Amazon Aurora as its database service. The company wants to encrypt its
databases and database backups.
Which party manages the encryption of the database clusters and database snapshots,
according to the AWS shared responsibility model?
A. AWS
B. The company
C. AWS Marketplace partners
D. Third-party partners
Answer: A
Explanation:
AWS manages the encryption of the database clusters and database snapshots for Amazon
Aurora, as well as the encryption keys. This is part of the AWS shared responsibility model,
where AWS is responsible for the security of the cloud, and the customer is responsible for the
security in the cloud. Encryption is one of the security features that AWS provides to protect the
data at rest and in transit. For more information, see Amazon Aurora FAQs and AWS Shared
Responsibility Model.
72. A company is requesting Payment Card Industry (PCI) reports that validate the operating
effectiveness of AWS security controls.
How should the company obtain these reports?
A. Contact AWS Support
B. Download reports from AWS Artifact.
C. Download reports from AWS Security Hub.
D. Contact an AWS technical account manager (TAM).
Answer: B
Explanation:
AWS Artifact is a service provided by AWS that offers on-demand access to AWS compliance
reports, including the Payment Card Industry (PCI) reports. It is the primary tool for retrieving
compliance reports such as Service Organization Control (SOC) reports, ISO certifications, and
Payment Card Industry Data Security Standard (PCI DSS) reports.
To obtain these reports:
The company should log into the AWS Management Console and navigate to AWS Artifact.
From there, they can select and download the necessary compliance reports.
Why other options are not suitable:
A. Contact AWS Support: AWS Support is not needed to obtain these reports; they are readily
available through AWS Artifact.
C. Download reports from AWS Security Hub: AWS Security Hub is a service that provides a
comprehensive view of security alerts and compliance status, but it does not host or provide
compliance reports like PCI DSS.
D. Contact an AWS technical account manager (TAM): While a TAM may assist in various AWS-
related queries, they are not required to obtain PCI reports. AWS Artifact is designed for this
purpose.
References:
AWS Artifact Documentation
73. A company needs to configure rules to identify threats and protect applications from
malicious network access.
Which AWS service should the company use to meet these requirements?
A. AWS Identity and Access Management (IAM)
B. Amazon QuickSight
C. AWS WAF
D. Amazon Detective
Answer: C
Explanation:
AWS WAF is the AWS service that the company should use to configure rules to identify threats
and protect applications from malicious network access. AWS WAF is a web application firewall
that helps to filter, monitor, and block malicious web requests based on customizable rules.
AWS WAF can be integrated with other AWS services, such as Amazon CloudFront, Amazon
API Gateway, and Application Load Balancer. For more information, see What is AWS WAF?
and How AWS WAF Works.
74. A company has two AWS accounts in an organization in AWS Organizations for
consolidated billing.
All of the company's AWS resources are hosted in one AWS Region.
Account A has purchased five Amazon EC2 Standard Reserved Instances (RIs) and has four
EC2 instances running. Account B has not purchased any RIs and also has four EC2 instances
running.
Which statement is true regarding pricing for these eight instances?
A. The eight instances will be charged as regular instances.
B. Four instances will be charged as RIs, and four will be charged as regular instances.
C. Five instances will be charged as RIs, and three will be charged as regular instances.
D. The eight instances will be charged as RIs.
Answer: C
Explanation:
The statement that is true regarding pricing for these eight instances is: five instances will be
charged as RIs, and three will be charged as regular instances. Amazon EC2 Reserved
Instances (RIs) are a pricing model that allows users to reserve EC2 instances for a specific
term and benefit from discounted hourly rates and capacity reservation. Under consolidated
billing, AWS Organizations treats all accounts in the organization as one account, so the hourly
cost benefit of RIs purchased by any account is shared across the organization by default. In
this case, Account A has purchased five RIs and runs four instances, so those four instances
are charged at the RI rate. The unused RI discount is applied to one of the four instances
running in Account B, and the remaining three instances in Account B are charged at the
regular On-Demand rate.
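To make the billing arithmetic concrete, here is a small sketch using hypothetical hourly rates
(actual rates vary by instance type, Region, platform, and RI term):

# Hypothetical rates: On-Demand $0.10/hour, effective RI rate $0.06/hour.
on_demand_rate = 0.10
ri_rate = 0.06

ris_purchased = 5        # Account A
instances_account_a = 4
instances_account_b = 4

total_instances = instances_account_a + instances_account_b
ri_covered = min(ris_purchased, total_instances)  # 5 instances receive the RI rate
on_demand = total_instances - ri_covered          # 3 instances pay the On-Demand rate

hourly_bill = ri_covered * ri_rate + on_demand * on_demand_rate
print(f"{ri_covered} at RI rate, {on_demand} On-Demand: ${hourly_bill:.2f}/hour")
# -> 5 at RI rate, 3 On-Demand: $0.60/hour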
75. Which programming languages does AWS Cloud Development Kit (AWS CDK) currently
support? (Select TWO.)
A. Python
B. Swift
C. TypeScript
D. Ruby
E. PHP
Answer: A, C
Explanation:
The AWS Cloud Development Kit (AWS CDK) currently supports multiple programming
languages, including Python and TypeScript. These languages allow developers to define cloud
infrastructure using familiar programming constructs. Python and TypeScript are among the first
languages supported by AWS CDK, which also supports Java, C#, Go, and JavaScript. This
enables developers to use their existing programming skills and tools to define cloud
infrastructure in code. B. Swift: Incorrect, as Swift is not currently supported by AWS CDK.
D. Ruby: Incorrect, as Ruby is not currently supported by AWS CDK.
E. PHP: Incorrect, as PHP is not currently supported by AWS CDK.
AWS Cloud References: AWS Cloud Development Kit (AWS CDK)
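A minimal AWS CDK app in Python, one of the supported languages, might look like the sketch
below; it assumes the aws-cdk-lib (v2) and constructs packages are installed and defines one
stack containing a versioned S3 bucket:

from aws_cdk import App, Stack
from aws_cdk import aws_s3 as s3
from constructs import Construct

class DemoStack(Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)
        # Infrastructure is declared with ordinary Python code.
        s3.Bucket(self, "DemoBucket", versioned=True)

app = App()
DemoStack(app, "DemoStack")
app.synth()  # emits a CloudFormation template under cdk.out/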
76. Which AWS service or feature is used to troubleshoot network connectivity issues between
Amazon EC2 instances?
A. AWS Certificate Manager (ACM)
B. Internet gateway
C. VPC Flow Logs
D. AWS CloudHSM
Answer: C
Explanation:
VPC Flow Logs is the AWS service or feature that is used to troubleshoot network connectivity
issues between Amazon EC2 instances. VPC Flow Logs is a feature that enables users to
capture information about the IP traffic going to and from network interfaces in their VPC. VPC
Flow Logs can help users monitor and diagnose network-related issues, such as traffic not
reaching an instance, or an instance not responding to requests. VPC Flow Logs can be
published to Amazon CloudWatch Logs, Amazon S3, or Amazon Kinesis Data Firehose for
analysis and storage.
77. A company needs to continuously monitor its environment to analyze network and account
activity and identify potential security threats.
Which AWS service should the company use to meet these requirements?
A. AWS Artifact
B. Amazon Macie
C. AWS Identity and Access Management (IAM)
D. Amazon GuardDuty
Answer: D
Explanation:
Amazon GuardDuty is a service that provides intelligent threat detection and continuous
monitoring for the AWS environment. It analyzes network and account activity using machine
learning and threat intelligence to identify potential security threats, such as unauthorized
access, compromised credentials, malicious hosts, and reconnaissance activities. It also
generates detailed and actionable findings that can be viewed on the AWS Management
Console or sent to other AWS services, such as Amazon CloudWatch Events and AWS
Lambda, for further analysis or remediation.
References: Amazon GuardDuty Overview, AWS Certified Cloud Practitioner - aws.amazon.com
78. Which AWS service will help protect applications running on AWS from DDoS attacks?
A. Amazon GuardDuty
B. AWS WAF
C. AWS Shield
D. Amazon Inspector
Answer: C
Explanation:
AWS Shield is a managed Distributed Denial of Service (DDoS) protection service that
safeguards applications running on AWS. AWS Shield provides always-on detection and
automatic inline mitigations that minimize application downtime and latency, so there is no need
to engage AWS Support to benefit from DDoS protection3.
79. A company is running its application in the AWS Cloud. The company wants to periodically
review its AWS account for cost optimization opportunities.
Which AWS service or tool can the company use to meet these requirements?
A. AWS Cost Explorer
B. AWS Trusted Advisor
C. AWS Pricing Calculator
D. AWS Budgets
Answer: A
Explanation:
AWS Cost Explorer is an AWS service or tool that the company can use to periodically review
its AWS account for cost optimization opportunities. AWS Cost Explorer is a tool that enables
the company to visualize, understand, and manage their AWS costs and usage over time. The
company can use AWS Cost Explorer to access interactive graphs and tables that show the
breakdown of their costs and usage by service, region, account, tag, and more. The company
can also use AWS Cost Explorer to forecast their future costs, identify trends and anomalies,
and discover potential savings by using Reserved Instances or Savings Plans.
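A minimal sketch of pulling a cost breakdown with the Cost Explorer API via boto3 (the date
range is hypothetical, and Cost Explorer must be enabled for the account):

import boto3

ce = boto3.client("ce")  # Cost Explorer

# Break down one month of unblended cost by service.
result = ce.get_cost_and_usage(
    TimePeriod={"Start": "2024-01-01", "End": "2024-02-01"},
    Granularity="MONTHLY",
    Metrics=["UnblendedCost"],
    GroupBy=[{"Type": "DIMENSION", "Key": "SERVICE"}],
)
for group in result["ResultsByTime"][0]["Groups"]:
    print(group["Keys"][0], group["Metrics"]["UnblendedCost"]["Amount"])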
80. Which statements represent the cost-effectiveness of the AWS Cloud? (Select TWO.)
A. Users can trade fixed expenses for variable expenses.
B. Users can deploy all over the world in minutes.
C. AWS offers increased speed and agility.
D. AWS is responsible for patching the infrastructure.
E. Users benefit from economies of scale.
Answer: A, E
Explanation:
The statements that represent the cost-effectiveness of the AWS Cloud are:
Users can trade fixed expenses for variable expenses. By using the AWS Cloud, users can pay
only for the resources they use, instead of investing in fixed and upfront costs for hardware and
software. This can lower the total cost of ownership and increase the return on investment.
Users benefit from economies of scale. By using the AWS Cloud, users can leverage the
massive scale and efficiency of AWS to access lower prices and higher performance. AWS
passes the cost savings to the users through price reductions and innovations.
References: AWS Cloud Value Framework
81. A company Is designing its AWS workloads so that components can be updated regularly
and so that changes can be made in small, reversible increments.
Which pillar of the AWS Well-Architected Framework does this design support?
A. Security
B. Performance efficiency
C. Operational excellence
D. Reliability
Answer: C
Explanation:
Understanding Operational Excellence: The Operational Excellence pillar of the AWS Well-
Architected Framework focuses on running and monitoring systems to deliver business value
and continuously improving supporting processes and procedures.
Key Concepts of Operational Excellence:
Small, Reversible Changes: Making changes in small, incremental steps allows for easier
troubleshooting and rollback if issues arise.
Regular Updates: Regularly updating components ensures that systems stay up-to-date with the
latest features, security patches, and performance improvements.
Automation: Implementing automation for deployments, updates, and monitoring to reduce
human error and increase efficiency.
Continuous Improvement: Encouraging continuous learning and process improvement to
enhance operational processes.
Implementing Operational Excellence:
Deployment Automation: Use CI/CD pipelines to automate deployments and ensure that
changes can be rolled back if necessary.
Monitoring and Logging: Implement comprehensive monitoring and logging to track system
health and performance.
Incident Response: Develop a robust incident response plan to handle issues quickly and
efficiently.
Documentation and Training: Maintain thorough documentation and provide training to ensure
teams can effectively manage and improve operations.
References: AWS Well-Architected Framework: Operational Excellence Pillar
82. Which of the following are design principles for reliability in the AWS Cloud? (Select TWO.)
A. Build architectures with tightly coupled resources.
B. Use AWS Trusted Advisor to meet security best practices.
C. Use automation to recover immediately from failure.
D. Rightsize Amazon EC2 instances to ensure optimal performance.
E. Simulate failures to test recovery processes.
Answer: C, E
Explanation:
The design principles for reliability in the AWS Cloud are:
Test recovery procedures. The best way to ensure that systems can recover from failures is to
regularly test them using simulated scenarios. This can help identify gaps and improve the
recovery process.
Automatically recover from failure. By using automation, systems can detect and correct failures
without human intervention. This can reduce the impact and duration of failures and improve the
availability of the system.
Scale horizontally to increase aggregate system availability. By adding more redundant
resources to the system, the impact of individual resource failures can be reduced. This can
also improve the performance and scalability of the system.
Stop guessing capacity. By using monitoring and automation, systems can adjust the capacity
based on the demand and performance metrics. This can prevent failures due to insufficient or
excessive capacity and optimize the cost and efficiency of the system.
Manage change in automation. By using automation, changes to the system can be applied in a
consistent and controlled manner. This can reduce the risk of human errors and configuration
drifts that can cause failures.
References: AWS Well-Architected Framework
83. Which of the following is a pillar of the AWS Well-Architected Framework?
A. Redundancy
B. Operational excellence
C. Availability
D. Multi-Region
Answer: B
Explanation:
The AWS Well-Architected Framework helps cloud architects build secure, high-performing,
resilient, and efficient infrastructure for their applications and workloads. Based on six pillars
(operational excellence, security, reliability, performance efficiency, cost optimization, and
sustainability), the
Framework provides a consistent approach for customers and partners to evaluate
architectures, and implement designs that can scale over time. Operational excellence is one of
the pillars of the Framework, and it focuses on running and monitoring systems to deliver
business value, and continually improving processes and procedures.
84. Which Amazon S3 storage class is MOST cost-effective for unknown access patterns?
A. S3 Standard
B. S3 Standard-Infrequent Access (S3 Standard-IA)
C. S3 One Zone-Infrequent Access (S3 One Zone-IA)
D. S3 Intelligent-Tiering
Answer: D
Explanation:
Understanding S3 Intelligent-Tiering: S3 Intelligent-Tiering is designed to optimize costs by
automatically moving data to the most cost-effective access tier based on changing access
patterns. It is ideal for data with unknown or unpredictable access patterns.
Why S3 Intelligent-Tiering is Cost-Effective:
Automatic Tiering: Moves data between access tiers (such as frequent access, infrequent
access, and archive instant access) based on changing access patterns, optimizing storage
costs without performance impact.
No Retrieval Fees: Unlike other storage classes, there are no retrieval fees in Intelligent-Tiering,
making it cost-effective for data with unpredictable access patterns.
Monitoring and Automation: Automatically monitors access patterns and transitions data,
reducing the need for manual intervention.
When to Use S3 Intelligent-Tiering:
Unpredictable Access Patterns: Ideal for datasets where the access frequency cannot be
determined or changes frequently.
Cost Optimization: For organizations looking to minimize storage costs without sacrificing
performance or requiring manual intervention to move data between tiers.
References:
Amazon S3 Intelligent-Tiering
Amazon S3 Storage Classes
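A minimal sketch of writing an object directly into the Intelligent-Tiering storage class with boto3
(the bucket, key, and local file name are hypothetical):

import boto3

s3 = boto3.client("s3")

# Store the object in Intelligent-Tiering so S3 moves it between
# access tiers automatically as its access pattern changes.
with open("events.parquet", "rb") as body:
    s3.put_object(
        Bucket="my-analytics-data",
        Key="datasets/events.parquet",
        Body=body,
        StorageClass="INTELLIGENT_TIERING",
    )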
85. Which AWS service allows for file sharing between multiple Amazon EC2 Instances?
A. AWS Direct Connect
B. AWS Snowball Edge
C. AWS Backup
D. Amazon Elastic File System (Amazon EFS)
Answer: D
Explanation:
Amazon Elastic File System (Amazon EFS) is a scalable, fully managed, shared file storage
service that is accessible from multiple Amazon EC2 instances. EFS is designed to provide
highly available and durable file storage that can be mounted across multiple instances, making
it ideal for shared file access.
Why other options are not suitable:
A. AWS Direct Connect: A network service to establish a dedicated network connection to AWS,
not a file-sharing solution.
B. AWS Snowball Edge: A data transfer device for migrating data to and from AWS, not for
ongoing file sharing.
C. AWS Backup: A service for centralized backup management, not for sharing files between
EC2 instances.
References: Amazon EFS Documentation
86. A company wants to store data with high availability, encrypt the data at rest, and have
direct access to the data over the internet.
Which AWS service will meet these requirements MOST cost-effectively?
A. Amazon Elastic Block Store (Amazon EBS)
B. Amazon S3
C. Amazon Elastic File System (Amazon EFS)
D. AWS Storage Gateway
Answer: B
Explanation:
Amazon S3 is the service that meets these requirements most cost-effectively. Amazon S3
stores objects redundantly across multiple Availability Zones, providing high availability and
99.999999999% (11 9s) of durability. Data at rest can be encrypted with server-side encryption,
and objects can be accessed directly over the internet through HTTPS endpoints. Amazon EBS
is block storage that must be attached to an Amazon EC2 instance, Amazon EFS is a network
file system designed to be mounted by compute resources rather than accessed directly over
the internet, and AWS Storage Gateway is a hybrid storage service that connects on-premises
environments to AWS, so none of these options meets the requirements as cost-effectively as
Amazon S3.
87. Which pillar of the AWS Well-Architected Framework focuses on the ability to run workloads
effectively, gain insight into operations, and continuously improve supporting processes and
procedures?
A. Cost optimization
B. Reliability
C. Operational excellence
D. Performance efficiency
Answer: C
Explanation:
The AWS Well-Architected Framework is a set of best practices and guidelines for designing
and operating systems in the cloud. The framework consists of six pillars: operational
excellence, security, reliability, performance efficiency, cost optimization, and sustainability. The operational
excellence pillar focuses on the ability to run workloads effectively, gain insight into operations,
and continuously improve supporting processes and procedures. Therefore, the correct answer
is C. You can learn more about the AWS Well-Architected Framework and its pillars.
89. Which tool should a developer use to integrate AWS service features directly into an
application?
A. AWS Software Development Kit
B. AWS CodeDeploy
C. AWS Lambda
D. AWS Batch
Answer: A
Explanation:
AWS Software Development Kit (SDK) is a set of platform-specific tools for developers that let
them integrate AWS service features directly into their applications. AWS SDKs provide
libraries, code samples, documentation, and other resources to help developers write code that
interacts with AWS APIs. AWS SDKs support various programming languages, such as Java,
Python, Ruby, .NET, Node.js, Go, and more. AWS SDKs make it easier for developers to
access AWS services, such as Amazon S3, Amazon EC2, Amazon DynamoDB, AWS Lambda,
and more, from their applications. AWS SDKs also handle tasks such as authentication, error
handling, retries, and data serialization, so developers can focus on their application logic.
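As a minimal illustration with the AWS SDK for Python (boto3), the call below lists the S3
buckets in an account; the SDK turns the underlying API operation into an ordinary function call:

import boto3  # the AWS SDK for Python

# boto3 handles request signing, retries, and serialization behind the scenes.
s3 = boto3.client("s3")
for bucket in s3.list_buckets()["Buckets"]:
    print(bucket["Name"])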
90. A network engineer needs to build a hybrid cloud architecture connecting on-premises
networks to the AWS Cloud using AWS Direct Connect. The company has a few VPCs in a
single AWS Region and expects to increase the number of VPCs to hundreds over time.
Which AWS service or feature should the engineer use to simplify and scale this connectivity as
the VPCs increase in number?
A. VPC endpoints
B. AWS Transit Gateway
C. Amazon Route 53
D. AWS Secrets Manager
Answer: B
Explanation:
AWS Transit Gateway is a network transit hub that you can use to interconnect your VPCs and
on-premises networks through a central gateway. AWS Transit Gateway simplifies and scales
the connectivity between your on-premises networks and AWS, as you only need to create and
manage a single connection from the central gateway to each on-premises network, rather than
individual connections to each VPC. You can also use AWS Transit Gateway to connect to
other AWS services, such as Amazon S3, Amazon DynamoDB, and AWS PrivateLink12. AWS
Transit Gateway supports thousands of VPCs per gateway, and enables you to peer Transit
Gateways across AWS Regions3. The other options are not AWS services or features that can
simplify and scale the connectivity between on-premises networks and hundreds of VPCs using
AWS Direct Connect. VPC endpoints enable private connectivity between your VPCs and
supported AWS services, but do not support on-premises networks4. Amazon Route 53 is a
DNS service that helps you route internet traffic to your resources, but does not provide network
connectivity5. AWS Secrets Manager is a service that helps you securely store and manage
secrets, such as database credentials and API keys, but does not relate to network connectivity.
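A minimal sketch of creating the hub and attaching one VPC with boto3 (the VPC and subnet
IDs are hypothetical, and in practice the gateway must reach the available state before
attachments succeed):

import boto3

ec2 = boto3.client("ec2")

# Create the central transit gateway (the hub).
tgw = ec2.create_transit_gateway(Description="hybrid-connectivity-hub")
tgw_id = tgw["TransitGateway"]["TransitGatewayId"]

# Attach a VPC to the hub; repeat for each additional VPC.
ec2.create_transit_gateway_vpc_attachment(
    TransitGatewayId=tgw_id,
    VpcId="vpc-0123456789abcdef0",
    SubnetIds=["subnet-0123456789abcdef0"],
)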
91. Which AWS services allow users to monitor and retain records of account activities that
include governance, compliance, and auditing? (Select TWO.)
A. Amazon CloudWatch
B. AWS CloudTrail
C. Amazon GuardDuty
D. AWS Shield
E. AWS WAF
Answer: A, B
Explanation:
Amazon CloudWatch and AWS CloudTrail are the AWS services that allow users to monitor and
retain records of account activities that include governance, compliance, and auditing. Amazon
CloudWatch is a service that collects and tracks metrics, collects and monitors log files, and
sets alarms. AWS CloudTrail is a service that enables governance, compliance, operational
auditing, and risk auditing of your AWS account. Amazon GuardDuty, AWS Shield, and AWS
WAF are AWS services that provide security and protection for AWS resources, but they do not
monitor and retain records of account activities. These concepts are explained in the AWS
Cloud Practitioner Essentials course3.
92. Which AWS service provides DNS resolution?
A. Amazon CloudFront
B. Amazon VPC
C. Amazon Route 53
D. AWS Direct Connect
Answer: C
Explanation:
Amazon Route 53 is the AWS service that provides DNS resolution. DNS (Domain Name
System) is a service that translates domain names into IP addresses. Amazon Route 53 is a
highly available and scalable cloud DNS service that offers domain name registration, DNS
routing, and health checking.
Amazon Route 53 can route the traffic to various AWS services, such as Amazon EC2, Amazon
S3, and Amazon CloudFront. Amazon Route 53 can also integrate with other AWS services,
such as AWS Certificate Manager, AWS Shield, and AWS WAF. For more information, see
[What is Amazon Route 53?] and [Amazon Route 53 Features].
93. Which task is the customer's responsibility, according to the AWS shared responsibility
model?
A. Maintain the security of the AWS Cloud.
B. Configure firewalls and networks.
C. Patch the operating system of Amazon RDS instances.
D. Implement physical and environmental controls.
Answer: B
Explanation:
According to the AWS shared responsibility model, the customer is responsible for the security
in the cloud, which includes configuring firewalls and networks. AWS provides security groups
and network access control lists (NACLs) as firewall features that customers can use to control
the traffic to and from their AWS resources. Customers are also responsible for managing their
own virtual private clouds (VPCs), subnets, route tables, internet gateways, and other network
components. AWS is responsible for the security of the cloud, which includes the physical
security of the facilities, the host operating system and virtualization layer, and the AWS global
network infrastructure12.
References:
Shared Responsibility Model - Amazon Web Services (AWS)
Shared responsibility model - Amazon Web Services: Risk and Compliance
94. Which tasks are customer responsibilities according to the AWS shared responsibility
model? (Select TWO.)
A. Determine application dependencies with operating systems.
B. Provide user access with AWS Identity and Access Management (IAM).
C. Secure the data center in an Availability Zone.
D. Patch the hypervisor.
E. Provide network availability in Availability Zones.
Answer: A, B
Explanation:
The correct answers are A and B. Providing user access with AWS Identity and Access
Management (IAM) is a customer responsibility according to the AWS shared
responsibility model. The AWS shared responsibility model is a framework that defines the
division of responsibilities between AWS and the customer for security and compliance. AWS is
responsible for the security of the cloud, which includes the global infrastructure, such as the
regions, availability zones, and edge locations; the hardware, software, networking, and facilities
that run the AWS services; and the virtualization layer that separates the customer instances
and storage. The customer is responsible for the security in the cloud, which includes the
customer data, the guest operating systems, the applications, the identity and access
management, the firewall configuration, and the encryption. IAM is an AWS service that enables
customers to manage access and permissions to AWS resources and services. Customers are
responsible for creating and managing IAM users, groups, roles, and policies, and ensuring that
they follow the principle of least privilege. Determining application dependencies with
operating systems is also a customer responsibility, because customers control their guest
operating systems and the applications that run on them.
Reference: AWS Shared Responsibility Model
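A minimal sketch of exercising this customer responsibility with boto3 (the user name is
hypothetical; ReadOnlyAccess is an AWS managed policy):

import boto3

iam = boto3.client("iam")

# Create a user and grant read-only access with an AWS managed policy.
iam.create_user(UserName="data-analyst")
iam.attach_user_policy(
    UserName="data-analyst",
    PolicyArn="arn:aws:iam::aws:policy/ReadOnlyAccess",
)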
95. A company runs an uninterruptible Amazon EC2 workload on AWS 24 hours a day. 7 days
a week. The company will require the same instance family and instance type to run the
workload for the next 12 months.
Which combination of purchasing options should the company choose to MOST optimize costs?
(Select TWO.)
A. Standard Reserved Instance
B. Convertible Reserved Instance
C. Compute Savings Plan
D. Spot Instance
E. All Upfront payment
Answer: A, E
Explanation:
For workloads running 24/7 for a year, Standard Reserved Instances provide a significant
discount compared to On-Demand pricing. Choosing the "All Upfront" payment option
maximizes the cost savings as AWS offers the highest discount for upfront payments.
Convertible Reserved Instances provide flexibility to change the instance type but usually at a
slightly higher cost than Standard Reserved Instances. Compute Savings Plans offer cost
savings, but in this scenario, the best optimization would be a combination of Standard
Reserved Instances with All Upfront payment. Spot Instances are not suitable due to their
interruptible nature.
References: AWS EC2 Reserved Instances
AWS Savings Plans
96. Which AWS solution gives companies the ability to use protocols such as NFS to store and
retrieve objects in Amazon S3?
A. Amazon FSx for Lustre
B. AWS Storage Gateway volume gateway
C. AWS Storage Gateway file gateway
D. Amazon Elastic File System (Amazon EFS)
Answer: C
Explanation:
AWS Storage Gateway file gateway allows companies to use protocols such as NFS and SMB
to store and retrieve objects in Amazon S3. File gateway provides a seamless integration
between on-premises applications and Amazon S3, and enables low-latency access to data
through local caching. File gateway also supports encryption, compression, and lifecycle
management of the objects in Amazon S3. For more information, see What is AWS Storage
Gateway? and File Gateway.
97. Which option is an AWS Cloud Adoption Framework (AWS CAF) foundational capability for
the operations perspective?
A. Performance and capacity management
B. Application portfolio management
C. Identity and access management
D. Product management
Answer: A
Explanation:
Performance and capacity management is one of the foundational capabilities for the operations
perspective of the AWS Cloud Adoption Framework (AWS CAF). It involves monitoring and
managing workload performance and ensuring that capacity meets demand in a cost-effective
way. Identity and access management is a capability for the security perspective. Application
portfolio management is a capability for the governance perspective. Product management is a
capability for the business perspective.
98. Which AWS service enables companies to deploy an application close to end users?
A. Amazon CloudFront
B. AWS Auto Scaling
C. AWS AppSync
D. Amazon Route 53
Answer: A
Explanation:
Amazon CloudFront is a fast content delivery network (CDN) service that securely delivers data,
videos, applications, and APIs to customers globally with low latency, high transfer speeds, all
within a developer-friendly environment. CloudFront enables companies to deploy an
application close to end users by caching the application’s content at edge locations that are
geographically closer to the users. This reduces the network latency and improves the user
experience. CloudFront also integrates with other AWS services, such as Amazon S3, Amazon
EC2, AWS Lambda, AWS Shield, and AWS WAF, to provide a secure and scalable solution for
delivering applications12.
References: What Is Amazon CloudFront? - Amazon CloudFront
Amazon CloudFront Features - Amazon CloudFront
99. Which services can be used to deploy applications on AWS? (Select TWO.)
A. AWS Elastic Beanstalk
B. AWS Config
C. AWS OpsWorks
D. AWS Application Discovery Service
E. Amazon Kinesis
Answer: A, C
Explanation:
The services that can be used to deploy applications on AWS are:
AWS Elastic Beanstalk. This is a service that simplifies the deployment and management of
web applications on AWS. Users can upload their application code and Elastic Beanstalk
automatically handles the provisioning, scaling, load balancing, monitoring, and health checking
of the resources needed to run the application. Users can also retain full control and access to
the underlying resources and customize their configuration settings. Elastic Beanstalk supports
multiple platforms, such as Java, .NET, PHP, Node.js, Python, Ruby, Go, and Docker. [AWS
Elastic Beanstalk Overview] AWS Certified Cloud Practitioner - aws.amazon.com
AWS OpsWorks. This is a service that provides configuration management and automation for
AWS resources. Users can define the application architecture and the configuration of each
resource using Chef or Puppet, which are popular open-source automation platforms.
OpsWorks then automatically creates and configures the resources according to the user’s
specifications. OpsWorks also provides features such as auto scaling, monitoring, and
integration with other AWS services. OpsWorks has
two offerings: OpsWorks for Chef Automate and OpsWorks for Puppet Enterprise. [AWS
OpsWorks Overview] AWS Certified Cloud Practitioner - aws.amazon.com
100. A company needs help managing multiple AWS linked accounts that are reported on a
consolidated bill.
Which AWS Support plan includes an AWS concierge whom the company can ask for
assistance?
A. AWS Developer Support
B. AWS Enterprise Support
C. AWS Business Support
D. AWS Basic Support
Answer: B
Explanation:
AWS Enterprise Support is the AWS Support plan that includes an AWS concierge whom the
company can ask for assistance. According to the AWS Support Plans page, AWS Enterprise
Support provides "a dedicated Technical Account Manager (TAM) who provides advocacy and
guidance to help plan and build solutions using best practices, coordinate access to subject
matter experts, and proactively keep your AWS environment operationally healthy."2 Enterprise
Support also includes access to the Concierge Support team for billing and account assistance.
AWS Business Support, AWS Developer Support, and AWS Basic Support do not include a
TAM or a concierge service.
101. A company wants durable storage for static content and infinitely scalable data storage
infrastructure at the lowest cost.
Which AWS service should the company choose?
A. Amazon Elastic Block Store (Amazon EBS)
B. Amazon S3
C. AWS Storage Gateway
D. Amazon Elastic File System (Amazon EFS)
Answer: B
Explanation:
Amazon S3 is a service that provides durable storage for static content and infinitely scalable
data storage infrastructure at the lowest cost. Amazon S3 is an object storage service that
allows you to store and retrieve any amount of data from anywhere on the internet. Amazon S3
offers industry-leading scalability, availability, and performance, as well as 99.999999999% (11
9s) of durability and multi-AZ resilience. Amazon S3 also provides various storage classes that
offer different levels of performance and cost optimization, such as S3 Standard, S3 Intelligent-
Tiering, S3 Standard-Infrequent Access (S3 Standard-IA), S3 One Zone-Infrequent Access (S3
One Zone-IA), and S3 Glacier456. Amazon S3 is ideal for storing static content, such as
images, videos, documents, and web pages, as well as building data lakes, backup and archive
solutions, big data analytics, and machine learning applications456.
References: 4: Cloud Storage on AWS, 5: Object Storage - Amazon Simple Storage Service
(S3) - AWS, 6: Amazon S3 Documentation
102. A company plans to migrate its custom marketing application and order-processing
application to AWS. The company needs to deploy the applications on different types of
instances with various configurations of CPU, memory, storage, and networking capacity.
Which AWS service should the company use to meet these requirements?
A. AWS Lambda
B. Amazon Cognito
C. Amazon Athena
D. Amazon EC2
Answer: D
Explanation:
Amazon EC2 (Elastic Compute Cloud) provides scalable computing capacity in the AWS Cloud,
allowing customers to run virtual servers (instances) with different configurations of CPU,
memory, storage, and networking capacity. This flexibility is ideal for applications that require
specific infrastructure configurations, such as custom marketing and order-processing
applications.
A. AWS Lambda: Incorrect, as it is a serverless compute service that automatically manages
the computing resources needed to run code but does not offer the flexibility of choosing
different instance types.
B. Amazon Cognito: Incorrect, as it is used for user authentication and authorization, not for
deploying applications.
C. Amazon Athena: Incorrect, as it is an interactive query service for analyzing data in Amazon
S3
using standard SQL.
AWS Cloud References:
Amazon EC2
103. Which actions are examples of a company's effort to right size its AWS resources to
control cloud costs? (Select TWO.)
A. Switch from Amazon RDS to Amazon DynamoDB to accommodate NoSQL datasets.
B. Base the selection of Amazon EC2 instance types on past utilization patterns.
C. Use Amazon S3 Lifecycle policies to move objects that users access infrequently to lower-
cost storage tiers.
D. Use Multi-AZ deployments for Amazon RDS.
E. Replace existing Amazon EC2 instances with AWS Elastic Beanstalk.
Answer: B, C
Explanation:
Basing the selection of Amazon EC2 instance types on past utilization patterns is a way to right
size the AWS resources and optimize the performance and cost. Using Amazon S3 Lifecycle
policies to move objects that users access infrequently to lower-cost storage tiers is another
way to reduce the storage costs and align them with the business value of the data. These two
actions are recommended by the AWS Cost Optimization Pillar1. Switching from Amazon RDS
to Amazon DynamoDB is not necessarily a cost-saving action, as it depends on the use case
and the data model. Using Multi-AZ deployments for Amazon RDS is a way to improve the
availability and durability of the database, but it also increases the cost. Replacing existing
Amazon EC2 instances with AWS Elastic Beanstalk is a way to simplify the deployment and
management of the application, but it does not affect the cost of the underlying EC2 instances.
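A minimal sketch of the S3 Lifecycle approach with boto3 (the bucket name and prefix are
hypothetical):

import boto3

s3 = boto3.client("s3")

# Move objects under logs/ to the lower-cost Standard-IA tier after 30 days.
s3.put_bucket_lifecycle_configuration(
    Bucket="my-app-logs",
    LifecycleConfiguration={
        "Rules": [{
            "ID": "archive-old-logs",
            "Status": "Enabled",
            "Filter": {"Prefix": "logs/"},
            "Transitions": [{"Days": 30, "StorageClass": "STANDARD_IA"}],
        }]
    },
)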
104. Which of the following is a managed AWS service that is used specifically for extract,
transform, and load (ETL) data?
A. Amazon Athena
B. AWS Glue
C. Amazon S3
D. AWS Snowball Edge
Answer: B
Explanation:
AWS Glue is a serverless data integration service that makes it easy to discover, prepare,
move, and integrate data from multiple sources for analytics, machine learning, and application
development. It supports several data integration patterns, including ETL, ELT, batch, and
streaming, and manages metadata in a centralized Data Catalog. Among the options, only AWS
Glue is purpose-built for extract, transform, and load (ETL) workloads.
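A typical Glue ETL job is a short PySpark script executed by the Glue service rather than
locally. The sketch below follows the standard structure of a generated job; the database,
table, and S3 path names are hypothetical:

import sys
from awsglue.transforms import ApplyMapping
from awsglue.utils import getResolvedOptions
from awsglue.context import GlueContext
from awsglue.job import Job
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Extract: read a table registered in the Glue Data Catalog
# (database and table names are placeholders).
orders = glue_context.create_dynamic_frame.from_catalog(
    database="example_db", table_name="raw_orders"
)

# Transform: rename and cast columns.
mapped = ApplyMapping.apply(
    frame=orders,
    mappings=[
        ("order_id", "string", "order_id", "string"),
        ("total", "string", "total", "double"),
    ],
)

# Load: write the result to S3 as Parquet (path is a placeholder).
glue_context.write_dynamic_frame.from_options(
    frame=mapped,
    connection_type="s3",
    connection_options={"path": "s3://example-curated-bucket/orders/"},
    format="parquet",
)
job.commit()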
105. A company is planning to move data backups to the AWS Cloud. The company needs to
replace on-premises storage with storage that is cloud-based but locally cached.
Which AWS service meets these requirements?
A. AWS Storage Gateway
B. AWS Snowcone
C. AWS Backup
D. Amazon Elastic File System (Amazon EFS)
Answer: A
Explanation:
AWS Storage Gateway is a hybrid cloud storage service that provides on-premises access to
virtually unlimited cloud storage. The File Gateway configuration allows on-premises
applications to store data in Amazon S3 using NFS and SMB protocols, while keeping
frequently accessed data cached locally. This meets the company's requirement for cloud-
based storage that is locally cached.
B. AWS Snowcone: Incorrect, as it is a portable edge computing and storage device, not a
storage service that provides local caching for cloud storage.
C. AWS Backup: Incorrect, as it is a centralized backup service to automate data protection
across AWS services but does not provide local caching.
D. Amazon Elastic File System (Amazon EFS): Incorrect, as it provides scalable file storage for
use with Amazon EC2 but does not offer local caching for on-premises storage needs.
AWS Cloud References:
AWS Storage Gateway
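Once a File Gateway appliance has been deployed and activated, the file share that backs it
with Amazon S3 can be created through the API. A minimal boto3 sketch follows; all ARNs are
placeholders for resources that would already exist:

import boto3
import uuid

sgw = boto3.client("storagegateway")

# Create an NFS file share on an already-activated File Gateway. The
# gateway caches frequently accessed data locally while persisting
# objects in the S3 bucket. All ARNs below are placeholders.
sgw.create_nfs_file_share(
    ClientToken=str(uuid.uuid4()),  # idempotency token
    GatewayARN="arn:aws:storagegateway:us-east-1:123456789012:gateway/sgw-EXAMPLE",
    Role="arn:aws:iam::123456789012:role/StorageGatewayS3Access",
    LocationARN="arn:aws:s3:::example-backup-bucket",
)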
107. Which AWS service provides a single location to track the progress of application
migrations?
A. AWS Application Discovery Service
B. AWS Application Migration Service
C. AWS Service Catalog
D. AWS Migration Hub
Answer: D
Explanation:
AWS Migration Hub is a service that provides a single location to track the progress of
application migrations across multiple AWS and partner solutions. It allows you to choose the
AWS and partner migration tools that best fit your needs, while providing visibility into the status
of migrations across your portfolio of applications1. AWS Migration Hub supports migration
status updates from tools such as AWS Application Migration Service, AWS Database
Migration Service, AWS Server Migration Service, and CloudEndure Migration1.
The other options are not correct for the following reasons:
AWS Application Discovery Service is a service that helps you plan your migration projects by
automatically identifying servers, applications, and dependencies in your on-premises data
centers2. It does not track the progress of application migrations, but rather provides information
to help you plan and scope your migrations.
AWS Application Migration Service is a service that helps you migrate and modernize
applications from any source infrastructure to AWS with minimal downtime and disruption3. It is
one of the migration tools that can send status updates to AWS Migration Hub, but it is not the
service that provides a single location to track the progress of application migrations.
AWS Service Catalog is a service that allows you to create and manage catalogs of IT services
that are approved for use on AWS4. It does not track the progress of application migrations, but
rather helps you manage the provisioning and governance of your IT services.
References:
1: What Is AWS Migration Hub? - AWS Migration Hub
2: What Is AWS Application Discovery Service? - AWS Application Discovery Service
3: App Migration Tool - AWS Application Migration Service - AWS
4: What Is AWS Service Catalog? - AWS Service Catalog
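Migration Hub's consolidated view is also queryable through its API. A minimal boto3 sketch
is shown below; note that calls must target the account's configured Migration Hub home
region, and the region shown is a placeholder:

import boto3

# "mgh" is the boto3 client name for AWS Migration Hub. The region
# below stands in for the account's actual home region.
mgh = boto3.client("mgh", region_name="us-west-2")

# Each integrated migration tool reports into a progress update stream.
streams = mgh.list_progress_update_streams()
for stream in streams["ProgressUpdateStreamSummaryList"]:
    print("Stream:", stream["ProgressUpdateStreamName"])

# List the migration tasks tracked across all tools in one place.
tasks = mgh.list_migration_tasks()
for task in tasks["MigrationTaskSummaryList"]:
    print(task["MigrationTaskName"], "-", task.get("Status"))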