Amazon-Web-Services
Exam Questions SAA-C03
AWS Certified Solutions Architect - Associate (SAA-C03)
NEW QUESTION 1
- (Topic 1)
A company hosts a containerized web application on a fleet of on-premises servers that process incoming requests. The number of requests is growing quickly.
The on-premises servers cannot handle the increased number of requests. The company wants to move the application to AWS with minimum code changes and
minimum development effort.
Which solution will meet these requirements with the LEAST operational overhead?
A. Use AWS Fargate on Amazon Elastic Container Service (Amazon ECS) to run the containerized web application with Service Auto Scaling. Use an Application Load Balancer to distribute the incoming requests.
B. Use two Amazon EC2 instances to host the containerized web application. Use an Application Load Balancer to distribute the incoming requests.
C. Use AWS Lambda with new code that uses one of the supported languages. Create multiple Lambda functions to support the load. Use Amazon API Gateway as an entry point to the Lambda functions.
D. Use a high performance computing (HPC) solution such as AWS ParallelCluster to establish an HPC cluster that can process the incoming requests at the appropriate scale.
Answer: A
Explanation:
AWS Fargate is a serverless compute engine that lets users run containers without having to manage servers or clusters of Amazon EC2 instances1. Users can
use AWS Fargate on Amazon Elastic Container Service (Amazon ECS) to run the containerized web application with Service Auto Scaling. Amazon ECS is a fully
managed container orchestration service that supports both Docker and Kubernetes2. Service Auto Scaling is a feature that allows users to adjust the desired
number of tasks in an ECS service based on CloudWatch metrics, such as CPU utilization or request count3. Users can use AWS Fargate on
Amazon ECS to migrate the application to AWS with minimum code changes and minimum development effort, as they only need to package their application in
containers and specify the CPU and memory requirements.
Users can also use an Application Load Balancer to distribute the incoming requests. An Application Load Balancer is a load balancer that operates at the
application layer and routes traffic to targets based on the content of the request. Users can register their ECS tasks as targets for an Application Load Balancer
and configure listener rules to route requests to different target groups based on path or host headers. Users can use an Application Load Balancer to improve the
availability and performance of their web
application.
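For illustration, Service Auto Scaling for a Fargate-backed ECS service can be configured with the AWS SDK for Python (boto3) roughly as follows; the cluster name, service name, and capacity values are hypothetical placeholders, not values from the question:

import boto3

autoscaling = boto3.client("application-autoscaling")

# Register the ECS service's desired task count as a scalable target
autoscaling.register_scalable_target(
    ServiceNamespace="ecs",
    ResourceId="service/web-cluster/web-app",        # hypothetical cluster/service names
    ScalableDimension="ecs:service:DesiredCount",
    MinCapacity=2,
    MaxCapacity=20,
)

# Track average CPU utilization so the task count follows the request load
autoscaling.put_scaling_policy(
    PolicyName="cpu-target-tracking",
    ServiceNamespace="ecs",
    ResourceId="service/web-cluster/web-app",
    ScalableDimension="ecs:service:DesiredCount",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "TargetValue": 60.0,
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ECSServiceAverageCPUUtilization"
        },
    },
)

The ECS service itself would be registered with an Application Load Balancer target group so that the load balancer spreads incoming requests across the running tasks.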
NEW QUESTION 2
- (Topic 1)
A company performs monthly maintenance on its AWS infrastructure. During these maintenance activities, the company needs to rotate the credentials for its
Amazon RDS for MySQL databases across multiple AWS Regions.
Which solution will meet these requirements with the LEAST operational overhead?
Answer: A
Explanation:
https://aws.amazon.com/blogs/security/how-to-replicate-secrets-aws-secrets-manager- multiple-regions/
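As a rough sketch of that pattern with boto3 (the secret name, replica Regions, and rotation Lambda ARN below are hypothetical placeholders):

import boto3

secrets = boto3.client("secretsmanager", region_name="us-east-1")

# Replicate the RDS for MySQL secret to the other Regions that need it
secrets.replicate_secret_to_regions(
    SecretId="prod/mysql/admin",                      # hypothetical secret name
    AddReplicaRegions=[{"Region": "eu-west-1"}, {"Region": "ap-southeast-2"}],
)

# Rotate the primary secret on a schedule by using a rotation Lambda function
secrets.rotate_secret(
    SecretId="prod/mysql/admin",
    RotationLambdaARN="arn:aws:lambda:us-east-1:111122223333:function:rds-rotation",
    RotationRules={"AutomaticallyAfterDays": 30},
)

Replica secrets in the other Regions are kept in sync automatically, so the rotation only needs to be managed in the primary Region.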
NEW QUESTION 3
- (Topic 1)
A company runs an online marketplace web application on AWS. The application serves hundreds of thousands of users during peak hours. The company needs a
scalable, near-real-time solution to share the details of millions of financial transactions with several other internal applications. Transactions also need to be
processed to remove sensitive data before being stored in a document database for low-latency retrieval.
What should a solutions architect recommend to meet these requirements?
A. Store the transactions data in Amazon DynamoDB. Set up a rule in DynamoDB to remove sensitive data from every transaction upon write. Use DynamoDB Streams to share the transactions data with other applications.
B. Stream the transactions data into Amazon Kinesis Data Firehose to store data in Amazon DynamoDB and Amazon S3. Use AWS Lambda integration with Kinesis Data Firehose to remove sensitive data. Other applications can consume the data stored in Amazon S3.
C. Stream the transactions data into Amazon Kinesis Data Streams. Use AWS Lambda integration to remove sensitive data from every transaction and then store the transactions data in Amazon DynamoDB. Other applications can consume the transactions data off the Kinesis data stream.
D. Store the batched transactions data in Amazon S3 as files. Use AWS Lambda to process every file and remove sensitive data before updating the files in Amazon S3. The Lambda function then stores the data in Amazon DynamoDB. Other applications can consume transaction files stored in Amazon S3.
Answer: C
Explanation:
The destination of your Kinesis Data Firehose delivery stream. Kinesis Data Firehose can send data records to various destinations, including Amazon Simple
Storage Service (Amazon S3), Amazon Redshift, Amazon OpenSearch Service, and any HTTP endpoint that is owned by you or any of your third-party service
providers. The following are the supported destinations:
* Amazon OpenSearch Service
* Amazon S3
* Datadog
* Dynatrace
* Honeycomb
* HTTP Endpoint
* Logic Monitor
* MongoDB Cloud
* New Relic
* Splunk
* Sumo Logic
https://docs.aws.amazon.com/firehose/latest/dev/create-name.html
https://aws.amazon.com/kinesis/data-streams/
Amazon Kinesis Data Streams (KDS) is a massively scalable and durable real-time data streaming service. KDS can continuously capture gigabytes of data per
second from hundreds of thousands of sources such as website clickstreams, database event streams, financial transactions, social media feeds, IT logs, and
location-tracking events.
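A minimal sketch of the Lambda consumer described in option C, assuming hypothetical attribute names for the sensitive fields and a DynamoDB table named Transactions:

import base64
import json

import boto3

table = boto3.resource("dynamodb").Table("Transactions")   # hypothetical table name
SENSITIVE_FIELDS = {"card_number", "cvv"}                   # assumed attribute names

def handler(event, context):
    # Records from the Kinesis data stream arrive base64-encoded
    for record in event["Records"]:
        transaction = json.loads(base64.b64decode(record["kinesis"]["data"]))
        cleaned = {k: v for k, v in transaction.items() if k not in SENSITIVE_FIELDS}
        table.put_item(Item=cleaned)                        # low-latency document store

Other internal applications can attach their own consumers to the same Kinesis data stream without affecting this function.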
NEW QUESTION 4
- (Topic 1)
A company observes an increase in Amazon EC2 costs in its most recent bill. The billing team notices unwanted vertical scaling of instance types for a couple of
EC2 instances. A solutions architect needs to create a graph comparing the last 2 months of EC2 costs and perform an in-depth analysis to identify the root cause
of the vertical scaling.
How should the solutions architect generate the information with the LEAST operational overhead?
A. Use AWS Budgets to create a budget report and compare EC2 costs based on instance types
B. Use Cost Explorer's granular filtering feature to perform an in-depth analysis of EC2 costs based on instance types
C. Use graphs from the AWS Billing and Cost Management dashboard to compare EC2 costs based on instance types for the last 2 months
D. Use AWS Cost and Usage Reports to create a report and send it to an Amazon S3 bucket Use Amazon QuickSight with Amazon S3 as a source to generate an
interactive graph based on instance types.
Answer: B
Explanation:
AWS Cost Explorer is a tool that enables you to view and analyze your costs and usage. You can explore your usage and costs using the main graph, the Cost
Explorer cost and usage reports, or the Cost Explorer RI reports. You can view data for up to the last 12 months, forecast how much you're likely to spend for the
next 12 months, and get recommendations for what Reserved Instances to purchase. You can use Cost Explorer to identify areas that need further inquiry and see
trends that you can use to understand your
costs. https://docs.aws.amazon.com/cost-management/latest/userguide/ce-what-is.html
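The same comparison can also be pulled programmatically. A boto3 sketch, with example dates standing in for the last two months:

import boto3

ce = boto3.client("ce")

response = ce.get_cost_and_usage(
    TimePeriod={"Start": "2023-01-01", "End": "2023-03-01"},   # example 2-month window
    Granularity="MONTHLY",
    Metrics=["UnblendedCost"],
    Filter={
        "Dimensions": {
            "Key": "SERVICE",
            "Values": ["Amazon Elastic Compute Cloud - Compute"],
        }
    },
    GroupBy=[{"Type": "DIMENSION", "Key": "INSTANCE_TYPE"}],
)

# Print EC2 cost per instance type for the first month in the window
for group in response["ResultsByTime"][0]["Groups"]:
    print(group["Keys"], group["Metrics"]["UnblendedCost"]["Amount"])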
NEW QUESTION 5
- (Topic 1)
A company hosts more than 300 global websites and applications. The company requires a platform to analyze more than 30 TB of clickstream data each day.
What should a solutions architect do to transmit and process the clickstream data?
A. Design an AWS Data Pipeline to archive the data to an Amazon S3 bucket and run an Amazon EMR cluster with the data to generate analytics.
B. Create an Auto Scaling group of Amazon EC2 instances to process the data and send it to an Amazon S3 data lake for Amazon Redshift to use for analysis.
C. Cache the data to Amazon CloudFront. Store the data in an Amazon S3 bucket. When an object is added to the S3 bucket, run an AWS Lambda function to process the data for analysis.
D. Collect the data from Amazon Kinesis Data Streams. Use Amazon Kinesis Data Firehose to transmit the data to an Amazon S3 data lake. Load the data in Amazon Redshift for analysis.
Answer: D
Explanation:
https://aws.amazon.com/es/blogs/big-data/real-time-analytics-with-amazon-redshift-streaming-ingestion/
NEW QUESTION 6
- (Topic 1)
A development team runs monthly resource-intensive tests on its general purpose Amazon RDS for MySQL DB instance with Performance Insights enabled. The
testing lasts for 48 hours once a month and is the only process that uses the database. The team wants to reduce the cost of running the tests without reducing the
compute and memory attributes of the DB instance.
Which solution meets these requirements MOST cost-effectively?
Answer: A
Explanation:
To reduce the cost of running the tests without reducing the compute and memory attributes of the Amazon RDS for MySQL DB instance, the development team
can stop the instance when tests are completed and restart it when required. Stopping the DB instance when not in use can help save costs because customers
are only charged for storage while the DB instance is stopped. During this time, automated backups and automated DB instance maintenance are suspended.
When the instance is restarted, it retains the same configurations, security groups, and DB parameter groups as when it was stopped.
Reference:
Amazon RDS Documentation: Stopping and Starting a DB instance (https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_StopInstance.html)
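For illustration, stopping and starting the instance around the monthly test window can be scripted with boto3 (the instance identifier is a hypothetical placeholder):

import boto3

rds = boto3.client("rds")

# After the 48-hour test window ends
rds.stop_db_instance(DBInstanceIdentifier="monthly-test-mysql")

# Shortly before the next month's test window begins
rds.start_db_instance(DBInstanceIdentifier="monthly-test-mysql")

# Note: RDS automatically restarts a stopped instance after 7 days, which is
# acceptable here because the instance is only stopped between monthly test runs
# if the stop/start calls are scheduled appropriately.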
NEW QUESTION 7
- (Topic 1)
A company is developing an application that provides order shipping statistics for retrieval by a REST API. The company wants to extract the shipping statistics,
organize the data into an easy-to-read HTML format, and send the report to several email addresses at the same time every morning.
Which combination of steps should a solutions architect take to meet these requirements? (Choose two.)
A. Configure the application to send the data to Amazon Kinesis Data Firehose.
B. Use Amazon Simple Email Service (Amazon SES) to format the data and to send the report by email.
C. Create an Amazon EventBridge (Amazon CloudWatch Events) scheduled event that invokes an AWS Glue job to query the application's API for the data.
D. Create an Amazon EventBridge (Amazon CloudWatch Events) scheduled event that invokes an AWS Lambda function to query the application's API for the
data.
E. Store the application data in Amazon S3. Create an Amazon Simple Notification Service (Amazon SNS) topic as an S3 event destination to send the report by email.
Answer: BD
Explanation:
https://docs.aws.amazon.com/ses/latest/dg/send-email-formatted.html
* D. Create an Amazon EventBridge (Amazon CloudWatch Events) scheduled event that invokes an AWS Lambda function to query the application's API for the
data. This step can be done using AWS Lambda to extract the shipping statistics and organize the data into an HTML format.
* B. Use Amazon Simple Email Service (Amazon SES) to format the data and send the report by email. This step can be done by using Amazon SES to send the
report to multiple email addresses at the same time every morning.
Therefore, options D and B are the correct choices for this question. Option A is incorrect because Kinesis Data Firehose is not necessary for this use case. Option
C is incorrect because AWS Glue is not required to query the application's API. Option E is incorrect because S3 event notifications cannot be used to send the
report by email.
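The Lambda function invoked by the EventBridge schedule could render the statistics into HTML and hand them to Amazon SES. A minimal boto3 sketch, with hypothetical sender and recipient addresses:

import boto3

ses = boto3.client("ses")

ses.send_email(
    Source="[email protected]",                      # hypothetical verified identity
    Destination={"ToAddresses": ["[email protected]", "[email protected]"]},
    Message={
        "Subject": {"Data": "Daily shipping statistics"},
        "Body": {"Html": {"Data": "<html><body><table>...</table></body></html>"}},
    },
)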
NEW QUESTION 8
- (Topic 1)
A solutions architect is using Amazon S3 to design the storage architecture of a new digital media application. The media files must be resilient to the loss of an
Availability Zone. Some files are accessed frequently, while other files are rarely accessed in an unpredictable pattern. The solutions architect must minimize the
costs of storing and retrieving the media files.
Which storage option meets these requirements?
A. S3 Standard
B. S3 Intelligent-Tiering
C. S3 Standard-Infrequent Access (S3 Standard-IA)
D. S3 One Zone-Infrequent Access (S3 One Zone-IA)
Answer: B
Explanation:
S3 Intelligent-Tiering - Perfect use case when you don't know the frequency of access or irregular patterns of usage.
Amazon S3 offers a range of storage classes designed for different use cases. These include S3 Standard for general-purpose storage of frequently accessed data; S3 Intelligent-Tiering for data with unknown or changing access patterns; S3 Standard-Infrequent Access (S3 Standard-IA) and S3 One Zone-Infrequent Access (S3 One Zone-IA) for long-lived, but less frequently accessed data; and Amazon S3 Glacier (S3 Glacier) and Amazon S3 Glacier Deep Archive (S3 Glacier Deep Archive) for long-term archive and digital preservation. If you have data residency requirements that can't be met by an existing AWS Region, you can use the S3 Outposts storage class to store your S3 data on-premises. Amazon S3 also offers capabilities to manage your data throughout its lifecycle. Once an S3 Lifecycle policy is set, your data will automatically transfer to a different storage class without any changes to your application.
https://aws.amazon.com/getting-started/hands-on/getting-started-using-amazon-s3-intelligent-tiering/?nc1=h_ls
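Objects can be written directly into the Intelligent-Tiering storage class when they are uploaded; a boto3 sketch with a hypothetical bucket and key:

import boto3

s3 = boto3.client("s3")

with open("episode-01.mp4", "rb") as media_file:
    s3.put_object(
        Bucket="digital-media-assets-example",     # hypothetical bucket name
        Key="videos/episode-01.mp4",
        Body=media_file,
        StorageClass="INTELLIGENT_TIERING",        # S3 moves the object between access tiers automatically
    )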
NEW QUESTION 9
- (Topic 1)
A company maintains a searchable repository of items on its website. The data is stored in an Amazon RDS for MySQL database table that contains more than 10
million rows. The database has 2 TB of General Purpose SSD storage. There are millions of updates against this data every day through the company's website.
The company has noticed that some insert operations are taking 10 seconds or longer. The company has determined that the database storage performance is the
problem.
Which solution addresses this performance issue?
Answer: A
Explanation:
https://aws.amazon.com/ebs/features/
"Provisioned IOPS volumes are backed by solid-state drives (SSDs) and are the highest performance EBS volumes designed for your critical, I/O intensive
database applications.
These volumes are ideal for both IOPS-intensive and throughput-intensive workloads that require extremely low latency."
https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_Storage.html
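Switching the DB instance to Provisioned IOPS storage can be sketched with boto3 as follows; the instance identifier and IOPS value are hypothetical examples:

import boto3

rds = boto3.client("rds")

rds.modify_db_instance(
    DBInstanceIdentifier="catalog-mysql",   # hypothetical instance identifier
    StorageType="io1",                      # Provisioned IOPS SSD
    Iops=12000,                             # example provisioned IOPS value
    AllocatedStorage=2048,                  # keep the existing 2 TB allocation
    ApplyImmediately=True,
)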
NEW QUESTION 10
- (Topic 1)
A company recently migrated a message processing system to AWS. The system receives messages into an ActiveMQ queue running on an Amazon EC2
instance. Messages are processed by a consumer application running on Amazon EC2. The consumer application processes the messages and writes results to a
MySQL database running on Amazon EC2. The company wants this application to be highly available with low operational complexity.
Which architecture offers the HIGHEST availability?
A. Add a second ActiveMQ server to another Availability Zone. Add an additional consumer EC2 instance in another Availability Zone. Replicate the MySQL database to another Availability Zone.
B. Use Amazon MQ with active/standby brokers configured across two Availability Zones. Add an additional consumer EC2 instance in another Availability Zone. Replicate the MySQL database to another Availability Zone.
C. Use Amazon MQ with active/standby brokers configured across two Availability Zones. Add an additional consumer EC2 instance in another Availability Zone. Use Amazon RDS for MySQL with Multi-AZ enabled.
D. Use Amazon MQ with active/standby brokers configured across two Availability Zones. Add an Auto Scaling group for the consumer EC2 instances across two Availability Zones. Use Amazon RDS for MySQL with Multi-AZ enabled.
Answer: D
Explanation:
Amazon MQ is a managed message broker service that is compatible with ActiveMQ, so the existing application can be migrated with minimal changes. Configuring Amazon MQ with active/standby brokers across two Availability Zones removes the broker as a single point of failure, because the standby broker takes over automatically if the active broker or its Availability Zone fails. Placing the consumer application in an Auto Scaling group that spans two Availability Zones keeps message processing running even if an instance or an Availability Zone is lost, and Amazon RDS for MySQL with Multi-AZ enabled provides automatic failover for the database tier. Option D therefore offers the highest availability: every tier (broker, consumers, and database) is either a managed service or redundant across Availability Zones, whereas options A, B, and C each leave at least one tier as a single point of failure or rely on manual replication.
NEW QUESTION 10
- (Topic 1)
A company runs an ecommerce application on Amazon EC2 instances behind an Application Load Balancer. The instances run in an Amazon EC2 Auto Scaling
group across multiple Availability Zones. The Auto Scaling group scales based on CPU utilization metrics. The ecommerce application stores the transaction data
in a MySQL 8.0 database that is hosted on a large EC2 instance.
The database's performance degrades quickly as application load increases. The application handles more read requests than write transactions. The company
wants a solution that will automatically scale the database to meet the demand of unpredictable read workloads while maintaining high availability.
Which solution will meet these requirements?
A. Use Amazon Redshift with a single node for leader and compute functionality.
B. Use Amazon RDS with a Single-AZ deployment Configure Amazon RDS to add reader instances in a different Availability Zone.
C. Use Amazon Aurora with a Multi-AZ deployment. Configure Aurora Auto Scaling with Aurora Replicas.
D. Use Amazon ElastiCache for Memcached with EC2 Spot Instances.
Answer: C
Explanation:
Amazon Aurora delivers up to five times the throughput of standard MySQL on Amazon RDS and suits this workload, which handles more read requests than writes. Aurora Auto Scaling with Aurora Replicas automatically adds or removes replicas to absorb the unpredictable read demand, and the Multi-AZ deployment maintains high availability.
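Aurora replica auto scaling uses the same Application Auto Scaling APIs as other services. A boto3 sketch, with a hypothetical cluster identifier and example capacity limits:

import boto3

autoscaling = boto3.client("application-autoscaling")

autoscaling.register_scalable_target(
    ServiceNamespace="rds",
    ResourceId="cluster:ecommerce-aurora",            # hypothetical Aurora cluster ID
    ScalableDimension="rds:cluster:ReadReplicaCount",
    MinCapacity=1,
    MaxCapacity=15,
)

autoscaling.put_scaling_policy(
    PolicyName="reader-cpu-target",
    ServiceNamespace="rds",
    ResourceId="cluster:ecommerce-aurora",
    ScalableDimension="rds:cluster:ReadReplicaCount",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "TargetValue": 70.0,
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "RDSReaderAverageCPUUtilization"
        },
    },
)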
NEW QUESTION 15
- (Topic 1)
A company is launching a new application and will display application metrics on an Amazon CloudWatch dashboard. The company’s product manager needs to
access this dashboard periodically. The product manager does not have an AWS account. A solutions architect must provide access to the product manager by
following the principle of least privilege.
Which solution will meet these requirements?
Answer: B
Explanation:
To provide the product manager access to the Amazon CloudWatch dashboard while following the principle of least privilege, a solutions architect should create an IAM user specifically for the product manager and attach the CloudWatchReadOnlyAccess managed policy to the user. This policy allows the user to view the dashboard without being able to make any changes to it. The solutions architect should then share the new login credentials with the product manager and provide them with the browser URL of the correct dashboard.
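A boto3 sketch of that setup; the user name and temporary password are hypothetical placeholders:

import boto3

iam = boto3.client("iam")

iam.create_user(UserName="product-manager")            # hypothetical user name

iam.attach_user_policy(
    UserName="product-manager",
    PolicyArn="arn:aws:iam::aws:policy/CloudWatchReadOnlyAccess",
)

iam.create_login_profile(
    UserName="product-manager",
    Password="TempPassw0rd!-example",                   # placeholder; must be changed at first sign-in
    PasswordResetRequired=True,
)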
NEW QUESTION 18
- (Topic 1)
An image-processing company has a web application that users use to upload images. The application uploads the images into an Amazon S3 bucket. The
company has set up S3 event notifications to publish the object creation events to an Amazon Simple Queue Service (Amazon SQS) standard queue. The SQS
queue serves as the event source for an AWS Lambda function that processes the images and sends the results to users through email.
Users report that they are receiving multiple email messages for every uploaded image. A solutions architect determines that SQS messages are invoking the
Lambda function more than once, resulting in multiple email messages.
What should the solutions architect do to resolve this issue with the LEAST operational overhead?
A. Set up long polling in the SQS queue by increasing the ReceiveMessage wait time to 30 seconds.
Answer: C
NEW QUESTION 21
- (Topic 1)
A company has a data ingestion workflow that includes the following:
? An Amazon Simple Notification Service (Amazon SNS) topic for notifications about new data deliveries
? An AWS Lambda function to process the data and record metadata
The company observes that the ingestion workflow fails occasionally because of network connectivity issues. When such a failure occurs, the Lambda function
does not ingest the corresponding data unless the company manually reruns the job.
Which combination of actions should a solutions architect take to ensure that the Lambda
function ingests all data in the future? (Select TWO.)
Answer: BE
Explanation:
To ensure that the Lambda function ingests all data in the future despite occasional network connectivity issues, the following actions should be taken:
? Create an Amazon Simple Queue Service (SQS) queue and subscribe it to the SNS topic. This allows for decoupling of the notification and processing, so that
even if the processing Lambda function fails, the message remains in the queue for further processing later.
? Modify the Lambda function to read from the SQS queue instead of directly from SNS. This decoupling allows for retries and fault tolerance and ensures that all
messages are processed by the Lambda function.
Reference:
AWS SNS documentation: https://aws.amazon.com/sns/ AWS SQS documentation: https://aws.amazon.com/sqs/
AWS Lambda documentation: https://aws.amazon.com/lambda/
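Wiring the SQS queue to the existing SNS topic can be sketched with boto3 as follows; the queue name and topic ARN are hypothetical placeholders:

import boto3

sns = boto3.client("sns")
sqs = boto3.client("sqs")

queue_url = sqs.create_queue(QueueName="data-delivery-queue")["QueueUrl"]   # hypothetical name
queue_arn = sqs.get_queue_attributes(
    QueueUrl=queue_url, AttributeNames=["QueueArn"]
)["Attributes"]["QueueArn"]

# Subscribe the queue to the notification topic; the queue's access policy must
# also allow the SNS topic to send messages to it.
sns.subscribe(
    TopicArn="arn:aws:sns:us-east-1:111122223333:new-data-deliveries",
    Protocol="sqs",
    Endpoint=queue_arn,
)

The Lambda function is then configured with the SQS queue as its event source, so messages that fail processing stay in the queue and are retried.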
NEW QUESTION 23
- (Topic 1)
A company has registered its domain name with Amazon Route 53. The company uses Amazon API Gateway in the ca-central-1 Region as a public interface for
its backend microservice APIs. Third-party services consume the APIs securely. The company wants to design its API Gateway URL with the company's domain
name and corresponding certificate so that the third-party services can use HTTPS.
Which solution will meet these requirements?
A. Create stage variables in API Gateway with Name="Endpoint-URL" and Value="Company Domain Name" to overwrite the default URL. Import the public certificate associated with the company's domain name into AWS Certificate Manager (ACM).
B. Create Route 53 DNS records with the company's domain name. Point the alias record to the Regional API Gateway stage endpoint. Import the public certificate associated with the company's domain name into AWS Certificate Manager (ACM) in the us-east-1 Region.
C. Create a Regional API Gateway endpoint. Associate the API Gateway endpoint with the company's domain name. Import the public certificate associated with the company's domain name into AWS Certificate Manager (ACM) in the same Region. Attach the certificate to the API Gateway endpoint. Configure Route 53 to route traffic to the API Gateway endpoint.
D. Create a Regional API Gateway endpoint. Associate the API Gateway endpoint with the company's domain name. Import the public certificate associated with the company's domain name into AWS Certificate Manager (ACM) in the us-east-1 Region. Attach the certificate to the API Gateway API. Create Route 53 DNS records with the company's domain name. Point an A record to the company's domain name.
Answer: C
Explanation:
To design the API Gateway URL with the company's domain name and corresponding certificate, the company needs to do the following:
1. Create a Regional API Gateway endpoint: this creates an endpoint that is specific to a Region.
2. Associate the API Gateway endpoint with the company's domain name: this allows the company to use its own domain name for the API Gateway URL.
3. Import the public certificate associated with the company's domain name into AWS Certificate Manager (ACM) in the same Region: this allows the company to use HTTPS for secure communication with its APIs.
4. Attach the certificate to the API Gateway endpoint: this secures the API Gateway URL with the certificate.
5. Configure Route 53 to route traffic to the API Gateway endpoint: this allows third parties to reach the API through the company's domain name.
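A boto3 sketch of the custom domain setup; the domain name, certificate ARN, and API ID are hypothetical placeholders:

import boto3

apigateway = boto3.client("apigateway", region_name="ca-central-1")

domain = apigateway.create_domain_name(
    domainName="api.example.com",                                     # hypothetical domain
    regionalCertificateArn="arn:aws:acm:ca-central-1:111122223333:certificate/example",  # ACM cert in the same Region
    endpointConfiguration={"types": ["REGIONAL"]},
)

apigateway.create_base_path_mapping(
    domainName="api.example.com",
    restApiId="a1b2c3d4e5",                                           # hypothetical API ID
    stage="prod",
)

# Finally, create a Route 53 alias record for api.example.com that points to
# domain["regionalDomainName"] in hosted zone domain["regionalHostedZoneId"].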
NEW QUESTION 24
- (Topic 1)
A company needs to review its AWS Cloud deployment to ensure that its Amazon S3 buckets do not have unauthorized configuration changes.
What should a solutions architect do to accomplish this goal?
Answer: A
Explanation:
To ensure that Amazon S3 buckets do not have unauthorized configuration changes, a solutions architect should turn on AWS Config with the appropriate rules.
AWS Config is a service that allows users to audit and assess their AWS resource configurations for compliance with industry standards and internal policies. It
provides a detailed view of the resources and their configurations, including information on how the resources are related to each other. By turning on AWS Config
with the appropriate rules, users can identify and remediate unauthorized configuration changes to their Amazon S3 buckets.
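For example, one of the AWS managed Config rules for S3 can be enabled with boto3 as follows; the rule shown is just one of several S3-related managed rules that could apply:

import boto3

config = boto3.client("config")

# Flags any S3 bucket that allows public read access
config.put_config_rule(
    ConfigRule={
        "ConfigRuleName": "s3-bucket-public-read-prohibited",
        "Source": {"Owner": "AWS", "SourceIdentifier": "S3_BUCKET_PUBLIC_READ_PROHIBITED"},
        "Scope": {"ComplianceResourceTypes": ["AWS::S3::Bucket"]},
    }
)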
NEW QUESTION 28
- (Topic 1)
A solutions architect must design a highly available infrastructure for a website. The website is powered by Windows web servers that run on Amazon EC2
instances. The solutions architect must implement a solution that can mitigate a large-scale DDoS attack that originates from thousands of IP addresses.
Downtime is not acceptable for the website.
Which actions should the solutions architect take to protect the website from such an attack? (Select TWO.)
Answer: AC
Explanation:
https://aws.amazon.com/cloudfront/
NEW QUESTION 31
- (Topic 1)
A company needs to configure a real-time data ingestion architecture for its application. The company needs an API, a process that transforms data as the data is
streamed, and a storage solution for the data.
Which solution will meet these requirements with the LEAST operational overhead?
A. Deploy an Amazon EC2 instance to host an API that sends data to an Amazon Kinesis data stream. Create an Amazon Kinesis Data Firehose delivery stream that uses the Kinesis data stream as a data source. Use AWS Lambda functions to transform the data. Use the Kinesis Data Firehose delivery stream to send the data to Amazon S3.
B. Deploy an Amazon EC2 instance to host an API that sends data to AWS Glue. Stop source/destination checking on the EC2 instance. Use AWS Glue to transform the data and to send the data to Amazon S3.
C. Configure an Amazon API Gateway API to send data to an Amazon Kinesis data stream. Create an Amazon Kinesis Data Firehose delivery stream that uses the Kinesis data stream as a data source. Use AWS Lambda functions to transform the data. Use the Kinesis Data Firehose delivery stream to send the data to Amazon S3.
D. Configure an Amazon API Gateway API to send data to AWS Glue. Use AWS Lambda functions to transform the data. Use AWS Glue to send the data to Amazon S3.
Answer: C
NEW QUESTION 34
- (Topic 1)
A company is developing a two-tier web application on AWS. The company's developers have deployed the application on an Amazon EC2 instance that connects
directly to a backend Amazon RDS database. The company must not hardcode database credentials in the application. The company must also implement a
solution to automatically rotate the database credentials on a regular basis.
Which solution will meet these requirements with the LEAST operational overhead?
Answer: C
Explanation:
https://docs.aws.amazon.com/secretsmanager/latest/userguide/create_database_secret.html
NEW QUESTION 36
- (Topic 1)
A company needs guaranteed Amazon EC2 capacity in three specific Availability Zones in a specific AWS Region for an upcoming event that will last 1 week.
What should the company do to guarantee the EC2 capacity?
Answer: D
Explanation:
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-capacity-reservations.html
Reserved Instances: you would have to pay for the whole term (1 year or 3 years), which is not cost-effective for a 1-week event.
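On-Demand Capacity Reservations can be created per Availability Zone; a boto3 sketch in which the Region, Availability Zones, instance type, instance count, and end date are all hypothetical examples:

from datetime import datetime, timezone

import boto3

ec2 = boto3.client("ec2")

# One reservation per required Availability Zone
for az in ["us-east-1a", "us-east-1b", "us-east-1c"]:
    ec2.create_capacity_reservation(
        InstanceType="m5.xlarge",
        InstancePlatform="Linux/UNIX",
        AvailabilityZone=az,
        InstanceCount=10,
        EndDateType="limited",
        EndDate=datetime(2025, 6, 8, tzinfo=timezone.utc),   # end of the 1-week event
    )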
NEW QUESTION 41
- (Topic 1)
A company runs a shopping application that uses Amazon DynamoDB to store customer information. In case of data corruption, a solutions architect needs to
design a solution that meets a recovery point objective (RPO) of 15 minutes and a recovery time objective (RTO) of 1 hour.
What should the solutions architect recommend to meet these requirements?
Answer: B
Explanation:
https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/PointInTimeRecovery.html
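Point-in-time recovery is enabled per table; a boto3 sketch with a hypothetical table name:

import boto3

dynamodb = boto3.client("dynamodb")

dynamodb.update_continuous_backups(
    TableName="CustomerInfo",                                  # hypothetical table name
    PointInTimeRecoverySpecification={"PointInTimeRecoveryEnabled": True},
)

# If data corruption occurs, restore_table_to_point_in_time can restore the table
# to any second within the retention window, which satisfies the 15-minute RPO.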
NEW QUESTION 43
- (Topic 1)
A company is designing an application where users upload small files into Amazon S3. After a user uploads a file, the file requires one-time simple processing to
transform the data and save the data in JSON format for later analysis.
Each file must be processed as quickly as possible after it is uploaded. Demand will vary. On some days, users will upload a high number of files. On other days,
users will upload a few files or no files.
Which solution meets these requirements with the LEAST operational overhead?
A. Configure Amazon EMR to read text files from Amazon S3. Run processing scripts to transform the data. Store the resulting JSON file in an Amazon Aurora DB cluster.
B. Configure Amazon S3 to send an event notification to an Amazon Simple Queue Service (Amazon SQS) queue. Use Amazon EC2 instances to read from the queue and process the data. Store the resulting JSON file in Amazon DynamoDB.
C. Configure Amazon S3 to send an event notification to an Amazon Simple Queue Service (Amazon SQS) queue. Use an AWS Lambda function to read from the queue and process the data. Store the resulting JSON file in Amazon DynamoDB.
D. Configure Amazon EventBridge (Amazon CloudWatch Events) to send an event to Amazon Kinesis Data Streams when a new file is uploaded. Use an AWS Lambda function to consume the event from the stream and process the data. Store the resulting JSON file in an Amazon Aurora DB cluster.
Answer: C
Explanation:
Amazon S3 sends event notifications about S3 buckets (for example, object created, object removed, or object restored) to an SNS topic in the same Region.
The SNS topic publishes the event to an SQS queue in the central Region.
The SQS queue is configured as the event source for your Lambda function and buffers the event messages for the Lambda function.
The Lambda function polls the SQS queue for messages and processes the Amazon S3 event notifications according to your application’s requirements.
https://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/subscribe-a-lambda-function-to-event-notifications-from-s3-buckets-in-different-aws-regions.html
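The S3-to-SQS notification itself can be sketched with boto3 as follows; the bucket name and queue ARN are hypothetical placeholders:

import boto3

s3 = boto3.client("s3")

s3.put_bucket_notification_configuration(
    Bucket="upload-bucket-example",                            # hypothetical bucket
    NotificationConfiguration={
        "QueueConfigurations": [
            {
                "QueueArn": "arn:aws:sqs:us-east-1:111122223333:file-processing-queue",
                "Events": ["s3:ObjectCreated:*"],
            }
        ]
    },
)
# The queue's access policy must allow the S3 service principal to send messages,
# and the queue is then configured as the event source for the Lambda function.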
NEW QUESTION 48
- (Topic 1)
A solutions architect is designing a VPC with public and private subnets. The VPC and subnets use IPv4 CIDR blocks. There is one public subnet and one private
subnet in each of three Availability Zones (AZs) for high availability. An internet gateway is used to provide internet access for the public subnets. The private
subnets require access to the internet to allow Amazon EC2 instances to download software updates.
What should the solutions architect do to enable Internet access for the private subnets?
A. Create three NAT gateways, one for each public subnet in each AZ. Create a private route table for each AZ that forwards non-VPC traffic to the NAT gateway in its AZ.
B. Create three NAT instances, one for each private subnet in each AZ. Create a private route table for each AZ that forwards non-VPC traffic to the NAT instance in its AZ.
C. Create a second internet gateway on one of the private subnets. Update the route table for the private subnets that forwards non-VPC traffic to the private internet gateway.
D. Create an egress-only internet gateway on one of the public subnets. Update the route table for the private subnets that forwards non-VPC traffic to the egress-only internet gateway.
Answer: A
Explanation:
https://aws.amazon.com/about-aws/whats-new/2018/03/introducing-amazon-vpc-nat-gateway-in-the-aws-govcloud-us-region/
https://docs.aws.amazon.com/vpc/latest/userguide/vpc-nat-comparison.html
NEW QUESTION 52
- (Topic 1)
A company is designing an application. The application uses an AWS Lambda function to receive information through Amazon API Gateway and to store the
information in an Amazon Aurora PostgreSQL database.
During the proof-of-concept stage, the company has to increase the Lambda quotas significantly to handle the high volumes of data that the company needs to
load into the database. A solutions architect must recommend a new design to improve scalability and minimize the configuration effort.
Which solution will meet these requirements?
A. Refactor the Lambda function code to Apache Tomcat code that runs on Amazon EC2 instances. Connect to the database by using native Java Database Connectivity (JDBC) drivers.
B. Change the platform from Aurora to Amazon DynamoDB. Provision a DynamoDB Accelerator (DAX) cluster. Use the DAX client SDK to point the existing DynamoDB API calls at the DAX cluster.
C. Set up two Lambda functions. Configure one function to receive the information. Configure the other function to load the information into the database. Integrate the Lambda functions by using Amazon Simple Notification Service (Amazon SNS).
D. Set up two Lambda functions. Configure one function to receive the information. Configure the other function to load the information into the database. Integrate the Lambda functions by using an Amazon Simple Queue Service (Amazon SQS) queue.
Answer: D
Explanation:
Bottlenecks can be avoided with queues (SQS): the receiving Lambda function writes messages to the queue, and the loading Lambda function drains the queue into the Aurora PostgreSQL database at a rate the database can handle, which improves scalability with minimal configuration effort.
NEW QUESTION 56
- (Topic 1)
A company wants to move a multi-tiered application from on premises to the AWS Cloud to improve the application's performance. The application consists of
application tiers that communicate with each other by way of RESTful services. Transactions are dropped when one tier becomes overloaded. A solutions architect
must design a solution that resolves these issues and modernizes the application.
Which solution meets these requirements and is the MOST operationally efficient?
A. Use Amazon API Gateway and direct transactions to the AWS Lambda functions as the application layer. Use Amazon Simple Queue Service (Amazon SQS) as the communication layer between application services.
B. Use Amazon CloudWatch metrics to analyze the application performance history to determine the servers' peak utilization during the performance failures. Increase the size of the application servers' Amazon EC2 instances to meet the peak requirements.
C. Use Amazon Simple Notification Service (Amazon SNS) to handle the messaging between application servers running on Amazon EC2 in an Auto Scaling group. Use Amazon CloudWatch to monitor the SNS queue length and scale up and down as required.
D. Use Amazon Simple Queue Service (Amazon SQS) to handle the messaging between application servers running on Amazon EC2 in an Auto Scaling group. Use Amazon CloudWatch to monitor the SQS queue length and scale up when communication failures are detected.
Answer: A
Explanation:
https://aws.amazon.com/getting-started/hands-on/build-serverless-web-app-lambda-apigateway-s3-dynamodb-cognito/module-4/
The hands-on tutorial "Build a Serverless Web Application with AWS Lambda, Amazon API Gateway, AWS Amplify, Amazon DynamoDB, and Amazon Cognito" demonstrates a setup similar to the one described in this question.
NEW QUESTION 59
- (Topic 2)
A company hosts a two-tier application on Amazon EC2 instances and Amazon RDS. The application's demand varies based on the time of day. The load is
minimal after work hours and on weekends. The EC2 instances run in an EC2 Auto Scaling group that is configured with a minimum of two instances and a
maximum of five instances. The application must be available at all times, but the company is concerned about overall cost.
Which solution meets the availability requirement MOST cost-effectively?
Answer: C
Explanation:
This solution meets the requirements of a two-tier application that has a variable demand based on the time of day and must be available at all times, while
minimizing the overall cost. EC2 Reserved Instances can provide significant savings compared to On-Demand Instances for the baseline level of usage, and they
can guarantee capacity reservation when needed. EC2 Spot Instances can provide up to 90% savings compared to On-Demand Instances for any additional
capacity that the application needs during peak hours. Spot Instances are suitable for stateless applications that can tolerate interruptions and can be replaced by
other instances. Stopping the RDS database when it is not in use can reduce the cost of running the database tier.
Option A is incorrect because using all EC2 Spot Instances can affect the availability of the application if there are not enough spare capacity or if the Spot price
exceeds the maximum price. Stopping the RDS database when it is not in use can reduce the cost of running the database tier, but it can also affect the availability
of the application. Option B is incorrect because purchasing EC2 Instance Savings Plans to cover five EC2 instances can lock in a fixed amount of compute usage
per hour, which may not match the actual usage pattern of the application. Purchasing an RDS Reserved DB Instance can provide savings for the database tier,
but it does not allow stopping the database when it is not in use. Option D is incorrect because purchasing EC2 Instance Savings Plans to cover two EC2
instances can lock in a fixed amount of compute usage per hour, which may not match the
actual usage pattern of the application. Using up to three additional EC2 On-Demand Instances as needed can incur higher costs than using Spot Instances.
References:
? https://aws.amazon.com/ec2/pricing/reserved-instances/
? https://aws.amazon.com/ec2/spot/
? https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_StopInstance.html
NEW QUESTION 62
- (Topic 2)
A company sells ringtones created from clips of popular songs. The files containing the ringtones are stored in Amazon S3 Standard and are at least 128 KB in
size. The company has millions of files, but downloads are infrequent for ringtones older than 90 days. The company needs to save money on storage while
keeping the most accessed files readily available for its users.
Which action should the company take to meet these requirements MOST cost-effectively?
A. Configure S3 Standard-Infrequent Access (S3 Standard-IA) storage for the initial storage tier of the objects.
B. Move the files to S3 Intelligent-Tiering and configure it to move objects to a less expensive storage tier after 90 days.
C. Configure S3 inventory to manage objects and move them to S3 Standard-Infrequent Access (S3 Standard-IA) after 90 days.
D. Implement an S3 Lifecycle policy that moves the objects from S3 Standard to S3 Standard-Infrequent Access (S3 Standard-IA) after 90 days.
Answer: D
Explanation:
This solution meets the requirements of saving money on storage while keeping the most accessed files readily available for the users. S3 Lifecycle policy can
automatically move objects from one storage class to another based on predefined rules. S3 Standard-IA is a lower-cost storage class for data that is accessed
less frequently, but requires rapid access when needed. It is suitable for ringtones older than 90 days that are downloaded infrequently.
Option A is incorrect because configuring S3 Standard-IA for the initial storage tier of the objects can incur higher costs for frequent access and retrieval fees.
Option B is incorrect
because moving the files to S3 Intelligent-Tiering can incur additional monitoring and automation fees that may not be necessary for ringtones older than 90 days.
Option C is incorrect because using S3 inventory to manage objects and move them to S3 Standard-IA can be complex and time-consuming, and it does not
provide automatic cost savings.
References:
? https://aws.amazon.com/s3/storage-classes/
? https://aws.amazon.com/s3/cloud-storage-cost-optimization-ebook/
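The Lifecycle rule described in option D can be sketched with boto3 as follows; the bucket name is a hypothetical placeholder:

import boto3

s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="ringtone-files-example",                           # hypothetical bucket
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "standard-to-ia-after-90-days",
                "Status": "Enabled",
                "Filter": {"Prefix": ""},                      # apply to every object
                "Transitions": [{"Days": 90, "StorageClass": "STANDARD_IA"}],
            }
        ]
    },
)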
NEW QUESTION 65
- (Topic 2)
A company has a Windows-based application that must be migrated to AWS. The application requires the use of a shared Windows file system attached to
multiple Amazon EC2 Windows instances that are deployed across multiple Availability Zones.
What should a solutions architect do to meet this requirement?
Answer: B
Explanation:
This solution meets the requirement of migrating a Windows-based application that requires the use of a shared Windows file system attached to multiple Amazon
EC2 Windows instances that are deployed across multiple Availability Zones. Amazon FSx for Windows File Server provides fully managed shared storage built on
Windows Server, and delivers a wide range of data access, data management, and administrative capabilities. It supports the Server Message Block (SMB)
protocol and can be mounted to EC2 Windows instances across multiple Availability Zones.
Option A is incorrect because AWS Storage Gateway in volume gateway mode provides cloud-backed storage volumes that can be mounted as iSCSI devices
from on-premises application servers, but it does not support SMB protocol or EC2 Windows instances. Option C is incorrect because Amazon Elastic File System
(Amazon EFS) provides a scalable and elastic NFS file system for Linux-based workloads, but it does not support SMB protocol or EC2 Windows instances.
Option D is incorrect because Amazon Elastic Block Store (Amazon EBS) provides persistent block storage volumes for use with EC2 instances, but it does not
support SMB protocol or attaching multiple instances to the same volume.
References:
? https://aws.amazon.com/fsx/windows/
? https://docs.aws.amazon.com/fsx/latest/WindowsGuide/using-file-shares.html
NEW QUESTION 69
- (Topic 2)
A company wants to direct its users to a backup static error page if the company's primary website is unavailable. The primary website's DNS records are hosted in
Amazon Route 53. The domain is pointing to an Application Load Balancer (ALB). The company needs a solution that minimizes changes and infrastructure
overhead.
Which solution will meet these requirements?
A. Update the Route 53 records to use a latency routing policy.
B. Set up a Route 53 active-passive failover configuration. Direct traffic to a static error page that is hosted in an Amazon S3 bucket when Route 53 health checks determine that the ALB endpoint is unhealthy.
C. Set up a Route 53 active-active configuration with the ALB and an Amazon EC2 instance that hosts a static error page as endpoints. Configure Route 53 to send requests to the instance only if the health checks fail for the ALB.
D. Update the Route 53 records to use a multivalue answer routing policy. Create a health check. Direct traffic to the website if the health check passes. Direct traffic to a static error page that is hosted in Amazon S3 if the health check does not pass.
Answer: B
Explanation:
This solution meets the requirements of directing users to a backup static error page if the primary website is unavailable, minimizing changes and infrastructure
overhead. Route 53 active-passive failover configuration can route traffic to a primary resource when it is healthy or to a secondary resource when the primary
resource is unhealthy. Route 53 health checks can monitor the health of the ALB endpoint and trigger the failover when needed. The static error page can be
hosted in an S3 bucket that is configured as a website, which is a simple and cost-effective way to serve static content.
Option A is incorrect because using a latency routing policy can route traffic based on the lowest network latency for users, but it does not provide failover
functionality. Option C is incorrect because using an active-active configuration with the ALB and an EC2 instance can increase the infrastructure overhead and
complexity, and it does not guarantee that the EC2 instance will always be healthy. Option D is incorrect because using a multivalue answer routing policy can
return multiple values for a query, but it does not provide failover functionality.
References:
? https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/routing-policy- failover.html
? https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/dns-failover.html
? https://docs.aws.amazon.com/AmazonS3/latest/userguide/WebsiteHosting.html
NEW QUESTION 72
- (Topic 2)
A company needs to save the results from a medical trial to an Amazon S3 repository. The repository must allow a few scientists to add new files and must restrict
all other users to read-only access. No users can have the ability to modify or delete any files in the repository. The company must keep every file in the repository
for a minimum of 1 year after its creation date.
Which solution will meet these requirements?
Answer: B
Explanation:
In compliance mode, a protected object version can't be overwritten or deleted by any user, including the root user in your AWS account. When an object is locked
in compliance mode, its retention mode can't be changed, and its retention period can't be shortened. Compliance mode helps ensure that an object version can't
be overwritten or deleted for the duration of the retention period. In governance mode, users can't overwrite or delete an object version or alter its lock settings
unless they have special permissions. With governance mode, you protect objects against being deleted by most users, but you can still grant some users
permission to alter the retention settings or delete the object if necessary. In governance mode, objects can be deleted by some users with special permissions, which is against the requirement here.
Compliance:
- Object versions can't be overwritten or deleted by any user, including the root user
- Objects retention modes can't be changed, and retention periods can't be shortened
Governance:
- Most users can't overwrite or delete an object version or alter its lock settings
- Some users have special permissions to change the retention or delete the object
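A boto3 sketch of enabling Object Lock in compliance mode with a 1-year default retention; the bucket name is a hypothetical placeholder:

import boto3

s3 = boto3.client("s3")

# Object Lock can only be turned on when the bucket is created
# (outside us-east-1 a CreateBucketConfiguration with the Region is also required)
s3.create_bucket(Bucket="medical-trial-results-example", ObjectLockEnabledForBucket=True)

# Default retention: compliance mode for 1 year; no user, including the root user,
# can delete or overwrite a locked object version during that period
s3.put_object_lock_configuration(
    Bucket="medical-trial-results-example",
    ObjectLockConfiguration={
        "ObjectLockEnabled": "Enabled",
        "Rule": {"DefaultRetention": {"Mode": "COMPLIANCE", "Days": 365}},
    },
)

Write access for the scientists and read-only access for everyone else would then be handled separately with IAM or bucket policies.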
NEW QUESTION 74
- (Topic 2)
A company wants to use the AWS Cloud to make an existing application highly available and resilient. The current version of the application resides in the
company's data center. The application recently experienced data loss after a database server crashed because of an unexpected power outage.
The company needs a solution that avoids any single points of failure. The solution must give the application the ability to scale to meet user demand.
Which solution will meet these requirements?
A. Deploy the application servers by using Amazon EC2 instances in an Auto Scaling group across multiple Availability Zones. Use an Amazon RDS DB instance in a Multi-AZ configuration.
B. Deploy the application servers by using Amazon EC2 instances in an Auto Scaling group in a single Availability Zone. Deploy the database on an EC2 instance. Enable EC2 Auto Recovery.
C. Deploy the application servers by using Amazon EC2 instances in an Auto Scaling group across multiple Availability Zones. Use an Amazon RDS DB instance with a read replica in a single Availability Zone. Promote the read replica to replace the primary DB instance if the primary DB instance fails.
D. Deploy the application servers by using Amazon EC2 instances in an Auto Scaling group across multiple Availability Zones. Deploy the primary and secondary database servers on EC2 instances across multiple Availability Zones. Use Amazon Elastic Block Store (Amazon EBS) Multi-Attach to create shared storage between the instances.
Answer: A
Explanation:
Deploy the application servers by using Amazon EC2 instances in an Auto Scaling group across multiple Availability Zones. Use an Amazon RDS DB instance in a
Multi-AZ configuration. To make an existing application highly available and resilient while avoiding any single points of failure and giving the application the ability
to scale to meet user demand, the best solution would be to deploy the application servers using Amazon EC2 instances in an Auto Scaling group across multiple
Availability Zones and use an Amazon RDS DB instance in a Multi-AZ configuration. By using an Amazon RDS DB instance in a Multi-AZ configuration, the
database is automatically replicated across multiple Availability Zones, ensuring that the database is highly available and can withstand the failure of a single
Availability Zone. This provides fault tolerance and avoids any single points of failure.
NEW QUESTION 77
- (Topic 2)
An ecommerce company hosts its analytics application in the AWS Cloud. The application generates about 300 MB of data each month. The data is stored in
JSON format. The company is evaluating a disaster recovery solution to back up the data. The data must be accessible in milliseconds if it is needed, and the data
must be kept for 30 days.
Which solution meets these requirements MOST cost-effectively?
Answer: C
Explanation:
This solution meets the requirements of a disaster recovery solution to back up the data that is generated by an analytics application, stored in JSON format, and
must be accessible in milliseconds if it is needed. Amazon S3 Standard is a durable and scalable storage class for frequently accessed data. It can store any
amount of data and provide high availability and performance. It can also support millisecond access time for data retrieval.
Option A is incorrect because Amazon OpenSearch Service (Amazon Elasticsearch Service) is a search and analytics service that can index and query data, but it
is not a backup solution for data stored in JSON format. Option B is incorrect because Amazon S3 Glacier is a low-cost storage class for data archiving and long-
term backup, but it does not support millisecond access time for data retrieval. Option D is incorrect because Amazon RDS for PostgreSQL is a relational database
service that can store and query structured data, but it is not a backup solution for data stored in JSON format.
References:
? https://aws.amazon.com/s3/storage-classes/
? https://aws.amazon.com/s3/faqs/#Durability_and_data_protection
NEW QUESTION 81
- (Topic 2)
An online retail company has more than 50 million active customers and receives more than 25,000 orders each day. The company collects purchase data for
customers and stores this data in Amazon S3. Additional customer data is stored in Amazon RDS.
The company wants to make all the data available to various teams so that the teams can perform analytics. The solution must provide the ability to manage fine-
grained permissions for the data and must minimize operational overhead.
Which solution will meet these requirements?
Answer: C
Explanation:
To make all the data available to various teams and minimize operational overhead, the company can create a data lake by using AWS Lake Formation. This will
allow the company to centralize all the data in one place and use fine-grained access controls to manage access to the data. To meet the requirements of the
company, the solutions architect can create a data lake by using AWS Lake Formation, create an AWS Glue JDBC connection to Amazon RDS, and register the
S3 bucket in Lake Formation. The solutions architect can then use Lake Formation access controls to limit access to the data. This solution will provide the ability
to manage fine-grained permissions for the data and minimize operational overhead.
NEW QUESTION 83
- (Topic 2)
A company is concerned about the security of its public web application due to recent web attacks. The application uses an Application Load Balancer (ALB). A
solutions architect must reduce the risk of DDoS attacks against the application.
What should the solutions architect do to meet this requirement?
Answer: C
Explanation:
AWS Shield Advanced provides expanded DDoS attack protection for your Amazon EC2 instances, Elastic Load Balancing load balancers, CloudFront
distributions, Route 53 hosted zones, and AWS Global Accelerator standard accelerators. https://docs.aws.amazon.com/waf/latest/developerguide/what-is-aws-waf.html
NEW QUESTION 85
- (Topic 2)
A reporting team receives files each day in an Amazon S3 bucket. The reporting team manually reviews and copies the files from this initial S3 bucket to an
analysis S3 bucket each day at the same time to use with Amazon QuickSight. Additional teams are starting to send more files in larger sizes to the initial S3
bucket.
The reporting team wants to move the files automatically to the analysis S3 bucket as the files enter the initial S3 bucket. The reporting team also wants to use AWS
Lambda functions to run pattern-matching code on the copied data. In addition, the reporting team wants to send the data files to a pipeline in Amazon SageMaker
Pipelines.
What should a solutions architect do to meet these requirements with the LEAST operational overhead?
Answer: D
Explanation:
This solution meets the requirements of moving the files automatically, running Lambda functions on the copied data, and sending the data files to SageMaker
Pipelines with the least operational overhead. S3 replication can copy the files from the initial S3 bucket to the analysis S3 bucket as they arrive. The analysis S3
bucket can send event notifications to Amazon EventBridge (Amazon CloudWatch Events) when an object is created. EventBridge can trigger Lambda and
SageMaker Pipelines as targets for the ObjectCreated rule. Lambda can run pattern-matching code on the copied data, and SageMaker Pipelines can execute a
pipeline with the data files.
Option A is incorrect because creating a Lambda function to copy the files to the analysis S3 bucket is not necessary when S3 replication can do that
automatically. It also adds operational overhead to manage the Lambda function. Option B is incorrect because creating a Lambda function to copy the files to the
analysis S3 bucket is not necessary when S3 replication can do that automatically. It also adds operational overhead to manage the Lambda function. Option C is
incorrect because using S3 event notification with multiple destinations can result in throttling or delivery failures if there are too many events.
References:
? https://aws.amazon.com/blogs/machine-learning/automate-feature-engineering-pipelines-with-amazon-sagemaker/
? https://docs.aws.amazon.com/sagemaker/latest/dg/automating-sagemaker-with- eventbridge.html
? https://aws.amazon.com/about-aws/whats-new/2021/04/new-options-trigger-amazon-sagemaker-pipeline-executions/
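The EventBridge rule on the analysis bucket can be sketched with boto3 as follows; the bucket name, function ARN, pipeline ARN, and role ARN are hypothetical placeholders, and the bucket must have EventBridge notifications enabled:

import json

import boto3

events = boto3.client("events")

events.put_rule(
    Name="analysis-bucket-object-created",
    EventPattern=json.dumps({
        "source": ["aws.s3"],
        "detail-type": ["Object Created"],
        "detail": {"bucket": {"name": ["analysis-bucket-example"]}},
    }),
)

events.put_targets(
    Rule="analysis-bucket-object-created",
    Targets=[
        {
            "Id": "pattern-matching-lambda",
            "Arn": "arn:aws:lambda:us-east-1:111122223333:function:pattern-match",
        },
        {
            "Id": "sagemaker-pipeline",
            "Arn": "arn:aws:sagemaker:us-east-1:111122223333:pipeline/reporting-pipeline",
            "RoleArn": "arn:aws:iam::111122223333:role/eventbridge-sagemaker-start",
            "SageMakerPipelineParameters": {"PipelineParameterList": []},
        },
    ],
)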
NEW QUESTION 87
- (Topic 2)
A company wants to measure the effectiveness of its recent marketing campaigns. The company performs batch processing on csv files of sales data and stores
the results in an Amazon S3 bucket once every hour. The S3 bucket contains petabytes of objects. The company runs one-time queries in Amazon Athena to determine which
products are most popular on a particular date for a particular region. Queries sometimes fail or take longer than expected to finish.
Which actions should a solutions architect take to improve the query performance and reliability? (Select TWO.)
Answer: BE
Explanation:
https://aws.amazon.com/blogs/big-data/top-10-performance-tuning-tips-for-amazon-athena/
This solution meets the requirements of measuring the effectiveness of marketing campaigns by performing batch processing on .csv files of sales data and storing
the results in an Amazon S3 bucket once every hour. An AWS Glue ETL process (or AWS Data Pipeline) can extract data from S3, transform it into a more efficient
format such as Apache Parquet, and load it back into S3. Apache Parquet is a columnar storage format that can improve the query performance and reliability of
Athena by reducing the amount of data scanned, improving the compression ratio, and enabling predicate pushdown.
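As an illustration of one way to do the conversion, the following sketch runs an Athena CTAS query that rewrites a CSV table as partitioned Parquet. The database, table, and bucket names are placeholders; in Athena CTAS, the partition columns must appear last in the SELECT list.

```python
import boto3

athena = boto3.client("athena")

# Hypothetical database, table, and bucket names for illustration.
CTAS = """
CREATE TABLE sales_parquet
WITH (
    format = 'PARQUET',
    external_location = 's3://example-analytics-bucket/sales_parquet/',
    partitioned_by = ARRAY['sale_date', 'region']
) AS
SELECT product_id, quantity, price, sale_date, region  -- partition columns last
FROM sales_csv;
"""

athena.start_query_execution(
    QueryString=CTAS,
    QueryExecutionContext={"Database": "marketing"},
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
)
```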
NEW QUESTION 92
- (Topic 2)
A company runs workloads on AWS. The company needs to connect to a service from an
external provider. The service is hosted in the provider's VPC. According to the company’s security team, the connectivity must be private and must be restricted
to the target service. The connection must be initiated only from the company’s VPC.
Which solution will meet these requirements?
A. Create a VPC peering connection between the company's VPC and the provider's VPC.
B. Update the route table to connect to the target service.
C. Ask the provider to create a virtual private gateway in its VPC.
D. Use AWS PrivateLink to connect to the target service.
E. Create a NAT gateway in a public subnet of the company's VPC.
F. Update the route table to connect to the target service.
G. Ask the provider to create a VPC endpoint for the target service.
H. Use AWS PrivateLink to connect to the target service.
Answer: D
Explanation:
**AWS PrivateLink provides private connectivity between VPCs, AWS services, and your on-premises networks, without exposing your traffic to the public
internet**. AWS PrivateLink makes it easy to connect services across different accounts and VPCs to significantly simplify your network architecture. Interface
**VPC endpoints**, powered by AWS PrivateLink, connect you to services hosted by AWS Partners and supported solutions available in AWS Marketplace.
https://aws.amazon.com/privatelink/
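On the consumer side, once the provider shares its endpoint service name, connecting over PrivateLink is a single interface endpoint in the company's VPC. A minimal sketch follows; the VPC, subnet, security group IDs, and service name are placeholders.

```python
import boto3

ec2 = boto3.client("ec2")

# The provider shares the endpoint service name after creating a PrivateLink
# endpoint service in its own VPC; all IDs below are placeholders.
response = ec2.create_vpc_endpoint(
    VpcEndpointType="Interface",
    VpcId="vpc-0123456789abcdef0",
    ServiceName="com.amazonaws.vpce.us-east-1.vpce-svc-0123456789abcdef0",
    SubnetIds=["subnet-0123456789abcdef0", "subnet-0fedcba9876543210"],
    SecurityGroupIds=["sg-0123456789abcdef0"],
    PrivateDnsEnabled=False,
)
print(response["VpcEndpoint"]["VpcEndpointId"])
```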
NEW QUESTION 95
- (Topic 2)
A company wants to manage Amazon Machine Images (AMIs). The company currently copies AMIs to the same AWS Region where the AMIs were created. The
company needs to design an application that captures AWS API calls and sends alerts whenever the Amazon EC2 CreateImage API operation is called within the
company's account.
Which solution will meet these requirements with the LEAST operational overhead?
A. Create an AWS Lambda function to query AWS CloudTrail logs and to send an alert when a CreateImage API call is detected.
B. Configure AWS CloudTrail with an Amazon Simple Notification Service (Amazon SNS) notification that occurs when updated logs are sent to Amazon S3. Use
Amazon Athena to create a new table and to query on CreateImage when an API call is detected.
C. Create an Amazon EventBridge (Amazon CloudWatch Events) rule for the CreateImage API call.
D. Configure the target as an Amazon Simple Notification Service (Amazon SNS) topic to send an alert when a CreateImage API call is detected.
E. Configure an Amazon Simple Queue Service (Amazon SQS) FIFO queue as a target for AWS CloudTrail logs.
F. Create an AWS Lambda function to send an alert to an Amazon Simple Notification Service (Amazon SNS) topic when a CreateImage API call is detected.
Answer: C
Explanation:
https://docs.aws.amazon.com/AWSEC2/latest/WindowsGuide/monitor-ami-events.html
"For example, you can create an EventBridge rule that detects when the AMI creation process has completed and then invokes an Amazon SNS topic to send an email notification to you."
Creating an Amazon EventBridge (Amazon CloudWatch Events) rule for the CreateImage API call and configuring the target as an Amazon Simple Notification
Service (Amazon SNS) topic to send an alert when a CreateImage API call is detected will meet the requirements with the least operational overhead. Amazon
EventBridge is a serverless event bus that makes it easy to connect applications together using data from your own applications, integrated Software as a Service
(SaaS) applications, and AWS services. By creating an EventBridge rule for the CreateImage API call, the company can set up alerts whenever this operation is
called within their account. The alert can be sent to an SNS topic, which can then be configured to send notifications to the company's email or other desired
destination.
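A minimal sketch of that rule and target, assuming a CloudTrail trail is already enabled (API-call events only reach EventBridge through CloudTrail) and using a placeholder SNS topic ARN:

```python
import json
import boto3

events = boto3.client("events")

SNS_TOPIC_ARN = "arn:aws:sns:us-east-1:111122223333:ami-creation-alerts"  # placeholder

# Match CreateImage calls recorded by CloudTrail.
events.put_rule(
    Name="alert-on-create-image",
    EventPattern=json.dumps({
        "source": ["aws.ec2"],
        "detail-type": ["AWS API Call via CloudTrail"],
        "detail": {
            "eventSource": ["ec2.amazonaws.com"],
            "eventName": ["CreateImage"],
        },
    }),
    State="ENABLED",
)

# Send matched events to the SNS topic; the topic's access policy must allow
# events.amazonaws.com to publish to it.
events.put_targets(
    Rule="alert-on-create-image",
    Targets=[{"Id": "sns-alert", "Arn": SNS_TOPIC_ARN}],
)
```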
NEW QUESTION 99
- (Topic 2)
A company has an event-driven application that invokes AWS Lambda functions up to 800 times each minute with varying runtimes. The Lambda functions access
data that is stored in an Amazon Aurora MySQL DB cluster. The company is noticing connection timeouts as user activity increases. The database shows no signs
of being overloaded; CPU, memory, and disk access metrics are all low.
Which solution will resolve this issue with the LEAST operational overhead?
A. Adjust the size of the Aurora MySQL nodes to handle more connections.
B. Configure retry logic in the Lambda functions for attempts to connect to the database.
C. Set up Amazon ElastiCache for Redis to cache commonly read items from the database.
D. Configure the Lambda functions to connect to ElastiCache for reads.
E. Add an Aurora Replica as a reader node.
F. Configure the Lambda functions to connect to the reader endpoint of the DB cluster rather than to the writer endpoint.
G. Use Amazon RDS Proxy to create a proxy.
H. Set the DB cluster as the target database. Configure the Lambda functions to connect to the proxy rather than to the DB cluster.
Answer: D
Explanation:
1. The database shows no signs of being overloaded; CPU, memory, and disk access metrics are all low, so A and C are out. Adding larger nodes or a read replica
does not help because the database workload itself is fine. 2. "Least operational overhead" rules out B, because it requires adding and maintaining retry logic in
every Lambda function. 3. RDS Proxy pools and shares infrequently used connections, provides high availability with failover, and drives increased efficiency; the
proxy absorbs the connection churn from the Lambda functions instead of exhausting the DB cluster's connection limit. So D is right.
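For context, a minimal Lambda handler sketch that connects through an RDS Proxy endpoint rather than directly to the DB cluster. The proxy endpoint, user, and database names are placeholders supplied as environment variables, pymysql is assumed as the MySQL driver bundled with the function, and IAM authentication is assumed to be enabled on the proxy.

```python
import os
import boto3
import pymysql  # third-party driver packaged with the Lambda deployment

PROXY_HOST = os.environ["PROXY_ENDPOINT"]  # placeholder proxy endpoint hostname
DB_USER = os.environ["DB_USER"]
DB_NAME = os.environ["DB_NAME"]

rds = boto3.client("rds")

def handler(event, context):
    # With IAM authentication on the proxy, a short-lived token replaces a static password.
    token = rds.generate_db_auth_token(
        DBHostname=PROXY_HOST, Port=3306, DBUsername=DB_USER
    )
    conn = pymysql.connect(
        host=PROXY_HOST,
        user=DB_USER,
        password=token,
        database=DB_NAME,
        ssl={"ca": "/opt/rds-ca-bundle.pem"},  # TLS is required for IAM auth
        connect_timeout=5,
    )
    try:
        with conn.cursor() as cur:
            cur.execute("SELECT 1")
            return cur.fetchone()
    finally:
        conn.close()
```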
Answer: B
Explanation:
ElastiCache, enhances the performance of web applications by quickly retrieving information from fully-managed in-memory data stores. It utilizes Memcached and
Redis, and manages to considerably reduce the time your applications would, otherwise, take to read data from disk-based databases. Amazon CloudFront
supports dynamic content from HTTP and WebSocket protocols, which are based on the Transmission Control Protocol (TCP) protocol. Common use cases
include dynamic API calls, web pages and web applications, as well as an application's static files such as audio and images. It also supports on-demand media
streaming over HTTP. AWS Global Accelerator supports both User Datagram Protocol (UDP) and TCP-based protocols. It is commonly used for non-HTTP use
cases, such as gaming, IoT, and voice over IP. It is also good for HTTP use cases that need static IP addresses or fast regional failover.
A. Configure a CloudWatch Logs subscription to stream the logs to Amazon OpenSearch Service (Amazon Elasticsearch Service).
B. Create an AWS Lambda function.
C. Use the log group to invoke the function to write the logs to Amazon OpenSearch Service (Amazon Elasticsearch Service).
D. Create an Amazon Kinesis Data Firehose delivery stream.
E. Configure the log group as the delivery stream's source.
F. Configure Amazon OpenSearch Service (Amazon Elasticsearch Service) as the delivery stream's destination.
G. Install and configure Amazon Kinesis Agent on each application server to deliver the logs to Amazon Kinesis Data Streams.
H. Configure Kinesis Data Streams to deliver the logs to Amazon OpenSearch Service (Amazon Elasticsearch Service).
Answer: B
Explanation:
https://computingforgeeks.com/stream-logs-in-aws-from-cloudwatch-to-elasticsearch/
Answer: C
Explanation:
https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/rds-custom.html and https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/working-with-custom-oracle.html
Answer: B
Explanation:
Migrating to Amazon MQ reduces the overhead of queue management, so C and D are dismissed. Deciding between A and B means choosing between an Auto
Scaling group of EC2 instances and Amazon RDS for PostgreSQL (both Multi-AZ). The RDS option has less operational impact, because it provides the required
tools and software as a managed service. Consider, for instance, the effort to add an additional node, such as a read replica, to the DB.
https://docs.aws.amazon.com/amazon-mq/latest/developer-guide/active-standby-broker-deployment.html https://aws.amazon.com/rds/postgresql/
A. Deploy the application with the required infrastructure elements in place. Use Amazon Route 53 to configure active-passive failover. Create an Aurora Replica in
a second AWS Region.
B. Host a scaled-down deployment of the application in a second AWS Region. Use Amazon Route 53 to configure active-active failover. Create an Aurora Replica
in the second Region.
C. Replicate the primary infrastructure in a second AWS Region. Use Amazon Route 53 to configure active-active failover. Create an Aurora database that is
Answer: A
Explanation:
https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/dns-failover-types.html
A. Use Spot Instances in an Amazon EC2 Auto Scaling group to run the application containers.
B. Use Spot Instances in an Amazon Elastic Kubernetes Service (Amazon EKS) managed node group.
C. Use On-Demand Instances in an Amazon EC2 Auto Scaling group to run the application containers.
D. Use On-Demand Instances in an Amazon Elastic Kubernetes Service (Amazon EKS) managed node group.
Answer: A
Explanation:
https://aws.amazon.com/cn/blogs/compute/cost-optimization-and-resilience-eks-with-spot-instances/
Answer: C
Explanation:
CloudFront uses a local cache to provide the response; AWS Global Accelerator proxies requests and connects to the application all the time for the response.
https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/private-content-restricting-access-to-s3.html#private-content-granting-permissions-to-oai
Answer: A
Explanation:
Composite alarms determine their states by monitoring the states of other alarms. You can **use composite alarms to reduce alarm noise**. For example, you can
create a composite alarm where the underlying metric alarms go into ALARM when they meet specific conditions. You then can set up your composite alarm to go
into ALARM and send you notifications when the underlying metric alarms go into ALARM by configuring the underlying metric alarms never to take actions.
Currently, composite alarms can take the following actions: https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/Create_Composite_Alarm.html
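A minimal sketch of such a composite alarm, assuming two existing metric alarms (placeholder names) whose own actions are disabled so that only the composite alarm notifies; the SNS topic ARN is also a placeholder.

```python
import boto3

cloudwatch = boto3.client("cloudwatch")

cloudwatch.put_composite_alarm(
    AlarmName="app-degraded",
    AlarmRule="ALARM(high-cpu-alarm) AND ALARM(high-5xx-alarm)",
    AlarmActions=["arn:aws:sns:us-east-1:111122223333:ops-alerts"],  # placeholder topic
    AlarmDescription="Notify only when both the CPU and 5xx metric alarms are in ALARM",
)
```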
A. Use AWS WAF in front of the ALB Associate the appropriate web ACLs with AWS WAF.
B. Create an ALB listener rule to reply to SQL injection with a fixed response
C. Subscribe to AWS Shield Advanced to block all SQL injection attempts automatically.
D. Set up Amazon Inspector to block all SQL injection attempts automatically.
Answer: A
Explanation:
https://aws.amazon.com/premiumsupport/knowledge-center/waf-block-common-attacks/
Protect against SQL injection and cross-site scripting: To protect your applications against SQL injection and cross-site scripting (XSS)
attacks, use the built-in SQL injection and cross-site scripting engines. Remember that attacks can be performed on different parts of the HTTP request, such as
the HTTP header, query string, or URI. Configure the AWS WAF rules to inspect different parts of the HTTP request against the built-in mitigation engines.
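One way to set this up, sketched with placeholders for the web ACL name and ALB ARN, is to create a regional web ACL that uses the AWS managed SQL injection rule group and then associate it with the ALB:

```python
import boto3

wafv2 = boto3.client("wafv2", region_name="us-east-1")

# Placeholder ALB ARN; Scope is REGIONAL because the ACL fronts an ALB.
ALB_ARN = "arn:aws:elasticloadbalancing:us-east-1:111122223333:loadbalancer/app/web-alb/0123456789abcdef"

acl = wafv2.create_web_acl(
    Name="web-acl-sqli",
    Scope="REGIONAL",
    DefaultAction={"Allow": {}},
    Rules=[
        {
            "Name": "aws-sqli-rules",
            "Priority": 0,
            "Statement": {
                "ManagedRuleGroupStatement": {
                    "VendorName": "AWS",
                    "Name": "AWSManagedRulesSQLiRuleSet",
                }
            },
            "OverrideAction": {"None": {}},
            "VisibilityConfig": {
                "SampledRequestsEnabled": True,
                "CloudWatchMetricsEnabled": True,
                "MetricName": "sqli-rules",
            },
        }
    ],
    VisibilityConfig={
        "SampledRequestsEnabled": True,
        "CloudWatchMetricsEnabled": True,
        "MetricName": "web-acl-sqli",
    },
)

# Attach the web ACL to the Application Load Balancer.
wafv2.associate_web_acl(
    WebACLArn=acl["Summary"]["ARN"],
    ResourceArn=ALB_ARN,
)
```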
Answer: C
Explanation:
https://docs.aws.amazon.com/cli/latest/reference/transfer/describe-server.html
A. Add an explicit rule to the private subnet's network ACL to allow traffic from the web tier's EC2 instances.
B. Add a route in the VPC route table to allow traffic between the web tier's EC2 instances and the database tier.
C. Deploy the web tier's EC2 instances and the database tier's RDS instance into two separate VPCs,
D. and configure VPC peering.
E. Add an inbound rule to the security group of the database tier's RDS instance to allow traffic from the web tier's security group.
Answer: D
Explanation:
This answer is correct because it allows the web tier to access the database tier by using
security groups as a source, which is a recommended best practice for VPC connectivity. Security groups are stateful and can reference other security groups in
the same VPC, which simplifies the configuration and maintenance of the firewall rules. By adding an inbound rule to the database tier’s security group, the web
tier’s EC2 instances can connect to the RDS instance on port 3306, regardless of their IP addresses or subnets. References:
? Security groups - Amazon Virtual Private Cloud
? Best practices and reference architectures for VPC design
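The rule itself is a single ingress authorization on the database tier's security group that references the web tier's security group as the source (both group IDs below are placeholders):

```python
import boto3

ec2 = boto3.client("ec2")

DB_SG_ID = "sg-0123456789abcdef0"   # security group attached to the RDS instance (placeholder)
WEB_SG_ID = "sg-0fedcba9876543210"  # security group attached to the web tier instances (placeholder)

# Allow MySQL traffic (port 3306) only from members of the web tier's security group.
ec2.authorize_security_group_ingress(
    GroupId=DB_SG_ID,
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 3306,
        "ToPort": 3306,
        "UserIdGroupPairs": [{"GroupId": WEB_SG_ID, "Description": "MySQL from web tier"}],
    }],
)
```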
Answer: B
Explanation:
Amazon Aurora Serverless for MySQL is a fully managed, auto-scaling relational database service that scales up or down automatically based on the application
demand. This service provides all the capabilities of Amazon Aurora, such as high availability, durability, and security, without requiring the customer to provision
any database instances. With Amazon Aurora Serverless for MySQL, the sales team can enjoy minimal downtime since the database is designed to automatically
scale to accommodate the increased traffic. Additionally, the service allows the customer to pay only for the capacity used, making it cost-effective for infrequent
access patterns. Amazon RDS for MySQL could also be an option, but it requires the customer to select an instance type, and the database administrator would
need to monitor and adjust the instance size manually to accommodate the increasing traffic.
A. Implement client-side encryption and store the images in an Amazon S3 Glacier vault. Set a vault lock to prevent accidental deletion.
B. Store the images in an Amazon S3 bucket in the S3 Standard-Infrequent Access (S3 Standard-IA) storage class. Enable versioning, default encryption, and MFA
Delete on the S3 bucket.
C. Store the images in an Amazon FSx for Windows File Server file share. Configure the Amazon FSx file share to use an AWS Key Management Service (AWS
KMS) customer master key (CMK) to encrypt the images in the file share. Use NTFS permission sets on the images to prevent accidental deletion.
D. Store the images in an Amazon Elastic File System (Amazon EFS) file share in the Infrequent Access storage class. Configure the EFS file share to use an
AWS Key Management Service (AWS KMS) customer master key (CMK) to encrypt the images in the file share.
E. Use NFS permission sets on the images to prevent accidental deletion.
Answer: B
Explanation:
This answer is correct because it provides a resilient and durable replacement for the on- premises file share that is compatible with a serverless web application.
Amazon S3 is a fully managed object storage service that can store any amount of data and serve it over the internet. It supports the following features:
? Resilience: Amazon S3 stores data across multiple Availability Zones within a Region, and offers 99.999999999% (11 9’s) of durability. It also supports cross-
region replication, which enables automatic and asynchronous copying of objects across buckets in different AWS Regions.
? Durability: Amazon S3 encrypts data at rest using server-side encryption with either Amazon S3-managed keys (SSE-S3), AWS KMS keys (SSE-KMS), or
customer-provided keys (SSE-C). It also supports encryption in transit using SSL/TLS. Amazon S3 also provides data protection features such as versioning,
which keeps multiple versions of an object in the same bucket, and MFA Delete, which requires additional authentication for deleting an object version or changing
the versioning state of a bucket.
? Performance: Amazon S3 delivers high performance and scalability for serving static and dynamic web content. It also supports features such as S3 Transfer
Acceleration, which speeds up data transfers by routing requests to AWS edge locations, and S3 Select, which enables retrieving only a subset of data from an
object by using simple SQL expressions.
The S3 Standard-Infrequent Access (S3 Standard-IA) storage class is suitable for storing images that are rarely accessed, but must be immediately available when
needed. It offers the same high durability, throughput, and low latency as S3 Standard, but with a lower storage cost per GB and a higher per-request cost.
References:
? Amazon Simple Storage Service
? Storage classes - Amazon Simple Storage Service
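For illustration, the two bucket-level protections named in the answer can be enabled as shown below; the bucket name is a placeholder, and MFA Delete is noted only in a comment because it must be set by the root user with an MFA device rather than by an ordinary API call.

```python
import boto3

s3 = boto3.client("s3")
BUCKET = "example-image-archive"  # placeholder bucket name

# Keep every version of an object so overwrites and deletes are recoverable.
s3.put_bucket_versioning(
    Bucket=BUCKET,
    VersioningConfiguration={"Status": "Enabled"},
)

# Encrypt new objects at rest by default (SSE-S3 here; SSE-KMS is also possible).
s3.put_bucket_encryption(
    Bucket=BUCKET,
    ServerSideEncryptionConfiguration={
        "Rules": [{"ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "AES256"}}]
    },
)

# MFA Delete: enabled separately by the root user (put_bucket_versioning with the
# MFA parameter), which is why it is not shown as a plain call here.
```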
A. Use Amazon GuardDuty to monitor S3 bucket policies. Create an automatic remediation action rule that uses an AWS Lambda function to remediate any change
that makes the objects public.
B. Use AWS Trusted Advisor to find publicly accessible S3 buckets. Configure email notifications in Trusted Advisor when a change is detected. Manually change
the S3 bucket policy if it allows public access.
C. Use AWS Resource Access Manager to find publicly accessible S3 buckets. Use Amazon Simple Notification Service (Amazon SNS) to invoke an AWS Lambda
function when a change is detected.
D. Deploy a Lambda function that programmatically remediates the change.
E. Use the S3 Block Public Access feature on the account level.
F. Use AWS Organizations to create a service control policy (SCP) that prevents IAM users from changing the setting. Apply the SCP to the account.
Answer: D
Explanation:
The S3 Block Public Access feature allows you to restrict public access to S3 buckets and objects within the account. You can enable this feature at the account
level to prevent any S3 bucket from being made public, regardless of the bucket policy settings. AWS Organizations can be used to apply a Service Control Policy
(SCP) to the account to prevent IAM users from changing this setting, ensuring that all S3 objects remain private. This is a straightforward and effective solution
that requires minimal operational overhead.
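A minimal sketch of both halves of the answer: turning on account-level Block Public Access, and an illustrative SCP document (attached through AWS Organizations) that denies changes to that setting. The account ID is a placeholder, and the denied action name is included as an assumption about the relevant S3 action rather than a value from the question.

```python
import boto3

s3control = boto3.client("s3control")
ACCOUNT_ID = "111122223333"  # placeholder account ID

# Turn on all four Block Public Access settings for the whole account.
s3control.put_public_access_block(
    AccountId=ACCOUNT_ID,
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": True,
        "IgnorePublicAcls": True,
        "BlockPublicPolicy": True,
        "RestrictPublicBuckets": True,
    },
)

# Illustrative SCP that stops principals in the account from weakening the setting.
SCP_DOCUMENT = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Deny",
        "Action": ["s3:PutAccountPublicAccessBlock"],  # assumed action name for the account-level setting
        "Resource": "*",
    }],
}
```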
A. Set up a new Amazon DocumentDB (with MongoDB compatibility) cluster that includes a read replica. Scale the read replica to generate the reports.
B. Set up a new Amazon RDS for PostgreSQL Reserved Instance and an On-Demand read replica. Scale the read replica to generate the reports.
C. Set up a new Amazon Aurora PostgreSQL DB cluster that includes a Reserved Instance and an Aurora Replica. Issue queries to the Aurora Replica to generate
the reports.
D. Set up a new Amazon RDS for PostgreSQL Multi-AZ Reserved Instance. Configure the reporting module to query the secondary RDS node so that the reporting
module does not affect the primary node.
E. Set up a new Amazon DynamoDB table to store the documents. Use a fixed write capacity to support new document entries. Automatically scale the read
capacity to support the reports.
Answer: BC
Explanation:
These options are operationally efficient because they use Amazon RDS read replicas to offload the reporting workload from the primary DB instance and avoid
affecting any document modifications or the addition of new documents1. They also use Reserved Instances for the primary DB instance to reduce costs and On-
Demand or Aurora Replicas for the read replicas to scale as needed. Option A is less efficient because it uses Amazon S3 Glacier Flexible Retrieval, which is a
cold storage class that has higher retrieval costs and longer retrieval times than Amazon S3 Standard. It also uses EventBridge rules to invoke the job nightly,
which does not meet the requirement of processing incoming data files as soon as possible. Option D is less efficient because it uses AWS Lambda to process the
files, which has a maximum execution time of 15 minutes per invocation, which might not be enough for processing each file that needs 3-8 minutes. It also uses
S3 event notifications to invoke the Lambda function when the files arrive, which could cause concurrency issues if there are thousands of small data files arriving
periodically. Option E is less efficient because it uses Amazon DynamoDB, which is a NoSQL database service that does not support relational queries, which are
needed for generating the reports. It also uses fixed write capacity, which could cause throttling or underutilization depending on the incoming data files.
Answer: B
Explanation:
the best solution is to implement Amazon ElastiCache to cache the large datasets, which will store the frequently accessed data in memory, allowing for faster
retrieval times. This can help to alleviate the frequent calls to the database, reduce latency, and improve the overall performance of the backend tier.
Answer: D
Explanation:
This option is the most efficient because it uses API usage plans and API keys, which are features of Amazon API Gateway that allow you to control who can
access your API and how much and how fast they can access it1. It also implements API usage plans and API keys to limit the access of users who do not have a
subscription, which enables you to create different tiers of access for your API and charge users accordingly. This solution meets the requirement of updating the
application so that only users who have a subscription can access premium content. Option A is less efficient because it uses API caching and throttling on the API
Gateway API, which are features of Amazon API Gateway that allow you to improve the performance and availability of your API and protect your backend
systems from traffic spikes2. However, this does not provide a way to limit the access of users who do not have a subscription. Option B is less efficient because it
uses AWS WAF on the API Gateway API, which is a web application firewall service that helps protect your web applications or APIs against common web exploits
that may affect availability, compromise security, or consume excessive resources3. However, this does not provide a way to limit the access of users who do not
have a subscription. Option C is less efficient because it uses fine-grained IAM permissions to the premium content in the DynamoDB table, which are permissions
that allow you to control access to specific items or attributes within a table4. However, this does not provide a way to limit the access of users who do not have a
subscription at the API level.
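For illustration, a usage plan plus per-subscriber API keys can be wired up as sketched below; the REST API ID, stage name, and limits are placeholders, and the premium-content methods must also have "API key required" enabled so requests without a valid key are rejected.

```python
import boto3

apigw = boto3.client("apigateway")

API_ID = "a1b2c3d4e5"  # placeholder REST API id
STAGE = "prod"         # placeholder stage name

plan = apigw.create_usage_plan(
    name="premium-subscribers",
    apiStages=[{"apiId": API_ID, "stage": STAGE}],
    throttle={"rateLimit": 50.0, "burstLimit": 100},
    quota={"limit": 100000, "period": "MONTH"},
)

# One API key per subscribed user, issued when the subscription is purchased.
key = apigw.create_api_key(name="customer-42", enabled=True)

# Attach the key to the usage plan so only plan members can call the premium methods.
apigw.create_usage_plan_key(
    usagePlanId=plan["id"],
    keyId=key["id"],
    keyType="API_KEY",
)
```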
Answer: A
Explanation:
This option is the most efficient because it sets an overall password policy for the entire AWS account, which is a way to specify complexity requirements and
mandatory rotation periods for IAM user passwords1. It also meets the requirement of setting a password policy for all new users, as the password policy applies
to all IAM users in the account. This solution meets the requirement of setting specific complexity requirements and mandatory rotation periods for IAM user
passwords. Option B is less efficient because it sets a password policy for each IAM user in the AWS account, which is not possible as password policies can only
be set at the account level. Option C is less efficient because it uses third- party vendor software to set password requirements, which is not necessary as IAM
provides a built-in way to set password policies. Option D is less efficient because it attaches an Amazon CloudWatch rule to the Create_newuser event to set the
password with the appropriate requirements, which is not possible as CloudWatch rules cannot modify IAM user passwords.
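The account-level policy is a single call; the specific complexity and rotation values below are example assumptions, not figures from the question.

```python
import boto3

iam = boto3.client("iam")

iam.update_account_password_policy(
    MinimumPasswordLength=14,
    RequireUppercaseCharacters=True,
    RequireLowercaseCharacters=True,
    RequireNumbers=True,
    RequireSymbols=True,
    MaxPasswordAge=90,              # mandatory rotation every 90 days
    PasswordReusePrevention=5,      # block reuse of the last 5 passwords
    AllowUsersToChangePassword=True,
)
```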
was received. The solutions architect must design the application to asynchronously dispatch requests to the different application tiers.
What should the solutions architect do to meet these requirements?
A. Write a custom AWS Lambda function to generate the thumbnail and alert the user.
B. Use the image upload process as an event source to invoke the Lambda function.
C. Create an AWS Step Functions workflow. Configure Step Functions to handle the orchestration between the application tiers and alert the user when thumbnail
generation is complete.
D. Create an Amazon Simple Queue Service (Amazon SQS) message queue.
E. As images are uploaded, place a message on the SQS queue for thumbnail generation.
F. Alert the user through an application message that the image was received.
G. Create Amazon Simple Notification Service (Amazon SNS) notification topics and subscriptions. Use one subscription with the application to generate the
thumbnail after the image upload is complete.
H. Use a second subscription to message the user's mobile app by way of a push notification after thumbnail generation is complete.
Answer: C
Explanation:
This option is the most efficient because it uses Amazon SQS, which is a fully managed message queuing service that lets you send, store, and receive messages
between software components at any volume, without losing messages or requiring other services to be available1. It also uses an SQS message queue to
asynchronously dispatch requests to the different application tiers, which decouples the image upload process from the thumbnail generation process and enables
scalability and reliability. It also alerts the user through an application message that the image was received, which provides a faster response time to the user than
waiting for the thumbnail generation to complete. Option A is less efficient because it uses a custom AWS Lambda function to generate the thumbnail and alert the
user, which is a way to run code without provisioning or managing servers. However, this does not use an asynchronous dispatch mechanism to separate the
image upload process from the thumbnail generation process. It also uses the image upload process as an event source to invoke the Lambda function, which
could cause concurrency issues if there are many images uploaded at once. Option B is less efficient because it uses AWS Step Functions, which is a fully
managed service that provides a graphical console to arrange and visualize the components of your application as a series of steps2. However, this does not use
an asynchronous dispatch mechanism to separate the image upload process from the thumbnail generation process. It also uses Step Functions to handle the
orchestration between the application tiers and alert the user when thumbnail generation is complete, which could introduce additional complexity and latency.
Option D is less efficient because it uses Amazon SNS, which is a fully managed messaging service that enables you to send messages or notifications directly to
users with SMS text messages or email3. However, this does not use an asynchronous dispatch mechanism to separate the image upload process from the
thumbnail generation process. It also uses SNS notification topics and subscriptions to generate the thumbnail after the image upload is complete and message
the user’s mobile app by way of a push notification after thumbnail generation is complete, which could introduce additional complexity and latency.
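A minimal producer/consumer sketch of that decoupling, with a placeholder queue URL and the thumbnail-generation step left as an application-specific stub:

```python
import json
import boto3

sqs = boto3.client("sqs")
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/111122223333/thumbnail-jobs"  # placeholder

def on_image_uploaded(bucket: str, key: str) -> None:
    """Called by the upload tier: enqueue the job and acknowledge the user immediately."""
    sqs.send_message(
        QueueUrl=QUEUE_URL,
        MessageBody=json.dumps({"bucket": bucket, "key": key}),
    )
    # The application can now tell the user "image received" without waiting.

def thumbnail_worker() -> None:
    """Run by the thumbnail tier: poll, process, then delete each message."""
    messages = sqs.receive_message(
        QueueUrl=QUEUE_URL, MaxNumberOfMessages=10, WaitTimeSeconds=20
    ).get("Messages", [])
    for msg in messages:
        job = json.loads(msg["Body"])
        # generate_thumbnail(job["bucket"], job["key"])  # application-specific step
        sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=msg["ReceiptHandle"])
```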
A. Implement a scheduled action that sets the desired capacity to 20 shortly before the office opens
B. Implement a step scaling action triggered at a lower CPU threshold, and decrease the cooldown period.
C. Implement a target tracking action triggered at a lower CPU threshold, and decrease the cooldown period.
D. Implement a scheduled action that sets the minimum and maximum capacity to 20 shortly before the office opens
Answer: C
Explanation:
This option will scale up capacity faster in the morning to improve performance, but will still allow capacity to scale down during off hours. It achieves this as follows:
• A target tracking action scales based on a CPU utilization target. By triggering at a lower CPU threshold in the morning, the Auto Scaling group will start scaling up sooner as traffic ramps up, launching instances before utilization gets too high and impacts performance.
• Decreasing the cooldown period allows Auto Scaling to scale more aggressively, launching more instances faster until the target is reached. This speeds up the ramp-up of capacity.
• However, unlike a scheduled action to set a fixed minimum/maximum capacity, with target tracking the group can still scale down during off hours based on demand. This helps minimize costs.
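A sketch of such a policy, with a placeholder Auto Scaling group name; the 40% target and 120-second warmup are assumed example values chosen to make scale-out start earlier and proceed faster.

```python
import boto3

autoscaling = boto3.client("autoscaling")

autoscaling.put_scaling_policy(
    AutoScalingGroupName="web-asg",  # placeholder group name
    PolicyName="cpu-target-tracking",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization"
        },
        "TargetValue": 40.0,  # lower threshold so capacity ramps up sooner in the morning
    },
    EstimatedInstanceWarmup=120,  # shorter warmup lets successive scale-outs happen faster
)
```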
A. Build out the workflow in AWS Glue. Use AWS Glue to invoke AWS Lambda functions to process the workflow steps.
B. Build out the workflow in AWS Step Functions. Deploy the application on Amazon EC2 instances. Use Step Functions to invoke the workflow steps on the EC2
instances.
C. Build out the workflow in Amazon EventBridge.
D. Use EventBridge to invoke AWS Lambda functions on a schedule to process the workflow steps.
E. Build out the workflow in AWS Step Functions. Use Step Functions to create a state machine. Use the state machine to invoke AWS Lambda functions to
process the workflow steps.
Answer: D
Explanation:
This answer is correct because it meets the requirements of transitioning to an event-driven architecture, using serverless concepts, and minimizing operational
overhead. AWS Step Functions is a serverless service that lets you coordinate multiple AWS services into workflows using state machines. State machines are
composed of tasks and transitions that define the logic and order of execution of the workflow steps. AWS Lambda is a serverless function-as-a-service platform
that lets you run code without provisioning or managing servers. Lambda functions can be invoked by Step Functions as tasks in a state machine, and can perform
different aspects of the data management workflow, such as data ingestion, transformation, validation, and analysis. By using Step Functions and Lambda, the
company can benefit from the following advantages:
? Event-driven: Step Functions can trigger Lambda functions based on events, such as timers, API calls, or other AWS service events. Lambda functions can also
emit events to other services or state machines, creating an event-driven architecture.
? Serverless: Step Functions and Lambda are fully managed by AWS, so the company does not need to provision or manage any servers or infrastructure. The
company only pays for the resources consumed by the workflows and functions, and can scale up or down automatically based on demand.
? Operational overhead: Step Functions and Lambda simplify the development and deployment of workflows and functions, as they provide built-in features such
as monitoring, logging, tracing, error handling, retry logic, and security. The company can focus on the business logic and data processing rather than the
operational details.
References:
? What is AWS Step Functions?
? What is AWS Lambda?
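For illustration, a minimal state machine with two Lambda task states might be created as sketched below; the Lambda ARNs, role ARN, and step names are placeholders rather than details from the question.

```python
import json
import boto3

sfn = boto3.client("stepfunctions")

# Placeholder ARNs for two workflow steps and the state machine's execution role.
INGEST_ARN = "arn:aws:lambda:us-east-1:111122223333:function:ingest-data"
VALIDATE_ARN = "arn:aws:lambda:us-east-1:111122223333:function:validate-data"
ROLE_ARN = "arn:aws:iam::111122223333:role/stepfunctions-exec-role"

definition = {
    "Comment": "Minimal event-driven data management workflow",
    "StartAt": "Ingest",
    "States": {
        "Ingest": {"Type": "Task", "Resource": INGEST_ARN, "Next": "Validate"},
        "Validate": {
            "Type": "Task",
            "Resource": VALIDATE_ARN,
            "Retry": [{"ErrorEquals": ["States.TaskFailed"], "MaxAttempts": 2}],
            "End": True,
        },
    },
}

sfn.create_state_machine(
    name="data-management-workflow",
    definition=json.dumps(definition),
    roleArn=ROLE_ARN,
)
```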
Answer: AD
Explanation:
A) Enable AWS CloudTrail and use it for auditing. AWS CloudTrail provides a record of API calls and can be used to audit changes made to EC2 instances and
security groups. By analyzing CloudTrail logs, the solutions architect can track who provisioned oversized instances or modified security groups without proper
approval. D) Enable AWS Config and create rules for auditing and compliance purposes. AWS Config can record the configuration changes made to resources like
EC2 instances and security groups. The solutions architect can create AWS Config rules to monitor for non-compliant changes, like launching certain instance
types or opening security group ports without permission. AWS Config would alert on any violations of these rules.
A. Provision an Amazon S3 File Gateway as a virtual machine (VM) that is hosted on premises.
B. Set the local cache to 10 TB.
C. Modify existing applications to access the files through the NFS protocol.
D. To recover from a disaster, provision an Amazon EC2 instance and mount the S3 bucket that contains the files.
E. Provision an AWS Storage Gateway tape gateway.
F. Use a data backup solution to back up all existing data to a virtual tape library.
G. Configure the data backup solution to run nightly after the initial backup is complete.
H. To recover from a disaster, provision an Amazon EC2 instance and restore the data to an Amazon Elastic Block Store (Amazon EBS) volume from the volumes
in the virtual tape library.
I. Provision an AWS Storage Gateway Volume Gateway cached volume.
J. Set the local cache to 10 TB.
K. Mount the Volume Gateway cached volume to the existing file server by using iSCSI,
L. and copy all files to the storage volume.
M. Configure scheduled snapshots of the storage volume.
N. To recover from a disaster, restore a snapshot to an Amazon Elastic Block Store (Amazon EBS) volume and attach the EBS volume to an Amazon EC2
instance.
O. Provision an AWS Storage Gateway Volume Gateway stored volume with the same amount of disk space as the existing file storage volume.
P. Mount the Volume Gateway stored volume to the existing file server by using iSCSI, and copy all files to the storage volume.
Q. Configure scheduled snapshots of the storage volume.
R. To recover from a disaster, restore a snapshot to an Amazon Elastic Block Store (Amazon EBS) volume and attach the EBS volume to an Amazon EC2
instance.
Answer: D
Explanation:
"The company wants to ensure that end users retain immediate access to all file types from the on-premises systems " - Cached volumes: low latency access to
most recent data - Stored volumes: entire dataset is on premise, scheduled backups to S3 Hence Volume Gateway stored volume is the apt choice.
A. Take snapshots of Amazon Elastic Block Store (Amazon EBS) volumes of the EC2 instances and database every 2 hours to meet the RPO.
B. Configure a snapshot lifecycle policy to take Amazon Elastic Block Store (Amazon EBS) snapshots. Enable automated backups in Amazon RDS to meet the
RPO.
C. Retain the latest Amazon Machine Images (AMIs) of the web and application tiers. Enable automated backups in Amazon RDS and use point-in-time recovery to
meet the RPO.
D. Take snapshots of Amazon Elastic Block Store (Amazon EBS) volumes of the EC2 instances every 2 hours. Enable automated backups in Amazon RDS and
use point-in-time recovery to meet the RPO.
Answer: C
Explanation:
Since the application has no local data on instances, AMIs alone can meet the RPO by restoring instances from the most recent AMI backup. When combined
with automated RDS backups for the database, this provides a complete backup solution for this environment. The other options involving EBS snapshots would
be unnecessary given the stateless nature of the instances. AMIs provide all the backup needed for the app tier. This uses native, automated AWS backup
features that require minimal ongoing management: - AMI automated backups provide point-in-time recovery for the stateless app tier. - RDS automated backups
provide point-in-time recovery for the database.
Answer: C
Explanation:
https://docs.aws.amazon.com/autoscaling/ec2/userguide/ec2-auto-scaling-mixed-instances-groups.html
A. Create an AWS Glue extract, transform, and load (ETL) job that runs on a schedule.
B. Configure the ETL job to process the .csv files and store the processed data in Amazon Redshift.
C. Develop a Python script that runs on Amazon EC2 instances to convert the
D. .csv files to .sql files. Invoke the Python script on a cron schedule to store the output files in Amazon S3.
E. Create an AWS Lambda function and an Amazon DynamoDB table.
F. Use an S3 event to invoke the Lambda function.
G. Configure the Lambda function to perform an extract, transform, and load (ETL) job to process the .csv files and store the processed data in the DynamoDB
table.
H. Use Amazon EventBridge (Amazon CloudWatch Events) to launch an Amazon EMR cluster on a weekly schedule.
I. Configure the EMR cluster to perform an extract, transform, and load (ETL) job to process the .csv files and store the processed data in an Amazon Redshift
table.
Answer: A
Explanation:
This solution meets the requirements of implementing a solution so that the COTS application can use the data that the legacy application produces with the least
operational overhead. AWS Glue is a fully managed service that provides a serverless ETL platform to prepare and load data for analytics. AWS Glue can process
data in various formats, including .csv files, and store the processed data in Amazon Redshift, which is a fully managed data warehouse service that supports
complex SQL queries. AWS Glue can run ETL jobs on a schedule, which can automate the data processing and loading process. Option B is incorrect because
developing a Python script that runs on Amazon EC2 instances to convert the .csv files to sql files can increase the operational overhead and complexity, and it
may not provide consistent data processing and loading for the COTS application. Option C is incorrect because creating an AWS Lambda function and an
Amazon DynamoDB table to process the .csv files and store the processed data in the DynamoDB table does not meet the requirement of using Amazon Redshift
as the data source for the COTS application. Option D is incorrect because using Amazon EventBridge (Amazon CloudWatch Events) to launch an Amazon EMR
cluster on a weekly schedule to process the .csv files and store the processed data in an Amazon Redshift table can increase the operational overhead and
complexity, and it may not provide timely data processing and loading for the COTS application.
References:
? https://aws.amazon.com/glue/
? https://aws.amazon.com/redshift/
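A minimal sketch of registering such a scheduled Glue job; the role ARN, script location, worker sizing, and hourly schedule are all placeholder assumptions, and the ETL script itself (reading the .csv files and writing to Redshift) lives at the referenced S3 location.

```python
import boto3

glue = boto3.client("glue")

glue.create_job(
    Name="csv-to-redshift",
    Role="arn:aws:iam::111122223333:role/glue-etl-role",          # placeholder role
    Command={
        "Name": "glueetl",
        "ScriptLocation": "s3://example-etl-scripts/csv_to_redshift.py",  # placeholder script
        "PythonVersion": "3",
    },
    GlueVersion="4.0",
    NumberOfWorkers=2,
    WorkerType="G.1X",
)

# Hourly trigger so newly produced .csv files are picked up on a schedule.
glue.create_trigger(
    Name="csv-to-redshift-hourly",
    Type="SCHEDULED",
    Schedule="cron(0 * * * ? *)",
    Actions=[{"JobName": "csv-to-redshift"}],
    StartOnCreation=True,
)
```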
A. Reconfigure the target group in the development environment to have one EC2 instance as a target.
B. Change the ALB balancing algorithm to least outstanding requests.
C. Reduce the size of the EC2 instances in both environments.
D. Reduce the maximum number of EC2 instances in the development environment’s Auto Scaling group
Answer: D
Explanation:
This option will configure the development environment in the most cost- effective way as it reduces the number of instances running in the development
environment and therefore reduces the cost of running the application. The development environment typically requires less resources than the production
environment, and it is unlikely that the development environment will have periods of high traffic that would require a large number of instances. By reducing the
maximum number of instances in the
development environment's Auto Scaling group, the company can save on costs while still maintaining a functional development environment.
Answer: C
Explanation:
Migrating the MySQL database to an Amazon Aurora global database is the best solution
because it requires minimal operational overhead. Aurora is a managed service that provides automatic failover, so standby instances do not need to be manually
configured. The primary DB cluster can be hosted in the primary Region, and the secondary DB cluster can be hosted in the DR Region. This approach ensures
that the data is always available and up-to-date in multiple Regions, without requiring significant manual intervention.