Practice Test 2
A customer wants to create EBS Volumes in AWS. The data on the volume is required to be encrypted at rest.
How can this be achieved?
Explanation :
Answer – B
When you create a volume, you have the option to encrypt it using keys managed by the AWS Key Management Service (KMS).
For more information on using KMS, please refer to the below URL:
https://docs.aws.amazon.com/kms/latest/developerguide/services-ebs.html
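As an illustration, here is a minimal boto3 sketch of creating a KMS-encrypted EBS volume; the Availability Zone, size, and key alias are hypothetical placeholders, and omitting KmsKeyId would fall back to the default aws/ebs key.

    import boto3

    ec2 = boto3.client('ec2', region_name='us-east-1')

    # Create a volume encrypted at rest with a customer managed KMS key.
    volume = ec2.create_volume(
        AvailabilityZone='us-east-1a',   # hypothetical AZ
        Size=100,                        # size in GiB
        VolumeType='gp2',
        Encrypted=True,
        KmsKeyId='alias/my-ebs-key'      # hypothetical key alias
    )
    print(volume['VolumeId'])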
A company has a requirement to store 100TB of data to AWS. This data will be exported using AWS
Snowball and needs to then reside in a database layer. The database should have the facility to be queried from
a business intelligence application. Each item is roughly 500KB in size. Which of the following is an ideal
storage mechanism for the underlying data layer?
A. AWS DynamoDB
B. AWS Aurora
C. AWS RDS
D. AWS Redshift
Explanation :
Answer - D
Given the sheer size of the data and the need to query it from a business intelligence application, the ideal storage service is AWS Redshift.
AWS Documentation mentions the following on AWS Redshift:
Amazon Redshift is a fully managed, petabyte-scale data warehouse service in the cloud. You can start with just a few hundred
gigabytes of data and scale to a petabyte or more. This enables you to use your data to acquire new insights for your business and
customers.
The first step to create a data warehouse is to launch a set of nodes, called an Amazon Redshift cluster. After you provision your
cluster, you can upload your data set and then perform data analysis queries. Regardless of the size of the data set, Amazon Redshift
offers fast query performance using the same SQL-based tools and business intelligence applications that you use today.
For more information on AWS Redshift, please refer to the URL below.
https://docs.aws.amazon.com/redshift/latest/mgmt/welcome.html
A company is planning on testing a large set of IoT-enabled devices. These devices will be streaming data
every second. A suitable AWS service needs to be chosen to collect and analyze these
streams in real time. Which of the following could be used for this purpose?
A. Amazon EMR
B. Amazon Kinesis
C. AWS SQS
D. AWS SNS
Explanation :
Answer - B
AWS Documentation mentions the following on Amazon Kinesis:
Amazon Kinesis makes it easy to collect, process, and analyze real-time, streaming data so you can get timely insights and react
quickly to new information. Amazon Kinesis offers key capabilities to cost-effectively process streaming data at any scale, along with
the flexibility to choose the tools that best suit the requirements of your application. With Amazon Kinesis, you can ingest real-time
data such as video, audio, application logs, website clickstreams, and IoT telemetry data for machine learning, analytics, and other
applications.
For more information on Amazon Kinesis, please refer to the below URL:
https://aws.amazon.com/kinesis/
Option A: Amazon EMR can be used to process applications with data intensive workloads.
Option B: Amazon Kinesis can be used to collect, process, and analyze real-time streaming data.
Option C: SQS is a fully managed message queuing service that makes it easy to decouple and scale microservices, distributed
systems, and serverless applications.
Option D: SNS is a flexible, fully managed pub/sub messaging and mobile notifications service for coordinating the delivery of
messages to subscribing endpoints and clients.
Your company currently has a set of EC2 Instances hosted in AWS. The states of these instances need to be
monitored and each state change needs to be recorded. Which of the following can help fulfill this
requirement? Choose 2 answers from the options given below.
Explanation :
Answer – B and D
CloudWatch Events can be used to monitor the state change of EC2 Instances.
In the rule, EC2 is chosen as the event source and "EC2 Instance State-change Notification" as the event type. An AWS Lambda function can then
serve as a target, which can be used to store the record in a DynamoDB table.
For more information on CloudWatch events, please refer to the below URL:
https://docs.aws.amazon.com/AmazonCloudWatch/latest/events/WhatIsCloudWatchEvents.html
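For illustration, a minimal sketch of the Lambda target is shown below; it assumes a CloudWatch Events rule matching the "EC2 Instance State-change Notification" event type and a hypothetical DynamoDB table named InstanceStateChanges.

    import boto3

    table = boto3.resource('dynamodb').Table('InstanceStateChanges')  # hypothetical table

    def handler(event, context):
        # CloudWatch Events delivers the instance id and new state in event['detail'].
        detail = event['detail']
        table.put_item(Item={
            'InstanceId': detail['instance-id'],
            'Timestamp': event['time'],
            'State': detail['state']
        })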
QUESTION 5
SPECIFY SECURE APPLICATIONS AND ARCHITECTURES
You have instances hosted in a private subnet in a VPC. There is a need for the instances to download updates
from the Internet. As an architect, what change would you suggest to the IT Operations team which would
also be the most efficient and secure?
A. Create a new public subnet and move the instance to that subnet.
B. Create a new EC2 Instance to download the updates separately and then push them to the required
instance.
C. Use a NAT Gateway to allow the instances in the private subnet to download the updates.
D. Create a VPC link to the Internet to allow the instances in the private subnet to download the updates.
Explanation :
Answer – C
The NAT Gateway is an ideal option to ensure that instances in the private subnet have the ability to download updates from the
Internet.
For more information on the NAT Gateway, please refer to the below URL:
https://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/vpc-nat-gateway.html
Option A is not suitable because there may be a security reason for keeping these instances in the private subnet (for example, database
instances).
Option B is also incorrect. The instances in the private subnet may be running various applications and db instances. Hence, it is not
advisable or practical for an EC2 Instance to download the updates separately and then push them to the required instance.
Option D is incorrect because a VPC link is not used to connect to the Internet.
QUESTION 6
DESIGN COST-OPTIMIZED ARCHITECTURES
A company has opted to store their cold data on EBS Volumes. Ensuring optimal cost, which of the following
would be the ideal EBS Volume type to host this type of data?
Explanation :
Answer - D
As per the EBS Volume type comparison in AWS Documentation, the ideal and cost-efficient storage type for cold data is Cold HDD (sc1).
For more information on EBS Volume types, please refer to the below URL:
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSVolumeTypes.html
A company plans to have their application hosted in AWS. This application has users uploading files and then
using a public URL for downloading them at a later stage. Which of the following designs would help fulfill
this requirement?
Explanation :
Answer – B
If you need storage for the Internet, AWS Simple Storage Service is the best option. Each uploaded file is addressable via a URL, which
can be made public and used to download the file at a later point in time.
For more information on Amazon S3, please refer to the below URL:
https://aws.amazon.com/s3/
Options A and D are incorrect because EBS Volumes and Snapshots do not have public URLs.
Option C is incorrect because Glacier is mainly used for data archiving purposes.
QUESTION 8
SPECIFY SECURE APPLICATIONS AND ARCHITECTURES
You plan on hosting a web application on AWS. You create an EC2 Instance in a public subnet which needs to
connect to an EC2 Instance that will host an Oracle database. Which of the following steps should be taken to
ensure that a secure setup is in place? Choose 2 answers from the choices below.
A. Place the EC2 Instance with the Oracle database in the same public subnet as the Webserver for faster
communication.
B. Place the EC2 Instance with the Oracle database in a separate private subnet.
C. Create a database Security group which allows incoming traffic only from the Web server's security
group.
D. Ensure that the database security group allows incoming traffic from 0.0.0.0/0
Explanation :
Answer – B and C
The best and most secure option is to place the database in a private subnet; AWS Documentation describes this setup in its scenario for a
VPC with public and private subnets. Also, ensure that access is allowed not from all sources but only from the web servers.
For more information on this type of setup, please refer to the below URL:
https://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_Scenario2.html
Option A is incorrect because as per the best practice guidelines, db instances are placed in Private subnets and allowed to
communicate with web servers in the public subnet.
Option D is incorrect because allowing all incoming traffic from the Internet to the db instance is a security risk.
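A minimal boto3 sketch of such a rule is shown below, assuming hypothetical security group IDs and the default Oracle listener port 1521; only traffic originating from the web server's security group is allowed in.

    import boto3

    ec2 = boto3.client('ec2')

    # Allow Oracle (port 1521) inbound on the DB security group only from
    # the web server security group, not from 0.0.0.0/0.
    ec2.authorize_security_group_ingress(
        GroupId='sg-0db0000000000000',        # hypothetical DB security group
        IpPermissions=[{
            'IpProtocol': 'tcp',
            'FromPort': 1521,
            'ToPort': 1521,
            'UserIdGroupPairs': [{'GroupId': 'sg-0web000000000000'}]  # web tier SG
        }]
    )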
An EC2 Instance hosts a Java based application that accesses a DynamoDB table. This EC2 Instance is
currently serving production users. Which of the following is a secure way for the EC2 Instance to access the
DynamoDB table?
A. Use IAM Roles with permissions to interact with DynamoDB and assign it to the EC2 Instance.
B. Use KMS Keys with the right permissions to interact with DynamoDB and assign it to the EC2 Instance.
C. Use IAM Access Keys with the right permissions to interact with DynamoDB and assign it to the EC2
Instance.
D. Use IAM Access Groups with the right permissions to interact with DynamoDB and assign it to the EC2
Instance.
Explanation :
Answer - A
To ensure secure access to AWS resources from EC2 Instances, always assign a role to the EC2 Instance.
For more information on IAM Roles, please refer to the below URL:
https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles.html
An IAM role is similar to a user, in that it is an AWS identity with permission policies that determine what the identity can and cannot
do in AWS. However, instead of being uniquely associated with one person, a role is intended to be assumable by anyone who needs
it. Also, a role does not have standard long-term credentials (password or access keys) associated with it. Instead, if a user assumes a
role, temporary security credentials are created dynamically and provided to the user.
You can use roles to delegate access to users, applications, or services that don't normally have access to your AWS resources.
Note:
You can attach an IAM role to an existing EC2 instance.
https://aws.amazon.com/about-aws/whats-new/2017/02/new-attach-an-iam-role-to-your-existing-amazon-ec2-instance/
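A minimal boto3 sketch of attaching a role to a running instance is shown below; the instance profile name and instance ID are hypothetical, and the profile is assumed to wrap a role whose policy grants DynamoDB access.

    import boto3

    ec2 = boto3.client('ec2')

    # Associate an instance profile (which wraps the IAM role)
    # with an already-running EC2 instance.
    ec2.associate_iam_instance_profile(
        IamInstanceProfile={'Name': 'dynamodb-access-profile'},  # hypothetical profile
        InstanceId='i-0123456789abcdef0'                         # hypothetical instance
    )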
A company planning to build and deploy a web application on AWS needs a data store for
session data. Which of the below services can be used to meet this requirement? Please select 2 correct
options.
A. AWS RDS
B. AWS SQS
C. AWS ELB
D. AWS ElastiCache
Explanation :
Answer – C and D
AWS Documentation mentions the following:
Amazon ElastiCache offers fully managed Redis and Memcached. Seamlessly deploy, operate, and scale popular open source
compatible in-memory data stores. Build data-intensive apps or improve the performance of your existing apps by retrieving data from
high throughput and low latency in-memory data stores. Amazon ElastiCache is a popular choice for Gaming, Ad-Tech, Financial
Services, Healthcare, and IoT apps.
For more information on ElastiCache, please refer to the URL below.
https://aws.amazon.com/elasticache/
Option A is incorrect. RDS is a managed relational database service designed to simplify the setup, operation, and scaling of a relational
database for use in applications; it is not an ideal store for session data.
Option B is incorrect. SQS is a fully managed message queuing service that makes it easy to decouple and scale microservices,
distributed systems, and serverless applications.
Option C is correct.
AWS says "Sticky sessions, also known as session affinity, allow you to route a site user to the particular web server that is managing
that individual user’s session. The session’s validity can be determined by a number of methods, including a client-side cookies or via
configurable duration parameters that can be set at the load balancer which routes requests to the web servers."
https://aws.amazon.com/caching/session-management/
Note:
In order to address scalability and to provide shared data storage for sessions that can be accessed from any individual web server,
you can abstract the HTTP sessions from the web servers themselves. A common solution for this is to leverage an in-memory
key/value store such as Redis or Memcached.
In-memory caching improves application performance by storing frequently accessed data items in memory, so that they can
be retrieved without access to the primary data store. Properly leveraging caching can result in an application that not only
performs better, but also costs less at scale. Amazon ElastiCache is a managed service that reduces the administrative burden of
deploying an in-memory cache in the cloud.
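As a sketch of session management backed by ElastiCache, the snippet below stores session data in Redis with a TTL; it assumes the redis-py client, a hypothetical ElastiCache endpoint, and hypothetical helper names.

    import json
    import uuid
    import redis

    # Hypothetical ElastiCache for Redis endpoint
    r = redis.Redis(host='my-sessions.abc123.0001.use1.cache.amazonaws.com', port=6379)

    def save_session(data, ttl_seconds=1800):
        # Store the session under a random id with an expiry, so any web
        # server can read it and stale sessions vanish automatically.
        session_id = str(uuid.uuid4())
        r.setex(f'session:{session_id}', ttl_seconds, json.dumps(data))
        return session_id

    def load_session(session_id):
        raw = r.get(f'session:{session_id}')
        return json.loads(raw) if raw else None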
QUESTION 11
DEFINE PERFORMANT ARCHITECTURES
A company has setup an application in AWS that interacts with DynamoDB. It is required that when an item is
modified in a DynamoDB table, an immediate entry is made to the associating application. How can this be
accomplished? Choose 2 answers from the choices below.
A. Setup CloudWatch to monitor the DynamoDB table for changes. Then trigger a Lambda function to send
the changes to the application.
B. Setup CloudWatch logs to monitor the DynamoDB table for changes. Then trigger AWS SQS to send the
changes to the application.
C. Use DynamoDB streams to monitor the changes to the DynamoDB table.
D. Trigger a lambda function to make an associated entry in the application as soon as the DynamoDB
streams are modified
Explanation :
Answer – C and D
When you enable DynamoDB Streams on a table, you can associate the stream ARN with a Lambda function that you write.
Immediately after an item in the table is modified, a new record appears in the table's stream. AWS Lambda polls the stream and
invokes your Lambda function synchronously when it detects new stream records. Since our requirement is to have an immediate
entry made to an application in case an item in the DynamoDB table is modified, a lambda function is also required.
Let us try to analyze this with an example:
Consider a mobile gaming app that writes to a GameScores table. Whenever the top score in the GameScores table is updated, a
corresponding stream record is written to the table's stream. This event could then trigger a Lambda function that posts a
congratulatory message on a social media network handle.
DynamoDB streams can be used to monitor the changes to a DynamoDB table.
AWS Documentation mentions the following:
A DynamoDB stream is an ordered flow of information about changes to items in an Amazon DynamoDB table. When you enable a
stream on a table, DynamoDB captures information about every modification to data items in the table.
For more information on DynamoDB streams, please refer to the URL below.
https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Streams.html
Note:
DynamoDB is integrated with Lambda so that you can create triggers to events in DynamoDB Streams.
For more information on DynamoDB streams Lambda, please refer to the URL below.
https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Streams.Lambda.html
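For illustration, a minimal Lambda handler for DynamoDB stream records might look like the sketch below; the notify_application helper is a hypothetical stand-in for whatever call makes the entry in the associating application.

    def handler(event, context):
        # AWS Lambda delivers batches of stream records in event['Records'].
        for record in event['Records']:
            if record['eventName'] in ('INSERT', 'MODIFY'):
                # NewImage holds the item attributes after the change,
                # in DynamoDB's typed JSON format.
                new_image = record['dynamodb'].get('NewImage', {})
                notify_application(new_image)  # hypothetical application call

    def notify_application(item):
        print('Item changed:', item)  # placeholder for the real integration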
QUESTION 12
DEFINE OPERATIONALLY-EXCELLENT ARCHITECTURES
A company currently has an application hosted in their on-premises environment. The application has a
combination of web Instances and worker Instances, with RabbitMQ for messaging purposes. This
infrastructure is now required to be moved to the AWS Cloud. What is the best way to start using messaging
on the AWS Cloud?
Explanation :
Answer – B
An ideal option would be to make use of AWS Simple Queue Service to manage the messages between the application components.
The AWS SQS Service is a highly scalable and durable service.
For more information on Amazon SQS, please refer to the below URL:
https://aws.amazon.com/sqs/
QUESTION 13
DEFINE PERFORMANT ARCHITECTURES
An application currently uses AWS RDS MySQL as its data layer. Due to recent performance issues on the
database, it has been decided to separate the querying part of the application by setting up a separate reporting
layer. Which of the following additional steps could also potentially assist in improving the performance of
the underlying database?
Explanation :
Answer - C
AWS Documentation mentions the following:
Amazon RDS Read Replicas provide enhanced performance and durability for database (DB) instances. This feature makes it easy to
elastically scale out beyond the capacity constraints of a single DB instance for read-heavy database workloads. You can create one or
more replicas of a given source DB Instance and serve high-volume application read traffic from multiple copies of your data, thereby
increasing aggregate read throughput.
For more information on Amazon Read Replicas, please refer to the URL below.
https://aws.amazon.com/rds/details/read-replicas/
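A minimal boto3 sketch of adding such a replica is shown below; the source and replica identifiers and the instance class are hypothetical.

    import boto3

    rds = boto3.client('rds')

    # Create a read replica of the source MySQL instance; the reporting
    # layer can then query the replica instead of the primary.
    rds.create_db_instance_read_replica(
        DBInstanceIdentifier='mydb-reporting-replica',   # hypothetical replica name
        SourceDBInstanceIdentifier='mydb',               # hypothetical source instance
        DBInstanceClass='db.r5.large'
    )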
A company is asking its developers to store application logs in an S3 bucket. These logs are only required for
a temporary period of time after which, they can be deleted. Which of the following steps can be used to
effectively manage this?
A. Create a cron job to detect the stale logs and delete them accordingly.
B. Use a bucket policy to manage the deletion.
C. Use an IAM Policy to manage the deletion.
D. Use S3 Lifecycle Policies to manage the deletion.
Explanation :
Answer – D
AWS Documentation mentions the following to support the above requirement:
Lifecycle configuration enables you to specify the lifecycle management of objects in a bucket. The configuration is a set of one or
more rules, where each rule defines an action for Amazon S3 to apply to a group of objects. These actions can be classified as follows:
Transition actions – In which you define when objects transition to another storage class. For example, you may choose to transition
objects to the STANDARD_IA (IA, for infrequent access) storage class 30 days after creation, or archive objects to the GLACIER storage
class one year after creation.
Expiration actions – In which you specify when the objects expire. Then, Amazon S3 deletes the expired objects on your behalf.
For more information on S3 Lifecycle Policies, please refer to the URL below.
https://docs.aws.amazon.com/AmazonS3/latest/dev/object-lifecycle-mgmt.html
A built-in feature exists to do this job, hence Options A, B and C are not necessary.
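A minimal boto3 sketch of such an expiration rule is shown below, assuming a hypothetical bucket and a 30-day retention period for objects under the logs/ prefix.

    import boto3

    s3 = boto3.client('s3')

    # Expire (delete) application logs 30 days after creation.
    s3.put_bucket_lifecycle_configuration(
        Bucket='my-app-logs',                 # hypothetical bucket
        LifecycleConfiguration={'Rules': [{
            'ID': 'expire-old-logs',
            'Filter': {'Prefix': 'logs/'},
            'Status': 'Enabled',
            'Expiration': {'Days': 30}
        }]}
    )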
QUESTION 15
SPECIFY SECURE APPLICATIONS AND ARCHITECTURES
An application running on EC2 Instances processes sensitive information stored on Amazon S3. This
information is accessed over the Internet. The security team is concerned that the Internet connectivity to
Amazon S3 could be a security risk. Which solution will resolve the security concern?
Explanation :
Answer – D
AWS Documentation mentions the following:
A VPC endpoint enables you to privately connect your VPC to supported AWS services and VPC endpoint services powered by
PrivateLink without requiring an internet gateway, NAT device, VPN connection, or AWS Direct Connect connection. Instances in
your VPC do not require public IP addresses to communicate with resources in the service. Traffic between your VPC and the other
service does not leave the Amazon network.
For more information on VPC endpoints, please refer to the URL below.
https://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/vpc-endpoints.html
Option A is incorrect. An Internet Gateway is a horizontally scaled, redundant, and highly available VPC component that allows
communication between instances in your VPC and the Internet.
Option B is incorrect. A VPN, or Virtual Private Network, allows you to create a secure connection to another network over the
Internet.
Option C is incorrect. You can use a network address translation (NAT) gateway to enable instances in a private subnet to connect to
the Internet or other AWS services, but prevent the internet from initiating a connection with those instances.
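For illustration, a gateway endpoint for S3 could be created with boto3 as sketched below; the VPC and route table IDs are hypothetical, and the service name assumes the us-east-1 region.

    import boto3

    ec2 = boto3.client('ec2', region_name='us-east-1')

    # Gateway endpoint for S3: adds a route so S3 traffic from the
    # associated route tables stays on the Amazon network.
    ec2.create_vpc_endpoint(
        VpcEndpointType='Gateway',
        VpcId='vpc-0123456789abcdef0',            # hypothetical VPC
        ServiceName='com.amazonaws.us-east-1.s3',
        RouteTableIds=['rtb-0123456789abcdef0']   # hypothetical route table
    )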
You have set up a Redshift cluster in AWS and are trying to access it, but are unable to do so. What should be
done so that you can access the Redshift Cluster?
Explanation :
Answer - C
AWS Documentation mentions the following:
When you provision an Amazon Redshift cluster, it is locked down by default so nobody has access to it. To grant other users inbound
access to an Amazon Redshift cluster, you associate the cluster with a security group.
For more information on Redshift Security Groups, please refer to the below URL:
https://docs.aws.amazon.com/redshift/latest/mgmt/working-with-security-groups.html
QUESTION 17
DEFINE PERFORMANT ARCHITECTURES
You have a web application hosted on an EC2 Instance in AWS which is being accessed by users across the
globe. The Operations team has been receiving support requests about extreme slowness from users in some
regions. What can be done to the architecture to improve the response time for these users?
Explanation :
Answer – D
AWS Documentation mentions the following:
Amazon CloudFront is a web service that speeds up distribution of your static and dynamic web content, such as .html, .css, .js, and
image files, to your users. CloudFront delivers your content through a worldwide network of data centers called edge locations. When
a user requests content that you're serving with CloudFront, the user is routed to the edge location that provides the lowest latency
(time delay), so that content is delivered with the best possible performance.
For more information on Amazon CloudFront, please refer to the below URL:
https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/Introduction.html
Option A is incorrect. The latency issue is experienced by people from certain parts of the world only. So, increasing the number of
EC2 Instances or increasing the instance size does not make much of a difference.
Option C is incorrect. Route 53 health checks are meant to determine whether an instance is healthy or not.
Since this case deals with latency in responding to user requests, health checks do not address the problem. For improving latency,
CloudFront is a good solution.
Currently, you have a NAT Gateway defined for your private instances. You need to make the NAT Gateway
highly available. How can this be accomplished?
Explanation :
Answer - B
AWS Documentation mentions the following:
If you have resources in multiple Availability Zones and they share one NAT Gateway, in the event that the NAT Gateway’s
Availability Zone is down, resources in the other Availability Zones lose internet access. To create an Availability Zone-independent
architecture, create a NAT Gateway in each Availability Zone and configure your routing to ensure that resources use the NAT
Gateway in the same Availability Zone.
For more information on the NAT Gateway, please refer to the below URL:
https://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/vpc-nat-gateway.html
A company wants to have a fully managed data store in AWS. It should be a MySQL-compatible database,
which is an application requirement. Which of the following database engines can be used for this purpose?
A. AWS RDS
B. AWS Aurora
C. AWS DynamoDB
D. AWS Redshift
Explanation :
Answer - B
AWS Documentation mentions the following:
Amazon Aurora (Aurora) is a fully managed, MySQL- and PostgreSQL-compatible, relational database engine. It combines the speed
and reliability of high-end commercial databases with the simplicity and cost-effectiveness of open-source databases. It delivers up to
five times the throughput of MySQL and up to three times the throughput of PostgreSQL without requiring changes to most of your
existing applications.
For more information on AWS Aurora, please refer to the URL below.
https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Aurora.Overview.html
Note:
RDS is a generic service providing relational databases and supports six database engines: Aurora, MySQL,
MariaDB, PostgreSQL, Oracle, and Microsoft SQL Server. The question asks for a MySQL-compatible database from the options
provided. Of the options listed, Amazon Aurora is a MySQL- and PostgreSQL-compatible enterprise-class database.
QUESTION 20
DEFINE OPERATIONALLY-EXCELLENT ARCHITECTURES
A Solutions Architect is designing an online shopping application running in a VPC on EC2 Instances behind
an ELB Application Load Balancer. The instances run in an Auto Scaling group across multiple Availability
Zones. The application tier must read and write data to a customer managed database cluster. There should be
no access to the database from the Internet, but the cluster must be able to obtain software patches from the
Internet. Which VPC design meets these requirements?
A. Public subnets for both the application tier and the database cluster
B. Public subnets for the application tier, and private subnets for the database cluster
C. Public subnets for the application tier and NAT Gateway, and private subnets for the database cluster
D. Public subnets for the application tier, and private subnets for the database cluster and NAT Gateway
Explanation :
Answer – C
AWS Documentation describes the right setup for this scenario: public subnets for the application tier and the NAT Gateway, and
private subnets for the database cluster. The NAT Gateway must always be placed in a public subnet, because it needs to communicate
with the Internet. AWS Documentation states: "To create a NAT gateway, you must specify the public subnet in which the NAT gateway should reside. You must also
specify an Elastic IP address to associate with the NAT gateway when you create it. After you've created a NAT gateway, you must
update the route table associated with one or more of your private subnets to point Internet-bound traffic to the NAT gateway. This
enables instances in your private subnets to communicate with the internet."
For more information on this setup, please refer to the below URL:
https://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/vpc-nat-gateway.html
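The quoted procedure could be sketched in boto3 as below; the subnet and route table IDs are hypothetical placeholders.

    import boto3

    ec2 = boto3.client('ec2')

    # 1. Allocate an Elastic IP for the NAT Gateway.
    eip = ec2.allocate_address(Domain='vpc')

    # 2. Create the NAT Gateway in a *public* subnet.
    nat = ec2.create_nat_gateway(
        SubnetId='subnet-0pub000000000000',      # hypothetical public subnet
        AllocationId=eip['AllocationId']
    )
    nat_id = nat['NatGateway']['NatGatewayId']
    ec2.get_waiter('nat_gateway_available').wait(NatGatewayIds=[nat_id])

    # 3. Point Internet-bound traffic from the private route table at it.
    ec2.create_route(
        RouteTableId='rtb-0priv00000000000',     # hypothetical private route table
        DestinationCidrBlock='0.0.0.0/0',
        NatGatewayId=nat_id
    )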
A mobile based application requires uploading images to S3. As an architect, you do not want to make use of
the existing web server to upload the images due to the load that it would incur. How can this be handled?
A. Create a secondary S3 bucket. Then, use an AWS Lambda function to sync the contents to the primary bucket.
B. Use Pre-Signed URLs instead to upload the images.
C. Use ECS Containers to upload the images.
D. Upload the images to SQS and then push them to the S3 bucket.
Explanation :
Answer – B
The S3 bucket owner can create Pre-Signed URLs to upload the images to S3.
For more information on Pre-Signed URLs, please refer to the URL below.
https://docs.aws.amazon.com/AmazonS3/latest/dev/PresignedUrlUploadObject.html
Option A is not correct for this question. Since Amazon provides a built-in feature for this requirement, using this option would be
expensive and time-consuming. As a Solutions Architect, you are expected to pick the best and most cost-effective solution.
Option C is incorrect. ECS is a highly scalable, fast, container management service that makes it easy to run, stop, and manage Docker
containers on a cluster.
Option D is incorrect. SQS is a message queue service used by distributed applications to exchange messages through a polling model
and not through a push mechanism.
Note:
This question is based on a scenario where a pre-signed URL can be used.
A pre-signed URL embeds the credentials of its creator and grants time-limited access to a particular resource, such as S3 in this
scenario. The creator must have the necessary permissions so that other applications can use the URL to upload the data (images) to
the S3 bucket.
AWS definition:
"A pre-signed URL gives you access to the object identified in the URL, provided that the creator of the pre-signed URL has
permissions to access that object. That is, if you receive a pre-signed URL to upload an object, you can upload the object only if the
creator of the pre-signed URL has the necessary permissions to upload that object.
All objects and buckets by default are private. The pre-signed URLs are useful if you want your user/customer to be able to upload a
specific object to your bucket, but you don't require them to have AWS security credentials or permissions. When you create a pre-
signed URL, you must provide your security credentials and then specify a bucket name, an object key, an HTTP method (PUT for
uploading objects), and an expiration date and time. The pre-signed URLs are valid only for the specified duration."
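A minimal boto3 sketch of issuing such an upload URL is shown below; the bucket and key are hypothetical. The mobile client can then PUT the image directly to S3 without going through the web server.

    import boto3

    s3 = boto3.client('s3')

    # Generate a URL that permits a single PUT to this key for one hour.
    url = s3.generate_presigned_url(
        'put_object',
        Params={'Bucket': 'my-image-bucket',          # hypothetical bucket
                'Key': 'uploads/photo1.jpg'},         # hypothetical key
        ExpiresIn=3600
    )

    # The mobile client uploads directly, e.g. with the requests library:
    #   requests.put(url, data=image_bytes)
    print(url)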
QUESTION 22
DEFINE PERFORMANT ARCHITECTURES
A company is required to use the AWS RDS service to host a MySQL database. This database is going to be
used for production purposes and is expected to experience a high number of read/write activities. Which of
the below underlying EBS Volume types would be ideal for this database?
Explanation :
Answer - B
As per the EBS Volume type comparison in AWS Documentation, the ideal storage option in this scenario is Provisioned IOPS SSD (io1),
since it provides a high number of IOPS for the underlying database.
For more information on EBS Volume types, please refer to the URL below.
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSVolumeTypes.html
QUESTION 23
DEFINE PERFORMANT ARCHITECTURES
You have a set of on-premises virtual machines used to serve a web-based application. These are placed
behind an on-premises load-balanced solution. You need to ensure that an unhealthy virtual machine is taken
out of the rotation. Which of the following options can be used to provide health checking and DNS failover
features for a web application running behind an ELB, to increase redundancy and availability?
Explanation :
Answer - A
Route 53 health checks can be used for any endpoint that can be accessed via the Internet. Hence, this would be an ideal option for
monitoring endpoints.
AWS Documentation mentions the following:
You can configure a health check that monitors an endpoint that you specify either by IP address or by the domain name. At regular
intervals that you specify, Route 53 submits automated requests over the internet to your application, server, or other resources to
verify that it's reachable, available and functional.
For more information on Route 53 Health checks, please refer to the URL below.
https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/dns-failover-simple-configs.html
Note:
As per AWS,
Once enabled, Route 53 automatically configures and manages health checks for individual ELB nodes. Route 53 also takes
advantage of the EC2 instance health checking that ELB performs. By combining the results of health checks of your EC2 instances
and your ELBs, Route 53 DNS Failover is able to evaluate the health of the load balancer and the health of the application running on
the EC2 instances behind it. In other words, if any part of the stack goes down, Route 53 detects the failure and routes traffic away
from the failed endpoint.
AWS documentation states that you can create a Route 53 resource record that points to an address outside AWS, you can set up
health checks for parts of your application running outside AWS, and you can fail over to any endpoint that you choose, regardless of
location.
For example, you may have a legacy application running in a datacenter outside AWS and a backup instance of that application
running within AWS. You can set up health checks of your legacy application running outside AWS, and if the application fails the
health checks, you can fail over automatically to the backup instance in AWS.
Please refer:
https://aws.amazon.com/route53/faqs/
Note:
As per AWS,
Route 53 has health checkers in locations around the world. When you create a health check that monitors an endpoint, health
checkers start to send requests to the endpoint that you specify to determine whether the endpoint is healthy. You can choose which
locations you want Route 53 to use, and you can specify the interval between checks: every 10 seconds or every 30 seconds. Note that
Route 53 health checkers in different data centers don't coordinate with one another, so you'll sometimes see several requests per
second regardless of the interval you chose, followed by a few seconds with no health checks at all.
Each health checker evaluates the health of the endpoint based on two values:
Response time
Whether the endpoint responds to a number of consecutive health checks that you specify (the failure threshold)
Route 53 aggregates the data from the health checkers and determines whether the endpoint is healthy:
If more than 18% of health checkers report that an endpoint is healthy, Route 53 considers it healthy.
If 18% of health checkers or fewer report that an endpoint is healthy, Route 53 considers it unhealthy.
The response time that an individual health checker uses to determine whether an endpoint is healthy depends on the type of health
check:
HTTP and HTTPS health checks, TCP health checks or HTTP and HTTPS health checks with string matching.
Regarding the scenario where more than two servers host the website, AWS documentation states:
When you have more than one resource performing the same function—for example, more than one HTTP server or mail server—you
can configure Amazon Route 53 to check the health of your resources and respond to DNS queries using only the healthy resources.
For example, suppose your website, example.com, is hosted on six servers, two each in three data centers around the world. You can
configure Route 53 to check the health of those servers and to respond to DNS queries for example.com using only the servers that are
currently healthy. The configuration details are provided in the Route 53 documentation.
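For illustration, a health check along these lines could be created with boto3 as sketched below; the domain, path, interval, and failure threshold are hypothetical.

    import uuid
    import boto3

    route53 = boto3.client('route53')

    # HTTPS health check probing /health every 30 seconds;
    # 3 consecutive failures mark the endpoint unhealthy.
    route53.create_health_check(
        CallerReference=str(uuid.uuid4()),  # idempotency token
        HealthCheckConfig={
            'Type': 'HTTPS',
            'FullyQualifiedDomainName': 'www.example.com',  # hypothetical endpoint
            'Port': 443,
            'ResourcePath': '/health',
            'RequestInterval': 30,
            'FailureThreshold': 3
        }
    )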
QUESTION 24
SPECIFY SECURE APPLICATIONS AND ARCHITECTURES
A company has a set of web servers. It is required to ensure that all the logs from these web servers can be
analyzed in real time for any sort of threat detection. Which of the following would assist in this regard?
A. Upload all the logs to the SQS Service and then use EC2 Instances to scan the logs.
B. Upload the logs to Amazon Kinesis and then analyze the logs accordingly.
C. Upload the logs to CloudTrail and then analyze the logs accordingly.
D. Upload the logs to Glacier and then analyze the logs accordingly.
Explanation :
Answer – B
AWS Documentation provides the following information to support this requirement:
Amazon Kinesis makes it easy to collect, process, and analyze real-time, streaming data so you can get timely insights and react
quickly to new information. Amazon Kinesis offers key capabilities to cost-effectively process streaming data at any scale, along with
the flexibility to choose the tools that best suit the requirements of your application. With Amazon Kinesis, you can ingest real-time
data such as video, audio, application logs, website clickstreams, and IoT telemetry data for machine learning, analytics, and other
applications.
For more information on Amazon Kinesis, please refer to the below URL:
https://aws.amazon.com/kinesis/
QUESTION 25
DEFINE OPERATIONALLY-EXCELLENT ARCHITECTURES
Explanation :
Answer - D
AWS Documentation provides the following information to support this concept:
Balancing resources across Availability Zones is a best practice for well-architected applications, as this greatly increases aggregate
system availability. Auto Scaling automatically balances EC2 instances across zones when you configure multiple zones in your Auto
Scaling group settings. Auto Scaling always launches new instances such that they are balanced between zones as evenly as possible
across the entire fleet.
For more information on Managing resources with Auto Scaling, please refer to the URL below.
https://aws.amazon.com/blogs/compute/fleet-management-made-easy-with-auto-scaling/
Your company manages an application that currently allows users to upload images to an S3 bucket. These
images are picked up by EC2 Instances for processing and then placed in another S3 bucket. You need an area
where the metadata for these images can be stored. Which of the following would be an ideal data store for
this?
A. AWS Redshift
B. AWS Glacier
C. AWS DynamoDB
D. AWS SQS
Explanation :
Answer - C
Option A is incorrect because Redshift is normally used for petabyte-scale storage.
Option B is incorrect because Glacier is used for archive storage.
Option D is incorrect because SQS is used for messaging purposes.
AWS DynamoDB is the best, lightweight and durable storage option for metadata.
For more information on DynamoDB, please refer to the URL below.
https://aws.amazon.com/dynamodb/
QUESTION 27
DEFINE OPERATIONALLY-EXCELLENT ARCHITECTURES
An application team needs to quickly provision a development environment consisting of a web and database
layer. Which of the following would be the quickest and most ideal way to get this setup in place?
A. Create Spot Instances and install the Web and database components.
B. Create Reserved Instances and install the Web and database components.
C. Use AWS Lambda to create the web components and AWS RDS for the database layer.
D. Use Elastic Beanstalk to quickly provision the environment.
Explanation :
Answer – D
AWS Documentation mentions the following:
With Elastic Beanstalk, you can quickly deploy and manage applications in the AWS Cloud without worrying about the infrastructure
that runs those applications. AWS Elastic Beanstalk reduces management complexity without restricting choice or control. You simply
upload your application, and Elastic Beanstalk automatically handles the details of capacity provisioning, load balancing, scaling, and
application health monitoring.
For more information on AWS Elastic Beanstalk, please refer to the URL below.
https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/Welcome.html
Option A is incorrect. Amazon EC2 Spot instances are spare compute capacity in the AWS cloud available to you at steep discounts
compared to On-Demand prices.
Option B is incorrect. A Reserved Instance is a reservation of resources and capacity, for either one or three years, for a particular
Availability Zone within a region.
Option C is incorrect. AWS Lambda is a compute service that makes it easy for you to build applications that respond quickly to new
information and not for provisioning a new environment.
A company requires a file system which can be used across a set of instances. Which of the following storage
options would be ideal for this requirement?
A. AWS S3
B. AWS EBS Volumes
C. AWS EFS
D. AWS EBS Snapshots
Explanation :
Answer - C
Amazon EFS provides scalable file storage for use with Amazon EC2. You can create an EFS file system and configure your instances
to mount the file system. You can use an EFS file system as a common data source for workloads and applications running on multiple
instances.
For more information on AWS EFS, please visit the following URL:
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AmazonEFS.html
QUESTION 29
DESIGN COST-OPTIMIZED ARCHITECTURES
A company has an application that stores images and thumbnail images on S3. While the images and
thumbnails need to be available for download immediately, they are not accessed that
frequently. Which is the most cost-efficient storage option to store images that meets these requirements?
Explanation :
Answer – B
Amazon S3 Standard-Infrequent Access is perfect if you want to store data that is not frequently accessed but must be available
immediately when requested. It is more cost-effective than Option D (Amazon S3 Standard). Choosing Amazon Glacier with Expedited
Retrievals would defeat the whole purpose of the requirement, because of its increased cost.
For more information on AWS Storage Classes, please visit the following URL:
https://aws.amazon.com/s3/storage-classes/
QUESTION 30
SPECIFY SECURE APPLICATIONS AND ARCHITECTURES
You have an EC2 Instance placed inside a subnet. You have created the VPC from scratch, and added the EC2
Instance to the subnet. It is required to ensure that this EC2 Instance has complete access to the Internet, since
it will be used by users on the Internet.
Which of the following options would help accomplish this?
A. Launch a NAT Gateway and add routes for 0.0.0.0/0
B. Attach a VPC Endpoint and add routes for 0.0.0.0/0
C. Attach an Internet Gateway and add routes for 0.0.0.0/0
D. Deploy NAT Instances in a public subnet and add routes for 0.0.0.0/0
Explanation :
Answer – C
AWS Documentation mentions the following:
An Internet Gateway is a horizontally scaled, redundant, and highly available VPC component that allows communication between
instances in your VPC and the Internet. It therefore imposes no availability risks or bandwidth constraints on your network traffic.
For more information on the Internet Gateway, please visit the following URL:
https://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_Internet_Gateway.html
QUESTION 31
DEFINE PERFORMANT ARCHITECTURES
You have an application hosted on AWS consisting of EC2 Instances launched via an Auto Scaling Group.
You notice that the EC2 Instances are not scaling out on demand. What checks can be done to ensure that the
scaling occurs as expected?
A. Ensure that the right metrics are being used to trigger the scale out.
B. Ensure that ELB health checks are being used.
C. Ensure that the instances are placed across multiple Availability Zones.
D. Ensure that the instances are placed across multiple regions.
Explanation :
Answer – A
If your scaling events are not based on the right metrics and do not have the right threshold defined, then the scaling will not occur as
you want it to happen.
For more information on Auto Scaling Dynamic Scaling, please visit the following URL:
https://docs.aws.amazon.com/autoscaling/ec2/userguide/as-scale-based-on-demand.html
A company hosts a popular web application that connects to an Amazon RDS MySQL DB instance running in
a private VPC subnet created with default ACL settings. The web servers must be accessible only to
customers on an SSL connection and the database should only be accessible to web servers in a public subnet.
As an architect, which of the following would you not recommend for such an architecture?
Explanation :
Answer – C
The question is describing a scenario where it has been instructed that the database servers should only be accessible to web servers in
the public subnet.
You have been asked which one of the following is not a recommended architecture based on the scenario.
The answer is Option C: "Ensure the web server security group allows MySQL port 3306 inbound traffic from anywhere (0.0.0.0/0)
and apply it to the web servers."
In Option C, we are allowing all incoming traffic from the Internet to the database port, which is not acceptable per the
architecture.
A similar setup is given in AWS Documentation:
1) To ensure that secure traffic can flow into your web server from anywhere, you need to allow inbound traffic on port 443.
2) You need to then ensure that traffic can flow from the web server to the database server via the database security group.
The security group rule tables in AWS Documentation relate to the same requirements as the question.
For more information on this use case scenario, please visit the following URL:
https://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_Scenario2.html
The question asks that the database should only be accessible to the web servers in the public subnet.
In Option D, the database server's security group allows inbound traffic on port 3306 with the web server security group as the
source, which means request traffic from the web servers is allowed to reach the DB server. Since security groups are stateful, the
response is also allowed from the DB server back to the web server, enabling communication between them. Option D is therefore a
valid architecture, but it is wrong in terms of this question, as you have to choose the incorrect option.
Note:
The question asks you to find the option that is not recommended, i.e., incorrect. Option C is not recommended because of its
incorrect inbound rule. Hence, it is the answer.
QUESTION 33
DEFINE PERFORMANT ARCHITECTURES
You have an application hosted on AWS that writes images to an S3 bucket. The concurrent number of users
on the application is expected to reach around 10,000 with approximately 500 reads and writes expected per
second. How should the architect maximize Amazon S3 performance?
Explanation :
Answer – A
If the request rate is high, you can use hash keys or random strings to prefix the object name. In such a case, the partitions used to
store the objects will be better distributed and hence allow for better read/write performance for your objects.
For more information on how to ensure performance in S3, please visit the following URL:
https://docs.aws.amazon.com/AmazonS3/latest/dev/request-rate-perf-considerations.html
The STANDARD_IA storage class is for infrequent data access, so Option C is not a good solution. Versioning does not make any
difference to the performance in this case.
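A sketch of this key-naming scheme is shown below: a short hash prefix derived from the object name spreads keys across partitions instead of concentrating them under one sequential prefix. The helper name is hypothetical.

    import hashlib

    def hashed_key(filename):
        # Prepend a 4-character hash so keys are distributed across
        # S3 partitions rather than sharing one sequential prefix.
        prefix = hashlib.md5(filename.encode()).hexdigest()[:4]
        return f'{prefix}/{filename}'

    # Example: 'photo-0001.jpg' -> something like '3f1a/photo-0001.jpg'
    print(hashed_key('photo-0001.jpg'))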
A company has an entire infrastructure hosted on AWS. It wants to create code templates used to provision the
same set of resources in another region in case of a disaster in the primary region. Which of the following
services can help in this regard?
A. AWS Beanstalk
B. AWS CloudFormation
C. AWS CodeBuild
D. AWS CodeDeploy
Explanation :
Answer – B
AWS Documentation provides the following information to support this requirement:
AWS CloudFormation provisions your resources in a safe, repeatable manner, allowing you to build and rebuild your infrastructure
and applications, without having to perform manual actions or write custom scripts. CloudFormation takes care of determining the
right operations to perform when managing your stack, and rolls back changes automatically if errors are detected.
For more information on AWS CloudFormation, please visit the following URL:
https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/Welcome.html
QUESTION 35
DESIGN COST-OPTIMIZED ARCHITECTURES
A company has a set of EBS Volumes that need to be catered for in case of a disaster. How can you achieve this
effectively using existing AWS services?
A. Create a script to copy the EBS Volume to another Availability Zone.
B. Create a script to copy the EBS Volume to another region.
C. Use EBS Snapshots to create the volumes in another region.
D. Use EBS Snapshots to create the volumes in another Availability Zone.
Explanation :
Answer - C
Options A and B are incorrect, because you can’t directly copy EBS Volumes.
Option D is incorrect, because disaster recovery always looks at ensuring resources are created in another region.
AWS Documentation provides the following information to support this requirement:
A snapshot is constrained to the region where it was created. After you create a snapshot of an EBS volume, you can use it to create
new volumes in the same region. For more information, see Restoring an Amazon EBS Volume from a Snapshot. You can also copy
snapshots across regions, making it possible to use multiple regions for geographical expansion, data center migration, and disaster
recovery.
For more information on EBS Snapshots, please visit the following URL:
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSSnapshots.html
NOTE:
It is not possible to provide each and every step in the options, and in the AWS exam you will also see these kinds of options.
Option C does not describe the whole procedure; it simply conveys the idea that snapshots can be used to create the volumes in
another region. That is why the explanation above is provided, to help understand the concept.
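A minimal boto3 sketch of the cross-region copy is shown below; note that copy_snapshot is called in the destination region. The regions and snapshot ID are hypothetical.

    import boto3

    # The client is created in the DR (destination) region.
    ec2_dr = boto3.client('ec2', region_name='us-west-2')

    copy = ec2_dr.copy_snapshot(
        SourceRegion='us-east-1',                  # hypothetical source region
        SourceSnapshotId='snap-0123456789abcdef0', # hypothetical snapshot
        Description='DR copy of production data volume'
    )
    # The copied snapshot can later be restored into a new volume
    # in the DR region with create_volume(SnapshotId=...).
    print(copy['SnapshotId'])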
You are required to host a subscription service in AWS. Users can subscribe to the same and get notifications
on new updates to this service. Which of the following services can be used to fulfill this requirement?
Explanation :
Answer – C
Use the SNS Service to send notifications.
AWS Documentation mentions the following:
Amazon Simple Notification Service (Amazon SNS) is a web service that coordinates and manages the delivery or sending of
messages to subscribing endpoints or clients.
For more information on AWS SNS, please visit the following URL:
https://docs.aws.amazon.com/sns/latest/dg/welcome.html
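A minimal boto3 sketch of the topic, a subscription, and a notification is shown below; the topic name and email address are hypothetical.

    import boto3

    sns = boto3.client('sns')

    # Topic that subscribers follow for update notifications.
    topic = sns.create_topic(Name='service-updates')  # hypothetical topic name
    topic_arn = topic['TopicArn']

    # A user subscribes (SNS sends a confirmation email first).
    sns.subscribe(TopicArn=topic_arn, Protocol='email', Endpoint='user@example.com')

    # Publishing fans the message out to all confirmed subscribers.
    sns.publish(TopicArn=topic_arn, Subject='New update', Message='Version 2.1 released.')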
Your company has a set of EC2 Instances hosted in AWS. There is a mandate to prepare for disasters and
come up with the necessary disaster recovery procedures. Which of the following would help in mitigating the
effects of a disaster for the EC2 Instances?
Explanation :
Answer – D
You can create an AMI from the EC2 Instances and then copy them to another region. In case of a disaster, an EC2 Instance can be
created from the AMI.
Options A and B are good for fault tolerance, but cannot help completely in disaster recovery for the EC2 Instances.
Option C is incorrect because we cannot determine if CloudFront would be helpful in this scenario or not without knowing what is
hosted on the EC2 Instance.
For disaster recovery, we have to make sure that we can launch instances in another region when required. Hence, Options A, B and C
are not feasible solutions.
For more information on AWS AMIs, please visit the following URL:
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AMIs.html
QUESTION 38
SPECIFY SECURE APPLICATIONS AND ARCHITECTURES
A company currently hosts a Redshift cluster in AWS. For security reasons, it should be ensured that all traffic
from and to the Redshift cluster does not go through the Internet. Which of the following features can be used
to fulfill this requirement in an efficient manner?
A. Enable Amazon Redshift Enhanced VPC Routing.
B. Create a NAT Gateway to route the traffic.
C. Create a NAT Instance to route the traffic.
D. Create a VPN Connection to ensure traffic does not flow through the Internet.
Explanation :
Answer - A
AWS Documentation mentions the following:
When you use Amazon Redshift Enhanced VPC Routing, Amazon Redshift forces all COPY and UNLOAD traffic between your
cluster and your data repositories through your Amazon VPC.
If Enhanced VPC Routing is not enabled, Amazon Redshift routes traffic through the Internet, including traffic to other services within
the AWS network.
For more information on Redshift Enhanced Routing, please visit the following URL:
https://docs.aws.amazon.com/redshift/latest/mgmt/enhanced-vpc-routing.html
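Enabling the feature on an existing cluster is a single API call, sketched below with a hypothetical cluster identifier.

    import boto3

    redshift = boto3.client('redshift')

    # Force COPY/UNLOAD traffic through the VPC rather than the Internet.
    redshift.modify_cluster(
        ClusterIdentifier='my-redshift-cluster',  # hypothetical cluster
        EnhancedVpcRouting=True
    )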
A company has a set of Hyper-V machines and VMware virtual machines. They are now planning on
migrating these instances to the AWS Cloud. Which of the following can be used to move these resources to
the AWS Cloud?
A. DB Migration utility
B. AWS Server Migration Service
C. Use AWS Migration Tools.
D. Use AWS Config Tools.
Explanation :
Answer - B
AWS Server Migration Service (SMS) is an agentless service which makes it easier and faster for you to migrate thousands of on-
premises workloads to AWS. AWS SMS allows you to automate, schedule, and track incremental replications of live server volumes,
making it easier for you to coordinate large-scale server migrations.
https://aws.amazon.com/server-migration-service/
A company has a set of Linux based instances on their On-premises infrastructure. They want to have an
equivalent block storage device on AWS which can be used to store the same datasets as on the Linux based
instances. As an architect, which of the following storage devices would you recommend?
A. AWS EBS
B. AWS S3
C. AWS EFS
D. AWS DynamoDB
Explanation :
Answer – A
AWS Documentation mentions the following on EBS Volumes:
Amazon Elastic Block Store (Amazon EBS) provides block level storage volumes for use with EC2 Instances. EBS Volumes are
highly available and reliable storage volumes that can be attached to any running instance that is in the same Availability Zone. EBS
Volumes that are attached to an EC2 Instance are exposed as storage volumes that persist independently from the life of the instance.
For more information on Amazon EBS, please visit the following URL:
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AmazonEBS.html
QUESTION 41
DESIGN COST-OPTIMIZED ARCHITECTURES
A company with a set of Admin jobs currently written in the C# programming language is moving their
infrastructure to AWS. Which of the following would be an efficient means of hosting the Admin-related jobs
in AWS?
A. Use AWS DynamoDB to store the jobs and then run them on demand.
B. Use AWS Lambda functions with C# for the Admin jobs.
C. Use AWS S3 to store the jobs and then run them on demand.
D. Use AWS Config functions with C# for the Admin jobs.
Explanation :
Answer - B
The best and most efficient option is to host the jobs using AWS Lambda. This service has the facility to have the code run in the C#
programming language.
AWS Documentation mentions the following on AWS Lambda:
AWS Lambda is a compute service that lets you run code without provisioning or managing servers. AWS Lambda executes your code
only when needed and scales automatically, from a few requests per day to thousands per second. You pay only for the compute time
you consume - there is no charge when your code is not running. With AWS Lambda, you can run code for virtually any type of
application or backend service - all with zero administration.
For more information on AWS Lambda, please visit the following URL:
https://docs.aws.amazon.com/lambda/latest/dg/welcome.html
QUESTION 42
DESIGN COST-OPTIMIZED ARCHITECTURES
Your company has a set of resources hosted on the AWS Cloud. As a part of the new governing model, there
is a requirement that all activity on AWS resources should be monitored. What is the most efficient way to
have this implemented?
Explanation :
Answer – D
AWS Documentation mentions the following on AWS CloudTrail:
AWS CloudTrail is a service that enables governance, compliance, operational auditing, and risk auditing of your AWS account. With
CloudTrail, you can log, continuously monitor, and retain account activity related to actions across your AWS infrastructure.
CloudTrail provides event history of your AWS account activity, including actions taken through the AWS Management Console,
AWS SDKs, command line tools, and other AWS services. This event history simplifies security analysis, resource change tracking,
and troubleshooting.
Visibility into your AWS account activity is a key aspect of security and operational best practices. You can use CloudTrail to view,
search, download, archive, analyze, and respond to account activity across your AWS infrastructure. You can identify who or what
took which action, what resources were acted upon, when the event occurred, and other details to help you analyze and respond to
activity in your AWS account.
You can integrate CloudTrail into applications using the API, automate trail creation for your organization, check the status of trails
you create, and control how users view CloudTrail events.
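As an illustration (the event name filter is just an example), recent account activity recorded by CloudTrail can be queried
programmatically:

    import boto3

    cloudtrail = boto3.client("cloudtrail", region_name="us-east-1")

    # Look up recent activity, e.g. all TerminateInstances calls,
    # to see who acted on which resources and when.
    response = cloudtrail.lookup_events(
        LookupAttributes=[
            {"AttributeKey": "EventName", "AttributeValue": "TerminateInstances"}
        ],
        MaxResults=10,
    )

    for event in response["Events"]:
        print(event["EventName"], event.get("Username"), event["EventTime"])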
QUESTION 43
DEFINE PERFORMANT ARCHITECTURES
Below are the requirements for a data store in AWS:
a) Ability to perform SQL queries
b) Integration with existing business intelligence tools
c) High concurrency workload that generally involves reading and writing all columns for a small number of
records at a time
Which of the following would be an ideal data store for the above requirements? Choose 2 answers from the
options below.
A. AWS Redshift
B. Oracle RDS
C. AWS Aurora
D. AWS S3
Explanation :
Answer – B and C
Amazon Redshift is a data warehouse optimized for analytical (OLAP) workloads, not for a high-concurrency workload that reads
and writes all columns for a small number of records at a time; that access pattern is a classic OLTP workload. Relational engines
such as Oracle RDS and Amazon Aurora are designed for OLTP, support SQL queries, and integrate with existing business
intelligence tools. Amazon S3 is object storage, not a queryable relational data store.
https://d0.awsstatic.com/whitepapers/AWS_Cloud_Best_Practices.pdf
QUESTION 44
DESIGN COST-OPTIMIZED ARCHITECTURES
A company currently uses Redshift in AWS. The Redshift cluster must be used in a cost-effective
manner. As an architect, which of the following would you consider to ensure cost-effectiveness?
Explanation :
Answer – B
AWS Documentation mentions the following:
Amazon Redshift provides free storage for snapshots that is equal to the storage capacity of your cluster until you delete the cluster.
After you reach the free snapshot storage limit, you are charged for any additional storage at the normal rate. Because of this, you
should evaluate how many days you need to keep automated snapshots and configure their retention period accordingly, and delete
any manual snapshots that you no longer need.
For more information on working with Redshift Snapshots, please visit the following URL:
https://docs.aws.amazon.com/redshift/latest/mgmt/working-with-snapshots.html
Note:
Redshift pricing is based on the following elements:
Compute node hours
Backup storage
Data transfer – There is no data transfer charge for data transferred to or from Amazon Redshift and Amazon S3 within the same AWS
Region. For all other data transfers into and out of Amazon Redshift, you will be billed at standard AWS data transfer rates.
Data scanned
There is no additional charge for using Enhanced VPC Routing itself. You might, however, incur additional data transfer charges for
certain operations, such as UNLOAD to Amazon S3 in a different region, or COPY from Amazon EMR or from SSH with public IP
addresses. With or without Enhanced VPC Routing, data transfer to a different region incurs a cost.
As for backup storage, increasing your backup retention period or taking additional snapshots increases the backup storage consumed
by your data warehouse. There is no additional charge for backup storage up to 100% of your provisioned storage for an active data
warehouse cluster; any amount of storage exceeding this limit incurs a cost.
Note that Spot Instances are not an option for Redshift.
A company has a set of resources hosted in an AWS VPC. It has acquired another company with its own set
of resources hosted in AWS, and must ensure that resources in the parent company's VPC can
access the resources in the child company's VPC. How can this be accomplished?
Explanation :
Answer - D
AWS Documentation mentions the following about VPC Peering:
A VPC Peering Connection is a networking connection between two VPCs that enables you to route traffic between them privately.
Instances in either VPC can communicate with each other as if they are within the same network. You can create a VPC Peering
Connection between your own VPCs, with a VPC in another AWS account, or with a VPC in a different AWS region.
For more information on VPC Peering, please visit the following URL:
https://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/vpc-peering.html
NAT Instances, NAT Gateways, and VPNs do not provide VPC-to-VPC connectivity.
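A minimal sketch of setting up such a peering with the AWS SDK for Python, assuming hypothetical VPC IDs and account number
(in practice, the accept call is made by the owner of the peer VPC):

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    # Request a peering connection from the parent company's VPC to the
    # child company's VPC.
    peering = ec2.create_vpc_peering_connection(
        VpcId="vpc-1111aaaa",        # parent VPC (hypothetical)
        PeerVpcId="vpc-2222bbbb",    # child VPC (hypothetical)
        PeerOwnerId="123456789012",  # child company's AWS account (hypothetical)
    )

    # The owner of the peer VPC must accept the request.
    ec2.accept_vpc_peering_connection(
        VpcPeeringConnectionId=peering["VpcPeeringConnection"]["VpcPeeringConnectionId"]
    )

Each VPC's route tables must then be updated to send traffic destined for the other VPC's CIDR range over the peering connection.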
Explanation :
Answer – B and C
AWS Documentation mentions the following:
Adding Auto Scaling to your application architecture is one way to maximize the benefits of the AWS Cloud. When you use Auto
Scaling, your applications gain the following benefits:
Better fault tolerance. Auto Scaling can detect when an instance is unhealthy, terminate it, and launch an instance to replace it. You can
also configure Auto Scaling to use multiple Availability Zones. If one Availability Zone becomes unavailable, Auto Scaling can launch
instances in another one to compensate.
Better availability. Auto Scaling can help you ensure that your application always has the right amount of capacity to handle the current
traffic demands.
For more information on the benefits of Auto Scaling, please visit the following URL:
https://docs.aws.amazon.com/autoscaling/ec2/userguide/auto-scaling-benefits.html
QUESTION 47
DEFINE PERFORMANT ARCHITECTURES
A company hosts a large amount of data on its on-premises infrastructure. Running out of storage space, the
company wants a quick-win solution using AWS. Which of the following would allow easy extension of their
data infrastructure to AWS?
Explanation :
Answer - A
Volume Gateways and Cached Volumes can be used to start storing data in S3.
AWS Documentation mentions the following:
You store your data in Amazon Simple Storage Service (Amazon S3) and retain a copy of frequently accessed data subsets locally.
Cached volumes offer a substantial cost savings on primary storage and minimize the need to scale your storage on-premises. You also
retain low-latency access to your frequently accessed data.
For more information on Storage Gateways, please visit the following URL:
https://docs.aws.amazon.com/storagegateway/latest/userguide/WhatIsStorageGateway.html
Note: The question states that they are running out of storage space and need a solution to store data with AWS rather than a
backup. For this purpose, gateway-cached volumes are appropriate: they help the company avoid scaling its on-premises data
center, store data on an AWS storage service, and keep the most recently used files available locally at low latency.
Cached volumes – You store your data in S3 and retain a copy of frequently accessed data subsets locally. Cached volumes offer substantial
cost savings on primary storage and minimize the need to scale your storage on-premises. You also retain low-latency access to your
frequently accessed data.
Stored volumes – If you need low-latency access to your entire data set, first configure your on-premises gateway to store all your data
locally. Then asynchronously back up point-in-time snapshots of this data to Amazon S3. This configuration provides durable and
inexpensive offsite backups that you can recover to your local data center or Amazon EC2. For example, if you need replacement capacity
for disaster recovery, you can recover the backups to Amazon EC2.
As described above, the company wants a quick-win solution to store data with AWS while avoiding scaling of the on-premises
setup, rather than a way to back up the data.
A company has a sales team and each member of this team uploads their sales figures daily. A Solutions
Architect needs a durable storage solution for these documents and also a way to preserve documents from
accidental deletions. What among the following choices would deliver protection against unintended user
actions?
Explanation :
Answer - B
Amazon S3 supports versioning. Versioning is enabled at the bucket level and can be used to recover prior versions
of an object.
For more information on Amazon S3, please visit the following URL:
https://aws.amazon.com/s3/
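A minimal sketch of enabling versioning with the AWS SDK for Python (the bucket name is hypothetical):

    import boto3

    s3 = boto3.client("s3")

    # Enable versioning on the bucket so that overwritten or deleted
    # documents can be recovered.
    s3.put_bucket_versioning(
        Bucket="sales-figures-bucket",  # hypothetical bucket name
        VersioningConfiguration={"Status": "Enabled"},
    )

    # Confirm the bucket's versioning status.
    status = s3.get_bucket_versioning(Bucket="sales-figures-bucket")
    print(status.get("Status"))  # "Enabled"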
An application requires a highly available relational database with an initial storage capacity of 8TB. This
database will grow by 8GB every day. To support the expected traffic, at least eight read replicas will be
required to handle the database reads. Which of the below options meets these requirements?
A. DynamoDB
B. Amazon S3
C. Amazon Aurora
D. Amazon Redshift
Explanation :
Answer – C
AWS Documentation mentions the following:
Aurora Replicas
Aurora Replicas are independent endpoints in an Aurora DB cluster, best used for scaling read operations and increasing availability.
Up to 15 Aurora Replicas can be distributed across the Availability Zones that a DB cluster spans within an AWS Region. The DB
cluster volume is made up of multiple copies of the data for the DB cluster. However, the data in the cluster volume is represented as a
single, logical volume to the primary instance and to Aurora Replicas in the DB cluster.
As a result, all Aurora Replicas return the same data for query results with minimal replica lag—usually much less than 100
milliseconds after the primary instance has written an update. Replica lag varies depending on the rate of database change. That is,
during periods where a large amount of write operations occur for the database, you might see an increase in replica lag.
Aurora Replicas work well for read scaling because they are fully dedicated to read operations on your cluster volume. Write
operations are managed by the primary instance. Because the cluster volume is shared among all DB instances in your DB cluster,
minimal additional work is required to replicate a copy of the data for each Aurora Replica.
To increase availability, you can use Aurora Replicas as failover targets. That is, if the primary instance fails, an Aurora Replica is
promoted to the primary instance. There is a brief interruption during which read and write requests made to the primary instance fail
with an exception, and the Aurora Replicas are rebooted. If your Aurora DB cluster doesn't include any Aurora Replicas, then your
DB cluster will be unavailable for the duration it takes your DB instance to recover from the failure event. However, promoting an
Aurora Replica is much faster than recreating the primary instance. For high-availability scenarios, we recommend that you create one
or more Aurora Replicas. These should be of the same DB instance class as the primary instance and in different Availability Zones
for your Aurora DB cluster. For more information on Aurora Replicas as failover targets, see Fault Tolerance for an Aurora DB
Cluster.
Note
You can't create an encrypted Aurora Replica for an unencrypted Aurora DB cluster. You can't create an unencrypted Aurora Replica
for an encrypted Aurora DB cluster.
For details on how to create an Aurora Replica, see Adding Aurora Replicas to a DB Cluster.
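For illustration, an Aurora Replica is added by creating another DB instance that references the existing cluster; all identifiers
below are hypothetical:

    import boto3

    rds = boto3.client("rds", region_name="us-east-1")

    # Aurora treats any DB instance created with the cluster identifier
    # as an Aurora Replica of that cluster.
    rds.create_db_instance(
        DBInstanceIdentifier="app-db-replica-1",  # hypothetical replica name
        DBClusterIdentifier="app-db-cluster",     # existing cluster (hypothetical)
        DBInstanceClass="db.r5.large",
        Engine="aurora-mysql",
        AvailabilityZone="us-east-1b",            # different AZ for availability
    )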
QUESTION 50
DESIGN COST-OPTIMIZED ARCHITECTURES
A company has an application that delivers objects from S3 to users. Of late, some users spread across the
globe have been complaining of slow response times. Which of the following additional steps would help in
building a cost-effective solution and also help ensure that the users get an optimal response to objects from
S3?
Explanation :
Answer - D
AWS Documentation mentions the following:
If your workload is mainly sending GET requests, in addition to the preceding guidelines, you should consider using Amazon
CloudFront for performance optimization.
By integrating Amazon CloudFront with Amazon S3, you can distribute content to your users with low latency and a high data
transfer rate. You will also send fewer direct requests to Amazon S3, which reduces your costs.
For example, suppose that you have a few objects that are very popular. Amazon CloudFront fetches those objects from Amazon S3
and caches them. Amazon CloudFront can then serve future requests for the objects from its cache, reducing the number of GET
requests it sends to Amazon S3.
For more information on performance considerations in S3, please visit the following URL:
https://docs.aws.amazon.com/AmazonS3/latest/dev/request-rate-perf-considerations.html
Options A and B are incorrect. S3 Cross-Region Replication and Transfer Acceleration incur additional costs.
Option C is incorrect. ELB is used to distribute traffic on to EC2 Instances.
An application needs to have a messaging system in AWS. It is of the utmost importance that the order of
messages is preserved and duplicate messages are not sent. Which of the following services can help fulfill
this requirement?
Explanation :
Answer – A
One can use SQS FIFO queues for this purpose.
AWS Documentation mentions the following on SQS FIFO Queues:
Amazon SQS is a reliable and highly-scalable managed message queue service for storing messages in transit between application
components. FIFO queues complement the existing Amazon SQS standard queues, which offer high throughput, best-effort ordering,
and at-least-once delivery. FIFO queues have essentially the same features as standard queues, but provide the added benefits of
supporting ordering and exactly-once processing. FIFO queues provide additional features that help prevent unintentional duplicates
from being sent by message producers or from being received by message consumers. Additionally, message groups allow multiple
separate ordered message streams within the same queue.
For more information on SQS FIFO Queues, please visit the following URL:
https://aws.amazon.com/about-aws/whats-new/2016/11/amazon-sqs-introduces-fifo-queues-with-exactly-once-processing-and-lower-
prices-for-standard-queues/
Note:
As per AWS, SQS FIFO queues ensure that each message is delivered only once and in sequential order (i.e.,
First-In-First-Out), whereas SNS cannot guarantee that a message is delivered only once.
Using SQS FIFO queues therefore satisfies both requirements stated in the question:
messages will not be duplicated, and the order of messages will be preserved.
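A short sketch, with a hypothetical queue name, of creating a FIFO queue and sending an ordered, de-duplicated message:

    import boto3

    sqs = boto3.client("sqs", region_name="us-east-1")

    # FIFO queue names must end in ".fifo". Content-based deduplication
    # lets SQS drop duplicates based on a hash of the message body.
    queue = sqs.create_queue(
        QueueName="orders.fifo",  # hypothetical queue name
        Attributes={
            "FifoQueue": "true",
            "ContentBasedDeduplication": "true",
        },
    )

    # Messages with the same MessageGroupId are delivered in order.
    sqs.send_message(
        QueueUrl=queue["QueueUrl"],
        MessageBody="order-created",
        MessageGroupId="customer-42",
    )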
QUESTION 52
DEFINE PERFORMANT ARCHITECTURES
A company is planning on building an application using the services available on AWS. This application will
be stateless in nature, and the service must have the ability to scale according to demand. Which of the
following would be an ideal compute service to use in this scenario?
A. AWS DynamoDB
B. AWS Lambda
C. AWS S3
D. AWS SQS
Explanation :
Answer - B
The following content from an AWS Whitepaper supports the usage of AWS Lambda for this requirement:
A stateless application is an application that needs no knowledge of previous interactions and stores no session information. Such an
example could be an application that, given the same input, provides the same response to any end user. A stateless application can
scale horizontally since any request can be serviced by any of the available compute resources (e.g., EC2 instances, AWS Lambda
functions).
For more information on AWS Cloud best practices, please visit the following URL:
https://d1.awsstatic.com/whitepapers/AWS_Cloud_Best_Practices.pdf
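To illustrate the statelessness point (a generic example, not taken from the whitepaper), a Lambda handler whose response depends
only on its input can be serviced by any available execution environment:

    # A stateless handler: given the same input event, it returns the same
    # response, so any concurrent invocation can service any request.
    def handler(event, context):
        name = event.get("name", "world")
        return {"statusCode": 200, "body": f"Hello, {name}!"}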
A company has a set of EC2 Instances hosted on the AWS Cloud. These instances form a web server farm
which services a web application accessed by users on the Internet. Which of the following would help make
this architecture more fault tolerant? Choose 2 answers from the options given below.
Explanation :
Answer – A and C
AWS Documentation mentions the following:
A load balancer distributes incoming application traffic across multiple EC2 Instances in multiple Availability Zones. This increases
the fault tolerance of your applications. Elastic Load Balancing detects unhealthy instances and routes traffic only to healthy instances.
For more information on the AWS Classic Load Balancer, please visit the following URL:
https://docs.aws.amazon.com/elasticloadbalancing/latest/classic/introduction.html
Note:
Auto Scaling will not create an ELB automatically; you need to create one manually in the same region as the Auto Scaling group.
Once you create an ELB and attach it to the Auto Scaling group, it automatically registers the instances in the group and
distributes incoming traffic across the instances.
As per AWS:
You can automatically increase the size of your Auto Scaling group when demand goes up and decrease it when demand goes down.
As the Auto Scaling group adds and removes EC2 instances, you must ensure that the traffic for your application is distributed across
all of your EC2 instances. The Elastic Load Balancing service automatically routes incoming web traffic across such a
dynamically changing number of EC2 instances. Your load balancer acts as a single point of contact for all incoming traffic to the
instances in your Auto Scaling group.
To use a load balancer with your Auto Scaling group, create the load balancer and then attach it to the group.
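As a hedged sketch (both names are hypothetical), an existing Classic Load Balancer can be attached to an Auto Scaling group via
the AWS SDK for Python:

    import boto3

    autoscaling = boto3.client("autoscaling", region_name="us-east-1")

    # Attach a Classic Load Balancer to an existing Auto Scaling group;
    # instances in the group are then registered with the load balancer
    # automatically.
    autoscaling.attach_load_balancers(
        AutoScalingGroupName="web-asg",  # hypothetical group name
        LoadBalancerNames=["web-elb"],   # hypothetical load balancer name
    )

For Application or Network Load Balancers, the analogous call is attach_load_balancer_target_groups.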
QUESTION 54
DESIGN COST-OPTIMIZED ARCHITECTURES
You plan on hosting an application on EC2 Instances which will be used to process logs. The application is
not very critical and can resume operation even after an interruption. Which of the following steps can help
provide a cost-effective solution?
Explanation :
Answer – C
One effective solution would be to use Spot Instances in this scenario.
AWS Documentation mentions the following on Spot Instances:
Spot Instances are a cost-effective choice if you can be flexible about when your applications run and if your applications can be
interrupted. For example, Spot Instances are well-suited for data analysis, batch jobs, background processing, and optional tasks.
For more information on using Spot Instances, please visit the following URL:
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-spot-instances.html
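For illustration only (the AMI ID and instance type are hypothetical), the log-processing instances could be launched as Spot
Instances with the AWS SDK for Python:

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    # Launch the log-processing instance on Spot capacity; it can be
    # interrupted, which this application tolerates.
    ec2.run_instances(
        ImageId="ami-0123456789abcdef0",  # hypothetical AMI ID
        InstanceType="m5.large",
        MinCount=1,
        MaxCount=1,
        InstanceMarketOptions={
            "MarketType": "spot",
            "SpotOptions": {"SpotInstanceType": "one-time"},
        },
    )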
A company stores its log data in an S3 bucket. There is a current need to have search capabilities available for
the data in S3. How can this be achieved in an efficient and ongoing manner? Choose 2 answers from the
options below. Each answer forms a part of the solution.
A. Use an AWS Lambda function which gets triggered whenever data is added to the S3 bucket.
B. Create a Lifecycle Policy for the S3 bucket.
C. Load the data into Amazon Elasticsearch.
D. Load the data into Glacier.
Explanation :
Answer – A and C
Amazon Elasticsearch Service provides full search capabilities and can be used to index and search the log files stored in the S3 bucket.
AWS Documentation mentions the following with regard to the integration of Elasticsearch with S3:
You can integrate your Amazon ES domain with Amazon S3 and AWS Lambda. Any new data sent to an S3 bucket triggers an event
notification to Lambda, which then runs your custom Java or Node.js application code. After your application processes the data, it
streams the data to your domain.
For more information on integration between Elasticsearch and S3, please visit the following URL:
https://docs.aws.amazon.com/elasticsearch-service/latest/developerguide/es-aws-integrations.html
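A hedged sketch of such a Lambda function in Python (the domain endpoint and index are hypothetical; a real deployment would
typically sign requests with the function's IAM credentials and bundle the requests library in the deployment package):

    import boto3
    import requests  # assumed to be bundled with the deployment package

    s3 = boto3.client("s3")
    # Hypothetical Amazon ES domain endpoint and index.
    ES_URL = "https://search-logs-domain.us-east-1.es.amazonaws.com/logs/_doc"

    def handler(event, context):
        # Each record describes an object that was just added to the bucket.
        for record in event["Records"]:
            bucket = record["s3"]["bucket"]["name"]
            key = record["s3"]["object"]["key"]
            body = s3.get_object(Bucket=bucket, Key=key)["Body"].read()

            # Index each line of the log file as a separate document.
            for line in body.decode("utf-8").splitlines():
                requests.post(ES_URL, json={"message": line}, timeout=10)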
QUESTION 56
DEFINE OPERATIONALLY-EXCELLENT ARCHITECTURES
A company plans on deploying a batch processing application in AWS. Which of the following is an ideal
way to host this application? Choose 2 answers from the options below. Each answer forms a part of the
solution.
Explanation :
Answer – B and C
AWS Documentation mentions the following:
Docker containers are particularly suited for batch job workloads. Batch jobs are often short-lived and embarrassingly parallel. You
can package your batch processing application into a Docker image so that you can deploy it anywhere, such as in an Amazon ECS
task.
For more information on the use cases for AWS ECS, please visit the following URL:
https://docs.aws.amazon.com/AmazonECS/latest/developerguide/common_use_cases.html
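As an illustration (the cluster and task definition names are hypothetical and must already exist), a containerized batch job can be
started as an ECS task:

    import boto3

    ecs = boto3.client("ecs", region_name="us-east-1")

    # Kick off one short-lived batch job as an ECS task.
    ecs.run_task(
        cluster="batch-cluster",          # hypothetical cluster name
        taskDefinition="nightly-report:1",  # hypothetical task definition
        count=1,
        launchType="EC2",
    )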
Explanation :
Answer - D
AWS Documentation mentions the following:
You can create an active-passive failover configuration by using failover records. Create a primary and a secondary failover record
that have the same name and type, and associate a health check with each.
The various Route 53 routing policies are as follows:
Simple routing policy – Use for a single resource that performs a given function for your domain, for example, a web server that serves
content for the example.com website.
Failover routing policy – Use when you want to configure active-passive failover.
Geolocation routing policy – Use when you want to route traffic based on the location of your users.
Geoproximity routing policy – Use when you want to route traffic based on the location of your resources and, optionally, shift traffic
from resources in one location to resources in another.
Latency routing policy – Use when you have resources in multiple locations and you want to route traffic to the resource that provides
the best latency.
Multivalue answer routing policy – Use when you want Route 53 to respond to DNS queries with up to eight healthy records selected at
random.
Weighted routing policy – Use to route traffic to multiple resources in proportions that you specify.
For more information on DNS Failover using Route 53, please visit the following URL:
https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/dns-failover-configuring-options.html
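For illustration, the PRIMARY half of an active-passive failover pair could be created as follows; the hosted zone ID, IP address,
and health check ID are hypothetical, and a matching SECONDARY record with the same name and type would point at the
standby site:

    import boto3

    route53 = boto3.client("route53")

    route53.change_resource_record_sets(
        HostedZoneId="Z0000000000000",  # hypothetical hosted zone
        ChangeBatch={
            "Changes": [{
                "Action": "CREATE",
                "ResourceRecordSet": {
                    "Name": "www.example.com",
                    "Type": "A",
                    "SetIdentifier": "primary",
                    "Failover": "PRIMARY",
                    "TTL": 60,
                    "ResourceRecords": [{"Value": "203.0.113.10"}],
                    # Hypothetical health check associated with this record.
                    "HealthCheckId": "11111111-2222-3333-4444-555555555555",
                },
            }]
        },
    )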
QUESTION 58
DESIGN COST-OPTIMIZED ARCHITECTURES
A company needs to provision test environments in a short duration, and also requires the ability to tear them
down easily for cost optimization. How can this be achieved?
Explanation :
Answer - A
The Cost Optimization whitepaper from AWS mentions the following:
"AWS CloudFormation provides templates that you can use to create AWS resources and provision them in an orderly and predictable
fashion. This can be useful for creating short-lived environments, such as test environments."
Also as per AWS cost optimization white paper, "You can leverage the AWS APIs and AWS CloudFormation to automatically
provision and decommission entire environments as you need them. This approach is well suited for development or test environments
that run only in defined business hours or periods of time."
Auto Scaling groups, by contrast, cannot be used to manage entire test environments. With CloudFormation, you simply define the
list of resources required to build a test environment in a template, which makes provisioning straightforward.
For more information on the Whitepaper, please visit the following URL:
https://d1.awsstatic.com/whitepapers/architecture/AWS-Cost-Optimization-Pillar.pdf
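A minimal sketch, assuming a template already stored in S3 (the stack name and template URL are hypothetical), of provisioning
and decommissioning a test environment programmatically:

    import boto3

    cfn = boto3.client("cloudformation", region_name="us-east-1")

    # Spin up a short-lived test environment from a template.
    cfn.create_stack(
        StackName="test-env-1",  # hypothetical stack name
        TemplateURL="https://s3.amazonaws.com/templates/test-env.yaml",  # hypothetical
    )
    cfn.get_waiter("stack_create_complete").wait(StackName="test-env-1")

    # ... run the tests ...

    # Tear the whole environment down again in a single call.
    cfn.delete_stack(StackName="test-env-1")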
A company wants to self-manage a database environment. Which of the following should be adopted to fulfill
this requirement?
Explanation :
Answer – D
To self-manage a database, you should host it on an EC2 Instance. You then have complete control over the underlying
database instance.
For more information on Amazon EC2, please visit the following URL:
https://aws.amazon.com/ec2/
QUESTION 60
DEFINE PERFORMANT ARCHITECTURES
A company is migrating an on-premises 5TB MySQL database to AWS and expects its database size to
increase steadily. Which Amazon RDS engine meets these requirements?
A. MySQL
B. Microsoft SQL Server
C. Oracle
D. Amazon Aurora
Explanation :
Answer – D
AWS Documentation supports the above requirements with regard to AWS Aurora.
Amazon Aurora (Aurora) is a fully managed, MySQL- and PostgreSQL-compatible, relational database engine. It combines the speed
and reliability of high-end commercial databases with the simplicity and cost-effectiveness of open-source databases. It delivers up to
five times the throughput of MySQL and up to three times the throughput of PostgreSQL without requiring changes to most of your
existing applications.
All Aurora Replicas return the same data for query results with minimal replica lag—usually much less than 100 milliseconds after
the primary instance has written an update.
For more information on AWS Aurora, please visit the following URL:
http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Aurora.Overview.html
NOTE:
On a MySQL DB instance, avoid letting tables in your database grow too large: provisioned storage limits restrict the maximum size
of a MySQL table file to 16 TB.
However, based on database usage, your Amazon Aurora storage will automatically grow, from the minimum of 10 GB up to 64 TB,
in 10 GB increments, with no impact on database performance.
A company wants to have a 50 Mbps dedicated connection to its AWS resources. Which of the below services
can help fulfill this requirement?
Explanation :
Answer - C
AWS Direct Connect makes it easy to establish a dedicated network connection from your premises to AWS. Using AWS Direct
Connect, you can establish private connectivity between AWS and your datacenter, office, or colocation environment, which in many
cases can reduce your network costs, increase bandwidth throughput, and provide a more consistent network experience than Internet-
based connections.
For more information on AWS Direct Connect, please visit the below URL:
https://aws.amazon.com/directconnect/
You work for a company that stores records for a minimum of 10 years. Most of these records will never be
accessed but must be made available upon request (within a few hours). What is the most cost-effective
storage option in this scenario? Choose the correct answer from the options below.
Explanation :
Answer – C
Amazon Glacier is a secure, durable, and extremely low-cost cloud storage service for data archiving and long-term backup.
Customers can reliably store large or small amounts of data for as little as $0.004 per gigabyte per month, a significant savings
compared to on-premises solutions. To keep costs low yet suitable for varying retrieval needs, Amazon Glacier provides three options
for access to archives, from a few minutes to several hours.
For more information on Amazon Glacier, please refer to the link below.
https://aws.amazon.com/glacier/
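As a sketch (the vault name and archive ID are hypothetical), a standard-tier retrieval job, which typically completes within a few
hours, can be initiated as follows:

    import boto3

    glacier = boto3.client("glacier", region_name="us-east-1")

    # Standard retrievals typically complete within 3-5 hours, matching
    # the "available within a few hours" requirement.
    glacier.initiate_job(
        vaultName="company-records",            # hypothetical vault name
        jobParameters={
            "Type": "archive-retrieval",
            "ArchiveId": "EXAMPLE-ARCHIVE-ID",  # hypothetical archive ID
            "Tier": "Standard",
        },
    )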
QUESTION 63
DEFINE PERFORMANT ARCHITECTURES
A company is building a Two-Tier web application to serve dynamic transaction-based content. The Data Tier
uses an Online Transactional Processing (OLTP) database. What services should you leverage to enable an
elastic and scalable Web Tier?
Explanation :
Answer – A
The question mentions a scalable Web Tier, not a Database Tier. Options B, C and D can be eliminated since they are database-
related options.
Consider an Elastic Load Balancer distributing traffic to EC2 Instances managed by Auto Scaling: this is an example of an elastic
and scalable Web Tier. By scalable, we mean that the Auto Scaling process is able to increase or decrease the number of EC2
Instances as required.
For more information on the Elastic Load Balancer, please refer to the link below.
https://docs.aws.amazon.com/elasticloadbalancing/latest/classic/introduction.html
QUESTION 64
SPECIFY SECURE APPLICATIONS AND ARCHITECTURES
An instance is launched into a VPC subnet with the network ACL configured to allow all inbound traffic and
deny all outbound traffic. The instance’s security group is configured to allow SSH from any IP address and
deny all outbound traffic. What changes need to be made to allow SSH access to the instance?
Explanation :
Answer – B
For an EC2 Instance to allow SSH, the Security Group must allow inbound SSH traffic (TCP port 22), and the Network ACL must
allow both the inbound SSH traffic and the corresponding outbound response traffic.
The reason the Network ACL must have both an inbound Allow and an outbound Allow is that network ACLs are stateless:
responses to allowed inbound traffic are subject to the rules for outbound traffic (and vice versa). Security Groups, in contrast, are
stateful: if an incoming request is allowed, the outgoing response is automatically allowed.
Options A and D are invalid because Security Groups are stateful, so any traffic allowed by an inbound rule is automatically allowed
back out. Option C is incorrect.
For more information on Network ACLs, please refer to the link below.
https://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_ACLs.html
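A sketch of the required network ACL entries using the AWS SDK for Python (the ACL ID and rule numbers are hypothetical):

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    # Inbound rule: allow SSH (TCP port 22) from anywhere.
    ec2.create_network_acl_entry(
        NetworkAclId="acl-0123456789abcdef0",  # hypothetical ACL ID
        RuleNumber=100,
        Protocol="6",                          # TCP
        RuleAction="allow",
        Egress=False,
        CidrBlock="0.0.0.0/0",
        PortRange={"From": 22, "To": 22},
    )

    # Outbound rule: allow responses on ephemeral ports, since network
    # ACLs are stateless and return traffic is not allowed automatically.
    ec2.create_network_acl_entry(
        NetworkAclId="acl-0123456789abcdef0",
        RuleNumber=100,
        Protocol="6",
        RuleAction="allow",
        Egress=True,
        CidrBlock="0.0.0.0/0",
        PortRange={"From": 1024, "To": 65535},
    )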
QUESTION 65
DEFINE OPERATIONALLY-EXCELLENT ARCHITECTURES
Your company currently has a web distribution hosted using the AWS CloudFront service. The IT Security
department has confirmed that the application using this web distribution now falls under the scope of PCI
compliance. What are the possible ways to meet the requirements? Choose two answers from the choices
below.
Explanation :
Answer – A and C
AWS Documentation mentions the following:
If you run PCI or HIPAA-compliant workloads based on the AWS Shared Responsibility Model, we recommend that you log your
CloudFront usage data for the last 365 days for future auditing purposes. To log usage data, you can do the following:
Enable CloudFront access logs.
Capture requests that are sent to the CloudFront API.
For more information on compliance with CloudFront, please visit the following URL:
https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/compliance.html