AWS Certified Developer Associate (DVA-C01) Practice Test
About this ebook
If you are here to get DVA-C01 certified, then you are in the right place. AWS Certified Developer – Associate is intended for anyone with one or more years of hands-on experience developing and maintaining an AWS-based application. This book contains 6 realistic practice tests with 400 questions and detailed explanations to help you earn the AWS Certified Developer Associate certification on your very first attempt.
iCertify Training
iCertify Training is a New York-based authorized professional certification training provider for PMP®, Agile®, ITIL® and Six Sigma®. More than 325,000 professionals have advanced their careers through our certification programs, and more than 5,000 students are certified each month. We offer classroom, online and webinar training for professionals, businesses and government. Thanks for giving us the opportunity to serve your certification needs. You can reach out to us through our website and email helpdesk. PMI, PMP, CAPM, PMBOK, PM Network and the PMI Registered Education Provider logo are registered marks of the Project Management Institute, Inc.
Chapter 1: Introduction
1.1 About the Author
iCertify Training is a New York-based authorized professional certification training provider for Cloud, PMP®, Agile®, ITIL® and Six Sigma®. More than 325,000 professionals have advanced their careers through our certification programs, and more than 5,000 students are certified each month. We offer classroom, online and webinar training for professionals, businesses and government.
You can reach us at [email protected]
1.2 About the Exam
AWS Certified Developer Associate (DVA-C01)
Length: 65 questions
Time: 130 minutes
Pass mark: 720 out of 1,000 points
There are two types of questions on the examination:
Multiple-choice: Has one correct response and three incorrect responses (distractors)
Multiple-response: Has two or more correct responses out of five or more options
The AWS Certified Developer Associate (DVA-C01) Exam Guide
1.3 Who should take this exam
AWS Certified Developer – Associate is intended for anyone with one or more years of hands-on experience developing and maintaining an AWS-based application. Before you take this exam, we recommend you have:
● In-depth knowledge of at least one high-level programming language
● Understanding of core AWS services, uses of the services, and basic AWS architecture best practices, including the AWS Shared Responsibility Model, application lifecycle management, and the use of containers in the development process
● Proficiency in developing, deploying, and debugging cloud-based applications using AWS and writing code for serverless applications
● Ability to identify key features of AWS services and use the AWS service APIs, AWS CLI, and SDKs to write applications
● Ability to apply a basic understanding of cloud-native applications to write code
● Ability to author, maintain, and debug code modules on AWS
1.4 Target Candidate Description
The target candidate should have 1 or more years of hands-on experience developing and maintaining an AWS-based application.
Recommended general IT knowledge
The target candidate should have the following:
● In-depth knowledge of at least one high-level programming language
● Understanding of application lifecycle management
● The ability to write code for serverless applications
● Understanding of the use of containers in the development process
Recommended AWS knowledge
The target candidate should be able to do the following:
● Use the AWS service APIs, AWS CLI, and software development kits (SDKs) to write applications
● Identify key features of AWS services
● Understand the AWS shared responsibility model
● Use a continuous integration and continuous delivery (CI/CD) pipeline to deploy applications on AWS
● Use and interact with AWS services
● Apply basic understanding of cloud-native applications to write code
● Write code by using AWS security best practices (for example, use IAM roles instead of secret and access keys in the code)
● Author, maintain, and debug code modules on AWS
1.5 Exam Content Outline
The following table lists the main content domains and their weightings. The percentage in each domain represents only scored content.
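Domain 1: Deployment (22%)
Domain 2: Security (26%)
Domain 3: Development with AWS Services (30%)
Domain 4: Refactoring (10%)
Domain 5: Monitoring and Troubleshooting (12%)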
Chapter 2 : Practice Test 1
Question 1:
A Developer has made an update to an application. The application serves users around the world and uses Amazon CloudFront for caching content closer to users. It has been reported that after deploying the application updates, users are not able to see the latest changes.
How can the Developer resolve this issue?
Invalidate all the application objects from the edge caches
Disable the CloudFront distribution and enable it again to update all the edge locations
Disable forwarding of query strings and request headers from the CloudFront distribution configuration
Remove the origin from the CloudFront configuration and add it again
Answer: A.
Explanation
If you need to remove a file from CloudFront edge caches before it expires, you can do one of the following:
Invalidate the file from edge caches. The next time a viewer requests the file, CloudFront returns to the origin to fetch the latest version of the file.
Use file versioning to serve a different version of the file that has a different name. For more information, see Updating Existing Files Using Versioned File Names.
In this case, the best option available is to invalidate all the application objects from the edge caches. This will result in the new objects being cached next time a request is made for them.
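As an illustration, here is a minimal boto3 sketch of issuing such an invalidation; the distribution ID is a hypothetical placeholder:

import time
import boto3

cloudfront = boto3.client("cloudfront")

# Invalidate every object so viewers receive the updated application files.
response = cloudfront.create_invalidation(
    DistributionId="E1EXAMPLE",  # hypothetical distribution ID
    InvalidationBatch={
        "Paths": {"Quantity": 1, "Items": ["/*"]},
        # CallerReference must be unique for each invalidation request.
        "CallerReference": str(time.time()),
    },
)
print(response["Invalidation"]["Status"])  # "InProgress" until complete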
CORRECT: Invalidate all the application objects from the edge caches
is the correct answer.
INCORRECT: Remove the origin from the CloudFront configuration and add it again
is incorrect as this would cause all objects to be removed and then re-cached, which is overkill and will cost more.
INCORRECT: Disable forwarding of query strings and request headers from the CloudFront distribution configuration
is incorrect as this is not a way to invalidate objects in Amazon CloudFront.
INCORRECT: Disable the CloudFront distribution and enable it again to update all the edge locations
is incorrect as this will not cause the objects to expire, they will expire whenever their expiration date occurs and must be invalidated to make this happen sooner.
References:
https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/Invalidation.html
Question 2:
Users of an application using Amazon API Gateway, AWS Lambda and Amazon DynamoDB have reported errors when using the application. Which metrics should a Developer monitor in Amazon CloudWatch to determine the number of client-side and server-side errors?
IntegrationLatency and Latency
4XXError and 5XXError
Errors
CacheHitCount and CacheMissCount
Answer: B.
Explanation
To determine the number of client-side errors captured in a given period the Developer should look at the 4XXError metric. To determine the number of server-side errors captured in a given period the Developer should look at the 5XXError metric.
CORRECT: 4XXError and 5XXError
is the correct answer.
INCORRECT: CacheHitCount and CacheMissCount
is incorrect as these count the number of requests served from the cache and the number of requests served from the backend.
INCORRECT: IntegrationLatency and Latency
is incorrect as these measure the time between when API Gateway relays a request to the backend and when it receives a response from the backend (IntegrationLatency), and the time between when API Gateway receives a request from a client and when it returns a response to the client (Latency).
INCORRECT: Errors
is incorrect as this is not a metric related to Amazon API Gateway.
References:
https://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-metrics-and-dimensions.html
Question 3:
A serverless application uses an AWS Lambda function to process Amazon S3 events. The Lambda function executes 20 times per second and takes 20 seconds to complete each execution. How many concurrent executions will the Lambda function require?
20
400
40
5
Answer: B.
Explanation
Concurrency is the number of requests that your function is serving at any given time. When your function is invoked, Lambda allocates an instance of it to process the event. When the function code finishes running, it can handle another request. If the function is invoked again while a request is still being processed, another instance is allocated, which increases the function's concurrency.
To calculate the concurrency requirements for the Lambda function simply multiply the number of executions per second (20) by the time it takes to complete the execution (20).
Therefore, for this scenario, the calculation is 20 x 20 = 400.
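As a quick sanity check of the arithmetic, a sketch in Python:

# Required concurrency = invocations per second x average duration in seconds.
invocations_per_second = 20
duration_seconds = 20
concurrency = invocations_per_second * duration_seconds
print(concurrency)  # 400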
CORRECT: 400
is the correct answer.
INCORRECT: 5
is incorrect. Please use the formula above to calculate concurrency requirements.
INCORRECT: 40
is incorrect. Please use the formula above to calculate concurrency requirements.
INCORRECT: 20
is incorrect. Please use the formula above to calculate concurrency requirements.
References:
https://docs.aws.amazon.com/lambda/latest/dg/invocation-scaling.html
Question 4:
An application needs to generate SMS text messages and emails for a large number of subscribers. Which AWS service can be used to send these messages to customers?
Amazon SWF
Amazon SNS
Amazon SES
Amazon SQS
Answer: B.
Explanation
Amazon Simple Notification Service (Amazon SNS) is a web service that coordinates and manages the delivery or sending of messages to subscribing endpoints or clients. In Amazon SNS, there are two types of clients—publishers and subscribers—also referred to as producers and consumers.
Publishers communicate asynchronously with subscribers by producing and sending a message to a topic, which is a logical access point and communication channel.
Subscribers (that is, web servers, email addresses, Amazon SQS queues, AWS Lambda functions) consume or receive the message or notification over one of the supported protocols (that is, Amazon SQS, HTTP/S, email, SMS, Lambda) when they are subscribed to the topic.
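For illustration, a minimal boto3 sketch of publishing a message; the topic ARN and phone number are hypothetical placeholders:

import boto3

sns = boto3.client("sns")

# Publish to a topic; all subscribed endpoints (SMS, email, SQS, Lambda)
# receive the message over their respective protocols.
sns.publish(
    TopicArn="arn:aws:sns:us-east-1:123456789012:customer-alerts",
    Message="Your order has shipped.",
    Subject="Order update",  # used for email subscribers
)

# SNS can also send a one-off SMS directly to a phone number.
sns.publish(PhoneNumber="+15555550123", Message="Your order has shipped.")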
CORRECT: Amazon SNS
is the correct answer.
INCORRECT: Amazon SES
is incorrect as this service only sends email, not SMS text messages.
INCORRECT: Amazon SQS
is incorrect as this is a hosted message queue for decoupling application components.
INCORRECT: Amazon SWF
is incorrect as the Simple Workflow Service is used for orchestrating multi-step workflows.
References:
https://docs.aws.amazon.com/sns/latest/dg/welcome.html
Question 5:
A company runs a legacy application that uses an XML-based SOAP interface. The company needs to expose the functionality of the service to external customers and plans to use Amazon API Gateway. How can a Developer configure the integration?
Create a SOAP API using Amazon API Gateway. Pass the incoming JSON to the SOAP interface through a Network Load Balancer.
Create a SOAP API using Amazon API Gateway. Transform the incoming JSON into a valid XML message for the SOAP interface using AWS Lambda.
Create a RESTful API using Amazon API Gateway. Transform the incoming JSON into a valid XML message for the SOAP interface using mapping templates.
Create a RESTful API using Amazon API Gateway. Pass the incoming JSON to the SOAP interface through an Application Load Balancer.
Answer: C.
Explanation
In API Gateway, an API's method request can take a payload in a different format from the corresponding integration request payload, as required in the backend. Similarly, the backend may return an integration response payload different from the method response payload, as expected by the frontend.
API Gateway lets you use mapping templates to map the payload from a method request to the corresponding integration request and from an integration response to the corresponding method response.
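A sketch of attaching such a mapping template to an integration using boto3; the API ID, resource ID, backend URI and template body are hypothetical placeholders:

import boto3

apigateway = boto3.client("apigateway")

# Hypothetical VTL mapping template that wraps the incoming JSON payload
# in a SOAP envelope before it is relayed to the backend.
template = (
    '<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/">'
    "<soapenv:Body><GetOrder><orderId>$input.path('$.orderId')</orderId></GetOrder>"
    "</soapenv:Body></soapenv:Envelope>"
)

apigateway.put_integration(
    restApiId="a1b2c3d4e5",                  # hypothetical API ID
    resourceId="ab12cd",                     # hypothetical resource ID
    httpMethod="POST",
    type="HTTP",                             # integration with the SOAP service's HTTP endpoint
    integrationHttpMethod="POST",
    uri="https://legacy.example.com/soap",     # hypothetical backend endpoint
    requestTemplates={"application/json": template},
)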
CORRECT: Create a RESTful API using Amazon API Gateway. Transform the incoming JSON into a valid XML message for the SOAP interface using mapping templates
is the correct answer.
INCORRECT: Create a RESTful API using Amazon API Gateway. Pass the incoming JSON to the SOAP interface through an Application Load Balancer
is incorrect. Passing the JSON payload through an Application Load Balancer does not transform it into the XML message that the SOAP interface requires.
INCORRECT: Create a SOAP API using Amazon API Gateway. Transform the incoming JSON into a valid XML message for the SOAP interface using AWS Lambda
is incorrect. API Gateway does not support SOAP APIs.
INCORRECT: Create a SOAP API using Amazon API Gateway. Pass the incoming JSON to the SOAP interface through a Network Load Balancer
is incorrect. API Gateway does not support SOAP APIs.
References:
https://docs.aws.amazon.com/apigateway/latest/developerguide/request-response-data-mappings.html
Question 6:
An application is being deployed on an Amazon EC2 instance running Linux. The EC2 instance will need to manage other AWS services. How can the EC2 instance be configured to make API calls to AWS services securely?
Create an AWS IAM Role, attach a policy with the necessary privileges and attach the role to the instance’s instance profile
Run the ‘aws configure’ AWS CLI command and specify the access key ID and secret access key
Store the access key ID and secret access key as encrypted AWS Lambda environment variables and invoke Lambda for each API call
Store a user’s console login credentials in the application code so the application can call AWS STS and gain temporary security credentials
Answer: A.
Explanation
Applications must sign their API requests with AWS credentials. Therefore, if you are an application developer, you need a strategy for managing credentials for your applications that run on EC2 instances. For example, you can securely distribute your AWS credentials to the instances, enabling the applications on those instances to use your credentials to sign requests, while protecting your credentials from other users.
However, it's challenging to securely distribute credentials to each instance, especially those that AWS creates on your behalf, such as Spot Instances or instances in Auto Scaling groups. You must also be able to update the credentials on each instance when you rotate your AWS credentials.
IAM roles are designed so that your applications can securely make API requests from your instances, without requiring you to manage the security credentials that the applications use. Instead of creating and distributing your AWS credentials, you can delegate permission to make API requests using IAM roles as follows:
1. Create an IAM role.
2. Define which accounts or AWS services can assume the role.
3. Define which API actions and resources the application can use after assuming the role.
4. Specify the role when you launch your instance or attach the role to an existing instance.
5. Have the application retrieve a set of temporary credentials and use them.
For example, you can use IAM roles to grant permissions to applications running on your instances that need to use a bucket in Amazon S3. You can specify permissions for IAM roles by creating a policy in JSON format. These are similar to the policies that you create for IAM users. If you change a role, the change is propagated to all instances.
Therefore, the best solution is to create an AWS IAM Role with the necessary privileges (through an IAM policy) and attach the role to the instance’s instance profile.
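To illustrate the effect, a sketch of application code running on the instance; note that no credentials appear in the code (the bucket name is a hypothetical placeholder):

import boto3

# No access keys are configured on the instance or in the code.
# The SDK automatically retrieves temporary credentials from the
# instance metadata service because the IAM role is attached to
# the instance profile.
s3 = boto3.client("s3")
s3.upload_file("report.csv", "my-example-bucket", "reports/report.csv")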
CORRECT: Create an AWS IAM Role, attach a policy with the necessary privileges and attach the role to the instance’s instance profile
is the correct answer.
INCORRECT: Run the ‘aws configure’ AWS CLI command and specify the access key ID and secret access key
is incorrect because the access key ID and secret access key would be stored in plaintext on the instance’s local disk, which is insecure.
INCORRECT: Store a users’ console login credentials in the application code so the application can call AWS STS and gain temporary security credentials
is incorrect. This is a nonsense solution that would not work for multiple reasons. First, console login credentials cannot be used for API access; second, the STS service will not accept console login credentials and return temporary security credentials.
INCORRECT: Store the access key ID and secret access key as encrypted AWS Lambda environment variables and invoke Lambda for each API call
is incorrect. You can encrypt Lambda variables with KMS keys; however, this is not an ideal solution as you will still need to decrypt the keys through the Lambda code and then pass them to the EC2 instance. There could be security risks in this process. This is generally a poor use case for Lambda and IAM Roles are a far superior way of providing the necessary access.
References:
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html
Question 7:
A Developer is creating a web application that will be used by employees working from home. The company uses a SAML directory on-premises for storing user information. The Developer must integrate with the SAML directory and authorize each employee to access only their own data when using the application. Which approach should the Developer take?
Create a unique IAM role for each employee and have each employee assume the role to access the application so they can access their personal data only.
Create the application within an Amazon VPC and use a VPC endpoint with a trust policy to grant access to the employees.
Use Amazon Cognito user pools, federate with the SAML provider, and use user pool groups with an IAM policy.
Use an Amazon Cognito identity pool, federate with the SAML provider, and use a trust policy with an IAM condition key to limit employee access.
Answer: D.
Explanation
Amazon Cognito leverages IAM roles to generate temporary credentials for your application's users. Access to permissions is controlled by a role's trust relationships.
In this example the Developer must limit access to specific identities in the SAML directory. The Developer can create a trust policy with an IAM condition key that limits access to a specific set of app users by checking the value of cognito-identity.amazonaws.com:sub.
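A sketch of what such a role trust policy could look like, created via boto3; the identity pool ID and identity value are hypothetical placeholders:

import json
import boto3

iam = boto3.client("iam")

# Trust policy that only allows a specific federated identity from the
# Cognito identity pool to assume the role. IDs are hypothetical.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Federated": "cognito-identity.amazonaws.com"},
        "Action": "sts:AssumeRoleWithWebIdentity",
        "Condition": {
            "StringEquals": {
                "cognito-identity.amazonaws.com:aud": "us-east-1:12345678-aaaa-bbbb-cccc-example",
                "cognito-identity.amazonaws.com:sub": "us-east-1:abcd1234-example-identity",
            }
        },
    }],
}

iam.create_role(
    RoleName="EmployeeDataAccessRole",
    AssumeRolePolicyDocument=json.dumps(trust_policy),
)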
CORRECT: Use an Amazon Cognito identity pool, federate with the SAML provider, and use a trust policy with an IAM condition key to limit employee access
is the correct answer.
INCORRECT: Use Amazon Cognito user pools, federate with the SAML provider, and use user pool groups with an IAM policy
is incorrect. A user pool can be used to authenticate but the identity pool is used to provide authorized access to AWS services.
INCORRECT: Create the application within an Amazon VPC and use a VPC endpoint with a trust policy to grant access to the employees
is incorrect. You cannot provide access to an on-premises SAML directory using a VPC endpoint.
INCORRECT: Create a unique IAM role for each employee and have each employee assume the role to access the application so they can access their personal data only
is incorrect. This is not an integration into the SAML directory and would be very difficult to manage.
References:
https://docs.aws.amazon.com/cognito/latest/developerguide/role-trust-and-permissions.html
https://docs.aws.amazon.com/cognito/latest/developerguide/iam-roles.html
Question 8:
A Developer is writing a serverless application that will process data uploaded to a file share. The Developer has created an AWS Lambda function and requires the function to be invoked every 15 minutes to process the data. What is an automated and serverless way to trigger the function?
Create an Amazon CloudWatch Events rule that triggers on a regular schedule to invoke the Lambda function
Configure an environment variable named PERIOD for the Lambda function. Set the value at 600
Deploy an Amazon EC2 instance based on Linux, and edit its /etc/crontab file by adding a command to periodically invoke the Lambda function
Create an Amazon SNS topic that has a subscription to the Lambda function with a 600-second timer
Answer: A.
Explanation
Amazon CloudWatch Events help you to respond to state changes in your AWS resources. When your resources change state, they automatically send events into an event stream. You can create rules that match selected events in the stream and route them to your AWS Lambda function to take action.
You can create a Lambda function and direct AWS Lambda to execute it on a regular schedule. You can specify a fixed rate (for example, execute a Lambda function every hour or 15 minutes), or you can specify a Cron expression. Therefore, the Developer should create an Amazon CloudWatch Events rule that triggers on a regular schedule to invoke the Lambda function. This is a serverless and automated solution.
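A minimal boto3 sketch of the schedule; the rule name and function ARN are hypothetical placeholders:

import boto3

events = boto3.client("events")

# Rule that fires every 15 minutes.
events.put_rule(
    Name="process-file-share-every-15-min",
    ScheduleExpression="rate(15 minutes)",
    State="ENABLED",
)

# Point the rule at the Lambda function (hypothetical ARN). The function's
# resource policy must also allow events.amazonaws.com to invoke it.
events.put_targets(
    Rule="process-file-share-every-15-min",
    Targets=[{
        "Id": "process-data-lambda",
        "Arn": "arn:aws:lambda:us-east-1:123456789012:function:process-data",
    }],
)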
CORRECT: Create an Amazon CloudWatch Events rule that triggers on a regular schedule to invoke the Lambda function
is the correct answer.
INCORRECT: Deploy an Amazon EC2 instance based on Linux, and edit its /etc/crontab file by adding a command to periodically invoke the Lambda function
is incorrect as EC2 is not a serverless solution.
INCORRECT: Configure an environment variable named PERIOD for the Lambda function. Set the value at 600
is incorrect as you cannot cause a Lambda function to execute based on a value in an environment variable.
INCORRECT: Create an Amazon SNS topic that has a subscription to the Lambda function with a 600-second timer
is incorrect as SNS does not run on a timer; CloudWatch Events should be used instead.
References:
https://docs.aws.amazon.com/lambda/latest/dg/services-cloudwatchevents.html
Question 9:
A company is deploying an on-premises application server that will connect to several AWS services. What is the BEST way to provide the application server with permissions to authenticate to AWS services?
Create an IAM group with the necessary permissions and add the on-premises application server to the group
Create an IAM role with the necessary permissions and assign it to the application server
Create an IAM user and generate a key pair. Use the key pair in API calls to AWS services
Create an IAM user and generate access keys. Create a credentials file on the application server
Answer: D.
Explanation
Access keys are long-term credentials for an IAM user or the AWS account root user. You can use access keys to sign programmatic requests to the AWS CLI or AWS API (directly or using the AWS SDK).
Access keys are stored in one of the following locations on a client that needs to make authenticated API calls to AWS services:
· Linux: ~/.aws/credentials
· Windows: %UserProfile%\.aws\credentials
In this scenario the application server is running on-premises. Therefore, you cannot assign an IAM role (which would be the preferable solution for Amazon EC2 instances). In this case it is therefore better to use access keys.
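For illustration, a sketch of the credentials file format and of loading a named profile from code; the profile name and key values are hypothetical placeholders:

import boto3

# ~/.aws/credentials (Linux) or %UserProfile%\.aws\credentials (Windows):
#
#   [app-server]
#   aws_access_key_id = AKIAIOSFODNN7EXAMPLE
#   aws_secret_access_key = wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
#
# The SDK resolves the named profile from the credentials file.
session = boto3.Session(profile_name="app-server")
s3 = session.client("s3")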
CORRECT: Create an IAM user and generate access keys. Create a credentials file on the application server
is the correct answer.
INCORRECT: Create an IAM role with the necessary permissions and assign it to the application server
is incorrect. This is an on-premises server so it is not possible to use an IAM role. If it was an EC2 instance, this would be the preferred (best practice) option.
INCORRECT: Create an IAM group with the necessary permissions and add the on-premise application server to the group
is incorrect. You cannot add a server to an IAM group. You put IAM users into groups and assign permissions to them using a policy.
INCORRECT: Create an IAM user and generate a key pair. Use the key pair in API calls to AWS services
is incorrect as key pairs are used for SSH access to Amazon EC2 instances. You cannot use them in API calls to AWS services.
References:
https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_access-keys.html
Question 10:
A Developer has created a task definition that includes the following JSON code:
"placementStrategy": [
    {
        "field": "attribute:ecs.availability-zone",
        "type": "spread"
    },
    {
        "field": "instanceId",
        "type": "spread"
    }
]
What is the effect of this task placement strategy?
It distributes tasks evenly across Availability Zones and then distributes tasks evenly across the instances within each Availability Zone
It distributes tasks evenly across Availability Zones and then distributes tasks randomly across instances within each Availability Zone
It distributes tasks evenly across Availability Zones and then distributes tasks evenly across distinct instances within each Availability Zone
It distributes tasks evenly across Availability Zones and then bin packs tasks based on memory within each Availability Zone
Answer: A.
Explanation
A task placement strategy is an algorithm for selecting instances for task placement or tasks for termination. Task placement strategies can be specified when either running a task or creating a new service.
Amazon ECS supports the following task placement strategies:
binpack
Place tasks based on the least available amount of CPU or memory. This minimizes the number of instances in use.
random
Place tasks randomly.
spread
Place tasks evenly based on the specified value. Accepted values are instanceId (or host, which has the same effect), or any platform or custom attribute that is applied to a container instance, such as attribute:ecs.availability-zone.
You can specify task placement strategies with the following actions: CreateService, UpdateService, and RunTask. You can also use multiple strategies together as in the example JSON code provided with the question.
CORRECT: It distributes tasks evenly across Availability Zones and then distributes tasks evenly across the instances within each Availability Zone
is the correct answer.
INCORRECT: It distributes tasks evenly across Availability Zones and then bin packs tasks based on memory within each Availability Zone
is incorrect as it does not use the binpack strategy.
INCORRECT: It distributes tasks evenly across Availability Zones and then distributes tasks evenly across distinct instances within each Availability Zone
is incorrect as this strategy does not place tasks on distinct instances (that would require the distinctInstance task placement constraint).
INCORRECT: It distributes tasks evenly across Availability Zones and then distributes tasks randomly across instances within each Availability Zone
is incorrect as it does not use the random strategy.
References:
https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task-placement-strategies.html
Question 11:
A Developer is deploying an application in a microservices architecture on Amazon ECS. The Developer needs to choose the best task placement strategy to MINIMIZE the number of instances that are used. Which task placement strategy should be used?
spread
random
weighted
binpack
Answer: D.
Explanation
A task placement strategy is an algorithm for selecting instances for task placement or tasks for termination. Task placement strategies can be specified when either running a task or creating a new service.
Amazon ECS supports the following task placement strategies:
binpack - Place tasks based on the least available amount of CPU or memory. This minimizes the number of instances in use.
random - Place tasks randomly.
spread - Place tasks evenly based on the specified value. Accepted values are instanceId (or host, which has the same effect), or any platform or custom attribute that is applied to a container instance, such as attribute:ecs.availability-zone. Service tasks are spread based on the tasks from that service. Standalone tasks are spread based on the tasks from the same task group.
The binpack task placement strategy is the most suitable for this scenario as it minimizes the number of instances used which is a requirement for this solution.
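A short boto3 sketch of running a task with the binpack strategy; the cluster and task definition names are hypothetical placeholders:

import boto3

ecs = boto3.client("ecs")

# Pack tasks onto as few instances as possible based on remaining memory.
ecs.run_task(
    cluster="microservices-cluster",    # hypothetical cluster
    taskDefinition="orders-service:1",  # hypothetical task definition
    count=2,
    placementStrategy=[{"type": "binpack", "field": "memory"}],
)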
CORRECT: binpack
is the correct answer.
INCORRECT: random
is incorrect as this would assign tasks randomly to EC2 instances which would not result in minimizing the number of instances used.
INCORRECT: spread
is incorrect as this would spread the tasks based on a specified value. This is not used for minimizing the number of instances used.
INCORRECT: weighted
is incorrect as this is not an ECS task placement strategy. Weighted is associated with Amazon Route 53 routing policies.
References:
https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task-placement-strategies.html
Question 12:
An application component writes thousands of item-level changes to a DynamoDB table per day. The developer requires that a record is maintained of the items before they were modified. What MUST the developer do to retain this information? (Select TWO.)
Enable DynamoDB Streams for the table
Create a CloudWatch alarm that sends a notification when an item is modified
Set the StreamViewType to OLD_IMAGE
Set the StreamViewType to NEW_AND_OLD_IMAGES
Use an AWS Lambda function to extract the item records from the notification and write to an S3 bucket
Answer: A,C.
Explanation
DynamoDB Streams captures a time-ordered sequence of item-level modifications in any DynamoDB table and stores this information in a log for up to 24 hours. Applications can access this log and view the data items as they appeared before and after they were modified, in near-real time.
You can also use the CreateTable or UpdateTable API operations to enable or modify a stream. The StreamSpecification parameter determines how the stream is configured:
StreamEnabled — Specifies whether a stream is enabled (true) or disabled (false) for the table.
StreamViewType — Specifies the information that will be written to the stream whenever data in the table is modified:
KEYS_ONLY — Only the key attributes of the modified item.
NEW_IMAGE — The entire item, as it appears after it was modified.
OLD_IMAGE — The entire item, as it appeared before it was modified.
NEW_AND_OLD_IMAGES — Both the new and the old images of the item.
In this scenario, we only need to keep a copy of the items before they are modified. Therefore, the solution is to enable DynamoDB Streams and set the StreamViewType to OLD_IMAGE.
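A minimal boto3 sketch of enabling the stream on an existing table; the table name is a hypothetical placeholder:

import boto3

dynamodb = boto3.client("dynamodb")

# Capture the pre-modification image of every changed item.
dynamodb.update_table(
    TableName="orders",  # hypothetical table
    StreamSpecification={
        "StreamEnabled": True,
        "StreamViewType": "OLD_IMAGE",
    },
)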
CORRECT: Enable DynamoDB Streams for the table
is the correct answer.
CORRECT: Set the StreamViewType to OLD_IMAGE
is the correct answer.
INCORRECT: Create a CloudWatch alarm that sends a notification when an item is modified
is incorrect as DynamoDB streams are the best way to capture a time-ordered sequence of item-level modifications in a DynamoDB table.
INCORRECT: Set the StreamViewType to NEW_AND_OLD_IMAGES
is incorrect as we only need to keep a record of the items before they were modified. This setting would place a record in the stream that includes the item before and after modification.
INCORRECT: Use an AWS Lambda function to extract the item records from the notification and write to an S3 bucket
is incorrect. There is no requirement to write the updates to S3 and if you did want to do this with Lambda you would need to extract the information from the stream, not a notification.
References:
https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Streams.html
Question 13:
A developer is planning to use a Lambda function to process incoming requests from an Application Load Balancer (ALB). How can this be achieved?
Setup an API in front of the ALB using API Gateway and use an integration request to map the request to the Lambda function
Create an Auto Scaling Group (ASG) and register the Lambda function in the launch configuration
Create a target group and register the Lambda function using the AWS CLI
Configure an event-source mapping between the ALB and the Lambda function
Answer: C.
Explanation
You can register your Lambda functions as targets and configure a listener rule to forward requests to the target group for your Lambda function. When the load balancer forwards the request to a target group with a Lambda function as a target, it invokes your Lambda function and passes the content of the request to the Lambda function, in JSON format.
You need to create a target group, which is used in request routing, and register a Lambda function to the target group. If the request content matches a listener rule with an action to forward it to this target group, the load balancer invokes the registered Lambda function.
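As a sketch of those steps with boto3; the names and ARNs are hypothetical placeholders:

import boto3

elbv2 = boto3.client("elbv2")
lambda_client = boto3.client("lambda")

function_arn = "arn:aws:lambda:us-east-1:123456789012:function:process-requests"

# Grant Elastic Load Balancing permission to invoke the function.
lambda_client.add_permission(
    FunctionName="process-requests",
    StatementId="alb-invoke",
    Action="lambda:InvokeFunction",
    Principal="elasticloadbalancing.amazonaws.com",
)

# Create a Lambda target group and register the function as its target.
tg = elbv2.create_target_group(Name="lambda-targets", TargetType="lambda")
tg_arn = tg["TargetGroups"][0]["TargetGroupArn"]
elbv2.register_targets(TargetGroupArn=tg_arn, Targets=[{"Id": function_arn}])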
CORRECT: Create a target group and register the Lambda function using the AWS CLI
is the correct answer.
INCORRECT: Create an Auto Scaling Group (ASG) and register the Lambda function in the launch configuration
is incorrect as launch configurations and ASGs are used for launching Amazon EC2 instances, you cannot use an ASG with a Lambda function.
INCORRECT: Setup an API in front of the ALB using API Gateway and use an integration request to map the request to the Lambda function
is incorrect as it is not a common design pattern to map an API Gateway API to a Lambda function when using an ALB. Though technically possible, typically you would choose to put API Gateway or an ALB in front of your application, not both.
INCORRECT: Configure an event-source mapping between the ALB and the Lambda function
is incorrect as you cannot configure an event-source mapping between an ALB and a Lambda function.
References:
https://docs.aws.amazon.com/elasticloadbalancing/latest/application/lambda-functions.html
Question 14:
A Developer has used a third-party tool to build, bundle, and package a software package on-premises. The software package is stored in a local file system and must be deployed to Amazon EC2 instances. How can the application be deployed onto the EC2 instances?
Use AWS CodeDeploy and point it to the local file system to deploy the software package.
Use AWS CodeBuild to commit the package and automatically deploy the software package.
Upload the bundle to an Amazon S3 bucket and specify the S3 location when doing a deployment using AWS CodeDeploy.
Create a repository using AWS CodeCommit to automatically trigger a deployment to the EC2 instances.
Answer: C.
Explanation
AWS CodeDeploy can deploy software packages using an archive that has been uploaded to an Amazon S3 bucket. The archive file will typically be a .zip file containing the code and files required to deploy the software package.
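A sketch of starting such a deployment with boto3; the application, deployment group, bucket and key are hypothetical placeholders:

import boto3

codedeploy = boto3.client("codedeploy")

# Deploy a revision that was uploaded to S3 as a .zip archive.
codedeploy.create_deployment(
    applicationName="my-app",
    deploymentGroupName="production",
    revision={
        "revisionType": "S3",
        "s3Location": {
            "bucket": "my-deployment-artifacts",
            "key": "releases/my-app-1.0.0.zip",
            "bundleType": "zip",
        },
    },
)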
CORRECT: Upload the bundle to an Amazon S3 bucket and specify the S3 location when doing a deployment using AWS CodeDeploy
is the correct answer.
INCORRECT: Use AWS CodeDeploy and point it to the local file system to deploy the software package
is incorrect. You cannot point CodeDeploy to a local file system running on-premises.
INCORRECT: Create a repository using AWS CodeCommit to automatically trigger a deployment to the EC2 instances
is incorrect. CodeCommit is a source control system. In this case the source code has already been packaged using a third-party tool.
INCORRECT: Use AWS CodeBuild to commit the package and automatically deploy the software package
is incorrect. CodeBuild does not commit packages (CodeCommit does) or deploy the software. It is a build service.
References:
https://docs.aws.amazon.com/codedeploy/latest/userguide/tutorials-windows-upload-application.html
Question 15:
A developer is preparing to deploy a Docker container to Amazon ECS using CodeDeploy. The developer has defined the deployment actions in a file. What should the developer name the file?
buildspec.yml
cron.yml
appspec.json
appspec.yml
Answer: D.
Explanation
The application specification file (AppSpec file) is a YAML-formatted or JSON-formatted file used by CodeDeploy to manage a deployment. The AppSpec file defines the deployment actions you want AWS CodeDeploy to execute.
The name of the AppSpec file for an EC2/On-Premises deployment must be appspec.yml. For an Amazon ECS or AWS Lambda deployment the AppSpec file is also YAML formatted.
Therefore, as this is an ECS deployment, the YAML-formatted AppSpec file is required; of the options given, appspec.yml is the correct choice.
CORRECT: appspec.yml
is the correct answer.
INCORRECT: buildspec.yml
is incorrect as this is the file name you should use for the file that defines the build instructions for AWS CodeBuild.
INCORRECT: cron.yml
is incorrect. This is a file you can use with Elastic Beanstalk if you want to deploy a worker application that processes periodic background tasks.
INCORRECT: "appspec.json is incorrect as the file extension for ECS or Lambda deployments should be .yml not .json.
References:
https://docs.aws.amazon.com/codedeploy/latest/userguide/reference-appspec-file.html
Question 16:
A developer is troubleshooting problems with a Lambda function that is invoked by Amazon SNS and repeatedly fails. How can the developer save discarded events for further processing?
Configure a Dead Letter Queue (DLQ)
Enable Lambda streams
Enable SNS notifications for failed events
Enable CloudWatch Logs for the Lambda function
Answer: A.
Explanation
You can configure a dead letter queue (DLQ) on AWS Lambda to give you more control over message handling for all asynchronous invocations, including those delivered via AWS events (S3, SNS, IoT, etc.).
A dead-letter queue saves discarded events for further processing. A dead-letter queue acts the same as an on-failure destination in that it is used when an event fails all processing attempts or expires without being processed.
However, a dead-letter queue is part of a function's version-specific configuration, so it is locked in when you publish a version. On-failure destinations also support additional targets and include details about the function's response in the invocation record.
You can set up a DLQ by configuring the 'DeadLetterConfig' property when creating or updating your Lambda function. You can provide an SQS queue or an SNS topic as the 'TargetArn' for your DLQ, and AWS Lambda will write the event object invoking the Lambda function to this endpoint after the standard retry policy (2 additional retries on failure) is exhausted.
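A minimal boto3 sketch of attaching a DLQ; the function name and queue ARN are hypothetical placeholders:

import boto3

lambda_client = boto3.client("lambda")

# Send events that fail all processing attempts to an SQS queue
# so they can be inspected and reprocessed later.
lambda_client.update_function_configuration(
    FunctionName="process-sns-events",
    DeadLetterConfig={
        "TargetArn": "arn:aws:sqs:us-east-1:123456789012:failed-events-dlq"
    },
)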
CORRECT: Configure a Dead Letter Queue (DLQ)
is the correct answer.
INCORRECT: Enable CloudWatch Logs for the Lambda function
is incorrect as CloudWatch Logs will record the function’s log output but will not save the discarded event payloads for further processing.
INCORRECT: Enable Lambda streams
is incorrect as this is not something that exists (DynamoDB streams does exist).
INCORRECT: Enable SNS notifications for failed events
is incorrect. Sending notifications from SNS will not include the data required for troubleshooting. A DLQ is the correct solution.
Question 17:
A Developer is designing a fault-tolerant application that will use Amazon EC2 instances and an Elastic Load Balancer. The Developer needs to ensure that if an EC2 instance fails session data is not lost. How can this be achieved?
Use Amazon DynamoDB to perform scalable session handling
Enable Sticky Sessions on the Elastic Load Balancer
Use Amazon SQS to save session data
Use an EC2 Auto Scaling group to automatically launch new instances
Answer: A.
Explanation
For this scenario the key requirement is to ensure the data is not lost. Therefore, the data must be stored in a durable data store outside of the EC2 instances. Amazon DynamoDB is a suitable solution for storing session data. DynamoDB has a session handling capability for multiple languages as in the below example for PHP:
The DynamoDB Session Handler is a custom session handler for PHP that allows developers to use Amazon DynamoDB as a session store. Using DynamoDB for session storage alleviates issues that occur with session handling in a distributed web application by moving sessions off of the local file system and into a shared location. DynamoDB is fast, scalable, easy to setup, and handles replication of your data automatically.
Therefore, the best answer is to use DynamoDB to store the session data.
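As an illustrative sketch of the same idea in Python (the PHP session handler described above packages this pattern); the table and attribute names are hypothetical placeholders:

import time
import boto3

dynamodb = boto3.resource("dynamodb")
sessions = dynamodb.Table("web-sessions")  # hypothetical table with partition key "session_id"

# Persist session state outside the EC2 instance so it survives instance failure.
sessions.put_item(Item={
    "session_id": "abc123",
    "user": "jsmith",
    "cart_items": 3,
    "expires_at": int(time.time()) + 3600,  # optional TTL attribute
})

# Any instance behind the load balancer can read the same session.
item = sessions.get_item(Key={"session_id": "abc123"}).get("Item")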
CORRECT: Use Amazon DynamoDB to perform scalable session handling
is the correct answer.
INCORRECT: Enable Sticky Sessions on the Elastic Load Balancer
is incorrect. Sticky sessions attempt to direct a user who has reconnected to the application to the same EC2 instance that they connected to previously. However, this does not ensure that the session data will be available if that instance fails.
INCORRECT: Use an EC2 Auto Scaling group to automatically launch new instances
is incorrect as this does not provide a solution for storing the session data.
INCORRECT: Use Amazon SQS to save session data
is incorrect as Amazon SQS is not suitable for storing session data.
References:
https://docs.aws.amazon.com/aws-sdk-php/v2/guide/feature-dynamodb-session-handler.html
Question 18:
A Developer has been tasked by a client to create an application. The client has provided the following requirements for the application:
· Performance efficiency of seconds with up to a minute of latency
· Data storage requirements will be up to thousands of terabytes
· Per-message sizes may vary between 100 KB and 100 MB
· Data can be stored as key/value stores supporting eventual consistency
What is the MOST cost-effective AWS service to meet these requirements?
Amazon ElastiCache
Amazon DynamoDB
Amazon S3
Amazon RDS (with a MySQL engine)
Answer: C.
Explanation
The question is looking for a cost-effective solution. Multiple options can support the latency and scalability requirements. Amazon RDS is not a key/value store so that rules that option out. Of the remaining options ElastiCache would be expensive and DynamoDB only supports a maximum item size of 400 KB. Therefore, the best option is Amazon S3 which delivers all of the requirements.
CORRECT: Amazon S3
is the correct answer.
INCORRECT: Amazon DynamoDB
is incorrect as it supports a maximum item size of 400 KB and the messages will be up to 100 MB.
INCORRECT: Amazon RDS (with a MySQL engine)
is incorrect as it is not a key/value store.
INCORRECT: Amazon ElastiCache
is incorrect as it is an in-memory database and would be the most expensive solution.
References:
https://docs.aws.amazon.com/AmazonS3/latest/dev/Welcome.html
Question 19:
A Developer will be launching several Docker containers on a new Amazon ECS cluster using the EC2 Launch Type. The containers will all run a web service on port 80. What is the EASIEST way the Developer can configure the task definition to ensure the web services run correctly and there are no port conflicts on the host instances?
Specify a unique port number for the container port and port 80 for the host port
Specify port 80 for the container port and a unique port number for the host port
Leave both the container port and host port configuration blank
Specify port 80 for the container port and port 0 for the host port
Answer: D.
Explanation
Port mappings allow containers to access ports on the host container instance to send or receive traffic. Port mappings are specified as part of the container definition. The container port is the port number on the container that is bound to the user-specified or automatically assigned host port. The host port is the port number on the container instance to reserve for your container.
As we cannot have multiple services bound to the same host port, we need to ensure that each container port mapping uses a different host port. The easiest way to do this is to set the host port number to 0 and ECS will automatically assign an available port. We also need to assign port 80 to the container port so that the web service is able to run.
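A sketch of the relevant container definition registered via boto3; the family, image and memory values are hypothetical placeholders:

import boto3

ecs = boto3.client("ecs")

ecs.register_task_definition(
    family="web-service",  # hypothetical family name
    containerDefinitions=[{
        "name": "web",
        "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/web:latest",
        "memory": 256,
        "portMappings": [{
            "containerPort": 80,  # the web service listens on port 80 in the container
            "hostPort": 0,        # 0 tells ECS to assign an available host port dynamically
        }],
    }],
)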
CORRECT: Specify port 80 for the container port and port 0 for the host port
is the correct answer.
INCORRECT: Specify port 80 for the container port and a unique port number for the host port
is incorrect as this is more difficult to manage as you have to manually assign the port number.
INCORRECT: Specify a unique port number for the container port and port 80 for the host port
is incorrect as the web service runs on port 80 so the container port must be 80, and reserving port 80 as the host port would cause port conflicts between containers on the same instance.