
AWS Partner:

Migration Essentials
(Technical)

1
Preparing for class

3
Logistics
• Breaks and lunch
• Security
• Cell phones
• Virtual classroom features
• Audio
• Chat
• Raise hand

4
Register for access to guides and lab environments

Sign in to AWS Builder Labs.


• Your instructor will provide an access URL.
• Select AWS Partner to use your Partner login.

|Student notes:
Check your inbox for a welcome email from your instructor. In this email, you will find your unique
student registration URL for the class. Use this URL to create an account or to log in to your existing AWS
Builder Labs account. In AWS Builder Labs, you can access your lab environments, lab guide, and student
guide.
Lab requirements
• Computer running:
  • Windows
  • macOS
  • Linux: Ubuntu, SUSE, or Red Hat
• Recommended web browser:
  • Google Chrome
  • Mozilla Firefox
  • Microsoft Edge
• Reliable internet connection able to browse the internet using HTTPS
• Register for AWS Builder Labs:
  • Turn off ad and script blockers
9
Course overview

11
Course objectives

In this course, you will learn the following:
• Understand various technical topics related to
the migration phase and AWS Well-Architected
Framework.
• Determine cloud readiness and migration
strategies using assessment tools and services
provided by AWS.
• Understand the key tasks involved in planning
and mobilizing lift and shift migrations.
• Describe, at a high level, the AWS services,
resources, and tools necessary for lift and shift
migrations of servers, databases, applications,
and data.

12
Course agenda

Module 0: Course Introduction
Module 1: Assess
Module 2: Mobilize
Module 3: Migrate and Modernize:
Database and Data Migration
Lab 1: Database Migration with AWS Database
Migration Service
Module 4: Migrate and Modernize:
Application Migration
Lab 2: Application Migration with AWS
Application Migration Service
Module 5: Course Summary

13
Module 1
Assess

15

The first phase of migrating to AWS is the Assess phase.


Module objectives and outline

On completion, you will be able to do the following:
• Determine cloud readiness and migration strategies using assessment tools and services
provided by Amazon Web Services (AWS).

Topics:
• Migration phases
• Migration drivers and outcomes
• Cloud Adoption Readiness Tool (CART)
• Migration Readiness Assessment (MRA)
• AWS Migration Evaluator
• Migration Portfolio Assessment (MPA)
16

|Student notes
In this module, you will learn how to do the following:
• Identify services to assess an organization’s cloud readiness.
• Identify an organization’s strengths and weaknesses for cloud readiness.
• Use available Total Cost of Ownership (TCO) tools to make a business case for migration.
Migration phases

17
Phases of a migration

Assess → Mobilize → Migrate and Modernize

AWS Well-Architected Framework

18

You employ the Well-Architected Framework during all three phases of a cloud migration: Assess,
Mobilize, and Migrate and Modernize. You use the Well-Architected Framework to evaluate migration
readiness, plan your cloud infrastructure, move your workloads to the cloud, and implement designs that
scale over time.

In the Assess phase, you do the following:


• Validate the readiness of migration.
• Forecast cost.
• Build a business case to migrate.

In the Mobilize phase, you do the following:


• Address gaps identified in the Assess phase.
• Plan the activities that meet business case objectives.
• Align the people aspect around skills to build and operate in AWS.
• Start identifying testing and cutover timelines.

In the Migrate and Modernize phases, you do the following:


• Identify services that help with server, database, and application migration for the customer, such as
AWS Application Migration Service, AWS Database Migration Service (AWS DMS), and so forth.
• Deploy cutover workstreams and move applications to AWS.
• Evolve the migrated applications toward a modern operating model.
AWS Well-Architected Framework

19

|Student Notes
The AWS Well-Architected Framework describes key concepts, design principles, and architectural best
practices for designing and running workloads in the cloud. It helps you understand the pros and cons of
decisions you make while building systems on AWS. Using the framework helps you learn architectural
best practices for designing and operating secure, reliable, efficient, cost-effective, and sustainable workloads
in the AWS Cloud. It provides a way for you to consistently measure your architectures against best
practices and identify areas for improvement. The process for reviewing an architecture is a constructive
conversation about architectural decisions and is not an audit mechanism.

The framework consists of six foundational pillars:


• Security: The security pillar describes how to take advantage of cloud technologies to protect data,
systems, and assets in a way that can improve your security posture.
• Operational Excellence: With this pillar, you can support development, run workloads effectively, gain
insight into their operations, and continuously improve supporting processes and procedures to deliver
business value.
• Reliability: The reliability pillar encompasses the ability of a workload to perform its intended function
correctly and consistently when it’s expected to. This includes the functionality to operate and test the
workload through its total lifecycle.
• Performance Efficiency: With this pillar, you can use computing resources efficiently to meet system
requirements and maintain that efficiency as demand changes and technologies evolve.
• Cost Optimization: With this pillar, you can run systems to deliver business value at the lowest price
point.
• Sustainability: With this pillar, you continually improve sustainability impact by reducing energy
consumption and increasing efficiency across all components of a workload. You can make efficient
use of provisioned resources and minimize the total resources required.

For more information, explore AWS Well-Architected at https://aws.amazon.com/architecture/well-architected.
AWS Prescriptive Guidance
• Provides strategies, guides, and patterns to help accelerate cloud migration.
• Provides a repository of business perspectives, methodologies, and frameworks
for organizations considering cloud migration.
• Provides guidance for planning and implementing strategies during migration.
• Enumerates best practices and tools for architects, managers, and technical
leads.
• Provides a repository of architectures, tools, and code for implementing
common migration, optimization, and modernization scenarios.

20

AWS Prescriptive Guidance provides a repository of time-tested strategies, guides, and patterns to help
accelerate your cloud migration, modernization, and optimization projects. These resources were
developed by AWS technology experts and the global community of AWS Partners, based on their years
of experience helping customers realize their business objectives on AWS.

For more details, explore AWS Prescriptive Guidance at https://aws.amazon.com/prescriptive-guidance.


AWS Prescriptive Guidance (public)
https://aws.amazon.com/prescriptive-guidance

20 strategies

112 guides

369 patterns

21

The AWS Prescriptive Guidance (APG) Library is a platform for authoring, reviewing, and publishing strategies,
guides, and patterns. These resources are created by AWS technology specialists and APN Partners to help
customers accelerate their AWS Cloud migration, modernization, and optimization projects.
AWS Prescriptive Guidance (partners)
https://apg-library.amazonaws.com

31 strategies

176 guides

832 patterns

22
AWS Migration Acceleration Program (MAP)

The AWS Migration Acceleration Program combines: Methodology, Tools, Partners, Investment, Training, and Services


23

https://aws.amazon.com/migration-acceleration-program/
Flagship program for migration
AWS Migration Competency Partners (re:Invent 2023)
AWS Competency Partners

Partner migration tools found on AWS Marketplace

24

|Student notes
There are many AWS Migration Competency Partners who can help with the three phases of migration.

Partners can seek the Migration competency to validate their ability to help enterprise customers migrate
applications and legacy infrastructure to AWS.

For more information, explore AWS Migration Competency Partners at
https://aws.amazon.com/migration/partner-solutions.
Phases of a migration – Assess

Assess → Mobilize → Migrate and Modernize

AWS Well-Architected Framework

25

|Student notes
This module focuses on items within the Assess phase.
Migration services and tools

Assess:
• Migration assessment tools (CART, MRA)
• AWS Migration Evaluator
• Migration Portfolio Assessment

Mobilize:
• AWS Control Tower
• AWS Application Discovery Service

Migrate and Modernize:
• AWS Application Migration Service (MGN)
• AWS Database Migration Service (DMS)
• AWS services for data migration
• AWS Managed Services (AMS)

Across all phases: AWS Migration Hub

AWS Well-Architected Framework

26

|Student notes
This course will cover the tools you can use throughout the migration process.

The tools that you use during the assess phase help you evaluate customer cloud readiness, analyze total
cost of ownership (TCO), and develop the business case.

Tools that assess customer readiness:


• CART
• MRA

Tools that perform TCO analysis and business case development:


• AWS Migration Evaluator
• MPA
Migration drivers
and outcomes

27
Common migration drivers

• Cost reductions
• Agility and development productivity
• Innovation and digital transformation
• Facility decisions
• Colocation or outsourcing contract changes
• Improved security
• Improved resilience
• Large-scale compute-intensive workloads

28

You can reduce costs by paying for only what you use. Instead of paying for on-premises hardware that
you might not be using at full capacity, you can pay for compute resources only while you are using them.

Having multiple data centers is expensive and adds complexity to your operations. During a merger or
acquisition, you might need to connect separate systems. Using the AWS global network, you can
integrate operations more quickly. Facility and real-estate issues can drive a migration to the cloud. You
might need to move your data center if your rent increases or if your lease expires.

The expiration of colocation or outsourcing contracts might also drive a migration to the cloud.

Moving to the cloud increases agility and development productivity. You can add resources faster to
support innovation. Because there is less infrastructure maintenance, you can focus on improving your
applications.

You can speed up your digital transformation using AWS, which provides tools to conveniently access the
latest technologies and best practices. For example, you can use AWS to develop automations, adopt
containerization, and use machine learning (ML).

Migrating to the AWS Cloud puts your applications and data behind the state-of-the-art physical security
of the AWS data centers. With AWS, you have many tools to manage access to your resources. You can
create granular security groups to protect your applications. You can also secure your environment by
defining network access control lists (network ACLs) and by creating and managing cryptographic keys to
control their use across a wide range of AWS services and in your applications.

Some large-scale compute workloads might need more servers than you can afford to purchase and
maintain. In addition, if these workloads are intermittent or seasonal, additional capacity is dormant
during non-peak times. Moving to the cloud can increase your resilience in a few different ways. Moving
on-premises backups to Amazon Simple Storage Service (Amazon S3) provides 11 9s
(99.999999999 percent) of durability and can be replicated in multiple Availability Zones.
Besides storage, you might want to operate out of the cloud and use your on-premises data
center as failover.
Common migration outcomes

Cost savings:
• Amazon Elastic Compute Cloud (Amazon EC2)
• Auto scaling
• Database options

Staff productivity:
• Continuous integration and delivery
• Microservices
• Automated testing

Operational resilience:
• Avoid single points of failure
• Reduce unplanned outages
• Improve service level agreements

Business agility:
• Deploy faster
• Provision resources
• Scale existing resources up or down to match demand

29

|Student notes
You can reduce costs with AWS in several ways. For example, you can choose from different Amazon
Elastic Compute Cloud (Amazon EC2) instance types based on your compute needs. Auto scaling helps
you scale resources in and out to match demand. There are also many database options to choose from
that provide savings on licensing and servers.

You can increase staff productivity in the cloud by using AWS with modern software engineering
approaches. AWS supports continuous integration and delivery, microservices architecture, automated
testing, and more.

Another benefit of migrating to the cloud is operational resilience. With this you can improve service
level agreements and reduce unplanned outages. AWS compartmentalizes infrastructure and services to
guard against outages. This avoids single points of failure by creating independent, redundant
components.

You also have greater agility in the cloud. You can deploy new features and applications faster and reduce
errors. You can provision resources to support new features and scale existing resources up or down to
match demand.
Cloud Adoption Framework (CAF)

• Business: Cloud investments accelerate your digital transformation ambitions and business outcomes.
• People: Help organizations evolve to a culture of continuous growth and learning, with a focus on organizational structure, leadership, and workforce.
• Governance: Orchestrate your cloud initiatives while maximizing organizational benefits and minimizing transformation-related risks.
• Platform: Build an enterprise-grade, scalable, hybrid cloud platform, modernize existing workloads, and implement new cloud-native solutions.
• Security: Ensure the confidentiality, integrity, and availability of your data and cloud workloads.
• Operations: Your cloud services are delivered at a level that meets the needs of your business.

30

• Business perspective helps ensure that your cloud investments accelerate your digital transformation ambitions and
business outcomes. Common stakeholders include chief executive officer (CEO), chief financial officer (CFO), chief
operations officer (COO), chief information officer (CIO), and chief technology officer (CTO).

• People perspective serves as a bridge between technology and business, accelerating the cloud journey to help
organizations more rapidly evolve to a culture of continuous growth and learning, where change becomes business as
usual, with a focus on culture, organizational structure, leadership, and workforce. Common stakeholders include CIO,
COO, CTO, cloud director, and cross-functional and enterprise-wide leaders.

• Governance perspective helps you orchestrate your cloud initiatives while maximizing organizational benefits and
minimizing transformation-related risks. Common stakeholders include chief transformation officer, CFO, chief data officer
(CDO), and chief risk officer (CRO).

• Platform perspective helps you build an enterprise-grade, scalable, hybrid cloud platform, modernize existing workloads,
and implement new cloud-native solutions. Common stakeholders include CTO, technology leaders, architects, and
engineers.

• Security perspective helps you ensure the confidentiality, integrity, and availability of your data and cloud workloads.
Common stakeholders include chief information security officer (CISO), chief compliance officer (CCO), internal audit
leaders, and security architects and engineers.

• Operations perspective helps ensure that your cloud services are delivered at a level that meets the needs of your
business. Common stakeholders include infrastructure and operations leaders, site reliability engineers, and information
technology service managers.
AWS Cloud
Readiness
Assessment (CRA)

31

This section presents an overview of the AWS Cloud Readiness Assessment (CRA), a migration readiness tool.


CRA Overview
• 47-question assessment for AWS customers, on their own or in partnership with
AWS Partners: https://cloudreadiness.amazonaws.com
• Prescriptive guidance to jump-start the cloud transformation journey
• Asks pertinent questions about the current cloud transformation journey and, in turn,
provides feedback and recommendations in a simple and automated way
• Helps you earn trust with your customers and creates an opportunity to expand the
conversation if they choose to let you partner with them on the next steps in their cloud
transformation journey
32
CRA Output

33
Migration Readiness
Assessment (MRA)

37

The MRA tool performs an in-depth assessment for customers. It is required for the Migration
Acceleration Program, or MAP. Organizations seeking to be eligible for MAP will start with MRA instead
of CART.
Migration Readiness Assessment (MRA) overview

MRA determines a customer’s level of commitment, competence, and capability. It goes beyond what is
assessed in CRA. MRA expands on CART’s features and benefits. Think of MRA as a comprehensive CART.
Use CART when your customer is in the early stages of checking their cloud-adoption readiness. Use MRA
when you want to deepen engagement and create an actionable migration strategy.

MRA identifies areas where a customer already has strong capabilities and where further development is
needed to migrate at scale. Priorities and gaps identified during the assessment help define the scope for
the Mobilize phase (the phase after the Assess phase). MRA typically involves a 1-day workshop conducted
by AWS or a Migration Acceleration Program (MAP) partner, although it can also take multiple days. Customers
that use MAP must also use MRA during the Assess phase.

MRA assists APN Partners, AWS solutions architects, and AWS Professional Services consultants in
aligning on next steps for pre-migration customer engagements.
MRA output (online)

• Heatmap, radar, and scores – to review the current level of maturity across all readiness activities
• Report – Q&A and recommended actions
MRA output (downloadable)
MRA benefits for account team
• Provides deep knowledge of customer
motivation
• Assists Migration Acceleration Program
• Strengthens customer trust

43

|Student notes
MRA assists AWS Partners, AWS solutions architects, and AWS Professional Services consultants in
aligning on next steps for pre-migration customer engagements.

From the account team's perspective, an MRA provides insight into what your customer values. You can use
this information to formulate a strategic approach. This process helps you become your customer's
trusted advisor.

It exposes roadblocks that are stopping your customer from moving forward or slowing their migration
decision. Roadblocks might stem from financial, architectural, or security concerns. The team can learn
about an account’s inner workings, including political situations, which can help them recommend
appropriate actions.

The MRA output presentations and assets can help you illustrate benefits to stakeholders in your
customer’s organization.

If your customer’s migration is driven by the AWS Migration Acceleration Program, the recommended
action plan from an MRA will help demonstrate work to be done in the next phases of migration. This can
help you draw funding and incentives in MAP since the MRA is a requirement for MAP credits.
CART and MRA comparison

Access:
• CART: Available for all AWS customers
• MRA: Available for AWS Professional Services, AWS solutions architects, and AWS Partners

Engagement:
• CART: Customers can self-assess; customers control assessment duration
• MRA: Workshop-style or interview-style engagement; typically a 1-day, face-to-face engagement

Content:
• CART: Uses six perspectives of the AWS CAF; uses 16 questions
• MRA: Uses six perspectives of the AWS CAF; uses over 85 questions

MAP:
• CART: Not required for MAP
• MRA: Used in the MAP Assess phase

44

CART and MRA have a few similarities and differences in access, engagement, and content.

Access
CART is available for all AWS customers. MRA is accessible for AWS Professional Services, AWS solutions
architects, and AWS Partners.

Engagement
Both tools are accessible as web applications. The engagement format for CART is self-assessment with
customers having full control over the length of time it takes to complete the assessment. MRA is a
workshop-style engagement. Typically, MRA interaction with an organization happens in a 1-day, face-to-
face session. This session might last longer than a day, depending on the scope of the assessment.

Customers using CART are typically in the initial stages of assessing whether cloud adds value to their
business. Since MRAs are typically conducted in the context of a MAP, organizations using MRA are
typically further along in their consideration of a cloud migration.

Content
Both CART and MRA are based on the AWS CAF, including the six perspectives:
• Business
• People
• Governance
• Platform
• Security
• Operations

AWS CART uses 16 questions, while MRA uses more than 85 questions.
Customers under the MAP do not need to use CART. MRA is required for customers using the
MAP.
AWS Migration
Evaluator

45

|Student notes
This section presents an overview of AWS Migration Evaluator.
Migration Evaluator overview

• Analyzes your portfolio for current costs and utilization
• Proposes right-sized Amazon EC2 instance types, storage, and licensing services
• Summarizes cost savings realized through each proposed pricing model

For Partners
46

|Student notes
You use Migration Evaluator to analyze on-premises resources. You can use Migration Evaluator during
the Assess phase to develop a business case and do a TCO analysis.

There are two methods for using Migration Evaluator to gather data about your on-premises resources.
You can install the Migration Evaluator Collector or upload files from a configuration management
database (CMDB). Using this information, Migration Evaluator can identify on-premises compute
resources, attached storage, and memory. You can see which Microsoft software licenses can be
migrated, how hardware resources are used, and what it costs to operate. This builds a baseline for TCO
analysis.

Migration Evaluator uses analyzed outcomes to build evidence-based business cases. It ingests millions of
data points from your existing environment to analyze their use and costs. Migration Evaluator models
and profiles compute patterns. It then recommends the best fit and lowest cost placements from
thousands of potential combinations on AWS.

For more information, see Migration Evaluator at https://aws.amazon.com/migration-evaluator.

To request access to Migration Evaluator, see Migration Evaluator at
https://pages.awscloud.com/Migration-Evaluator-request.html.
Migration Evaluator: how it works

Data collection

Analysis

Migration Evaluator team guidance

47

|Student notes
At a high level, the following is how Migration Evaluator works after accessing the service:
• Data collection: Ingests source data from an agentless collector or uses existing flat files.
• Analysis: Checks on-premises server use and estimates costs using industry benchmarks, and
estimates future state cost modeling scenarios at AWS, including best fit Amazon EC2
recommendations. Analyzes Microsoft licenses, including bring your own license, or BYOL, and License
Included, and identifies opportunities for Microsoft SQL core optimization.
• Migration Evaluator team guidance: Migration Evaluator customers have the option to request a
business case assessment for migration planning.

For more information, see Getting Started with Migration Evaluator at
https://aws.amazon.com/migration-evaluator/getting-started.
Migration Evaluator: Collection

[Diagram: In the corporate data center, the Migration Evaluator Collector gathers inventory and utilization
data over SNMP/WMI and Transact-SQL from sources such as VMware vSphere, Windows Server, Microsoft Hyper-V,
Microsoft Active Directory, and MariaDB, secured with a certificate and public key. Encrypted data packages
(or CMDB flat-file exports) are uploaded to a Migration Evaluator–managed S3 bucket in the AWS Cloud, where
they are processed by the Migration Evaluator Assessor; AWS Application Discovery Service data can also feed
the assessment.]

48

Collector:
• Installed on a new dedicated server running English Windows Server 2012 R2 or greater with local
admin rights
• Supports automatic upload of daily inventory and utilization
• Supports data redaction through manual Excel export and upload via Management Console
Abbreviations:
• SNMP - Simple Network Management Protocol is an Internet Standard protocol for collecting and organizing
information about managed devices on IP networks and for modifying that information to change device
behavior. Devices that typically support SNMP include cable modems, routers, switches, servers, workstations,
printers, and more.
• WMI - Windows Management Instrumentation Remote Protocol, which uses the Common Information Model
(CIM), as specified in [DMTF-DSP004], to represent various components of the operating system. CIM is the
conceptual model for storing enterprise management information.
• Transact-SQL is Microsoft's and Sybase's proprietary extension to SQL, used to interact with relational
databases. T-SQL expands on the SQL standard to include procedural programming, local variables, and various
support functions for string processing, date processing, mathematics, and so on.
• CMDB - Configuration Management Database, flat files
Migration Portfolio
Assessment (MPA)

49
MPA overview

A web application to simplify the portfolio assessment process

Benefit:
• Shorten the sales and planning cycle, and provide a consistent and scalable solution

Uses:
• Manage portfolio data
• Define the business case for AWS migration

50

MPA is a process of gaining insights into the customer's cloud journey to date. It helps you understand
their current strengths and weaknesses against AWS CAF perspectives. It also recommends next actions
to be completed in the Mobilize phase (the migration phase after Assess).

MPA provides a directional outcome at the beginning of a migration, without too much detail—like a
conversation starter with your customer. MPA provides information that you can discuss and review with
your customer before migration.

Some example uses include:


• An organization needs to aggregate their portfolio data in one place to perform analysis.
• A customer has a discovery output or CMDB extract. They want to analyze their portfolio to validate
the business case for AWS migration.

MPA features include:


• Guided data import
• Amazon EC2 and Amazon Elastic Block Store (Amazon EBS) recommendations
• On-premises cost estimation and comparison
• Migration pattern recommendation
• Migration project cost estimation
• What-if analysis and comparison
• On-premises data visualization
• Application grouping, prioritization, and migration planning

MPA is accessed through AWS Professional Services and APN Partner services.

https://accelerate.amazonaws.com/
MPA features
• Data inventory
• Data collection
• Portfolio analysis
• Migration planning
• Business case

51

|Student notes
MPA is a web application intended to assist in the process of discovering, identifying, classifying, and
grouping information technology infrastructure and applications. You can use it to estimate the feasibility,
effort, cost, tools, and resources needed for migration to AWS. MPA helps you consolidate your
infrastructure data in one place, build the business case by providing TCO comparison between on
premises and AWS, and more.

Data inventory: You create a portfolio application into which you import application data. You will have
sole access to this data. The data can be imported from files or AWS Application Discovery Service.

Data collection: You can gather additional information.

Business case (TCO): MPA helps with cost savings by estimating on-premises costs and providing a
directional TCO for AWS. The on-premises costs are based on industry benchmarks from the AWS Cloud
Economies team. The AWS TCO supports compute, database, storage, network, admin, and AWS Support
costs. MPA also supports comparing different assessments for you to run what-if analyses. In addition to
cost savings, you might also need an estimate of the cost to migrate to AWS. MPA provides that based on
the migration strategy of the servers and databases in the portfolio, migration project duration, and
migration resource mix.

Portfolio analysis: MPA offers capabilities for analyzing the data in a portfolio, such as visualizing the
imported servers through customizable charts, assigning migration strategy, and prioritizing applications
based on your preferences.

Migration planning: You can create migration waves with configurable capacity and duration. Through
dependency grouping, you manage the relations between applications and resources, creating
dependency groups that are assigned to wave plans.
For more information, see Migration Portfolio Assessment at
https://mpa.accelerate.amazonaws.com.
Migration Evaluator and MPA comparison

Engagement model:
• Migration Evaluator: Open to all AWS customers and AWS Partners
• MPA: Available to AWS Partners and AWS Professional Services; self-service through AMS Accelerate (for AWS Partners)

Data collection:
• Migration Evaluator: Offline manual data transformation and upload by a Migration Evaluator data analyst; agentless collection method
• MPA: Guided self-service process to import discovery results, CMDB data, or manually gathered data

Rightsizing with licensing analysis:
• Migration Evaluator: Rightsizing includes operating system (OS) licensing analysis support; CPU and memory utilization considering age of processor
• MPA: Rightsizing recommendations do not include OS licensing analysis support; CPU and memory utilization considering age of processor

Network and labor cost analysis:
• Migration Evaluator: Not supported
• MPA: On-premises and AWS estimate analysis for shared storage, network, labor costs, and support plan

52

You have the option to use Migration Evaluator or MPA in the Assess phase. You can choose between the
two services based on engagement model, data collection, rightsizing with licensing analysis, and
network and labor cost analysis.

For engagement model, MPA is recommended if a self-service method is preferred and is available for
AWS Partners and AWS Professional Services. Migration Evaluator is available for all AWS customers.

For data collection, in scenarios where existing data is unavailable, Migration Evaluator is recommended,
where the agentless collection feature is used to collect data. Both tools support use of existing data, like
a CMDB.

Migration Evaluator supports rightsizing with licensing analysis. For example, it can model BYOL Microsoft
SQL licenses versus buying new. Using existing Software Assurance and Microsoft Developer Network
(MSDN) entitlements is also supported. Migration Evaluator is recommended for portfolios sensitive to
licensing costs and customer needs to factor in Microsoft licensing.

The MPA tool supports network and labor cost analysis. This includes on premises and AWS estimate
analysis for shared storage, network, labor costs, and support plan.

Visit the following links for more information:


• AWS Prescriptive Guidance at https://docs.aws.amazon.com/prescriptive-guidance/latest/migration-tools/aws-services.html#mpa
• Migration Evaluator at https://aws.amazon.com/migration-evaluator
• Accelerate your AWS journey with AWS Tooling at https://accelerate.amazonaws.com
• AWS Optimization and Licensing Assessment at https://aws.amazon.com/windows/optimization-and-licensing-assessment
Knowledge check

53
Knowledge check 1 – question

Which tools can you use to calculate total cost of ownership? (Select TWO.)

A. Cloud Adoption Readiness Tool (CART)
B. AWS Migration Hub
C. Migration Portfolio Assessment (MPA)
D. AWS Migration Evaluator
E. AWS Schema Conversion Tool (AWS SCT)

54
Knowledge check 1 – answer

The correct responses are C and D.

Which tools can you use to calculate total cost of ownership? (Select TWO.)

A. Cloud Adoption Readiness Tool (CART)
B. AWS Migration Hub
C. Migration Portfolio Assessment (MPA)
D. AWS Migration Evaluator
E. AWS Schema Conversion Tool (AWS SCT)

55
Knowledge check 2 – question

Which tool or service can you use to identify an organization's strengths and weaknesses for their cloud migration readiness?

A. The AWS Cloud Adoption Readiness Tool (CART)
B. AWS Total Cost of Ownership (TCO) Calculator
C. AWS Global Accelerator
D. Amazon Detective

56
Knowledge check 2 – answer

The correct response is A.

Which tool or service can you use to identify an organization's strengths and weaknesses for their cloud migration readiness?

A. The AWS Cloud Adoption Readiness Tool (CART)
B. AWS Total Cost of Ownership (TCO) Calculator
C. AWS Global Accelerator
D. Amazon Detective

57
Questions?

Corrections, feedback, or other questions?

Contact us at https://support.aws.amazon.com/#/contacts/aws-training.
All trademarks are the property of their owners.

58
Module 2
Mobilize

60

In the Mobilize phase, you drive migration planning, address gaps uncovered during the Assess phase,
and develop cloud skills. At this point, migration hasn’t started yet. You are strategizing tasks that will
lead to a successful migration.
Module objectives & outline

On completion, you will be able to:
• Understand the key tasks involved in planning and mobilizing migrations.

Topics:
• Landing Zones
• AWS Application Discovery Service
• Migration Strategies
• AWS Migration Hub

61

In this module, you will learn how to do the following:

• Identify how to set up an AWS multi-account baseline using best practices


• Describe how to gather information about your application portfolio data and apply the information to
guided migration strategies
• Discuss how migration strategies affect architectural decisions
• Determine which migration strategy is appropriate for a given scenario
• Identify the right migration strategy for your application
Phases of a migration – Mobilize

Assess → Mobilize → Migrate and Modernize

AWS Well-Architected Framework

62

In the Mobilize phase, you do the following:


• Address gaps identified in the Assess phase
• Plan the activities that meet business case objectives
• Align the people aspect around skills to build and operate in AWS
• Start identifying testing and cutover timeline
Migration services and tools – Mobilize phase

Assess:
• Migration assessment tools (CART, MRA)
• AWS Migration Evaluator
• Migration Portfolio Assessment

Mobilize:
• AWS Control Tower
• AWS Application Discovery Service

Migrate and Modernize:
• AWS Application Migration Service (MGN)
• AWS Database Migration Service (DMS)
• AWS services for data migration
• AWS Managed Services (AMS)

Across all phases: AWS Migration Hub

AWS Well-Architected Framework

63

|Student notes
This module discusses tools used in the mobilize phase:
• AWS Control Tower
• Application Discovery Service

It also discusses AWS Migration Hub, which you can use during all three phases of a migration.
Mobilize phase

• Detailed portfolio discovery
• Detailed business case
• Migration governance
• Application migration
• Landing zone
• Operations
• Security, risk, and compliance
• People: skills, culture, change, and leadership

64

Detailed portfolio discovery: A critical aspect of your migration strategy is the collection of the
application portfolio data and rationalization of this data against the seven R’s of migration: rehost,
replatform, refactor/re-architect, repurchase, relocate, retire, and retain.

Detailed business case: A detailed, multi-year business migration case that includes current on-premises
costs, new AWS costs, and migration costs to align stakeholders and executives.

Migration governance: Includes managing migration scope, schedule, resource plan, issues and risks, and
communication to all stakeholders.

Application Migration: The application migration workstream integrates outputs from other streams
with the migration of production applications to the AWS Cloud. This workstream guides your resources
and leads you through application migration challenges, best practices, agile frameworks, and tools and
processes.

Landing zone: A landing zone is a well-architected, multi-account AWS environment that is a starting
point from which you can deploy workloads and applications. It provides a baseline to get started with
multi-account architecture, identity and access management, governance, data security, network design,
and logging. You can create your own customized landing zone or use AWS Control Tower to build one for
you.

Operations: This workstream is to review your current operational model and develop an operations
integration approach to support future-state operating models.

Security, risk, and compliance: This workstream defines a structured approach to help you build
confidence in AWS. It also enables foundational security, risk, and compliance capabilities that can
accelerate your readiness and planning for a migration project.
People, skills, culture, change, and leadership: Migration to the cloud will affect the
organizational culture, structure, and communication patterns. You must have a plan to
mobilize resources and lead them through the transformation.
Landing Zones

65
Landing zone
• Multi-account environment
• Accounts aligned with policies and roles
• Network structure
• Identity management or single sign-on
• Centralized logging

66

In the landing zone workstream, you set up a landing zone, which is the initial AWS baseline environment
into which you will be migrating. Your landing zone consists of initial configurations for account structure,
network structure, security, logging, and monitoring.

With AWS you use a multi-account strategy for security and resource isolation. Users created within each
account can access only the policies and permissions made available to that account by a single top-level
management account. You create accounts to manage foundational capabilities, such as network
management, code hosting, security, monitoring, and logging. Other accounts are limited in scope based
on their function, such as development users, sandbox users, and deployment accounts.

You create these accounts in line with security best practices defined by your security teams with
controls in place to make sure that the users created in these accounts do not violate security policies.

You configure your landing zone to set up your baseline network infrastructure. You define your virtual
private cloud, multi-Availability Zone and multi-Region configuration, and integration with your on-
premises environment.

With your landing zone, you also configure identity management or single sign-on to allow your users to
federate into your AWS environment.
Creating a Landing Zone

• Custom solution using AWS Organizations
• AWS Control Tower

67

To create your landing zone, you have two main options:

You can choose to build your own customized landing zone solution with AWS Organizations. In this case,
you manually create the baseline environment and set up identity and access management, governance,
data security, network design, and logging. Take this approach if you want to build all your environmental
components from scratch or if you can meet your requirements with only a custom solution. You must
have enough expertise in AWS to manage, upgrade, maintain, and operate the solution once it’s
deployed.

AWS Control Tower is a managed AWS service that lets you automatically set up a landing zone. It is
based on best-practices blueprints and enables governance by providing a choice of pre-configured
controls that implement rules for security, compliance, and operations.
Using AWS Organizations

Policies applied at the root organizational unit (OU) level can be assigned to an OU or an account:
• Service control policies
• Tagging policies
• Backup policies
• Artificial intelligence (AI) services opt-out policies

[Diagram: An AWS Organizations hierarchy with a management (billing) account under the root organizational unit (OU), nested OUs containing AWS accounts, and policies attached at the OU and account levels.]

68

|Student notes
You can use AWS Organizations to create, configure, and manage a multi-account structure. With
Organizations, you can create your own multi-account structure manually, but as you will see later in this
module, AWS Control Tower also uses Organizations in its automated processes.

The account that configures Organizations becomes the management account. This management
account creates member accounts. You group these member accounts into organizational units (OUs) by
use case or workstream. With Organizations you can apply backup policies and tagging policies. You can
also assign permissions boundaries called service control policies (SCPs). SCPs determine which
permissions can be granted to users within an account or OU. You can apply SCPs to individual accounts
and OUs, which will apply to all accounts within that OU.

Organizations aggregates usage and billing information into the top-level management account. This
simplifies the billing process and provides visibility into the cost of each business unit. This aggregated
usage across your accounts can qualify your organization for better volume pricing options.
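The following is a minimal sketch of how a small piece of this multi-account structure could be automated with the AWS SDK for Python (boto3). It assumes Organizations is already enabled with all features and that the caller is the management account; the OU name, policy name, and denied action are illustrative placeholders rather than values from this course.

import json
import boto3

org = boto3.client("organizations")

# Find the root of the organization; OUs and accounts hang off this root.
root_id = org.list_roots()["Roots"][0]["Id"]

# Create an example OU for sandbox accounts (name is illustrative).
sandbox_ou = org.create_organizational_unit(
    ParentId=root_id, Name="Sandbox"
)["OrganizationalUnit"]

# Example SCP that prevents member accounts from leaving the organization.
scp_document = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Deny",
            "Action": "organizations:LeaveOrganization",
            "Resource": "*",
        }
    ],
}

policy = org.create_policy(
    Name="DenyLeaveOrganization",  # illustrative policy name
    Description="Prevent member accounts from leaving the organization",
    Type="SERVICE_CONTROL_POLICY",
    Content=json.dumps(scp_document),
)["Policy"]["PolicySummary"]

# Attach the SCP to the OU; it then applies to every account in that OU.
org.attach_policy(PolicyId=policy["Id"], TargetId=sandbox_ou["Id"])
print("Attached", policy["Name"], "to OU", sandbox_ou["Name"])

Because an SCP attached to an OU cascades to every account inside it, this is one way landing zone guardrails can be layered on top of the account structure.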
AWS Control Tower

• Automated multi-account structure setup
• Controls for ongoing governance
• Automated account provisioning workflows
• Account and policy management dashboard

69

AWS Control Tower is a managed service that provides an orchestration layer that combines and
integrates the capabilities of several other AWS services, including AWS Organizations, AWS IAM Identity
Center, and AWS Service Catalog. AWS Control Tower enables you to define the administrative
and governance structure of your organization on the cloud infrastructure.

AWS Control Tower has the following features:


• Landing zone: This landing zone is a well-architected, multi-account AWS environment that's based on
security and compliance best practices. It is the enterprise-wide container that holds your OUs,
accounts, users, and other resources that you want to manage. Building a landing zone according to
your organization’s requirement is the first step in the migration process.
• Controls: A control is a high-level rule that provides ongoing governance for your overall AWS
environment. You use controls to prevent actions that would violate a policy and to detect
noncompliant resource configurations in your accounts (see the sketch after this list).
• Account Factory: An Account Factory is a configurable account template that helps to standardize the
provisioning of new accounts with pre-approved account configurations.
• Dashboard: The dashboard gives you a single location to see provisioned accounts across your
enterprise, see the controls enabled to enforce policies, and detect non-conformant configurations.
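As a hedged illustration of the Controls feature, the sketch below uses the AWS Control Tower EnableControl API through boto3 to turn a control on for a specific OU. The control ARN, OU ARN, and Region are placeholders; look up the real identifiers for your Region, chosen control, and organization before running anything like this.

import boto3

ct = boto3.client("controltower", region_name="us-east-1")  # example Region

# Placeholder ARNs: substitute a real control identifier and the ARN of a
# registered OU from your own landing zone.
control_arn = "arn:aws:controltower:us-east-1::control/EXAMPLE_CONTROL_ID"
target_ou_arn = "arn:aws:organizations::111122223333:ou/o-exampleorgid/ou-examplerootid-exampleouid"

# Ask AWS Control Tower to enable the control on the target OU.
operation = ct.enable_control(
    controlIdentifier=control_arn,
    targetIdentifier=target_ou_arn,
)

# Enabling a control is asynchronous; check the operation status.
status = ct.get_control_operation(
    operationIdentifier=operation["operationIdentifier"]
)["controlOperation"]["status"]
print("Control enablement status:", status)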
AWS Application
Discovery Service

70

To help plan a migration, you must identify applications in the portfolio, the resources that these
applications use, and their dependencies.

This section presents the AWS Application Discovery Service which helps you plan application migration
projects by automatically identifying applications that are running in your data centers.
Application Discovery Service overview

Discover Usage Dependencies

71

Planning a data center migration can involve thousands of workloads that are often interdependent.
Discovering applications and mapping their dependencies are important early first steps in the migration
process. This can be challenging to perform at scale without automated tools.

The Application Discovery Service helps you plan application migration projects by automatically
identifying applications that are running in their data centers. Migration Hub uses this data to track the
status of each application migration. The collected data can be exported to other cloud migration analysis
tools or Microsoft Excel for analysis. The service also identifies associated application dependencies and
their performance profiles.

The Application Discovery Service automatically collects configuration and usage data from servers,
storage, and networking equipment. It then develops a list of applications, how they perform, and how
they are interdependent. This information is retained in encrypted format in an Application Discovery
Service database.

To learn more, explore AWS Application Discovery Service at https://aws.amazon.com/application-discovery.

Application Discovery Service offers two ways to perform discovery and collect data about your on-
premises servers.

Agentless discovery can be performed by deploying the Application Discovery Service Agentless Collector
(Agentless Collector) Open Virtualization Archive (OVA) file through your VMware vCenter.

Agent-based discovery can be performed by deploying the AWS Application Discovery Agent (Discovery
Agent) on each of your virtual machines (VMs) and physical servers.
Agentless Application Discovery Service process

[Diagram: In the corporate data center, the Application Discovery Service Agentless Collector, deployed in VMware vCenter, discovers Linux and Windows virtual machines and sends encrypted data to AWS Application Discovery Service in the AWS Cloud. From there, the data can be analyzed with Amazon Athena, Amazon QuickSight, or third-party visualization tools, or exported as CSV.]

72

You perform Agentless discovery by deploying the AWS Application Discovery Service Agentless Collector
(Agentless Collector) OVA file through your VMware vCenter. After the Agentless Collector is configured,
it identifies Linux and Windows VMs and hosts associated with vCenter.

The Agentless Collector collects the following static configuration data:


• Server hostnames
• IP addresses
• MAC addresses
• Disk resource allocations

Additionally, it collects the usage data for each VM and computes average and peak usage for metrics
such as CPU, RAM, and disk I/O.

All of this data is encrypted before being sent to Application Discovery service. You can then analyze this
data using Amazon Athena and Amazon QuickSight, and some third-party visualization tools. You can also
export this data in CSV format.
Agent-based Application Discovery Service process

[Diagram: In the corporate data center, AWS Application Discovery Agents installed on Windows and Linux virtual machines and physical servers send encrypted data to AWS Application Discovery Service in the AWS Cloud. The collected data can be analyzed with Amazon Athena, Amazon QuickSight, or third-party visualization tools, or exported as CSV.]
73

|Student Notes
You perform Agent-based discovery by deploying the AWS Application Discovery Agent on each of your
VMs and physical servers targeted for discovery and migration. It collects static configuration data,
detailed time-series system-performance information, inbound and outbound network connections, and
processes that are running. The agent installer is available for Windows and Linux operating systems, and
you can deploy them on physical on-premises servers, Amazon EC2 instances, and virtual machines.

The Discovery Agent runs in your local environment and requires root privileges. When you start the
Discovery Agent, it registers with the Application Discovery Service endpoint and pings the service at 15-
minute intervals for configuration information. When you send a command that tells an agent to start
data collection, it starts collecting data for the host or VM where it resides. Collection includes system
specifications, time series utilization or performance data, network connections, and process data. You
can use this information to map your IT assets and their network dependencies. All of these data points
can help you determine the cost of running these servers in AWS and also plan for migration.

Data is transmitted securely by the Discovery Agents to Application Discovery Service using Transport
Layer Security (TLS) encryption. Agents are configured to upgrade automatically when new versions
become available. You can change this configuration setting if desired.

The Application Discovery Service retains the information it collects by encrypting the data in an AWS
Application Discovery Service data store. Customers can export this data and use it to plan their
migration to AWS. They can also use it for data exploration in Amazon Athena, Amazon QuickSight, or
third-party tools, some of which are integrated in the service. You generate reports according to the data
insights you need during the migration process.
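The boto3 sketch below shows one way the agent workflow described above might be driven programmatically: it lists the Discovery Agents that have registered with Application Discovery Service and requests data collection from the healthy ones. It assumes agents are already installed and registered and that the call is made in your Migration Hub home Region; the Region shown is only an example.

import boto3

# Application Discovery Service API; call from your Migration Hub home Region.
ads = boto3.client("discovery", region_name="us-west-2")  # example Region

# List the Discovery Agents that have registered with the service.
agents = ads.describe_agents().get("agentsInfo", [])
for agent in agents:
    print(agent["agentId"], agent.get("health"), agent.get("hostName"))

# Start collection on agents that report as healthy.
healthy_ids = [a["agentId"] for a in agents if a.get("health") == "HEALTHY"]
if healthy_ids:
    response = ads.start_data_collection_by_agent_ids(agentIds=healthy_ids)
    for status in response["agentsConfigurationStatus"]:
        print(status["agentId"], "collection requested:", status["operationSucceeded"])

You can later pass the same agent IDs to stop_data_collection_by_agent_ids to pause collection once enough data has been gathered.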
Collector and agents

Supported servers (Discovery Collector / Discovery Agent):
• VMware virtual machine: Yes / Yes
• Physical server: No / Yes

Collection granularity (Discovery Collector / Discovery Agent):
• Per server: No / Yes
• Per vCenter: Yes / No

Collected data (Discovery Collector / Discovery Agent):
• Static configuration data: Yes / Yes
• VM usage metrics: Yes / No
• Time series performance information, network inbound/outbound connections, and running processes: No / Yes (export only)

74

|Student notes
For additional comparison information, explore “Compare Agentless Collector and Discovery Agent” in
the AWS Application Discovery Service User Guide at
https://docs.aws.amazon.com/application-discovery/latest/userguide/what-is-appdiscovery.html#compare-tools.
Application Discovery Service benefits

• Reliable discovery for migration planning
• Integrated with AWS Migration Hub
• Data encryption to protect data

75

Application Discovery Service offers three main benefits.

Reliable discovery for migration planning: Application Discovery Service collects server specification
information, performance data, and details of running processes and network connections. You use this
data to perform a detailed cost estimate in advance of migrating to AWS or to group servers into
applications for planning purposes.

Integration with Migration Hub: Application Discovery Service is integrated with Migration Hub, which
simplifies your migration tracking. After performing discovery and grouping your servers as applications,
you can use Migration Hub to track the status of migrations across your application portfolio.

Data encryption to protect data: Application Discovery Service provides protection for the collected data
by encrypting it both in transit to AWS and at rest in the Application Discovery Service data store.

Migration Hub integrates with Application Discovery Service. Collected data can be stored and tracked in
a single location. Additionally, customers using the Migration Acceleration Program (MAP) need to also
use Migration Hub to receive funding.
Application Discovery Service dependency

AWS Application Discovery Service → AWS Migration Hub

76

AWS Migration Hub is a service that helps streamline migrations to AWS by integrating discovery and
migration tools. It allows application grouping and tracking during migrations. You can use AWS Migration
Hub during all phases of migrations.

The Application Discovery Service is integrated with the Migration Hub service, and it relies on Migration
Hub to store discovered data.

To qualify for MAP funding you must use AWS Migration Hub.
Migration Strategies

77

In this section, you will learn about some common migration strategies you can use and how AWS
Migration Hub can be useful in implementing these migration strategies.
Cloud migration strategies

[Diagram: After you discover, analyze, and plan, you determine a migration path for each workload: Relocate; Rehost ("lift and shift"); Replatform ("lift and reshape"); Repurchase ("drop and shop"); Refactor; Retire (decommission); or Retain. Migrated workloads then move through validate, cutover, and operate stages.]
78

|Student notes
Based on the current state of a given resource and the tools and opportunities available in the cloud, you
decide which strategy to use when migrating. It is important to note that most migration projects employ
multiple strategies, and there are different tools available for each strategy. The migration strategy
influences the time it takes to migrate and the grouping of the application for the migration process.

The following are the seven most common migration strategies:


Relocate: The new “R” for accelerated migrations (can move hundreds of applications in days) – you
quickly relocate applications to AWS based on VMware and container technologies with minimal effort
and complexity.
Rehost: The simplest approach is rehost (also known as lift and shift), which often represents half or more of a
customer's environment and can typically be migrated using tools that automate the process. Many
migrations begin with a desire to rearchitect most applications as part of the migration. However,
to accelerate migration and capture business benefits sooner, it is often beneficial to use the rehost
strategy first, then rearchitect workloads once they are in the cloud.
Replatform: Some applications require modifications to address specific challenges or provide key
benefits, such as reduced licensing costs.
Repurchase: Migrations present opportunities to change to a different licensing model. This includes
adopting SaaS and moving toward solutions like Workday, Drupal, and Salesforce.
Refactor: In some cases, you may want to completely re-imagine your application architecture.
Migrations can present opportunities to refactor your applications to use container-based architectures,
serverless, and managed noSQL services.
Retire: You use this strategy for workloads that are no longer useful and can be removed. You never know
what you're going to find until you look. It is common to find that 10-20% of an enterprise IT portfolio no
longer serves a purpose and can be retired.
Retain: Some workloads have requirements that mean you retain them on premises and do not include them in
your migration to AWS.
Each strategy involves changing your application and its underlying resources in different
ways. For example, rehosting an application requires minimal changes to the application, but
refactoring an application can involve a complete restructuring of the software architecture.
Comparing cloud migration strategies

[Chart: The seven strategies plotted by effort (cost and time) against opportunity to maximize, increasing from retain and retire, through rehost, relocate, replatform, and repurchase, to refactor.]

88

This graph compares the seven strategies with their effort, in cost and time, and the opportunity to
maximize.

Here, retain has zero effort because you aren’t changing anything. There is also no room to maximize.

Retire has a little bit of cost and effort, but you have no opportunity to maximize because you are only
decommissioning.

Rehost requires a little more effort, but you have some opportunity to maximize.

Relocate is more effort than rehost. However, you have more opportunities to maximize.

Repurchase costs about the same as relocate, but there is less opportunity to maximize.

Refactor requires the most effort, but it provides the most opportunity to maximize.
AWS Migration Hub

89

In this section, you will learn about how to use AWS Migration Hub as a single place to analyze your
discovery data, plan migrations, and track the status of each application migration.
AWS Migration Hub overview
• Discover and assess your application
portfolio.
• Build a migration plan.
• Provide strategy recommendations for
migrations.
• Build migration workflows.
• Enable incremental app refactoring.
• Track the status of migrations.

90

With Migration Hub, you can do the following:

Before migrating, you discover information about your on-premises server and application resources. You
use Migration Hub to gather detailed server information and group the discovered servers into
applications. You can use this data to help you build a business case for migrating or build a migration
plan.

Migration Hub helps you build a migration plan by organizing data about your servers, including their
respective roles, dependencies, and functionalities. This will help you prioritize the order for migrating
the application servers.

Migration Hub Strategy Recommendations helps you plan migration and modernization initiatives. It
offers migration and modernization strategy recommendations for viable transformation paths for your
applications.

After you import the server information, you can plan the migration of servers as you perceive the
priority order. You can also connect migration tools, such as AWS Application Migration Service and AWS
Database Migration Service (AWS DMS).

You can use Migration Hub to implement the refactor strategy of migration. AWS Migration Hub Refactor
Spaces is the starting point for incremental application refactoring to microservices in AWS. You can use
Refactor Spaces to help reduce risk when evolving applications into microservices or extending existing
applications with new features written in microservices.

With a migration underway, you can use Migration Hub to track the progress, status, and details for each
server grouped to the application. The chosen migration tool communicates its status to Migration Hub
at key points during the migration.
Discovering application portfolio
You get the data about your servers and applications into the AWS Migration Hub console by using the following discovery tools:
• Migration Hub import
• Migration Evaluator Collector
• Agentless Collector
• AWS Application Discovery Agent

In addition to portfolio discovery, you can use Migration Hub to generate Amazon EC2 recommendations.
91

Migration Hub import: With Migration Hub import, you can import information about your on-premises
servers and applications into Migration Hub, including server specifications and utilization data. You can
also use this data to track the status of application migrations. You can import data from any source as
long as the data is populated using the Migration Hub CSV import template. The following are some
common data sources that you can use with Migration Hub Import:
• Validated CMDB
• Output of Inventory Management system
• Output from utilities like RVTools

For more information, explore “Migration Hub Import” in the Application Discovery Service User Guide at
https://docs.aws.amazon.com/application-discovery/latest/userguide/discovery-import.html.
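
If you automate the import, the Application Discovery Service API exposes it as an import task. The boto3 sketch below assumes a completed CSV template has already been uploaded to Amazon S3; the task name and S3 location are hypothetical placeholders.

# Sketch only: start a Migration Hub import task from a CSV already in Amazon S3.
# The bucket, key, and task name below are hypothetical placeholders.
import boto3

discovery = boto3.client("discovery")  # Application Discovery Service API

response = discovery.start_import_task(
    name="datacenter-inventory-2024-q1",                            # hypothetical import name
    importUrl="s3://my-import-bucket/import-template-filled.csv",   # hypothetical S3 location
)

task = response["task"]
print(task["importTaskId"], task["status"])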

Migration Evaluator Collector: Migration Evaluator is a migration assessment service that helps you
create a directional business case for AWS cloud planning and migration. Migration Evaluator Collector is
an on-premises agentless collector that gathers portfolio data. Migration Evaluator can send this data to
Amazon S3 or to Migration Hub through Application Discovery service.

Agentless Collector: The Agentless Collector is a VMware appliance that can collect information about
VMware virtual machines (VMs) for Application Discovery Service. You install the Agentless Collector as a
VM in your VMware vCenter Server environment using an Open Virtualization Archive (OVA) file. Using
the Agentless Collector minimizes the time required for initial on-premises infrastructure assessment.

AWS Application Discovery Agent: The Discovery Agent is AWS software that you install on your on-
premises servers and VMs to capture system configuration, system performance, running processes, and
details of the network connections between systems. Agents support most Linux and Windows operating
systems, and you can deploy them on physical on-premises servers, Amazon EC2 instances, and virtual
machines.
Amazon EC2 instance recommendations provide you with the ability to estimate the cost of
running your existing servers in AWS. Migration Hub uses the compiled server data to
recommend the least expensive Amazon EC2 instance type that can handle the existing
performance workload.
Migration Hub Orchestrator

92

Migration Hub Orchestrator is a process automation tool that simplifies the migration of your on-premises servers and
applications to AWS.

Orchestrator provides predefined templates that offer automation capabilities and facilitate the
migration of your on-premises servers and applications to AWS. A template consists of one or more step
groups that arrange specific migration steps. These templates synchronize multiple tasks into a workflow.
You can customize these templates and add additional steps to meet your workflow needs.

With Migration Hub Orchestrator, you can reduce the migration costs and time by removing many of the
manual tasks involved in migrating large-scale enterprise applications, managing dependencies between
different tools, and providing visibility into the migration progress.

AWS Migration Hub Orchestrator is now generally available, and you can use it in all AWS Regions where
AWS Migration Hub is available. There is no additional cost for using Migration Hub Orchestrator, and you
only pay for the AWS resources that you provision for the migration.
Migration Hub Strategy Recommendations

• Helps you plan migration and modernization initiatives.
• Analyzes server inventory and runtime environment and, optionally, performs source code and database analysis.
• Recommends migration and modernization strategies for rehosting, replatforming, and refactoring.

93

Strategy Recommendations helps you plan migration and modernization initiatives by offering migration
and modernization strategy recommendations for viable transformation paths for your applications.

Strategy Recommendations might recommend straightforward options, such as rehosting on Amazon EC2
using AWS Application Migration Service. More optimized recommendations might include replatforming
to containers using AWS App2Container or refactoring to open-source technologies, such as Microsoft
.NET Core and PostgreSQL.

Strategy Recommendations recommends migration and modernization strategies for rehosting, replatforming, and refactoring with associated deployment destinations, tools, and programs.
Migration Hub Refactor Spaces

AWS Migration Hub Refactor Spaces is the starting point for incremental
application refactoring to microservices in AWS.

It simplifies application refactoring by:
• Reducing the time to set up a refactor environment
• Reducing the complexity of refactoring monoliths
• Simplifying management of existing apps and microservices
• Helping dev teams achieve and accelerate tech and deployment independence

94

Refactor Spaces simplifies application refactoring by:


• Reducing the time to set up a refactor environment
• Reducing the complexity of refactoring monoliths by iteratively extracting capabilities as new
microservices and re-routing traffic from old to new.
• Simplifying management of existing apps and microservices as a single application with flexible routing
control, isolation, and centralized management.
• Helping dev teams achieve and accelerate tech and deployment independence by simplifying
development, management, and operations while apps are changing.
Knowledge Check

95
Knowledge check 1 - Question
Which statements are correct about the AWS Application Discovery Service? (Select TWO)

A. The Application Discovery Service helps identify interdependencies between servers.
B. The Application Discovery Service provides three methods for discovery: agent-based method, agentless method, and AWS Snowball method.
C. The Application Discovery Service secures data in transit but not at rest.
D. To discover an on-premises environment, the Application Discovery Service requires the AWS Migration Hub service to be set up.
E. Customers use the Agentless Discovery Collector appliance when discovering in a non-VMware environment.

96
Knowledge check 1 – Answer
Which statements are correct about the AWS Application Discovery Service? (Select TWO)

The correct responses are A and D.

A. The Application Discovery Service helps identify interdependencies between servers.
B. The Application Discovery Service provides three methods for discovery: agent-based method, agentless method, and AWS Snowball method.
C. The Application Discovery Service secures data in transit but not at rest.
D. To discover an on-premises environment, the Application Discovery Service requires the AWS Migration Hub service to be set up.
E. Customers use the Agentless Collector appliance when discovering in a non-VMware environment.

97
Knowledge check 2 - Question
Which statements are correct about the seven common strategies customers apply when migrating to AWS? (Select TWO)

A. The refactor strategy can be referred to as “lift-and-shift”.
B. The rehost strategy can be referred to as “lift-and-shift”.
C. In cases that require decommissioning applications or stopping legacy databases, customers should apply the retire strategy.
D. All applications should use the same strategy during migrations.
E. Only AWS solutions architects can recommend migration strategies to customers.

98
Knowledge check 2 – Answer
Which statements are correct about the seven common strategies customers apply when migrating to AWS? (Select TWO)

The correct responses are B and C.

A. The refactor strategy can be referred to as “lift-and-shift”.
B. The rehost strategy can be referred to as “lift-and-shift”.
C. In cases that require decommissioning applications or stopping legacy databases, customers should apply the retire strategy.
D. All applications should use the same strategy during migrations.
E. Only AWS solutions architects can recommend migration strategies to customers.

99
Questions?

Corrections, feedback, or other questions?


Contact us at https://support.aws.amazon.com/#/contacts/aws-training.
All trademarks are the property of their owners.

100
Module 3
Migrate and Modernize:
Database and Data
Migration

Lab 1
101
Module On completion, you will be able to do the
objectives and following:
outline Describe, at a high level, the Amazon Web
Services (AWS) services, resources, and tools
necessary for migrations of data and databases.

Topics:
• AWS Database Migration Service (AWS DMS)
• Data migration

102

|Student notes
After completing this module, you should be able to do the following:
• Describe how to use AWS DMS when moving to new platforms or software versions.
• Explain how to use AWS Schema Conversion Tool (AWS SCT) to present gaps and effort needed before
a heterogeneous database migration.
• Define how to automate database migration.
• Identify the right data transfer service to use to migrate on-premises storage to AWS.
Phases of a migration

Migrate
Assess Mobilize and
Modernize

AWS Well-Architected Framework

103

|Student notes
In the Migrate and Modernize phase, you do the following:
• Identify services that assist with server, database, and application migration for the customer, such as
AWS Application Migration Service, AWS DMS, and so forth.
• Deploy cutover work streams and move applications to AWS.
• Evolve the migrated applications toward a modern operating mode.
Migration services and tools – Migrate and modernize

Assess:
• Migration Assessment Tools (CART, MRA)
• AWS Migration Evaluator
• Migration Portfolio Assessment

Mobilize:
• AWS Control Tower
• AWS Application Discovery Service
• AWS Migration Hub

Migrate and Modernize:
• AWS Application Migration Service (MGN)
• AWS Database Migration Service (DMS)
• AWS services for data migration
• AWS Managed Services (AMS)

AWS Well-Architected Framework

104

|Student notes
This module discusses tools used in the migrate and modernize phase:
• AWS Database Migration Service (AWS DMS)
• AWS services for data migration
AWS Database
Migration Service
(AWS DMS)

105

|Student notes
AWS DMS is a managed migration and replication service that helps move your database and analytics
workloads to AWS. It supports migration between over 20 database and analytics engines, supports
homogeneous and heterogeneous migrations, and gives you the option to adopt managed database
services.

Samsung migration case study:


There was a very large Oracle to Aurora migration without disruption. Roughly 400 million of Samsung’s
1.1 billion users are active on the platform, which sees about 80,000 requests per second.

For more information, see Samsung Migrates 1.1 Billion Users Across Three Continents from Oracle to
Amazon Aurora with AWS Database Migration Service at https://aws.amazon.com/solutions/case-
studies/samsung-migrates-off-oracle-to-amazon-aurora.
Database migration challenges

• Application downtime
• Migration time
• Data consolidation and application refactoring
• Schema conflicts

106

AWS DMS helps address common challenges of performing a database migration.

Database migration is an integral part of the cloud migration journey and is often the most challenging.
While migrating a database you should consider the following:
• You have to avoid or minimize potential disruptions to applications that rely on that database.
• When planning the timeline of the migration, you have to examine the size of your database and
available bandwidth for data transfer.
• Migration is often an opportunity to consolidate multiple data stores into a single database. You also
need to consider if there will need to be changes to the applications that use that data once migrated
to the cloud.
• The database schema also needs to be considered when conducting a heterogeneous database
migration.

In this module, you will learn how AWS DMS and AWS SCT address these challenges.
Amazon Relational Database Service
• A managed service for MySQL, Oracle, Microsoft
SQL Server, MariaDB, PostgreSQL, and Aurora

• Handles time-consuming database management


tasks

• Works with existing code, applications, and tools

Amazon RDS

107

Migrating to AWS creates the opportunity to adopt managed database services, such as Amazon
Relational Database Service (Amazon RDS).

Amazon RDS is a managed service, which differentiates it from traditional, on-premises databases. It can
replace most user-managed databases. Amazon RDS can be instantiated in minutes without the need for
hardware or software installation. Users can control patching.

Security and availability are integrated into the service for isolation, encryption, and access control.
Backup is automatically activated and is configurable. Snapshots to Amazon Simple Storage Service
(Amazon S3) are available. Users can turn on synchronous, Multi-AZ replication.

Users can choose from a broad selection of database engines and sizes. They can optimize for memory
and I/O requirements. Amazon RDS also lets customers increase performance by creating multiple
distributed read replicas. They can also select the level of backend storage performance required.

Amazon RDS is a pay-as-you-go service, so it’s cost effective. Customers can bring their own database
licenses, such as Oracle.

Amazon RDS frees the customer from the majority of day-to-day maintenance tasks by database
administrators.
Database migration patterns – lift and shift

[Diagram: on-premises database instances migrate to databases on AWS infrastructure using Amazon EC2 instances.]

108

Database migration patterns or methodologies are dictated by the source and target database. The
pattern used during migration affects the complexity of the database migration.

The lift-and-shift strategy migrates a workload from on premises to AWS with little or no modification. A
lift and shift is a common route for enterprises to move to the cloud and can be a transitionary state to a
more cloud-native approach. A database is moved to the cloud with as few changes as possible. This
option is often selected for speed of migration.
Database migration patterns - replatform

[Diagram of homogeneous and heterogeneous migration paths:]
• Homogeneous: on-premises Oracle database to Amazon RDS for Oracle
• Heterogeneous: on-premises Microsoft SQL Server database to Amazon Aurora PostgreSQL
• Heterogeneous: on-premises Oracle database to Amazon RDS MySQL

109

|Student notes
Homogeneous migration – You continue to use the same database engine but change to an equivalent
AWS service. For example, you can choose to migrate from Oracle on premises to Amazon RDS for Oracle
to gain the benefits of a hosted service.

Heterogeneous migration – The target database employs a different engine. In one example, an on-
premises Oracle database migrates to an Amazon RDS mySQL database. In another example, an on-
premises Microsoft SQL database migrates to an Amazon Aurora PostgreSQL database.

There are tradeoffs among the patterns in migration speed, cost, and optimization. Regardless of your
approach, AWS DMS and AWS SCT support it.
AWS DMS overview

• Used for homogeneous and heterogeneous migrations.
• Supports schema conversion for heterogeneous migrations.
• Migrates databases with zero downtime.
• Performs data validation.

AWS DMS

110

|Student notes
AWS DMS helps you migrate databases to AWS quickly and securely. You can use AWS DMS to migrate
your data to and from commercial and open-source databases, including Oracle, PostgreSQL, Microsoft
SQL Server, Aurora, MariaDB, and MySQL. You use AWS DMS to migrate your on-premises database to a
database running on an Amazon EC2 instance or a managed database service.

The service supports homogeneous migrations, such as Oracle to Oracle, and heterogeneous migrations
between different database platforms, such as Oracle to MySQL or MySQL to Aurora.

With AWS DMS, you can migrate your databases with zero downtime. Your database can continue to
support business-critical applications during a migration.

You can use AWS DMS to perform data validation, which ensures that data was migrated accurately from
the source to the target. Data validation is a setting that enables AWS DMS to compare the data on a
target data store with the data from a source data store. If the validation setting is enabled, AWS DMS
begins comparing the source and target data immediately after a full load is performed for a table.

For more information, review AWS Database Migration Service at https://aws.amazon.com/dms.


Customer scenarios
Homogeneous migration

Heterogeneous migration

Development and testing

Database consolidation

Continuous data replication

111

|Student notes
Homogeneous migration: A scenario where the customer wants to continue using the same database
engine but change to an equivalent AWS service. For example, migration from Oracle on-premises to
Amazon RDS Oracle, or Microsoft SQL Server to Amazon RDS for SQL Server to gain the benefits of a
hosted service. This type of migration is convenient and single-stepped. The schema structure, data
types, and database code are compatible between the source and target databases.

Heterogeneous migration: The customer wants a different target engine from source engine. For
example, migration from Oracle to Aurora PostgreSQL, or Microsoft SQL Server to MySQL. These
migrations take advantage of the benefits of cloud-native services and cost optimization. Use a two-step
process. First, use AWS SCT to convert the source database to match the target database. Then, migrate
data using AWS DMS from the source database to the target database.

Development and testing: A customer might want to migrate data in and out of the cloud for testing and
development reasons. AWS DMS is an efficient mechanism to achieve one-time or continuous
bidirectional data migration.

Database consolidation: A customer might have databases in different engines or locations. They might
want to consolidate their databases into a single database engine. For example, they might have MySQL
on-premises, MySQL on Amazon EC2, and MySQL on Amazon RDS. AWS DMS can consolidate all three
source databases into a single Amazon Aurora database.

Continuous data replication: Your customers might want to replicate their data for testing, geographic
data distribution, or disaster recovery. AWS DMS provides data replication for all supported database
engines. For example, a customer might have an Amazon Aurora database as a source. By using AWS
DMS, you can replicate the source to multiple target databases. The target database can be in a different
AWS Region or on an on-premises location.
AWS DMS architecture

[Diagram: a source database (on-premises database, Amazon RDS database instance, or database on an Amazon EC2 instance) connects through a source endpoint to an AWS DMS replication instance running a replication task. The task writes through a target endpoint to a target database (on-premises database, Amazon RDS database instance, or database on an Amazon EC2 instance).]
112

|Student notes
The four main components of AWS DMS are as follows:
• Replication instance: This is the core of AWS DMS and runs in a virtual private cloud, or VPC.
• Task: This runs on the replication instance.
• Source endpoint: This can be a database running on premises, on Amazon RDS, or on an Amazon EC2
instance.
• Target endpoint: This can be any of the database types, like those in the source endpoint.

When using AWS DMS:


1. Create a replication instance.
2. Specify the source and target endpoints.
3. Create one or more tasks on the replication instance to migrate data between the source and target
data stores.

You can use AWS DMS for one-time data migration into Amazon RDS and Amazon EC2 instance
databases, and for continuous data replication. The AWS DMS captures changes on the source database
and applies them in a transactional-consistent way to the target. Continuous replication can be done
from your data center to the databases in AWS. It can also replicate a database in your data center from a
database in AWS. Ongoing continuous replication can also occur between homogeneous or
heterogeneous databases.
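
As a rough illustration of the first two steps, the boto3 sketch below creates a small replication instance and a pair of MySQL endpoints. Every identifier, hostname, and credential shown is a hypothetical placeholder; in practice you would keep credentials in AWS Secrets Manager rather than in code.

# Sketch only: create a DMS replication instance and the source/target endpoints.
# Identifiers, hostnames, and credentials below are hypothetical placeholders.
import boto3

dms = boto3.client("dms")

# Step 1: create the replication instance that will run migration tasks.
instance = dms.create_replication_instance(
    ReplicationInstanceIdentifier="demo-replication-instance",
    ReplicationInstanceClass="dms.t3.medium",   # small class, suitable for tests
    AllocatedStorage=50,                        # GB for logs and cached changes
    MultiAZ=False,
)["ReplicationInstance"]

# Step 2: define the source (on-premises MySQL) and target (Amazon RDS MySQL) endpoints.
source = dms.create_endpoint(
    EndpointIdentifier="onprem-mysql-source",
    EndpointType="source",
    EngineName="mysql",
    ServerName="onprem-db.example.internal",     # placeholder hostname
    Port=3306,
    Username="dms_user",
    Password="example-password",
)["Endpoint"]

target = dms.create_endpoint(
    EndpointIdentifier="rds-mysql-target",
    EndpointType="target",
    EngineName="mysql",
    ServerName="appdb.example.us-east-1.rds.amazonaws.com",  # placeholder RDS endpoint
    Port=3306,
    Username="admin",
    Password="example-password",
)["Endpoint"]

print(instance["ReplicationInstanceArn"], source["EndpointArn"], target["EndpointArn"])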
Replication instance
• Runs migration tasks.
• Supports multiple tasks.
• Supports the T2, T3, C4, C5, C6i, R4, R5,
and R6i Amazon EC2 instance classes.

113

An AWS DMS replication instance, or server, runs on an Amazon EC2 instance. It hosts one or more tasks
to perform the work of database replication. The replication instance is highly available through Multi-AZ
deployment. The primary replication instance is synchronously replicated across Availability Zones to a
standby replica to provide data redundancy.

The replication instance is secure. Its storage can be encrypted by AWS DMS with a key in your account’s
AWS Key Management Service (AWS KMS) or your customer’s key. Additionally, a replication instance
runs in an Amazon Virtual Private Cloud (Amazon VPC) environment.

When you create a replication server, consider compute and storage resources:
• For Amazon EC2 instances, some of the smaller instance classes (such as T2) are sufficient for testing
the service or for small migrations.
a. If your migration involves a large number of tables, or if you intend to run multiple concurrent
replication tasks, consider using one of the larger instances (such as C4). You should use C4
instance classes if you are migrating large databases and want to minimize the migration time.
b. The R4 instance classes are memory optimized for memory-intensive workloads. Ongoing
migrations or replications of high-throughput transaction systems using AWS DMS can, at
times, consume large amounts of CPU and memory. R4 instances include more memory per
vCPU.
• Storage is used for log files and any cached changes collected during the load. Depending on the
Amazon EC2 instance class you select, your replication server comes with 50 GB or 100 GB of data
storage. If your source system takes large transactions, or if you’re running multiple tasks on the
replication server, you might need to increase this amount of storage. Usually, the default amount is
enough.
Replication task
• Runs on a replication instance.
• Contains two endpoints.
• Offers migration method.
• Applies rules and filters.

114

A replication task performs the actual data migration. A replication instance hosts one or more tasks. It
contains a source and target endpoint.

Tasks work independently and can run concurrently. Each task has its own initial load, change data
capture, or CDC, and log reading process. Tables that are related through data manipulation language
must be part of the same task.

Data changes are not coordinated across tasks, because each task has its own change capture and log
reading process. If you are using multiple tasks to perform a migration, make sure source transactions are
contained in a single task.

When you create a task, you must specify the type of migration you want to perform. There are three
types:

• Migrate existing data: This method migrates the data from your source database to your target
database, creating tables when necessary. This option is a good choice when you can afford an outage
long enough to complete database migration at once.

• Migrate existing data and replicate ongoing changes: This process performs a full-load replication
while capturing changes on the source. Once the full load is complete, the change data capture (CDC)
processes and applies the captured changes to the target database. Eventually, you can shut down
your applications, let the remaining changes flow through to the target, and then restart your
applications pointing at the target. The CDC process lets you minimize the database migration
downtime.

• Replicate data changes only: This process reads the recovery log file of the source database and
groups the entries for each transaction together. If AWS DMS can't apply changes to the target within a
reasonable time (for example, if the target is not accessible), it buffers the changes on the
replication server. It doesn't reread the source database logs, which can take a long time.
This option is suitable for situations where it might be more efficient to copy existing data
using a method such as AWS Snowball. Then, you can use DMS to replicate only the
changes after you start your bulk load.
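
Continuing the sketch from the architecture section, the call below creates and starts a task of the second type (full load plus ongoing replication) for every table in a hypothetical hr schema. The ARNs are placeholders for the values returned by the earlier create calls.

# Sketch only: create a replication task that performs a full load and then
# applies ongoing changes (CDC). ARNs and schema names are hypothetical.
import json
import boto3

dms = boto3.client("dms")

# Table mappings select which schemas and tables the task migrates.
table_mappings = {
    "rules": [
        {
            "rule-type": "selection",
            "rule-id": "1",
            "rule-name": "include-hr-schema",
            "object-locator": {"schema-name": "hr", "table-name": "%"},
            "rule-action": "include",
        }
    ]
}

task = dms.create_replication_task(
    ReplicationTaskIdentifier="hr-full-load-and-cdc",
    SourceEndpointArn="arn:aws:dms:us-east-1:111122223333:endpoint:SOURCE",    # placeholder
    TargetEndpointArn="arn:aws:dms:us-east-1:111122223333:endpoint:TARGET",    # placeholder
    ReplicationInstanceArn="arn:aws:dms:us-east-1:111122223333:rep:INSTANCE",  # placeholder
    MigrationType="full-load-and-cdc",  # or "full-load" / "cdc" for the other two types
    TableMappings=json.dumps(table_mappings),
)["ReplicationTask"]

dms.start_replication_task(
    ReplicationTaskArn=task["ReplicationTaskArn"],
    StartReplicationTaskType="start-replication",
)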
Zero downtime migration

[Diagram: an on-premises database at the customer premises replicates over the internet through a virtual private network to AWS DMS and an Amazon database, while application users continue to use the system. AWS DMS performs continuous data replication using CDC.]

• Start the replication instance.
• Connect to the source and destination.
• Select tables, schemas, or databases.
• Confirm AWS DMS creates tables, loads data, and synchronizes the data.
• Switch applications to the new target when needed.
115

|Student Notes
Another benefit of AWS DMS is that it has a near-zero downtime migration. This diagram outlines a
customer’s process:
1. Create an AWS DMS instance in AWS.
2. In AWS DMS, connect to the source and target databases.
3. Choose the data to migrate. With AWS DMS, customers can migrate tables, schemas, and entire
databases.
4. Confirm that AWS DMS creates the tables and loads the data. AWS DMS keeps the tables and data
synchronized for as long as the customer needs.
5. When the customer is ready, switch applications over to point to the AWS database. The replication
capability keeps the source and target data synchronized.

AWS DMS performs continuous data replication using CDC. By using CDC, you can determine and track
data that has changed and provide it as a stream of changes that a downstream application can consume
and act on. Most database management systems manage a transaction log that records changes made to
the database contents and to metadata. By using engine-specific API operations and functions, AWS DMS
reads the transaction log. AWS DMS captures the changes made to the database in a nonintrusive
manner.

AWS DMS eliminates the need for high-stakes extended outages to migrate production data into the
cloud by providing a graceful switchover capability.
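
To judge when the target has caught up and the switchover can happen, you can poll the task and its table statistics. A minimal boto3 sketch, assuming a placeholder task ARN, might look like this:

# Sketch only: poll a DMS task and its table statistics to judge cutover readiness.
# The replication task ARN is a hypothetical placeholder.
import boto3

dms = boto3.client("dms")
task_arn = "arn:aws:dms:us-east-1:111122223333:task:EXAMPLE"  # placeholder

task = dms.describe_replication_tasks(
    Filters=[{"Name": "replication-task-arn", "Values": [task_arn]}]
)["ReplicationTasks"][0]
print("Task status:", task["Status"])                 # e.g. "running" once CDC is applying changes
print("Task stats:", task.get("ReplicationTaskStats", {}))

stats = dms.describe_table_statistics(ReplicationTaskArn=task_arn)["TableStatistics"]
for table in stats:
    print(table["SchemaName"], table["TableName"], table["TableState"], table.get("ValidationState"))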
AWS Schema Conversion Tool

[Diagram: AWS SCT converts a source schema into a target schema.]
116

|Student notes
If you are migrating between heterogeneous databases, use AWS SCT. It converts the source database
schema and the database code objects (like views, stored procedures, and functions) to a format
compatible with the target database.

AWS SCT performs the following functions:


• Converts the source database schema and objects to a format compatible with the target database
• Scans your application source code for embedded SQL statements and converts them
• Performs cloud-native code optimization
• Marks the objects that cannot be automatically converted for manual conversion
• Supports conversion of SQL code in applications for interaction with the new database engine

For example, AWS DMS can use AWS SCT to convert Oracle Procedural Language for SQL and SQL Server
Transact-SQL code to equivalent code in the Aurora MySQL dialect of SQL. When a code fragment cannot
be automatically converted to the target language, AWS DMS documents all locations that require
manual input from the application developer.

In certain cases, you can use AWS SCT within AWS DMS by using DMS Schema Conversion. For more
information explore Converting database schemas using DMS Schema Conversion in the AWS Database
Migration Service
User Guide at https://docs.aws.amazon.com/dms/latest/userguide/CHAP_SchemaConversion.html.

Some AWS DMS guides and playbooks are available to help in migrating specific source/target database
combinations, including:

• Migrate from Oracle to Amazon Redshift


• Migrate from Oracle to Amazon Aurora MySQL
• Migrate from Oracle to Amazon Aurora PostgreSQL
• Migrate from Microsoft SQL Server to Amazon Aurora MySQL

You can explore these resources in the AWS Database Migration Service Documentation at:
https://aws.amazon.com/dms/resources/

For a list of mappings for the source to the target database conversions, see AWS schema
conversion tool at https://aws.amazon.com/dms/schema-conversion-tool/?nc=sn&loc=2.
AWS DMS Fleet Advisor

• Fully managed capability of AWS DMS
• Automates migration planning
• Assesses your on-premises database and analytics server fleet to provide migration paths

117

AWS DMS Fleet Advisor helps you to quickly build a database and analytics migration plan by automating
the discovery and analysis of your fleet. AWS DMS Fleet Advisor is intended for users wanting to migrate
a large number of database and analytic servers to AWS.

AWS DMS Fleet Advisor collects data from multiple database environments to provide insight into your
data infrastructure. Fleet Advisor collects data from your on-premises database and analytic servers from
one or more central locations without the need to install agents on every computer. Currently, Fleet
Advisor supports Microsoft SQL Server, MySQL, Oracle, and PostgreSQL database servers.

Based on data discovered from your network, AWS DMS Fleet Advisor builds an inventory that you can
review to determine which database servers and objects to monitor. As details about these servers,
databases, and schemas are collected, you can analyze the feasibility of your intended database
migrations.

For more information, see AWS DMS Fleet Advisor at https://aws.amazon.com/dms/fleet-advisor.


Lab 1: Database Migration with AWS DMS
In this lab, you will perform the following tasks:
• Create a new managed database using Amazon RDS.
• Create an AWS DMS replication instance to replicate data between databases.
• Create the source and target database endpoints.
• Modify and configure the source database to facilitate continuous replication of the data.
• Start the replication of data through a replication task.

118
Data migration

119
Data collection and movement challenges

• Data properties
• Data usage patterns
• Legacy environment
• Connectivity limitations
• Bandwidth demands
• Business requirements

120

Challenges for data collection and migration are the following:


• To discover the data properties, identify the metadata you need to preserve during the migration. This
might include preserving ownership, permissions, attributes, and file timestamps.
• Data usage patterns can be discovered by understanding how often your data is changing. This
information will determine how to manage changes by using snapshots or delta shipping.
• Examine legacy environments that have older filesystems, like FAT32, or dated files and hardware. Take
note of restricted protocol support for legacy platforms like mainframes or Unix-based systems
running big data or database services. You might need to perform data transformation before
migrating this data.
• Identify connectivity limitations, including lack of internet, available bandwidth, and transfer capacity
restrictions.
• To assess the bandwidth demands during migration, measure usable network bandwidth to prevent
impacting production bandwidth during a migration. You might need to provision additional virtual
private network connectivity or even additional AWS Direct Connect usage.
• Understand the business requirements for the timeframe of the migration and the time to value.
When to use a data migration service
Usable network bandwidth
Size 100 Mbps 1 Gbps 10 Gbps
1 TB 30 hours 3 hours 18 minutes
10 TB 12 days 30 hours 3 hours
100 TB 124 days 12 days 30 hours
1 PB 3 years 124 days 12 days
10 PB 34 years 3 years 124 days
Assumes ~25 percent network overhead

121

The core considerations behind any data migration task are the amount of data you must move and the
time it takes to move it.

The following formula gives a simplified estimation:


• Number of days = (total bytes) / (megabits per second * 125 * 1,000 * network utilization * 60 seconds
* 60 minutes * 24 hours)

For example, if you have a T1 connection (1.544 Mbps) and 1 TB (1,024 * 1,024 * 1,024 * 1,024 bytes) to
move, the theoretical minimum time it would take to migrate this data over your network connection at
80 percent network utilization is 82 days.
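
To make the arithmetic concrete, here is a small Python sketch of the same formula. It assumes, as the table appears to, decimal terabytes and roughly 75 percent usable bandwidth for the table rows; the T1 example uses binary terabytes and 80 percent utilization as stated above.

# Sketch only: estimate transfer time from data size, link speed, and utilization.
import math

def transfer_days(total_bytes: float, mbps: float, utilization: float) -> float:
    # mbps * 125 * 1,000 converts megabits per second to bytes per second.
    bytes_per_second = mbps * 125 * 1000 * utilization
    return total_bytes / (bytes_per_second * 60 * 60 * 24)

one_tb_binary = 1024 ** 4  # 1 TB counted in binary units, as in the T1 example
print(round(transfer_days(one_tb_binary, 1.544, 0.80)))   # T1 at 80% utilization -> 82 days
print(math.ceil(transfer_days(100e12, 100, 0.75)))        # 100 TB at 100 Mbps, ~25% overhead -> 124 days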

In addition, the number of files can impact the decision about what approach to use to migrate the data.
Validate your assumptions by using test runs and rehearsals. Monitor these test runs to validate your
timeframe expectations.

You should consider using a data migration service based on the volume of the application data being
migrated.
AWS data migration services

Online data transfer:
• AWS DataSync
• AWS Transfer Family
• AWS Storage Gateway

Offline data transfer:
• AWS Snowcone
• AWS Snowball Edge

122

|Student notes
There a several different AWS data migration services to consider. You can either transfer online, or
offline.

AWS DataSync: AWS DataSync is a secure, online service that automates and accelerates moving data
between on premises and AWS Storage services.

AWS Transfer Family: AWS Transfer Family securely scales your recurring business-to-business file
transfers to AWS Storage services using SFTP, FTPS, FTP, and AS2 protocols.

AWS Storage Gateway: AWS Storage Gateway is a set of hybrid cloud storage services that provide on-
premises access to virtually unlimited cloud storage.

AWS Snowcone: AWS Snowcone is a small, rugged, and secure device offering edge computing, data
storage, and data transfer on the go, in austere environments with little or no connectivity.

AWS Snowball Edge: AWS Snowball Edge is a device with on-board storage and compute power for
select AWS capabilities. A Snowball Edge device can transport data at speeds faster than transferring it over the internet.
DataSync
DataSync is an online transfer service that simplifies,
automates, and accelerates moving data between on-
premises storage and AWS.
With DataSync, you can migrate between the following:
• Network file system (NFS) file servers
• Server message block (SMB) file servers
• Self-managed object storage
• Amazon S3
• Amazon Elastic File System (Amazon EFS)
• Amazon FSx for Windows File Server
• Amazon FSx for Lustre DataSync
• Hadoop Distributed File System servers
123

DataSync is an online data transfer service that simplifies, automates, and accelerates the movement of
data between on-premises environments and AWS. You can use DataSync to migrate active data to AWS,
archive data for more on-premises storage capacity, replicate data to AWS for business continuity, or
transfer data to the cloud for analysis and processing.

DataSync can copy data between network file system file, or NFS, servers, server message block, or
SMB, file servers, self-managed object storage, Amazon S3 buckets, Amazon Elastic File System (Amazon
EFS) file systems, Amazon FSx for Windows File Server file systems, Amazon FSx for Lustre file systems,
and Hadoop Distributed File System servers.

For more information about DataSync, visit AWS DataSync at https://aws.amazon.com/datasync.


Migrate active application data

[Diagram: on-premises application servers and network attached storage connect over NFS or SMB to DataSync agents. The agents transfer data to DataSync in the AWS Cloud, which writes to AWS storage services such as Amazon S3, Amazon EFS, and FSx for Windows File Server.]

124

|Student notes
The diagram shown in this slide shows application servers and network attached storage on premises.
The network attached storage using NFS or SMB connects to DataSync Agents that in turn connect to
DataSync in the AWS Cloud. Both DataSync and Amazon EC2 then connect to AWS storage resources
within the AWS Cloud, such as Amazon S3, Amazon EFS, and FSx for Windows File Server.

You can use DataSync to migrate from on-premises data to Amazon S3 and Amazon EFS. DataSync will
make an initial copy of your entire dataset, and subsequent incremental transfers of change data can be
scheduled to run periodically until the final move from on premises to AWS. DataSync includes
encryption and integrity validation to help make sure data arrives securely, intact, and ready to use. To
minimize impact on other workloads that rely on the same network connection, you can schedule the
migration to run during off-hours or limit the amount of network bandwidth that DataSync uses by
configuring the built-in bandwidth throttle.
DataSync configuration options
• Schedule one-time or recurring transfers.
• Turn on verification.
• Set a bandwidth limit.
• Configure Amazon CloudWatch logging.
• Combine with Storage Gateway.

125

You choose whether the task runs a single transfer or recurring transfers. Recurring migrations make an
initial copy of all source data and then reupload only data that has changed since the last sync. You can
also schedule the transfer for a time when it will not take up network bandwidth that your organization
needs.

Although DataSync automatically performs in-transfer data integrity validation of each packet, you can
also set it to compare all files at the source and destination. You can use this feature to verify the
currency of incremental transfers and as a final verification before cutover from on premises to AWS.

If you need to manage bandwidth resources between DataSync and your other applications, you can set a
bandwidth limit for DataSync.

You can also assign an Amazon CloudWatch logging group where DataSync can log information about task
completion and file-level errors it runs into.

With the combination of DataSync and the File Gateway configuration of Storage Gateway, you can
rapidly move your on-premises storage to AWS, while retaining on-premises access for latency-sensitive
applications.
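
As a sketch of how these options map to the DataSync API, the boto3 call below creates a recurring task with verification, a bandwidth cap, and CloudWatch logging. The location and log group ARNs are hypothetical placeholders created beforehand.

# Sketch only: create a scheduled DataSync task with verification, a bandwidth
# limit, and CloudWatch logging. All ARNs are hypothetical placeholders.
import boto3

datasync = boto3.client("datasync")

task = datasync.create_task(
    SourceLocationArn="arn:aws:datasync:us-east-1:111122223333:location/loc-source",       # e.g. on-premises NFS
    DestinationLocationArn="arn:aws:datasync:us-east-1:111122223333:location/loc-dest",    # e.g. S3 bucket
    Name="nightly-incremental-sync",
    Options={
        "VerifyMode": "POINT_IN_TIME_CONSISTENT",   # compare source and destination after the transfer
        "BytesPerSecond": 50 * 1024 * 1024,         # cap at roughly 50 MB/s to protect other workloads
        "OverwriteMode": "ALWAYS",
    },
    Schedule={"ScheduleExpression": "cron(0 2 * * ? *)"},  # run nightly at 02:00 UTC
    CloudWatchLogGroupArn="arn:aws:logs:us-east-1:111122223333:log-group:/datasync/tasks",  # placeholder
)
print(task["TaskArn"])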
Transfer Family
Transfer Family supports transfers in and out
of Amazon S3 and Amazon EFS:
• Supports SFTP, FTPS, and FTP, and integrates with
existing identity providers.
• Provides a secure, encrypted channel.
• Connects to other AWS services.
• Supports Applicability Statement 2 (AS2) protocol
for file transfers with Amazon S3.

Transfer Family

126

With AWS Transfer Family, you can transfer files into and out of Amazon S3 and Amazon EFS storage over
the following protocols: Secure Shell (SSH) File Transfer Protocol (SFTP), File Transfer Protocol Secure
(FTPS), and finally File Transfer Protocol (FTP).

You do not need to modify applications or run any file transfer protocol infrastructure. Transfer Family is
fully compatible with SFTP, FTPS, and FTP standards.

Transfer Family connects directly with identity provider systems like AWS Directory Service for Microsoft
Active Directory, Lightweight Directory Access Protocol, Okta, and others. You can migrate file transfer
workflows to AWS without changing existing authentication systems, domains, and hostnames.

Transfer Family is compliant with payment card industry, Health Insurance Portability and Accountability
Act of 1996, Service Organization Control 3, and Federal Information Processing Standards to ensure that
you are transferring files through a secure, encrypted channel.

It stores data in Amazon S3 or Amazon EFS, providing native support for AWS security, monitoring, and
auditing services.
Getting started with Transfer Family

[Diagram: file transfer applications and users connect over SFTP, FTPS, or FTP, through the internet or Direct Connect, to a Transfer Family endpoint in an Amazon VPC in the AWS Cloud. Transfer Family stores data in Amazon S3 or Amazon EFS and can authenticate users against a custom identity provider through Amazon API Gateway and AWS Lambda.]

127

|Student notes
The architecture diagram in the slide shows the placement of Transfer Family in the application
infrastructure.

With Transfer Family, you get access to an FTP-capable server in AWS without the need to run any server
infrastructure.

You create your Amazon S3 bucket or Amazon EFS and connect an AWS Identity and Access Management
(IAM) role and policy to set access and permissions. Next you create a Transfer Family server:
1. Choose the protocols you want it to support: SFTP, FTPS, or FTP.
2. Select an identity management method, either adding users through Transfer Family or integrating
with a custom identity provider.
3. Choose whether to use a public endpoint or one hosted in a VPC.
4. You can provide a custom hostname if you have one registered. If this hostname is currently in use by
your application, this will let you use Amazon Route 53 or another Domain Name System service to
route traffic to this endpoint.
5. You then select your Amazon S3 bucket or Amazon EFS to store and access data over the selected
protocol.
6. Then select a security policy to provide encryption.
7. If you did not use an existing hostname, you reconfigure your clients to use your new endpoint
hostname.

After these steps are complete, your Transfer Family server endpoint serves your users' transfer requests.
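
A minimal boto3 sketch of those steps for a service-managed SFTP server backed by Amazon S3 follows; the IAM role, bucket, and SSH key are hypothetical placeholders.

# Sketch only: create a service-managed SFTP endpoint and one user mapped to an
# S3 home directory. Role ARNs, bucket names, and keys are hypothetical.
import boto3

transfer = boto3.client("transfer")

server = transfer.create_server(
    Protocols=["SFTP"],                      # could also include FTPS or FTP
    IdentityProviderType="SERVICE_MANAGED",  # users managed directly in Transfer Family
    EndpointType="PUBLIC",                   # or VPC for a private endpoint
    Domain="S3",                             # store files in Amazon S3
)
server_id = server["ServerId"]

transfer.create_user(
    ServerId=server_id,
    UserName="partner-upload",
    Role="arn:aws:iam::111122223333:role/transfer-s3-access",   # placeholder IAM role
    HomeDirectory="/my-transfer-bucket/partner-upload",         # placeholder bucket/prefix
    SshPublicKeyBody="ssh-rsa AAAAB3Nza-example-public-key",    # placeholder public key
)
print("SFTP endpoint server ID:", server_id)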
Storage Gateway types

• Amazon S3 File Gateway: Native file access to Amazon S3 for backups, archives, and ingest for data lakes.
• Amazon FSx File Gateway: Native access to FSx for Windows File Server for on-premises group file shares and home directories.
• Tape Gateway: Virtual tape library (VTL) using Amazon S3 archive tiers for long-term retention.
• Volume Gateway: Block-level backups of volumes with Amazon Elastic Block Store (Amazon EBS) snapshots, AWS Backup integration, and cloud recovery.
128

Choose a Storage Gateway type that is the best fit for your workload.

Amazon S3 File Gateway presents a file interface you can use to store files as objects in Amazon S3 using
the industry-standard NFS and SMB file protocols. Access your files with NFS and SMB from your data
center or Amazon EC2, or access those files as objects directly in Amazon S3.

Amazon FSx File Gateway provides fast, low-latency, on-premises access to fully managed, highly reliable,
and scalable file shares. It uses the industry-standard SMB protocol. You can store and access file data in
Amazon FSx with Microsoft Windows features including full New Technology File System support, shadow
copies, and access control lists.

Tape Gateway presents a virtual tape library, or VTL, that is based on Internet Small Computer Systems
Interface, or iSCSI, with virtual tape drives and a virtual media changer to your on-premises backup
application. Tape Gateway stores your virtual tapes in Amazon S3 and creates new ones automatically,
simplifying management and your transition to AWS.

Volume Gateway presents block storage volumes of your applications using the iSCSI protocol. You can
asynchronously back up data written to these volumes as point-in-time snapshots of your volumes and
store it in the cloud as Amazon Elastic Block Store (Amazon EBS) snapshots. You can back up your on-
premises Volume Gateway volumes using the service’s snapshot scheduler or by using the AWS Backup
service.
Storage Gateway architecture

[Diagram: on-premises clients or servers connect to a Storage Gateway appliance using NFS or SMB, iSCSI, or iSCSI VTL transfer protocols. The appliance communicates with the Storage Gateway managed service in the AWS Cloud, which stores data in Amazon S3, Amazon FSx for Windows File Server, Amazon EBS, or Amazon S3 Glacier, with AWS Backup integration.]

129

|Student notes
The Storage Gateway appliance supports the following protocols to connect to your local data:
• NFS or SMB for files
• iSCSI for volumes
• iSCSI VTL for tapes

Your Storage Gateway appliance runs in one of four modes: Amazon S3 File Gateway, Amazon FSx File
Gateway, Tape Gateway, or Volume Gateway.

Data moved to AWS using Storage Gateway can be sent to the following destinations through the Storage
Gateway managed service:
• Amazon S3 (Amazon S3 File Gateway, Tape Gateway)
• Amazon S3 Glacier (Amazon S3 File Gateway, Tape Gateway)
• Amazon FSx for Windows File Server (Amazon FSx File Gateway)
• Amazon EBS (Volume Gateway)

AWS Backup can be used to schedule volume snapshots with Volume Gateway.
AWS Snow Family overview

Snowcone Snowball Edge

130

|Student notes
The AWS Snow Family is a collection of physical devices that helps migrate large amounts of data to the
cloud without depending on networks. It can take a long time to transfer large amounts of data over the
wire, and some locations don't have any connectivity at all. The Snow Family helps expedite data
transfers in a secure and cost-effective way.

Snowcone is the smallest member of the Snow Family of edge computing and data transfer devices.
Snowcone is portable, rugged, and secure. You can use Snowcone to collect, process, and move data to
AWS either offline by shipping the device or online with DataSync. For more information about
Snowcone, see AWS Snowcone at https://aws.amazon.com/snowcone.

Snowball Edge is a suitcase-sized data migration and edge computing device that comes in two device
options: Compute Optimized and Storage Optimized. Snowball Edge Storage Optimized devices provide
80 terabytes of usable block or Amazon S3 compatible object storage. It is well-suited for local storage
and large-scale data transfer. Snowball Edge Compute Optimized devices provide 52 virtual CPUs, or
vCPUs, 42 terabytes of usable block or object storage, and an optional graphics processing unit, or GPU,
for use cases such as advanced machine learning and full motion video analysis in disconnected
environments. For more information about Snowball, see AWS Snowball at
https://aws.amazon.com/snowball.

Not shown on this slide, AWS Snowmobile is an exabyte-scale data transfer service used to move
extremely large amounts of data to AWS. Snowmobile is a shipping container moved with a tractor trailer.
You can transfer up to 100 PB per Snowmobile. These services can assist with data migration, disaster
recovery, data center shutdown, and remote data collection projects. For more information about
Snowmobile, see AWS Snowmobile at https://aws.amazon.com/snowmobile.
Snowcone

• Operates in harsh environments, from freezing to desert-like conditions. Rugged, dust tight, and water and wind resistant.
• Amazon EC2 computing or AWS IoT Greengrass functions.
• TBs of connected storage.
• On-board computing.
• Wired and wireless connectivity.
• Anti-tamper and tamper-evident enclosure.
• Data secured with military-grade encryption.
• Portable carry. Supports optional battery for increased mobility.

131

|Student notes
Snowcone is the smallest member of the Snow Family of edge computing, edge storage, and data
transfer devices, weighing in at 4.5 pounds (2.1 kg) with 8 TB of usable storage. For data migration,
Snowcone has 8 TiB of storage. You can ship the device with data to AWS for offline data transfer, or you
can transfer data online with DataSync. Snowcone also provides on-premises compute services running
specific Amazon EC2 instances or AWS IoT Greengrass functions.

For more information, see AWS Snowcone device specifications at


https://docs.aws.amazon.com/snowball/latest/snowcone-guide/snowcone-spec-requirements.html for
up-to-date device features and specifications.
Snowball Edge options

Compute optimized:
• Larger amounts of vCPUs and memory
• Optional GPU installed
• sbe-c and sbe-g instances (equivalent to C5, M5a, G3, and P3)

Storage optimized:
• Larger usable storage amounts that are compatible with Amazon S3
• Object storage clustering available
• sbe1 instances (equivalent to C5)

132

|Student notes
Snowball Edge Compute Optimized devices provide computing resources for use cases such as machine
learning, full motion video analysis, analytics, and local computing stacks.

Snowball Edge Storage Optimized devices are good for large-scale data migrations, recurring transfer
workflows, and local computing with higher capacity needs.

Snowball Edge devices feature high-speed network connections, supporting 10 Gbps to 100 Gbps links
with registered jack 45, small form-factor pluggable plus and quad small form-factor pluggable plus
copper, and optical interfaces. The device performs encryption, helping to provide a higher data
throughput rate and shorter data transfer times.

For detailed specifications of each option, AWS Snowball Edge device specifications at
https://docs.aws.amazon.com/snowball/latest/developer-guide/sbe-specifications.html.
Migrating data with Snow Family devices

Steps: create a job, connect the device, copy data to the device, and view your data in Amazon S3.
Job status progresses from job created, to device delivered to you, to device shipped to AWS, to device delivered to AWS, to job completed.

133

Within the Snow Family console, select your Snow Family device and create a job. You provide job
configuration details such as the Amazon S3 bucket to use, encryption keys, the Amazon EC2 Amazon
Machine Images to load onto the device, and connectivity tools.

AWS prepares and ships the device or devices to you. Delivery takes 4–6 days. You then activate the
device using AWS OpsHub and connect it to your network. For edge computing devices, you can mount it
as an NFS device and run Amazon EC2 instances to perform data processing. If you have online access,
you can turn on DataSync to transfer data to Amazon S3, Amazon EFS, or FSx for Windows File Server.

When done, you shut down the device and return it to AWS. The device automatically displays the
shipping label on the e-ink screen on the device.

After AWS receives the device, AWS transfers your data to the Amazon S3 bucket in your selected AWS
Region and verifies the data after it is transferred. You can then access your data in the AWS Cloud. AWS
then erases all data from the device.
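
If you script the ordering step, the Snowball API exposes it as a job. The boto3 sketch below orders a hypothetical Snowball Edge Storage Optimized import job; the address ID, role, KMS key, and bucket ARNs are placeholders you would have created beforehand.

# Sketch only: order a Snowball Edge Storage Optimized device for an S3 import job.
# The address ID, role ARN, KMS key, and bucket ARN are hypothetical placeholders.
import boto3

snowball = boto3.client("snowball")

job = snowball.create_job(
    JobType="IMPORT",
    SnowballType="EDGE_S",                    # Snowball Edge Storage Optimized
    SnowballCapacityPreference="T80",         # 80 TB usable storage
    Resources={
        "S3Resources": [
            {"BucketArn": "arn:aws:s3:::my-migration-landing-bucket"}   # placeholder bucket
        ]
    },
    AddressId="ADID00000000-0000-0000-0000-000000000000",               # placeholder shipping address
    RoleARN="arn:aws:iam::111122223333:role/snowball-import-role",      # placeholder role
    KmsKeyARN="arn:aws:kms:us-east-1:111122223333:key/EXAMPLE",         # placeholder key
    ShippingOption="SECOND_DAY",
    Description="Data center exit - archive import",
)
print("Snow job ID:", job["JobId"])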
Knowledge check

134
Knowledge check 1 – question
Which statements are correct about AWS Database Migration Service (AWS DMS)? (Select TWO.)

A. AWS DMS only supports database migrations that use the lift-and-shift strategy.
B. The source and target databases can be any of the supported database types.
C. The source database can be an on-premises, Amazon RDS, or Amazon EC2 database.
D. The target database must match the source database when using AWS DMS.
E. AWS DMS only supports on-premises source databases.

135
Knowledge check 1 – answer
Which statements are correct about AWS Database Migration Service (AWS DMS)? (Select TWO.)

The correct responses are B and C.

A. AWS DMS only supports database migrations that use the lift-and-shift strategy.
B. The source and target databases can be any of the supported database types.
C. The source database can be an on-premises, Amazon RDS, or Amazon EC2 database.
D. The target database must match the source database when using AWS DMS.
E. AWS DMS only supports on-premises source databases.

136
Knowledge check 2 – question
Which statement is not true about AWS Schema Conversion Tool (AWS SCT)?

A. AWS SCT helps with the conversion of an existing database schema from one database engine to another.
B. AWS SCT can convert the schema of an Oracle source database to Amazon Aurora, MariaDB, MySQL, or PostgreSQL.
C. AWS SCT supports conversion of SQL code in applications for interaction with the new database engine.
D. AWS SCT only supports source database schema conversion to Amazon EC2 as a target.

137
Knowledge check 2 – answer
Which statement is not true about AWS Schema Conversion Tool (AWS SCT)?

The correct response is D.

A. AWS SCT helps with the conversion of an existing database schema from one database engine to another.
B. AWS SCT can convert the schema of an Oracle source database to Amazon Aurora, MariaDB, MySQL, or PostgreSQL.
C. AWS SCT supports conversion of SQL code in applications for interaction with the new database engine.
D. AWS SCT only supports source database schema conversion to Amazon EC2 as a target.

138
Questions?

Corrections, feedback, or other questions?


Contact us at https://support.aws.amazon.com/#/contacts/aws-training.
All trademarks are the property of their owners.

139
Module 4
Migrate and Modernize:
Application Migration

Lab 2
140

Module 4 covers application migration and modernization, specifically AWS Application Migration
Service and AWS Managed Services (AMS).
Module On completion, you will be able to do the
objectives and following:
outline Describe, at a high level, the AWS services,
resources, and tools necessary for migrations of
applications.

Topics:
• Migrate servers with AWS Application
Migration Service
• Modernization phases
• AWS Well-Architected Framework
• AWS Managed Services (AMS)
• Application optimization
141

|Student notes
After completing this module, you should be able to do the following:
• Migrate servers with AWS Application Migration Service.
• Use the AWS Well-Architected Framework for migration.
• Optimize your applications for and during migration.
Phases of a migration

Migrate and
Assess Mobilize
Modernize

AWS Well-Architected Framework

142

|Student notes
In the Migrate and Modernize phase, you do the following:
• Identify services that help with server, database, and application migration for the customer, such as
Application Migration Service, AWS Database Migration Service (AWS DMS), and so forth.
• Deploy cutover work streams and move applications to AWS.
• Evolve the migrated applications towards a modern operating model.

The AWS Well-Architected Framework is applied across all stages of migrations. The success of an AWS
migration heavily depends on having a Well-Architected Framework.
Migration services and tools – Migrate and Modernize phase

Assess:
• Migration Assessment Tools (CART, MRA)
• AWS Migration Evaluator
• Migration Portfolio Assessment

Mobilize:
• AWS Control Tower
• AWS Application Discovery Service
• AWS Migration Hub

Migrate and Modernize:
• AWS Application Migration Service (MGN)
• AWS Database Migration Service (DMS)
• AWS services for data migration
• AWS Managed Services (AMS)

AWS Well-Architected Framework

143

|Student notes
This module discusses the following tools for application migration during the migrate and modernize
phase:
• AWS Application Migration Service
• AWS Managed Services

In this module also discusses the AWS Well-Architected Framework which you use throughout all three
phases of migration.
Migrate servers with
AWS Application
Migration Service

144
Application Migration Service overview
Application Migration Service is the primary migration service used to lift and shift your applications to AWS:
• Seamless integration with other services
• Usage of Amazon Elastic Compute Cloud (Amazon
EC2) launch templates
• Network access
• Minimal downtime during migration
• Reduced costs

145

Application Migration Service is a highly automated, lift-and-shift (rehost) solution that simplifies,
expedites, and reduces the cost of migrating applications to AWS. It helps companies to rehost a large
number of physical, virtual, or cloud servers without compatibility issues, performance disruption, or long
cutover windows.

Application Migration Service features seamless integration with other AWS services. For example, you
can use AWS CloudTrail and Amazon CloudWatch for compliance and monitoring. You can also use AWS
Identity and Access Management (IAM) to manage authorization, authentication, and permissions.

The service uses Amazon EC2 launch templates to define how to launch a target machine for the selected
source machine.

You can configure private access to the source machine, the staging area, and the target networks. You
can also route communications between your source environment and Application Migration Service
over dedicated network connections. This includes VPN tunnels and connections managed by AWS VPN
or AWS Direct Connect.

It reduces manual processes, improves user management and monitoring, and accelerates your
migration. You can maintain normal business operations throughout the replication process.
It continuously replicates source servers, which means little to no performance impact. Continuous
replication also makes it convenient to conduct non-disruptive tests and shortens cutover windows.

For more information, visit AWS Application Migration Service at https://aws.amazon.com/application-migration-service.
Application Migration Service benefits

Flexible:
• Migrate from any source
• Support for a wide range of operating systems and databases
• Suitable for large-scale migrations

Reliable:
• Robust, predictable, non-disruptive, continuous replication
• Short cutover windows
• Highly secure

Easy to use:
• Minimal skill set required to operate
• Non-disruptive tests before cutover
• Quickly plugs into migration factories and cloud centers of excellence
146

Application Migration Service is a highly automated, lift-and-shift solution that simplifies, expedites, and
reduces the cost of migrating applications to AWS. Use Application Migration Service to migrate physical,
virtual, or cloud servers to AWS without compatibility issues, performance disruption, or long cutover
windows.

Application Migration Service is designed for rapid, large-scale migrations with a user-friendly setup process that helps you quickly begin replicating your source environment to AWS. Application Migration Service supports most common Windows and Linux operating systems and continuously replicates their data at the block level.

This means that all of the data from your source servers is replicated to your AWS account, including
machine state, operating system configuration, applications, databases, and files. Your workloads are
kept up-to-date, down to the second. After your planned cutover window arrives, Application Migration
Service quickly converts and launches your servers on AWS—typically in minutes. After the initial sync of
your source servers is complete, continuous replication keeps your servers up-to-date on AWS. You can
then conduct non-disruptive tests before cutover to avoid surprises during your actual cutover window.
You can conduct a virtually unlimited number of tests.

Application Migration Service uses a secure replication mechanism for your environments. Your data’s
journey from your source servers to your target AWS account is secure, both in transit and at rest. Data
replication uses TLS 1.3 with Advanced Encryption Standard (AES) 256-bit encryption in transit. In addition, you can use your own private connectivity, such as a VPN or Direct Connect, to replicate your data privately on
top of the encryption. You can use secure Amazon Elastic Block Store (Amazon EBS) encryption for the
copy of your replicated data and launched target servers and manage the encryption using AWS Key
Management Service (AWS KMS) Managed Keys or Customer Managed Keys.

Application Migration Service also has a rich set of fully documented APIs that you can use to plug into migration factories or cloud centers of excellence for additional automation. This includes post-launch scripts that you can run to remove applications, install new applications, and automatically make other system modifications during migration tests or cutover.
Agent-based replication
[Architecture diagram: source servers in a corporate data center run the AWS replication agent and communicate over the local network and agent control protocols with the Application Migration Service API endpoint in the AWS Region. Data is replicated to lightweight replication servers in a staging area subnet and then to test and cutover Amazon EC2 instances in the production area, with Amazon S3 used for storage. The numbered steps 1–5 correspond to the student notes below.]
147

|Student notes
This architecture diagram has two main components, one of which is the source application
environment. In this diagram, it is represented as a corporate data center, but it can be a cloud
environment or an on-premises environment. The second component is the target environment on AWS,
which contains the target infrastructure and the migration tools and resources. The target infrastructure
on AWS includes a staging area where the application servers are replicated and a production area where
the servers are deployed, tested, and configured to receive production traffic during cutover. The staging
area and production area interact with EC2 instances for compute, Amazon Simple Storage Service
(Amazon S3) buckets for storage, and Application Migration Service.

Agent-based replication includes the following steps:


1. You install an AWS replication agent in the source environment. The agent communicates with
Application Migration Service on AWS. The source servers on which the AWS replication agent is
installed need to be able to send data over TCP port 1500 to the replication servers in the staging area
subnet. They also need to be able to send data to the Application Migration Service API endpoint in
the cloud region.

2. The source servers are first replicated into a staging area during the initial sync using Application
Migration Service. You can test the servers in the staging area for launch readiness using the launch
settings defined in Application Migration Service. There are specific network requirements for
replication of source servers in the staging area. The replication servers launched by Application
Migration Service in your staging area subnet need to be able to send data over TCP port 443 to the
Application Migration Service API endpoint in the cloud region.

3. Launch test instances and perform acceptance tests on the servers. After the test is successful, finalize the test and delete the test instance.
4. Launch the cutover instance. Confirm that the cutover instance was launched successfully
and then finalize the cutover.

5. The final step is to archive the source servers.
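
The test, cutover, and archive steps above can also be driven through the Application Migration Service API. The following is a minimal sketch using the boto3 "mgn" client, assuming the replication agent is installed and initial sync has completed; the source server ID is a placeholder, and the exact method and parameter names should be verified against the SDK documentation for your version.

```python
import boto3

# Hypothetical source server ID copied from the Application Migration Service console.
SOURCE_SERVER_ID = "s-1234567890abcdef0"

mgn = boto3.client("mgn", region_name="us-east-1")

# Step 3: launch a test instance and, once acceptance tests pass, terminate it.
mgn.start_test(sourceServerIDs=[SOURCE_SERVER_ID])
# ... run acceptance tests on the launched test instance ...
mgn.terminate_target_instances(sourceServerIDs=[SOURCE_SERVER_ID])

# Step 4: launch the cutover instance and, after confirming it launched
# successfully and traffic is redirected, finalize the cutover.
mgn.start_cutover(sourceServerIDs=[SOURCE_SERVER_ID])
# ... confirm the cutover instance is healthy and redirect traffic ...
mgn.finalize_cutover(sourceServerID=SOURCE_SERVER_ID)

# Step 5: archive the source server so it no longer appears in the active server list.
mgn.mark_as_archived(sourceServerID=SOURCE_SERVER_ID)
```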


Application Migration Service process

Assess → Install AWS replication agent → Configure launch settings → Launch test instances → Launch cutover instances → Cutover
148

|Student notes
Identifying the source servers, networking, and instance right-sizing are the tasks carried out as part of
the Assess phase.

You must add your source servers to the Application Migration Service console to migrate them into
AWS. Source servers are added by installing the AWS replication agent on each individual server or by
performing an agentless snapshot replication using vCenter source environment. Install the AWS
replication agent, which replicates data and supports non-disruptive data transfer. You can view and
define replication settings in the Application Migration Service console. Application Migration Service
uses these settings to create and manage a staging area subnet with lightweight Amazon EC2 instances.
These act as replication servers that replicate data between your source servers and AWS.

After you have added your source servers to the Application Migration Service console, you will need to
configure the launch settings for each server. The launch settings are a set of instructions that determine
how a test or cutover instance will be launched for each source server on AWS.

After you have added all your source servers and configured their launch settings, you are ready to
launch a test instance. You can test one source server at a time or simultaneously test multiple source
servers.

After you have finalized the testing of all of your source servers, you are ready for cutover. You launch
cutover instances if you have deleted your test instances, or you can launch them as part of the cutover
process. During cutover, you can delete any remaining test instances, launch cutover instances, and
redirect traffic to your new instances.

You should perform the cutover at a set date and time. You can cut over one source server at a time or cut over multiple source servers simultaneously.
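
To track where each server is in this process, you can poll the Application Migration Service API. The sketch below uses the boto3 "mgn" client; the response field names shown (for example, dataReplicationInfo and lifeCycle) are assumptions to verify against the SDK documentation, and a real script would also follow the nextToken for additional result pages.

```python
import boto3

mgn = boto3.client("mgn", region_name="us-east-1")

# List non-archived source servers and print their replication and lifecycle
# state, so you can see which servers are ready for test or cutover.
response = mgn.describe_source_servers(filters={"isArchived": False})
for server in response["items"]:
    server_id = server["sourceServerID"]
    replication = server.get("dataReplicationInfo", {}).get("dataReplicationState")
    lifecycle = server.get("lifeCycle", {}).get("state")
    print(f"{server_id}: replication={replication}, lifecycle={lifecycle}")
```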
Configure cutover instances
You should configure these attributes of the cutover instance before launching:
• Instance type right-sizing (overrides the instance type)
• Start instance on launch
• Copy private IP
• Transfer server tags
• Operating system licensing (used for bring your own license)
149

After you have added your source servers to the Application Migration Service console, you will need to
configure the launch settings for each server. The launch settings are a set of instructions that determine
how a test or cutover instance will be launched for each source server on AWS. You must configure the
launch settings before launching test or cutover instances. You can use the default settings or configure
the settings to fit your requirements. Application Migration Service automatically creates an Amazon EC2
launch template for each new source server.

For more information about configuring the source server, see Configuring launch settings at
https://docs.aws.amazon.com/mgn/latest/ug/configuring-target-gs.html.
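
The same launch settings can be set programmatically. The following is a minimal sketch using the boto3 "mgn" client; the source server ID is a placeholder, and the parameter names reflect the MGN UpdateLaunchConfiguration API as an assumption, so check the SDK documentation for your version.

```python
import boto3

mgn = boto3.client("mgn", region_name="us-east-1")

# Hypothetical source server ID taken from the Application Migration Service console.
mgn.update_launch_configuration(
    sourceServerID="s-1234567890abcdef0",
    targetInstanceTypeRightSizingMethod="BASIC",  # let the service right-size the instance type
    launchDisposition="STARTED",                  # start the instance on launch
    copyPrivateIp=True,                           # copy the source server's private IP
    copyTags=True,                                # transfer server tags to the target
    licensing={"osByol": True},                   # bring your own operating system license
)
```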
Launch cutover instance
You can cut over one source server at a time or simultaneously cut over multiple source servers. For each cutover, Application Migration Service does the following:
• Deletes any previously launched test instance and dependent resources
• Launches a new cutover instance that reflects the most up-to-date state of the source server
• Informs you of success or failure
150

After you have added your source servers to Application Migration Service, launch settings are created
for each server. Application Migration Service automatically creates an EC2 Launch Template for each
new source server. The launch settings are a set of instructions that determine the configuration of each
test instance or cutover instance. You can use the default settings or configure the settings to fit your
requirements.
Agentless replication
[Architecture diagram: the Application Migration Service vCenter Client runs on a dedicated VM in the corporate data center alongside the vCenter appliance and guest VMs. It uses the vCenter API and the VMware Virtual Disk Development Kit to read guest disks and ships snapshots over agent control protocols to Application Migration Service (MGN) in the AWS Region. Staging area resources are created automatically, and automated orchestration and system conversion facilitate short cutover windows for launching test and cutover Amazon EC2 instances. The numbered steps 1–4 correspond to the student notes below.]
151

|Student notes
This architecture diagram has two main components, one of which is the source application
environment. In this diagram, it is represented as a corporate data center, but it can be a cloud
environment or an on-premises environment. The second component is the target environment on AWS
which contains the target infrastructure and the migration tools and resources. The target infrastructure
on AWS includes a staging area where the application servers are replicated and a production area where
the servers are deployed, tested, and configured to receive production traffic during cutover.

Application Migration Service supports agentless replication from VMware vCenter versions 6.7 and 7.x
to AWS. Agentless snapshot-based replication helps you to replicate source servers on your vCenter
environment into AWS without installing the AWS replication agent.

1. To facilitate agentless replication, you must dedicate at least one virtual machine, or VM, in your
vCenter environment to host the Application Migration Service vCenter Client. The Application
Migration Service vCenter Client is a software bundle distributed by Application Migration Service and
is available for installation as a binary installer. The installation process will install services on the
client VM so that Application Migration Service will remotely discover your VMs that are suitable for
agentless replication.

2. The services also perform data replication between your vCenter environment and AWS through the
use of periodic snapshot shipping. Application Migration Service performs the replication by
synchronizing with the vCenter appliance. The staging area and production area interact with EC2
instances for compute, Amazon S3 buckets for storage, and Application Migration Service.

3. Launch test instances and perform acceptance tests on the servers. After the test instance is tested
successfully, finalize the test and delete the test instance.
4. Launch the cutover instance. Confirm that the cutover instance was launched successfully
and then finalize the cutover.
Requirements for agentless replication
• Generate an IAM user and obtain an access key and secret key.
• Install the Application Migration Service vCenter Client.
• The vCenter Client requires the following:
  • Outbound and inbound network connectivity to the Application Migration Service API endpoints
  • Outbound and inbound network connectivity to the vCenter endpoint
152

The Application Migration Service vCenter Client is a software bundle distributed by Application
Migration Service and is available for installation as a binary installer. To use the Application Migration
Service vCenter Client, you must first generate the correct IAM credentials. You create at least one IAM
user, and assign it the proper permission policies. You generate an access key ID and secret access key,
which you enter during installation.

To facilitate agentless replication, you dedicate at least one VM in your vCenter environment to host the
Application Migration Service vCenter Client. The installation process installs services on the client VM.
This gives Application Migration Service the ability to remotely discover VMs that are suitable for
agentless replication. The services also perform data replication between your vCenter environment and
AWS through the use of periodic snapshot shipping.

You must install the Application Migration Service vCenter Client on a VM that has outbound and inbound
network connectivity to the Application Migration Service API endpoints and outbound and inbound
network connectivity to the vCenter endpoint. Customers who want to use AWS PrivateLink can use AWS
VPN or Direct Connect to connect to AWS.

For more information about agentless replication using Application Migration Service, see Agentless
Snapshot Based Replication for vCenter Source Environments at
https://docs.aws.amazon.com/mgn/latest/ug/agentless-mgn.html.
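
As a rough sketch of the first requirement, the following uses boto3 to create an IAM user, attach a managed policy, and generate the access key pair that the vCenter Client installer prompts for. The user name is a placeholder and the managed policy name shown (AWSApplicationMigrationVCenterClientPolicy) is an assumption; verify the required permissions in the Application Migration Service documentation before using it.

```python
import boto3

iam = boto3.client("iam")
USER_NAME = "mgn-vcenter-client"  # hypothetical user name for the vCenter Client

# Create a dedicated IAM user for the Application Migration Service vCenter Client.
iam.create_user(UserName=USER_NAME)

# Attach the managed policy (assumed name) that grants the vCenter Client permissions.
iam.attach_user_policy(
    UserName=USER_NAME,
    PolicyArn="arn:aws:iam::aws:policy/AWSApplicationMigrationVCenterClientPolicy",
)

# Generate the access key ID and secret access key that you enter during installation.
key = iam.create_access_key(UserName=USER_NAME)["AccessKey"]
print("Access key ID:", key["AccessKeyId"])
print("Secret access key:", key["SecretAccessKey"])
```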
Agentless migration operations
Discovery
• Scan the vCenter environment to detect source servers.
• Add source servers to Application Migration Service in
DISCOVERED state.
• Initiate replication.

Replication
• Start snapshot shipping processes.
• Perform initial sync.
• Sync disk changes to customer’s target AWS
environment.

153

Agentless snapshot-based replication is divided into two main operations: discovery and replication.

The discovery process involves periodically scanning your vCenter environment to detect source server
VMs that are suitable for agentless replication and adding these VMs to the Application Migration Service
console. After a source server has been added, you can choose to initiate agentless replication on the
source VM using the Application Migration Service API or console. The discovery process also collects all
the necessary information from vCenter to perform an agentless conversion process after a migration job
is launched.

The replication process involves continuously starting and monitoring the snapshot shipping processes on
the source server VM being replicated. A snapshot shipping process is a long-running, logical operation
that consists of taking a VMware snapshot on the replicated VM and launching an ephemeral replication
agent process. This process uses VMware's Changed Block Tracking (CBT) feature to identify the locations of changed volume data, uses the Virtual Disk Development Kit to read the modified data, and sends the data from the source environment to the customer's target AWS account.

The first snapshot shipping process performs an initial sync that sends the entire disk contents of the
replicating VM into AWS. Subsequent snapshot shipping processes will use CBT to only sync disk changes
to the customer’s target AWS account. Each successful snapshot shipping process completes the
replication operation by creating a group of consistent Amazon EBS snapshots in the customer’s AWS
account, which can then be used by the customer to launch test and cutover instances through the
regular Application Migration Service mechanisms.

For more information, see Agentless snapshot based replication for vCenter source environments at
https://docs.aws.amazon.com/mgn/latest/ug/agentless-mgn.html.
Modernization
phases

154
Application modernization
• Application modernization is a recurrent process to achieve operational
excellence.
• A successful modernization project helps produce the following business
outcomes:
• Business agility
• Organizational agility
• Engineering effectiveness

• The AWS approach to application modernization is iterative and can be divided into three high-level phases: assess, modernize, and manage.

155

Modernization means taking your application environment in the form that it’s in today, potentially
legacy and monolithic, and transforming it into something that is more agile, elastic, and highly available.
In doing so, you transform your business into a modern enterprise.

Modernizing your applications helps you reduce costs, gain efficiencies, and make the most of your
existing investments. It involves a multi-dimensional approach to adopt and use new technology. This
helps you deliver portfolio, application, and infrastructure value faster and position your organization to
scale at an optimal price. After you optimize your applications, you must operate in that new, modernized
model without disruption to simplify your business operations, architecture, and overall engineering
practices.

A successful modernization project helps produce the following business outcomes:


• Business agility: Modernization improves an organization’s effectiveness at translating business needs
into requirements. This includes how responsive the delivery organization is to business requests, and
how much control the business has in releasing functionality into production environments.
• Organizational agility: Modernization can also include improvements to delivery processes. This might
include agile methodologies, DevOps ceremonies, clear role assignments, and overall collaboration
and communication across the organization.
• Engineering effectiveness: Modernization can also improve quality assurance, testing, continuous
integration and continuous delivery (CI/CD), configuration management, application design, and
source code management.
Modernization approach

Assess Modernize Manage

• Assess readiness and portfolio.


• Focus on replatform, refactor, and replace.
• Apply patterns and solution with time to value.
• Start modernization at a high pace.

156

The first phase in an organization’s modernization journey is the Assess phase. During this phase you
analyze the existing application portfolio, assess systems that need to be modernized, and identify
technical solutions needed for application modernization.

During the Modernize phase, you implement infrastructure solutions that address your reliability,
accessibility, and growth requirements. This includes using cloud-native approaches and optimized
languages and frameworks. You determine project goals and resource requirements and you build out
the implementation roadmap. The goal is to revitalize your applications by using a modernization
program that creates a modern, agile application architecture.

The Manage phase includes all the elements of effective change management, program management,
quality assurance, and service excellence. It includes relearning efforts, which provide a detailed
understanding of new application characteristics and existing infrastructure services. This reduces risks
that might be caused by subsequent modernization efforts. Application workloads need to access
platform services so that application teams can understand and optimize the runtime characteristics of
their application workloads. This means that application teams should treat the operational features of
modernized applications like all other application features, and microservice operations effectively
become part of engineering. Embracing this DevOps culture in cloud-native operations, as part of
building a site reliability engineering capability in the organization, is essential to successful
modernization adoption.

For more information about application modernization, visit Strategy for modernizing applications in the
AWS Cloud at https://docs.aws.amazon.com/prescriptive-guidance/latest/strategy-modernizing-
applications/welcome.html.
Streamline with modernization pathways
• Move to cloud native
• Move to containers
• Move to managed databases
• Move to modern DevOps
• Move to modern analytics
• Move to open source

157
AWS Well-Architected Framework

158
AWS Well-Architected Framework pillars

159

|Student notes
The AWS Well-Architected Framework describes key concepts, design principles, and architectural best
practices for designing and running workloads in the cloud. It helps you understand the pros and cons of
decisions you make while building systems on AWS. Using the framework helps you learn architectural
best practices for designing and operating secure, reliable, efficient, cost-effective, sustainable workloads
in the AWS Cloud. It provides a way for you to consistently measure your architectures against best
practices and identify areas for improvement. The process for reviewing an architecture is a constructive
conversation about architectural decisions and is not an audit mechanism.

The framework is based on six foundational pillars:


• Security: The security pillar describes how to take advantage of cloud technologies to protect data,
systems, and assets in a way that can improve your security posture.
• Operational Excellence: With this pillar, you can support development, run workloads effectively, gain
insight into their operations, and continuously improve supporting processes and procedures to
deliver business value.
• Reliability: The reliability pillar encompasses the ability of a workload to perform its intended function
correctly and consistently when it’s expected to. This includes the functionality to operate and test the
workload through its total lifecycle.
• Performance Efficiency: With this pillar, you can use computing resources efficiently to meet system
requirements and maintain that efficiency as demand changes and technologies evolve.
• Cost Optimization: With this pillar, you can run systems to deliver business value at the lowest price
point.
• Sustainability: With this pillar, you can continually improve sustainability impacts by reducing energy
consumption and increasing efficiency across all components of a workload by maximizing the benefits
from the provisioned resources and minimizing the total resources required.

For more information, see AWS Well-Architected at https://aws.amazon.com/architecture/well-architected.
AWS Well-Architected Framework usage
• Evaluate, test, and improve your application
architecture in the cloud environment.
• The framework includes the following tools
that help customers to evaluate application
design:
• AWS Well-Architected Tool
• AWS Well-Architected Lenses

160

The AWS Well-Architected Tool provides a mechanism for regularly evaluating workloads, identifying
high-risk issues, and recording improvements. It is available at no cost in the console. To use the AWS WA
Tool, define your workload, apply one of the AWS Well-Architected lenses, and begin your review. The
tool then provides an action plan to help you build for the cloud using the defined best practices.

AWS Well-Architected Lenses extend the guidance offered by the AWS Well-Architected Framework to
specific industry and technology domains, such as machine learning, data analytics, serverless, high
performance computing, Internet of Things, SAP, streaming media, the games industry, hybrid
networking, and financial services. These lenses can be used to evaluate workloads and application
design.
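
A workload review in the AWS Well-Architected Tool can also be defined through its API. The sketch below uses the boto3 "wellarchitected" client to register a workload and list the review questions for the core lens; the workload name, Region, and review owner are placeholders, and the exact parameters and response fields are assumptions to verify against the SDK documentation.

```python
import boto3

wat = boto3.client("wellarchitected", region_name="us-east-1")

# Define the workload to review against the core AWS Well-Architected lens.
workload = wat.create_workload(
    WorkloadName="migrated-web-app",              # placeholder workload name
    Description="Web application rehosted with Application Migration Service",
    Environment="PRODUCTION",
    AwsRegions=["us-east-1"],
    ReviewOwner="[email protected]",      # placeholder review owner
    Lenses=["wellarchitected"],
)
workload_id = workload["WorkloadId"]

# List the questions for the core lens to begin the review.
answers = wat.list_answers(WorkloadId=workload_id, LensAlias="wellarchitected")
for answer in answers["AnswerSummaries"]:
    print(answer["PillarId"], "-", answer["QuestionTitle"])
```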
Importance of the AWS Well-Architected Framework

Migrations:
• To prevent negative impact from workload failures
• To apply the AWS Well-Architected Framework to the Assess, Mobilize, and Migrate and Modernize phases

Workload re-design:
• To identify the impact that proposed design changes create
• To identify potential risks
• To ensure continuous improvement

New workloads:
• To align stakeholders to a common approach
• To ensure appropriate availability and business continuity
161

The AWS Well-Architected Framework applies to conducting migrations, re-designing existing workloads,
and designing new workloads. You should incorporate it throughout migration projects to optimize the
benefits of the cloud, such as minimizing negative impacts from workload failures.

Organizations might have a different understanding of how an impacted workload translates to a business continuity perspective. For example, a migrated workload that is deployed in a single Availability
Zone without data replication, security layers, cost measurements, recovery design, and recovery testing
is destined to impact business continuity at some point. The AWS Well-Architected Framework applies to
the Assess, Mobilize, and Migrate and Modernize phases of migration.

As you mature in AWS, you might redesign a workload or add new features. You should also apply the
AWS Well-Architected Framework to the redesigned workload to ensure a continuous improvement of
existing workloads. In some scenarios, a workload might have multiple stakeholders. When one team
identifies a need to remediate an issue in the workload, another team might not align with that particular
need. By employing data assessed by the AWS Well-Architected Framework, you can help align with all
stakeholders of the workload. Recognize that the AWS Well-Architected Framework is not an audit or a
punitive strategy; instead, it’s a collaboration and continuous workload improvement tool.
Operational excellence
• Anticipate failure.
• Perform operations as code.
• Make regular, small, reversible
changes.
• Refine operations procedures
frequently.

162

The operational excellence pillar has a strong role in developing an operational strategy for your migrated
workloads in the cloud. The design principles for the operational excellence pillar include the following:
• Anticipate failure: Perform pre-mortem exercises to identify potential sources of failure, so they can
be removed or mitigated. Test failure scenarios and validate understanding of their impact. Test
response procedures to ensure that they are effective and teams are familiar with their use.
• Perform operations as code: Apply the same engineering discipline used for application code to the
entire cloud environment. You can define your entire workload as code. You can script your
operational procedures and automate them to automatically respond to events. By performing
operations as code, you limit human error and empower consistent responses to events.
• Make frequent, small, reversible changes: Design workloads to assist components to be updated
regularly. Make changes in small increments that can be reversed if they fail without major impact,
when possible.
• Refine operations procedures frequently: As you re-use on-premises operations procedures, look for
opportunities to improve. As you evolve your workloads, also update the procedures.
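
As one small illustration of performing operations as code and anticipating failure, the sketch below scripts an automated response to an EC2 system status check failure by creating a CloudWatch alarm with the EC2 recover action. The instance ID and alarm name are placeholders, and it assumes the instance type supports the recover action.

```python
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")
INSTANCE_ID = "i-0123456789abcdef0"  # placeholder instance ID

# Recover the instance automatically if the underlying hardware fails the
# system status check for two consecutive one-minute periods.
cloudwatch.put_metric_alarm(
    AlarmName=f"recover-{INSTANCE_ID}",
    Namespace="AWS/EC2",
    MetricName="StatusCheckFailed_System",
    Dimensions=[{"Name": "InstanceId", "Value": INSTANCE_ID}],
    Statistic="Maximum",
    Period=60,
    EvaluationPeriods=2,
    Threshold=1,
    ComparisonOperator="GreaterThanOrEqualToThreshold",
    AlarmActions=["arn:aws:automate:us-east-1:ec2:recover"],
)
```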
Customer goals for operational excellence
• Elevate operational excellence and security
• Augment cloud teams and operations
• Improve operational performance and reduce costs
• Have access to cloud experts and deep specialization
163

The operational excellence pillar focuses on running and monitoring systems, while continually improving
processes and procedures. Following are some of the tasks you would want to accomplish:
• Elevate operational excellence and security.
• Augment cloud teams and operations.
• Improve operational performance and reduce costs.
• Have access to cloud experts and deep specialization.
AWS Managed
Services (AMS)

164

In this section, you will learn about AMS. AMS provides solutions for cloud operations so that you can
concentrate on developing features for your customers.
AWS Managed Services

Provides the following operational services to customers:
• Security monitoring, incident management, and incident response
• Extend and scale customers' platform and app teams
• Achieve cost effectiveness by improving operations
• Extend a focused delivery team for access to cloud experts

165

|Student notes
AWS Managed Services, also known as AMS, offers a prescriptive, guided architecture that helps with compliance, security, and scalability. AMS provides organizations with the framework, tools, support, and automation that can reduce risks. The architecture supports compliance programs including PCI DSS, ISO 9001, ISO 27001, ISO 27017, ISO 27018, and HIPAA eligibility.

AMS is focused on moving the important but undifferentiated tasks away from human hands to
automation. This allows cloud experts to focus on the higher-level tasks that matter to the business.

AMS customers have direct and indirect access to the same cloud experts that AWS uses to build its services. This helps AMS customers to reach into the AMS organization and use AMS as an example of what good looks like as they start to build centers of cloud competency in-house.

AMS continuously and consistently works with its customers to identify ways to optimize the
environment and operating model. This helps ensure that the customers' investments are fine-tuned to
what they actually need to consume.

Except for endpoint protection, AMS built its platform to use available AWS tools. Contractually, AMS is a
month-to-month, consumption-based service, similar to the AWS utility compute model. AMS wants its
customers to have the same flexibility in their managed environments as they do with AWS. AMS also
offers its service for as long as customers want it and does not hold them to an arbitrary contract term.
AMS capabilities

166

|Student notes
AMS uses curated collections of AWS management tools and services to provide operational excellence
to customer applications. AMS is able to achieve this through provisioning, monitoring, patching,
ensuring availability, security, compliance, managing change, incidents, and costs.

Provisioning
AMS helps customers quickly and easily deploy their cloud infrastructure. It simplifies the on-demand
provisioning of commonly used, predefined cloud stacks. AMS is designed to meet customers’ application
needs. Automation and integration with your customers’ existing IT service management catalog helps
them to quickly stand up applications in either test or production environments through a self-service
portal.

Monitoring and event management


With AMS, your customer’s managed environment is configured for logging and alerts, based on best
practices to ensure security and system health. AMS monitors, correlates, and investigates alerts to
detect and resolve incidents proactively. AMS also aggregates and stores all operational logs. Customers
have full access to Amazon CloudWatch and AWS CloudTrail for transparency.

Patch and continuity management


AMS helps keep customers’ resources current and secure by taking care of patching and backup activities.
When updates or patches are released by OS vendors, AMS applies them in a timely, consistent manner
to minimize business impacts.

AMS applies critical security patches immediately, while others are applied based on customers’
requested schedules. Backups of stacks are automated using Amazon Elastic Block Store (Amazon EBS)
and Amazon Relational Database Service (Amazon RDS) snapshots. They can be restored if a failure or
outage occurs, ensuring business continuity.
Availability
AMS is hosted in multiple AWS Regions. Each Region is a separate geographic area. All
components of the AMS service are deployed, validated, and operationalized within a
Region. This helps customers who require full support solely out of one Region with their
Region redundancy plan requirements. AMS also offers service commitments for key aspects
of the service, ensuring customers receive a high level of operational service for AWS.

Security and access management


AMS protects customers’ information assets and helps keep their AWS solutions secure. With
anti-malware protection, intrusion detection, and intrusion prevention systems, AMS
manages security policies, and can quickly recognize and respond to any intrusion. By
configuring default AWS security capabilities and best practices, such as IAM roles and
Amazon EC2 security groups, AMS removes the complexity of managing multiple
authentication mechanisms. Your customers can use their corporate credentials to access
their AWS resources.

Compliance
AMS offers a secure landing zone and a step-by-step process for extending customers’
security and identity perimeter to the cloud. AMS also provides features to help customers
meet various compliance program requirements, including HIPAA, HITRUST, GDPR, SOC, NIST,
ISO, PCI, FedRAMP, and others. AMS rigor and controls help enforce your customers’
corporate and security infrastructure policies. AMS enables customers to develop solutions
and applications using their preferred development approach.

Change management
AMS provides a secure, efficient way to make controlled changes to infrastructures that can
help with compliance. Changes are approved and automated through an approval engine,
and can be scheduled as self-service. Whether your customer wants to deploy a new Amazon EC2 stack or change their Amazon Relational Database Service (Amazon RDS) database configuration, AMS helps them to quickly and easily request it in a dedicated self-service console.

Incident management
AMS monitors overall health of infrastructure resources and handles the daily activities of
investigating and resolving incidents. For example, if an EC2 instance stops unexpectedly,
AMS detects the event and automatically launches another instance. AMS also takes
appropriate action to minimize and avoid service interruption.

Cost management
Your customer’s personal cloud service delivery manager will provide a monthly summary of
key performance metrics. This includes operational activities, events and their impacts, and
recommendations to optimize platform use and cost to get the most out of AWS.
AMS operations plans

AWS Managed Services offers two operations plans:
• AMS Accelerate
• AMS Advanced

167

|Student notes
AMS is available with two operations plans: AMS Accelerate and AMS Advanced. An operations plan
offers a specific set of features and has differing levels of service, technical capabilities, requirements,
price, and restrictions. Our operations plans give you the flexibility to select the right-sized operational
capabilities for each of your AWS workloads.

AMS Accelerate helps you operate the day-to-day infrastructure management of your new or existing
AWS environment. AMS Accelerate provides operational services, such as monitoring, incident
management, and security. AMS Accelerate also offers an optional patch add-on for EC2-based workloads
that require regular patching. You decide which AWS accounts you want AMS Accelerate to operate, the
Regions you want AMS Accelerate to operate in, the add-ons you require, and the service-level
agreements (SLAs) you need.

AMS Advanced provides full-lifecycle services to provision, run, and support your infrastructure. In
addition to operational services, AMS Advanced also includes additional services, such as landing zone
management, infrastructure changes and provisioning, access management, and endpoint security.

AMS Advanced deploys a landing zone to which you migrate your AWS workloads and receive AMS
operational services. Our managed multi-account landing zones are pre-configured with the
infrastructure to facilitate authentication, security, networking, and logging.

AMS Advanced also includes a change and access management system that protects your workloads by
preventing unauthorized access or the implementation of risky changes to your AWS infrastructure.
Customers need to create a Request for Change (RFC) using the change management system to
implement most changes in AMS Advanced accounts. You create RFCs from a library of automated
changes that are pre-vetted by security and operations teams or request manual changes that are
reviewed and implemented by our operations team if they are deemed both safe and supported by AMS
Advanced.
APN Navigate
• Partner enablement program
• Step-by-step guidance on how to build, market, sell,
and specialize as an AWS Partner
• Enables your company by providing clear and
effective guidance on how to gain AWS technical
knowledge

168

|Student notes
APN Navigate is the AWS Partner enablement program that provides AWS Partners with prescriptive
guidance on how to build their AWS business and specialize on AWS. APN Navigate provides step-by-step
guidance on how to build, market, sell, and specialize as an AWS Partner. It offers direction to AWS
Partners on how to establish fundamental building blocks to become a successful AWS Partner.

APN Navigate offers a single, consolidated experience that ties all APN programs together. At no cost,
organizations gain access to a curated list of business and technical enablement recommended by AWS
experts, saving you time and helping your organization align to AWS best practices faster. It provides a
clear path and removes the ambiguity on how to meet top AWS standards.

Navigate can be broken down into three main themes: learn, act, and connect. First, as this is an
enablement program, we encourage you and your teams to learn as much as you can from the curated
list of resources in your dedicated toolbox.

Second, Navigate offers a goals checklist that clearly outlines and reminds organizations of the actions
they need to take to help them establish their AWS business and progress towards achieving tier status
and APN program designations.

Third, we've created checkpoints throughout Navigate that are key to your success. The AWS Partner Goals Checklist clearly outlines when it's a good time to connect with your AWS Partner Manager.
Application
optimization

169
Application optimization examples
• Containerize – Move applications to containers
• Employ microservices – Decouple application components
• Use serverless – Run code without managing servers
• Automate – Scale service consumption to meet demand
170

Organizations can modernize in a few ways.

As a first step, assess the application portfolio and determine the migration patterns. You can combine
and optimize migration approaches by using automation.

The following are options to consider:


• Containerization: Moving applications into containers provides many benefits, such as the ability to
scale compute resources with greater precision and granularity.
• Employ microservices: Microservices architectures make applications more manageable to scale and
faster to develop. They help you innovate and accelerate time-to-market for new features. For more
information about microservices, see Microservices at https://aws.amazon.com/microservices.
• Serverless: Serverless compute services such as AWS Lambda let you run code without provisioning or managing servers (a minimal handler sketch follows this note).
• More automation: AWS services such as Elastic Load Balancing (ELB) and AWS Auto Scaling can automatically scale resources to meet demand.

For more information about a phased approach to modernizing, see Phased approach to modernizing
applications in the AWS Cloud at https://docs.aws.amazon.com/prescriptive-
guidance/latest/modernization-phased-approach/welcome.html.
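
As a minimal illustration of the serverless option above, the following is a sketch of an AWS Lambda handler written in Python. The event shape is hypothetical; in practice the function would sit behind a trigger such as Amazon API Gateway or an Amazon S3 event.

```python
import json

def lambda_handler(event, context):
    """Minimal Lambda handler: returns a greeting without any servers to manage."""
    name = event.get("name", "world")  # hypothetical event field
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```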
Before modernization: migrate as-is
[Architecture diagram: users reach the application through Amazon Route 53, which routes traffic to application servers and database servers rehosted as-is from the corporate data center onto Amazon EC2 instances in the AWS Cloud. Callouts: rehost to resilient, secure networks; gain benefits of scale; no rewriting or reconfiguring applications.]
171

|Student notes
This solution shows a basic lift-and-shift migration. The migration process rehosts on-premises servers to Amazon EC2 instances and uses Amazon Route 53 for DNS routing. Application and database servers migrate onto EC2 instances as-is and gain the benefits of reliable, scalable computing in a secure network. Amazon Route 53 performs weighted routing to split traffic between the application instances.
Modernized infrastructure
[Architecture diagram: users reach the application through Amazon Route 53 and Amazon CloudFront. Elastic Load Balancing distributes traffic to application servers on Amazon EC2 in an Auto Scaling group, backed by Amazon RDS for the database and Amazon S3 for static files. Callouts: rehost to resilient, secure networks; replatform critical components; minimize changes.]
172

|Student notes
This solution shows a basic approach to modernization. Unlike the previous example where on-premises
servers are rehosted, this example combines rehosting and replatforming with minimal changes:
• Rehost: On-premises application servers are rehosted on Amazon EC2 instances.
• Replatform: On-premises, self-managed databases are replatformed to managed databases on
Amazon RDS.
• Infrastructure changes: Amazon CloudFront and ELB are added to the infrastructure to provide
scalability and resilience to the rehosted applications and servers. Amazon S3 hosts static files.

By migrating existing applications and services to Amazon EC2 instances, you gain the benefits of reliable,
scalable computing in a secure network. You will not need to deconstruct your applications, which
refactoring sometimes requires.
Knowledge check

173
Knowledge check 1 – question
What is the last step of the migration process when migrating source servers using a replication agent?
A. Stop all operational services on the source server.
B. Archive the source server.
C. Finalize the test and delete the test instance after the test is successful.
D. Confirm the cutover instance was launched successfully and then finalize the cutover.
174
Knowledge check 1 – answer
What is the last step of the migration process when migrating source servers using a replication agent?
The correct response is B.
A. Stop all operational services on the source server.
B. Archive the source server.
C. Finalize the test and delete the test instance after the test is successful.
D. Confirm the cutover instance was launched successfully and then finalize the cutover.
175
Knowledge check 2 – question
Which statements are correct about the AWS Well-Architected Framework? (Select TWO.)
A. The AWS Well-Architected Framework should only be used during the Assess phase of migrating to AWS.
B. The AWS Well-Architected Framework should be incorporated during the Assess, Mobilize, and Migrate and Modernize phases of migration.
C. The key pillars in the AWS Well-Architected Framework are the security and cost optimization pillars.
D. The AWS Well-Architected Framework does not require an AWS Solutions architect or AWS Professional Services to run an assessment.
E. The AWS Well-Architected Framework is made up of three key pillars.
176
Knowledge check 2 – answer
Which statements are correct about the AWS Well-Architected Framework? (Select TWO.)
The correct responses are B and D.
A. The AWS Well-Architected Framework should only be used during the Assess phase of migrating to AWS.
B. The AWS Well-Architected Framework should be incorporated during the Assess, Mobilize, and Migrate and Modernize phases of migration.
C. The key pillars in the AWS Well-Architected Framework are the security and cost optimization pillars.
D. The AWS Well-Architected Framework does not require an AWS Solutions architect or AWS Professional Services to run an assessment.
E. The AWS Well-Architected Framework is made up of three key pillars.
177
Lab 2: Application Migration with AWS Application Migration Service
In this lab, you will perform the following tasks:
• Demonstrate how to migrate a server to AWS using Application Migration Service.
• Install and configure a web application server.
• Install the AWS replication agent on a source machine and configure Application Migration Service for replication.
• Replicate and migrate a server from on-premises to AWS.
178
Questions?

Corrections, feedback, or other questions?


Contact us at https://support.aws.amazon.com/#/contacts/aws-training.
All trademarks are the property of their owners.

179
Module 5
Course Summary

180
Recap

181
Summary
In this course we discussed the following:
• Business and technical drivers for migrating to the cloud
• Three phases of a migration
• AWS migration tools and best practices
• Cloud migration strategies
• Application optimization

182
AWS Rapid Migration
An end-to-end solution to de-risk
and accelerate moving to the cloud

183
How Rapid Migration works

Prerequisite AWS investments:
• Use detailed portfolio and licensing assessment to scope timeline and cost.
• Access migration readiness gap analysis.
• Create a customized learning plan.

Use a prescriptive framework and tools:
• Access predesigned templates, runbooks, and backlogs that yield velocity.
• Gain control and real-time visibility into migration activities and progress.

Create a fully automated AWS foundation:
• Implement a secure, resilient, and scalable AWS foundation.
• Use no-code solutions to manage and govern a multi-account environment.

Plan and automate migration tasks:
• Use comprehensive discovery and analysis to validate the migration strategy and plan.
• Gain increased automation to simplify, expedite, and reduce migration costs.

Access cloud ops and specialist support:
• Augment your team's AWS infrastructure operating capabilities with AWS support.
• Access assigned technical engineers with at minimum business support SLAs.
184

Rapid Migration provides an end-to-end migration approach. We’ve taken our experience of thousands of
migrations to create a set of predefined automated solutions and prescriptive tooling. We can use them
to accelerate migrations into the cloud.

It all starts with a set of prerequisite Amazon Web Services (AWS) investments. First is a detailed portfolio
and licensing assessment for the migration. It also involves a readiness gap analysis, including the
building of a customized learning plan. These inputs are key to determining the size and scope of the
Rapid Migration project.

There are four main features of Rapid Migration. They are as follows:
• Rapid Migration includes a streamlined set of prescriptive, predesigned templates, runbooks, and
backlogs for Rapid Migration. Governance tooling tracks and provides detailed insights into migration
activities.
• It includes a simplified process for creating a scalable AWS foundation and landing zone by
predesigning architectures to AWS best practices. You use this to govern and operate the environment
at scale. It also lessens the time to the first migration wave. You can start realizing the value of the
cloud quicker.
• The migration itself begins with a well-documented plan that started from the initial scope gathering.
The portfolio and discovery work continues through the program. As the migration waves are formed,
you use automated migration tooling, turning your migrations into a factory process that we can scale.
• No migration is complete without operations. Rapid Migration brings an infrastructure operating team
to the environment from the start. It handles things like patching, backup, monitoring, and more. It
also brings an additional technical subject matter expert (SME) to help with any blockers that might
come up as workload migrations begin.
How to get started

Migration Readiness Assessment:
• You can use this to align your stakeholders on migration business objectives and readiness gap analysis.

Discovery acceleration:
• This involves AWS-funded infrastructure and application discovery to create the initial wave plan, timeline, and cost of migration.

Learning needs analysis:
• AWS invests to help identify cloud skills gaps and develop a customized learning plan.
• This is a data-driven approach to align training investments with business goals.
185

There are three key areas AWS invests in to get started. They are as follows:
• Migration Readiness Assessment (MRA): You can use this to align on migration business objectives and readiness gap analysis.
• Discovery acceleration: This AWS investment for application and infrastructure discovery is used to
create an initial wave plan. It contains a set of directional R patterns for the migration and provides an
optimized licensing assessment. The data discovered informs the statement of work (SOW) with
timeline and costs.
• Learning needs analysis: This is used to create a customized learning plan to be used alongside the
migration.
Resources

186
Course evaluation
• Sign in to aws.training via APN Partner Central: https://partnercentral.awspartner.com/LmsSsoRedirect?RelayState=%2fAccount%2fTranscript%2fArchived
• Choose My Account > Transcript > Archived.
• Find today's course and choose Evaluate.
Additional resources
For more information, see the following resources:
• AWS Prescriptive Guidance – https://aws.amazon.com/prescriptive-guidance/
• AWS Competency Program – https://aws.amazon.com/partners/competencies
• Migration Consulting and Delivery Competency Checklist – https://apn-checklists.s3.amazonaws.com/competency/migration/consulting/CNIBv7Tt8.html
• "Migrate with confidence" – https://aws.amazon.com/cloud-migration
• AWS Migration Acceleration Program – https://aws.amazon.com/migration-acceleration-program
• "Service Description" in the AMS Advanced User Guide – https://docs.aws.amazon.com/managedservices/latest/userguide/ams-sd.html
188

|Student notes
For more information, see the following resources:
• AWS Prescriptive Guidance at https://aws.amazon.com/prescriptive-guidance
• AWS Competency Program at https://aws.amazon.com/partners/competencies
• Migration Consulting/Delivery Competency Checklist at https://apn-
checklists.s3.amazonaws.com/competency/migration/consulting/CNIBv7Tt8.html
• “Migrate with confidence” at https://aws.amazon.com/cloud-migration
• AWS Migration Acceleration Program at https://aws.amazon.com/migration-acceleration-program
• “Service Description” in the AMS Advanced User Guide at
https://docs.aws.amazon.com/managedservices/latest/userguide/ams-sd.html
Continue your
learning

189
AWS Certification levels

Foundational: Knowledge-based certification for foundational understanding of AWS Cloud. No prior experience necessary.

Associate: Role-based certifications that showcase your knowledge and skills and build your credibility as an AWS Cloud professional. Prior AWS Cloud or strong on-premises IT experience recommended.

Professional: Role-based certifications that validate advanced skills and knowledge. At least two years of AWS Cloud experience recommended.

Specialty: Certifications focused on specific topics. Recommended level of experience varies.
190

AWS Certification helps learners to build credibility and confidence by validating their cloud expertise
with an industry-recognized credential. Certification helps organizations to identify skilled professionals
who can lead cloud initiatives by using Amazon Web Services (AWS).

The slide shows the AWS certifications that are currently available. To earn an AWS certification, you
must earn a passing score on a proctored exam. Each certification level for role-based certifications
provides a recommended experience level with AWS Cloud services as follows:
• Professional – Two years of comprehensive experience designing, operating, and troubleshooting
solutions by using the AWS Cloud
• Associate – One year of experience solving problems and implementing solutions by using the AWS
Cloud
• Foundational – Six months of fundamental AWS Cloud and industry knowledge

Specialty certifications focus on a particular technical domain. The recommended experience for taking a
specialty exam is technical experience in the domain as specified in the exam guide.

AWS does not publish a list of all services or features that are covered in a certification exam. However,
the exam guide for each exam lists the current topic areas and objectives that the exam covers. For more
information, the exam guides and other preparation materials are available on the AWS Certification
exam preparation page at https://aws.amazon.com/certification/certification-prep/.

The information on this slide is current as of March 2023. However, exams are frequently updated, and
the details regarding which exams are available—and what is tested by each exam—are subject to
change. For more information about the latest AWS certification exam information, see the AWS
Certification page at https://aws.amazon.com/certification/.

You are required to update your certification (or recertify) every 3 years. For more information, see the
AWS Recertification page at https://aws.amazon.com/certification/recertification/.
Core 4 – Steps to prepare for an AWS Certification exam
Approach exam day with confidence

• Step 1: Get to know the exam and exam-style questions.
• Step 2: Learn about exam topics in AWS Skill Builder.
• Step 3: Take exam preparation training in AWS Skill Builder.
• Step 4: Validate your exam readiness with Official Practice Exams.

Explore all AWS Certification exams.
191

|Student notes
This course includes content that might be related to an AWS Certification exam. To continue preparing
for the exam, follow these core 4 steps.

For more information about each exam, you can scan the QR code to see “Explore AWS Certification
exams” at https://aws.amazon.com/certification/exams/.
Prepare for AWS Certification – step 1
Get to know the exam and exam-style questions
1. Review the exam guide.
2. Sign up for access to AWS Skill Builder, the AWS online learning center.
3. Enroll and take an AWS Certification Official Practice Question Set.
192

Step one is getting to know the exam and exam-style questions.

You can review the exam guide for each exam by exploring the AWS Certification Exams page. For more
information, see Explore all AWS Certification exams at https://aws.amazon.com/certification/exams/.

For sample exam questions, you can sign up on AWS Skill Builder. Within Skill builder, you can enroll in an
Official Practice Question Set. For more information, see AWS Skill Builder: Your learning center to build
in-demand cloud skills at https://explore.skillbuilder.aws/learn.

The questions in the practice sets are created by following the same process as questions that you will
see on the actual AWS Certification exams. They include detailed feedback and recommended resources
to help you prepare for your exam.
Prepare for AWS Certification – step 2
Learn about exam topics in Skill Builder
1. Identify gaps in your exam topic knowledge.
2. Enroll in self-paced digital courses you need to learn about.
3. Access AWS Builder Labs to get hands-on; apply your skills in the AWS Console.
193

Step two is brushing up on exam topics.

In addition to reviewing the exam guide and enrolling in self-paced courses on AWS Skill Builder, you
can explore AWS Builder Labs to get hands-on experience with AWS. For more information, see
AWS Builder Labs: Learn cloud skills in a live AWS environment at
https://aws.amazon.com/training/digital/aws-builder-labs/.
Prepare for AWS Certification – step 3
Take exam prep training in AWS Skill Builder

1. AWS Skill Builder offers courses across all domains.
2. AWS Builder Labs contains more than 500 self-paced labs.
3. Use gaming to prepare for your AWS Certification with AWS Cloud Quest.
194

|Student notes
Next, you can take exam preparation courses in AWS Skill Builder.

Skill Builder also offers many resources that you can use to address any gaps in your knowledge that you
discover.
1. Skill Builder offers courses across all domains.
2. There are also more than 500 self-paced labs.
3. Finally, if you'd like to gain hands-on experience with AWS services by playing an actual game, try
AWS Cloud Quest.

Note: some of these resources require a digital subscription.


Prepare for AWS Certification – step 4
Validate your exam readiness

Take an AWS Certification Official Practice Exam with exam-style scoring.

195

Finally, determine your exam readiness by taking an official practice exam.

Each practice exam includes the same number of questions as the actual exam. The practice exams
provide practice with the same question style, depth, and rigor as the certification exam. They include
exam-style scoring and a pass or fail result. You'll also receive feedback on the answer choices for each
question, with recommended resources to deepen your understanding of key topics. You can simulate the
exam experience by taking a timed exam with answers shown only at the end, or choose other options,
such as an untimed exam or answers shown after you submit each question.
Register for your exam
Learn about options for taking the exam.

196

AWS offers flexible, convenient options for taking exams. Explore the Schedule an Exam page to choose
the exam option that works best for you. For more information, see Schedule an Exam: Find the testing
option that works best for you at https://aws.amazon.com/certification/certification-prep/testing/.
AWS Skill Builder online learning center

Continue to deepen the skills you need, your way, with more than 500 courses and interactive training
developed by the experts at AWS.
• Game-based learning
• Self-paced labs
• Use case challenges
• Exam preparation

Get started: https://aws.amazon.com/training/digital

197

|Student notes
Continue your learning with AWS Skill Builder, our online learning center.

Are you ready to achieve your goals at your pace? Free digital training on AWS Skill Builder offers more
than 500 on-demand courses and learning plans so you can build the skills that you need, your way.

Want to build problem-solving cloud skills in an interactive, engaging experience? A Skill Builder
subscription offers access to self-paced labs, practice exams, role-based games, and real-world challenges
to accelerate your learning.

For more information about how to learn more and get started, see AWS Skill Builder at
https://aws.amazon.com/training/digital.
Don’t miss these learning opportunities

• Free Digital Training: Learn with hundreds of free, self-paced digital courses on AWS fundamentals.
• Classroom Training: Deepen your technical skills and learn from an accredited AWS instructor.
• AWS Certification: Validate your expertise with an industry-recognized credential.

198

AWS Training and Certification is an organization dedicated to expanding and deepening knowledge of
AWS and to driving adoption of AWS services. Our programs are designed for customers, AWS Partners,
and AWS employees. Over the past several months, we have rolled out several new courses, training labs,
and certifications to our customers and partners.

Expand your AWS Cloud skills. For more information, see the following resources:
• Digital training – https://explore.skillbuilder.aws/
• Classroom training – https://aws.amazon.com/training
• AWS Certification – https://aws.amazon.com/certification
• AWS Workshops – https://workshops.aws/
• Tech Talks – https://aws.amazon.com/events/online-tech-talks/on-demand/
• AWS Ramp-Up Guides – https://aws.amazon.com/training/ramp-up-guides/
Thanks for participating!

Corrections, feedback, or other questions?
Contact us at https://support.aws.amazon.com/#/contacts/aws-training.
All trademarks are the property of their owners.

200
