SAP HANA on AWS
SAP HANA Guides
Amazon's trademarks and trade dress may not be used in connection with any product or service that is not
Amazon's, in any manner that is likely to cause confusion among customers, or in any manner that disparages or
discredits Amazon. All other trademarks not owned by Amazon are the property of their respective owners, who may
or may not be affiliated with, connected to, or sponsored by Amazon.
SAP HANA on AWS SAP HANA Guides
Table of Contents
Home (p. 1)
AWS Backint Agent for SAP HANA (p. 2)
  What is AWS Backint agent? (p. 2)
    How AWS Backint Agent for SAP HANA works (p. 2)
    Billing (p. 3)
    Supported operating systems (p. 3)
    Supported databases (p. 3)
    Supported Regions (p. 3)
  Get started (p. 3)
    Prerequisites (p. 3)
    Install and configure AWS Backint agent (p. 6)
  Back up and restore your SAP HANA system (p. 25)
    Backup and recovery using SQL statements (p. 25)
    Backup and recovery using SAP HANA Cockpit or SAP HANA Studio (p. 27)
    Get backup and recovery status (p. 27)
    Find your backup in an Amazon S3 bucket (p. 27)
    Schedule and manage backups (p. 28)
    Backup retention (p. 28)
  Verify signature (p. 28)
  Troubleshoot (p. 30)
    Agent logs (p. 30)
    Installation (p. 30)
    Backup and recovery (p. 31)
    Backup deletion (p. 35)
  Version history (p. 35)
Migrating SAP HANA to AWS (p. 38)
  Migration frameworks (p. 38)
    6 Rs (p. 38)
    AWS CAF (p. 39)
  Planning (p. 40)
    Understanding on-premises resource utilization (p. 40)
    Reviewing AWS automation tools for SAP (p. 40)
    Data tiering (p. 40)
    Prerequisites (p. 41)
  SAP HANA sizing (p. 42)
    Memory requirements for rehosting (p. 42)
    Memory requirements for replatforming (p. 42)
    Instance sizing for SAP HANA (p. 43)
    Network planning and sizing (p. 43)
    SAP HANA scale-up and scale-out (p. 44)
  Migration tools and methodologies (p. 44)
    AWS Quick Starts (p. 45)
    Migration using DMO with System Move (p. 45)
    SAP HANA classical migration (p. 46)
    SAP Software SUM DMO (p. 46)
    SAP HANA HSR (p. 46)
    SAP HANA HSR with initialization via backup and restore (p. 46)
    Backup/restore tools (p. 46)
    AWS Snowball (p. 47)
    Amazon S3 Transfer Acceleration (p. 48)
    AMIs (p. 48)
  Migration scenarios (p. 48)
SAP on AWS technical documentation provides detailed information on how to migrate, implement,
configure, and operate SAP solutions on AWS.
AWS Backint Agent for SAP HANA
Topics
• What is AWS Backint Agent for SAP HANA? (p. 2)
• Get started with AWS Backint Agent for SAP HANA (p. 3)
• Back up and restore your SAP HANA system with the AWS Backint Agent for SAP HANA (p. 25)
• Verify the signature of the AWS Backint agent and installer for SAP HANA (p. 28)
• Troubleshoot AWS Backint Agent for SAP HANA (p. 30)
• Version history (p. 35)
If you want to deploy an SAP HANA database application with AWS Backint agent, you can use AWS
Launch Wizard for SAP, a service that guides you through the sizing, configuration, and deployment of
SAP applications on AWS, and follows AWS cloud application best practices.
Topics
• How AWS Backint Agent for SAP HANA works (p. 2)
• Billing (p. 3)
• Supported operating systems (p. 3)
• Supported databases (p. 3)
• Supported Regions (p. 3)
You can install the agent using AWS Systems Manager, or download and manually install
and configure the agent. When the agent is installed, you can back up your SAP HANA
database directly to Amazon S3.
AWS Backint agent increases scalability through parallel processing of backup and restore
operations, providing maximum throughput and reducing the Recovery Time Objective (RTO)
during recovery.
Billing
AWS Backint agent is a free service. You pay for only the underlying AWS services that you use, for
example Amazon S3. For more information about Amazon S3 pricing, see the Amazon S3 pricing page.
Supported databases
AWS Backint agent supports the following databases:
Supported Regions
AWS Backint agent is available in all commercial AWS Regions, as well as the China (Beijing),
China (Ningxia), and AWS GovCloud (US) Regions.
Get started with AWS Backint Agent for SAP HANA
Topics
• Prerequisites (p. 3)
• Install and configure AWS Backint Agent for SAP HANA (p. 6)
Prerequisites
After your SAP HANA system is successfully running on an Amazon EC2 instance, verify the following
prerequisites to install AWS Backint agent using the Amazon EC2 Systems Manager document or using
AWS Backint installer.
Topics
• AWS Identity and Access Management (p. 4)
• Amazon EC2 Systems Manager (p. 5)
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "VisualEditor1",
      "Effect": "Allow",
      "Action": [
        "s3:GetBucketPolicyStatus",
        "s3:GetBucketLocation",
        "s3:ListBucket",
        "s3:GetBucketAcl",
        "s3:GetBucketPolicy"
      ],
      "Resource": [
        "arn:aws:s3:::<Bucket Name>/*",
        "arn:aws:s3:::<Bucket Name>"
      ]
    },
    {
      "Sid": "VisualEditor2",
      "Effect": "Allow",
      "Action": [
        "kms:Decrypt",
        "kms:GenerateDataKey"
      ],
      "Resource": "<KMS Arn>"
    },
    {
      "Sid": "VisualEditor0",
      "Effect": "Allow",
      "Action": [
        "s3:PutObjectTagging",
        "s3:PutObject",
        "s3:GetObject",
        "s3:DeleteObject"
      ],
      "Resource": "arn:aws:s3:::<bucket name>/<folder name>/*"
    }
  ]
}
Note
If you want to allow cross-account backup and restore, you must add your account details
under a principal element in your policy. For more information about principal policies,
see AWS JSON Policy Elements: Principal in the AWS Identity and Access Management
User Guide. In addition, you must ensure that the S3 bucket policies allow your account to
perform the actions specified in the IAM policy example above. For more information, see
the example for Bucket owner granting cross-account bucket permissions in the Amazon S3
Developer Guide.
For more information about managed and inline policies, see the IAM User Guide.
Amazon S3 bucket
When you install the AWS Backint agent, you must provide the name of the S3 bucket where you want
to store your SAP HANA backups. Only Amazon S3 buckets created after May 2019 are compatible with
AWS Backint agent. If you do not own a bucket created after May 2019, create a new S3 bucket in your
target Region. Additionally, ensure that the Amazon S3 bucket where you want to store your backups
doesn’t have public access enabled. If the S3 bucket has public access enabled, backups will fail.
AWS Backint agent supports backing up to Amazon S3 with VPC endpoints. For more information, see
VPC Endpoints.
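The no-public-access requirement above can be enforced with the standard Amazon S3 public access block settings. The following is a hedged sketch of the PublicAccessBlock configuration document (not specific to AWS Backint agent) that you could apply through the S3 console or the s3api put-public-access-block command:

```json
{
  "BlockPublicAcls": true,
  "IgnorePublicAcls": true,
  "BlockPublicPolicy": true,
  "RestrictPublicBuckets": true
}
```

With all four settings enabled, the bucket rejects public ACLs and public bucket policies, which satisfies the condition that backups would otherwise fail under.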
S3 storage classes — AWS Backint agent supports backing up your SAP HANA database to an Amazon
S3 bucket with the S3 Standard, S3 Standard-IA, S3 One Zone-IA, and S3 Intelligent-Tiering storage
classes. The S3 Reduced Redundancy, S3 Glacier Deep Archive, and S3 Glacier storage classes are not
supported by AWS Backint agent. By default, the S3 Standard storage class is used to store your
backups. You can change the storage class used for backups by modifying the AWS Backint agent
configuration file (p. 13). Alternatively, you can transition your backup files to one of the supported
storage classes through an S3 Lifecycle configuration or directly using the API. To learn more about
Amazon S3 storage classes, see Amazon S3 Storage Classes in the Amazon S3 Developer Guide.
Note
S3 Intelligent-Tiering storage class enables movement of objects between four access tiers.
It can also move objects to the archival tiers. However, AWS Backint agent for SAP HANA
does not support backup and recovery from archival tiers. To recover or delete objects from
the archival tiers, you must first restore the archived S3 objects before initiating a recovery or
deletion with the AWS Backint agent.
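As an illustration of overriding the default S3 Standard storage class, a storage class entry in the agent's YAML configuration file might look like the sketch below. The key name S3StorageClass is an assumption here, not confirmed by this text; check the AWS Backint agent configuration file reference (p. 13) for the exact parameter name.

```yaml
# Hypothetical sketch of aws-backint-agent-config.yaml; the key name
# may differ in your installed version. Allowed storage class values:
# STANDARD, STANDARD_IA, ONEZONE_IA, INTELLIGENT_TIERING.
S3StorageClass: "STANDARD_IA"
```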
Encryption — AWS Backint agent supports encrypting your SAP HANA backup files when storing them
in Amazon S3, using server-side encryption with AWS KMS. You can encrypt your backups with the
AWS managed key called aws/s3, or you can use your own symmetric customer managed key stored
in AWS KMS. To encrypt your backup files with keys stored in AWS KMS (AWS managed or customer
managed), you must provide the KMS key ARN during the install, or update the AWS Backint agent
configuration file (p. 13) at a later time. To learn more about encrypting your S3 objects using AWS
KMS, see How Amazon S3 uses AWS KMS in the AWS Key Management Service Developer Guide.
Alternatively, you can enable default encryption for your Amazon S3 bucket using keys managed by
Amazon S3. To learn more about enabling default encryption for your bucket, see How do I enable
default encryption for an Amazon S3 bucket? in the Amazon S3 Console User Guide.
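For the default-bucket-encryption alternative, the server-side encryption configuration document has a standard shape. The following is a hedged sketch using an AWS KMS key (substitute your own key ARN for the <KMS Arn> placeholder; this is the generic s3api put-bucket-encryption document, not an AWS Backint agent file):

```json
{
  "Rules": [
    {
      "ApplyServerSideEncryptionByDefault": {
        "SSEAlgorithm": "aws:kms",
        "KMSMasterKeyID": "<KMS Arn>"
      }
    }
  ]
}
```

To use keys managed by Amazon S3 instead, the same document would use "SSEAlgorithm": "AES256" and omit KMSMasterKeyID.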
Object locking— You can store objects using a write-once-read-many (WORM) model with S3 Object
Lock. Use S3 Object Lock if you want to prevent your SAP HANA backup files from being accidentally
deleted or overwritten for a specific time period or indefinitely. If S3 Object Lock is enabled, you can't
delete your SAP HANA backups stored in Amazon S3 using SAP HANA Cockpit, SAP HANA Studio, or SQL
commands until the retention period expires. To learn about S3 Object Lock, see Locking objects using
S3 Object Lock in the Amazon S3 Developer Guide.
Object tagging — By default, AWS Backint agent adds a tag called AWSBackintAgentVersion when it
stores your SAP HANA backup files in your S3 bucket. This tag helps to identify the AWS Backint agent
version and the SAP HANA version used when backing up your SAP HANA database. You can list the
tag values from the Amazon S3 console or by using the API. To disable default tagging, modify the
AWS Backint agent configuration file (p. 13).
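Combining the EnableTagging and BackupObjectTags parameters described in the configuration reference (p. 13), a custom-tagging setup could be sketched as follows. This is a hedged example; the tag keys and values are invented for illustration, and you should verify the exact syntax against your installed configuration file.

```yaml
# Sketch only: EnableTagging must be true for BackupObjectTags to apply.
# Tag keys/values below are illustrative placeholders.
EnableTagging: true
BackupObjectTags: "[{Key=Environment,Value=Production},{Key=Owner,Value=BasisTeam}]"
```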
Topics
• Install AWS Backint agent using the AWS Systems Manager document (p. 6)
• Install AWS Backint agent using AWS Backint installer — interactive mode (p. 8)
• Install AWS Backint agent using AWS Backint installer — silent mode (p. 11)
• Use a proxy address with AWS Backint agent (p. 12)
• Backint-related SAP HANA parameters (p. 12)
• Modify AWS Backint agent configuration parameters (p. 13)
• Configure SAP HANA to use a different Amazon S3 bucket and folder for data and log
backup (p. 16)
• Configure SAP HANA to use a different Amazon S3 bucket and folder for catalog backup (p. 19)
• Configure AWS Backint agent to use shorter Amazon S3 paths (p. 22)
• View AWS Backint agent logs (p. 23)
• Get the currently installed AWS Backint agent version (p. 23)
• Update or install a previous version of AWS Backint agent (p. 23)
• Performance tuning (p. 24)
• Subscribe to AWS Backint agent notifications (p. 24)
Install AWS Backint agent using the AWS Systems Manager document
1. From the AWS Management Console, choose Systems Manager under Management & Governance,
or enter Systems Manager in the Find Services search bar.
2. From the Systems Manager console, choose Documents under Shared Resources in the left
navigation pane.
3. On the Documents page, select the Owned by Amazon tab. You should see a document named
AWSSAP-InstallBackint.
4. Select the AWSSAP-InstallBackint document and choose Run command.
5. Under Command parameters, enter the following:
a. Bucket Name. Enter the name of the Amazon S3 bucket where you want to store your SAP
HANA backup files.
b. Bucket Folder. Optionally, enter the name of the folder within your Amazon S3 bucket where
you want to store your SAP HANA backup files.
c. System ID. Enter your SAP HANA System ID, for example HDB.
d. Bucket Region. Enter the AWS Region of the Amazon S3 bucket where you want to store your
SAP HANA backup files. AWS Backint agent supports cross-Region and cross-account backups.
You must provide the AWS Region and Amazon S3 bucket owner account ID along with the
Amazon S3 bucket name for the agent to perform successfully.
e. Bucket Owner Account ID. Enter the account ID of the Amazon S3 bucket where you want to
store your SAP HANA backup files.
f. Kms Key. Enter the ARN of AWS KMS that AWS Backint agent can use to encrypt the backup
files stored in your Amazon S3 bucket.
g. Installation Directory. Enter the path of the directory location where you want to install the
AWS Backint agent. Avoid using /tmp as the install path.
h. Agent Version. Enter the version number of the agent that you want to install. If you do not
enter a version number, the latest published version of the agent is installed.
Note
1.0 versions are unavailable in the GovCloud Regions.
i. Modify Global ini file. Choose how you want to modify the global.ini file. The global.ini
file of the SAP HANA SYSTEM DB must be updated to complete the setup.
8. When the agent is successfully installed, you will see the Success status under the Command ID.
9. To verify the installation, log in to your instance and view the /<install directory>/aws-backint-agent
directory. You should see the following files in the directory: the AWS Backint agent binary, the
THIRD_PARTY_LICENSES.txt file, which contains licenses of libraries used by the agent, the launcher
script, the YAML configuration file, and the optional modify_global_ini.sql file. In addition, a source
file (aws-backint-agent.tar.gz) of AWS Backint agent is stored in the package directory. You can verify
the signature of this file to ensure that the downloaded source file is original and unmodified. See the
Verifying the signature of AWS Backint agent and installer for SAP HANA (p. 28) section in this
document for details.
The SSM document creates symbolic links (symlinks) in the SAP HANA global directory for the
Backint configuration. Verify that the symlink for hdbbackint exists in the
/usr/sap/<SID>/SYS/global/hdb/opt directory and that the symlink for aws-backint-agent-config.yaml
exists in the /usr/sap/<SID>/SYS/global/hdb/opt/hdbconfig directory.
Install AWS Backint agent using AWS Backint installer — interactive mode
Follow these steps to install AWS Backint agent using the AWS Backint installer from an SSH session on
your SAP HANA instance.
Important
Disable any existing backup processes (including scheduled log backups) before continuing with
the installation. If you don’t disable existing backup processes before running the AWS Backint
agent installer, you can corrupt an in-progress backup, which can impact your ability to recover
your database.
1. Navigate to /tmp (or another temporary directory where you downloaded the installer).
cd /tmp
or
Note
If you encounter permission issues while downloading the AWS Backint installer using
the AWS CLI, check your IAM policy and ensure that your policies allow for downloading
objects from the awssap-backint-agent bucket. See the Identity and Access
Management (p. 4) section of this documentation for details.
3. (Optional) For AWS GovCloud (US-East) and AWS GovCloud (US-West), run one of the following
commands to download the installer.
or
4. Run the installer with the -h flag to find all of the available options.
Note
Run the installer with the -l flag if you want the installer to get the AWS Backint agent
binary file from your own file system or Amazon S3 bucket. Specify the location of the
aws-backint-agent.tar.gz file.
a. Installation directory — Enter the path of the directory location where you want to install the
AWS Backint agent. The default value for the installation directory is /hana/shared/.
b. Amazon S3 bucket owner — Enter the account ID of the Amazon S3 bucket owner of the
bucket where you want to store your SAP HANA backup files.
c. Amazon S3 bucket Region — Enter the AWS Region of the Amazon S3 bucket where you want
to store your SAP HANA backup files.
d. Amazon S3 bucket name — Enter the name of the Amazon S3 bucket where you want to store
your SAP HANA backup files.
e. Folder in the S3 bucket — Enter the name of the folder in the Amazon S3 bucket where you
want to store your SAP HANA backup files. This parameter is optional.
f. Amazon S3 SSE KMS ARN — Enter the ARN of the AWS KMS that AWS Backint agent can use to
encrypt the backup files stored in your Amazon S3 bucket.
Note
If you leave this field empty, AWS Backint installer will prompt you to confirm that
you don’t want to encrypt your backup files with encryption keys stored in AWS
KMS. If you do not confirm that you do not want to encrypt with the kms-key, the
installer will abort. We strongly recommend that you encrypt your data. See the
Encryption (p. 5) section of this documentation for available options.
g. SAP HANA system ID — Enter your SAP HANA System ID, for example HDB.
h. HANA opt dir — Confirm the location of the SAP HANA opt directory.
i. Modify global.ini [modify/sql/[none]] — Choose how you want to modify the global.ini
file. The global.ini file of the SAP HANA SYSTEM DB must be updated to complete the setup.
i. “modify” — AWS Backint installer will update the global.ini file directly.
ii. “sql” — AWS Backint installer will create a file called modify_global_ini.sql with
SQL statements that you can run in your target SAP HANA system to set the required
parameters. You can find the modify_global_ini.sql file in the <installation
directory>/aws-backint-agent/ folder.
iii. “none” — No action is taken by AWS Backint installer to modify the global.ini file.
You must update the file manually to complete the setup.
j. HANA SYSTEM db global.ini file — Confirm the location of global.ini file.
k. Verify signature of the agent binary .tar file —
• Choose y to verify the signature of the AWS Backint agent source file. If you choose y,
enter the Amazon S3 location of the signature file of the agent binary .tar file, for
example, https://s3.amazonaws.com/awssap-backint-agent/binary/latest/aws-backint-agent.sig.
Or, provide a local file that is stored on the instance. If you proceed without making a
selection, the default location listed within brackets ([]) is used.
• Choose n if you do not want to verify the signature of the AWS Backint agent source file.
l. Save responses for future usage? — You can save your information for the AWS Backint
installer to a file. You can then use it later to run the installer in silent mode, if needed.
m. Do you want to proceed with the installation? — Confirm that you have disabled the existing
backups and are ready to proceed with the installation.
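The statements written by the "sql" option into modify_global_ini.sql are not reproduced at this point in the guide. As a hedged sketch, setting the Backint-related parameters by SQL typically uses the standard SAP HANA ALTER SYSTEM ALTER CONFIGURATION statement, with the parameter names shown in the Backint-related SAP HANA parameters section (p. 12); the generated file may differ in detail:

```sql
-- Sketch: run against the SYSTEM DB; the paths below use the <SID> placeholder.
ALTER SYSTEM ALTER CONFIGURATION ('global.ini', 'SYSTEM')
  SET ('backup', 'catalog_backup_using_backint') = 'true',
      ('backup', 'log_backup_using_backint') = 'true',
      ('backup', 'data_backup_parameter_file') =
        '/usr/sap/<SID>/SYS/global/hdb/opt/hdbconfig/aws-backint-agent-config.yaml'
  WITH RECONFIGURE;
```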
7. To verify the installation, log in to your instance and view the /<install directory>/aws-backint-agent
directory. You should see the following files in the directory: the AWS Backint agent binary, the
THIRD_PARTY_LICENSES.txt file, which contains licenses of libraries used by the agent, the launcher
script, the YAML configuration file, and the optional modify_global_ini.sql file. In addition, a source
file (aws-backint-agent.tar.gz) of AWS Backint agent is stored in the package directory. You can verify
the signature of this file to ensure that the downloaded source file is original and unmodified. See the
Verifying the signature of AWS Backint agent and installer for SAP HANA (p. 28) section in this
document for details.
In addition, the AWS Backint installer creates symbolic links (symlinks) in the SAP HANA global
directory for the Backint configuration. Verify that the symlink for hdbbackint exists in the
/usr/sap/<SID>/SYS/global/hdb/opt directory, and that the symlink for aws-backint-agent-config.yaml
exists in the /usr/sap/<SID>/SYS/global/hdb/opt/hdbconfig directory.
Note
If your installation fails due to validation errors and you want to ignore the validation and
proceed with the installation, you can execute the installer with the -n flag to ignore the
validation steps. You can also use the -d flag to run the installer in debug mode to generate
detailed installation logs for troubleshooting.
Install AWS Backint agent using AWS Backint installer — silent mode
To run the AWS Backint installer in silent mode, create a response file with all of the required installation
parameters. Follow the steps in the section on installing using the interactive mode (p. 8) to
download AWS Backint installer and create a response file. You do not have to confirm the final prompt
to continue with the installation in interactive mode; AWS Backint installer creates a response file
called aws-backint-agent-install-YYYYMMDDHHMMSS.rsp either way.
When you have a response file, you can modify it with a text editor such as vim and adjust the parameters as needed.
[DEFAULT]
s3_bucket_name = awsdoc-example-bucket
s3_bucket_owner_account_id = 111122223333
modify_global_ini = sql
s3_bucket_region = us-east-1
s3_sse_kms_arn = arn:aws:kms:us-east-1:111122223333:key/1abcd9b9-ab12-1a2a-1abc-12345abc12a3
s3_bucket_folder = myfolder
hana_sid = TST
installation_directory = /hana/shared/
If you want to generate the response file programmatically instead of using AWS Backint installer in
interactive mode, run AWS Backint installer with the -g flag to generate a new response file.
After the response file is created, use the following steps to run AWS Backint installer in silent mode.
Important
Disable any existing backup processes (including scheduled log backups) before continuing with
the installation. If you don’t disable existing backup processes before running the AWS Backint
agent installer, you can corrupt an in-progress backup, which can impact your ability to recover
your database.
Execute the installer using the generated response file.
If you want to choose the location from which to install the agent, run the command with the -l flag
and specify the location.
Note
You must confirm that you have disabled the existing backups and are ready to proceed with the
installation in silent mode by passing an acknowledgement flag (-a yes). If you don’t pass the
acknowledgement flag, AWS Backint installer will fail to execute.
Use a proxy address with AWS Backint agent
If you use a proxy, export the proxy settings before you run the installer, for example:
#!/bin/bash
export https_proxy=<PROXY_ADDRESS>:<PROXY_PORT>
export HTTP_PROXY=<PROXY_ADDRESS>:<PROXY_PORT>
export no_proxy=169.254.169.254
export NO_PROXY=169.254.169.254
sudo python install-aws-backint-agent
If you use a proxy address in your SAP HANA environment, you must update the
aws-backint-agent-launcher.sh file, which is located in the AWS Backint agent installation directory
(for example, /hana/shared/aws-backint-agent/). Perform the following update to ensure that the
correct proxy settings are used by AWS Backint agent during backup and restore operations.
#!/bin/bash
export https_proxy=<PROXY_ADDRESS>:<PROXY_PORT>
export HTTP_PROXY=<PROXY_ADDRESS>:<PROXY_PORT>
export no_proxy=169.254.169.254
export NO_PROXY=169.254.169.254
/hana/shared/aws-backint-agent/aws-backint-agent "$@"
Backint-related SAP HANA parameters

[backup]
catalog_backup_parameter_file = /usr/sap/<SID>/SYS/global/hdb/opt/hdbconfig/aws-backint-agent-config.yaml
data_backup_parameter_file = /usr/sap/<SID>/SYS/global/hdb/opt/hdbconfig/aws-backint-agent-config.yaml
log_backup_parameter_file = /usr/sap/<SID>/SYS/global/hdb/opt/hdbconfig/aws-backint-agent-config.yaml
catalog_backup_using_backint = true
log_backup_using_backint = true
parallel_data_backup_backint_channels = 8
data_backup_buffer_size = 4096
max_recovery_backint_channels = 1
[communication]
tcp_backlog = 2048
[persistence]
enable_auto_log_backup = yes
verify_signature = yes
input_signature_filepath = https://s3.amazonaws.com/awssap-backint-agent/binary/latest/aws-backint-agent.sig
Note
Changing the tcp_backlog parameter requires a restart of SAP HANA to take effect.
max_recovery_backint_channels determines the number of log files restored/recovered
in parallel during the recovery process. When multistreamed backups are recovered, SAP
HANA always uses the same number of channels that were used during the backup. For more
information, see Multistreaming Data Backups with Third-Party Backup Tools in the SAP
documentation.
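If you decide to change max_recovery_backint_channels, the standard SAP HANA configuration statement can be used. The following is a sketch; the value 4 is illustrative, not a recommendation from this guide:

```sql
-- Sketch: raise the number of log files restored in parallel during recovery.
ALTER SYSTEM ALTER CONFIGURATION ('global.ini', 'SYSTEM')
  SET ('backup', 'max_recovery_backint_channels') = '4'
  WITH RECONFIGURE;
```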
• EnableTagging must be set to true in order to use BackupObjectTags.
• BackupObjectTags: "[{Key=string,Value=string},{Key=string,Value=string},...]"
• Allowed values: minute, hour, day, or never.
• Allowed values: STANDARD, STANDARD_IA, ONEZONE_IA, or INTELLIGENT_TIERING.
• Allowed values: 1 to 200.
• S3ShortenBackupDestinationEnabled: Specifies whether to use a shorter Amazon S3 path (default: false; available since version 1.05).
Configure SAP HANA to use a different Amazon S3 bucket and folder for data and log backup

data_backup_parameter_file = /usr/sap/<SID>/SYS/global/hdb/opt/hdbconfig/aws-backint-agent-config.yaml
log_backup_parameter_file = /usr/sap/<SID>/SYS/global/hdb/opt/hdbconfig/aws-backint-agent-config.yaml
To use a different Amazon S3 bucket and folder for the data and log backups, follow these steps.
If this is a new setup, or you do not want to retain the previous log backups, skip this step and
continue with Step 3.
Move the previous log backups with source type volume to the new Amazon S3 location for log
backups only. You can confirm the source type by running the following SQL command.
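The SQL command itself does not appear at this point in the text. A query along these lines, sketched here under the assumption that the standard SYS.M_BACKUP_CATALOG_FILES monitoring view is used, lists backup files and their source types:

```sql
-- Sketch: SOURCE_TYPE_NAME is 'volume' for data/log backups
-- and 'catalog' for catalog backups.
SELECT BACKUP_ID, SOURCE_TYPE_NAME, DESTINATION_PATH
  FROM SYS.M_BACKUP_CATALOG_FILES
  ORDER BY BACKUP_ID;
```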
Note
Before doing steps a and b, ensure that there is no backup process running.
Run the following commands to move the SYSTEM DB log backups of source type volume. In the
example, we use the same Amazon S3 bucket, but create another folder for the log backups.
Run the following commands to move the TENANT DB log backups of source type volume. In the
example, we use the same Amazon S3 bucket, and create another folder for the log backups. You need
to repeat this step for every TENANT DB.
a. Make a copy of the existing AWS Backint agent configuration for logs backup.
cp /hana/shared/aws-backint-agent/aws-backint-agent-config.yaml \
/hana/shared/aws-backint-agent/aws-backint-agent-config-logs.yaml
ln -s /hana/shared/aws-backint-agent/aws-backint-agent-config-logs.yaml \
/usr/sap/<SID>/SYS/global/hdb/opt/hdbconfig/aws-backint-agent-config-logs.yaml
4. Change the parameter to point to the new AWS Backint configuration file.
a. Run a point-in-time recovery to a previous state, to ensure that you can access the previous log
files in the new Amazon S3 location.
b. Verify that new logs are uploaded to the new S3 location.
6. Delete previous backups
After a successful validation, we recommend waiting for at least a week before deleting the previous
logs.
When you're ready, delete the previous logs with the following commands.
# Delete previous backups in the TENANT database (repeat for each tenant)
aws s3 rm s3://<S3 bucket>/<S3 folder>/<SID>/usr/sap/<SID>/SYS/global/hdb/backint/DB_<SID>/ \
  --exclude "*" --include "log_backup_2_0*" --include "log_backup_3_0*" --recursive --dryrun
aws s3 rm s3://<S3 bucket>/<S3 folder>/<SID>/usr/sap/<SID>/SYS/global/hdb/backint/DB_<SID>/ \
  --exclude "*" --include "log_backup_2_0*" --include "log_backup_3_0*" --recursive
data_backup_parameter_file = /usr/sap/<SID>/SYS/global/hdb/opt/hdbconfig/aws-backint-agent-config.yaml
log_backup_parameter_file = /usr/sap/<SID>/SYS/global/hdb/opt/hdbconfig/aws-backint-agent-config.yaml
catalog_backup_parameter_file = /usr/sap/<SID>/SYS/global/hdb/opt/hdbconfig/aws-backint-agent-config.yaml
To use a different Amazon S3 bucket and folder for catalog backup, follow these steps.
If this is a new setup or you do not want to retain the previous catalog backup, skip this step and
continue with Step 3.
Move the previous catalog backup with source type catalog to the new Amazon S3 location for
catalog backup only. You can confirm the source type by running the following SQL command.
Note
Before doing steps a and b, ensure that there is no backup process running.
Run the following commands to move the SYSTEM DB backups with source type catalog. In the example, we use the same Amazon S3 bucket, but create another folder for the catalog backup.
Run the following commands to move the tenant database backups with source type catalog. In the example, we use the same Amazon S3 bucket, and create another folder for the catalog backup. You need to repeat this step for every TENANT DB.
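As with the log backups, the catalog move can be sketched by constructing the command first (bucket, SID, and folder names below are illustrative assumptions):

```shell
# All names are illustrative; substitute your own values before running.
BUCKET="awsdoc-example-bucket"
SID="HDB"
BASE="usr/sap/${SID}/SYS/global/hdb/backint"
# Catalog backup files are named log_backup_0_0_0_0.*; preview the move,
# then drop "echo" to run it. Repeat with DB_<tenant> paths for each tenant.
echo aws s3 mv "s3://${BUCKET}/hana/${SID}/${BASE}/SYSTEMDB/" \
               "s3://${BUCKET}/hana-catalog/${SID}/${BASE}/SYSTEMDB/" \
               --recursive --exclude "*" --include "log_backup_0_0_0_0*"
```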
a. Make a copy of the existing AWS Backint agent configuration for catalog backup.
cp /hana/shared/aws-backint-agent/aws-backint-agent-config.yaml \
/hana/shared/aws-backint-agent/aws-backint-agent-config-catalog.yaml
b. Create a symbolic link to the copied configuration file.
ln -s /hana/shared/aws-backint-agent/aws-backint-agent-config-catalog.yaml \
/usr/sap/<SID>/SYS/global/hdb/opt/hdbconfig/aws-backint-agent-config-catalog.yaml
4. Change the catalog_backup_parameter_file parameter to point to the new AWS Backint agent configuration file.
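For example, the changed parameters in the [backup] section of global.ini might look like the following fragment (a sketch; the parameter names follow SAP HANA's [backup] section, and you would set them through ALTER SYSTEM ALTER CONFIGURATION or SAP HANA Studio):

```ini
[backup]
catalog_backup_using_backint = true
catalog_backup_parameter_file = /usr/sap/<SID>/SYS/global/hdb/opt/hdbconfig/aws-backint-agent-config-catalog.yaml
```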
5. Validate the new configuration.
a. Run a point-in-time recovery to a previous state to ensure that you can access the previous log files in the new Amazon S3 location.
b. Verify that new logs are uploaded to the new S3 location.
6. Delete previous backups
After a successful validation, we recommend waiting for at least a week before deleting the previous
catalog.
When you're ready, delete the previous catalog backups with the following commands.
# Delete previous backups in the TENANT database (repeat for each tenant)
aws s3 rm s3://<S3 bucket>/<S3 folder>/<SID>/usr/sap/<SID>/SYS/global/hdb/backint/DB_<SID>/ --exclude "*" --include "log_backup_0_0_0_0*" --recursive --dryrun
# After reviewing the --dryrun output, run the command without --dryrun
aws s3 rm s3://<S3 bucket>/<S3 folder>/<SID>/usr/sap/<SID>/SYS/global/hdb/backint/DB_<SID>/ --exclude "*" --include "log_backup_0_0_0_0*" --recursive
If this is a new setup or you do not want to retain the previous backups, skip this step and continue with Step 3.
Ensure that there is no backup process running, then run the following command to move all of the
previous backups to the new Amazon S3 location. This step assumes that you are using the same
configuration parameter for both data and log. The example below uses the same S3 bucket, but
you can use a new bucket.
3. Modify aws-backint-agent-config.yaml.
vi /hana/shared/aws-backint-agent/aws-backint-agent-config.yaml
S3ShortenBackupDestinationEnabled: "true"
4. Validate the new configuration.
a. Run a point-in-time recovery to a previous state to ensure that you can access the previous log files in the new Amazon S3 location.
b. Verify that new logs are uploaded to the new S3 location.
5. Delete previous backups
After a successful validation, we recommend waiting for at least a week before deleting the previous backups. When you're ready, delete them with the same aws s3 rm commands shown in the previous sections.
To check the installed version of AWS Backint agent, run the following command.
/usr/sap/<SID>/SYS/global/hdb/opt/hdbbackint -v
For instance, running the preceding command on a system with SID HDB returns the AWS Backint agent version, such as 1.05.
To install a previous version of the agent, download the installer for that version from the S3 folder that contains previous versions.
Note
The installer will download and install the version of the agent that corresponds to the installer
version.
When you install the agent using the SSM document, you can input the version you want to install.
Performance tuning
AWS Backint agent is installed with default values that optimize the performance of backup and restore
operations. If you want to further optimize the performance of your backup and restore operations, you
can adjust the UploadChannelSize and MaximumConcurrentFilesForRestore parameters. Ensure
that you are using the right instance type and storage configurations to get the best performance. AWS
Backint agent is constrained by the resources available in the instance.
The UploadChannelSize parameter is used to determine how many files can be uploaded in parallel
to the S3 bucket during backups. The default value for this parameter is 10 and it provides optimal
performance in most cases.
The UploadConcurrency parameter is used to determine how many S3 threads can work in parallel
during backups. The default value for this parameter is 100 and it provides optimal performance in most
cases.
If you want to adjust these parameters, you can add them to the aws-backint-agent-config.yaml
file and adjust the values (up to the allowed maximum). We strongly recommend that you test both
the backup and recovery operations after the change to ensure there is no unintended impact to your
backup and restore operations, as well as to other standard operations.
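For instance, the tuning parameters can be added to the configuration file like this (values shown are the stated defaults; treat the fragment as a sketch):

```yaml
# aws-backint-agent-config.yaml
UploadChannelSize: "10"      # files uploaded in parallel during backup
UploadConcurrency: "100"     # S3 threads working in parallel during backup
```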
Additionally, to get the best performance during backup and restore operations, ensure that your SAP
HANA data and log volumes are configured following the best practices from AWS. See the Storage
Configuration for SAP HANA section in the SAP HANA on AWS documentation for more details.
Back up and restore your SAP HANA system
a. For Topic ARN, use the following Amazon Resource Name (ARN):
arn:aws:sns:us-east-1:464188257626:AWS-Backint-Agent-Update
For AWS GovCloud (US-East) and AWS GovCloud (US-West), use arn:aws-us-gov:sns:us-gov-east-1:516607370456:AWS-Backint-Agent-Update
b. For Protocol, choose Email or SMS.
c. For Endpoint, enter an email address that you can use to receive the notifications. If you choose
SMS, enter an area code and number.
d. Choose Create subscription.
6. If you chose Email, you'll receive an email asking you to confirm your subscription. Open the email
and follow the directions to complete your subscription.
Whenever a new version of AWS Backint agent or AWS Backint installer is released, we send
notifications to subscribers. If you no longer want to receive these notifications, use the following
procedure to unsubscribe.
Topics
• Backup and recovery using SQL statements (p. 25)
• Backup and recovery using SAP HANA Cockpit or SAP HANA Studio (p. 27)
• Get backup and recovery status (p. 27)
• Find your backup in an Amazon S3 bucket (p. 27)
• Schedule and manage backups (p. 28)
• Backup retention (p. 28)
Backup and recovery using SQL statements
The following example shows the syntax to initiate a full data backup of the system database.
The following example shows the syntax to initiate a full data backup of the tenant database.
The following example shows the syntax to initiate a differential data backup of the tenant database.
The following example shows the syntax to initiate an incremental data backup of the tenant database.
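The four statements can be sketched as follows, assuming an illustrative backup prefix and a tenant placeholder <TENANT DB ID>; run the system database statement while connected to SYSTEMDB:

```sql
-- Full data backup of the system database (run on SYSTEMDB)
BACKUP DATA USING BACKINT ('COMPLETE_DATA_BACKUP');
-- Full data backup of a tenant database
BACKUP DATA FOR <TENANT DB ID> USING BACKINT ('COMPLETE_DATA_BACKUP');
-- Differential data backup of a tenant database
BACKUP DATA DIFFERENTIAL FOR <TENANT DB ID> USING BACKINT ('DIFFERENTIAL_DATA_BACKUP');
-- Incremental data backup of a tenant database
BACKUP DATA INCREMENTAL FOR <TENANT DB ID> USING BACKINT ('INCREMENTAL_DATA_BACKUP');
```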
The following example shows the syntax to recover your tenant database to a particular point in time.
RECOVER DATABASE FOR <TENANT DB ID> UNTIL TIMESTAMP 'YYYY-MM-DD HH:MM:SS' USING DATA PATH ('/usr/sap/<SID>/SYS/global/hdb/backint/DB_<TENANT DB ID>/') USING LOG PATH ('/usr/sap/<SID>/SYS/global/hdb/backint/DB_<TENANT DB ID>') USING BACKUP_ID 1234567890123 CHECK ACCESS USING BACKINT
The following example shows the syntax to recover your tenant database with a specific data backup
using catalogs stored in S3.
RECOVER DATA FOR <TENANT DB ID> USING BACKUP_ID 1234567890123 USING CATALOG BACKINT USING
DATA PATH ('/usr/sap/<SID>/SYS/global/hdb/backint/DB_<TENANT DB ID>/') CLEAR LOG
The following example shows the syntax to recover your tenant database with a specific data backup
without using a catalog.
With AWS Backint agent, you can perform system copies by restoring a backup of the source database
into the target database. To perform system copies using AWS Backint agent, verify the following
requirements.
1. You must have AWS Backint agent configured in both the source and target systems.
2. Check the compatibility of the SAP HANA software version of the source and target systems.
3. The AWS Backint agent in your target system should be able to access the Amazon S3 bucket where
the backups of the source system are stored. If you use a different Amazon S3 bucket for backups in
the source and target systems, you have to adjust the configuration parameters of the AWS Backint
agent in the target system to temporarily point to the Amazon S3 bucket where the backups are
stored in the source system.
4. If you are performing a system copy across two different AWS accounts, ensure that you have the
appropriate IAM permissions and Amazon S3 bucket policies in place. See the Identity and Access
Management (p. 4) section in this document for details.
Backup and recovery using SAP HANA Cockpit or SAP HANA Studio
The following is the syntax to restore a specific backup of the source tenant database into your target
tenant database.
RECOVER DATA FOR <TARGET TENANT DB ID> USING SOURCE '<SOURCE TENANT DB ID>@<SOURCE SYSTEM ID>' USING BACKUP_ID 1234567890123 USING CATALOG BACKINT USING DATA PATH ('/usr/sap/<SOURCE SYSTEM ID>/SYS/global/hdb/backint/DB_<SOURCE TENANT DB ID>/') CLEAR LOG
The following is an example of a SQL statement to restore a specific backup of the source tenant
database, called SRC, in the source system QAS into a target tenant database called TGT.
RECOVER DATA FOR TGT USING SOURCE 'SRC@QAS' USING BACKUP_ID 1234567890123 USING CATALOG
BACKINT USING DATA PATH ('/usr/sap/QAS/SYS/global/hdb/backint/DB_SRC/') CLEAR LOG
The following is an example of a SQL statement to perform a point-in-time recovery of a source tenant
database, called SRC, in a source system QAS into a target tenant database called TGT.
RECOVER DATABASE FOR TGT UNTIL TIMESTAMP '2020-01-31 01:00:00' CLEAR LOG USING SOURCE 'SRC@QAS' USING CATALOG BACKINT USING LOG PATH ('/usr/sap/QAS/SYS/global/hdb/backint/DB_SRC') USING DATA PATH ('/usr/sap/QAS/SYS/global/hdb/backint/DB_SRC/') USING BACKUP_ID 1234567890123 CHECK ACCESS USING BACKINT
For system and tenant databases, you can find your data, log, and catalog backups in the following
locations. Your data backups will include an additional prefix that you used during the backup.
<awsdoc-example-bucket>/<optional-my-folder>/<SID>/usr/sap/<SID>/SYS/global/hdb/backint/SYSTEMDB/
Schedule and manage backups
<awsdoc-example-bucket>/<optional-my-folder>/<SID>/usr/sap/<SID>/SYS/global/hdb/backint/DB_<Tenant ID>/
Backup retention
Beginning with SAP HANA 2 SPS 03, you can use SAP HANA Cockpit to set the retention policies for your
SAP HANA database backups. Based on your retention policies, SAP HANA Cockpit can automatically
trigger jobs to delete old backups from catalogs, as well as the physical backups. This process also
automatically deletes backup files stored in your Amazon S3 buckets. For more information, see
“Retention Policy” under Backup Configuration Settings in the SAP HANA Administration with SAP HANA
Cockpit Guide.
To enable automatic signature verification during agent installation, see the parameter descriptions at
Install AWS Backint agent using AWS Backint installer — interactive mode (p. 8) (Step 6k).
2. (Optional) For AWS GovCloud (US-East) or AWS GovCloud (US-West), download one of the following
keys.
Verify signature
Make a note of the key value, as you will need it in the next step. In the preceding example, the key
value is 1E65925B.
4. Verify the fingerprint by running gpg --fingerprint with the key value that you noted in the previous step. The output should contain the following fingerprint.
BD35 7A5F 1AE9 38A0 213A 82A8 80D8 5C5E 1E65 925B
If the fingerprint string doesn't match, don't install the agent. Contact Amazon Web Services.
After you have verified the fingerprint, you can use it to verify the signature of the AWS Backint
agent binary.
5. Download the signature files for the source file and the installer.
6. (Optional) For AWS GovCloud (US-East) and AWS GovCloud (US-West), download the signature files
from one of the following locations.
7. To verify the signature, run gpg --verify against the aws-backint-agent.tar.gz source file
and install-aws-backint-agent installer.
Troubleshoot
gpg: Signature made Fri 08 May 2020 12:15:40 AM UTC using RSA key ID 1E65925B
gpg: Good signature from "AWS Backint Agent" [unknown]
gpg: WARNING: This key is not certified with a trusted signature!
gpg: There is no indication that the signature belongs to the owner.
Primary key fingerprint: BD35 7A5F 1AE9 38A0 213A 82A8 80D8 5C5E 1E65 925B
If the output includes the phrase BAD signature, check whether you performed the procedure
correctly. If you continue to get this response, contact Amazon Web Services and avoid using the
downloaded files.
Note
A key is trusted only if you or someone you trust has signed it. If you receive a warning
about trust, this doesn't mean that the signature is invalid. Instead, it means that you have
not verified the public key.
Topics
• Agent logs (p. 30)
• Installation (p. 30)
• Backup and recovery (p. 31)
• Backup deletion (p. 35)
Agent logs
To find logs to help you troubleshoot errors and failures, check the following locations.
Agent logs
{INSTALLATION DIRECTORY}/aws-backint-agent/aws-backint-agent.log
/usr/sap/<SID>/HDB<Instance No>/<hostname>/trace/backup.log
/usr/sap/<SID>/HDB<Instance No>/<hostname>/trace/backint.log
/usr/sap/<SID>/HDB<Instance No>/<hostname>/trace/DB_<TENANT>/backup.log
/usr/sap/<SID>/HDB<Instance No>/<hostname>/trace/DB_<TENANT>/backint.log
Installation
Problem: Error returned when installing AWS Backint agent.
• Resolution: If the error indicates that the python command is not available, create a symlink to the Python 2.7 binary and retry the installation.
ln -s /usr/bin/python2.7 /usr/bin/python
Problem: Unable to view the instance listed for installation with the SSM document.
• Root Causes:
1. The SSM Agent is not installed on the instance.
2. If the SSM Agent is installed, either the instance is not running or the SSM Agent on the instance is
not running.
3. The SSM Agent installed on the instance is a version older than 2.3.274.0.
• Resolution: Follow the steps listed at Installing or Updating SSM Agent on an Instance. You can verify whether the SSM Agent is running with the systemctl status amazon-ssm-agent command.
Problem: An error referencing s3://awssap-backint-agent/binary/agent-version is returned when you use the SSM installation document.
• Root Causes:
1. The IAM role for the EC2 instance does not have the correct permissions to access the S3 bucket.
2. The agent configuration file does not have the S3BucketOwnerAccountID in double quotes. The
S3BucketOwnerAccountID is the 12-digit AWS Account ID.
3. The S3 bucket is not owned by the provided account for S3BucketOwnerAccountID.
4. The S3 bucket provided for the S3BucketOwnerAccountID was created before May 2019.
• Resolution: Verify the prerequisite steps (p. 3) for installing the AWS Backint agent.
• Root Cause: The IAM role attached to the instance does not have the correct permissions to access the
S3 bucket.
• Resolution: Verify the prerequisite steps (p. 3) for installing the AWS Backint agent.
Problem: Agent logs display Backint cannot execute hdbbackint or No such file or
directory.
• Root Causes:
1. If you are installing the agent manually, the creation of a symlink for the agent executable did not
succeed.
2. If you are using the SSM agent, step 2 of the agent failed while creating symlinks. You can verify this
by viewing the RunCommand implementation details.
• Resolution: Verify that you have correctly followed the installation steps (p. 6) in this document.
Problem: The following error is displayed when initiating a backup from the SAP HANA
console:
Could not start backup for system <SID> DBC: [447]: backup could not be
completed: [110091] Invalid path selection for data backup using backint: /usr/
sap/<SID>/SYS/global/hdb/backint/COMPLETE_DATA_BACKUP must start with /usr/sap/
<SID>/SYS/global/hdb/backint/DB_<TENANT>
• Root Cause: When adding your SAP HANA system to SAP HANA Studio, you chose the single container
mode instead of the multiple container mode.
• Resolution: Add the SAP HANA system to SAP HANA Studio and select multiple container mode, and
then try to initiate your backup again. For more details, see Invalid path selection for data backup
using backint.
Problem: Your backup fails and the following error appears in aws-backint-agent.log:
• Root Cause: You specified an incorrect Region ID for the AwsRegion parameter in the aws-backint-
agent-config.yaml configuration file.
• Resolution: Specify the AWS Region of your Amazon S3 bucket and initiate the backup again. You can
find the Region in which your Amazon S3 bucket is created from the Amazon S3 console.
Problem: Any AWS Backint agent operation fails with one of the following errors, which
appear in the aws-backint-agent.log:
• Potential Root Cause: No IAM role is attached to your Amazon EC2 instance.
• Resolution: AWS Backint agent requires an IAM role attached to your EC2 instance to access AWS resources for backup and restore operations. Attach an IAM role to your EC2 instance and attempt the operation again. For more information, see the prerequisites (p. 3) for installing AWS Backint agent.
• Potential Root Cause: Using a proxy for the HANA instance on which the agent runs causes the agent to fail.
• Resolution: When using a proxy for the HANA instance on which the agent runs, do not use a proxy for the instance metadata call; otherwise, the call hangs. Instance metadata cannot be obtained through a proxy, so it must be excluded. Update the launcher script at {INSTALLATION DIRECTORY}/aws-backint-agent-launcher.sh to designate 169.254.169.254 as a no_proxy host.
# cat aws-backint-agent-launcher.sh
#!/bin/bash
export https_proxy=<PROXY_ADDRESS>:<PROXY_PORT>
export HTTP_PROXY=<PROXY_ADDRESS>:<PROXY_PORT>
export no_proxy=169.254.169.254
export NO_PROXY=169.254.169.254
/hana/shared/aws-backint-agent/aws-backint-agent "$@"
For more information about using a proxy address in your SAP HANA environment, see Use a proxy
address with AWS Backint agent (p. 12).
Problem: When you initiate a backup or restore, you get the following error in SAP HANA
Studio or SAP HANA Cockpit:
backup could not be completed, Backint cannot execute /usr/sap/<SID>/SYS/
global/hdb/opt/hdbbackint, Permission denied (13)
• Root Cause: The AWS Backint agent binary or launcher script doesn’t have the execute permission at
the operating system level.
• Resolution: Set the execute permission for AWS Backint agent binary aws-backint-agent and for
the launcher script aws-backint-agent-launcher.sh in the installation directory (for example, /
hana/shared/aws-backint-agent/).
Problem: My backup is running slowly and taking a long time to complete.
• Root Cause: The performance of backup and restore depends on many factors, such as the type of EC2 instance used, the EBS volumes, and the number of SAP HANA channels. SAP HANA defaults to a single channel if your database is smaller than 128 GB or if the SAP HANA parameter parallel_data_backup_backint_channels is set to 1.
• Resolution: The speed of your database backup depends on how much storage throughput is
available to your SAP HANA data volumes (/hana/data). Total storage throughput available for
SAP HANA data volumes depends on your Amazon EBS storage type and the number of volumes
used for striping. For best performance, follow the storage configuration best practices. You can
switch your Amazon EBS volumes associated with SAP HANA data filesystem to io1, io2 or gp3
volume type. Additionally, if your database size is greater than 128 GB, you can improve your
backup performance by adjusting the number of parallel backup channels. Increase the value of
parallel_data_backup_backint_channels and try to initiate your backup again. We recommend
that you take the resource contention with normal system operation performance into consideration
when you try to tune the performance of your backup.
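As a sketch, the channel count can be raised with an ALTER SYSTEM statement (the value 8 is illustrative; choose it based on your data volume striping and available throughput):

```sql
-- Illustrative: allow 8 parallel Backint channels for data backups
ALTER SYSTEM ALTER CONFIGURATION ('global.ini', 'SYSTEM')
  SET ('backup', 'parallel_data_backup_backint_channels') = '8'
  WITH RECONFIGURE;
```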
• Root Cause: The connection between the AWS Backint agent and S3 fails due to high throughput.
• Resolution: Update AWS Backint agent to version 1.02 or later.
• Root Cause: AWS Backint agent searches for the logs and data backups only in the Amazon S3 path
that's provided in the configuration file. Because the S3ShortenBackupDestinationEnabled
parameter changes the Amazon S3 folder, it cannot find the backup.
• Resolution: You can either change the S3ShortenBackupDestinationEnabled parameter to
false and run the restore, or you can move the previous backups and the SAP HANA backup catalog
to the new S3 location. For more details, see the section called “Configure AWS Backint agent to use
shorter Amazon S3 paths” (p. 22).
Problem: When processing a database recovery, a "No data backups found" error is displayed and the agent log shows "The operation is not valid for the object's access tier".
• Root Cause: With the S3StorageClass = 'INTELLIGENT_TIERING' parameter set in the aws-backint-agent-config.yaml file, the objects have been moved to archive access tiers. AWS Backint agent does not support recovery from archive tiers.
• Resolution: You must first restore the archived S3 objects to move them back to an accessible tier. This can take from a few minutes to 12 hours, depending on the archive tier and the restore option that is selected. After the S3 restore is complete, you can initiate recovery for the HANA database.
Backup deletion
Problem: You deleted your SAP HANA backup from the SAP HANA backup console (SAP
HANA Studio or SAP HANA Cockpit) but the deleted backup files still appear in the Amazon
S3 folder.
• Root Cause: AWS Backint agent couldn’t delete the associated backup files from the Amazon S3
bucket due to a permission issue.
• Resolution: AWS Backint agent requires s3:DeleteObject permission to delete the backup files
from your target Amazon S3 bucket when you delete the backup from the SAP HANA backup console.
Ensure that the IAM profile attached to your EC2 instance has s3:DeleteObject permission. For
backups that are already deleted from SAP HANA, you can manually delete the associated files from
the Amazon S3 bucket. We recommend that you take additional precautions before manually deleting any backup files. Manually deleting the wrong backup file could impact your ability to recover your SAP HANA system in the future.
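A minimal statement granting the deletion permission might look like the following policy fragment (the bucket name is an illustrative assumption):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:DeleteObject"],
      "Resource": "arn:aws:s3:::awsdoc-example-bucket/*"
    }
  ]
}
```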
Version history
The following table summarizes the changes for each release of AWS Backint agent.
Manual installer
• Removed -o flag.
• Added -l flag, which allows you to specify the location of the agent .tar file.
This guide describes the most common scenarios, use cases, and options for migrating SAP HANA
systems from on-premises or other cloud platforms to the Amazon Web Services Cloud.
This guide is intended for SAP architects, SAP engineers, IT architects, and IT administrators who want to
learn about the methodologies for migrating SAP HANA systems to AWS, or who want to have a better
understanding of migration approaches to AWS in general.
This guide does not replace AWS and SAP documentation and is not intended to be a step-by-step,
detailed migration guide. For a list of helpful resources, see the Additional Reading (p. 62) section.
Information and recommendations regarding integrator and partner tools are also beyond the scope
of this guide. Also, some of the migration scenarios may involve additional technology, expertise, and
process changes, as discussed later in this guide (p. 49).
Note
To access the SAP notes and Knowledge Base articles (KBA) referenced in this guide, you must
have an SAP ONE Support Launchpad user account. For more information, see the SAP Support
website.
Migration Frameworks
Although this guide focuses on SAP HANA migrations to AWS, it is important to understand AWS
migrations in a broader context. To help our customers conceptualize and understand AWS migrations in
general, we have developed two major guidelines: 6 Rs and CAF.
6 Rs Framework
The 6 Rs migration strategy helps you understand and prioritize portfolio and application discovery,
planning, change management, and the technical processes involved in migrating your applications
to AWS. The 6 Rs represent six strategies listed in the following table that help you plan for your
application migrations.
The decision tree diagram in Figure 1 will help you visualize the end-to-end process, starting from
application discovery and moving through each 6 R strategy.
Figure 1: 6 Rs framework
The two strategies that are specifically applicable for SAP HANA migrations to AWS are rehosting and
replatforming. Rehosting is applicable when you want to move your SAP HANA system as is to AWS.
This type of migration involves minimal change and can be seen as a natural fit for customers who are
already running some sort of SAP HANA system. Replatforming is applicable when you want to migrate
from an anyDB source database (such as IBM DB2, Oracle Database, or SQL Server) to an SAP HANA
database.
Both the CAF and 6 Rs frameworks help you understand and plan the broader context of an AWS migration and what it means to you and your company.
Planning
Before you start migrating your SAP environment to AWS, there are some prerequisites that we
recommend you go over, to ensure minimal interruptions or delays. For details, see the SAP on AWS
overview. The following sections discuss additional considerations for planning your migration.
Application Discovery Service can be deployed in an agentless mode (for VMware environments) or with
an agent-based mode (all VMs and physical servers). We recommend that you run Application Discovery
Service for a few weeks to get a complete, initial assessment of how your on-premises environment is
utilized, before you migrate to AWS.
Data Tiering
If you are planning on replatforming your SAP HANA environment on AWS, you can also consider
different services and options available to you for distributing your data into warm and cold SAP-
certified storage solutions like SAP HANA dynamic tiering or Hadoop on AWS. Currently, SAP supports
Cloudera, HortonWorks, and MapR as possible Hadoop distributions for SAP HANA. See the SAP HANA
administration guide for details on how to connect SAP HANA systems with Hadoop distribution using
smart data access.
Migrating warm or cold data can further simplify your SAP environment and help reduce your total
cost of ownership (TCO). For more information, see our web post for SAP dynamic tiering sizes and
recommendations.
Prerequisites
SAP HANA system migration requires a moderate to high-level knowledge of the source and target
IT technologies and environments. We recommend that you familiarize yourself with the following
information:
AWS services:
SAP HANA sizing
SAP on AWS:
There are three ways to determine peak memory utilization of your existing SAP HANA system:
• SAP HANA Studio: The overview tab of the SAP HANA Studio administration view provides a memory
utilization summary.
• SAP EarlyWatch alerts: This is a free, automated service from SAP that helps you monitor major
administrative areas of your SAP system. See the SAP portal for details.
• SQL statements: SAP provides SQL statements that you can use to determine peak memory utilization.
For details, see SAP KBA 1999997 – FAQ: SAP HANA Memory and SAP Note 1969700 – SQL statement
collection for SAP HANA.
Tip
We recommend determining peak memory utilization for a timeframe during which your system
utilization is likely to be high (for example, during year-end processing or a major sales event).
• You are already running SAP HANA but you want to change your operating system—for example, from
Red Hat Enterprise Linux (RHEL) to SUSE Linux Enterprise Server (SLES) or the other way around—
when you migrate to the AWS Cloud, or you are migrating from an IBM POWER system to the x86
platform. In this case, you should size SAP HANA as described for the rehosting scenario.
• You are migrating from anyDB to SAP HANA. There are multiple ways you can estimate your memory
requirements:
• SAP standard reports for estimation: This is the best possible approach and is based on standard
sizing reports provided by SAP. For examples, see the following SAP Notes:
• 1736976 – Sizing Report for BW on HANA
• 1637145 – SAP BW on HANA: Sizing SAP In-Memory Database
Instance sizing for SAP HANA
You should also consider the following SAP notes and Knowledge Base articles for SAP HANA sizing
considerations:
As a guideline, you can use this formula to help estimate how long your network data transfer might
take:
(Total bytes to be transferred / Transfer rate per second) = Total transfer time in seconds
For example, for a 1 TB SAP HANA appliance, the total bytes to be transferred is usually 50% of the
memory, which would be 512 GB. The transfer rate per second is your network transfer rate—if you had
a 1 Gb AWS Direct Connect connection to AWS, you could transfer up to 125 MB per second, and your
total data transfer time would be:
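That arithmetic can be sketched in a couple of lines of shell (a binary GB-to-MB conversion is assumed for the rough estimate):

```shell
# 512 GB to transfer over a ~125 MB/s (1 Gb) Direct Connect link.
MB_TOTAL=$((512 * 1024))            # 524288 MB
RATE_MB_S=125
SECS=$((MB_TOTAL / RATE_MB_S))
echo "${SECS} seconds (~$((SECS / 60)) minutes)"   # 4194 seconds (~69 minutes)
```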
SAP HANA scale-up and scale-out
After you determine the amount of data you need to transfer and how much time you have available
to transfer the files, you can determine the AWS connectivity options that best fit your cost, speed, and
connectivity requirements. Presenting all available network connectivity options is beyond the scope
of this document; see the Additional Reading (p. 62) section of this document for more detailed
references.
In a scale-out scenario, you add capacity to your SAP HANA system by adding new EC2 instances to the
SAP HANA cluster. For example, once you reach the maximum memory capacity of a single EC2 instance,
you can scale out your SAP HANA cluster and add more instances. AWS has certified SAP HANA scale-out
clusters that support up to 100 TiB of memory. Please note that the minimum number of recommended
nodes in an SAP HANA scale-out cluster can be as low as two nodes; for more information, see SAP Note
1702409 - HANA DB: Optimal number of scale out nodes for BW on HANA. It’s likely that your sizing
estimates will reveal the need to plan for a scale-out configuration before you start your SAP HANA
migration. AWS gives you the ability to easily deploy SAP HANA scale-out configurations when you use
the SAP HANA Quick Start.
When you finalize your SAP sizing and SAP HANA deployment models, you can plan your migration
strategy.
In addition to SAP HANA sizing, you may also need to size your SAP application tier. To find the SAP
Application Performance Standard (SAPS) ratings of SAP-certified EC2 instances, see SAP Standard
Application Benchmarks and the SAP on AWS support note on the SAP website (SAP login required).
You can then use the AWS Quick Start for SAP HANA to rapidly provision SAP HANA instances and build
your SAP application servers on AWS, when you are ready to trigger the import process of the DMO tool.
The SUM DMO tool can convert data from anyDB to SAP HANA or SAP ASE, with OS migrations, release/
enhancement pack upgrades, and Unicode conversions occurring at the same time. Results are written
to flat files, which are transferred to the target SAP HANA system on AWS. The second phase of DMO
with System Move imports the flat files and builds the migrated SAP application with the extracted data,
code, and configuration. Here’s a conceptual flow of the major steps involved:
Backup/Restore Tools
Backup and restore options are tried-and-true mechanisms for saving data on a source system and
restoring it to another destination. AWS has various storage options available to help facilitate data
transfer to AWS. Some of those are explained in this section. We recommend that you discuss which
option would work best for your specific workload with your systems integrator (SI) partner or with an
AWS solutions architect.
• Storage Gateway: This is a virtual appliance installed in your on-premises data center that helps you
replicate files, block storage, or tape libraries by integrating with AWS storage services such as Amazon
S3 and by using standard protocols like Network File System (NFS) or Internet Small Computer System
Interface (iSCSI). Storage Gateway offers file-based, volume-based, and tape-based storage solutions.
For SAP systems, we will focus on file replication using a file gateway and block storage replication
using a volume gateway. For scenarios where multiple backups or logs need to be continuously copied
to AWS, you can copy these files to the locally mounted storage and they will be replicated to AWS.
See the SAP ASE Cloud Backup to Amazon S3 using AWS File Gateway whitepaper on the SAP website
to learn how to use a file gateway to manage backup files of SAP ASE on AWS with Amazon S3, with
the STANDARD-IA (infrequent access) and Amazon S3 Glacier storage classes. For more information
about these storage classes, see the Amazon S3 documentation.
• Amazon EFS file transfer: AWS provides options to copy data from an on-premises environment to
AWS by using Amazon Elastic File System (Amazon EFS). Amazon EFS is a fully managed service,
and you pay only for the storage that you use. You can mount an Amazon EFS file share on your on-
premises server, as long as you have AWS Direct Connect set up between your corporate data center
and AWS. This is illustrated in Figure 5.
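Both of the NFS-based options above come down to a standard mount on the on-premises host; the commands below are a sketch, and the gateway IP, share name, mount target IP, and mount points are all placeholders for your environment.

```shell
# Mount a file share exported by an on-premises Storage Gateway file gateway:
sudo mount -t nfs -o nolock,hard 10.0.0.25:/hana-backup-bucket /mnt/gateway-backup

# Mount an Amazon EFS file system from an on-premises host over Direct Connect.
# Use a mount target IP address; EFS DNS names don't resolve outside the VPC.
sudo mount -t nfs4 -o nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2 \
    10.0.1.17:/ /mnt/efs
```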
AWS Snowball
With AWS Snowball, you can copy large amounts of data from your on-premises environment to
AWS, when it’s not practical or possible to copy the data over the network. AWS Snowball is a storage
appliance that is shipped to your data center. You plug it into your local network to copy large volumes
of data at high speed. When your data has been copied to the appliance, you can ship it back to AWS,
and your data will be copied to Amazon S3 based on the desired target storage destination that you
specify. AWS Snowball is very useful when you’re planning very large, multi-TB SAP system migrations.
For more information, see When should I consider using Snowball instead of the Internet in the AWS
Snowball FAQ.
AMIs
You can use an Amazon Machine Image (AMI) to launch any EC2 instance. You can create an AMI of
an EC2 instance that hosts SAP HANA, including the attached EBS volumes, through the Amazon EC2
console, the AWS CLI, or the Amazon EC2 API. You can then use the AMI to launch a new EC2 instance
with SAP HANA in any Availability Zone within the AWS Region where the AMI was created. You can also
copy your AMI to another AWS Region and use it to launch a new instance. You can use this feature to
move your SAP HANA instance to another Availability Zone or AWS Region, or to change the tenancy
type of your EC2 instance. For example, you can create an AMI of your EC2 instance with default tenancy
and use it to launch a new EC2 instance with host or dedicated tenancy, and vice versa. For details, see
Amazon Machine Images (AMIs) in the Amazon EC2 documentation.
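The create-and-copy flow can be driven from the AWS CLI; the instance ID, AMI ID, Regions, and names below are placeholders.

```shell
# Create an AMI of the EC2 instance that hosts SAP HANA, including its EBS volumes.
aws ec2 create-image \
    --instance-id i-0123456789abcdef0 \
    --name "sap-hana-prd-ami" \
    --description "SAP HANA PRD with attached EBS volumes"

# Copy the AMI to another Region, for example to relocate the SAP HANA instance.
aws ec2 copy-image \
    --source-image-id ami-0abcdef1234567890 \
    --source-region us-east-1 \
    --region eu-central-1 \
    --name "sap-hana-prd-ami-copy"
```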
Migration Scenarios
The following table lists the migration scenarios that we will cover in detail in this guide. The tools and
methodologies listed in the table were discussed in the previous section.
| Migration scenario | Source database | Target database | Tools and methodologies |
|---|---|---|---|
| Migration of anyDB from other platforms to AWS* | anyDB (any non-SAP HANA database such as IBM DB2, Oracle Database, or SQL Server) | SAP HANA | [✔] SAP HANA classical migration; [✔] SAP DMO with System Move |
| Migration of SAP HANA from other platforms to AWS* | SAP HANA (scale-up and scale-out) | SAP HANA | [✔] SAP HANA backup and restore |
* Other platforms include on-premises infrastructures and other cloud infrastructures outside of AWS.

Migrating anyDB to SAP HANA on AWS
• SAP ABAP code changes. For example, you might have custom code that has database or operating
system dependencies, such as database hints coded for the anyDB platform. You might also need to
change custom ABAP code so it performs optimally on SAP HANA. See SAP’s recommendations and
guidance for these SAP HANA-specific optimizations. For details and guidance, see Considerations for
Custom ABAP Code During a Migration to SAP HANA and SAP Notes 1885926 – ABAP SQL monitor
and 1912445 – ABAP custom code migration for SAP HANA on the SAP website.
• Operating system-specific dependencies such as custom file shares and scripts that would need to be
re-created or moved to a different solution.
• Operating system tunings (for example, kernel parameters) that would need to be accounted for. Note
that the AWS Quick Start for SAP HANA incorporates best practices from operating system partners
like SUSE and Red Hat for SAP HANA.
• Technology expertise such as Linux administration and support, if your organization doesn’t already
have experience with Linux.
SAP provides tools and methodologies such as classical migration and SUM DMO to help its customers
with the migration process for this scenario. (For more information, see the section Migration Tools
and Methodologies (p. 44).) AWS customers can use the SAP SUM DMO tool (p. 46) to migrate
their database to SAP HANA on AWS. Considerations for the SAP SUM DMO method include network
bandwidth, the amount of data to be transferred, and the time available for the transfer.
Migrating SAP HANA to AWS
Implementing SAP HANA on AWS enables quick provisioning of scale-up and scale-out SAP HANA
configurations and enables you to have your SAP HANA system available in minutes. In addition to fast
provisioning, AWS lets you quickly scale up by changing your EC2 instance type, as discussed earlier in
the SAP HANA Sizing (p. 42) section. With this capability, you can react to changing requirements
promptly and focus less on getting your sizing absolutely perfect. This means that you can spend less
time sizing (that is, you can move through your project’s planning and sizing phase faster) knowing that
you can scale up later, if needed.
EC2 instance memory capabilities give you the option to consolidate multiple SAP HANA databases on
a single EC2 instance (scale-up) or multiple EC2 instances (scale-out). SAP calls these options HANA and
ABAP One Server, Multiple Components in One Database (MCOD), Multiple Components in One System
(MCOS), and Multitenant Database Containers (MDC). It is beyond the scope of this guide to recommend
specific consolidation combinations; for possible combinations, see SAP Note 1661202 – Support for
multiple applications on SAP HANA.
This migration scenario involves provisioning your SAP HANA system on AWS, backing up your source
database, transferring your data to AWS, and installing your SAP application servers. If you are resizing
your HANA environment from scale-up to scale-out, follow the process described in SAP Note
2130603; if you are resizing from scale-out to scale-up, see SAP Note
2093572. Depending on your specific scenario, you can use standard backup and restore, SAP HANA
classical migration, SAP HANA HSR, AWS Server Migration Service (AWS SMS), or third-party continuous
data protection (CDP) tools; see the following sections for details on each option.
Option 1: Backup and restore
1. Provision your SAP HANA system and landscape on AWS. (The AWS Quick Start for SAP NetWeaver
can help expedite and automate this process for you.)
2. Transfer a full SAP HANA backup (for example, with sftp or rsync) from your source system to your
target EC2 instance on AWS, along with any SAP HANA logs needed for point-in-time recovery. A
general tip: compress your files and split them into smaller chunks to parallelize the
transfer. If your transfer destination is Amazon S3, the aws s3 cp command automatically
parallelizes the file upload for you. For other options for transferring your data to AWS, see the AWS
services listed previously in the Backup/Restore Tools (p. 46) section.
3. Recover your SAP HANA database.
4. Install your SAP application servers. (Skip this step if you used the AWS Quick Start for SAP
NetWeaver in step 1.)
5. Depending on your application architecture, you might need to reconnect your applications to the
newly migrated SAP HANA system.
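The compress-and-split tip in step 2 can be exercised locally before you try it on a real backup. This sketch uses a small dummy file; the S3 bucket name in the comment is a placeholder.

```shell
mkdir -p backup_demo
# Stand-in for a HANA backup file (1 MiB of random data).
head -c 1048576 /dev/urandom > backup_demo/full_backup.bin
# Compress and split into 256 KiB chunks that can be transferred in parallel.
gzip -c backup_demo/full_backup.bin | split -b 256k - backup_demo/backup.gz.part-
# Receiver side: rejoin the chunks and decompress.
cat backup_demo/backup.gz.part-* | gunzip -c > backup_demo/rejoined.bin
# Verify integrity of the rejoined file.
if cmp -s backup_demo/full_backup.bin backup_demo/rejoined.bin; then
  echo OK > backup_demo/verify.txt
else
  echo FAIL > backup_demo/verify.txt
fi
# To upload the chunks to Amazon S3 instead (aws s3 cp uses parallel
# multipart uploads automatically):
#   aws s3 cp backup_demo/ s3://my-bucket/hana-backup/ --recursive
```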
Option 2: Classical migration
1. Provision your SAP HANA system and landscape on AWS. (The AWS Quick Start for SAP NetWeaver
can help expedite and automate this process for you.)
2. Perform an SAP homogeneous system copy to export your source SAP HANA database. You may also
choose to use a database backup as the export; see SAP Note 1844468 – Homogeneous system copy
on SAP HANA. When export is complete, transfer your data into AWS.
3. Continue the SAP system copy process on your SAP HANA system on AWS to import the data you
exported in step 2.
4. Install your SAP application servers. (Skip this step if you used the AWS Quick Start for SAP
NetWeaver in step 1.)
5. Depending on your application architecture, you might need to reconnect your applications to the
newly migrated SAP HANA system.
Option 3: HSR
1. Provision your SAP HANA system and landscape on AWS. (The AWS Quick Start for SAP NetWeaver
can help expedite and automate this process for you.) To save costs, you might choose to stand up a
smaller EC2 instance type.
2. Establish asynchronous SAP HANA system replication from your source database to your standby
SAP HANA database on AWS.
3. Perform an SAP HANA takeover on your standby database.
4. Install your SAP application servers. (Skip this step if you used the AWS Quick Start for SAP
NetWeaver in step 1.)
5. Depending on your application architecture, you might need to reconnect your applications to the
newly migrated SAP HANA system.
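Steps 2 and 3 map to SAP's hdbnsutil replication commands. The sketch below shows the general shape; the site names, remote host, and instance number are placeholders, the commands are run as the <sid>adm user, and you should follow the SAP HANA Administration Guide for the authoritative procedure.

```shell
# On the source (primary) system: enable system replication.
hdbnsutil -sr_enable --name=onprem

# On the target EC2 instance, with its database stopped: register it as an
# asynchronous secondary of the source system.
hdbnsutil -sr_register --remoteHost=hana-onprem --remoteInstance=00 \
    --replicationMode=async --operationMode=logreplay --name=aws

# Check replication status before cutover.
hdbnsutil -sr_state

# At cutover, on the target system: take over as the new primary.
hdbnsutil -sr_takeover
```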
Option 4: HSR (with initialization via backup and restore)
Figure 9: SAP HANA system replication (with initialization via backup and restore)
1. Provision your SAP HANA system and landscape on AWS. (The AWS Quick Start for SAP NetWeaver
can help expedite and automate this process for you.) To save costs, you might choose to stand up a
smaller EC2 instance type.
2. Stop the source SAP HANA database and obtain a copy of the data files (this is essentially a cold
backup). After the files have been saved, you may start up your SAP HANA database again.
3. Transfer the SAP HANA data files to AWS, to the SAP HANA server you provisioned in step 1. (For
example, you can store the data files in the /backup directory or in Amazon S3 during the transfer
process.)
4. Stop the SAP HANA database on the target system in AWS. Replace the SAP HANA data files (on the
target server) with the SAP HANA data files you transferred in step 3.
5. Start the SAP HANA system on the target system and establish asynchronous SAP HANA system
replication from your source system to your target SAP HANA system in AWS.
6. Perform an SAP HANA takeover on your standby database.
7. Install your SAP application servers. (Skip this step if you used the AWS Quick Start for SAP
NetWeaver in step 1.)
8. Depending on your application architecture, you might need to reconnect your applications to the
newly migrated SAP HANA system.
Migrating SAP HANA to a High Memory instance
For SAP HANA workloads, EC2 High Memory instances support SUSE Linux Enterprise Server for SAP
Applications (SLES for SAP) and Red Hat Enterprise Linux for SAP Solutions (RHEL for SAP) operating
systems. The following table provides the minimum supported operating system version for SAP HANA
workloads.
| Instance type | Minimum operating system version |
|---|---|
| u-6tb1.metal, u-9tb1.metal, and u-12tb1.metal | SLES for SAP 12 SP3 and above; RHEL for SAP 7.4 and above |
| u-18tb1.metal and u-24tb1.metal | SLES for SAP 12 SP4 and above; RHEL for SAP 8.1 and above |
| u-3tb1.56xlarge | SLES for SAP 12 SP3 and above; RHEL for SAP 7.4 and above |
| u-6tb1.56xlarge | SLES for SAP 12 SP3 and above; RHEL for SAP 7.4, RHEL for SAP 7.7 and above |
| u-6tb1.112xlarge, u-9tb1.112xlarge, and u-12tb1.112xlarge | SLES for SAP 12 SP4 and above; RHEL for SAP 8.1 and above |
See the SAP HANA hardware directory for a list of supported operating systems for your instance type.
Important
If you are using u-*tb1.112xlarge instance types with one of the following operating system
versions, verify that your system has the minimum required kernel version in order to use all
available vCPUs.
Note
u-*tb1.metal instances can be launched only as Amazon EC2 Dedicated Hosts with host
tenancy. u-6tb1.56xlarge and u-*tb1.112xlarge instances can be launched with default,
dedicated or host tenancy.
Before you start your migration, if you plan to use u-*tb1.metal instances, make sure that a
u-*tb1.metal instance is allocated to your target account, Availability Zone, and AWS Region.
If you plan to use u-6tb1.56xlarge or u-*tb1.112xlarge, ensure that your account limit for the
resource “On-Demand High Memory instances” or “U*TB1 Dedicated Hosts” (required only if you
intend to use a Dedicated Host) is set appropriately. If needed, submit a request from the AWS
console to increase your account limit. For more information, see Amazon EC2 service quotas
and On-Demand Instance limits in the AWS documentation.
You have several options for migrating your existing SAP HANA workload on AWS to an EC2 High
Memory instance, as discussed in the following sections.
Note
In the following sections, we show an X1 instance as the source instance type for migration. These
procedures apply to any other source instance type as well.
Option 1: Resizing an instance with host or dedicated tenancy
Figure 10: Migrating to an EC2 High Memory instance with resize option
1. Verify that your source system is running on a supported operating system version. If not, you might
have to upgrade your operating system before resizing to an EC2 High Memory instance.
2. EC2 High Memory instances are based on the Nitro system. On Nitro-based instances, EBS volumes
are presented as NVMe block devices. If your source system has any mount point entries in /etc/
fstab with reference to block devices such as /dev/xvd<x>, you need to create a label for these
devices and mount them by label before migrating to EC2 High Memory instances. Otherwise, you
will face issues when you start SAP HANA on an EC2 High Memory instance.
3. Verify that you don’t exceed the maximum number of EBS volumes supported by your instance. A
u-*tb1.metal EC2 High Memory instance currently supports up to 19 EBS volumes;
u-6tb1.56xlarge and u-*tb1.112xlarge instances support up to 27 EBS volumes. For details,
see Instance Type Limits in the AWS documentation.
4. When you are ready to migrate, make sure that you have a good backup of your source system. You
can use AWS Backint Agent for SAP HANA to easily back up your SAP HANA database to Amazon S3.
For details, see AWS Backint Agent for SAP HANA in the AWS documentation.
5. Stop the source instance in the Amazon EC2 console or by using the AWS CLI.
6. If your source EC2 instance is running with dedicated tenancy, modify the instance placement
to host tenancy. For instructions, see Modifying instance Tenancy and Affinity in the AWS
documentation. Skip this step if your instance is running with host tenancy.
7. Modify the instance placement of your existing instance to your target EC2 High Memory Dedicated
Host through the Amazon EC2 console or the AWS CLI. For details, see modify-instance-placement in
the AWS documentation.
8. Change your instance type to the desired EC2 High Memory instance type (for example,
u-12tb1.metal or u-12tb1.112xlarge) through the AWS CLI or AWS Console.
Note
You can change the instance type to u-*tb1.metal only through the AWS CLI or Amazon
EC2 API.
9. Start your instance in the Amazon EC2 console or by using the AWS CLI.
10. When you increase the memory of your SAP HANA system, you might need to adjust the storage size
of SAP HANA data, log, shared, and backup volumes as well to accommodate data growth and to get
improved performance. For details, see SAP HANA on AWS Operations Guide.
11. Start your SAP HANA database and perform your validation.
12. Complete any SAP HANA-specific post-migration activities.
13. Complete any AWS-specific post-migration activities, such as setting up Amazon CloudWatch, AWS
Config, and AWS CloudTrail.
14. Configure your SAP HANA system for high availability on the EC2 High Memory instance with SAP
HANA HSR and clustering software, and test it.
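Steps 5 to 9 can be driven from the AWS CLI; the instance ID, Dedicated Host ID, and target instance type below are placeholders for your environment.

```shell
# Step 5: stop the source instance and wait for it to reach the stopped state.
aws ec2 stop-instances --instance-ids i-0123456789abcdef0
aws ec2 wait instance-stopped --instance-ids i-0123456789abcdef0

# Step 7: move the instance onto the target EC2 High Memory Dedicated Host.
aws ec2 modify-instance-placement \
    --instance-id i-0123456789abcdef0 \
    --tenancy host --host-id h-0fedcba9876543210

# Step 8: change the instance type. For u-*tb1.metal this must be done
# through the AWS CLI or the Amazon EC2 API, not the console.
aws ec2 modify-instance-attribute \
    --instance-id i-0123456789abcdef0 \
    --instance-type Value=u-12tb1.metal

# Step 9: start the resized instance.
aws ec2 start-instances --instance-ids i-0123456789abcdef0
```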
Option 2: Migrating from an instance with default tenancy
1. Verify that your source system is running on a supported operating system version. If it isn’t, you
might have to upgrade your operating system before resizing to an EC2 High Memory instance.
2. EC2 High Memory instances are based on the Nitro system. On Nitro-based instances, EBS volumes
are presented as NVMe block devices. If your source system has any mount point entries in /etc/
fstab with reference to block devices such as /dev/xvd<x>, you need to create a label for these
devices and mount them by label before migrating to EC2 High Memory instances. Otherwise, you
will face issues during instance launch.
3. When you are ready to migrate, verify that you have a good backup of your source system.
4. Stop the source instance in the Amazon EC2 console or by using the AWS CLI.
5. Change the instance type to the target EC2 High Memory instance size, such as u-6tb1.56xlarge
or u-*tb1.112xlarge.
6. When you increase the memory of your SAP HANA system, you might need to adjust the storage size
of SAP HANA data, log, shared, and backup volumes as well to accommodate data growth and to get
improved performance. For details, see the SAP HANA on AWS Operations Guide.
7. Start your SAP HANA database and perform your validation.
Note
If necessary, complete any SAP HANA-specific post-migration activities.
8. Check the connectivity between your SAP application servers and the new SAP HANA instance.
9. If necessary, complete any AWS-specific post-migration activities, such as setting up Amazon
CloudWatch, AWS Config, and AWS CloudTrail.
10. Configure your SAP HANA system for high availability on the EC2 High Memory instance with SAP
HANA HSR and clustering software, and test it.
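The device-label change described in step 2 looks roughly like this; the device name, label, and mount point are placeholders, and xfs_admin assumes an XFS file system (use e2label for ext2/3/4 file systems).

```shell
# Label the file system while it is still addressed by its Xen-style device name.
sudo xfs_admin -L HANADATA /dev/xvdb

# /etc/fstab: mount by label so the entry survives the /dev/xvd* -> NVMe rename
# on Nitro-based instances.
# Before:  /dev/xvdb       /hana/data  xfs  defaults,nofail  0 2
# After:   LABEL=HANADATA  /hana/data  xfs  defaults,nofail  0 2
```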
Option 2: Migrating from an instance with default tenancy
1. Verify that your source system is running on a supported operating system version. If it isn’t, you
might have to upgrade your operating system before resizing to an EC2 High Memory instance.
2. EC2 High Memory instances are based on the Nitro system. On Nitro-based instances, EBS volumes
are presented as NVMe block devices. If your source system has any mount point entries in /etc/
fstab with reference to block devices such as /dev/xvd<x>, you need to create a label for these
devices and mount them by label before migrating to EC2 High Memory instances. Otherwise, you
will face issues when you start SAP HANA on an EC2 High Memory instance.
3. When you are ready to migrate, verify that you have a good backup of your source system.
4. Stop the source instance in the Amazon EC2 console or by using the AWS CLI.
5. Create an AMI of your source instance. For details, see Creating an Amazon EBS-Backed Linux AMI in
the AWS documentation.
Tip
Creating an AMI for the first time with the attached EBS volumes could take a long time,
depending on your data size. To expedite this process, we recommend that you take
snapshots of EBS volumes attached to the instance ahead of time.
6. Launch a new EC2 High Memory instance with host tenancy for u-*tb1.metal instances. For
u-6tb1.56xlarge and u-*tb1.112xlarge, you can launch a new EC2 High Memory instance
with default, dedicated or host tenancy.
7. The new instance will have a new IP address. Update all references to the IP address of the source
system, including the /etc/hosts file for the operating system and DNS entries, to reflect the new IP
address. The hostname and storage layout will remain the same as on the source system.
8. When you increase the memory of your SAP HANA system, you might need to adjust the storage size
of SAP HANA data, log, shared, and backup volumes as well to accommodate data growth and to get
improved performance. For details, see the SAP HANA on AWS Operations Guide.
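The launch in step 6 can also be done from the CLI; the AMI, subnet, key pair, and Dedicated Host IDs below are placeholders.

```shell
# Launch the new u-*tb1.metal instance from the AMI created earlier,
# placed on an allocated Dedicated Host (required for metal sizes).
aws ec2 run-instances \
    --image-id ami-0abcdef1234567890 \
    --instance-type u-12tb1.metal \
    --subnet-id subnet-0123456789abcdef0 \
    --key-name my-key-pair \
    --placement Tenancy=host,HostId=h-0fedcba9876543210
```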
Option 2: Migrating from an instance with default tenancy
1. Launch a new SAP HANA EC2 High Memory instance with host tenancy for u-*tb1.metal
instances. For u-6tb1.56xlarge and u-*tb1.112xlarge, you can launch your instance with
default, dedicated or host tenancy. You can use the SAP HANA Quick Start or the AWS Launch
Wizard for SAP to set up your instance automatically, or follow the SAP HANA Environment Setup on
AWS guide to set up your instance manually. Make sure that you are using an operating system that
supports EC2 High Memory instances.
2. Complete any AWS-specific post-migration activities, such as setting up Amazon CloudWatch, AWS
Config, and AWS CloudTrail, ahead of time.
3. Migrate the data from your existing SAP HANA instance by using SAP HANA HSR or SAP HANA
backup and restore tools.
• If you plan to use SAP HANA HSR for data migration, configure HSR to move data from your
source system to your target system. This is illustrated in Figure 13. For details, see the SAP HANA
Administration Guide from SAP.
• If you plan to use the SAP HANA backup and restore feature to migrate your data, back up your
source SAP HANA system. When backup is complete, move the backup data to your target system
and perform a restore in your target system. If you back up your source SAP HANA system directly
to Amazon S3 using AWS Backint Agent for SAP HANA, you can directly restore it in the target
system from Amazon S3. For details, see AWS Backint Agent for SAP HANA in the AWS
documentation. This is illustrated in Figure 14.
Figure 14: Migrating to an EC2 High Memory instance with SAP backup and restore
4. Stop your source system and complete any additional post-migration steps, such as updating DNS and
checking the connectivity between your SAP application servers and the new SAP HANA instance.
5. Configure your SAP HANA system for high availability on the EC2 High Memory instance with SAP
HANA HSR and clustering software, and test it.
Security
In the AWS Cloud Adoption Framework (CAF), security is a perspective that focuses on subjects such as
account governance, account ownership, control frameworks, change and access management, and other
security best practices. We recommend that you become familiar with these security processes when
planning any type of migration. In some cases, you might need to get sign-off from your internal IT audit
and security teams before you start your migration project or during migration. See the CAF security
whitepaper for a deeper dive into each of these topic areas.
Additionally, there are AWS services that help you secure your systems in AWS. For example, AWS
CloudTrail, Amazon CloudWatch, and AWS Config can help you secure your AWS environment.
See the following AWS blog posts for help analyzing and evaluating architectures and design patterns
for the VPC setup and configuration of your SAP landscape.
• VPC Subnet Zoning Patterns for SAP on AWS, Part 1: Internal-Only Access
• VPC Subnet Zoning Patterns for SAP on AWS, Part 2: Network Zoning
• VPC Subnet Zoning Patterns for SAP on AWS, Part 3: Internal and External Access
Beyond VPC and network security, SAP HANA systems require routine maintenance to remain secure,
reliable, and available; see the SAP HANA operations overview for specific recommendations in this topic
area.
Additional Reading
• SAP FAST
• SAP HANA on the AWS Cloud: Quick Start Reference Deployment
• X1 Overview
• SAP and Amazon Web Services website
• SAP on AWS whitepapers
• AWS documentation
Document Revisions
Date Change
This guide is part of a content series that provides detailed information about hosting, configuring, and
using SAP technologies in the AWS Cloud. For the other guides in the series, ranging from overviews to
advanced topics, see the SAP on AWS Technical Documentation home page.
This document provides guidance on how to set up AWS resources and configure SUSE Linux Enterprise
Server (SLES) and Red Hat Enterprise Linux (RHEL) operating systems to deploy SAP HANA on Amazon
Elastic Compute Cloud (Amazon EC2) instances in an existing virtual private cloud (VPC). It includes
instructions for configuring storage for scale-up and scale-out workloads with Amazon Elastic Block
Store (Amazon EBS) and Amazon Elastic File System (Amazon EFS).
This document follows AWS best practices to ensure that your system meets all key performance
indicators (KPIs) that are required for Tailored Data Center Integration (TDI)–based SAP HANA
implementations on AWS. In addition, this document also follows recommendations provided by
SAP, SUSE, and Red Hat for SAP HANA in SAP OSS Notes 2205917, 1944799, 2292690 and 2009879.
SAP regularly updates these OSS notes. Review the latest version of the OSS notes for up-to-date
information before proceeding.
This guide is intended for users with a good understanding of AWS services, network concepts, the Linux
operating system, and SAP HANA administration, who need to launch and configure the resources
required for SAP HANA.
AWS provides a Quick Start reference deployment for SAP HANA to fast-track your SAP HANA
deployment in the AWS Cloud. The Quick Start uses AWS CloudFormation and scripts to quickly
provision the resources needed to deploy SAP HANA, and it usually completes in less than an hour with
minimal manual intervention. See the SAP HANA on AWS Quick Start deployment guide if you want
to use the automated deployment.
If your organization can’t use the Quick Start reference deployment and you require additional
customization to meet internal policies, you can follow the steps in this document to manually set
up AWS resources such as Amazon EC2, Amazon EBS, and Amazon EFS by using the AWS Command Line
Interface (AWS CLI) or the AWS Management Console.
Unlike the SAP HANA on AWS Quick Start, this document doesn’t provide guidance on how to set up
network and security constructs such as Amazon VPC, subnets, route tables, access control lists (ACLs),
NAT Gateway, AWS Identity and Access Management (IAM) roles, security groups, etc. Instead, this
document focuses on configuring compute, storage, and operating system resources for SAP HANA
deployment on AWS.
Prerequisites
Specialized Knowledge
If you are new to AWS, see Getting Started with AWS.
Technical Requirements
1. If necessary, request a service limit increase for the instance type that you’re planning to use for your
SAP HANA system. If you already have an existing deployment that uses this instance type, and you
think you might exceed the default limit with this deployment, you will need to request an increase.
For details, see Amazon EC2 Service Limits in the AWS documentation.
2. Ensure that you have a key pair that you can use to launch your Amazon EC2 instance. If you need to
create or import a key pair, refer to Amazon EC2 Key Pairs in the AWS documentation.
3. Ensure that you have the network details of the VPC, such as VPC ID and subnet ID, where you plan
to launch the Amazon EC2 instance that will host SAP HANA.
4. Ensure that you have a security group to attach to the Amazon EC2 instance that will host SAP
HANA and that the required ports are open. If needed, create a new security group that allows the
traffic for SAP HANA ports. For a detailed list of ports, see Appendix C in the SAP HANA on AWS
Quick Start guide.
5. If you intend to use AWS CLI to launch your instances, ensure that you have installed and configured
AWS CLI with the necessary credentials. For details, see Installing the AWS Command Line Interface
in the AWS documentation.
6. If you intend to use the console to launch your instances, ensure that you have credentials and
permissions to launch and configure Amazon EC2, Amazon EBS, and other services. For details, see
Access Management in the AWS documentation.
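For items 5 and 6, you can quickly confirm that the AWS CLI is installed and that its credentials work; the commands below are a sanity check, not a full permissions validation.

```shell
aws --version                 # confirms the CLI is installed
aws configure list            # shows which credentials/profile are active
aws sts get-caller-identity   # verifies the credentials can actually call AWS
```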
Architecture
This guide contains instructions for the following two environment setups:
(Architecture diagrams for the two environment setups appear here.)
Planning the deployment
Operating System
You can deploy your SAP HANA workload on SLES, SLES for SAP, RHEL for SAP with High Availability and
Update Services (RHEL for SAP with HA and US), or RHEL for SAP Solutions.
SLES for SAP and RHEL for SAP with HA and US products are available in AWS Marketplace under an
hourly or an annual subscription model.
See the SUSE SLES for SAP product page to learn more about the benefits of using SLES for SAP.
We strongly recommend using SLES for SAP instead of SLES for all your SAP workloads.
AMI
If you plan to use Bring Your Own Subscription (BYOS) images provided by SUSE, ensure that you have
the registration code required to register your instance with SUSE to access repositories for software
updates.
If you plan to use the BYOS model with RHEL, either through the Red Hat Cloud Access program or
another means, ensure that you have access to a RHEL for SAP Solutions subscription. For details, see
Overview of Red Hat Enterprise Linux for SAP Solutions subscription in the Red Hat Knowledgebase.
If you plan to use the SLES for SAP or RHEL for SAP Amazon Machine Images (AMIs) offered in AWS
Marketplace, ensure that you have completed the subscription process. For details on how to subscribe to
one of these AMIs, see the Appendix sections of the SAP HANA on AWS Quick Start deployment guide.
If you are using AWS CLI, you will need to provide the AMI ID when you launch the instance.
Storage
Deploying SAP HANA on AWS requires specific storage sizes and performance levels to ensure that SAP HANA
data and log volumes both meet the SAP KPIs and sizing recommendations. Refer to the SAP HANA on AWS
Operations Guide to understand the storage configuration details for different instance types. You need
to configure your storage based on these recommendations during instance launch.
Network
Ensure that your network constructs are set up to deploy resources related to SAP HANA. If you haven’t
already set up network components such as Amazon VPC, subnets, route table, etc., you can use the
AWS Modular and Scalable VPC Quick Start to easily deploy a scalable VPC architecture in minutes. For
details, see the deployment guide.
Step 2. Launch the instance
[
{"DeviceName":"/dev/sda1","Ebs":
{"VolumeSize":50,"VolumeType":"gp2","DeleteOnTermination":false}},
{"DeviceName":"/dev/sdb","Ebs":
{"VolumeSize":800,"VolumeType":"io1","Iops":3000,"Encrypted":true,"DeleteOnTermination":false}},
{"DeviceName":"/dev/sdc","Ebs":
{"VolumeSize":800,"VolumeType":"io1","Iops":3000,"Encrypted":true,"DeleteOnTermination":false}},
{"DeviceName":"/dev/sdd","Ebs":
{"VolumeSize":800,"VolumeType":"io1","Iops":3000,"Encrypted":true,"DeleteOnTermination":false}},
{"DeviceName":"/dev/sde","Ebs":
{"VolumeSize":1024,"VolumeType":"gp2","Encrypted":true,"DeleteOnTermination":false}},
{"DeviceName":"/dev/sdf","Ebs":
{"VolumeSize":4096,"VolumeType":"st1","Encrypted":true,"DeleteOnTermination":false}},
{"DeviceName":"/dev/sdh","Ebs":
{"VolumeSize":525,"VolumeType":"io1","Iops":2000,"Encrypted":true,"DeleteOnTermination":false}},
{"DeviceName":"/dev/sdr","Ebs":
{"VolumeSize":50,"VolumeType":"gp2","Encrypted":true,"DeleteOnTermination":false}}
]
Important
If the DeleteOnTermination flag is set to false, Amazon EBS volumes are not deleted
when you terminate your Amazon EC2 instance. This helps preserve your data from accidental
termination of your Amazon EC2 instance. When you terminate the instance, you need to
manually delete the Amazon EBS volumes that are associated with the terminated instance to
stop incurring storage cost.
See Appendix A (p. 81) for more examples of block device mappings for other Amazon EC2 instance
types and Amazon EBS volume types.
Note
If you plan to deploy scale-out workloads, you don’t have to include Amazon EBS volumes for
SAP HANA shared and backup volumes. You can use Amazon EFS and Network File System (NFS)
to mount the SAP HANA shared and backup volumes to your master and worker nodes.
Notes
• The --placement parameter is optional and needed only when you use a dedicated host with host
tenancy or you want to place all your Amazon EC2 instances in close proximity.
Deployment steps using the console
• Choose AWS Marketplace to search for RHEL for SAP and SLES for SAP images.
• Choose My AMIs to search for your BYOS or custom AMI ID.
When you find the image, choose Select, and then confirm to continue.
5. On the Choose an Instance Type page, select the instance type that you identified when planning
the deployment (p. 67), and choose Configure Instance Details to proceed with instance
configuration.
6. On the Configure Instance Details page, do the following:
a. Enter the number of instances (typically 1). For scale-out workloads, specify the number of
nodes.
b. Select the VPC ID and subnet for the network.
c. Turn off the Auto-assign Public IP option.
d. Select Add instance to placement group if needed (recommended for scale-out workloads; for
details, see the AWS documentation).
e. Select any IAM role that you want to assign to the instance to access AWS services from the
instance.
f. Select Stop for Shutdown behavior.
g. Enable termination protection if needed (strongly recommended).
h. Enable Amazon CloudWatch detailed monitoring (strongly recommended; for details, see the
AWS documentation).
i. Select the Tenancy or proceed with the default (Shared). For dedicated hosts, select the
Dedicated host option.
j. Choose Add Storage to proceed with storage configuration.
7. On the Add Storage page, choose Add New Volume to add volumes required for SAP HANA with
the appropriate device, size, volume type, IOPS (for io1 only), and the Delete on Termination flag.
Ensure that you follow the storage guidance (p. 68) discussed earlier in this document. Add
volumes for SAP HANA data, log, shared, backup, and binaries.
Figure 3 shows the storage configuration for the x1.32xlarge instance type with the io1 volume type for
SAP HANA data and log.
Operating system and storage configuration
Note
Amazon EBS volumes are presented as NVMe block devices on Nitro-based instances. You
need to perform additional mapping at the operating system level when you configure these
volumes.
Note
For scale-out workloads, repeat these steps for every node in the cluster.
Configure operating system – SLES
1. After your instance is up and running, connect to the instance by using Secure Shell (SSH) and the
key pair that you used to launch the instance.
Note
Depending on your network and security settings, you might have to first connect by using
SSH to a bastion host before accessing your SAP HANA instance, or you might have to add
IP addresses or ports to the security group to allow SSH access.
2. Switch to root user.
Alternatively, you can use sudo to execute the following commands as ec2-user.
3. Set a hostname and fully qualified domain name (FQDN) for your instance by executing the
hostnamectl command and updating the /etc/hostname file.
5. Set the preserve_hostname parameter to true to ensure your hostname is preserved during
restart.
6. Add an entry to the /etc/hosts file with the new hostname and IP address.
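The hostname steps above can be sketched as follows; the hostname, domain, and IP address are placeholders, and the preserve_hostname setting assumes a cloud-init based image that uses /etc/cloud/cloud.cfg:

```shell
# Set the hostname and FQDN (example values)
hostnamectl set-hostname hanahost
echo "hanahost.example.com" > /etc/hostname

# Preserve the hostname across restarts
sed -i 's/^preserve_hostname:.*/preserve_hostname: true/' /etc/cloud/cloud.cfg

# Map the new hostname to the instance's private IP address (example IP)
echo "10.0.1.10 hanahost.example.com hanahost" >> /etc/hosts
```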
7. If you are using a BYOS SLES for SAP image, register your instance with SUSE. Ensure that your
subscription is for SLES for SAP.
# SUSEConnect -r Your_Registration_Code
# SUSEConnect -s
You can use the rpm command to check whether a package is installed.
You can then use the zypper install command to install the missing packages.
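For example, to check for and install one of the required packages (the package name here is only an example):

```shell
# Check whether a package is installed; a nonzero exit status means it is missing
rpm -q libgcc_s1

# Install the missing package from the configured repositories
zypper install -y libgcc_s1
```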
Note
If you are importing your own SLES image, additional packages might be required to ensure
that your instance is optimally set up. For the latest information, refer to the Package
List section in the SLES for SAP Application Configuration Guide for SAP HANA, which is
attached to SAP OSS Note 1944799.
9. Ensure that your instance is running on a kernel version that is recommended in SAP OSS Note
2205917. If needed, update your system to meet the minimum kernel version. You can check the
version of the kernel and other packages by using the following command:
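One way to check the running kernel and installed kernel packages (the exact command the original guide intends may differ):

```shell
uname -r            # running kernel version
rpm -qa "kernel*"   # installed kernel packages and versions
```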
10. Start saptune daemon and use the following command to set it to automatically start when the
system reboots.
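A sketch of the commands, assuming saptune version 1, which runs on top of the tuned daemon:

```shell
# Start the saptune daemon (this starts tuned under the hood)
saptune daemon start

# Enable tuned so the settings are reapplied after a reboot
systemctl enable tuned
```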
11. Check whether the force_latency parameter is set in the saptune configuration file.
If the parameter is set, skip the next step and proceed with activating the HANA profile with
saptune.
12. Update the saptune HANA profile according to SAP OSS Note 2205917, and then run the following
commands to create a custom profile for SAP HANA. This step is not required if the force_latency
parameter is already set.
# mkdir /etc/tuned/saptune
# cp /usr/lib/tuned/saptune/tuned.conf /etc/tuned/saptune/tuned.conf
# sed -i "/\[cpu\]/ a force_latency=70" /etc/tuned/saptune/tuned.conf
# sed -i "s/script.sh/\/usr\/lib\/tuned\/saptune\/script.sh/" /etc/tuned/saptune/tuned.conf
13. Switch the tuned profile to HANA and verify that all settings are configured appropriately.
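A sketch, assuming the custom profile created in the previous step keeps the saptune profile name:

```shell
# Activate the saptune profile for SAP HANA
tuned-adm profile saptune

# Confirm the active profile and verify that the settings were applied
tuned-adm active
tuned-adm verify
```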
14. Configure and start the Network Time Protocol (NTP) service. You can adjust the NTP server pool
based on your requirements; for example:
Note
Remove any existing invalid NTP server pools from /etc/ntp.conf before adding the
following.
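For example, the entries added to /etc/ntp.conf might look like the following; the pool hostnames are placeholders for the servers you choose:

```shell
# Add NTP server pools to /etc/ntp.conf (example pools)
echo "server 0.pool.ntp.org" >> /etc/ntp.conf
echo "server 1.pool.ntp.org" >> /etc/ntp.conf
echo "server 2.pool.ntp.org" >> /etc/ntp.conf
echo "server 3.pool.ntp.org" >> /etc/ntp.conf

# Start the NTP service and enable it at boot
systemctl start ntpd
systemctl enable ntpd
```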
Tip
Instead of connecting to the global NTP server pool, you can connect to your internal NTP
server if needed. Or you can use Amazon Time Sync Service to keep your system time in
sync.
15. Set the clocksource to tsc by updating the current_clocksource file and the GRUB2 boot
loader.
Configure operating system – RHEL
1. After your instance is up and running, connect to the instance by using Secure Shell (SSH) and the
key pair that you used to launch the instance.
Note
Depending on your network and security settings, you might have to first connect by using
SSH to a bastion host before accessing your SAP HANA instance, or you might have to add
IP addresses or ports to the security group to allow SSH access.
2. Switch to root user.
Alternatively, you can use sudo to execute the following commands as ec2-user.
3. Set a hostname for your instance by executing the hostnamectl command and update the /etc/
cloud/cloud.cfg file to ensure that your hostname is preserved during system reboots.
Note that your instance should have access to the SAP HANA channel to install libraries required for
SAP HANA installations.
You can use the rpm command to check whether a package is installed:
You can then install any missing packages by using the yum -y install command.
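For example, to check for and install one of the required packages (the package name here is only an example):

```shell
# Check whether a package is installed; a nonzero exit status means it is missing
rpm -q compat-sap-c++-6

# Install the missing package
yum -y install compat-sap-c++-6
```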
Note
Depending on your base RHEL image, additional packages might be required to ensure that
your instance is optimally set up. (You can skip this step if you are using the RHEL for SAP
with HA & US image.) For the latest information, refer to the RHEL configuration guide
that is attached to SAP OSS Note 2009879. Review the packages in the Install Additional
Required Packages section and the Appendix–Required Packages for SAP HANA on RHEL 7
section.
5. Ensure that your instance is running on a kernel version that is recommended in SAP OSS Note
2292690. If needed, update your system to meet the minimum kernel version. You can check the
version of the kernel and other packages using the following command.
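As on SLES, one way to check the kernel version (the exact command the original guide intends may differ):

```shell
uname -r            # running kernel version
rpm -qa "kernel*"   # installed kernel packages and versions
```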
6. Start tuned daemon and use the following commands to set it to automatically start when the
system reboots.
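A sketch of starting the tuned daemon and enabling it at boot:

```shell
# Start tuned and enable it so it starts automatically after a reboot
systemctl start tuned
systemctl enable tuned
```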
7. Configure the tuned HANA profile to optimize your instance for SAP HANA workloads.
If the force_latency parameter is not set, execute the following steps to modify and activate the
sap-hana profile.
# mkdir /etc/tuned/sap-hana
# cp /usr/lib/tuned/sap-hana/tuned.conf /etc/tuned/sap-hana/tuned.conf
# sed -i '/force_latency/ c\force_latency=70' /etc/tuned/sap-hana/tuned.conf
# tuned-adm profile sap-hana
# tuned-adm active
8. Disable Security-Enhanced Linux (SELinux) by running the following command. (Skip this step if you
are using the RHEL for SAP with HA & US image.)
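A sketch of disabling SELinux both immediately and persistently; the persistent change takes effect after a reboot:

```shell
# Disable SELinux for the running system
setenforce 0

# Make the change persistent across reboots
sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
```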
9. Disable Transparent Hugepages (THP) at boot time by adding the required parameter to the line
that starts with GRUB_CMDLINE_LINUX in the /etc/default/grub file, and then execute the commands
to reconfigure grub. (Skip this step if you are using the RHEL for SAP with HA & US image.)
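A sketch of the commands; this appends transparent_hugepage=never to the kernel command line and rebuilds the GRUB2 configuration (the grub.cfg path assumes a BIOS-booted RHEL 7 system):

```shell
# Append the THP parameter to GRUB_CMDLINE_LINUX in /etc/default/grub
sed -i '/^GRUB_CMDLINE_LINUX=/ s/"$/ transparent_hugepage=never"/' /etc/default/grub

# Rebuild the GRUB2 configuration so the change takes effect at the next boot
grub2-mkconfig -o /boot/grub2/grub.cfg
```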
10. Add symbolic links by executing following commands. (Skip this step if you are using the RHEL for
SAP with HA & US image.)
# ln -s /usr/lib64/libssl.so.10 /usr/lib64/libssl.so.1.0.1
# ln -s /usr/lib64/libcrypto.so.10 /usr/lib64/libcrypto.so.1.0.1
11. Configure and start the Network Time Protocol (NTP) service. You can adjust the NTP server pool
based on your requirements. The following is just an example.
Note
Remove any existing invalid NTP server pools from /etc/ntp.conf before adding the
following.
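As on SLES, the pool entries might look like the following (example pools; substitute your own):

```shell
# Add NTP server pools to /etc/ntp.conf (example pools)
echo "server 0.pool.ntp.org" >> /etc/ntp.conf
echo "server 1.pool.ntp.org" >> /etc/ntp.conf

# Start the NTP service and enable it at boot
systemctl start ntpd
systemctl enable ntpd
```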
Tip
Instead of connecting to the global NTP server pool, you can connect to your internal NTP
server if needed. Alternatively, you can also use Amazon Time Sync Service to keep your
system time in sync.
12. Set the clocksource to tsc by updating the current_clocksource file and the GRUB2 boot
loader.
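A sketch of both changes; the grub.cfg path assumes a BIOS-booted RHEL 7 system:

```shell
# Switch the clocksource for the running system
echo "tsc" > /sys/devices/system/clocksource/clocksource0/current_clocksource

# Persist the setting by adding clocksource=tsc to the kernel command line
sed -i '/^GRUB_CMDLINE_LINUX=/ s/"$/ clocksource=tsc"/' /etc/default/grub
grub2-mkconfig -o /boot/grub2/grub.cfg
```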
# tuned-adm verify
The tuned-adm verify command creates a log file under /var/log/tuned/tuned.log. Review this log
file and ensure that all checks have passed.
Configure storage
Note
On Nitro-based instances, Amazon EBS volumes are presented as NVMe block devices. You
need to perform additional mapping when configuring these volumes.
Depending on the instance and storage volume types, your block device mapping will look similar to
the following examples.
# lsblk
NAME    MAJ:MIN  RM  SIZE RO TYPE MOUNTPOINT
xvda    202:0     0   50G  0 disk
├─xvda1 202:1     0    1M  0 part
└─xvda2 202:2     0   50G  0 part /
xvdb    202:16    0  800G  0 disk
xvdc    202:32    0  800G  0 disk
xvdd    202:48    0  800G  0 disk
xvde    202:64    0    1T  0 disk
xvdf    202:80    0    4T  0 disk
xvdh    202:112   0  525G  0 disk
xvdr    202:4352  0   50G  0 disk
#
# lsblk
NAME        MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
nvme0n1     259:0    0   50G  0 disk
└─nvme0n1p1 259:1    0   50G  0 part /
nvme1n1     259:2    0    4T  0 disk
nvme2n1     259:3    0  800G  0 disk
nvme3n1     259:4    0  800G  0 disk
nvme4n1     259:5    0  800G  0 disk
nvme5n1     259:6    0  525G  0 disk
nvme6n1     259:7    0    1T  0 disk
nvme7n1     259:8    0   50G  0 disk
#
2. Initialize the volumes of SAP HANA data, log, and backup to use with Linux Logical Volume Manager
(LVM).
Note
Ensure you are choosing the devices that are associated with the SAP HANA data, log, and
backup volumes. The device names might be different in your environment.
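For example, using the NVMe device names from the example lsblk output; verify the device-to-volume mapping in your own environment before running these commands:

```shell
# Initialize the SAP HANA data, log, and backup devices for LVM
pvcreate /dev/nvme2n1 /dev/nvme3n1 /dev/nvme4n1   # three 800G data volumes
pvcreate /dev/nvme5n1                             # 525G log volume
pvcreate /dev/nvme1n1                             # 4T backup volume
```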
3. Create volume groups for SAP HANA data, log, and backup. Ensure that device IDs are associated
correctly with the appropriate volume group.
In the following command, -i 3 represents stripes based on the number of volumes that are used
for a HANA data volume group. Adjust the number depending on the number of volumes that are
allocated to the HANA data volume group, based on instance and storage type.
In the following command, -i 1 represents stripes based on the number of volumes that are used
for a HANA log volume group. Adjust the number depending on the number of volumes that are
allocated to the HANA log volume group, based on instance and storage type.
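A sketch of the volume group and striped logical volume creation, using the volume group and logical volume names that appear later in this guide and the example NVMe device names; the 256 KiB stripe size is illustrative:

```shell
# Create volume groups for SAP HANA data, log, and backup
vgcreate vghanadata /dev/nvme2n1 /dev/nvme3n1 /dev/nvme4n1
vgcreate vghanalog /dev/nvme5n1
vgcreate vghanaback /dev/nvme1n1

# Create striped logical volumes; -i matches the number of volumes in each group
lvcreate -n lvhanadata -i 3 -I 256 -l 100%FREE vghanadata
lvcreate -n lvhanalog -i 1 -I 256 -l 100%FREE vghanalog
lvcreate -n lvhanaback -i 1 -I 256 -l 100%FREE vghanaback
```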
7. Construct XFS file systems with the newly created logical volumes for HANA data, log, and backup
by using the following commands:
# mkfs.xfs -f /dev/mapper/vghanadata-lvhanadata
# mkfs.xfs -f /dev/mapper/vghanalog-lvhanalog
# mkfs.xfs -f /dev/mapper/vghanaback-lvhanaback
8. Construct XFS file systems for HANA shared and HANA binaries.
Note
On Nitro-based instance types, device names can change during instance restarts. To
prevent file system mount issues, it is important to create labels for devices that aren’t part
of logical volumes so that the devices can be mounted by using labels instead of the actual
device names.
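A sketch of labeled file system creation, assuming /dev/xvde holds HANA shared and /dev/xvdr holds the SAP binaries as in the example device mapping; the label names are illustrative:

```shell
# Create XFS file systems with labels so they can be mounted by label
mkfs.xfs -f -L HANASHARE /dev/xvde   # HANA shared
mkfs.xfs -f -L USRSAP /dev/xvdr      # /usr/sap (SAP binaries)
```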
9. Create directories for HANA data, log, backup, shared, and binaries.
10. Use the echo command to add entries to the /etc/fstab file with the following mount options to
automatically mount these file systems during restart.
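The directory and fstab steps can be sketched as follows, assuming the logical volume names used in this guide and XFS labels HANASHARE and USRSAP for the shared and binaries file systems; the mount options are illustrative, so use the options recommended for your OS release:

```shell
# Create the mount points
mkdir -p /hana/data /hana/log /hana/shared /backup /usr/sap

# Add /etc/fstab entries so the file systems mount automatically at boot
echo "/dev/mapper/vghanadata-lvhanadata /hana/data xfs noatime,nodiratime,logbsize=256k 0 0" >> /etc/fstab
echo "/dev/mapper/vghanalog-lvhanalog /hana/log xfs noatime,nodiratime,logbsize=256k 0 0" >> /etc/fstab
echo "/dev/mapper/vghanaback-lvhanaback /backup xfs noatime,nodiratime,logbsize=256k 0 0" >> /etc/fstab
echo "LABEL=HANASHARE /hana/shared xfs noatime,nodiratime,logbsize=256k 0 0" >> /etc/fstab
echo "LABEL=USRSAP /usr/sap xfs noatime,nodiratime,logbsize=256k 0 0" >> /etc/fstab
```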
# mount -a
12. Check to make sure that all file systems are mounted appropriately; for example, here is the output
from an x1.32xlarge system:
# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/xvda2 50G 1.8G 49G 4% /
devtmpfs 961G 0 961G 0% /dev
tmpfs 960G 0 960G 0% /dev/shm
tmpfs 960G 17M 960G 1% /run
tmpfs 960G 0 960G 0% /sys/fs/cgroup
tmpfs 192G 0 192G 0% /run/user/1000
/dev/mapper/vghanadata-lvhanadata 2.3T 34M 2.3T 1% /hana/data
/dev/mapper/vghanalog-lvhanalog 512G 33M 512G 1% /hana/log
/dev/mapper/vghanaback-lvhanaback 4.0T 33M 4.0T 1% /backup
/dev/xvde 1.0T 33M 1.0T 1% /hana/shared
/dev/xvdr 50G 33M 50G 1% /usr/sap
#
13. At this time, we recommend rebooting the system and confirming that all the file systems mount
automatically after the restart.
14. If you are deploying a scale-out workload, follow the steps specified in Configure NFS for scale-out
workloads (p. 80) to set up SAP HANA shared and backup NFS file systems with Amazon EFS.
If you are not deploying a scale-out workload, you can now proceed with your SAP HANA software
installation.
Configure NFS for scale-out workloads
1. Install the nfs-utils package in all the nodes in your scale-out cluster.
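For example, on RHEL-based nodes (use the equivalent zypper command on SLES):

```shell
# Install the NFS client utilities on every node in the cluster
yum install -y nfs-utils
```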
# mount -t nfs -o nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2 EFS-DNS-name:/ /hana/shared
# mount -t nfs -o nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2 EFS-DNS-name:/ /backup
Note
If you have trouble mounting the NFS file systems, you might need to adjust your security
groups to allow access to port 2049. For details, see Security Groups for Amazon EC2
Instances and Mount Targets in the AWS documentation.
4. Add NFS mount entries to the /etc/fstab file in all the nodes to automatically mount these file
systems during system restart; for example:
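For example, replacing EFS-DNS-name with the DNS name of the Amazon EFS file system that backs each mount; the option string follows the mount commands earlier in this section, with _netdev added so the mounts wait for the network:

```shell
# /etc/fstab entries for the NFS file systems (EFS-DNS-name is a placeholder)
echo "EFS-DNS-name:/ /hana/shared nfs nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,_netdev 0 0" >> /etc/fstab
echo "EFS-DNS-name:/ /backup nfs nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,_netdev 0 0" >> /etc/fstab
```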
5. Set appropriate permissions and ownership for your target mount points.
3. Set up a CloudWatch alarm and Amazon EC2 automatic recovery to automatically recover your
instance from hardware failures. For details, see Recover Your Instance in the AWS documentation.
You can also refer to the Knowledge Center video for detailed instructions.
Note
Automatic recovery is not supported for Amazon EC2 instances running in dedicated hosts.
4. Create an AMI of your newly deployed system to take a full backup of your instance. For details, see
Create an AMI from an Amazon EC2 Instance in the AWS documentation.
5. If you have deployed an SAP HANA scale-out cluster, consider adding additional elastic network
interfaces and security groups to logically separate network traffic for client, inter-node, and
optional SAP HANA System Replication (HSR) communications. For details, see the SAP HANA on
AWS Operations Guide.
Additional Reading
AWS services
• Amazon EC2
• Amazon EBS
• Amazon VPC
• Amazon EFS
SAP OSS notes
• SAP OSS Note 2292690 - SAP HANA DB: Recommended OS settings for RHEL 7
• SAP OSS Note 2009879 - SAP HANA Guidelines for Red Hat Enterprise Linux (RHEL) Operating System
• SAP OSS Note 2205917 - SAP HANA DB: Recommended OS settings for SLES 12 / SLES for SAP
Applications 12
• SAP OSS Note 1944799 - SAP HANA Guidelines for SLES Operating System Installation
[
{"DeviceName":"/dev/sda1","Ebs":
{"VolumeSize":50,"VolumeType":"gp2","DeleteOnTermination":false}},
{"DeviceName":"/dev/sdb","Ebs":
{"VolumeSize":400,"VolumeType":"gp2","Encrypted":true,"DeleteOnTermination":false}},
{"DeviceName":"/dev/sdc","Ebs":
{"VolumeSize":400,"VolumeType":"gp2","Encrypted":true,"DeleteOnTermination":false}},
{"DeviceName":"/dev/sdd","Ebs":
{"VolumeSize":400,"VolumeType":"gp2","Encrypted":true,"DeleteOnTermination":false}},
{"DeviceName":"/dev/sde","Ebs":
{"VolumeSize":1024,"VolumeType":"gp2","Encrypted":true,"DeleteOnTermination":false}},
{"DeviceName":"/dev/sdf","Ebs":
{"VolumeSize":2048,"VolumeType":"st1","Encrypted":true,"DeleteOnTermination":false}},
{"DeviceName":"/dev/sdh","Ebs":
{"VolumeSize":300,"VolumeType":"gp2","Encrypted":true,"DeleteOnTermination":false}},
{"DeviceName":"/dev/sdr","Ebs":
{"VolumeSize":50,"VolumeType":"gp2","Encrypted":true,"DeleteOnTermination":false}}
]
[
{"DeviceName":"/dev/sda1","Ebs":
{"VolumeSize":50,"VolumeType":"gp2","DeleteOnTermination":false}},
{"DeviceName":"/dev/sdb","Ebs":
{"VolumeSize":600,"VolumeType":"io1","Iops":7500,"Encrypted":true,"DeleteOnTermination":false}},
{"DeviceName":"/dev/sde","Ebs":
{"VolumeSize":512,"VolumeType":"gp2","Encrypted":true,"DeleteOnTermination":false}},
{"DeviceName":"/dev/sdf","Ebs":
{"VolumeSize":1024,"VolumeType":"st1","Encrypted":true,"DeleteOnTermination":false}},
{"DeviceName":"/dev/sdh","Ebs":
{"VolumeSize":260,"VolumeType":"io1","Iops":2000,"Encrypted":true,"DeleteOnTermination":false}},
{"DeviceName":"/dev/sdr","Ebs":
{"VolumeSize":50,"VolumeType":"gp2","Encrypted":true,"DeleteOnTermination":false}}
]
Document Revisions
Date Change In sections
About this guide
Amazon Web Services offers you the ability to run SAP HANA systems of various sizes and operating
systems. Running SAP systems on AWS is very similar to running SAP systems in your data center. To an
SAP Basis or NetWeaver administrator, there are minimal differences between the two environments.
There are a number of AWS Cloud considerations relating to security, storage, compute configurations,
management, and monitoring that will help you get the most out of your SAP HANA implementation on
AWS.
This technical article provides the best practices for deployment, operations, and management of SAP
HANA systems on AWS. The target audience is SAP Basis and NetWeaver administrators who have
experience running SAP HANA systems in an on-premises environment and want to run their SAP HANA
systems on AWS.
Note
The SAP notes and Knowledge Base articles (KBA) referenced in this guide require an SAP ONE
Support Launchpad user account. For more information, see the SAP Support website.
Introduction
This guide provides best practices for operating SAP HANA systems that have been deployed on AWS
either by using the SAP HANA Quick Start reference deployment process or by manually following the
instructions in Setting up AWS Resources and the SLES Operating System for SAP HANA Installation. This
guide is not intended to replace any of the standard SAP documentation. See the following SAP guides
and notes:
This guide assumes that you have a basic knowledge of AWS. If you are new to AWS, see the following on
the AWS website before continuing:
• SAP on AWS Implementation and Operations Guide provides best practices for achieving optimal
performance, availability, and reliability, and lower total cost of ownership (TCO) while running SAP
solutions on AWS.
• SAP on AWS High Availability Guide explains how to configure SAP systems on Amazon Elastic
Compute Cloud (Amazon EC2) to protect your application from various single points of failure.
• SAP on AWS Backup and Recovery Guide explains how to back up SAP systems running on AWS, in
contrast to backing up SAP systems on traditional infrastructure.
Administration
This section provides guidance on common administrative tasks required to operate an SAP HANA
system, including information about starting, stopping, and cloning systems.
When you resume the instance, it will automatically start with the same IP address, network, and storage
configuration as before. You also have the option of using the EC2 Scheduler to schedule starts and stops
of your EC2 instances. The EC2 Scheduler relies on the native shutdown and start-up mechanisms of the
operating system. These native mechanisms will invoke the orderly shutdown and startup of your SAP
HANA instance. Here is an architectural diagram of how the EC2 Scheduler works:
After you have tagged your resources, you can apply specific security restrictions such as access control,
based on the tag values. Here is an example of such a policy from the AWS Security blog:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "LaunchEC2Instances",
      "Effect": "Allow",
      "Action": [
        "ec2:Describe*",
        "ec2:RunInstances"
      ],
      "Resource": [
        "*"
      ]
    },
    {
      "Sid": "AllowActionsIfYouAreTheOwner",
      "Effect": "Allow",
      "Action": [
        "ec2:StopInstances",
        "ec2:StartInstances",
        "ec2:RebootInstances",
        "ec2:TerminateInstances"
      ],
      "Condition": {
        "StringEquals": {
          "ec2:ResourceTag/PrincipalId": "${aws:userid}"
        }
      },
      "Resource": [
        "*"
      ]
    }
  ]
}
The AWS Identity and Access Management (IAM) policy allows only specific permissions based on the tag
value. In this scenario, the current user ID must match the tag value in order for the user to be granted
permissions. For more information on tagging, see the AWS documentation and AWS blog.
Monitoring
You can use various AWS, SAP, and third-party solutions to monitor your SAP workloads. Here are some
of the core AWS monitoring services:
• Amazon CloudWatch – CloudWatch is a monitoring service for AWS resources. It’s critical for SAP
workloads where it’s used to collect resource utilization logs and to create alarms to automatically
react to changes in AWS resources.
• AWS CloudTrail – CloudTrail keeps track of all API calls made within your AWS account. It captures key
metrics about the API calls and can be useful for automating trail creation for your SAP resources.
Configuring CloudWatch detailed monitoring for SAP resources is mandatory for getting AWS and SAP
support. You can use native AWS monitoring services in a complementary fashion with the SAP Solution
Manager. You can find third-party monitoring tools in AWS Marketplace.
Automation
AWS offers multiple options for programmatically scripting your resources to operate or scale them in
a predictable and repeatable manner. You can use AWS CloudFormation to automate and operate SAP
systems on AWS. Here are some examples for automating your SAP environment on AWS:
Patching
There are two ways for you to patch your SAP HANA database, with options for minimizing cost and/or
downtime. With AWS, you can provision additional servers as needed to minimize downtime for patching
in a cost-effective manner. You can also minimize risks by creating on-demand copies of your existing
production SAP HANA databases for lifelike production readiness testing.
Patch an existing server
• Benefits: No costs for additional on-demand instances; lowest levels of relative complexity and
setup tasks involved.
• Drawbacks: Need to patch the existing operating system and database; longest downtime to the
existing server and database.
• Relevant services and tools: Native OS patching tools, Patch Manager, native SAP HANA patching
tools.
Provision and patch a new server
• Benefits: Leverage latest AMIs (only the database patch is required).
• Drawbacks: More costs for additional on-demand instances.
• Relevant services and tools: Amazon Machine Image (AMI), AWS CLI, AWS CloudFormation, SAP Note
1913302 (HANA: Suspend DB connections for short maintenance tasks).
The first method (patch an existing server) involves patching the operating system (OS) and database
(DB) components of your SAP HANA server. The goal of this method is to minimize any additional server
costs and to avoid any tasks needed to set up additional systems or tests. This method may be most
appropriate if you have a well-defined patching process and are satisfied with your current downtime
and costs. With this method you must use the correct operating system (OS) update process and tools for
your Linux distribution. See this SUSE blog and Red Hat FAQ, or check each vendor’s documentation for
their specific processes and procedures.
In addition to patching tools provided by our Linux partners, AWS offers a free-of-charge patching
service called Patch Manager. Patch Manager is an automated tool that helps you simplify your OS
patching process. You can scan your EC2 instances for missing patches and automatically install them,
select the timing for patch rollouts, control instance reboots, and perform many other tasks. You can
also define auto-approval rules for patches with an added ability to black-list or white-list specific
patches, control how the patches are deployed on the target instances (for example, stop services
before applying the patch), and schedule the automatic rollout through maintenance windows.
The second method (provision and patch a new server) involves provisioning a new EC2 instance that will
receive a copy of your source system and database. The goal of the method is to minimize downtime,
minimize risks (by having production data and executing production-like testing), and have repeatable
processes. This method may be most appropriate if you are looking for higher degrees of automation
to enable these goals and are comfortable with the trade-offs. This method is more complex and has
many more options to fit your requirements. Certain options are not exclusive and can be used together.
For example, your AWS CloudFormation template can include the latest Amazon Machine Images (AMIs),
which you can then use to automate the provisioning, set up, and configuration of a new SAP HANA
server.
1. Download the AWS CloudFormation template offered in the SAP HANA Quick Start.
2. Update the CloudFormation template with the latest OS AMI ID and execute the updated template
to provision a new SAP HANA server. The latest OS AMI ID has the specific security patches that your
organization needs. As part of the provisioning process, you need to provide the latest SAP HANA
installation binaries to get to the required version. This allows you to provision a new HANA system
with the required OS version and security patches along with SAP HANA software versions.
3. After the new SAP HANA system is available, use one of the following methods to copy the data
from the original SAP HANA instance to the newly created system:
At the end of this process, you will have a new SAP HANA system that is ready to test.
SAP Note 1984882 (Using HANA System Replication for Hardware Exchange with Minimum/Zero
Downtime) has specific recommendations and guidelines for promoting your system to production.
• To create a full offline system backup (of the OS /usr/sap, HANA shared, backup, data, and log files) –
AMIs are automatically saved in multiple Availability Zones within the same AWS Region.
• To move a HANA system from one AWS Region to another – You can create an image of an
existing EC2 instance and move it to another AWS Region by following the instructions in the AWS
documentation. When the AMI has been copied to the target AWS Region, you can launch the new
instance there.
• To clone an SAP HANA system – You can create an AMI of an existing SAP HANA system to create an
exact clone of the system. See the next section for additional information.
Note
See Restoring SAP HANA Backups and Snapshots (p. 96) later in this whitepaper to view the
recommended restoration steps for production environments.
Tip
The SAP HANA system should be in a consistent state before you create an AMI. To do this, stop
the SAP HANA instance before creating the AMI, or follow the instructions in SAP Note 1703435.
Amazon S3
Amazon S3 is the center of any SAP backup and recovery solution on AWS. It provides a highly durable
storage infrastructure designed for mission-critical and primary data storage. It is designed to provide
99.999999999% durability and 99.99% availability over a given year. See the Amazon S3 documentation
for detailed instructions on how to create and configure an S3 bucket to store your SAP HANA backup
files.
IAM
With IAM, you can securely control access to AWS services and resources for your users. You can create
and manage AWS users and groups and use permissions to grant user access to AWS resources. You can
create roles in IAM and manage permissions to control which operations can be performed by the entity,
or AWS service, that assumes the role. You can also define which entity is allowed to assume the role.
During the deployment process, AWS CloudFormation creates an IAM role that allows access to get
objects from and/or put objects into Amazon S3. That role is subsequently assigned to each EC2 instance
that is hosting SAP HANA master and worker nodes at launch time as they are deployed.
To ensure security that applies the principle of least privilege, permissions for this role are limited only to
actions that are required for backup and recovery.
{
  "Statement": [
    {
      "Resource": "arn:aws:s3:::<your-s3-bucket-name>/*",
      "Action": ["s3:GetObject", "s3:PutObject", "s3:DeleteObject",
                 "s3:ListBucket", "s3:Get*", "s3:List*"],
      "Effect": "Allow"
    },
    {
      "Resource": "*",
      "Action": ["s3:List*", "ec2:Describe*", "ec2:AttachNetworkInterface",
                 "ec2:AttachVolume", "ec2:CreateTags", "ec2:CreateVolume",
                 "ec2:RunInstances", "ec2:StartInstances"],
      "Effect": "Allow"
    }
  ]
}
To add functions later, you can use the AWS Management Console to modify the IAM role.
S3 Glacier
S3 Glacier is an extremely low-cost service that provides secure and durable storage for data archiving
and backup. S3 Glacier is optimized for data that is infrequently accessed and provides multiple options
such as expedited, standard, and bulk methods for data retrieval. With standard and bulk retrievals, data
is available in 3-5 hours or 5-12 hours, respectively.
However, with expedited retrieval, S3 Glacier provides you with an option to retrieve data in 3-5 minutes,
which can be ideal for occasional urgent requests. With S3 Glacier, you can reliably store large or small
amounts of data for as little as $0.01 per gigabyte per month, a significant savings compared to on-
premises solutions. You can use lifecycle policies, as explained in the Amazon S3 Developer Guide, to push
SAP HANA backups to S3 Glacier for long-term archiving.
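As a sketch of such a lifecycle rule, the following AWS CLI call transitions backup objects to S3 Glacier; the bucket name, the bkps/ prefix, and the 30-day transition window are illustrative assumptions, not values mandated by this guide.

```shell
# Illustrative only: archive objects under bkps/ to S3 Glacier after 30 days
aws s3api put-bucket-lifecycle-configuration \
    --bucket <your-s3-bucket-name> \
    --lifecycle-configuration '{
      "Rules": [{
        "ID": "archive-hana-backups",
        "Filter": {"Prefix": "bkps/"},
        "Status": "Enabled",
        "Transitions": [{"Days": 30, "StorageClass": "GLACIER"}]
      }]
    }'
```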
Backup Destination
The primary difference between backing up SAP systems on AWS compared with traditional on-premises
infrastructure is the backup destination. Tape is the typical backup destination used with on-premises
infrastructure. On AWS, backups are stored in Amazon S3. Amazon S3 has many benefits over tape,
including the ability to automatically store backups offsite from the source system, since data in Amazon
S3 is replicated across multiple facilities within the AWS Region.
SAP HANA systems provisioned by using the SAP HANA Quick Start reference deployment are configured
with a set of EBS volumes to be used as an initial local backup destination. HANA backups are first stored
on these local EBS volumes and then copied to Amazon S3 for long-term storage.
You can use SAP HANA Studio, SQL commands, or the DBA Cockpit to start or schedule SAP HANA data
backups. Log backups are written automatically unless disabled. The /backup file system is configured as
part of the deployment process.
The SAP HANA global.ini configuration file has been customized by the SAP HANA Quick Start reference
deployment process as follows: database backups go directly to /backup/data/<SID>, while automatic
log archival files go to /backup/log/<SID>.
[persistence]
basepath_shared = no
savepoint_intervals = 300
basepath_datavolumes = /hana/data/<SID>
basepath_logvolumes = /hana/log/<SID>
basepath_databackup = /backup/data/<SID>
basepath_logbackup = /backup/log/<SID>
Some third-party backup tools like Commvault, NetBackup, and IBM Tivoli Storage Manager (IBM TSM)
are integrated with Amazon S3 capabilities and can be used to trigger and save SAP HANA backups
directly into Amazon S3 without needing to store the backups on EBS volumes first.
AWS CLI
The AWS Command Line Interface (AWS CLI), which is a unified tool to manage AWS services, is installed
as part of the base image. Using various commands, you can control multiple AWS services from the
command line directly and automate them through scripts. Access to your S3 bucket is available through
the IAM role assigned to the instance (as discussed earlier (p. 89)). Using the AWS CLI commands for
Amazon S3, you can list the contents of the previously created bucket, back up files, and restore files, as
explained in the AWS CLI documentation.
Bucket: node2-hana-s3bucket-gcynh5v2nqs3
Prefix:
LastWriteTime Length Name
------------- ------ ----
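A listing like the one above can be produced with the aws s3 ls command. For example (the bucket name matches the sample output; the backup file name is illustrative):

```shell
# List the contents of the backup bucket
aws s3 ls s3://node2-hana-s3bucket-gcynh5v2nqs3/

# Copy a local backup file into the bucket
aws s3 cp /backup/data/<SID>/COMPLETE_DATA_BACKUP_databackup_0_1 \
    s3://node2-hana-s3bucket-gcynh5v2nqs3/
```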
Backup Example
Here are the steps you can take for a typical backup task:
1. In the SAP HANA Backup Editor, choose Open Backup Wizard. You can also open the Backup Wizard
by right-clicking the system that you want to back up and choosing Back Up.
2. Select the destination type File. This backs up the database to files in the specified file system.
3. Specify the backup destination (/backup/data/<SID>) and the backup prefix.
4. Use the AWS Management Console to verify that the files have been pushed to Amazon S3.
You can also use the aws s3 ls command shown previously in the AWS Command Line Interface
section (p. 91).
Tip
The aws s3 sync command will only upload new files that don’t exist in Amazon S3. Use a
periodically scheduled cron job to sync, and then delete files that have been uploaded. See SAP
Note 1651055 for scheduling periodic backup jobs in Linux, and extend the supplied scripts with
aws s3 sync commands.
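A minimal sketch of such a scheduled sync, assuming a wrapper script (the script path, retention window, and bucket name are illustrative):

```shell
# Illustrative wrapper script, e.g. /usr/sap/scripts/backup_sync.sh
#!/bin/sh
# Upload any new backup files to Amazon S3
aws s3 sync /backup/data/${SAPSYSTEMNAME}/ s3://<your-s3-bucket-name>/bkps/${SAPSYSTEMNAME}/data/
# Remove local files older than one day that have already been uploaded
find /backup/data/${SAPSYSTEMNAME}/ -type f -mtime +1 -delete

# Crontab entry (crontab -e) to run the script hourly:
# 0 * * * * /usr/sap/scripts/backup_sync.sh
```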
The Systems Manager Run Command lets you remotely and securely manage the configuration of
your managed instances. A managed instance is any EC2 instance or on-premises machine in your
hybrid environment that has been configured for Systems Manager. The Run Command enables you to
automate common administrative tasks and perform ad hoc configuration changes at
scale. You can use the Run Command from the Amazon EC2 console, the AWS CLI, Windows PowerShell,
or the AWS SDKs.
Systems Manager has the following prerequisites:
• Supported operating system (Linux) – Instances must run a supported version of Linux.
• Roles for Systems Manager – Systems Manager requires an IAM role for instances that will process
commands and a separate role for users who are executing commands. Both roles require permission
policies that enable them to communicate with the Systems Manager API. You can choose to use
Systems Manager managed policies or you can create your own roles and specify permissions. For
more information, see the AWS Systems Manager documentation.
• SSM Agent (EC2 Linux instances) – AWS Systems Manager Agent (SSM Agent) processes Systems
Manager requests and configures your machine as specified in the request. You must download and
install SSM Agent on your EC2 Linux instances. For more information, see Installing SSM Agent on
Linux in the AWS documentation.
1. Install and configure SSM Agent on the EC2 instance. For detailed installation steps, see the AWS
Systems Manager documentation.
2. Provide SSM access to the EC2 instance role that is assigned to the SAP HANA instance. For detailed
information on how to assign SSM access to a role, see the AWS Systems Manager documentation.
3. Create an SAP HANA backup script. You can use the following sample script as a starting point and
modify it to meet your requirements.
#!/bin/sh
set -x
S3Bucket_Name=<Name of the S3 bucket where backup files will be copied>
TIMESTAMP=$(date +\%F\_%H\%M)
exec 1>/backup/data/${SAPSYSTEMNAME}/${TIMESTAMP}_backup_log.out 2>&1
echo "Starting to take backup of the HANA database and upload the backup files to S3"
echo "Backup timestamp for $SAPSYSTEMNAME is $TIMESTAMP"
BACKUP_PREFIX=${SAPSYSTEMNAME}_${TIMESTAMP}
echo $BACKUP_PREFIX
# source the HANA environment
source $DIR_INSTANCE/hdbenv.sh
# execute the backup command with the hdbuserstore key
hdbsql -U BACKUP "backup data using file ('$BACKUP_PREFIX')"
echo "HANA backup is completed"
echo "Continue with copying the backup files to S3"
echo $BACKUP_PREFIX
sudo -u root /usr/local/bin/aws s3 cp --recursive /backup/data/${SAPSYSTEMNAME}/ \
    s3://${S3Bucket_Name}/bkps/${SAPSYSTEMNAME}/data/ --exclude "*" --include "${BACKUP_PREFIX}*"
echo "Copying HANA database log files to S3"
sudo -u root /usr/local/bin/aws s3 sync /backup/log/${SAPSYSTEMNAME}/ \
    s3://${S3Bucket_Name}/bkps/${SAPSYSTEMNAME}/log/ --exclude "*" --include "log_backup*"
sudo -u root /usr/local/bin/aws s3 cp /backup/data/${SAPSYSTEMNAME}/${TIMESTAMP}_backup_log.out \
    s3://${S3Bucket_Name}/bkps/${SAPSYSTEMNAME}
Note
This script assumes that hdbuserstore has a key named BACKUP.
4. Test a one-time backup by executing an ssm command directly.
Note
For this command to execute successfully, you will have to enable <sid>adm login using
sudo.
aws ssm send-command --instance-ids <HANA master instance ID> --document-name AWS-RunShellScript \
    --parameters commands="sudo -u <HANA_SID>adm TIMESTAMP=$(date +\%F\_%H\%M) SAPSYSTEMNAME=<HANA_SID> \
    DIR_INSTANCE=/hana/shared/${SAPSYSTEMNAME}/HDB00 -i /usr/sap/HDB/HDB00/hana_backup.sh"
5. Using CloudWatch Events, you can schedule backups remotely at any desired frequency. Navigate to
the CloudWatch Events page and create a rule.
1. Choose Schedule.
2. Select SSM Run Command as the target.
3. Select AWS-RunShellScript (Linux) as the document type.
4. Choose InstanceIds or Tags as the target key.
5. Choose Constant under Configure Parameters, and type the run command.
Restoring backups and snapshots
1. If the backup files are not already available in the /backup file system but are in Amazon S3,
restore the files from Amazon S3 by using the aws s3 cp command.
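A sketch of the aws s3 cp syntax for this step; the bucket, SID, and prefix names are illustrative and must be adjusted to your environment:

```shell
# Syntax: aws s3 cp <S3Uri> <LocalPath> [--recursive] [--exclude ...] [--include ...]

# Copy the files for one backup prefix from S3 back to the local backup file system
aws s3 cp s3://<your-s3-bucket-name>/bkps/<SID>/data/ /backup/data/<SID>/ \
    --recursive --exclude "*" --include "<BACKUP_PREFIX>*"
```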
2. Recover the SAP HANA database by using the Recovery Wizard as outlined in the SAP HANA
Administration Guide. Specify File as the destination type and enter the correct backup prefix.
3. Mount the logical volume associated with SAP HANA data on the host:
Note
For large mission-critical systems, we highly recommend that you execute the volume
initialization command on the database data and log volumes after restoring the AMI but
before starting the database. Executing the volume initialization command will help you avoid
extensive wait times before the database is available. Here is the sample fio command that you
can use:
For more information about initializing Amazon EBS volumes, see the AWS documentation.
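The sample fio command referenced in the note is not reproduced in this extract; a sketch modeled on the AWS volume-initialization guidance (the device name is an assumption) looks like this:

```shell
# Read every block of the restored data volume once to initialize it
sudo fio --filename=/dev/nvme1n1 --rw=read --bs=1M --iodepth=32 \
    --ioengine=libaio --direct=1 --name=volume-initialize
```

Run this against each restored data and log volume before starting the database.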
Choose the AMI that you want to restore, expand Actions, and then choose Launch.
Storage configuration
gp2 and gp3 volumes balance price and performance for a variety of workloads, while io1, io2, and
io2 Block Express volumes provide the highest performance for mission-critical applications.
From these options, you can choose the best storage solution that meets your performance and cost
requirements. We recommend the io2 or io2 Block Express configuration for mission-critical SAP
HANA production workloads.
Note that only the following instances are certified for production use: r3.8xlarge, r4.8xlarge,
r4.16xlarge, r5.8xlarge, r5.12xlarge, r5.16xlarge, r5.24xlarge, r5.metal, r5b.8xlarge,
r5b.12xlarge, r5b.16xlarge, r5b.24xlarge, r5b.metal, r6i.12xlarge, r6i.16xlarge,
r6i.24xlarge, r6i.32xlarge, x1.16xlarge, x1.32xlarge, x1e.32xlarge, x2idn.16xlarge,
x2idn.24xlarge, x2idn.32xlarge, x2iden.24xlarge, x2iden.32xlarge, u-3tb1.56xlarge,
u-6tb1.56xlarge, u-6tb1.112xlarge, u-9tb1.112xlarge, u-12tb1.112xlarge,
u-6tb1.metal, u-9tb1.metal, u-12tb1.metal, u-18tb1.metal, and u-24tb1.metal. For
nonproduction use, all of the instance types in this guide are supported.
For multi-node deployments, storage volumes for SAP HANA data and logs are provisioned in the master
and worker nodes.
gp2 and gp3
In the following configurations, we intentionally kept the same storage configuration for SAP HANA data
and log volumes for all R3, certain R4 and R5, and smaller X1e/X2iedn instance types so you can scale up
from smaller instances to larger instances without having to reconfigure your storage.
Note
The X1, X1e, X2idn, and X2iedn instance types include instance storage, but it should not be
used to persist any SAP HANA-related files.
u-12tb1.112xlarge 12,288 448 6 x 2,400 GiB 1,500 43,200 N/A
u-9tb1.112xlarge 9,216 448 6 x 1,800 GiB 1,500 32,400 N/A
u-6tb1.112xlarge 6,144 448 6 x 1,200 GiB 1,500 21,600 N/A
u-6tb1.56xlarge 6,144 224 6 x 1,200 GiB 1,500 21,600 N/A
u-3tb1.56xlarge 3,072 224 3 x 1,200 GiB 750 10,800 N/A
x2iedn.32xlarge 4,096 128 3 x 1,600 GiB 750 14,400 N/A
x2iedn.24xlarge 3,072 96 3 x 1,200 GiB 750 10,800 N/A
r3.8xlarge
r3.4xlarge
r3.2xlarge
* Each logical processor offered by Amazon EC2 High Memory Instances is a hyperthread on a
physical CPU core.
** This value represents the maximum throughput that could be achieved when striping multiple
EBS volumes. Actual throughput depends on the instance type. Every instance type has its own
Amazon EBS throughput maximum. For details, see Amazon EBS-Optimized Instances in the AWS
documentation.
***gp3 based configurations are only supported in production for Nitro based instances, not for Xen
based instances.
gp2 for HANA logs
u-12tb1.112xlarge 12,288 448 2 x 300 GiB 500 1,800 6,000
u-9tb1.112xlarge 9,216 448 2 x 300 GiB 500 1,800 6,000
u-6tb1.112xlarge 6,144 448 2 x 300 GiB 500 1,800 6,000
u-6tb1.56xlarge 6,144 224 2 x 300 GiB 500 1,800 6,000
u-3tb1.56xlarge 3,072 224 2 x 300 GiB 500 1,800 6,000
x2iedn.32xlarge 4,096 128 2 x 300 GiB 500 1,800 6,000
x2iedn.24xlarge 3,072 96 2 x 300 GiB 500 1,800 6,000
r3.8xlarge
r3.4xlarge
r3.2xlarge
* Each logical processor offered by Amazon EC2 High Memory Instances is a hyperthread on a
physical CPU core.
** This value represents the maximum throughput that could be achieved when striping multiple
EBS volumes. Actual throughput depends on the instance type. Every instance type has its own
Amazon EBS throughput maximum. For details, see Amazon EBS-Optimized Instances in the AWS
documentation.
***gp3 based configurations are only supported in production for Nitro based instances, not for Xen
based instances.
u-24tb1.metal 24,576 448 2 x 14,400 GiB 1,000 9,000 2,000 18,000
u-18tb1.metal 18,432 448 2 x 10,800 GiB 1,000 9,000 2,000 18,000
u-12tb1.112xlarge 12,288 448 2 x 7,200 GiB 1,000 6,000 2,000 12,000
u-12tb1.metal 12,288 448 2 x 7,200 GiB 1,000 6,000 2,000 12,000
u-9tb1.112xlarge 9,216 448 2 x 5,400 GiB 1,000 6,000 2,000 12,000
u-6tb1.112xlarge 6,144 448 2 x 3,600 GiB 1,000 6,000 2,000 12,000
u-6tb1.56xlarge 6,144 224 2 x 3,600 GiB 1,000 6,000 2,000 12,000
u-3tb1.56xlarge 3,072 224 2 x 1,800 GiB 750 4,500 1,500 9,000
x2iedn.32xlarge 4,096 128 2 x 2,400 GiB 750 4,500 1,500 9,000
x2iedn.24xlarge 3,072 96 2 x 1,800 GiB 750 4,500 1,500 9,000
x2idn.32xlarge 2,048 128 2 x 1,200 GiB 750 4,500 1,500 9,000
x2idn.24xlarge 1,536 96 2 x 900 GiB 750 4,500 1,500 9,000
x2idn.16xlarge 1,024 64 2 x 600 GiB 500 3,750 1,000 7,500
x1.32xlarge*** 1,952 128 2 x 1,200 GiB 750 4,500 1,500 9,000
x1.16xlarge*** 976 64 1 x 1,200 GiB 500 7,500 500 7,500
r3.8xlarge***
x2iedn.4xlarge 512 16 1 x 585 GiB 125 3,000 125 3,000
x2iedn.2xlarge 256 8 1 x 295 GiB 125 3,000 125 3,000
x2iedn.xlarge 128 4 1 x 150 GiB 125 3,000 125 3,000
r3.2xlarge
* Each logical processor offered by Amazon EC2 High Memory Instances is a hyperthread on a
physical CPU core.
** This value represents the maximum throughput that could be achieved when striping multiple
EBS volumes. Actual throughput depends on the instance type. Every instance type has its own
Amazon EBS throughput maximum. For details, see Amazon EBS-Optimized Instances in the AWS
documentation.
***gp3 based configurations are only supported in production for Nitro based instances, not for Xen
based instances.
u-24tb1.metal 24,576 448 1 x 512 GiB 500 3,000 500 3,000
u-18tb1.metal 18,432 448 1 x 512 GiB 500 3,000 500 3,000
u-12tb1.112xlarge 12,288 448 1 x 512 GiB 500 3,000 500 3,000
u-12tb1.metal 12,288 448 1 x 512 GiB 500 3,000 500 3,000
u-9tb1.112xlarge 9,216 448 1 x 512 GiB 300 3,000 300 3,000
u-6tb1.112xlarge 6,144 448 1 x 512 GiB 300 3,000 300 3,000
u-6tb1.56xlarge 6,144 224 1 x 512 GiB 300 3,000 300 3,000
u-3tb1.56xlarge 3,072 224 1 x 512 GiB 300 3,000 300 3,000
x2iedn.32xlarge 4,096 128 1 x 512 GiB 300 3,000 300 3,000
x2iedn.24xlarge 3,072 96 1 x 512 GiB 300 3,000 300 3,000
x2idn.32xlarge 2,048 128 1 x 512 GiB 300 3,000 300 3,000
x2idn.24xlarge 1,536 96 1 x 512 GiB 300 3,000 300 3,000
x2idn.16xlarge 1,024 64 1 x 512 GiB 300 3,000 300 3,000
x1.32xlarge*** 1,952 128 1 x 512 GiB 300 3,000 300 3,000
x1.16xlarge*** 976 64 1 x 512 GiB 300 3,000 300 3,000
r3.8xlarge***
x2iedn.4xlarge 512 16 1 x 245 GiB 125 3,000 125 3,000
x2iedn.2xlarge 256 8 1 x 125 GiB 125 3,000 125 3,000
x2iedn.xlarge 128 4 1 x 64 GiB 125 3,000 125 3,000
r3.4xlarge
r3.2xlarge
* Each logical processor offered by Amazon EC2 High Memory Instances is a hyperthread on a
physical CPU core.
** This value represents the maximum throughput that could be achieved when striping multiple
EBS volumes. Actual throughput depends on the instance type. Every instance type has its own
Amazon EBS throughput maximum. For details, see Amazon EBS-Optimized Instances in the AWS
documentation.
***gp3 based configurations are only supported in production for Nitro based instances, not for Xen
based instances.
General Purpose SSD (gp2) volumes created or modified after 12/03/2018 have a throughput maximum
between 128 MiB/s and 250 MiB/s depending on volume size. Volumes greater than 170 GiB and below
334 GiB deliver a maximum throughput of 250 MiB/s if burst credits are available. Volumes with 334 GiB
and above deliver 250 MiB/s, irrespective of burst credits. For details, see Amazon EBS Volume Types in
the AWS documentation.
General Purpose SSD (gp3) volumes deliver a consistent baseline of 3,000 IOPS and 125 MiB/s. You
can also purchase additional IOPS (up to 16,000) and throughput (up to 1,000 MiB/s). While we
recommend using the configurations shown in this guide, gp3 volumes provide the flexibility to
customize SAP HANA's storage configuration (IOPS and throughput) according to your needs and usage.
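For example, a gp3 volume's IOPS and throughput can be changed independently of its size with a call along these lines (the volume ID and values are illustrative):

```shell
# Raise a gp3 volume to 12,000 IOPS and 500 MiB/s throughput
aws ec2 modify-volume --volume-id vol-0123456789abcdef0 \
    --iops 12000 --throughput 500
```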
The minimum gp3 configuration required to meet SAP HANA KPIs is the following:
u-12tb1.112xlarge 12,288 448 6 x 2,400 GiB 3,000 12,000
u-9tb1.112xlarge 9,216 448 6 x 1,800 GiB 3,000 12,000
u-6tb1.112xlarge 6,144 448 6 x 1,200 GiB 3,000 12,000
io1, io2, and io2 Block Express
r3.8xlarge
r3.4xlarge
r3.2xlarge
* Each logical processor offered by Amazon EC2 High Memory Instances is a hyperthread on a
physical CPU core.
** This value represents the maximum throughput that could be achieved when striping multiple
EBS volumes. Actual throughput depends on the instance type. Every instance type has its own
Amazon EBS throughput maximum. For details, see Amazon EBS-Optimized Instances in the AWS
documentation.
io1 for HANA logs
u-12tb1.112xlarge 12,288 448 1 x 525 GiB 500 2,000
u-9tb1.112xlarge 9,216 448 1 x 525 GiB 500 2,000
u-6tb1.112xlarge 6,144 448 1 x 525 GiB 500 2,000
r3.8xlarge
r3.4xlarge
r3.2xlarge
* Each logical processor offered by Amazon EC2 High Memory Instances is a hyperthread on a
physical CPU core.
** This value represents the maximum achievable throughput when striping multiple EBS volumes.
Actual throughput depends on the instance type. Every instance type has its own Amazon EBS
throughput maximum. For more information, see Amazon EBS-Optimized Instances.
io2 for HANA data
u-12tb1.112xlarge 12,288 448 6 x 2,400 GiB 3,000 12,000
u-9tb1.112xlarge 9,216 448 6 x 1,800 GiB 3,000 12,000
u-6tb1.112xlarge 6,144 448 6 x 1,200 GiB 3,000 12,000
r3.8xlarge
r3.4xlarge
r3.2xlarge
* Each logical processor offered by Amazon EC2 High Memory Instances is a hyperthread on a
physical CPU core.
** This value represents the maximum throughput that could be achieved when striping multiple
EBS volumes. Actual throughput depends on the instance type. Every instance type has its own
Amazon EBS throughput maximum. For details, see Amazon EBS-Optimized Instances in the AWS
documentation.
io2 for HANA logs
u-12tb1.112xlarge 12,288 448 1 x 525 GiB 500 2,000
u-9tb1.112xlarge 9,216 448 1 x 525 GiB 500 2,000
u-6tb1.112xlarge 6,144 448 1 x 525 GiB 500 2,000
r3.8xlarge
r3.4xlarge
r3.2xlarge
* Each logical processor offered by Amazon EC2 High Memory Instances is a hyperthread on a
physical CPU core.
** This value represents the maximum achievable throughput when striping multiple EBS volumes.
Actual throughput depends on the instance type. Every instance type has its own Amazon EBS
throughput maximum. For more information, see Amazon EBS-Optimized Instances.
* Each logical processor offered by Amazon EC2 High Memory Instances is a hyperthread on a
physical CPU core.
** This value represents the maximum throughput that could be achieved when striping multiple
EBS volumes. Actual throughput depends on the instance type. Every instance type has its own
Amazon EBS throughput maximum. For details, see Amazon EBS-Optimized Instances in the AWS
documentation.
io2 Block Express for HANA logs
* Each logical processor offered by Amazon EC2 High Memory Instances is a hyperthread on a
physical CPU core.
** This value represents the maximum throughput that could be achieved when striping multiple
EBS volumes. Actual throughput depends on the instance type. Every instance type has its own
Amazon EBS throughput maximum. For details, see Amazon EBS-Optimized Instances in the AWS
documentation.
Note
io2 Block Express volumes support up to 4,000 MiB/s of throughput per volume: 16,000 IOPS at
a 256 KiB I/O size, or 64,000 IOPS at a 16 KiB I/O size. The maximum throughput value
represented in the Total maximum throughput column = total provisioned IOPS * 256 KiB I/O.
To increase the throughput, increase the provisioned IOPS.
Root and binaries
u-12tb1.112xlarge 12,288 448 1 x 50 GiB 1 x 50 GiB 1 x 1,024 GiB 1 x 16,384 GiB
u-9tb1.112xlarge 9,216 448 1 x 50 GiB 1 x 50 GiB 1 x 1,024 GiB 1 x 16,384 GiB
u-6tb1.112xlarge 6,144 448 1 x 50 GiB 1 x 50 GiB 1 x 1,024 GiB 1 x 12,288 GiB
u-6tb1.56xlarge 6,144 224 1 x 50 GiB 1 x 50 GiB 1 x 1,024 GiB 1 x 12,288 GiB
u-3tb1.56xlarge 3,072 224 1 x 50 GiB 1 x 50 GiB 1 x 1,024 GiB 1 x 6,144 GiB
x2iedn.32xlarge 4,096 128 1 x 50 GiB 1 x 50 GiB 1 x 1,024 GiB 1 x 8,192 GiB
x2iedn.24xlarge 3,072 96 1 x 50 GiB 1 x 50 GiB 1 x 1,024 GiB 1 x 6,144 GiB
r3.4xlarge
r3.2xlarge
* Each logical processor offered by Amazon EC2 High Memory Instances is a hyperthread on a physical CPU
core.
** In a multi-node architecture, the SAP HANA NFS shared volume is provisioned only once on the master
node.
*** In a multi-node architecture, the SAP HANA backup volume can be deployed as NFS or Amazon EFS.
The size of the SAP HANA NFS backup volume is multiplied by the number of nodes. The SAP HANA backup
volume is provisioned only once on the master node, and NFS is mounted on the worker nodes. No
provisioning is needed for Amazon EFS because it is built to scale on demand, growing and shrinking
automatically as files are added and removed.
Backup options
For SAP HANA backup, you can choose file-based backup with the storage configuration recommended in
this guide, or AWS Backint Agent for SAP HANA to back up your database to Amazon S3. AWS Backint
Agent for SAP HANA is an SAP-certified backup and restore solution for SAP HANA workloads running
on Amazon EC2 instances. With AWS Backint Agent for SAP HANA as your backup solution, provisioning
additional Amazon EBS storage volumes or Amazon EFS file systems becomes optional. For more
details, see AWS Backint Agent for SAP HANA.
For single-node deployment, we recommend using Amazon EBS Throughput Optimized HDD (st1)
volumes for SAP HANA to perform file-based backup. This volume type provides low-cost magnetic
storage designed for large sequential workloads. SAP HANA uses sequential I/O with large blocks to back
up the database, so st1 volumes provide a low-cost, high-performance option for this scenario. To learn
more about st1 volumes, see Amazon EBS Volume Types.
The SAP HANA backup volume size is designed to provide optimal baseline and burst throughput as well
as the ability to hold several backup sets. Holding multiple backup sets in the backup volume makes it
easier to recover your database if necessary. You may resize your SAP HANA backup volume after initial
setup if needed. To learn more about resizing your Amazon EBS volumes, see Expanding the Storage Size
of an EBS Volume on Linux.
For multi-node deployment, we recommend using Amazon EFS for SAP HANA to perform file-based
backup. It can support performance over 10 GB/sec and over 500,000 IOPS.
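As a sketch, an EFS file system can be mounted as the backup destination like this; the file system ID and Region are placeholders, and the amazon-efs-utils mount helper is an alternative to plain NFS:

```shell
# Mount the EFS file system on /backup with the NFS v4.1 options
# recommended for EFS
sudo mount -t nfs4 -o nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,noresvport \
    fs-0123456789abcdef0.efs.<region>.amazonaws.com:/ /backup
```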
Note
The configurations recommended in this guide are used by both AWS Launch Wizard for SAP
and AWS Quick Start for SAP HANA.
Networking
SAP HANA components communicate over the following logical network zones:
• Client zone – to communicate with different clients such as SQL clients, SAP Application Server, SAP
HANA Extended Application Services (XS), and SAP HANA Studio
• Internal zone – to communicate with hosts in a distributed SAP HANA system as well as for SAP HSR
• Storage zone – to persist SAP HANA data in the storage infrastructure for resumption after start or
recovery after failure
Separating network zones for SAP HANA is considered an AWS and SAP best practice. It enables you to
isolate the traffic required for each communication channel.
In a traditional, bare-metal setup, these different network zones are set up by having multiple physical
network cards or virtual LANs (VLANs). Conversely, on the AWS Cloud, you can use elastic network
interfaces combined with security groups to achieve this network isolation. Amazon EBS-optimized
instances can also be used for further isolation for storage I/O.
EBS-Optimized Instances
Many newer Amazon EC2 instance types such as the X1 use an optimized configuration stack and provide
additional, dedicated capacity for Amazon EBS I/O. These are called EBS-optimized instances. This
optimization provides the best performance for your EBS volumes by minimizing contention between
Amazon EBS I/O and other traffic from your instance.
Elastic network interfaces
For more information about network interfaces, see the AWS documentation. In the following example,
two network interfaces are attached to each SAP HANA node, along with a separate communication
channel for storage.
Security Groups
A security group acts as a virtual firewall that controls the traffic for one or more instances. When you
launch an instance, you associate one or more security groups with the instance. You add rules to each
security group that allow traffic to or from its associated instances. You can modify the rules for a
security group at any time. The new rules are automatically applied to all instances that are associated
with the security group. To learn more about security groups, see the AWS documentation. In the
following example, ENI-1 of each instance shown is a member of the same security group that controls
inbound and outbound network traffic for the client network.
Configuration for logical network separation
Figure 12: Further isolation with additional ENIs and security groups
1. Create new security groups to allow for isolation of client, internal communication, and, if
applicable, SAP HSR network traffic. See Ports and Connections in the SAP HANA documentation to
learn about the list of ports used for different network zones. For more information about how to
create and configure security groups, see the AWS documentation.
2. Use Secure Shell (SSH) to connect to your EC2 instance at the OS level. Follow the steps described
in the appendix (p. 141) to configure the OS to properly recognize and name the Ethernet devices
associated with the new network interfaces you will be creating.
3. Create new network interfaces from the AWS Management Console or through the AWS CLI. Make
sure that the new network interfaces are created in the subnet where your SAP HANA instance is
deployed. As you create each new network interface, associate it with the appropriate security group
you created in step 1. For more information about how to create a new network interface, see the
AWS documentation.
4. Attach the network interfaces you created to your EC2 instance where SAP HANA is installed.
For more information about how to attach a network interface to an EC2 instance, see the AWS
documentation.
5. Create virtual host names and map them to the IP addresses associated with client, internal, and
replication network interfaces. Ensure that host name-to-IP-address resolution is working by
creating entries in all applicable host files or in the Domain Name System (DNS). When complete,
test that the virtual host names can be resolved from all SAP HANA nodes and clients.
6. For scale-out deployments, configure SAP HANA inter-service communication to let SAP HANA
communicate over the internal network. To learn more about this step, see Configuring SAP HANA
Inter-Service Communication in the SAP HANA documentation.
7. Configure SAP HANA hostname resolution to let SAP HANA communicate over the replication
network for SAP HSR. To learn more about this step, see Configuring Hostname Resolution for SAP
HANA System Replication in the SAP HANA documentation.
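As an illustration of steps 5 through 7, the host names, IP addresses, and global.ini entries below are placeholders, not values from this guide:

```shell
# /etc/hosts entries mapping virtual host names to the ENI IP addresses
10.0.1.10   hanahost-client     # client zone ENI
10.0.2.10   hanahost-internal   # internal zone ENI
10.0.3.10   hanahost-hsr        # replication zone ENI

# global.ini: route SAP HSR traffic over the replication network
# [system_replication_hostname_resolution]
# 10.0.3.20 = hanahost-secondary
```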
SAP support access
A few steps are required to configure proper connectivity to SAP. These steps differ depending on
whether you want to use an existing remote network connection to SAP, or you are setting up a new
connection directly with SAP from systems on AWS.
1. For the SAProuter instance, create and configure a specific SAProuter security group, which only
allows the required inbound and outbound access to the SAP support network. This should be
limited to a specific IP address that SAP gives you to connect to, along with TCP port 3299. See the
Amazon EC2 security group documentation for additional details about creating and configuring
security groups.
2. Launch the instance that the SAProuter software will be installed on into a public subnet of the VPC
and assign it an Elastic IP address.
3. Install the SAProuter software and create a saprouttab file that allows access from SAP to your SAP
HANA system on AWS.
4. Set up the connection with SAP. For your internet connection, use Secure Network Communication
(SNC). For more information, see the SAP Remote Support – Help page.
5. Modify the existing SAP HANA security groups to trust the new SAProuter security group you have
created.
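A saprouttab along these lines implements step 3; the SAP support address, host name, and instance number are placeholders that SAP provides or that depend on your installation:

```shell
# Permit the SAP support network (address supplied by SAP) to reach
# the SAP HANA instance on its dispatcher port (instance 00)
P <sap-support-ip> <hana-hostname> 3200
# Deny all other connections
D * * *
```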
Tip
For added security, shut down the EC2 instance that hosts the SAProuter service when it is
not needed for support purposes.
Support channel setup with SAProuter on premises
1. Ensure that the proper saprouttab entries exist to allow access from SAP to resources in the VPC.
2. Modify the SAP HANA security groups to allow access from the on-premises SAProuter IP address.
3. Ensure that the proper firewall ports are open on your gateway to allow traffic to pass over TCP port
3299.
Security
This section discusses additional security topics you may want to consider that are not covered in the
SAP HANA Quick Start reference deployment guide.
Here are additional AWS security resources to help you achieve the level of security you require for your
SAP HANA environment on AWS:
OS Hardening
You may want to lock down the OS configuration further, for example, to avoid providing a DB
administrator with root credentials when logging into an instance.
Disabling HANA services
API Call Logging
With CloudTrail, you can get a history of AWS API calls for your account, including API calls made via the
AWS Management Console, AWS SDKs, command line tools, and higher-level AWS services (such as AWS
CloudFormation). The AWS API call history produced by CloudTrail enables security analysis, resource
change tracking, and compliance auditing.
Notifications on Access
You can use Amazon Simple Notification Service (Amazon SNS) or third-party applications to set up
notifications on SSH login to your email address or mobile phone.
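One possible implementation, sketched here rather than an official AWS recipe, is a profile hook that publishes to an SNS topic on each interactive SSH login. The topic ARN is a placeholder, and the instance role must be allowed to call sns:Publish.

```shell
# /etc/profile.d/ssh-login-sns.sh -- runs at interactive login.
# $SSH_CONNECTION is set only for SSH sessions.
if [ -n "$SSH_CONNECTION" ]; then
    aws sns publish \
        --topic-arn "arn:aws:sns:us-east-1:111122223333:ssh-logins" \
        --message "SSH login: user=$(whoami) host=$(hostname) from=${SSH_CONNECTION%% *}" \
        >/dev/null 2>&1 &
fi
```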
You can change the patterns to fit your changing business requirements with minimal to no downtime,
depending on the complexity of your chosen architecture pattern.
Topics
• SAP HANA System Replication (p. 130)
• Secondary SAP HANA instance (p. 130)
• Overview of patterns (p. 131)
• Single Region architecture patterns for SAP HANA (p. 132)
• Multi-Region architecture patterns for SAP HANA (p. 135)
The secondary instance can be deployed as a passive instance or an active (read-only) instance. When the
secondary instance is deployed as a passive instance, you can reuse the Amazon EC2 instance capacity to
accommodate a non-production SAP HANA workload.
Overview of patterns
The architecture patterns for SAP HANA are divided into the following two categories:
You must consider the risk and impact of each failure type, and the cost of mitigation when choosing
a pattern. The following table provides a quick overview of the architecture patterns for SAP HANA
systems on AWS.
• Pattern 1 (p. 132) — Single Region disaster recovery: near zero RPO¹, low RTO², medium cost, medium resiliency, optional non-production reuse³, 2-tier replication⁴, same-Region Amazon S3 replication⁵.
• Pattern 2 (p. 133) — Single Region disaster recovery: near zero RPO¹, low RTO², medium cost, high resiliency, non-production reuse³, 3-tier replication⁴, same-Region Amazon S3 replication⁵.
• Pattern 5 (p. 136) — Multi-Region disaster recovery: near zero RPO¹, low RTO², medium cost, medium resiliency, optional non-production reuse³, 2-tier replication⁴, cross-Region Amazon S3 replication⁵.
• Pattern 6 (p. 137) — Multi-Region disaster recovery: near zero RPO¹, low RTO², high cost, high resiliency, optional non-production reuse³, 3-tier replication⁴, cross-Region Amazon S3 replication⁵.
• Pattern 7 (p. 138) — Multi-Region disaster recovery: near zero RPO¹, low RTO², very high cost, very high resiliency, optional non-production reuse³, multi-target replication⁴, cross-Region Amazon S3 replication⁵.
¹ To achieve a near zero recovery point objective, SAP HANA System Replication must be set up in sync mode
for the SAP HANA instances within the same Region.
² To achieve the lowest recovery time objective, we recommend using a high availability setup with third-
party cluster solutions in combination with SAP HANA System Replication.
³ A production sized Amazon EC2 instance can be deployed as an MCOS installation to accommodate a non-
production SAP HANA instance.
⁴ SAP HANA System Replication and the number of SAP HANA instance copies as targets.
⁵ Same-Region replication copies objects across Amazon S3 buckets in the same Region.
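The sync-mode setup referenced in footnote 1 is performed when registering the secondary site with `hdbnsutil`. A sketch, assuming SAP HANA 2.0, instance number 00, and placeholder host and site names; run it as the `<sid>adm` user on the secondary node while its instance is stopped.

```shell
# Register the secondary against the primary in synchronous
# replication mode (hana-primary and SiteB are placeholders).
hdbnsutil -sr_register \
    --remoteHost=hana-primary \
    --remoteInstance=00 \
    --replicationMode=sync \
    --operationMode=logreplay \
    --name=SiteB
```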
Single Region patterns
You can choose these patterns when you need to ensure that your SAP data resides within regional
boundaries stipulated by data sovereignty laws.
Patterns
• Pattern 1: Single Region with two Availability Zones for production (p. 132)
• Pattern 2: Single Region with two Availability Zones for production and production sized non-
production in a third Availability Zone (p. 133)
• Pattern 3: Single Region with one Availability Zone for production and another Availability Zone for
non-production (p. 134)
• Pattern 4: Single Region with one Availability Zone for production (p. 135)
This pattern is foundational if you are looking for high availability cluster solutions for automated
failover to fulfill near-zero recovery point and time objectives. SAP HANA System Replication with high
availability cluster solutions for automated failover provides resiliency against failure scenarios. For more
information, see Failure scenarios.
You need to consider the cost of licensing for third-party cluster solutions. If the secondary SAP HANA
instance is not used for read-only operations, it is idle capacity. Provisioning a production-equivalent
instance type as standby adds to the total cost of ownership.
Your SAP HANA instance backups can be stored in Amazon S3 buckets using AWS Backint Agent for SAP
HANA. Amazon S3 objects are automatically stored across multiple devices spanning a minimum of three
Availability Zones across a Region. To protect against logical data loss, you can use the Same-Region
Replication feature of Amazon S3. For more information, see Setting up replication.
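Same-Region Replication is configured as a replication rule on the source bucket. A minimal AWS CLI sketch; the bucket names and the IAM role are placeholders, and both buckets must have versioning enabled.

```shell
# Versioning is a prerequisite for S3 replication.
aws s3api put-bucket-versioning \
    --bucket hana-backups-source \
    --versioning-configuration Status=Enabled

# Replicate new objects to a second bucket in the same Region.
aws s3api put-bucket-replication \
    --bucket hana-backups-source \
    --replication-configuration '{
      "Role": "arn:aws:iam::111122223333:role/s3-replication-role",
      "Rules": [{
        "Status": "Enabled",
        "Priority": 1,
        "Filter": {},
        "DeleteMarkerReplication": {"Status": "Disabled"},
        "Destination": {"Bucket": "arn:aws:s3:::hana-backups-replica"}
      }]
    }'
```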
This architectural pattern is cost-optimized. It aids disaster recovery in the unlikely event of losing
connection to two Availability Zones at the same time. For disaster recovery, the non-production SAP
HANA workload is stopped to make resources available for production workload. However, invoking
disaster recovery (third Availability Zone) is a manual activity. As per the requirements of MCOS, you are
required to provision the non-production SAP HANA instance with the same AWS instance type as that of
the primary instance, and it has to be located in a third Availability Zone. Also, operating an MCOS system
requires additional storage for non-production workloads and detailed, tested procedures to invoke a
disaster recovery.
The secondary instance is an MCOS installation and co-hosts a non-production SAP HANA workload. For
more information, see SAP Note Multiple SAP HANA DBMSs (SIDs) on one SAP HANA system. This is a
cost-optimized solution without high availability. In the event of a failure on the primary instance, the
non-production SAP HANA workload is stopped and a takeover is performed on the secondary instance.
Considering the time taken to recover services on the secondary instance, this type of pattern is
suitable for SAP HANA workloads that can have higher recovery time objectives and are functioning as
disaster recovery systems.
Multi-Region patterns
When deploying a multi-Region pattern, you can benefit from using an automated approach, such as
a cluster solution, for failover between Availability Zones to minimize the overall downtime and remove
the need for human intervention. Multi-Region patterns not only provide high availability but also
disaster recovery, thereby lowering overall costs. The distance between the chosen Regions has a direct
impact on latency, which must therefore be considered in the overall design of SAP HANA System
Replication.
There are additional cost implications from cross-Region replication or data transfer that also need to be
factored into the overall solution pricing. The pricing varies between Regions.
Patterns
• Pattern 5: Primary Region with two Availability Zones for production and secondary Region with a
replica of backups/AMIs (p. 136)
• Pattern 6: Primary Region with two Availability Zones for production and secondary Region with
compute and storage capacity deployed in a single Availability Zone (p. 137)
• Pattern 7: Primary Region with two Availability Zones for production and a secondary Region with
compute and storage capacity deployed, and data replication across two Availability Zones (p. 138)
• Pattern 8: Primary Region with one Availability Zone for production and a secondary Region with a
replica of backups/AMIs (p. 139)
• Summary (p. 140)
With cross-Region replication of files stored in Amazon S3, the data stored in a bucket is automatically
(asynchronously) copied to the target Region. Amazon EBS snapshots can be copied between Regions.
For more information, see Copy an Amazon EBS snapshot. You can copy an AMI within or across Regions
using AWS CLI, AWS Management Console, AWS SDKs or Amazon EC2 APIs. For more information,
see Copy an AMI. You can also use AWS Backup to schedule and run snapshots and replications across
Regions.
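The snapshot and AMI copies described above can be scripted with the AWS CLI. A sketch with placeholder IDs and Regions; each copy command runs against the destination Region.

```shell
# Copy an EBS snapshot of the SAP HANA volumes to the DR Region.
aws ec2 copy-snapshot \
    --region us-west-2 \
    --source-region us-east-1 \
    --source-snapshot-id snap-0123456789abcdef0 \
    --description "SAP HANA data volume (DR copy)"

# Copy the AMI of the SAP HANA instance to the DR Region.
aws ec2 copy-image \
    --region us-west-2 \
    --source-region us-east-1 \
    --source-image-id ami-0123456789abcdef0 \
    --name "sap-hana-prod-dr-copy"
```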
In the event of a complete Region failure, the production SAP HANA instance needs to be built in the
secondary Region using an AMI. You can use AWS CloudFormation templates to automate the launch of a
new SAP HANA instance. Once your instance is launched, you can then download the last set of backups
from Amazon S3 to restore your SAP HANA instance to a point in time before the disaster event. You
can also use AWS Backint Agent to restore and recover your SAP HANA instance and redirect your client
traffic to the new instance in the secondary Region.
This architecture provides you with the advantage of implementing your SAP HANA instance across
multiple Availability Zones with the ability to failover instantly in the event of a failure. For disaster
recovery that is outside the primary Region, recovery point objective is constrained by how often you
store your SAP HANA backup files in your Amazon S3 bucket and the time it takes to replicate your
Amazon S3 bucket to the target Region. You can use Amazon S3 replication time control for a time-
bound replication. For more information, see Enabling Amazon S3 Replication Time Control.
Your recovery time objective depends on the time it takes to build the system in the secondary Region
and restore operations from backup files. The amount of time varies with the size of the database. Also,
the time required to get the compute capacity for restore procedures may be longer in the absence of
reserved instance capacity. This pattern is suitable when you need the lowest possible recovery time
and point objectives within a Region and can accept higher recovery point and time objectives for
disaster recovery outside the primary Region.
In the event of a failure in the primary Region, the production workloads are failed over to the secondary
Region manually. This pattern ensures that your SAP systems are highly available and are disaster-
tolerant. This pattern provides a quicker failover and continuity of business operations with continuous
data replication.
There is an increased cost of deploying the required compute and storage for the production SAP HANA
instance in the secondary Region and of data transfers between Regions. This pattern is suitable when
you require disaster recovery outside of the primary Region with low recovery point and time objectives.
The following diagram shows a multi-target replication where the primary SAP HANA instance is
replicated on both Availability Zones within the same Region and also in the secondary Region.
The following diagram shows a multi-tier replication where the replication is configured in a chained
fashion.
Region and the replication outside of the primary Region is configured using SAP HANA Multi-target
System Replication. This setup can be extended with high availability cluster solution for automatic
failover capability on the primary Region. For more information, see SAP HANA Multi-target System
Replication.
This pattern provides protection against failures in the Availability Zones and Regions. However, a cross-
Region takeover of the SAP HANA instance requires manual intervention. During a failover to the secondary
Region, the SAP HANA instance continues to have SAP HANA System Replication up and running in the
new Region without any manual intervention. This setup is applicable if you are looking for the highest
application availability at all times and disaster recovery outside the primary Region with the least
possible recovery point and time objectives. This pattern can withstand an extremely rare possibility of
the failure of three Availability Zones spread across multiple Regions.
This pattern is highly suitable for you if you operate active/active (read-only) SAP HANA instances in
the primary Region and plan to continue the same SAP HANA System Replication configuration with
read-only capability. If you are looking for read-only capability across two Regions along with an existing
read-only instance within the Region, you can configure multiple secondary systems supporting active/
active (read-only) configuration. However, only one of the systems can be accessed via hint-based
statement routing and the others must be accessed via direct connection.
With this pattern, the redundant compute and storage capacity deployed across two Availability Zones in
two Regions and the cross-Region communication add to the total cost of ownership.
With this pattern, your SAP HANA instance is not highly available. In the event of a complete Region
failure, the production SAP HANA instance needs to be built in the secondary Region using AMI. You can
use AWS CloudFormation templates to automate the launch of a new SAP HANA instance. Once your
instance is launched, you can then download the last set of backups from Amazon S3 to restore your
SAP HANA instance to a point in time before the disaster event. You can also use AWS Backint Agent
to restore and recover your SAP HANA instance and redirect your client traffic to the new instance in the
secondary Region.
For disaster recovery that is outside the primary Region, recovery point objective is constrained by how
often you store your SAP HANA backup files in your Amazon S3 bucket and the time it takes to replicate
your Amazon S3 bucket to the target Region. Your recovery time objective depends on the time it takes
to build the system in the secondary Region and restore operations from backup files. The amount of
time will vary depending on the size of the database. This pattern is suitable for non-production or non-
critical production systems that can tolerate a downtime required to restore normal operations.
Summary
We highly recommend operating business critical SAP HANA instances across two Availability Zones. You
can use a third-party cluster solution, such as Pacemaker, along with SAP HANA System Replication to
ensure a highly available setup.
A high availability setup with a third-party cluster solution adds to the licensing cost, but is still
recommended because it provides a highly resilient architecture with near-zero recovery time and point
objectives.
Appendix: Configuring Linux to recognize
Ethernet devices for multiple network interfaces
1. Use SSH to connect to your SAP HANA host as ec2-user, and sudo to root.
2. Remove the existing udev rule; for example:
hanamaster:# rm -f /etc/udev/rules.d/70-persistent-net.rules
3. Create a new udev rule that writes rules based on MAC address rather than other device attributes.
This will ensure that on reboot, eth0 is still eth0, eth1 is eth1, and so on. For example:
hanamaster:# cd /etc/sysconfig/network/
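The rule file created in step 3 pins each interface name to its MAC address. A sketch of what the entries in /etc/udev/rules.d/70-persistent-net.rules might look like; the MAC addresses are placeholders for those of your network interfaces:

```
# Name interfaces by MAC address so eth0/eth1 survive reboots.
SUBSYSTEM=="net", ACTION=="add", ATTR{address}=="0a:1b:2c:3d:4e:5f", NAME="eth0"
SUBSYSTEM=="net", ACTION=="add", ATTR{address}=="0a:1b:2c:3d:4e:60", NAME="eth1"
```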
5. Ensure that you can accommodate up to seven more Ethernet devices or network interfaces, and
restart wicked. For example:
hanamaster:# cd /etc/iproute2
hanamaster:/etc/iproute2 # echo "2 eth1_rt" >> rt_tables
hanamaster:/etc/iproute2 # ip route add default via 172.16.1.122 dev eth1 table eth1_rt
hanamaster:/etc/iproute2 # ip rule
0: from all lookup local
32766: from all lookup main
32767: from all lookup default
Document history
Date Change Location
This guide is part of a content series that provides detailed information about hosting, configuring, and
using SAP technologies in the Amazon Web Services Cloud. For the other guides in the series, ranging
from overviews to advanced topics, see SAP on AWS Technical Documentation home page.
Overview
This guide provides an overview of data tiering for SAP customers and partners who are considering
implementing or migrating SAP environments or systems to the Amazon Web Services Cloud.
This guide is for users who architect, design, deploy, and support SAP systems directly, and for IT
professionals who support these same functions for their SAP systems.
Prerequisites
Specialized Knowledge
You should have previous experience installing, migrating, and operating SAP environments and systems.
Technical Requirements
To access the SAP notes referenced in this guide, you must have an SAP One Support Launchpad user
account.
Assigning your data to the correct category is a process that is specific to your business and IT
requirements. Here are some ways to align these categories with your specific requirements.
SAP Data Tiering
Hot Tier: The hot tier is for storing data that is used (read, accessed or updated) in real time and that
must be available in a performant and timely manner. This hot data is critical and valuable to the
business for its operational and analytical processes.
Warm Tier: The warm tier is for data that is read less often than hot data, has less stringent performance
requirements, but must still be updatable. The warm tier is integrated with the hot tier in the SAP HANA
database. The benefit of this integration is a more transparent view of the data in the hot and warm data
tiers. Applications accessing the data are unaware that the data physically resides on different data tiers.
Cold Tier: The cold tier is for storing data that is infrequently accessed, does not require updates, can be
accessed in a longer timeframe, and is not critical for daily operational or analytical processes.
The following table summarizes the data tiers and their characteristics.
After you have assigned the data to your preferred tiers, you can map your SAP product to the data
tiering solution that is supported by SAP on AWS. For more information, see SAP HANA on AWS:
Supported Amazon EC2 products and SAP HANA on AWS: Dynamic Tiering.
For the hot tier, this guide does not cover SAP HANA on AWS specifically. See SAP HANA on AWS
documentation for more information about running SAP HANA on AWS. For the warm and cold tier, you
have the following technology options shown in the following table, depending on your SAP product:
• Hot tier: Amazon EC2 instances certified for SAP HANA (for native SAP HANA, SAP BW, and SAP S/4HANA or Suite on HANA alike).
• Warm tier: options include SAP HANA Native Storage Extension.
• Cold tier: for native SAP HANA, Data Lifecycle Manager (DLM) with SAP Data Hub and Amazon S3, or DLM with SAP HANA Spark Controller, Hadoop, and Amazon S3; for SAP BW, SAP BW NLS with SAP IQ, SAP BW NLS with Amazon S3, or SAP BW/4HANA Data Tiering Optimization; for SAP S/4HANA or Suite on HANA, ILM Store with SAP IQ, or data archiving.
Warm Data Tiering Options
SAP HANA Extension Node
The total amount of data that can be stored on the SAP HANA extension node ranges from 1 to 2x of
the total amount of memory of your extension node. For example, if your extension node had 2 TB of
memory, you could potentially store up to 4 TB of warm data on your extension node.
Data Aging
Data aging can be used for SAP products like SAP Business Suite on HANA (SoH) or SAP S/4HANA to
move data from SAP HANA memory to the disk area. The disk area is additional disk space that is a part
of the SAP HANA database. This helps free up more SAP HANA memory by storing older, less frequently
accessed data in the disk area. When the data is read or updated, data aging uses the paged attribute
property to selectively load the pages of a table into memory instead of loading the entire table into
memory. This helps you conserve your memory space by only loading the required data (instead of the
entire table) into memory. In addition, paged attributes are marked for a higher unload priority by SAP
HANA and are paged out to disk first when SAP HANA needs to free up memory. To size your SAP HANA
memory requirements for data aging, SAP recommends that you run the sizing report provided in the
SAP Note 1872170 - ABAP on HANA sizing report (S/4HANA, Suite on HANA).
The Data Lifecycle Manager (DLM) tool, which is part of SAP HANA Data Warehousing Foundation, can
be used to move data from SAP HANA memory to a cold storage location. For your native SAP HANA use
case, you have two options.
With this option, you can use the SAP Data Hub product to move data in and out of SAP HANA into your
cold store location. On AWS, you can use native AWS services such as Amazon Simple Storage Service
(Amazon S3) to store your cold data. Once your data is in Amazon S3, you can use Amazon S3 features such as
S3 Intelligent-Tiering and Amazon S3 Lifecycle to optimize your costs. Once you have determined that
you no longer need to access your cold data from SAP HANA, you can archive your data in Amazon S3
Glacier for long-term retention.
DLM with SAP HANA Spark Controller
Cold Tier Options for SAP BW
SAP BW/4HANA DTO with Data Hub
Cold Tier Options for SAP S/4HANA or Suite on HANA
SAP Archiving
With this option, you can use ILM or your standard data archiving process. You can use Amazon Elastic
File System (Amazon EFS) to store your archive file in a highly available, scalable and durable manner.
Similarly, for Windows based systems, you can use Amazon FSx to store your archive files. Amazon EFS
and Amazon FSx can be mounted as your archive file system and you can archive your data from SAP to
this file system through SAP transaction code SARA.
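Mounting the archive file system is standard NFS work. A sketch for Amazon EFS on Linux; the file system ID, Region, and mount point are placeholders:

```shell
# Mount an EFS file system as the SAP archive file system.
sudo mkdir -p /sapmnt/archive
sudo mount -t nfs4 -o nfsvers=4.1 \
    fs-0123456789abcdef0.efs.us-east-1.amazonaws.com:/ \
    /sapmnt/archive
```

Point the SARA archive path at this mount, and add an /etc/fstab entry to make the mount persistent across reboots.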
Figure 10: SAP archiving with Amazon EFS for cold tier
For archiving, another option is to use the Amazon Elastic Block Store (Amazon EBS) sc1 volume type
as the underlying storage type for your archive file system. Amazon EBS sc1 volumes are inexpensive
block storage and are designed for less frequently accessed workloads like data archiving. To increase
durability and availability of your archived data, we recommend that you copy the data to Amazon S3 for
backup and Amazon S3 Glacier for long term retention.
Figure 11: SAP archiving with Amazon EBS for cold tier
Additional Reading
SAP on AWS technical documentation
SAP documentation
• SAP Note 1872170 - ABAP on HANA sizing report (S/4HANA, Suite on HANA)
• SAP HANA Extension Nodes as a Warm Store
• SAP HANA Dynamic Tiering Architecture
• Extended Store Table Function Restrictions
• DLM on Amazon Elastic Map Reduce
Document Revisions
Date Change
This guide is part of a content series that provides detailed information about hosting, configuring, and
using SAP technologies in the Amazon Web Services Cloud. For the other guides in the series, ranging
from overviews to advanced topics, see SAP on AWS Technical Documentation.
Overview
This guide provides SAP customers and partners with instructions to set up a highly available SAP
architecture that uses overlay IP addresses on Amazon Web Services. This guide includes two
configuration approaches:
• AWS Transit Gateway serves as a central hub to facilitate network connection to an overlay IP address.
• Elastic Load Balancing where a Network Load Balancer enables network access to an overlay IP
address.
This guide is intended for users who have previous experience installing and operating highly available
SAP environments and systems.
Prerequisites
Specialized Knowledge
Before you follow the configuration instructions in this guide, we recommend that you become familiar
with the following AWS services. (If you are new to AWS, see Getting Started with AWS.)
• Amazon VPC
• AWS Transit Gateway
• Elastic Load Balancing
Overlay IP Routing using AWS Transit Gateway
system and database. AWS offers the use of multiple Availability Zones within an AWS Region to provide
resiliency for the SAP applications.
As part of your SAP implementation, you create an Amazon Virtual Private Cloud (Amazon VPC) to
logically isolate the network from other virtual networks in the AWS Cloud. Then, you use AWS network
routing features to direct the traffic to any instance in the VPCs or between different subnets in a VPC.
Amazon VPC setup includes assigning subnets to your SAP ASCS/ERS for NetWeaver and primary/
secondary nodes for the SAP HANA database. Each of these configured subnets has a classless inter-
domain routing (CIDR) IP assignment from the VPC which resides entirely within one Availability Zone.
This CIDR IP assignment cannot span multiple zones or be reassigned to the secondary instance in a
different AZ during a failover scenario.
For this reason, AWS allows you to configure an overlay IP address (OIP) outside of your VPC CIDR block
to access the active SAP instance. With overlay IP routing, you can allow the AWS network to use a non-
overlapping RFC1918 private IP address that resides outside the VPC CIDR range and direct the SAP traffic
to any instance set up across Availability Zones within the VPC by changing the routing entry in AWS.
An SAP HANA database or SAP NetWeaver application that is protected by a cluster solution such as
SUSE Linux Enterprise Server High Availability Extension (SLES HAE), Red Hat Enterprise Linux HA Add-
On (RHEL HA), or SIOS uses the assigned overlay IP address to ensure that the HA cluster remains
accessible during failover scenarios. Since the overlay IP address uses an IP address range outside the VPC
CIDR range and Virtual Private Gateway connection, you can use AWS Transit Gateway as a central hub
to facilitate the network connection to an overlay IP address from multiple locations including Amazon
VPCs, other AWS Regions, and on-premises using AWS Direct Connect or AWS Client VPN.
If you do not have AWS Transit Gateway set up as a network transit hub or if AWS Transit Gateway is not
available in your preferred AWS Region, you can use a Network Load Balancer to enable network access
to an OIP.
Note: If you do not use Amazon Route 53 or AWS Transit Gateway, see the Overlay IP Routing with
Network Load Balancer (p. 160) section.
Architecture
AWS Transit Gateway acts as a hub that controls how traffic is routed among all the connected networks
which act like spokes. Your Transit Gateway routes packets between source and destination attachments
using Transit Gateway route tables. You can configure these route tables to propagate routes from the
route tables for the attached VPCs and VPN connections. You can also add static routes to the Transit
Gateway route tables. You can add the overlay IP address or address CIDR range as a static route in
the transit gateway route table with a target as the VPC where the EC2 instances of SAP cluster are
running. This way, all the network traffic directed towards overlay IP addresses is routed to this VPC. The
following figure shows this scenario with connectivity from different VPC and corporate network.
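Adding the overlay IP range as a static route can be sketched with the AWS CLI; the route table ID, attachment ID, and CIDR below are placeholders for your values:

```shell
# Static route: send the overlay IP range to the SAP production
# VPC attachment of the Transit Gateway.
aws ec2 create-transit-gateway-route \
    --transit-gateway-route-table-id tgw-rtb-0123456789abcdef0 \
    --destination-cidr-block 192.168.10.0/24 \
    --transit-gateway-attachment-id tgw-attach-0123456789abcdef0
```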
Configuration Steps for AWS Transit Gateway
AWS Transit Gateway pricing is based on the number of connections made to the Transit Gateway per
hour and the amount of traffic that flows through AWS Transit Gateway. For more information, see AWS
Transit Gateway Service Level Agreement.
Note: For attachment, select only the subnet where the SAP instances are running with cluster and
overlay IP configured. In the following figure, the private subnet of the SAP instance is selected for the
Transit Gateway attachment.
• VPN connection. Attach a VPN to this Transit Gateway. For detailed steps, see Transit Gateway VPN
Attachments.
When you create a site-to-site VPN connection, you specify the static routes for the overlay IP address.
For detailed steps, see VPN routing options.
• AWS Direct Connect. Attach a Direct Connect Gateway to this Transit Gateway. First, associate a Direct
Connect Gateway with the Transit Gateway. Then, create a transit virtual interface for your AWS Direct
Connect connection to the Direct Connect gateway. Here, you can advertise prefixes from on-premises
to AWS and from AWS to on-premises. For detailed steps, see Transit Gateway Attachments to a Direct
Connect Gateway.
When you associate a Transit Gateway with a Direct Connect gateway, you specify the prefix lists to
advertise the overlay IP address to the on-premises environment. For detailed steps, see Allowed
prefixes interactions.
Note: AWS Direct Connect is recommended for business critical workloads. See Resilience in AWS Direct
Connect to learn about resiliency at the network level.
Note
If you are using AWS Client VPN, you do not need to configure Transit Gateway. You can create
additional entries in the routing table for overlay IP addresses. Route traffic to the subnets of
the VPC of production SAP system where overlay IP addresses are configured.
When you create a Transit Gateway attachment to a VPC, the propagation route is created in the default
Transit Gateway route table. In Figure 3, the first and second entries show the propagated routes created
automatically, through VPC attachments, for the VPCs where SAP production and non-production
systems are running.
1. To route traffic from AWS Transit Gateway to the overlay IP address, create static routes in the Transit
Gateway route tables to route overlay IP addresses to the VPC of production SAP system where the
overlay IP addresses are configured. In Figure 3, the third entry shows the static route created for
the overlay IP range. The target for this route is the SAP Production VPC.
Figure 3: Transit Gateway route table: Overlay IP static route with VPC of production SAP system
target
2. To route the outgoing traffic from VPCs where SAP instances are running to private IP addresses of
another VPC where SAP instances are running, attached to the same Transit Gateway, create entries in the
route tables associated with these VPC subnets. The target of these routes is AWS Transit Gateway.
In the following VPC of production SAP system route table example, the non-production SAP VPC
(third entry) and corporate network (fourth entry) are routed to the Transit Gateway.
Figure 4: VPC of production SAP system route table: VPC of production SAP system and corporate
network routed to AWS Transit Gateway
3. In the VPC of the non-production SAP system, to route the outgoing traffic from the overlay IP
address, create entries in the route tables with Transit Gateway as the target. In the following VPC of
non-production SAP system route table example, the destination is the overlay IP range and the target
is Transit Gateway.
Figure 5: VPC of non-production SAP system route table: Outgoing traffic from overlay IP address
routed to Transit Gateway
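The VPC-side routes in steps 2 and 3 can be sketched with the AWS CLI; the route table ID, Transit Gateway ID, and overlay CIDR below are placeholders:

```shell
# In a spoke VPC's subnet route table, send traffic for the
# overlay IP range to the Transit Gateway.
aws ec2 create-route \
    --route-table-id rtb-0123456789abcdef0 \
    --destination-cidr-block 192.168.10.0/24 \
    --transit-gateway-id tgw-0123456789abcdef0
```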
Overlay IP Routing with Network Load Balancer
Architecture
The following figure shows the network access flow of ASCS or SAP HANA overlay IP from outside the
VPC.
Figure 6: SAP High Availability with Overlay IP and Elastic Load Balancer
With Network Load Balancers, you only pay for what you use. See Elastic Load Balancing pricing, for
more information.
Configuration Steps for Network Load Balancer
This setup allows the static Network Load Balancer DNS to forward the traffic to your SAP instance
network interface through the static overlay IP address. During failover scenarios, you can point to
the elastic network interface of the active SAP instance using manual steps or automatically using
cluster management software.
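For the Network Load Balancer approach, the overlay IP address is registered as an ip-type target. A sketch with placeholder IDs, ARN, and addresses; because the overlay IP lies outside the VPC CIDR, the target is registered with `AvailabilityZone=all`:

```shell
# Target group whose only target is the overlay IP itself.
aws elbv2 create-target-group \
    --name hana-overlay-tg \
    --protocol TCP --port 30015 \
    --vpc-id vpc-0123456789abcdef0 \
    --target-type ip

# Register the overlay IP; "all" because it is outside the VPC CIDR.
aws elbv2 register-targets \
    --target-group-arn arn:aws:elasticloadbalancing:us-east-1:111122223333:targetgroup/hana-overlay-tg/0123456789abcdef \
    --targets Id=192.168.10.5,AvailabilityZone=all
```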
162
SAP HANA on AWS SAP HANA Guides
Configuration Steps for Network Load Balancer
2. In the Host Name parameter of SAP HANA Studio, use the Network Load Balancer DNS name and
provide additional credentials to connect to the SAP HANA system.
Additional Implementation Notes
Additional Reading
SAP on AWS technical documentation
SAP documentation
Document Revisions
Date Change
Automated deployment of SAP
HANA on AWS with high availability
This guide is part of a content series that provides detailed information about hosting, configuring, and
using SAP technologies in the Amazon Web Services Cloud. For the other guides in the series, ranging
from overviews to advanced topics, see the SAP on AWS Technical Documentation home page.
This guide describes how to set up AWS resources and configure a high availability cluster
on SUSE Linux Enterprise Server (SLES) and Red Hat Enterprise Linux (RHEL) operating systems to deploy
a highly available configuration of SAP HANA on Amazon Elastic Compute Cloud (Amazon EC2) instances
in an existing virtual private cloud (VPC).
After you complete the deployment using either AWS Quick Start or Launch Wizard, you can follow the
steps provided in these sections of the document to perform failover testing:
Manual deployment of SAP HANA on
AWS with high availability clusters
This guide helps you configure high availability clusters on SLES or RHEL operating systems for your SAP
HANA databases, deployed on Amazon EC2 instances in two different Availability Zones (AZs) within an
AWS Region.
Operating systems
You can deploy your SAP workload on any of the following operating systems:
SLES for SAP and RHEL for SAP with High Availability and Update Services (HA and US) are available in
the AWS Marketplace with an hourly or an annual subscription model.
SUSE Linux Enterprise Server for SAP Applications (SLES for SAP)
SLES for SAP provides additional benefits, including Extended Service Pack Overlap Support (ESPOS),
configuration and tuning packages for SAP applications, and High Availability Extension (HAE). See the
SUSE SLES for SAP product page. AWS strongly recommends using SLES for SAP instead of SLES for all
your SAP workloads.
If you plan to use Bring Your Own Subscription (BYOS) images provided by SUSE, ensure that you have
the registration code required to register your instance with SUSE to access repositories for software
updates.
RHEL for SAP with HA and US provides access to the Red Hat Pacemaker cluster software for high
availability, extended update support, and the libraries that are required to configure a Pacemaker
cluster. For details, see the RHEL for SAP Offerings on AWS FAQ in the Red Hat knowledge base.
If you plan to use the BYOS model with RHEL, either through the Red Hat Cloud Access program or
another means, ensure that you have access to a RHEL for SAP Solutions subscription. For details, see
Overview of the Red Hat Enterprise Linux for SAP Solutions subscription in the Red Hat knowledge base.
AWS infrastructure, operating
system setup and HANA installation
The correct subscription is required to download the required packages for configuring the Pacemaker
cluster.
SAP Notes
After you have the AWS infrastructure ready, you must configure the operating system and install the
primary and secondary SAP HANA databases as per the architecture diagram in the previous section. SAP
HANA installation steps are detailed in the SAP Installation Guides and Setup Manuals available on the
SAP Help Portal.
Hostname resolution
Ensure that both systems can resolve the hostnames of both cluster nodes. To avoid DNS issues, add the
hostnames of both cluster nodes to /etc/hosts on each node.
# cat /etc/hosts
10.0.0.1 prihana.example.com prihana
10.0.0.2 sechana.example.com sechana
1. Log in as the <sid>adm user on each cluster node and run the following commands.
# HDB stop
# cdpro
2. Edit the SAP HANA profile file named SID_HDB_instNum_hostname and set the autostart property
to 0.
3. Save the profile file and start SAP HANA.
# HDB start
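As an illustration, after the change the autostart property in the instance profile might look like the following (the profile path, SID, instance number, and hostname here are examples only, not taken from your system):

```
# /usr/sap/HDB/SYS/profile/HDB_HDB00_prihana (example path)
Autostart = 0
```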
1. Enable HANA system replication for the database on the primary cluster node.
2. Register the secondary SAP HANA database node with the primary cluster node and start the
secondary SAP HANA database.
3. Verify the state of replication.
The following values are used to configure HSR and the high availability cluster in this example:
After the database instance is stopped, you can register the instance using hdbnsutil. On the
secondary node, the replication mode should be either "SYNC" or "SYNCMEM".
As a <sid>adm user, stop the secondary SAP HANA database, register the secondary node, and start the
SAP HANA database:
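These steps can be sketched as follows, run as the <sid>adm user on the secondary node. The site name, remote hostname, instance number, and modes are illustrative and must match your landscape:

```
HDB stop
hdbnsutil -sr_register --name=SEC \
  --remoteHost=prihana --remoteInstance=00 \
  --replicationMode=sync --operationMode=logreplay
HDB start
```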
You can view the replication state of the whole SAP HANA landscape using the following command as a
<sid>adm user on the primary node:
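For example, either of the following commands reports the replication state when run as the <sid>adm user on the primary node (the SID HDB and instance number 00 are assumptions):

```
hdbnsutil -sr_state
# or, for a per-service replication overview:
python /usr/sap/HDB/HDB00/exe/python_support/systemReplicationStatus.py
```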
To have your secondary site as a hot standby system, the operation mode configured must be
‘logreplay’.
For more details regarding all operation modes, see How To Perform System Replication for SAP HANA.
Configuring the SAP HANA HA/DR provider hook
Ensure the operation_mode parameter is set to your desired operation mode in the global.ini
configuration file on both the primary and secondary nodes.
operation_mode = logreplay
SAP HANA provides "hooks" that allow SAP HANA to send out notifications for certain events. A hook is
used to improve the detection of when a takeover is required. Both SLES and RHEL provide such a hook
in their respective resource packages, which allows SAP HANA to report to the cluster immediately if the
secondary gets out of sync. These hooks must be configured on both nodes, primary and secondary.
To integrate the HA/DR hook script with SAP HANA, you must stop the database and update the
global.ini configuration file.
As a root user, copy the hook from the SAPHanaSR package into a read/writable directory on both
nodes, as shown in the following example.
Update the global.ini file on each node to enable use of the hook script by both SAP HANA
instances. Ensure that you make a copy/backup of global.ini before updating the file.
See the following example for updating the global.ini at location (/hana/shared/HDB/global/hdb/
custom/config/global.ini):
[ha_dr_provider_SAPHanaSR]
provider = SAPHanaSR
path = /hana/shared/myHooks
execution_order = 1
[trace]
ha_dr_saphanasr = info
The current version of the SAPHanaSR python hook uses the command sudo to allow the <sid>adm
user to access the cluster attributes. To enable this, update the file /etc/sudoers as a root user with
entries as shown in the following example:
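The entries themselves were not reproduced here. As a sketch based on the SAPHanaSR documentation (hdbadm and the SID hdb are examples; use your own <sid>adm user and SID), they typically look like the following:

```
# /etc/sudoers entry allowing the <sid>adm user to set the srHook cluster attribute
hdbadm ALL=(ALL) NOPASSWD: /usr/sbin/crm_attribute -n hana_hdb_site_srHook_*
```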
Cluster configuration prerequisites
Note
When using the above example for your HANA system, replace hdbadm with <sid>adm.
Stop the SAP HANA database, either with HDB or using sapcontrol, before proceeding further with
changes, as shown in the following example.
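For example, as the <sid>adm user (the instance number 00 is an assumption):

```
HDB stop
# or, equivalently:
sapcontrol -nr 00 -function StopSystem HDB
```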
[ha_dr_provider_SAPHanaSR]
provider = SAPHanaSR
path = /usr/share/SAPHanaSR
execution_order = 1
[trace]
ha_dr_saphanasr = info
The current version of the SAPHanaSR python hook uses the command sudo to allow the <sid>adm user
to access the cluster attributes. To enable this, update the file /etc/sudoers as a root user with
entries as shown in the following example:
Note
In the preceding example, replace hdbadm with the <sid>adm user of your SAP HANA system.
The source/destination check must be disabled on both EC2 instances that are supposed to receive
traffic from the overlay IP address. You can use the AWS CLI or AWS Management Console to disable the
source/destination check. For details, see the ec2 modify-instance-attribute documentation.
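With the AWS CLI, for example (the instance ID is a placeholder):

```
aws ec2 modify-instance-attribute \
  --instance-id i-0123456789abcdef0 --no-source-dest-check
```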
Create a new IAM role and associate it with the two EC2 instances that are part of the cluster. Attach
the following IAM policies to this IAM role.
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "ec2:DescribeInstances",
                "ec2:DescribeInstanceAttribute",
                "ec2:DescribeTags"
            ],
            "Resource": [
                "arn:aws:ec2:<Region>:<account-id>:route-table/<route table identifier 1>",
                "arn:aws:ec2:<Region>:<account-id>:route-table/<route table identifier 2>"
            ]
        },
        {
            "Effect": "Allow",
            "Action": [
                "ec2:ModifyInstanceAttribute",
                "ec2:RebootInstances",
                "ec2:StartInstances",
                "ec2:StopInstances"
            ],
            "Resource": [
                "arn:aws:ec2:<Region>:<account-id>:instance/<instance-id>",
                "arn:aws:ec2:<Region>:<account-id>:instance/<instance-id>"
            ]
        }
    ]
}
Replace the Region, account-id, and instance identifiers with the appropriate values.
access the active SAP instance. With overlay IP routing, you can allow the AWS network to use a
non-overlapping RFC 1918 private IP address that resides outside the VPC CIDR range and direct the SAP
traffic to any instance set up across Availability Zones within the VPC by changing the routing entry in
AWS using the SLES/RHEL Overlay IP agent.
For the SLES/RHEL Overlay IP agent to change a routing entry in AWS routing tables, create the
following policy and attach it to the IAM role that is assigned to both cluster instances:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "ec2:ReplaceRoute",
            "Resource": [
                "arn:aws:ec2:<Region>:<account-id>:route-table/<route table identifier 1>",
                "arn:aws:ec2:<Region>:<account-id>:route-table/<route table identifier 2>"
            ]
        },
        {
            "Effect": "Allow",
            "Action": "ec2:DescribeRouteTables",
            "Resource": "*"
        }
    ]
}
Replace the Region, account-id, and route table identifiers with the appropriate values.
HA cluster configuration on SLES
Cluster installation
SLES for SAP images sold by AWS through AWS Marketplace come with pre-installed SUSE HAE
packages. Ensure that you have the latest version of the following packages. If needed, update them using
the zypper command. If you are using BYOS images, ensure that the following packages are installed:
• corosync
• crmsh
• fence-agents
• ha-cluster-bootstrap
• pacemaker
• patterns-ha-ha_sles
• resource-agents
• cluster-glue
Cluster configuration
Topics
• System logging
• Corosync configuration
• Create encryption keys
• Create secondary IP addresses for a redundant cluster ring
• Review instance settings that conflict with cluster actions
• Create the Corosync configuration file
• Update the hacluster password
• Start the cluster
System logging
SUSE recommends using the rsyslogd daemon for logging in the SUSE cluster. Install the rsyslog
package as a root user on all cluster nodes. logd is a subsystem to log additional information coming
from the STONITH agent:
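As a sketch, run as root on all cluster nodes (the logd service comes with the cluster-glue package; enabling it as a systemd unit is an assumption about your image):

```
zypper install rsyslog
systemctl enable --now logd
```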
Corosync configuration
The cluster service (Pacemaker) should be in a stopped state when performing cluster configuration.
Check the status and stop the Pacemaker service if it is running.
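A minimal sketch of this check:

```
systemctl status pacemaker
# stop the service if it is running:
systemctl stop pacemaker
```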
prihana:~ # corosync-keygen
A new key file called “authkey” is created at location /etc/corosync/. Copy this file to the same
location on the second cluster node with the same permissions and ownership.
To create a redundant communication channel, you must add a secondary IP address on both the nodes.
These IPs are only used in cluster configurations. They provide the same fault tolerance as a secondary
Elastic Network Interface (ENI). For more information, see Assign a secondary private IPv4 address.
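With the AWS CLI, adding a secondary private IP address can be sketched as follows (the ENI ID is a placeholder):

```
aws ec2 assign-private-ip-addresses \
  --network-interface-id eni-0123456789abcdef0 \
  --secondary-private-ip-address-count 1
```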
You must ensure that stop protection is disabled for Amazon EC2 instances that are part of a pacemaker
cluster. Use the following command to disable stop protection.
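The command itself was not reproduced here; with the AWS CLI it can be sketched as follows (the instance ID is a placeholder):

```
aws ec2 modify-instance-attribute \
  --instance-id i-0123456789abcdef0 --no-disable-api-stop
```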
totem {
version: 2
token: 30000
crypto_cipher: aes256
crypto_hash: sha1
interface {
ringnumber: 0
bindnetaddr: 11.0.1.132
mcastport: 5405
ttl: 1
}
transport: udpu
}
logging {
fileline: off
to_logfile: yes
to_syslog: yes
logfile: /var/log/cluster/corosync.log
debug: off
timestamp: on
logger_subsys {
subsys: QUORUM
debug: off
}
}
nodelist {
node {
ring0_addr: 11.0.1.132
ring1_addr: 11.0.1.75
nodeid: 1
}
node {
ring0_addr: 11.0.2.139
ring1_addr: 11.0.2.35
nodeid: 2
}
}
quorum {
# Enable and configure quorum subsystem (default: off)
# see also corosync.conf.5 and votequorum.5
provider: corosync_votequorum
expected_votes: 2
two_node: 1
}
Replace the values of bindnetaddr, ring0_addr, ring1_addr, and nodeid with those for your
environment. Also update the values of crypto_cipher and crypto_hash as per your encryption
requirements.
After the cluster service (Pacemaker) is started, check the cluster status with the crm_mon command as
shown in the following example. You will see both nodes online and a full list of resources.
prihana:~ # crm_mon -r
Stack: corosync
Current DC: prihana (version 1.1.18+20180430.b12c320f5-3.24.1-b12c320f5) - partition with
quorum
Last updated: Wed Nov 11 16:20:40 2020
Last change: Wed Nov 11 16:20:21 2020 by root via crm_attribute on sechana
2 nodes configured
0 resources configured
No resources
You can find the ring status and the associated IP address of the cluster with the corosync-cfgtool
command as shown in the following example:
prihana:~ # corosync-cfgtool -s
Printing ring status.
Local node ID 1
RING ID 0
id = 11.0.1.132
status = ring 0 active with no faults
RING ID 1
id = 11.0.1.75
status = ring 1 active with no faults
Cluster resources
This section describes how to configure the bootstrap, STONITH, resources, and constraints using the
crm command.
Setting the stonith-action parameter value to “off” forces the agents to shut down the instance
during failover. This is desirable to avoid split brain scenarios.
Add the cluster bootstrap configuration to the cluster with the following command:
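The file contents and load command were not reproduced here. As a sketch that is consistent with the cluster properties shown later in this guide's crm configuration output (the file name crm-bs.txt is an example):

```
# crm-bs.txt
property cib-bootstrap-options: \
  stonith-enabled="true" \
  stonith-action="off" \
  stonith-timeout="600s"
rsc_defaults rsc-options: \
  resource-stickiness=1000 \
  migration-threshold=5000
op_defaults op-options: \
  timeout=600
```

Load it into the cluster with: crm configure load update crm-bs.txt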
STONITH
Create a file called “aws-stonith.txt” with the following STONITH options:
Ensure that the value of the tag parameter matches the tag key you created for your EC2 instances in
the “Prerequisites” section. In this example, “pacemaker” is used for the tag parameter. The name of the
profile, “cluster”, must match the configured AWS CLI profile.
Add the STONITH configuration file to the cluster with the following command:
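As a sketch of the file and its load command (the agent name and timeouts are assumptions based on the SUSE external/ec2 fencing agent; tag and profile must match your environment):

```
# aws-stonith.txt
primitive res_AWS_STONITH stonith:external/ec2 \
  op start interval=0 timeout=180 \
  op stop interval=0 timeout=180 \
  op monitor interval=120 timeout=60 \
  params tag=pacemaker profile=cluster
```

Load it into the cluster with: crm configure load update aws-stonith.txt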
Overlay IP resource
Create a file called “aws-move-ip.txt” with the following cluster bootstrap options to move IP
resources during failover:
Replace the value for parameters ip and routing_table with your overlay IP address and route table
names.
Add the move IP configuration file to the cluster with the following command:
You can also use multiple Amazon VPC routing tables in the routing_table parameter, as shown in the
following example.
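A sketch of the resource file and its load command, using the SUSE aws-vpc-move-ip resource agent (the IP address, route table IDs, and timeouts are placeholders):

```
# aws-move-ip.txt
primitive res_AWS_IP ocf:suse:aws-vpc-move-ip \
  params ip=192.168.10.16 routing_table=rtb-table1 interface=eth0 \
  op start interval=0 timeout=180 \
  op stop interval=0 timeout=180 \
  op monitor interval=60 timeout=60
# For multiple routing tables, use a comma-separated list:
#   routing_table=rtb-table1,rtb-table2
```

Load it into the cluster with: crm configure load update aws-move-ip.txt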
SAPHanaTopology
Create a file called “crm-saphanatop.txt” with the following cluster bootstrap options for SAP HANA
topology information:
Update the values of the SID and InstanceNumber parameters with your SAP
HANA system information. In addition, update the SID and instance number
referred to in the rsc_SAPHanaTopology_<SID>HDB<Instance Number> and
cln_SAPHanaTopology_<SID>HDB<Instance Number> configuration.
Add the SAP HANA topology configuration file to the cluster with the following command:
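A sketch of the topology resource file and its load command, consistent with the resource names shown in this guide's crm configuration output (the SID, instance number, and timeout values are assumptions):

```
# crm-saphanatop.txt
primitive rsc_SAPHanaTopology_HDB_HDB00 ocf:suse:SAPHanaTopology \
  op monitor interval=10 timeout=600 \
  op start interval=0 timeout=600 \
  op stop interval=0 timeout=300 \
  params SID=HDB InstanceNumber=00
clone cln_SAPHanaTopology_HDB_HDB00 rsc_SAPHanaTopology_HDB_HDB00 \
  meta clone-node-max=1 interleave=true
```

Load it into the cluster with: crm configure load update crm-saphanatop.txt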
SAPHana
Create a file called “crm-saphana.txt” with the following cluster bootstrap options for SAP HANA:
Update the values of the SID and InstanceNumber parameters with your SAP HANA system information.
In addition, update the SID and instance number referred to in the rsc_SAPHana_<SID>HDB<Instance
Number> and msl_SAPHana_<SID>_HDB<Instance Number> configuration.
Note
You can find detailed information about all the parameters with the command man
ocf_suse_SAPHana.
Add the SAP HANA configuration file to the cluster with the following command:
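A sketch of the SAPHana resource file and its load command, consistent with the parameters and resource names shown in this guide's crm configuration output (operation timeouts are assumptions):

```
# crm-saphana.txt
primitive rsc_SAPHana_HDB_HDB00 ocf:suse:SAPHana \
  op start interval=0 timeout=3600 \
  op stop interval=0 timeout=3600 \
  op promote interval=0 timeout=3600 \
  op monitor interval=60 role=Master timeout=700 \
  op monitor interval=61 role=Slave timeout=700 \
  params SID=HDB InstanceNumber=00 PREFER_SITE_TAKEOVER=true \
    DUPLICATE_PRIMARY_TIMEOUT=7200 AUTOMATED_REGISTER=true
ms msl_SAPHana_HDB_HDB00 rsc_SAPHana_HDB_HDB00 \
  meta clone-max=2 clone-node-max=1 interleave=true
```

Load it into the cluster with: crm configure load update crm-saphana.txt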
Constraints
Define two constraints: one for the overlay IP address, which helps with routing client traffic to the
active database host, and one for the start order between the SAPHana and SAPHanaTopology
resource agents.
Create a file called “crm-cs.txt” with the following options for the constraints:
Add the constraints configuration file to the cluster with the following command:
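These constraints match the colocation and order entries shown later in this guide's crm configuration output; the file name is an example:

```
# crm-cs.txt
colocation col_IP_Primary 2000: res_AWS_IP:Started msl_SAPHana_HDB_HDB00:Master
order ord_SAPHana 2000: cln_SAPHanaTopology_HDB_HDB00 msl_SAPHana_HDB_HDB00
```

Load it into the cluster with: crm configure load update crm-cs.txt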
Cluster status
After the cluster is configured, you should see two online nodes and six resources. You can check this
with the following command:
2 nodes configured
6 resources configured
You can check the status of the replication by executing the crm_mon command as shown in the
following example. Ensure that the state of the replication in the secondary node is "SOK".
2 nodes configured
6 resources configured
Active resources:
Node Attributes:
* Node prihana:
+ hana_hdb_clone_state : PROMOTED
+ hana_hdb_op_mode : logreplay
+ hana_hdb_remoteHost : sechana
+ hana_hdb_roles : 4:P:master1:master:worker:master
+ hana_hdb_site : PRI
+ hana_hdb_srmode : sync
+ hana_hdb_sync_state : PRIM
+ hana_hdb_version : 2.00.030.00.1522209842
+ hana_hdb_vhost : prihana
+ lpa_hdb_lpt : 1605181053
+ master-rsc_SAPHana_HDB_HDB00 : 150
* Node sechana:
+ hana_hdb_clone_state : DEMOTED
+ hana_hdb_op_mode : logreplay
+ hana_hdb_remoteHost : prihana
+ hana_hdb_roles : 4:S:master1:master:worker:master
+ hana_hdb_site : SEC
+ hana_hdb_srmode : sync
+ hana_hdb_sync_state : SOK
+ hana_hdb_version : 2.00.030.00.1522209842
+ hana_hdb_vhost : sechana
+ lpa_hdb_lpt : 30
+ master-rsc_SAPHana_HDB_HDB00 : 100
• the section called “Stop the SAP HANA database on the primary node”
• the section called “Stop the SAP HANA database on the secondary node”
• the section called “Crash the primary SAP HANA database on node 1”
• the section called “Crash the primary database on node 2”
Run steps:
prihana:~ # su - hdbadm
hdbadm@prihana:/usr/sap/HDB/HDB00> HDB stop
hdbdaemon will wait maximal 300 seconds for NewDB services finishing.
Stopping instance using: /usr/sap/HDB/SYS/exe/hdb/sapcontrol -prot
NI_HTTP -nr 00 -function Stop 400
12.11.2020 11:39:19
Stop
OK
Waiting for stopped instance using: /usr/sap/HDB/SYS/exe/hdb/sapcontrol
-prot NI_HTTP -nr 00 -function WaitforStopped 600 2
12.11.2020 11:39:51
WaitforStopped
OK
hdbdaemon is stopped.
Expected result:
• The cluster detects the stopped primary SAP HANA database (on node 1) and promotes the secondary
SAP HANA database (on node 2) to take over as primary.
2 nodes configured
6 resources configured
Failed Actions:
* rsc_SAPHana_HDB_HDB00_monitor_60000 on prihana 'master (failed)' (9):
call=30, status=complete, exitreason='',
last-rc-change='Thu Nov 12 11:40:42 2020', queued=0ms, exec=0ms
• The overlay IP address is migrated to the new primary (on node 2).
• With the AUTOMATIC_REGISTER parameter set to "true", the cluster restarts the failed SAP HANA
database and automatically registers it against the new primary.
Recovery procedure:
• After you run the crm command to clean up the resource, “failed actions” messages should
disappear from the cluster status.
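The cleanup command itself was not reproduced here; it can be sketched as follows (the resource and node names follow this guide's examples):

```
crm resource cleanup rsc_SAPHana_HDB_HDB00 prihana
```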
2 nodes configured
6 resources configured
Run steps:
sechana:~ # su - hdbadm
hdbadm@sechana:/usr/sap/HDB/HDB00> HDB stop
hdbdaemon will wait maximal 300 seconds for NewDB services finishing.
Stopping instance using: /usr/sap/HDB/SYS/exe/hdb/sapcontrol -prot
NI_HTTP -nr 00 -function Stop 400
12.11.2020 11:45:21
Stop
OK
Waiting for stopped instance using: /usr/sap/HDB/SYS/exe/hdb/sapcontrol
-prot NI_HTTP -nr 00 -function WaitforStopped 600 2
12.11.2020 11:45:53
WaitforStopped
OK
hdbdaemon is stopped.
Expected result:
• The cluster detects the stopped primary SAP HANA database (on node 2) and promotes the secondary SAP
HANA database (on node 1) to take over as primary.
2 nodes configured
6 resources configured
Failed Actions:
* rsc_SAPHana_HDB_HDB00_monitor_60000 on sechana 'master (failed)' (9):
call=46, status=complete, exitreason='',
last-rc-change='Thu Nov 12 11:46:45 2020', queued=0ms, exec=0ms
• The overlay IP address is migrated to the new primary (on node 1).
• With the AUTOMATIC_REGISTER parameter set to "true", the cluster restarts the failed SAP HANA
database and automatically registers it against the new primary.
Recovery procedure:
• After you run the crm command to clean up the resource, "failed actions" messages should
disappear from the cluster status.
• After resource cleanup, the cluster “failed actions” are cleaned up.
2 nodes configured
6 resources configured
Run steps:
• Stop the primary database system using the following command as <sid>adm.
Expected result:
• The cluster detects the stopped primary SAP HANA database (on node 1) and promotes the secondary
SAP HANA database (on node 2) to take over as primary.
2 nodes configured
6 resources configured
Failed Actions:
* rsc_SAPHana_HDB_HDB00_monitor_60000 on prihana 'master (failed)' (9): call=50,
status=complete, exitreason='',
last-rc-change='Thu Nov 12 11:51:45 2020', queued=0ms, exec=0ms
• The overlay IP address is migrated to the new primary (on node 2).
• With the AUTOMATIC_REGISTER parameter set to "true", the cluster restarts the failed SAP HANA
database and automatically registers it against the new primary.
Recovery procedure:
• After resource cleanup, the cluster “failed actions” are cleaned up.
Run node — Primary SAP HANA database node (on node 2).
Run steps:
• Stop the primary database (on node 2) system using the following command as <sid>adm.
sechana:~ # su - hdbadm
hdbadm@sechana:/usr/sap/HDB/HDB00> HDB kill -9
Expected result:
• The cluster detects the stopped primary SAP HANA database (on node 2) and promotes the secondary SAP
HANA database (on node 1) to take over as primary.
2 nodes configured
6 resources configured
Failed Actions:
* rsc_SAPHana_HDB_HDB00_monitor_60000 on sechana 'master (failed)' (9):
call=66, status=complete, exitreason='',
last-rc-change='Thu Nov 12 11:58:53 2020', queued=0ms, exec=0ms
• The overlay IP address is migrated to the new primary (on node 1).
• With the AUTOMATIC_REGISTER parameter set to "true", the cluster restarts the failed SAP HANA
database and automatically registers it against the new primary.
Recovery procedure:
• After resource cleanup, the cluster “failed actions” are cleaned up.
Run steps:
• Crash the primary database system using the following command as root:
2 nodes configured
6 resources configured
Note
To simulate a system crash, you must first ensure that /proc/sys/kernel/sysrq is set to 1.
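The crash command was not reproduced here. A common way to simulate a system crash as root is the magic SysRq interface, which is an assumption consistent with the sysrq note above:

```
echo 'b' > /proc/sysrq-trigger    # immediately reboots the node without syncing disks
```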
Expected result:
• The cluster detects failed node (node 1), declares it “UNCLEAN” and sets the secondary node (node 2)
to status “partition WITHOUT quorum”.
• The cluster fences node 1 and promotes the secondary SAP HANA database (on node 2) to take over as
primary.
2 nodes configured
6 resources configured
Online: [ sechana ]
OFFLINE: [ prihana ]
• The overlay IP address is migrated to the new primary (on node 2).
• With the AUTOMATIC_REGISTER parameter set to "true", the cluster restarts the failed SAP HANA
database and automatically registers it against the new primary.
Recovery procedure:
• Start node 1 (EC2 instance) with the AWS Management Console or AWS CLI tools and start Pacemaker
(if it’s not enabled by default).
Run steps:
• Crash the primary database system (on node 2) using the following command as root:
2 nodes configured
6 resources configured
Note
To simulate a system crash, you must first ensure that /proc/sys/kernel/sysrq is set to 1.
Expected result:
• The cluster detects failed node (node 2), declares it “UNCLEAN”, and sets the secondary node (node 1)
to status “partition WITHOUT quorum”.
• The cluster fences node 2 and promotes the secondary SAP HANA database (on node 1) to take over as
primary.
2 nodes configured
6 resources configured
Online: [ prihana ]
OFFLINE: [ sechana ]
• The overlay IP address is migrated to the new primary (on node 1).
• With the AUTOMATIC_REGISTER parameter set to "true", the cluster restarts the failed SAP HANA
database and automatically registers it against the new primary.
Recovery procedure:
• Start node 2 (EC2 instance) with AWS Management Console or AWS CLI tools and start Pacemaker (if
it’s not enabled by default).
Run node — Can be run on any node. In this test case, this is done on node B.
Run steps:
• Drop all the traffic coming from and going to node A with the following command:
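For example, with iptables (node A's IP address is a placeholder):

```
iptables -A INPUT -s <node-A-IP> -j DROP
iptables -A OUTPUT -d <node-A-IP> -j DROP
```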
Expected result:
• The cluster detects the network failure and fences node 1. It promotes the secondary SAP HANA database
(on node 2) to take over as primary without going into a split brain situation.
2 nodes configured
6 resources configured
Failed Actions:
* rsc_SAPHanaTopology_HDB_HDB00_monitor_10000 on prihana 'unknown error'
(1): call=317, status=Timed Out, exitreason='',
last-rc-change='Fri Jan 22 16:58:19 2021', queued=0ms, exec=300001ms
* rsc_SAPHana_HDB_HDB00_start_0 on prihana 'unknown error' (1): call=28, status=Timed
Out,
exitreason='',
last-rc-change='Fri Jan 22 02:40:38 2021', queued=0ms, exec=3600001ms
Recovery procedure:
2 nodes configured
6 resources configured
prihana:~ # crm_mon -1
Stack: corosync
Current DC: prihana (version 1.1.18+20180430.b12c320f5-3.24.1-b12c320f5) - partition with
quorum
Last updated: Thu Nov 12 12:36:24 2020
Last change: Thu Nov 12 12:36:01 2020 by root via crm_attribute on prihana
2 nodes configured
6 resources configured
Active resources:
2 nodes configured
6 resources configured
Active resources:
Node Attributes:
* Node prihana:
+ hana_hdb_clone_state : PROMOTED
+ hana_hdb_op_mode : logreplay
+ hana_hdb_remoteHost : sechana
+ hana_hdb_roles : 4:P:master1:master:worker:master
+ hana_hdb_site : PRI
+ hana_hdb_srmode : sync
+ hana_hdb_sync_state : PRIM
+ hana_hdb_version : 2.00.030.00.1522209842
+ hana_hdb_vhost : prihana
+ lpa_hdb_lpt : 1605184624
+ master-rsc_SAPHana_HDB_HDB00 : 150
* Node sechana:
+ hana_hdb_clone_state : DEMOTED
+ hana_hdb_op_mode : logreplay
+ hana_hdb_remoteHost : prihana
+ hana_hdb_roles : 4:S:master1:master:worker:master
+ hana_hdb_site : SEC
+ hana_hdb_srmode : sync
+ hana_hdb_sync_state : SOK
+ hana_hdb_version : 2.00.030.00.1522209842
+ hana_hdb_vhost : sechana
+ lpa_hdb_lpt : 30
+ master-rsc_SAPHana_HDB_HDB00 : 100
Cluster administration
To manually migrate the cluster resources from one node to another, run the following command:
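The migration command was not reproduced here; it can be sketched as follows (the resource and target node names follow this guide's examples):

```
crm resource migrate msl_SAPHana_HDB_HDB00 sechana
```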
Check the status of the migration using the command “crm_mon -r”.
prihana:~ # crm_mon -r
Stack: corosync
Current DC: prihana (version 1.1.18+20180430.b12c320f5-3.24.1-b12c320f5) - partition with
quorum
Last updated: Thu Nov 12 12:39:00 2020
Last change: Thu Nov 12 12:38:47 2020 by root via crm_attribute on prihana
2 nodes configured
6 resources configured
After the resource is migrated, you can check the status of the cluster. Clean up the failed actions as
shown in the next section.
2 nodes configured
6 resources configured
Failed Actions:
* rsc_SAPHana_HDB_HDB00_monitor_61000 on prihana 'not running' (7): call=35,
status=complete, exitreason='',
last-rc-change='Thu Nov 12 12:39:49 2020', queued=0ms, exec=0ms
2 nodes configured
6 resources configured
Failed Actions:
* rsc_SAPHana_HDB_HDB00_monitor_61000 on prihana 'not running' (7): call=35,
status=complete, exitreason='',
last-rc-change='Thu Nov 12 12:39:49 2020', queued=0ms, exec=0ms
• When you manually migrate resources from one node to another, there will be constraints in the crm
configuration. You can find the constraints with the command "crm configure show" as shown in
the following example:
resource-stickiness=1000 \
migration-threshold=5000
op_defaults op-options: \
timeout=600
You must clean up these location constraints before you perform any further cluster actions with the
following command:
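The command itself was not reproduced here; it can be sketched as follows (the resource name follows this guide's examples):

```
crm resource clear msl_SAPHana_HDB_HDB00
```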
• Cluster logs — Cluster logs are updated in the corosync.log file located under /var/log/cluster
folder.
• Pacemaker logs — Pacemaker logs are updated in the pacemaker.log file located in the /var/log/
pacemaker folder.
DUPLICATE_PRIMARY_TIMEOUT=7200 AUTOMATED_REGISTER=true
ms msl_SAPHana_HDB_HDB00 rsc_SAPHana_HDB_HDB00 \
meta clone-max=2 clone-node-max=1 interleave=true
clone cln_SAPHanaTopology_HDB_HDB00 rsc_SAPHanaTopology_HDB_HDB00 \
meta clone-node-max=1 interleave=true
colocation col_IP_Primary 2000: res_AWS_IP:Started msl_SAPHana_HDB_HDB00:Master
order ord_SAPHana 2000: cln_SAPHanaTopology_HDB_HDB00 msl_SAPHana_HDB_HDB00
property SAPHanaSR: \
hana_hdb_site_srHook_SEC=PRIM \
hana_hdb_site_srHook_PRI=SOK
property cib-bootstrap-options: \
stonith-enabled=true \
stonith-action=off \
stonith-timeout=600s \
have-watchdog=false \
dc-version="1.1.18+20180430.b12c320f5-3.24.1-b12c320f5" \
cluster-infrastructure=corosync \
last-lrm-refresh=1605184909
rsc_defaults rsc-options: \
resource-stickiness=1000 \
migration-threshold=5000
op_defaults op-options: \
timeout=600
HA cluster configuration on RHEL
Cluster installation
Prerequisite – The system must have the required subscription, in this case, RHEL for SAP Solutions.
Note
If you are using a BYOS image, ensure your system is configured with RHEL for SAP and
Pacemaker repositories to install the required packages.
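As a sketch, the packages needed for the Pacemaker cluster can be installed as follows (the package names are assumptions based on the standard RHEL High Availability and SAP repositories):

```
yum install -y pcs pacemaker fence-agents-aws resource-agents-sap-hana
```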
Cluster configuration
Topics
• Update user hacluster password
• Start and enable the pcs services
• Authenticate pcs with user hacluster
• Review instance settings that conflict with cluster actions
• Set up the cluster
• Enable and start the cluster
• Increase corosync totem token timeout
RHEL 7.x
RHEL 8.x
You must ensure that stop protection is disabled for Amazon EC2 instances that are part of a pacemaker
cluster. Use the following command to disable stop protection.
1. Edit the /etc/corosync/corosync.conf file in all the cluster nodes and increase or add the
value for token, as shown in the following example.
totem {
version: 2
secauth: off
cluster_name: my-rhel-sap-cluster
transport: udpu
rrp_mode: passive
token: 30000 <------ Value to be set
}
2. Reload corosync by running the following command on only one cluster node. This does not require
any downtime.
3. Run the following command to confirm that your changes are active.
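Both commands are elided in this extract; with pcs and corosync they are typically the following (a sketch, not verbatim from the original guide):

```shell
# Step 2: reload corosync on one node only; the change propagates to the cluster without downtime
pcs cluster reload corosync

# Step 3: confirm that the new token value is active at runtime
corosync-cmapctl | grep totem.token
```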
RHEL 8.x
Cluster resources
This section describes how to create the cluster resources.
STONITH
The following command creates the STONITH resource. This protects your data from corruption by
rogue nodes or concurrent access in the event of a split-brain or dual-primary situation.
The default pcmk action is reboot. If you want the instance to remain in a stopped state until it has
been investigated and then manually started again, add pcmk_reboot_action=off. High Memory
(u-*tb1.*) instances and metal instances running on a dedicated host don't support reboot and require
pcmk_reboot_action=off. To do this, update the previously created STONITH resource as follows:
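The create and update commands are elided in this extract. A sketch reconstructed from the fence_aws attributes shown later in this section's pcs config output (the instance IDs and Region are the example values from that output):

```shell
# Create the fence_aws STONITH resource
pcs stonith create clusterfence fence_aws \
  region=us-east-1 \
  pcmk_host_map="prihana:i-01b7ceb0d8799eccf;sechana:i-05b924af2f83ffe0b" \
  power_timeout=240 pcmk_reboot_timeout=480 pcmk_reboot_retries=4 \
  op monitor interval=60s

# For High Memory or metal instances, keep the fenced node stopped
pcs stonith update clusterfence pcmk_reboot_action=off
```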
SAPHanaTopology
The SAPHanaTopology resource gathers the status and configuration of SAP HANA System Replication
on each node. Configure the following attributes for SAPHanaTopology.
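The creation command is elided in this extract. A RHEL 7.x sketch, reconstructed from the resource attributes and operation timeouts shown in the pcs config output later in this section:

```shell
pcs resource create SAPHanaTopology_HDB_00 SAPHanaTopology \
  SID=HDB InstanceNumber=00 \
  op start timeout=600 op stop timeout=300 op monitor interval=10 timeout=600 \
  clone clone-max=2 clone-node-max=1 interleave=true
```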
SAPHana
The SAPHana resource is responsible for starting, stopping, and relocating the SAP HANA database. This
resource must be run as a primary/secondary cluster resource. To create this resource, run the following
command:
RHEL 7.x
RHEL 8.x
Note
If the AUTOMATED_REGISTER parameter is set to true, the secondary instance automatically
registers after startup and starts replication.
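The creation commands behind the RHEL 7.x and RHEL 8.x tabs are elided in this extract. A RHEL 7.x sketch, reconstructed from the SAPHana attributes and operation timeouts shown in the pcs config output later in this section:

```shell
pcs resource create SAPHana_HDB_00 SAPHana \
  SID=HDB InstanceNumber=00 PREFER_SITE_TAKEOVER=true \
  DUPLICATE_PRIMARY_TIMEOUT=7200 AUTOMATED_REGISTER=true \
  op promote timeout=320 op demote timeout=320 \
  op monitor interval=119 role="Master" timeout=60 \
  op monitor interval=121 role="Slave" timeout=60 \
  master meta notify=true clone-max=2 clone-node-max=1 interleave=true
```

On RHEL 8.x the resource is created as a promotable clone instead of a master/slave resource.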
Overlay IP
Add the Overlay IP (OIP) address to the primary node using the following command:
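The command is elided in this extract; adding the OIP as a secondary address at the operating-system level is typically done with ip (the address and interface are placeholders):

```shell
ip address add <overlay-ip>/32 dev eth0
```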
To route the traffic to your primary SAP HANA database with Overlay IP, you must update the route table
and map the Overlay IP address to the primary SAP HANA database instance-id.
If you are using different route tables for the subnets in each Availability Zone where you are deploying
the SAP HANA instances, you must update the OIP entry in the route tables associated with both subnets.
To create the resource in this scenario, use the previous command and specify both route table IDs
separated by a comma. See the following example:
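The example itself is elided in this extract. A sketch of the aws-vpc-move-ip resource with two route table IDs; the OIP address, interface, and route table IDs are placeholders (the resource name hana-oip and agent come from the pcs config output in this section):

```shell
pcs resource create hana-oip aws-vpc-move-ip \
  ip=<overlay-ip> interface=eth0 \
  routing_table=rtb-xxxxxxxxxxxxxxxxx,rtb-yyyyyyyyyyyyyyyyy \
  op monitor interval=60s timeout=60
```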
Constraints
Define two constraints: one for the Overlay IP address, which helps route client traffic to the active
database host, and one for the start order between the SAPHana and SAPHanaTopology resource
agents.
The following command creates the constraint that mandates the start order of these resources.
RHEL 7.x
RHEL 8.x
• symmetrical=false — This attribute defines that it is just the start order of resources and they
don't need to be stopped in reverse order.
• interleave=true — This attribute allows parallel start of these resources on nodes. This allows
the SAPHana resource to start on any node as soon as the SAPHanaTopology resource is running on
any one node.
Both resources (SAPHana and SAPHanaTopology) have the attribute interleave=true that allows
parallel start of these resources on nodes.
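The order-constraint commands behind the two tabs are elided in this extract. A RHEL 7.x sketch, consistent with the Mandatory, non-symmetrical ordering constraint shown later in the pcs config output:

```shell
pcs constraint order SAPHanaTopology_HDB_00-clone then SAPHana_HDB_00-master symmetrical=false
```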
Constraint to co-locate the aws-vpc-move-ip resource with the primary SAPHana resource
The following command co-locates the aws-vpc-move-ip resource with the SAPHana resource when it
is promoted to primary.
RHEL 7.x
[root@prihana ~]# pcs constraint colocation add hana-oip with master SAPHana_HDB_00-master 2000
RHEL 8.x
[root@prihana ~]# pcs constraint colocation add hana-oip with master SAPHana_HDB_00-clone 2000
You can use the following command to check the final status of the cluster:
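The command is elided in this extract; checking cluster status on RHEL is typically:

```shell
pcs status
```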
2 nodes configured
6 resources configured
Daemon Status:
corosync: active/enabled
pacemaker: active/enabled
pcsd: active/enabled
[root@prihana ~]#
This concludes the configuration of the SAP HANA cluster setup. You can proceed with testing.
• the section called “Stop the SAP HANA database on the primary node” (p. 208)
• the section called “Stop the SAP HANA database on the secondary node” (p. 210)
• the section called “Crash the primary database on node 1” (p. 212)
• the section called “Crash the primary database on node 2” (p. 214)
Run steps:
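The stop command is elided in this extract; stopping the database as the <sid>adm user is typically done with HDB stop (an assumption, not confirmed by the original text), which produces output like the following.

```shell
# Run as hdbadm on the primary node
HDB stop
```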
12.11.2020 11:39:19
Stop
OK
Waiting for stopped instance using:
/usr/sap/HDB/SYS/exe/hdb/sapcontrol -prot NI_HTTP -nr 00 -function
WaitforStopped 600 2
12.11.2020 11:39:51
WaitforStopped
OK
hdbdaemon is stopped.
Expected result:
• The cluster detects stopped primary SAP HANA database (on node 1) and promotes the secondary SAP
HANA database (on node 2) to take over as primary.
2 nodes configured
6 resources configured
Failed Actions:
* SAPHana_HDB_00_monitor_59000 on prihana 'master (failed)' (9): call=31,
status=complete, exitreason='',
last-rc-change='Tue Nov 10 17:56:52 2020', queued=0ms, exec=0ms
Daemon Status:
corosync: active/enabled
pacemaker: active/enabled
pcsd: active/enabled
• The overlay IP address is migrated to the new primary (on node 2).
• Because AUTOMATED_REGISTER is set to true, the cluster restarts the failed SAP HANA database and
registers it against the new primary. Validate the status of the primary SAP HANA database using the
following command:
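The command is elided in this extract; the GetProcessList output below is typically produced by sapcontrol as the <sid>adm user:

```shell
sapcontrol -nr 00 -function GetProcessList
```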
10.11.2020 17:59:49
GetProcessList
OK
name, description, dispstatus, textstatus, starttime, elapsedtime, pid
hdbdaemon, HDB Daemon, GREEN, Running, 2020 11 10 17:58:47, 0:01:02, 25979
hdbcompileserver, HDB Compileserver, GREEN, Running, 2020 11 10 17:58:52, 0:00:57, 26152
hdbindexserver, HDB Indexserver-HDB, GREEN, Running, 2020 11 10 17:58:53, 0:00:56, 26201
hdbnameserver, HDB Nameserver, GREEN, Running, 2020 11 10 17:58:48, 0:01:01, 25997
hdbpreprocessor, HDB Preprocessor, GREEN, Running, 2020 11 10 17:58:52, 0:00:57, 26155
hdbwebdispatcher, HDB Web Dispatcher, GREEN, Running, 2020 11 10 17:59:02, 0:00:47, 27100
hdbxsengine, HDB XSEngine-HDB, GREEN, Running, 2020 11 10 17:58:53, 0:00:56, 26204
hdbadm@prihana:/usr/sap/HDB/HDB00>
Recovery procedure:
• Clean up the cluster “failed actions” on node 1 as root using the following command:
• After you run the cleanup command, “failed actions” messages should disappear from the cluster
status.
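The cleanup command is elided in this extract; with pcs it is typically the following (the resource name is taken from the pcs config output in this section):

```shell
pcs resource cleanup SAPHana_HDB_00
```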
2 nodes configured
6 resources configured
Daemon Status:
corosync: active/enabled
pacemaker: active/enabled
pcsd: active/enabled
[root@prihana ~]#
Run steps:
12.11.2020 11:45:21
Stop
OK
Waiting for stopped instance using: /usr/sap/HDB/SYS/exe/hdb/sapcontrol
-prot NI_HTTP -nr 00 -function WaitforStopped 600 2
12.11.2020 11:45:53
WaitforStopped
OK
hdbdaemon is stopped.
Expected result:
• The cluster detects the stopped primary SAP HANA database (on node 2) and promotes the secondary
SAP HANA database (on node 1) to take over as primary.
2 nodes configured
6 resources configured
Failed Actions:
* SAPHana_HDB_00_monitor_59000 on sechana 'master (failed)' (9): call=41,
status=complete, exitreason='',
last-rc-change='Tue Nov 10 18:03:49 2020', queued=0ms, exec=0ms
Daemon Status:
corosync: active/enabled
pacemaker: active/enabled
pcsd: active/enabled
[root@sechana ~]#
• The overlay IP address is migrated to the new primary (on node 1).
• With AUTOMATED_REGISTER set to true, the cluster restarts the failed SAP HANA database and
registers it against the new primary.
10.11.2020 18:08:47
GetProcessList
OK
name, description, dispstatus, textstatus, starttime, elapsedtime, pid
hdbdaemon, HDB Daemon, GREEN, Running, 2020 11 10 18:05:44, 0:03:03, 6601
hdbcompileserver, HDB Compileserver, GREEN, Running, 2020 11 10 18:05:48, 0:02:59, 6725
hdbindexserver, HDB Indexserver-HDB, GREEN, Running, 2020 11 10 18:05:49, 0:02:58, 6828
hdbnameserver, HDB Nameserver, GREEN, Running, 2020 11 10 18:05:44, 0:03:03, 6619
hdbpreprocessor, HDB Preprocessor, GREEN, Running, 2020 11 10 18:05:48, 0:02:59, 6730
hdbwebdispatcher, HDB Web Dispatcher, GREEN, Running, 2020 11 10 18:05:58, 0:02:49, 7797
hdbxsengine, HDB XSEngine-HDB, GREEN, Running, 2020 11 10 18:05:49, 0:02:58, 6831
hdbadm@sechana:/usr/sap/HDB/HDB00>
Recovery procedure:
• Clean up the cluster “failed actions” on node 2 as root using the following command:
• After resource cleanup, ensure the cluster “failed actions” are cleaned up.
2 nodes configured
6 resources configured
Daemon Status:
corosync: active/enabled
pacemaker: active/enabled
pcsd: active/enabled
[root@sechana ~]#
Run steps:
• Crash the primary database system using the following command as <sid>adm:
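The crash command is elided in this extract; a common way to hard-crash the database for this test is HDB kill-9 as the <sid>adm user (an assumption, not confirmed by the original text):

```shell
# Run as hdbadm on the primary node
HDB kill-9
```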
Expected result:
• The cluster detects the stopped primary SAP HANA database (on node 1) and promotes the secondary
SAP HANA database (on node 2) to take over as primary.
• The overlay IP address is migrated to the new primary (on node 2).
• Because AUTOMATED_REGISTER is set to true, the cluster restarts the failed SAP HANA database and
registers it against the new primary.
Recovery procedure:
• After resource cleanup, ensure the cluster “failed actions” are cleaned up.
Run node — The primary SAP HANA database node (on node 2).
Run steps:
• Crash the primary database (on node 2) system using the following command as <sid>adm.
Expected result:
• The cluster detects the stopped primary SAP HANA database (on node 2) and promotes the secondary
SAP HANA database (on node 1) to take over as primary.
2 nodes configured
6 resources configured
Masters: [ prihana ]
Slaves: [ sechana ]
hana-oip (ocf::heartbeat:aws-vpc-move-ip): Started prihana
Failed Actions:
* SAPHana_HDB_00_monitor_59000 on sechana 'master (failed)' (9): call=41,
status=complete, exitreason='',
last-rc-change='Tue Nov 10 18:03:49 2020', queued=0ms, exec=0ms
Daemon Status:
corosync: active/enabled
pacemaker: active/enabled
pcsd: active/enabled
• The overlay IP address is migrated to the new primary (on node 1).
• Because AUTOMATED_REGISTER is set to true, the cluster restarts the failed SAP HANA database and
registers it against the new primary.
Recovery procedure:
• After resource cleanup, ensure the cluster “failed actions” are cleaned up.
Run steps:
• Crash the primary database system using the following command as root:
pcsd: active/enabled
[root@prihana ~]# echo 'b' > /proc/sysrq-trigger
Note
To simulate a system crash, you must first ensure that /proc/sys/kernel/sysrq is set to 1.
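Enabling the sysrq trigger described in the note can be sketched as:

```shell
# Allow all sysrq functions so that the 'b' (reboot) trigger is accepted
echo 1 > /proc/sys/kernel/sysrq
```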
Expected result:
• The cluster detects the failed node (node 1), declares it “UNCLEAN”, and sets the secondary node (node
2) to status “partition WITHOUT quorum”.
• The cluster fences node 1, promotes the secondary SAP HANA database, and registers it against the
new primary when the EC2 instance is back up. Node 1 is currently in a stopped state because it is
being rebooted.
2 nodes configured
6 resources configured
Daemon Status:
corosync: active/enabled
pacemaker: active/enabled
pcsd: active/enabled
[root@sechana ~]#
• The overlay IP address is migrated to the new primary (on node 2).
• Because AUTOMATED_REGISTER is set to true, the cluster restarts the failed HANA database and
registers it against the new primary when the EC2 instance is back up.
Recovery procedure:
• Start node 1 (EC2 Instance) using AWS Management Console or AWS CLI tools.
Run steps:
• Crash the node running primary SAP HANA (on node 2) using the following command as root:
shutdown
Note
To simulate a system crash, you must first ensure that /proc/sys/kernel/sysrq is set to 1.
Expected result:
• The cluster detects the failed node (node 2), declares it “UNCLEAN”, and sets the secondary node (node
1) to status “partition WITHOUT quorum”.
• The cluster fences node 2 and promotes the secondary SAP HANA database (on node 1) to take over as
primary.
2 nodes configured
6 resources configured
Online: [ prihana ]
OFFLINE: [ sechana ]
Stopped: [ sechana ]
hana-oip (ocf::heartbeat:aws-vpc-move-ip): Started prihana
Daemon Status:
corosync: active/enabled
pacemaker: active/enabled
pcsd: active/enabled
[root@prihana ~]#
• The overlay IP address is migrated to the new primary (on node 1).
• Because AUTOMATED_REGISTER is set to true, the cluster restarts the failed SAP HANA database and
registers it against the new primary when the EC2 instance is back up.
Recovery procedure:
• Start node 2 (EC2 instance) using AWS Management Console or AWS CLI tools.
Run node: Can be run on any node. In this test case, this is done on node B.
Run steps:
• Drop all the traffic coming from and going to node A with the following command:
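The command is elided in this extract; dropping traffic to and from node A is typically done with iptables (the node A IP address is a placeholder):

```shell
iptables -A INPUT -s <node-a-ip> -j DROP; iptables -A OUTPUT -d <node-a-ip> -j DROP
```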
Expected result:
• The cluster detects the network failure and fences node 1. The cluster promotes the secondary SAP HANA
database (on node 2) to take over as primary without causing a split-brain situation.
Recovery procedure:
2 nodes configured
6 resources configured
Masters: [ prihana ]
Slaves: [ sechana ]
hana-oip (ocf::heartbeat:aws-vpc-move-ip): Started prihana
Daemon Status:
corosync: active/enabled
pacemaker: active/enabled
pcsd: active/enabled
[root@prihana ~]#
You can check the SAP HANA replication status with the following command as the <sid>adm user:
hdbadm@prihana:/usr/sap/HDB/HDB00> python /usr/sap/HDB/HDB00/exe/python_support/systemReplicationStatus.py
| Database | Host    | Port  | Service Name | Volume ID | Site ID | Site Name  | Secondary Host | Secondary Port | Secondary Site ID | Secondary Site Name | Secondary Active Status | Replication Mode | Replication Status | Replication Status Details |
| -------- | ------- | ----- | ------------ | --------- | ------- | ---------- | -------------- | -------------- | ----------------- | ------------------- | ----------------------- | ---------------- | ------------------ | -------------------------- |
| SYSTEMDB | prihana | 30001 | nameserver   | 1         | 1       | HDBPrimary | sechana        | 30001          | 2                 | HDBSecondary        | YES                     | SYNCMEM          | ACTIVE             |                            |
| HDB      | prihana | 30007 | xsengine     | 2         | 1       | HDBPrimary | sechana        | 30007          | 2                 | HDBSecondary        | YES                     | SYNCMEM          | ACTIVE             |                            |
| HDB      | prihana | 30003 | indexserver  | 3         | 1       | HDBPrimary | sechana        | 30003          | 2                 | HDBSecondary        | YES                     | SYNCMEM          | ACTIVE             |                            |
mode: PRIMARY
site id: 1
site name: HDBPrimary
hdbadm@prihana:/usr/sap/HDB/HDB00>
Cluster administration
You can manually migrate cluster resources from one node to another with the following command as
the root user:
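The migration command is elided in this extract; with pcs on RHEL 7.x it is typically the following (the resource name comes from the pcs config output in this section):

```shell
pcs resource move SAPHana_HDB_00-master
```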
You can check the status of the cluster again to verify the status of resource migration.
2 nodes configured
6 resources configured
Daemon Status:
corosync: active/enabled
pacemaker: active/enabled
pcsd: active/enabled
Clean up the failed actions as shown in the next section. With each pcs resource move command
invocation, the cluster creates location constraints that cause the resource to move. These constraints
must be removed to allow automated failover in the future. To remove the constraints created by the
move, run the following command:
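The removal command is elided in this extract; with pcs it is typically:

```shell
pcs resource clear SAPHana_HDB_00-master
```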
2 nodes configured
6 resources configured
Daemon Status:
corosync: active/enabled
pacemaker: active/enabled
pcsd: active/enabled
2 nodes configured
6 resources configured
Daemon Status:
corosync: active/enabled
pacemaker: active/enabled
pcsd: active/enabled
Manual migration of resources from one node to another (as shown in the preceding section) creates
location constraints in the pcs configuration, which you can view with the pcs config show command.
Resources:
Clone: SAPHanaTopology_HDB_00-clone
Meta Attrs: clone-max=2 clone-node-max=1 interleave=true
Resource: SAPHanaTopology_HDB_00 (class=ocf provider=heartbeat type=SAPHanaTopology)
Attributes: InstanceNumber=00 SID=HDB
Operations: methods interval=0s timeout=5 (SAPHanaTopology_HDB_00-methods-interval-0s)
monitor interval=10 timeout=600 (SAPHanaTopology_HDB_00-monitor-interval-10)
reload interval=0s timeout=5 (SAPHanaTopology_HDB_00-reload-interval-0s)
start interval=0s timeout=600 (SAPHanaTopology_HDB_00-start-interval-0s)
stop interval=0s timeout=300 (SAPHanaTopology_HDB_00-stop-interval-0s)
Master: SAPHana_HDB_00-master
Meta Attrs: clone-max=2 clone-node-max=1 interleave=true notify=true
Resource: SAPHana_HDB_00 (class=ocf provider=heartbeat type=SAPHana)
Attributes: AUTOMATED_REGISTER=true DUPLICATE_PRIMARY_TIMEOUT=7200
InstanceNumber=00 PREFER_SITE_TAKEOVER=true SID=HDB
Operations: demote interval=0s timeout=3600 (SAPHana_HDB_00-demote-interval-0s)
methods interval=0s timeout=5 (SAPHana_HDB_00-methods-interval-0s)
Stonith Devices:
Resource: clusterfence (class=stonith type=fence_aws)
Attributes: pcmk_host_map=prihana:i-01b7ceb0d8799eccf;sechana:i-05b924af2f83ffe0b
pcmk_reboot_retries=4 pcmk_reboot_timeout=480 power_timeout=240 region=us-east-1
Operations: monitor interval=60s (clusterfence-monitor-interval-60s)
Fencing Levels:
Location Constraints:
Ordering Constraints:
start SAPHanaTopology_HDB_00-clone then start SAPHana_HDB_00-master
(kind:Mandatory) (non-symmetrical)
Colocation Constraints:
hana-oip with SAPHana_HDB_00-master (score:2000) (rsc-role:Started)
(with-rsc-role:Master)
Ticket Constraints:
Alerts:
No alerts defined
Resources Defaults:
resource-stickiness: 1000
migration-threshold: 5000
Operations Defaults:
No defaults set
Cluster Properties:
cluster-infrastructure: corosync
cluster-name: rhelhanaha
dc-version: 1.1.19-8.el7_6.5-c3c624ea3d
have-watchdog: false
last-lrm-refresh: 1605053571
Node Attributes:
prihana: hana_hdb_op_mode=logreplay hana_hdb_remoteHost=sechana
hana_hdb_site=HDBPrimary hana_hdb_srmode=syncmem hana_hdb_vhost=prihana
lpa_hdb_lpt=1605196167
sechana: hana_hdb_op_mode=logreplay hana_hdb_remoteHost=prihana
hana_hdb_site=HDBSecondary hana_hdb_srmode=syncmem hana_hdb_vhost=sechana
lpa_hdb_lpt=30
Quorum:
Options:
These location constraints must be cleaned up with the following command before you perform any
further cluster actions:
clone-SAPHana_HDB_00-master-mandatory)
Colocation Constraints:
hana-oip with SAPHana_HDB_00-master (score:2000) (rsc-role:Started)
(with-rsc-role:Master) (id:colocation-hana-oip-SAPHana_HDB_00-master-2000)
Ticket Constraints:
[root@prihana ~]#
• Cluster logs — Cluster logs are updated in the corosync.log file located at /var/log/cluster/corosync.log.
• Pacemaker logs — Pacemaker logs are updated in the pacemaker.log file located at /var/log/pacemaker.
Resources:
Clone: SAPHanaTopology_HDB_00-clone
Meta Attrs: clone-max=2 clone-node-max=1 interleave=true
Resource: SAPHanaTopology_HDB_00 (class=ocf provider=heartbeat type=SAPHanaTopology)
Attributes: InstanceNumber=00 SID=HDB
Operations: methods interval=0s timeout=5 (SAPHanaTopology_HDB_00-methods-interval-0s)
monitor interval=60 timeout=60 (SAPHanaTopology_HDB_00-monitor-interval-60)
start interval=0s timeout=180 (SAPHanaTopology_HDB_00-start-interval-0s)
stop interval=0s timeout=60 (SAPHanaTopology_HDB_00-stop-interval-0s)
Master: SAPHana_HDB_00-master
Resource: SAPHana_HDB_00 (class=ocf provider=heartbeat type=SAPHana)
Attributes: AUTOMATED_REGISTER=true DUPLICATE_PRIMARY_TIMEOUT=7200 InstanceNumber=00
PREFER_SITE_TAKEOVER=true SID=HDB
Meta Attrs: clone-max=2 clone-node-max=1 interleave=true notify=true
Operations: demote interval=0s timeout=320 (SAPHana_HDB_00-demote-interval-0s)
methods interval=0s timeout=5 (SAPHana_HDB_00-methods-interval-0s)
monitor interval=120 timeout=60 (SAPHana_HDB_00-monitor-interval-120)
monitor interval=121 role=Slave timeout=60 (SAPHana_HDB_00-monitor-interval-121)
monitor interval=119 role=Master timeout=60 (SAPHana_HDB_00-monitor-interval-119)
promote interval=0s timeout=320 (SAPHana_HDB_00-promote-interval-0s)
Stonith Devices:
Resource: clusterfence (class=stonith type=fence_aws)
Attributes: pcmk_host_map=prihana:i-0df8622xxxxxxxxxxx;sechana:i-0b2e372xxxxxxxxxxx
pcmk_reboot_retries=4 pcmk_reboot_timeout=480 power_timeout=240 region=us-east-1
pcmk_reboot_action=off
Operations: monitor interval=60s (clusterfence-monitor-interval-60s)
Fencing Levels:
Location Constraints:
Ordering Constraints:
start SAPHanaTopology_HDB_00-clone then start SAPHana_HDB_00-master (kind:Mandatory)
(non-symmetrical)
Colocation Constraints:
hana-oip with SAPHana_HDB_00-master (score:2000) (rsc-role:Started) (with-rsc-role:Master)
Ticket Constraints:
Alerts:
No alerts defined
Resources Defaults:
No defaults set
Operations Defaults:
No defaults set
Cluster Properties:
cluster-infrastructure: corosync
cluster-name: rhelhanaha
dc-version: 1.1.19-8.el7_6.4-c3c624ea3d
have-watchdog: false
last-lrm-refresh: 1553719142
maintenance-mode: false
Node Attributes:
prihana: hana_hdb_op_mode=logreplay hana_hdb_remoteHost=sechana hana_hdb_site=SiteA hana_hdb_srmode=syncmem hana_hdb_vhost=prihana lpa_hdb_lpt=10
sechana: hana_hdb_op_mode=logreplay hana_hdb_remoteHost=prihana hana_hdb_site=SiteB hana_hdb_srmode=syncmem hana_hdb_vhost=sechana lpa_hdb_lpt=1553719113
Cluster name: rhelhanaha
Stack: corosync
This guide assumes that an AWS Organization is already set up and that Amazon VPC subnets have been
shared between AWS accounts using AWS Resource Access Manager (AWS RAM). For more details, see Create a resource share.
Note
Later in this guide, we refer to the AWS account that owns the Amazon VPC as the Amazon
VPC account, and to the account using the Amazon VPC where the cluster nodes are going to be
deployed as the Cluster account.
Overlay IP address
Create an overlay IP address on the Amazon VPC routing table which will be used by the Amazon VPC
subnets and will be accessible to the cluster. This must be created on the AWS account sharing the
Amazon VPC.
Create an IAM role to delegate permissions to the Amazon EC2 instances that will be a part of the cluster.
When creating the IAM role, select Another AWS account for the type of trusted entity and enter the
AWS Account ID where the Amazon EC2 instances will be deployed.
Create the following IAM policy on the Amazon VPC account and attach it to the IAM role. Add or remove
route table entries as needed.
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "ec2:ReplaceRoute",
            "Resource": [
                "arn:aws:ec2:<AWS Region>:<VPC-Account-Number>:route-table/rtb-xxxxxxxxxxxxxxxxx",
                "arn:aws:ec2:<AWS Region>:<VPC-Account-Number>:route-table/rtb-xxxxxxxxxxxxxxxxx"
            ]
        },
        {
            "Effect": "Allow",
            "Action": "ec2:DescribeRouteTables",
            "Resource": "*"
        }
    ]
}
Cluster account
Create a new IAM role and select Amazon EC2 as the use case. Associate this IAM role to the two Amazon
EC2 instances which are a part of the cluster. Attach the following IAM policies (AWS STS and STONITH)
to the IAM role.
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": "sts:AssumeRole",
            "Resource": "arn:aws:iam::<VPC-Account-Number>:role/<Sharing-VPC-Account-Cluster-Role>"
        }
    ]
}
Replace VPC-Account-Number with your AWS account number that owns the Amazon VPC. Replace
Sharing-VPC-Account-Cluster-Role with the IAM role that was created in the AWS account owning the
Amazon VPC.
STONITH policy
Both instances of the cluster require access to start and stop other nodes within the cluster. Create the
following STONITH policy and attach it to the IAM role that is assigned to both of the cluster instances.
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "ec2:DescribeInstances",
                "ec2:DescribeInstanceAttribute",
                "ec2:DescribeTags"
            ],
            "Resource": "*"
        },
        {
            "Effect": "Allow",
            "Action": [
                "ec2:ModifyInstanceAttribute",
                "ec2:RebootInstances",
                "ec2:StartInstances",
                "ec2:StopInstances"
            ],
            "Resource": [
                "arn:aws:ec2:<Region-name>:<account-id>:instance/<instance-id>",
                "arn:aws:ec2:<Region-name>:<account-id>:instance/<instance-id>"
            ]
        }
    ]
}
Setup on SLES
1. Add the overlay IP address to the primary node using the following command.
3. Use the following command to add the overlay IP configuration file to the cluster.
Setup on RHEL
1. Add the overlay IP address to the primary node using the following command.
Further reading
SAP on AWS technical documentation:
SAP documentation:
Document history
Date Change
Instances and sizing
SAP HANA stores and processes all of its data in memory and provides protection against data loss by
saving the data in persistent storage locations. To achieve optimal performance, the storage solution
used for SAP HANA data and log volumes must meet SAP's storage KPI. As a fully managed service,
Amazon FSx for NetApp ONTAP makes it easier to launch and scale reliable, high-performing, and secure
shared file storage in the cloud.
If you are a first-time user, see How Amazon FSx for NetApp ONTAP works.
For SAP specifications, refer to SAP Note 2039883 - FAQ: SAP HANA database and data snapshots and
SAP Note 3024346 - Linux Kernel Settings for NetApp NFS.
• To enable FSx for ONTAP with SAP HANA on AWS, you must use Single-AZ file systems.
• The Amazon EC2 instance where you plan to deploy your SAP HANA workload and the FSx for ONTAP
file system must be in the same subnet.
• Use separate storage virtual machines (SVM) for SAP HANA data and log volumes. This ensures that
your I/O traffic flows through different IP addresses and TCP sessions.
• /hana/data, /hana/log, /hana/shared, and /usr/sap must have their own FSx for ONTAP
volume.
• Thin provisioning is not supported for SAP HANA data and log volumes.
Topics
• Supported instance types (p. 231)
• Sizing (p. 231)
• SAP HANA parameters (p. 230)
Supported instance types
Instance type     vCPU    Memory (GiB)
r6i.24xlarge      96      768
r6i.16xlarge      64      512
r6i.12xlarge      48      384
x2idn.24xlarge    96      1,536
x2idn.16xlarge    64      1,024
x2iedn.24xlarge   96      3,072
For a complete list of supported Amazon EC2 instances for SAP HANA, see SAP HANA certified instances.
Sizing
You can configure the throughput capacity of FSx for ONTAP when you create a new file system by
scaling up to 2 GB/s of read throughput and 750 MB/s of write throughput in a single Availability Zone
deployment.
SAP KPIs
SAP requires the following KPIs for SAP HANA volumes.
Latency for log (read and write): less than 1 millisecond write latency with 4K and 16K block-sized I/O
Minimum requirement
A single FSx for ONTAP file system provides sufficient performance for a single SAP HANA workload. To
meet SAP's storage KPIs for SAP HANA, you need a throughput capacity of at least 1,024 MB/s. You can
choose to use lower throughput for non-production systems; however, we recommend the minimum
throughput configuration of 1,024 MB/s to avoid any performance issues.
Higher throughput
If you require higher throughput, you can do one of the following:
• Create separate data and log volumes on different FSx for ONTAP file systems.
• Create additional data volume partitions across multiple FSx for ONTAP file systems.
The following table summarizes the throughput limits available with different scaling options.
Data Log
To learn more about FSx for ONTAP performance, see Performance details.
[fileio]
max_parallel_io_requests=128
async_read_submit=on
async_write_submit_active=on
async_write_submit_blocks=all
Use the following SQL commands to set these parameters on SYSTEM level.
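The SQL commands are elided in this extract. A sketch using the standard ALTER SYSTEM ALTER CONFIGURATION syntax, with the values from the [fileio] block above (global.ini is assumed as the target file):

```sql
ALTER SYSTEM ALTER CONFIGURATION ('global.ini', 'SYSTEM')
  SET ('fileio', 'max_parallel_io_requests') = '128',
      ('fileio', 'async_read_submit') = 'on',
      ('fileio', 'async_write_submit_active') = 'on',
      ('fileio', 'async_write_submit_blocks') = 'all'
  WITH RECONFIGURE;
```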
FSx for ONTAP
To create an FSx for ONTAP file system, see Step 1: Create an Amazon FSx for NetApp ONTAP file system.
For more information, see Managing FSx for ONTAP file systems.
Note
Only single Availability Zone file systems are supported for SAP HANA workloads.
Topics
• Create storage virtual machines (SVM) (p. 233)
• Create volume (p. 233)
• Volume layout (p. 234)
• File system setup (p. 234)
• Disable snapshots (p. 234)
Create volume
The storage capacity of your file system should align with the needs of /hana/shared, /hana/data,
and /hana/log volumes. You must also consider the capacity required for snapshots, if applicable.
We recommend creating a separate FSx for ONTAP file system for each of SAP HANA data, log, shared,
and binary volumes. The following table lists the recommended minimum sizes per volume.
Volume Size
/usr/sap 50 GiB
The following limitations apply when you create an FSx for ONTAP file system for SAP HANA.
• Storage Efficiency is not supported for SAP HANA and must be disabled.
• Capacity Pool Tiering is not supported for SAP HANA and must be set to None.
• Daily automatic backups must be disabled for SAP HANA. Default FSx for ONTAP backups are not
application-aware and cannot be used to restore SAP HANA to a consistent state.
Volume layout
The following is an example of volume and mount point configuration in a single Availability Zone for an
SAP HANA workload with the SAP System ID HDB.
To place the home directory of the hdbadm user on the central storage, the /usr/sap/HDB file system
must be mounted from the HDB_shared volume.
/hana/shared
The administrative password enables you to access the file system via SSH, the ONTAP CLI, and REST API.
To use tools like NetApp SnapCenter, you must have an administrative password.
set advanced
modify -vserver <svm> -tcp-max-xfer-size 262144
set admin
Disable snapshots
FSx for ONTAP automatically applies a snapshot policy that takes hourly snapshots of volumes. The
default policy offers limited value for SAP HANA because it is not application-aware. We recommend
disabling the automatic snapshots by setting the policy to none.
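The ONTAP CLI command is elided in this extract; setting the policy to none per volume is typically done as follows (the SVM and volume names are placeholders):

```shell
volume modify -vserver <svm> -volume <volume> -snapshot-policy none
```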
Data volume
The automatic FSx for ONTAP snapshots do not have application awareness. A database-consistent
snapshot of the SAP HANA data volume must be prepared by creating a data snapshot. For more
information, see Create a Data Snapshot.
Log volume
The log volume is automatically backed up every 15 minutes by SAP HANA. An hourly volume snapshot
does not offer any additional value in terms of RPO reduction.
The high frequency of changes on the log volume can rapidly increase the total capacity used for
snapshots. This can cause the log volume to run out of capacity, making the SAP HANA workload
unresponsive.
Host setup
You must configure your Amazon EC2 instance on an operating system level to use FSx for ONTAP with
SAP HANA on AWS.
Note
The following examples apply to an SAP HANA workload with SAP System ID HDB. The
operating system user is hdbadm.
Topics
• Linux kernel parameters (p. 235)
• Network File System (NFS) (p. 236)
• Create mount points (p. 236)
• Mount file systems (p. 236)
• Data volume partitions (p. 237)
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.ipv4.tcp_rmem = 4096 131072 16777216
net.ipv4.tcp_wmem = 4096 16384 16777216
net.core.netdev_max_backlog = 300000
net.ipv4.tcp_slow_start_after_idle=0
net.ipv4.tcp_no_metrics_save = 1
net.ipv4.tcp_moderate_rcvbuf = 1
net.ipv4.tcp_window_scaling = 1
net.ipv4.tcp_timestamps = 1
net.ipv4.tcp_sack = 1
You must reboot your instance for the kernel parameters and NFS settings to take effect.
Network File System (NFS) version 4 and higher requires user authentication. You can authenticate with a Lightweight Directory Access Protocol (LDAP) server or with local user accounts.
If you use local user accounts, the NFSv4 domain must be set to the same value on all Linux servers and SVMs. You can set the domain parameter (Domain = <domain name>) in the /etc/idmapd.conf file on the Linux hosts.
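For example, with an assumed NFSv4 domain of example.com, the relevant section of /etc/idmapd.conf on each Linux host would read:

```
[General]
Domain = example.com
```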
Create mount points
Create the mount points for the SAP HANA file systems:
mkdir -p /hana/data/HDB/mnt00001
mkdir -p /hana/log/HDB/mnt00001
mkdir -p /hana/shared
mkdir -p /usr/sap/HDB
Mount file systems
• Changes to the nconnect mount parameter take effect only after the NFS file system is unmounted and mounted again.
• Client systems must have unique host names when accessing FSx for ONTAP. If two systems share the same host name, the second system may not be able to access FSx for ONTAP.
Example
Add the following lines to /etc/fstab to preserve mounted file systems during an instance reboot. You
can then run mount -a to mount the NFS file systems.
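A representative set of /etc/fstab entries might look like the following. The SVM DNS name, junction paths, and mount options shown here are illustrative assumptions, not values taken from this guide; the rsize and wsize values match the 262144-byte maximum transfer size configured on the SVM, and nconnect requires NFSv4.1 support in the kernel:

```
<svm-dns-name>:/HDB_data   /hana/data/HDB/mnt00001 nfs rw,vers=4.1,hard,timeo=600,rsize=262144,wsize=262144,noatime,nconnect=8 0 0
<svm-dns-name>:/HDB_log    /hana/log/HDB/mnt00001  nfs rw,vers=4.1,hard,timeo=600,rsize=262144,wsize=262144,noatime,nconnect=2 0 0
<svm-dns-name>:/HDB_shared /hana/shared            nfs rw,vers=4.1,hard,timeo=600,rsize=262144,wsize=262144,noatime 0 0
<svm-dns-name>:/usr_sap    /usr/sap/HDB            nfs rw,vers=4.1,hard,timeo=600,rsize=262144,wsize=262144,noatime 0 0
```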
Data volume partitions
Host preparation
Additional mount points and /etc/fstab entries must be created, and the new volumes must be mounted.
• Create additional mount points and assign the required permissions, group, and ownership.
mkdir -p /hana/data/HDB/mnt00002
chmod -R 777 /hana/data/HDB/mnt00002
• Set the permissions to 777. This is required to enable SAP HANA to add a new data volume in the
subsequent step. SAP HANA sets more restrictive permissions automatically during data volume
creation.
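The ownership assignment mentioned in the bullet above can be sketched as follows; the sapsys group is an assumption based on the standard SAP group for the hdbadm user:

```
mkdir -p /hana/data/HDB/mnt00002
chmod 777 /hana/data/HDB/mnt00002            # temporary; SAP HANA tightens permissions during volume creation
chown hdbadm:sapsys /hana/data/HDB/mnt00002  # sapsys group is an assumption
```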
Enable the use of multiple data volume partitions by setting the following parameter in the SAP HANA global.ini file:
[customizable_functionalities]
persistence_datavolume_partition_multipath = true
Adding a data volume partition is quick. The new data volume partitions are empty after creation. Data is
distributed equally across data volumes over time.
After you configure and mount the FSx for ONTAP file systems, you can install and set up your SAP HANA workload on AWS. For more information, see SAP HANA Environment Setup on AWS.
Notices
Customers are responsible for making their own independent assessment of the information in this
document. This document: (a) is for informational purposes only, (b) represents current AWS product
offerings and practices, which are subject to change without notice, and (c) does not create any
commitments or assurances from AWS and its affiliates, suppliers or licensors. AWS products or services
are provided “as is” without warranties, representations, or conditions of any kind, whether express or
implied. The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements,
and this document is not part of, nor does it modify, any agreement between AWS and its customers.
The software included with this document is licensed under the Apache License, Version 2.0 (the
"License"). You may not use this file except in compliance with the License. A copy of the License is
located at http://aws.amazon.com/apache2.0/ or in the "license" file accompanying this file. This code is
distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express
or implied. See the License for the specific language governing permissions and limitations under the
License.
© 2021 Amazon Web Services, Inc. or its affiliates. All rights reserved.