AWS offers a wide range of storage services that can be configured to match your project requirements
and use cases, from highly confidential data to frequently and infrequently accessed data. You can choose
from storage service types such as Object Storage as a Service (Amazon S3), File Storage as a Service
(Amazon EFS), Block Storage as a Service (Amazon EBS), as well as backup and data migration options.
Amazon S3 is a scalable object storage service, often integrated into DevOps pipelines for storing
application data. Common use cases include:
Data Storage: Amazon S3 is well suited to scaling storage for both small and large applications. It stores
and retrieves data for data-intensive applications on demand.
Backup and Recovery: Many organizations use Amazon S3 to back up their critical data, relying on its
durability and availability for recovery needs.
Hosting Static Websites: Amazon S3 can store and serve HTML, CSS, and other web content, allowing
users and developers to host static websites with low-latency access and cost-effectiveness.
Data Archiving: Integration with Amazon S3 Glacier provides a cost-effective solution for long-term
storage of infrequently accessed data.
Big Data Analytics: Amazon S3 is often used as a data lake because of its capacity to store large amounts
of both structured and unstructured data, offering seamless integration with AWS analytics and machine
learning services.
What is an Amazon S3 bucket?
An Amazon S3 bucket is the fundamental storage container in the S3 service. It provides a secure and
scalable repository for storing objects such as text data, images, audio, and video files in the AWS Cloud.
Each bucket name must be globally unique, and each bucket can be configured with an ACL (Access
Control List).
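As a quick illustration, a bucket can also be created from the AWS CLI. This is a minimal sketch; the bucket name and region below are placeholders you would replace with your own:
# "mb" (make bucket) creates a new bucket with a globally unique name
aws s3 mb s3://my-demo-bucket --region us-east-1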
How Does Amazon S3 Work?
Amazon S3 organizes data into uniquely named S3 buckets, which can be customized with access
controls. Users store objects inside buckets and benefit from features like versioning and lifecycle
management as their data storage scales. The following are a few main features of Amazon S3:
Access control lists (ACLs): Documents that define which accounts or groups are granted access to a
bucket and its objects. An ACL is specific to each bucket. You can use S3 Object Ownership, an Amazon
S3 bucket-level feature, to manage who owns the objects you upload to your bucket and to enable or
disable ACLs.
Lifecycle Rules: A cost-saving practice that can move your files to Amazon S3 Glacier (the AWS data
archive service) or to another S3 storage class for cheaper storage of old data, or delete the data entirely
after a specified time.
Keys and Null Objects
Keys: The key is the unique identifier for an object in a bucket. For example, if in a bucket 'ABC' your
GFG.java file is stored at 'java Programs/GFG.java', then 'java Programs/GFG.java' is the object key for
GFG.java.
Null Object: When versioning is suspended in a bucket, objects are stored with a version ID of null. Such
objects may be referred to as null objects.
How do you use an Amazon S3 bucket?
Step 1: Log in to your AWS account with your credentials, search for S3, and open the S3 service. Now
click the "Create bucket" option and configure the settings shown during creation.
Step 2: After creating the bucket, upload objects into it according to your requirements, either through the
AWS console or with the AWS CLI. The following is the command to upload an object into an S3 bucket:
aws s3 cp <local-file-path> s3://<bucket-name>/
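For instance, uploading a file and then listing the bucket to verify the upload might look like this (the file and bucket names are placeholders):
# Upload a local file, then list the bucket contents to confirm it arrived
aws s3 cp ./index.html s3://my-demo-bucket/
aws s3 ls s3://my-demo-bucket/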
Step 3: You can control the permissions of the objects uploaded into the bucket, and also who can access
the bucket. You can make the bucket public or private; by default, S3 buckets are private.
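If you want to keep a bucket explicitly locked down, S3 Block Public Access can be enabled from the CLI. A minimal sketch, assuming the placeholder bucket name used above:
# Block all four forms of public access at the bucket level
aws s3api put-public-access-block \
    --bucket my-demo-bucket \
    --public-access-block-configuration BlockPublicAcls=true,IgnorePublicAcls=true,BlockPublicPolicy=true,RestrictPublicBuckets=true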
Step 4: You can manage the bucket's lifecycle with transition rules. Based on the rules you define, objects
will transition into different storage classes according to the age of the objects uploaded into the bucket.
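As an illustration, a lifecycle configuration can be applied from the CLI. The rule below is a sketch with assumed values (placeholder bucket name, a 90-day transition to Glacier, expiration after a year):
# Write an example lifecycle configuration to a local file
cat > lifecycle.json <<'EOF'
{
  "Rules": [
    {
      "ID": "archive-then-expire",
      "Status": "Enabled",
      "Filter": { "Prefix": "" },
      "Transitions": [ { "Days": 90, "StorageClass": "GLACIER" } ],
      "Expiration": { "Days": 365 }
    }
  ]
}
EOF
# Apply the configuration to the bucket
aws s3api put-bucket-lifecycle-configuration \
    --bucket my-demo-bucket \
    --lifecycle-configuration file://lifecycle.json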
Step 5: Enable services to monitor and analyze S3. In particular, enable S3 server access logging to record
who requested the objects in your buckets.
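As a sketch, server access logging can be enabled from the CLI, assuming a separate log bucket that the S3 log delivery service is permitted to write to (both bucket names are placeholders):
# Deliver access logs for my-demo-bucket into my-log-bucket under the given prefix
aws s3api put-bucket-logging \
    --bucket my-demo-bucket \
    --bucket-logging-status '{"LoggingEnabled":{"TargetBucket":"my-log-bucket","TargetPrefix":"s3-access/"}}'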
What are the types of S3 Storage Classes?
AWS S3 provides multiple storage classes that offer different performance characteristics, features, and
cost structures.
Standard: Suitable for frequently accessed data that needs to be highly available and durable.
Standard Infrequent Access (Standard IA): This is a cheaper data-storage class and as the name suggests, this
class is best suited for storing infrequently accessed data like log files or data archives. Note that there may
be a per GB data retrieval fee associated with the Standard IA class.
Intelligent Tiering: This service class classifies your files automatically into frequently accessed and
infrequently accessed and stores the infrequently accessed data in infrequent access storage to save costs. This
is useful for unpredictable data access to an S3 bucket.
One Zone Infrequent Access (One Zone IA): All the files on your S3 have their copies stored in a minimum
of 3 Availability Zones. One Zone IA stores this data in a single availability zone. It is only recommended to
use this storage class for infrequently accessed, non-essential data. There may be a per GB cost for data
retrieval.
Reduced Redundancy Storage (RRS): All the other S3 classes ensure durability of 99.999999999%; RRS
only ensures 99.99% durability. AWS no longer recommends RRS due to its lower durability. However, it
can be used to store non-essential data.
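The storage class can be chosen per object at upload time. A small sketch using the CLI's --storage-class flag with a placeholder bucket:
# Upload directly into Standard-IA instead of the default Standard class
aws s3 cp ./backup.tar.gz s3://my-demo-bucket/ --storage-class STANDARD_IA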
Amazon Virtual Private Cloud (VPC)
[Figure: a customer network connecting across the Internet to the AWS Cloud]
Security in your VPC
• Security groups
• Network access control lists (ACLs)
[Figure: multiple security groups protecting resources inside the VPC]
AWS VPN CloudHub
You can create multiple AWS hardware VPN connections via your VPC to enable communications
between various remote networks.
Choosing A Pattern
For most use cases, there are two primary patterns for organizing your infrastructure:
§ Single-account pattern: identity management and small, single applications managed by one person or a
very small team.
§ Multi-account pattern: separate AWS accounts for Shared Services, Development, Test, and Production.
IPs and CIDR
The CIDR block 10.0.0.0/16 locks the first 16 bits of the address as the network prefix:
10.0.0.0/16 = 00001010 00000000 00000000 00000000 (first 16 bits locked)
Lowest possible IP:  10.0.0.0     (00001010 00000000 00000000 00000000)
Highest possible IP: 10.0.255.255 (00001010 00000000 11111111 11111111)
VPCs and IP Addresses
Example: A VPC with CIDR /22 includes 1,024 total IPs. Divided evenly into four subnets, each subnet
has 251 usable IPs, because AWS reserves five addresses in every subnet.
Recommendation: Start with one public and one private subnet per Availability Zone.
10.0.0.0/21 (10.0.0.0-10.0.7.255)
Recommendation: Consider larger subnets over smaller ones (/24 and larger).
Which subnet type (public or private) should you use for these resources?
Resource                        Public    Private
Datastore instances                        ✓
Batch processing instances                 ✓
Back-end instances                         ✓
Web application instances        ✓         ✓
How do you control your VPC traffic?
§ Route tables
§ Security groups
§ Network ACLs
§ Internet gateways
Route Tables
§ Main and custom route tables
§ Every VPC route table contains a local route for communication within the VPC:
Destination      Target
10.0.0.0/16      local
§ Only one route table per subnet
Best practice:
Use custom route tables for each subnet to enable granular routing for destinations.
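As an example of granular routing, a default route to an Internet gateway can be added from the CLI. A sketch with placeholder resource IDs:
# Send all non-local traffic from this route table to the Internet gateway
aws ec2 create-route \
    --route-table-id rtb-0123456789abcdef0 \
    --destination-cidr-block 0.0.0.0/0 \
    --gateway-id igw-0123456789abcdef0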
Security Groups
§ Are virtual firewalls that control inbound and outbound traffic for
one or more instances.
§ Deny all incoming traffic by default and use allow rules that can
filter based on TCP, UDP, and ICMP protocols.
Use security groups to control traffic into, out of, and between resources.
Example: layered security groups for a three-tier application
§ App tier ELB security group: Inbound rule allows TCP port 8080, source: Web tier security group
§ App tier security group: Inbound rule allows TCP port 8080, source: App tier ELB security group
§ Data tier security group: Inbound rule allows TCP port 3306, source: App tier security group
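Translated into CLI terms, the data tier rule above might be created like this (both security group IDs are placeholders; the first is the data tier group, the second the app tier group):
# Allow the app tier security group to reach the data tier on MySQL port 3306
aws ec2 authorize-security-group-ingress \
    --group-id sg-0123456789abcdef0 \
    --protocol tcp --port 3306 \
    --source-group sg-0fedcba9876543210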
Network ACLs
§ Are optional virtual firewalls that control traffic in and out of a subnet.
§ Enforce rules only at the boundary of the subnet, not at the instance level as security groups do.
Internet gateways
[Figure: an Internet gateway attached to a VPC with a public subnet, 10.0.10.0/24]
Directing Traffic To Your VPC
To enable access to or from the Internet for instances in a VPC subnet, you must:
§ Ensure that your subnet's route table points to the Internet gateway
§ Ensure that your NACLs and security groups allow the relevant
traffic to flow to and from your instance
What About Outbound Traffic From Private Instances?
Two primary options:
§ An Amazon EC2 instance set up as a NAT in a public subnet
§ A VPC NAT gateway
[Figure: a private instance with a private IP in a private subnet (10.0.20.0/24) sending outbound traffic
through a NAT in the public subnet (10.0.10.0/24)]
VPC NAT Gateways vs. NAT Instances On Amazon EC2
[Figure: a NAT instance with a public IP and a private IP sits in the public subnet (10.0.0.0/24); private
instances in private subnets 10.0.4.0/23 (Availability Zone 1) and 10.0.2.0/23 (Availability Zone 2) reach
the Internet, and services such as DynamoDB, through it.]
Public subnet route table:
Destination      Target
10.0.0.0/20      local
0.0.0.0/0        Internet gateway
Private subnet route table:
Destination      Target
10.0.0.0/20      local
0.0.0.0/0        NAT
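As a sketch of the managed option, a NAT gateway can be created and wired into a private subnet's route table from the CLI (all resource IDs are placeholders):
# Create a NAT gateway in a public subnet, backed by an Elastic IP allocation
aws ec2 create-nat-gateway \
    --subnet-id subnet-0123456789abcdef0 \
    --allocation-id eipalloc-0123456789abcdef0
# Point the private subnet's default route at the new NAT gateway
aws ec2 create-route \
    --route-table-id rtb-0fedcba9876543210 \
    --destination-cidr-block 0.0.0.0/0 \
    --nat-gateway-id nat-0123456789abcdef0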
Logging VPC Traffic
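One common mechanism is VPC Flow Logs, which capture metadata about the IP traffic flowing through a VPC. A hedged sketch, assuming a pre-existing CloudWatch Logs group and an IAM role that permits log delivery (all IDs and names are placeholders):
# Capture accepted and rejected traffic for the VPC into CloudWatch Logs
aws ec2 create-flow-logs \
    --resource-type VPC \
    --resource-ids vpc-0123456789abcdef0 \
    --traffic-type ALL \
    --log-group-name my-vpc-flow-logs \
    --deliver-logs-permission-arn arn:aws:iam::123456789012:role/flow-logs-delivery-role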
From the Your VPCs screen, note the details for your VPC: VPC ID, DHCP options set, main route table,
and default network ACL. Also note the subnets, Internet gateways, and Elastic IPs that have been created
for your VPC. You should clearly name your VPC resources.
AWS VPC
Now you can launch a Linux instance; a basic AMI is fine. This instance must be launched in the private
subnet, and the server should be launched into the DBServerSG security group.
You DO NOT want to auto-assign a public IP address to this server.
If you enable ssh from the WebServerSG to the DBServerSG, you will be able to log in from the server in
the public subnet to the server in the private subnet, as sketched below.
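A minimal sketch of that hop using ssh agent forwarding, assuming ec2-user is the AMI's login user and using placeholder addresses:
# On your local machine: load the key and forward the agent to the web server
ssh-add ~/.ssh/your-key.pem
ssh -A ec2-user@<web-server-public-ip>
# From the web server, hop onward to the DB server in the private subnet
ssh ec2-user@<db-server-private-ip>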
Once you ssh from your web server instance to your DB instance, you can check your public IP address
using:
wget -qO- http://ipinfo.io/ip
What is the public IP address of the server in your private network? What does it correspond with?
AWS VPC
http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_Scenario2.html
How to securely manage AWS credentials:
§ https://blogs.aws.amazon.com/security/post/Tx3D6U6WSFGOK2H/A-New-and-Standardized-Way-to-Manage-Credentials-in-the-AWS-SDKs
How to login securely to Linux AMI in VPC Private subnet using ssh agent forwarding:
§ https://blogs.aws.amazon.com/security/post/Tx3N8GFK85UN1G6/Securely-connect-to-Linux-instances-running-in-a-private-Amazon-VPC