Introduction to AWS Simple Storage Service (AWS S3)

AWS offers a wide range of storage services that can be configured depending on your project requirements
and use cases. AWS provides different types of storage services for maintaining highly confidential data,
frequently accessed data, and infrequently accessed data. You can choose from various storage service types
such as Object Storage as a Service (Amazon S3), File Storage as a Service (Amazon EFS), Block Storage as
a Service (Amazon EBS), backups, and data migration options.

AWS S3 is a scalable storage service, often integrated into DevOps pipelines for storing application data. To
learn more about using AWS S3 in a DevOps context, the DevOps Engineering – Planning to Production
course covers practical examples of integrating AWS services into your workflows.

What is Amazon S3?


Amazon S3 (Simple Storage Service) is an AWS storage service that stores files of different types, such as
photos, audio, and videos, as objects, providing high scalability and security. It allows users to store and
retrieve any amount of data at any point in time from anywhere on the web. It offers features such as extremely
high availability, security, and simple integration with other AWS services.

What is Amazon S3 Used for?


Amazon S3 is used for a wide variety of purposes in the cloud because of its robust features for scaling and
securing data. It supports use cases from fields such as mobile/web applications, big data, machine learning,
and many more. The following are a few common uses of the Amazon S3 service.

Data Storage: Amazon S3 is a strong option for both small and large storage applications. It helps data-intensive
applications store and retrieve data on demand with minimal latency.
Backup and Recovery: Many organizations use Amazon S3 to back up their critical data and maintain its
durability and availability for recovery needs.
Hosting Static Websites: Amazon S3 can store HTML, CSS, and other web content from users/developers,
allowing them to host static websites with low-latency access and cost-effectiveness. For more detail, refer to
this article – How to host static websites using Amazon S3.
Data Archiving: Integration with the Amazon S3 Glacier storage classes provides a cost-effective solution for
long-term storage of data that is accessed infrequently.
Big Data Analytics: Amazon S3 is often used as a data lake because of its capacity to store large amounts of
both structured and unstructured data, offering seamless integration with AWS analytics and AWS machine
learning services.
What is an Amazon S3 bucket?
An Amazon S3 bucket is the fundamental storage container in the AWS S3 service. It provides a secure and
scalable repository for storing objects such as text data, images, audio, and video files in the AWS cloud.
Each S3 bucket must have a globally unique name and can be configured with access controls such as an
ACL (Access Control List).
How Does Amazon S3 Work?
Amazon S3 organizes data into uniquely named S3 buckets, each of which can be customized with its own
access controls. Users store objects inside S3 buckets and can use features such as versioning and lifecycle
management to manage data storage at scale. The following are a few main features of Amazon S3:

1. Amazon S3 Buckets and Objects


Amazon S3 Bucket: Data in S3 is stored in containers called buckets. Each bucket has its own set of
policies and configurations, which gives users more control over their data. Bucket names must be globally
unique, and a bucket can be thought of as a parent folder for data. There is a default limit of 100 buckets per
AWS account, which can be increased by requesting a limit increase through AWS Support.

Amazon S3 Objects: The fundamental entity type stored in AWS S3.


You can store as many objects as you want, and a single object can be up to 5 TB in size. Each object
consists of the following:
• Key
• Version ID
• Value
• Metadata
• Sub-resources
• Access control information
• Tags
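
To make these components concrete, the following is a minimal boto3 sketch that uploads an object with a key, user-defined metadata, and tags. The bucket name and file contents are placeholders, not values from this article.

import boto3

s3 = boto3.client("s3")

# Upload an object; the key "java Programs/GFG.java" acts as its unique identifier in the bucket.
s3.put_object(
    Bucket="my-example-bucket",          # placeholder bucket name
    Key="java Programs/GFG.java",
    Body=b"class GFG {}",                # placeholder file contents
    Metadata={"author": "example"},      # user-defined metadata
    Tagging="project=demo&env=test",     # tags as a URL-encoded string
)

# head_object returns the object's metadata and, if versioning is enabled, its VersionId.
print(s3.head_object(Bucket="my-example-bucket", Key="java Programs/GFG.java"))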
2. Amazon S3 Versioning and Access Control
S3 Versioning: Versioning means keeping a record of every version of a file uploaded to S3. Note that
versioning is not enabled by default; once enabled, it applies to all objects in the bucket. Versioning keeps
all copies of your file, so it adds cost for storing multiple copies of your data. For example, 10 copies of
a 1 GB file will be billed as 10 GB of S3 storage. Versioning is helpful for preventing unintended overwrites
and deletions. Objects with the same key can be stored in a bucket if versioning is enabled, since each copy
has a unique version ID. To know more about versioning, refer to this article – Amazon S3 Versioning.
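
As a minimal sketch (the bucket name is a placeholder), versioning can be enabled on an existing bucket with boto3 as follows:

import boto3

s3 = boto3.client("s3")

# Turn versioning on; it then applies to all objects stored in the bucket.
s3.put_bucket_versioning(
    Bucket="my-example-bucket",
    VersioningConfiguration={"Status": "Enabled"},
)

# Every version of an object under the same key is listed with its own VersionId.
for v in s3.list_object_versions(Bucket="my-example-bucket").get("Versions", []):
    print(v["Key"], v["VersionId"])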

Access control lists (ACLs): A document for verifying access to S3 buckets from outside your AWS account.
An ACL is specific to each bucket. You can utilize S3 Object Ownership, an Amazon S3 bucket-level feature,
to manage who owns the objects you upload to your bucket and to enable or disable ACLs.

3. Bucket Policies and Lifecycles


Bucket Policies: A document that defines access to an S3 bucket from within your AWS account and controls
which services and users have what kind of access to your S3 bucket. Each bucket has its own bucket policy.

Lifecycle Rules: A cost-saving practice that can move your files to AWS Glacier (the AWS data-archival
service) or to another S3 storage class for cheaper storage of old data, or delete the data entirely after a
specified time. To know more, refer to this article – Amazon S3 Life Cycle Management.
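
For illustration, here is a hedged boto3 sketch of one lifecycle rule (the bucket name and prefix are placeholders) that transitions old objects to cheaper storage classes and eventually expires them:

import boto3

s3 = boto3.client("s3")

# Transition objects under the "logs/" prefix to cheaper storage as they age, then delete them.
s3.put_bucket_lifecycle_configuration(
    Bucket="my-example-bucket",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "archive-old-logs",
                "Filter": {"Prefix": "logs/"},
                "Status": "Enabled",
                "Transitions": [
                    {"Days": 30, "StorageClass": "STANDARD_IA"},
                    {"Days": 90, "StorageClass": "GLACIER"},
                ],
                "Expiration": {"Days": 365},
            }
        ]
    },
)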
4. Keys and Null Objects
Keys: The key is the unique identifier for an object in a bucket. For example, if in a bucket 'ABC' your
GFG.java file is stored at java Programs/GFG.java, then 'java Programs/GFG.java' is the object key for
GFG.java.

Null Objects: When versioning is suspended for a bucket, the version ID assigned to new objects is null.
Such objects may be referred to as null objects.

How To Use an Amazon S3 Bucket?


You can use Amazon S3 buckets by following the simple steps mentioned below. To learn more about
configuring Amazon S3, refer to Amazon S3 – Creating a S3 Bucket.

Step 1: Log in to your Amazon account with your credentials, search for S3, and open the S3 service. Click
the "Create bucket" option and work through the configuration options that are shown.

Step 2: After creating the bucket, upload objects into it based on your requirements, either through the AWS
console or with the AWS CLI. The following command uploads an object into an S3 bucket:
aws s3 cp <local-file-path> s3://<bucket-name>/
Step 3: You can control the permissions on the objects uploaded into the S3 bucket and decide who can
access the bucket. You can make the bucket public or private; by default, S3 buckets are private.
Step 4: You can manage the bucket with lifecycle management. Based on the rules that you define, objects
will transition into different storage classes according to their age.
Step 5: You can enable services to monitor and analyze S3, for example S3 server access logging, to record
who is requesting the objects in your S3 buckets (a small sketch of Steps 3 and 5 follows).
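
As a rough sketch of Steps 3 and 5 (all bucket names are placeholders), boto3 can keep a bucket private and turn on server access logging:

import boto3

s3 = boto3.client("s3")

# Step 3: keep the bucket private by blocking all forms of public access.
s3.put_public_access_block(
    Bucket="my-example-bucket",
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": True,
        "IgnorePublicAcls": True,
        "BlockPublicPolicy": True,
        "RestrictPublicBuckets": True,
    },
)

# Step 5: record access requests in a separate logging bucket.
# The target bucket must already allow the S3 logging service to write to it.
s3.put_bucket_logging(
    Bucket="my-example-bucket",
    BucketLoggingStatus={
        "LoggingEnabled": {
            "TargetBucket": "my-example-log-bucket",
            "TargetPrefix": "access-logs/",
        }
    },
)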
What are the types of S3 Storage Classes?
AWS S3 provides multiple storage classes that offer different performance, features, and cost structures.
Standard: Suitable for frequently accessed data that needs to be highly available and durable.
Standard Infrequent Access (Standard-IA): A cheaper storage class that, as the name suggests, is best suited
for storing infrequently accessed data such as log files or data archives. Note that there may be a per-GB
data retrieval fee associated with the Standard-IA class.
Intelligent-Tiering: This storage class automatically classifies your files as frequently or infrequently accessed
and stores the infrequently accessed data in an infrequent-access tier to save costs. It is useful when access
patterns to an S3 bucket are unpredictable.
One Zone Infrequent Access (One Zone-IA): Files in the other S3 classes have copies stored in a minimum
of 3 Availability Zones; One Zone-IA stores data in a single Availability Zone. It is only recommended for
infrequently accessed, non-essential data. There may be a per-GB cost for data retrieval.
Reduced Redundancy Storage (RRS): All the other S3 classes ensure durability of 99.999999999%; RRS
only ensures 99.99% durability. AWS no longer recommends RRS because of its lower durability, but it can
still be used for non-essential data.
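
A small boto3 sketch (bucket and file names are placeholders) showing how a storage class is chosen per object at upload time:

import boto3

s3 = boto3.client("s3")

# Store an infrequently accessed archive directly in the Standard-IA storage class.
s3.upload_file(
    Filename="backup-2024.tar.gz",
    Bucket="my-example-bucket",
    Key="archives/backup-2024.tar.gz",
    ExtraArgs={"StorageClass": "STANDARD_IA"},
)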

How to Upload and Manage Files on Amazon S3?


First, you need an Amazon S3 bucket for uploading and managing files; create the S3 bucket as discussed
above. Once the S3 bucket is created, you can upload files in various ways, such as the AWS SDKs, the
AWS CLI, and the Amazon S3 Management Console. Manage the files by organizing them into folders
within the S3 bucket and applying access controls to secure access. Features such as versioning and lifecycle
policies help you manage data efficiently and optimize storage classes.
How to Store and Download Objects in Amazon S3?
You can store and download objects in Amazon S3 using the AWS Management Console, AWS CLI
commands, or programming scripts (for example, the boto3 library for Python).
1. AWS Management Console
You can access an AWS S3 bucket using the AWS Management Console, which is a web-based user interface.
First, create an AWS account, log in to the web console, and choose the S3 bucket option from the Amazon
S3 service. ( AWS Console >> Amazon S3 >> S3 Buckets )

2. AWS CLI Commands


With this method, first install the AWS CLI software and configure your AWS account with an access key,
secret key, and default region. Running `aws help` shows how the s3 commands are used. For example, to
list your buckets, run the following command:
aws s3 ls
3. Programming Scripts
You can work with an Amazon S3 bucket from a scripting language such as Python, using libraries such as
boto3 to perform AWS S3 tasks. To know more, refer to this article
– How to access Amazon S3 using python script.
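
The following is a minimal sketch of such a script, assuming placeholder bucket, key, and file names:

import boto3

s3 = boto3.client("s3")   # credentials come from `aws configure` or environment variables

# Store (upload) a local file as an object in the bucket.
s3.upload_file("report.csv", "my-example-bucket", "reports/report.csv")

# Download the same object back to a local file.
s3.download_file("my-example-bucket", "reports/report.csv", "report-copy.csv")

# List the objects under a prefix to confirm the upload.
resp = s3.list_objects_v2(Bucket="my-example-bucket", Prefix="reports/")
for obj in resp.get("Contents", []):
    print(obj["Key"], obj["Size"])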
AWS S3 Bucket Permissions
You can manage the permissions of S3 buckets using several methods; the following are a few of them.
Bucket Policies: Bucket policies are JSON documents attached directly to an S3 bucket that control
bucket-level operations. With bucket policies you can grant permissions to the users who may access the
objects in the bucket; a user granted permission can, for example, download and upload objects. You can
also create bucket policies programmatically, for example with Python, as sketched below.
Access Control Lists (ACLs): ACLs are a legacy access control mechanism for S3 buckets; bucket policies
are now generally used instead to control permissions. With an ACL you can grant read and write access to
the S3 bucket or make objects public, based on your requirements.
IAM Policies: IAM policies are mostly used to manage permissions for users, groups, and roles in AWS.
You can attach an IAM policy to an IAM entity (user, group, or role), granting it access to specific S3 buckets
and operations.
The most effective way to control permissions on S3 buckets is by using bucket policies.
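
Here is a hedged sketch of attaching such a policy with boto3 (the bucket name, account ID, and user name are placeholders):

import json
import boto3

s3 = boto3.client("s3")

# Allow a specific IAM user to download (GetObject) and upload (PutObject) objects in the bucket.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::123456789012:user/example-user"},
            "Action": ["s3:GetObject", "s3:PutObject"],
            "Resource": "arn:aws:s3:::my-example-bucket/*",
        }
    ],
}

s3.put_bucket_policy(Bucket="my-example-bucket", Policy=json.dumps(policy))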
Features of Amazon S3
Durability: AWS states that Amazon S3 provides 99.999999999% durability (11 9's), which means the
expected loss is on the order of one object in 100 billion per year.
Availability: AWS ensures that the uptime of AWS S3 is 99.99% for standard access.
Note that availability is about being able to access your data, while durability is about not losing data altogether.
Server-Side Encryption (SSE): AWS S3 supports three types of SSE models (a short upload sketch follows this feature list):
SSE-S3: AWS S3 manages encryption keys.
SSE-C: The customer manages encryption keys.
SSE-KMS: The AWS Key Management Service (KMS) manages the encryption keys.
File Size support: AWS S3 can hold files of size ranging from 0 bytes to 5 terabytes. A 5TB limit on file size
should not be a blocker for most of the applications in the world.
Infinite storage space: Theoretically AWS S3 is supposed to have infinite storage space. This makes S3
infinitely scalable for all kinds of use cases.
Pay as you use: The users are charged according to the S3 storage they hold.
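
As a minimal illustration of the SSE models above (bucket, key, and KMS key alias are placeholders), encryption is requested per object at upload time:

import boto3

s3 = boto3.client("s3")

# SSE-S3: let Amazon S3 manage the encryption keys (AES-256).
s3.put_object(
    Bucket="my-example-bucket",
    Key="private/report.pdf",
    Body=b"placeholder contents",
    ServerSideEncryption="AES256",
)

# SSE-KMS: encrypt with a key managed by AWS KMS instead.
s3.put_object(
    Bucket="my-example-bucket",
    Key="private/report-kms.pdf",
    Body=b"placeholder contents",
    ServerSideEncryption="aws:kms",
    SSEKMSKeyId="alias/my-example-key",   # placeholder KMS key alias
)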
Advantages of Amazon S3
Scalability: Amazon S3 scales horizontally, which allows it to handle large amounts of data. It scales
automatically without human intervention.
High availability: Amazon S3 is known for its high availability; you can access your data whenever you
need it, from any region. It offers a Service Level Agreement (SLA) guaranteeing 99.9% uptime.
Data Lifecycle Management: You can manage the data stored in an S3 bucket by automating the transition
and expiration of objects based on predefined rules, for example automatically moving data to Standard-IA
or Glacier after a specified period.
Integration with Other AWS Services: You can integrate S3 with other AWS services; for example, an AWS
Lambda function can be triggered when files or objects are added to the S3 bucket.
Amazon Web Services – Virtual
Private Cloud (VPC)

RICHARD FRISBY
JIMMY MCGIBNEY
Amazon Virtual Private Cloud (VPC)

— An Amazon VPC is an isolated portion of the AWS cloud. You use Amazon VPC to create a virtual
network topology for your Amazon EC2 resources.
— You have complete control over your virtual networking environment, including selection of your own
IP address range, creation of subnets, and configuration of route tables and network gateways.
— You can create a public-facing subnet for your webservers that has access to the Internet, and place your
backend systems such as databases or application servers in a private-facing subnet with no Internet access.
Amazon Virtual Private Cloud (VPC)

§ Provision a private, isolated virtual network on the AWS cloud.
§ Have complete control over your virtual networking environment.
VPCs and subnets

§ A subnet defines a range of IP addresses in your VPC.


§ You can launch AWS resources into a subnet that you
select.
§ A private subnet should be used for resources that won’t
be accessible over the Internet.
§ A public subnet should be used for resources that will be
accessed over the Internet.
§ Each subnet must reside entirely within one Availability
Zone and cannot span zones.
VPC example

[Figure: a Virtual Private Cloud in the AWS cloud with a public subnet (web servers, NAT gateway), a private subnet (app servers, DB server), and a VPN-only subnet, connected to the Internet through an Internet gateway and to the customer network through a virtual private gateway.]
Security in your VPC

• Security groups
• Network access control lists (ACLs)
• Key Pairs

[Figure: a 10.0.0.0/16 VPC with two subnets (10.0.0.0/24 and 10.0.1.0/24); instances in each subnet sit in security groups, each subnet has its own network ACL and routing table, and the VPC router connects to a VPN gateway and an Internet gateway.]


VPN connections

VPN connectivity option – Description

AWS Hardware VPN – You can create an IPsec hardware VPN connection between your VPC and your remote network.

AWS Direct Connect – AWS Direct Connect provides a dedicated private connection from a remote network to your VPC.

AWS VPN CloudHub – You can create multiple AWS hardware VPN connections via your VPC to enable communications between various remote networks.

Software VPN – You can create a VPN connection to your remote network by using an Amazon EC2 instance in your VPC that's running a software VPN appliance.
Using One VPC

There are limited use cases where one VPC could be appropriate:
§ High-performance computing
§ Identity management
§ Small, single applications managed by one person or very small team

For most use cases, there are two primary patterns for organizing your infrastructure: Multi-VPC and Multi-Account.


AWS Infrastructure Patterns

VPC pattern: Shared Services, Development, Test, and Production – each in its own Amazon VPC.
Account pattern: Shared Services, Development, Test, and Production – each in its own AWS account.
Choosing A Pattern

How do you know which pattern to use?

§ The primary factors for determining this are the complexity of your organization and your workload
isolation requirements:
§ Single IT team? Multi-VPC
§ Large organization with many IT teams? Multi-account
§ High workload isolation required? Multi-account
Other Important Considerations

§ The majority of AWS services do not actually sit within a VPC. For these services, a VPC cannot provide
any isolation outside of connectivity.
§ Network traffic between AWS Regions traverses the AWS global network backbone by default.
§ Amazon S3 and DynamoDB offer VPC endpoints to connect without traversing the public Internet.
VPCs And IP Addresses

§ When you create your VPC, you specify its set of IP addresses with CIDR notation.
§ Classless Inter-Domain Routing (CIDR) notation is a simplified way to show a specific range of IP addresses.
§ Example: 10.0.0.0/16 = all IPs from 10.0.0.0 to 10.0.255.255
§ How does that work? What does the 16 define?


IPs and CIDR

Each of the four numbers (octets) in an IPv4 address represents a set of 8 binary values (8 bits).

10 . 0 . 0 . 0
00001010 00000000 00000000 00000000

10 . 0 . 255 . 255
00001010 00000000 11111111 11111111
IPs and CIDR

The 16 in the CIDR notation example represents how many of those bits are "locked down" and cannot change.

10 . 0 . 0 . 0 /16
00001010 00000000 00000000 00000000
(the first 16 bits are locked)

The unlocked bits can change between 1 and 0, allowing the full range of possible values.
CIDR Example: 10.0.0.0/16

Lowest possible IP: 10.0.0.0 = 00001010 00000000 00000000 00000000
Highest possible IP: 10.0.255.255 = 00001010 00000000 11111111 11111111
VPCs and IP Addresses

§ AWS VPCs can use CIDR ranges between /16 and /28.
§ For every one step a CIDR range increases, the total number of IPs is cut in half:

CIDR: /16 /17 /18 /19 /20 /21 /22 /23 /24 /25 /26 /27 /28
Total IPs: 65,536 32,768 16,384 8,192 4,096 2,048 1,024 512 256 128 64 32 16
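
Python's standard ipaddress module can make this concrete; a small sketch using the example CIDR blocks above:

import ipaddress

vpc = ipaddress.ip_network("10.0.0.0/16")
print(vpc.num_addresses)                            # 65536 total IPs
print(vpc.network_address, vpc.broadcast_address)   # 10.0.0.0 10.0.255.255

# Each extra prefix bit halves the number of addresses.
for prefix in range(16, 29):
    net = ipaddress.ip_network(f"10.0.0.0/{prefix}")
    print(f"/{prefix}: {net.num_addresses} IPs")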
What Are Subnets?

Subnets are segments or partitions of a network, divided by CIDR range.

Example: a VPC with a /22 CIDR includes 1,024 total IPs and can be divided into four subnets (Subnet 1–4) of 251 usable IPs each.

Note: In every subnet, the first four and the last IP addresses are reserved for AWS use.
How to Use Subnets

Recommendation: Use subnets to define Internet accessibility.

Public subnets: Include a routing table entry to an Internet gateway to support inbound/outbound access to the public Internet.

Private subnets: Do not have a routing table entry to an Internet gateway and are not directly accessible from the public Internet. Typically use a "jump box" (NAT/proxy/bastion host) to support restricted, outbound-only public Internet access.
Subnets

Recommendation: Start with one public and one private subnet per Availability Zone.

Example for a 10.0.0.0/21 VPC (10.0.0.0-10.0.7.255):
Availability Zone A – Public subnet 10.0.0.0/24 (10.0.0.0-10.0.0.255), Private subnet 10.0.2.0/23 (10.0.2.0-10.0.3.255)
Availability Zone B – Public subnet 10.0.1.0/24 (10.0.1.0-10.0.1.255), Private subnet 10.0.4.0/23 (10.0.4.0-10.0.5.255)
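
A hedged sketch using Python's ipaddress module to carve the /21 VPC range above into the public /24 and private /23 subnets shown:

import ipaddress

vpc = ipaddress.ip_network("10.0.0.0/21")        # 10.0.0.0 - 10.0.7.255, 2,048 IPs

# Two public /24 subnets (one per Availability Zone): 10.0.0.0/24 and 10.0.1.0/24.
public_subnets = list(vpc.subnets(new_prefix=24))[:2]

# Two private /23 subnets taken from the rest of the range.
private_subnets = [ipaddress.ip_network("10.0.2.0/23"), ipaddress.ip_network("10.0.4.0/23")]

for net in public_subnets + private_subnets:
    print(net, "->", net.network_address, "-", net.broadcast_address)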


Subnet Sizes

Recommendation: Consider larger subnets over smaller ones (/24 and larger).

Simplifies workload placement: Choosing where to place a workload among 10 small subnets is more complicated than with one large subnet.

Less likely to waste or run out of IPs: If your subnet runs out of available IPs, you can't add more to that subnet. Example: If you have 251 IPs in a subnet that's using only 25 of them, you can't share the unused 226 IPs with another subnet that's running out.
Subnet Types

Which subnet type (public or private) should you use for these resources?

Datastore instances – Private
Batch processing instances – Private
Back-end instances – Private
Web application instances – Public or Private
How do you control your VPC traffic?

§ Route tables
§ Security groups
§ Network ACLs
§ Internet gateways
Route Tables

Directing Traffic Between VPC Resources

§ Determine where network traffic is routed
§ Main and custom route tables
§ VPC route table: every route table includes a local route for the VPC CIDR (e.g. Destination 10.0.0.0/16, Target local)
§ Only one route table per subnet

Best practice:
Use custom route tables for each subnet to enable granular routing for destinations.
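
A minimal boto3 sketch of that best practice (all resource IDs are placeholders), creating a custom route table for one subnet:

import boto3

ec2 = boto3.client("ec2")

vpc_id = "vpc-0123456789abcdef0"        # placeholder VPC ID
subnet_id = "subnet-0123456789abcdef0"  # placeholder subnet ID

# Create a custom route table in the VPC; it automatically gets the local route for the VPC CIDR.
rt_id = ec2.create_route_table(VpcId=vpc_id)["RouteTable"]["RouteTableId"]

# Associate it with one subnet so that subnet's routing can be managed independently.
ec2.associate_route_table(RouteTableId=rt_id, SubnetId=subnet_id)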
Security Groups

Securing VPC Traffic With Security Groups

§ Are virtual firewalls that control inbound and outbound traffic for
one or more instances.

§ Deny all incoming traffic by default and use allow rules that can
filter based on TCP, UDP, and ICMP protocols.

§ Are stateful, which means that if your inbound request is allowed,


the outbound response does not have to be inspected/tracked, and
vice versa.

§ Can define a source/target as either a CIDR block or another


security group to handle situations like auto scaling.
Security Groups

Use security groups to control traffic into, out of, and between resources.

[Figure: app instances in an App tier security group and data instances in a Data tier security group, spread across private subnets in Availability Zone A and Availability Zone B.]


How Security Groups Are Configured

§ By default, all newly created security groups allow all outbound


traffic to all destinations.
Modifying the default outbound rule on security groups increases
complexity and is not recommended unless required for compliance.

§ Most organizations create security groups with inbound rules for


each functional tier (web/app/data/etc.) within an application.
Security Group Chaining Diagram

Security group rules per application tier (spanning Availability Zones A and B):

Web tier ELB security group – Inbound rule: Allow TCP port 443, Source: 0.0.0.0/0 (Any)
Web tier security group – Inbound rule: Allow TCP port 80, Source: Web tier ELB security group
App tier ELB security group – Inbound rule: Allow TCP port 8080, Source: Web tier security group
App tier security group – Inbound rule: Allow TCP port 8080, Source: App tier ELB security group
Data tier security group – Inbound rule: Allow TCP port 3306, Source: App tier security group
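
A hedged boto3 sketch of one link in this chain (the VPC and group IDs are placeholders): the App tier ELB group allows port 8080 in only from the Web tier security group, referenced as the source instead of a CIDR block.

import boto3

ec2 = boto3.client("ec2")

vpc_id = "vpc-0123456789abcdef0"      # placeholder
web_tier_sg = "sg-0aaaaaaaaaaaaaaaa"  # placeholder: Web tier security group

# Create the App tier ELB security group.
app_elb_sg = ec2.create_security_group(
    GroupName="app-tier-elb-sg",
    Description="App tier ELB security group",
    VpcId=vpc_id,
)["GroupId"]

# Chain: only traffic coming from the Web tier security group is allowed in on TCP 8080.
ec2.authorize_security_group_ingress(
    GroupId=app_elb_sg,
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 8080,
        "ToPort": 8080,
        "UserIdGroupPairs": [{"GroupId": web_tier_sg}],
    }],
)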


Network ACLs

§ Are optional virtual firewalls that control traffic in and out of a subnet.

§ Allow all incoming/outgoing traffic by default and use stateless rules


to allow or deny traffic.
"Stateless rules" inspect all inbound and outbound traffic and do not keep track
of connections.

§ Enforce rules only at the boundary of the subnet, not at the instance level as security groups do.
Internet gateways

Directing Traffic To Your VPC

§ Allow communication between instances in your VPC and the Internet.
§ Are horizontally scaled, redundant, and highly available by default.
§ Provide a target in your VPC route tables for Internet-routable traffic.

[Figure: users on the Internet reach Instance A (which has a public IP) in the 10.0.10.0/24 public subnet of a 10.0.0.0/16 VPC through the Internet gateway.]
Directing Traffic To Your VPC

To enable access to or from the Internet for instances in a VPC subnet, you must:

§ Attach an Internet gateway to your VPC
§ Ensure that your subnet's route table points to the Internet gateway
§ Ensure that instances in your subnet have public IP addresses or Elastic IP addresses
§ Ensure that your NACLs and security groups allow the relevant traffic to flow to and from your instance
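
A rough boto3 sketch of those steps (all IDs are placeholders):

import boto3

ec2 = boto3.client("ec2")

vpc_id = "vpc-0123456789abcdef0"               # placeholder
public_subnet_id = "subnet-0123456789abcdef0"  # placeholder
public_rt_id = "rtb-0123456789abcdef0"         # placeholder: the public subnet's route table

# 1. Attach an Internet gateway to the VPC.
igw_id = ec2.create_internet_gateway()["InternetGateway"]["InternetGatewayId"]
ec2.attach_internet_gateway(InternetGatewayId=igw_id, VpcId=vpc_id)

# 2. Point the subnet's route table at the Internet gateway for all non-local traffic.
ec2.create_route(RouteTableId=public_rt_id, DestinationCidrBlock="0.0.0.0/0", GatewayId=igw_id)

# 3. Make instances launched in this subnet receive public IP addresses automatically.
ec2.modify_subnet_attribute(SubnetId=public_subnet_id, MapPublicIpOnLaunch={"Value": True})

# 4. Security groups and NACLs must still allow the relevant traffic (not shown here).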
What About Outbound Traffic From Private Instances?

Network Address Translation (NAT) services:

§ Enable instances in the private subnet to initiate outbound traffic to the Internet or other AWS services.
§ Prevent private instances from receiving inbound traffic from the Internet.

Two primary options:
§ An Amazon EC2 instance set up as a NAT in a public subnet
§ A VPC NAT Gateway

[Figure: in a 10.0.0.0/16 VPC, the NAT instance or VPC NAT gateway sits in the 10.0.10.0/24 public subnet behind the Internet gateway; a private instance in the 10.0.20.0/24 private subnet uses a route table with Destination 10.0.0.0/16 → local and 0.0.0.0/0 → NAT.]
VPC NAT Gateways vs. NAT Instances On Amazon EC2

Availability: NAT gateway – highly available by default; NAT instance – use a script to manage failover
Bandwidth: NAT gateway – bursts to 10 Gbps; NAT instance – based on the bandwidth of the instance type
Maintenance: NAT gateway – managed by AWS; NAT instance – managed by you
Security: NAT gateway – NACLs; NAT instance – security groups and NACLs
Port forwarding: NAT gateway – not supported; NAT instance – supported
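
For the managed option, a minimal boto3 sketch of creating a NAT gateway (the subnet and route table IDs are placeholders):

import boto3

ec2 = boto3.client("ec2")

public_subnet_id = "subnet-0123456789abcdef0"  # placeholder: a public subnet
private_rt_id = "rtb-0123456789abcdef0"        # placeholder: the private subnet's route table

# A NAT gateway needs an Elastic IP address.
allocation_id = ec2.allocate_address(Domain="vpc")["AllocationId"]

# Create the NAT gateway in the public subnet and wait for it to become available.
nat_id = ec2.create_nat_gateway(SubnetId=public_subnet_id, AllocationId=allocation_id)["NatGateway"]["NatGatewayId"]
ec2.get_waiter("nat_gateway_available").wait(NatGatewayIds=[nat_id])

# Send the private subnet's Internet-bound traffic through the NAT gateway.
ec2.create_route(RouteTableId=private_rt_id, DestinationCidrBlock="0.0.0.0/0", NatGatewayId=nat_id)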


Subnets, Gateways, and Routes

[Figure: a 10.0.0.0/20 VPC in one region, spanning two Availability Zones. The public subnet (10.0.0.0/24, Availability Zone 1) contains a NAT instance with a public IP and uses a route table with Destination 10.0.0.0/20 → local and 0.0.0.0/0 → Internet gateway. The private subnets (10.0.4.0/23 in Availability Zone 1 and 10.0.2.0/23 in Availability Zone 2) contain private instances in security groups and use route tables with Destination 10.0.0.0/20 → local and 0.0.0.0/0 → NAT. DynamoDB sits outside the VPC, beyond the Internet gateway.]
Logging VPC Traffic

Amazon VPC Flow Logs

§ Capture traffic flow details (accepted and rejected traffic) in your VPC
§ Can be enabled for VPCs, subnets, and ENIs
§ Logs are published to CloudWatch Logs

Use cases:
• Troubleshoot connectivity issues.
• Test network access rules.
• Monitor traffic.
• Detect and investigate security incidents.
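
A hedged boto3 sketch of enabling flow logs for a VPC (the VPC ID, log group, and IAM role ARN are placeholders):

import boto3

ec2 = boto3.client("ec2")

# Capture both accepted and rejected traffic for the whole VPC and publish it to CloudWatch Logs.
ec2.create_flow_logs(
    ResourceIds=["vpc-0123456789abcdef0"],      # placeholder VPC ID
    ResourceType="VPC",
    TrafficType="ALL",                          # ACCEPT, REJECT, or ALL
    LogGroupName="my-vpc-flow-logs",            # placeholder CloudWatch Logs group
    DeliverLogsPermissionArn="arn:aws:iam::123456789012:role/flow-logs-role",  # placeholder role
)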
AWS VPC (Single Public Subnet)

Your instances run in a private, isolated section of the AWS cloud with direct access to the Internet. Network access control lists and security groups can be used to provide strict control over inbound and outbound network traffic to your instances.
AWS VPC (Single Private Subnet H/W VPN)

Your instances run in a private, isolated section of the AWS cloud with a private subnet whose instances are not addressable from the Internet. You can connect this private subnet to your corporate data center via an IPsec Virtual Private Network (VPN) tunnel.
AWS VPC

— This is a diagram of a typical scenario you can create; full details can be found in the VPC Scenario 2 user guide (see References).
AWS VPC

You will need to create the following security groups:

• WebServerSG—For the web servers in the public subnet
• DBServerSG—For the database servers in the private subnet
AWS VPC

— From the Your VPCs screen, note the details for your VPC – VPC ID, DHCP Options set, Main Route table, Default Network ACL.
— Also note the Subnets, Internet Gateways and Elastic IPs that have been created for your VPC. You should clearly name your VPC resources.
AWS VPC

— You can choose whether you want to work with Windows or Linux machines, or a mixture of both.
— Launch a web server in the Public subnet in the VPC. Make sure you enable Auto-Assign Public IP address.
— You should put some meaningful details in the Instance details tags key – value screen, e.g. RFwebserver.
— Launch the server in the relevant Security Group, e.g. RFWebServerSG.
— You will see both the Private and Public IP addresses assigned to this server. You can configure a webserver and connect to the Public IP address from your own desktop.
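
As an optional scripted alternative to the console steps above, a hedged boto3 sketch of launching such a web server (the AMI, key pair, subnet, and security group IDs are placeholders):

import boto3

ec2 = boto3.client("ec2")

# Launch one instance into the public subnet with a public IP and the web server security group.
ec2.run_instances(
    ImageId="ami-0123456789abcdef0",            # placeholder AMI ID
    InstanceType="t2.micro",
    KeyName="my-keypair",                       # placeholder key pair
    MinCount=1,
    MaxCount=1,
    NetworkInterfaces=[{
        "DeviceIndex": 0,
        "SubnetId": "subnet-0123456789abcdef0", # placeholder public subnet
        "AssociatePublicIpAddress": True,
        "Groups": ["sg-0123456789abcdef0"],     # placeholder: RFWebServerSG
    }],
    TagSpecifications=[{
        "ResourceType": "instance",
        "Tags": [{"Key": "Name", "Value": "RFwebserver"}],
    }],
)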
AWS VPC

— Now you can launch a Linux instance – you can choose a basic AMI. This instance must be launched in the private subnet and into the DBServerSG.
— You DO NOT want to Auto-Assign a Public IP address to this server.
— If you enable ssh from the WebServerSG to the DBServerSG, you will be able to log in from the server in the Public subnet to the server in the Private subnet.
— Once you ssh from your webserver instance to your dbinstance, you can check your public IP address using wget http://ipinfo.io/ip -qO -
— What is the Public IP address of the server in your Private Network? What does it correspond with?
AWS VPC

— When you have investigated this VPC Scenario you can


terminate your instances in the Public and Private subnets.
— In this exercise you created your own VPC with Public and
Private subnets.
— Note you can delete your VPC and all associated resources
(NAT gateway, instances, Elastic IPs, etc.)
References

— http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_Scenario2.html
— How to securely manage AWS credentials
  https://blogs.aws.amazon.com/security/post/Tx3D6U6WSFGOK2H/A-New-and-Standardized-Way-to-Manage-Credentials-in-the-AWS-SDKs
— How to login securely to Linux AMI in VPC Private subnet using ssh agent forwarding
  https://blogs.aws.amazon.com/security/post/Tx3N8GFK85UN1G6/Securely-connect-to-Linux-instances-running-in-a-private-Amazon-VPC
