AWS
Amazon S3 is designed for 99.999999999% durability of objects over a given year. This is eleven 9s in total, i.e. nine 9s after the decimal point.
Amazon S3 supports read-after-write consistency when we create a new object by PUT. It means that as soon as we write a new object, we can read it.
Amazon S3 supports eventual consistency when we overwrite an existing object by PUT. Eventual consistency means that the effect of the overwrite is not immediate but becomes visible after some time.
For deletion of an object, Amazon S3 supports eventual consistency after DELETE.
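A minimal boto3 sketch of the read-after-write case for a new object (the bucket and key names are placeholder assumptions):

```python
import boto3

s3 = boto3.client("s3")

# Write a brand-new object, then read it back immediately.
# "my-bucket" is a placeholder; the bucket must already exist.
s3.put_object(Bucket="my-bucket", Key="notes/hello.txt", Body=b"hello")

obj = s3.get_object(Bucket="my-bucket", Key="notes/hello.txt")
print(obj["Body"].read())  # b'hello' -- visible right away for a new object
```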
425. What are the different tiers in Amazon S3 storage?
1. S3 Standard: In this tier, S3 provides durable, highly available storage for frequently accessed data.
2. S3 Standard-Infrequent Access (IA): In this tier, S3 provides durable storage that is immediately available when needed, but the files are accessed infrequently, so it costs less than S3 Standard.
Amazon S3 supports storing objects or files of up to 5 terabytes. For a file larger than 100 megabytes, AWS recommends the Multipart Upload capability. By using Multipart Upload we can upload a large file in multiple parts.
Each part is uploaded independently, and it does not matter in what order the parts are uploaded. The parts can even be uploaded in parallel to decrease the overall time. Once all the parts are uploaded, this utility assembles them into the single object or file from which they were created.
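With boto3, multipart upload can be driven through a transfer configuration that splits a file into parts and uploads them in parallel once it crosses a size threshold. A minimal sketch, where the bucket, key, and file names are placeholder assumptions:

```python
import boto3
from boto3.s3.transfer import TransferConfig

# Files above multipart_threshold are split into parts of
# multipart_chunksize and uploaded with max_concurrency workers.
config = TransferConfig(
    multipart_threshold=100 * 1024 * 1024,  # 100 MB
    multipart_chunksize=25 * 1024 * 1024,   # 25 MB per part
    max_concurrency=4,                      # upload parts in parallel
)

s3 = boto3.client("s3")
# "large-file.bin" and "my-bucket" are placeholder names.
s3.upload_file("large-file.bin", "my-bucket", "backups/large-file.bin", Config=config)
```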
If the bucket in which the object exists is version-controlled, we can specify the version of the object that we want to delete. The other versions of the object still exist within the bucket.
If we do not specify the version and just pass the key name, Amazon S3 inserts a delete marker and returns its version ID. The object then no longer appears in the bucket listing.
If the bucket has Multi-Factor Authentication (MFA) Delete enabled, the DELETE request will fail unless we provide a valid MFA token.
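A minimal boto3 sketch of both delete styles on a versioned bucket (bucket, key, and version ID are placeholder assumptions):

```python
import boto3

s3 = boto3.client("s3")

# Delete only one specific version; other versions stay in the bucket.
s3.delete_object(Bucket="my-bucket", Key="report.pdf", VersionId="example-version-id")

# Delete without a version: S3 inserts a delete marker and
# returns the version ID of that marker.
resp = s3.delete_object(Bucket="my-bucket", Key="report.pdf")
print(resp.get("VersionId"))  # version ID of the delete marker
```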
428. What is the use of Amazon Glacier?
Amazon Glacier is an extremely low-cost, cloud-based storage service provided by Amazon.
We mainly use Amazon Glacier for long-term backup purposes.
Amazon Glacier can be used for storing data archives for months, years, or even decades.
It can also be used for long-term immutable storage to meet regulatory and archiving requirements. It provides Vault Lock support for this purpose. With Vault Lock, we write data once but can read the same data many times (a write once, read many model).
One use case is storing certificates that can be issued only once, where only the original person keeps the main copy.
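As a rough sketch of how a Vault Lock might be initiated with boto3, using a WORM-style policy that denies deletion of archives younger than one year (the vault name, account ID, Region, and policy details are illustrative assumptions):

```python
import boto3
import json

glacier = boto3.client("glacier")

# Deny deleting archives younger than 365 days (placeholder vault/account).
lock_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "deny-deletes-for-365-days",
        "Principal": "*",
        "Effect": "Deny",
        "Action": "glacier:DeleteArchive",
        "Resource": "arn:aws:glacier:us-east-1:123456789012:vaults/compliance-vault",
        "Condition": {"NumericLessThan": {"glacier:ArchiveAgeInDays": "365"}},
    }],
}

resp = glacier.initiate_vault_lock(
    accountId="-",  # "-" means the account that owns the credentials
    vaultName="compliance-vault",
    policy={"Policy": json.dumps(lock_policy)},
)
print(resp["lockId"])  # needed to complete the lock within 24 hours
```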
We can use Cross-Region Replication in Amazon S3 to make copies of an object across buckets in different AWS Regions. This copying takes place automatically and asynchronously.
To make use of Cross-Region Replication, we have to add a replication configuration to our source bucket in S3. It will create exact replicas of the objects from the source bucket in destination buckets in different Regions.
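A minimal boto3 sketch of such a replication configuration, where the bucket names and IAM role ARN are placeholder assumptions and versioning is already enabled on both buckets:

```python
import boto3

s3 = boto3.client("s3")

# Replicate all objects from source-bucket to destination-bucket
# (bucket names and the IAM role ARN are placeholders).
s3.put_bucket_replication(
    Bucket="source-bucket",
    ReplicationConfiguration={
        "Role": "arn:aws:iam::123456789012:role/s3-replication-role",
        "Rules": [{
            "ID": "replicate-everything",
            "Prefix": "",  # empty prefix = all objects
            "Status": "Enabled",
            "Destination": {"Bucket": "arn:aws:s3:::destination-bucket"},
        }],
    },
)
```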
Some of the main use cases of Cross-Region Replication are as follows:
1. Compliance: Sometimes laws or regulatory requirements demand that data be stored at geographically distant locations. This kind of compliance can be achieved by using AWS Regions that are spread across the world.
2. Failover: At times, we want to minimize the probability of a system failure due to a complete blackout in a Region. We can use Cross-Region Replication in such a scenario.
3. Latency: If we are serving multiple geographies, it makes sense to replicate objects in the geographic Regions that are closer to the end customers. This helps in reducing latency.
431. Can we do Cross-Region Replication in Amazon S3 without enabling versioning on a bucket?
No, we have to enable versioning on a bucket to perform Cross-Region Replication.
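For example, versioning could be enabled on both buckets with boto3 before setting up replication (bucket names are placeholder assumptions):

```python
import boto3

s3 = boto3.client("s3")

# Versioning must be enabled on both the source and destination buckets.
for bucket in ("source-bucket", "destination-bucket"):
    s3.put_bucket_versioning(
        Bucket=bucket,
        VersioningConfiguration={"Status": "Enabled"},
    )
```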
If our application is content-rich and used across multiple locations, we can use Amazon CloudFront to increase its performance. Some of the techniques used by Amazon CloudFront are as follows:
Caching: Amazon CloudFront caches copies of our application's content at locations closer to our viewers. With this caching, our users get our content faster, and the load on our main server decreases. (A small sketch of influencing this caching from the origin side appears after this list.)
Edge / Regional Locations: CloudFront uses a global network of Edge and Regional Edge locations to cache our content. These locations cater to almost all geographical areas across the world.
Persistent Connections: In certain cases, CloudFront keeps persistent connections with the main server so that it can fetch content quickly.
Other Optimizations: Amazon CloudFront also uses optimization techniques such as a wider TCP initial congestion window to deliver a high-performance experience.
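One way to influence edge caching from the origin side is to store objects with a Cache-Control header, which CloudFront honors when deciding how long to keep a copy. A minimal boto3 sketch, where the bucket name, key, and file are placeholder assumptions:

```python
import boto3

s3 = boto3.client("s3")

# Upload a static asset with a Cache-Control header so CloudFront
# edge locations can cache it for one day (names are placeholders).
with open("logo.png", "rb") as f:
    s3.put_object(
        Bucket="my-origin-bucket",
        Key="static/logo.png",
        Body=f,
        ContentType="image/png",
        CacheControl="max-age=86400",
    )
```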
434. What is the mechanism behind Regional Edge Cache in
Amazon CloudFront?
A Regional Edge Cache location lies between the main web server and the global Edge locations. When the popularity of an object or piece of content decreases, a global Edge location may evict it from its cache.
A Regional Edge Cache location, however, maintains a larger cache, so an object can stay there for a longer time. Because of this, CloudFront does not have to go back to the main web server: when it does not find an object at a global Edge location, it looks for it in the Regional Edge Cache.
This improves the performance of serving content to our users in Amazon CloudFront.
2. Content: With streaming, our entire content does not stay on a user's device. Users get only the part they are watching. Once the session is over, the content is removed from the user's device.
3. Cost: With streaming, there is no need to download all the content to a user's device. A user can start viewing content as soon as some part of it is available. This saves costs, since we do not have to deliver a large media file before each viewing session starts.
In Amazon CloudFront we can detect the country from which end users are requesting our content. Amazon CloudFront passes this information to our origin server in the CloudFront-Viewer-Country HTTP header.
Based on the country, we can generate different versions of the same content. These versions can be cached at the Edge locations that are closer to the end users of that country.
In this way we are able to target our end users based on their geographic location.
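As a rough illustration of the origin side, here is a minimal sketch that reads this header, assuming a Flask application and that CloudFront has been configured to forward CloudFront-Viewer-Country to the origin (the route and fallback country are illustrative):

```python
from flask import Flask, request

app = Flask(__name__)

@app.route("/banner")
def banner():
    # CloudFront adds this header when configured to forward it.
    country = request.headers.get("CloudFront-Viewer-Country", "US")
    # Serve a country-specific variant; CloudFront can cache each
    # variant separately when the header is part of the cache key.
    return f"Content localized for {country}"
```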
1. Device Detection
2. Protocol Detection
3. Geo Targeting
4. Cache Behavior
5. Cross Origin Resource Sharing
6. Multiple Origin Servers
7. HTTP Cookies
8. Query String Parameters
9. Custom SSL
440. What are the security mechanisms available in Amazon S3?
Amazon S3 is a very secure storage service. Some of the main security mechanisms
available in Amazon S3 are as follows:
1. Access: When we create a bucket or an object, only the owner gets access to the bucket and its objects.
3. Access Control List: We can create Access Control Lists (ACLs) to grant selective permissions to users and groups.
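For example, a canned ACL can be applied to a single object with boto3; the bucket and key names here are placeholder assumptions:

```python
import boto3

s3 = boto3.client("s3")

# Apply a canned ACL so anyone can read this one object
# (bucket and key names are placeholders).
s3.put_object_acl(Bucket="my-bucket", Key="public/report.pdf", ACL="public-read")
```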