Photo credit: https://unsplash.com/photos/c_4LLTtE3mc

Know the Important AWS S3 Properties as an AWS Cloud Practitioner

What Is AWS Simple Storage Service (S3)?

Amazon Simple Storage Service, or simply AWS S3, is an object storage service in Amazon Web Services. It boasts industry-leading scalability, data availability, security, performance, and 99.999999999% (11 9s) of data durability.

It has a simple web services interface that you can use to store and retrieve any amount of data, at any time, from anywhere on the web. Amazon S3 gives any user access to the same highly scalable, reliable, fast, inexpensive data storage infrastructure that Amazon uses to run its own global network of websites. The service aims to maximize benefits of scale and to pass those benefits on to users.

General Cloud Storage

The cloud is a vast platform that can hold enormous amounts of data, usually at a lower cost than traditional storage devices. This data can be your files, documents, multimedia, computer data, etc., stored in secure and sometimes shared resources. Unlike relational data stored in a database, which needs tables, object data does not need a table or any relational construct, because its size and structure suit such use cases.
Some examples of cloud storage are DigitalOcean Spaces, Firebase Cloud Storage, Google Drive, Azure Storage, AWS S3, AWS file system storage services, etc.

Durability and Security

One important feature of AWS Simple Storage Service is its ability to redundantly store objects on multiple devices across a minimum of three Availability Zones in an AWS Region. This is one of the best disaster recovery measures any cloud storage service provides. Also, S3 data is secured by encrypting it both at rest and in transit, with the aid of client-side encryption or the server-side encryption property.
S3 also has the capacity to grant or deny access using AWS IAM roles, policies, or the Access Control List.

Components and Structures of S3

Let’s talk about what makes S3 a unique object storage service among its competition. These components are responsible for controlling access, permissions, and the properties of AWS S3.

Buckets

S3 buckets are the root-level folders created in S3. You should know that a single AWS account has a default limit of 100 buckets; however, you can request more if needed through AWS Service Quotas. Any subfolder you create in a bucket is referred to as a folder.
Interestingly, access to buckets can be modified using the bucket policy, the Access Control List (ACL), and Identity and Access Management (IAM). Take note that an S3 bucket name has to be unique because AWS S3 is a global service: your bucket name cannot be in use by any other AWS account anywhere in the world. So, when creating a name, be creative about it.
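
If you prefer working programmatically, here is a minimal sketch of creating a bucket with the boto3 Python SDK; the bucket name and Region are placeholders, and credentials are assumed to be configured in your environment.

import boto3

# S3 client pinned to a Region (placeholder); credentials come from your environment
s3 = boto3.client("s3", region_name="eu-west-1")

# Bucket names must be globally unique, so this placeholder will need changing
s3.create_bucket(
    Bucket="my-uniquely-named-bucket-2024",
    CreateBucketConfiguration={"LocationConstraint": "eu-west-1"},
)
# Note: for us-east-1, omit CreateBucketConfiguration entirely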

AWS S3 bucket name
Objects

Files (media, documents, pictures, etc.) stored in a bucket are referred to as objects. Objects can be uploaded into a bucket or downloaded from a bucket, and each object can be accessed using its object URL.
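
As a quick illustration, here is a minimal boto3 sketch of uploading and downloading an object; the bucket name, object key, and file names are placeholders.

import boto3

s3 = boto3.client("s3")

# Upload a local file as an object (the key is the object's name inside the bucket)
s3.upload_file("report.pdf", "my-uniquely-named-bucket-2024", "documents/report.pdf")

# Download the same object back to a local file
s3.download_file("my-uniquely-named-bucket-2024", "documents/report.pdf", "report-copy.pdf")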

simple storage objects storage
Versioning

This is one way AWS protects the data in each bucket: you can easily recover from both unintended user actions and application failures. Versioning can be used to preserve, retrieve, and restore every version of every object stored in your Amazon S3 bucket, so you can easily retrieve older versions of a stored object. To use versioning, you need to enable it on the bucket; otherwise, only the most recent version of an object is retrieved by default. You can enable versioning at the point of creating your bucket.
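
Versioning can also be switched on after the bucket exists. Here is a minimal boto3 sketch; the bucket name and key prefix are placeholders.

import boto3

s3 = boto3.client("s3")

# Turn on versioning for an existing bucket
s3.put_bucket_versioning(
    Bucket="my-uniquely-named-bucket-2024",
    VersioningConfiguration={"Status": "Enabled"},
)

# List the stored versions of a single object key
versions = s3.list_object_versions(
    Bucket="my-uniquely-named-bucket-2024",
    Prefix="documents/report.pdf",
)
for v in versions.get("Versions", []):
    print(v["VersionId"], v["IsLatest"])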

S3 bucket versioning
Object Lock

This feature adds a layer of security to your S3 objects against accidental deletion and object changes, using the Write-Once-Read-Many (WORM) model. Object Lock can work for a fixed amount of time or indefinitely on an object. Object Lock works with versioned objects, so versioning must be enabled to use it. You can enable this when creating your bucket.

An Object Lock retention can be:
  • Retention Period: specifies a period of time during which Object Lock remains active on your object. During this time, the WORM model protects the object from deletion.
  • Legal Hold: this retention type offers the same protection level as the Retention Period, but with no expiry date for the lock. The object remains protected until you explicitly remove the hold. Retention periods and legal holds apply to individual object versions. Read more here.
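
Below is a hedged boto3 sketch of both retention types. It assumes the bucket is created with Object Lock enabled; the bucket name, key, date, and mode are placeholders.

import boto3
from datetime import datetime, timezone

s3 = boto3.client("s3", region_name="eu-west-1")

# Object Lock can only be enabled at bucket creation time
s3.create_bucket(
    Bucket="my-locked-bucket-2024",
    CreateBucketConfiguration={"LocationConstraint": "eu-west-1"},
    ObjectLockEnabledForBucket=True,  # versioning is enabled automatically
)

s3.upload_file("audit.log", "my-locked-bucket-2024", "logs/audit.log")

# Retention Period: protect this version until a fixed date (GOVERNANCE or COMPLIANCE mode)
s3.put_object_retention(
    Bucket="my-locked-bucket-2024",
    Key="logs/audit.log",
    Retention={
        "Mode": "GOVERNANCE",
        "RetainUntilDate": datetime(2025, 1, 1, tzinfo=timezone.utc),
    },
)

# Legal Hold: indefinite protection until the hold is explicitly removed
s3.put_object_legal_hold(
    Bucket="my-locked-bucket-2024",
    Key="logs/audit.log",
    LegalHold={"Status": "ON"},
)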

Object lock settings
Access Control List (ACL)

The ACL manages access to buckets and objects in S3. It defines which AWS accounts or groups are granted access to S3, and also controls the type of access that is granted. Although the use of ACLs is discouraged for most use cases today, it is good to know them should you need them. Read more about ACLs here.
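
For completeness, here is a minimal boto3 sketch of a canned ACL; the names are placeholders, and note that newly created buckets disable ACLs by default through the Object Ownership setting, so this only works where ACLs are enabled.

import boto3

s3 = boto3.client("s3")

# Grant public read access to a single object using a canned ACL
s3.put_object_acl(
    Bucket="my-uniquely-named-bucket-2024",
    Key="images/logo.png",
    ACL="public-read",
)

# Inspect who currently has access to the object
acl = s3.get_object_acl(Bucket="my-uniquely-named-bucket-2024", Key="images/logo.png")
print(acl["Grants"])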

Object ownership
Bucket Policy

Like the ACL, a bucket policy is a resource-based policy that you can use to grant permissions to a bucket and the objects in it. You can use bucket policies to add or deny permissions for the objects in a bucket, and they can allow or deny requests based on the elements in the policy.
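
As an illustration, here is a hedged boto3 sketch that attaches a simple policy allowing public read of objects; the bucket name and policy scope are placeholders, not a recommendation.

import json

import boto3

s3 = boto3.client("s3")
bucket = "my-uniquely-named-bucket-2024"

# A resource-based policy allowing anyone to read (GetObject) objects in the bucket
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "PublicReadGetObject",
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": f"arn:aws:s3:::{bucket}/*",
        }
    ],
}

# Bucket policies are passed to the API as a JSON string
# (this call fails if Block Public Access is enabled on the bucket)
s3.put_bucket_policy(Bucket=bucket, Policy=json.dumps(policy))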

Bucket policy for S3 access AWS Simple storage service
Regions

When you create a bucket, you must select a specific Region for it to exist in. This, however, does not mean you can access your bucket only from that Region; the S3 namespace is global. It simply means that any data you upload to the bucket will be physically located in a data center in that Region. The Region also appears as an identifier appended to your bucket URL and, more importantly, it determines the latency of requests for your objects.
Best practice is to create the bucket in the Region closest to your customers (to reduce latency for them).

Encryption

This is another security feature that protects all data in Amazon S3, both at rest and in transit. Encryption converts your data to a difficult-to-read format so as to protect your information from unauthorized use. By default, all Amazon S3 buckets have encryption configured: objects are automatically encrypted using server-side encryption with Amazon S3 managed keys (SSE-S3), and this setting applies to all objects in your buckets. However, you can use client-side encryption and provide your own customer encryption keys. The AWS Key Management Service (KMS) can be used alongside server-side encryption to manage key rotation and the key lifecycle. When you use server-side encryption, Amazon S3 encrypts an object before saving it to disk and decrypts it when you download the object.
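
Here is a hedged boto3 sketch of setting a bucket's default server-side encryption to SSE-KMS; the bucket name and KMS key alias are placeholders.

import boto3

s3 = boto3.client("s3")

# Make SSE-KMS the default encryption for new objects in the bucket
s3.put_bucket_encryption(
    Bucket="my-uniquely-named-bucket-2024",
    ServerSideEncryptionConfiguration={
        "Rules": [
            {
                "ApplyServerSideEncryptionByDefault": {
                    "SSEAlgorithm": "aws:kms",
                    "KMSMasterKeyID": "alias/my-s3-key",  # placeholder KMS key alias
                },
                "BucketKeyEnabled": True,  # reduces the number of KMS requests
            }
        ]
    },
)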

AWS Keys encryption for simple storage service s3
CORS

A Cross-Origin Resource Sharing (CORS) configuration is a JSON document containing rules that identify the origins you will allow to access your bucket, the operations (HTTP methods) you will support for each origin, and other operation-specific information. You can add up to 100 rules to the configuration, which is attached to the bucket as the cors subresource. The methods you can allow in S3 CORS are GET, PUT, POST, DELETE, and HEAD. Below is a sample CORS configuration.

[
    {
        "AllowedHeaders": [
            "*"
        ],
        "AllowedMethods": [
            "PUT",
            "POST"
        ],
        "AllowedOrigins": [
            "http://www.yourorigin.com"
        ],
        "ExposeHeaders": [
            "x-amz-server-side-encryption"
        ]
    }
]
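
A configuration like the one above can also be applied programmatically. Here is a minimal boto3 sketch; the bucket name and origin are placeholders.

import boto3

s3 = boto3.client("s3")

# Attach the CORS rules shown above to the bucket as its cors subresource
s3.put_bucket_cors(
    Bucket="my-uniquely-named-bucket-2024",
    CORSConfiguration={
        "CORSRules": [
            {
                "AllowedHeaders": ["*"],
                "AllowedMethods": ["PUT", "POST"],
                "AllowedOrigins": ["http://www.yourorigin.com"],
                "ExposeHeaders": ["x-amz-server-side-encryption"],
                "MaxAgeSeconds": 3000,
            }
        ]
    },
)
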
Cross origin resource sharing cors aws s3
Lifecycle Rules

This feature defines a set of actions that AWS performs on a group of objects in an S3 bucket. Lifecycle rules can save you significant cost on your S3 usage by moving objects according to their usage and frequency of access within a specified time. For example, you can choose to move objects from the S3 Standard class to S3 Standard-Infrequent Access 60 days after creating them. There are two types of lifecycle actions:
  • Transition Actions: define when an object is moved from one class to another after a specified number of days.
  • Expiration Actions: define when an object expires, after which Amazon S3 deletes the expired object from the bucket on your behalf.
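
Here is a hedged boto3 sketch combining both action types, moving objects to Standard-IA after 60 days and expiring them after a year; the bucket name and key prefix are placeholders.

import boto3

s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="my-uniquely-named-bucket-2024",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "archive-then-expire",
                "Status": "Enabled",
                "Filter": {"Prefix": "logs/"},  # apply only to this key prefix
                # Transition action: move to Standard-IA 60 days after creation
                "Transitions": [{"Days": 60, "StorageClass": "STANDARD_IA"}],
                # Expiration action: delete objects 365 days after creation
                "Expiration": {"Days": 365},
            }
        ]
    },
)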

lifecycle rules
Replication Policy

Replication is the automatic, asynchronous copying of objects across Amazon S3 buckets, which can be owned by the same AWS account or by another AWS account. You can replicate objects to a single destination bucket or to multiple destination buckets. Cross-Region Replication automatically replicates new objects as they are written to the bucket. Alternatively, you can use Batch Replication to replicate existing objects to a different bucket on demand.
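
A hedged boto3 sketch of a replication rule is shown below; it assumes versioning is already enabled on both buckets and that an IAM role with replication permissions exists. All names and ARNs are placeholders.

import boto3

s3 = boto3.client("s3")

s3.put_bucket_replication(
    Bucket="my-source-bucket-2024",  # versioning must already be enabled here
    ReplicationConfiguration={
        # IAM role that S3 assumes to replicate objects (placeholder ARN)
        "Role": "arn:aws:iam::123456789012:role/s3-replication-role",
        "Rules": [
            {
                "ID": "replicate-everything",
                "Status": "Enabled",
                "Priority": 1,
                "Filter": {},  # empty filter = replicate all new objects
                "DeleteMarkerReplication": {"Status": "Disabled"},
                "Destination": {"Bucket": "arn:aws:s3:::my-destination-bucket-2024"},
            }
        ],
    },
)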

Static Web Hosting

AWS S3 has the capability to host a static website directly from the service without spinning up an EC2 instance. This is a cheaper way to host web applications with simple designs. The static website can be put behind the CloudFront CDN service to reduce latency.
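
Here is a hedged boto3 sketch of turning a bucket into a static website host; it assumes the index and error documents have already been uploaded and that the objects are publicly readable (for example via a bucket policy). Names are placeholders.

import boto3

s3 = boto3.client("s3")

# Enable static website hosting and point it at the site's entry pages
s3.put_bucket_website(
    Bucket="my-uniquely-named-bucket-2024",
    WebsiteConfiguration={
        "IndexDocument": {"Suffix": "index.html"},
        "ErrorDocument": {"Key": "error.html"},
    },
)
# The site is then served from the Region-specific website endpoint, e.g.
# http://<bucket>.s3-website-<region>.amazonaws.com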

Static web hosting feature
Storage Classes

A storage class represents the “classification” assigned to each object in S3. The cost and speed of retrieving objects from S3 therefore depend largely on the class they are configured to use. Some of the available storage classes include:

  • Standard
  • Standard-IA (Infrequent Access)
  • One Zone-IA (Infrequent Access)
  • Intelligent-Tiering
  • Glacier
  • Glacier Deep Archive

Properties: each storage class has varying attributes that dictate things like:
  • Storage cost, which is the per-GB cost of storing, transferring, or retrieving the object.
  • Object availability, which denotes how quickly and reliably you can retrieve an object from a class.
  • Object durability, which is how safe your data is in the event of an infrastructure failure or data loss.
  • Frequency of access to the object.

Also, every object is assigned a storage class, and Standard is the default. You can likewise change the storage class of an object at any time (for the most part), as shown in the sketch below.
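
As a quick sketch, the storage class can be set when an object is uploaded, or changed later by copying the object onto itself; this uses boto3, and all names are placeholders.

import boto3

s3 = boto3.client("s3")
bucket = "my-uniquely-named-bucket-2024"

# Upload straight into an infrequent-access class
s3.upload_file(
    "backup.zip", bucket, "backups/backup.zip",
    ExtraArgs={"StorageClass": "STANDARD_IA"},
)

# Change the class of an existing object by copying it over itself
s3.copy_object(
    Bucket=bucket,
    Key="backups/backup.zip",
    CopySource={"Bucket": bucket, "Key": "backups/backup.zip"},
    StorageClass="GLACIER",
    MetadataDirective="COPY",
)

Now, let’s talk about each class.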

Standard Storage Class
  • Designed for general, all-purpose storage.
  • It is the default storage option.
  • SLA guarantees 99.999999999% object durability (“eleven nines”).
  • SLA guarantees 99.99% object availability.
  • It is also the most expensive storage class due to its high availability and durability.
Standard-IA (Infrequent Access)
  • Designed for objects that you do not access frequently, but must be immediately available when accessed.
  • It uses multiple Availability Zones.
  • 99.999999999% object durability
  • 99.9% object availability
  • Standard-IA is less expensive than the standard storage class.
One Zone-IA (Infrequent Access)
  • Designed for objects that you do not access frequently, but must be immediately available when accessed.
  • It stores data in only one (1) Availability Zone.
  • 99.999999999% object durability.
  • 99.5% object availability.
  • It is 20% less expensive than the Standard-IA storage class.
Intelligent-Tiering
  • Designed to optimize costs by automatically moving stored data to the most cost-effective tier based on your usage. This means AWS will move an infrequently accessed object into a tier suitable for it to save costs.
  • 99.999999999% object durability (eleven nines)
  • 99.9% object availability
  • Pricing depends on the assigned storage class per time.
Intelligent tiering for aws simple storage service
Amazon S3 Glacier
  • Amazon S3 enables you to utilize Amazon S3 Glacier’s extremely low-cost storage service for data archival.
  • It has an object durability of 99.999999999%
  • It also offers 99.5% object availability.
  • Compared to the Standard class, object retrieval is much slower.
AWS S3 Glacier Deep Archive
  • S3 Glacier Deep Archive is an Amazon S3 storage class that provides secure and durable object storage for long-term retention of data that is accessed once or twice a year.
  • 99.999999999% object durability (eleven nines)
  • 99.9% object availability
  • It is the lowest cost storage class.
Conclusion

In this post, we discussed the AWS Simple Storage Service and some of its major features and components. All of these features can be configured using the AWS Management Console, the AWS Command Line Interface (CLI), the S3 REST API, or the AWS SDKs.
I recommend you check out the post on how to host websites using S3 and CloudFront.

