Deploy a Secure Static Site with AWS & Terraform

14. December 2018

There are many uses for static websites. A static site is the simplest form of website, though every website ultimately consists of delivering HTML, CSS and other resources to a browser. With a static website, initial page content is delivered the same to every user, regardless of how they’ve interacted with your site previously. There’s no database, authentication or anything else involved in sending the site to the user – just a straight HTTPS connection and some text content. This content can benefit from caching on servers closer to its users for faster delivery; it will generally also be lower cost, as the servers delivering this content do not themselves need to interpret scripting languages or make database connections on behalf of the application.

The static website now has another use: modern tooling makes it possible to build highly interactive in-browser applications with JavaScript frameworks (such as React, Vue or Angular) which manage client interaction, maintain local data and interact with the web service via small but often frequent API calls. These systems decouple front-end applications from back-end services and allow those back-ends to be written in multiple languages or as small siloed applications, often called microservices. Microservices may take advantage of modern back-end technologies such as containers (via Docker and/or Kubernetes) and “serverless” providers like AWS Lambda.

People deploying static sites fall into these two very different categories – for one, the site is the whole of their business; for the other, the static site is a minor part supporting the API. However, both categories share similar requirements. In this article we explore deploying a static site with the following attributes:

  • Must work at the root domain of a business, e.g., example.com
  • Must redirect from the common (but unnecessary) www. subdomain to the root domain
  • Must be served via HTTPS (and upgrade HTTP to HTTPS)
  • Must support “pretty” canonical URLs – e.g., example.com/about-us rather than example.com/about-us.html
  • Must not cost anything when not being accessed (except for domain name costs)

AWS Service Offerings

We achieve these requirements through use of the following AWS services:

  • S3
  • CloudFront
  • ACM (AWS Certificate Manager)
  • Route53
  • Lambda

This may seem like quite a lot of services to host a simple static website; let’s review and summarise why each item is being used:

  • S3 – object storage; allows you to put files in the cloud. Other AWS users or AWS services may be permitted access to these files. They can be made public. S3 supports website hosting, but only via HTTP. For HTTPS you need…
  • CloudFront – content delivery system; can sit in front of an S3 bucket or a website served via any other domain (doesn’t need to be on AWS) and deliver files from servers close to users, caching them if allowed. Allows you to import HTTPS certificates managed by…
  • ACM – generates and stores certificates (you can also upload your own). Will automatically renew certificates which it generates. For generating certificates, your domain must be validated via adding custom CNAME records. This can be done automatically in…
  • Route53 – AWS nameservers and DNS service. R53 replaces your domain provider’s nameservers (at the cost of $0.50 per month per domain) and allows both traditional DNS records (A, CNAME, MX, TXT, etc.) and “alias” records which map to a specific other AWS service – such as S3 websites or CloudFront distributions. Thus an A record on your root domain can link directly to CloudFront, and the CNAMEs to validate your ACM certificate can also be provisioned automatically.
  • Lambda – functions as a service. Lambda lets you run custom code in response to events, which can come directly or from a variety of other AWS services. Crucially, you can attach a Lambda function to CloudFront, manipulating requests or responses as they’re received from or sent to your users. This is how we’ll make our URLs look nice.

Hopefully, that gives you some understanding of the services. You could cut out CloudFront and ACM if you didn’t care about HTTPS, but there’s a worldwide push for HTTPS adoption to provide improved security for users, and browsers including Chrome now mark pages not served via HTTPS as “insecure”.

All this is well and good, but whilst AWS is powerful, its console leaves much to be desired, and setting up one site can take some time – replicating it for multiple sites is as much an exercise in memory and box-ticking as it is in technical prowess. What we need is a way to do this once, or even better have somebody else do it once, and then replicate it as many times as we need.

Enter Terraform from HashiCorp

One of the most powerful parts of AWS isn’t obvious when you first start using the console to manage your resources. AWS has a super powerful API that drives pretty much everything: it’s key to so much of their automation, to the entirety of their security model, and to tools like Terraform.

Terraform from HashiCorp is “Infrastructure-as-Code” or IaC. It lets you define resources on a variety of cloud providers and then run commands to:

  • Check the current state of your environment
  • Make required changes such that your actual environment matches the code you’ve written

In code form, Terraform uses blocks of code called resources:

resource "aws_s3_bucket" "some-internal-reference" {
  bucket = "my-bucket-name"
}

Each resource can include variables (documented on the provider’s website), and these can be text, numbers, true/false, lists (of the above) or maps (basically like subresources with their variables).
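
For illustration, here is a hedged sketch showing those value types on a single resource (these are standard aws_s3_bucket arguments, but this particular bucket isn’t part of the article’s modules):

resource "aws_s3_bucket" "example" {
  bucket        = "my-bucket-name"    # text
  force_destroy = false               # true/false

  tags = {                            # map
    Environment = "production"
  }

  cors_rule {
    allowed_methods = ["GET", "HEAD"] # list
    allowed_origins = ["*"]
  }
}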

Terraform is distributed as pre-built binaries (it’s also open source, written in Go, so you can build it yourself) that you can run simply by downloading them and making them executable. To work with AWS, you need to define a “provider”, which is formatted similarly to a resource:

provider "aws" {
}

To call any AWS API (via the command line, Terraform or a language of your choice) you’ll need to generate an access key and secret key for the account you’d like to use. That’s beyond the scope of this article, but since you should avoid hardcoding those credentials into Terraform, and since you’ll be well served by having the AWS CLI available anyway, skip over to the AWS CLI setup instructions and configure it with the correct keys before continuing.

(NB: in this step you’re best provisioning an account with admin rights, or at least full access to IAM, S3, Route53, CloudFront, ACM & Lambda. However, don’t be tempted to create access keys for your root account – AWS recommends against this.)
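
If you haven’t done this before, a minimal sketch of that setup using a named profile looks like the following (the profile name is a placeholder):

aws configure --profile my-terraform-profile
# You'll be prompted for the access key ID, secret access key, default region
# and output format; the values are stored in ~/.aws/credentials and ~/.aws/config.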

Now that you’ve got your system set up to use AWS programmatically, installed Terraform and been introduced to the basics of its syntax, it’s a good time to look at our code on GitHub.

Clone the repository above; you’ll see we have one file in the root (main.tf.example) and then a directory called modules. One of the best parts of Terraform is modules and how they behave. Modules allow one user to define a specific set of infrastructure resources that either relate directly to each other or interact by being on the same account. These modules can define variables allowing some aspects (names, domains, tags) to be customised, whilst other items that may be necessary for the module to function (like a certain configuration of a CloudFront distribution) are fixed.

To start off, run bash ./setup, which will copy the example file to main.tf, ensure your local Terraform installation has the correct providers (AWS and file archiving) and set up the modules. In main.tf you’ll then see a suggested setup using three modules. Of course, you’d be free to remove main.tf entirely and use each module in its own right, but for this tutorial it helps to have a complete picture.

At the top of the main.tf file, three variables are defined which you’ll need to fill in correctly (an illustrative example follows this list):

  1. The first is the domain you wish to use – it can be your root domain (example.com) or any sort of subdomain (my-site.example.com).
  2. Second, you’ll need the Zone ID associated with your domain on Route 53. Each Route 53 domain gets a zone ID which relates to AWS’ internal domain mapping system. To find your Zone ID visit the Route53 Hosted Zones page whilst signed in to your AWS account and check the right-hand column next to the root domain you’re interested in using for your static site.
  3. Finally choose a region; if you already use AWS you may have a preferred region, otherwise, choose one from the AWS list nearest to you. As a note, it’s generally best to avoid us-east-1 where possible, as on balance this tends to have more issues arise due to its centrality in various AWS services.
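
For example, a filled-in set of values might look roughly like this (the variable names here are purely illustrative; use whichever names main.tf.example actually declares):

variable "site_domain" {
  default = "example.com"
}

variable "route53_zone_id" {
  default = "Z1234567890ABC"
}

variable "aws_region" {
  default = "eu-west-1"
}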

Now for the fun part. Run terraform plan – if your AWS CLI environment is set up the plan should execute and show the creation of a whole list of resources – S3 Buckets, CloudFront distributions, a number of DNS records and even some new IAM roles & policies. If this bit fails entirely, check that the provider entity in main.tf is using the right profile name based on your ~/.aws/credentials file.
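
For reference, a provider block pinned to a named profile might look like this (the profile and region values are placeholders):

provider "aws" {
  region  = "eu-west-1"
  profile = "my-terraform-profile"
}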

Once the plan has run and told you it’s creating resources (it shouldn’t say updating or destroying at this point), you’re ready to go. Run terraform apply – this basically does another plan, but at the end, you can type yes to have Terraform create the resources. This can take a while as Terraform has to call various AWS APIs and some are quicker than others – DNS records can be slightly slower, and ACM generation may wait until it’s verified DNS before returning a positive response. Be patient and eventually it will inform you that it’s finished, or tell you if there have been problems applying.

If the plan or apply options have problems you may need to change some of your variables based on the following possible issues:

  • Names of S3 buckets must be globally unique – so if anyone in the world has a bucket with the name you want, you can’t have it. A good system is to prefix buckets with your company name or suffix them with random characters. By default, the system names your buckets for you, but you can override this.
  • You shouldn’t have an A record for your root or www. domain already in Route53.
  • You shouldn’t have an ACM certificate for your root domain already.

It’s safe (in the case of this code at least) to re-run Terraform if problems have occurred and you’ve tried to fix them – it will only modify or remove resources it has already created, so other resources on the account are safe.

Go into the AWS console and browse S3, CloudFront and Route53, and you should see your various resources created. You can also view the Lambda function and the ACM certificate, but be aware that for the former you’ll need to be in the specific region you chose to run in, and for the latter you must select us-east-1 (N. Virginia).

What now?

It’s time to deploy a website. This is the easy part – you can use the S3 console to drag and drop files (remember to use the website bucket and not the logs or www redirect buckets), use awscli to upload yourself (via aws s3 cp or aws s3 sync) or run the example bash script provided in the repo which takes one argument, a directory of all files you want to upload. Be aware – any files uploaded to your bucket will immediately be public on the internet if somebody knows the URL!
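
For example, a typical sync invocation might look like this (the local directory, bucket name and profile are placeholders; use the website bucket Terraform actually created):

aws s3 sync ./public s3://example.com-website/ --profile my-terraform-profile
# add --delete to also remove remote files that no longer exist locally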

If you don’t have a website, check the “example-website” directory – running the bash script above without any arguments will deploy this for you. Once you’ve deployed something, visit your domain and, all being well, you should see your site. CloudFront distributions take a variable amount of time to set up, so in some cases it might be 15 minutes or so before the site works as expected.

Note also that CloudFront is set to cache files for 5 minutes; even a hard refresh won’t reload resource files like CSS or JavaScript, as CloudFront won’t fetch them again from your bucket until 5 minutes after first fetching them. During development you may wish to turn this off, which you can do in the CloudFront console by setting the TTL values to 0. Once you’re ready to go live, run terraform apply again and it will reconfigure CloudFront to the recommended settings.

Summary

With a minimal amount of work we now have a framework that can deploy a secure static site to any domain we choose in a matter of minutes. We could use this to deploy websites for marketing clients rapidly, publish a blog generated with a static site builder like Jekyll, or use it as the basis for a serverless web application using ReactJS delivered to the client and a back-end provided by AWS Lambda accessed via AWS API Gateway or (newly released) an AWS Application Load Balancer.

About the Author

Mike has been working in web application development for 10 years, including 3 years managing a development team for a property tech startup and before that 4 years building a real time application for managing operations at skydiving centres, as well as some time freelancing. He uses Terraform to manage all the AWS infrastructure for his current work and has dabbled in other custom AWS tools such as an improvement to the CloudWatch logging agent and a deployment tool for S3. You can find him on Twitter @m1ke and GitHub.

About the Editor

Jennifer Davis is a Senior Cloud Advocate at Microsoft. Jennifer is the coauthor of Effective DevOps. Previously, she was a principal site reliability engineer at RealSelf, developed cookbooks to simplify building and managing infrastructure at Chef, and built reliable service platforms at Yahoo. She is a core organizer of devopsdays and organizes the Silicon Valley event. She is the founder of CoffeeOps. She has spoken and written about DevOps, Operations, Monitoring, and Automation.


Athena Savior of Adhoc Analytics

06. December 2018

Introduction

Companies strive to attract customers by creating an excellent product with many features. Previously, taking a product from idea to reality took months or years; nowadays it can take a matter of weeks. Companies can fail fast, learn, and move ahead to make the product better. Data analytics often takes a back seat and becomes a bottleneck.

Some of the problems that cause bottlenecks are:

  • schema differences,
  • missing data,
  • security restrictions,
  • encryption

AWS Athena, an ad-hoc query tool, can alleviate these problems. Its main compelling characteristics include:

  • Serverless
  • Query Ease
  • Cost ($5 per TB of data scanned)
  • Availability
  • Durability
  • Performance
  • Security

Behind the scenes, Athena uses Hive and Presto to run analytical queries of any size against data stored in S3. Athena processes structured, semi-structured and unstructured data sets, including CSV, JSON, ORC, Avro, and Parquet. Athena drivers are available in multiple languages, including Java and Python.

Let’s examine a few different use cases with Athena.

Use cases

Case 1: Storage Analysis

Let us say you have a service where you store user data such as documents, contacts, videos, and images. You have an accounting system in a relational database, whereas user resources live in S3, orchestrated through metadata housed in DynamoDB. How do we get ad-hoc storage statistics for individual users, as well as across the entire customer base, broken down by various parameters and events?

Steps:

  • Create an AWS Data Pipeline to export the relational database data to S3
    • Data is persisted in S3 as CSV
  • Create an AWS Data Pipeline to export the DynamoDB data to S3
    • Data is persisted in S3 as JSON strings
  • Create a database in Athena
  • Create tables for the data sources
  • Run queries (a minimal sketch follows this list)
  • Clean up the resources
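
As a rough sketch of the Athena steps above, the DDL and an ad-hoc query might look like this (the database, table, columns and S3 location are placeholders; match them to what your pipelines actually export):

CREATE DATABASE IF NOT EXISTS storage_analysis;

-- Table over the relational-database export (CSV)
CREATE EXTERNAL TABLE storage_analysis.accounts (
  account_id string,
  plan       string,
  created_at string
)
ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
LOCATION 's3://my-export-bucket/accounts/';

-- Example ad-hoc query: accounts per plan
SELECT plan, COUNT(*) AS accounts
FROM storage_analysis.accounts
GROUP BY plan;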

Figure 1: Data Ingestion

Figure 2: Schema and Queries

Case 2: Bucket Inventory

Why is S3 usage growing out of sync with user-base changes? Do you know how your S3 bucket is being used? How many objects does it store? How many are duplicate files? How many have been deleted?

S3 Inventory helps to manage storage and provides audits and reports on the replication and encryption status of the objects in the bucket. Let us create a bucket, enable inventory, and perform the following steps.

Steps:

  • Go to the S3 console
  • Create buckets vijay-yelanji-insights for objects and vijay-yelanji-inventory for inventory.
  • Enable inventory
    • AWS generates reports into the inventory bucket at regular intervals according to the configured schedule.
  • Upload files
  • Delete files
  • Upload the same files again to check for duplicates
  • Create an Athena table pointing to vijay-yelanji-inventory
  • Run queries as shown in Figure 5 (a sketch follows this list) to understand S3 usage and take the necessary actions to reduce cost.
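
For example, if the inventory report includes the optional size and ETag fields, a duplicate-file check might look roughly like this (the table and column names are illustrative; match them to your inventory configuration):

SELECT etag, COUNT(*) AS copies, SUM(size) AS duplicated_bytes
FROM inventory.vijay_yelanji_inventory
GROUP BY etag
HAVING COUNT(*) > 1;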

Figure 3: S3 Inventory

Figure 4: Bucket Insights


Figure 5: Bucket Insight Queries

Case 3: Event comparison

Let’s say you are sending a stream of events to two different targets after pre-processing the events very differently, and you are seeing discrepancies in the data. How do you reconcile the event counts? What if events or data are missing? How do you resolve inconsistencies or quality issues?

If the data is stored in S3 and its format is supported by Athena, you can expose it as tables and identify the gaps, as shown in Figure 7.

Figure 6: Event Comparison

Steps:

  • Data is ingested into S3 as snappy-compressed or JSON files and forwarded to the legacy system of record
  • Data is ingested into S3 as CSV (columns separated by ‘|’) and forwarded to a new system of record
    • The event forwarder system consumes the source events and modifies the data before pushing it to the multiple targets.
  • Create Athena tables from the legacy source data and compare them with the problematic event forwarder data.


Figure 7: Comparison Inference

 

Case 4: API Call Analysis

If you have not enabled CloudWatch or set up your own ELK stack, but need to analyze service patterns such as total HTTP requests by type or 4XX and 5XX errors by call type, you can do so by enabling ELB access logs and reading them through Athena.

Figure 8: Calls Inference

Steps: follow the ELB access log collection guide at https://docs.aws.amazon.com/elasticloadbalancing/latest/classic/access-log-collection.html

You can do the same with CloudTrail logs; more information is at https://docs.aws.amazon.com/athena/latest/ug/cloudtrail-logs.html

 

Case 5: Python S3 Crawler

If you have tons of JSON data in S3 spread across directories and files and want to analyze its keys and values, all you need to do is use Python libraries like PyAthena or JayDeBeApi: decompress the snappy files (for example with snzip), collect the keys into a set data structure, and pass them as columns to Athena, as shown in Figure 10.
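
A minimal PyAthena sketch of that kind of query (the staging bucket, region, and table are placeholders):

from pyathena import connect

# Athena needs an S3 location to stage query results
cursor = connect(
    s3_staging_dir="s3://my-athena-results-bucket/",
    region_name="us-west-2",
).cursor()

cursor.execute("SELECT key_name, COUNT(*) AS occurrences "
               "FROM events.crawled_keys GROUP BY key_name")
for row in cursor.fetchall():
    print(row)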

Figure 9: Event Crawling

Figure 10: Events to Athena

Limitations

Athena has some limitations including:
  • Data must reside in S3.
  • To reduce the cost of the query and improve performance, data must be compressed, partitioned and converted to columnar formats.
  • User-defined functions, stored procedures, and many DDL statements are not supported.
  • If you are generating data continuously or have large data sets and need insights in real time or at high frequency, you should rely on analytics and visualization tools such as Redshift, Kinesis, EMR, Denodo, Spotfire, and Tableau.
  • Check the Athena FAQ to understand more about its benefits and limitations.

Summary

In this post, I shared how to leverage Athena to get analytics and minimize bottlenecks to product delivery. Be aware that some of the methods used were implemented when Athena was new; newer tools may have changed how best to solve these use cases. More recently, Athena has been integrated with Glue for building, maintaining, and running ETL jobs, and with QuickSight for visualization.

Reference

Athena documentation is at https://docs.aws.amazon.com/athena/latest/ug/what-is.html

About the Author

Vijay Yelanji (@VijayYelanji) is an architect at Asurion, based in San Mateo, CA, with more than 20 years of experience across various domains: cloud-enabled microservices supporting enterprise-level account, file, order, and subscription management systems; WebSphere integration servers and solutions; IBM enterprise storage solutions; Informix databases; and 4GL tools.

At Asurion, he was instrumental in designing and developing a multi-tenant, multi-carrier, highly scalable backup-and-restore mobile application using various AWS services.

You can download the Asurion Memories application for free at 

Recently, Vijay presented ‘Logging in AWS’ at the AWS Meetup in Mountain View, CA.

Many thanks to AnanthakrishnaChar, Kashyap and Cathy, Hui for their assistance in fine-tuning some of the use cases.

About the Editor

Jennifer Davis is a Senior Cloud Advocate at Microsoft. Jennifer is the coauthor of Effective DevOps. Previously, she was a principal site reliability engineer at RealSelf, developed cookbooks to simplify building and managing infrastructure at Chef, and built reliable service platforms at Yahoo. She is a core organizer of devopsdays and organizes the Silicon Valley event. She is the founder of CoffeeOps. She has spoken and written about DevOps, Operations, Monitoring, and Automation.


Vanquishing CORS with Cloudfront and Lambda@Edge

03. December 2018

If you’re deploying a traditional server-rendered web app, it makes sense to host static files on the same machine. The HTML, being server-rendered, will have to be served there, and it is simple and easy to serve css, javascript, and other assets from the same domain.

When you’re deploying a single-page web app (or SPA), the best choice is less obvious. A SPA consists of a collection of static files, as opposed to server-rendered files that might change depending on the requester’s identity, logged-in state, etc. The files may still change when a new version is deployed, but not for every request.

In a single-page web app, you might access several APIs on different domains, or a single API might serve multiple SPAs. Imagine you want to have the main site at mysite.com and some admin views at admin.mysite.com, both talking to api.mysite.com.

Problems with S3 as a static site host

S3 is a good option for serving the static files of a SPA, but it’s not perfect. It doesn’t support SSL, a requirement for any serious website in 2018. There are a couple of other deficiencies that you may encounter, namely client-side routing and CORS headaches.

Client-side routing

Most SPA frameworks rely on client-side routing. With client-side routing, every path should receive the content for index.html, and the specific “page” to show is determined on the client. It’s possible to configure this to use the fragment portion of the url, giving routes like /#!/login and /#!/users/253/profile. These “hashbang” routes are trivially supported by S3: the fragment portion of a URL is not interpreted as a filename. S3 just serves the content for /, or index.html, just like we wanted.

However, many developers prefer to use client-side routers in “history” mode (aka “push-state” or “HTML5” mode). In history mode, routes omit that #! portion and look like /login and /users/253/profile. This is usually done for SEO reasons, or just for aesthetics. Regardless, it doesn’t work with S3 at all. From S3’s perspective, those look like totally different files. It will fruitlessly search your bucket for files called /login or /users/253/profile. Your users will see 404 errors instead of lovingly crafted pages.

CORS headaches

Another potential problem, not unique to S3, is due to Cross-Origin Resource Sharing (CORS). CORS polices which routes and data are accessible from other origins. For example, a request from your SPA at mysite.com to api.mysite.com is considered cross-origin, so it’s subject to CORS rules. Browsers enforce that cross-origin requests are only permitted when the server at api.mysite.com sets headers explicitly allowing them.

Even when you have control of the server, CORS headers can be tricky to set up correctly. Some SPA tutorials recommend side-stepping the problem using webpack-dev-server’s proxy setting. In this configuration, webpack-dev-server accepts requests to /api/* (or some other prefix) and forwards them to a server (eg, http://localhost:5000). As far as the browser is concerned, your API is hosted on the same domain—not a cross-origin request at all.

Some browsers will also reject third-party cookies. If your API server is on a subdomain this can make it difficult to maintain a logged-in state, depending on your users’ browser settings. The same fix for CORS—proxying /api/* requests from mysite.com to api.mysite.com—would also make the browser see these as first-party cookies.

In production or staging environments, you wouldn’t be using webpack-dev-server, so you could see new issues due to CORS that didn’t happen on your local computer. We need a way to achieve similar proxy behavior that can stand up to a production load.

CloudFront enters, stage left

To solve these issues, I’ve found CloudFront to be an invaluable tool. CloudFront acts as a distributed cache and proxy layer. You make DNS records that resolve mysite.com to something.CloudFront.net. A CloudFront distribution accepts requests and forwards them to another origin you configure. It will cache the responses from the origin (unless you tell it not to). For a SPA, the origin is just your S3 bucket.

In addition to providing caching, SSL, and automatic gzipping, CloudFront is a programmable cache. It gives us the tools to implement push-state client-side routing and to set up a proxy for your API requests to avoid CORS problems.

Client-side routing

There are many suggestions to use CloudFront’s “Custom Error Response” feature in order to achieve pretty push-state-style URLs. When CloudFront receives a request to /login it will dutifully forward that request to your S3 origin. S3, remember, knows nothing about any file called login so it responds with a 404. With a Custom Error Response, CloudFront can be configured to transform that 404 NOT FOUND into a 200 OK where the content is from index.html. That’s exactly what we need for client-side routing!

The Custom Error Response method works well, but it has a drawback. It turns all 404s into 200s with index.html for the body. That isn’t a problem yet, but we’re about to set up our API so it is accessible at mysite.com/api/* (in the next section). It can cause some confusing bugs if your API’s 404 responses are being silently rewritten into 200s with an HTML body!

If you don’t need to talk to any APIs or don’t care to side-step the CORS issues by proxying /api/* to another server, the Custom Error Response method is simpler to set up. Otherwise, we can use Lambda@Edge to rewrite our URLs instead.

Lambda@Edge gives us hooks where we can step in and change the behavior of the CloudFront distribution. The one we’ll need is “Origin Request”, which fires when a request is about to be sent to the S3 origin.

We’ll make some assumptions about the routes in our SPA.

  1. Any request with an extension (eg, styles/app.css, vendor.js, or imgs/logo.png) is an asset and not a client-side route. That means it’s actually backed by a file in S3.
  2. A request without an extension is a SPA client-side route path. That means we should respond with the content from index.html.

If those assumptions aren’t true for your app, you’ll need to adjust the code in the Lambda accordingly. For the rest of us, we can write a lambda to say “If the request doesn’t have an extension, rewrite it to go to index.html instead”. Here it is in Node.js:
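
(A minimal sketch of such an origin-request handler follows; the repository’s actual function may differ in how it detects asset requests.)

'use strict';

// CloudFront Origin Request handler: if the requested path has no file
// extension, treat it as a client-side route and serve index.html instead.
exports.handler = (event, context, callback) => {
  const request = event.Records[0].cf.request;

  // e.g. "/users/253/profile" has no extension, "/styles/app.css" does
  const hasExtension = /\.[^/]+$/.test(request.uri);
  if (!hasExtension) {
    request.uri = '/index.html';
  }

  callback(null, request);
};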

Make a new Node.js Lambda, and copy that code into it. At this time, in order to be used with CloudFront, your Lambda must be deployed to the us-east-1 region. Additionally, you’ll have to click “Publish a new version” on the Lambda page. An unpublished Lambda cannot be used with Lambda@Edge.

Copy the ARN at the top of the page and paste it into the “Lambda function associations” section of your S3 origin’s Behavior. This is what tells CloudFront to call your Lambda when an Origin Request occurs.

Et Voila! You now have pretty SPA URLs for client-side routing.

Sidestep CORS Headaches

A single CloudFront “distribution” (that’s the name for the cache rules for a domain) can forward requests to multiple servers, which CloudFront calls “Origins”. So far, we only have one: the S3 bucket. In order to have CloudFront forward our API requests, we’ll add another origin that points at our API server.

Probably, you want to set up this origin with minimal or no caching. Be sure to forward all headers and cookies as well. We’re not really using any of CloudFront’s caching capabilities for the API server. Rather, we’re treating it like a reverse proxy.

At this point you have two origins set up: the original one for S3 and the new one for your API. Now we need to set up the “Behavior” for the distribution. This controls which origin responds to which path.

Choose /api/* as the Path Pattern to go to your API. All other requests will hit the S3 origin. If you need to communicate with multiple API servers, set up a different path prefix for each one.

CloudFront is now serving the same purpose as the webpack-dev-server proxy. Both frontend and API endpoints are available on the same mysite.com domain, so we’ll have zero issues with CORS.

Cache-busting on Deployment

The CloudFront cache makes our sites load faster, but it can cause problems too. When you deploy a new version of your site, the cache might continue to serve an old version for 10-20 minutes.

I like to include a step in my continuous integration deploy to bust the cache, ensuring that new versions of my asset files are picked up right away. Using the AWS CLI, this looks like
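
(The distribution ID below is a placeholder; look yours up in the CloudFront console or via the AWS CLI.)

aws cloudfront create-invalidation \
  --distribution-id E1234EXAMPLE \
  --paths "/*"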

About the Author

Brian Schiller (@bgschiller) is a Senior Software Engineer at Devetry in Denver. He especially enjoys teaching, and leads Code Forward, a free coding bootcamp sponsored by Devetry. You can read more of his writing at brianschiller.com.

About the Editor

Jennifer Davis is a Senior Cloud Advocate at Microsoft. Previously, she was a principal site reliability engineer at RealSelf and developed cookbooks to simplify building and managing infrastructure at Chef. Jennifer is the coauthor of Effective DevOps and speaks about DevOps, tech culture, and monitoring. She also gives tutorials on a variety of technical topics. When she’s not working, she enjoys learning to make things and spending quality time with her family.


Key AWS Concepts

02. December 2012

To kick off AWS Advent 2012 we’re going to take a tour of the main AWS concepts that people use.

Regions and Availability Zones

Regions are the top level compartmentalization of AWS services. Regions are geographic locations in which you create and run your AWS resources.

As of December 2012, there are eight regions:

  • N. Virginia – us-east-1
  • Oregon – us-west-2
  • N. California – us-west-1
  • Ireland – eu-west-1
  • Singapore – ap-southeast-1
  • Tokyo – ap-northeast-1
  • Sydney – ap-southeast-2
  • São Paulo – sa-east-1

Within a region there are multiple Availability Zones (AZs). An Availability Zone is analogous to a data center, and your AWS resources of certain types, within a region, can be created in one or more Availability Zones to improve redundancy within a region.

AZs are designed so that networking between them is low latency and fairly reliable, but ideally you’ll run your instances and services in multiple AZs as part of your architecture.

One thing to note about regions and pricing is that it varies by region for the same AWS service. us-east-1 is by far the most popular region, as it was the lowest cost for a long time, so most services built on EC2 tend to run in this region. us-west-2 recently had its EC2 cost set to match us-east-1, but not all services are available in this region at the same cost yet.

EC2

EC2 is the Elastic Compute Cloud. It provides you with a variety of compute instances with CPU, RAM, and disk allocations, on demand. Hourly pricing is the main way to pay for instances, but you can also reserve instances.

EC2 instances are created from AMIs (Amazon Machine Images), which serve as the base image for your instances. A number of operating systems are supported, including Linux, Windows Server, FreeBSD (on some instance types), and OmniOS.

There are two types of instance storage available.

  1. Ephemeral storage: Ephemeral storage is local to the instance host and the number of disks you get depends on the size of your instance. This storage is wiped whenever there is an event that terminates an instance, whether an EC2 failure or an action by a user.
  2. Elastic Block Store (EBS): EBS is a separate AWS service, but one of its uses is for the root storage of instances. These are called EBS-backed instances. EBS volumes are block devices of N gigabytes that are available over the network and have some advanced snapshotting and performance features. This storage persists even if you terminate the instance, but it incurs additional costs as well. We’ll cover more EBS details below. If you choose to use EBS-optimized instance types, your instance will be provisioned with a dedicated NIC for your EBS traffic. Non-EBS-optimized instances share EBS traffic with all other traffic on the instance’s primary NIC.

EC2 instances offer a number of useful features, but it is important to be aware that instances are not meant to be reliable; it is possible for an instance to go away at any time (host failure, network partition, disk failure), so it is important to utilize instances in a redundant (ideally multi-AZ) fashion.

S3

S3 is the Simple Storage Service. It provides you with the ability to store objects via interaction with an HTTP API and have those objects be stored in a highly available way. You pay for objects stored in S3 based on the total size of your objects, GET/PUT requests, and bandwidth transferred.
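
To get a feel for the API, here is a minimal sketch using boto, the Python SDK of the day (the bucket name is a placeholder and must be globally unique):

import boto
from boto.s3.key import Key

conn = boto.connect_s3()  # credentials come from environment variables or ~/.boto
bucket = conn.create_bucket('my-unique-bucket-name')

key = Key(bucket)
key.key = 'hello.txt'
key.set_contents_from_string('Hello from S3!')

# Generate a time-limited signed URL for the object (valid for 300 seconds)
print(key.generate_url(expires_in=300))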

S3 can be coupled with Amazon’s CDN service, CloudFront, for a simple solution to object storage and delivery. You’re even able to completely host a static site on S3.

The main pitfalls of using S3 are that latency and response times can vary, particularly with large files, as each object is stored synchronously on multiple storage devices within the S3 service. Additionally, some organizations have found that S3 can become expensive for many terabytes of data and that it was cheaper to bring storage in-house, but this will depend on your existing infrastructure outside of AWS.

EBS

As previously mentioned, EBS is Amazon’s Elastic Block Store service; it provides block-level storage volumes for use with Amazon EC2 instances. EBS volumes are provided over the network and are persistent, independent of the life of your instance. An EBS volume is local to an availability zone and can only be attached to one instance at a time. You’re able to take snapshots of EBS volumes for backups and cloning, which are persisted to S3. You’re also able to create a Provisioned IOPS volume with guaranteed performance, for an additional cost. You pay for EBS volumes based on the total size of the volume and per million I/O requests.

While EBS provides flexibility as a network block device and offers some compelling features with snapshotting and persistence, its performance can vary, wildly at times, and unless you use EBS-optimized instance types, your EBS traffic shares your EC2 instance’s single NIC with all other traffic. So this should be taken into consideration before basing important parts of your infrastructure on top of EBS volumes.

ELB

ELB is the Elastic Load Balancer service; it provides you with the ability to load balance TCP, HTTP, and HTTPS services, with both IPv4 and IPv6 interfaces. ELB instances are local to a region and are able to send traffic to EC2 instances in multiple availability zones at the same time. Like any good load balancer, ELB instances are able to do sticky sessions and detect back-end failures. When coupled with CloudWatch metrics and an Auto Scaling group, you’re also able to have additional EC2 instances created and stopped automatically behind an ELB based on a variety of performance metrics. You pay for ELB based on each ELB instance running and the amount of data, in GB, transferred through each ELB instance.

While ELB offers ease of use and the most commonly needed load balancer features, without the need to build your own load balancing with additional EC2 instances, it does add some latency to your requests (often 50–100 ms per request), and it has been shown to be dependent on other services, like EBS, as the most recent issue in us-east-1 demonstrated. These factors should be taken into consideration when choosing to utilize ELB instances.

Authentication and Authorization

As you’ve seen, any serious usage of AWS resources is going to cost money, so it’s important to be able to control who within your organization can manage your AWS resources and affect your billing. This is done through Amazon’s Identity and Access Management (IAM) service. As the administrator of your organization’s AWS account (or if you’ve been given the proper permissions via IAM), you’re able to easily provide users with logins and API keys and, through a variety of security roles, let them manage resources in some or all of the AWS services your organization uses. As IAM is for managing access to your AWS resources, there is no cost for it.

Managing your AWS resources

Since AWS is meant to be used to dynamically create and destroy various computing resources on demand, all AWS services include APIs, with libraries available for most languages.

But if you’re new to AWS and want to poke around without writing code, you can use the AWS Console to create and manage resources with a point and click GUI.

Other services

Amazon provides a variety of other useful services for you to build your infrastructure on top of. Some of which we will cover in more detail in future posts to AWS Advent. Others we will not, but you can quickly learn about in the AWS documentation.

A good thing to know is that all the AWS services are built from the same primitives that you can use as the basis of your infrastructure on AWS, namely EC2 instances, EBS volumes, and S3 storage.

Further reading

For more detail, see the AWS documentation for EC2, S3, EBS, and ELB.