Amazon Simple Notification Service

Amazon Simple Notification Service (SNS) is a web service that helps you easily publish and deliver notifications to a variety of endpoints in an automated and low-cost fashion. SNS currently supports sending messages to Email, SMS, HTTP/S, and SQS Queue endpoints.

You’re able to use SNS through the AWS console, the SNS CLI tools, or the SNS API.

The moving parts

SNS is composed of three main parts

  1. A topic
  2. A subscription
  3. Published messages

A topic is a communication channel to send messages and subscribe to notifications. Once you create a topic, you’re provided with a topic ARN (Amazon Resource Name), which you’ll use for subscriptions and publishing messages.

A subscription attaches a specific endpoint to a topic. The endpoint can be a web service, an email address, or an SQS queue.

Published messages are generated by publishers, which can be scripts calling the SNS API, users working in the AWS console, or the CLI tools. Once a new message is published, Amazon SNS attempts to deliver that message to every endpoint that is subscribed to the topic.

Costs

SNS has a number of cost factors, including API requests, notifications to HTTP/S, notifications to Email, notifications to SMS, and data transferred out of a region.

You can get started with SNS under AWS’s Free Usage tier, though, so you won’t have to pay to play right away.

Using SNS

To get started with using SNS, we’ll walk through making a topic, creating an email subscription, and publishing a message, all through the AWS console.

Making a topic

  1. Log in to the AWS Console and navigate to the SNS service.
  2. Click Create New Topic.
  3. Enter a topic name in the Topic Name field.
  4. Click Create Topic.
  5. Copy the Topic ARN for the next task.

You’re now ready to make a subscription.

Creating an email subscription

  1. In the AWS Console, click on My Subscriptions.
  2. Click the Create New Subscription button.
  3. In the Topic ARN field, paste the topic ARN you created in the previous task, for example: arn:aws:sns:us-east-1:054794666397:MyTopic.
  4. Select Email in the Protocol drop-down box.
  5. Enter your email address for the notification in the Endpoint field.
  6. Click Subscribe.
  7. Go to your email and open the message from AWS Notifications, and then click the link to confirm your subscription.
  8. You should see a confirmation response from SNS.

You’re now ready to publish a message.

Publishing a message

  1. In the AWS Console, click the topic you want to publish to under My Topics in the Navigation pane.
  2. Click the Publish to Topic button.
  3. Enter a subject line for your message in the Subject field.
  4. Enter a brief message in the Message field.
  5. Click Publish Message.
  6. A confirmation dialog box will appear; click Close to dismiss it.
  7. You should get the email shortly.

The SNS documentation has more details.

Automating SNS

You’ve learned how to manually work with SNS, but as with all AWS services, things are best when automated.

Building on Day 4’s post, Getting Started with Boto, we’ll walk through automating SNS with some boto scripts.

Making a topic

This script will connect to us-west-2 and create a topic named adventtopic.

If the topic is successfully created, it will return the topic ARN. Otherwise, it will log any errors to sns-topic.log.

sns-topic.py
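The original sns-topic.py isn’t embedded here, but a minimal sketch of what such a script might look like with boto follows. The region, topic name, and log file come from the description above; the error handling and the way the result is printed are my own assumptions.

    import logging
    import boto.sns

    logging.basicConfig(filename='sns-topic.log', level=logging.ERROR)

    try:
        # connect_to_region picks up your credentials from .boto
        conn = boto.sns.connect_to_region('us-west-2')
        result = conn.create_topic('adventtopic')
        # boto returns the decoded JSON response; dig out the new topic ARN
        arn = result['CreateTopicResponse']['CreateTopicResult']['TopicArn']
        print(arn)
    except Exception:
        logging.exception('failed to create topic adventtopic')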

Making an email subscription

This script will connect to us-west-2 and create an email subscription to the topic named adventtopic for the email address you specify.

If the subscription is successfully created, it will return the topic ARN. Otherwise, it will log any errors to sns-topic.log.

  • Note: You’ll need to manually confirm the subscription in your email client before you can move on to the script for publishing a message.

sns-email-sub.py
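Again as a sketch rather than the original script: the topic ARN below is a placeholder (use the ARN returned when you created the topic), and the log file name matches the description above.

    import logging
    import boto.sns

    logging.basicConfig(filename='sns-topic.log', level=logging.ERROR)

    # placeholder ARN: substitute the one returned by sns-topic.py
    topic_arn = 'arn:aws:sns:us-west-2:123456789012:adventtopic'
    email = 'you@example.com'

    try:
        conn = boto.sns.connect_to_region('us-west-2')
        result = conn.subscribe(topic_arn, 'email', email)
        print(result)
    except Exception:
        logging.exception('failed to create email subscription')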

Publishing a message

This script will connect to us-west-2 and publish a message to the topic named adventtopic with the subject and message body you specify.

If the message is successfully published, it will return the topic ARN. Otherwise, it will log any errors to sns-publish.log.

sns-publish.py
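A sketch of the publishing script, with the same caveats: the ARN, subject, and body are placeholders, and a real script would likely take them from the command line.

    import logging
    import boto.sns

    logging.basicConfig(filename='sns-publish.log', level=logging.ERROR)

    # placeholder values: substitute your topic ARN, subject, and message body
    topic_arn = 'arn:aws:sns:us-west-2:123456789012:adventtopic'
    subject = 'AWS Advent test'
    body = 'Hello from SNS'

    try:
        conn = boto.sns.connect_to_region('us-west-2')
        result = conn.publish(topic_arn, body, subject)
        print(result)
    except Exception:
        logging.exception('failed to publish message')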

Final thoughts

At this point you’ve successfully automated the main use cases for SNS.

As you can see, SNS can be very useful for sending notifications and with a little automation, can quickly become a part of your application infrastructure toolkit.

All the code samples are available in the GitHub repository.


AWS re:Invent Recap

Amazon recently held its first AWS-specific conference, Nov. 27th through 29th, featuring a series of sessions with technical content on cloud use cases, new AWS services, cloud migration best practices, architecting for scale, operating at high availability, and making your cloud apps secure.

While I was unable to attend, I’ve been watching some of the session videos and talking with folks who were able to go.

I wanted to highlight a few of the talks I found informative and interesting.

Day 1 Keynote

The day 1 keynote contains a lot of great information on where AWS is headed and what they’re planning for the future.

Failures at Scale and How to Ignore Them

While this talk didn’t necessarily cover anything groundbreaking, it covered a lot of good ideas on failure and scaling.

Highly Available Architecture at Netflix

Netflix is definitely leading the way in building distributed systems on AWS and building tools, many open source, for running systems on AWS. This talk gives a nice overview of how they’re doing this.

Building Web-Scale Applications With AWS

Lots of good information; the Top 5 list is particularly good.

Zero to Millions of Requests

This talk explains how NASA software engineers architected and deployed a full solution to stream the Curiosity landing to millions of users, on a timeline of one week from start to finish.

Big Data and the US Presidential Campaign

There was recently a great write-up on the Obama campaign’s technology, and this presentation goes into a lot of the same details.

Pinterest Pins AWS! Running Lean on AWS Once You’ve Made It

A great talk on how Pinterest has used AWS to quickly and efficiently scale their site.

All the videos

There is an excellent playlist with all the videos as well.


Amazon CloudFormation Primer

Amazon CloudFormation (CFN) is an AWS service which provides users with a way to describe the AWS resources they want created, in a declarative and re-usable fashion, through the use of simple JSON formatted text files. It supports a wide variety of AWS services, includes the ability to pass in user-supplied parameters, has a nice set of CLI tools, and provides a few handy functions you are able to use in the JSON files.

Stacks

CloudFormation starts with the idea of a stack. A stack is a JSON formatted file with the following attributes

  • A template the stack will be based on
  • A list of template parameters (user supplied inputs), such as an EC2 instance or VPC id
  • An optional list of mappings which are used to look up values, such as AMI ids for different regions
  • An optional list of data tables used to look up static configuration values (e.g. AMI names)
  • The list of AWS resources and their configuration values
  • A list of outputs, such as the id of an ELB instance that was created

A stack can be created using the AWS Console, the CLI tools, or an API library. Stacks can be as re-usable or as monolithic as you choose to make them. A future post will cover some ideas on CFN stack re-usability, design goals, and driving CFN stacks with Boto, but this post is going to focus on getting you up and running with a simple stack and giving you some jumping off points for further research on your own.

You’re able to use templates to create new stacks or update existing stacks.

Cost

CloudFormation itself does not cost anything. You are charged the normal AWS costs for any resources created as part of creating a new stack or updating an existing stack.

It’s important to note that you’re charged a full hour of any resource costs, even if your stack gets rolled back due to an error during stack creation.

This means it can become costly if you’re not careful while testing and building your templates and stacks.

Getting Started

We’re going to assume you already have an AWS account and are familiar with editing JSON files.

To get started you’ll need to install the CFN CLI tools.

Writing a basic template

A template is a JSON formatted text file. Amazon ends theirs with .template, while I prefer to name mine .json for naming and syntax highlighting reasons, but ultimately this is arbitrary.

A template begins with the AWSTemplateFormatVersion and a Description, and must contain at least one Resources block with a single Resource.

The most basic template only needs what is shown below.

basic.template
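The template file itself isn’t reproduced here, but a minimal sketch matching the description above might look like the following. The resource name Ec2Instance matches the stack events shown later in this post; the AMI id is a placeholder.

    {
      "AWSTemplateFormatVersion" : "2010-09-09",
      "Description" : "A minimal stack with a single EC2 instance",
      "Resources" : {
        "Ec2Instance" : {
          "Type" : "AWS::EC2::Instance",
          "Properties" : {
            "ImageId" : "ami-xxxxxxxx",
            "InstanceType" : "t1.micro"
          }
        }
      }
    }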

A template can contain Parameters for user input. An example of this would be a parameter for the instance type.

As you’ll see in the example below, you refer to parameters or other values using a special function called Ref.

basic-paramater.template
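Again as a sketch rather than the original file: a template with a parameter for the instance type might look like this. The parameter name InstanceTypeInput and its description line up with the cfn-validate-template output shown later; the AMI id is a placeholder.

    {
      "AWSTemplateFormatVersion" : "2010-09-09",
      "Description" : "A single EC2 instance with a user supplied instance type",
      "Parameters" : {
        "InstanceTypeInput" : {
          "Description" : "EC2 instance type",
          "Type" : "String",
          "Default" : "t1.micro"
        }
      },
      "Resources" : {
        "Ec2Instance" : {
          "Type" : "AWS::EC2::Instance",
          "Properties" : {
            "ImageId" : "ami-xxxxxxxx",
            "InstanceType" : { "Ref" : "InstanceTypeInput" }
          }
        }
      }
    }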

Sometimes Mappings are a better option than Parameters. A common pattern you’ll see in CFN templates is using a Mapping for the AMI ids in various AWS regions, as shown below.

basic-mapping.template
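A sketch of the Mapping approach, with placeholder AMI ids: Fn::FindInMap looks up the AMI for whichever region the stack is created in, using the AWS::Region pseudo parameter.

    {
      "AWSTemplateFormatVersion" : "2010-09-09",
      "Description" : "A single EC2 instance with a per-region AMI lookup",
      "Mappings" : {
        "RegionMap" : {
          "us-east-1" : { "AMI" : "ami-xxxxxxxx" },
          "us-west-2" : { "AMI" : "ami-yyyyyyyy" }
        }
      },
      "Resources" : {
        "Ec2Instance" : {
          "Type" : "AWS::EC2::Instance",
          "Properties" : {
            "ImageId" : { "Fn::FindInMap" : [ "RegionMap", { "Ref" : "AWS::Region" }, "AMI" ] },
            "InstanceType" : "t1.micro"
          }
        }
      }
    }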

Finally, you’re usually going to want to use one or more Outputs in your template to provide you with information about the resources the stack created.

basic-output.template
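A sketch of the Outputs piece: Ref on the instance resource returns its instance id, and Fn::GetAtt retrieves attributes such as the public DNS name. The output names here are illustrative, and the parameter block mirrors the earlier sketch.

    {
      "AWSTemplateFormatVersion" : "2010-09-09",
      "Description" : "A single EC2 instance with its id and public DNS name as outputs",
      "Parameters" : {
        "InstanceTypeInput" : {
          "Description" : "EC2 instance type",
          "Type" : "String",
          "Default" : "t1.micro"
        }
      },
      "Resources" : {
        "Ec2Instance" : {
          "Type" : "AWS::EC2::Instance",
          "Properties" : {
            "ImageId" : "ami-xxxxxxxx",
            "InstanceType" : { "Ref" : "InstanceTypeInput" }
          }
        }
      },
      "Outputs" : {
        "InstanceId" : {
          "Description" : "Id of the EC2 instance that was created",
          "Value" : { "Ref" : "Ec2Instance" }
        },
        "PublicDNS" : {
          "Description" : "Public DNS name of the instance",
          "Value" : { "Fn::GetAtt" : [ "Ec2Instance", "PublicDnsName" ] }
        }
      }
    }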

Once you’ve created a template, you’ll want to validate that it works with the cfn-validate-template command from the CLI tools.

An example of using it with a local file is shown below

cfn-validate-template --template-file basic-output.template

PARAMETERS InstanceTypeInput false EC2 instance type

After you’ve verified the template is valid, you can try creating a stack using the cfn-create-stack command. You give the command a stack name and a file or URL for the template you want to use. The command will return some info, including the new stack id.

Note: Running this command with a template will create AWS resources that you will be billed for if they exceed your free tier.

An example of creating a stack is shown below

cfn-create-stack basic-test-1 --template-file basic.template

arn:aws:cloudformation:us-west-2:740810067088:stack/basic-test-1/bae25430-4037-11e2-ac91-50698256405b

You can check the progress of the stack creation with the cfn-describe-stack-events command, to which you give the stack name.

An example of a stack creation in progress

cfn-describe-stack-events basic-test-1

STACK_EVENT basic-test-1 Ec2Instance AWS::EC2::Instance 2012-12-07T06:35:42Z CREATE_IN_PROGRESS

STACK_EVENT basic-test-1 basic-test-1 AWS::CloudFormation::Stack 2012-12-07T06:35:37Z CREATE_IN_PROGRESS User Initiated

An example of the stack creation finished

cfn-describe-stack-events basic-test-1

STACK_EVENT basic-test-1 basic-test-1 AWS::CloudFormation::Stack 2012-12-07T06:36:24Z CREATE_COMPLETE

STACK_EVENT basic-test-1 Ec2Instance AWS::EC2::Instance 2012-12-07T06:36:24Z CREATE_COMPLETE

STACK_EVENT basic-test-1 Ec2Instance AWS::EC2::Instance 2012-12-07T06:35:42Z CREATE_IN_PROGRESS

STACK_EVENT basic-test-1 basic-test-1 AWS::CloudFormation::Stack 2012-12-07T06:35:37Z CREATE_IN_PROGRESS User Initiated

To delete the stack, you use the cfn-delete-stack command and give it the stack name. An example run is shown below.

cfn-delete-stack basic-test-1

Warning: Deleting a stack will lead to deallocation of all of the stack's resources. Are you sure you want to delete this stack? [Ny]y

At this point we’ve covered writing some basic templates and how to get started using a template with the CLI tools.

Where to go from here

To start you should read the Learn Template Basics and Working with Templates documentation.

While writing and exploring templates, I highly recommend getting familiar with the Template Reference which has detailed docs on the various Template types, their properties, return values, etc.

Finally, Amazon has provided a wide variety of templates in the Sample Templates library, ranging from single EC2 instances, to Drupal or Redmine application stacks, and even a full blown multi-tier application in a VPC, which you’re able to download and run.

I’ve put the samples from this article in the GitHub repository as well.

I hope you’ve found this post helpful in getting started with CloudFormation.


Getting Started with Boto

Boto is a Python library that provides you with an easy way to interact with and automate using various Amazon Web Services.

If you’re familiar with Python or interested in learning it, in conjunction with learning and using AWS, you won’t find a better option than Boto.

Installing

Installing boto is very straightforward, assuming you’re using an OS with pip installed. If you do not currently have pip, install that first.

Once you have pip, the following command will get you up and running.

pip install boto

Basic configuration

This configuration assumes you’ve already created an AWS account and obtained your API Key and Secret Access Key from IAM in the AWS console.

With those in hand, you’ll want to create a .boto file in your home directory and populate it with the secrets.

  • Example .boto:

    [Credentials]
    aws_access_key_id = <your access key>
    aws_secret_access_key = <your secret key>

  • There are some additional configurations you can set, as needed, for debugging, local proxies, etc., as shown below:

    [Boto]
    debug = 0
    num_retries = 10
    proxy = myproxy.com
    proxy_port = 8080
    proxy_user = foo
    proxy_pass = bar

Using boto with EC2

Now that you have a basic .boto file, you can begin using boto with AWS resources.

The most likely place to start is connecting to EC2 and making an instance, which can be done with a few short lines of code.

simple-ec2.py

    import boto.ec2

    regions = boto.ec2.regions()
    oregon = regions[4]  # known from looking at regions[]

    # EC2Connection() will pick up your keys from .boto
    conn = boto.ec2.EC2Connection(region=oregon)

    conn.run_instances('<ami-image-id>')

You can also specify a number of options to the AMI you’re launching.

options-ec2.py

    import boto.ec2

    regions = boto.ec2.regions()
    oregon = regions[4]  # known from looking at regions[]

    # EC2Connection() will pick up your keys from .boto
    conn = boto.ec2.EC2Connection(region=oregon)

    conn.run_instances('<ami-image-id>',
                       key_name='myKey',
                       instance_type='c1.xlarge',
                       security_groups=['your-security-group-here'])

The EC2 API has a number of options and function calls you will find useful in managing your EC2 resources with boto.
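For instance, a hedged sketch of listing your instances and stopping one (the instance id is a placeholder, and connect_to_region is an alternative to the EC2Connection/regions[] approach above):

    import boto.ec2

    conn = boto.ec2.connect_to_region('us-west-2')

    # each reservation holds one or more instances
    for reservation in conn.get_all_instances():
        for instance in reservation.instances:
            print('%s %s %s' % (instance.id, instance.state, instance.instance_type))

    # stop a specific instance; substitute a real instance id
    conn.stop_instances(instance_ids=['i-12345678'])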

Using boto with VPC

EC2 isn’t the only service boto supports; one of my favorites, VPC, is also supported.

With a few short lines of code, you can create a VPC and its various objects.

vpc.py

    # this will create a VPC, a single subnet, and attach an internet gateway to the VPC
    import boto.ec2
    import boto.vpc

    regions = boto.ec2.regions()
    oregon = regions[4]  # known from looking at regions[]

    # VPCConnection() will pick up your keys from .boto
    v = boto.vpc.VPCConnection(region=oregon)

    vpc = v.create_vpc('10.20.0.0/16')
    subnet = v.create_subnet(vpc.id, '10.20.10.0/24')
    ig = v.create_internet_gateway()
    v.attach_internet_gateway(ig.id, vpc.id)

The VPC API has a number of options and function calls you will find useful in managing your VPC resources with boto.

What AWS resources are supported?

A variety of services are supported. According to the boto README, they are currently

Compute

  • Amazon Elastic Compute Cloud (EC2)
  • Amazon Elastic Map Reduce (EMR)
  • AutoScaling
  • Elastic Load Balancing (ELB)

Content Delivery

  • Amazon CloudFront

Database

  • Amazon Relational Database Service (RDS)
  • Amazon DynamoDB
  • Amazon SimpleDB

Deployment and Management

  • AWS Identity and Access Management (IAM)
  • Amazon CloudWatch
  • AWS Elastic Beanstalk
  • AWS CloudFormation

Application Services

  • Amazon CloudSearch
  • Amazon Simple Workflow Service (SWF)
  • Amazon Simple Queue Service (SQS)
  • Amazon Simple Notification Service (SNS)
  • Amazon Simple Email Service (SES)

Networking

  • Amazon Route53
  • Amazon Virtual Private Cloud (VPC)

Payments and Billing

  • Amazon Flexible Payment Service (FPS)

Storage

  • Amazon Simple Storage Service (S3)
  • Amazon Glacier
  • Amazon Elastic Block Store (EBS)
  • Google Cloud Storage

Workforce

  • Amazon Mechanical Turk

Other

  • Marketplace Web Services

Automating with boto

As you can see from the examples above, you can very quickly begin automating your AWS resources with boto.

As you learn boto there are a number of resources to consult.

  1. There are a number of Tutorials for some services to help you get started
  2. The API documentation is very comprehensive.
  3. I find bpython to be very helpful, as its autocompletion makes it easy to quickly and interactively learn new parts of a library. Obligatory bpython and boto action shot.
  4. Reading the boto source code. Never underestimate the power of just going to the source. Looking under the hood and seeing how things are put together can be very valuable and educational.
  5. Join the community. #boto (irc://irc.freenode.net:6667/boto) on freenode and the Google group are both excellent places to start.

Amazon Virtual Private Cloud

Amazon Virtual Private Cloud (VPC) is a service which allows you to create an isolated, private network within an AWS region where you can run and use a variety of other AWS resources. You’re able to create a variety of private IP space subnets and build routes and security policies between them to fully host a multi-tier application within AWS while maintaining isolation from other AWS customers.

How do I build a VPC?

A VPC is built from a number of parts

  1. The VPC object: which you declare with a name and a broad private network space. (You can define 5 VPCs in a single region)
  2. 1 or more subnets: which are segments of the VPC IP space
  3. An Internet Gateway (IG): which connects your VPC to the public Internet
  4. NAT Instance: an EC2 instance, launched from an Amazon-provided AMI, that provides NAT service to instances in your VPC’s private subnets
  5. Router: the router is a VPC service that performs routing between subnets with your user-defined route tables

Optionally, you can set up IPSec VPN tunnels, which you terminate on your own hardware in a DC or home network.

VPC supports four options for its network architecture.

  1. VPC with a Public Subnet Only
  2. VPC with Public and Private Subnets
  3. VPC with Public and Private Subnets and Hardware VPN Access
  4. VPC with a Private Subnet Only and Hardware VPN Access

Further Reading

AWS services you can use inside a VPC

A number of AWS services provide you with instance based resources, and you’re able to run those resources inside your VPC. These include

ELB

ELB instances are able to function inside VPCs in two ways

  1. They are able to create interfaces inside your VPC subnets and then send traffic to EC2 instances inside your VPC
  2. An ELB instance can be created with an internal IP in a VPC subnet. This is useful for load balancing between internal tiers of your application architecture

Further Reading

EC2

All classes of EC2 instances are available to deploy inside your VPC.

Availability Zone placement of EC2 instances can be controlled by which subnet you place your EC2 instance(s) into.

Further Reading

RDS

All classes and types of RDS instances are available to deploy inside your VPC.

Further Reading

Auto Scaling

You’re able to use Auto Scaling to scale EC2 instances inside your VPC, in conjunction with ELB instances.

Further Reading

Networking inside your VPC

Your VPC is divided into a set of subnets. You control traffic between subnets and to the Internet with two required mechanisms and one optional one.

The required things are route tables and security groups.

A route table is attached to a subnet and defines routes, each with a destination and a target, which can be an instance ID, a network interface ID, or your Internet gateway.

A security group acts like a firewall and is associated with a set of EC2 instances. You define two sets of rules, based on TCP/UDP/ICMP and ports, one for ingress traffic and one for egress traffic. Security group rules are stateful.

Optionally, you can use Network ACLs to control your TCP/UDP/ICMP traffic flow at the subnet layer. Rules defined in Network ACLs are not stateful, so your rules must match up for ingress and egress traffic of a given service (e.g. TCP 22/SSH) to function.
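As a small illustration of the security group side of this, a hedged boto sketch that creates a security group inside a VPC and allows inbound SSH; the VPC id and group name are placeholders.

    import boto.ec2

    conn = boto.ec2.connect_to_region('us-west-2')

    # create a security group scoped to the VPC, then allow inbound SSH from anywhere
    sg = conn.create_security_group('web-tier', 'web tier instances',
                                    vpc_id='vpc-12345678')
    sg.authorize(ip_protocol='tcp', from_port=22, to_port=22, cidr_ip='0.0.0.0/0')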

Further Reading

Some limitations of using VPCs

As with any product, VPC comes with some limitations. These include:

  • You can only create five VPCs in a single AWS region
  • You need to create a VPN tunnel or attach an Elastic IP (EIP) to get to instances, each of which has associated costs.
  • You can only create 20 subnets per VPC
  • You can only create 1 Internet Gateway per VPC

Further Reading

Cost

Your VPC(s) do not cost anything to create or run. Additionally, subnets, security groups, and network ACLs are also free.

There will be costs associated with how you choose to access your instances inside your VPC, be that a VPN solution or using Elastic IPs.

All other AWS services cost the same whether you run those instances inside a VPC or outside.

Further Reading

Summary

In summary, VPCs provide an easy way to isolate application infrastructure, while still using a variety of AWS resources. With a little additional configuration, you’re able to take advantage of the VPC service.


Amazon Relational Database Service

Amazon’s Relational Database Service (RDS) allows you to create and run MySQL, Oracle, or SQL Server database servers without the need to manually create EC2 instances, manage the instance operating system, and install and then manage the database software itself. Amazon has also done the work of automating synchronous replication and failover, so that you can run a pair of database instances in a Multi-AZ deployment (for MySQL and Oracle) with a couple of clicks/API calls. And through CloudWatch integration, you’re able to get metrics and alerting for your RDS database instances. As with all AWS services, you pay for your RDS instances by the hour, with some options for paying ahead and saving some cost. The cost of an RDS instance depends on the instance size, whether you use Multi-AZ, the database type, whether you use Provisioned IOPS, and any data transferred to the Internet or other AWS regions.

This post will take you through getting started with RDS, some of the specifics of each database engine, and some suggestions on using RDS in your application’s infrastructure.

RDS instance options

RDS instances come in two flavors, On Demand and Reserved. On Demand instances are paid for by the hour, based on the size of the instance, while Reserved instances are paid for on a one- or three-year basis.

RDS instance classes mirror those of normal EC2 instances and are described in detail on Amazon’s site.

A couple of compelling features of RDS instances are:

  1. You’re able to scale your RDS instances up in memory and compute resources on the fly, and with MySQL and Oracle instances, you’re also able to grow your storage size on the fly, from 100GB to 1TB of space.
  2. You’re able to use Provisioned IOPS to provide guaranteed performance to your database storage. You can provision from 1,000 IOPS to 10,000 IOPS, with corresponding storage from 100GB to 1TB, for the MySQL and Oracle engines; if you are using SQL Server, the maximum you can provision is 7,000 IOPS.

RDS instances are automatically managed, including OS and database server/engine updates, which occur during your weekly scheduled maintenance window.

Further Reading

Creating RDS instances

We’re going to assume you’ve already set up an AWS IAM account and API key to manage your resources.

You can get started with creating RDS instances through one of three methods

  1. The AWS Console
  2. AWS’s command line tools
  3. The RDS API or one of the API’s many libraries

To create an RDS instance through the console, you do the following:

  1. Select your region, then select the RDS service
  2. Click on database instances
  3. Select Launch a database instance
  4. Select the database engine you need
  5. Select the instance details; this may include the DB engine version
  6. Select the instance class desired. If you’re just experimenting, a db.t1.micro is a low-cost option for this.
  7. Select if you want this to be a Multi-AZ deployment
  8. Choose the amount of allocated storage in GB
  9. Select if you wish to use Provisioned IOPS (this costs extra)
  10. Fill in your DB identifier, username, and desired password.
  11. Choose your database name
  12. You can modify the port your database service listens on, customize if you want to use a VPC, or choose your AZ. I would consider these advanced topics and details on some will be covered in future AWS Advent posts.
  13. You can choose to disable backups (you really shouldn’t) and then set the details of how many backups you want and how often they should be made.
  14. At this point you are ready to launch the database instance and start using it (and paying for it).

To create a database instance with AWS’s cli tools, you do the following:

  1. Download and Install the CLI tools

  2. Once you have the tools installed and working, you’ll use the rds-create-db-instance tool to create your instance

  3. An example usage of the command can be found below

    rds-create-db-instance SimCoProd01 -s 10 -c db.m1.large -e mysql -u master -p Kew2401Sd

To create a database instance using the API, you do the following:

  1. Review the API docs to familiarize yourself with the API, or obtain the library for the programming language of your choice and review its documentation.

  2. If you want to try creating an instance directly from the API, you can do so with the CreateDBInstance API call.

An example of calling the API directly can be found below

curl -v "https://rds.amazonaws.com/?Action=CreateDBInstance&DBInstanceIdentifier=SimCoProd01&Engine=mysql&MasterUserPassword=Password01&AllocatedStorage=10&MasterUsername=master&Version=2012-09-17&DBInstanceClass=db.m1.large&DBSubnetGroupName=dbSubnetgroup01&SignatureVersion=2&SignatureMethod=HmacSHA256&Timestamp=2011-05-23T05%3A54%3A53.578Z&AWSAccessKeyId=<AWS Access Key ID>&Signature=<Signature>"
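If you’re using boto as your library, a hedged sketch of the equivalent call might look like this; the identifier, storage size, instance class, and credentials are the same example values used above, and the region choice is arbitrary.

    import boto.rds

    # region chosen arbitrarily for this example
    conn = boto.rds.connect_to_region('us-east-1')

    # 10 GB of storage, a db.m1.large instance running MySQL
    db = conn.create_dbinstance('SimCoProd01', 10, 'db.m1.large',
                                'master', 'Kew2401Sd', engine='MySQL5.1')
    print(db.status)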

Modifying existing instances

There are a number of modifications you can make to existing instances, including:

  • Changing the engine version of a specific database type, e.g. going from MySQL 5.1 to MySQL 5.5
  • Converting a single instance to a Multi-AZ deployment.
  • Increasing the allocated storage
  • Changing your backup options
  • Changing the scheduled maintenance window

All these kinds of changes can be made through the console, via the CLI tools, or through the API/libraries.
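If you’re scripting these changes, a hedged boto sketch of one such modification, growing storage and converting to Multi-AZ at the next maintenance window, might look like this (the instance identifier is the example one from above, and the region is arbitrary):

    import boto.rds

    conn = boto.rds.connect_to_region('us-east-1')

    # grow storage to 20 GB and convert to Multi-AZ; apply at the next maintenance window
    db = conn.modify_dbinstance('SimCoProd01', allocated_storage=20,
                                multi_az=True, apply_immediately=False)
    print(db.status)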

Further Reading

Things to consider when using RDS instances

There are a number of things to consider when using RDS instances, both in terms of sizing your instances, and AWS actions that can affect your instances.

Sizing

Since RDS instances are easily resizable and include CloudWatch metrics, it is relatively simple to start with a smaller instance class and amount of storage, and grow as needed. If possible, I recommend doing some benchmarking with what you think would be a good starting point and verifying that the class and storage you’ve chosen meet your needs.

I would also recommend that you choose to start with using Provisioned IOPS and a Multi-AZ setup. While this is more expensive, you’re guaranteeing a level of performance and reliability from the get-go, and will help mitigate some of the things below that can affect your RDS instances.

Further Reading

Backups

Backup storage up to the amount of your instance’s allocated storage is included at no additional cost, so you should at least leave the default of 1 day of backups, but should consider using a longer window, at least 7 days.

Per the RDS FAQ on Backups:

The automated backup feature of Amazon RDS enables point-in-time recovery of your DB Instance. When automated backups are turned on for your DB Instance, Amazon RDS automatically performs a full daily snapshot of your data (during your preferred backup window) and captures transaction logs (as updates to your DB Instance are made). When you initiate a point-in-time recovery, transaction logs are applied to the most appropriate daily backup in order to restore your DB Instance to the specific time you requested. Amazon RDS retains backups of a DB Instance for a limited, user-specified period of time called the retention period, which by default is one day but can be set to up to thirty five days. You can initiate a point-in-time restore and specify any second during your retention period, up to the Latest Restorable Time. You can use the DescribeDBInstances API to return the latest restorable time for your DB Instance(s), which is typically within the last five minutes.

So having a good window of point-in-time and daily backups will ensure you have sufficient recovery options in the case of disaster or any kind of data loss.

Point-in-time recovery does not affect instances, but the daily snapshots do cause a pause in all IO to your RDS instance in the case of single instances. If you’re using a Multi-AZ deployment, the snapshot is taken from the hidden secondary, causing the secondary to fall slightly behind your primary, but without causing IO pauses on the primary. This is an additional reason I recommend accepting the cost and using Multi-AZ as your default.

Further Reading

Snapshots

You can initiate additional snapshots of the database at any time, via the console/CLI/API, which will cause a pause in all IO to single RDS instances and a pause to the hidden secondary of Multi-AZ instances.

All snapshots are stored to S3, and so are insulated from RDS instance failure. However, these snapshots are not accessible to other services, so if you want backups for offsite DR, you’ll need to orchestrate your own SQL-level dumps via another method. A t1.micro EC2 instance that makes dumps and stores them to S3 in another region is a relatively straightforward strategy for accomplishing this.

Further Reading

Upgrades and Maintenance Windows

Because RDS instances are meant to be automatically managed, each RDS instance will have a weekly scheduled maintenance window. During this window the instance becomes unavailable while OS and database server/engine updates are applied. If you’re using a Multi-AZ deployment, your secondary will be updated and failed over to, and then your previous primary is upgraded as the new secondary. This is another reason I recommend accepting the cost and using Multi-AZ as your default.

Further Reading

MySQL

Multi-AZ

MySQL RDS instances support a Multi-AZ deployment. A Multi-AZ deployment is comprised of a primary server which accepts reads and writes and a hidden secondary, in another AZ within the region, which synchronously replicates from the primary. You send your client traffic to a CNAME, which is automatically failed over to the secondary in the event of a primary failure.

Backups and snapshots are performed against the hidden secondary, and automatic failover to the secondary occurs during maintenance window activities.

Further Reading

Read Replicas

MySQL RDS instances also support a unique feature called Read Replicas. These are additional replicas you create, within any AZ in a region, which asynchronously replicate from a source RDS instance (the primary, in the case of Multi-AZ deployments).
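As an illustration, a hedged boto sketch of creating a read replica from an existing instance; the identifiers, instance class, and region are placeholders.

    import boto.rds

    conn = boto.rds.connect_to_region('us-east-1')

    # create a read replica named adventreplica1 from the source instance SimCoProd01
    replica = conn.create_dbinstance_read_replica('adventreplica1', 'SimCoProd01',
                                                  instance_class='db.m1.large')
    print(replica.status)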

Further Reading

Oracle

Multi-AZ

Oracle RDS instances support a Multi-AZ deployment. Similar in setup to the MySQL Multi-AZ setup, there is a primary server which accepts reads and writes and a hidden secondary, in another AZ within the region, which synchronously replicates from the primary. You send your client traffic to a CNAME, which is automatically failed over to the secondary in the event of a primary failure.

Further Reading

SQL Server

Multi-AZ

Unfortunately, SQL Server RDS instances do not have a Multi-AZ option at this time.

Further Reading


Key AWS Concepts

To kick off AWS Advent 2012 we’re going to take a tour of the main AWS concepts that people use.

Regions and Availability Zones

Regions are the top level compartmentalization of AWS services. Regions are geographic locations in which you create and run your AWS resources.

As of December 2012, there are eight regions:

  • N. Virginia – us-east-1
  • Oregon – us-west-2
  • N. California – us-west-1
  • Ireland – eu-west-1
  • Singapore – ap-southeast-1
  • Tokyo – ap-northeast-1
  • Sydney – ap-southeast-2
  • São Paulo – sa-east-1

Within a region there are multiple Availability Zones (AZ). An availability zone is analogous to a data center, and your AWS resources of certain types, within a region, can be created in one or more availability zones to improve redundancy within a region.

AZs are designed so that networking between them is low latency and fairly reliable, and ideally you’ll run your instances and services in multiple AZs as part of your architecture.

One thing to note about regions and pricing is that prices vary by region for the same AWS service. US-EAST-1 is by far the most popular region, as it was the lowest cost for a long time, so most services built on EC2 tend to run in this region. US-WEST-2 recently had its EC2 cost set to match US-EAST-1, but not all services are available in this region at the same cost yet.

EC2

EC2 is the Elastic Compute Cloud. It provides you with a variety of compute instances with CPU, RAM, and disk allocations, on demand, with hourly pricing being the main way to pay for instances, though you can also reserve instances.

EC2 instances are packaged as AMIs (Amazon Machine Images), and these are the base from which your instances will be created. A number of operating systems are supported, including Linux, Windows Server, FreeBSD (on some instance types), and OmniOS.

There are two types of instance storage available.

  1. Ephemeral storage: Ephemeral storage is local to the instance host and the number of disks you get depends on the size of your instance. This storage is wiped whenever there is an event that terminates an instance, whether an EC2 failure or an action by a user.

  2. Elastic Block Store (EBS): EBS is a separate AWS service, but one of its uses is for the root storage of instances. These are called EBS-backed instances. EBS volumes are block devices of N gigabytes that are available over the network and have some advanced snapshotting and performance features. This storage persists even if you terminate the instance, but it incurs additional costs as well. We’ll cover more EBS details below. If you choose to use EBS-optimized instance types, your instance will be provisioned with a dedicated NIC for your EBS traffic. Non-EBS-optimized instances share EBS traffic with all other traffic on the instance’s primary NIC.

EC2 instances offer a number of useful features, but it is important to be aware that instances are not meant to be reliable; it is possible for an instance to go away at any time (host failure, network partition, disk failure), so it is important to utilize instances in a redundant (ideally multi-AZ) fashion.

S3

S3 is the Simple Storage Service. It provides you with the ability to store objects via interaction with an HTTP API and have those objects be stored in a highly available way. You pay for objects stored in S3 based on the total size of your objects, GET/PUT requests, and bandwidth transferred.

S3 can be coupled with Amazon’s CDN service, CloudFront, for a simple solution to object storage and delivery. You’re even able to completely host a static site on S3.

The main pitfalls of using S3 are that latency and response can vary, particularly with large files, as each object is stored synchronously on multiple storage devices within the S3 service. Additionally, some organizations have found that S3 can become expensive for many terabytes of data and that it was cheaper to bring storage in-house, but this will depend on your existing infrastructure outside of AWS.

EBS

As previously mentioned, EBS is Amazon’s Elastic Block Store service; it provides block-level storage volumes for use with Amazon EC2 instances. Amazon EBS volumes are provided over the network and are persistent, independent of the life of your instance. An EBS volume is local to an availability zone and can only be attached to one instance at a time. You’re able to take snapshots of EBS volumes for backups and cloning, which are persisted to S3. You’re also able to create a Provisioned IOPS volume that has guaranteed performance, for an additional cost. You pay for EBS volumes based on the total size of the volume and per million I/O requests.

While EBS provides flexibility as a network block device and offers some compelling features with snapshotting and persistence, its performance can vary, wildly at times, and unless you use EBS-optimized instance types, your EBS traffic shares your EC2 instance’s single NIC with all other traffic. So this should be taken into consideration before basing important parts of your infrastructure on top of EBS volumes.

ELB

ELB is the Elastic Load Balancer service; it provides you with the ability to load balance TCP and HTTP/HTTPS services, with both IPv4 and IPv6 interfaces. ELB instances are local to a region and are able to send traffic to EC2 instances in multiple availability zones at the same time. Like any good load balancer, ELB instances are able to do sticky sessions and detect backend failures. When coupled with CloudWatch metrics and an Auto-Scaling Group, you’re also able to have additional EC2 instances created and stopped automatically behind an ELB instance, based on a variety of performance metrics. You pay for ELB instances based on each ELB instance running and the amount of data, in GB, transferred through each ELB instance.

While ELB offers ease of use and the most commonly needed features of load balancers, without the need to build your own load balancing with additional EC2 instances, it does add significant latency to your requests (often 50–100ms per request), and has been shown to be dependent on other services, like EBS, as the most recent issue in US-EAST-1 demonstrated. These should be taken into consideration when choosing to utilize ELB instances.

Authentication and Authorization

As you’ve seen, any serious usage of AWS resources is going to cost money, so it’s important to be able to control who within your organization can manage your AWS resources and affect your billing. This is done through Amazon’s Identity and Access Management (IAM) service. As the administrator of your organization’s AWS account (or if you’ve been given the proper permissions via IAM), you’re able to easily provide users with logins, API keys, and, through a variety of security roles, let them manage resources in some or all of the AWS services your organization uses. As IAM is for managing access to your AWS resources, there is no cost for it.

Managing your AWS resources

Since AWS is meant to be used to dynamically create and destroy various computing resources on demand, all AWS services include APIs, with libraries available for most languages.

But if you’re new to AWS and want to poke around without writing code, you can use the AWS Console to create and manage resources with a point and click GUI.

Other services

Amazon provides a variety of other useful services for you to build your infrastructure on top of. Some of which we will cover in more detail in future posts to AWS Advent. Others we will not, but you can quickly learn about in the AWS documentation.

A good thing to know is that all the AWS services are built from the same primitives that you can use as the basis of your infrastructure on AWS, namely EC2 instances, EBS volumes, and S3 storage.

Further reading

EC2

S3

EBS

ELB


Welcome to AWS Advent 2012

Welcome to the AWS Advent calendar for 2012.

We’ll be exploring a variety of things from the AWS ecosystem, including RDS, using VPCs, CloudFormation, strategies for bootstrapping Puppet/Chef onto new EC2 instances, automating your AWS usage with Boto, and some of the exciting announcements/talks that just came out of AWS re:Invent.

The goal of this advent calendar is to help folks new to AWS services and concepts learn more about them in a practical way, as well as to expose seasoned AWS users to some things they may have missed.

You can follow along here or on Twitter.