Getting Started with Boto

05. December 2012

Boto is a Python library that provides you with an easy way to interact with and automate using various Amazon Web Services.

If you’re familiar with Python, or interested in learning it in conjunction with learning to use AWS, you won’t find a better option than Boto.


Installing boto is very straightforward, assuming you’re using an OS with pip installed. If you do not currently have pip, install that first.

Once you have pip, the following command will get you up and running.

pip install boto

Basic configuration

This configuration assumes you’ve already created an AWS account and obtained your API Key and Secret Access Key from IAM in the AWS console.

With those in hand, you’ll want to create a .boto file in your home directory and populate it with the secrets.

  • Example .boto:


    [Credentials]
    aws_access_key_id = <your access key>
    aws_secret_access_key = <your secret key>

  • There are some additional settings you can configure as needed, for debugging, local proxies, etc. These go in the [Boto] section, as shown below:


    [Boto]
    debug = 0
    num_retries = 10
    proxy =
    proxy_port = 8080
    proxy_user = foo
    proxy_pass = bar
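Since .boto is standard INI syntax, you can sanity-check a config before boto ever reads it using Python 3’s stdlib configparser; the credential values in this sketch are placeholders, not real keys:

```python
import configparser

# a sample .boto file in INI form; the key values are placeholders
sample = """
[Credentials]
aws_access_key_id = AKIAEXAMPLE
aws_secret_access_key = examplesecret

[Boto]
debug = 0
num_retries = 10
"""

config = configparser.ConfigParser()
config.read_string(sample)
print(config.get("Credentials", "aws_access_key_id"))  # AKIAEXAMPLE
print(config.getint("Boto", "num_retries"))            # 10
```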

Using boto with EC2

Now that you have a basic .boto file, you can begin using boto with AWS resources.

The most likely place to start is connecting to EC2 and making an instance, which can be done with a few short lines of code.

import boto.ec2

# look up the Oregon region by name rather than by list position
regions = boto.ec2.regions()
oregon = [r for r in regions if r.name == 'us-west-2'][0]

# EC2Connection() will pick up your keys from .boto
e = boto.ec2.EC2Connection(region=oregon)

# launch an instance from an AMI (placeholder id)
reservation = e.run_instances('ami-xxxxxxxx')


You can also specify a number of options for the AMI you’re launching.

import boto.ec2

regions = boto.ec2.regions()
oregon = [r for r in regions if r.name == 'us-west-2'][0]
e = boto.ec2.EC2Connection(region=oregon)

# the AMI id, key pair, and security group below are placeholders
reservation = e.run_instances(
    'ami-xxxxxxxx',
    key_name='mykey',
    instance_type='m1.small',
    security_groups=['default'])

The EC2 API has a number of options and function calls you will find useful in managing your EC2 resources with boto.

Using boto with VPC

EC2 isn’t the only service boto supports; one of my favorites, VPC, is also supported.

With a few short lines of code, you can create a VPC and its various objects.

# this will create a VPC, a single subnet, and attach an internet gateway to the VPC

import boto.vpc

# connect to the Oregon region; keys are picked up from .boto
v = boto.vpc.connect_to_region('us-west-2')

# the CIDR blocks below are example ranges; choose your own
vpc = v.create_vpc('10.0.0.0/16')
subnet = v.create_subnet(vpc.id, '10.0.0.0/24')
ig = v.create_internet_gateway()
v.attach_internet_gateway(ig.id, vpc.id)


The VPC API has a number of options and function calls you will find useful in managing your VPC resources with boto.

What AWS resources are supported?

A variety of services are supported. According to the boto README, they currently include:


Compute

  • Amazon Elastic Compute Cloud (EC2)
  • Amazon Elastic Map Reduce (EMR)
  • AutoScaling
  • Elastic Load Balancing (ELB)

Content Delivery

  • Amazon CloudFront


Database

  • Amazon Relational Database Service (RDS)
  • Amazon DynamoDB
  • Amazon SimpleDB

Deployment and Management

  • AWS Identity and Access Management (IAM)
  • Amazon CloudWatch
  • AWS Elastic Beanstalk
  • AWS CloudFormation

Application Services

  • Amazon CloudSearch
  • Amazon Simple Workflow Service (SWF)
  • Amazon Simple Queue Service (SQS)
  • Amazon Simple Notification Service (SNS)
  • Amazon Simple Email Service (SES)


Networking

  • Amazon Route53
  • Amazon Virtual Private Cloud (VPC)

Payments and Billing

  • Amazon Flexible Payment Service (FPS)


Storage

  • Amazon Simple Storage Service (S3)
  • Amazon Glacier
  • Amazon Elastic Block Store (EBS)
  • Google Cloud Storage


Workforce

  • Amazon Mechanical Turk


Other

  • Marketplace Web Services

Automating with boto

As you can see from the examples above, you can very quickly begin automating your AWS resources with boto.

As you learn boto there are a number of resources to consult.

  1. There are a number of tutorials for some services to help you get started.
  2. The API documentation is very comprehensive.
  3. I find bpython to be very helpful, as its autocompletion makes it easy to quickly and interactively learn new parts of a library.
  4. Read the boto source code. Never underestimate the power of going straight to the source. Looking under the hood and seeing how things are put together can be very valuable and educational.
  5. Join the community. #boto on freenode and the google group are both excellent places to start.

Amazon Virtual Private Cloud

04. December 2012

Amazon Virtual Private Cloud (VPC) is a service which allows you to create an isolated, private network within an AWS region where you can run and use a variety of other AWS resources. You’re able to create a variety of private IP space subnets and build routes and security policies between them to fully host a multi-tier application within AWS while maintaining isolation from other AWS customers.

How do I build a VPC?

A VPC is built from a number of parts:

  1. The VPC object: declared with a name and a broad private network space. (You can define five VPCs in a single region.)
  2. One or more subnets: segments of the VPC IP space.
  3. An Internet Gateway (IG): connects your VPC to the public Internet via a NAT instance.
  4. A NAT instance: an Amazon-managed EC2 instance that provides NAT services to your VPC.
  5. The router: a VPC service that performs routing between subnets using your user-defined route tables.

Optionally, you can set up IPSec VPN tunnels, which you terminate on your own hardware in a data center or home network.
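The VPC object and its subnets are ultimately just CIDR arithmetic; the carving of a VPC range into subnets can be sketched with Python’s stdlib ipaddress module (the 10.0.0.0/16 range here is just an example):

```python
import ipaddress

# an example VPC private network space
vpc_cidr = ipaddress.ip_network("10.0.0.0/16")

# carve the VPC space into /24 subnets
subnets = list(vpc_cidr.subnets(new_prefix=24))

print(len(subnets))   # 256 possible /24 subnets
print(subnets[0])     # 10.0.0.0/24
print(subnets[1])     # 10.0.1.0/24
```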

VPC supports four options for its network architecture.

  1. VPC with a Public Subnet Only
  2. VPC with Public and Private Subnets
  3. VPC with Public and Private Subnets and Hardware VPN Access
  4. VPC with a Private Subnet Only and Hardware VPN Access

Further Reading

AWS services you can use inside a VPC

A number of AWS services provide you with instance based resources, and you’re able to run those resources inside your VPC. These include ELB, EC2, RDS, and Auto Scaling.


ELB

ELB instances are able to function inside VPCs in two ways:

  1. They are able to create interfaces inside your VPC subnets and then send traffic to EC2 instances inside your VPC
  2. An ELB instance can be created with an internal IP in a VPC subnet. This is useful for load balancing between internal tiers of your application architecture.

Further Reading


EC2

All classes of EC2 instances are available to deploy inside your VPC.

Availability Zone placement of EC2 instances can be controlled by which subnet you place your EC2 instance(s) into.

Further Reading


RDS

All classes and types of RDS instances are available to deploy inside your VPC.

Further Reading

Auto Scaling

You’re able to use Auto Scaling to scale EC2 instances inside your VPC, in conjunction with ELB instances.

Further Reading

Networking inside your VPC

Your VPC is divided into a set of subnets. You control traffic between subnets and to the Internet with two required mechanisms and one optional one.

The two required mechanisms are route tables and security groups.

A route table is associated with a subnet and defines routes, each with a destination and a target, which can be an instance ID, a network interface ID, or your Internet gateway.

A security group acts like a firewall and is associated with a set of EC2 instances. You define two sets of rules, based on TCP/UDP/ICMP and ports, one for ingress traffic and one for egress traffic. Security group rules are stateful.

Optionally, you can use Network ACLs to control your TCP/UDP/ICMP traffic flow at the subnet layer. Rules defined in Network ACLs are not stateful, so your rules must match up for both the ingress and egress traffic of a given service (e.g. TCP 22/SSH) to function.
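To make the stateful/stateless distinction concrete, here is a toy model (not an AWS API) of stateless rule evaluation: for SSH to work through a network ACL, both the inbound request and the outbound replies on ephemeral ports must be explicitly allowed, whereas a stateful security group would allow the return traffic automatically.

```python
# toy model of stateless Network ACL evaluation; each rule is a
# (direction, low_port, high_port) tuple and traffic must match a rule
def nacl_allows(rules, direction, port):
    return any(d == direction and lo <= port <= hi for d, lo, hi in rules)

# hypothetical rules letting SSH (TCP 22) in and ephemeral-port replies out
rules = [
    ("ingress", 22, 22),
    ("egress", 1024, 65535),  # replies go back to the client's ephemeral port
]

assert nacl_allows(rules, "ingress", 22)     # the SSH request is allowed
assert nacl_allows(rules, "egress", 50000)   # the reply is allowed
assert not nacl_allows(rules, "egress", 80)  # anything else is dropped
```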

Further Reading

Some limitations of using VPCs

As with any product, VPC comes with some limitations. These include:

  • You can only create five VPCs in a single AWS region
  • You need to create a VPN tunnel or attach an Elastic IP (EIP) to reach instances, each of which has associated costs.
  • You can only create 20 subnets per VPC
  • You can only create 1 Internet Gateway per VPC

Further Reading


VPC costs

Your VPC(s) do not cost anything to create or run. Additionally, subnets, security groups, and network ACLs are also free.

There will be costs associated with how you choose to access your instances inside your VPC, be that a VPN solution or using Elastic IPs.

All other AWS services cost the same whether you run those instances inside a VPC or outside.

Further Reading


In summary, VPCs provide an easy way to isolate application infrastructure, while still using a variety of AWS resources. With a little additional configuration, you’re able to take advantage of the VPC service.

Key AWS Concepts

02. December 2012

To kick off AWS Advent 2012 we’re going to take a tour of the main AWS concepts that people use.

Regions and Availability Zones

Regions are the top level compartmentalization of AWS services. Regions are geographic locations in which you create and run your AWS resources.

As of December 2012, there are eight regions:

  • N. Virginia – us-east-1
  • Oregon – us-west-2
  • N. California – us-west-1
  • Ireland – eu-west-1
  • Singapore – ap-southeast-1
  • Tokyo – ap-northeast-1
  • Sydney – ap-southeast-2
  • São Paulo – sa-east-1

Within a region there are multiple Availability Zones (AZs). An Availability Zone is analogous to a data center, and your AWS resources of certain types, within a region, can be created in one or more availability zones to improve redundancy within a region.

AZs are designed so that networking between them is meant to be low latency and fairly reliable, but ideally you’ll run your instances and services in multiple AZs as part of your architecture.

One thing to note about regions and pricing is that the cost of the same AWS service varies by region. US-EAST-1 is by far the most popular region, as it was the lowest cost for a long time, so most services built on EC2 tend to run in this region. US-WEST-2 recently had its EC2 cost set to match US-EAST-1, but not all services are available in that region at the same cost yet.


EC2

EC2 is the Elastic Compute Cloud. It provides you with a variety of compute instances with CPU, RAM, and disk allocations, on demand, with hourly pricing being the main way to pay for instances, though you can also reserve instances.

EC2 instances are packaged as AMI (Amazon Machine Images) and these are the base from which your instances will be created. A number of operating systems are supported, including Linux, Windows Server, FreeBSD (on some instance types), and OmniOS.

There are two types of instance storage available.

  1. Ephemeral storage: Ephemeral storage is local to the instance host and the number of disks you get depends on the size of your instance. This storage is wiped whenever there is an event that terminates an instance, whether an EC2 failure or an action by a user.
  2. Elastic Block Store (EBS): EBS is a separate AWS service, but one of its uses is for the root storage of instances. These are called EBS backed instances. EBS volumes are block devices of N gigabytes that are available over the network and have some advanced snapshotting and performance features. This storage persists even if you terminate the instance, but it incurs additional costs as well. We’ll cover more EBS details below. If you choose to use EBS optimized instance types, your instance will be provisioned with a dedicated NIC for your EBS traffic. Non-EBS optimized instances share EBS traffic with all other traffic on the instance’s primary NIC.

EC2 instances offer a number of useful features, but it is important to be aware that instances are not meant to be reliable. It is possible for an instance to go away at any time (host failure, network partitions, disk failure), so it is important to utilize instances in a redundant (ideally multi-AZ) fashion.


S3

S3 is the Simple Storage Service. It provides you with the ability to store objects via an HTTP API and have those objects stored in a highly available way. You pay for objects stored in S3 based on the total size of your objects, GET/PUT requests, and bandwidth transferred.
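Those pricing dimensions multiply out simply. A toy bill sketch follows; all unit prices here are assumptions for illustration, not real AWS pricing:

```python
# hypothetical S3 bill: the unit prices are assumptions, not AWS's rates
storage_price_per_gb = 0.125   # $ per GB-month (assumed)
put_price_per_1k = 0.01        # $ per 1,000 PUT requests (assumed)
get_price_per_10k = 0.01       # $ per 10,000 GET requests (assumed)

stored_gb = 500
puts = 200_000
gets = 1_000_000

# monthly cost = storage + request charges (bandwidth omitted for brevity)
cost = (stored_gb * storage_price_per_gb
        + puts / 1_000 * put_price_per_1k
        + gets / 10_000 * get_price_per_10k)
print(round(cost, 2))  # 65.5
```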

S3 can be coupled with Amazon’s CDN service, CloudFront, for a simple solution to object storage and delivery. You’re even able to completely host a static site on S3.

The main pitfalls of using S3 are that latency and response times can vary, particularly with large files, as each object is stored synchronously on multiple storage devices within the S3 service. Additionally, some organizations have found that S3 can become expensive for many terabytes of data and that it was cheaper to bring storage in-house, but this will depend on your existing infrastructure outside of AWS.


EBS

As previously mentioned, EBS is Amazon’s Elastic Block Store service. It provides block level storage volumes for use with Amazon EC2 instances. EBS volumes are provided over the network and are persistent, independent from the life of your instance. An EBS volume is local to an availability zone and can only be attached to one instance at a time. You’re able to take snapshots of EBS volumes for backups and cloning, which are persisted to S3. You’re also able to create a Provisioned IOPS volume that has guaranteed performance, for an additional cost. You pay for EBS volumes based on the total size of the volume and per million I/O requests.
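The two EBS pricing dimensions combine the same way. A minimal sketch, with assumed prices rather than AWS’s actual rates:

```python
# hypothetical EBS bill: the unit prices are assumptions, not AWS's rates
gb_month_price = 0.10          # $ per GB-month (assumed)
io_price_per_million = 0.10    # $ per million I/O requests (assumed)

volume_gb = 100
io_requests = 50_000_000

# monthly cost = volume size charge + I/O request charge
cost = volume_gb * gb_month_price + (io_requests / 1_000_000) * io_price_per_million
print(round(cost, 2))  # 15.0
```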

While EBS provides flexibility as a network block device and offers some compelling features with snapshotting and being persistent, its performance can vary, wildly at times, and unless you use EBS optimized instance types, your EBS traffic is shared with all other traffic on your EC2 instance’s single NIC. So this should be taken into consideration before basing important parts of your infrastructure on top of EBS volumes.


ELB

ELB is the Elastic Load Balancer service. It provides you with the ability to load balance TCP and HTTP(S) services, with both IPv4 and IPv6 interfaces. ELB instances are local to a region and are able to send traffic to EC2 instances in multiple availability zones at the same time. Like any good load balancer, ELB instances are able to do sticky sessions and detect backend failures. When coupled with CloudWatch metrics and an Auto Scaling group, you’re also able to have an ELB instance automatically create and stop additional EC2 instances based on a variety of performance metrics. You pay for ELB instances based on each ELB instance running and the amount of data, in GB, transferred through each ELB instance.

While ELB offers ease of use and the most commonly needed load balancer features, without the need to build your own load balancing with additional EC2 instances, it does add significant latency to your requests (often 50–100ms per request), and has been shown to be dependent on other services, like EBS, as the most recent issue in US-EAST-1 demonstrated. These factors should be taken into consideration when choosing to utilize ELB instances.

Authentication and Authorization

As you’ve seen, any serious usage of AWS resources is going to cost money, so it’s important to be able to control who within your organization can manage your AWS resources and affect your billing. This is done through Amazon’s Identity and Access Management (IAM) service. As the administrator of your organization’s AWS account (or if you’ve been given the proper permissions via IAM), you’re able to easily provide users with logins, API keys, and, through a variety of security roles, let them manage resources in some or all of the AWS services your organization uses. As IAM is for managing access to your AWS resources, there is no cost for it.
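Those permissions are expressed as JSON policy documents attached to users or groups. A minimal, hypothetical policy granting only read-only EC2 describe calls might look like:

```json
{
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "ec2:Describe*",
      "Resource": "*"
    }
  ]
}
```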

Managing your AWS resources

Since AWS is meant to be used to dynamically create and destroy various computing resources on demand, all AWS services include APIs, with libraries available for most languages.

But if you’re new to AWS and want to poke around without writing code, you can use the AWS Console to create and manage resources with a point and click GUI.

Other services

Amazon provides a variety of other useful services for you to build your infrastructure on top of. Some of which we will cover in more detail in future posts to AWS Advent. Others we will not, but you can quickly learn about in the AWS documentation.

A good thing to know is that all the AWS services are built from the same primitives that you can use as the basis of your infrastructure on AWS, namely EC2 instances, EBS volumes, and S3 storage.

Further reading