Protecting AWS Credentials

15. December 2016

Author: Brian Nuszkowski
Editors: Chris Henry, Chris Castle

AWS provides its users with the opportunity to leverage their vast offering of advanced data center resources via a tightly integrated API. While the goal is to provide easy access to these resources, we must do so with security in mind. Peak efficiency via automation is the pinnacle of our industry. At the core of our automation and operation efforts lie the ‘keys’ to the kingdom. Actually, I really do mean keys; access keys. As an AWS Administrator or Power User, we’ve probably all used them, and there is probably at least 1 (valid!) forgotten copy somewhere in your home directory. Unauthorized access to AWS resources via compromised access keys usually occurs via:

  • Accidental commit to version control systems
  • Machine compromise
  • Unintentional sharing/capturing during live demonstrations or recordings

Without the proper controls, if your credentials are obtained by an unauthorized party, they can be used by anyone with internet access. So, we’ll work to transform how we look at our access keys, by treating them less as secrets that we guard with great care, and more like disposable items. We’ll do that by embracing Multi-factor Authentication (MFA, but also referred to as Two Factor Authentication or 2FA).

In this scenario, we’re looking to protect IAM users who are members of the Administrators IAM group. We’ll do this by:

  1. Enabling MFA for IAM Users
  2. Authoring and applying an MFA Enforced IAM Policy
  3. Leveraging the Security Token Service to create MFA enabled credentials

1. Enable MFA for applicable IAM Users

This can be done by adding a Multi-Factor Authentication Device in each user’s Security Credentials section. I prefer Duo Mobile, but any TOTP application will work. Your MFA device will be uniquely identified by its ARN, which will look something like: arn:aws:iam::123456789012:mfa/hugo
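Virtual MFA devices all implement standard TOTP (RFC 6238), which is why any authenticator app works. For the curious, here's a minimal stdlib-only Python sketch of how those six-digit codes are derived from the base32 seed AWS shows you when you enable the device:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, for_time=None, digits=6, period=30):
    """RFC 6238 TOTP: HMAC-SHA1 over the count of 30-second periods since the epoch."""
    if for_time is None:
        for_time = time.time()
    key = base64.b32decode(secret_b32.upper())
    counter = struct.pack(">Q", int(for_time) // period)
    digest = hmac.new(key, counter, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                      # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)
```

Your phone and AWS compute the same code independently from the shared seed and the current time, which is all "something you have" amounts to here.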

2. Apply an MFA Enforced Policy

Create a Managed or Inline policy that allows all actions against any resource only when the request’s credentials are labeled as having successfully performed MFA, and attach it to the IAM User or Group whose credentials you wish to protect.
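A minimal sketch of such an MFA-enforced policy (allow everything, on the condition that the request was MFA-authenticated):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "*",
      "Resource": "*",
      "Condition": {
        "Bool": { "aws:MultiFactorAuthPresent": "true" }
      }
    }
  ]
}
```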

3. Create MFA Enabled Credentials via the Security Token Service

Now that you’re enforcing MFA for API requests via Step 2, your existing access keys are no longer primarily used for making requests. Instead, you’ll use these keys in combination with your MFA passcode to create a new set of temporary credentials that are issued via the Security Token Service.

The idea now is to keep your temporary, privileged credentials valid only for as long as you need them, e.g. the life of an administrative task or action. I recommend creating credentials with a valid duration of one hour or less. Shrinking the timeframe for which your credentials are valid limits the risk of their exposure. Credentials that provide administrative-level privileges on Friday from 10am to 11am aren’t very useful to an attacker on Friday evening.

To create temporary credentials, you reference the current Time Based One Time Passcode (TOTP) in your MFA application and perform either of the following operations:


3a. Use a credential helper tool such as aws-mfa to fetch and manage your AWS credentials file
3b. If you’re an aws cli user, you can run:

aws sts get-session-token --duration-seconds 3600 --serial-number <ARN of your MFA Device> --token-code 783462

and use its output to manually update your AWS credentials file or environment variables.

3c. Write your own application that interfaces with STS using one of AWS’s SDKs!
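For option 3b, here is one sketch of turning the get-session-token output into environment variables. It assumes jq is installed, and the MFA ARN and passcode are placeholders:

```shell
creds=$(aws sts get-session-token \
  --duration-seconds 3600 \
  --serial-number arn:aws:iam::123456789012:mfa/hugo \
  --token-code 123456)

export AWS_ACCESS_KEY_ID=$(echo "$creds" | jq -r .Credentials.AccessKeyId)
export AWS_SECRET_ACCESS_KEY=$(echo "$creds" | jq -r .Credentials.SecretAccessKey)
export AWS_SESSION_TOKEN=$(echo "$creds" | jq -r .Credentials.SessionToken)
```

Subsequent aws cli calls in that shell will use the temporary, MFA-backed credentials.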

AWS Console and MFA

Implementing MFA for console usage is a much simpler process. By performing Step 1, the console automatically prompts for your MFA passcode upon login. Awesome, and easy!

Service Accounts

There are scenarios where temporary credentials do not fit, such as long-running tasks. Having to renew credentials every 60 minutes for a long-running or recurring automated process is highly impractical. In this case, it’s best to create what I like to call an IAM Service Account. An IAM Service Account is just a normal IAM User, but it’s used by an application or process instead of a human being. Because the service account won’t use MFA, you’ll want to reduce the risk associated with its credentials in the event of their exposure. You do this by combining a least privilege policy (only give access to what’s absolutely necessary) with additional controls, such as source IP address restrictions.

Here’s an example Service Account IAM policy that only allows EC2 instance termination from an allowed IP address range.
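A sketch of what such a policy could look like (the action and IP range are placeholders; 203.0.113.0/24 is a documentation range):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "ec2:TerminateInstances",
      "Resource": "*",
      "Condition": {
        "IpAddress": { "aws:SourceIp": "203.0.113.0/24" }
      }
    }
  ]
}
```

Even if these keys leak, they are only usable from your network.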

MFA Protection on Identity Providers and Federation

While AWS offers MFA Protection for Cross-Account Delegation, this only applies to requests originating from an AWS account. AWS does not have visibility into the MFA status of external identity providers (IdPs). If your organization uses an external identity provider to broker access to AWS, either via SAML or a custom federation solution, it is advised that you implement an MFA solution, such as Duo, in your IdP’s authentication workflow.

Stay safe, have fun, and keep building!

About the Author

Brian Nuszkowski (@nuszkowski) is a Software Engineer at Uber’s Advanced Technologies Center. He is on the organizing committee for DevOpsDays Detroit and has spoken at several conferences throughout North America such as DevOps Days Austin, Pittsburgh, Toronto, and O’Reilly’s Velocity conference in New York.

About the Editors

Chris Henry is a technologist who has devoted his professional life to building technology and teams that create truly useful products. He believes in leveraging the right technologies to keep systems up and safe, and teams productive. Previously, Chris led technology teams at Behance, the leading online platform for creatives to showcase and discover creative work, and later at Adobe, which acquired Behance in 2012. A large part of his time has been spent continually examining and improving systems and processes with the goal of providing the best experience to end users. He’s currently building a simple way to send Congress opinions about current legislation and track the results. He occasionally blogs about tech, travel, and his cat.

Chris Castle is a Delivery Manager within Accenture’s Technology Architecture practice. During his tenure, he has spent time with major financial services and media companies. He is currently involved in the creation of a compute request and deployment platform to enable migration of his client’s internal applications to AWS.

Are you getting the most out of IAM?

07. December 2016

Author: Jon Topper
Editors: Bill Weiss, Alfredo Cambera

Identity Concepts

Identity is everywhere, whether we’re talking about your GitHub id, Twitter handle, or email address. A strong notion of identity is important in information systems, particularly where security and compliance is involved, and a good identity system supports access control, trust delegation, and audit trail.

AWS provides a number of services for managing identity, and today we’ll be looking at their main service in this area: IAM – Identity and Access Management.

IAM Concepts

Let’s take a look at the building blocks that IAM provides.

First of all, there’s the root user. This is how you’ll log in when you’ve first set up your AWS account. This identity is permitted to do anything and everything to any resource you create in that account, and – like the unix root user – you should really avoid using it for day to day work.

As well as the root user, IAM supports other users. These are separate identities which will typically be used by the people in your organization. Ideally, you’ll have just one user per person; and only one person will have access to that user’s credentials – sharing usernames and passwords is bad form. Users can have permissions granted to them by the use of policies.

Policies are JSON documents which, when attached to another entity, dictate what those entities are allowed to do.

Just like a unix system, we also have groups. Groups pull together lists of users, and any policies applied to the group are available to the members.

IAM also provides roles. In the standard AWS icon set, an IAM Role is represented as a hard hat. This is fairly appropriate, since other entities can “wear” a role, a little like putting on a hat. You can’t log directly into a role – they can’t have passwords – but users and instances can assume a role, and when they do so, the policies associated with that role dictate what they’re allowed to do.

Finally we have tokens. These are sets of credentials you can hold, either permanent or temporary. If you have a token, you can present it to API calls to prove who you are.

IAM works across regions, so any IAM entity you create is available everywhere. Unlike other AWS services, IAM itself doesn’t cost anything – though obviously anything created in your account by an IAM user will incur costs in the same way as if you’ve done it yourself.

Basic Example

In a typical example we may have three members of staff: Alice, Bob and Carla. Alice is the person who runs the AWS account, and to stop her using the root account for day to day work, she can create herself an IAM user, and assign it one of the default IAM Policies: AdministratorAccess.

As we said earlier, IAM Policies are JSON documents. The AdministratorAccess policy looks like this:
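The AdministratorAccess managed policy is about as simple as a policy gets:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "*",
      "Resource": "*"
    }
  ]
}
```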

The Version number here establishes which version of the JSON policy schema we’re using and this will likely be the same across all of your policies. For the purpose of this discussion it can be ignored.

The Statement list is the interesting bit: here, we’re saying that anyone using this policy is permitted to call any Action (the * is a wildcard match) on any Resource. Essentially, the holder of this policy has the same level of access as the root account, which is what Alice wants, because she’s in charge.

Bob and Carla are part of Alice’s team. We want them to be able to make changes to most of the AWS account, but we don’t want to let them manipulate users – otherwise they might disable Alice’s account, and she doesn’t want that! We can create a group called PowerUsers to put Bob and Carla in, and assign another default policy to that group, PowerUserAccess, which looks like this:
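At the time of writing, the PowerUserAccess managed policy looked like this:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "NotAction": "iam:*",
      "Resource": "*"
    }
  ]
}
```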

Here you can see that we’re using a NotAction match in the Statement list. We’re saying that users with this policy are allowed to access all actions that don’t match the iam:* wildcard. When we give this policy to Bob and Carla, they’re no longer able to manipulate users with IAM, either in the console, on the CLI or via API calls.

This, though, presents a problem. Now Bob and Carla can’t make changes to their own users either! They won’t be able to change their passwords, for a start, which isn’t great news.

So, we want to allow PowerUsers to perform certain IAM activities, but only on their own users – we shouldn’t let Bob change Carla’s password. IAM provides us with a way to do that. See, for example, this ManageOwnCredentials policy:
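A sketch of such a ManageOwnCredentials policy (the exact action list is illustrative and will vary with your needs):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "iam:ChangePassword",
        "iam:CreateAccessKey",
        "iam:DeleteAccessKey",
        "iam:ListAccessKeys",
        "iam:UpdateAccessKey",
        "iam:GetUser"
      ],
      "Resource": "arn:aws:iam::*:user/${aws:username}"
    }
  ]
}
```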

The important part of this policy is the ${aws:username} variable expansion. This is expanded when the policy is evaluated, so when Bob is making calls against the IAM service, that variable is expanded to bob.

There’s a great set of example policies for administering IAM resources in the IAM docs, and these cover a number of other useful scenarios.

Multi-Factor Authentication

You can increase the level of security in your IAM accounts by requiring users to make use of a multi-factor authentication token.

Your password is something that you know. An MFA token adds a possession factor: it’s something that you have. You’re then only granted access to the system when both of these factors are present.

If someone finds out your password, but they don’t have access to your MFA token, they still won’t be able to get into the system.

There are instructions on how to set up MFA tokens in the IAM documentation. For most types of user, a “virtual token” such as the Google Authenticator app is sufficient.

Once this is set up, we can prevent non-MFA users from accessing certain policies by adding this condition to IAM policy statements:
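The condition in question:

```json
"Condition": {
  "Bool": { "aws:MultiFactorAuthPresent": "true" }
}
```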

As an aside, several other services permit the use of MFA tokens (they may refer to it as 2FA); enabling MFA where available is a good practice to get into. I use it with my Google accounts, with GitHub, Slack, and Dropbox.

Instance Profiles

If your app needs to write to an S3 bucket, or use DynamoDB, or otherwise make AWS API calls, you may have AWS access credentials hard-coded in your application config. There is a better way!

In the Roles section of the IAM console, you can create a new AWS Service Role, and choose the “Amazon EC2” type. On creation of that role, you can attach policy documents to it, and define what that role is allowed to do.

As a real life example, we host application artefacts as package repositories in an S3 bucket. We want our EC2 instances to be able to install these packages, and so we create a policy which allows our instances read-only access to our S3 bucket.
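A sketch of such a read-only policy (the bucket name here is a placeholder):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:ListBucket"],
      "Resource": [
        "arn:aws:s3:::example-package-repo",
        "arn:aws:s3:::example-package-repo/*"
      ]
    }
  ]
}
```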

When we create new EC2 instances, we can attach our new role to it. Code running on the instance can then request temporary tokens associated with the new server role.

These tokens are served by the Instance Metadata Service. They can be used to call actions on AWS resources as dictated by the policies attached to the role.


The diagram shows the flow of requests. At step 1, the application connects to the instance metadata service with a request to assume a role. In step 2, the metadata service returns a temporary access token back to the application. In step 3, the application connects to S3 using that token.

The official AWS SDKs are all capable of obtaining credentials from the Metadata Service without you needing to worry about it. Refer to the documentation for details.

The benefit of this approach is that if your application is compromised and your AWS tokens leak out, these can only be used for a short amount of time before they’ll expire, reducing the amount of damage that can be caused in this scenario. With hard-coded credentials you’d have to rotate these yourself.

Cross-Account Role Assumption

One other use of roles is really useful if you use multiple AWS accounts. It’s considered best practice to use separate AWS accounts for different environments (e.g. live and test). In our consultancy work, we work with a number of customers, who each have four or more accounts, so this is invaluable to us.

In our main account (in this example, account ID 00001), we create a group for our users who are allowed to access customer accounts. We create a policy for that group, AssumeRoleCustomer, that looks like this:
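A sketch of that AssumeRoleCustomer policy, using the example account IDs from the text:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "sts:AssumeRole",
      "Resource": "arn:aws:iam::00005:role/ScaleFactoryUser"
    }
  ]
}
```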

In this example, our customer’s account is ID 00005, and they have a role in that account called ScaleFactoryUser. This AssumeRoleCustomer policy permits our users to call sts:AssumeRole to take on the ScaleFactoryUser role in the customer’s account.

sts:AssumeRole is an API call which will return a temporary token for the role specified in the resource, which we can then use in order to behave as that role.

Of course, the other account (00005) also needs a policy to allow this, and so we set up a Trust Relationship Policy:
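A sketch of the trust relationship policy on the customer’s side (again using the example account IDs):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::00001:root" },
      "Action": "sts:AssumeRole",
      "Condition": {
        "Bool": { "aws:MultiFactorAuthPresent": "true" }
      }
    }
  ]
}
```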

This policy allows any entity in the 00001 account to call sts:AssumeRole in our account, as long as it is using an MFA token (remember we saw that conditional in the earlier example?).

Having set that up, we can now log into our main account in the web console, click our username in the top right of the console and choose “Switch Role”. By filling in the account number of the other account (00005), and the name of the role we want to assume (ScaleFactoryUser), the web console calls sts:AssumeRole in the background, and uses that to start accessing the customer account.

Role assumption doesn’t have to be cross-account, by the way. You can also allow users to assume roles in the same account – and this can be used to allow unprivileged users occasional access to superuser privileges, in the same way you might use sudo on a unix system.

Federated Identity

When we’re talking about identity, It’s important to know the difference between the two “auth”s: authentication and authorization.

Authentication is used to establish who you are. So, when we use a username and password (and optionally an MFA token) to connect to the web console, that’s authentication at work.

This is distinct from Authorization which is used to establish what you can do. IAM policies control this.

In IAM, these two concepts are separate. It is possible to configure an Identity Provider (IdP) which is external to IAM, and use that for authentication. Users authenticated against the external IdP can then be assigned IAM roles, which control the authorization part of the story.

IdPs can speak either SAML or OpenID Connect. Google Apps (or are we calling it G Suite now?) can be set up as a SAML provider, and I followed this blog post with some success. I can now jump straight from my Google account into my AWS console, taking on a role I’ve called GoogleSSO, without having to give any other credentials.

Wrapping Up

I hope I’ve given you a flavour of some of the things you can do with IAM. If you’re still logging in with the root account, if you’re not using MFA, or if you’re hard-coding credentials in your application config, you should now be armed with the information you need to level up your security practice.

In addition to that, you may benefit from using role assumption, cross-account access, or an external IdP. As a bonus hint, you should also look into CloudTrail logging, so that your Alice can keep an eye on what Bob and Carla are up to!

However you’re spending the rest of this year, I wish you all the best.

About the Author

Jon Topper has been building Linux infrastructure for fifteen years. His UK-based consultancy, The Scale Factory, are a team of DevOps and infrastructure specialists, helping organizations of various sizes design, build, operate and scale their platforms.

About the Editors

Bill is a senior manager at Puppet in the SRE group. Before his move to Portland to join Puppet he spent six years in Chicago working for Backstop Solutions Group, prior to which he was in New Mexico working for the Department of Energy. He still loves him some hardware, but is accepting that AWS is pretty rad for some things.

Alfredo Cambera is a Venezuelan outdoorsman, passionate about DevOps, AWS, automation, Data Visualization, Python and open source technologies. He works as Senior Operations Engineer for a company that offers Mobile Engagement Solutions around the globe.

IAM Policies, Roles and Profiles and how to keep secrets away from your instances

05. December 2016

Author: Mark Harrison
Editor: Jyrki Puttonen

AWS Identity and Access Management (IAM) is Amazon’s service for controlling access to AWS resources, or more simply, it provides a way for you to decide who has access to what in AWS. This simple description however hides the depth and complexity of what is probably one of the most misunderstood of Amazon’s services.

Many of you will have made use of IAM in order to create multiple users in AWS rather than sharing a single root user, but there are many more ways IAM can be useful to you. This article will be focusing on one use of IAM in particular: instance roles. Instance roles allow you to give AWS access to EC2 instances without them needing to store an AWS API key. I’ll be taking you through how to set them up, how to use them with your applications, and some of the things instance roles are useful for.

Throughout this article, I’ll be using terraform to create instances, roles and policies. However, the principles will apply if you use a different provisioning tool or if you use the API directly.

An example

We’re going to start off with a simple terraform configuration that creates a single micro instance in EC2. Here I’ve created a blank directory with a single terraform configuration file inside.
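A minimal sketch of that configuration (the resource name, AMI ID, and region are placeholders):

```hcl
provider "aws" {
  region = "us-east-1"
}

resource "aws_instance" "example" {
  ami           = "ami-0d729a60" # placeholder: pick an AMI for your region
  instance_type = "t2.micro"
}
```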

When I run terraform apply, terraform creates a running EC2 instance based on that configuration. This will be the starting point for us to add IAM roles/policies to.

Let’s say we are writing an application and want to provide access to an S3 bucket. One way would be simply to copy your AWS API keys into the configuration file for your application, but this would give your application full access to your AWS account just as if you had logged in yourself. A better option would be to make a new IAM user, give them just the permissions needed to access the S3 bucket, and create API keys for that user. However, you still have to store the API keys in the application’s configuration file, along with all the hassles of managing secrets that entails.

Instead, what we’re going to do is create a role that allows access to the S3 bucket, and assign it to the instance. First, we’re going to make the S3 bucket:
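A sketch of the bucket resource (the name matches the bucket referred to later in the policy):

```hcl
resource "aws_s3_bucket" "myawsadventapp" {
  bucket = "myawsadventapp"
}
```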

Then, we’re going to create an AWS IAM policy that grants access to the bucket. A policy is simply a JSON document that lists permissions to things in AWS:
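A sketch of that policy resource, using current terraform AWS provider syntax (resource and role names are illustrative):

```hcl
resource "aws_iam_role_policy" "s3_read" {
  name = "s3_read"
  role = "${aws_iam_role.s3_read_role.id}"

  policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject"],
      "Resource": "arn:aws:s3:::myawsadventapp/*"
    }
  ]
}
EOF
}
```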

The actual policy document is the JSON bit between the <<EOF and EOF:

There’s quite a bit going on here, but the important parts are the Action and Resource sections. The Action section says what you can do, and in this case we’re saying you can get objects from S3 (in other words, we’re providing read only access to something in S3). The Resource section specifies what you can do it with, and in this case we say you can get S3 objects from anywhere inside the myawsadventapp bucket. If we wanted to provide write access to the bucket we would add another action, s3:PutObject, to the list of actions we allow. We can also change the name of the S3 bucket as needed to provide access to other buckets.

Now that we have the Policy set up to allow access to S3, we need to actually give that set of permissions to the instance itself. To do this, we make a role:
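A sketch of the role, with the assume role policy that lets EC2 instances wear it:

```hcl
resource "aws_iam_role" "s3_read_role" {
  name = "s3_read_role"

  assume_role_policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "ec2.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}
EOF
}
```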

The first part of this is pretty straightforward: we give the role a name. But why is there another Policy JSON document there? This assume role policy specifies who, or what, can become the role. In this case, the policy is just stating that EC2 instances can have the role assigned to them. Generally, when making instance roles, you don’t need to change this.

The policy is already linked to the role (we added a role = section when making the policy). All that remains is to link the role with our instance.

If you were using the AWS web console to make a new instance, assigning a role to it is easy, you just pick the role from the list of roles in the instance details section as you make the instance. However, if you are using terraform, the AWS cli tools, or some other provisioning tool, then there is one more link in the chain: Instance Profiles.

Instance profiles are simply containers for roles that can be attached directly to instances, and can be thought of as simply an implementation detail. Whenever you make a role, make a matching profile, and then attach the profile to the instance. Here’s the profile to match the role we just created:
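A sketch of the matching profile, again in current provider syntax (older terraform versions used a roles = [...] list instead):

```hcl
resource "aws_iam_instance_profile" "s3_read_profile" {
  name = "s3_read_role"
  role = "${aws_iam_role.s3_read_role.name}"
}
```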

Notice how the name of the profile is the same as the name of the role. This is how it works with the AWS web console: AWS creates a profile with the same name as the role behind the scenes. Keeping the name the same makes things easier, and once you have done this you can then completely forget that profiles exist.

Finally, now that the profile has been created, we just edit the instance and assign the profile to it:
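The instance resource from earlier, with the profile attached (AMI ID is still a placeholder):

```hcl
resource "aws_instance" "example" {
  ami                  = "ami-0d729a60" # placeholder
  instance_type        = "t2.micro"
  iam_instance_profile = "${aws_iam_instance_profile.s3_read_profile.name}"
}
```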

And now, with all of the required configuration made, we can go ahead and make the instance:
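Creating it is just one more apply:

```shell
terraform apply
```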

There is one thing to be aware of: an instance profile can’t be changed after an instance has been created, so if you were following along and created the instance earlier without adding the instance profile then you have to recreate the instance from scratch. With this toy instance that’s not a problem, but it may be if you’re adding this to existing infrastructure.

Accessing API keys from the instance

Once the terraform run is complete, we can ssh into the instance and see that the instance profile has been applied:
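On the instance itself, a command along these lines (a sketch of the sort of thing the article means) asks the metadata service which role credentials are available:

```shell
# run on the instance; 169.254.169.254 is the instance metadata service
curl http://169.254.169.254/latest/meta-data/iam/security-credentials/
```

This returns the name of the attached role.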

And if we run a slightly different curl command, we can obtain AWS API keys:
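Appending the role name to the same path returns the keys themselves (values elided here):

```shell
curl http://169.254.169.254/latest/meta-data/iam/security-credentials/s3_read_role
# returns a JSON document containing AccessKeyId, SecretAccessKey,
# Token, and an Expiration timestamp
```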

Your application can simply look up the keys when it wants to use an AWS API and doesn’t need to store them in a config file or elsewhere. Note that the credentials listed have an expiration time mentioned. The keys change approximately every 6 hours and you will need to look them up again after this time.

To make life easier, most AWS libraries and commands already support instance roles as a method of getting credentials, and will automatically use any credentials that are available without any further configuration. For example, you can just use the aws cli without needing to configure your credentials:
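For example, a sketch (assuming the read-only bucket policy from earlier; the object key is a placeholder):

```shell
# no `aws configure` needed; credentials come from the instance role
aws s3 cp s3://myawsadventapp/some-file.txt .
```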

Some things you can do with IAM roles and instance profiles

So far we’ve shown an example of giving instances access to a particular S3 bucket. This is great, but there are some other uses for instance roles:

One good use case is managing EBS volumes. Say you have an autoscaling group (because AWS instances break and autoscaling groups allow AWS to launch replacements for broken instances), but you have state that needs to be stored on instances that you’d like to not disappear every time an instance is recreated. The way you deal with this is to store the stateful data on EBS volumes, and use a script that runs on boot to attach any EBS volume that isn’t currently in use.

Another case where having IAM roles is really handy: if you install grafana on an AWS instance, the cloudwatch data source supports using IAM roles, so you can use grafana to view cloudwatch graphs for your AWS account without needing to set up credentials. To do this, use the following IAM policy:
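A sketch of a suitable read-only CloudWatch policy:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "cloudwatch:ListMetrics",
        "cloudwatch:GetMetricStatistics"
      ],
      "Resource": "*"
    }
  ]
}
```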

Finally, a special case of the S3 access policy above is to use the S3 bucket to store secrets. This uses S3 as a trusted store, and you use IAM profiles to determine which instances get access to the secrets. This is the basis of the citadel cookbook for Chef that can be used to manage secrets in AWS.

More information

Hopefully this article has given you a taste for IAM roles and instance profiles and how they can make your life much easier when interacting with the AWS API from EC2 instances. If you want more information on using IAM roles, the AWS Documentation on IAM Roles goes into much more detail and is well worth a read.

About the Author

Mark Harrison is a Systems Administrator on the Chef operations team, where he is responsible for the care and feeding of Hosted Chef as well as maintaining several of Chef’s internal systems. Before coming to Chef, Mark led the operations team at OmniTI, helping clients scale their web architectures and supporting some of the largest infrastructures in the world.

About the Editors

Jyrki Puttonen is Chief Solutions Executive at Symbio Finland (@SymbioFinland) who tries to keep on track what happens in cloud.

server-free pubsub ( and nearly code-free )

02. December 2016

Author: Ed Anderson

Editors: Evan Mouzakitis, Brian O’Rourke


This article will introduce you to creating serverless PubSub microservices by building a simple Slack based word counting service.

Lambda Overview

These PubSub microservices are AWS Lambda based. Lambda is a service that does not require you to manage servers in order to run code. The high level overview is that you define events ( called triggers ) that will cause a packaging of your code ( called a function ) to be invoked. Inside your package ( aka function ), a specific function within a file ( called a handler ) will be called.

If you’re feeling a bit confused by overloaded terminology, you are not alone. For now, here’s the short list:

Lambda term    Common Name                      Description
Trigger        AWS Service                      Component that invokes Lambda
Function       Software package                 Group of files needed to run code (includes libraries)
Handler        file.function in your package    The filename/function name to execute


There are many different types of triggers ( S3, API Gateway, Kinesis streams, and more). See this page for a complete list. Lambdas run in the context of a specific IAM Role. This means that, in addition to features provided by your language of choice ( python, nodejs, java, scala ), you can call from your Lambda to other AWS Services ( like DynamoDB ).

Intro to the PubSub Microservices

These microservices, once built, will count words typed into Slack. The services are:

  1. The first service splits up the user-input into individual words and:
    • increments the counter for each word
    • supplies a response to the user showing the current count of any seen words
    • triggers functions 2 and 3 which execute concurrently
  2. The second service also splits up the user-input into individual words and:
    • adds a count of 10 to each of those words
  3. The third service logs the input it receives.

While you might not have a specific need for a word counter, the concepts demonstrated here can be applied elsewhere. For example, you may have a project where you need to run several things in series, or perhaps you have a single event that needs to trigger concurrent workflows.

For example:

  • Concurrent workflows triggered by a single event:
    • New user joins org, and needs accounts created in several systems
    • Website user is interested in a specific topic, and you want to curate additional content to present to the user
    • There is a software outage, and you need to update several systems ( statuspage, nagios, etc ) at the same time
    • Website clicks need to be tracked in a system used by Operations, and a different system used by Analytics
  • Serial workflows triggered by a single event:
    • New user needs a Google account created, then that Google account needs to be given permission to access another system integrated with Google auth.
    • A new version of software needs to be packaged, then deployed, then activated
    • Cup is inserted to a coffee machine, then the coffee machine dispenses coffee into the cup


The flow through the word counter services looks like this:

  • The API Gateway ( trigger ) will call a Lambda Function that will split whatever text it is given into specific words
    • Upsert a key in a DynamoDB table with the number 1
    • Drop a message on a SNS Topic
  • The SNS Topic ( trigger ) will have two lambda functions attached to it that will
    • Upsert the same keys in the dynamodb with the number 10
    • Log a message to CloudWatchLogs
Visualization of Different Microservices comprising the Slack Based Word counter


Example code for AWS Advent near-code-free PubSub. Technologies used:

  • Slack ( outgoing webhooks )
  • API Gateway
  • IAM
  • SNS
  • Lambda
  • DynamoDB

Pub/Sub is best* ( *for some values of best )

I came into the world of computing by way of The Operations Path. The Publish-Subscribe Pattern has always been near and dear to my ❤️.

There are a few things about PubSub that I really appreciate as an “infrastructure person”.

  1. Scalability. In terms of the transport layer ( usually a message bus of some kind ), the ability to scale is separate from the publishers and the consumers. In this wonderful thing which is AWS, we as infrastructure admins can get out of this aspect of the business of running PubSub entirely.
  2. Loose Coupling. In the happy path, publishers don’t know anything about what subscribers are doing with the messages they publish. There’s admittedly a little hand-waving here, and folks new to PubSub ( and sometimes those that are experienced ) get rude surprises as messages mutate over time.
  3. Asynchronous. This is not necessarily inherent in the PubSub pattern, but it’s the most common implementation that I’ve seen. There’s quite a lot of pressure that can be absent from Dev Teams, Operations Teams, or DevOps Teams when there is no expectation from the business that systems will retain single millisecond response times.
  4. New Cloud Ways. Once upon a time, we needed to queue messages in PubSub systems ( and you might still have a need for that feature ), but with Lambda, we can also invoke consumers on demand as messages pass through our system. We don’t necessarily have to keep things in a queue at all. Message appears, processing code runs, everybody’s happy.

Yo dawg, I heard you like ️☁️

One of the biggest benefits that we can enjoy from being hosted with AWS is not having to manage stuff. Running your own message bus might be something that separates your business from your competition, but it might also be undifferentiated heavy lifting.

IMO, if AWS can and will handle scaling issues for you ( to say nothing of only paying for the transactions that you use ), then it might be the right choice to let them take care of that.

I would also like to point out that running these things without servers isn’t quite the same thing as running them in a traditional setup. I ended up redoing this implementation a few times as I kept finding the rough edges of running things serverless. All were ultimately addressable, but I wanted to keep the complexity of this down somewhat.



CloudFormation is pretty well covered elsewhere on AWS Advent, so we’ll configure this little ditty via the AWS console.


Set up the first Lambda, which will be linked to an outgoing webhook in Slack

Set up the DynamoDB table

👇 You can follow the steps below, or view this video 👉 Video to DynamoDB Create

  1. Console
  2. DynamoDB
  3. Create Table
    1. Table Name: table
    2. Primary Key: word
    3. Create

Setup the First Lambda

This Lambda accepts the input from a Slack outgoing webhook, splits the input into separate words, and adds a count of one to each word. It also returns a JSON response body to the outgoing webhook, which displays a message in Slack.

If the Lambda is triggered with the input awsadvent some words, this Lambda will create the following three keys in DynamoDB and give each the value of one.

  • awsadvent = 1
  • some = 1
  • words = 1
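The core of that handler can be sketched roughly like this. This is a minimal sketch, not the exact repo code: the incoming `text` field, the `times_seen` attribute name, and the `ADD`-based upsert are assumptions; the `word` key and `DYNAMO_TABLE` environment variable match the console setup above.

```python
import os

def count_words(text):
    """Split the incoming Slack text into words, each seen once."""
    counts = {}
    for word in text.split():
        counts[word] = 1
    return counts

def handler(event, context):
    # boto3 is imported lazily so count_words stays testable without AWS.
    import boto3
    counts = count_words(event.get('text', ''))
    table = boto3.resource('dynamodb').Table(os.environ['DYNAMO_TABLE'])
    for word in counts:
        # ADD upserts: it creates the item if missing, otherwise increments.
        table.update_item(
            Key={'word': word},
            UpdateExpression='ADD times_seen :one',
            ExpressionAttributeValues={':one': 1},
        )
    # The returned body is what the Slack outgoing webhook displays.
    return {'text': 'Counted %d words' % len(counts)}
```

Using `ADD` in the `UpdateExpression` is what makes this an upsert: you don’t have to read the item first to know whether it exists.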

👇 You can follow the steps below, or view this video 👉 Video to Create the first Lambda

  1. Make the first Lambda, which accepts Slack outgoing webhook input and saves it in DynamoDB
    1. Console
    2. Lambda
    3. Get Started Now
    4. Select Blueprint
      1. Blank Function
    5. Configure Triggers
      1. Click in the empty box
      2. Choose API Gateway
    6. API Name
      1. aws_advent ( This will be the /PATH of your API Call )
    7. Security
      1. Open
    8. Name
      1. aws_advent
    9. Runtime
      1. Python 2.7
    10. Code Entry Type
      1. Inline
      2. It’s included in this repo. There are more Lambda Packaging Examples here
    11. Environment Variables
      1. DYNAMO_TABLE = table
    12. Handler
      1. app.handler
    13. Role
      1. Create new role from template(s)
      2. Name
        1. aws_advent_lambda_dynamo
    14. Policy Templates
      1. Simple Microservice permissions
    15. Triggers
      1. API Gateway
      2. save the URL

Link it to your favorite slack

👇 You can follow the steps below, or view this video 👉 Video for setting up the slack outgoing webhook

  1. Set up an outgoing webhook in your favorite Slack team.
  2. Manage
  3. Search
  4. outgoing webhooks
  5. Channel ( optional )
  6. Trigger words
    1. awsadvent
  7. URLs
    1. Your API Gateway endpoint from the Lambda above
  8. Customize Name
    1. awsadvent-bot
  9. Go to slack
    1. Join the room
    2. Say the trigger word
    3. You should see something like 👉 something like this


OK, now we want to do the awesome PubSub stuff.

Make the SNS Topic

We’re using an SNS Topic as a broker. The producer ( the aws_advent Lambda ) publishes messages to the SNS Topic. Two other Lambdas will be consumers of the SNS Topic, and they’ll be triggered as new messages come into the Topic.

👇 You can follow the steps below, or view this video 👉 Video for setting up the SNS Topic

  1. Console
  2. SNS
  3. New Topic
  4. Name awsadvent
  5. Note the topic ARN

Add additional permissions to the first Lambda

This permission will allow the first Lambda to talk to the SNS Topic. You also need to set an environment variable on the aws_advent Lambda so it knows which Topic to publish to.

👇 You can follow the steps below, or view this video 👉 Adding additional IAM Permissions to the aws_lambda role

  1. Give additional IAM permissions on the role for the first lambda
    1. Console
    2. IAM
    3. Roles → aws_advent_lambda_dynamo
      1. Permissions
      2. Inline Policies
      3. click here
      4. Policy Name: aws_advent_lambda_dynamo_snspublish
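The inline policy itself only needs sns:Publish. A minimal version might look like the following; the region and account number here are placeholders, so substitute your own topic ARN ( scoping Resource to that ARN, rather than *, is the safer choice ):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "sns:Publish",
      "Resource": "arn:aws:sns:us-east-1:123456789012:awsadvent"
    }
  ]
}
```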

Add the SNS Topic ARN to the aws_advent Lambda

👇 You can follow the steps below, or view this video 👉 Adding a new environment variable to the lambda

There’s a conditional in the aws_advent lambda that will publish to a SNS topic, if the SNS_TOPIC_ARN environment variable is set. Set it, and watch more PubSub magic happen.

  1. Add the SNS_TOPIC_ARN environment variable to the aws_advent lambda
    1. Console
    2. Lambda
    3. aws_advent
    4. Scroll down to Environment Variables
      1. SNS_TOPIC_ARN = the SNS Topic ARN from above
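The shape of that conditional can be sketched like this. The `publish` callable is injected here purely so the logic is testable without AWS; this is an illustration, not the repo code, and in the real Lambda it would wrap boto3, e.g. `lambda arn, msg: boto3.client('sns').publish(TopicArn=arn, Message=msg)`:

```python
import os

def maybe_publish(message, publish):
    """Publish to SNS only when SNS_TOPIC_ARN is configured.

    `publish` is any callable taking (topic_arn, message); in the Lambda it
    would wrap the boto3 SNS client's publish call.
    """
    topic_arn = os.environ.get('SNS_TOPIC_ARN')
    if topic_arn:
        publish(topic_arn, message)
        return True
    # No topic configured: the Lambda still works, it just skips PubSub.
    return False
```

This is why setting the environment variable is all it takes to "turn on" the PubSub half of the system.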

Create a consumer Lambda: aws_advent_sns_multiplier

This microservice increments the values collected by the aws_advent Lambda. In a real world application, I would probably not take the approach of having a second Lambda function update values in a database that are originally input by another Lambda function. It’s useful here to show how work can be done outside of the Request->Response flow for a request. A less contrived example might be that this Lambda checks for words with high counts, to build a leaderboard of words.

This Lambda function will subscribe to the SNS Topic, and it is triggered when a message is delivered to the SNS Topic. In the real world, this Lambda might do something like copy data to a secondary database that internal users can query without impacting the user experience.
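A rough sketch of this consumer is below. The `Records[].Sns.Message` event shape is what SNS hands an SNS-triggered Lambda; the `times_seen` attribute name is an assumption carried over from the first Lambda sketch, not confirmed by the repo:

```python
def words_from_sns_event(event):
    """Pull the published message text out of an SNS-triggered Lambda event."""
    words = []
    for record in event['Records']:
        words.extend(record['Sns']['Message'].split())
    return words

def handler(event, context):
    # Imported lazily so the helper above is testable without AWS.
    import os
    import boto3
    table = boto3.resource('dynamodb').Table(os.environ.get('DYNAMO_TABLE', 'table'))
    for word in words_from_sns_event(event):
        # Same ADD-based upsert as the first Lambda, but adding 10 this time.
        table.update_item(
            Key={'word': word},
            UpdateExpression='ADD times_seen :ten',
            ExpressionAttributeValues={':ten': 10},
        )
```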

👇 You can follow the steps below, or view this video 👉 Creating the sns_multiplier lambda

  1. Console
  2. lambda
  3. Create a Lambda function
  4. Select Blueprint
    1. search sns
    2. sns-message python2.7 runtime
  5. Configure Triggers
    1. SNS topic
      1. awsadvent
      2. click enable trigger
  6. Name
    1. sns_multiplier
  7. Runtime
    1. Python 2.7
  8. Code Entry Type
    1. Inline
      1. It’s included in this repo.
  9. Handler
    1. sns_multiplier.handler
  10. Role
    1. Create new role from template(s)
  11. Policy Templates
    1. Simple Microservice permissions
  12. Next
  13. Create Function

Go back to slack and test it out.

Now that you have the most interesting parts hooked up together, test it out!

What we’d expect to happen is pictured here 👉 everything working

👇 Writeup is below, or view this video 👉 Watch it work

  • The first time we sent a message, the count of the number of times the words are seen is one. This is provided by our first Lambda
  • The second time we sent a message, the count of the number of times the words are seen is twelve. This is a combination of our first and second Lambdas working together.
    1. The first invocation set the count to current(0) + one, and passed the words off to the SNS topic. The value of each word in the database was set to 1.
    2. After SNS received the message, it ran the sns_multiplier Lambda, which added ten to the value of each word current(1) + 10. The value of each word in the database was set to 11.
    3. The second invocation set the count of each word to current(11) + 1. The value of each word in the database was set to 12.
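That sequence is easy to check with a quick in-memory simulation, using a plain dict as a stand-in for DynamoDB:

```python
counts = {}

def first_lambda(word):
    # The webhook-triggered Lambda: upsert +1
    counts[word] = counts.get(word, 0) + 1

def multiplier_lambda(word):
    # The SNS-triggered Lambda: upsert +10
    counts[word] = counts.get(word, 0) + 10

first_lambda('awsadvent')       # first message: 0 + 1 = 1
multiplier_lambda('awsadvent')  # SNS fires: 1 + 10 = 11
first_lambda('awsadvent')       # second message: 11 + 1 = 12
print(counts['awsadvent'])      # -> 12, which is what Slack reports
```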

️️💯💯💯 Now you’re doing pubsub microservices 💯💯💯

Setup the logger Lambda as well

The output of this Lambda will be viewable in the CloudWatch Logs console, and it only exists to show that we could do something else ( anything else, even ) with this microservice implementation.

  1. Console
  2. Lambda
  3. Create a Lambda function
  4. Select Blueprint
    1. search sns
    2. sns-message python2.7 runtime
  5. Configure Triggers
    1. SNS topic
      1. awsadvent
      2. click enable trigger
  6. Name
    1. sns_logger
  7. Runtime
    1. Python 2.7
  8. Code Entry Type
    1. Inline
      1. It’s included in this repo.
  9. Handler
    1. sns_logger.handler
  10. Role
    1. Create new role from template(s)
  11. Policy Templates
    1. Simple Microservice permissions
  12. Next
  13. Create Function
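The logger itself can be as small as the sketch below. Anything a Lambda function prints ends up in its CloudWatch Logs stream, and the SNS event shape is the same one the multiplier consumes; the exact log wording here is illustrative:

```python
from __future__ import print_function

def handler(event, context):
    # Each SNS record carries one published message; print() lands in CloudWatch Logs.
    messages = [record['Sns']['Message'] for record in event['Records']]
    for message in messages:
        print('Received from SNS: %s' % message)
    return messages
```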

In conclusion

PubSub is an awesome model for some types of work, and in AWS with Lambda we can work inside this model relatively simply. Plenty of real-world work depends on the PubSub model.

You might translate this project to things that you actually need to do, like software deployment, user account management, or building leaderboards.

AWS + Lambda == the happy path

It’s ok to lean on AWS for the heavy lifting. As our word counter becomes more popular, we probably won’t have to do anything at all to scale with traffic. Having our code execute on a request-driven basis is a big win from my point of view. “Serverless” computing is a very interesting development in cloud computing. Look for ways to experiment with it; there are plenty of benefits to it ( other than novelty ).

Some benefits you can enjoy via Serverless PubSub in AWS:

  1. Scaling the publishers. Since this uses API Gateway to terminate user requests to a Lambda function:
    1. You don’t have idle resources burning money, waiting for traffic
    2. You don’t have to scale because traffic has increased or decreased
  2. Scaling the bus / interconnection. SNS did the following for you:
    1. Scaled to accommodate the volume of traffic we send to it
    2. Provided HA for the bus
    3. Pay-per-transaction. You don’t have to pay for idle resources!
  3. Scaling the consumers. Having Lambda functions that trigger on a message being delivered to SNS:
    1. Scales the Lambda invocations with the volume of traffic.
    2. Provides some sense of HA

Lambda and the API Gateway are works in progress.

Lambda is a new technology. If you use it, you will find some rough edges.

The API Gateway is a new technology. If you use it, you will find some rough edges.

Don’t let that dissuade you from trying them out!

I’m open for further discussion on these topics. Find me on twitter @edyesed

About the Author:

Ed Anderson has been working with the internet since the days of gopher and lynx. Ed has worked in healthcare, regional telecom, failed startups, multinational shipping conglomerates, and is currently working at

Ed is into dadops, devops, and chat bots.

Writing in the third person is not Ed’s gift. He’s much more comfortable poking the private cloud bear,  destroying ec2 instances, and writing lambda functions be they use case appropriate or not.

He can be found on Twitter at @edyesed.

About the Editors:

Evan Mouzakitis is a Research Engineer at Datadog. He is passionate about solving problems and helping others. He has written about monitoring many popular technologies, including Lambda, OpenStack, Hadoop, and Kafka.

Brian O’Rourke is the co-founder of RedisGreen, a highly available and highly instrumented Redis service. He has more than a decade of experience building and scaling systems and happy teams, and has been an active AWS user since S3 was a baby.