Alexa is checking your list

20. December 2016

Author: Matthew Williams
Editors: Benjamin Marsteau, Scott Francis

Recently I made a kitchen upgrade: I bought an Amazon Dot. Alexa, the voice assistant inside the intelligent puck, now plays a key role in the preparation of meals every day. With both hands full, I can say "Alexa, start a 40-minute timer" and not have to worry about burning the casserole. However, there is a bigger problem coming up that I think it can help me with. It is the gift-giving season, and I have been known to get the wrong things. Wouldn't it be great if I could have Alexa remind me what I need to get for each person on my list? Well, that simple idea took me down a path that has consumed me for a little too long. And since I built it, I figured I would share it with you.

Architecting a Solution

Now it is important to remember that I am a technologist, and therefore I am going to go way beyond what's necessary. [ "anything worth doing is worth overdoing." — anon. ] Rather than just building the Alexa side of things, I decided to create the entire ecosystem. My wife and I are the first in our families to add Alexa to our household, so that means I need a website for my friends and family to add what they want. And of course, that website needs to talk to a backend server with a REST API to collect the lists into a database. And then Alexa needs to use that same API to read off my lists.

OK, so spin up an EC2 instance and build away, right? I did say I am a technologist, right? That means I have to use the shiniest tools to get the job done. Otherwise, it would just be too easy.

My plan is to use a combination of AWS Lambda to serve the logic of the application, the API Gateway to host the REST endpoints, DynamoDB for saving the data, and another Lambda to respond to Alexa’s queries.

The Plan of Attack

Based on my needs, I think I came up with the ideal plan of attack. I would tackle the problems in the following order:

  1. Build the Backend – The backend includes the logic, API, and database.
    1. Build a Database to Store the Items
    2. Lambda Function to Add an Item
    3. Lambda Function to Delete an Item
    4. Lambda Function to List All Items
    5. Configure the API Gateway
  2. Build the User Interface – The frontend can be simple: show a list, and let folks add and remove from that list.
  3. Get Alexa Talking to the Service – That is why we are here, right?

There are some technologies used that you should understand before beginning. You do not have to know everything about Lambda or the API Gateway or DynamoDB, but let’s go over a few of the essentials.

Lambda Essentials

The purpose of Lambda is to run the functions you write. Configuration is pretty minimal, and you only get charged for the time your functions run (you get a lot of free time). You can do everything from the web console, but after setting up a few functions, you will want another way. See this page for more about AWS Lambda.
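
To give a feel for how small the unit of deployment is, here is a minimal sketch of a Node.js Lambda handler. It only illustrates the programming model and is not code from the project:

    // index.js: a minimal Lambda handler (illustrative sketch)
    exports.handler = function (event, context, callback) {
      // "event" carries the trigger's payload: an API Gateway request,
      // an S3 notification, an Alexa request, and so on.
      console.log('Received event:', JSON.stringify(event));

      // Hand the result back; the first argument is an error, if any.
      callback(null, { message: 'Hello from Lambda' });
    };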

API Gateway Essentials

The API Gateway is a service to make it easier to maintain and secure your APIs. Even if I get super popular, I probably won’t get charged much here as it is $3.50 per million API calls. See this page for more about the Amazon API Gateway.

DynamoDB Essentials

DynamoDB is a simple (and super fast) NoSQL database. My application has simple needs, and I am going to need a lot more friends before I reach the 25 GB and 200 million requests per month that are on the free plan. See this page for more about Amazon DynamoDB.

Serverless Framework

Sure I can go to each service’s console page and configure them, but I find it a lot easier to have it automated and in source control. There are many choices in this category including the Serverless framework, Apex, Node Lambda, and many others. They all share similar features so you should review them to see which fits your needs best. I used the Serverless framework for my implementation.

Alexa Skills

When you get your Amazon Echo or Dot home, you interact with Alexa, the voice assistant. The things she can do are called Alexa Skills. To build a skill, you define the phrases to recognize and the intents they map to, and then write the code that performs those actions.

Let’s Start Building

There are three main components that need to be built here: the API, the web frontend, and the skill. I chose a different workflow for each of them. The API uses the Serverless framework to define the CloudFormation template, Lambda functions, IAM roles, and API Gateway configuration. The webpage uses a Gulp workflow to compile and preview the site. And the Alexa skill uses a Yeoman generator. Each workflow has its benefits, and it was interesting to use all three.

If you would like to follow along, you can clone the GitHub repo: https://github.com/DataDog/AWS-Advent-Alexa-Skill-on-Lambda.

Building the Server

The process I went through was:

  1. Install Serverless Framework (npm i -g serverless)
  2. Create the first function (sls create -n <service name> -t aws-nodejs). The top-level concept in Serverless is a service. You create a service, and then all of the Lambda functions, CloudFormation templates, and IAM roles defined in the serverless.yaml file support that service.
  3. Add the resources needed to a CloudFormation template in the serverless.yaml file. For example (a sketch of this section appears after the list):
    [alexa_1: screenshot of the resources section of serverless.yaml]
    Refer to the CloudFormation docs and the Serverless Resources docs for more about this section.
  4. Add the IAM Role statements to allow your Lambda access to everything needed. For example:
    [alexa_2: screenshot of the IAM role statements in serverless.yaml]
  5. Add the Lambda functions you want to use in this service. For example:
    [alexa_3: screenshot of the functions section in serverless.yaml]
    The events section lists the triggers that can kick off this function. http means to use the API Gateway. I spent a little time in the API Gateway console and got confused. But these four lines in the serverless.yaml file were all I needed.
  6. Install the serverless-webpack npm package and add it to the YAML file:
    [alexa_4: screenshot of the serverless-webpack plugin configuration]
    This configuration tells Serverless to use webpack to bundle all your npm modules together in the right way. And if you want to use ECMAScript 2015, this will run Babel to convert your code back down to a JavaScript version that Lambda can run. You will have to set up your webpack.config.js and .babelrc files to get everything working.
  7. Write the functions. For the function I mentioned earlier, I added the following to my items.js file (a sketch of it appears after this list):
    [alexa_5: screenshot of the list function in items.js]
    This function sets the table name in my DynamoDB and then grabs all the rows. No matter what the result is, a response is formatted using this createResponse function:
    [alexa_6: screenshot of the createResponse helper]
    Notice the header. Without it, Cross-Origin Resource Sharing will not work, and you will get nothing but 502 errors when you try to consume the API.
  8. Deploy the Service:

    Now I use 99designs' aws-vault to store my AWS access keys rather than adding them to an rc file that could accidentally find its way up to GitHub. So the deploy command I use is shown in the sketch after this list.

    If everything works, it creates the DynamoDB table, configures the API Gateway APIs, and sets up the Lambdas. All I have to do is try them out, either from a new application or with a tool like Paw or Postman. Then rinse and repeat until everything works.
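
Here is a rough sketch of what the serverless.yaml pieces from steps 3 through 6 can look like. The table name, IAM actions, and function names are assumptions for illustration, not the exact ones from my project:

    service: gift-list

    provider:
      name: aws
      runtime: nodejs4.3
      # Step 4: IAM statements that let the functions reach the DynamoDB table
      iamRoleStatements:
        - Effect: Allow
          Action:
            - dynamodb:Scan
            - dynamodb:PutItem
            - dynamodb:DeleteItem
          Resource: "arn:aws:dynamodb:*:*:table/GiftList"

    plugins:
      - serverless-webpack   # Step 6: bundle each function with webpack/Babel

    functions:
      listItems:             # Step 5: one function per API action
        handler: items.list
        events:
          - http:            # these few lines replace all the API Gateway clicking
              path: items
              method: get

    resources:
      Resources:
        GiftListTable:       # Step 3: the DynamoDB table defined as CloudFormation
          Type: AWS::DynamoDB::Table
          Properties:
            TableName: GiftList
            AttributeDefinitions:
              - AttributeName: id
                AttributeType: S
            KeySchema:
              - AttributeName: id
                KeyType: HASH
            ProvisionedThroughput:
              ReadCapacityUnits: 1
              WriteCapacityUnits: 1

For step 7, the list function in items.js and the createResponse helper look roughly like this (again a sketch with assumed names; the CORS header is the part you cannot skip):

    var AWS = require('aws-sdk');
    var dynamo = new AWS.DynamoDB.DocumentClient();

    // Build the HTTP response shape the API Gateway proxy integration expects.
    function createResponse(statusCode, body) {
      return {
        statusCode: statusCode,
        headers: { 'Access-Control-Allow-Origin': '*' },   // the CORS header
        body: JSON.stringify(body)
      };
    }

    // List every item in the table.
    module.exports.list = function (event, context, callback) {
      var params = { TableName: process.env.TABLE_NAME || 'GiftList' };
      dynamo.scan(params, function (err, data) {
        if (err) {
          callback(null, createResponse(500, { error: err.message }));
        } else {
          callback(null, createResponse(200, data.Items));
        }
      });
    };

And for step 8, the deploy command with aws-vault looks something like this, with whatever profile name you configured:

    aws-vault exec my-profile -- sls deploy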

Building the Frontend

[alexa_7: screenshot of the gift list webpage]

Remember, I am a technologist, not an artist. It works, but I will not be winning any design awards. It is a webpage with a simple table on it that loads up some JavaScript to show my DynamoDB table:

[alexa_8: screenshot of the JavaScript that loads and renders the list]
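
The idea behind that script is simple: call the list endpoint and render one row per item. A sketch of it (the endpoint URL and element IDs are placeholders, not the real ones) looks like this:

    const API_URL = 'https://example.execute-api.us-east-1.amazonaws.com/dev/items';

    // Fetch the gift list from the API Gateway endpoint and fill in the table.
    function loadItems() {
      fetch(API_URL)
        .then(response => response.json())
        .then(items => {
          const tbody = document.querySelector('#gift-list tbody');
          tbody.innerHTML = '';
          for (const item of items) {
            const row = tbody.insertRow();
            row.insertCell().textContent = item.person;
            row.insertCell().textContent = item.gift;
          }
        })
        .catch(err => console.error('Could not load the list', err));
    }

    document.addEventListener('DOMContentLoaded', loadItems);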

Have I raised the technologist card enough times yet? Because of that, I need to keep to the new stuff even with the JavaScript features I am using. That means I am writing the code in ECMAScript 2015, so I need to use Babel to convert it to something usable in most browsers. I used Gulp for this stage to keep rebuilding the files and reloading my browser with each change.
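
For the curious, that compile-and-reload loop can be wired up with a gulpfile along these lines. The plugin choices and paths are assumptions, not my exact setup:

    // gulpfile.js: compile ES2015 with Babel and reload the browser on change
    var gulp = require('gulp');
    var babel = require('gulp-babel');
    var browserSync = require('browser-sync').create();

    gulp.task('scripts', function () {
      return gulp.src('src/**/*.js')
        .pipe(babel({ presets: ['es2015'] }))   // ES2015 down to browser-friendly JS
        .pipe(gulp.dest('build'))
        .pipe(browserSync.stream());            // push the change to the open browser
    });

    gulp.task('serve', ['scripts'], function () {
      browserSync.init({ server: { baseDir: '.' } });
      gulp.watch('src/**/*.js', ['scripts']);   // rebuild on every save
    });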

Building the Alexa Skill

Now that we have everything else working, it is time to build the Alexa Skill. Again, Amazon has a console for this which I used for the initial configuration on the Lambda that backs the skill. But then I switched over to using Matt Kruse’s Alexa App framework. What I found especially cool about his framework was that it works with his alexa-app-server so I can test out the skill locally without having to deploy to Amazon.

For this one I went back to pre-ECMAScript 2015 syntax, but I hope that doesn't mean I lose technologist status in your eyes.

Here is a quick look at a simple Alexa response to read out the gift list:

[alexa_9: screenshot of the Alexa skill code that reads out the gift list]
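
A minimal alexa-app intent that reads the list back could look like the following sketch. The skill name, intent name, utterances, and table name are assumptions, not my actual code:

    var alexa = require('alexa-app');
    var AWS = require('aws-sdk');

    var app = new alexa.app('giftlist');
    var dynamo = new AWS.DynamoDB.DocumentClient();

    app.intent('ReadListIntent', {
        utterances: ['what is on my list', 'read my gift list']
      },
      function (request, response) {
        // Scan the table and read each gift back to the user.
        dynamo.scan({ TableName: 'GiftList' }, function (err, data) {
          if (err || !data.Items.length) {
            response.say('I could not find anything on your list.');
          } else {
            var gifts = data.Items.map(function (item) { return item.gift; });
            response.say('Your list has: ' + gifts.join(', '));
          }
          response.send();   // we responded asynchronously, so send it explicitly
        });
        return false;        // tell alexa-app the response will be sent later
      }
    );

    // Expose the skill as the Lambda handler.
    exports.handler = app.lambda();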

Summary

And now we have an end-to-end solution for working with gift lists. We built the beginnings of an API to manage the lists. Then we added a web frontend to allow users to add to a list. And then we added an Alexa skill to read the list back while both hands are on a hot pan. Is this overkill? Maybe. Could I have stuck with a pen and a scrap of paper? Well, I guess one could do that. But what kind of technologist would I be then?

About the Author

Matt Williams is the DevOps Evangelist at Datadog. He is passionate about the power of monitoring and metrics to make large-scale systems stable and manageable. So he tours the country speaking and writing about monitoring with Datadog. When he’s not on the road, he’s coding. You can find Matt on Twitter at @Technovangelist.

About the Editors

Benjamin Marsteau is a System administrator | Ops | Dad | and tries to give back to the community as much as it gives him.


Serverless everything: One-button serverless deployment pipeline for a serverless app

14. December 2016

Author: Soenke Ruempler
Editors: Ryan S. Brown

Update: Since AWS recently released CodeBuild, things got much simpler. Please also read my follow-up post AWS CodeBuild: The missing link for deployment pipelines in AWS.

Infrastructure as Code is the new default: with tools like Ansible, Terraform, CloudFormation, and others, it is getting more and more common, and a multitude of services and tools can be orchestrated with code. The main advantages of automation are reproducibility, fewer human errors, and exact documentation of the steps involved.

With infrastructure expressed as code, it's not a stretch to also want to codify deployment pipelines. Luckily, AWS has its own service for that, named CodePipeline, which in turn can be fully codified and automated with CloudFormation ("pipelines as code").

This article will show you how to create a deploy pipeline for a serverless app with a “one-button” CloudFormation template. The more concrete goals are:

  • Fully serverless: neither the pipeline nor the app itself involves any server, VM, or container setup/management (and yes, there are still servers, just not managed by us).
  • Demonstrate a fully automated deployment pipeline blueprint with AWS CodePipeline for a serverless app consisting of a sample backend powered by the Serverless framework and a sample frontend powered by “create-react-app”.
  • Provide a one-button quick start for creating deployment pipelines for serverless apps within minutes. Nothing should be run from a developer machine, not even an “inception script”.
  • Show that it is possible to lower complexity by leveraging AWS components so you don't need to configure/click through third-party providers (e.g. Travis CI/CircleCI) as pipeline steps.

We will start with a repository consisting of a typical small web application with a front end and a back end. The deployment pipeline described in this article makes some assumptions about the project layout (see the sample project):

  • a frontend/ folder with a package.json which will produce a build into build/ when npm run build is called by the pipeline.
  • a backend/ folder with a serverless.yml. The pipeline will call serverless deploy (the Serverless framework). It should have at least one http event so that the Serverless framework creates a service endpoint, which the frontend can then use to call the APIs.
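
In other words, the pipeline assumes a layout roughly like this (the file names beyond the two mentioned above are just for illustration):

    .
    ├── backend/
    │   ├── serverless.yml    # at least one function with an http event
    │   └── handler.js
    └── frontend/
        ├── package.json      # "npm run build" must emit the site into build/
        └── src/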

For a start, you can just clone or copy the sample project into your own GitHub account.

As soon as you have your project ready, we can continue to create a deployment pipeline with CloudFormation.

The actual CloudFormation template we will use here to create the deployment pipeline does not reside in the project repository. This allows us to develop and evolve the pipeline code and the projects using the pipeline independently of each other. It is published to an S3 bucket so we can build a one-click launch button. The launch button will direct users to the CloudFormation console with the URL to the template prefilled:

Launch Stack

After you click on the link (you need to be logged in to the AWS Console) and click "Next" to confirm that you want to use the predefined template, some CloudFormation stack parameters have to be specified:

CloudFormation stack parameters

First you need to specify the GitHub owner/repository of the project (the one you copied earlier), a branch (usually master), and a GitHub OAuth token as described in the CodePipeline documentation.

The other parameters specify where to find the Lambda function source code for the deployment steps; we can live with the defaults for now (stuff for another blog post). (Update: the Lambda functions became obsolete with the move to AWS CodeBuild, and so did the template parameters regarding the Lambda source code location.)

The next step of the CloudFormation stack setup allows you to specify advanced settings like tags, notifications, and so on. We can leave these as-is as well.

On the last page of the wizard you need to acknowledge that CloudFormation will create IAM roles on your behalf:

CloudFormation IAM confirmation

The IAM roles are needed to give the Lambda functions the permissions to run and to write logs to CloudWatch. Once you press the "Create" button, CloudFormation will create the following AWS resources:

  • An S3 Bucket containing the website assets with website hosting enabled.
  • A deployment pipeline (AWS CodePipeline) consisting of the following steps:
    • Checks out the source code from GitHub and saves it as an artifact.
    • Back end deployment: A Lambda function build step which takes the source artifact, installs and calls the Serverless framework.
    • Front end deployment: Another Lambda function build step which takes the source artifact, runs npm run build, and deploys the build to the website S3 bucket.
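
To make that structure a bit more concrete, the pipeline resource in such a template has roughly the following shape. This is heavily trimmed, and the resource, parameter, and function names are assumptions rather than the exact ones from my template:

    Pipeline:
      Type: AWS::CodePipeline::Pipeline
      Properties:
        RoleArn: !GetAtt PipelineRole.Arn
        ArtifactStore:
          Type: S3
          Location: !Ref ArtifactBucket
        Stages:
          - Name: Source
            Actions:
              - Name: GitHubSource
                ActionTypeId:
                  Category: Source
                  Owner: ThirdParty
                  Provider: GitHub
                  Version: '1'
                Configuration:
                  Owner: !Ref GitHubOwner
                  Repo: !Ref GitHubRepo
                  Branch: !Ref GitHubBranch
                  OAuthToken: !Ref GitHubOAuthToken
                OutputArtifacts:
                  - Name: SourceOutput
          - Name: Deploy
            Actions:
              - Name: DeployBackend      # Lambda invoke step (later replaced by CodeBuild)
                ActionTypeId:
                  Category: Invoke
                  Owner: AWS
                  Provider: Lambda
                  Version: '1'
                Configuration:
                  FunctionName: !Ref BackendDeployFunction
                InputArtifacts:
                  - Name: SourceOutput
              - Name: DeployFrontend     # Lambda invoke step (later replaced by CodeBuild)
                ActionTypeId:
                  Category: Invoke
                  Owner: AWS
                  Provider: Lambda
                  Version: '1'
                Configuration:
                  FunctionName: !Ref FrontendDeployFunction
                InputArtifacts:
                  - Name: SourceOutput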

(Update: in the meantime, I replaced the Lambda functions with AWS CodeBuild).

No servers harmed so far, and also no workstations: No error-prone installation steps in READMEs to be followed, no curl | sudo bash or other awkward setup instructions. Also no hardcoded AWS access key pairs anywhere!

A platform team in an organization could provide several of these types of templates for particular use cases, then development teams could get going just by clicking the link.

Ok, back to our example: Once the CloudFormation stack creation is fully finished, the created CodePipeline is going to run for the first time. On the AWS console:

CodePipeline running

As soon as the initial pipeline run is finished:

  • the back end CloudFormation stack has been created by the Serverless framework, depending on what you defined in the backend/serverless.yml configuration file.
  • the front end has been built and put into the website bucket.

To find out the URL of our website hosted on S3, open the CloudFormation stack and expand its Outputs section. The WebsiteUrl output shows the actual URL:

CloudFormation Stack output

Click on the URL link and view the website:

Deployed sample website

Voila! We are up and running!

As you might have seen in the picture above, there is some JSON output: it's actually the result of an HTTP call the front end made against the back end's hello function, which just echoes back the Lambda event object.

Let's dig a bit deeper into this detail, as it shows the integration of frontend and backend: to pass the ServiceEndpoint URL to the front end build step, the back end build step exports all CloudFormation outputs of the Serverless-created stack as a CodePipeline build artifact, which the front end build step in turn passes to npm run build (in our case via a React-specific environment variable). This is how the API call looks in React:
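
In the sample front end that boils down to a component along these lines. The REACT_APP_SERVICE_ENDPOINT variable name is an assumption; the only firm rule is that create-react-app only passes variables prefixed with REACT_APP_ through to the build:

    import React, { Component } from 'react';

    // Sketch: call the back end's hello function and render whatever comes back.
    class BackendDemo extends Component {
      constructor(props) {
        super(props);
        this.state = { backendResponse: null };
      }

      componentDidMount() {
        const endpoint = process.env.REACT_APP_SERVICE_ENDPOINT; // injected at build time
        fetch(`${endpoint}/hello`)
          .then(response => response.json())
          .then(json => this.setState({ backendResponse: json }))
          .catch(err => console.error('Backend call failed:', err));
      }

      render() {
        return <pre>{JSON.stringify(this.state.backendResponse, null, 2)}</pre>;
      }
    }

    export default BackendDemo;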

This cross-site request actually works because we turned CORS on in the serverless.yml:
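
The relevant bit of backend/serverless.yml looks something like this; the function and path names are assumptions, and the important line is cors: true:

    functions:
      hello:
        handler: handler.hello
        events:
          - http:
              path: hello
              method: get
              cors: true   # lets API Gateway answer the browser's preflight request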

Here is a high-level overview of the created CloudFormation stack:

Overview of the CloudFormation Stack

With the serverless pipeline and serverless project running, change something in your project, commit it and view the change propagated through the pipeline!

Additional thoughts:

I want to setup my own S3 bucket with my own CloudFormation templates/blueprints!

In case you don't trust me as a template provider, or you want to change the one-button CloudFormation template, you can of course host your own S3 bucket. Doing that is beyond the scope of this article, but you can start by looking at my CloudFormation template repo.

I want to have testing/staging in the pipeline!

The sample pipeline does not have any testing or staging steps. You can add more steps to the pipeline, e.g. another Lambda step that calls npm test on your source code.

I need a database/cache/whatever for my app!

No problem, just add additional resources to the serverless.yml configuration file.

Summary

In this blog post I demonstrated a CloudFormation template which bootstraps a serverless deployment pipeline with AWS CodePipeline. This enables rapid application development and deployment, as development teams can use the template in a “one-button” fashion for their projects.

We deployed a sample project with a front end and a back end through such a pipeline.

AWS gives us all the lego bricks we need to create such pipelines in an automated, codified and (almost) maintenance-free way.

Known issues / caveats

  • I am describing an experiment / working prototype here. Don't expect high-quality, battle-tested code (esp. the JavaScript parts 🙂 ). It's more about the architectural concept. Issues and pull requests to the mentioned projects are welcome 🙂 (Update: luckily I could delete all the JS code with the move to AWS CodeBuild)
  • All deployment steps currently run with full access roles (AdministratorAccess) which is a bad practice but it was not the focus of this article.
  • The website could also be backed by a CloudFront CDN with HTTPS and a custom domain.
  • Beware of the 5-minute execution limit in Lambda functions (e.g. more complex serverless.yml setups might take longer; this could be worked around by moving resource creation out into a CloudFormation pipeline step, which Michael Wittig has blogged about). (Update: this point became invalid with the move to AWS CodeBuild)
  • The build steps are currently not optimized, e.g. installing npm/serverless every time is not necessary. It could use an artifact from an earlier CodePipeline step (Update: this point became invalid with the move to AWS CodeBuild)
  • The CloudFormation stack created by the Serverless framework is currently suffixed with “dev”, because that’s their default environment. The prefix should be omitted or made configurable.

Acknowledgements

Special thanks goes to the folks at Stelligent.

First, for their open source work on serverless deploy pipelines with Lambda, especially the "dromedary-serverless" project. I adapted much of the Lambda code from it.

Second, for their "one-button" concept, which influenced this article a lot.

About the Author

Along with 18 years of web software development and web operations experience, Soenke Ruempler is an expert in AWS technologies (6 years of experience developing and operating on AWS) and in moving on-premise/legacy systems to the cloud without service interruptions.

His special interests and fields of knowledge are Cloud/AWS, Serverless architectures, Systems Thinking, Toyota Kata (Kaizen), Lean Software Development and Operations, High Performance/Reliability Organizations, Chaos Engineering.

You can find him on Twitter, Github and occasionally blogging on ruempler.eu.

About the Editors

Ryan Brown is a Sr. Software Engineer at Ansible (by Red Hat) and contributor to the Serverless Framework. He’s all about using the best tool for the job, and finds simplicity and automation are a winning combo for running in AWS.


Four ways AWS Lambda makes me happy

09. December 2016

Author: Tal Perry
Editors: Jyrki Puttonen, Bill Weiss

Intro

Side projects are my way of learning new technology. One that I've been anxious to try is AWS Lambda. In this article, I will focus on the things that make Lambda a great service in my opinion.

What is Lambda

For the uninitiated, Lambda is a service that allows you to essentially upload a function and AWS will make sure the hardware is there to run it. You pay for the compute time in hundred-millisecond increments instead of by the hour, and you can run as many copies of your Lambda function as needed.

You can think of Lambda as a natural extension to containers. Containers (like Docker) allow you to easily deploy multiple workloads to a fleet of servers. You no longer deploy to a server; you deploy to the fleet, and if there is enough room in the fleet, your container runs. Lambda takes this one step further by abstracting away the management of the underlying server fleet and containerization. You just upload code, and AWS containerizes it and puts it on their fleet.

Why did I choose Lambda?

My latest side project is SmartScribe, an automated transcription service. SmartScribe transcribes hours of audio in minutes, a feat which requires considerable memory and parallel processing of audio. While a fleet of containers could get the job done, I didn't want to manage a fleet or integrate it with other services, nor did I want to pay for peak capacity when my baseline usage was far lower. Lambda abstracts away these issues, which made it a very satisfying choice.

How AWS Lambda makes me happy

It’s very cheap

I love to invest my time in side projects; I get to create and learn. Perhaps irrationally, I don't like to put a lot of money into them from the get-go. When I start building a project I want it up all the time so that I can show it around. On the other hand, I know that 98% of the time my resources will not be used.

Serverless infrastructure saves me that 98% by allowing me to pay per hundred milliseconds of execution instead of by the hour. 98% is a lot of savings by any account.

I don’t have to think about servers

As I mentioned, I like to invest my time in side projects but I don’t like to invest it in maintaining or configuring infrastructure. A thousand little things can go wrong on your server and any one of those will bring your product to a halt. I’m more than happy to never think about another server again.

Here are a few things that have slowed me down before that Lambda has abstracted away:

  1. Having to reconfigure because I forgot to set the IP address of an instance to elastic and the address went away when I stopped it (to save money)
  2. Worrying about disk space. My processes write to the disk. Were I to use a traditional architecture, I'd have to worry about multiple concurrent processes consuming the entire disk, a subtle and aggravating bug. With Lambda, each function invocation is guaranteed its own (small) chunk of tmp space, which reduces my concern.
  3. Running out of memory. This is a fine point, because a single Lambda function can only use 1.5 GB of memory.

Two caveats:

  1. Applications that hold large data sets in memory might not benefit from Lambda. Applications that hold small to medium sized data sets in memory are prime candidates.
  2. 512MB of provisioned tmp space is a major bottleneck to writing larger files to disk.

SmartScribe works with fairly large media files, and we need to store them in memory with overhead. Even a few concurrent users could easily lead to problems with available memory, even with a swap file (and we hate configuring servers, so we don't want one). Lambda guarantees that every call to my endpoints will receive the requisite amount of memory. That's priceless.

I use Apex to deploy my functions, which happens in one line

Apex is smart enough to deploy only the functions that have changed. And in that one line, my changes, and only my changes, reach every "server" I have. Compare that to the time it takes to do a blue-green deployment or, heaven forbid, SSHing into your server and pulling the latest changes.
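
For reference, that one line is simply the framework's deploy command, something like:

    apex deploy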

But wait, there is more. Pardon last year's buzzword, but AWS Lambda induces, or at least encourages, a microservice architecture. Since each function exists as its own unit, testing becomes much easier and more isolated, which saves loads of time.

Tight integration with other AWS services

What makes microservices hard is the overhead of orchestration and communications between all of the services in your system. What makes Lambda so convenient is that it integrates with other AWS services, abstracting away that overhead.

Having AWS invoke my functions based on an event in S3 or SNS means that I don't have to create some channel of communication between these services, nor monitor that channel. I think this is what makes Lambda so convenient: the overhead you pay for a scalable, maintainable, and simple code base is virtually nullified.
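
As an illustration of how thin that glue is, an S3-triggered function simply receives the event already parsed. This is a sketch, not SmartScribe's actual code:

    // Lambda handler invoked by S3 whenever an object lands in the bucket.
    exports.handler = function (event, context, callback) {
      event.Records.forEach(function (record) {
        var bucket = record.s3.bucket.name;
        var key = decodeURIComponent(record.s3.object.key.replace(/\+/g, ' '));
        console.log('New upload to process:', bucket + '/' + key);
        // ...kick off the transcription work for this object here...
      });
      callback(null, 'done');
    };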

The punch line

One of the deep axioms of the world is "Good, Fast, Cheap: Choose Two." AWS Lambda takes a stab at challenging that axiom.

About the Author:

By day, Tal is a data science researcher at Citi's Innovation Lab Tel Aviv, focusing on NLP. By night he is the founder of SmartScribe, a fully serverless automated transcription service hosted on AWS. Previously Tal was CTO of Superfly, where he and his team leveraged AWS technologies and good DevOps practices to scale the data pipeline 1000x. Check out his projects and reach out on Twitter @thetalperry.

About the Editors:

Jyrki Puttonen is Chief Solutions Executive at Symbio Finland (@SymbioFinland) who tries to keep track of what happens in the cloud.

Bill Weiss is a senior manager at Puppet in the SRE group. Before his move to Portland to join Puppet, he spent six years in Chicago working for Backstop Solutions Group, prior to which he was in New Mexico working for the Department of Energy. He still loves him some hardware, but is accepting that AWS is pretty rad for some things.