Hacking together an Alexa skill

24. December 2018

Alexa is an Amazon technology that allows users to build voice-driven applications. Amazon takes care of converting voice input to text and vice versa, provisioning the software to devices, and calling into your business logic. You use the Amazon interface to build your model and provide the logic that executes based on the text input. The combination of the model and the business logic is an Alexa skill. Alexa skills run on a variety of devices, most prominently the Amazon Echo.

I built an Alexa skill as a proof of concept for a hackathon; I had approximately six hours to build something to demo. My goals for the proof of concept were to:

  • Run without error in the simulator
  • Pull data from a remote service using an AWS lambda function
  • Take input from a user
  • Respond to that input

I was working with horse racing data because it is timely and because I had access to an API that provided interesting data. Horse races happen at a specific track on a specific date and time. Each race has a number of horses that compete.

The flow of my Alexa skill was:

  • Notify Alexa to call into the custom skill using a phrase.
  • Prompt the user to choose one of the N tracks I provided.
  • Store the chosen track name in the session for future operations.
  • Prompt the user to choose between two sets of information that might be of use: the number of races today or the date of the next featured race.
  • Return the requested information.
  • Exit the skill, which meant that Alexa was no longer listening for voice input.

The barriers to entry for creating a proof of concept are low. If you can write a python script and navigate around the AWS console, you can write an Alexa skill. However, there are other considerations that I didn’t have to work through because this was a hackathon project; tasks such as UX, testing, and deployment to a device would be crucial to any production project.

Jargon and Terminology

Like any other technology, Alexa has its own terminology. And there’s a lot of it.

A skill is a package of a model to convert voice to text and vice versa as well as business logic to execute against the text input. A skill is accessed by a phrase the user says, like “listen to NPR” or “talk horse racing.” This phrase is an “invocation.” The business logic is written by you, but the Alexa service handles the voice to text and text to voice conversion.

A skill is made up of one or more intents. An intent is one branch of logic and is also identified by a phrase, called an utterance. While invocations need to be globally unique, utterances only trigger after a skill is invoked, so the phrasing can overlap between different skills. Example intent utterances would be “please play ‘Fresh Air’” or “my favorite track is Arapahoe Park.” You can also use a prepackaged intent, such as one that returns search results, and tie that to an utterance. Utterances are also called samples.

Slots are placeholders within utterances. If the intent phrase is “please play ‘Fresh Air’”, you can parameterize the words ‘Fresh Air’ and have that value converted to text and delivered to your business logic. A slot is basically a multiple-choice selection: you provide N values, and the matched text is delivered to you. Each slot has a defined data type. It was unclear to me what happens when a slot is filled with a value that is not one of your N values. (I didn’t get a chance to test that.)
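
The slot value shows up in the intent portion of the request that reaches your business logic. A minimal sketch of reading it defensively, assuming a slot named TrackName as in the model shown later:

def get_track_from_intent(intent):
    # Slot values arrive under intent['slots'][<SlotName>]['value'].
    # 'value' can be absent if Alexa could not match what the user said,
    # so fall back to None instead of raising a KeyError.
    return intent.get('slots', {}).get('TrackName', {}).get('value')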

A session is a place for your business logic to store temporary data between invocations of intents. A session is tied to both an application and a user (more info here). Sessions last about as long as the user is interacting with your application; depending on application settings, that will be about 30 seconds. If you need to store data for longer, connect your business logic to a durable storage solution like S3 or DynamoDB.
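
Concretely, session data rides along in the sessionAttributes field of each response you return and comes back on the next request under session['attributes']. A minimal sketch of that round trip, using a hypothetical favoriteTrack key (the tutorial-derived code below reuses the sample’s favoriteColor key instead):

def remember_track(track_name):
    # Return this dict as the sessionAttributes of your response; Alexa
    # echoes it back to you on the next intent request in this session.
    return {'favoriteTrack': track_name}

def recall_track(session):
    # Read the value back on a later request. It disappears when the
    # session ends, so anything long-lived belongs in S3 or DynamoDB.
    return session.get('attributes', {}).get('favoriteTrack')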

Getting started

To get started, I’d suggest using this tutorial as a foundation. If you are looking for a different flow, look at this set of tutorials and see if any of them match your problem space. All of them walk you through creating an Alexa skill using a python lambda function. It’s worth noting that you’ll have to sign up with an Amazon account for access to the Alexa console (it’s a different authentication system than AWS IAM). I’d also start out using a lambda function to eliminate a possible complication. If you use lambda, you don’t have to worry about making sure Alexa can access your https endpoint (the other place your business logic can reside).

Once you have the tutorial working in the Alexa console, you can start extending the main components of the Alexa skill: the model or the business logic.

Components

You configure the model using the Alexa console and the Alexa skills kit, or via the CLI or the skills kit API. In either case, you end up with a JSON configuration file containing information about the invocation phrase, the intents, and the slots. When using the console, you can also trigger a model build and test your model in the browser, as long as you have a microphone.

Here are selected portions of the JSON configuration file for the Alexa skill I created. You can see this was a proof of concept as I didn’t delete the color scheme from the tutorial and only added two tracks that the user can select as their favorite.

{
  "interactionModel": {
    "languageModel": {
      "invocationName": "talk horse racing",
      "intents": [
        {
          "name": "MyColorIsIntent",
          "slots": [
            {
              "name": "TrackName",
              "type": "TrackNameType"
            }
          ],
          "samples": [
            "my favorite track is {TrackName}"
          ]
        },
        {
          "name": "AMAZON.HelpIntent",
          "samples": []
        },
        {
          "name": "HowManyRaces",
          "slots": [],
          "samples": [
            "how many races"
          ]
        },
        {
          "name": "NextStakesRace",
          "slots": [],
          "samples": [
            "when is the stakes race",
            "when is the next stakes race"
          ]
        }
      ],
      "types": [
        {
          "name": "LIST_OF_COLORS",
          "values": [
            {}
          ]
        },
        {
          "name": "TrackNameType",
          "values": [
            {
              "name": {
                "value": "Arapahoe Park"
              }
            },
            {
              "name": {
                "value": "Tampa Bay Downs"
              }
            }
          ]
        }
      ]
    }
  }
}

The other component of the system is the business logic. This can be either an AWS Lambda function, written in any language supported by that service, or a service that responds to an HTTPS request; the latter can be useful for leveraging existing code or data that isn’t in AWS. If you use Lambda, you can deploy the skill just like any other Lambda function, which means you can leverage whatever lifecycle, frameworks, or testing solutions you use for other Lambda functions. Using a non-Lambda solution requires a bit more work when processing a request, but it can be done.

The business logic I wrote for this was basically hacked tutorial code. The first section is the lambda handler. Below is a relevant snippet where we examine the event passed to the lambda function by the Alexa system and call the appropriate business method.

def lambda_handler(event, context):
    if event['session']['new']:
        on_session_started({'requestId': event['request']['requestId']},
                           event['session'])

    if event['request']['type'] == "LaunchRequest":
        return on_launch(event['request'], event['session'])
    elif event['request']['type'] == "IntentRequest":
        return on_intent(event['request'], event['session'])
    elif event['request']['type'] == "SessionEndedRequest":
        return on_session_ended(event['request'], event['session'])

...

on_intent is the logic dispatcher which retrieves the intent name and then calls the appropriate internal function.

def on_intent(intent_request, session):
    """ Called when the user specifies an intent for this skill """
    print("on_intent requestId=" + intent_request['requestId'] +
          ", sessionId=" + session['sessionId'])

    intent = intent_request['intent']
    intent_name = intent_request['intent']['name']

    if intent_name == "MyColorIsIntent":
        return set_color_in_session(intent, session)
...
    elif intent_name == "HowManyRaces":
        return get_how_many_races(intent, session)
...

Each business logic function can be independent and could call into different services if need be.

def get_how_many_races(intent, session):
    session_attributes = {}
    reprompt_text = None
    # Setting reprompt_text to None signifies that we do not want to reprompt
    # the user. If the user does not respond or says something that is not
    # understood, the session will end.

    if session.get('attributes', {}) and "favoriteColor" in session.get('attributes', {}):
        favorite_track = session['attributes']['favoriteColor']
        speech_output = "There are " + get_number_races(favorite_track) + " races at " + favorite_track + " today. Thank you, good bye."
        should_end_session = True
    else:
        speech_output = "Please tell me your favorite track by saying, " \
                        "my favorite track is Arapahoe Park"
        should_end_session = False

    return build_response(session_attributes, build_speechlet_response(
        intent['name'], speech_output, reprompt_text, should_end_session))
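
get_number_races is the call out to the remote racing data service and isn’t shown in the tutorial-derived code. A minimal sketch of what it could look like, assuming a hypothetical HTTPS endpoint that returns JSON containing a races list; the real API call is left out:

import json
import urllib.parse
import urllib.request

def get_number_races(track_name):
    # Hypothetical endpoint and response shape, for illustration only.
    url = "https://example.com/api/races?track=" + urllib.parse.quote(track_name)
    with urllib.request.urlopen(url, timeout=5) as response:
        data = json.loads(response.read().decode("utf-8"))
    # speech_output is built by string concatenation, so return a string.
    return str(len(data.get("races", [])))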

build_response is taken directly from the sample code and creates a correctly formatted response. This response is interpreted by Alexa and converted into speech.

def build_response(session_attributes, speechlet_response):
    return {
        'version': '1.0',
        'sessionAttributes': session_attributes,
        'response': speechlet_response
    }
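
Its companion, build_speechlet_response, is also from the sample code; it wraps the output text, an optional reprompt, and the end-of-session flag into the structure Alexa expects. Roughly, treating this as a sketch of the sample rather than a verbatim copy:

def build_speechlet_response(title, output, reprompt_text, should_end_session):
    return {
        'outputSpeech': {
            'type': 'PlainText',
            'text': output
        },
        'card': {
            'type': 'Simple',
            'title': title,
            'content': output
        },
        'reprompt': {
            'outputSpeech': {
                'type': 'PlainText',
                'text': reprompt_text
            }
        },
        'shouldEndSession': should_end_session
    }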

On the firm foundation of the tutorial, you can easily add more slots and intents and change the invocation phrase. You can also build out additional business logic to respond to the additional voice input.

Testing

I tested my skill manually using the built-in simulator in the Alexa console. I tried other simulators, but they were not effective. At the bottom of the python tutorial mentioned above, there is a reference to echosim.io, which is an online Alexa skill simulator; I couldn’t seem to make it work.

After each model change (new or modified utterances, intents, or invocations) you will need to rebuild the model, which takes approximately 30-90 seconds depending on its complexity. Changing the business logic does not require rebuilding the model, and you can use that to iterate more quickly.
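
For example, one way to push updated business logic without touching the model is a code-only Lambda update; a sketch with boto3, using a hypothetical function name and zip file (pasting code into the Lambda console works too):

import boto3

lambda_client = boto3.client("lambda")

# "function.zip" holds the updated handler code; the function name is made up.
with open("function.zip", "rb") as archive:
    lambda_client.update_function_code(
        FunctionName="horse-racing-skill",
        ZipFile=archive.read(),
    )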

I did not investigate automated testing. If I were building a production Alexa skill, I’d add a small layer of indirection so that the business logic could be easily unit tested apart from any dependencies on Alexa objects. I’d also plan to build a CI/CD pipeline so that changes to the model or the lambda function could be deployed automatically, something like what is outlined here.
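
For instance, because the handlers above take plain dictionaries, the branching logic can be unit tested without any Alexa machinery. A minimal pytest sketch, assuming the handler lives in a module named lambda_function, that get_number_races is defined there (so it can be stubbed), and that the response structure matches the sample-style helpers shown earlier:

import lambda_function

def test_how_many_races_with_known_track(monkeypatch):
    # Stub the remote racing API call so the test stays fast and offline.
    monkeypatch.setattr(lambda_function, "get_number_races", lambda track: "3")

    intent = {"name": "HowManyRaces", "slots": {}}
    session = {"sessionId": "test", "attributes": {"favoriteColor": "Arapahoe Park"}}

    response = lambda_function.get_how_many_races(intent, session)

    speech = response["response"]["outputSpeech"]["text"]
    assert "3 races at Arapahoe Park" in speech
    assert response["response"]["shouldEndSession"] is True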

User Experience (UX)

Voice UX is very different from the UX of desktop or a mobile device. Because information transmission is slow, it’s even more important to think about voice UX for an Alexa skill than it would be if you were building a more traditional web-based app. If you are building a skill for any other purpose than exploration or proof of concept, make sure to devote some time to learning about voice UX. This webinar appears useful.

Some lessons I learned:

  • Don’t go too deep with navigation levels. With Alexa, you can provide choice after choice for the user, but remember the last time you dealt with an interactive phone voice recognition system. Did you like it? Keep interactions short.
  • Repeat back what Alexa “heard” as this gives the user another chance to correct course.
  • Offer a help option. If I were building a production app, I’d want to get some kind of statistics on how often the help option was invoked to see if there was an oversight on my part.
  • Think about error handling using reprompts. If the skill hasn’t received input, it can reprompt and possibly get more user input.
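
To illustrate the last two points, a help or fallback response just supplies a reprompt phrase and leaves the session open. A minimal sketch reusing the tutorial-style helpers shown earlier:

def build_help_response():
    # If the user says nothing, Alexa speaks the reprompt text and keeps
    # listening rather than silently ending the session.
    speech_output = "You can ask how many races there are today, " \
                    "or when the next stakes race is."
    reprompt_text = "Try saying: how many races."
    return build_response({}, build_speechlet_response(
        "Help", speech_output, reprompt_text, False))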

After the simulator

A lot of testing and development can take place in the Amazon Alexa simulator. However, at some point, you’ll need to deploy to a device. Since this was a proof of concept, I didn’t do that, but there is documentation on that process here.

Conclusion

This custom Alexa skill was the result of a six-hour exploration during a company hackfest. At the end of the day, I had a demo I could run on the Alexa Simulator. Alexa is mainstream enough that it makes sense for anyone who works with timely, textual information to evaluate building a skill, especially since a prototype can be built relatively quickly. For instance, it seems to me that a newspaper should have an Alexa skill, but it doesn’t make as much sense for an e-commerce store (unless you have certain timely information and a broad audience) because complex navigation is problematic. Given the low barrier to entry, Alexa skills are worth exploring as this new UX for interacting with computers becomes more prevalent.

About the Author

Dan Moore is director of engineering at Culture Foundry. He is a developer with two decades of experience, former AWS trainer, and author of “Introduction to Amazon Machine Learning,” a video course from O’Reilly. He blogs at http://www.mooreds.com/wordpress/ . You can find him on Twitter at @mooreds.

About the Editors

Ed Anderson is the SRE Manager at RealSelf, organizer of ServerlessDays Seattle, and occasional public speaker. Find him on Twitter at @edyesed.

Jennifer Davis is a Senior Cloud Advocate at Microsoft. Jennifer is the coauthor of Effective DevOps. Previously, she was a principal site reliability engineer at RealSelf, developed cookbooks to simplify building and managing infrastructure at Chef, and built reliable service platforms at Yahoo. She is a core organizer of devopsdays and organizes the Silicon Valley event. She is the founder of CoffeeOps. She has spoken and written about DevOps, Operations, Monitoring, and Automation.


2018’s 10 most popular AWS products according to Jefferson Frank

Earlier this year, Jefferson Frank released its first ever report into salaries, benefits, and working trends in the AWS ecosystem.

Featuring self-reported opinions and input from more than 500 AWS professionals, the annual AWS Salary Survey report uses over 47,000 data points to determine average salaries for a number of job roles and seniority levels across four countries.

We hope this survey will be a useful tool to help AWS professionals benchmark their salaries and get the latest information on industry trends for many years to come; we’d love to hear your thoughts for our next edition, so keep an eye out for future surveys.

As part of the survey, we asked AWS professionals to give us their take on the good, the bad, and the ugly of the AWS product catalog.

We’re going to take a look at the top ten most-used AWS services as reported by our survey respondents, find out why they’re so popular with cloud pros, and hear what can be done to improve them, straight from the AWS community.

Amazon EC2

The most popular AWS product among cloud professionals was Elastic Compute Cloud (EC2). EC2 has become a core part of many AWS users’ infrastructure, offering raw computing resources on demand.

EC2’s benefits include instances that are scalable both horizontally and vertically, an enormous amount of freedom, and pay-as-you-use pricing that makes it accessible.

The service is continuing to evolve with a number of new features for EC2 launched in the past year, including the ability to pause and resume workloads without having to modify existing applications.

Last month, ahead of this year’s re:Invent conference, a powerful new predictive scaling upgrade was announced. The feature can be added to existing scaling configurations using a checkbox, and uses machine learning on historical traffic data to forecast load and provision capacity ahead of demand.

According to the survey, the majority of observations about EC2 were positive, with users praising the product’s scalability, quick deployment times, and cost-cutting potential. The service’s main benefits, users stated, are its elasticity and the ease and speed with which it can be deployed.

It’s certainly not a perfect platform, though, with some respondents criticizing EC2’s lack of flexibility when it comes to cross-region networking, and the fact that it’s not as granular as EKS. Some users also commented that, although in theory EC2’s pricing structure should make costs more manageable, running certain types of workload with EC2 can be expensive.

Amazon EC2 Auto Scaling

A useful tool for Amazon EC2 users, Auto Scaling allows users to automatically scale capacity based on pre-defined conditions. Users praised the service for providing high availability, less downtime, and the ability to scale instances up and down at speed.

Another key advantage of using EC2 Auto Scaling, according to survey participants, is its ability to improve fault tolerance. Users can create a group with Auto Scaling to autonomously kill off unhealthy instances and launch new ones in their place.
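
That fault tolerance is mostly configuration. As a rough illustration (ours, not a survey respondent’s), creating such a group with boto3 might look like this, assuming a launch template and subnets already exist:

import boto3

autoscaling = boto3.client("autoscaling")

# Hypothetical names; substitute your own launch template and subnet IDs.
autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="web-tier",
    LaunchTemplate={"LaunchTemplateName": "web-template", "Version": "$Latest"},
    MinSize=2,
    MaxSize=6,
    HealthCheckType="ELB",  # replace instances the load balancer marks unhealthy
    HealthCheckGracePeriod=300,
    VPCZoneIdentifier="subnet-aaaa,subnet-bbbb",
)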

Amazon Elastic Block Store (EBS)

Another great sidekick to EC2, Amazon Elastic Block Store provides persistent, block-level storage volumes for AWS instances, with high availability, encryption, and a number of volume options.

The majority of respondents using Amazon EBS noted the product’s ease of set-up, storage flexibility (thanks to the option to back volumes with solid state drives or hard disk drives), and straightforward storage management.

The inability to reduce volume size, limited configuration options, and sluggish deployment were a few of the drawbacks cited by users.

Amazon Simple Storage Service 

AWS’s object storage service, Amazon Simple Storage Service (S3) is a core tool for backup and data archiving on the AWS platform. With various storage classes available depending on how often, and how quickly, data needs to be accessed, cloud professionals lauded S3 for its low costs, in addition to its ease of use, reliability, and powerful access and retention management.

Limited file system support, a cumbersome interface that can make essential file management difficult, and nightmarish cross-account access were the key aspects said to be holding the service back.

Amazon CloudWatch

Monitoring your cloud infrastructure is essential to keeping things working as they should. Amazon CloudWatch allows individuals, regardless of their role, to monitor their infrastructure.

Users found CloudWatch especially useful for allowing simple, non-interventionist management of security procedures, and enabling full automation of security best practices.

Amazon Relational Database Service

Amazon Relational Database Service (RDS) is AWS’s managed service for building, managing, and scaling relational databases, with support for a range of popular database engines.

Survey respondents touted RDS as an industry-standard service that offers a huge range of engines, is amazingly easy to set up, and is far easier to manage than MySQL on EC2.

All these great features obviously come with a price tag, however, and many users also commented that RDS can be expensive, particularly in comparison to other AWS database services.

AWS Lambda

An on-demand serverless compute tool, AWS Lambda lets users run and scale backend code automatically, without the need for dedicated EC2 servers.

Operated on a pay-per-run basis, Lambda was praised by AWS professionals for its manageable costs and for how fast and convenient the service is to use, especially since there is no infrastructure to manage. The expansive choice of languages and its dependability were also extolled.

Like the Mary Poppins of AWS, Lambda was called “excellent in almost every way” by survey respondents, indicating its extensive handiness across many use cases. It’s not ideal for cases where an always-on service or low latency is required, however. Users also pointed out that the Cloud9 Integrated Development Environment (IDE) for Lambda is not stable enough for enterprise usage.

Amazon Simple Notification Service

As its name implies, Amazon Simple Notification Service (SNS) is a simple service that allows AWS users to deliver push notifications, email, and SMS messages. Survey respondents commended the service as easy to use and highly scalable.
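
As a rough illustration of that simplicity (ours, not a survey respondent’s), publishing to a topic with boto3 is a single call, assuming the topic and its subscribers already exist:

import boto3

sns = boto3.client("sns")

# Hypothetical topic ARN; every subscriber attached to the topic
# (email, SMS, HTTP endpoint, Lambda, ...) receives the message.
sns.publish(
    TopicArn="arn:aws:sns:us-east-1:123456789012:deploy-alerts",
    Subject="Deployment finished",
    Message="The latest release is live.",
)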

AWS CloudFormation

AWS CloudFormation helps users model and deploy their AWS resources in a more efficient way, meaning less time needs to be spent on resource management.

Users applauded CloudFormation as an easy way to manage infrastructure, enabling more time to be dedicated to applications by facilitating automation and repeatability. Though the service is extremely fast and useful for multi-account setups, some cloud pros complained about the lack of support for newly launched AWS services. Issues were also raised around the product’s limitation to AWS components and lack of multi-cloud options, plus the fact that not all parameters can be set automatically.
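
To make the repeatability point concrete (our illustration, not the survey’s), a template can be a few lines of JSON and deployed with a single API call; a sketch with boto3, using a deliberately tiny template and a hypothetical stack name:

import json
import boto3

# The smallest useful template: one S3 bucket, nothing else.
template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Resources": {
        "ReportBucket": {"Type": "AWS::S3::Bucket"}
    },
}

cloudformation = boto3.client("cloudformation")
cloudformation.create_stack(
    StackName="survey-demo-stack",
    TemplateBody=json.dumps(template),
)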

Elastic Load Balancing 

A load-balancing service for AWS instances, Amazon Elastic Load Balancing (ELB) distributes incoming app traffic and automatically scales available resources in order to meet varying traffic demands. Its application load balancing was commended as one of the service’s best features, along with impressive path routing. Drawbacks, however, included complexities with multi-region load balancing, and a lack of external monitoring.

About the Author

Sam Samarasekera is a Business Manager at Jefferson Frank, the global experts in AWS recruitment. With almost five years of experience, Sam specializes in finding great jobs for contractors in the Amazon Web Services space. Over the course of his professional career, Sam has developed an in-depth understanding of the cloud computing market and the various tech staffing challenges faced by businesses around the world. Using his expertise and industry knowledge, Sam has played an integral role in building Jefferson Frank into the recruitment agency of choice for AWS.

About the Editor

Jennifer Davis is a Senior Cloud Advocate at Microsoft. Jennifer is the coauthor of Effective DevOps. Previously, she was a principal site reliability engineer at RealSelf, developed cookbooks to simplify building and managing infrastructure at Chef, and built reliable service platforms at Yahoo. She is a core organizer of devopsdays and organizes the Silicon Valley event. She is the founder of CoffeeOps.