IAM Policies, Roles and Profiles and how to keep secrets away from your instances

05. December 2016

Author: Mark Harrison
Editors: Jyrki Puttonen

AWS Identity and Access Management (IAM) is Amazon’s service for controlling access to AWS resources, or more simply, it provides a way for you to decide who has access to what in AWS. This simple description, however, hides the depth and complexity of what is probably one of the most misunderstood of Amazon’s services.

Many of you will have made use of IAM in order to create multiple users in AWS rather than sharing a single root user, but there are many more ways IAM can be useful to you. This article will be focusing on one use of IAM in particular: instance roles. Instance roles allow you to give AWS access to EC2 instances without them needing to store an AWS API key. I’ll be taking you through how to set them up, how to use them with your applications, and some of the things instance roles are useful for.

Throughout this article, I’ll be using terraform to create instances, roles and policies. However, the principles will apply if you use a different provisioning tool or if you use the API directly.

An example

We’re going to start off with a simple terraform configuration that creates a single micro instance in EC2. Here I’ve created a blank directory and a file inside called infrastructure.tf:
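
A minimal configuration along these lines is enough to start with (the region and AMI ID are placeholders, and the resource name app is just what I've picked for this example):

    provider "aws" {
      region = "us-east-1"              # pick whichever region you normally use
    }

    resource "aws_instance" "app" {
      ami           = "ami-xxxxxxxx"    # any base AMI you like, e.g. a stock Ubuntu image
      instance_type = "t2.micro"
    }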

When I run terraform apply, terraform creates a running EC2 instance based on the configuration in my infrastructure.tf file. This will be the starting point for us to add IAM roles/policies to.

Let’s say we are writing an application and want to provide access to an S3 bucket. One way would be simply to copy your AWS API keys into the configuration file for your application, but this would give your application full access to your AWS account just as if you had logged in yourself. A better option would be to make a new IAM user, give them just the permissions needed to access the S3 bucket, and create API keys for that user. However, you still have to store the API keys in the application’s configuration file, along with all the hassles of managing secrets that entails.

Instead, what we’re going to do is create a role that allows access to the S3 bucket, and assign it to the instance. First, we’re going to make the S3 bucket:
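
Something like this will do (bucket names are globally unique, so if you're following along you'll need to pick a name other than myawsadventapp):

    resource "aws_s3_bucket" "myawsadventapp" {
      bucket = "myawsadventapp"
    }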

Then, we’re going to create an AWS IAM policy that grants access to the bucket. A policy is simply a JSON document that lists permissions to things in AWS:
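
A sketch of what that looks like in terraform (I've called both the policy and the role s3_access; the role itself is defined a little further down, and terraform is happy with the forward reference):

    resource "aws_iam_role_policy" "s3_access" {
      name = "s3_access"
      role = "${aws_iam_role.s3_access.id}"

      policy = <<EOF
    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Effect": "Allow",
          "Action": [
            "s3:GetObject"
          ],
          "Resource": [
            "arn:aws:s3:::myawsadventapp/*"
          ]
        }
      ]
    }
    EOF
    }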

The actual policy document is the JSON between the <<EOF and EOF markers in the resource above.

There’s quite a bit going on here, but the important parts are the Action and Resource sections. The Action section says what you can do, and in this case we’re saying you can get objects from S3 (in other words, we’re providing read only access to something in S3). The Resource section specifies what you can do it with, and in this case we say you can get S3 objects from anywhere inside the myawsadventapp bucket. If we wanted to provide write access to the bucket we would add another action, s3:PutObject, to the list of actions we allow. We can also change the name of the S3 bucket as needed to provide access to other buckets.

Now that we have the Policy set up to allow access to S3, we need to actually give that set of permissions to the instance itself. To do this, we make a role:
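
A role definition along these lines does the job (again, the s3_access name is just what I've chosen for this example):

    resource "aws_iam_role" "s3_access" {
      name = "s3_access"

      assume_role_policy = <<EOF
    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Effect": "Allow",
          "Principal": {
            "Service": "ec2.amazonaws.com"
          },
          "Action": "sts:AssumeRole"
        }
      ]
    }
    EOF
    }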

The first part of this is pretty straightforward: we give the role a name. But why is there another Policy JSON document there? This assume role policy specifies who, or what, can become the role. In this case, the policy is just stating that EC2 instances can have the role assigned to them. Generally, when making instance roles, you don’t need to change this.

The policy is already linked to the role (we added a role = section when making the policy). All that remains is to link the role with our instance.

If you were using the AWS web console to make a new instance, assigning a role to it is easy: you just pick the role from the list of roles in the instance details section as you make the instance. However, if you are using terraform, the AWS CLI tools, or some other provisioning tool, then there is one more link in the chain: Instance Profiles.

Instance profiles are simply containers for roles that can be attached directly to instances, and can be thought of as simply an implementation detail. Whenever you make a role, make a matching profile, and then attach the profile to the instance. Here’s the profile to match the role we just created:
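
Something like this (the roles argument shown here matches the terraform of the time; newer versions of the AWS provider take a singular role argument instead):

    resource "aws_iam_instance_profile" "s3_access" {
      name  = "s3_access"
      roles = ["${aws_iam_role.s3_access.name}"]
    }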

Notice how the name of the profile is the same as the name of the role. This is how it works with the AWS web console: AWS creates a profile with the same name as the role behind the scenes. Keeping the name the same makes things easier, and once you have done this you can then completely forget that profiles exist.

Finally, now that the profile has been created, we just edit the instance and assign the profile to it:
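
The instance resource from the start of the article gains one extra line (the AMI ID is still a placeholder):

    resource "aws_instance" "app" {
      ami                  = "ami-xxxxxxxx"    # same placeholder AMI as before
      instance_type        = "t2.micro"
      iam_instance_profile = "${aws_iam_instance_profile.s3_access.name}"
    }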

And now, with all of the required configuration made, we can go ahead and make the instance:
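
As before, this is just a case of running terraform (plan first if you want to check what will change):

    $ terraform plan
    $ terraform apply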

There is one thing to be aware of: an instance profile can’t be changed after an instance has been created, so if you were following along and created the instance earlier without adding the instance profile then you have to recreate the instance from scratch. With this toy instance that’s not a problem, but it may be if you’re adding this to existing infrastructure.

Accessing API keys from the instance

Once the terraform run is complete, we can ssh into the instance and see that the instance profile has been applied:
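
One way to check is to ask the EC2 instance metadata service (the output values shown here are illustrative placeholders):

    $ curl http://169.254.169.254/latest/meta-data/iam/info
    {
      "Code" : "Success",
      "LastUpdated" : "2016-12-05T12:00:00Z",
      "InstanceProfileArn" : "arn:aws:iam::123456789012:instance-profile/s3_access",
      "InstanceProfileId" : "AIPAEXAMPLEEXAMPLE"
    }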

And if we run a slightly different curl command, we can obtain AWS API keys:
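
The role name goes on the end of the URL; the values in the response are redacted here:

    $ curl http://169.254.169.254/latest/meta-data/iam/security-credentials/s3_access
    {
      "Code" : "Success",
      "LastUpdated" : "...",
      "Type" : "AWS-HMAC",
      "AccessKeyId" : "ASIA...",
      "SecretAccessKey" : "...",
      "Token" : "...",
      "Expiration" : "..."
    }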

Your application can simply look up the keys when it wants to use an AWS API and doesn’t need to store them in a config file or elsewhere. Note that the credentials include an expiration time: the keys are rotated approximately every 6 hours, and you will need to look them up again after that.

To make life easier, most AWS libraries and commands already support instance roles as a method of getting credentials, and will automatically use any credentials that are available without any further configuration. For example, you can just use the aws cli without needing to configure your credentials:
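
For example, fetching an object from our bucket (the object name here is hypothetical, and because our policy only grants s3:GetObject, listing the bucket would additionally need s3:ListBucket):

    $ aws s3 cp s3://myawsadventapp/config.yml . --region us-east-1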

Some things you can do with IAM roles and instance profiles

So far we’ve shown an example of giving instances access to a particular S3 bucket. This is great, but there are some other uses for instance roles:

One good use case is managing EBS volumes. Say you have an autoscaling group (because AWS instances break, and autoscaling groups allow AWS to launch replacements for broken instances), but some state stored on those instances needs to survive each time an instance is recreated. The way to deal with this is to store the stateful data on EBS volumes, and use a script that runs on boot to attach any EBS volume that isn't currently in use.
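
A boot script like that needs permission to find and attach volumes, so the role's policy would contain something along these lines (the exact actions depend on what the script does; these two cover describing and attaching volumes):

    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Effect": "Allow",
          "Action": [
            "ec2:DescribeVolumes",
            "ec2:AttachVolume"
          ],
          "Resource": "*"
        }
      ]
    }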

Another case where having IAM roles is really handy: if you install Grafana on an AWS instance, the CloudWatch data source supports using IAM roles, and so you can use Grafana to view CloudWatch graphs for your AWS account without needing to set up credentials. To do this, use an IAM policy along the following lines:
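
This is roughly what Grafana needs for basic CloudWatch graphing (template and variable queries against EC2 may need additional ec2:Describe* permissions on top of this):

    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Effect": "Allow",
          "Action": [
            "cloudwatch:ListMetrics",
            "cloudwatch:GetMetricStatistics"
          ],
          "Resource": "*"
        }
      ]
    }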

Finally, a special case of the S3 access policy above is to use the S3 bucket to store secrets. This uses S3 as a trusted store, and you use IAM profiles to determine which instances get access to the secrets. This is the basis of the citadel cookbook for Chef that can be used to manage secrets in AWS.

More information

Hopefully this article has given you a taste for IAM roles and instance profiles and how they can make your life much easier when interacting with the AWS API from EC2 instances. If you want more information on using IAM roles, the AWS Documentation on IAM Roles goes into much more detail and is well worth a read.

About the Author

Mark Harrison is a Systems Administrator on the Chef operations team, where he is responsible for the care and feeding of Hosted Chef as well as maintaining several of Chef’s internal systems. Before coming to Chef, Mark led the operations team at OmniTI, helping clients scale their web architectures and supporting some of the largest infrastructures in the world.

About the Editors

Jyrki Puttonen is Chief Solutions Executive at Symbio Finland (@SymbioFinland), who tries to keep track of what happens in the cloud.