Adoption of the cloud is becoming more and more popular for all types of businesses. When you’re starting out, you have a blank canvas to work from – there’s no existing blueprint or guide. But what if you’re not in that position? What if you’ve already got an established security policy in place, or you’re working in a regulated industry that sets limits on what’s appropriate or acceptable for your company’s IT infrastructure?
Being able to leverage the elasticity of the public cloud is one of its biggest – if not the biggest – advantages over a traditional corporate IT environment. Building a private cloud takes time, money and significant up-front investment – investment that might not be acceptable to your organisation, or might never generate returns.
But what if we can build a “private” cloud using public cloud services?
The Virtual Private Cloud
If you’ve looked at AWS, you’ll be familiar with the concept of a “VPC” – a “Virtual Private Cloud” – the first resource you’ll create in your AWS account (unless you use the default VPC created in every region when your account is opened, that is!). It’s private in the sense that it’s your little bubble, to do with as you please. You control it, nurture it and manage it (hopefully with automation tools!). But private doesn’t mean isolated, and this alone doesn’t meet the definition of a “private cloud.”
If you misconfigure your AWS environment, you can accidentally expose your environment to the public Internet, and an intruder may be able to use this as a stepping-stone into the rest of your network.
In this article, we’re going to look at the building blocks of your own “private” cloud in the AWS environment. We’ll cover isolating your VPC from the public internet, controlling what data enters and, crucially, leaves your cloud, as well as ensuring that your users can get the best out of their new shiny cloud.
Connecting to your “private” Cloud
AWS is most commonly accessed over the Internet. You publish ‘services’ to be consumed by your users. This is how many people think of AWS – a load balancer with a couple of web servers, some databases and perhaps a bit of email or workflow.
In the “private” world, it’s unlikely you’ll want to provide direct access to your services over the Internet. You need to guarantee the integrity and security of your data. To maximise your use of the new environment you want to make sure it’s as close to your users and the rest of your infrastructure as possible.
AWS has two private connectivity methods you can use for this: DirectConnect and AWS managed VPN.
Both technologies allow you to “extend” your network into AWS. When you create your VPC, you allocate an IP range (that doesn’t clash with your internal network), and you can then establish a site-to-site connection to your new VPC. Any instance or service you spin up in your VPC is accessed directly from your internal network, using its private IP address. It’s just as if a new datacenter appeared on your network. Remember, you can still configure your VPC with an Internet Gateway and allocate Public IP addresses (or Elastic IPs) to your instances, which would then give them both an Internet IP and an IP on your internal network – you probably don’t want to do this!
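As a rough sketch of the first step with the AWS CLI – the CIDR range and all IDs below are illustrative placeholders, not values from a real environment:

```shell
# Hypothetical sketch: create a VPC whose CIDR doesn't clash with the
# corporate ranges already in use. 10.200.0.0/16 is a placeholder --
# pick a free range from your own IP address management records.
aws ec2 create-vpc \
    --cidr-block 10.200.0.0/16 \
    --tag-specifications 'ResourceType=vpc,Tags=[{Key=Name,Value=workload-vpc}]'

# Subnets are then carved out of that range, one per Availability Zone:
aws ec2 create-subnet \
    --vpc-id vpc-0123456789abcdef0 \
    --cidr-block 10.200.1.0/24 \
    --availability-zone eu-west-1a
```

Note that no Internet Gateway is created here – the VPC is only reachable over the private connectivity you establish.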
The AWS managed VPN service allows you to establish a VPN over the Internet between your network(s) and AWS. You’re limited by the speed of your internet connection. Also, you’re accessing your cloud environment over the Internet, with all the variable performance and latency that entails.
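In CLI terms, the managed VPN flow might be sketched like this (the public IP, ASN and resource IDs are all placeholders for your own values):

```shell
# 1. Describe your on-premise VPN endpoint to AWS:
aws ec2 create-customer-gateway \
    --type ipsec.1 \
    --public-ip 203.0.113.10 \
    --bgp-asn 65000

# 2. Create the AWS-side virtual private gateway and attach it to the VPC:
aws ec2 create-vpn-gateway --type ipsec.1
aws ec2 attach-vpn-gateway \
    --vpn-gateway-id vgw-0123456789abcdef0 \
    --vpc-id vpc-0123456789abcdef0

# 3. Tie the two together -- AWS builds two IPsec tunnels for redundancy:
aws ec2 create-vpn-connection \
    --type ipsec.1 \
    --customer-gateway-id cgw-0123456789abcdef0 \
    --vpn-gateway-id vgw-0123456789abcdef0
```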
The diagram below shows an example of how AWS Managed VPN connectivity interfaces with your network:
AWS DirectConnect allows you to establish a private circuit with AWS (like a traditional “leased line”). Your network traffic never touches the Internet or any other uncontrolled public network. You can connect directly to AWS’ routers at one of their shared facilities, or you can use a third-party service to provide the physical connectivity. The right option depends on your connectivity requirements: connecting directly to AWS means you own the service end-to-end, while a third party gives you greater flexibility in how you design the resiliency and the connection speed to AWS (DirectConnect offers physical 1GbE or 10GbE connectivity options, but if you want something in between, a third party can really help).
The diagram below shows an example of how you can architect DirectConnect connectivity between your corporate datacenter and the AWS cloud. DirectConnect also allows you to connect directly to Amazon services over your private connection, if required. This ensures that no traffic traverses the public Internet when you’re accessing AWS hosted services (such as API endpoints, S3, etc.). DirectConnect also allows you to access services across different regions, so you could have your primary infrastructure in eu-west-1 and your DR infrastructure in eu-west-2, and use the same DirectConnect to access both regions.
Both connectivity options offer the same native approach to access control you’re familiar with. Network ACLs (NACLs) and Security Groups function exactly as before – you can reference your internal network IP addresses/CIDR ranges as normal and control service access by IP and port. There’s no NAT in place between your network and AWS; it’s just like another datacenter on your network.
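For example, a security group rule admitting your corporate user LAN directly might look like this sketch (the group ID and the 10.1.0.0/16 range are placeholders for your own values):

```shell
# Allow HTTPS from the on-premise user LAN straight into the service.
# There's no NAT in the path, so the source CIDR is the real internal range.
aws ec2 authorize-security-group-ingress \
    --group-id sg-0123456789abcdef0 \
    --protocol tcp \
    --port 443 \
    --cidr 10.1.0.0/16
```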
Pro Tip: You probably want to delete your default VPCs. By default, AWS services will launch into the default VPC for a specific region, and this comes configured with the standard AWS template of ‘public/private’ subnets and internet gateways. Deleting the default VPCs and associated security groups makes it slightly harder for someone to spin up a service in the wrong place accidentally.
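One possible sketch of that clean-up, looping over every region. delete-vpc refuses to run while dependencies remain, so the internet gateway and subnets are removed first – test this in a sandbox account before running it anywhere real:

```shell
# Find and delete the default VPC in every region of the account.
for region in $(aws ec2 describe-regions --query 'Regions[].RegionName' --output text); do
  vpc=$(aws ec2 describe-vpcs --region "$region" \
          --filters Name=is-default,Values=true \
          --query 'Vpcs[0].VpcId' --output text)
  [ "$vpc" = "None" ] && continue

  # Detach and delete the internet gateway, if one is attached:
  igw=$(aws ec2 describe-internet-gateways --region "$region" \
          --filters "Name=attachment.vpc-id,Values=$vpc" \
          --query 'InternetGateways[0].InternetGatewayId' --output text)
  if [ "$igw" != "None" ]; then
    aws ec2 detach-internet-gateway --region "$region" \
        --internet-gateway-id "$igw" --vpc-id "$vpc"
    aws ec2 delete-internet-gateway --region "$region" --internet-gateway-id "$igw"
  fi

  # Delete the default subnets, then the VPC itself:
  for subnet in $(aws ec2 describe-subnets --region "$region" \
                    --filters "Name=vpc-id,Values=$vpc" \
                    --query 'Subnets[].SubnetId' --output text); do
    aws ec2 delete-subnet --region "$region" --subnet-id "$subnet"
  done
  aws ec2 delete-vpc --region "$region" --vpc-id "$vpc"
  echo "Deleted default VPC $vpc in $region"
done
```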
You’re not restricted to a single AWS VPC (by default, you’re able to create 5 per region, but this limit can be increased by contacting AWS support). VPCs make it very easy to isolate services – services you might not want to be accessed directly from your corporate network. You can build a ‘DMZ-like’ structure in your “private” cloud environment.
One good example of this is in the diagram below – you have a “landing zone” VPC where you host services that should be accessible directly from your corporate network (allowing you to create a bastion host environment), and you run your workloads elsewhere – isolated from your internal corporate network. In the example below, we also show an ‘external’ VPC – allowing us to access Internet-based services, as well as providing a secure inbound zone where we can accept incoming connectivity if required (essentially, this is a DMZ network, and can be used for both inbound and outbound traffic).
Through the use of VPC Peering, you can ensure that your workload VPCs can be reached from your inbound-gateway VPC, but as VPCs do not support transitive networking configurations by default, you cannot connect from the internal network directly to your workload VPC.
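A peering between the landing-zone VPC and a workload VPC might be sketched as follows (all IDs and the CIDR are placeholders). Peering itself moves no traffic – routes have to be added to the route tables on both sides:

```shell
# Request and accept the peering (same account, so we can do both sides):
aws ec2 create-vpc-peering-connection \
    --vpc-id vpc-0aaaaaaaaaaaaaaaa \
    --peer-vpc-id vpc-0bbbbbbbbbbbbbbbb
aws ec2 accept-vpc-peering-connection \
    --vpc-peering-connection-id pcx-0123456789abcdef0

# Route from the landing zone towards the workload VPC's range. A
# mirror-image route is also needed in the workload VPC's route table.
aws ec2 create-route \
    --route-table-id rtb-0123456789abcdef0 \
    --destination-cidr-block 10.201.0.0/16 \
    --vpc-peering-connection-id pcx-0123456789abcdef0
```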
Once your connectivity between your corporate network and AWS is established, you’ll want to deploy some services. Sure, spinning up an EC2 instance and connecting to it is easy, but what if you need to connect to an authentication service such as LDAP or Active Directory? Do you need to route your access via an on-premise web proxy server? Or, what if you want to publish services to the rest of your AWS environment or your corporate network but keep them isolated in your DMZ VPC?
Enter AWS PrivateLink: Launched at re:Invent in 2017, it allows you to “publish” a Network Load Balancer to other VPCs or other AWS accounts without needing to establish VPC peering. It’s commonly used to expose specific services or to supply Marketplace services (“SaaS” offerings) without needing to provide any more connectivity over and above precisely what your service requires.
We’re going to offer an example here of using PrivateLink to expose access to an AWS hosted web proxy server to our isolated inbound and workload VPCs. This gives you the ability to keep sensitive services isolated from the rest of your network but still provide essential functionality. AWS does not support transitive routing between VPCs (i.e., you cannot route from VPC A to VPC C via a shared VPC B), but PrivateLink allows you to work around this limitation for individual services (basically, anything you can “hide” behind a Network Load Balancer).
Assuming we’ve created the network architecture as per the diagram above, we need to create our Network Load Balancer first. NLBs are the only load balancer type supported by PrivateLink at present.
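Creating the NLB might look like this sketch (the names, subnet/VPC IDs and ARNs are placeholders; 3128 is the conventional Squid proxy port):

```shell
# An internal NLB in front of the proxy instances:
aws elbv2 create-load-balancer \
    --name private-proxy-nlb \
    --type network \
    --scheme internal \
    --subnets subnet-0123456789abcdef0 subnet-0fedcba9876543210

# A TCP target group for the proxy port, plus a listener forwarding to it:
aws elbv2 create-target-group \
    --name proxy-targets \
    --protocol TCP \
    --port 3128 \
    --vpc-id vpc-0123456789abcdef0
aws elbv2 create-listener \
    --load-balancer-arn arn:aws:elasticloadbalancing:eu-west-1:123456789012:loadbalancer/net/private-proxy-nlb/0123456789abcdef \
    --protocol TCP \
    --port 3128 \
    --default-actions Type=forward,TargetGroupArn=arn:aws:elasticloadbalancing:eu-west-1:123456789012:targetgroup/proxy-targets/0123456789abcdef
```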
Once this is complete, we can then create our ‘Endpoint Service,’ which is in the VPC section of the console:
Once the Endpoint Service is created, take note of the Endpoint Service Name – you’ll need it to create the actual endpoints in your VPCs.
The Endpoint Service Name is unique across all VPC endpoints in a specific region. This means you can share this with other accounts, which are then able to discover your endpoint service. By default, you need to accept all requests to your endpoint manually, but this can be disabled (you probably don’t want this, though!). You can also whitelist specific account IDs that are allowed to create a PrivateLink connection to your endpoint.
Once your Endpoint Service is created, you then need to expose this into your VPCs. This is done from the ‘Endpoints’ configuration screen under VPCs in the AWS console. Validate your endpoint service name and select the VPC required – simple!
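The same flow can be sketched with the CLI – the ARN, IDs and service name below are placeholders:

```shell
# Provider side: publish the NLB as an endpoint service.
aws ec2 create-vpc-endpoint-service-configuration \
    --network-load-balancer-arns arn:aws:elasticloadbalancing:eu-west-1:123456789012:loadbalancer/net/private-proxy-nlb/0123456789abcdef \
    --acceptance-required

# Consumer side: create an interface endpoint for that service in each VPC
# that needs it.
aws ec2 create-vpc-endpoint \
    --vpc-endpoint-type Interface \
    --vpc-id vpc-0bbbbbbbbbbbbbbbb \
    --service-name com.amazonaws.vpce.eu-west-1.vpce-svc-0123456789abcdef0 \
    --subnet-ids subnet-0123456789abcdef0 \
    --security-group-ids sg-0123456789abcdef0
```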
You can then use this DNS name to reference your VPC endpoint. It will resolve to an IP address in your VPC (via an Elastic Network Interface), but traffic to this endpoint will be routed directly across the Amazon network to the Network Load Balancer.
What’s in a Name?
Typically, one of the biggest hurdles with connecting between your internal network and AWS is the ability to route DNS queries correctly. DNS is key to many Amazon services, and Amazon Provided DNS (now Route53 Resolver) contains a significant amount of behind-the-scenes intelligence, such as allowing you to reach the correct Availability Zone target for your ALB or EFS mount point.
Hot off the press is the launch of Route53 Resolver, which removes the need to create your own DNS infrastructure to route requests between your AWS network and your internal network, while allowing you to continue to leverage the intelligence built into the Amazon DNS service. Previously, you would need to build your own DNS forwarder on an EC2 instance to route queries to your corporate network. This means that, from the AWS perspective, all your DNS requests originate from a single server in a specific AZ (which might be different to the AZ of the client system), and so you’d end up getting the endpoint in a different AZ for your service. With a service such as EFS, this could result in increased latency and a high cross-AZ data transfer bill.
Here’s an example of how the Route53 resolver automatically picks the correct mount point target based on the location of your client system:
[ec2-user@test-euw-1a ~]$ curl http://169.254.169.254/latest/meta-data/placement/availability-zone
eu-west-1a
[ec2-user@test-euw-1a ~]$ host fs-a2c10c6a.efs.eu-west-1.amazonaws.com
fs-a2c10c6a.efs.eu-west-1.amazonaws.com has address 10.10.0.156
[ec2-user@test-euw-1a ~]$ host eu-west-1a.fs-a2c10c6a.efs.eu-west-1.amazonaws.com
eu-west-1a.fs-a2c10c6a.efs.eu-west-1.amazonaws.com has address 10.10.0.156
[ec2-user@test-euw-1b ~]$ curl http://169.254.169.254/latest/meta-data/placement/availability-zone
eu-west-1b
[ec2-user@test-euw-1b ~]$ host fs-a2c10c6a.efs.eu-west-1.amazonaws.com
fs-a2c10c6a.efs.eu-west-1.amazonaws.com has address 10.10.10.244
[ec2-user@test-euw-1b ~]$ host eu-west-1b.fs-a2c10c6a.efs.eu-west-1.amazonaws.com
eu-west-1b.fs-a2c10c6a.efs.eu-west-1.amazonaws.com has address 10.10.10.244
Pro Tip: If you’re using a lot of standardised endpoint services (such as proxy servers), using a common DNS name that works across VPCs is a real time-saver. This requires you to create a Route53 internal zone for each VPC (such as workload.example.com, inbound.example.com) and update the VPC DHCP Option Set to hand out this domain name via DHCP to your instances. This then allows you to create a record in each zone with a CNAME to the endpoint service, for example:
From an instance in our workload VPC:
[ec2-user@workload-eu1a ~]$ host proxy.privatelink
proxy.privatelink.workload.example.com is an alias for vpce-0a6ef969793d167ba-k8e62rfb.vpce-svc-0845ad71888c654ac.eu-west-1.vpce.amazonaws.com.
And the same commands from an instance in our inbound VPC:
[ec2-user@inbound-eu1b ~]$ host proxy.privatelink
proxy.privatelink.inbound.example.com is an alias for vpce-0fec135ef4ffe3b82-yggf9a3f.vpce-svc-0845ad71888c654ac.eu-west-1.vpce.amazonaws.com.
In the example above, we could use our configuration management system to set the http_proxy environment variable to ‘proxy.privatelink:3128’ and avoid any per-VPC specific logic. Neat!
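The per-VPC zone, CNAME and DHCP option set described above might be provisioned along these lines. The zone ID, VPC ID and DHCP option set ID are placeholders; the CNAME target is the endpoint DNS name from the workload-VPC example:

```shell
# Private hosted zone attached to the workload VPC:
aws route53 create-hosted-zone \
    --name workload.example.com \
    --caller-reference "workload-zone-$(date +%s)" \
    --vpc VPCRegion=eu-west-1,VPCId=vpc-0123456789abcdef0

# CNAME pointing the short name at the PrivateLink endpoint:
aws route53 change-resource-record-sets \
    --hosted-zone-id Z0123456789ABCDEFGHIJ \
    --change-batch '{"Changes":[{"Action":"UPSERT","ResourceRecordSet":{
        "Name":"proxy.privatelink.workload.example.com.","Type":"CNAME","TTL":300,
        "ResourceRecords":[{"Value":"vpce-0a6ef969793d167ba-k8e62rfb.vpce-svc-0845ad71888c654ac.eu-west-1.vpce.amazonaws.com"}]}}]}'

# DHCP option set so instances search the per-VPC domain, then associate it:
aws ec2 create-dhcp-options \
    --dhcp-configurations \
        'Key=domain-name,Values=workload.example.com' \
        'Key=domain-name-servers,Values=AmazonProvidedDNS'
aws ec2 associate-dhcp-options \
    --dhcp-options-id dopt-0123456789abcdef0 \
    --vpc-id vpc-0123456789abcdef0
```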
There are still AWS services that expect to have Internet access available from your VPC by default. One example of this is AWS Fargate – the Amazon-hosted and managed container deployment solution. However, Amazon is constantly migrating more and more services to PrivateLink, meaning this restriction is slowly going away.
A full list of currently available VPC endpoint services is available in the VPC Endpoint documentation. AWS provided VPC Endpoints also give you the option to update DNS to return the VPC endpoint IPs when you resolve the relevant AWS endpoint service name (i.e. ec2.eu-west-1.amazonaws.com -> vpce-123-abc.ec2.eu-west-1.vpce.amazonaws.com).
About the Author
Jon is a freelance cloud devoperative buzzword-hater, currently governing the clouds for a financial investment company in London, helping them expand their research activities into “the cloud.”
Before branching out into the big bad world of corporate consulting, Jon spent five years at Red Hat, focusing on the financial services sector as a Technical Account Manager, and then as an on-site consultant.
When he’s not yelling at the cloud, Jon is a trustee of the charity Service By Emergency Rider Volunteers – Surrey & South London, the “Blood Runners,” who provide free out-of-hours transport services to the UK National Health Service. He is also guardian to two small dogs and a flock of chickens.
About the Editor
Jennifer Davis is a Senior Cloud Advocate at Microsoft. Jennifer is the coauthor of Effective DevOps. Previously, she was a principal site reliability engineer at RealSelf, developed cookbooks to simplify building and managing infrastructure at Chef, and built reliable service platforms at Yahoo. She is a core organizer of devopsdays and organizes the Silicon Valley event. She is the founder of CoffeeOps. She has spoken and written about DevOps, Operations, Monitoring, and Automation.