In Part 1 of this blog post, we discussed the purpose and benefits of using AWS Application Load Balancers (ALBs) and the use cases where they come in handy. In this post, we will discuss how we simplify implementation of ALBs using Cloudamatic.
Amazon’s SDK support for ALBs is very similar to that for ELBs, so much so that it’s referred to in the API as ElasticLoadBalancingV2. Given the amount of overlap, we chose to integrate directly with our existing LoadBalancer resource rather than add an entirely new resource type. We added new configuration options to describe some of the new ALB-specific features, along with a flag called classic for explicitly requesting an old-style Elastic Load Balancer, and then made an Application Load Balancer the default result when requesting a LoadBalancer resource in AWS.
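As a sketch of that knob (the surrounding keys are illustrative, not necessarily Cloudamatic’s exact schema), opting back into an old-style ELB might look like:

```yaml
- name: mylb
  classic: true  # explicitly request an old-style Elastic Load Balancer
```

Without the flag, the same declaration now produces an Application Load Balancer.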
A key design principle behind our stack descriptor language is simplicity-of-reading. We use sensible defaults wherever possible to minimize the amount of explicit description needed to build complete resources, and only trot out the verbose detail work when fine-tuning is called for. For example, the following stack is used to create a standalone LoadBalancer in the client’s account, without explicitly declaring the nitty-gritty details of VPC subnet targeting, timeouts, and logging:
```yaml
- name: mylb
  listeners:
  - lb-port: 80
    port: 80
  - lb-port: 443
    port: 443
```
…if we use our mu-deploy tool’s --cloudformation option to generate a CloudFormation template equivalent to the above, it comes out to around 380 lines. That’s the sort of thing we don’t like to maintain!
In keeping with this simplicity-first principle, we wanted the language to change as little as possible when migrating from ELBs to ALBs, which meant smoothing over some differences. For example, the specifics of what back-end instances to use, and with what ports and protocols, are no longer part of a Listener artifact as they are with ELBs. Rather, there’s a separate entity called a Target Group, which must be created and populated, then associated with one or more LoadBalancers. Similarly, a Health Check is no longer a singular artifact for the LoadBalancer itself, but rather exists for each Target Group.
We added logic to our configuration parser that attempts to translate ELB-style healthcheck, listener, and other declarations into a reasonable ALB equivalent. It will generate a targetgroup that maps to each set of backend connectivity parameters declared in those listeners, and clone a healthcheck into each one as it goes.
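A minimal sketch of that translation step (function and key names here are hypothetical, not Cloudamatic’s actual parser internals) might look like:

```python
# Sketch of the ELB-to-ALB translation described above: derive one targetgroup
# per unique set of backend connectivity parameters found in ELB-style
# listeners, cloning the healthcheck into each as we go.
def listeners_to_targetgroups(lb):
    """Translate ELB-style listener declarations into ALB targetgroups."""
    healthcheck = lb.get("healthcheck", {})
    targetgroups = {}
    for listener in lb.get("listeners", []):
        port = listener["instance_port"]
        proto = listener.get("instance_protocol", "HTTP")
        name = "%s%s%d" % (lb["name"], proto.lower(), port)
        targetgroups.setdefault(name, {
            "name": name,
            "port": port,
            "proto": proto,
            # Clone rather than share, so each targetgroup's healthcheck
            # can be tuned independently later.
            "healthcheck": dict(healthcheck),
        })
        listener["targetgroup"] = name
    return list(targetgroups.values())
```

Each listener ends up pointing at a generated targetgroup, and two listeners sharing the same backend port and protocol would share one.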
In fact, the above example ELB description builds a perfectly nice Application Load Balancer, with functionality identical to the Elastic Load Balancer it used to produce. If you were to write all of those listeners and targetgroups out the long way, it’d look a bit like this:
```yaml
listeners:
- lb_port: 80
- lb_port: 443
targetgroups:
- name: mylbhttp80
- name: mylbhttps443
# (per-targetgroup ports, protocols, and healthchecks elided in this excerpt)
```
…which is about as long as our entire original Basket of Kittens. I don’t want to write all that unless I need to. Shorthand is good!
Integrating with Existing Deployments
Our mu-deploy tool already knows how to update deployment metadata from updated stack descriptions with its --update flag. But that just updates metadata, and doesn’t touch real live resources. So we began work on a feature to update or add live cloud resources in existing deployments. mu-deploy now has a --liveupdate modifier to --update, which will attempt to add or modify live resources based on metadata updates.
We were able to build enough capability to insert ALBs into our live Windows deployments, alongside their existing ELBs, while framing out support for other resource types. That done, we could fall forward onto our new ALBs, with the old ELBs intact in case things didn’t go as well as we hoped.
While this feature remains incomplete and experimental, we hope to flesh out the capability to cover all of our supported resource types, and perhaps end up a bit smarter than CloudFormation’s UPDATE capability.
Oh Yeah, Wasn’t There Something About Blacklists?
Now that we had ourselves a pretty array of Application Load Balancers all along our boundary, we could start leveraging the key feature we needed. Adding blacklisted IP addresses is simple enough through the AWS Console. Creating a Web ACL, associating it with our ALBs, and then adding our list of blacklisted IPs just took a few minutes.
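The same ACL update can be scripted. Here is a hedged sketch using boto3’s classic “regional” WAF API (the one that worked with ALBs at the time); the IP set ID is a placeholder, and real calls require AWS credentials:

```python
# Sketch: insert IPs into an existing WAF IP set via the waf-regional API.
# The IPSetId value is hypothetical; look yours up in the console or via
# list_ip_sets(). Not Cloudamatic's code, just an illustration.
def to_cidr(ip):
    """WAF IPSetDescriptors take CIDR notation; wrap a bare IPv4 as a /32."""
    return ip if "/" in ip else ip + "/32"

def add_ips_to_blacklist(ip_set_id, ips):
    import boto3  # imported here so the sketch reads without boto3 installed
    waf = boto3.client("waf-regional")
    updates = [{"Action": "INSERT",
                "IPSetDescriptor": {"Type": "IPV4", "Value": to_cidr(ip)}}
               for ip in ips]
    # Classic WAF requires a change token per mutating call.
    token = waf.get_change_token()["ChangeToken"]
    waf.update_ip_set(IPSetId=ip_set_id, ChangeToken=token, Updates=updates)
```

A Web ACL rule referencing that IP set, associated with each ALB, then blocks the listed addresses.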
But that’s only half of our requirement: we were also asked to block all traffic from certain domains. This is an odder request than it sounds; DNS is spoofed with relative ease, and domains associated with, say, large dialup ISPs can map to a large number of IP addresses, both hostile and benign. As such, commercial firewall solutions tend not to bother supporting DNS-based blocking, and Amazon’s WAF is no exception.
Still, we have a requirement to meet. We could do a one-time lookup of the domains in the list and add the results to our IP blacklist… but that’s not really correct. The results can and will change over time. Instead, we’ll have to do a little dynamic updating. Amazon doesn’t have a bundled solution for this, but there are a number of example architectures that use Lambda to modify WAF ACLs. So we decided to build something of our own: a Lambda function that reads Load Balancer logs from across the environment, looks for traffic that matches one of our blacklisted domains, and adds the matching IPs to our IP-based blacklist.
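The log-scanning half of that function can be sketched roughly as follows (this is an illustration, not the sample in the repository; the resolver is injectable so the matching logic can be tested without live DNS):

```python
import socket

# Sketch of the log-scanning step: ALB access-log fields are space-separated,
# with the client ip:port in the fourth field.
def client_ip(log_line):
    """Pull the client IP out of one ALB access-log line."""
    return log_line.split(" ")[3].rsplit(":", 1)[0]

def blacklisted_ips(log_lines, domains, resolver=None):
    """Reverse-resolve each client IP and flag those whose hostname falls
    under one of the blacklisted domains."""
    if resolver is None:
        resolver = socket.getfqdn  # live DNS lookup in real use
    flagged = set()
    for line in log_lines:
        ip = client_ip(line)
        host = resolver(ip)
        if any(host == d or host.endswith("." + d) for d in domains):
            flagged.add(ip)
    return flagged
```

The flagged addresses would then be fed into the WAF IP blacklist on each run.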
We’ve added a sample version of this reactive DNS blacklist Lambda function, written in Python, to the Cloudamatic repository. Feel free to try it out or modify it for your own use!