When you deploy an API to API Gateway, throttling is enabled by default in the stage configurations.
By default, every method inherits its throttling settings from the stage.
Having built-in throttling enabled by default is great. However, the default method limits (10,000 req/s with a burst of 5,000 requests) match your account-level limits. As a result, ALL your APIs in the entire region share a rate limit that can be exhausted by a single method.
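You can see this for yourself by comparing your account-level throttle settings against the method settings on one of your stages. Here's a rough sketch with boto3 (the REST API ID and stage name are placeholders):

```python
import boto3

apigw = boto3.client("apigateway")

# account-level throttle settings for this region, e.g. 10,000 req/s with a burst of 5,000
account = apigw.get_account()
print("account limits:", account["throttleSettings"])

# stage settings for one of your APIs (replace the placeholders with your own IDs)
stage = apigw.get_stage(restApiId="abc123", stageName="dev")

# unless you have overridden them, there are no per-method throttle overrides here,
# so every method falls back to the shared account-level limits above
print("method settings:", stage.get("methodSettings", {}))
```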
It also means that, as an attacker, I only need to launch a DoS attack against one public endpoint to bring down not just the API in question, but all your APIs in the entire region, effectively rendering your entire system unavailable.
Given that many organizations run their entire production environment out of a single AWS region and account, this is a risk you can’t afford to ignore.
Is WAF not the answer to DoS?
You can configure WAF rules for both API Gateway and CloudFront. For API Gateway, you can associate a web ACL in the stage settings.
With AWS WAF, you can create rate-based rules that apply rate limits at the IP level.
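Here's a rough sketch of what such a rule looks like using the current WAFv2 API with boto3. The names and the 2,000 requests-per-5-minutes limit are just placeholders:

```python
import boto3

wafv2 = boto3.client("wafv2")

# a regional web ACL with a single rate-based rule that blocks any IP
# exceeding 2,000 requests per 5-minute window
wafv2.create_web_acl(
    Name="api-rate-limit",
    Scope="REGIONAL",  # use "CLOUDFRONT" (in us-east-1) to protect a CloudFront distribution
    DefaultAction={"Allow": {}},
    Rules=[
        {
            "Name": "rate-limit-per-ip",
            "Priority": 0,
            "Statement": {
                "RateBasedStatement": {"Limit": 2000, "AggregateKeyType": "IP"}
            },
            "Action": {"Block": {}},
            "VisibilityConfig": {
                "SampledRequestsEnabled": True,
                "CloudWatchMetricsEnabled": True,
                "MetricName": "rate-limit-per-ip",
            },
        }
    ],
    VisibilityConfig={
        "SampledRequestsEnabled": True,
        "CloudWatchMetricsEnabled": True,
        "MetricName": "api-rate-limit",
    },
)
```

You'd then attach the web ACL to your API Gateway stage with associate_web_acl (CloudFront distributions take the web ACL ARN in their own configuration instead).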
This is sufficient to repel basic DoS attacks where all the requests originate from a handful of IP addresses, but it's far from a foolproof system.
For starters, it won’t protect you from DDoS attacks launched from even a small botnet with thousands of hosts. The rise of IoT devices (and their poor security) has also given rise to IoT botnets, which can comprise millions of compromised devices.
These rate-based WAF rules also struggle to deal with low-and-slow DoS attacks, which generate a slow and steady stream of requests that are hard to differentiate from normal traffic.
This naive IP-level rate limiting can also block legitimate traffic from institutions whose users share the same IP address. These can include universities and, in some cases, even small towns. In the past, I also observed that many AOL users shared the same IP address.
In short, WAF can keep the script kiddies out, but it's not a good enough answer to the threat of DoS attacks. The core of the problem is that one method is allowed to inflict maximum damage on the whole region, and it's a problem that really needs to be addressed at the platform level.
So what can we do?
The solution is simple, but the challenge is in governance.
“All you have to do” is apply a sensible rate limit to each method individually. However, doing so requires constant developer discipline, and we know from history that this leads to failure because humans are terrible at doing the same thing over and over consistently.
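To give you an idea of what that looks like outside of any framework, here's a rough sketch that overrides the throttling settings for a single method with boto3. The API ID, stage name, resource path and limits are all placeholders:

```python
import boto3

apigw = boto3.client("apigateway")

# override the throttle limits for one specific method (here, GET /pets) instead of
# letting it inherit the account-level defaults
apigw.update_stage(
    restApiId="abc123",
    stageName="prod",
    patchOperations=[
        # method settings are keyed by "{resource path}/{HTTP method}", with "/"
        # in the resource path escaped as "~1"
        {"op": "replace", "path": "/~1pets/GET/throttling/rateLimit", "value": "100"},
        {"op": "replace", "path": "/~1pets/GET/throttling/burstLimit", "value": "50"},
    ],
)
```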
At the time of writing, there’s no built-in support in the Serverless framework to configure these method settings. The best option seems to be the serverless-api-stage plugin. It works, but the project has been dormant for over a year and the author has not responded to any of the recent issues or PRs.
You can create a custom rule in AWS Config to check that every API Gateway method is created with a rate limit override. This is a good way to catch non-compliance and enforce better practices in the organization.
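Here's a rough sketch of the Lambda function behind such a custom Config rule. It marks a stage as non-compliant unless at least one method has an explicit throttle override. The shape of the recorded configuration (in particular the methodSettings keys) is an assumption you should verify against your own Config items:

```python
import json
import boto3

config = boto3.client("config")


def handler(event, context):
    # custom Config rules receive the configuration item inside a JSON-encoded invokingEvent
    invoking_event = json.loads(event["invokingEvent"])
    item = invoking_event["configurationItem"]

    compliance = "NOT_APPLICABLE"
    if item["resourceType"] == "AWS::ApiGateway::Stage":
        # assumed shape: methodSettings entries carry a throttlingRateLimit when overridden
        method_settings = item["configuration"].get("methodSettings") or {}
        has_override = any(
            "throttlingRateLimit" in settings for settings in method_settings.values()
        )
        compliance = "COMPLIANT" if has_override else "NON_COMPLIANT"

    # report the verdict back to AWS Config
    config.put_evaluations(
        Evaluations=[
            {
                "ComplianceResourceType": item["resourceType"],
                "ComplianceResourceId": item["resourceId"],
                "ComplianceType": compliance,
                "OrderingTimestamp": item["configurationItemCaptureTime"],
            }
        ],
        ResultToken=event["resultToken"],
    )
```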
You can also implement some automated remediation. For example, you can trigger a Lambda function after every API Gateway deployment using CloudTrail and CloudWatch Events/EventBridge. If the API author has left the default rate limits in place, the function can override them with more sensible settings. This wouldn't be my first port of call, though, as it can be confusing to the API author when the configuration of their API changes without any action on their part.
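If you do go down this route, the remediation function might look something like the sketch below, triggered by an EventBridge rule matching CloudTrail's CreateDeployment events from apigateway.amazonaws.com. The exact shape of requestParameters (and the default limits used here) are assumptions, so check them against your own CloudTrail records:

```python
import boto3

apigw = boto3.client("apigateway")

# hypothetical stage-wide defaults to apply when the author hasn't set their own
DEFAULT_RATE_LIMIT = "100"
DEFAULT_BURST_LIMIT = "50"


def handler(event, context):
    # assumed event shape for a CreateDeployment call recorded by CloudTrail
    params = event["detail"]["requestParameters"]
    rest_api_id = params["restApiId"]
    stage_name = params["createDeploymentInput"]["stageName"]

    stage = apigw.get_stage(restApiId=rest_api_id, stageName=stage_name)
    if not stage.get("methodSettings"):
        # no overrides at all, so apply a stage-wide default ("*/*" covers every method)
        apigw.update_stage(
            restApiId=rest_api_id,
            stageName=stage_name,
            patchOperations=[
                {"op": "replace", "path": "/*/*/throttling/rateLimit", "value": DEFAULT_RATE_LIMIT},
                {"op": "replace", "path": "/*/*/throttling/burstLimit", "value": DEFAULT_BURST_LIMIT},
            ],
        )
```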
Another strategy is to reduce the amount of traffic that reaches API Gateway by leveraging CloudFront as a CDN. The rate-based WAF rules can be applied to CloudFront too, although the same limitations we discussed earlier still apply, which means you can incur extra CloudFront cost during a DDoS attack.
With AWS Shield Advanced ($3,000/month plus various other fees), you get payment protection against this extra cost incurred during an attack. Perhaps more importantly, you also get access to the DDoS Response Team if you have an existing Business or Enterprise support plan. Given the cost involved, this is likely to be out of reach for many startups.
All in all, the tooling needs to improve to help people do the right thing by default. We need better support from the likes of the Serverless framework so we can configure these rate limits easily. And I hope AWS changes the default behaviour of applying region-wide limits to every method, or at the very least shows a warning in the console when your rate limit settings expose you to serious risk.
Update 25/11/2019: my good friend Diana Ionita has published a new Serverless framework plugin, serverless-api-gateway-throttling. It lets you easily configure the default throttling settings for your API and override them for individual endpoints. If you're using the Serverless framework, you should definitely check it out.
Whenever you’re ready, here are 3 ways I can help you:
- Production-Ready Serverless: Join 20+ AWS Heroes & Community Builders and 1000+ other students in levelling up your serverless game. This is your one-stop shop for quickly improving your serverless skills.
- I help clients launch product ideas, improve their development processes and upskill their teams. If you’d like to work together, then let’s get in touch.
- Join my community on Discord, ask questions, and join the discussion on all things AWS and Serverless.