A while back we looked at the performance difference between the language runtimes AWS Lambda supports natively.
We intentionally omitted coldstart time from that experiment as we were interested in performance differences when a function is “warm”.
However, coldstart is still an important performance consideration, so let’s take a closer look with some experiments designed to measure only coldstart times.
From my personal experience running Lambda functions in production, coldstarts happen when a function has been idle for ~5 mins. Additionally, functions are recycled around 4 hours after they start – which was also backed up by analysis from the folks at IOpipe.
However, the 5-min rule seems to have changed. After a few tests, I was not able to trigger a coldstart even after a function had been idle for more than 30 mins.
I needed a more reliable way to trigger coldstart.
After a few failed attempts, I settled on a surefire way to cause coldstarts: deploying a new version of my functions before invoking them.
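Deploying a new version forces a coldstart because every existing container is discarded. To tell cold invocations apart from warm ones, a common trick is a module-scope flag – module scope runs once per container, so only the first invocation in a fresh container sees it unset. A minimal Python sketch (the handler and field names are my own, not from the experiment):

```python
# A variable at module scope is initialised once per container, so the
# first invocation in a fresh container (i.e. a coldstart) sees True.
is_cold_start = True

def handler(event, context):
    global is_cold_start
    was_cold = is_cold_start
    is_cold_start = False  # every later invocation in this container is warm
    return {"cold_start": was_cold}
```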
I have a total of 45 functions across both experiments. Using a simple script (see below) I’m able to:
- deploy all 45 functions using the Serverless framework
- after each round of deployments, invoke the functions programmatically
The deploy + invoke loop takes around 3 mins. I ran the experiment for over 24 hours to collect a meaningful number of data points. Thankfully, the Serverless framework made it easy to create variants of the same function with different memory sizes and to deploy them quickly.
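The loop can be sketched along these lines – note the function names, runtime list and memory sizes below are assumptions for illustration (the real experiment drove 45 functions defined in serverless.yml):

```python
import itertools
import subprocess

# Assumed naming convention: one function per runtime/memory combination,
# e.g. "nodejs-128". These are the 20 variants from experiment 1.
RUNTIMES = ["csharp", "java", "python", "nodejs"]
MEMORY_SIZES = [128, 256, 512, 1024, 1536]

def variant_names(runtimes, memory_sizes):
    """Build one function name per runtime/memory combination."""
    return [f"{r}-{m}" for r, m in itertools.product(runtimes, memory_sizes)]

def deploy_and_invoke():
    """One round: redeploy everything (forcing coldstarts), then invoke each function."""
    subprocess.run(["sls", "deploy"], check=True)
    for name in variant_names(RUNTIMES, MEMORY_SIZES):
        subprocess.run(["sls", "invoke", "-f", name], check=True)
```

Calling `deploy_and_invoke()` in a loop repeats the ~3 min deploy + invoke cycle for as long as you let it run.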
Here were my hypotheses before the experiments, based on the knowledge that the amount of CPU resource you get is proportional to the amount of memory you allocate to an AWS Lambda function:
- C# and Java have higher coldstart time
- memory size affects coldstart time linearly
- code size affects coldstart time linearly
Let’s see if the experiments support these hypotheses.
Experiment 1: coldstart time by runtime & memory
For this experiment, I created 20 functions with 5 variants (different memory sizes) for each language runtime – C#, Java, Python and Nodejs.
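A sketch of how such variants might be declared in serverless.yml – the handler paths, runtime identifiers and function names here are assumptions for illustration, not the author’s actual config:

```yml
functions:
  nodejs-128:
    handler: nodejs/handler.hello
    runtime: nodejs6.10
    memorySize: 128
  nodejs-256:
    handler: nodejs/handler.hello
    runtime: nodejs6.10
    memorySize: 256
  # ...repeated for 512/1024/1536, and likewise for the
  # python, java and csharp runtimes
```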
After running the experiment for a little over 24 hours, I collected a bunch of metric data (which you can download yourself here).
Here is how they look.
Observation #1: C# and Java have much higher coldstart time
The most obvious trend is that statically typed languages (C# and Java) have over 100 times higher coldstart time. This clearly supports our hypothesis, although to a much greater extent than I anticipated.
Observation #2: Python has ridiculously low coldstart time
I’m pleasantly surprised by how little coldstart the Python runtime experiences. OK, there were some outlier data points that heavily influenced some of the 99th percentiles and standard deviations, but you can’t argue with a 0.41ms coldstart time at the 95th percentile for a 128MB function.
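For reference, the percentile and standard-deviation figures in these results can be computed from the raw coldstart durations with a few lines. A nearest-rank sketch (this is illustrative, not the tooling used in the experiment):

```python
import statistics

def summarise(durations_ms):
    """Summarise a list of coldstart samples (in ms) the way the result tables do."""
    s = sorted(durations_ms)
    n = len(s)

    def pct(p):
        # nearest-rank percentile: value at index ceil(p/100 * n) - 1
        return s[max(0, -(-p * n // 100) - 1)]

    return {
        "mean": statistics.mean(s),
        "stddev": statistics.stdev(s),
        "p95": pct(95),
        "p99": pct(99),
    }
```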
Observation #3: memory size improves coldstart time linearly
The more memory you allocate to your function, the shorter the coldstart time and the smaller the standard deviation in coldstart time too. This is most obvious with the C# and Java runtimes, as the baseline (128MB) coldstart time for both is very significant.
Again, the data from this experiment clearly supports our hypothesis.
Experiment 2: coldstart time by code size & memory
For this second experiment, I decided to fix the runtime to Nodejs and create variants with different deployment package size and memory.
Here are the results.
Observation #1: memory size improves coldstart time linearly
As with the first experiment, the memory size improves the coldstart time (and standard deviation) in a roughly linear fashion.
Observation #2: code size improves coldstart time
Interestingly, the size of the deployment package does not increase the coldstart time (one might assume a bigger package means more time to download and unzip). Instead, it seems to have a positive effect and decreases the overall coldstart time.
I would love to see someone else repeat the experiment with another language runtime to see if the behaviour is consistent.
The things I learnt from these experiments are:
- functions are no longer recycled after ~5 mins of idleness, which makes coldstarts far less punishing than before
- memory size improves coldstart time linearly
- C# and Java runtimes experience ~100 times the coldstart time of Python and suffer from much higher standard deviation too
- as a result of the above you should consider running your C#/Java Lambda functions with a higher memory allocation than you would Nodejs/Python functions
- bigger deployment package size does not increase coldstart time
I specialise in rapidly transitioning teams to serverless and building production-ready services on AWS.
Are you struggling with serverless or need guidance on best practices? Do you want someone to review your architecture and help you avoid costly mistakes down the line? Whatever the case, I’m here to help.
Check out my new podcast Real-World Serverless where I talk with engineers who are building amazing things with serverless technologies and discuss the real-world use cases and challenges they face. If you’re interested in what people are actually doing with serverless and what it’s really like to be working with serverless day-to-day, then this is the podcast for you.
Check out my new course, Learn you some Lambda best practice for great good! In this course, you will learn best practices for working with AWS Lambda in terms of performance, cost, security, scalability, resilience and observability. We will also cover latest features from re:Invent 2019 such as Provisioned Concurrency and Lambda Destinations. Enrol now and start learning!
Check out my video course, Complete Guide to AWS Step Functions. In this course, we’ll cover everything you need to know to use the AWS Step Functions service effectively. There is something for everyone, from beginners to more advanced users looking for design patterns and best practices. Enrol now and start learning!
Are you working with Serverless and looking for expert training to level-up your skills? Or are you looking for a solid foundation to start from? Look no further, register for my Production-Ready Serverless workshop to learn how to build production-grade Serverless applications!
Here is a complete list of all my posts on serverless and AWS Lambda. In the meantime, here are a few of my most popular blog posts.
- Lambda optimization tip – enable HTTP keep-alive
- You are wrong about serverless and vendor lock-in
- You are thinking about serverless costs all wrong
- Just how expensive is the full AWS SDK?
- Many faced threats to Serverless security
- We can do better than percentile latencies
- Yubl’s road to Serverless
- AWS Lambda – should you have few monolithic functions or many single-purposed functions?
- AWS Lambda – compare coldstart time with different languages, memory and code sizes
- Guys, we’re doing pagination wrong
- Top 10 Serverless framework best practices