aws lambda — compare coldstart time with different languages, memory and code sizes

A while back we looked at the performance difference between the language runtimes AWS Lambda supports natively.

We intentionally omitted coldstart time from that experiment as we were interested in performance differences when a function is “warm”.

However, coldstart is still an important performance consideration, so let’s take a closer look with some experiments designed to measure only coldstart times.


From my personal experience running Lambda functions in production, coldstarts happen when a function is idle for ~5 mins. Additionally, functions are recycled 4 hours after they start — which was also backed up by analysis from the folks at IO Pipe.

However, the 5 mins rule seems to have changed. After a few tests, I was not able to trigger a coldstart even after a function had been idle for more than 30 mins.

I needed a more reliable way to trigger coldstarts.

After a few failed attempts, I settled on a surefire way to cause a coldstart: deploying a new version of my functions before invoking them.

I have a total of 45 functions across both experiments. Using a simple script (see below) I’m able to:

  1. deploy all 45 functions using the Serverless framework
  2. after each round of deployments, invoke the functions programmatically

The deploy + invoke loop takes around 3 mins. I ran the experiment for over 24 hours to collect a meaningful number of data points. Thankfully the Serverless framework made it easy to create variants of the same function with different memory sizes and to deploy them quickly.


Here were my hypotheses before the experiments, based on the knowledge that the amount of CPU resource you get is proportional to the amount of memory you allocate to an AWS Lambda function:

  1. C# and Java have higher coldstart times
  2. memory size affects coldstart time linearly
  3. code size affects coldstart time linearly

Let’s see if the experiments support these hypotheses.

Experiment 1: coldstart time by runtime & memory

For this experiment, I created 20 functions with 5 variants (different memory sizes) for each language runtime — C#, Java, Python and Nodejs.
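In serverless.yml, each variant is just the same handler declared under a different name with a different memorySize — a minimal sketch (the handler paths and function names here are hypothetical):

```yaml
functions:
  python-128:
    handler: python/handler.hello
    runtime: python3.6
    memorySize: 128
  python-256:
    handler: python/handler.hello
    runtime: python3.6
    memorySize: 256
  # ...and so on for 512, 1024 and 1536, and likewise for nodejs, csharp and java
```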

After running the experiment for a little over 24 hours, I collected a bunch of metric data (which you can download yourself here).

Here is how they look.

Observation #1: C# and Java have much higher coldstart time

The most obvious trend is that statically typed languages (C# and Java) have over 100 times higher coldstart time. This clearly supports our hypothesis, although to a much greater extent than I anticipated.

Observation #2: Python has ridiculously low coldstart time

I’m pleasantly surprised by how little coldstart the Python runtime experiences. OK, there were some outlier data points that heavily influenced some of the 99th percentile figures and standard deviations, but you can’t argue with a 0.41ms coldstart time at the 95th percentile of a 128MB function.
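The percentile and standard-deviation figures quoted throughout can be reproduced from the raw durations with a couple of small helpers — a sketch only; the actual stats script lives in the linked repo:

```javascript
// Nearest-rank percentile: the smallest value such that at least p%
// of the sample is at or below it.
function percentile(values, p) {
  const sorted = [...values].sort((a, b) => a - b);
  const rank = Math.ceil((p / 100) * sorted.length);
  return sorted[Math.max(rank - 1, 0)];
}

// Population standard deviation of the sample.
function stddev(values) {
  const mean = values.reduce((sum, x) => sum + x, 0) / values.length;
  const sqDiffs = values.map(x => (x - mean) ** 2);
  return Math.sqrt(sqDiffs.reduce((sum, x) => sum + x, 0) / values.length);
}
```

Outliers barely move a 95th percentile but inflate the standard deviation, which is why both views of the data are worth reporting.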

Observation #3: memory size improves coldstart time linearly

The more memory you allocate to your function, the smaller the coldstart time and the smaller the standard deviation in coldstart time too. This is most obvious with the C# and Java runtimes, as the baseline (128MB) coldstart time for both is very significant.

Again, the data from this experiment clearly supports our hypothesis.

Experiment 2: coldstart time by code size & memory

For this second experiment, I decided to fix the runtime to Nodejs and create variants with different deployment package sizes and memory.

Here are the results.

Observation #1: memory size improves coldstart time linearly

As with the first experiment, memory size improves the coldstart time (and standard deviation) in a roughly linear fashion.

Observation #2: code size improves coldstart time

Interestingly, the size of the deployment package does not increase the coldstart time (bigger package = more time to download & unzip, or so one might assume). Instead it seems to have a positive effect and decreases the overall coldstart time.

I would love to see someone else repeat the experiment with another language runtime to see if the behaviour is consistent.


The things I learnt from these experiments are:

  • functions are no longer recycled after ~5 mins of idleness, which makes coldstarts far less punishing than before
  • memory size improves coldstart time linearly
  • C# and Java runtimes experience ~100 times the coldstart time of Python and suffer from much higher standard deviation too
  • as a result of the above, you should consider running your C#/Java Lambda functions with a higher memory allocation than you would Nodejs/Python functions
  • bigger deployment package size does not increase coldstart time

ps. the source code used for these experiments can be found here, including the scripts used to calculate the stats and generate the box charts.

Like what you’re reading but want more help? I’m happy to offer my services as an independent consultant and help you with your serverless project — architecture reviews, code reviews, building proof-of-concepts, or offering advice on leading practices and tools.

I’m based in London, UK and currently the only UK-based AWS Serverless Hero. I have nearly 10 years of experience with running production workloads in AWS at scale. I operate predominantly in the UK but I’m open to travelling for engagements that are longer than a week. To see how we might be able to work together, tell me more about the problems you are trying to solve here.

I can also run in-house workshops to help you get production-ready with your serverless architecture. You can find out more about the two-day workshop here, which takes you from the basics of AWS Lambda all the way through to common operational patterns for log aggregation, distributed tracing and security best practices.

If you prefer to study at your own pace, then you can also find all the same content of the workshop as a video course I have produced for Manning. We will cover topics including:

  • authentication & authorization with API Gateway & Cognito
  • testing & running functions locally
  • CI/CD
  • log aggregation
  • monitoring best practices
  • distributed tracing with X-Ray
  • tracking correlation IDs
  • performance & cost optimization
  • error handling
  • config management
  • canary deployment
  • VPC
  • security
  • leading practices for Lambda, Kinesis, and API Gateway

You can also get 40% off the face price with the code ytcui. Hurry though, this discount is only available while we’re in Manning’s Early Access Program (MEAP).