AWS Lambda — monolithic functions won’t help you with cold starts

After my post on monolithic functions vs single-purposed functions, a few people asked me about the effect monolithic functions have on cold starts, so I thought I’d share my thoughts here.

The question goes something like this:

Monolithic functions are invoked more frequently, so they are less likely to be in a cold state, while single-purposed functions that are not used frequently may always be in a cold state, don’t you think?

That seems like a fair assumption, but the actual behaviour of cold starts is more nuanced, and the results can differ drastically depending on the traffic pattern. Check out my other post, which goes into this behaviour in more detail.

The effect of consolidation into monolithic functions (on the no. of cold starts experienced) quickly diminishes with load

To simplify things, let’s consider “the number of cold starts you’ll have experienced as you ramp up to X req/s”. Assuming that:

  • the ramp-up was gradual, so there were no massive spikes (which could trigger a lot more cold starts)
  • each request’s duration is short, say, 100ms

At a small scale, say, 1 req/s per endpoint across a total of 10 endpoints (i.e. 1 monolithic function vs 10 single-purposed functions), we’ll have a total of 10 req/s. Given the 100ms execution time, that’s just within what one concurrent execution is able to handle.

To reach 1 req/s per endpoint, you will have experienced:

  • monolithic: 1 cold start
  • single-purposed: 10 cold starts

Now let the load go up to 100 req/s per endpoint, which equates to a total of 1000 req/s. To handle this load you’ll need at least 100 concurrent executions of the monolithic function (at 100ms per request, the throughput per concurrent execution is 10 req/s, hence concurrent executions = 1000 / 10 = 100). To reach this level of concurrency, you will have experienced:

  • monolithic: 100 cold starts

At this point, 100 req/s per endpoint = 10 concurrent executions for each of the single-purposed functions. To reach that level of concurrency, you will also have experienced:

  • single-purposed: 10 concurrent execs * 10 functions = 100 cold starts
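Here’s that arithmetic as a quick sketch (a back-of-the-envelope estimate that assumes steady traffic and uniform request durations):

// concurrent executions ≈ requests/sec × average duration in seconds
const concurrency = (reqPerSec, durationMs) => reqPerSec * (durationMs / 1000);

concurrency(10, 100);   // => 1, the whole API fits in one monolithic execution
concurrency(1000, 100); // => 100, whether as 100 monolithic executions or 10 × 10 single-purposed ones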

So, monolithic functions don’t help you with the no. of cold starts you’ll experience, even at a moderate amount of load.

Also, when the load is low, there are simple things you can do to mitigate cold starts by pre-warming your functions (as discussed in the other post). You can use serverless-plugin-warmup to do that for you, and it comes with the option to run a pre-warmup immediately after a deployment.
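If you use the plugin, remember to short-circuit the warm-up invocations in your handler so they don’t run your business logic. A minimal sketch, assuming the plugin’s marker payload (the source field below matches the plugin’s documented default, but verify it against the version you’re running):

module.exports.handler = async (event) => {
  // bail out early on warm-up invocations from serverless-plugin-warmup
  if (event.source === 'serverless-plugin-warmup') {
    return 'warmed';
  }

  // ... normal request handling goes here ...
};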

However, this practice stops being effective once you have even a moderate amount of concurrency, at which point monolithic functions incur just as many cold starts as single-purposed functions.

Consolidating into monolithic functions can increase initialization time, which increases the duration of each cold start

By packing more “actions” into one function, we also increase the no. of modules that need to be initialized during that function’s cold start, and are therefore highly likely to experience longer cold starts as a result. Basically, anything outside the exported handler function is initialized during the Bootstrap runtime phase of the cold start (see the slide below).

(from Ajay Nair’s talk at re:Invent 2017)
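To make the distinction concrete, here’s a minimal sketch (the DynamoDB client is an illustrative assumption, not from the user-api example): everything at module scope runs once, during the Bootstrap phase of a cold start, while the handler body runs on every invocation.

const AWS = require('aws-sdk'); // loaded and initialized during the cold start
const dynamo = new AWS.DynamoDB.DocumentClient(); // also paid for once, at cold start

module.exports.handler = async (event) => {
  // only this body runs on every invocation, warm or cold
  return dynamo.get({ TableName: 'users', Key: { id: event.id } }).promise();
};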

Imagine that, in the monolithic version of the fictional user-api I used in the previous post to illustrate the point, our handler module would need to require all the dependencies used by all of the endpoints:

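// every endpoint's dependencies sit at module scope, so all of them
// are initialized on every cold start, whichever endpoint was invoked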
const depA = require('lodash');
const depB = require('facebook-node-sdk');
const depC = require('aws-sdk');

Whereas in the single-purposed version of the user-api, only the get-user-by-facebook-id endpoint’s handler function would need to incur the extra overhead of initializing the facebook-node-sdk dependency during its cold start.
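For contrast, here’s a hedged sketch of what that single-purposed handler module might look like (the handler body is illustrative):

const Facebook = require('facebook-node-sdk'); // the only heavy dependency this function initializes

module.exports.handler = async (event) => {
  // only this endpoint pays for loading facebook-node-sdk at cold start;
  // lodash and the other endpoints' dependencies are never loaded here
  // ... look up the user by their Facebook ID ...
};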

You also have to factor in any other modules in the same project, and their dependencies, and any code that runs during those modules’ initialization, and so on.

The wrong place to optimize for cold starts

So, contrary to one’s intuition, monolithic functions don’t offer any benefit for cold starts beyond what basic pre-warming can already achieve, and can quite likely extend the duration of cold starts.

Cold starts affect you wildly differently depending on the language, the memory allocation, and how much initialization you’re doing in your code. I’ll argue that, if cold starts are a concern for you, then you’re far better off switching to another language (e.g. Go, Node.js or Python) and investing effort into optimizing your code so it suffers shorter cold starts.
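One such optimization, sketched below under the assumption that only some code paths need a heavy dependency: defer the require until first use, so the cost moves out of every cold start and onto the first invocation that actually needs it.

let Facebook; // cached across warm invocations of the same container

module.exports.handler = async (event) => {
  // lazily load the heavy SDK instead of requiring it at module scope,
  // keeping it out of the Bootstrap phase of the cold start
  Facebook = Facebook || require('facebook-node-sdk');
  // ...
};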

Also, keep in mind that this is something AWS and other providers are actively working on, and I suspect the situation will be vastly improved by the platform in the future.

All in all, I think changing the deployment unit (one big function vs many small functions) is not the right way to address cold starts.
