AWS X-Ray and Lambda : the good, the bad and the ugly

AWS announced general availability of AWS Lambda support for AWS X-Ray back in May. It’s taken me a while to try it out, and whilst I see plenty of value, I think its current limitations significantly restrict its usefulness in a complex system.

I found the Lambda-specific documentation to be fragmented, and I had to rely on experimentation and piecing together clues from several sources.

I also found recording annotations and metadata didn’t work as advertised in the sample (although admittedly I could be doing something wrong…).

The Sample Application

The sample project I created centres around a Lambda function called service-a, which in turn calls a number of downstream systems:

  • publishing to an SNS topic
  • getting and putting an object in S3
  • getting and putting a row in DynamoDB
  • invoking another Lambda function (service-c) using the Lambda API
  • making an HTTP request to an API Gateway endpoint backed by another Lambda function (one of the service-b, error and timeout functions in the diagram above, which represent the success, error and timeout cases respectively)

You can find all the source code here.

The Good

Once I figured out the magic incantations I was able to get the results I was after. It took more time and energy than it should have, but by and large most features worked as advertised at the first (or second) time of asking.
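For reference, most of those incantations boil down to initialising the X-Ray SDK and wrapping the AWS SDK with it. A rough sketch (not the exact code from the sample project; the client names are mine):

const AWSXRay = require('aws-xray-sdk');
// wrapping the AWS SDK means calls to SNS, S3, DynamoDB and Lambda are recorded as subsegments
const AWS = AWSXRay.captureAWS(require('aws-sdk'));

const sns = new AWS.SNS();
const s3 = new AWS.S3();
const dynamodb = new AWS.DynamoDB();
const lambda = new AWS.Lambda();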

This is a trace of the service-a function, which includes the time it takes for Lambda to initialise the function, and the various downstream systems it talked to, all nested under custom subsegments. It even includes the trace of the service-c function (and the time it spent publishing to SNS), which was invoked using the Lambda API.

The service map for service-a includes service-c as a downstream dependency, as well as service-c’s dependency on SNS.

The Bad

It’s always 200…

When the service-a function is invoked through its API Gateway endpoint and errors, the corresponding trace still reports a 200 response code.

Presumably what X-Ray sees is a 200 response from the Lambda service, whose payload indicates a 502 response to the API Gateway invocation, and so it thinks “hey, it’s a 200!”.

Here, I can see the service-a endpoint returned a 502 in Postman..

..but the trace reported a 200 response code.

Oddly enough, the trace for the error function also reports a 200 even though its own status field indicates it had errored.

This behaviour is both confusing and inconsistent to me; perhaps I have misunderstood how it works. Sadly, X-Ray’s concepts page also does not explain the difference between an Error and a Fault.

Whilst this might seem like a small nuisance now, the inability to quickly identify error traces will hurt you most when you need to diagnose problems in production, possibly when you’re under the most time pressure.

Traces don’t span over API Gateway

When the service-a function makes an outgoing HTTP request to an API Gateway endpoint the trace stops at the API Gateway endpoint and doesn’t extend to the Lambda functions that are triggered by API Gateway.

This behaviour was consistent with all 3 endpoints I tested—service-b, error and timeout.

For this test, I have followed the X-Ray documentation and used the X-Ray SDK to wrap the Nodejs https module when making the HTTP request.
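The wrapping itself is a one-liner; something along these lines (the endpoint below is a placeholder):

const AWSXRay = require('aws-xray-sdk');
const https = AWSXRay.captureHTTPs(require('https'));

// requests made with the wrapped module get their own subsegment, and carry the
// trace ID downstream in the X-Amzn-Trace-Id header
https.get({
  hostname: 'xxx.execute-api.us-east-1.amazonaws.com',  // placeholder API Gateway endpoint
  path: '/dev/service-b'
}, res => console.log(`status code: ${res.statusCode}`));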

I can see the trace IDs are correctly passed along in the outgoing HTTP request and received by the handling Lambda function.

This is the service map I expected to see in this case—where service-a’s trace follows through the HTTP request to API Gateway and includes the invocation of the timeout function.

ps. this is not an actual screenshot; it’s an image I composed to show what I expected to see!

Instead, the actual service map stops at the API Gateway.

However, when invoking another Lambda function directly (using the Lambda API and wrapped AWS SDK) the trace worked as expected.
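For comparison, the direct invocation went through the wrapped AWS SDK, along these lines (the payload is made up for illustration):

// AWS here is the SDK wrapped with AWSXRay.captureAWS, as in the earlier sketch
const lambda = new AWS.Lambda();

lambda.invoke({
  FunctionName: 'service-c',
  InvocationType: 'RequestResponse',
  Payload: JSON.stringify({ message: 'hello service-c' })  // illustrative payload
}).promise()
  .then(resp => console.log(resp.StatusCode));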

Perhaps the limitation lies with API Gateway?

The Ugly

No sampling

According to the Lambda’s documentation on X-Ray, requests should be sampled at 1 request per minute.

However, that wasn’t the case in my experiments. EVERY request was sampled, as you can see from the Age of the traces in the screenshot below.

This behaviour was consistent when invoking Lambda via API Gateway as well as via the Lambda management console.

Whilst the X-Ray service is not expensive per se—$5.00 per million traces—it’s nonetheless a cost that can easily spring up on you if you are unwittingly tracing every request through your system. As an example, I worked on a moderately successful social game at Gamesys with ~1M DAU. At roughly 250M user requests per day, X-Ray would have cost $5 * 250 * 30 days = $37,500 a month, which was more than our entire AWS bill at the time!
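To spell the arithmetic out (a quick sanity check, assuming every request is traced):

const requestsPerDay = 250 * 1000000;  // ~250M user requests per day
const pricePerMillionTraces = 5;       // $5.00 per million traces
const monthlyCost = (requestsPerDay / 1000000) * pricePerMillionTraces * 30;
console.log(monthlyCost);              // 37500, i.e. $37,500 a month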

Annotations and Metadata only work on subsegments

The one thing that just refused to work (even though I had followed the examples) was adding annotations and metadata to the root segment:

const AWSXRay = require('aws-xray-sdk');

module.exports.handler = (event, context, callback) => {
  ...
  let segment = AWSXRay.getSegment();
  let n = Math.random() * 3;
  segment.addMetadata('random', `${n}`);      // this doesn't work
  segment.addAnnotation('path', event.path);  // this doesn't work
  ...
}

Interestingly, adding annotations and metadata to subsegments works just fine.
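For instance, this kind of pattern, where the annotations and metadata go on a custom subsegment, worked for me (the subsegment name, SNS call and variables below are placeholders of mine, not the actual code from the sample project):

AWSXRay.captureAsyncFunc('publish-to-sns', subsegment => {
  subsegment.addAnnotation('path', event.path);  // works on a subsegment
  subsegment.addMetadata('random', `${n}`);      // so does this

  sns.publish({ TopicArn: topicArn, Message: 'test' }).promise()
    .then(() => subsegment.close())
    .catch(err => subsegment.close(err));
});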



Looking at the logs, I found something interesting: the segment ID for the root segment doesn’t match the segment ID in the X-Ray trace.

For instance, I fetch the root segment for my function in the handler and log it to CloudWatch Logs.

const AWSXRay = require('aws-xray-sdk');
module.exports.handler = (event, context, callback) => {
  ...
  // this should be the root segment for my function
  let segment = AWSXRay.getSegment();
  console.log(JSON.stringify(segment));
  ...
}

In the logs I can see the segment ID is 05b2b9ac6c9e5682.

But in the X-Ray trace, the segment ID for the root segment is 2b7d5b4a2a2d96e9.


Furthermore, the trace ID is also different:

  • in the logs it’s 1-59504311-d765e7accb8558871fa89d6d
  • in the X-Ray console it’s 1-59504312-5ef2a3eda0c1b2c4d64dda00

This was very odd, so I decided to track the trace ID in the logs vs in the X-Ray console, starting with a coldstart.

Bingo! Looks like it’s a bug in the X-Ray SDK for Nodejs where AWSXRay.getSegment() returns the root segment from the previous invocation…

Conclusion

So there you have it, my weekend escapade with AWS X-Ray and Lambda :-)

Overall I’m impressed with what I saw, and feel X-Ray would have added a lot of value to the serverless architecture I was building at Yubl. However, the inability to span traces over API Gateway endpoints makes it a far less useful addition to our ecosystem.

Furthermore, the X-Ray service is focused on execution time and helping you identify performance bottlenecks. However, there’s another important aspect to distributed tracing—helping you debug your system by ensuring a set of correlation IDs are captured in all log messages. X-Ray does not concern itself with this; although you can probably use the trace IDs X-Ray provides, it’s still up to you to capture them in all log messages and to ship all your logs to one easily searchable place. We invested some effort into enabling distributed tracing in our serverless architecture at Yubl, which you can read about in detail in this post.

Are you using X-Ray in production? I’d love to hear your thoughts and feedback on the service, and any pitfalls to look out for.

Finding coldstarts : how long does AWS Lambda keep your idle functions around?

In the last post I compared the coldstart times of Lambda functions with different languages, memory sizes and code sizes. One of the things I learnt was that idle functions are no longer terminated after 5 minutes of inactivity.

AWS Lambda – compare coldstart time with different languages, memory and code sizes

It is fantastic news and something that Amazon has quietly changed behind the scenes. However, it led me to ask some follow-up questions:

  1. what’s the new idle time that would trigger a coldstart?
  2. does it differ by memory allocation?
  3. are functions still recycled 4 hours from the creation of the host VM?

To answer the first 2 questions, I devised an experiment.

First, here are my hypotheses going into the experiment.


WARNING: this experiment is intended to help us glimpse into the implementation details of the AWS Lambda platform. It’s fun and satisfies my curiosity, but you shouldn’t build your application with the results in mind, as AWS can change these implementation details without notice!


Hypotheses

Hypothesis 1 : there is an upper bound to how long Lambda allows your function to stay idle before reclaiming the associated resources

This should be a given. Idle functions occupy resources that could be used to help other AWS customers scale up to meet their needs (not to mention that the customer isn’t paying for their idle functions!); it simply wouldn’t make any sense for AWS to keep idle functions around forever.

Hypothesis 2 : the idle timeout is not a constant

From an implementor’s point of view, it might be simpler to keep this timeout a constant, i.e. functions are always terminated after X mins of inactivity. However, I’m sure AWS will vary this timeout to optimise for higher utilisation and to keep utilisation levels more evenly distributed across its fleet of physical servers.

For example, if there’s an elevated level of resource contention in a region, why not terminate idle functions earlier to free up space?

Hypothesis 3 : the upper bound for inactivity varies by memory allocation

An idle function with 1536 MB of memory allocated is wasting a lot more resources than an idle function with 128 MB of memory, so it makes sense for AWS to terminate idle functions with higher memory allocations earlier.

Experiment : find the upper bound for inactivity

To find the upper bound for inactivity, we need a Lambda function to act as the system-under-test and report when it has experienced a coldstart. We also need a mechanism to progressively increase the interval between invocations until we arrive at an interval where every invocation is guaranteed to be a coldstart: the upper bound. We will treat an interval as the upper bound once we see 10 consecutive coldstarts at that interval.
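A minimal sketch of what the system-under-test function could look like, using a module-scoped flag to detect coldstarts (the response shape is my own assumption, not the actual code):

let coldstart = true;

module.exports.handler = (event, context, callback) => {
  const isColdstart = coldstart;
  coldstart = false;  // any further invocation on this container is a warm start
  callback(null, { coldstart: isColdstart });
};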

To answer hypothesis 3 we will also replicate the system-under-test function with different memory allocations.

This experiment is a time-consuming process that requires discipline and a degree of precision in timing. Suffice to say, I won’t be doing this by hand!

My first approach was to use a CloudWatch Schedule to trigger the system-under-test function, and let the function dynamically adjust the schedule based on whether it had experienced a coldstart. It failed miserably: whenever the system-under-test function updated the schedule, the schedule would fire shortly after rather than wait for the newly specified interval…

Instead, I turned to Step Functions for help.

AWS Step Functions allows you to create a state machine where you can invoke Lambda functions, wait for a specified amount of time, execute parallel tasks, retry, catch errors, etc.

A Wait state allows you to drive the no. of seconds to wait using data (see the SecondsPath param in the documentation), which means I can start the state machine with an input like this:

{
    "target": "when-will-i-coldstart-dev-system-under-test-128",
    "interval": 600,
    "coldstarts": 0
}

The input is passed to another find-idle-timeout function as the invocation event. The function will invoke the target (one of the variants of the system-under-test function, each with a different memory allocation) and increase the interval if the system-under-test function doesn’t report a coldstart. The find-idle-timeout function then returns a new piece of data for the Step Functions execution:

{
    "target": "when-will-i-coldstart-dev-system-under-test-128",
    "interval": 660,
    "coldstarts": 0
}
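In code, the gist of the find-idle-timeout function might look something like this (a sketch rather than the actual implementation; it assumes the response shape from the system-under-test sketch earlier, and the 60s increment is inferred from the example inputs above):

const AWS = require('aws-sdk');
const lambda = new AWS.Lambda();

module.exports.handler = (event, context, callback) => {
  const { target, interval, coldstarts } = event;
  lambda.invoke({ FunctionName: target, InvocationType: 'RequestResponse' }).promise()
    .then(resp => {
      const { coldstart } = JSON.parse(resp.Payload);
      const next = coldstart
        ? { target, interval, coldstarts: coldstarts + 1 }     // keep the interval, count the coldstart
        : { target, interval: interval + 60, coldstarts: 0 };  // warm start: wait a minute longer next time
      callback(null, next);
    })
    .catch(callback);
};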

Now, the Wait state will use the interval value and wait 660 seconds before switching back to the FindIdleTimeout state where it’ll invoke our system-under-test function again (with the previous output as input).

"Wait": {
    "Type": "Wait",
    "SecondsPath": "$.interval",
    "Next": "FindIdleTimeout"
},
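For completeness, the rest of the state machine could be wired up along these lines (the state names other than Wait and FindIdleTimeout, and the function ARN, are placeholders of mine rather than the actual project’s):

"FindIdleTimeout": {
    "Type": "Task",
    "Resource": "arn:aws:lambda:us-east-1:123456789012:function:find-idle-timeout",
    "Next": "CheckForUpperBound"
},
"CheckForUpperBound": {
    "Type": "Choice",
    "Choices": [
        {
            "Variable": "$.coldstarts",
            "NumericGreaterThanEquals": 10,
            "Next": "Done"
        }
    ],
    "Default": "Wait"
},
"Done": {
    "Type": "Succeed"
}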

With this setup I’m able to kick off multiple executions, one for each memory setting.

Along the way I have plenty of visibility into what’s happening, all from the comfort of the Step Functions management console.

Here are the results of the experiment:

From the data, it’s clear that AWS Lambda shuts down idle functions around the hour mark. It’s also interesting to note that the function with 1536 MB memory is terminated over 10 mins earlier, which supports hypothesis 3.

I also collected data on all the idle intervals where we saw a coldstart and categorised them into 5 minute brackets.

Even though the data is seriously lacking, from what little I managed to collect you can still spot some high-level trends:

  • over 60% of coldstarts (prior to hitting the upper bound) happened after 45 mins of inactivity
  • the function with 1536 MB memory saw a significantly lower no. of coldstarts prior to hitting the upper bound (worth noting that it also has a lower upper bound, 48 mins, than the other functions in this test)

The data supports hypothesis 2 though there’s no way for us to figure out the reason behind these coldstarts or if there’s significance to the 45 mins barrier.

Conclusions

To summarise the findings from our little experiment in one line:

AWS Lambda will generally terminate functions after 45–60 mins of inactivity, although idle functions can sometimes be terminated a lot earlier to free up resources needed by other customers.

I hope you find this experiment interesting, but please do not build applications on the assumptions that:

    a) these results are valid, and

    b) they will remain valid for the foreseeable future

I cannot stress enough that this experiment is meant for fun and to satisfy a curious mind, and nothing more!

The results from this experiment also deserve further investigation. For instance, the 1536 MB function exhibited very different behaviour to other functions, but is it a special case or would functions with more than 1024 MB of memory all share these traits? I’d love to find out, maybe I’ll write a follow up to this experiment in the future.

Watch this space ;-)

AWS Lambda – compare coldstart time with different languages, memory and code sizes

A while back we looked at the performance difference between the language runtimes AWS Lambda supports natively.

AWS Lambda – comparing platform performances

We intentionally omitted coldstart time from that experiment as we were interested in performance differences when a function is “warm”.

However, coldstart is still an important performance consideration, so let’s take a closer look with some experiments designed to measure only coldstart times.

Methodology

From my personal experience running Lambda functions in production, coldstarts happen when a function has been idle for ~5 mins. Additionally, a function will be recycled 4 hours after it starts, which was also backed up by analysis from the folks at IO Pipe.

However, the 5 mins rule seems to have changed. After a few tests, I was not able to see a coldstart even after a function had been idle for more than 30 mins.

I needed a more reliable way to trigger coldstart.

After a few failed attempts, I settled on a surefire way to cause coldstarts: deploying a new version of my functions before invoking them.

I have a total of 45 functions for both experiments. Using a simple script (see below) I’m able to:

  1. deploy all 45 functions using the Serverless framework
  2. after each round of deployments, invoke the functions programmatically

The deploy + invoke loop takes around 3 mins. I ran the experiment for over 24 hours to collect a meaningful number of data points. Thankfully the Serverless framework made it easy to create variants of the same function with different memory sizes and to deploy them quickly.
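The original script isn’t shown here, but the gist of the deploy + invoke loop would be something along these lines (the function list and service layout are placeholders, not the actual setup):

const { execSync } = require('child_process');
const AWS = require('aws-sdk');
const lambda = new AWS.Lambda();

const functions = [ /* names of the 45 functions */ ];

const deployAndInvoke = () => {
  // redeploying forces new containers, so the next invocation of each function is a coldstart
  execSync('sls deploy', { stdio: 'inherit' });
  const invocations = functions.map(fn =>
    lambda.invoke({ FunctionName: fn, InvocationType: 'RequestResponse' }).promise());
  return Promise.all(invocations);
};

// keep looping; each deploy + invoke round takes around 3 mins
const loop = () => deployAndInvoke().then(loop);
loop();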

Hypothesis

Here were my hypotheses before the experiments, based on the knowledge that the amount of CPU resource you get is proportional to the amount of memory you allocate to an AWS Lambda function.

  1. C# and Java have higher coldstart time
  2. memory size affects coldstart time linearly
  3. code size affects coldstart time linearly

Let’s see if the experiments support these hypothesis.

Experiment 1 : coldstart time by runtime & memory

For this experiment, I created 20 functions with 5 variants (different memory sizes) for each language runtime – C#, Java, Python and Nodejs.

After running the experiment for a little over 24 hours, I collected a bunch of metric data (which you can download yourself here).

Here is how they look.

Observation #1 : C# and Java have much higher coldstart time

The most obvious trend is that statically typed languages (C# and Java) have over 100 times higher coldstart time. This clearly supports our hypothesis, although to a much greater extent than I anticipated.

Observation #2 : Python has ridiculously low coldstart time

I’m pleasantly surprised by how little coldstart the Python runtime experiences. OK, there were some outlier data points that heavily influenced some of the 99 percentile and standard deviations, but you can’t argue with a 0.41ms coldstart time at the 95 percentile of a 128MB function.

Observation #3 : memory size improves coldstart time linearly

The more memory you allocate to your function, the smaller the coldstart time and the less standard deviation in coldstart time too. This is most obvious with the C# and Java runtimes, as the baseline (128MB) coldstart time for both is very significant.

Again, the data from this experiment clearly supports our hypothesis.

Experiment 2: coldstart time by code size & memory

For this second experiment, I decided to fix the runtime to Nodejs and create variants with different deployment package sizes and memory allocations.

Here are the results.

Observation #1 : memory size improves coldstart time linearly

As with the first experiment, the memory size improves the coldstart time (and standard deviation) in a roughly linear fashion.

Observation #2 : code size improves coldstart time

Interestingly the size of the deployment package does not increase the coldstart time (bigger package = more time to download & unzip, or so one might assume). Instead it seems to have a positive effect and decreases the overall coldstart time.

I would love to see someone else repeat the experiment with another language runtime to see if the behaviour is consistent.

Conclusions

The things I learnt from these experiments are:

  • functions are no longer recycled after ~5 mins of idleness, which makes coldstarts far less punishing than before
  • memory size improves coldstart time linearly
  • C# and Java runtimes experience ~100 times the coldstart time of Python and suffer from much higher standard deviation too
  • as a result of the above you should consider running your C#/Java Lambda functions with a higher memory allocation than you would Nodejs/Python functions
  • bigger deployment package size does not increase coldstart time

ps. the source code used for these experiments can be found here, including the scripts used to calculate the stats and generate the plot.ly box charts.

Beware of dilution of DynamoDB throughput due to excessive scaling

TL;DR – The no. of partitions in a DynamoDB table goes up in response to increased load or storage size, but it never comes back down, ever.

DynamoDB is pretty great, but as I have seen this particular problem at 3 different companies – Gamesys, JUST EAT, and now Space Ape Games – I think it’s a behaviour that more folks should be aware of.

Credit to AWS, they have regularly talked about the formula for working out the no. of partitions at DynamoDB Deep Dive sessions.

However, they often forget to mention that DynamoDB will not decrease the no. of partitions when you reduce your throughput units. It’s a crucial detail that is badly under-represented in a lengthy Best Practices guide.

Consider the following scenario:

  • you dial up the throughput for a table because there’s a sudden spike in traffic or you need the extra throughput to run an expensive scan
  • the extra throughput causes DynamoDB to increase the no. of partitions
  • you dial down the throughput to previous levels, but now you notice that some requests are throttled even when you have not exceeded the provisioned throughput on the table

This happens because there are fewer read and write throughput units per partition than before, due to the increased no. of partitions. It translates to a higher likelihood of exceeding read/write throughput on a per-partition basis (even if you’re still under the throughput limits on the table overall).
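To put rough numbers on it, here’s a back-of-the-envelope illustration based on the partition formula AWS has presented at those Deep Dive sessions, as I recall it: partitions (by throughput) = ceil(RCU / 3000 + WCU / 1000). The figures below are made up, and I’m ignoring the storage-based part of the formula:

// no. of partitions needed for the provisioned throughput (ignoring storage size)
const partitionsByThroughput = (rcu, wcu) => Math.ceil(rcu / 3000 + wcu / 1000);

partitionsByThroughput(0, 1000);   // 1 partition: all 1000 WCU land on it
partitionsByThroughput(0, 10000);  // dial up for a spike or a big scan: 10 partitions, 1000 WCU each

// dial back down to 1000 WCU: the table keeps its 10 partitions,
// so each partition only gets ~100 WCU and hot keys get throttled much sooner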

When this dilution of throughput happens you can:

  1. migrate to a new table
  2. specify higher table-level throughput to boost the throughput units per partition back to previous levels

Given the difficulty of table migrations, most folks would opt for option 2, which is how JUST EAT ended up with a table provisioned with 3000+ write throughput units despite consuming closer to 200 write units/s.

In conclusion, you should think very carefully before scaling up a DynamoDB table drastically in response to temporary needs; it can have long-lasting cost implications.

Serverless 1.X – enable API Gateway caching on request parameters

Having previously blogged about the untrodden path to enable caching on API Gateway request parameters in the Serverless framework 0.5.X, it’s a little disappointing that it’s still not officially fixed in the 1.X versions…

The Problem

The problem is two-fold:

  1. there’s currently no way to specify caching should be enabled for path & query string parameters
  2. the CloudFormation template Serverless 1.X generates for API Gateway is missing a few optional fields, these missing fields stop you from manually enable caching in the API Gateway management console too

After you deploy your Lambda function with its associated API, if you go to the management console and try to enable caching on path or query string parameters, you will get an error saying “Invalid cache key parameter specified”.

The Workaround

A friend pointed me to a neat trick to modify the CloudFormation template that Serverless 1.X auto-generates for you.

After the project is deployed, you can go to CloudFormation and view the template that Serverless has generated. These templates are pretty big (and poorly formatted), so I find it easier to open them up in the Designer view and use that view to navigate to the endpoint I’m looking for.

Once you find the resource template for the endpoint, write down its name. Now go back to the serverless.yml file in your project, and add the resource name to the resources section at the bottom. You only need to include fields that you want to update or add to the template.

The CloudFormation syntax for an API Gateway method looks like this:

We also need to fill in some blanks for the Integration section:

For more details on the CloudFormation syntax, see here and here.

After some trial-and-error, the minimum set of fields I had to add are:
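(The original snippet isn’t reproduced here; based on the CloudFormation documentation for AWS::ApiGateway::Method, the fields involved are along these lines. The logical resource name and the id path parameter are placeholders, so treat this as a sketch rather than the exact set of fields.)

resources:
  Resources:
    ApiGatewayMethodMyEndpointGet:         # placeholder: the logical name you noted down earlier
      Type: AWS::ApiGateway::Method
      Properties:
        RequestParameters:
          method.request.path.id: true     # declare the path parameter on the method
        Integration:
          CacheKeyParameters:
            - method.request.path.id       # make the parameter part of the cache key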

Redeploy with Serverless and the path parameter is enabled for caching:

Wrap Up

I hope you have found this post useful, though I’m surprised by the lack of information out there during my research and the lack of official support from the Serverless framework.

If you know of a better way to do this, please let me know in the comments.
