AWS X-Ray and Lambda: the good, the bad and the ugly

AWS announced general availability of AWS Lambda support for AWS X-Ray back in May. It’s taken me a while to try it out, and whilst I see plenty of value, I think its current limitations significantly restrict its usefulness in a complex system.

I found the Lambda-specific documentation to be fragmented, and I had to rely on experimentation and piecing together clues from several sources.

I also found recording annotations and metadata didn’t work as advertised in the sample (although admittedly I could be doing something wrong…).

The Sample Application

The sample project I created centres around a Lambda function called service-a, which in turn calls a number of downstream systems:

  • publishing to a SNS topic
  • GET’ng and PUT’ng an object in S3
  • GET’ng and PUT’ng a row in DynamoDB
  • invoking another Lambda function (service-c) using the Lambda API
  • making an HTTP request to an API Gateway endpoint backed by another Lambda function (one of the service-b, error and timeout functions in the diagram above, which represent the success, error and timeout cases respectively)

You can find all the source code here.

The Good

Once I figured out the magic incantations I was able to get the results I was after. It took more time and energy than it should have, but by and large most features worked as advertised at the first (or second) time of asking.

This is a trace of the service-a function, which includes the time it takes for Lambda to initialise the function, and the various downstream systems it talked to, all nested under custom subsegments. It even includes the trace of the service-c function (and the time it spent publishing to SNS), which was invoked using the Lambda API.
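Roughly, the instrumentation boils down to wrapping the AWS SDK and grouping the downstream calls under named subsegments. Here’s a minimal sketch (not the exact code from the sample project; the handler shape, topic ARN env var and subsegment name are made up):

const AWSXRay = require('aws-xray-sdk');
// wrapping the AWS SDK makes calls to SNS, S3, DynamoDB and Lambda show up
// as subsegments in the trace
const AWS = AWSXRay.captureAWS(require('aws-sdk'));
const sns = new AWS.SNS();

module.exports.handler = (event, context, callback) => {
  // group related work under a named custom subsegment
  AWSXRay.captureAsyncFunc('publish-to-sns', subsegment => {
    sns.publish({ TopicArn: process.env.TOPIC_ARN, Message: 'hello' }, err => {
      subsegment.close(err);
      callback(err, { statusCode: err ? 500 : 200 });
    });
  });
};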

The service map for service-a includes service-c as a downstream dependency, as well as service-c’s dependency on SNS.

The Bad

It’s always 200…

When the service-a function is invoked through its API Gateway endpoint and errors, the corresponding trace still reports a 200 response code.

Presumably what X-Ray sees is a 200 response from the Lambda service whose payload indicates a 502 response to the API Gateway invocation and so it thought “hey, it’s a 200!”.

Here, I can see the service-a endpoint returned a 502 in Postman…

…but the trace reported a 200 response code.

Oddly enough, the trace for the error function also reports a 200 even though its own status field indicates it had errored.

This behaviour is both confusing and inconsistent to me; perhaps I have misunderstood how it works. Sadly, X-Ray’s concepts page does not explain the difference between an Error and a Fault either.

Whilst this might seem like a small nuisance now, the inability to quickly identify error traces will hurt you most when you need to diagnose problems in production, possibly when you’re under the most time pressure.

Traces don’t span over API Gateway

When the service-a function makes an outgoing HTTP request to an API Gateway endpoint the trace stops at the API Gateway endpoint and doesn’t extend to the Lambda functions that are triggered by API Gateway.

This behaviour was consistent with all 3 endpoints I tested—service-b, error and timeout.

For this test, I followed the X-Ray documentation and used the X-Ray SDK to wrap the Node.js https module when making the HTTP request.
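The wrapping itself is essentially a one-liner; this is roughly what it looks like:

const AWSXRay = require('aws-xray-sdk');
// wrap the https module so outgoing requests are traced and the tracing
// header is added to them automatically
const https = AWSXRay.captureHTTPs(require('https'));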

I can see the trace IDs are correctly passed along in the outgoing HTTP request and received by the handling Lambda function.
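For reference, the trace context is propagated in the X-Amzn-Trace-Id HTTP header, which looks along these lines (the IDs below are illustrative):

X-Amzn-Trace-Id: Root=1-59504311-d765e7accb8558871fa89d6d;Parent=53995c3f42cd8ad8;Sampled=1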

This is the service map I expected to see in this case—where service-a’s trace follows through the HTTP request to API Gateway and includes the invocation of the timeout function.

ps. this is not an actual screenshot; it’s an image I composed to show what I expected to see!

Instead, the actual service map stops at the API Gateway.

However, when invoking another Lambda function directly (using the Lambda API and wrapped AWS SDK) the trace worked as expected.

Perhaps the limitation lies with API Gateway?

The Ugly

No sampling

According to Lambda’s documentation on X-Ray, requests should be sampled at 1 request per minute.

However, that wasn’t the case in my experiments. EVERY request was sampled, as you can see from the Age of the traces in the screenshot below.

This behaviour was consistent when invoking Lambda via API Gateway as well as via the Lambda management console.

Whilst the X-Ray service is not expensive per se ($5.00 per million traces), it’s nonetheless a cost that can easily sneak up on you if you are unwittingly tracing every request through your system. As an example, I worked on a moderately successful social game at Gamesys with ~1M DAU. At roughly 250M user requests per day, X-Ray would have cost $5 * 250 * 30 days = $37,500 a month, which was more than our entire AWS bill at the time!

Annotations and Metadata only work on subsegments

The one thing that just refused to work (even though I followed the examples) was adding annotations and metadata to the root segment:

const AWSXRay = require('aws-xray-sdk');

module.exports.handler = (event, context, callback) => {
  let segment = AWSXRay.getSegment();
  let n = Math.random() * 3;
  segment.addMetadata('random', `${n}`);      // this doesn't work
  segment.addAnnotation('path', event.path);  // this doesn't work
  // ... rest of the handler ...
};

Interestingly, adding annotations and metadata to subsegments works just fine.
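For example, this pattern (a minimal sketch, inside the same handler as above) recorded both without any problem:

AWSXRay.captureAsyncFunc('do-work', subsegment => {
  subsegment.addAnnotation('path', event.path);  // this works
  subsegment.addMetadata('random', `${n}`);      // this works
  // ... do the actual work here ...
  subsegment.close();
});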

Looking at the logs, I found something interesting: the segment ID for the root segment doesn’t match the segment ID in the X-Ray trace.

For instance, I fetch the root segment for my function in the handler and log it to CloudWatch Logs.

const AWSXRay = require('aws-xray-sdk');

module.exports.handler = (event, context, callback) => {
  // this should be the root segment for my function
  let segment = AWSXRay.getSegment();
  // log the segment ID and trace ID so we can compare them with the X-Ray console
  console.log(segment.id, segment.trace_id);
  // ... rest of the handler ...
};

In the logs I can see the segment ID is 05b2b9ac6c9e5682.

But in the X-Ray trace, the segment ID for the root segment is 2b7d5b4a2a2d96e9.

Furthermore, the trace ID is also different:

  • in the logs it’s 1-59504311-d765e7accb8558871fa89d6d
  • in the X-Ray console it’s 1-59504312-5ef2a3eda0c1b2c4d64dda00

This was very odd, so I decided to track the trace ID in the logs vs in the X-Ray console, starting with a coldstart.

Bingo! Looks like it’s a bug in the X-Ray SDK for Node.js where AWSXRay.getSegment() returns the root segment from the previous invocation…


So there you have it, my weekend escapade with AWS X-Ray and Lambda :-)

Overall I’m impressed with what I saw, and feel X-Ray would have added a lot of value to the serverless architecture I was building at Yubl. However, the inability to span traces over API Gateway endpoints would have made it a far less useful addition to our ecosystem.

Furthermore, the X-Ray service is focused on execution time and on helping you identify performance bottlenecks. However, there’s another important aspect to distributed tracing: helping you debug your system by ensuring a set of correlation IDs is captured in all log messages. X-Ray does not concern itself with this; although you can probably use the trace IDs X-Ray gives you, it’s still up to you to capture them in all log messages and to ship all your logs to one easily searchable place. We invested some effort into enabling distributed tracing in our serverless architecture at Yubl, which you can read about in detail in this post.

Are you using X-Ray in production? I’d love to hear your thoughts and feedback on the service, and any pitfalls to look out for.

Yubl’s road to Serverless architecture – Part 4 – building a scalable push notification system

The Road So Far

part 1 : overview

part 2 : testing and continuous delivery strategies

part 3 : ops


Just before Yubl’s untimely demise we did an interesting piece of work to redesign the system for sending targeted push notifications to our users to improve retention.

The old system relied on MixPanel for both selecting users and sending out the push notifications. Whilst MixPanel was great for getting us basic analytics quickly, we soon found our use cases outgrew MixPanel. The most pressing limitation was that we were not able to query users based on their social graph to create targeted push notifications – eg. notify an influencer’s followers when he/she publishes a new post or runs a new social media campaign.

Since all of our analytics events are streamed to Google BigQuery (using a combination of Kinesis Firehose, S3 and Lambda) we have all the data we need to support the complex use cases the product team has.

What we needed was a push notification system that could integrate with BigQuery results and was capable of sending millions of push notifications in a batch.

Design Goals

At a high level, we needed to support 2 types of notifications.

Ad-hoc notifications are driven by the marketing team, working closely with influencers and the BI team to match users with influencers or contents that they might be interested in. Example notifications include:

  • users who follow Accessorize and other fashion brands might be interested to know when another notable fashion brand joins the platform
  • users who follow an influencer might be interested to know when the influencer publishes a new post or is running a social media campaign (usually with give-away prizes, etc.)
  • users who have shared/liked music-related content might be interested to know that Tinie Tempah has joined the platform

Scheduled notifications are driven by the product team; these notifications are designed to nudge users to finish the sign up process or to come back to the platform after they have lapsed. Example notifications include:

  • day-1 unfinished sign up : notify users who didn’t finish the sign up process to come back to complete the process
  • day-2 engagement : notify users to come back and follow more people or invite friends on day 2
  • day-21 inactive : notify users who have not logged into the app for 21 days to come back and check out what’s new

A/B testing

For the scheduled notifications, we wanted to test out different messages/layouts to optimise their effectiveness over time. To do that, we wanted to support A/B testing as part of the new system (which MixPanel already supports).

We should be able to create multiple variants (each with a percentage), along with a control group who will not receive any push notifications.
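As an illustration only (this is not the actual Yubl implementation), deterministic variant assignment can be as simple as hashing the user ID into a 0-99 bucket and walking the cumulative variant percentages:

const crypto = require('crypto');

// hypothetical helper: deterministically assign a user to a variant, eg.
// [{ name: 'control', percentage: 20 }, { name: 'A', percentage: 40 }, { name: 'B', percentage: 40 }]
const assignVariant = (userId, variants) => {
  const bucket = crypto.createHash('md5').update(userId).digest().readUInt32BE(0) % 100;
  let cumulative = 0;
  for (const variant of variants) {
    cumulative += variant.percentage;
    if (bucket < cumulative) return variant.name;
  }
  return 'control'; // fall back to control if the percentages don't add up to 100
};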

Oversight vs Frictionless

For the ad-hoc notifications, we don’t want to get in the way of the marketing team doing their job, so the process for creating ad-hoc push notifications should be as frictionless as possible. However, we also don’t want the marketing team to operate completely without oversight and run the risk of long term damage by spamming users with unwanted push notifications (which might cause users to disable notifications or even rage quit the app).

The compromise we reached was an automated approval process whereby:

  1. the marketing team will work with BI on a query to identify users (eg. followers of Tinie Tempah)
  2. fill in a request form, which informs designated approvers via email
  3. approvers can send themselves a test push notification to see how it will be formatted on both Android and iOS
  4. approvers can approve or reject the request
  5. once approved, the request will be executed


We decided to use S3 as the source for a send-batch-notifications function because it allows us to pass a large list of users (remember, the goal is to support sending push notifications to millions of users in a batch) without having to worry about pagination or limits on payload size.

The function will work with any JSON file in the right format (a hypothetical example of the shape follows the list below), and that JSON file can be generated in many ways:

  • by the cron jobs that generate scheduled notifications
  • by the approval system after an ad-hoc push notification is approved
  • by the approval system to send a test push notification to the approvers (to visually inspect how the message will appear on both Android and iOS devices)
  • by members of the engineering team when manual interventions are required
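For illustration, such a JSON file might look something like this (the field names here are hypothetical; the actual schema isn’t shown in this post):

{
  "message": {
    "title": "Tinie Tempah is now on Yubl!",
    "body": "Follow him to see his latest posts"
  },
  "users": [
    { "userId": "user-1", "platform": "ios", "deviceToken": "..." },
    { "userId": "user-2", "platform": "android", "deviceToken": "..." }
  ]
}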

We also considered moving to SNS but decided against it in the end because it doesn’t provide a useful enough abstraction to justify the effort to migrate (it involves client work) and the additional cost for sending push notifications. Instead, we used node-gcm and apn to communicate with GCM and APN directly.

Recursive Functions FTW

Lambda has a hard limit of 5 minutes of execution time (it might be raised in the near future), and that might not be enough time to send millions of push notifications.

Our approach to long-running tasks like this is to run the Lambda function as a recursive function.

A naive recursive function would process the payload in fixed size batches and recurse at the end of each batch whilst passing along a token/position to allow the next invocation to continue from where it left off. In this particular case, we have additional considerations because the total number of work items can be very large:

  • minimising the no. of recursions required, which equates to the no. of Invoke requests to Lambda and carries a cost implication at scale
  • caching the content of the JSON file to improve performance (by avoiding loading and parsing a large JSON file more than once) and reduce S3 cost

To minimise the no. of recursions, our function would:

  1. process the list of users in small batches of 500
  2. at the end of each batch, call context.getRemainingTimeInMillis() to check how much time is left in this invocation
  3. if there is more than 1 min left in the invocation then process another batch; otherwise recurse

When caching the content of the JSON file from S3, we also need to compare the ETag to ensure that the content of the file hasn’t changed.
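Putting those pieces together, a simplified sketch of such a recursive function might look like the following (the function shape, field names and stubbed-out send logic are illustrative, not the actual production code):

const AWS = require('aws-sdk');
const s3 = new AWS.S3();
const lambda = new AWS.Lambda();

const BATCH_SIZE = 500;
const MIN_TIME_LEFT = 60 * 1000; // recurse when less than 1 min is left

let cache; // survives across invocations on a warm container: { etag, users }

const loadUsers = async (bucket, key) => {
  const { ETag } = await s3.headObject({ Bucket: bucket, Key: key }).promise();
  if (!cache || cache.etag !== ETag) {   // only re-download if the file has changed
    const obj = await s3.getObject({ Bucket: bucket, Key: key }).promise();
    cache = { etag: ETag, users: JSON.parse(obj.Body.toString()) };
  }
  return cache.users;
};

const sendBatch = async (users) => {
  // push to GCM/APN via node-gcm and apn, omitted for brevity
};

module.exports.handler = async (event, context) => {
  const { bucket, key, position = 0 } = event;
  const users = await loadUsers(bucket, key);

  let pos = position;
  while (pos < users.length) {
    await sendBatch(users.slice(pos, pos + BATCH_SIZE));
    pos += BATCH_SIZE;

    // if we're running out of time, recurse: invoke ourselves asynchronously
    // with the current position so the next invocation can carry on from here
    if (pos < users.length && context.getRemainingTimeInMillis() < MIN_TIME_LEFT) {
      await lambda.invoke({
        FunctionName: context.functionName,
        InvocationType: 'Event',
        Payload: JSON.stringify({ bucket, key, position: pos })
      }).promise();
      return;
    }
  }
};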

With this setup, the system was able to easily handle JSON files with more than 1 million users during our load test (sorry Apple and Google for sending all those fake device tokens :-P).

Yubl’s road to Serverless architecture – Part 1

Note: see here for the rest of the series.


Since Yubl’s closure quite a few people have asked about the serverless architecture we ended up with and some of the things we have learnt along the way.

As such, this is the first of a series of posts where I’ll share some of the lessons we learnt. However, bear in mind the pace of change in this particular space, so some of the challenges/problems we encountered might have been solved by the time you read this.

ps. many aspects of this series are already covered in a talk I gave on Amazon Lambda at Leetspeak this year; you can find the slides and recording of the talk here.


From A Monolithic Beginning

Back when I joined Yubl in April I inherited a monolithic Node.js backend running on EC2 instances, with MongoLab (hosted MongoDB) and CloudAMQP (hosted RabbitMQ) thrown into the mix.


There were numerous problems with the legacy system; some could be rectified with incremental changes (eg. blue-green deployment) but others required a rethink at an architectural level. Although things look really simple on paper (at the architecture diagram level), all the complexities are hidden inside each of these 3 services and boy, were there complexities!

My first tasks were to work with the ops team to improve the existing deployment pipeline and to draw up a list of characteristics we’d want from our architecture:

  • able to do small, incremental deployments
  • deployments should be fast, and require no downtime
  • no lock-step deployments
  • features can be deployed independently
  • features are loosely coupled through messages
  • minimise cost for unused resources
  • minimise ops effort

From here we decided on a service-oriented architecture, and Amazon Lambda seemed the perfect tool for the job given the workloads we had:

  • lots of APIs, all HTTPS, no ultra-low latency requirement
  • lots of background tasks, many of which have soft real-time requirements (eg. distributing a post to followers’ timelines)


To a Serverless End

Suffice to say, we knew the migration was going to be a long road with many challenges along the way, and we wanted to do it incrementally and gradually increase the speed of delivery as we went.

“The lead time to someone saying thank you is the only reputation metric that matters”

– Dan North

The first step of the migration was to make the legacy systems publish state changes in the system (eg. user joined, user A followed user B, etc.) so that we could start building new features on top of the legacy systems.

To do this, we updated the legacy systems to publish events to Kinesis streams.
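As an illustration, publishing one of these events from the legacy Node.js code boils down to a putRecord call against the relevant Kinesis stream (the event shape and stream name below are made up):

const AWS = require('aws-sdk');
const kinesis = new AWS.Kinesis();

// hypothetical example: publish a "user-followed" event to a Kinesis stream
const publishEvent = (event, callback) => {
  kinesis.putRecord({
    StreamName: 'yubl-user-events',   // illustrative stream name
    PartitionKey: event.userId,       // keeps a given user's events in order
    Data: JSON.stringify(event)
  }, callback);
};

publishEvent({ type: 'user-followed', userId: 'user-a', followedUserId: 'user-b' }, console.log);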


Our general strategy is:

  • build new features on top of these events, which usually have their own data stores (eg. DynamoDB, CloudSearch, S3, BigQuery, etc.) together with background processing pipelines and APIs
  • extract existing features/concepts from the legacy system into services that will run side-by-side
    • these new services will initially be backed by the same shared MongoLab database
    • other services (including the legacy ones) are updated to use hand-crafted API clients to access the encapsulated resources via the new APIs rather than hitting the shared MongoLab database directly
    • once all access to these resources is done via the new APIs, data migration (usually to DynamoDB tables) will commence behind the scenes
  • wherever possible, requests to existing API endpoints are forwarded to the new APIs so that we don’t have to wait for the iOS and Android apps to be updated (which can take weeks) and can start reaping the benefits earlier


After 6 months of hard work, my team of 6 backend engineers (including myself) had drastically transformed our backend infrastructure. Amazon was very impressed by the work we were doing with Lambda and was in the process of writing up a case study of our work when Yubl was shut down at the whim of our major shareholder.

Here’s an almost complete picture of the architecture we ended up with (some details are omitted for brevity and clarity).


Some interesting stats:

  • 170 Lambda functions running in production
  • roughly 1GB of total deployment package size (after Janitor Lambda cleans up unreferenced versions)
  • Lambda cost was around 5% of what we paid for EC2 for a comparable amount of compute
  • the no. of production deployments increased from 9/month in April to 155 in September


For the rest of the series I’ll drill down into specific features, how we utilised various AWS services, and how we tackled the challenges of:

  • centralised logging
  • centralised configuration management
  • distributed tracing with correlation IDs for Lambda functions
  • keeping Lambda functions warm to avoid coldstart penalty
  • auto-scaling AWS resources that do not scale dynamically
  • automatically cleaning up old Lambda function versions
  • securing sensitive data (eg. mongodb connection string, service credentials, etc.)

I can also explain our strategy for testing, and running/debugging functions locally, and so on. If there’s anything you’d like me to cover in particular, please leave a comment and let me know.



Slides for my Craft-Conf meetup talk on DSLs and F#

S3 – Masterclass Webinar slides

I stumbled across a set of slides with a rather comprehensive overview of the different aspects of S3, worthwhile reading for anyone who works with Amazon S3 regularly. Enjoy!