Using CloudWatch and Lambda to implement ad-hoc scheduling

You can become a serverless blackbelt. Enrol in my 4-week online workshop Production-Ready Serverless and gain hands-on experience building something from scratch using serverless technologies. At the end of the workshop, you should have a broader view of the challenges you will face as your serverless architecture matures and expands. You should also have a firm grasp on when serverless is a good fit for your system as well as common pitfalls you need to avoid. Sign up now and get a 15% discount with the code yanprs15!

A while back I wrote about using DynamoDB TTL to implement ad-hoc scheduling. It generated some healthy debate, and a few of you mentioned alternatives, including using Step Functions. So let’s take a look at some of these alternatives, starting with the simplest – using a cron job.

We will assess this approach using the same criteria laid out in the last post:

  • Precision: how close to my scheduled time is the task executed? The closer, the better.
  • Scale (number of open tasks): can the solution scale to support many open tasks, i.e. tasks that are scheduled but not yet executed?
  • Scale (hotspots): can the solution scale to execute many tasks around the same time? E.g. millions of people set a timer to remind themselves to watch the Superbowl, so all the timers fire within close proximity to kickoff time.

CloudWatch schedule and Lambda

The setup is very simple:

  • A database (such as DynamoDB) that stores all the scheduled tasks, including when they should execute.
  • A CloudWatch schedule (cron) that runs every X minutes.
  • A Lambda function that reads overdue tasks from the database and executes them.
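The cron function’s core logic can be sketched as follows. This is a minimal sketch where the task list and the `executeTask` callback are stand-ins; in a real setup you’d query the database (e.g. a DynamoDB query against an index on `scheduled_time`) instead of filtering in memory, and all the names here are hypothetical:

```javascript
// Minimal sketch of the cron function's core logic. In production,
// the task list would come from a database query (e.g. a DynamoDB
// query against an index on scheduled_time) rather than an in-memory
// filter. All names (scheduled_time, executeTask) are hypothetical.

function findOverdueTasks(tasks, now) {
  // a task is overdue if its scheduled time is at or before "now"
  return tasks.filter(task => task.scheduled_time <= now);
}

async function runCronCycle(tasks, now, executeTask) {
  const overdue = findOverdueTasks(tasks, now);
  for (const task of overdue) {
    await executeTask(task); // execute tasks one at a time
  }
  return overdue.length;
}
```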

Scalability (number of open tasks)

Since the number of open tasks just translates to the number of items in the database, this approach can scale to millions of open tasks.


Precision

With CloudWatch Events, you can run a scheduled task as often as once every minute. That’s the best precision you can get with this approach.

However, this approach is often constrained by the number of tasks that can be processed in each iteration. When too many tasks need to be executed at the same time, they can stack up and cause delays. These delays are a symptom of the biggest challenge with this approach – dealing with hotspots.

Scalability (hotspots)

When the Lambda function executes, it looks for tasks that are at or past their scheduled execution time. For example, if the time now is 00:00 UTC, our function would find tasks whose scheduled_time is <= 00:00 UTC.

What happens if it finds so many tasks that it can’t complete the current cycle before the next one kicks off at 00:01 UTC? To avoid the same task being executed more than once, we’d often set the function’s Reserved Concurrency to 1. This ensures that at any moment in time, only one instance of our Lambda function is running.

However, this means tasks that are scheduled for 00:02 UTC are not processed until the first batch is complete. This can create unpredictable delays and significantly impact the precision of the system.

Alternatively, if you can mark a task as having started then you can prevent subsequent cycles from picking it up again except in failure cases. One such scheme might be to add a started_at attribute to the task definitions. When the cron function looks for tasks to execute, it’ll ignore tasks that are started but have not yet completed or timed out.

Doing this would allow you to run multiple instances of the cron function without risking processing a task more than once.
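The claiming scheme described above can be sketched like this. The attribute names (`scheduled_time`, `started_at`, `completed_at`) and the timeout value are hypothetical:

```javascript
// Sketch of the "mark as started" scheme. A task is eligible if it's
// overdue AND not currently claimed by another instance of the cron
// function. A claim is treated as stale (i.e. a failure case) once
// it's older than taskTimeoutMs. All attribute names are hypothetical.

function isClaimable(task, now, taskTimeoutMs) {
  if (task.scheduled_time > now) return false; // not yet due
  if (task.completed_at) return false;         // already done
  if (!task.started_at) return true;           // never claimed
  // claimed, but the claim has timed out, so it's safe to retry
  return now - task.started_at > taskTimeoutMs;
}

function claimableTasks(tasks, now, taskTimeoutMs) {
  return tasks.filter(t => isClaimable(t, now, taskTimeoutMs));
}
```

In DynamoDB, you’d enforce the claim itself with a conditional update (e.g. a condition expression such as `attribute_not_exists(started_at)`), so two instances can’t claim the same task.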

But that still leaves the issue that hotspots can cause some real damage:

  • The function can time out before it’s able to process the entire batch.
  • The precision of the system becomes highly unpredictable. If a function takes the full 15-minute timeout to process the batch, then some tasks will be executed 15 minutes after their scheduled time.

Of course, you can process the tasks in parallel inside your code. But that adds complexity to your business logic and has other failure modes such as:

  • Out-of-memory errors – if you start too many concurrent Promises (in Node.js) or threads, you can run out of memory at runtime.
  • Promise.all in Node.js rejects as soon as any of the individual promises rejects. An unhandled exception in one task rejects the whole batch, without telling you which of the other tasks would have succeeded.
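To process tasks in parallel without one failure masking the rest of the batch, `Promise.allSettled` (available since Node.js 12.9) is a safer choice than `Promise.all`, since it reports the outcome of every task individually. A minimal sketch, where `executeTask` stands in for your business logic:

```javascript
// Process a batch of tasks in parallel. Promise.allSettled reports the
// outcome of every task individually, so one failing task doesn't mask
// the results of the others (unlike Promise.all, which rejects on the
// first failure). executeTask is a stand-in for your business logic.
async function processBatch(tasks, executeTask) {
  const results = await Promise.allSettled(tasks.map(t => executeTask(t)));
  const succeeded = [];
  const failed = [];
  results.forEach((result, i) => {
    if (result.status === 'fulfilled') {
      succeeded.push(tasks[i]);
    } else {
      failed.push({ task: tasks[i], reason: result.reason });
    }
  });
  // failed tasks can be retried on the next cycle or sent to a DLQ
  return { succeeded, failed };
}
```

Note this still starts every task at once; to cap concurrency (and avoid running out of memory), you’d process the batch in chunks or use a limiter such as p-limit.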

Personally, I think a better approach is to hand the tasks off to an SQS queue instead, which can then trigger another Lambda function.

With SQS, you have built-in retry and dead letter queue (DLQ) support. The Lambda function would process SQS tasks in batches (of up to 10). The Lambda service would also auto-scale the number of concurrent executions based on traffic. Both characteristics would help you with throughput.

However, since this is an extra hop, it adds a delay before tasks are executed, which impacts the precision of the system. So maybe you don’t do this all the time. Instead, process the batch right away if it’s small, and only defer processing through an SQS queue when the batch is over a certain size.
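That size-based dispatch might look like this. The threshold and the `processNow`/`enqueue` callbacks are injected so the sketch stays self-contained; in real code, `enqueue` would call SQS’s SendMessageBatch API (which accepts up to 10 messages per call):

```javascript
// Dispatch a batch based on its size: small batches are processed
// inline for precision, large ones are deferred to SQS for throughput.
// The threshold and the processNow/enqueue callbacks are injected so
// the sketch stays self-contained; in real code, enqueue would call
// SQS's SendMessageBatch API (up to 10 messages per call).
async function dispatch(tasks, { threshold, processNow, enqueue }) {
  if (tasks.length <= threshold) {
    await processNow(tasks); // small batch: execute in this invocation
    return 'inline';
  }
  // large batch: hand off to SQS in chunks of 10
  for (let i = 0; i < tasks.length; i += 10) {
    await enqueue(tasks.slice(i, i + 10));
  }
  return 'queued';
}
```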

Database choice

I used DynamoDB in all the examples in this post because it’s easy to use and would suffice for many use cases. However, it’s not great at dealing with hot keys. Unfortunately, that’s exactly what we’d be doing here…

You are constrained by the limit of 3,000 read capacity units per partition. So if your system needs to scale to more than a few thousand tasks per batch, then DynamoDB is probably not for you.
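If you do stay with DynamoDB, one common way to relieve a hot partition is write sharding: append a random suffix to the partition key on write so the load spreads across several partitions, at the cost of fanning out the reads. A minimal sketch, where the key shape and shard count are made up for illustration:

```javascript
// Write sharding: spread a hot partition key across N sub-partitions by
// appending a shard suffix on write, then query all N shards on read.
// The key shape ("<timeSlot>#<shard>") and NUM_SHARDS are illustrative.
const NUM_SHARDS = 10;

function shardedKey(timeSlot, shard = Math.floor(Math.random() * NUM_SHARDS)) {
  return `${timeSlot}#${shard}`;
}

function allShardKeys(timeSlot) {
  // the cron function fans out one query per shard for the time slot
  return Array.from({ length: NUM_SHARDS }, (_, i) => `${timeSlot}#${i}`);
}
```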


When to use this approach

This batch-based approach might not feel modern, but for many use cases it’s also the simplest way to schedule ad-hoc tasks.

For instance, if your use case is such that:

  • You can tolerate tasks being executed several minutes late.
  • Tasks are evenly distributed across the day, and unlikely to form hotspots.

Then this approach of using a CloudWatch schedule and Lambda is likely sufficient for your needs. Even if you’re going to experience some hotspots, there are ways to mitigate them to a degree. As we discussed in this post, there are several things you can do to increase your throughput:

  • You can track the status of the tasks. This allows multiple instances of the cron function to overlap.
  • You can offload the tasks to SQS first. This lets you leverage the batching and parallelism support you get with SQS and Lambda, and you get DLQ support for free.

I hope you have enjoyed this post. As part of this series, we will discuss several other approaches to scheduling ad-hoc tasks. Watch this space ;-)


