SmartThreadPool – What happens when you take on more than you can chew


I’ve covered the topic of using SmartThreadPool and the framework thread pool in more detail here and here. This post will instead focus on a more specific scenario: what happens when the rate of new work items being queued outstrips the pool’s ability to process them.

First, let’s try to quantify the work items being queued when you do something like this:

    var threadPool = new SmartThreadPool();
    var result = threadPool.QueueWorkItem(....);

The work item being queued is a delegate of some sort, basically a piece of code that needs to be run. Until a thread in the pool becomes available to process it, the work item simply stays in memory as a bunch of 1’s and 0’s, just like everything else.
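
To make that a bit more concrete, here is a minimal sketch (assuming the classic Amib.Threading API; the callback body is purely illustrative) of what actually gets handed to the pool: a delegate, plus whatever it captures, sits in the pool’s internal queue until a worker thread dequeues and runs it.

    // the work item is just a delegate: object in, object out
    WorkItemCallback callback = state => (object)DateTime.UtcNow.ToString("o");

    var threadPool = new SmartThreadPool();
    IWorkItemResult result = threadPool.QueueWorkItem(callback);

    // GetResult() blocks the caller until a pool thread has executed the delegate
    Console.WriteLine(result.GetResult());

    threadPool.Shutdown();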

Now, if new work items are queued at a faster rate than the threads in the pool are able to process them, it’s easy to imagine that the amount of memory required to keep the delegates will follow an upward trend until you eventually run out of available memory and an OutOfMemoryException gets thrown.

Does that sound like a reasonable assumption? Let’s find out what actually happens!

Test 1 – Simple delegate

To simulate a scenario where the thread pool gets overrun by work items, I’m going to instantiate a new smart thread pool and make sure there’s only one thread in the pool at all times. Then I repeatedly queue up an action which puts that one and only thread to sleep for a very long time, so that there are no threads left to process subsequent work items:

    // instantiate a basic smart thread pool with only one thread in the pool
    var threadpool = new SmartThreadPool(new STPStartInfo
                                             {
                                                 MaxWorkerThreads = 1,
                                                 MinWorkerThreads = 1,
                                             });

    var queuedItemCount = 0;
    try
    {
        // keep queuing new items which just put the one and only thread
        // in the threadpool to sleep for a very long time
        while (true)
        {
            // put the thread to sleep for a long, long time so it can't handle
            // any more queued work items
            threadpool.QueueWorkItem(() => Thread.Sleep(10000000));
            queuedItemCount++;
        }
    }
    catch (OutOfMemoryException)
    {
        Console.WriteLine("OutOfMemoryException caught after queuing {0} work items", queuedItemCount);
    }
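
As an aside (my own addition, not part of the original test), if you want to watch the climb from inside the process rather than through an external tool, you can drop something like this inside the while loop:

    // rough sketch: print the managed heap size and the process's private bytes
    // every 100,000 queued items so the upward trend shows up in the console
    if (queuedItemCount % 100000 == 0)
    {
        Console.WriteLine(
            "{0:N0} items queued, managed heap: {1:N0} bytes, private bytes: {2:N0}",
            queuedItemCount,
            GC.GetTotalMemory(false),
            System.Diagnostics.Process.GetCurrentProcess().PrivateMemorySize64);
    }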

The result? As expected, the memory used by the process went on a pretty steep climb and within a minute it bombed out after eating up just over 1.8GB of RAM:

[Image: the process’s memory usage climbing steeply to just over 1.8GB before the OutOfMemoryException is thrown]

All the while, we managed to queue up 7,205,254 instances of the simple delegate used in this test. Keep this number in mind as we look at what happens when each closure also has to keep an expensive piece of data alive in memory.
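
As a quick back-of-envelope calculation (an approximation on my part, since the ~1.8GB figure covers the whole process, not just the pool’s queue), that works out to somewhere in the region of 270 bytes per queued work item:

    // rough arithmetic, not a precise measurement
    const double bytesAtFailure = 1.8 * 1024 * 1024 * 1024; // ~1.8GB when the process bombed out
    const int itemsQueued = 7205254;                         // work items queued before the OOM

    Console.WriteLine("~{0:N0} bytes per queued work item", bytesAtFailure / itemsQueued);
    // prints roughly 268 - the delegate itself plus the pool's per-item bookkeeping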

Test 2 – Delegate with very long string

For this test, I’m going to include a 1000-character-long string in each closure being queued, so that a string object has to be kept around in memory for as long as its closure is still around. Now let’s see what happens!

    // instantiate a basic smart thread pool with only one thread in the pool
    var threadpool = new SmartThreadPool(new STPStartInfo
                                             {
                                                 MaxWorkerThreads = 1,
                                                 MinWorkerThreads = 1,
                                             });

    var queuedItemCount = 0;
    try
    {
        // keep queuing new items which just put the one and only thread
        // in the threadpool to sleep for a very long time
        while (true)
        {
            // generate a 1000 character long string (roughly 2KB in memory,
            // since .NET strings are UTF-16)
            var veryLongText = new string(Enumerable.Range(1, 1000).Select(i => 'E').ToArray());

            // include the very long string in the closure here
            threadpool.QueueWorkItem(() =>
                                         {
                                             Thread.Sleep(10000000);
                                             Console.WriteLine(veryLongText);
                                         });
            queuedItemCount++;
        }
    }
    catch (OutOfMemoryException)
    {
        Console.WriteLine("OutOfMemoryException caught after queuing {0} work items", queuedItemCount);
    }

Unsurprisingly, the memory was eaten up even faster this time around, and in the end we were only able to queue 782,232 work items before running out of memory, significantly fewer than in the previous test:

[Image: the process’s memory usage climbing even more steeply before the OutOfMemoryException is thrown]
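
That drop makes sense when you consider what the C# compiler does with the captured local. Roughly speaking (the names below are made up for illustration; the real generated class has a compiler-mangled name), the closure in test 2 compiles down to something like this:

    // the captured local is hoisted onto a compiler-generated class, so the string
    // stays alive for as long as the queued delegate does
    class CapturedState
    {
        public string veryLongText; // ~2,000 bytes of UTF-16 characters, plus object overhead

        public void QueuedWorkItem()
        {
            Thread.Sleep(10000000);
            Console.WriteLine(veryLongText);
        }
    }

    // each loop iteration effectively does:
    //   var state = new CapturedState { veryLongText = ... };
    //   threadpool.QueueWorkItem(state.QueuedWorkItem);

So on top of the roughly 270 bytes per item from test 1, each queued work item now also pins a ~2KB string in memory, which lines up with us only managing about a ninth as many work items before running out of memory.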

Parting thoughts…

Besides being a fun little experiment, there is a story here: a worst-case scenario (highly unlikely, but not impossible) that is worth keeping in the back of your mind when utilising thread pools to deal with highly frequent, data-intensive, blocking calls.
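
One simple way to guard against this worst case (a sketch of my own, not something from the original post) is to cap how many work items can be outstanding at any one time, so a blocked or slow pool pushes back on the producer instead of letting the queue grow without bound:

    public class BoundedQueuer
    {
        private readonly SmartThreadPool _threadPool = new SmartThreadPool();

        // allow at most 10,000 work items to be queued or in flight at once
        private readonly Semaphore _capacity = new Semaphore(10000, 10000);

        public void QueueWorkItem(Action work)
        {
            _capacity.WaitOne(); // blocks the producer once the cap is reached

            _threadPool.QueueWorkItem(() =>
            {
                try { work(); }
                finally { _capacity.Release(); } // free the slot once the item has run
            });
        }
    }

Whether blocking the producer is the right trade-off depends on the system; the point is simply that some form of back-pressure stops the work item queue from being the thing that takes the process down.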
