WCF – Cross-machine semaphore with WCF

Yan Cui

I came across an interesting question on StackOverflow about how to throttle the number of requests across multiple servers running the same WCF service. For instance, suppose you have 3 servers sitting behind a load balancer and, for one reason or another, you can only allow 5 requests to be made against the service at any moment in time; any subsequent requests need to be queued until one of the previous requests finishes.

For those of you familiar with the programming concept of a semaphore, you might notice that the above requirement describes a semaphore that applies across multiple machines. A quick Google search for ‘cross-machine semaphore’ reveals several implementations of such a system using memcached.

Naturally, a distributed cache is a good way to implement a cross-machine semaphore IF you are already using one for something else. Otherwise, the overhead and cost of running a distributed cache cluster purely for the sake of a cross-machine semaphore make this approach a no-go for most of us.

Instead, you could easily implement a semaphore service that provides the same functionality to multiple WCF clients as the bog-standard Semaphore class does to multiple threads. Such a service might look something like this:

using System.ServiceModel;
using System.Threading;

// service contract exposing the semaphore's Acquire/Release operations
[ServiceContract]
public interface ISemaphoreService
{
    [OperationContract]
    void Acquire();

    [OperationContract]
    void Release();
}

[ServiceBehavior(InstanceContextMode = InstanceContextMode.PerCall)]
public class SemaphoreService : ISemaphoreService
{
    // a single, static semaphore shared by all (per-call) service instances,
    // allowing at most 5 concurrent requests across all clients
    private static readonly Semaphore Pool = new Semaphore(5, 5);

    public void Acquire()
    {
        // blocks until one of the 5 slots becomes available
        Pool.WaitOne();
    }

    public void Release()
    {
        // frees up a slot for the next waiting caller
        Pool.Release();
    }
}
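
For the semaphore to be shared by all clients, the service would be hosted on one designated machine. As a rough illustration (not from the original post), here is a minimal self-hosting sketch; the base address and the use of BasicHttpBinding are assumptions made for the example:

using System;
using System.ServiceModel;

class Program
{
    static void Main()
    {
        // host the semaphore service on a single, well-known machine
        // (the address below is illustrative only)
        var baseAddress = new Uri("http://semaphore-host:8000/semaphore");

        using (var host = new ServiceHost(typeof(SemaphoreService), baseAddress))
        {
            host.AddServiceEndpoint(typeof(ISemaphoreService), new BasicHttpBinding(), "");
            host.Open();

            Console.WriteLine("Semaphore service is running. Press Enter to stop.");
            Console.ReadLine();
        }
    }
}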

This approach does introduce a single point of failure, as the semaphore service needs to run on a single machine in order to work correctly. If you were to use this approach, you should build up some infrastructure around it so that you can recover swiftly if and when the server running the semaphore service goes down.

In terms of the client code, you should make sure that Release is called for every Acquire call, so it would be a good idea to put a try-finally block around it:

var client = new SemaphoreServiceClient();

try
{
    // acquire the semaphore before processing the request
    client.Acquire();

    // process request
    ...
}
finally
{
    // always remember to release the semaphore
    client.Release();
}
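
One way to make it harder to forget the Release call is to wrap the Acquire/Release pair in a small disposable helper, so that a using block guarantees the release even when request processing throws. This is just a sketch (not from the original post), and it assumes SemaphoreServiceClient is the client proxy generated for ISemaphoreService:

using System;

// a hypothetical helper that ties Release to Dispose
public class SemaphoreScope : IDisposable
{
    private readonly SemaphoreServiceClient _client;

    public SemaphoreScope()
    {
        _client = new SemaphoreServiceClient();
        _client.Acquire();
    }

    public void Dispose()
    {
        try
        {
            _client.Release();
        }
        finally
        {
            _client.Close();
        }
    }
}

// usage:
// using (new SemaphoreScope())
// {
//     // process request
// }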

Whenever you’re ready, here are 3 ways I can help you:

  1. Production-Ready Serverless: Join 20+ AWS Heroes & Community Builders and 1000+ other students in levelling up your serverless game. This is your one-stop shop for quickly levelling up your serverless skills.
  2. I help clients launch product ideas, improve their development processes and upskill their teams. If you’d like to work together, then let’s get in touch.
  3. Join my community on Discord, ask questions, and join the discussion on all things AWS and Serverless.

2 thoughts on “WCF – Cross-machine semaphore with WCF”

  1. That looks nice, but…

    I see some problems:

    1. Which machine with peer-to-peer servers would be the semaphore “server”?
    2. The network can go dead; finally won’t do a lot without a network connection.

  2. @offler –
    Regarding 1, if you have a network of peer-to-peer servers then you need to consider how semaphores should be used, especially if the peer-to-peer networks are formed on an ad-hoc basis. Unless all the independent peer-to-peer networks of nodes share the same pool of semaphores, in which case a centralized approach will still work.
    The above is just an outline of an idea for an implementation using WCF; nowadays Redis is so mature that I would look into using Redis’s lists along with its blocking pop and push operations instead (see the sketch after this comment).

    Regarding 2, if you’re going to use some form of shared semaphore in a distributed environment then you need to consider how your application should handle network partitions.
    For instance, if a subset of the nodes lose connectivity to the rest of the network, can/should the system continue operating in a degraded state? Or can the system only function if ALL of the nodes remain connected? In which case you might need heartbeat tests for all your nodes, etc.
    How (and whether) to handle network partitions is a broader question that needs to be answered on a case-by-case basis, and it is not limited to the use of a central semaphore server.
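
To flesh out the Redis idea mentioned in the reply above: a distributed semaphore can be modelled as a Redis list seeded with N tokens, where acquiring is a blocking pop and releasing is a push. The sketch below is an illustration only; it assumes the ServiceStack.Redis client and its PushItemToList / BlockingDequeueItemFromList methods, so adapt it to whichever Redis client you actually use.

using System;
using ServiceStack.Redis;

public static class RedisSemaphore
{
    // hypothetical key name for the list of available slots
    private const string ListKey = "semaphore:tokens";

    // seed the list with one token per allowed concurrent request
    // (run once, e.g. at deployment time)
    public static void Initialize(IRedisClient redis, int maxConcurrent)
    {
        redis.Remove(ListKey);
        for (var i = 0; i < maxConcurrent; i++)
        {
            redis.PushItemToList(ListKey, "token");
        }
    }

    // acquire = blocking pop; blocks until a token is available or the timeout expires
    public static string Acquire(IRedisClient redis, TimeSpan? timeout = null)
    {
        return redis.BlockingDequeueItemFromList(ListKey, timeout);
    }

    // release = push the token back so another caller can proceed
    public static void Release(IRedisClient redis, string token)
    {
        redis.PushItemToList(ListKey, token);
    }
}

// usage:
// using (var redis = new RedisClient("localhost"))
// {
//     var token = RedisSemaphore.Acquire(redis, TimeSpan.FromSeconds(30));
//     try { /* process request */ }
//     finally { RedisSemaphore.Release(redis, token); }
// }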
