Metricano – simplifying application monitoring

On application monitoring

In the Gamesys social team, our view on application monitoring is that anything running in production needs to be monitored extensively, all the time – every service entry point, IO operation and CPU-intensive task. Sure, it comes at the cost of a few CPU cycles, which might mean you have to run a few more servers at scale, but that’s a small cost to pay compared to:

  • lack of visibility of how your application is performing in production; or
  • inability to spot issues occurring on individual nodes amongst a large number of servers; or
  • longer time to discovery of production issues, which results in
    • longer time to recovery (i.e. longer downtime)
    • loss of customers (an immediate result of downtime)
    • loss of customer confidence (a longer-term impact)

Services such as StackDriver and Amazon CloudWatch also allow you to set up alarms around your metrics, so that you can be notified, or automated actions triggered, in response to changing conditions in production.

In Michael T. Nygard’s Release It!: Design and Deploy Production-Ready Software (a great read, by the way) he discusses at length how unfavourable conditions in production can cause cracks to appear in your system, and how, through tight coupling and other anti-patterns, these cracks can accelerate and spread across your entire application until they eventually bring it to its knees.

 

In applying extensive monitoring to our services we are able to:

  • see cracks appearing in production early; and
  • collect extensive data for the post-mortem; and
  • use knowledge gained during post-mortems to identify early warning signs and set up alarms accordingly

 

Introducing Metricano

With our emphasis on monitoring, it should come as no surprise that we have long sought to make it easy for our developers to monitor different aspects of their service.

 

Now, we’ve made it easy for you to do the same with Metricano, an agent-based framework for collecting, aggregating and publishing metrics from your application. At a high level, the MetricsAgent class collects metrics and aggregates them into second-by-second summaries, which are then published to all the publishers you have configured.

 

Recording Metrics

There are a number of options for you to record metrics with MetricsAgent:

Manually

You can call the IncrementCountMetric or RecordTimeMetric methods on an instance of IMetricsAgent (use MetricsAgent.Default or create a new instance with MetricsAgent.Create), for example:
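Something along these lines (a sketch – the method names come from Metricano, but the exact signatures are assumptions):

    open System.Diagnostics
    open Metricano

    // use the default agent, or create a dedicated one with MetricsAgent.Create
    let agent = MetricsAgent.Default

    // stand-in for the operation being timed
    let checkTrap () = ()

    // record a count metric against a metric name of your choosing
    agent.IncrementCountMetric("HBM.CaughtMonster")

    // record a time metric; the (string, TimeSpan) overload is an assumption
    let stopwatch = Stopwatch.StartNew()
    checkTrap ()
    stopwatch.Stop()
    agent.RecordTimeMetric("HBM.CheckTrap", stopwatch.Elapsed)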

 

F# Workflows

From F#, you can also use the built-in timeMetric and countMetric workflows:
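For example (a sketch – the builder names come from the post, but the arguments they take are assumptions):

    open Metricano.FSharp   // assumed namespace for the workflow builders

    // stand-ins for real game logic
    let awardRewards () = ()
    let runTrapLogic () = ()

    // builders parameterized by the metric name they record against
    let countCaught = countMetric "HBM.CaughtMonster"
    let timeCheck   = timeMetric  "HBM.CheckTrap"

    // counts how many times the block of code is executed
    let caughtMonster () = countCaught {
            return awardRewards ()
        }

    // records how long the block of code takes to execute
    let checkTrap () = timeCheck {
            return runTrapLogic ()
        }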

 

PostSharp Aspects

Lastly, you can also use the CountExecution and LogExecutionTime attributes from the Metricano.PostSharpAspects NuGet package, which can be applied at method, class and even assembly level.

The CountExecution attribute records a count metric with the fully qualified name of the method, whereas the LogExecutionTime attribute records execution times into a time metric with the fully qualified name of the method. When applied at class or assembly level, the attributes are multicast to all encompassed methods – private, public, instance and static. It’s also possible to target specific methods by name, visibility, etc.; please refer to PostSharp’s documentation for details.
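For illustration, applying them from F# might look like this (a sketch – the attribute names come from the package, everything else is illustrative):

    open Metricano.PostSharpAspects   // assumed namespace

    type TrappingController () =
        // records a count metric under the method's fully qualified name
        [<CountExecution>]
        // records execution times under the method's fully qualified name
        [<LogExecutionTime>]
        member __.CheckTrap (playerId : string) =
            printfn "checking trap for player %s" playerId

    // applied at assembly level, the attributes are multicast to every method
    [<assembly: CountExecution>]
    do ()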

 

Publishing Metrics

All the metrics you record won’t do you much good if they just stay inside the memory of your application server.

To get metrics out of your application server and into a monitoring service or dashboard, you need to tell Metricano to publish metrics with a set of publishers. There is a ready-made publisher for the Amazon CloudWatch service in the Metricano.CloudWatch NuGet package.

To add a publisher to the pipeline, use the Publish.With static method, for example:
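A sketch of the wiring (the CloudWatchPublisher constructor argument, a metric namespace, is an assumption about the actual API):

    open Metricano
    open Metricano.Publisher   // assumed namespace for CloudWatchPublisher

    // every configured publisher receives the aggregated metrics
    Publish.With(new CloudWatchPublisher("MyApp"))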

Since the lowest granularity on Amazon CloudWatch is one minute, as an optimization to cut down the number of web requests (which also have a cost impact), CloudWatchPublisher aggregates metrics locally and publishes the aggregates on a per-minute basis.

If you want to publish your metric data to another service (StackDriver or New Relic, for instance), you can create your own publisher by implementing the very simple IMetricsPublisher interface. This simple ConsolePublisher, for instance, calculates the 95th percentile execution times and prints them:

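A self-contained sketch of such a publisher – the real IMetricsPublisher interface lives in Metricano and its exact shape is an assumption here; the percentile calculation is the interesting part:

    // assumed shape: one method receiving named groups of raw execution times
    type IMetricsPublisher =
        abstract Publish : (string * float[]) seq -> unit

    // nearest-rank percentile over a set of execution times (in milliseconds)
    let percentile (p : float) (values : float[]) =
        if Array.isEmpty values then nan
        else
            let sorted = Array.sort values
            let rank   = int (ceil (p / 100.0 * float sorted.Length)) - 1
            sorted.[max 0 rank]

    type ConsolePublisher () =
        interface IMetricsPublisher with
            member __.Publish metrics =
                for (name, durationsMs) in metrics do
                    printfn "%s : 95th percentile = %.2fms" name (percentile 95.0 durationsMs)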

In general I find the 95th/97th/99th percentile time metrics much more informative than simple averages, since averages are so easily skewed by even a small number of outlying data points.

 

Summary

I hope you have enjoyed this post and that you’ll find Metricano a useful addition to your application.

I highly recommend reading Release It!; many of the patterns and anti-patterns discussed in the book are becoming more and more relevant in today’s world, where we’re building smaller, more granular microservices. Even the simplest of applications have multiple integration points – social networks, cloud services, etc. – and these are the places where cracks are likely to occur before they spread out to the rest of your application, unless you have taken measures to guard against such cascading failures.

If you decide to buy the book from Amazon, please use the link I provide below, or add the query string parameter tag=theburningmon-21 to the URL, so that I can get a small referral fee and use it towards buying more books and finding more interesting things to write about here.

 

Links

Metricano project page

Release It!: Design and Deploy Production-Ready Software

Metricano nuget package

Metricano.CloudWatch nuget package

Metricano.PostSharpAspects nuget package

Red-White Push – Continuous Delivery at Gamesys Social

Here Be Monsters – Message broker that links all things

In our MMORPG title Here Be Monsters, we offer the players a virtual world to explore where they can visit towns and spots; forage fruits and gather insects and flowers; tend to farms and animals in their homesteads; make in-game buddies and help each other out; craft new items using things they find in their travels; catch and cure monsters corrupted by the plague; help out troubled NPCs and aid the Ministry of Monsters in its struggle against the corruption, and much more!

All in all, there are close to a hundred distinct actions that can be performed in the game, and more are added as the game expands. At the very centre of everything you do in the game is a quest and achievements system that can tap into all these actions and reward you once you’ve completed a series of requirements.

 

The Challenge

However, such a system is complicated by the snowball effect that can follow any number of actions. The following animated GIF paints an accurate picture of the cyclic chain reactions that can occur following a simple action:

[Animated GIF: the chain reactions following a single action]

In this instance,

  1. catching a Gnome awards EXP, gold and occasionally loot drops, in addition to fulfilling any requirement for catching a gnome;
  2. getting the item as loot fulfils any requirement for you to acquire that item;
  3. the EXP and gold awarded to the player can fulfil requirements for acquiring certain amounts of EXP or gold respectively;
  4. the EXP can allow the player to level up;
  5. levelling up can then fulfil a requirement for reaching a certain level, as well as unlocking new quests that were previously level-locked;
  6. levelling up can also award you with items and gold, and the cycle continues;
  7. if all the requirements for a quest are fulfilled, the quest is complete;
  8. completing a quest will in turn yield further rewards of EXP, gold and items, and restart the cycle;
  9. completing a quest can also unlock follow-up quests, as well as fulfilling quest-completion requirements.

 

The same requirements system is also in place for achievements, which represent longer-term goals for players (e.g. catch 500 spirit monsters). The achievement and quest systems are co-dependent and feed into each other; many of the milestone achievements we currently have in the game depend upon quests being completed:

[Diagram: milestone achievements depending on quest completions]

Technically there is a ‘remote’ possibility of deadlocks, but right now it exists only as a possibility, since new quest/achievement content is generally played through many, many times by the many people involved in the content generation process, to ensure that it is fun and achievable and that at no point will players be left in a state of limbo.

 

This cycle of chain reactions introduces some interesting implementation challenges.

For starters, the different events in the cycle (levelling up, catching a monster, completing a quest, etc.) are handled and triggered from different abstraction layers that are loosely coupled together, e.g.

  • Level controller encapsulates all logic related to awarding EXP and levelling up.
  • Trapping controller encapsulates all logic related to monster catching.
  • Quest controller encapsulates all logic related to quest triggering, progressing and completions.
  • Requirement controller encapsulates all logic related to managing the progress of requirements.
  • and many more.

Functionally, the controllers form a natural hierarchy whereby higher-order controllers (such as the trapping controller) depend upon lower-order controllers (such as the level controller) because they need to be able to award players with EXP, items, etc. However, in order to facilitate the desired flow, in theory every controller needs to be able to listen and react to events triggered by every other controller.

 

To make matters worse, there are also non-functional requirements that need to tap into this rich and continuous stream of events, such as:

  • Analytics tracking – every action the player takes in the game is recorded along with the context in which it occurred (e.g. caught a gnome with trap X, acquired item Z, completed quest Q, etc.)
  • 3rd-party reporting – notifying ad partners of key milestones to help them track and monitor the effectiveness of different ad campaigns
  • etc.

 

For the components that process this stream of events, we also wanted to make sure that our implementation is:

  1. strongly cohesive – code dealing with a particular feature (quests, analytics tracking, community goals, etc.) is encapsulated within the same module
  2. loosely coupled – code dealing with different features should not depend directly on one another and, where possible, should exist completely independently

Since the events are generated and processed within the context of one HTTP request (the initial action from the user), the stream also has a lifetime that is scoped to the HTTP request itself.

 

And finally, in terms of performance: whilst it’s not a latency-critical system (generally a round-trip latency of sub-1s is acceptable), we aim for a response time (between the request reaching the server and the server sending back a response) of 50ms to ensure a good round-trip latency from the user’s perspective.

In practice though, the last-mile latency (from your ISP to you) has proven to be the most significant factor in determining the round-trip latency.

 

The Solution

After considering several approaches:

  • Vanilla .Net events
  • Reactive Extensions (Rx)
  • CEP platforms such as Esper or StreamInsight

we decided to go with a tailor-made solution for the problem at hand.

In this solution we introduced two abstractions:

  • Facts – special events for the purposes of this particular system; we call them facts to distinguish them from the events we already record for analytics. A fact contains information about an action or a state change, as well as the context in which it occurred; e.g. a CaughtMonster fact would contain information about the monster, the trap, the bait used, where in the world the action occurred, as well as the rewards the player received.
  • Fact Processor – a component which processes a fact.

 

As a request (e.g. to check our trap to see if we’ve caught a monster) comes in, the designated request handler first performs all the relevant game logic for that particular request, accumulating facts along the way from the different abstraction layers that work together to process the request.

At the end of the core game logic, the accumulated facts are then forwarded to each of the configured fact processors in turn; each processor may choose to process or ignore each fact.

In choosing to process a fact, a fact processor can cause state changes or other interesting events to occur, which result in follow-up facts being added to the queue.
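A minimal sketch of this pump (not the actual implementation – the types here are illustrative):

    open System.Collections.Generic

    // marker interface for facts; real facts carry action/state-change data
    type IFact = interface end

    type IFactProcessor =
        // processing a fact may yield follow-up facts (the chain reactions above)
        abstract Process : IFact -> IFact list

    // drain the queue, offering each fact to every processor and enqueueing
    // any follow-up facts they produce, until no facts remain
    let processFacts (processors : IFactProcessor list) (initialFacts : IFact seq) =
        let queue = Queue<IFact>(initialFacts)
        while queue.Count > 0 do
            let fact = queue.Dequeue()
            for processor in processors do
                processor.Process fact |> List.iter (fun f -> queue.Enqueue f)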

[Diagram: the fact processing pipeline]

 

The system described above has the benefits of being:

  • Simple – easy to understand and reason about; easy to modularise; no complex orchestration logic or spaghetti code.
  • Flexible – easy to change the information captured by facts and the processing logic in fact processors.
  • Extensible – easy to add new facts and/or fact processors into the system.

The one big downside is that the system requires many types of facts, which can add to your maintenance overhead and require a lot of boilerplate class setup.

 

To address these potential issues, we turned to F#’s discriminated unions instead of standard .Net classes for their succinctness. For a small number of facts you can have something as simple as the following:

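For instance (a sketch – the fact names and fields are illustrative):

    // one discriminated union covering every fact: succinct, but it grows
    // unwieldy once there are dozens of facts
    type Fact =
        | CaughtMonster of monster : string * trap : string * bait : string
        | GainedExp     of exp  : int64
        | GainedGold    of gold : int64
        | LevelUp       of oldLevel : int * newLevel : int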

However, as we mentioned earlier, there are a lot of different actions that can be performed in Here Be Monsters, so many facts will be required to track those actions as well as the state changes that occur during them. The simple approach above does not scale to that many cases.

Instead, you can use a combination of a marker interface and pattern matching to split the facts into a number of specialized discriminated union types.

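Along these lines (a sketch – CaughtMonster and StateChangeFacts.LevelUp appear in the post, the rest is illustrative):

    // the marker interface ties the specialized union types together
    type IFact = interface end

    type MonsterFacts =
        | CaughtMonster of monster : string * trap : string * bait : string
        interface IFact

    type StateChangeFacts =
        | LevelUp   of oldLevel : int * newLevel : int
        | GainedExp of exp : int64
        interface IFact

    // consumers type-test down to the union they care about, then pattern match
    let handle (fact : IFact) =
        match fact with
        | :? StateChangeFacts as f ->
            match f with
            | LevelUp (_, newLevel) -> printfn "reached level %d" newLevel
            | GainedExp exp         -> printfn "gained %d EXP" exp
        | _ -> ()   // facts this processor doesn't care about are ignored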

Update 2014/07/28: thanks to @johnazariah for bringing this up. The reason for choosing a marker interface rather than a hierarchical discriminated union in this case is that it makes interop with C# easier.

In C#, you can create the StateChangeFacts.LevelUp union case above using the compiler-generated StateChangeFacts.NewLevelUp static method, but it’s not as readable as the equivalent F# code.

With a hierarchical DU the code would be even less readable, e.g. Fact.NewStateChange(StateChangeFacts.NewLevelUp(…)).

 

To wrap things up: once all the facts are processed and we have dealt with the request in full, we need to generate a response back to the client to report all the changes to the player’s state as a result of the request. To simplify the tracking of these state changes and to keep the codebase maintainable, we make use of a Context object for the current request (similar to HttpContext.Current) and make sure that each state change (e.g. EXP, energy, etc.) occurs in only one place in the codebase and is tracked at the point where it occurs.

At the end of each request, all the changes that have been collected are copied from the current Context object onto the response object if it implements the relevant interface – for example, all the quest-related state changes are copied onto a response object if it implements the IHasQuestChanges interface.
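A sketch of that copy-back step, assuming a per-request context that tracks quest changes (IHasQuestChanges is named in the post; its shape here is an assumption):

    // tracked by the per-request Context object as quest state changes occur
    type QuestChange = { QuestId : int; Progress : int }

    // implemented by response types that want quest changes copied onto them
    type IHasQuestChanges =
        abstract QuestChanges : QuestChange list with get, set

    // at the end of the request, copy the collected changes onto the response
    let copyQuestChanges (collected : QuestChange list) (response : obj) =
        match response with
        | :? IHasQuestChanges as r -> r.QuestChanges <- collected
        | _ -> ()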

 

Related Posts

F# – use Discriminated Unions instead of Classes

F# – extending Discriminated Unions using marker interfaces

Announcing libraries for C# and F# to make it easier to integrate with Sentry

Here in the Gamesys social team, we’re rethinking our approach to logging in general, from both the server’s and the client’s perspective. Having looked at many different alternatives (I hadn’t quite appreciated how crowded the log aggregation and visualization space is…), one of the services we have decided to experiment with is Sentry.

Sentry is a fairly simple service with an easy-to-use API, and it is straightforward to integrate with, especially if you already have a client library (the Sentry docs refer to them as Ravens) for your language of choice. On the .Net side of things, there is a little library called SharpRaven.

As for integration, using a custom log4net appender such as this one is obviously a good way to go, but you still need to implement the try-catch-log pattern everywhere, unless you’re happy for exceptions to bubble all the way up to the app domain and to catch them there. And when I see implementation patterns, I see opportunities to automate them with PostSharp!

C# custom attributes

If you grab the SharpRaven-Contrib package from NuGet you’ll have access to a pair of custom attributes – RavenLogException and RavenLogExecutionTime – when you open the SharpRaven namespace. For example:
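A sketch of the attributes in use – shown in F# to match the rest of this post’s snippets (as noted below, the attributes work on methods regardless of language); the threshold property name is an assumption:

    open SharpRaven

    type GameService () =
        // unhandled exceptions are captured and logged to Sentry as errors
        [<RavenLogException>]
        // executions slower than the threshold are logged to Sentry as warnings;
        // the property name is an assumption
        [<RavenLogExecutionTime(ThresholdInMilliseconds = 100)>]
        member __.CheckTrap (playerId : string) =
            printfn "checking trap for %s" playerId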

The attributes do what they say on the tin: RavenLogException captures and logs exception information as errors in Sentry, whilst RavenLogExecutionTime monitors the execution time of your methods and logs any method execution that takes longer than your given threshold as a warning in Sentry.

For F#, however, whilst the attributes still work for methods, chances are you will spend most of your time writing and composing functions, and the attributes won’t help you there. So for F# I decided to do something slightly different.

F# workflows

Thankfully, in F#, we have computation expressions* (aka workflows) which already power language features such as async workflows and sequence comprehensions.

Using the workflows defined in the SharpRaven-ContribFs package you can create blocks of code where:

  • any unhandled exceptions are logged as Error in Sentry
  • if the block of code takes longer than the specified threshold to execute, it’ll be logged as a warning in Sentry

and your code remains unchanged – you simply wrap it in { }:
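Something along these lines (a sketch – the builder names below are assumptions; the real ones are defined in SharpRaven-ContribFs):

    open System
    open SharpRaven.FSharp   // assumed namespace

    // stand-ins for real logic
    let runTrapLogic   playerId = ignore playerId
    let runForageLogic playerId = ignore playerId

    // any unhandled exception inside the block is logged to Sentry as an error
    let checkTrap playerId = logException {
            return runTrapLogic playerId
        }

    // blocks that take longer than the threshold are logged as warnings
    let timed100 = logExecutionTime (TimeSpan.FromMilliseconds 100.0)
    let forage playerId = timed100 {
            return runForageLogic playerId
        }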

Of course, you can also just create wrapper functions to achieve the same results, but I find that using workflows in this case makes for more readable code. Another good alternative is to use a Maybe monad, which I won’t go into in detail here as Scott Wlaschin has a great explanation of it already.

 

As always, the source code for both libraries is available on GitHub, and if you find any issues feel free to report them via the issues page.

 

* if you’re interested in learning more about computation expressions, I highly recommend Scott Wlaschin’s series on his F# for Fun and Profit blog; it’s by far the most comprehensive and easy-to-understand set of articles I have seen.

 


C# – extern alias, and ILMerge’d assemblies

Suppose you want to merge an assembly A (AssemblyA.dll) with another assembly B (AssemblyB.dll) into a merged assembly (Merged.dll) using ILMerge. Everything works fine until a user of your merged assembly also references AssemblyB.dll, at which point they will get ambiguous reference errors for any reference to types defined in assembly B, for example:

[Screenshots: ambiguous reference compiler errors]

Understandably, the compiler is not happy here, because it finds duplicate definitions for TypeB under the same namespace in the two versions of assembly B (the one referenced in the user’s project and the one merged into assembly A).

So how do we get out of this unholy mess?

 

Well, there’s a little-known feature in .Net called extern alias, which allows you to give referenced assemblies an alias via the Aliases property in the Properties window for any referenced library (one I’m sure we have all seen countless times and wondered what it means).

By default the alias for an assembly is ‘global’, which simply means the global namespace, but you can change it via the Visual Studio Properties window or via command-line options to csc.exe:

[Screenshots: setting the alias in the Visual Studio Properties window and via the csc.exe command line]

Now the types defined in the Merged.dll assembly fall under the Merged alias, and to access them you first need to add a line to your code:

extern alias Merged;

and then anywhere you reference types from the Merged assembly, you need to prefix them with Merged::, like the following:

[Screenshot: referencing a type via the Merged:: prefix]

You might also want to give AssemblyB an alias, just to remove any doubt as to which assembly a type comes from whenever you reference a type defined in AssemblyB.

 

Whilst this is a way to get you out of a tight spot, it’s far from a clean solution, and as @BjoernRochel pointed out, good general advice is not to merge assemblies that you do not own in the first place:

[Tweet from @BjoernRochel]

Filbert v0.2.0 – performance improvement on decoding


Some time ago I put together a small BERT serializer and BERT-RPC client for .Net called Filbert (another name for a hazelnut – it contains the word ‘bert’, starts with the letter F, and at the time every F# library had a leading F in its name!).

As an experimental project, admittedly I hadn’t given too much thought to performance, and as you can see below, the numbers didn’t make for flattering reading, even against the BCL’s BinaryFormatter!

I finally found some time to take a stab at improving the dreadful deserialization speed, and with a couple of small changes I was able to halve the deserialization time in a simple benchmark; there are still a couple of low-hanging fruits that can improve things further.

[Image: benchmark results]

The new version is up on NuGet, so feel free to play around with it, let me know if you have any feedback, and raise any issues or bugs on the Issues page for the project.