My .Net Rocks talk is available

Hi, just a quick note to say that my talk with .Net Rocks is now available on their web site. In this talk I shared some insights into how F# is used in our stack to help us build the backend for our social games, specifically in the areas of:

 

Enjoy!

Shlomo Swidler’s Many Cloud Design Patterns slides

This is so good I keep going back to it, so to save myself (and you) the hassle of searching for it every time, I thought I'd share it here on my blog. Enjoy!

Slides and Source Code for my webinar with PostSharp

Following my recent webinar with SharpCrafters on how to set up pseudo real-time performance monitoring using Aspect Oriented Programming and Amazon CloudWatch, I'd like to say thanks to the guys for having me – it was great fun!

For anyone interested, the source code is available at:

http://aop-demo.s3.amazonaws.com/RTPerfMonDemo.zip

If you want to run the demo console app to generate some data, you need to put your AWS key and secret in the App.config file in the Demo.ConsoleApp project:

[Screenshot: AWS credentials in the Demo.ConsoleApp App.config]
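
For reference, it's just a pair of appSettings entries along these lines (the key names here are illustrative – check the App.config in the download for the exact names):

    <configuration>
      <appSettings>
        <!-- illustrative key names: check the actual App.config in the demo -->
        <add key="AWSAccessKey" value="YOUR-ACCESS-KEY" />
        <add key="AWSSecretKey" value="YOUR-SECRET-KEY" />
      </appSettings>
    </configuration>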

Just go to aws.amazon.com and create an account; you'll then be given an access key and secret to use.

The slides for the session are also available to download on SlideShare.

Enjoy!

Pseudo real-time performance monitoring with AOP and AWS CloudWatch

This is something I've mentioned in my recent AOP talks, and I think it's worthy of a wider audience as it can be very useful to anyone who's as obsessed with performance as I am.

At iwi, we take performance very seriously and are always looking to improve the performance of our applications. In order to identify the problem areas and focus our efforts on the big wins, we first need a way to measure and monitor the performance of the individual components inside our system, sometimes down to the method level.

Fortunately, with the help of AOP and AWS CloudWatch, we're able to get a pseudo real-time view of how frequently a method is executed and how long it takes to execute, down to one-minute intervals:

[Screenshot: CloudWatch graph of per-minute method execution counts and times]

With this information, I can quickly identify methods that are the worst offenders and focus my profiling and optimization efforts around those particular methods/components.

Whilst I cannot disclose any implementation details in this post, I hope it'll be sufficient to give you an idea of how you might implement a similar mechanism.

AOP

A while back I posted about a simple attribute for watching method execution times and logging warning messages when a method takes longer than some predefined threshold.

Now, it’s possible and indeed easy to modify this simple attribute to instead keep track of the execution times and bundle them up into average/min/max values for a given minute. You can then publish these minute-by-minute metrics to AWS CloudWatch from each virtual instance and let the CloudWatch service itself handle the task of aggregating all the data-points.
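
To give a rough idea of the general shape (this is a minimal sketch, not our actual implementation – the names, the flush mechanism and the CloudWatch namespace are all illustrative, and the exact AWSSDK types vary between versions), such an aspect might record per-method timings and flush them to CloudWatch once a minute:

    using System;
    using System.Collections.Concurrent;
    using System.Collections.Generic;
    using System.Diagnostics;
    using System.Linq;
    using System.Threading;
    using Amazon.CloudWatch;
    using Amazon.CloudWatch.Model;
    using PostSharp.Aspects;

    [Serializable]
    public class ExecutionTimeAttribute : OnMethodBoundaryAspect
    {
        // raw timings recorded since the last flush: (method name, elapsed ms)
        internal static readonly ConcurrentQueue<Tuple<string, double>> Timings =
            new ConcurrentQueue<Tuple<string, double>>();

        public override void OnEntry(MethodExecutionArgs args)
        {
            // stash a stopwatch against this particular invocation
            args.MethodExecutionTag = Stopwatch.StartNew();
        }

        public override void OnExit(MethodExecutionArgs args)
        {
            var stopwatch = (Stopwatch)args.MethodExecutionTag;
            Timings.Enqueue(Tuple.Create(args.Method.Name, stopwatch.Elapsed.TotalMilliseconds));
        }
    }

    public static class MetricsPublisher
    {
        private static readonly IAmazonCloudWatch Client = new AmazonCloudWatchClient();

        // flush the accumulated timings once a minute
        private static readonly Timer FlushTimer = new Timer(_ => Flush(), null, 60000, 60000);

        private static void Flush()
        {
            var timings = new List<Tuple<string, double>>();
            Tuple<string, double> timing;
            while (ExecutionTimeAttribute.Timings.TryDequeue(out timing))
            {
                timings.Add(timing);
            }

            // one StatisticSet (count/sum/min/max) per method for the past minute
            var data = timings
                .GroupBy(t => t.Item1)
                .Select(g => new MetricDatum
                {
                    MetricName = g.Key,
                    Unit = StandardUnit.Milliseconds,
                    StatisticValues = new StatisticSet
                    {
                        SampleCount = g.Count(),
                        Sum = g.Sum(t => t.Item2),
                        Minimum = g.Min(t => t.Item2),
                        Maximum = g.Max(t => t.Item2)
                    }
                })
                .ToList();

            if (data.Any())
            {
                // note: PutMetricData caps the number of datums per call,
                // so a real implementation would batch these up
                Client.PutMetricData(new PutMetricDataRequest
                {
                    Namespace = "MyApp/ExecutionTimes", // hypothetical namespace
                    MetricData = data
                });
            }
        }
    }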

By encapsulating the logic of measuring execution time into an attribute, you can start measuring a particular method by simply applying the attribute to it. Alternatively, PostSharp supports pointcuts and lets you multicast an attribute to many methods at once, filtering the target methods by name as well as visibility. It is therefore possible to start measuring and publishing the execution time of ALL public methods in a class/assembly with only one line of code!
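
For example, a single assembly-level multicast like this (the attribute and namespace carry over from the sketch above, so purely illustrative) would apply the aspect to every public method under a given namespace:

    using PostSharp.Extensibility;

    // in AssemblyInfo.cs: apply the aspect to all public methods of all
    // types under MyApp.Services (the namespace is hypothetical)
    [assembly: ExecutionTime(
        AttributeTargetTypes = "MyApp.Services.*",
        AttributeTargetMemberAttributes = MulticastAttributes.Public)]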

CloudWatch

The CloudWatch service should be familiar to anyone who has used AWS EC2 before; it's a monitoring service primarily for AWS cloud resources (virtual instances, load balancers, etc.), but it also allows you to publish your own data about your application. Even if your application is not hosted inside AWS EC2, you can still make use of the CloudWatch service as long as you have an AWS account and a valid AWS access key and secret.

Once published, you can visualize your data inside the AWS web console; depending on the type of data you're publishing, there are a number of different ways to view it – Average, Min, Max, Sum, Count, etc.

Note that AWS only keeps up to two weeks' worth of data, so if you want to keep the data for longer you'll have to query and store it yourself. For instance, it makes sense to keep a history of hourly averages for the method execution times you're tracking, so that in the future you can easily see where and when a particular change impacted the performance of those methods. After all, storage is cheap, and even with thousands of data points you'll only be storing that many rows per hour.
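
As a sketch of what that might look like (the metric and namespace names carry over from the earlier example, and the request/response shapes differ slightly between AWSSDK versions):

    using System;
    using System.Collections.Generic;
    using Amazon.CloudWatch;
    using Amazon.CloudWatch.Model;

    public static class MetricsArchiver
    {
        // pull back hourly averages for one method's execution time metric
        public static IEnumerable<Datapoint> GetHourlyAverages(string methodName)
        {
            var client = new AmazonCloudWatchClient();
            var response = client.GetMetricStatistics(new GetMetricStatisticsRequest
            {
                Namespace = "MyApp/ExecutionTimes", // hypothetical namespace
                MetricName = methodName,
                StartTime = DateTime.UtcNow.AddDays(-14), // CloudWatch's retention window
                EndTime = DateTime.UtcNow,
                Period = 3600, // one datapoint per hour
                Statistics = new List<string> { "Average" }
            });

            // each datapoint carries a Timestamp and the Average for that hour -
            // persist these to your own store to keep them beyond two weeks
            return response.Datapoints;
        }
    }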

S3 – Use a using block to get the stream

When using the Amazon S3 client, have you come across the occasional exception with a message like one of these:

“The request was aborted: The connection was closed unexpectedly”

“Unable to read data from the transport connection: A blocking operation was interrupted by a call to WSACancelBlockingCall”

“Unable to read data from the transport connection: An established connection was aborted by the software in your host machine”

If you do, then you’re probably attempting to return the response stream directly back to the rest of your application with something like this:

    var response = _s3Client.GetObject(request);
    return response.ResponseStream;

However, because the stream is coming from the Amazon S3 service and is fed to your code in chunks, your code needs to ensure that the connection to S3 stays open until all the data has been received. So, as mentioned in the S3 documentation (which, incidentally, most of us don't read in great detail…) here, you should wrap the response you get from the GetObject method in a using clause.

Depending on what you want to do with the stream, you might have to handle it differently. For instance, if you just want to read the string content of a text file, you can do this:

    using (var response = _s3Client.GetObject(request))
    {
        using (var reader = new StreamReader(response.ResponseStream))
        {
            return reader.ReadToEnd();
        }
    }

Alternatively, if you want to return the response stream itself, you'll need to first load the stream in its entirety and return the loaded stream. Unfortunately, at the time of writing the AWSSDK library hasn't been migrated to .Net 4 and therefore doesn't have the uber-useful CopyTo method added in .Net 4, so you'll most likely have to do the heavy lifting yourself and read the data out manually into a memory stream:

    using (var response = _s3Client.GetObject(request))
    {
        var binaryData = ReadFully(response.ResponseStream);
        return new MemoryStream(binaryData);
    }

    /// <summary>
    /// See Jon Skeet's article on reading binary data:
    /// http://www.yoda.arachsys.com/csharp/readbinary.html
    /// </summary>
    public static byte[] ReadFully(Stream stream, int initialLength = -1)
    {
        // If we've been passed an unhelpful initial length, just
        // use 32K.
        if (initialLength < 1)
        {
            initialLength = 32768;
        }

        byte[] buffer = new byte[initialLength];
        int read = 0;

        int chunk;
        while ((chunk = stream.Read(buffer, read, buffer.Length - read)) > 0)
        {
            read += chunk;

            // If we've reached the end of our buffer, check to see if there's
            // any more information
            if (read == buffer.Length)
            {
                int nextByte = stream.ReadByte();

                // End of stream? If so, we're done
                if (nextByte == -1)
                {
                    return buffer;
                }

                // Nope. Resize the buffer, put in the byte we've just
                // read, and continue
                byte[] newBuffer = new byte[buffer.Length * 2];
                Array.Copy(buffer, newBuffer, buffer.Length);
                newBuffer[read] = (byte)nextByte;
                buffer = newBuffer;
                read++;
            }
        }

        // Buffer is now too big. Shrink it.
        byte[] ret = new byte[read];
        Array.Copy(buffer, ret, read);
        return ret;
    }