Upcoming webinar on Localization and Design Pattern Automation

Hello, just a quick note to say that I’m doing a webinar with the PostSharp folks on a technique my team developed whilst working on Here Be Monsters (an MMORPG that had more text than the first 3 Harry Potter books combined) which allowed us to localise the whole game with a handful of lines of code and an hour’s worth of work.

The webinar will be held at 12:00 EST / 17:00 GMT on Tuesday 15th November, and you can register for the webinar here.

In the webinar, I’ll cover:

  • common practices of localization
  • challenges and problems with these common approaches
  • how to rethink the localization problem as an automatable implementation pattern
  • pattern automation with PostSharp


Fasterflect vs HyperDescriptor vs FastMember vs Reflection

The other day I had a small task: inspect the return values of methods and, if the following property exists, set it to an empty array.

        public long[] Achievements { get; set; }

This needed to happen once on every web request, and I decided to implement it as a PostSharp attribute. WHY I needed to do this is another interesting discussion, but broadly speaking it boils down to assumptions baked into the base class’s control flow no longer holding true, and the base class being inflexible to change.

A major refactoring is in order, but given we have a deadline to meet, let’s take on some technical debt and make sure it’s in the backlog so we know to come back to it.


So I went about finding a more efficient way of doing this than hand-writing some reflection code.

There’s Jon Skeet’s famous reflection post, but making it work across all types and incorporating IL emit is more complex than this task warrants.

sidebar : if you’re still interested in seeing how Jon’s technique can be applied, check out FSharpx.Reflection to see how it’s used to make reflection over F# types fast.

Then there’s Marc Gravell’s HyperDescriptor, although the NuGet package itself was authored by someone else. Plus Marc’s follow-up to HyperDescriptor – FastMember.

I also came across a package called Fasterflect which I wasn’t aware of before; from its project page the API looks pretty clean.


Test 1 – 1M instances of MyTypeA

Suppose I have a type with the Achievements property that I want to set to empty whenever I see it returned by a method.


To do this with reflection is pretty straightforward:
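A minimal sketch of the reflection-based approach (MyTypeA matches the property above; the helper name is my own):

```csharp
using System;

public class MyTypeA
{
    public long[] Achievements { get; set; }
}

public static class Demo
{
    // look up the property by name and, if present, set it to an empty array
    public static void ClearAchievements(object obj)
    {
        var prop = obj.GetType().GetProperty("Achievements");
        if (prop != null && prop.PropertyType == typeof(long[]) && prop.CanWrite)
        {
            prop.SetValue(obj, new long[0], null);
        }
    }

    public static void Main()
    {
        var a = new MyTypeA { Achievements = new long[] { 1L, 2L, 3L } };
        ClearAchievements(a);
        Console.WriteLine(a.Achievements.Length);   // prints 0
    }
}
```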


With Fasterflect, this becomes:
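Something along these lines (a sketch that assumes the Fasterflect NuGet package, which exposes reflection as extension methods):

```csharp
// a sketch, assuming the Fasterflect NuGet package
using Fasterflect;

public static class FasterflectSketch
{
    public static void ClearAchievements(object obj)
    {
        // Fasterflect adds extension methods on Type/object for lookups and sets
        if (obj.GetType().Property("Achievements") != null)
        {
            obj.SetPropertyValue("Achievements", new long[0]);
        }
    }
}
```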


I do like this API, it’s very intuitive.

And here’s how it looks with HyperDescriptor and FastMember:
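Sketches of both, assuming the HyperDescriptor and FastMember NuGet packages (the helper names are my own):

```csharp
// sketches assuming the HyperDescriptor and FastMember NuGet packages
using System.ComponentModel;
using FastMember;
using Hyper.ComponentModel;

public static class Sketches
{
    // HyperDescriptor: plug in the provider once, then go through TypeDescriptor
    public static void ClearWithHyperDescriptor(object obj)
    {
        HyperTypeDescriptionProvider.Add(obj.GetType());
        var prop = TypeDescriptor.GetProperties(obj)["Achievements"];
        if (prop != null) prop.SetValue(obj, new long[0]);
    }

    // FastMember: TypeAccessor caches IL-emitted accessors per type
    public static void ClearWithFastMember(object obj)
    {
        var accessor = TypeAccessor.Create(obj.GetType());
        accessor[obj, "Achievements"] = new long[0];
    }
}
```

Note the FastMember indexer will throw if the member doesn’t exist, so in a mixed-type scenario you’d want to check membership first.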


Now let’s run this across 1M instances of MyTypeA and see how they do. Both Fasterflect and FastMember did really well, although HyperDescriptor was 3x slower than basic reflection!



Test 2 – 1M instances of MyTypeA and MyTypeB

OK, but this code has to work across many types, so we have to expect both

  • types that have this property, and
  • types that don’t

To simulate that, let’s introduce another type into the test.


Instead of working with an array of MyTypeA, now we have a mixed array of both MyTypeA and MyTypeB for our test.


and the result over 1M objects makes for interesting reading:


some observations from this set of results:

  • we need to invoke the setter on only half the objects, so reflection is faster than before (time almost halved), which makes sense;
  • both FastMember and HyperDescriptor are faster due to the same reason as above;
  • having less work to do had a much smaller impact on FastMember, which suggests some caching around the call site (and indeed it does);
  • WTF is going on with Fasterflect!



The moral of this story is – always verify claims against your particular use case.

Another perfect example of this is Kyle Kingsbury’s work with Jepsen, where he uses a generative testing approach to verify whether NoSQL databases actually provide the consistency models they claim to offer. His findings are very interesting to read, and in many cases pretty worrying.

Oh, and stick with FastMember or reflection.



Beware of implicit boxing of value types

In the last post, we looked at some inefficiencies with reference types in .Net and perhaps oversold value types a little. In any case, now that we’ve made the initial sale and you’re back for more, let’s talk about some pitfalls wrt the use of value types you should be aware of. Specifically, let’s focus on cases where the CLR will cause implicit boxing of your value types.


We all know that when we cast a value type to object, we cause boxing. For instance, if we need to shove an int into an object[] or an ArrayList.
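For example:

```csharp
using System;
using System.Collections;

class BoxingDemo
{
    static void Main()
    {
        int x = 42;

        object boxed = x;            // explicit cast to object: a copy of x is boxed onto the heap
        object[] arr = { x };        // storing an int into object[] boxes too
        var list = new ArrayList();
        list.Add(x);                 // ArrayList.Add(object) boxes as well

        Console.WriteLine(boxed);    // prints 42
    }
}
```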

This is not great, but at least we’re doing this consciously and have had the chance to make a decision about it. However, there are a number of situations where the CLR will emit a box IL instruction for us implicitly without us realizing. These are far worse.


When you invoke a virtual method

Value types inherit from System.ValueType, which itself inherits from System.Object. Amongst other things, System.ValueType provides an override for Equals that gives value types the default compare-by-value behaviour.

However, a value type is stored in memory without the Method Table Pointer (see the last post), so in order to dispatch a virtual method call it’ll first have to be boxed into a reference type.

There is an exception to this, though: the CLR is able to call Equals directly if the value type overrides the Equals method (which is why it’s a best practice to do so).

Aside from benefiting from the CLR’s short-circuit behaviour above, another good reason to override Equals in your custom value type is that the default implementation uses reflection to fetch its fields to compare.

Furthermore, it then uses UnsafeGetValue to get the value of the field on both the current object and the object being compared to, both of which cause further boxing as FieldInfo.UnsafeGetValue takes an object as argument.

But wait, there’s more…

Even if you override Equals(object other), it’ll still cause boxing of the input argument. To get around this you’ll need to overload Equals to take another Point2D instance instead, i.e. Equals(Point2D other), which is of course another recommended best practice from the previous post.

Given these three versions of Point2D:

  • V1 = plain old struct
  • V2 = overrides Equals(object other)
  • V3 = overloads Equals(Point2DV3 other)
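The three versions can be sketched as follows (field names are assumptions):

```csharp
// V1: plain old struct - inherits ValueType.Equals, so every comparison boxes
public struct Point2DV1
{
    public int X;
    public int Y;
}

// V2: overrides Equals(object) - the CLR can call it directly, but the
// argument still has to be boxed (GetHashCode is overridden here only to
// silence the compiler warning)
public struct Point2DV2
{
    public int X;
    public int Y;

    public override bool Equals(object obj)
    {
        if (!(obj is Point2DV2)) return false;
        var other = (Point2DV2)obj;   // unbox the argument
        return X == other.X && Y == other.Y;
    }

    public override int GetHashCode() { return X ^ Y; }
}

// V3: overloads Equals to take another Point2DV3 - no boxing on either side
public struct Point2DV3
{
    public int X;
    public int Y;

    public bool Equals(Point2DV3 other)
    {
        return X == other.X && Y == other.Y;
    }
}
```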

We can see how they fare when we iterate through 10M instances of each, calling Equals each time.

A couple of things to note from the above:

  • V1 causes twice as much boxing as V2 (makes sense given the short-circuiting behaviour)
  • V1 also takes nearly four times as long to execute compared to V2
  • V3 causes no boxing, and runs in nearly half the time as V2!

sidebar : I chose to run the test in F# because it’s easy to get quick insight (real and CPU time + GC counts) when you enable the #time directive. However, I had to define the types in C# because by default F# compiles struct types with all the optimizations we have talked about – which means:

a. it’s a good reason to use F#; but

b. it defeats the purpose of running these tests!


Unfortunately, there is still one more edge case…


When you call List<T>.Contains

When you call List<T>.Contains, an instance of EqualityComparer<T> will be used to compare the argument against every element of the list.

This eventually causes a new EqualityComparer<T> to be created.

In the default case (where Point2D doesn’t implement the IEquatable<T> interface), an ObjectEqualityComparer<T> will be returned. And it is here that the overridden/inherited Equals(object other) method will be used, causing boxing to occur for every comparison!

If, on the other hand, Point2D implements the IEquatable<T> interface then the outcome will be very different. This allows some clever logic to kick in and use the overloaded Equals(Point2D other) instead.


So now, let’s introduce V4 of Point2D that implements the IEquatable<T> interface and see how it compares to

  • V2 = overridden Equals(Object other) ; and
  • V3 = overloaded Equals(Point2DV3 other)

when used in a List<T>.
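V4 might look something like this (a sketch):

```csharp
using System;

// V4: implements IEquatable<T>, so EqualityComparer<T>.Default picks a
// comparer that calls the strongly-typed Equals - no boxing in List<T>.Contains
public struct Point2DV4 : IEquatable<Point2DV4>
{
    public int X;
    public int Y;

    public bool Equals(Point2DV4 other)
    {
        return X == other.X && Y == other.Y;
    }
}
```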

For V2, List<T>.Contains performs the same as our hand-coded version, but the improvements we made with V3 are lost due to the reasons outlined above.

V4 rectified this by allowing the optimization in List<T> to kick in.


Which brings us to the next implicit boxing…


When you invoke an interface method

Like virtual methods, in order to dispatch an interface method you also need the Method Table Pointer, which means boxing is required.

Fortunately, the CLR is able to short-circuit this by calling the method directly if the compile-time type is resolved to the actual value type (e.g. Point2D) rather than the interface type.


For this test, we’ll try:

  • invoke Equals(Point2DV4 other) via the IEquatable<Point2DV4> interface; vs
  • invoke Equals(Point2DV4 other) directly on Point2DV4
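A minimal sketch of the two call styles (Point2DV4 is redefined here to keep the snippet self-contained):

```csharp
using System;

public struct Point2DV4 : IEquatable<Point2DV4>
{
    public int X, Y;
    public bool Equals(Point2DV4 other) { return X == other.X && Y == other.Y; }
}

class InterfaceDispatchDemo
{
    static void Main()
    {
        var p = new Point2DV4 { X = 1, Y = 2 };
        var q = p;

        IEquatable<Point2DV4> viaInterface = p;    // boxes p here
        Console.WriteLine(viaInterface.Equals(q)); // dispatched via the interface

        Console.WriteLine(p.Equals(q));            // compile-time type is the
                                                   // struct itself: no boxing
    }
}
```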


As you can see, invoking the interface method Equals(Point2DV4 other) does indeed incur boxing once for each instance of Point2D.


When Dictionary<TKey, TValue> invokes GetHashCode

GetHashCode is used by hash-based collection types, the most common being Dictionary<TKey, TValue> and HashSet<T>.

In both cases, it’s invoked through the IEqualityComparer<T> type we talked about earlier, and in both cases the comparer is also initialized through EqualityComparer<T>.Default and the CreateComparer method.

GetHashCode is invoked in many places within Dictionary<TKey, TValue> – on Add, ContainsKey, Remove, etc.

For this test let’s find out the effects of:

  • implementing the IEquatable<T> interface in terms of boxing
  • overriding GetHashCode (assuming the default implementation requires the use of reflection)

but first, let’s create V5 of our Point2D struct, this time with an overridden GetHashCode implementation (albeit a bad one, which is OK since we only want to see the performance implication of having one).
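A sketch of V5:

```csharp
using System;

// V5: IEquatable<T> plus a GetHashCode override (a deliberately naive one -
// we only care about avoiding the reflection-based default here)
public struct Point2DV5 : IEquatable<Point2DV5>
{
    public int X;
    public int Y;

    public bool Equals(Point2DV5 other) { return X == other.X && Y == other.Y; }

    public override bool Equals(object obj)
    {
        return obj is Point2DV5 && Equals((Point2DV5)obj);
    }

    public override int GetHashCode() { return X; }   // bad hash, OK for this test
}
```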

In this test, we have:

  • V3 = no GetHashCode override, doesn’t implement IEquatable<T>
  • V4 = no GetHashCode override, implements IEquatable<T>
  • V5 = has GetHashCode override, implements IEquatable<T>

I used a much smaller sample size here (10K instead of 10M) because the amount of time it took to add even 10K items to a dictionary was sufficient to illustrate the difference. Even at this small sample size the difference is very noticeable. A couple of things to note:

  • since V3 doesn’t implement IEquatable<T> it incurs boxing, and a lot of it because Equals is also called through the IEqualityComparer<T>
  • V4 eliminates the need for boxing but is still pretty slow due to the default GetHashCode implementation
  • V5 addresses both of these problems!

sidebar : with GetHashCode there are also considerations around what makes a good hash code, e.g. low chance of collision, evenly distributed, etc. but that’s another discussion for another day. Most times I just let tools like Resharper work out the implementation based on the fields my value type has.



As we have seen, there are a number of places where implicit boxing can happen, and these boxings can make a noticeable difference to your performance. So, to reiterate what was already said in the previous post, here are some best practices for using value types:

  • make them immutable
  • override Equals (the one that takes an object as argument);
  • overload Equals to take another instance of the same value type (e.g. Equals(Point2D other));
  • overload operators == and !=;
  • override GetHashCode



Smallest .Net ref type is 12 bytes (or why you should consider using value types)

(Update 2015/07/21 : read the next post in this series to learn about the places where implicit boxing happens with value types and how you can prevent them)


I chanced upon Sasha Goldshtein’s excellent book, Pro .Net Performance : Optimize Your C# Application, a few years back and I thoroughly enjoyed it.


Even though it has been almost 3 years since I read it and I have forgotten many of the things I learnt, I still remember this line very clearly:

…in fact, even a class with no instance fields will occupy 12 bytes when instantiated…

this is, of course, limited to a 32-bit system. On a 64-bit system, the smallest reference type instance will take up 24 bytes of memory!



The reason for this is due to the way .Net objects are laid out in memory where you have:

  • 4 bytes for Object Header Word;
  • 4 bytes for Method Table Pointer; and
  • min of 4 bytes for instance fields

When your class has only a byte field it’ll still take up 4 bytes of space for instance fields because objects have to align with 4-byte multiples. Even when your class has no instance fields, it’ll still take up 4 bytes. Therefore, on a 32-bit system, the smallest reference type instance will be 12 bytes.

On a 64-bit system, a word is 8 bytes instead of 4, and objects are aligned to the nearest 8-byte multiple, so the smallest reference type instance in this case will be 24 bytes.

sidebar : The Object Header Word and Method Table Pointer are used by the JIT and CLR. The book goes into a lot of detail on the structure and purpose of these, which for this blog post we’ll ignore. If you’re interested in learning more about them, go buy the book, it’ll be money well spent.


Reference vs Value Type

There’s plenty to talk about when it comes to reference vs value types, including:

  • stack vs heap allocation;
  • default equality semantics, i.e. compare-by-ref vs compare-by-value;
  • pass-by-value vs pass-by-ref;

For the purpose of this post let’s focus on how they differ in terms of memory consumption and cache friendliness when you have a large array of each.

Memory Consumption

Suppose you have a Point2D class with only X and Y integer fields; each instance of this type will occupy a total of 16 bytes in the heap (including 8 bytes for X and Y). Taking into account the 4-byte reference pointer (again, on a 32-bit system), it brings our total investment per instance of Point2D to 20 bytes!

If you have an array with 10M instances of Point2D then you’ll have committed 190MB of memory for this array!

On the other hand, if Point2D was a value type then each instance would only take up 8 bytes for the X and Y values, and they’ll be tightly packed into the array without each needing an additional 4 bytes for reference pointer. Overall the amount of memory you’re committing to this array would drop to 76MB.
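A rough way to verify these numbers yourself is to compare GC.GetTotalMemory before and after allocating each array (a sketch; the exact figures will vary by runtime and bitness):

```csharp
using System;

class PointClass   { public int X; public int Y; }
struct PointStruct { public int X; public int Y; }

class MemoryDemo
{
    const int N = 10000000;

    static void Main()
    {
        long before = GC.GetTotalMemory(true);
        var classes = new PointClass[N];
        for (int i = 0; i < N; i++) classes[i] = new PointClass();
        long after = GC.GetTotalMemory(true);
        Console.WriteLine("reference type: {0:N0} bytes", after - before);
        GC.KeepAlive(classes);

        before = GC.GetTotalMemory(true);
        var structs = new PointStruct[N];   // X and Y are stored inline in the array
        after = GC.GetTotalMemory(true);
        Console.WriteLine("value type:     {0:N0} bytes", after - before);
        GC.KeepAlive(structs);
    }
}
```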

Cache Friendliness

Whilst there are no inherent differences in the speed of accessing stack vs heap allocated memory (they are just different ranges of addresses in the virtual memory after all) there are a number of performance-related considerations.

Stack allocated memory does not incur GC overhead

The stack is self-managed in the sense that when you leave a scope you just move the pointer back to the previous position and you’d have “deallocated” the previously allocated memory.

With heap allocated memory you incur the overhead of a rather sophisticated generational GC, which as part of a collection needs to move surviving objects around to compact the memory space (which becomes more and more expensive as you move from gen 0 to 2).

sidebar : I remember reading a post a few years ago that discussed how the StackOverflow guys went through their entire codebase and converted as many classes to struct as they could in order to reduce latency spikes on their servers due to GC pauses. Whilst I’m not advocating for you to do the same, it illustrates that GC pauses are a common problem on web servers. The Background GC mode introduced in recent .Net releases would have reduced the frequency of these pauses, but sensible use of value types would still help in this regard.

On the stack, temporal locality implies spatial locality and temporal access locality

This means that objects that are allocated close together in time are also stored close together in space. Furthermore, objects allocated close in time (e.g. inside the same method) are also likely to be used close together.

Both spatial and temporal access locality play nicely with how the cache works (i.e. fewer cache misses) and how OS paging works (i.e. fewer virtual memory swaps).

On the stack, memory density is higher

Since value types don’t have the overheads that come with reference types – the Object Header Word and Method Table Pointer – you’re able to pack more objects into the same amount of memory.

Higher memory density leads to better performance because it means fewer fetches from memory.

On modern hardware, the entire thread stack can fit into CPU cache

When you spawn a new thread, it’s allocated 1MB of stack space, whereas CPUs nowadays come with much bigger L3 caches (e.g. Nehalem has up to 24MB of L3 cache), so the entire stack can fit into the L3 cache, which is a lot faster to access than main memory (have a look at this talk from Gael Fraiteur, creator of PostSharp, to see just how access times differ).


For better or worse, modern hardware is built to take advantage of both spatial and temporal localities and these optimizations manifest themselves in how data is fetched from main memory in cache lines.

Consider what happens at a hardware level when you iterate through an array of 10 million Point2D objects. If Point2D was a reference type, then each time you iterate over an instance of it in the array:

  • CPU will fetch a cache line (typically 64 bytes) starting from where the array element is
  • it’ll access the memory to get the reference pointer (4 bytes) for the object in the heap
  • it’ll discard the rest of the cache line
  • it’ll fetch another cache line (again, 64 bytes) starting from where the object is
  • it’ll read the object and do whatever it is you asked it to do
  • and then it’ll do the same for the next element in the array and so on…

notice how much work goes to waste here due to reference jumping?

On the other hand, if Point2D was a value type with only X and Y values (8 bytes):

  • CPU will fetch a cache line (64 bytes) starting from where the array element is
  • it’ll read the object (8 bytes) and do its work
  • it’ll try to get the next object, realizing that it’s already in the cache (what a lucky day!)
  • it’ll read the next object (8 bytes) from the existing cache line and so on

the CPU is able to read 8 objects from one fetch vs. 2 fetches per object!


To see how this cache friendliness translates into something we can easily measure – execution time – let’s consider this simple example:
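A sketch of such a test, timing the create and iterate steps with Stopwatch (type names are my own):

```csharp
using System;
using System.Diagnostics;

class PointClass   { public int X; public int Y; }
struct PointStruct { public int X; public int Y; }

class Benchmark
{
    const int N = 10000000;

    static void Main()
    {
        var sw = Stopwatch.StartNew();
        var classes = new PointClass[N];
        for (int i = 0; i < N; i++) classes[i] = new PointClass { X = i, Y = -i };
        Console.WriteLine("create  (class):  {0}ms", sw.ElapsedMilliseconds);

        sw.Restart();
        long sum = 0;
        for (int i = 0; i < N; i++) sum += classes[i].X + classes[i].Y;
        Console.WriteLine("iterate (class):  {0}ms", sw.ElapsedMilliseconds);

        sw.Restart();
        var structs = new PointStruct[N];   // allocated inline, one contiguous block
        for (int i = 0; i < N; i++) structs[i] = new PointStruct { X = i, Y = -i };
        Console.WriteLine("create  (struct): {0}ms", sw.ElapsedMilliseconds);

        sw.Restart();
        sum = 0;
        for (int i = 0; i < N; i++) sum += structs[i].X + structs[i].Y;
        Console.WriteLine("iterate (struct): {0}ms", sw.ElapsedMilliseconds);
    }
}
```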

Here is the result of running the tests above in release mode:


or in tabular format:

                     create 10M objects      iterate over 10M objects
    reference type   …                       …
    value type       …                       …

As well as the additional memory overhead, it’s also over 10x slower to create and 3x slower to iterate over an array of 10 million objects when they are defined as a reference type.

You might say “well, these are still only milliseconds!”, but for a web server that needs to deal with a large number of concurrent requests, these margins are very significant!

In some areas of software, such as high frequency trading or arbitrage trading, even nanoseconds can make a big difference to your bottom line.

sidebar : people are doing some really funky stuff in those areas to gain a slight edge over each other, including:

  • building data centres right next to the exchange;
  • using microwaves to transmit data, so that data arrives faster than via fibre optic cables;
  • using FPGA to run your trading logic;
  • etc.

It’s an area of finance that is genuinely quite interesting and I have heard some fascinating tales from friends who are working in these areas.



Using value types sensibly is a very powerful way to improve the performance of your .Net application – this is true for both C# and F#. Without going overboard, you should consider using structs if:

  • the objects are small and you need to create lots of them;
  • you need high memory density;
  • memory consumption is a constraint;
  • you need to iterate large arrays of them frequently

In addition, here are some tips to help you “get value types right”:

  • make them immutable;
  • override Equals (the one that takes an object as argument);
  • overload Equals to take another instance of the same value type (e.g. Equals(Point2D other));
  • overload operators == and !=
  • override GetHashCode

In the next post we’ll look at some pitfalls wrt the use of value types, and why we have the best practices above.


Scott Meyers did a great talk on CPU cache at last year’s NDC Oslo and touched on other topics such as false sharing, etc. It’s a good starting point if you’re new to how CPU cache affects the performance of your code.

Martin Thompson (of Disruptor fame) has an excellent blog where he writes about many related topics.

Martin’s posts use Java as example but many of the lessons are directly applicable in .Net too.



Seven ineffective coding habits many F# programmers don’t have

This post is part of the F# Advent Calendar in English 2014 project. Check out all the other great posts there!

Special thanks to Sergey Tihon for organizing this.

A good coding habit is an incredibly powerful tool: it allows us to make good decisions with minimal cognitive effort and can be the difference between being a good programmer and a great one.

“I’m not a great programmer; I’m just a good programmer with great habits.”

– Kent Beck

A bad habit, on the other hand, can forever condemn us to repeat the same mistakes and is difficult to correct.


I attended Kevlin Henney’s “Seven Inef­fec­tive Cod­ing Habits of Many Pro­gram­mers” talk at the recent BuildStuff conference. As I sat in the audience and reflected on the times I exhibited many of the bad habits he identified and challenged, something hit me – I was coding in C# in pretty much every case even though I also spend a lot of time in F#.

Am I just a bad C# developer who’s better at F#, or did the language I use make a bigger difference than I realized at the time?

With this question in mind, I revisited the seven ineffective coding habits that Kevlin identified and pondered why they are habits that many F# programmers don’t have.


Noisy Code

Signal-to-noise ratio sometimes refers to the ratio of useful information to false or irrelevant data in a conversation or exchange. Noisy code requires greater cognitive effort on the reader’s part, and is more expensive to maintain.

We have a lot of habits which – without us realising it – add noise to our code. But often the language we use has a big impact on how much noise is added, and the C-style syntax is a big culprit here.

Objective comparisons between languages are usually difficult because comparing different language implementations across different projects introduces too many other variables. Comparing different language implementations of a small project is achievable, but cannot answer how well the solutions scale up to bigger projects.

Fortunately, Simon Cousins was able to provide a comprehensive analysis of two code-bases written in different languages – C# and F# – implementing the same application.

The application was non-trivial (~350k lines of C# code) and the numbers speak for themselves:




Not only is the F# implementation shorter and generally more useful (i.e. higher signal-to-noise ratio), according to Simon’s post it also took a fraction of the man-hours to provide a more complete implementation of the requirements:

“The C# project took five years and peaked at ~8 devs. It never fully implemented all of the contracts.

The F# project took less than a year and peaked at three devs (only one had prior experience with F#). All of the contracts were fully implemented.”

In summary, by removing the need for { } as a core part of the language structure and by not having nulls, F# removes a lot of the noise that is usually found in C# code.


Visual Dishonesty

“…a clean design is one that supports visual thinking so people can meet their informational needs with a minimum of conscious effort.”

– Daniel Higginbotham

When it comes to code, visual honesty is about laying out your code so that its visual relationships are obvious and accurate.

For example, when you put things above each other then it signifies hierarchy. This is important, because you’re showing your reader how to process the information you’re giving them. However, you may not be aware that’s what you’re doing, which is why we end up with problems.

“You convey information by the way you arrange a design’s elements in relation to each other. This information is understood immediately, if not consciously, by the people viewing your designs. This is great if the visual relationships are obvious and accurate, but if they’re not, your audience is going to get confused. They’ll have to examine your work carefully, going back and forth between the different parts to make sure they understand.”

– Daniel Higginbotham

Take the simple matter of how nested method calls are laid out in C#: they betray everything we have been taught about reading, and the order in which information needs to be processed is reversed.


Fortunately, F# introduced the pipeline operator |> which allows us to restore visual honesty in the way we lay out nested function calls.
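For example (a hypothetical snippet, not from the talk):

```fsharp
// nested calls read inside-out, against our natural reading order...
let sumOfEvenSquares xs =
    List.sum (List.map (fun x -> x * x) (List.filter (fun x -> x % 2 = 0) xs))

// ...whilst |> lets the code flow in the order we read it
let sumOfEvenSquares' xs =
    xs
    |> List.filter (fun x -> x % 2 = 0)
    |> List.map (fun x -> x * x)
    |> List.sum
```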


In his talk Kevlin also touched on the placement of { } and how it affects readability, using a rather simple technique:


and by doing so, it reveals interesting properties about the structure of the above code which we might not have noticed:

  1. we can’t tell where the argument list ends and method body starts
  2. we can’t tell where the if condition ends and where the if body starts

These tell us that even though we are aligning our code, its structure and hierarchy are still not immediately clear without the aid of the curly braces. Remember, if the visual relationships between the elements are not accurate, it’ll cause confusion for your readers, and they’ll need to examine your code carefully to ensure they understand it correctly. In other words, you create additional cognitive burden on your readers when the layout of your code does not match the program structure.

Now contrast with the following:


where the structure and hierarchy of the code are much more evident. So it turns out that the placement style of { } is not just a matter of personal preference; it plays an important role in conveying the structure of your code, and there’s a right way to do it.

“It turns out that style matters in programming for the same reason that it matters in writing.

It makes for better reading.”

– Douglas Crockford

In hindsight this seems obvious, but why do we still get it wrong? How can so many of us miss something so obvious?

I think part of the problem is that you have two competing rules for structuring your code in C-style languages – one for the compiler and one for humans. { and } are used to convey the structure of your code to the compiler, but to convey structural information to humans, you use both { } and indentation.

This, coupled with the eagerness to superficially reduce the line count or to adhere to guidelines such as “methods shouldn’t be more than 60 lines long”, creates the perfect storm that results in us sacrificing readability in the name of readability.


So what if you used whitespace to convey structural information to both the compiler and humans? Then you’d remove the ambiguity, and people could stop fighting about where to place their curly braces!

The above example can be written in F# as the following, using conventions in the F# community:


notice how the code is not only much shorter, but also structurally very clear.


In summary, F#’s pipes allow you to restore visual honesty with regards to the way nested function calls are arranged, so that the flow of information matches the way we read. In addition, whitespace provides a consistent way to depict hierarchy to both the compiler and humans. It removes the need to argue over { } placement whilst making the structure of your code clear to see at the same time.


Lego naming

Naming is hard, and as Kevlin points out, so often we resort to lego naming – gluing common words such as ‘create’, ‘process’, ‘validate’, ‘factory’ together in an attempt to create meaning.

This is not naming, it is labelling.

Adding more words is not the same as adding meaning. In fact, more often than not it can have the opposite effect of diluting the meaning of the thing we’re trying to name. This is how we end up with gems such as controllerFactoryFactory, where the meaning of the whole is less than the sum of its parts.



Naming is hard, and having to give names to everything – every class, method and variable – makes it even harder. In fact, trying to give everything a meaningful name is so hard that eventually most of us simply stop caring, and lego naming seems like the lazy way out.


In F#, and in functional programming in general, it’s common practice to use anonymous functions, or lambdas. Straight away you remove the need to come up with good names for a whole bunch of things. Often the meaning of these lambdas is created by the higher-order functions that use them – Array.map, Array.filter, Array.iter; e.g. the function passed into Array.map is used to, surprise surprise, map values in an array!

(Before you say it, yes, you can use anonymous delegates in C# too, especially when working with LINQ. However, when you use LINQ you are doing functional programming, and the use of lambdas is much more common in F# and other functional-first languages.)
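As a quick illustration (a hypothetical snippet):

```fsharp
// no named helper functions needed - the higher order functions supply the
// context that gives each lambda its meaning
[| 1; 2; 3; 4 |]
|> Array.filter (fun x -> x % 2 = 0)   // keep the even numbers
|> Array.map    (fun x -> x * x)       // square them
|> Array.iter   (printfn "%d")         // print each one
```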


Lego naming can also be the symptom of a failure to identify the right level of abstractions.

Just like naming, coming up with the right abstractions can be hard. And when the right abstraction is a piece of pure functionality, we don’t have a way to represent it effectively in OOP (note, I’m not talking about the object-orientation that Alan Kay had in mind when he coined the term objects).

In situations like this, the common practice in OOP is to wrap the functionality inside a class or interface. So you end up with something that provides the desired functionality, as well as the functionality itself. That’s two things to name instead of one. This is hard, so let’s be lazy and combine some common words together and see if they make sense…

public interface ConditionChecker
{
    bool CheckCondition();
}

The problem here is that the right level of abstraction is smaller than an “object”, so we have to introduce another layer of abstraction just to make it fit into our world view.


In F#, and in functional programming in general, no abstraction is too small and functions are so ubiquitous that all the OO patterns that we’re so fond of can be represented as functions.

Take the ConditionChecker example above: the essence of what we’re looking for is a condition that is evaluated without input and returns a boolean value. This can be represented in F# as the following:

type Condition = unit -> bool

Much more concise, wouldn’t you agree? And any function that matches the type signature can be treated as a Condition without having to explicitly implement some interface.
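For example, a sketch of how such a Condition might be defined and consumed (the function names are my own):

```fsharp
type Condition = unit -> bool

// any unit -> bool function is a Condition - no interface ceremony required
let isWeekend : Condition =
    fun () ->
        let today = System.DateTime.Today.DayOfWeek
        today = System.DayOfWeek.Saturday || today = System.DayOfWeek.Sunday

let evaluate (cond : Condition) = if cond () then "met" else "not met"
```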


Another common practice in C# and Java is to label exception types with an Exception suffix, i.e. CastException, ArgumentException, etc. This is another symptom of our tendency to label things rather than name them, out of laziness (and not the good kind).

If we put more thought into them, then maybe we could come up with more meaningful names, for instance:



In F#, the common practice is to define errors using the lightweight exception syntax, and the convention is not to use the Exception suffix, since the leading exception keyword already provides a sufficient clue as to what the type represents.
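For example (hypothetical exception types):

```fsharp
// the lightweight exception syntax - no "Exception" suffix needed
exception InsufficientFunds of decimal
exception CastNotAllowed of System.Type * System.Type

let withdraw balance amount =
    if amount > balance then raise (InsufficientFunds (amount - balance))
    else balance - amount
```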



In summary, whilst F# doesn’t stop you from lego naming things, it helps because:

  • the use of anonymous functions significantly reduces the number of things you have to name;
  • being able to model your application at the right level of abstraction removes unnecessary layers of abstraction, and therefore reduces the number of things you have to name even further;
  • it’s easier to name things when they’re at the right level of abstraction;
  • the convention in F# is to use the lightweight exception syntax to define exception types without the Exception suffix.



In his presentation, Kevlin showed an interesting technique of using a tag cloud to see what pops out from your code:

[tag cloud of example 1]           [tag cloud of example 2]

Comparing these two examples, you can see the domain of the second example surfacing through its tag cloud (paper, picture, printingdevice, etc.) whereas the first example surfaces only raw strings and lists.

When we under abstract, we often find ourselves with a long list of arguments to our methods/functions. When that list gets long enough, adding another one or two arguments is no longer significant.

“If you have a procedure with ten parameters, you probably missed some.”

– Alan Perlis

Unfortunately, F# can’t stop you from under-abstracting, but its powerful type system provides all the tools you need to create the right abstractions for your domain with minimal effort. Have a look at Scott Wlaschin’s talk on DDD with the F# type system for inspiration on how you might do that:


Unencapsulated State

If under-abstraction is like going to a job interview in your pyjamas, then having unencapsulated state is akin to wearing your underwear on the outside (which, incidentally, puts you in some rather famous circles…).


The example that Kevlin used to illustrate this habit is dangerous because the internal state that has been exposed to the outside world is mutable. So not only is your underwear worn on the outside for all to see, everyone is able to modify it without your consent… now that’s a scary thought!

In this example, what should have been done is for us to encapsulate the mutable list and expose only the properties that are relevant, for instance:

open System.Collections.Generic

type RecentlyUsedList () =

    let items = new List<string>()

    member this.Items = items.ToArray() // now the outside world can’t mutate our internal state

    member this.Count = items.Count


Whilst F# has no way to stop you from exposing the items list publicly, functional programmers are very conscious of maintaining the immutability facade, so even if an F#’er uses a mutable list internally, he would not allow it to leak outside.

In fact, an F#’er would probably have implemented RecentlyUsedList differently, for instance:

type RecentlyUsedList (?items) =

    let items = defaultArg items [ ]

    member this.Items = List.toArray items

    member this.Count = List.length items

    member this.Add newItem =

        let newItems = newItem::(items |> List.filter ((<>) newItem))

        RecentlyUsedList newItems
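To see the value semantics in action, here is a small usage sketch (I’ve added an explicit string list annotation so inference resolves, and the fruit values are my own):

```fsharp
// The immutable RecentlyUsedList from above, repeated here so the
// snippet is self-contained (with an explicit string list annotation).
type RecentlyUsedList (?items : string list) =
    let items = defaultArg items []
    member this.Items = List.toArray items
    member this.Count = List.length items
    member this.Add newItem =
        let newItems = newItem :: (items |> List.filter ((<>) newItem))
        RecentlyUsedList newItems

// every Add returns a new list; re-adding an item moves it to the front
let mru = RecentlyUsedList().Add("apple").Add("banana").Add("apple")
// mru.Items = [| "apple"; "banana" |]
```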


But there’s more.

Kevlin also touched on encapsulation in general, and its relation to a usability study concept called affordance.

“An affordance is a quality of an object, or an environment, which allows an individual to perform an action. For example, a knob affords twisting, and perhaps pushing, whilst a cord affords pulling.”

– Wikipedia

If you want the user to push, then don’t give them something they can pull; that’d be bad usability design. The same principle applies to code: your abstractions should afford the right behaviours whilst making it impossible to do the wrong thing.


When modelling your domain with F#, the absence of nulls immediately eliminates the most common illegal state you have to look out for. And since types are immutable by default, once they are validated at construction time you don’t have to worry about them entering an invalid state later.

To make invalid states un-representable in your model, a common practice is to create a finite, closed set of possible valid states as a discriminated union. As a simple example:

type PaymentMethod =

    | Cash

    | Cheque of ChequeNumber

    | Card     of CardType * CardNumber

Compared to a class hierarchy, a discriminated union type cannot be extended and therefore invalid states cannot be introduced at a later date by abusing/exploiting inheritance.
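To make the example compile on its own, here is a sketch with assumed definitions for the payload types (ChequeNumber, CardType and CardNumber were not defined in the original), plus a function showing how the compiler keeps the matching exhaustive:

```fsharp
// assumed definitions for the payload types in the example above
type ChequeNumber = string
type CardNumber   = string
type CardType     = Visa | MasterCard

type PaymentMethod =
    | Cash
    | Cheque of ChequeNumber
    | Card   of CardType * CardNumber

// pattern matching is exhaustive: forget a case and the compiler warns you
let describe payment =
    match payment with
    | Cash               -> "paid in cash"
    | Cheque number      -> sprintf "paid by cheque %s" number
    | Card (cardType, _) -> sprintf "paid by %A card" cardType
```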


In summary, F# programmers are very conscious of immutability, so even if they are using mutable types to represent internal state, it’s highly unlikely that they would expose that mutability and break the immutability facade they hold so dearly.

And because types are immutable by default, and there are no nulls in F#, it’s also easy for you to ensure that invalid states are simply un-representable when modelling a domain.


Getters and Setters

Kevlin challenged the naming of getters and setters, since in English ‘get’ usually implies side effects:

“I get married”

“I get money from the ATM”

“I get from point A to point B”

Yet, in programming, get implies a query with no side effects.

Also, getters and setters are opposites in programming, but in English, the opposite of set is reset or unset.


Secondly, Kevlin challenged the habit of always creating setters whenever we create getters, a habit enforced and encouraged by many modern IDEs that give you shortcuts to automatically create these getters and setters in pairs.

“That’s great, now we have shortcuts to do the wrong thing.

We used to have to type lots to do the wrong thing, not anymore.”

– Kevlin Henney

And he talked about how we need to be more cautious and conscious about what can change and what cannot.

“When it is not necessary to change, it is necessary not to change.”

– Lucius Cary


With F#, immutability is the default: when you define a new record or value, it is immutable unless you explicitly say otherwise (with the mutable keyword). So to do the wrong thing, i.e. to define a corresponding setter for every getter, you have to do lots of extra work.

Every time you have to type the mutable keyword is another chance to ask yourself “is it really necessary for this field to change?”. In my experience this friction has forced me to make very conscious decisions about what can change and under what conditions.
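A minimal sketch of that friction (the Settings type and its fields are my own illustration):

```fsharp
// records are immutable by default; opting into mutation is explicit and visible
type Settings =
    { Theme        : string   // cannot be changed after construction
      mutable Hits : int }    // a conscious, clearly-flagged exception

let s = { Theme = "dark"; Hits = 0 }
s.Hits <- s.Hits + 1          // allowed: the field is marked mutable
// s.Theme <- "light"         // would not compile: Theme is immutable

// the idiomatic alternative to a setter: copy-and-update, original untouched
let s2 = { s with Theme = "light" }
```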


Uncohesive Tests

Many of us have the habit of testing methods – that is, for every method Foo we have a TestFoo that invokes Foo and inspects its behaviour. This type of testing covers only the surface area of your code, and although you can achieve a high code coverage percentage this way (and keep the managers happy), that coverage number is only superficial.

Methods are usually called in different combinations to achieve some desired functionality, and many of the complexities and potential bugs lie in the way they work together. This is particularly true when state is concerned, as the order in which the state is updated might be significant, especially once you bring concurrency into the equation.

Kevlin calls for an end to this practice, and for us to focus on testing specific functionalities instead, using our tests as specifications for those functionalities.

“For tests to drive development they must do more than just test that code performs its required functionality: they must clearly express that required functionality to the reader. That is, they must be clear specification of the required functionality.”

– Nat Pryce and Steve Freeman

This is in line with Gojko Adzic’s Specification by Example which advocates the use of tests as a form of specification for your application that is executable and always up-to-date.


But even as we improve what we test, we still need a sufficient number of tests to give us a reasonable degree of confidence. To put it into context, an exhaustive test suite for a function of the signature Int -> Int would need 4,294,967,296 test cases (one per possible 32-bit input). Of course, you don’t need an exhaustive test suite to reach a reasonable degree of confidence, but there’s a limit to the number of tests we can write by hand because:

  • writing and maintaining a large number of tests is expensive
  • we might not think of all the edge cases

This is where property-based automated testing comes in, and it’s where the F# community (along with other QuickCheck-enabled languages such as Haskell and Erlang) already is, with the widespread adoption of FsCheck. If you’re new to FsCheck or property-based testing in general, check out Scott Wlaschin’s detailed introductory post on property-based testing, also part of the F# Advent Calendar in English 2014.
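As a small sketch of what a property looks like with FsCheck (assuming the FsCheck package is referenced; the property itself is the classic list-reversal example, not one from the post):

```fsharp
open FsCheck

// a property is just a function returning bool; FsCheck generates the inputs
let ``reversing a list twice yields the original`` (xs : int list) =
    List.rev (List.rev xs) = xs

// runs 100 randomly generated cases by default and reports any counterexample
Check.Quick ``reversing a list twice yields the original``
```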



We pick up habits, good and bad, over time and with practice. And since practice doesn’t make perfect, it makes permanent (only perfect practice makes perfect), it is important for us to acquire good practices in order to form and nurture good habits.

The programming language we use day-to-day plays an important role in this regard.

“Programming languages have a devious influence: they shape our thinking habits.”

– Dijkstra

As another year draws to a close, let’s hope the year ahead is filled with the good practice we need to make perfect, and to ensure it let’s all write more F# :-P


Wishing you all a merry Xmas!



F# Advent Calendar in English 2014

Kevlin Henney – Seven ineffective coding habits of many programmers

Andreas Stefik – The programming language wars

Simon Cousins – Does the language you use make a difference (revisited)

Coding Horror – The best code is no code at all

F# for Fun and Profit – Cycles and modularity in the wild

Being visually honest with F#

Ian Barber – Naming things

Joshua Bloch – How to design a good API and why it matters

Null References: the Billion Dollar Mistake