Slides from LambdaCon


Phew, finally back in the familiar surroundings of London after back-to-back conferences in Italy, where I spoke about Elm and F# at CodeMotion Rome and LambdaCon. It was a great couple of days: I saw some interesting talks (I’ll write up some summaries later), met some old friends and made new ones.

Here are the slides for my Elm and F# talk at LambdaCon.

Slides and source code from other LambdaCon talks are also available on GitHub.


Next up, F# Exchange on 17th April!

Make flame with Elm

A friend of mine, Roger Engelber, pointed me to a nice article on doing functional programming in Lua. The article detailed the steps to generate a flame-like effect using a simple particle system.

Of course, it naturally led to me trying to do the same in Elm!


Translating the approach was really straightforward, though there are some minor differences; e.g. alpha values in Elm are between 0 and 1, but between 0 and 255 in Lua.

The code is available on GitHub, feel free to poke around.
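The particle system behind the effect boils down to spawning particles with a random velocity and fading their alpha each frame. Here is a minimal sketch of that idea in Python (the class and function names are mine, not from the Elm or Lua code), including the 0–1 vs 0–255 alpha conversion mentioned above:

```python
import random

class Particle:
    """A single flame particle: position, velocity, and a fading alpha."""
    def __init__(self, x, y):
        self.x, self.y = x, y
        self.vx = random.uniform(-0.5, 0.5)   # slight horizontal drift
        self.vy = random.uniform(-2.0, -1.0)  # upwards (screen y grows downwards)
        self.alpha = 1.0                      # Elm-style alpha: 0 to 1

    def step(self):
        """Advance the particle one frame: move it and fade it out a little."""
        self.x += self.vx
        self.y += self.vy
        self.alpha = max(0.0, self.alpha - 0.02)

    @property
    def dead(self):
        """A fully faded particle can be removed (or recycled) by the system."""
        return self.alpha <= 0.0

def to_lua_alpha(elm_alpha):
    """Convert an Elm alpha (0..1) to a Lua-style alpha (0..255)."""
    return round(elm_alpha * 255)
```

Each frame, the system steps every live particle, drops the dead ones, and spawns a few new particles at the base of the flame.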

Here are two variations in action:



Source code

Live demo 1

Live demo 2

Live demo 3

Live demo 4

Solving the Stable Marriage problem in Erlang


Whilst talking with an ex-colleague, a question came up on how to implement the Stable Marriage problem using a message-passing approach. Naturally, I wanted to answer that question with Erlang!

Let’s first dissect the problem and decide what processes we need and how they need to interact with one another.

The stable marriage problem is commonly stated as:

Given n men and n women, where each person has ranked all members of the opposite sex with a unique number between 1 and n in order of preference, marry the men and women together such that there are no two people of opposite sex who would both rather have each other than their current partners. If there are no such people, all the marriages are “stable”. (It is assumed that the participants are binary gendered and that marriages are not same-sex.)
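For reference, the underlying algorithm is the classic Gale-Shapley deferred-acceptance procedure: men propose in order of preference, and each woman tentatively accepts the best proposal she has seen so far. A minimal sequential sketch in Python (the Erlang solution distributes this same logic across processes; all names here are mine):

```python
def stable_marriage(men_prefs, women_prefs):
    """Gale-Shapley: returns a stable matching as a {woman: man} dict.

    men_prefs / women_prefs map each person to a list of names of the
    opposite sex, in descending order of preference.
    """
    # Precompute each woman's rank table for O(1) preference comparisons.
    rank = {w: {m: i for i, m in enumerate(prefs)}
            for w, prefs in women_prefs.items()}
    free_men = list(men_prefs)                # men not yet engaged
    next_choice = {m: 0 for m in men_prefs}   # index of next woman to propose to
    engaged = {}                              # woman -> man

    while free_men:
        man = free_men.pop()
        woman = men_prefs[man][next_choice[man]]
        next_choice[man] += 1
        if woman not in engaged:
            engaged[woman] = man              # she accepts her first proposal
        elif rank[woman][man] < rank[woman][engaged[woman]]:
            free_men.append(engaged[woman])   # she trades up; old fiancé is free
            engaged[woman] = man
        else:
            free_men.append(man)              # rejected; he tries his next choice
    return engaged
```

Every man proposes to each woman at most once, so the loop always terminates, and the resulting matching is stable.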

From the problem description, we can see that we need:

  • a module for man
  • a module for woman
  • a module for orchestrating the experiment

In terms of interaction between the different modules, I imagined something along the lines of the following:


The proposal communication needs to be synchronous* as the man cannot proceed until he gets an answer to his proposal. But all other communications can be asynchronous.

(* remember, a synchronous call in the OTP sense is not the same as a synchronous call in Java/C#, where the calling thread is blocked.

In this case the communication still happens via asynchronous message passing, but the calling process asynchronously waits for a reply before moving on.)
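The shape of that request-reply interaction can be sketched with plain Python threads and queues (a loose analogy only, since a blocked Python thread is not the same as an Erlang process waiting on a message; all names here are made up for illustration):

```python
import queue
import threading

def woman(inbox):
    """Handles proposals from an inbox; each request carries a private reply channel."""
    while True:
        msg = inbox.get()
        if msg is None:                        # shutdown signal
            return
        suitor, reply_to = msg
        reply_to.put(('accepted', suitor))     # toy rule: accept every proposal

def propose(inbox, suitor):
    """An OTP-style 'synchronous' call: send an async message, then wait for the reply."""
    reply_to = queue.Queue(maxsize=1)
    inbox.put((suitor, reply_to))              # the send itself is asynchronous
    return reply_to.get()                      # the caller waits here for the answer

inbox = queue.Queue()
worker = threading.Thread(target=woman, args=(inbox,), daemon=True)
worker.start()
answer = propose(inbox, 'adam')
inbox.put(None)                                # let the worker thread exit
```

The key point is that the "synchronous" behaviour lives entirely on the caller's side: the request is an ordinary asynchronous message, and the caller simply chooses to wait for the reply before doing anything else.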

From here, the implementation itself is pretty straightforward.

And using the test data from Rosetta Code, you can have a module that sets up the test, which I’ve called stable_marriage here.

Compiling and running the code gives the following output:


You can get the full source code for this solution on GitHub.



Repo for solution

Rosetta Code test data

QCon London 2015 – Takeaways from “The Bad Idea Terminator”


It’s very uncharacteristic of me, but I went to a session on the product management track at QCon London: Melissa Perri’s “The Bad Idea Terminator”. Having gone into the room expecting to come out not much wiser, I was pleasantly surprised to find myself in one of the best talks at the conference.

Melissa used Firestone and FourSquare as examples of the “build trap”, whereby ailing companies try to counteract their decline by adding more features without really changing how they do things.

We often start off doing things right – we test and iterate on our ideas before we hit the market, and we end up with something that people want to use. But then we just keep on building, without going back to finding those innovative ideas that people love.

The build trap stops us building things that people love because we lose touch with our customers. We stop testing ideas with the market, confine ourselves to our own bubble, and just put our heads down and keep on building.

We can fall into the build trap in a number of ways, including:

  • pressure from stakeholders to always release new features (Peter Higgs made similar criticisms of modern academia, where researchers are pressured to keep publishing papers rather than focusing on finding the next big idea)
  • arbitrary deadlines and failure to respond to change – setting deadlines that are too far out and not being flexible enough to adapt
  • a “building is working” mentality – which doesn’t allow us time to step back and think about whether we’re building the right things

Building is the easy part.

Figuring out what to build is hard.


- Melissa Perri


Why don’t we take the time to think before we go and build something? Well, the endowment effect might have something to do with it – as you invest more and more into an idea, it starts to become part of your identity, and it becomes hard for you to let go.

In behavioral economics, the endowment effect (also known as divestiture aversion) is the hypothesis that people ascribe more value to things merely because they own them. This is illustrated by the observation that people will tend to pay more to retain something they own than to obtain something owned by someone else—even when there is no cause for attachment, or even if the item was only obtained minutes ago.


One of the most important responsibilities of a product manager is to say NO to ideas until we’re able to back them up with tests that prove they can work, and I think the same goes for developers.

So how do you become the Bad Idea Terminator, i.e. the person who goes and destroys all the bad ideas so we can focus on the good ones? We can start by identifying some common mistakes we make.


Mistake 1 : failing to recognize bias

Product ideas suffer from several types of bias:

  • Causality – we attribute meaning, and why things happen, to the wrong cause

For example,

We built a mobile app before and it was successful, let’s do another mobile app.

Everyone has a mobile app, so we need one too.

We need to recognize the differences between customers and businesses; what worked under one set of circumstances is not guaranteed to work under another.

  • Curse of Knowledge – as experts, we cannot put ourselves in the shoes of someone who doesn’t know as much

You should be doing user research and user testing – bring your ideas to the customers and see if that’s what they really want.

  • Anchoring – we focus on insignificant data because it’s the first data we see

Whenever someone says something like

All my customers are asking for this!

you should always ask for data, and make people prove that what they’re saying is accurate.


Mistake 2 : solutions with no problems

When people suggest new ideas, most of the time they come to the table with solutions. Instead, we need to start with the WHY, and focus on the problem we’re trying to solve.

On the topic of starting with the why, I also find Simon Sinek’s TED talk inspirational; he has a book on the same topic too.

There are always multiple ways to solve the same problem, and only by focusing on the problem can we decide which solution is best. Unfortunately, your idea is not always the best idea, and we should be conscious of the Not Invented Here syndrome and our propensity to fall under its influence (even if only at a subconscious level).

After we figure out the problem, we still need to align it with our business goals, and decide if it’s a problem we can solve and want to solve.


Mistake 3 : building without testing

When we get stuck in the build trap we tend not to test our assumptions, because we commit to one solution too early. Instead, we should solicit many solutions at first, to get people off their fixation on the one idea.

We also tend to invest too much into the one idea and then have trouble letting go (the endowment effect again).

Instead, we should pick out a few of the most viable ideas and then test them to find the ones that:

  • have the most positive customer feedback, and
  • require the smallest investment

using small, data-driven experiments. Techniques such as A/B testing fall right into this (but remember, A/B testing doesn’t tell the whole story; you probably also want A/A testing to act as a blind test group). It could also be as simple as talking to a few customers to get their feedback.

There are 3 key experiments you should run:

  1. do the customers have this problem?
  2. are they interested in solution ideas?
  3. are they interested in our solution?


Mistake 4 : no success metrics

Another common mistake is not setting success metrics when we run experiments, or when we build new features.

Instead, we should set goals early. Doing so is important because if we set goals in hindsight, we’ll just change the goals to make the feature look good…

We should be asking questions such as

How much value do we need to capture to make this feature worth building?

We also need to accept that learning on its own is also a form of success.


The risk of continuing with a bad idea is really great, so the earlier we can kill off these bad ideas, the lower our risk will be.

And the faster you kill the bad ideas, the more time you will have to devote to the good ones.

Fail fast, so you can succeed faster.


- Melissa Perri

And finally, an obligatory picture of the Terminator, of course!



I really enjoyed Melissa’s talk, and although I’m a developer, I believe everyone inside an organization has a responsibility to ask questions and help push the organization towards building better products that actually match its customers’ needs.

Having read a couple of Dan Ariely’s books in the past year, I find they provide a very insightful backdrop on many of the human irrationalities that underlie the common mistakes Melissa identified in her talk.




Slides for the talk

Simon Sinek – Start with Why TED talk

Start with Why: How Great Leaders Inspire Everyone to Take Action

Predictably Irrational: The Hidden Forces That Shape Our Decisions

The Upside of Irrationality

This is why you need Composition over Inheritance


I was attempting to make some changes to some fairly old code in our codebase (something I probably wrote myself…) which hasn’t been touched for a while.

Naturally, my first step was to understand what the code does, so I started by looking at the class where I needed to make my changes.

composition over inheritance_1

Perhaps unsurprisingly, I couldn’t figure out how it works. The class contains only a handful of override methods, and I had no idea how they fit together.

So I started digging deeper, through several layers of abstract classes each filling in parts of the puzzle, until I reached the base of the class hierarchy.

By this point, I was staring at a control flow full of strategically placed gaps. After going back and forth along the class hierarchy several times, I ended up with a vague sense of how the various pieces of logic scattered across the hierarchy fit together.

composition over inheritance_2

What’s more, where we needed to deviate from the control flow dictated by the base class, we had to reinvent a brand new control flow mid-hierarchy, making it even harder to understand what’s going on.
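The pattern I was fighting is essentially the template method: the base class owns the control flow and subclasses fill in strategically placed gaps. A made-up miniature in Python (not our actual code) shows why the reader has to stitch the flow together from several classes:

```python
from abc import ABC, abstractmethod

class Processor(ABC):
    """The base class dictates the control flow; subclasses only fill in gaps."""
    def run(self, item):
        data = self.load(item)
        if self.should_process(data):     # gap 1, defined somewhere below
            data = self.transform(data)   # gap 2, defined somewhere below
        return self.save(data)            # gap 3, with a default here

    def load(self, item):
        return item

    @abstractmethod
    def should_process(self, data): ...

    @abstractmethod
    def transform(self, data): ...

    def save(self, data):
        return data

class UpperCaseProcessor(Processor):
    # To follow run(), you must read Processor and this class side by side --
    # and a mid-hierarchy override of run() would change the rules yet again.
    def should_process(self, data):
        return isinstance(data, str)

    def transform(self, data):
        return data.upper()
```

Even in this tiny example, no single class tells the whole story; with several layers of abstract classes, the effect compounds.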


This is not where I want to be… I want to be able to reason about my code easily, and with confidence.

I suspect a great many of you have experienced similar pains in the past, but how do you go about applying Composition over Inheritance?


Wikipedia’s definition and example of Composition over Inheritance focus only on domain modelling, and I’m generally not a fan of conclusions such as:

To favor composition over inheritance is a design principle that gives the design higher flexibility, giving business-domain classes higher flexibility and a more stable business domain in the long term.

In other words, a HAS-A relationship can be better than an IS-A relationship.

What does this even mean?! Are you able to back up these claims of “higher flexibility” and a “more stable business domain in the long term” with empirical evidence?


In my view, the real benefit of Composition over Inheritance is that it encourages better problem decomposition – if you don’t first break the problem into smaller pieces (that are easier to tackle), you have nothing to compose with later on. Scott Wlaschin’s railway oriented programming approach is an excellent example of how to apply composition in a practical and elegant way.
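By contrast, composing small functions keeps the whole control flow in one visible place. A sketch of the idea in Python, using None as a crude stand-in for the failure track (Scott Wlaschin’s version uses F# and a proper Result type; all names here are made up):

```python
def pipeline(*steps):
    """Compose small functions into one flow; stop early if a step returns None."""
    def run(value):
        for step in steps:
            value = step(value)
            if value is None:              # the 'failure track' of the pipeline
                return None
        return value
    return run

# Each step is a plain function: easy to test, reuse, and reorder.
def parse(text):
    return text.strip() or None            # empty input derails the pipeline

def validate(name):
    return name if name.isalpha() else None

def greet(name):
    return f"Hello, {name}!"

process = pipeline(parse, validate, greet)
```

Deviating from the flow no longer means reinventing it mid-hierarchy: you just compose a different list of steps.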



In his keynote The Old New Old New Things at BuildStuff 14, Greg Young described problem decomposition as the biggest problem we have in the software industry, because we’re just so bad at it…

And the challenge of problem decomposition is not limited to code organization. Microservices are all the rage right now, and the move from monolithic architectures to microservices is another example of problem decomposition, albeit one that happens at a higher level.


So I’ll be taking a knife to the aforementioned class hierarchy and replacing it with small, composable units, using a language that is a great fit for the job – F#!

Will you follow my lead?



Railway oriented programming

Is your programming language unreasonable

Service architectures at scale, lessons from Google and eBay

Martin Fowler on Microservices

Greg Young – The Old New Old New Things