Takeaways from “Simplifying the Future” by Adrian Cockcroft

Simplifying things in our daily lives

“Life is complicated… but we use simple abstractions to deal with it.”

– Adrian Cockcroft

When people say “it’s too complicated”, what they usually mean is “there are too many moving parts and I can’t figure out what it’s going to do next, because I haven’t formed an internal model of how it works and what it does”.

Which begs the question: “what’s the most complicated thing that you can deal with intuitively?”

Driving, for instance, is one of the most complicated things we have to do on a regular basis. It combines hand-eye-feet coordination, navigation skills, and the ability to react to unforeseeable, potentially life-or-death scenarios.

 

A good example of a simple abstraction is the touch-based interface you find on smartphones and tablets. Kids can work out how an iPad operates by experimenting with it, without needing any formal training, because they can interact with it and get instant feedback, which helps them build a mental model of how it works.

As engineers, we should aspire to build things that can be given to two-year-olds who will intuitively understand how they operate. This last point reminds me of what Bret Victor has been saying for years, with inspirational talks such as Inventing on Principle and Stop Drawing Dead Fish.

Netflix, for instance, has invested a lot of effort in intuition engineering, and is building tools to help people get a better intuitive understanding of how their complex, distributed systems are operating at any moment in time.

Another example of taking complex things and giving them simple descriptions is XKCD’s Thing Explainer, which uses simple words to explain otherwise complex things such as the International Space Station, nuclear reactors and data centres.

sidebar: with regard to complexity in code, here are two talks that you might also find interesting

 

Simplifying work

Adrian mentioned Netflix’s slide deck on their culture and values.

Intentional culture is becoming an important thing, and other companies have followed suit.

It conditions people joining the company on what to expect once they’re onboarded, and helps frame and standardise the recruitment process so that everyone knows what a ‘good’ hire looks like.

If you’re creating a startup, you can set the culture from the start. Don’t wait until you have an accidental culture; be intentional and early about what you want to have.

 

This creates a purpose-driven culture.

Be clear and explicit about the purpose and let people work out how best to implement that purpose.

Purposes are simple statements, whereas setting out all the individual processes you need to ensure people build the right things is much harder. It’s simpler to have a purpose-driven culture and let people self-organise around those purposes.

Netflix also found that imposing processes on people drives talent away, which is a big problem. Time and again, Netflix found that people had produced only a fraction of what they were capable of at other companies, because they were held back by processes, rules and other things that slowed them down.

On Reverse Conway’s Law, Adrian said that you should start with an organisational structure that’s cellular in nature, with clear responsibilities and ownership for a number of small, co-located teams – high trust and high cohesion within each team, and low trust across teams.

The moral here is that if you build a company around a purpose-driven, systems-thinking approach, then you are building an organisation that is flexible and can evolve as the technology moves on.

The more rules you put in, the more complex and rigid it gets, and you end up with the opposite.

“You build it, you run it”

– Werner Vogels, Amazon CTO

 

Simplifying the things we build

First, you should shift your thinking from projects to products. The key difference is that whereas a project has a start and an end, a product continues to evolve for as long as it still serves a purpose.

“I’m sorry, but there really aren’t any project managers in this new model”

– Adrian Cockcroft

As a result, the overhead & ratio of developers to people doing management, releases & ops has to change.

 

Second, the most important metric to optimise for is time to value (see also “Beyond Features” by Dan North).

“The lead time to someone saying thank you is the only reputation metric that matters”

– Dan North

Looking at what customers value and working out how to improve the time-to-value is an interesting challenge (see Simon Wardley’s value-chain mapping).

Lastly, and this is a subtle point – optimise for the customers you want to have rather than the customers you have now. This is an interesting twist on how we often think about retention and monetisation.

For Netflix, their optimisation is almost always around converting free trials to paying customers, which means they’re always optimising for people who haven’t seen the product before. Interestingly, this feedback loop also has the side-effect of forcing the product to be simple.

On the other hand, if you optimise for power users, then you’re likely to introduce more and more features that make the product too complicated for new users. You can potentially build yourself into a corner where you struggle to attract new users, and become vulnerable to newcomers entering the market with simpler products that new users can understand.

 

Monolithic apps only look simple from the outside (at the architecture diagram level), but if you look under the covers at your object dependencies, the true scale of their complexity starts to become apparent. And they are often complicated, because it takes constant discipline to enforce clear separation.

“If you require constant diligence, then you’re setting everyone up for failure and hurt.”

– Bryan Hunter

Microservices enforce a separation that makes them less complicated, and they make the connections between components explicit. They are also better for onboarding: new joiners don’t have to understand all the interdependencies (as they would inside a monolith) that encompass your entire system just to make small changes.

Each microservice should have a clear, well-defined set of responsibilities, and there’s a cap on the level of complexity it can reach.

sidebar: the best answers I have heard to “how small should a microservice be?” are:

  • “one that can be completely rewritten in 2 weeks”
  • “what can fit inside an engineer’s head” – which psychology tells us isn’t a lot ;-)

 

Monitoring used to be one of the things that made microservices complicated, but the tooling has caught up in this space and nowadays many vendors (such as NewRelic) offer tools that support this style of architecture out of the box.

 

Simplifying microservices architecture

If your system is deployed globally, then having the same, automated deployment for every region gives you symmetry. Having this commonality (same AMI, auto-scaling settings, deployment pipeline, etc.) is important, as is automation, because together they give you known states in your system that allow you to make assertions.
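
To make this concrete, here’s a minimal sketch in F# (with hypothetical types and values – this isn’t Netflix’s tooling) of what “known states you can assert on” might look like: describe every region’s deployment with the same record type, then check that regions differ only by name.

    // A hypothetical sketch: each region's deployment is described by the same
    // record type, so symmetry becomes something you can assert on.
    type RegionDeployment =
        { Region       : string
          AmiId        : string
          MinInstances : int
          MaxInstances : int }

    // Normalise the region name away, then rely on structural equality
    // to find any region that has drifted from the canonical deployment.
    let divergentRegions (canonical : RegionDeployment) (regions : RegionDeployment list) =
        regions
        |> List.filter (fun r -> { r with Region = canonical.Region } <> canonical)
        |> List.map (fun r -> r.Region)

    // An empty result means every region matches the canonical deployment:
    // divergentRegions usEast1 [ euWest1; apSoutheast1 ]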

It’s also important to apply systems thinking: try to come up with feedback loops that drive people and machines to do the right thing.

Adrian then referenced Simon Wardley’s post on ecosystems in which he talked about the ILC model, or, a cycle of Innovate, Leverage, and Commoditize.

He touched on Serverless technologies such as AWS Lambda (which we’re using heavily at Yubl). At the moment it’s at the Innovate stage where it’s still a poorly defined concept and even those involved are still working out how best to utilise it.

If AWS Lambda functions are your nano-services, then on the other end of the scale both AWS and Azure are going to release VMs with terabytes of memory to the general public soon – which will have a massive impact on systems such as in-memory graph databases (eg. Neo4j).

When we move to the Leverage stage, the concepts have been clearly defined and terminologies are widely understood. However, the implementations are not yet standardised, and the challenge at this stage is that you can end up with too many choices as new vendors and solutions compete for market share as mainstream adoption gathers pace.

This is where we’re at with container schedulers – Docker Swarm, Kubernetes, Nomad, Mesos, CloudFoundry and whatever else pops up tomorrow.

As the technology matures and people work out the core set of features that matter to them, it’ll start to become a Commodity – this is where we’re at with running containers – where there are multiple compatible implementations that are offered as services.

This new form of commodity then becomes the base for the next wave of innovations by providing a platform that you can build on top of.

Simon Wardley also talked about this as the cycle of Peace, War and Wonder.

My picks from OSCON keynotes

So OSCON came and went, and whilst I haven’t seen the recordings of the full sessions, the keynotes (and a bunch of interviews) are available on YouTube.

Unlike most conferences, the OSCON keynotes are really short (10-15 minutes each on average), and having watched all the published keynote sessions, here are my top picks.

 

Simon Wardley – Situation Normal, Everything Must Change

I’m a big fan of Simon’s work on value chain mapping, and his OSCON 2014 keynote was one of the most memorable talks for me last year.


Simon started by pointing out the lack of situational awareness on the part of enterprise IT. Enterprise IT lives in a low situational awareness environment that relies on backward causality and verbal reasoning (or storytelling), and has no notion of position or movement.

High situational awareness environments (e.g. military combat), by contrast, are context-specific: you have positions and movements, and usually employ some form of visual reasoning.


Military actions are driven by your situational awareness of the where and the why, but in business we have a tyranny of actions.


And this is where Simon’s value chain mapping comes in. With maps, you can add position to your components based on the value they provide, as well as movement, as they evolve from the uncharted world (chaotic, uncertain, unpredictable, etc.) to become industrialized.


In terms of methodology, there’s no one size that fits all.

Agile, XP and Scrum are very good on the left side (the uncharted world), particularly when you want to reduce the cost of change.

On the right side, as things become industrialized, you want to reduce the cost of deviation, and Six Sigma is good at that.

In the middle, where you want to build a product, lean is particularly strong.


If you take a large scale project, rather than having a one-size-fits-all methodology, you can employ different methodologies based on where that component is in its evolution. For developers, this is no different to the arguments for polyglot programming, or polyglot persistence, because no single language or database is good for all the problems we have to solve.

Why should the way we work be any different?


By overlaying the value chain maps for different areas of the business you can start to identify duplication within the organization, and a snippet from his most recent post sheds some horrifying light on the amount of duplication that exists:

…To date, the worst example I know of duplication is one large global company that has 380 customised versions of the same ERP system doing exactly the same process…

The US Air Force discovered that, as people came up with a new idea, they tended to add features to it and make it better (and more complex); then they added even more features until the idea was so complex it was completely useless to anyone, and that’s approximately the point at which they shipped it. (Regular readers of this blog will have read about this obsession with features many times before.)

So Lt. Col. Dan Ward came up with FIST (Fast, Inexpensive, Simple and Tiny), which in his own words:

…FIST stands for Fast, Inexpensive, Simple and Tiny. It’s a term I use to describe a particular approach to acquisitions and system development. As you might guess, it involves using a small team of talented people, a tight budget, a short schedule and adhering to a particular set of principles and practices…

In other words, small is beautiful, and it’s a theme we have seen repeatedly – be it microservices, Amazon’s two-pizza teams, etc.

And as you impose constraints on teams – a tight budget, a short schedule – you encourage creativity and innovation (something that Kevlin Henney also talked about at length in his Joy of Coding closing keynote).

 

However, even with small teams, you still have the problem that things need to evolve. Fortunately we have a solution to that too, in what is known as the three-party system, where you have:

  • pioneers – who are good at exploring the uncharted world
  • settlers – who are good at taking half-baked ideas and making useful products for others
  • town planners – who are good at taking a product and industrialising it into commodity and utility


Once you have a map, you can also start to play games and anticipate change. Or better yet, you can manipulate the map.

You can accelerate the pace of evolution by using open practices – open source, open API, etc. Or you can slow the process down by using patents, or FUD.

The key thing is that, once you have a map, you can see where things are moving and visually reason about why you should attack one component over another. And that’s how you turn business into situational awareness first, and actions after.

As things move from product to commodity, they enable a new generation of services to spring up (wonder), but they also cause the death of organizations stuck behind the inertia barrier (death).

This is a pattern that Simon calls Peace, War and Wonder, and through weak signal detection you can see roughly when these changes are likely to happen.


Simon finished this brilliant session with three lessons:

  1. if you’re a startup, have no fear of large corporates, because they suck at situational awareness;
  2. the future is awesome, and pioneers have already moved into the space of open hardware and open biology;
  3. open source itself is changing, and we have new people coming in as the new settlers

 

I hope you enjoyed Simon’s talk. His blog has much more information and goes into each of these topics in a great deal more detail, and if you follow him on Twitter (@swardley) he also posts regular titbits of wisdom, which I have started to collect.

 

James Pearce – How Facebook Open Sources at Scale

…We use in production what we open source, and we open source only what we use in production…

– James Pearce

Nuff said 

 

Martin Fowler – Making Architecture Matter

I don’t like the term “software architecture”, as it summons up images of some senior person in an organization who’s setting rules and standards on how software should be written but hasn’t actually written any software for maybe 10 or 20 years. These architects, for whom Joel Spolsky uses the term “architecture astronauts”, often cause a lot of problems in software projects. So the whole term “architect” and “architecture” has a kinda nasty taste to it.

– Martin Fowler

For me, the key points from this talk are:

  • the notion that architects shouldn’t code is wrong (or, don’t be an ivory tower architect!)
  • architecture is really the shared understanding of the system’s design amongst its expert developers
    • architecture diagrams are just (often imperfect) representations of this shared understanding
    • as software projects grow, what matters the most is for you to ensure a good shared understanding between people leading the project
  • architecture is also the decisions that you wish you could get right early
    • your concern is therefore the decisions that are hard to change, e.g. the programming language
  • combine the two definitions above and you can think of architecture as the “important things that I need to always keep in my head whilst I’m working on the system”
  • when confronted with requests for more features over quality, don’t make the moral argument of craftsmanship
    • when it comes to a battle between craftsmanship and economics, economics always wins
  • a common fallacy is to think that software quality is something that can be traded off for cost (like you do with cars or cellphones)
  • software has both external quality (visible to users) and internal quality (good modularity, etc.), and architecture is about internal quality
    • what matters with internal quality is the long-term picture
    • a well-maintained code base gives you a platform to build upon and can make future development easier and faster
    • a poorly maintained code base makes it harder and harder to make changes
    • this is why architecture matters!

 

Raffi Krikorian – Hacking Conway’s Law

Conway’s law has been a trendy topic at conferences these past 12 months, and everyone is basically singing the same tune – apply Conway’s law in reverse and organize your communication structure to fit the software you want to build.

 

OSCON is coming to Europe!

At long last, we’ll see a version of OSCON in Europe this year, on 26th-28th October in Amsterdam. Some pretty cool tech companies will be represented there – GitHub, DataStax, Google, ThoughtWorks, PayPal, Heroku and Spotify to name a few – and of course, our very own Gamesys.

I will be giving a talk on the work I did with Neo4j a while back, which you can read all about in this post.

p.s. Rachel Reese (of Jet) is coming over from the US to talk about building reactive services with F#!

 


PolyConf Experience Report

This year’s PolyConf is over, and although it was my first time at this conference and I didn’t know too many people going in, I had such a great time and learnt so much that I’ll definitely be back in the near future.

Recordings of the talks are slowly appearing on their YouTube channel, so keep an eye out for the talks mentioned below.

 

Venue

The conference was hosted at the Adam Mickiewicz University in the heart of Poznan, and the facility is very modern.

The weather was great and fortunately the auditorium was very well air conditioned and ventilated given the temperature (around 30 degrees Celsius for the entire duration of the conference)!


Food

Coffee was served throughout, and although lunch wasn’t to my taste (maybe it’s my Asian palate) there was always fresh fruit and snacks around to keep you going.

Oh, and there was a conference party every night with plenty of local craft beer on offer 


Format

The format of the conference was very refreshing: it has a single track, and each talk is only 30 minutes long. Having a single track means you no longer have to stress over deciding which talk to attend.

On day 1, the morning was dedicated to workshops and I really enjoyed William Byrd‘s workshop on building a relational interpreter in miniKanren.

The workshops were followed by two and a half days of 30-minute talks with 10-minute breaks in between. This format caused some logistical problems: many speakers had to adapt their talks to the shorter-than-usual slot and still ended up taking the full 30 minutes, so Q&A had to happen during the allocated break time, which pushed the subsequent talks back (or cut the breaks short), caused attendees to come back late for the next session, and so on.

Talks

True to its goal of being a conference dedicated to polyglot programming there were a lot of different languages on show. Whilst EmojiLisp might just be the funkiest language of the lot, my picks from the conference have to be:

miniKanren

The highlight of the first day for me was definitely miniKanren.

I attended William Byrd‘s miniKanren workshop and went through the process of designing and building a relational interpreter that supports a subset of Scheme in miniKanren.

What does ‘relational’ mean here?

It means taking code such as:

    let f x y = x + y

and instead of seeing it as a function that takes inputs (x and y) and returns an output, you treat it as a relation between the values x, y and the output.

Given this relation, you can deduce the value of a missing part if you know the values of the rest.

E.g. given x = 3, and output = 5, y must be 2

If more than one part of this relation is missing, then you can still deduce the value of the rest in terms of relations to each other.

E.g. given x = 3, then output = 3 + whatever value y takes
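
Here’s a minimal sketch of that idea in F# (plain pattern matching, not actual miniKanren): represent each part of the relation as an option, and deduce whichever part is missing. The function name addo is my own invention for illustration.

    // A sketch of x + y = out as a relation rather than a function: given any
    // two of the three values, we can deduce the third (None = unknown).
    let addo (x : int option) (y : int option) (out : int option) =
        match x, y, out with
        | Some x, Some y, None   -> Some (x, y, x + y)   // run "forwards"
        | Some x, None,   Some o -> Some (x, o - x, o)   // deduce y from x and out
        | None,   Some y, Some o -> Some (o - y, y, o)   // deduce x from y and out
        | Some x, Some y, Some o when x + y = o -> Some (x, y, o) // just verify
        // cases with two or more unknowns need real logic variables, which is
        // exactly what miniKanren provides – this sketch just gives up
        | _ -> None

    addo (Some 3) None (Some 5)  // Some (3, 2, 5) – y must be 2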

Now, if you have an interpreter that can evaluate your application code as a relation to its output when executed, what might you be able to do then?

Perhaps, you’d be able to synthesize the application code given some desired output value. For instance, you might be able to generate 99 programs that will output “I love you”.

Besides miniKanren there were also a couple of other good talks on day 1. I particularly liked Wojciech Ogrodowczyk’s Beyond Ruby talk, where he used the Pirahã people as an example to demonstrate why it’s so important to be a polyglot. It’s one of the angles I used in my Tour of Language Landscape talk at NDC Oslo too.

p.s. you can find all the code for William’s workshop and talk on miniKanren here.

 

Crystal

On day 2, Erik Michaels-Ober showed off Crystal, a new LLVM-backed, compiled language that is inspired by Ruby’s syntax.

Based on the numbers Erik showed and a couple of his benchmark tests (one of which was a simple Hello World web server), Crystal can be very performant whilst retaining much of Ruby’s syntax.

In fact, the syntax of Ruby and Crystal is so similar (which is by design) that you can even compile and run Ruby code with the Crystal compiler (e.g. by running crystal xyz.rb) and get a free speedup! Of course, this only works if the Ruby code also happens to be valid Crystal code.

Crystal also has modern language features such as type inference and macros.

 

Racket

Whilst I had been vaguely aware of Racket as ‘another LISP’, I had no idea just how much extensibility you have at your fingertips via the #lang notation. Sam Tobin-Hochstadt’s talk did a great job of illustrating this with a number of examples:

Typed Racket

You can introduce a gradual type system by extending the language using a library and the #lang notation (#lang typed/racket), as shown in the official getting started guide.

Lazy Racket

Similarly, you can also turn Racket into a lazily evaluated language (like Haskell) using #lang lazy.

I saw a lot of impressive things at PolyConf but the ability to ‘hack’ the language in this way might have just topped the lot.

Just for fun (since EmojiLisp was introduced in an earlier lightning talk), Sam also showed how you can basically implement EmojiLisp in Racket with a few lines of code.

(On a side note, whilst not nearly as powerful as Racket in this regard, you can also do some really cool things with Elixir’s macro system. If you’re interested in Elixir, then you’ll want to watch this talk by Chris McCord at NDC Oslo too.)

 


 

So that’s it folks, another conference down and another bunch of things added to my ever-growing todo list!

Next, I’ll be speaking at the Cambridge F# User Group on 20th July, about how Gamesys is using F# to build backends for games played by millions of players every month. Feel free to join us if you are in the area.

You’ll next find me at a conference on September 12th, at the first ever Kats Conf in Dublin, organized by Andrea Magnorsky. Edwin Brady will be there to talk about dependent typing and Idris, and I’m expecting an exciting lineup of speakers to be announced soon!


NDC Oslo 15 – Takeaways from “Lean and Functional Programming”

Bryan Hunter has been responsible for organising the FP track at NDC conferences (as well as a few others), and the quality of those tracks has been consistently good.

The FP track at this year’s NDC Oslo was exceptional, and every talk had a full house. This year Bryan actually kicked off the conference with a talk on how functional programming can help you be more ‘lean’.

 

History of Lean

Bryan started by talking about the history of lean, and whilst most people (myself included) thought the lean principles were created at Toyota, it turns out they actually originated in the US.

At a time when the US workforce was diminished because the men had been sent off to war, a group called TWI (Training Within Industry) was formed to find a way to bring women into the workforce for the first time, and to train them.

The process of continuous improvement the TWI created was a stunning success: before the war the US was producing around 3,000 planes per year, and by the end of the war it was producing 300,000 planes a year!

Unfortunately, this history of lean was mostly lost when the men came back from war and the factories went back to how they worked before, and jobs were prioritized over efficiency.

Whilst the knowledge from this amazing period of learning was lost in the US, what remained of the TWI was sent to Germany and Japan to help them rebuild, and this was how the basic foundations of the lean principles were passed on to Toyota.

sidebar: one thing I find interesting is that the process of continuous improvement that lean introduces is actually very similar to how deep learning algorithms work, or, as glimpsed from this tweet by Evelina, how our brains work.

 

Bryan then outlined the 4 key points of lean:

Long-term philosophy: you need a sense of purpose that supersedes short-term goals and economic conditions. This is the foundation of lean; without it you’re never stable.

The right process will produce the right results: if you aren’t getting the right results then you’ve got the wrong process, and you need to continuously improve that process.

Respect, challenge and develop your people: they will become a force multiplier.

Continuously solving root problems drives organizational learning.

 

Companies adopt lean because it’s a proven path to improving delivery times, reducing costs and improving quality.

The way these improvements happen is by eliminating waste first, then eliminating overburden and inconsistency.

 

Lean Thinking

The so-called lean house is built on top of the stability of having a long-term philosophy and the process of continuous improvement (or kaizen, written in kanji as 改善, which roughly translates to ‘change for the better’).

Then, through the two pillars of Just-In-Time and Act on Abnormality, we arrive at our goal of improved delivery times and quality, and reduced costs.


 

A powerful tool to help you improve is “go and see”, where you go and sit with your users and watch them use your system. Not only do you see how they actually use your system, but you also get to know them as humans and develop mutual empathy and respect, which leads to better communication.

 

Another thing to keep in mind is that you can’t adopt lean by just adopting a new technical solution. Often when you adopt a new technical solution you just change things without improving them, and you end up with continuous change instead of continuous improvement.

Functional programming fits nicely here because it holds up under the scrutiny of Plan-Do-Check-Act (PDCA). Instead of having a series of changes, you really have to build on the idea of standard work.

Without the standard you end up with a shotgun map where things change, and improvements might come about (if you change enough times it’s bound to happen some time, right?), but not as the result of a formalised process, and they are therefore unpredictable.

 

Seven Wastes

Then there are the Seven Wastes.


Overproduction is the most important waste as it encompasses all the wastes underneath it. In software, if you are building features that aren’t used then you have overproduced.

sidebar : regular readers of this blog might remember Melissa Perri calling this tendency to build features without first verifying the need for them “The Build Trap”, in her talk at QCon London. Dan North also talked about the issue of overproduction through a fixation on building features in his talk at CraftConf – Beyond Features.

 

By doing things that aren’t delivering value you cause other waste to occur.

 

Transportation, in software terms, can be thought of as the cost of going from requirement, through deployment, and into production. DevOps, Continuous Integration and Continuous Deployment are very much in line with the spirit of lean here, as they all aim to reduce the waste (both cognitive and time) associated with transportation.

Inventory can be thought of as all the things that are in progress and haven’t been deployed. In the waterfall model, where projects go on for months without anything being deployed, all of it becomes inventory waste if the project is killed. The same can happen with Scrum, where you build up inventory during the two-week sprint – which, as Dan North mentioned in his Beyond Features talk, is just long enough for you to experience all the pain points of waterfall.

You experience Unnecessary Motion whenever you are firefighting or doing things that should be automated. This waste equates to wear-and-tear on your people, and can cause them to burn out.

Waiting is self-explanatory, and often results from deficiencies in your organization’s communication and work scheduling (e.g. requirements taking too long to arrive whilst the workers assigned to them sit idle).

Over-processing is equivalent to the idea of gold plating in software. Although, as Jeff Atwood pointed out in his post, refactoring can be thought of as gold plating in the purest sense, it’s also important in producing sane, maintainable code.


Lastly, we have Defects. It’s exponentially cheaper to catch bugs at compile time than in production. This is why the notion of Type-Driven Development is so important: the goal is to make invalid states unrepresentable (as Scott Wlaschin likes to say!).
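
As a hypothetical F# sketch of that idea (my own example, not one from the talk): instead of pairing a value with flags that can disagree, design the type so the invalid combination simply cannot be constructed.

    // Hypothetical example: an email is either unverified, or verified at a
    // known time – there is no way to construct "verified but with no date".
    type EmailAddress = EmailAddress of string

    type ContactEmail =
        | Unverified of EmailAddress
        | Verified   of EmailAddress * System.DateTime

    // Compare with a class holding IsVerified : bool and VerifiedOn : DateTime,
    // where nothing stops the two fields from disagreeing at runtime.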

But you can still end up with defects related to performance, which is why I was really excited to hear about the work Greg Young has done on PrivateEye (which he publicly announced at NDC Oslo).

 

Just-In-Time

Bryan used the workings of a (somewhat dysfunctional) warehouse to illustrate the problem of a large codebase, where you have lots of unnecessary code to wade through to get to the problem you’re trying to solve.

sidebar : whilst Bryan didn’t call it out by name, this is an example of cross-cutting concerns, which Aspect-Oriented Programming aims to address and something that I have written about regularly.

 

One way to visualize this problem is through Value Stream Mapping.

In a value stream map, the batch processes are where value is created, but you can see there’s a large amount of lead time (14 days in this example) compared to the amount of processing time (585 seconds), so there is a lot of waste: 14 days is roughly 1.2 million seconds, so less than 0.05% of the elapsed time is actually spent creating value.

As one process finishes its work the material is pushed aside until the next process is ready, which is where you incur lead time between processes.

This is the push model in mass manufacturing.

In software, you can relate this to how the waterfall model works. All the requirements are gathered at once (a batch process); then implementations are done (another batch); then testing, and so on.

In between each batch process, you’re creating waste.

 

The solution to this is the flow model, or one-piece flow. It focuses on completing the production of one piece from start to finish with as little work in process inventory between operations as possible.

In this model you also have the opportunity of catching defects early. By exercising the process from start to end early you can identify problems in the process early, and also use the experience to refine and improve the process as you go.

You also deliver value (a completed item) for downstream as early as possible. For example, if your users are impacted by a bug, rather than have them wait for the fix in the next release cycle along with other things (the batch model) you can deliver just the fix right away.

Again, this reminds me of something that Dan North said in his Beyond Features talk:

“Lead time to someone saying thank you is the only reputation metric that matters.”

– Dan North

And finally, you can easily parallelise this one-piece flow by having multiple people work on different things at the same time.

 

Bryan then talked about the idea of Single-minute Exchange of Die (SMED) – which is to say that we need an efficient way to convert a manufacturing process from running the current product to running the new product.

This also relates to software, where we need to stay away from the batch model (where there is too much lead time between values being delivered), do just as much as necessary for the downstream, and then switch to something else.

sidebar : I feel this is also related to what Greg Young talked about in his The Art of Destroying Software talk where he pushed for writing software components that can be entirely rewritten (i.e. exchanged) in less than a week. It is also the best answer to “how big should my microservice be?” that I have heard.

 

You should flow when you can, and pull when you must. The idea of pull is:

“produce what you need, only as much as you need, when you need”

– Taiichi Ohno

When you do this, you’re forced to see the underlying problems.

 

With a two-week sprint, there is enough buffer to mask any underlying issues in your process. For instance, even if you have to spend 20 minutes fighting a TeamCity configuration issue, that inefficiency is masked by you working just a bit harder. However, the problem is not fixed for anyone else, and it becomes a recurring cost that your organization has to pay.

In the pull model, where you have downstream waiting on you, you’re forced to see these underlying problems and solve them.

 

There’s a widespread misconception that kanban equals lean, but kanban is just one tool to get there, and there are other tools available. Interestingly, Taiichi Ohno actually got the idea for kanban from Piggly Wiggly, based on the way they stocked sodas and soup.

Another tool that is similar to kanban is 5S.


 

Act on Abnormality

A key component here is to decouple humans from machines – you shouldn’t require humans to watch the machines do their job and babysit them.

Another tool here is mistake-proofing, or poka-yoke: if a mistake can happen, we try to minimize its impact.

A good example is the design of the manhole cover, which is round so that it can’t fall down the hole no matter which way you turn it.

Another example is to have visual controls that are very obvious and obnoxious, so that you don’t let problems hide.

 

I’m really glad to hear Bryan say:

“you shouldn’t require constant diligence, if you require constant diligence you’re setting everyone up for failure and hurt.”

– Bryan Hunter

which is something that I have been preaching to others in my company for some time.

An example of this is the management of art assets in our MMORPG Here Be Monsters. There was a set of rules (naming conventions, folder structures, etc.) the artists had to follow for things to work, and when they made a mistake we got subtle asset-related bugs such as:

  • invisible characters, where you can only see their shadow
  • a character appearing without a shirt/pants
  • trees mysteriously disappearing when transitioning into a state that’s missing an asset

So we created a poka-yoke for this, in the form of tools that decouple the artists from these arbitrary rules.


 

When you do have a problem, there’s a process called the 5 Whys which helps you identify the root cause.

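
As a hypothetical illustration of the technique: a release failed – why? A config value was missing – why? It was added by hand in each environment – why? There is no config template – why? Deployments were never automated – why? Nobody owns the deployment process. The fix that actually helps is at the bottom of the chain, not the top.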

 

Functional Programming

The rest of the talk was about how functional programming helps you mistake-proof your code in ways that you can’t with imperative programming.

 

Immutability

A key difference between FP and imperative programming is that imperative programming relies on mutating state, whereas FP is all about transformations.

This quote from Joe Armstrong (one of the creators of Erlang) sums up the problem of mutation very clearly:

“the problem with OO is that you ask OO for a banana, and instead you get a gorilla holding the banana and the whole jungle.”

– Joe Armstrong

With mutation, you always have to be diligent not to cause some adverse effect that impacts other parts of the code when you call a method. And remember: when you require constant diligence, you’re setting everyone up for failure and hurt.

sidebar : Venkat Subramaniam gave two pretty good talks at NDC Oslo, on the power and practicality of immutability and on things we can learn from Haskell. Both are relevant here and worth checking out if you’re still not sold on FP!

 

In C#, making a class immutable requires diligence because you’re going up against the defaults of the language.


sidebar : as Scott Wlaschin has discussed at length in this post, no matter how many FP features C# gets there will always be an unbridgeable gap because of the behaviour the language encourages with its default settings.
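
For contrast, here’s a small F# sketch (with hypothetical types of my own) showing how the defaults point the other way: records are immutable unless you explicitly opt out, so “updating” one means creating a modified copy.

    // Records are immutable by default – "updating" one creates a copy,
    // so the original can never change under anyone's feet.
    type Account = { Owner : string; Balance : decimal }

    let before = { Owner = "jane"; Balance = 100m }
    let after  = { before with Balance = before.Balance + 50m }

    // before.Balance is still 100m; no diligence required.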

This is not a criticism of C# or imperative programming.

Programming is an incredibly wide spectrum and imperative programming has its place, especially in performance critical settings.

What we should do, however, is shy away from using C# for everything. The same is true for any other language – stop looking for the one language that rules them all.

Explore, learn, unlearn, there are lots of interesting languages and ideas waiting for you to discover!

 

How vs What

Another important difference between functional and imperative programming is that FP focuses on what you want to do, whereas imperative programming forces you to think about how you want to do it.

Imperative programming forces you to understand how the machine works, which brings us back to the human-machine lock-in we talked about earlier. Remember, you want to decouple humans from machines, and allow humans to focus on understanding and solving the problem domain (which they’re good at) and leave the machine to work out how to execute their solution.
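
A small F# example of the same computation written both ways (my own illustration, not one from the talk): the imperative version spells out the loop and the mutable accumulator, whereas the functional version just declares the transformation.

    // The HOW: manage the loop, the condition and the mutable accumulator yourself.
    let sumOfEvenSquaresImperative (xs : int list) =
        let mutable total = 0
        for x in xs do
            if x % 2 = 0 then total <- total + x * x
        total

    // The WHAT: declare the transformation, let the compiler work out the loop.
    let sumOfEvenSquares xs =
        xs
        |> List.filter (fun x -> x % 2 = 0)
        |> List.sumBy  (fun x -> x * x)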

 

Some historical context is useful here: back in the days when computers were slow and buggy, the hardware was the bottleneck, so it paid to have the developer tell the computer how to do our computations, because we knew better.

Nowadays, software is the thing that is slow and buggy, both compilers and CPUs are capable of doing much more optimization than most of us can ever dream of.

Whilst there are still legitimate cases for choosing imperative programming in performance-critical scenarios, it doesn’t necessarily mean that developers need to write imperative code themselves. Libraries such as Streams are a good example of how you can let developers write functional-style code that is translated into imperative code under the hood. In fact, the F# compiler does this in many places – e.g. compiling your functional code into imperative while/for loops.

 

Another way to look at this debate is that, by making your developers deal with the HOW as well as the WHAT (just because you’re thinking about the how doesn’t mean you don’t have to think about the what), you have increased the complexity of the task they have to perform.

Given that the amount of cognitive resources they have to perform the task is constant, you have effectively reduced the likelihood that your developers will deliver a piece of working software that does the right thing. And a piece of software that does the wrong thing faster is probably not what you are after…


p.s. don’t take this relationship literally; it’s merely intended to illustrate how the three things here – cognitive resources, the complexity of the problem, and the chance of delivering a correct solution – relate to one another. And before you argue that you can solve this problem by just adding more people (i.e. cognitive resources) into the equation, remember that the effect of adding a new member to the group is not linear, especially when the developers in question do not offer sufficiently different perspectives.

 

null references

And no conversation about imperative programming is complete without talking about null references, the invention that Sir Tony Hoare considers his billion-dollar mistake.

sidebar : BTW, Sir Tony Hoare is speaking at the CodeMesh conference in London in November, along with other industry greats such as John Hughes, Robert Virding, Joe Armstrong and Don Syme.

In functional languages such as F#, there are no nulls.

The absence of a value is explicitly represented by the Option type, which eliminates the number one invalid state you have to deal with in your application. It also allows the type system to tell you exactly where values might be absent and therefore require special handling.
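
A small F# sketch of what this looks like in practice (with a hypothetical lookup function of my own):

    // The possibility of absence is in the type, so the compiler makes sure
    // every caller handles the None case – there is no null to forget about.
    let tryFindUser (id : int) : string option =
        if id = 1 then Some "alice" else None   // hypothetical lookup

    match tryFindUser 42 with
    | Some name -> printfn "found %s" name
    | None      -> printfn "no such user"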

 

And to wrap things up…


 

 


Takeaways from conferences (so far…)

Time flies when you’re having fun, and we’re already into the second half of 2015!

On a personal level, it’s been a great 6 months, I have spoken at 10 conferences in as many cities and as you read this post I’m at Polyconf in Poznan! 

The great thing about being at so many conferences is that you get to learn so much from others. I’ve gotten into the habit of writing up a summary of what I learnt from these conferences (where time permits of course).

If you missed them, here are my takeaways from the 2015 conferences so far.

 

CraftConf

Kyle Kingsbury – Jepsen IV: Hope Springs Eternal

Michael Nygard – Architecture without an End State

Tammer Saleh – Microservice Anti-patterns

Michael Feathers – The Hidden Dimension of Refactoring

Adrian Trenaman – Scaling micro-services at Gilt

Dan North – Beyond Features

Experience report

You can find the recordings of all the talks by room here.

 

QCon London

Melissa Perri – The Bad Idea Terminator

Matt Ranney – Scaling Uber’s realtime market platform

Randy Shoup – Service Architectures at Scale: Lessons from Google and eBay

Kevlin Henney – Small is Beautiful

Adam Tornhill – Code as Crime Scene

You can find recordings of the other talks here; the list is still incomplete, as InfoQ releases only one video a week.

 

CodeMotion Rome

Richard Rodger – Measuring micro-services

 

Joy of Coding

Experience report

 

Functional Programming eXchange

Experience report

You can find recordings of the talks here.

 

What next?

For the second half of the year, I’ll exercise better restraint and limit my appearances at conferences (I have all but exhausted my vacation days this year already…).

But, that said, I’m signed up to a few more conferences yet:

code.talks in Hamburg, Sep 29th – 30th


OSCON Europe in Amsterdam, Oct 26th – 28th


Øredev in Malmö, Nov 4th – 6th
