Anders Hejlsberg’s podcast session with .Net Rocks!

Having just listened to a recent .Net Rocks! podcast with Anders Hejlsberg (chief architect of the C# language) in the fittingly named show, “Anders Hejlsberg blows our mind!”, I felt it worthwhile to note down some of the views Anders shared with us, and some nice quotes for you to use in your next geek conversation :-P


Recent changes to the C# language have introduced more functional and dynamic features to a traditionally imperative and statically typed language.

Language extensions such as LINQ introduced an isolated set of functional features (the LINQ query language, for example). Through its different flavours (Linq2SQL, Linq2XML and Linq2Objects), LINQ allows for a unified way of working with data – something that has been lost since the days of old, when concerns such as scalability, transactions, etc. drove data management capabilities out of general-purpose programming languages and into the relational database realm.

Besides being a data query language, LINQ is also a data transformation language, allowing data to be transformed between objects and XML, or SQL. It also moves the level of abstraction upwards, letting developers work at a higher level and say more about the ‘what’ rather than the ‘how’. This elevation of the abstraction level is important because it lends greater support to a concurrent execution environment – PLINQ, for instance, allows LINQ queries to execute in parallel on multiple cores.
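To make the ‘what’ vs ‘how’ point concrete, here’s a minimal sketch (my own example, not from the podcast): the same declarative query run sequentially and then, with a single AsParallel() call, partitioned across cores by PLINQ.

```csharp
using System;
using System.Linq;

class LinqSketch
{
    static void Main()
    {
        var numbers = Enumerable.Range(1, 1_000_000);

        // Declarative 'what': sum the squares of the even numbers.
        long sequential = numbers
            .Where(n => n % 2 == 0)
            .Select(n => (long)n * n)
            .Sum();

        // PLINQ: the same query, spread across cores with AsParallel().
        long parallel = numbers
            .AsParallel()
            .Where(n => n % 2 == 0)
            .Select(n => (long)n * n)
            .Sum();

        Console.WriteLine(sequential == parallel); // True — same result either way
    }
}
```

The only change between the two queries is AsParallel(); because the query says ‘what’ rather than ‘how’, the runtime is free to decide how to execute it.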

On Parallelism

In years gone by, hardware evolution provided a quick and easy way to solve our software performance problems: as CPUs’ clock speeds went up, they delivered an across-the-board performance increase to all our software. However, the increase in clock speed has dried up because we have hit a physical limit – CPUs get too hot – and so began the current cycle of multi-core CPUs.

This presents a different challenge for software developers: they can no longer rely on the hardware to solve performance issues, and have to be smart to leverage the potential of the new generation of CPUs:

“The free lunch is over, now you really have to learn to cook!”

Whilst concurrent programming is hard, the language is trying to make it easier by introducing PLINQ and the Task Parallel Library (TPL), and Anders foresees future changes which will make it easier to create immutable types, which work better in a concurrent environment (see my post on immutable structs here). But bear in mind that not all things can be parallelised, and even for those that can, parallelisation doesn’t always guarantee a linear gain in performance.
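As a rough illustration of why immutable types play well with concurrency (a hand-rolled sketch, not the language support Anders hinted at): if a value can never change after construction, any number of threads can read it without locks, and a ‘modification’ simply produces a new value.

```csharp
using System;

// A hand-rolled immutable struct: readonly fields set once in the
// constructor; 'mutators' return a new copy instead of changing state.
struct Point
{
    public readonly int X;
    public readonly int Y;

    public Point(int x, int y) { X = x; Y = y; }

    public Point WithX(int x) { return new Point(x, Y); }
}

class Program
{
    static void Main()
    {
        var p = new Point(1, 2);
        var q = p.WithX(5);   // p is untouched; q is a fresh value

        Console.WriteLine(p.X); // 1
        Console.WriteLine(q.X); // 5
    }
}
```

Because no thread can ever observe a Point mid-mutation, instances can be shared freely across threads – the property Anders wants the language to make cheap to express.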

The problem with the existing thread creation functions in C# is that we have a fairly shallow representation of the OS resources, which are relatively expensive – they don’t consume a lot of physical memory, but they consume lots of logical address space. By default, every time you create a thread, 1 MB of logical address space is allocated to it, and with only 2–3 GB of logical address space available on a 32-bit machine, you’re limited to a few thousand concurrent threads.
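For what it’s worth, the Thread constructor does let you shrink that default 1 MB reservation per thread – a small sketch of the maxStackSize overload (my example, not from the podcast), though it only stretches the ceiling rather than removing it:

```csharp
using System;
using System.Threading;

class StackSizeDemo
{
    static void Main()
    {
        // By default each thread reserves ~1 MB of logical address space
        // for its stack. The Thread(ThreadStart, int) overload lets you
        // request a smaller reservation, e.g. 256 KB, when you know the
        // thread's call stacks are shallow.
        var worker = new Thread(Work, 256 * 1024);
        worker.Start();
        worker.Join();
    }

    static void Work()
    {
        Console.WriteLine("running on a small-stack thread");
    }
}
```

Even so, threads remain an OS-level resource with real costs – which is exactly the motivation for the TPL’s decoupling described next.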

The TPL provides a way for us to decouple our work from the threads, and enables us to reason about generating logical units of work without having to worry about thread affinity.
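A minimal sketch of that decoupling (my own example): with the TPL you describe logical units of work as tasks, and the library schedules them onto pooled threads for you – no thread per unit of work.

```csharp
using System;
using System.Threading.Tasks;

class TplDemo
{
    static void Main()
    {
        // Two logical units of work. We never create a thread ourselves;
        // the TPL schedules each task onto the shared thread pool.
        Task<int> a = Task.Factory.StartNew(() => 20);
        Task<int> b = Task.Factory.StartNew(() => 22);

        // .Result blocks until the corresponding task has completed.
        Console.WriteLine(a.Result + b.Result); // 42
    }
}
```

You could queue thousands of such tasks without anywhere near thousands of threads existing – the work, not the thread, is the unit you reason about.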

On Functional Programming

“Functional programming is wonderful, except when you have to write real apps.”

In a pure functional programming language there can be no side effects, but any real-world application has to have side effects somewhere – writing to a log file is a side effect, and so is updating a row in the database.

So the trick is to identify areas in your application which don’t require side effects and write islands of functional code, then find ways to marry them to imperative code for their side effects. The great thing about not having any side effects is that you can rule out any consideration about state changes when you call a method. For example, when you call Math.Round() on a number you know what you’re gonna get, and you never stop to think about the consequences of calling it twice, because the method has no side effects and for the same argument it will always return the same value.
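Here’s a tiny sketch of the ‘islands of functional code’ idea (the method name and scenario are mine, invented for illustration): the calculation is side-effect-free, so calling it twice is as harmless as calling Math.Round() twice, while the imperative shell around it owns the side effects.

```csharp
using System;
using System.Linq;

class PureIsland
{
    // The functional island: same inputs always produce the same output,
    // and nothing outside the method is touched.
    static decimal TotalWithTax(decimal[] prices, decimal rate)
    {
        return prices.Sum() * (1 + rate);
    }

    static void Main()
    {
        var prices = new[] { 10m, 20m };

        // Calling it twice changes nothing — just like Math.Round().
        var first  = TotalWithTax(prices, 0.1m);
        var second = TotalWithTax(prices, 0.1m);
        Console.WriteLine(first == second); // True

        // The imperative shell is where the side effect (I/O) lives.
        Console.WriteLine(first);
    }
}
```

The payoff is exactly the one described above: inside the island you never reason about state changes, only about values in and values out.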

On Dynamic Typing and C# 5

In C# 4, support for dynamic typing was added to the language. This was introduced very much with the challenges developers face day to day in mind, because we work with non-schematised data structures all the time, e.g. every time you work against a cloud data service.
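In its simplest form (a contrived sketch of mine, not an example from the show), the new dynamic keyword defers member resolution to run time, which is what makes it a fit for data whose shape isn’t known at compile time:

```csharp
using System;

class DynamicDemo
{
    static void Main()
    {
        // 'dynamic' turns off compile-time member checking; the call is
        // resolved at run time against whatever the value actually is.
        dynamic value = "hello";
        Console.WriteLine(value.Length); // 5 — string.Length, bound at run time

        value = 42;                      // the same variable can rebind to an int
        Console.WriteLine(value + 1);    // 43 — int addition, bound at run time
    }
}
```

With a real cloud data service, the dynamic value would typically be a proxy object whose members come from the service’s response rather than from any compiled type.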

And finally, briefly touching on what’s to come in C# 5, Anders talked about its theme – meta-programming. Some of the work he’s doing now is about opening up the compiler, and providing the compiler as a service. This will make it possible for us to break away from the traditional compiler model of working with source files, and enable interactive prompts and REPLs like the F# Interactive window. It also opens the door to meta-programming capability – programs that write themselves – a feature that has driven the success of Ruby and Ruby on Rails.