Project Euler – Problem 64 Solution


All square roots are periodic when written as continued fractions and can be written in the form:


For example, let us consider √23:


If we continue we would get the following expansion:


The process can be summarised as follows:


It can be seen that the sequence is repeating. For conciseness, we use the notation √23 = [4;(1,3,1,8)], to indicate that the block (1,3,1,8) repeats indefinitely.

The first ten continued fraction representations of (irrational) square roots are:

√2=[1;(2)], period=1

√3=[1;(1,2)], period=2

√5=[2;(4)], period=1

√6=[2;(2,4)], period=2

√7=[2;(1,1,1,4)], period=4

√8=[2;(1,4)], period=2

√10=[3;(6)], period=1

√11=[3;(3,6)], period=2

√12=[3;(2,6)], period=2

√13=[3;(1,1,1,1,6)], period=5

Exactly four continued fractions, for N <= 13, have an odd period.

How many continued fractions for N <= 10000 have an odd period?


(see full solution here).


Based on the algorithm for continued fractions from Wikipedia, we can implement the expansion algorithm as:
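The core of it looks something like this. This is a sketch based on that Wikipedia algorithm; `expand` and `period` are my own names, not necessarily those used in the linked solution:

```fsharp
// Expansion of the continued fraction of sqrt n (n must not be a
// perfect square). m, d and a follow the recurrences from Wikipedia:
//   m' = d*a - m;  d' = (n - m'^2) / d;  a' = (a0 + m') / d'
let expand n =
    let a0 = int (sqrt (float n))
    // unfold the infinite sequence of terms a1, a2, ... after a0
    Seq.unfold (fun (m, d, a) ->
        let m' = d * a - m
        let d' = (n - m' * m') / d
        let a' = (a0 + m') / d'
        Some(a', (m', d', a'))) (0, 1, a0)

// the repeating block ends at the first term equal to 2 * a0
let period n =
    let a0 = int (sqrt (float n))
    expand n |> Seq.takeWhile (fun a -> a <> 2 * a0) |> Seq.length |> (+) 1
```

For example, `expand 23` yields 1, 3, 1, 8, 1, 3, 1, 8, … and `period 23` is 4.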


From the Wikipedia page:

The algorithm can also terminate on aᵢ when aᵢ = 2a₀, which is easier to implement.

which corresponds to the termination condition we have in the Repeat active pattern (which also checks if the accumulator is empty):


Also, this algorithm doesn’t work on numbers that are perfect squares, i.e. 4, 9, 16, … hence we need to exclude them when searching for our answer.

Here’s the solution in full:
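Here is a self-contained sketch of what it amounts to (`periodLength`, `isPerfectSquare` and `answer` are my own names, not necessarily the original code):

```fsharp
// Count n <= 10000 whose continued fraction for sqrt n has an odd
// period, skipping perfect squares (on which the algorithm breaks down)
let periodLength n =
    let a0 = int (sqrt (float n))
    let rec loop (m, d, a) count =
        let m' = d * a - m
        let d' = (n - m' * m') / d
        let a' = (a0 + m') / d'
        if a' = 2 * a0 then count + 1 else loop (m', d', a') (count + 1)
    loop (0, 1, a0) 0

let isPerfectSquare n =
    let r = int (sqrt (float n))
    r * r = n

let answer =
    [2..10000]
    |> List.filter (not << isPerfectSquare)
    |> List.filter (fun n -> periodLength n % 2 = 1)
    |> List.length
```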



This solution took 92ms to execute on my machine.

Project Euler – Problem 80 Solution


It is well known that if the square root of a natural number is not an integer, then it is irrational. The decimal expansion of such square roots is infinite without any repeating pattern at all.

The square root of two is 1.41421356237309504880…, and the digital sum of the first one hundred decimal digits is 475.

For the first one hundred natural numbers, find the total of the digital sums of the first one hundred decimal digits for all the irrational square roots.


(see full solution here).

The premise of the problem itself is fairly straightforward; the challenge comes down to the way floating-point numbers are implemented on computers, which lack the precision necessary to solve this problem. So the bulk of the research went into finding a way to generate an arbitrary number of digits of a square root.

As usual, Wikipedia has plenty to offer, and the easiest to implement is this method by Frazer Jarvis.


which translates to:
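Jarvis' subtraction-based method translates to something like the following in F# (my translation; `sqrtDigits` is my own name). The trick is that `b` always ends in 5, and `10*b - 45` inserts a 0 before that final 5, fixing the digit just found:

```fsharp
// First `digitCount` decimal digits of sqrt n, including the digits
// before the decimal point, using only integer (bigint) arithmetic
let sqrtDigits (n : int) digitCount =
    let rec loop (a : bigint) (b : bigint) digits =
        if List.length digits = digitCount then List.rev digits
        elif a >= b then loop (a - b) (b + 10I) digits
        else
            // the digit just fixed is the second-to-last digit of b
            let digit = int ((b / 10I) % 10I)
            loop (a * 100I) (b * 10I - 45I) (digit :: digits)
    loop (bigint (5 * n)) 5I []
```

For √2 this yields 1, 4, 1, 4, 2, 1, … and the first one hundred digits sum to 475, matching the problem statement.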


The rest is really straightforward, with the only tricky thing being the conversion from char to int, since a direct cast returns the character's internal integer value instead – e.g. int ‘0’ => 48 and int ‘1’ => 49 – hence the need for some hackery in the sum function.
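The pitfall in a nutshell (a one-liner sketch; `digitValue` is my own name):

```fsharp
// int on a char yields its code point (int '0' = 48), so we subtract
// int '0' to recover the numeric value of a digit character
let digitValue (c : char) = int c - int '0'
```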

Here is the full solution:
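A self-contained sketch of the whole search (my reconstruction, with my own names): for each non-square n up to 100, sum the first one hundred decimal digits of √n, digits before the decimal point included.

```fsharp
// digit sum of the first 100 decimal digits of sqrt n, via Jarvis'
// subtraction method in bigint arithmetic
let digitSumOfSqrt n =
    let rec loop (a : bigint) (b : bigint) count acc =
        if count = 100 then acc
        elif a >= b then loop (a - b) (b + 10I) count acc
        else loop (a * 100I) (b * 10I - 45I) (count + 1) (acc + int ((b / 10I) % 10I))
    loop (bigint (5 * n)) 5I 0 0

let isSquare n = let r = int (sqrt (float n)) in r * r = n

let answer =
    [1..100]
    |> List.filter (not << isSquare)
    |> List.sumBy digitSumOfSqrt
```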



The solution took 95ms to complete on my machine.

Project Euler – Problem 61 Solution


Triangle, square, pentagonal, hexagonal, heptagonal, and octagonal numbers are all figurate (polygonal) numbers and are generated by the following formulae:


The ordered set of three 4-digit numbers: 8128, 2882, 8281, has three interesting properties.

  1. The set is cyclic, in that the last two digits of each number is the first two digits of the next number (including the last number with the first).
  2. Each polygonal type: triangle (P3,127=8128), square (P4,91=8281), and pentagonal (P5,44=2882), is represented by a different number in the set.
  3. This is the only set of 4-digit numbers with this property.

Find the sum of the only ordered set of six cyclic 4-digit numbers for which each polygonal type: triangle, square, pentagonal, hexagonal, heptagonal, and octagonal, is represented by a different number in the set.


(see full solution here).

The tricky thing here (at least for me) was to remember that the six 4-digit numbers have to come from different sets, but not necessarily in the order of P3, P4, … P8. Once that is cleared up, the rest is fairly straightforward. In the solution linked above, I first created a set of functions for generating triangle, square, pentagonal, … octagonal numbers:
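The formulae from the problem statement map directly onto one small function each (a sketch; the names are mine):

```fsharp
// figurate (polygonal) number formulae
let triangle n = n * (n + 1) / 2        // P3,n = n(n+1)/2
let square n = n * n                    // P4,n = n^2
let pentagonal n = n * (3 * n - 1) / 2  // P5,n = n(3n-1)/2
let hexagonal n = n * (2 * n - 1)       // P6,n = n(2n-1)
let heptagonal n = n * (5 * n - 3) / 2  // P7,n = n(5n-3)/2
let octagonal n = n * (3 * n - 2)       // P8,n = n(3n-2)
```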


Since the question concerns only 4-digit numbers, for efficiency's sake let's generate the desired 4-digit numbers ahead of time and save them for later use:
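One way to precompute the 4-digit members of each figurate family (a sketch with my own names; the post's `is4digit` and `naturalNumbers` play the same roles):

```fsharp
let is4digit n = n >= 1000 && n <= 9999
let naturalNumbers = Seq.initInfinite (fun i -> i + 1)

// [0] => P3, [1] => P4, ..., [5] => P8, each as an array of 4-digit numbers
let figurateNumbers =
    [ (fun n -> n * (n + 1) / 2)      // P3
      (fun n -> n * n)                // P4
      (fun n -> n * (3 * n - 1) / 2)  // P5
      (fun n -> n * (2 * n - 1))      // P6
      (fun n -> n * (5 * n - 3) / 2)  // P7
      (fun n -> n * (3 * n - 2)) ]    // P8
    |> List.map (fun f ->
        naturalNumbers
        |> Seq.map f
        |> Seq.takeWhile (fun x -> x <= 9999)
        |> Seq.filter is4digit
        |> Seq.toArray)
```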


The is4digit predicate function is self-explanatory. naturalNumbers is an infinite sequence of integers starting from 1; we use this sequence to generate the figurate numbers we need, but only keep those that are actually 4 digits.

So far so good, we have all the figurate numbers in an array where [0] => P3, [1] => P4, and so on.


Next, create permutations of the figurate numbers such that we exhaust all possible sequences of figurate numbers:

P3 => P4 => P5 => P6 => P7 => P8

P3 => P4 => P6 => P7 => P8 => P5

P4 => P3 => P5 => P6 => P7 => P8


(P.S. the permute function here comes from the Common.fs source file in the solution)
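For reference, a minimal permute along these lines can be sketched as follows (this is my own version, not necessarily the one in Common.fs):

```fsharp
// all orderings of a list of distinct elements
let rec permute = function
    | [] -> [ [] ]
    | xs ->
        xs
        |> List.collect (fun x ->
            permute (List.filter ((<>) x) xs)
            |> List.map (fun rest -> x :: rest))
```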


To find the answer to the problem, we process each permutation in turn; take a moment to understand this code:



The processPermutation function processes one permutation of the figurate numbers, and the termination conditions for the inner loop function are:

  1. we have one number from each figurate number set, and the last 2 digits of the last number = first 2 digits of the first number

  2. we have one number from each figurate number set, but the last 2 digits of the last number <> first 2 digits of the first number (so close!)

  3. one of the figurate number sets in the sequence doesn't contain a number whose first 2 digits = the last 2 digits of the number selected from the preceding figurate number set (short-circuited)



For each number in the current set of figurate numbers we build up a new predicate function – e.g. if x = 1282 then the predicate function would find 4-digit numbers whose first two digits = 82 – and use it to process the next set of figurate numbers in the sequence.

The loop function returns int list option where the int list represents the cyclic figurate numbers we’re looking for, so all that’s left is to unpack the option type and then sum the list.
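Putting that description into code, a hedged reconstruction of the per-permutation search might look like this (names are mine; this is not necessarily the exact code in the linked solution):

```fsharp
// walk the sets in the given order, at each step keeping only numbers
// whose first two digits match the last two digits of the previous
// pick, and close the cycle back to the first number at the end
let processPermutation (sets : int [] list) =
    let firstTwo n = n / 100
    let lastTwo n = n % 100
    let rec loop chosen remaining =
        match remaining with
        | [] ->
            // all sets used: succeed only if the chain is cyclic
            let ordered = List.rev chosen
            match ordered, chosen with
            | first :: _, last :: _ when lastTwo last = firstTwo first -> Some ordered
            | _ -> None
        | set :: rest ->
            // tryPick short-circuits when no number in `set` chains on
            set
            |> Array.tryPick (fun x ->
                match chosen with
                | prev :: _ when lastTwo prev <> firstTwo x -> None
                | _ -> loop (x :: chosen) rest)
    loop [] sets
```

The example set from the problem illustrates the idea: `processPermutation [ [|8128|]; [|2882|]; [|8281|] ]` finds the cyclic chain, while reordering the sets so the chain cannot link returns None.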



This solution took 17ms to run on my machine.

F# – inline functions and member constraints

Generic type parameters were introduced in C# 2.0, and they gave us the ability to write code that works against any type matching a set of constraints, removing the need to create type-specific overloads, e.g.:


A few years passed, and dynamic typing was introduced in C# 4, which allows us to bypass compile-time type checking (type checking still happens at runtime) when we use the dynamic keyword, e.g.:


This code works so long as the ‘+’ operator is defined for the runtime types of obj1 and obj2; otherwise, you’ll get a RuntimeBinderException when the code is executed. This kind of type checking is often referred to as duck typing.

Duck typing is a very powerful language feature but it also makes your code less safe, as you now face the possibility of having runtime type errors that would have otherwise been caught by the compiler.

It’s a damn shame that there’s no way for you to combine the powers of generics and dynamic typing… because if you could, you’d be able to write DoSomethingElse like this instead:

public void DoSomethingElse<T, U, V>(T obj1, U obj2) where T + U => V
{
    var obj3 = obj1 + obj2;
}

which stops you from mistakenly trying to invoke the method with an int and a List<int> at compile time, and saves you 10 mins of debugging and bug fixing time.

Well, would it blow your mind if I told you that such a feature already exists in the .Net framework today? No, I kid you not, member constraints in F# let you do just that!

Member Constraints

In F#, you can specify a statically resolved type parameter by prefixing it with the caret symbol ( ^ ), e.g. ^T. These type parameters can be used with member constraints which allows you to specify that a type argument must have a particular member or members in order to be used.

Take the following F# code for instance:
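The function in question is essentially a one-liner (a sketch; the exact snippet may differ, and the name is mine):

```fsharp
// '+' here is resolved statically against the argument types, so this
// works for any pair of types with a suitable '+' operator defined
let inline doSomethingElse a b = a + b
```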


It’s equivalent to the earlier C# version of DoSomethingElse, but this function has the following signature:


which says that the types ^a and ^b used in the function must have a static member ‘+’ defined on either one of them that can be used on instances of these two types.

You can explicitly state member constraints too, for instance:
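Here is a sketch of what the explicit syntax looks like, using a Speak member as in the ISpeaker discussion below (the member name is illustrative; the post's exact snippet may differ):

```fsharp
// explicitly require that ^a has an instance member Speak : unit -> string,
// then invoke that member through the constraint
let inline speak (x : ^a) =
    (^a : (member Speak : unit -> string) (x))
```

Any type with a matching Speak member can be passed to speak, no shared interface required.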


Member constraints are very useful in those tight spots where you find yourself wrestling with the .Net type system, it gives you the flexibility of duck typing but also the safety of static typing, so really it gives you the best of both worlds!


However, useful as it may be, there are some restrictions to when you can use member constraints.

First, they cannot be used with generic type parameters; they can only be used with statically resolved type parameters.

Secondly, they can only be used with functions or methods that are inline.

Inline functions

Which brings us to the topic of inline functions, which, as the name suggests, are expanded ‘inline’.

But what does that mean? It means that, at every call site to an inlined function or method, the concrete types for the statically resolved type parameters (e.g. ^T, ^U) are resolved and the specific code for those types are inserted at the call site.

This has the obvious implication that the compiler will generate a lot of duplicated IL for inline functions, which increases the size of your assembly.

By contrast, a normal function or method will generate only one instance of that function or method and every call site makes a ‘jump’ to the location of that function or method in memory to execute it. The inlining behaviour also has some performance implications which we will go through shortly.


The one major restriction on inline functions is that an inlined function cannot be overloaded.

When to use inline functions

As with most powerful features, such as duck typing and inline functions, they are often open to abuse. Personally, I think it’s important to think carefully about what it is you’re trying to do before deciding to use the latest and greatest language features just for the sake of it.

Take the last code snippet for example, there’s no reason why I can’t solve this problem using interfaces and generic type parameters – define an ISpeaker interface with a Speak member and requiring the parameter x to be an instance of ISpeaker.

Doing so will achieve the same end goal, there will be a stronger contract (a well understood, shared interface) and the code will be cleaner and easier to read:
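The interface-based alternative can be sketched like this (ISpeaker is the name from the description above; the function name is mine):

```fsharp
// a shared, well-understood contract instead of a member constraint
type ISpeaker =
    abstract Speak : unit -> string

let doSomething (speaker : ISpeaker) = speaker.Speak()
```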


As far as I’m aware, there are really only two main reasons for using inline functions:

Generic Arithmetic

This is the problem inline functions were designed to solve, and a problem that can’t be solved by using interfaces and generics, because there’s no way to specify constraints on operators without using member constraints (and therefore inline functions).

Take this simple add function for example:


Without making it inline, the parameters a and b will be inferred to be of type int, and there’s no way to make this function generic and usable with every numeric type (int32, int64, uint32, uint64, float, etc.). This is a prime example of when you should inline a function; doing so solves the aforementioned problem:
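The inlined version is a sketch of the same function with just the inline keyword added:

```fsharp
// now '+' is a member constraint resolved per call site, so add works
// for int, float, decimal, bigint, or any type defining '+'
let inline add a b = a + b
```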


Performance Optimization

In some cases, it’s possible to get a performance improvement by making a function inline as it removes the jumps from call sites to the function’s location in memory.

Jon Harrop gave an example of how making the following fold function inline can make it run 5x faster:
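My reconstruction of that example looks roughly like this: an array fold written as a local loop, marked inline so the folding function itself gets inlined at each call site instead of going through a closure invocation.

```fsharp
// inline array fold; with inline, calls like fold (+) 0 xs compile down
// to a tight loop with the '+' applied directly, no closure call
let inline fold f seed (xs : _ []) =
    let mutable acc = seed
    for i in 0 .. xs.Length - 1 do
        acc <- f acc xs.[i]
    acc
```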



As you can see, in my test, the inline version managed to run a whopping 8.5x faster!

This might just be too much temptation to those of us constantly seeking better performance from our apps, but it’s important to keep a couple of things in mind too:

  • making a function or method inline does not always guarantee a performance improvement
  • overuse adds significant bloat, making your assembly bigger and slower to load, which can hurt overall performance
  • always measure/profile your app first; don’t prematurely optimize

Further readings

MSDN – Statically resolved type parameters (preceded by a caret ^ symbol)

MSDN – Inline functions

Statically typed duck typing in F#

Project Euler – Problem 65 Solution


The square root of 2 can be written as an infinite continued fraction.


The infinite continued fraction can be written, √2 = [1;(2)], (2) indicates that 2 repeats ad infinitum. In a similar way, √23 = [4;(1,3,1,8)].

It turns out that the sequence of partial values of continued fractions for square roots provide the best rational approximations. Let us consider the convergents for √2.


Hence the sequence of the first ten convergents for √2 are:

1, 3/2, 7/5, 17/12, 41/29, 99/70, 239/169, 577/408, 1393/985, 3363/2378, …

What is most surprising is that the important mathematical constant,

e = [2; 1,2,1, 1,4,1, 1,6,1 , … , 1,2k,1, …].

The first ten terms in the sequence of convergents for e are:

2, 3, 8/3, 11/4, 19/7, 87/32, 106/39, 193/71, 1264/465, 1457/536, …

The sum of digits in the numerator of the 10th convergent is 1+4+5+7=17.

Find the sum of digits in the numerator of the 100th convergent of the continued fraction for e.


If you look at the terms in the continued fraction of e and the numerators of its convergents, you’ll see a pattern emerging:

1    +    2    *   1    =    3

2    +    3    *   2    =    8

3    +    8    *   1    =    11

8    +   11   *   1    =    19

11  +   19   *   4    =    87

If you look at the sequence of numerators in the convergents of e ( 2, 3, 8, 11, … ) and the sequence of terms in the continued fraction of e ( 1, 2, 1, 1, 4, 1, … ): given the current numerator ( n ), the previous numerator ( n-1 ) in the sequence and the corresponding continued fraction term ( i ), the next numerator ( n+1 ) can be calculated using the formula:

( n+1 ) = ( n-1 ) + n * i

Once we have this formula to work with, the rest is simple; the solution runs in 7 milliseconds on my machine.
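The recurrence can be sketched in F# as follows (my reconstruction with my own names; bigint is needed because the 100th numerator far exceeds 64 bits). Term k of e's fraction after the leading 2 is 2*(k/3 + 1) when k % 3 = 2, and 1 otherwise:

```fsharp
// term k (1-based) of e's continued fraction after the leading 2:
// 1, 2, 1, 1, 4, 1, 1, 6, 1, ...
let eTerm k = if k % 3 = 2 then bigint (2 * (k / 3 + 1)) else 1I

// numerator of the n-th convergent (n >= 1): h(k) = a(k)*h(k-1) + h(k-2)
let numerator n =
    if n = 1 then 2I
    else
        let rec loop k prev curr =
            if k = n - 1 then curr
            else loop (k + 1) curr (eTerm (k + 1) * curr + prev)
        loop 1 2I 3I

// digit sum of a big integer, via its decimal string
let digitSum (n : bigint) =
    string n |> Seq.sumBy (fun c -> int c - int '0')
```

As a sanity check, the 10th convergent's numerator is 1457, whose digits sum to 17 as stated in the problem.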