Yubl’s road to Serverless architecture – Part 3 – ops

The Road So Far

part 1 : overview

part 2 : testing and continuous delivery strategies

 

A couple of folks asked me about our strategy for monitoring, logging, etc. after part 2, and having watched Chris Swan talk about “Serverless Operations is not a Solved Problem” at the Serverless meetup it’s a good time for us to talk about how we approached ops with AWS Lambda.

 

NoOps != No Ops

The notion of “NoOps” has often been mentioned alongside serverless technologies (I have done it myself), but it doesn’t mean that you no longer have to worry about ops.

To me, “ops” is the umbrella term for everything related to keeping my systems operational and performing within acceptable parameters, including (but not limited to) resource provisioning, configuration management, monitoring and being on-hand to deal with any live issues. The responsibility of keeping the systems up and running will always exist, regardless of whether your software is running on VMs in the cloud, on-premise hardware, or as small Lambda functions.

Within your organization, someone needs to fulfill these responsibilities. It might be that you have a dedicated ops team, or perhaps your developers will share those responsibilities.

NoOps, to me, means no ops specialization in my organization – ie. no dedicated ops team – because the skills and effort required to fulfill the ops responsibilities do not justify such specialization. As an organization it’s in your best interest to delay such specialization for as long as you can, both from a financial point of view and, perhaps more importantly, because Conway’s law tells us that having an ops team is a surefire way to end up with a set of operational procedures/processes, tools and infrastructure whose complexity will in turn justify the existence of said ops team.

At Yubl, as we migrated to a serverless architecture our deployment pipeline became more streamlined and our toolchain simpler, and we found so little need for a dedicated ops team that we were in the process of disbanding it altogether.

 

Logging

Whenever you write to stdout from your Lambda function – eg. when you do console.log in your nodejs code – it ends up in the function’s Log Group in CloudWatch Logs.

Centralised Logging

However, these logs are not easily searchable, and once you have a dozen Lambda functions you will want to collect them in one central place. The ELK stack is the de facto standard for centralised logging these days; you can run your own ELK stack on EC2, and elastic.co also offers a hosted version of Elasticsearch and Kibana.

To ship your logs from CloudWatch Logs to ELK you can subscribe the Log Group to a cloudwatch-logs-to-elk function that is responsible for shipping the logs.

You can subscribe the Log Group manually via the AWS management console.

But you don’t want a manual step that everyone needs to remember every time they create a new Lambda function. Instead, it’s better to set up a rule in CloudWatch Events to invoke a subscribe-log-group Lambda function that sets up the subscription for new Log Groups (a sketch of such a function follows the list below).

2 things to keep in mind:

  • lots of services create logs in CloudWatch Logs, so you’d want to filter Log Groups by name – Lambda function logs have the prefix /aws/lambda/
  • don’t subscribe the Log Group for the cloudwatch-logs-to-elk function (or whatever you decide to call it), otherwise you create an infinite loop where its own logs trigger it and produce more logs, and so on
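Here’s a minimal sketch of what such a subscribe-log-group function might look like in Node.js. It assumes the CloudWatch Events rule fires on the CloudTrail CreateLogGroup API call, and that the ARN of the log-shipping function is passed in via a DESTINATION_ARN environment variable – those details are assumptions for illustration, not a description of our exact setup.

  const AWS = require('aws-sdk');
  const cloudWatchLogs = new AWS.CloudWatchLogs();

  const PREFIX = '/aws/lambda/';                           // only subscribe Lambda log groups
  const EXCLUDED = '/aws/lambda/cloudwatch-logs-to-elk';   // avoid the infinite loop

  module.exports.handler = (event, context, callback) => {
    const logGroupName = event.detail.requestParameters.logGroupName;

    if (!logGroupName.startsWith(PREFIX) || logGroupName === EXCLUDED) {
      return callback(null, 'skipped');
    }

    const params = {
      logGroupName: logGroupName,
      filterName: 'ship-to-elk',
      filterPattern: '',                                   // forward everything
      destinationArn: process.env.DESTINATION_ARN
    };

    cloudWatchLogs.putSubscriptionFilter(params, err => callback(err));
  };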

 

Distributed Tracing

Having all your logs in one easily searchable place is great, but as your architecture expands into more and more services that depend on one another, you will need to correlate logs from different services to understand all the events that occurred during one user request.

For instance, when a user creates a new post in the Yubl app we distribute the post to all of the user’s followers. Many things happen along this flow:

  1. user A’s client calls the legacy API to create the new post
  2. the legacy API fires a yubl-posted event into a Kinesis stream
  3. the distribute-yubl function is invoked to handle this event
  4. distribute-yubl function calls the relationship-api to find user A’s followers
  5. distribute-yubl function then performs some business logic, groups user A’s followers into batches and, for each batch, fires a message to an SNS topic
  6. the add-to-feed function is invoked for each SNS message and adds the new post to each follower’s feed

If one of user A’s followers doesn’t receive the new post in his feed, the problem can lie in a number of different places. To make such investigations easier we need to be able to see all the relevant logs in chronological order, and that’s where correlation IDs (eg. initial request-id, user-id, yubl-id, etc.) come in.

Because the handling of the initial user request flows through API calls, Kinesis events and SNS messages, it means the correlation IDs also need to be captured and passed through API calls, Kinesis events and SNS messages.

Our approach was to roll our own client libraries that pass the captured correlation IDs along.

Capturing Correlation IDs

All of our Lambda functions are created with a wrapper that wraps the handler code with additional goodness, such as capturing the correlation IDs into a global.CONTEXT object (which works because nodejs is single-threaded).
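A minimal sketch of the idea (not our actual library – the header convention and names here are just for illustration):

  // capture-correlation-ids.js - a simplified illustration
  global.CONTEXT = {};

  function wrap (handler) {
    return (event, context, callback) => {
      const headers = event.headers || {};
      global.CONTEXT = {};

      // capture any incoming correlation IDs, eg. headers prefixed with "x-correlation-"
      Object.keys(headers)
        .filter(key => key.toLowerCase().startsWith('x-correlation-'))
        .forEach(key => global.CONTEXT[key.toLowerCase()] = headers[key]);

      // if there's no correlation ID yet, start a new one from the request ID
      if (!global.CONTEXT['x-correlation-id']) {
        global.CONTEXT['x-correlation-id'] = context.awsRequestId;
      }

      return handler(event, context, callback);
    };
  }

  module.exports.wrap = wrap;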

Forwarding Correlation IDs

Our HTTP client library is a thin wrapper around the superagent HTTP client and injects the captured correlation IDs into outgoing HTTP headers.

We also have a client library for publishing Kinesis events, which can inject the correlation IDs into the record payload.

For SNS, you can include the correlation IDs as message attributes when publishing a message.
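For example, publishing to SNS with the captured correlation IDs attached as message attributes might look something like this (again a sketch, not our actual client library):

  const AWS = require('aws-sdk');
  const sns = new AWS.SNS();

  function publish (topicArn, message) {
    // turn every captured correlation ID into a String message attribute
    const attributes = {};
    Object.keys(global.CONTEXT || {}).forEach(key => {
      attributes[key] = { DataType: 'String', StringValue: global.CONTEXT[key] };
    });

    return sns.publish({
      TopicArn: topicArn,
      Message: JSON.stringify(message),
      MessageAttributes: attributes
    }).promise();
  }

  module.exports.publish = publish;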

Zipkin and Amazon X-Ray

Since then, AWS has announced X-Ray, but it’s still in preview so I haven’t had a chance to see how it works in practice, and it doesn’t support Lambda at the time of writing.

There is also Zipkin, but it requires you to run additional infrastructure on EC2, and whilst it has a wide range of instrumentation support, the path to adoption in a serverless environment (where you don’t have or need traditional web frameworks) is not clear to me.

 

Monitoring

Out of the box you get a number of basic metrics from CloudWatch – invocation counts, durations, errors, etc.

You can also publish custom metrics to CloudWatch (eg. user created, post viewed) using the AWS SDK. However, since these are HTTP calls, you have to be conscious of the latency they add to user-facing functions (ie. those serving APIs). You can mitigate the added latency by publishing the metrics in a fire-and-forget fashion, and/or by budgeting the amount of time (say, a max of 50ms) you’re willing to spend publishing metrics at the end of a request.

Because you have to do everything during the invocation of a function, it forces you to make trade-offs.

Another approach is to take a leaf out of Datadog’s book and use specially formatted log messages, then process them after the fact. For instance, if you write logs in the format MONITORING|epoch_timestamp|metric_value|metric_type|metric_name like below…

console.log("MONITORING|1489795335|27.4|latency|user-api-latency");

console.log("MONITORING|1489795335|8|count|yubls-served");

then you can process these log messages (see the Logging section above) and publish them as metrics instead. With this approach you’re trading off the liveness of metrics for less API latency overhead.
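A sketch of what the processing side might look like – a Lambda function subscribed to the Log Group (as in the Logging section), which parses the MONITORING| lines and publishes them to CloudWatch. This is an illustration of the technique, not our exact implementation; the namespace and unit mapping are assumptions.

  const AWS = require('aws-sdk');
  const zlib = require('zlib');
  const cloudWatch = new AWS.CloudWatch();

  module.exports.handler = (event, context, callback) => {
    // CloudWatch Logs delivers log events as base64-encoded, gzipped JSON
    const payload = Buffer.from(event.awslogs.data, 'base64');
    const logData = JSON.parse(zlib.gunzipSync(payload).toString('utf8'));

    const metricData = logData.logEvents
      .filter(e => e.message.startsWith('MONITORING|'))
      .map(e => {
        const parts = e.message.trim().split('|');   // [MONITORING, timestamp, value, type, name]
        return {
          MetricName: parts[4],
          Timestamp: new Date(parseInt(parts[1], 10) * 1000),
          Value: parseFloat(parts[2]),
          Unit: parts[3] === 'latency' ? 'Milliseconds' : 'Count'
        };
      });

    if (metricData.length === 0) {
      return callback(null, 'nothing to publish');
    }

    // note: putMetricData accepts at most 20 datapoints per call; batching is omitted for brevity
    cloudWatch.putMetricData({ Namespace: 'yubl', MetricData: metricData }, err => callback(err));
  };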

Of course, you can employ both approaches in your architecture and use the appropriate one for each situation:

  • for functions on the critical path (that will directly impact the latency your users experience), choose the approach of publishing metrics as special log messages;
  • for other functions (cron jobs, kinesis processors, etc.) where invocation duration doesn’t significantly impact a user’s experience, publish metrics as part of the invocation

Dashboards + Alerts

We have a number of dashboards set up in CloudWatch as well as in Graphite (using HostedGraphite, for our legacy stack running on EC2), and they’re displayed on large monitors near the server team area. We also set up alerts against various metrics such as API latencies and error counts, and have OpsGenie set up to alert whoever’s on call that week.

Consider alternatives to CloudWatch

Whilst CloudWatch is a good, cost-effective solution for monitoring (and in some cases the only way to get metrics out of AWS services such as Kinesis and DynamoDB), it has its drawbacks.

Its UI and customization options are not on par with competitors such as Graphite, Datadog and Sysdig, and it lacks advanced features such as anomaly detection and finding correlations, which you find in Stackdriver and Wavefront.

The biggest limitation, however, is that CloudWatch metrics are only granular to the minute. This means your time to discovery of issues is measured in minutes (you need a few data points to separate real issues that require manual intervention from temporary blips), and consequently your time to recovery is likely to be measured in tens of minutes. As you scale up and the cost of unavailability goes up, you need to invest effort in cutting down both, which might mean you need more granular metrics than CloudWatch is able to give you.

Another good reason for not using CloudWatch is that you really don’t want your monitoring system to fail at the same time as the system it monitors. Over the years we have experienced a number of AWS outages that impacted both our core systems running on EC2 and CloudWatch itself. As our system failed and recovered, we didn’t have the visibility to see what was happening and how it was impacting our users.

 

Config Management

Whatever approach you use for config management you should always ensure that:

  • sensitive data (eg. credentials, connection strings) is encrypted in flight and at rest
  • access to sensitive data is restricted based on roles
  • you can easily and quickly propagate config changes

Nowadays, you can add environment variables to your Lambda functions directly, and have them encrypted with KMS.

This was the approach we started with, albeit using environment variables in the Serverless framework since it wasn’t yet a feature of the Lambda service at the time. Once we had a dozen functions that shared config values (eg. MongoDB connection strings), this approach became cumbersome, and it was laborious and painfully slow to propagate config changes manually (by updating and re-deploying every function that required the updated config value).

It was at this point in our evolution that we moved to a centralised config service. Having considered consul (which I know a lot of folks use) we decided to write our own using API Gateway, Lambda and DynamoDB because:

  • we don’t need many of consul‘s features, only the kv store
  • consul is another thing we’d have to run and manage
  • consul is another thing we’d have to learn
  • even running consul with 2 nodes (you need some redundancy for production) it would still be an order of magnitude more expensive

Sensitive data is encrypted (by a developer) using KMS and stored in the config API in its encrypted form. When a Lambda function starts up, it asks the config API for the config values it needs and uses KMS to decrypt the encrypted blob.

We secured access to the config API with API keys created in API Gateway; in the event these keys are compromised, attackers would be able to update config values via this API. You can take this a step further (which we didn’t get around to in the end) by securing the POST endpoint with IAM roles, which would require developers to make signed requests to update config values.

Attackers could still retrieve sensitive data in its encrypted form, but they would not be able to decrypt it, because KMS also requires role-based access.

Client Library

As most of our Lambda functions need to talk to the config API, we invested effort in making our client library really robust and baked in caching support and periodic polling to refresh config values from the source.
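A simplified sketch of what such a client library might look like – illustrative only, with a TTL-based cache standing in for background polling; the endpoint, API key header and response shape are assumptions, not our exact implementation.

  const AWS = require('aws-sdk');
  const http = require('superagent');
  const kms = new AWS.KMS();

  const CONFIG_API = process.env.CONFIG_API;   // eg. the config API's base URL
  const TTL = 3 * 60 * 1000;                   // refresh a value every 3 minutes
  const cache = {};                            // key -> { value, expires }

  function loadKey (key) {
    return http
      .get(`${CONFIG_API}/keys/${key}`)
      .set('x-api-key', process.env.CONFIG_API_KEY)
      .then(res => res.body.encrypted
        ? kms.decrypt({ CiphertextBlob: Buffer.from(res.body.value, 'base64') })
            .promise()
            .then(data => data.Plaintext.toString('utf8'))
        : res.body.value);
  }

  function get (key) {
    const cached = cache[key];
    if (cached && cached.expires > Date.now()) {
      return Promise.resolve(cached.value);
    }

    return loadKey(key).then(value => {
      cache[key] = { value, expires: Date.now() + TTL };
      return value;
    });
  }

  module.exports.get = get;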

 

So that’s it folks, I hope you’ve enjoyed this post – click here to see the rest of the series.

The emergence of AWS Lambda and other serverless technologies has significantly simplified the skills and tools required to fulfil the ops responsibilities inside an organization. However, this new paradigm has also introduced new limitations and challenges for existing toolchains, and requires us to come up with new answers. Things are changing at an incredibly fast pace, and I for one am excited to see what new practices and tools emerge from this space!

Yubl’s road to Serverless architecture – Part 2 – Testing and CI/CD

Note: see here for the rest of the series.

 

Having spoken to quite a few people about using AWS Lambda in production, I’ve found that testing and CI/CD are always high up the list of questions, so I’d like to use this post to discuss the approaches that we took at Yubl.

Please keep in mind that this is a recollection of what we did, and why we chose to do things that way. I have heard others advocate very different approaches, and I’m sure they too have their reasons and their approaches no doubt work well for them. I hope to give you as much context (or, the “why”) as I can so you can judge whether or not our approach would likely work for you, and feel free to ask questions in the comments section.

 

Testing

In Growing Object-Oriented Software, Guided by Tests, Nat Pryce and Steve Freeman talked about the 3 levels of testing [Chapter 1]:

  1. Acceptance – does the whole system work?
  2. Integration – does our code work against code we can’t change?
  3. Unit – do our objects do the right thing, are they easy to work with?

As you move up the levels (acceptance -> unit) the feedback loop becomes faster, but you also have less confidence that your system will work correctly when deployed.

Favour Acceptance and Integration Tests

With the FAAS paradigm, there is more “code we can’t change” than ever (AWS even describes Lambda as the “glue for your cloud infrastructure”), so the value of integration and acceptance tests is also higher than ever. And since the “code we can’t change” is easily accessible as a service, these tests are also far easier to orchestrate and write than before.

The functions we wrote tended to be fairly simple and didn’t have complicated logic (most of the time), but there were a lot of them, and they were loosely connected through messaging systems (Kinesis, SNS, etc.) and APIs. The ROI for acceptance and integration tests is therefore far greater than for unit tests.

It’s for these reasons that we decided (early on in our journey) to focus our efforts on writing acceptance and integration tests, and to only write unit tests where the internal workings of a Lambda function are sufficiently complex.

No Mocks

In Growing Object-Oriented Software, Guided by Tests, Nat Pryce and Steve Freeman also talked about why you shouldn’t mock types that you can’t change [Chapter 8], because…

…We find that tests that mock external libraries often need to be complex to get the code into the right state for the functionality we need to exercise.

The mess in such tests is telling us that the design isn’t right but, instead of fixing the problem by improving the code, we have to carry the extra complexity in both code and test…

…The second risk is that we have to be sure that the behaviour we stub or mock matches what the external library will actually do…

Even if we get it right once, we have to make sure that the tests remain valid when we upgrade the libraries…

I believe the same principles apply here, and that you shouldn’t mock services that you can’t change.

Integration Tests

A Lambda function is ultimately a piece of code that AWS invokes on your behalf when some input event occurs. To test that it integrates correctly with downstream systems, you can invoke the function from your chosen test framework (we used Mocha).

Since the purpose is to test the integration points, it’s important to configure the function to use the same downstream systems as the real, deployed code. If your function needs to read from/write to a DynamoDB table, then your integration test should use the real table as opposed to something like dynamodb-local.

It does mean that your tests can leave artefacts in your integration environment and can cause problems when running multiple tests in parallel (eg. the artefacts from one test affecting the results of other tests). Which is why, as a rule of thumb, I advocate:

  • avoid hard-coded IDs, they often cause unintentional coupling between tests
  • always clean up artefacts at the end of each test

The same applies to acceptance tests.

Acceptance Tests

(A note on the setup: the Mocha tests are not invoking the Lambda function programmatically, but rather indirectly via whatever input event the Lambda function is configured with – API Gateway, SNS, Kinesis, etc. More on this later.)

…Wherever possible, an acceptance test should exercise the system end-to-end without directly calling its internal code.

An end-to-end test interacts with the system only from the outside: through its interface…

…We prefer to have the end-to-end tests exercise both the system and the process by which it’s built and deployed

This sounds like a lot of effort (it is), but has to be done anyway repeatedly during the software’s lifetime…

– Growing Object-Oriented Software, Guided by Tests [Chapter 1]

Once the integration tests complete successfully, we have good confidence that our code will work correctly when it’s deployed. The code is deployed, and the acceptance tests are run against the deployed system end-to-end.

Take our Search API, for instance. One of the acceptance criteria is “when a new user joins, he should be searchable by first name/last name/username”.

The acceptance test first sets up the test condition – a new user joins – by interacting with the system from the outside, calling the legacy API just as the client app would. From there, a new-user-joined event is fired into Kinesis; a Lambda function processes the event and adds a new document to the User index in CloudSearch; the test then validates that the user is searchable via the Search API.

Avoid Brittle Tests

Because a new user is added to CloudSearch asynchronously via a background process, it introduces eventual consistency to the system. This is a common challenge when you decouple features through events/messages. When testing these eventually consistent systems, you should avoid waiting for fixed time periods (see protip 5 below), as it makes your tests brittle.

In the “new user joins” test case, this means you shouldn’t write tests that:

  1. create new user
  2. wait 3 seconds
  3. validate user is searchable

and instead, write something along the lines of the following (a code sketch follows the list):

  1. create new user
  2. validate user is searchable with retries
    1. if expectation fails, then wait X seconds before retrying
    2. repeat
    3. allow Y retries before failing the test case
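In Mocha, such a retry loop might look something like this – a sketch, assuming chai’s expect and hypothetical createUser/searchByName helpers against the legacy API and the Search API:

  const { expect } = require('chai');
  const { createUser, searchByName } = require('./helpers');   // hypothetical helpers

  // retries the assertion until it passes, or we run out of attempts
  const retry = (maxRetries, delayMs, assertion) =>
    assertion().catch(err => {
      if (maxRetries <= 0) { throw err; }
      return new Promise(resolve => setTimeout(resolve, delayMs))
        .then(() => retry(maxRetries - 1, delayMs, assertion));
    });

  it('makes the new user searchable', function () {
    this.timeout(30000);    // allow time for eventual consistency

    return createUser('Logan', 'Howlett', 'wolverine').then(() =>
      retry(10, 2000, () =>
        searchByName('Logan').then(results => {
          expect(results.map(r => r.username)).to.contain('wolverine');
        })
      )
    );
  });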

Sharing test cases for Integration and Acceptance Testing

We also found that, most of the time, the only difference between our integration and acceptance tests is how our function code is invoked. Instead of duplicating a lot of code and effort, we used a simple technique to allow us to share the test cases.

Suppose you have a test case where the interesting bit is a call like this:

let res = yield when.we_invoke_get_all_keys(region);

In the when module, the function we_invoke_get_all_keys will either

  • invoke the function code directly with a stubbed context object, or
  • perform a HTTP GET request against the deployed API

depending on the value of process.env.TEST_MODE, which is an environment variable that is passed into the test via package.json (see below) or the bash script we use for deployment (more on this shortly).
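A sketch of what the we_invoke_get_all_keys function in the when module might look like – the names come from the example above, but the paths, environment variables and response handling are illustrative assumptions:

  const http = require('superagent');
  const handler = require('../functions/get-all-keys').handler;   // hypothetical path

  const viaHandler = (region) => new Promise((resolve, reject) => {
    const event = { queryStringParameters: { region } };
    const context = {};                                            // a stubbed context object
    handler(event, context, (err, response) => err ? reject(err) : resolve(response));
  });

  const viaHttp = (region) =>
    http.get(`${process.env.API_ROOT}/keys`)                       // the deployed API, eg. the dev stage
      .query({ region })
      .then(res => ({ statusCode: res.status, body: JSON.stringify(res.body) }));

  const we_invoke_get_all_keys = (region) =>
    process.env.TEST_MODE === 'http' ? viaHttp(region) : viaHandler(region);

  module.exports = { we_invoke_get_all_keys };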

 

Continuous Integration + Continuous Delivery

Whilst we had around 170 Lambda functions running in production, many of them worked together to provide different features to the app. Our approach was to group these functions such that:

  • functions that form the endpoints of an API are grouped in a project
  • background processing functions for a feature are grouped in a project
  • each project has its own repo
  • functions in a project are tested and deployed together

The rationale for this grouping strategy is to:

  • achieve high cohesion for related functions
  • improve code sharing where it makes sense (endpoints of an API are likely to share some logic since they operate within the same domain)

Although functions are grouped into projects, they can still be deployed individually. We chose to deploy them as a unit because:

  • it’s simple, and all related functions (in a project) have the same version no.
  • it’s difficult to detect which functions are impacted by a change to shared code
  • deployment is fast; it makes little difference speed-wise whether we’re deploying one function or five

 

For example, in the Yubl app, you have a feed of posts from people you follow (similar to your Twitter timeline).

To implement this feature there was an API (with multiple endpoints) as well as a bunch of background processing functions (connected to Kinesis streams and SNS topics).

The API has two endpoints, but they also share a common custom auth function, which is included as part of this project (and deployed together with the get and get-yubl functions).

The background processing functions (initially only Kinesis but later expanded to include SNS as well, though the repo wasn’t renamed) share a lot of code, such as a distribute module as well as a number of modules in the lib folder.

All of these functions are deployed together as a unit.

Deployment Automation

We used the Serverless framework to do all of our deployments, and it took care of packaging, uploading and versioning our Lambda functions and APIs. It’s super useful and took care of most of the problems for us, but we still needed a thin layer around it to allow an AWS profile to be passed in and to include testing as part of the deployment process.

We could have scripted these steps on the CI server, but I have been burnt a few times by magic scripts that only exist on the CI server (and not in source control). To that end, every project has a simple build.sh script (a sketch follows further down) which gives you a common vocabulary to:

  • run unit/integration/acceptance tests
  • deploy your code

Our Jenkins build configs do very little and just invoke this script with different params.
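A minimal sketch of what such a build.sh might look like – the exact modes, npm scripts and Serverless flags are assumptions for illustration, not our actual script:

  #!/bin/bash
  set -e

  # usage: ./build.sh <mode> <aws-profile> <stage> <region>
  #   eg.  ./build.sh int-test yubl-nonprod dev us-east-1
  MODE=$1
  PROFILE=$2
  STAGE=$3
  REGION=$4

  npm install

  case $MODE in
    unit-test)
      npm run test ;;
    int-test)
      TEST_MODE=handler npm run integration-test ;;
    acceptance-test)
      TEST_MODE=http npm run acceptance-test ;;
    deploy)
      serverless deploy --aws-profile $PROFILE --stage $STAGE --region $REGION ;;
    *)
      echo "unknown mode: $MODE" && exit 1 ;;
  esac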

Continuous Delivery

To this day I’m still confused by Continuous “Delivery” vs Continuous “Deployment”. There seem to be several interpretations, but the one I have heard most often draws the line at whether every change is automatically deployed to production (continuous deployment) or merely proven releasable and deployed at the push of a button (continuous delivery).

Regardless of which definition is correct, what was most important to us was the ability to deploy our changes to production quickly and frequently.

Whilst there were no technical reasons why we couldn’t deploy to production automatically, we didn’t do that because:

  • it gives the QA team the opportunity to do thorough tests using actual client apps
  • it gives the management team a sense of control over what is being released and when (I’m not saying if this is a good or bad thing, but merely what we wanted)

In our setup, there were two AWS accounts:

  • production
  • non-prod, which has 4 environments – dev, test, staging, demo

(dev for development, test for the QA team, staging is production-like, and demo for private beta builds for investors, etc.)

In most cases, when a change is pushed to Bitbucket, all the Lambda functions in that project are automatically tested, deployed and promoted all the way through to the staging environment. The deployment to production is a manual process that can happen at our convenience, and we generally avoid deploying to production on Friday afternoon (for obvious reasons).

 

Conclusions

The approaches we have talked about worked pretty well for our team, but it was not without drawbacks.

In terms of development flow, the focus on integration and acceptance tests meant slower feedback loops and tests that take longer to execute. Also, because we don’t mock downstream services, we couldn’t run tests without an internet connection, which is an occasional annoyance when you want to work during your commute.

These were explicit tradeoffs we made, and I stand by them even now and AFAIK everyone in the team feels the same way.

 

In terms of deployment, I really missed the ability to do canary releases. This was offset by the fact that our user base was still relatively small, and the speed with which one can deploy and roll back changes with Lambda functions was sufficient to limit the impact of a bad change.

Whilst AWS Lambda and API Gateway don’t support canary releases out of the box, it is possible to do a DIY solution for APIs using weighted routing in Route53. Essentially you’ll have:

  • a canary stage for API Gateway and associated Lambda function
  • deploy production builds to the canary stage first
  • use weighted routing in Route53 to direct X% traffic to the canary stage
  • monitor metrics, and when you’re happy with the canary build, promote it to production

Again, this would only work for APIs and not for background processing (SNS, Kinesis, S3, etc.).

 

So that’s it folks, hope you’ve enjoyed this post, feel free to leave a comment if you have any follow up questions or tell me what else you’d like to hear about in part 3.

Ciao!

 


From F# to Scala – apply & unapply functions

Note: read the whole series here.

 

Last time around we looked at Scala’s Case Class in depth and how it compares to F#’s Discriminated Unions. F# also has Active Patterns, which is a very powerful language feature in its own right. Unsurprisingly, Scala also has something similar in the shape of extractors (via the unapply function).

Before we can talk about extractors we have to first talk about Scala’s object again. Remember when we first met object in Scala I said it’s Scala’s equivalent to F#’s module? (except it can be generic, supports inheritance, and multiple inheritance)

Well, turns out Scala has another bit of special magic in the form of an apply function.

 

The apply function

In Scala, if you assign a function to a value, that value will have the type Function1[TInput, TOutput]. Since everything in Scala is an object, this value also has a couple of functions on it.

You can use andThen or compose to compose it with another function (think of them as F#’s >> and << operators respectively).

The apply function applies the argument to the function – ie. f.apply(x) – but you can also invoke the function without it, as f(x), which the compiler translates to f.apply(x) for you.

Ok, now that we know what the apply function’s role is, let’s go back to object.

If you declare an apply function in an object, it essentially allows the object to be used as a factory class (indeed this is called the Factory pattern in Scala).

You see this pattern in Scala very often, and there are some useful built-in factories such as Option (which wraps an object as Some(x) unless it’s null, in which case it returns None).

String and BigInt define their own apply function too (in String’s case, it returns the char at the specified index).

You can also define an apply function on a class as well as an object, and it works the same way. For instance…
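Something along these lines (my own illustrative example, not the original snippet):

  // apply on an object - the object acts as a factory
  class Wizard(val name: String)

  object Wizard {
    def apply(name: String): Wizard = new Wizard(name)
  }

  val gandalf = Wizard("gandalf")     // sugar for Wizard.apply("gandalf")

  // apply on a class - invoked on the instance instead
  class Greeter(greeting: String) {
    def apply(name: String): String = s"$greeting, $name"
  }

  val greet = new Greeter("hello")
  greet("world")                      // sugar for greet.apply("world"), returns "hello, world"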

I find this notion of applying arguments to an object somewhat alien, almost as if this is an elaborate way of creating a delegate even though Scala already has first-class functions…

Ok, can you pass multiple arguments to apply? What about overloading?

Check, and Check.

What about case classes and case object?

Check, and Check.

Ok. Can the apply function(s) be inherited and overridden like a normal function?

Check, and Check. Although this is consistent with inheritance and OOP in Java, I can’t help but feel it has the potential to create ambiguity, and one should just stick with plain old functions.

 

The unapply function (aka extractors)

When you create a case class or case object, you also create a pattern that can be used in pattern matching. It’s not the only way to create patterns in Scala, you can also create a pattern by defining an unapply function in your class/object.

You can also create a pattern that doesn’t return anything, in which case the unapply function returns a Boolean instead.
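Here’s a sketch of both flavours (my own illustrative example, not the original snippet):

  // a pattern that extracts (and returns) a value
  object FreeText {
    def unapply(input: String): Option[String] =
      if (input.startsWith("#")) None else Some(input)
  }

  // a pattern that doesn't return anything
  object Hashtag {
    def unapply(input: String): Boolean = input.startsWith("#")
  }

  "hello world" match {
    case Hashtag()     => println("it's a hashtag")
    case FreeText(msg) => println(s"free text: $msg")   // msg is bound to the extracted value
  }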

So, the unapply function turns a Scala object into a pattern, and here are some limitations on the unapply function:

  1. it can only take one argument
  2. if you want to return a value, then the return type T must define the members:
    • isEmpty: Boolean
    • get: Any

side note: point 2 is interesting. Looking at all the examples on the Internet, one might assume the unapply function must return an Option[T], but it turns out it’s OK to return any type so long as it has the necessary members!

Whilst I can’t think of a situation where I’d need to use anything other than an Option[T], this insight gives me a better understanding of how pattern matching in Scala works.

Whether or not the pattern matches is determined by the value of isEmpty on the result type T. And the value returned by your pattern – ie. msg in the example above – is determined by the value of get on the result type T. So if you’re feeling a bit cheeky, you can always do something like this:
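For instance, a result type of your own that always matches – my own sketch, which relies on the name-based extractors introduced in Scala 2.11:

  class Always[A](value: A) {
    def isEmpty: Boolean = false    // the pattern always matches
    def get: A = value              // and always returns this value
  }

  object Anything {
    def unapply(input: Any): Always[Int] = new Always(42)
  }

  "whatever" match {
    case Anything(n) => println(n)  // prints 42, regardless of the input
  }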


Since the unapply function is a member on an object (like the apply function), it means it should work with a class too, and indeed it does.
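A sketch of what that might look like (my own example, not the original snippet):

  class StartsWith(prefix: String) {
    def unapply(input: String): Option[String] =
      if (input.startsWith(prefix)) Some(input.drop(prefix.length)) else None
  }

  val Hashtag = new StartsWith("#")
  val Mention = new StartsWith("@")

  "@theburningmonk" match {
    case Hashtag(tag)  => println(s"hashtag: $tag")
    case Mention(name) => println(s"mention: $name")
  }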

As you can see from the snippet above, this allows you to create parameterized patterns and work around the limitation of having only one argument in the unapply function.

You can nest patterns together too, for example.
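Something like this (an assumed reconstruction based on the description below):

  object Int {
    def unapply(input: String): Option[Int] = scala.util.Try(input.toInt).toOption
  }

  object Even {
    def unapply(n: Int): Boolean = n % 2 == 0
  }

  "42" match {
    case Int(Even()) => println("an even number")     // Even is applied to the Int that was extracted
    case Int(n)      => println(s"an odd number: $n")
    case _           => println("not a number")
  }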

Here, the Int pattern returns an Int, and instead of binding it to a name we can apply another pattern inline to check if the value is even.

And whilst it doesn’t get mentioned in any of the articles I have seen, these patterns are not limited to the match clause either. For instance, you can use them as part of a declaration (but be careful, as you’ll get a MatchError if the pattern doesn’t match!).

 

Primer: F# Active Patterns

Before we compare Scala’s extractors to F#’s active patterns, here’s a quick primer if you need to catch up on how F#’s active patterns work.

Like extractors, they give you named patterns that you can use in pattern matching, and they come in 3 flavours: single-case, partial, and multi-case.

You can parameterise a pattern.

If a pattern’s declaration has multiple arguments, then the last argument is the thing that is being pattern matched (same as the single argument to unapply); the preceding arguments can be passed into the pattern at the call site. For example…

If you don’t want to return anything, then you can always return () or Some() instead (partial patterns require the latter).

You can also mix and match different patterns together using & and |. So we can rewrite the fizzbuzz function as the following..

Patterns can be nested.

Finally, patterns can be used in assignment as well as function arguments too.

 

extractors vs F# Active Patterns

Scala’s extractor is the equivalent of F#’s partial pattern, and although there is no like-for-like replacement for single-case and multi-case patterns, you can mimic both with extractors:

  • an extractor that always returns Some(x) is like a single-case pattern
  • multiple extractors working together (maybe even loosely grouped together via a common trait) can mimic a multi-case pattern, although it’s up to you to ensure the extractors don’t overlap on input values

Whilst it’s possible to create parameterized patterns with Scala extractors (by using class instead of object), I find the process of doing so in F# to be much more concise. In general, the syntax for declaring patterns in Scala is a lot more verbose by comparison.

The biggest difference for me though, is that in F# you can use multiple patterns in one case expression by composing them with & and |. This makes even complex patterns easy to express and understand.

 


From F# to Scala – case class/object (ADTs)

Note: read the whole series here.

 

Continuing on from where we left off with traits last time around, let’s look at Scala’s case class/object which can be used to create Algebraic Data Types (ADTs) in Scala.

 

Case Class

You can declare an ADT in F# using Discriminated Unions (DUs). For example, a binary tree might be represented as the following.

In Scala, you can declare this ADT with a pair of case classes.
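Something along these lines (a reconstruction for illustration, not the original snippet):

  trait Tree[T]
  case class Node[T](left: Tree[T], value: T, right: Tree[T]) extends Tree[T]
  case class Empty[T]() extends Tree[T]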

Here is how you construct and pattern match against F# DUs.

This looks very similar in Scala (minus all the comments).

Also, one often uses single-case DUs in F# to model a domain, for instance…

From what I can see, this is also common practice in Scala (at least in our codebase).

From this snippet, you can also see that case classes do not have to be tied to a top level type (via inheritance), but more on this shortly.


UPDATE 16/01/2017: as @clementd pointed out, you can turn case classes with a single member into a value class and avoid boxing by extending AnyVal. For more details, see here.


On the surface, case classes in Scala look almost identical to DUs in F#. But as we peek under the cover, there are some subtle differences which you ought to be aware of.

 

Case Object

In F#, if you have a union case that is not parameterised then the compiler will optimise and compile it as a singleton. For instance, the NotStarted case is compiled to a singleton as it’s always the same.

You can declare this GameStatus with case classes in Scala.

But a reference equality check (which is done with .eq in Scala) reveals that:

  • NotStarted() always creates a new instance
  • but equals is overridden to perform structural equality comparison

If you want NotStarted to be a singleton then you need to say so explicitly by using a case object instead.
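For instance (a sketch – the exact shape of GameStatus is assumed from the rest of the post):

  trait GameStatus
  case object NotStarted extends GameStatus                   // a true singleton
  case class GameOver(finalScore: Int) extends GameStatus     // parameterised cases stay as case classes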

Couple of things to note here:

  • as mentioned in my last post, object in Scala declares a singleton, and so does a case object
  • a case object cannot have constructor parameters
  • a case object cannot be generic (but a normal object can)

When you pattern match against a case object you can also drop the parentheses (see the earlier example in print[T]).

 

Cases as Types

For me, the biggest difference between DUs in F# and case classes in Scala is that you declare an ADT in Scala using inheritance, which has some interesting implications.

As we saw earlier, each case class in Scala is its own type and you can define a function that takes Node[T] or Empty[T].

This is not possible in F#. Instead, you rely on pattern matching (yup, you can apply pattern matching in the function params) to achieve a similar result.

It’s also worth mentioning that, case objects do not define their own types and would require pattern matching.

What this also means is that each case class/object can define its own members! Oh, and what we have learnt about traits so far also holds true here (multiple inheritance, resolving member clashes, etc.).

In F#, all members are defined on the top level DU type. So, a like-for-like implementation of the above might look like this.

Whilst this is a frivolous example, I think it is still a good demonstration of why the ability to define members and inheritance on a per-case basis can be quite powerful. Because we can’t do that with F#’s union cases, we had to sacrifice some compile-time safety and resort to runtime exceptions instead (and the implementation became more verbose as a result).

The autoPlay function also looks slightly more verbose than its Scala counterpart, but that’s mainly down to an F# quirk where you need to explicitly cast status to the relevant interface type to access its members.

 

sealed and finally

“make illegal states unrepresentable” – Yaron Minsky

Ever since Yaron Minsky uttered these fabulous words, they have been repeated in many FP circles, and the goal is often achieved in F# through a combination of DUs and not having nulls in the language (apart from when interoperating with C#, but that’s where your anti-corruption layer comes in).

This works because a DU defines a finite, closed set of possible states that can be represented, which cannot be extended without directly modifying the DU. The compiler performs exhaustive checks on pattern matches and will warn you if you do not cover all possible cases. So if a new state is introduced into the system, you will quickly find out which parts of your code need to be updated to handle it.

For instance, using the GameStatus type we defined in the previous section…

the compiler will issue the following warning:

warning FS0025: Incomplete pattern matches on this expression. For example, the value ‘GameOver (_)’ may indicate a case not covered by the pattern(s).

You can also upgrade this particular warning – FS0025 – to an error to make it much more prominent.

 

In Scala, because case classes/objects are loosely grouped together via inheritance, the set of possible states represented by these case classes/objects is not closed by default. This means new states (potentially invalid ones, introduced either intentionally or maliciously) can be added to the system, and it’s not possible for you or the compiler to know all the possible states when you’re interacting with the top level trait.

The way you can help the compiler (and yourself!) here is to mark the top level trait as sealed.

A sealed trait can only be extended inside the file it’s declared in. It also enables the compiler to perform exhaustive checks against pattern matches to warn you about any missed possible input (which you can also upgrade to an error to make them more prominent).

Since case objects cannot be extended further we don’t have to worry about it in this case. But case classes can be extended by a regular class (case class-to-case class inheritance is prohibited), which presents another angle for potential new states to creep in undetected.

So the convention in Scala is to mark case classes as final (which says it cannot be extended anywhere) as well as marking the top level trait as sealed.
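Putting the two together (my own sketch):

  sealed trait GameStatus                                         // can only be extended in this file
  case object NotStarted extends GameStatus                       // case objects can't be extended anyway
  final case class GameOver(finalScore: Int) extends GameStatus   // final: can't be extended anywhere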

Voila! And, it works on abstract classes too.

 

But wait, it turns out sealed is not transitive.

If your function takes a case class then you won’t get compiler warnings when you miss a case in your pattern matching.

You could make the case class sealed instead, which would allow the compiler to perform exhaustive checks against it, but that also opens up the possibility that the case class might be extended in the same file.

Unfortunately you can’t mark a case class as both final and sealed, so you’d have to choose based on your situation I suppose.

 

Reuse through Inheritance

Because case classes are their own types and they can inherit multiple traits, it opens up the possibility for you to share case classes across multiple ADTs.

For instance, many collection types have the notion of an empty case. It’s possible to share the definition of the Empty case class.

I think it’s interesting you can do this in Scala, although I’m not sure that’s such a good thing. It allows for tight coupling between unrelated ADTs, all in the name of code reuse.

Sure, “no one would actually do this”, but one thing I learnt in the last decade is that if something can be done then sooner or later you’ll find it has been done.

 

Summary

To wrap up this fairly long post, here are the main points we covered:

  • you can declare ADTs in Scala using case class and/or case object
  • case classes/objects are loosely grouped together through inheritance
  • a case class defines its own type, unlike discriminated unions in F#
  • a case object creates a singleton
  • case classes/objects can define their own members
  • case classes/objects support multiple inheritance
  • marking the top level trait as sealed allows the compiler to perform exhaustive checks when you pattern match against it
  • Scala convention is to seal the top level trait and mark case classes as final
  • sealed is not transitive, you lose the compiler warnings when pattern matching against case classes directly
  • you can mark case classes as final or sealed, but not both
  • multiple inheritance allows you to share case classes across different ADTs, but you probably shouldn’t

 


From F# to Scala – traits

Note: read the whole series here.

 

Continuing on from where we left off with type inference last time around, let’s look at a language feature in Scala that doesn’t exist in F# – traits.

Scala has both abstract classes and traits (think of them as interfaces, but we’ll get into the differences shortly) to support OOP. Abstract classes are exactly what you’d expect and are the preferred option where Java interop is concerned. Traits, however, are much more flexible and powerful – but with great power comes great responsibility.

 

Basics

Like abstract classes, traits can contain both fields and behaviour, and both abstract definitions and concrete implementations.

Any class that extends this trait will inherit all the concrete implementations and need to implement the abstract members.

Of course, the concrete class can also override the default implementation that came with the trait.
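For example (my own illustrative sketch):

  trait Character {
    def name: String                       // abstract - must be implemented
    def greet: String = s"I am $name"      // concrete - inherited for free
  }

  class Soldier(val name: String) extends Character

  class Hobbit(val name: String) extends Character {
    override def greet: String = s"$name, at your service"   // override the default
  }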

You can extend multiple traits.

wait, hold on a sec, what’s this object thingymajig?

Sorry for springing that on you! A Scala object is basically a singleton and is Scala’s equivalent to an F# module. In fact, when you define an object in the Scala REPL it actually says “defined module”!

The notable difference from an F# module is that a Scala object can extend abstract classes and/or traits (but cannot itself be extended by another class/object). We’ll spend more time drilling into object later in the series, but I wanted to throw it in here for now as it’s such a heavily used feature.

The key thing to take away from the snippet above is that you can extend a class or object with multiple traits.

 

Traits vs Abstract Classes

There are 2 differences between traits and abstract classes:

  1. traits cannot have constructor parameters but abstract classes can
  2. you can extend multiple traits but can only extend one abstract class

With regards to point 2, you can actually mix both traits and abstract classes together!

 

Dealing with Collisions

Since you can extend more than one thing at a time (be it multiple traits, or 1 abstract class + 1 or more traits), one must wonder what happens when some of the members collide.

You might have noticed from our last snippet that both Being and Human define a name field, but everything still works as expected and you can indeed use the theburningmonk object as a Human or a Being.

Ok. What if I extend from 2 traits with clashing members and one of them provides a concrete implementation? My expectation would be for the concrete implementation to fill in for the abstract member with the same name.

and indeed it does.

What if I’m mixing traits with an abstract class?

No problem, still works.


side note: notice that in this second version ProfessorX extends Psychic first? That’s because in cases where an abstract class and traits are both involved, the abstract class has to come first in the extends clause – the traits can only be mixed in after it.


So far so good, but what if both traits/abstract classes provide a concrete implementation for a clashed member?

The safe thing to do here would be for the compiler to crap out and force the developer to rethink what he’s doing rather than springing a nasty surprise later on.

(and yes, it behaves the same way if Light or Dark is an abstract class instead)

This, in my opinion is a much better way to resolve collisions than the Python way.

 

Dynamic Composition

So far we have mixed in traits when defining a new class or object, but you can do the same in a more ad-hoc fashion.

Of course, all the rules around collision resolution also hold true here.

 

Generics

Whilst traits cannot have constructor parameters, they can have type parameters (ie, they are generic!).

To sneakily weave in the Transformers game I’m working on at Space Ape Games, here’s how you might model a Transformer Combiner as a generic trait.

 

Stackable Modifications

When you read about traits in Scala, you often hear the phrase “stackable modifications” (Jonas Boner’s real-world Scala slide deck has a nice example).

What makes this interesting (and different from straight up override in inheritance) is how super is modified in an ad-hoc fashion as you compose an object using traits (see Dynamic Composition section above).
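A sketch of the idea (my own example – this Mutant trait is assumed, not the original snippet):

  trait Mutant {
    def powers: List[String] = Nil
  }

  trait Telepathy extends Mutant {
    override def powers = "telepathy" :: super.powers
  }

  trait Telekinesis extends Mutant {
    override def powers = "telekinesis" :: super.powers
  }

  // 'super' is resolved according to the order the traits are mixed in
  val psychic = new Mutant with Telepathy with Telekinesis
  psychic.powers    // List(telekinesis, telepathy)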

This also works if Mutant is an abstract class, and in the definition of an object too.

However, it only works with methods. If you try to override an immutable value then you’ll get a compiler error.

And no, it doesn’t work with variables either, only methods.

 

self type annotation

Another related topic is the so-called self type annotation feature. It is often used to implement dependency injection in Scala with patterns such as the Cake Pattern.

Where you see code such as below (note the self : A => ), it means the trait B requires A.
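Something like this (my own sketch of the syntax):

  trait A {
    def greeting: String = "hello"
  }

  trait B { self: A =>                        // trait B requires A
    def greet(): Unit = println(greeting)     // ...and can access A's members
  }

  // the composing class/object must mix in A as well, or it won't compile
  object AB extends A with B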

Any composing object/class would need to mix in trait A if they want to extend trait B, failure to do so will be met with a swift and deadly compiler error.

This also gives trait B access to members from A (which includes any members A inherits), as you can see in the sketch above.

What’s more, you can mix in multiple traits in the self type.

It’s worth differentiating this “requires” relationship from the “is a” relationship created through inheritance.

In this case, since the Rooney trait is not a Footballer or a RecordHolder (which he is in real life, of course), it won’t inherit the members from those traits either.

 

Interestingly, it’s possible to create cyclic dependencies using self types…

which is not possible through inheritance.

As a .Net developer who has seen the damage cyclic references can do, I’m slightly concerned that you can do this in Scala…

That said, in Martin Odersky‘s Programming in Scala, he has an example of how this mutual dependency can be useful in building a spreadsheet, so I’ll keep an open mind for now.

…a new trait, Evaluator. The method needs to access the cells field in class Model to find out about the current values of cells that are referenced in a formula. On the other hand, the Model class needs to call evaluate. Hence, there’s a mutual dependency between the Model and the Evaluator. A good way to express such mutual dependencies between classes was shown in Chapter 27: you use inheritance in one direction and self types in the other.

In the spreadsheet example, class Model inherits from Evaluator and thus gains access to its evaluation method. To go the other way, class Evaluator defines its self type to be Model, like this:

  package org.stairwaybook.scells
  trait Evaluator { this: Model => ...

 

Finally, as you can see from Martin Odersky‘s example above, you don’t have to use “self” as the name for the self instance.


side note: I’m curious as to how Scala deals with a cyclic dependency where the traits define values that depend on a value on the other trait.

even the code analysis tool in IntelliJ gave up trying to understand what’s going on (it highlighted the whole thing in red), but it compiles!

What’s interesting (and again, slightly worrying) is when the fields are evaluated, which actually depends on the order the traits are mixed in.
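Reconstructing the two traits from the description below (an assumed sketch, not the original code):

  trait Yin { self: Yang =>
    val phrase = motto + "yin"     // depends on Yang's motto
  }

  trait Yang { self: Yin =>
    val motto = phrase + "yang"    // depends on Yin's phrase
  }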

(new Object with Yin with Yang).motto 

  1. Yin is mixed in, yang.motto is not yet initialised (and therefore null) so Yin.phrase is initialised as null + “yin”
  2. Yang is mixed in, yin.phrase is initialised to “nullyin“, so Yang.motto is now “nullyin” + “yang”
  3. the value of motto is therefore nullyinyang

(new Object with Yang with Yin).motto 

  1. Yang is mixed in, yin.phrase is not yet initialised, so Yang.motto is initialised as null + “yang”
  2. Yin is mixed in, but since motto is already initialised, whatever happens here doesn’t really affect our result
  3. the value of motto is therefore nullyang

ps. please DO NOT try this at work!


 

So, that’s everything I have learnt about traits in the last week. I hope you have found it useful on your path to learning Scala.

Until next time, ciao!

 
