Rob West

Goldilocks abstraction for maintainable code

Published 30 Sep 2020

The line of thought for this post started with an excellent talk given by Simon Painter at the .NET Oxford Meetup. He showed how you could use a range of functional techniques to create abstract and reusable utility methods for more maintainable code. We can all agree that duplication is a code smell, but it struck me as Simon demonstrated ever more complex and abstract solutions that there is a tension between abstraction and maintainability. He started his talk by describing situations where he had to spend several days just to understand the existing code in order to be able to identify the one line that needed to be changed to resolve a bug. I've certainly found myself in similar situations, and this just shows how important clarity is to code maintainability. I think it is possible to get wrapped up in our own cleverness, writing code that is certainly elegant, clever and generic, but at the expense of those that come along afterwards to maintain it. Bob Martin makes this point in his book Clean Code:

...the ratio of time spent reading vs. writing is well over 10:1. We are constantly reading old code as part of the effort to write new code. Because this ratio is so high, we want the reading of code to be easy, even if it makes the writing harder. Of course there’s no way to write code without reading it, so making it easy to read actually makes it easier to write.

I think there is a Goldilocks zone for abstraction, just enough to avoid code duplication, but not so much as to create overly complex code that is hard to read and reason about. I'm going to set out my thoughts on how to find the right level of abstraction for you, and more importantly for your team. Here is how to stay in the Goldilocks zone.

Let me begin by using some code from C# in Depth by Jon Skeet (Fourth Edition) for the purposes of illustration. Chapter 11 deals with tuples and in one example he takes as a starting point the following straightforward solution to generating the Fibonacci sequence:

static IEnumerable<int> Fibonacci()
{
    int current = 0;
    int next = 1;
    while (true)
    {
        yield return current;
        int nextNext = current + next;
        current = next;
        next = nextNext;
    }
}

He refactors this to the following using tuples:

static IEnumerable<int> Fibonacci()
{
    var pair = (current: 0, next: 1);
    while (true)
    {
        yield return pair.current;
        pair = (pair.next, pair.current + pair.next);
    }
}

As he says, this version is beautiful. Not only is it more succinct, but the use of a tuple actually makes the code easier to understand: the right tool for the job. There are fewer variables to reason about, and no confusing variable names like "nextNext". However, Jon goes on to say that he finds it hard to resist generalizing further, and provides the following more abstract solution:

static IEnumerable<TResult> GenerateSequence<TState, TResult>(
    TState seed,
    Func<TState, TState> generator,
    Func<TState, TResult> resultSelector)
{
    var state = seed;
    while (true)
    {
        yield return resultSelector(state);
        state = generator(state);
    }
}

var fibonacci = GenerateSequence(
    (current: 0, next: 1),
    pair => (pair.next, pair.current + pair.next),
    pair => pair.current);

This solution is certainly more general and more abstract: we now have a way of generating many mathematical sequences from the same function. However, it is more complicated to reason about. We have two generic type parameters to deal with, two Func parameters, more abstract variable names, and usage requires careful reasoning about the arguments we need to supply. How do we decide if this is an elegant solution or a step too far?
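To make that generality concrete, here is a sketch (my examples, not Jon's) of the same GenerateSequence method driving two other sequences:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

static IEnumerable<TResult> GenerateSequence<TState, TResult>(
    TState seed,
    Func<TState, TState> generator,
    Func<TState, TResult> resultSelector)
{
    var state = seed;
    while (true)
    {
        yield return resultSelector(state);
        state = generator(state);
    }
}

// Powers of two: the state is a single int, doubled each step.
var powersOfTwo = GenerateSequence(1, x => x * 2, x => x);
Console.WriteLine(string.Join(", ", powersOfTwo.Take(5))); // 1, 2, 4, 8, 16

// Triangular numbers: the state pairs the next index with the running total.
var triangular = GenerateSequence(
    (n: 1, total: 1),
    pair => (pair.n + 1, pair.total + pair.n + 1),
    pair => pair.total);
Console.WriteLine(string.Join(", ", triangular.Take(5))); // 1, 3, 6, 10, 15
```

Notice that each new sequence pushes the real thinking onto the caller: you have to design the state shape and both Funcs yourself.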

Single Read Comprehension

I think we should aim to write code that can be understood in a single reading: I should be able to walk through it once and grasp its intent. When I use the word "intent" I mean that there might be some nuances that require further reasoning, but such details should not be critical to understanding the code's purpose.

You might say that such a stance would lead to simple, imperative code, but the best developers are able to handle complex problem domains in an elegant yet comprehensible way: you look at the code and it makes the solution seem obvious. This skill should really receive more recognition in development, but we don't even have a term for it (maybe "clean coder" in honour of Uncle Bob). Any developer can write code that a computer can understand, but the real trick (especially where maintainability is the focus) is writing code that your team can understand (or the future you who comes back to it after several months or years). I'd rather trade a few clock cycles or lines of code for easier comprehension. This is why naming is so important, and why self-documenting code with no need for comments should be valued.

I don't mean to be dogmatic on this point (we'll come to some exceptions shortly), but it is a useful heuristic: treat it as a kind of code smell when you encounter code that needs multiple readings to comprehend. I'll refer to it as the single read comprehension principle in what follows. It is a natural bedfellow of other coding principles such as the single responsibility principle, which helps to foster readability through separation of concerns.

Leave Nobody Behind

As a further standard to aim at, I think the single read comprehension principle should be pitched to cover your whole team. Aim for code that even the most junior member of your team can understand on a single pass. One or two rocket surgeons should not set the level. Taking this approach is an excellent chance to bring on the weakest members of the team. You should aim for psychological safety in your team: there should be no negative outcome from someone saying they don't understand a piece of code. Instead, it should provide a learning opportunity, whereby the more experienced members of the team can provide mentorship. If repeated attempts to explain the code fail, then this could indicate that the person is not a great fit for the team. That might sound harsh, but ultimately it isn't fair on them or the team if they don't have the intellect or inclination to progress their skills in a way that matches the standard the team aspires to.

Going for a whole team approach also avoids the pitfall of the hero developer. I've seen many instances where a team has a hugely talented outlier and that person ends up being the only one who can maintain parts of the code, or even whole projects. That isn't good for them, as they end up overworked and frustrated, and isn't good for the rest of the team who miss an opportunity to learn from a more talented developer. Typically the rest of the team disengages and apathy takes hold. Use the hero to pull the whole team upwards.

On the flip side, if every member of the team is awesome, all highly familiar with functional programming techniques for example, then there is no issue with Funcs all over the place and Funcs returning Funcs returning Funcs. They'll all be able to pass the single read comprehension principle.

Is The Complexity Below the Surface?

A balancing consideration to the single read comprehension principle is where the complexity lies. In the Fibonacci example above, the complexity of the abstracted version is exposed to the consumer: the Funcs I have to provide do the lion's share of the work, and I need to do the thinking about how they fit together. Contrast this with LINQ and something like the Where extension. A single predicate lambda expression is easy to reason about, and its intent (identifying whether an item should be returned) is obvious.
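A typical Where call shows how little the consumer has to hold in mind:

```csharp
using System;
using System.Linq;

var numbers = new[] { 1, 2, 3, 4, 5, 6 };

// All the iteration machinery is hidden inside Where; the predicate
// is the only part the reader has to understand.
var evens = numbers.Where(n => n % 2 == 0);

Console.WriteLine(string.Join(", ", evens)); // 2, 4, 6
```

The abstraction is doing plenty of work under the hood, but none of it leaks into the call site.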

The real trick when abstracting functionality is to encapsulate the complexity within the method. The consumption should be obvious and natural for the user, and this can warrant creating a function whose internal workings are not as easy to grasp. Simon provides an interesting case to consider with this method:

public static Func<TKey, TValue> ToLookupWithDefault<TKey, TValue>(
    this IDictionary<TKey, TValue> @this,
    TValue defaultValue) =>
    x => 
        @this.ContainsKey(x) 
            ? @this[x]
            : defaultValue;

private static readonly Func<int, string> GetDoctor = new Dictionary<int, string>
{
    {1, "William Hartnell"},
    {2, "Patrick Troughton"}
}.ToLookupWithDefault("Unknown");

Here Simon is dealing with the annoying behaviour of dictionaries whereby they throw a KeyNotFoundException if the key does not exist. Now this code is not exactly complicated, but it does return a Func, so we are already into the abstraction of treating functions as objects. We then layer on the fact that it relies on a closure to keep the dictionary in scope: the dictionary is instantiated and immediately transmogrified into a Func, so we leak a little bit of what is going on. Why not refactor to something a little more mundane like this:

public static TValue LookupWithDefault<TKey, TValue>(
    this IDictionary<TKey, TValue> @this, TKey key,
    TValue defaultValue)
{
    return @this.ContainsKey(key) 
            ? @this[key]
            : defaultValue;
}

var doctor = doctors.LookupWithDefault(1, "Unknown");

This is a standard extension method and so the usage and intent are clear and easy. We don't really have much complexity under the hood in either case, but you see the point about being wary of exposing unnecessary abstraction. Closures may be ubiquitous in modern C# code but look around your development team and ask yourself how many developers could tell you what is actually going on under the hood (see the excellent explanation in the aforementioned book by Jon Skeet). I'd argue that for most development teams, this simpler version is more likely to keep you in the Goldilocks zone.
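It is also worth noting that on recent runtimes you may not need a hand-rolled extension method at all: if you are targeting .NET Core 2.0 or later, the framework ships a GetValueOrDefault extension for read-only dictionaries that does the same job.

```csharp
using System;
using System.Collections.Generic;

var doctors = new Dictionary<int, string>
{
    { 1, "William Hartnell" },
    { 2, "Patrick Troughton" }
};

// Built into the framework since .NET Core 2.0: returns the supplied
// default when the key is missing instead of throwing.
Console.WriteLine(doctors.GetValueOrDefault(2, "Unknown")); // Patrick Troughton
Console.WriteLine(doctors.GetValueOrDefault(9, "Unknown")); // Unknown
```

Reaching for the standard library first is another way to stay in the Goldilocks zone: the reader recognises the method and needs no extra reasoning at all.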

Mental Capacity, Moving Parts and Cognitive Chunking

One of the factors in readability is how many elements you have to mentally juggle at any point. Your frontal lobes can only hold so much in working memory, and you have probably heard of the highly cited Magical Number Seven, Plus or Minus Two study by George A. Miller. Whilst the reference to particular numbers and "magic" is a little tongue-in-cheek, it recognises that working memory is pretty limited. If you are writing code that requires more than seven items to be held in mind in order to understand a function, then it is probably time to refactor. Jeff Atwood makes some great points on this in a blog article on the magical number seven:

  • In a well-designed system, you should never need to fall back on your faulty, unreliable short-term memory.
  • The fewer bits of information users have to remember, the better.
  • Use chunking and grouping aggressively to reduce the amount of information to remember.

This is why Jon Skeet's deployment of a tuple is so neat in the Fibonacci example. We reduce two variables to one, and although that single variable still holds two numbers, they are conceptually related and we can chunk them as a single entity. This is how chess masters are able to look at a board and quickly see a move: they are not seeing individual pieces but rather groups of pieces. If you can find ways to help the reader chunk information then this can aid in achieving single read comprehension. One way to do this is to create classes that group parameters as logical entities, a good practice anyway as you can then deploy smart constructors to validate them.
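For example, a hypothetical DateRange type (my illustration, not from the talk) chunks two loose parameters into one validated concept:

```csharp
using System;

// A caller now passes one named entity instead of juggling two loose
// dates, e.g. GenerateReport(range) rather than GenerateReport(start, end).
var range = new DateRange(new DateTime(2020, 1, 1), new DateTime(2020, 9, 30));
Console.WriteLine($"{range.Start:yyyy-MM-dd} to {range.End:yyyy-MM-dd}");

// A small value type that chunks two related parameters into one concept,
// validating them once at construction time (a "smart constructor").
public sealed class DateRange
{
    public DateTime Start { get; }
    public DateTime End { get; }

    public DateRange(DateTime start, DateTime end)
    {
        if (end < start)
            throw new ArgumentException("End must not precede start.");
        Start = start;
        End = end;
    }
}
```

Every method that accepts a DateRange now gets the validation for free, and the reader holds one concept in working memory instead of two.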

The limits on what people can hold in mind should make you cautious about going beyond two levels of wrapped entities. An IEnumerable<IEnumerable<T>> might be OK but going further makes comprehension difficult.
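One way to claw back a level of nesting is to name the inner wrapping. In this sketch the PlayerScores record is a hypothetical name, but it gives the reader a single concept to chunk on:

```csharp
using System;
using System.Collections.Generic;

// Compare with the raw signature Dictionary<string, List<Dictionary<string, int>>>,
// which forces the reader to hold three levels of wrapping in mind at once.
var scoresByTeam = new Dictionary<string, List<PlayerScores>>
{
    ["Red"] = new List<PlayerScores>
    {
        new PlayerScores(new Dictionary<string, int> { ["Ann"] = 3 })
    }
};

Console.WriteLine(scoresByTeam["Red"][0].ByPlayer["Ann"]); // 3

// Naming the inner level turns an anonymous Dictionary<string, int>
// into a domain concept the reader can treat as a single entity.
public record PlayerScores(Dictionary<string, int> ByPlayer);
```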

Does the Abstraction Earn Its Keep?

An obvious consideration in weighing up the utility of a complex abstraction is how many places it can be used in your project. If your bit of cleverness only earns you two or three usages then really be honest with yourself about whether it is worth it. If it gives you tens or hundreds of usages then maybe that provides a justification for breaking single read comprehension and straying out of the Goldilocks zone.

Wrap Up

At its heart, trying to stay in the Goldilocks zone is about favouring comprehensibility over cleverness. As Simon points out in his talk, the build phase of most projects is a tiny part of their existence, for the vast majority of the time they are in maintenance mode. The developers carrying out this work are unlikely to be the strongest in the team, so a focus on code clarity reaps more rewards than a focus on complexity and abstraction.

I'd love to know what you think. Can you suggest additional ways of assessing code maintainability? Is single read comprehension a useful principle? And what the hell should we call developers who write clear code?

© 2020 Rob West. All Rights Reserved. Built using Kontent and Gatsby.