jml's notebook

Thoughts from Jonathan M. Lange


Deciding who goes to church

I thought it would be interesting to write up how we decided who goes to church today. Even though it is such an ordinary, low-stakes event, I think it sheds light on how decisions and groups work.

I woke up this morning after a rough night of sleep. I currently have a condition that makes it difficult for me to sleep through the night. During my many wakeful periods, I had noticed that one of my children, call them K1, was coughing all the time.

I decided to myself that they should not go to church. Even if they were feeling well enough, it would be irresponsible to risk spreading the illness further.

My partner, call them B, was also not feeling great yesterday. I guessed that she'd be the one to stay home with K1. I wanted to go to church and was feeling well enough, so the only remaining question in my mind was whether our second child, K2, should come with me or stay home. I thought he should come with me: although I wanted some time alone, B & K1 would have a more restful time if K2 were out. Also, church is good for K2, and I'm more than capable of taking him there.

I formed this plan in my head, but of course that's not how group decisions work. A group decision only works if everyone has the same understanding[1] of what to do[2], and everyone knows that everyone has the same understanding[3] of what to do. Sometimes this can be "Do what the boss says", but that's not how things work in our house. Instead, we have a conversation:

Me: I don't think K1 should go to church today

B: Yeah, you're right. That means one of us will need to stay home, though.

I know this. She knows I know this. But now we both know that we both know.

Me: Okay. How about you stay here with K1 and I'll take K2 to church.

B: Are you sure?

Me: Yeah. I'd really like to go, and you and K1 will rest better without K2 here.

B: Okay

And so everything worked out okay. No worries, no drama, just boring functional communication.

But on the way to church, I thought that this isn't rational decision making. Or perhaps it's better to say it's not a systematic approach to decision making. We didn't consider all the options or trade them off against each other. What are all the options? Well, we could represent this as a truth table where I'm A, B is my partner, and the kids are K1 and K2. Staying home is F and going to church is T.


That's 16 options. If you were forced to map my mental approach to the truth table, it would be something like:

  1. Set K1 = F, because K1 has to stay at home
  2. Set B = F, because I think she should stay at home
  3. Set A = T, because I want to go
  4. Look at what's left and make an actual decision.

Internally, it felt more like a branching tree than a truth table.

But we can lean on this truth table thing a bit longer.

You see, we can't leave the kids at home unsupervised, and we can't send them to church unsupervised either, and we don't want to send sick people to church. So there are three things we want to avoid:

home_alone     = not (K1 and K2) and (A and B)
church_alone   = (K1 or K2) and not (A or B)
sick_at_church = B or K1

Putting these onto the truth table looks like:

A  B  K1 K2 | not home alone | not church alone | not sick at church | OK?
F  F  F  F  |       T        |        T         |         T          |  T
F  F  F  T  |       T        |        F         |         T          |  F
F  F  T  F  |       T        |        F         |         F          |  F
F  F  T  T  |       T        |        F         |         F          |  F
F  T  F  F  |       T        |        T         |         F          |  F
F  T  F  T  |       T        |        T         |         F          |  F
F  T  T  F  |       T        |        T         |         F          |  F
F  T  T  T  |       T        |        T         |         F          |  F
T  F  F  F  |       T        |        T         |         T          |  T
T  F  F  T  |       T        |        T         |         T          |  T
T  F  T  F  |       T        |        T         |         F          |  F
T  F  T  T  |       T        |        T         |         F          |  F
T  T  F  F  |       F        |        T         |         F          |  F
T  T  F  T  |       F        |        T         |         F          |  F
T  T  T  F  |       F        |        T         |         F          |  F
T  T  T  T  |       T        |        T         |         F          |  F

Honestly, just looking at this is exhausting. Extracting the gunk, we get three valid choices:

  1. Everyone stays home (A=F, B=F, K1=F, K2=F)
  2. I go to church alone (A=T, B=F, K1=F, K2=F)
  3. I take K2 to church (A=T, B=F, K1=F, K2=T)
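Extracting those rows by hand is error-prone. As a sanity check, here's a minimal Python sketch (the variable names just mirror the pseudocode constraints above) that enumerates all 16 assignments and filters out the ones that violate a constraint:

```python
from itertools import product

# True = goes to church, False = stays home.
valid = []
for A, B, K1, K2 in product([False, True], repeat=4):
    home_alone = not (K1 and K2) and (A and B)    # a kid at home, both adults out
    church_alone = (K1 or K2) and not (A or B)    # a kid at church, no adult
    sick_at_church = B or K1                      # a sick person went to church
    if not (home_alone or church_alone or sick_at_church):
        valid.append((A, B, K1, K2))

for option in valid:
    print(option)
```

Running it prints the three surviving assignments: everyone home, me alone, or me with K2.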

This is not how I normally make decisions.

So what's the point of all this?

One version of this story is that I had a minor problem, did the first thing that came into my head, and it worked out okay.

Another version of this story is that even making simple decisions among people who are strongly aligned in terms of values (church good; spreading disease bad; rest while sick good; unsupervised kids bad) involves establishing consensus and at minimum acknowledging choices not taken.

Or we could look at this and say that even for easy decisions, it takes quite a bit of effort to break them down into their component subdecisions and to articulate the relations between them. Perhaps this is a thing to do sparingly, or perhaps it is a trainable skill to be drilled to the point of mastery.

I don't think I've ever seen someone explicitly present a decision table in my professional career, although some design docs have had tables that come close. I have certainly been involved in woolly discussions that go around and around without resolution. Perhaps these would have gone better if someone had constructed a decision table.

Anyway, K2 & I had a great time at church, and K1 & B had a lovely quiet morning. That's probably enough.

See also Cost of decisions and Decisions.

P.S. By the time I got to writing this, I was being tickled by memories of reading some tech luminary advocating the use of "decision tables" in documenting designs or system architecture or something like that. Thanks to Justin Blank, who both first recommended Hillel Wayne's post Decision Table Patterns to me and helped me remember that he had done so. No promises, but I might follow up by applying that post to this decision, as an exercise.


[1] This is consensus. Not deciding a path together, but establishing that this is the path we've decided on.


[2] I was going to say "agrees on what to do" instead of "understands what to do", but it's okay if people don't agree. They don't have to believe the decision is the best decision, they just have to agree to comply with it.


This is "common knowledge". I used to have a couple of great links for this but lost them.


Ensuring quality enables throughput

Often in software development, caring about the quality of the things we are building is seen as an extra cost that must be justified by its potential benefits, such as improved user experience or lower maintenance.

From the engineering perspective, quality is frequently seen as an essential moral good. Sometimes we forget that the quality of the product and the quality of the code are two different things, although one influences the other.

Both of these are good, but I think they miss something important. Insisting on building high quality software is an effective means of /attention management/.

If a component or feature is low quality, then it is a source of distraction. Perhaps it causes bugs that need to be fixed. Perhaps its poor interface makes other work needlessly difficult. You can think of these bugs or difficulties as interruptions, like a child coming into the room to ask you to reach something down from a shelf.

Whether or not you actually do anything about these interruptions is irrelevant, because deciding /not/ to do something still takes time and energy. In a corporate context, it can take quite a lot of time and energy, as people will keep wanting to talk about it.

Another way of saying this is that low quality software is unfinished. It still makes demands on your time and attention.

As you keep building buggy features, dodgy components, and awkward interfaces, you add more and more sources of distraction. Your environment drowns in noise.

However, if you actually finish your work, if you produce high quality features and components, then you are free to move on to the next thing. Then, instead of you doing work for those components, they do work for you. Because they are trustworthy, you can build on them.

Thus, building quality in actually enables speed of execution. It increases cost, but it also disproportionately raises the ceiling on the adjacent possible.


Waiting for Aragorn

I've gone on this rant to a couple of people now, so I figure I should write it up. This is more a collection of feelings than an idea.

My general sympathies are roundhead, republican, and low church. I'm not particularly into the monarchy, aristocracy, or a "great men" theory of history.

However, I was exposed to an extreme amount of Tolkien at an impressionable age, and that has left an indelible mark.

One of the themes in The Lord of the Rings is the glory of a good king. People ("Men" in the book) can only be their best selves when they are serving under a king who serves them and is worthy of them. Then they, in their service, strive to be worthy of their king.

I do not think this is how a modern state should be run.

And yet, in the much lower stakes world of business and building software, I do find myself and others yearning for good leaders.

Over the years I've worked with blind leaders, power-hungry leaders, spiteful leaders, well-meaning but ineffectual leaders, absentee leaders, and ill-starred leaders. Each time, I've seen people who are pouring their time, energies, and creativity into their work end up frustrated and disappointed. It's not quite "lions led by donkeys", but it's close.

I know the world is complicated and full of compromise, and I know that no perfect leader exists, and I very much believe that people often need to step up, self-organize and take responsibility.

All the same, I see around me a longing for a worthy leader. Sometimes I feel it too.


Got a new laptop

Hello again. Posting here both to break the drought and to check that all of this works from my fancy new laptop. I got myself a MacBook Air with an M2 chip, and I quite like it. It feels like a significant improvement over my old MacBook Pro, which is my most regretted purchase. It's also the first time I've bought an Air instead of a Pro.

There are three factors that tipped me over the line.

The first is the price tag. A top-of-the-line Pro costs almost twice what an Air costs. That is a tonne of money that I would rather spend on other things, not least my own manumission from wage slavery.

The second is that I don't really get to use my personal laptop all that much. My weekday evenings are largely focused on getting the kids fed, bathed, and safely in bed and the house in some sort of reasonable state for the next day of chaos. Between church, family outings, and childminding duties, I also don't get a tonne of time on the weekends.

For a while, this lack of time combined with the sheer difficulty of firing up a laptop these days (so many updates, so little battery life) meant that I just used my work computer for the occasional blog post. Perhaps I just don't even need a laptop of my own!

However, work is introducing MDM, and while I trust our IT manager, I don't want to do any personal stuff on a computer that has corporate spyware on it.

Anyway, all of this means that while I am getting a computer, it doesn't have to be a workhorse or a beast, because I'm just not going to do that much with it.

The third factor is that I think cloud-based development environments are becoming much more practical and attractive. The article The End of Localhost more or less convinced me of this. As long as I can use Emacs running locally, I'm sure I can find a way to have fun doing remote dev.

Anyway, the new Air is lovely. It's very snappy, the keyboard is great, and I love the colour. I wish it had more USB-C ports, but that's about it.


Play is how mammals learn

The best way to learn is to play.

In order to play, you need to feel safe. You need to have time. The consequences of failure must be real but not lasting. The consequences of success, likewise, must be real but short lived.

Play can happen by yourself or with others. If playing with others, you need to trust them, because you need to feel safe in order to play.

To play well, we must throw ourselves into the scenario or game. That requires focus, attention, and dedicated time.

There must be a sense of abundance, or at least the freedom to "misuse" valuable resources. Time might be short, or resources tight, but you need to grant yourself a cheeky sort of permission, a license to be naughty, in order to actually play.


Too much steering, not enough pedalling

For a while now, I've been trying to think of a replacement for the phrase that begins "too many chiefs", as that phrase is way too racially loaded to be useful in any conversation.

When casting around, a lot of people suggested "too many cooks spoil the broth" or its variants. This doesn't quite get to the heart of the matter.

The original phrase exists as a short-hand to describe a dysfunction where there are too many people directing work and not enough people doing work. "Too many cooks" is more about how some projects are harmed by having too many people work on them. If you try hard, you can bend the phrase to be about conflicting directions—maybe one cook thinks it needs to be sweeter and the other more savoury—but that's not what you want in an aphorism.

The best replacement I've managed to come up with is "too much steering, not enough pedalling". Anyone who has ridden a bicycle knows that if you're on a bike and you don't pedal, you stop. If you're not pedalling fast enough and you steer too much, you fall over. Or, you don't fall over, and instead inefficiently zigzag your way to your destination.

I imagine a cartoon with four or five people in suits sitting on the handlebars, jostling for grip and arguing about which way to go, with a lone cyclist on the seat, pedalling away as fast as they can, exhausted and dripping with sweat. The caption reads "Why aren't we getting anywhere?"

The answer, of course, is too much steering, not enough pedalling.


Cost of cruft

Our main product at work has grown quite a few features over the years, and from time to time we the engineers point to an unmaintained feature and ask, "can we kill it please?"

The answer to the question is almost always equivocal, partly because it's hard for us to quantify or even communicate the cost of maintaining a single feature.

Here, I just want to rattle off a few of the costs in my head:

A lot of these problems can be mitigated by having enough money, and spending that money on awesome platform and internal tools teams.

Most of these costs are small enough at the time of incurring the cost that one can never make a rational argument for deleting the feature, especially because deleting is never free, but always involves planning, comms, public relations, data export, etc.

I think it was Alex Gaynor who first told me that "subscription" is the only valid business model for software, because there's an ongoing cost of keeping it running (apologies if I'm misrepresenting you, Alex). I kind of wish it were easier to quantify this for features. I feel like we're okay at estimating how much effort it will take to build something, but not how much to maintain it. It would be great to have planning conversations where we could say something like, "it will take five engineers three months to build this, and then another 0.5 FTE for ongoing costs".


Turn the Ship Around / Drive mashup

I think a lot about Turn the Ship Around! by L. David Marquet. Out of all the books I've read on leadership or management, it's the one that resonates the most strongly with me.

When I was a lot younger, I figured that as a manager, you wanted to make sure you had a bunch of smart people around you and then get out of their way. Any time you told them what to do, you were making a mistake, because you probably know less than them about their job, and because the act of telling someone what to do short circuits the bit of their brain that would actually think about the problem. By giving an instruction, you've reduced the net intelligence of your team.

I tried this a few times and it mostly ended in disaster.

Groups of people don't naturally coordinate. It takes work to get a bunch of individual effort to cohere and add up, rather than cancel out.

Also, people sometimes do things badly, or do the wrong thing, or take too long to do something, or spend way too much time on something unimportant, or completely overlook important details, or don't know what to do, or…

I responded to this failure of approach by changing my approach! I tried to become more directive, and give more instructions. I worked on giving timely constructive feedback (outside of the context of code reviews, which I've been doing for a long time).

This was definitely better, but it still wasn't where I wanted to be. I felt that I was the bottleneck of the team, that I was putting a ceiling on the growth of the more senior members of the team, and only giving the junior members on-the-spot feedback without giving them a way to actually grow. Helping them to do a job better without actually helping them get better at their job, if that makes sense.

Anyway, at some point I read Turn the Ship Around! and it opened my eyes. Marquet had the same ambition as me—at some point he vowed never to give a direct order—but actually thought about what would be required to make it happen. And, hoo boy is it a tonne of work. Reading through the book you get the distinct impression that he worked his arse off the entire time he was on tour.

Anyway, the big insight is that for people to have control, they must also have competence and clarity. Control means making decisions, taking initiative, having autonomy. Competence means actually being good at your job, being able to demonstrate that you are good at your job, and continually learning. It is about mastery. Clarity means knowing the direction you're going in and what's expected of you. If you have enough clarity and you have some commitment to what you see, then you have a sense of purpose.

So Marquet thinks that people in an effective organization need:

  1. Control
  2. Competence
  3. Clarity

In his book Drive (which I have not read), Dan Pink suggests that for people to be happy they need:

  1. Autonomy
  2. Mastery
  3. Purpose

And it turns out that control maps pretty well to autonomy, competence maps pretty well to mastery, and clarity to purpose.

I think this is fascinating. Marquet was most definitely not operating from the assumption that "if I make the crew happy, the ship will run better", he was trying to figure out how to get everyone to be leaders. Pink (to my limited understanding) wasn't concerned with organizational success or efficiency, but what makes individuals tick.

Put another way, my take on Turn the Ship Around! is that the whole thing is a framework for building an organisation around trust (which I think is underemphasised by almost everyone), and Drive is about individual happiness.

When I realised this, I had a bit of a "mind blown" moment. Is this a coincidence? Is it that one author influenced the other? Or is there some deeper connection between institutional trust and individual happiness?


Thoughts from a code yellow

A long time ago I was working somewhere with a data pipeline that had severe problems. Borrowing a term from Google, I declared a Code Yellow and wrote a doc with a plan for getting out of it.

Being relatively new to the org at the time, I also wrote down some core principles for getting out of the Code Yellow. Reading over them recently reminded me that they are actually relevant almost all of the time.

Quantify the problem

Pretty much everyone agreed there was a problem and that it was pretty bad, but we had no consistent quantification of the problem.

Without numbers, we couldn't know if we were making progress. Quantifying the problem led to better decisions and more motivation.

Fix the leak, then fill the bucket

Whenever we encountered a problem in our production systems, in our code, or in the data, our first reaction had to be “how can we change the system so it is impossible for this to happen again?” Only then would we address the problem at hand.

(Note: don't do this if you are paged for a system serving live user traffic.)

This was difficult, because:

Nevertheless, we can’t just solve problems, we need to eliminate problem generators.

Close the loop

Whenever a person or a machine let us know about a problem with our systems, we should fix the problem, and then tell them we have fixed the problem, ideally in the same forum they raised it.

For example, if we got an alert, we should:

This was important, because our goal was to restore trust in our system, and being trustworthy ourselves was a key part of that. Pragmatically, it also reduced interruptions, questions, and hassling, because people knew where to look for status updates.

Learn from failure

When things go wrong, we don’t blame anyone, but instead see the failure as an opportunity to learn. Put another way, accidents and mistakes will always happen, and it is our responsibility to build a system that can tolerate them.

Specifically, when something goes wrong in our production systems, we had to write a post-mortem, and then review it within the team.

This is absolutely not about blame, but rather about making sure we are actually “fixing the leak”. Post-mortems will give us valuable insight into which issues are hurting us most, and how we can systematically address them.

We started by being over-enthusiastic in writing official post-mortem documents, and then backed off as we became more familiar with the process and could incorporate that kind of thinking into our day-to-day work.

Build learning in

When we learned a better way of doing something, we changed the system (either our production system or our development processes) so that the new, better way was the default. Adding checks to CI is a great example of this.

We should be extremely suspicious of advice like “be careful not to…”, “make sure you…”, “watch out for…”. Humans are bad at vigilance. Instead, let’s make machines that check things for us.

I don't think any of these principles are revolutionary or unique to me. I also don't think that they are fundamental or complete or come anywhere near describing a system of thought. Instead, these were gaps between how I like to operate and where that particular team was at that particular time.

That said, I do find myself referring to them a lot. They sit in that big grey space between broad principles like "reliability" and more concrete processes.


Cost of decisions

A few months ago I read The Invisible Hook by Peter Leeson. It's a nice little book about the economics of pirate life, intertwined with an evangelistic tract extolling the good news of neoliberal economics. Also, fantastic title.

The most interesting idea I came across was that any group decision has two costs:

  1. The cost of making the decision (how many people must consent)
  2. The cost of continued collaboration with people who disagree with those decisions (how many people could dissent)

Leeson cites The Calculus of Consent by Buchanan & Tullock to support this. I looked it up and it's 270 pages of very wordy economics. Something for another day.

Nevertheless, the idea is that in a command hierarchy, you can make decisions quickly because the person at the top says "do it this way", and that takes them barely any time at all. Cheap decision, right? Of course, if everyone else disagrees, then that decision has a cost of enforcement, grumbling, etc.

On the other end of the spectrum, if you wait for 100% consensus on a decision, you will be waiting a long time. But you will get the decision that will maximise happiness and compliance, probably.

Pirate constitutions were decided by unanimous vote for this very reason. Everyone had to be on board (heh heh) with the decisions being made. During battle, on the other hand, the captain ordered people around.

I think this trade-off alone is worth the book's price of admission. In business, I find we think a lot about the speed of decision making (it is really important) but less explicitly about the compliance cost of the decision. The idea is that there's no right and wrong, but different decisions might demand different trade-offs.

I was thinking about what other variables you might tweak for group decision making, and what might come of them.

The thing that kept coming up was the size of the decision. You can make a decision bigger or smaller and that changes the structure of the costs.

Say you're trying to set up some coding standards. If you want to do it across the whole company, then:

But if you shrink the scope of the decision, then:

(All of this might well be in The Calculus of Consent.)

I guess if you summarise the above, we can assemble a broader list of the costs of decision making:

  1. Consent
  2. Dissent
  3. Implementation
  4. Legibility

How often do you think of these things in your policy discussions?