“Less thinking. More testing.” — May 22, 2010

(__Attention conservation notice:__ nearly 2,000 words that start with test-driven development in software, skip along to application prototyping, then take a big leap to an attack on libertarianism.)

I’ve meant for a long while to write about Kent Beck’s [book: Test-Driven Development By Example], but — as you can see from this blog in general — I’ve had a lot less time to write recently. The book hasn’t yet changed my life, but it should, and it will. And I think the idea has far broader applicability than just software development, which I’ll try to get into below.

The basic premise of test-driven development is to write your tests before you write your code. The structure is like so:

1. Write a test asserting something about the code that you’ve not yet written. For instance, if you intend to write code computing the number of days between two dates, you might make a few assertions: that the number of days between a date and itself should be zero; that the number of days between March 1 of 2009 and March 1 of 2012 is one day more than three times 365; and so forth. The more assertions you can make about this as-yet-unwritten code, the better.
2. Since you’ve not written the code, the assertions will fail. In fact, the code won’t even compile.
3. Write the simplest version of the code that will pass the test. Write this as quickly as possible.
4. Tests pass! Joy!
5. Refactor.
6. Having accomplished the task you were on, continue to bigger and better things: return to step 1 and repeat until you’ve achieved whatever you were trying to do.
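The cycle above, applied to the date example from step 1, might look like the following sketch. (The `days_between` function and test names are my own illustration, not from Beck’s book; the tests would be written first, fail, and then drive the simplest implementation that passes.)

```python
import datetime
import unittest


def days_between(a, b):
    """The simplest code that passes the tests below."""
    return abs((b - a).days)


class TestDaysBetween(unittest.TestCase):
    # Step 1: these assertions were written before days_between existed,
    # so at first they didn't even compile (step 2).
    def test_same_date_is_zero(self):
        d = datetime.date(2009, 3, 1)
        self.assertEqual(days_between(d, d), 0)

    def test_three_years_spanning_a_leap_day(self):
        start = datetime.date(2009, 3, 1)
        end = datetime.date(2012, 3, 1)
        # One day more than three times 365, because Feb 29, 2012
        # falls inside the span.
        self.assertEqual(days_between(start, end), 3 * 365 + 1)


if __name__ == "__main__":
    unittest.main()
```

Steps 4 through 6 follow: the tests pass, you refactor with the tests as a safety net, and you move on to the next assertion you can make.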

The advantages of unit tests are well known, and they’re best understood if you know what their absence is like. If you’re like me, you’ve worked before on code bases that had absolutely no tests, and the experience is terrifying. You can’t change one bit of code without worrying that you’ve broken something in some far-off part of the system. If you’re like me, this experience makes work actually unbearable: the more code you dip your fingers into, the wider the potential swath of destruction. Again if you’re like me, this can turn your stomach into a big ulcer which actually makes it hard to sleep. On the wrong day, it can lead you to excessive caution, which keeps you from doing work. Which is bad and makes your bosses hate you. You want your bosses to love you, don’t you? Of course you do.

Imagine instead that the code is entirely covered by unit tests. (This nirvana is known as “100% code coverage.”) Now, if you change the code, you just run the tests. Do all the tests pass? Joy and rapture! You can keep changing code to your heart’s content. When you break a test, figure out why you broke it, fix it, confirm that the tests now all pass, and move on. Continue to add tests for all the code that you add. Again, if you’re like me, this gives you a feeling of calm and confidence, which makes you work faster, which makes your bosses like you more.

Of course, sometimes your code will break for reasons that you didn’t test against. This is unfortunate but expected. When this happens, add another unit test to guard against the heretofore-unanticipated case. In this way, the unit tests document your knowledge about the particular problem domain. If done well, people should be able to understand your code by reading the unit tests. A unit test can essentially be read as “the code is expected to respond like so when it encounters a world shaped like so.”
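As a sketch of that discipline (the `word_count` function and the bug it once had are invented for illustration): when a failure surfaces that no test anticipated, the fix starts with a new test that reproduces it, so the suite permanently records what you learned about the problem domain.

```python
import unittest


def word_count(text):
    """Count whitespace-separated words in a string."""
    if text is None:  # added after the regression test below first failed
        return 0
    return len(text.split())


class TestWordCount(unittest.TestCase):
    def test_two_words(self):
        self.assertEqual(word_count("hello world"), 2)

    # Regression test: documents the heretofore-unanticipated case.
    # Reading it tells you that None is a legal input and what the
    # code is expected to do when it encounters one.
    def test_none_counts_as_zero(self):
        self.assertEqual(word_count(None), 0)


if __name__ == "__main__":
    unittest.main()
```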

Striving for 100% code coverage leads you to write smaller functions, because it’s easier to write unit tests to cover a smaller, more-specialized function. This is a happy side-effect: smaller, more-specialized functions are a good thing, whether or not you’re writing unit tests.

Another way that test-driven development contributes to a fearless coding experience is that — per the title of this post — it encourages you to think less and code more. If you’re like me, you can get stuck inside your own mind, wondering whether the particular path you’re going down will work. The TDD approach is to move past this state of mind as fast as you can, by writing tests. Don’t speculate idly about whether your code will do what you expect; think about how it should respond to known inputs, then write code that responds appropriately to those inputs.

Any number of conclusions might come out of this testing discipline:

* your speculation proved correct; the code works.
* it proved incorrect, and you need to pursue another line of development.
* it proved partly correct, partly incorrect, and you need to course-correct.

This institutionalized course-correction is, I think, the greatest virtue of test-driven development. And it’s why some variant of test-driven development applies in much broader contexts.

Take one context that’s only slightly broader, namely the process of building an entire app from the ground up. We recently did this at work; the task for one of our sprints was to build a prototype of an app. I wasn’t entirely sure what “prototype” meant, but now I think I get it. A few important aspects of prototyping stand out for me from this experience:

1. Build something with a terrible user interface, but the broad rough structure of what we think users will want. Explicitly *do not* make it pretty. If you make it pretty, the users who beta-test it and the designers who make it pretty will focus on the visual details rather than on how it functions. In order to keep their eyes on the prize, write a computer program that is only one or two steps up from a sketch on paper.
2. Write a prototype that exercises the necessary backend code, like databases and API calls and so forth. You might find that your API calls take too long to return, and therefore can’t fit into the application that you’re building. Or you might find that your database doesn’t have indexes where it needs them. Or you might find that you need to restructure the entire app to work with APIs and backend databases that are beyond your control.
3. By putting the code in front of users, you might find that they don’t actually want the program that you envisioned. Or they want it, but they’d *really* want it if you just added a little something extra.

When I first mentioned “less thinking, more testing” to my friends, one friend raised the valid point that this approach doesn’t rid you of the need for design. That’s absolutely true. First of all, you need actual hypotheses to put in front of users; you can’t put a blank piece of paper in front of them and ask them to draw what they want. You need to focus their attention in a particular direction. When you’re building the backend architecture, you likewise aren’t starting from a blank slate.

But the point is *course-correction as quickly and as often as possible*. That fundamental message is why I think test-driven development and rough prototyping are applicable far beyond software development. It’s more than a little applicable to ideologies. Take, for instance, the recent kerfuffle over Rand Paul’s opposition to the Civil Rights Act. Matt Yglesias pulls on this thread and attacks the very idea that adhering to consistent principles even when they drive you off a cliff is somehow admirable. I completely agree. Consistency is a fine virtue, and a belief system that’s not consistent can’t be entirely true. But there are many virtues apart from consistency; among the greatest is non-insanity.

To keep your beliefs from veering off into the insane, you need to course-correct as often as possible. We’re not playing some game where the purpose is to start with reasonable-seeming principles and derive hilarious conclusions that clearly make no sense; the point is to build ethics that work for you in conducting your daily life, and to build policies that work for your countrymen. If it looks like you’ve built a chain of reasoning that led from obvious-seeming premises to ridiculous conclusions, you probably need to reconsider the premises. If Goldwater believed that opposition to civil-rights legislation was obviously right, then the principles were so much chin music to defend conclusions that he would have come to anyway; if, on the other hand, he adhered to those conclusions with great regret because he believed that the principles were correct, then he should have taken stock of how he arrived at his conclusions.

It’s not exactly new wisdom that blind adherence to principles can lead you astray. Holmes said as much in [book: The Common Law], whose opening words are

> The object of this book is to present a general view of the Common Law. To accomplish the task, other tools are needed besides logic. It is something to show that the consistency of a system requires a particular result, but it is not all. The life of the law has not been logic: it has been experience. … The law embodies the story of a nation’s development through many centuries, and it cannot be dealt with as if it contained only the axioms and corollaries of a book of mathematics.

Earlier in my life, I thought it was very important which ethical principles one had. I thought consistency was the most important thing. (Blame college; maybe blame academia more generally? College is a great place to pick up ideas at an impressionable age and run with them until everyone smirks at you with the amusement of the non-cloistered.) I still think it’s important, but there are many more important things. Constantly clinging to reality is among the most important. Constant course-correction, with input from the real world at every branch, is extremely helpful at keeping you moored in reality.

This, by the way, is why I’ve never been able to get far into Nozick’s [book: Anarchy, State, and Utopia]. It always feels like a shell game: “Let’s suppose you believe some premises about liberty and side constraints … We’ll just shuffle around the shells a little bit and … here we go: clearly you must believe this thing about government non-intervention.” I no longer trust long chains of reasoning from seemingly self-evident low-level principles. I want principles nowadays that are closer to daily life, whence the jump from them to concrete action is smaller and less fraught with the potential for insanity.

That said, of course there’s room to move in the other direction: I tell you that I believe X and Y, and you reply that X and Y are only instances of Z. (With some fear of stretching an analogy too far, this is akin to refactoring.) So now I believe Z instead, which is a generalized version of X and Y. Or maybe you ask me whether I believe A; if I say yes, you point out that A contradicts X. If I agree with you that there’s a contradiction, I now have a choice: continue to believe X, or change my belief in X. I believe the endpoint of this process is what Rawls called reflective equilibrium.

And of course when you course-correct frequently, you still need principles. Principles help determine the path that you start down, and help determine which experiments to perform to correct your course. But the goal should be to experiment at every possible step.

Am I just calling for more use of the scientific method here? I think I am. It works at small scales like software, and it also works at large scales like philosophy.