Self-Documenting Unit Tests

Naming JUnit methods hasn’t been my strong point. My general obsession with naming conventions outstripped my creativity for specific unit test names during much of my last project. Darker moments birthed monstrosities like AdvertisementTest001(), AdvertisementTest002(), etc. Finding good names for classes, public methods, variables, and database columns is always on my mind; that’s why (a link to) a trusty thesaurus should always be at hand. So my inability to make good, consistent unit test names vexed me.

I had a trick for regression tests, though. I’d use the JIRA ID as the method name: e.g., public void jiraProjectNameIssueNumber(). Anybody could look up the issue and get the entire history when needed, and some particularly infamous issues were immediately recognizable when a too-well-known number showed up in the runner’s failed list. This I liked, and now I have something that I may like for actual unit tests thanks to the January 2015 Software As Craft topic, Russell Gold’s talk on Executable Documentation.

A better title might have been Self-Documenting Unit Tests since several of us were thinking this would be about generating code from comments, like cucumber tests or software contracts. Regardless, it’s an excellent, language-independent pattern for making better unit tests. The core idea is simple: test a single method given a particular set of conditions, and name the unit test givenCondition_expectResult(). If you practice TDD and you write these tests as stubs first, your path from red light to green light starts with something like this for a findsert-style method:

@Test  // DEFINITELY use annotations with JUnit!
public void givenNotFound_returnNewRecord() { ... }

The one thing I might add here is a method name prefix depending on how the tests and production classes are structured. In the case of a service method, I’ll probably create separate JUnit classes for each, so the given-expects pattern is fine. For tests that might be a little more integration-y or about a class representing a persistent entity, it might look more like this:

@Test
public void nextSerialId_onInstantiation_defaultsToZero() { ... }

@Test
public void nextSerialId_onClosedStatus_throwsInvalidStateException() { ... }

That second line wraps when I preview it in my current (kind of horrible) WordPress theme; it’s definitely long for the blog page, but it’s not so bad for a code editor. How long is too long? I hesitate to say this given some people (I mean you, MG!) and their predilection for long method names, but … Java has no defined limit on the length of a method name! Yikes. I’d say too long is not being able to uniquely identify the method name when eyeballing it in the JUnit runner window. I should further qualify that’s on a 1920-pixel-wide monitor with readable font sizes and standard Eclipse frame widths, for those of you who love to rules-lawyer such things into the realm of microfiche.

This approach yields some additional benefits atop normal TDD and unit testing. The presenter showed a few unit tests from well-known, highly regarded open source projects. Having tests is good, but ambiguous names and single methods that test a bunch of unrelated things make those tests difficult to understand, update, and expand. If the tests get too hard to understand, then they’ll meet @Ignore when inexplicable failures crop up. Contributors won’t expand or update the tests for new functionality if they have to understand too much unrelated code, test and otherwise, to write something useful. Naming the test this way forces the developer to think up front about what the test really does and should contain, guiding the test-writing process rather than playing catch-up after it’s done, and making it easier for future developers to build on it.

There are cases where a single test method may exercise more than one method call at a time, like a handful of easy “success” cases. Apply some common sense to avoid an explosion of teeny-tiny test methods, but also pay attention to how many valid separate cases you have. As the presenter said, it can be a clue that a production class is too large or complex and should be broken down.
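As a rough sketch of that balance, here is one way it might look: a few trivially similar success cases bundled into one named test, while a genuinely distinct condition keeps its own method. The Registry class and its findsert method are hypothetical, invented for illustration (run with java -ea so the asserts fire).

```java
import java.util.HashMap;
import java.util.Map;

public class RegistryTest {

    // Toy "findsert": return the existing id for a key, or insert a new one.
    static class Registry {
        private final Map<String, Integer> store = new HashMap<>();
        private int nextId = 0;

        int findsert(String key) {
            return store.computeIfAbsent(key, k -> nextId++);
        }
    }

    // Several trivially similar success cases can share one test method...
    static void givenExistingKeys_returnSameIds() {
        Registry registry = new Registry();
        int a = registry.findsert("alpha");
        int b = registry.findsert("beta");
        assert registry.findsert("alpha") == a; // repeat lookups are stable
        assert registry.findsert("beta") == b;
    }

    // ...while a genuinely different condition still gets its own method.
    static void givenNotFound_returnNewRecord() {
        Registry registry = new Registry();
        assert registry.findsert("gamma") == 0; // first insert gets id 0
    }

    public static void main(String[] args) {
        givenExistingKeys_returnSameIds();
        givenNotFound_returnNewRecord();
        System.out.println("all cases passed");
    }
}
```

If the list of "easy" cases in one method keeps growing, that is exactly the clue the presenter mentioned: the production class may be doing too much.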

The underlying principle here is the same as a test-commenting style that MG got me into the habit of using: start with one or more “Given …” condition comments, then a “When …” for what is being tested, and finally a series of “Then …” assertion comments. This is the comment/pseudocode-to-code style I’ve used for production code since forever, applied to tests. Those given/when/then comments often became the most current documentation of how code behaved: during refactoring, when something depending on it misbehaved, somebody went in there and read those comments. Some of the need for embedding such comments goes away when the same information is in the test method name, but how much depends on which side you take in the agile controversy over comments.

Yes, A Controversy Over Comments

The SoC presenter framed self-documenting unit tests as the “what it does” and self-documenting production code as the “how it does it”. I like how that dovetails with TDD since we (mostly) know the what first and figure out the how later. However, neither answers what can be the most important question when stumbling into old code or somebody else’s code: why? Deadlines, bugs in APIs, old versions of libraries, or (worst of all) politics sometimes require code to do … questionable things. Those are cases where comments are always, always, ALWAYS a Good Thing(tm).

Some people take a dimmer view of comments overall. They say that even good commenting is wasted effort since it duplicates what the code does and doesn’t have the basic validation code gets by compiling. They say comments get confusing over time as code changes but comments don’t keep up. They say that code should be obvious in how it’s laid out, how it’s broken down into subroutines, and how its elements are named. They say comments should not be necessary. They say it’s a mark of the badly written, the badly structured, or the badly named when comments are necessary. That’s a bold statement.

There is some truth there, but I don’t completely subscribe to it. Developers need to weigh the pros and cons for more than just themselves when deciding how much commenting is enough. They also need to weigh project factors like how complex the system is, how long it will be maintained, and by whom. That “whom” is a real consideration: if the whole team cannot play by the same rules, it may not be worth trying to ensure regular, high-quality commenting. If there’s only so much effort you can get the whole team to commit to, then getting them to use TDD and self-documenting unit tests may be the 20% effort that gets you an 80% maintainable system.