Writing Purposeful Unit Tests

by Geoff Gerrietts on April 11, 2014


Several recent blogs have discussed unit testing, some of them in considerable depth. One of my favorites is Jeff Knupp’s entry, which is a comprehensive look at how to write and understand good unit tests. Jeff touches on the motivations for testing, but like most other writers, moves on quickly to the details of actually writing tests. For an introduction, that’s probably appropriate. Once a developer has mastered the basics, though, it pays to give more consideration to why we’re testing in the first place.

Writing an exhaustive unit test suite is hard; for a non-trivial codebase it might even be impossible. That’s the dirty secret of testing: you can’t test everything. Even if you could imagine every possible test, it wouldn’t be practical to write tests for everything. You need to make informed, intelligent decisions about what you will test, and how you will test it.

A clear sense of the motivations behind testing can help inform the decision about how to write good tests. Testing efforts fail when they lose touch with their reason for being; developers are disinclined to write tests when the tests feel meaningless, or when the tests are hard to write. Motivation matters.

Knupp lists three of the more common motivations for writing unit tests. I’ll discuss those same three: writing better code, preventing regressions, and demonstrating code correctness. Each of these motivations, when realized, delivers great value to a software engineering process. But each of them is also tragically easy to miss.

Writing Better Code

When you write unit tests, you write better code. Making code testable requires better factoring and encourages decoupling. When you write tests, you look very closely at the code and think about it, which often reveals simpler ways to solve problems. To my mind, this is the greatest benefit of unit testing, but it’s also the most underrated, and the hardest to quantify.

I’ve seen a couple different descriptions of a well-factored unit. Essentially, a unit should do one thing, and that one thing should be explainable in a single sentence with no conjunctions. That’s an ambitious definition, but one that serves me pretty well when I’m on the fence about splitting up a function. When your functions fit that description, they’re really easy to test and really easy to read. The only downside is that you have to come up with a lot of function names.
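As a hypothetical sketch of that rule in action, consider a function that "parses a record AND stores it" (the database module here is a stand-in for whatever persistence layer you use); the conjunction is the tell that it wants to be two functions:

def import_record(line):
    # Does two things: parses a record AND stores it.
    fields = line.strip().split(",")
    record = {"name": fields[0], "phone": fields[1]}
    database.save(record)

# Split into two units, each describable without a conjunction:

def parse_record(line):
    # Parses a comma-separated line into a record dict.
    fields = line.strip().split(",")
    return {"name": fields[0], "phone": fields[1]}

def store_record(record):
    # Persists a record; 'database' is a hypothetical stand-in.
    database.save(record)

After the split, parse_record is a pure function that can be tested without touching a database at all.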

But legacy code exists, spaghetti code can happen, and an adequately determined developer can write unit tests that are just as complicated. Writing unit tests does not make your code better; making your code better makes your code better. Writing unit tests can spark the decision to refactor poorly factored code, but it’s up to you to take the initiative.

Preventing Regressions

One of the great things about a large, well-written test suite is how it reacts to regressions. A regression is an error introduced by new development into a feature that previously worked correctly. Regressions tend to occur when new work refactors or extends existing components. Automated tests can be an excellent defense against these kinds of errors.

Certain styles of testing can make automated tests less useful in detecting regressions, though. For example, imagine a function that retrieves a model object from the database, invokes a method on that model, and performs some transformations on the result:

import datetime

def format_contact_for_summary(contact_name):
    # Contact is an ORM-style model defined elsewhere.
    contact = Contact.by_name(contact_name)
    next_contact = contact.next_contact_due()
    # Time remaining until the next contact is due.
    contact_in = next_contact - datetime.datetime.now()
    if contact_in.days < 1:
        contact_str = "NOW!"
    elif contact_in.days == 1:
        contact_str = "tomorrow"
    else:
        contact_str = "in {0.days} days".format(contact_in)
    return "{0.name} ({0.phone}) -- contact {1}".format(contact, contact_str)

When writing a test for this code, we might choose to employ a mock object to replace Contact. This would keep our test isolated while also letting us declaratively control the return value of Contact.by_name. There's a danger here, though. Consider what happens if, while working on a different feature, we change the interface of Contact.next_contact_due so that it requires a User parameter (perhaps we need to calculate the next contact date for either a sales rep or a manager). Our format_contact_for_summary function is now broken, because it calls next_contact_due with no arguments.
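Here is a sketch of such a mock-based test, using the mock library (bundled as unittest.mock since Python 3.3); the names and the assertion are illustrative. Because a bare Mock accepts any call whatsoever, this test keeps passing even after the interface change has broken the production code:

import datetime
import unittest

import mock  # unittest.mock in Python 3.3+

class FormatContactTests(unittest.TestCase):
    def test_summary_line(self):
        fake_contact = mock.Mock()
        fake_contact.name = "Pat"
        fake_contact.phone = "555-0100"
        # A bare Mock accepts *any* call, so this setup keeps working
        # even after next_contact_due requires a User argument.
        fake_contact.next_contact_due.return_value = (
            datetime.datetime.now() + datetime.timedelta(days=3))
        with mock.patch.object(Contact, "by_name",
                               return_value=fake_contact):
            result = format_contact_for_summary("Pat")
        self.assertEqual(result, "Pat (555-0100) -- contact in 2 days")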

This pitfall can be avoided in a couple different ways. Tests could use smart mock objects that mimic the interface of the objects they are mocking. Alternately, a test suite can contain integration tests specifically designed to prove that interfaces are called correctly. Both approaches are viable, and a combination of the two approaches probably offers the best defense against regressions.
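The mock library's autospeccing is one way to get such smart mocks. A minimal sketch, reusing the Contact example from above:

import mock

class FormatContactInterfaceTests(unittest.TestCase):
    def test_uses_real_interface(self):
        # create_autospec builds a mock whose methods enforce the
        # real signatures of Contact's methods.
        fake_contact = mock.create_autospec(Contact, instance=True)
        fake_contact.name = "Pat"
        fake_contact.phone = "555-0100"
        fake_contact.next_contact_due.return_value = (
            datetime.datetime.now() + datetime.timedelta(days=3))
        with mock.patch.object(Contact, "by_name",
                               return_value=fake_contact):
            # If next_contact_due is changed to require a User, the
            # no-argument call inside format_contact_for_summary
            # raises TypeError, and this test fails as it should.
            format_contact_for_summary("Pat")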

Code Correctness


When developers start writing tests, code correctness typically guides their efforts. Managers also tend to think in terms of this motivation. Unfortunately, proving a block of code correct is exceptionally difficult. At the automated test level, it is hard to write a test that validates business requirements; our tests can generally only make assertions about the behavior of a particular implementation. Furthermore, while it can be tempting to write a lot of tests that pretend to be users, such tests are difficult to write and costly to run. A test suite does reduce the testing burden, but only indirectly, by reducing the number of defects that reach an end user.

That said, correctness remains an important goal. Each unit of code implements flow logic, transformations, or both. Testing can validate these behaviors at the unit level, and help ensure that a particular unit of code behaves according to expectations.
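As a concrete (and hypothetical) illustration, a unit with a single branch of flow logic and a simple transformation can be pinned down with a couple of focused tests:

import unittest

def pluralize(noun):
    # Naively pluralize an English noun.
    if noun.endswith("s"):
        return noun + "es"
    return noun + "s"

class PluralizeTests(unittest.TestCase):
    def test_regular_noun(self):
        self.assertEqual(pluralize("regression"), "regressions")

    def test_sibilant_noun(self):
        # Exercise the other branch of the flow logic.
        self.assertEqual(pluralize("boss"), "bosses")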

Because testing for correctness is hard, it's all too easy to write tests that don't really prove much. I've seen (and, in fairness, probably written) several varieties of useless tests. Consider the following:

def test_nothing(self):
    result = do_a_thing()
    if result and result[0] is not None:
        self.assertEqual(result[0], 15)
    if result and result[0] is None:
        self.assertEqual(len(result), 1)

In the event that result is None or an empty list, this test passes. Maybe this example seems too contrived. Check out this one:

def test_nothing(self):
    result = do_a_thing()
    expected = iter(xrange(len(result)))
    for res in result:
        self.assertEqual(res, expected.next())

Much more insidious is the test that passes due to an empty loop!
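A simple guard is to assert on the expected length before iterating; a sketch, where the three expected results are illustrative:

def test_something(self):
    result = do_a_thing()
    # Fail loudly if the loop below would otherwise be empty.
    self.assertEqual(len(result), 3)
    for i, res in enumerate(result):
        self.assertEqual(res, i)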

Here’s a different kind of problem. This is what I call a test tautology.

import math
import unittest

def dont_be_a_square(man):
    root = math.sqrt(man)
    return (root - int(root) == 0.0)

class SpongebobSquareTests(unittest.TestCase):
    def test_yo(self):
        for i in xrange(400):
            root = math.sqrt(i)
            expect = ((root - int(root)) == 0.0)
            self.assertEqual(dont_be_a_square(i), expect)

In this case, the test simply re-implements the logic of the function it is testing. On the plus side, it's almost certainly going to pass every time. On the minus side, it could be rewritten to self.assertTrue(True) and it would run faster with no loss of information.
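A more useful version states its expectations as hand-picked concrete values instead of re-deriving them:

class SpongebobSquareTests(unittest.TestCase):
    def test_perfect_squares(self):
        for square in (0, 1, 4, 9, 16, 144, 361):
            self.assertTrue(dont_be_a_square(square))

    def test_non_squares(self):
        for non_square in (2, 3, 5, 15, 99, 360):
            self.assertFalse(dont_be_a_square(non_square))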

Other correctness errors are more about laziness than oversight. I'd love to say that laziness errors are less common than logic errors, but they're probably more common. The most common error here is a single test that exercises just the positive path through the code, or sometimes a couple of positive paths. Exceptional flows are important to document and test!
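For example, given a hypothetical lookup_contact function, the failure path is often no harder to test than the happy path:

class ContactLookupTests(unittest.TestCase):
    def test_known_contact(self):
        self.assertEqual(lookup_contact("Pat").name, "Pat")

    def test_unknown_contact(self):
        # The exceptional flow gets its own test. We assume here
        # that lookup_contact raises KeyError for unknown names.
        with self.assertRaises(KeyError):
            lookup_contact("no-such-contact")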

It can be tempting, when confronted with a knotty piece of code that has a number of intertwined dependencies, to skip the complicated part. Sadly, that’s also a fairly common problem in a test suite. At the time that the decision gets made, it seems harmless enough. But if the code is complicated, it’s pretty likely to be the source of errors. Sometimes the right approach is just to push on through, but more often, a complicated block of code is something that could benefit from a refactoring — let the need to test help clean up your code!

Looking at tests with a sense of purpose can help illuminate and avoid many common pitfalls. Tests with clear purpose, that achieve the aims of testing, reinforce the quality of the test suite. They also reinforce the desire to write tests — when a developer can see that they are contributing value, they are more likely to want to contribute. Together, these can make the difference in the success of a testing effort.

