Automated Unit Testing Best Practices

Having a suite of automated tests for your code helps improve software quality and maintainability in several dimensions. Writing unit tests is likely to lead you toward design choices that make your code easier for other developers to use, with the side effect of dramatically increasing its overall maintainability. Testable code typically has looser coupling between components and injectable dependencies, and it encourages SOLID design principles in a natural way, because SOLID code makes writing tests for (and therefore all usage of) your code easier. Automated unit tests also establish a verifiable usage contract between your components that lets you find and isolate bugs, or perform major refactoring, without fear of breaking existing features. All of this leads to higher-quality software.

Scope

This document isn’t meant to be a test-driven development (TDD) tutorial, an exhaustive treatment of unit testing theory, or an NUnit tutorial. Many people have written entire books on these subjects and this is just a short article. I’m going to make some assumptions about you: you’re a competent enough developer to be able to download and install software like NUnit and reference it in your project. I’m going to try to quickly show the value of unit testing and how automated tests can help you maintain a consistent (hopefully very high) level of quality through the software development lifecycle. We’ll start with an existing piece of code and refactor it to add some features and use unit tests to guide the process. Go ahead and download and install NUnit—the examples in this text were prepared against v2.6.2.

Testing and Software Quality

The two primary software quality domains are functional and structural. Functional quality refers to how correctly a piece of software meets its functional requirements: what the software is supposed to do. Structural quality refers to how well it meets non-functional requirements and engineering principles that determine things like stability and maintainability: how well the software does what it's supposed to do.

The software developer is ultimately the most capable of influencing software quality. Since the developer is in the best position to understand their code and how their systems work, writing, maintaining, and routinely measuring the effectiveness of automated unit tests is a best practice used by quality-conscious coders.

I don’t know a single developer who doesn’t test their code in some way informally before turning it over to a dedicated tester or deploying to production. Well, not anyone who stayed employed for very long 😉 Some people use test-driven development (TDD) and write tests before they write any production code. Some people just open the app and poke around looking for things that might be broken. Between these two extremes lies a world of different testing techniques. This document focuses on writing and maintaining automated unit tests, where order (code-first or test-first) isn’t as important as understanding the purpose and importance of having reliable, speedy unit tests to run when you need to make changes to your code.

What’s a Unit Test?

Just about everything you will ever read about unit testing will have a nice academic definition of what a unit test is, probably differing only because of the regular edits made to the Wikipedia page on unit testing. Here’s what it looked like in early 2013 when I wrote this:

In computer programming, unit testing is a method by which individual units of source code, sets of one or more computer program modules together with associated control data, usage procedures, and operating procedures, are tested to determine if they are fit for use.

In practice, a unit test is usually a method containing logic (usually written against a dedicated unit testing framework, like NUnit) that makes assertions about another method: the one containing the application logic you want to test. Here’s an example of a class with a method that we can unit test:

/// <summary>
/// Uses the Broker to send and receive and verify transmission of messages
/// </summary>
public class MessageService
{
    /// <summary>
    /// Sends a message over the broker and returns the receipt ID
    /// </summary>
    /// <param name="message">A message to send</param>
    /// <returns>Receipt ID for status checking</returns>
    public Guid Send(Message message)
    {
        return new Broker().Deliver(message);
    }
}

We’ll refer to this as the system under test, or SUT for short. Let’s use NUnit to test it out. Later I will go over best practices for structuring tests in your solution, but for now I’m going to assume that you can download and install NUnit yourself and add a reference to the framework DLL in a project. After that, you must create a test fixture, which is a class that contains tests. Just create a regular public class and add the TestFixture attribute to it:

using NUnit.Framework;

namespace Application.Tests
{
    [TestFixture]
    public class MessageServiceTests
    {
    }
}

It needs both the attribute and the public access modifier so that the NUnit test runner can find the class and run the test and configuration methods in it. To create a test, you need a public void method with the Test attribute:

    [Test]
    public void SendReturnsReceiptGuidFromBroker()
    {
    }

We have a test! How do you run this thing, anyway? Well, if you can build your solution, you can open the file in the NUnit test runner on your machine:

[Screenshot: the NUnit GUI test runner]
In this example I’ve pointed the runner at the DLL that contains my test fixture. You can see in the tree that the fixtures are organized by namespace, and the tests by fixture. Feel free to mess with the attributes or member access modifiers in your code to see how they affect the runner’s ability to see your tests.

Press Run and see what happens. Everything passes. Wait, what? But our test was empty! So we’ve just learned that a unit test passes unless we do something to make it fail. Let’s make an assertion about the code using the NUnit Assert syntax:

    [Test]
    public void SendReturnsReceiptGuidFromBroker()
    {
        var sut = new MessageService();
        var message = new Message();
        var result = sut.Send(message);
        Assert.That(result, Is.Not.Null);
    }

Ok, looks like our code was a little too simple: all we can do is verify that the return value of Send() isn’t null. Is there much value here? Not really, since that result could be coming from anywhere. There’s no way to tell whether it actually came from the Broker class or was just conjured out of thin air. Because our SUT creates the broker internally, we don’t have any control over its initialization. We can’t easily configure it or swap it out for a fake test implementation. Something else to consider is how Broker might behave inside a unit test. Will it try to connect to the network? Will it throw exceptions unless certain configuration data is available in an XML file someplace? These questions are important for testing, but only if you’re testing the Broker class.

We’re concerned with the system under test. Broker is just a dependency of the SUT and we’ll want to swap it out with a fake implementation that will behave in a predictable way (and avoid doing unsafe or slow operations) while tests are running. Let’s take some lessons from the best practices for dependency injection post and refactor a bit:

public class MessageService
{
    private readonly IBroker _broker;

    public MessageService(IBroker broker)
    {
        _broker = broker;
    }

    public Guid Send(Message message)
    {
        return _broker.Deliver(message);
    }
}
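The IBroker abstraction itself isn't shown in the article; here's a minimal sketch inferred from how MessageService uses it (the Message shape is likewise an assumption, based on the properties the later examples set):

```csharp
using System;
using System.Collections.Generic;

// Minimal broker abstraction, inferred from MessageService's usage:
// Deliver() accepts a Message and returns a receipt Guid.
public interface IBroker
{
    Guid Deliver(Message message);
}

// Stand-in for the article's Message type, inferred from the
// properties set in the later test examples (Recipients, Subject, Body).
public class Message
{
    public ICollection<string> Recipients { get; set; }
    public string Subject { get; set; }
    public string Body { get; set; }
}
```

Note that Recipients is typed as ICollection&lt;string&gt; so it accepts both a string array and a List&lt;string&gt;, matching how the later tests populate it.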

Ok, now we have an IBroker that exposes the broker’s Deliver() method. We can use this in our test fixture to inject our own implementation of IBroker and measure our SUT’s interactions with its IBroker dependency to make sure that the value returned by Send() is actually coming from Deliver()!

    [Test]
    public void SendReturnsReceiptGuidFromBroker()
    {
        var expectedResult = Guid.NewGuid();
        var testBroker = new TestBroker {Result = expectedResult};
        var sut = new MessageService(testBroker);
        var message = new Message();
        var result = sut.Send(message);
        Assert.That(result, Is.EqualTo(expectedResult));
    }

    class TestBroker : IBroker
    {
        internal Guid Result { get; set; }

        public Guid Deliver(Message message)
        {
            return Result;
        }
    }

By verifying that Send() returns the same value as Deliver(), we’ve fully explored the interaction between MessageService and Broker using a completely fake IBroker implementation. The important thing is that our test can now run in complete isolation from any other system: we can create an instance of MessageService and call Send() over and over without worrying about speed or infrastructure side effects. This is a good thing! We’ve also made the code much more deliberate; the caller now creates and passes the broker instead of MessageService worrying about how that class is implemented and what’s involved in constructing it. That’s looser coupling!

Now, interfaces aren’t the only abstraction that enables the creation of a fake implementation for exploring dependency interactions in a unit test. We could ditch the interface and create an abstract class instead and have both Broker and TestBroker inherit from it. Or we could mark Broker.Deliver() as virtual and have TestBroker inherit from Broker and then override the virtual method. The only difference at this point is in the specific code required to implement TestBroker—the unit test code won’t need to change.
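A sketch of the virtual-method variant, assuming we ditch the interface and inject a Broker directly (the body of the real Deliver() is a placeholder assumption):

```csharp
using System;

// Stand-in for the article's Message type so the sketch compiles alone.
public class Message { }

public class Broker
{
    // Marked virtual so a test subclass can replace delivery behavior.
    public virtual Guid Deliver(Message message)
    {
        // ... the real network delivery would happen here (placeholder) ...
        return Guid.NewGuid();
    }
}

// Inherits the real Broker and overrides Deliver() with a canned result,
// so tests never touch the real delivery logic.
class TestBroker : Broker
{
    internal Guid Result { get; set; }

    public override Guid Deliver(Message message)
    {
        return Result;
    }
}
```

With this approach MessageService would accept a Broker rather than an IBroker, and the test would pass in a TestBroker; the assertions stay exactly the same.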

That leaves us with another consideration: It seems like there’s going to be a lot of code in your tests to make fake objects. Isn’t there an easier way to do this? There is: mocks. Using a mocking framework is a best practice to replace fakes, stubs, doubles, etc. in your unit tests with concise, elegant test objects that can exactly replicate abstract code like the types I described above (interfaces, abstract base classes, virtual/overridable members). Mocks pretend to be implementations of your injectable dependency types and have special APIs that let you program their behavior in a succinct and fluent way. See the [[mock objects for unit testing best practice]] post for more detail.
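As a taste of what that conciseness looks like, here is the earlier receipt test rewritten with a mocking framework. Moq is used purely as one popular example (an assumption; the linked post may cover a different framework), and the article's types are repeated so the snippet stands alone:

```csharp
using System;
using System.Collections.Generic;
using Moq;              // assumption: Moq as the example mocking framework
using NUnit.Framework;

// The article's types, repeated here so the snippet is self-contained.
public class Message
{
    public ICollection<string> Recipients { get; set; }
    public string Subject { get; set; }
    public string Body { get; set; }
}

public interface IBroker
{
    Guid Deliver(Message message);
}

public class MessageService
{
    private readonly IBroker _broker;
    public MessageService(IBroker broker) { _broker = broker; }
    public Guid Send(Message message) { return _broker.Deliver(message); }
}

[TestFixture]
public class MessageServiceMockTests
{
    [Test]
    public void SendReturnsReceiptGuidFromBroker()
    {
        var expectedResult = Guid.NewGuid();

        // Program the mock: any Message passed to Deliver() yields our Guid.
        var broker = new Mock<IBroker>();
        broker.Setup(b => b.Deliver(It.IsAny<Message>())).Returns(expectedResult);

        var sut = new MessageService(broker.Object);
        var result = sut.Send(new Message());

        Assert.That(result, Is.EqualTo(expectedResult));

        // Verify the interaction: Deliver() was called exactly once.
        broker.Verify(b => b.Deliver(It.IsAny<Message>()), Times.Once());
    }
}
```

No hand-written TestBroker class is needed; the mock's Setup() replaces it, and Verify() gives us interaction checking we didn't have before.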

Now let’s deal with adding some features to the implementation. It turns out the Broker crashes if you pass in a null message or a message with no recipients, so we need to throw exceptions under those conditions and let the caller handle them. Also, the business doesn’t want anyone sending messages with neither a subject nor a body, so we need to cover that condition as well:

    public Guid Send(Message message)
    {
        if (message == null)
            throw new ApplicationException("Invalid message");

        if (message.Recipients == null || message.Recipients.Count == 0)
            throw new ApplicationException("No recipients");

        if (string.IsNullOrWhiteSpace(message.Subject) &&
            string.IsNullOrWhiteSpace(message.Body))
            throw new ApplicationException("No message data detected");

        return _broker.Deliver(message);
    }

Let’s add a test that can verify the null message:

    [Test]
    public void SendThrowsApplicationExceptionWhenMessageIsNull()
    {
        var testBroker = new TestBroker();
        var sut = new MessageService(testBroker);
        var exception = Assert.Throws<ApplicationException>(() => sut.Send(null));
        Assert.That(exception.Message, Is.EqualTo("Invalid message"));
    }

Using the Assert.Throws<T>() syntax, we can execute the Send() method and verify that in the configuration our test provides, an ApplicationException is thrown, and then assert the contents of the message, as well. Now a test for invalid recipients:

    [Test]
    public void SendThrowsApplicationExceptionWhenRecipientsIsNullOrEmpty()
    {
        var testBroker = new TestBroker();
        var sut = new MessageService(testBroker);
        var message = new Message();
        var exception = Assert.Throws<ApplicationException>(() => sut.Send(message));
        Assert.That(exception.Message, Is.EqualTo("No recipients"));
        message.Recipients = new List<string>();
        exception = Assert.Throws<ApplicationException>(() => sut.Send(message));
        Assert.That(exception.Message, Is.EqualTo("No recipients"));
    }

And a test to verify that both an empty subject and empty body aren’t allowed:

    [Test]
    public void SendThrowsApplicationExceptionWhenMessageBodyAndSubjectAreNullOrEmpty()
    {
        var testBroker = new TestBroker();
        var sut = new MessageService(testBroker);
        var message = new Message {Recipients = new[] {"person@domain.com"}};
        var exception = Assert.Throws<ApplicationException>(() => sut.Send(message));
        Assert.That(exception.Message, Is.EqualTo("No message data detected"));
    }

So this time we added a recipient, but left subject and body empty and got our error. Great, our new features are covered! Let’s run the whole suite:

[Screenshot: the NUnit GUI showing failing tests]
Looks like our original test that verifies sending a message returns the Guid receipt is now failing. Oh yeah, we didn’t add any recipients to it or give it a subject or body. Let’s add all three:

    [Test]
    public void SendReturnsReceiptGuidFromBroker()
    {
        var expectedResult = Guid.NewGuid();
        var testBroker = new TestBroker {Result = expectedResult};
        var sut = new MessageService(testBroker);
        var message = new Message
            {
                Body = "Body",
                Subject = "Subject",
                Recipients = new[] {"foo@bar.com", "bar@foo.com"}
            };
        var result = sut.Send(message);
        Assert.That(result, Is.EqualTo(expectedResult));
    }

We’ve covered all of our features and bugs for now, and as the number of tests grows, so will your confidence that you’ve got everything covered. However, every once in a while something will slip through the cracks and go untested. Using a code coverage tool is a best practice that can help you identify individual statements that aren’t tested. See the [code coverage best practices] post for more details on how a coverage tool can enhance your testing efforts.

Frequent Speedy Testing Is Key

So you can see that your unit tests will grow as you implement new features and fix bugs. It’s extremely important to have a fast-running test suite because it’s also a best practice to run your unit tests every time you change your code, and especially before you check anything into source control. The best practice of continuous integration (CI) incorporates frequent check-ins and automated builds with analysis and testing tools to measure the quality and completeness of ongoing software development. If you’re putting any effort into automating your builds or using a CI server, you’ll want it to run your automated unit tests as well, to make sure nothing gets deployed when tests are failing and to record your progress over time. Check out the [[continuous integration best practices]] post on this site for more details.

Because you’ll be running your tests so often, it’s best to make sure they’re very fast. If you incorporate mocks or fakes whenever possible, you’ll avoid creating instances of real implementations that might slow things down by accessing the network, connecting to a database, reading files, etc. If you have 500 tests, you don’t want to be waiting around for 5 minutes just to see if a small change had a negative effect—you want your tests to be as agile as you are, so keep them convenient for you and your team by watching for performance bottlenecks and occasionally optimizing for speed as your suite grows larger. Look for long-running tests or fixtures and see if there’s anything you can do to improve performance to make testing easier and faster for everyone. The NUnit test runner has limited timing information available in the GUI.
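One way to keep an eye on slow tests is NUnit's MaxTime attribute (available since v2.5, so it should work with the v2.6.2 used here), which fails a test outright if it exceeds a millisecond budget. A sketch, where the test body is a placeholder:

```csharp
using System.Threading;
using NUnit.Framework;

[TestFixture]
public class SpeedGuardedTests
{
    // MaxTime fails this test if it takes longer than 200 ms to run,
    // flagging a performance regression as loudly as a logic bug.
    [Test, MaxTime(200)]
    public void FastOperationStaysFast()
    {
        // ... exercise the code path you want to keep fast ...
        Thread.Sleep(10); // stands in for the real work under test
    }
}
```

This turns a speed expectation into an enforced part of the suite rather than something you only notice when the runner starts feeling sluggish.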

Another thing to consider is Visual Studio integration. My two favorite options are ReSharper and TestDriven.Net (in that order). Both have context menu support and UI integration that lets you run individual tests, whole fixtures, all tests in a solution, etc., and have great debugger support. I prefer ReSharper because it has many other handy features, but TestDriven.Net is a fantastic free product. Here’s a shot of ReSharper’s test runner hitting a breakpoint in a unit test:

[Screenshot: the ReSharper unit test runner]
The UI is great for displaying test results and code side-by-side and it does a great job presenting timing information and test errors. One-click execution of all your tests is extremely handy when you’re testing frequently, so definitely look into VS integration if you’re taking unit testing seriously.

Summary

These four tests now act as a contract for future developers to follow when they need to modify the implementation or add features. If they modify the message checking logic in a way that causes a test to fail, they can see by looking at the names of the tests and simple setups and assertions exactly what was expected of the system before they started modifying it. That’s the kind of maintainability guarantee that will let you unleash your refactoring skills on your code with confidence!
