
Why your test should fail first

I always try to convince other developers that writing your test first is not just about doing TDD the "correct" way (if there even is such a thing). That sounds a little fundamentalist. Rather, it's about making sure your test is failing and failing for the right reason.

Actually, this is only one reason: writing your test first also pushes you towards writing only the code you need and covering all code paths, and forces you to think more about your code.

But for now, I will show some examples of tests that succeeded immediately. If I had written the code first, and then the test, I would have had a false sense of security. Writing the test first proved to me that the test itself was wrong.

A .NET example

First, a .NET example. I'm trying to test that none of my WebApi controllers have the AuthorizeAttribute applied to them. This is because our legacy project used to use these attributes, but we're moving to Autofac's filters. So none of the ApiControllers registered with Autofac is allowed to use the AuthorizeAttribute. For the "old" controllers that are not managed by Autofac, this isn't an issue.

The idea is to build the Autofac container, then request the controllers, and finally check if the attribute is present. The container is built using the same registrations as in production, giving us a container that resembles production. There is already an extension method in place (GetServices) to query the container and get all registrations for a given service type.

This is the test code:

var controllers = GetServices<Controller>(container);

foreach (var service in controllers)
{
    var implementation = container.Resolve(service.ServiceType);
    var authorizeAttributes = implementation
        .GetType()
        .GetCustomAttributes(typeof(System.Web.Http.AuthorizeAttribute), true);

    authorizeAttributes.Length.Should().Be(0);
}

Can you spot the mistake? I'm testing Controller classes (i.e. MVC controllers), instead of ApiController classes (i.e. WebAPI controllers). It's a stupid little mistake, but those happen too. In fact, in most cases where my test runs green without having implemented the code, it comes down to small errors.

The above was easily fixed by changing the first line to:

var controllers = GetServices<ApiController>(container);

A NodeJS example

Here's another example, in NodeJS. We're checking whether an HTTP response contains certain keys (names obfuscated, of course):

expect(res.body.customer).to.not.contain.keys(['internalId', 'address']);

However, this assertion succeeds when the body looks like this:

{
    _meta: {
        internalId: 'foo',
        address: 'bar',
        name: 'zaz'
    }
}

That might seem weird, but it's how the Chai library works. The code that does what we want looks like this:

expect(res.body.customer).not.to.contain.any.keys('internalId', 'address');

Again, a small difference, but with large consequences for your test and the security it should provide. And in this case, with even graver consequences if this construction is spread throughout hundreds of tests.

Conclusion

Writing tests first drives your design, but it also ensures you're writing the test correctly. Sometimes, small mistakes will cause the test to succeed regardless of the application code you actually write. This removes the advantage of tests as a safety net when you're changing the underlying code.