We as humans are always changing and evolving, sometimes for the better and sometimes not. In our professional lives this is expected: one gains experience, learns new tools, meets people from different backgrounds, and so on.

Back when I was working for a consultancy company in 2016, I met a wonderful person who was an architect at the time and is now one of my best friends. Our main topic of conversation was technology, and we always dug deeper and deeper into it. Since we were both working on .NET, a lot of the following words appeared at least once in every conversation: patterns, reflection, expressions.

At that time, we really enjoyed doing green-field projects, as we could decide the foundations for each one. Deciding how to architect your project based on architectural patterns never fails, or does it?

Well, the projects we did never really failed, but there was one red flag we did not pay much attention to at the time. The code was complex. Engineers did not fully understand what we had done or why we had done it that way. Sure, one can explain certain topics, but at some point you have to draw the line.

Don’t get me wrong, complex applications are not necessarily bad; one might need to do some things preemptively in order to avoid seeing the system become a big ball of mud in a couple of months. But there are two concepts that, at the time, I did not keep close to my heart: KISS and YAGNI.

Keep it simple, stupid, because you aren’t gonna need it. This applies to a lot of things, such as creating interfaces just because. It also helps you avoid premature optimization.

I remember one of the areas where the engineers working with us struggled the most: the tests. This might sound common, as every company I have been at has spent at least a dozen hours defining what tests should be, what should be tested, and the differences between the various types of testing.

That was not the case here; the problems came from misusing a framework named AutoFixture. This cool framework claims the following:

AutoFixture makes it easier for developers to do Test-Driven Development by automating non-relevant Test Fixture Setup, allowing the Test Developer to focus on the essentials of each test case.

In other words, for each test you have, it will supply random data to fill your entities so that no parameter is null. You can then add customizations to provide specific values for specific parameters. So instead of having:

[Theory]
[InlineData(2, 2, 4)]
public void TestSum(int value1, int value2, int expected)
{
    var result = Calculator.Add(value1, value2);
    Assert.Equal(expected, result);
}

With AutoFixture you could write the following and have random data generated:

[Theory, AutoData]
public void TestSum(int value1, int value2, int expected)
{
    var result = Calculator.Add(value1, value2);
    Assert.Equal(expected, result);
}

Or, with customizations, indicate exactly which data you want for specific fields (the others still get random data):

[Theory]
[InlineDefaultData(typeof(Value1Is4))]
[InlineDefaultData(typeof(Value2Is10))]
public void TestSum(int value1, int value2, int expected)
{
    var result = Calculator.Add(value1, value2);
    Assert.Equal(expected, result);
}

public class Value1Is4 : ICustomization
{
    public void Customize(IFixture fixture)
    {
        fixture.Customizations.Add(
            new FilteringSpecimenBuilder(
                new FixedBuilder(4),
                new ParameterSpecification(typeof(int), "value1")));
    }
}

public class Value2Is10 : ICustomization
{
    public void Customize(IFixture fixture)
    {
        fixture.Customizations.Add(
            new FilteringSpecimenBuilder(
                new FixedBuilder(10),
                new ParameterSpecification(typeof(int), "value2")));
    }
}

Do you start to see the complexity? While this example is not fair to AutoFixture, as it really comes in handy when the parameters are more than just a couple of integers, there is a problem you have probably already noticed:

[Theory, AutoData]
public void TestSum(int value1, int value2, int expected)

This means that the expected value will also be random. So now we cannot test this function the way we did before, and we have to do something like:

[Theory, AutoData]
public void TestSum(int value1, int value2)
{
    var result = Calculator.Add(value1, value2);

    Assert.Equal(value1, result - value2);
    Assert.Equal(value2, result - value1);
}

As we cannot predict what value1 and value2 are going to be, we cannot predict beforehand what the result is going to be. So we need to implement a way of checking at runtime that the operation we wanted to test actually worked. This is problematic, as we end up adding complexity to something that should not have any.

There are probably some use cases where AutoFixture is useful, but the problem is that back then we used it for everything. So for functions that should map A to B, we ended up implementing a way to map B back to A in the test and comparing the original A with the A we recovered from the B result.
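A minimal sketch of that anti-pattern, written here in Go for brevity (ToLabel and checkRoundTrip are hypothetical names, not from the original project): because the input is random, the test cannot state the expected output up front, so it re-implements the inverse mapping and verifies the round trip instead of a concrete value.

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// ToLabel is the hypothetical mapping under test: A (an int ID) to B (a label).
func ToLabel(id int) string {
	return fmt.Sprintf("user-%d", id)
}

// checkRoundTrip mimics what our AutoFixture-style tests ended up doing:
// since the input is random, the expected label cannot be written down,
// so the check re-implements the inverse mapping (B back to A) and
// compares the recovered ID with the original one.
func checkRoundTrip(id int) bool {
	label := ToLabel(id)
	recovered, err := strconv.Atoi(strings.TrimPrefix(label, "user-"))
	return err == nil && recovered == id
}

func main() {
	fmt.Println(checkRoundTrip(42)) // the "random" id the fixture would supply
}
```

Notice that the test now contains a second, inverted implementation of the production logic, which is exactly the kind of incidental complexity described above.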

Just for the sake of using new and cool frameworks, we ended up complicating things without a good justification. Fortunately, as time passed, we both moved away from the automatic and auto-magic way of doing things.

When I started working with Go, everything changed for me, as the way of working was quite different. People had a mindset of not adding frameworks just because, and of making do with the excellent standard library the language provides. Coming from .NET it felt very strange, as back then we did not even have a native JSON library, and every project pulled in Newtonsoft.Json just to begin with. This led to developers relying on a lot of frameworks most of the time. And I am not even looking at JS developers, who take this to a whole other level.

The first time I read a Go test, I did not understand a thing. It was a test of around 100 lines, but the actual testing logic was very small; most of the lines were inside a list defining different test cases. This is what they call a table-driven test. Basically:

Value1  Value2  Expected
     2       2         4
    -3       3         0
     0      10        10
    -5      -7       -12

In code:

func TestAdd(t *testing.T) {
	tests := []struct {
		name   string
		value1 int
		value2 int
		want   int
	}{
		{
			name:   "two positives",
			value1: 2,
			value2: 2,
			want:   4,
		},
		{
			name:   "negative and positive",
			value1: -3,
			value2: 3,
			want:   0,
		},
		{
			name:   "zero and positive",
			value1: 0,
			value2: 10,
			want:   10,
		},
		{
			name:   "negative and negative",
			value1: -5,
			value2: -7,
			want:   -12,
		},
	}
	for _, tt := range tests {
		t.Run(tt.name, func(t *testing.T) {
			if got := Add(tt.value1, tt.value2); got != tt.want {
				t.Errorf("Add() = %v, want %v", got, tt.want)
			}
		})
	}
}

When I got my head around it, I decided this was how I wanted to do testing from then on. This way of testing does not limit you in terms of parameter types or assertions, and you can even define mocks as values if you need something like that.
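To sketch that last point (Greet and the injected hour function are hypothetical, not from any real codebase): a mock can be just another field in the table, a function value that each case stubs however it needs.

```go
package main

import "fmt"

// Greet is a hypothetical function under test: it depends on a clock,
// injected as a function value, to pick a greeting.
func Greet(name string, hour func() int) string {
	if hour() < 12 {
		return "Good morning, " + name
	}
	return "Good afternoon, " + name
}

func main() {
	// Table-driven cases where the mock is a value: each case supplies
	// its own stubbed clock alongside the expected output.
	tests := []struct {
		name string
		hour func() int
		want string
	}{
		{name: "before noon", hour: func() int { return 9 }, want: "Good morning, Ana"},
		{name: "after noon", hour: func() int { return 15 }, want: "Good afternoon, Ana"},
	}
	for _, tt := range tests {
		got := Greet("Ana", tt.hour)
		fmt.Printf("%s: got %q, want %q\n", tt.name, got, tt.want)
	}
}
```

In a real test file, the loop body would live inside t.Run with a t.Errorf on mismatch, exactly like the Add example above.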

Fast forward to yesterday: I went into an old .NET repository I had created for doing SOAP calls. For some reason I opened the test class, and saw them… I was furious. The tests were using AutoFixture, and they were not understandable at all.

[Theory(DisplayName = "Post should do expected HTTP call")]
[InlineDefaultData(typeof(ActionIsNullCustomization))]
[InlineDefaultData(typeof(SoapVersion12Customization))]
[InlineDefaultData(typeof(HeadersAreNullCustomization))]
[InlineDefaultData(typeof(HeadersAreEmptyCustomization))]
[InlineDefaultData(typeof(OnlyOneHeaderAndOneBodyCustomization))]
public async void PostAsync_ShouldDoExpectedHttpCall(
    Uri endpoint,
    string action,
    SoapVersion soapVersion,
    List<XElement> bodies,
    List<XElement> headers)
{ ... }

And I am omitting the 15 lines of code that each customization took. How could I, at some point in my career, have thought that this was a good way of testing? But the good part is learning from your own mistakes.

Tests are living documentation of your code, as long as you enforce them in your pipeline and don’t disable them when a quick fix ends up breaking everything. That being said, tests should be easy to understand, and in .NET you can also keep it simple, stupid.

public static IEnumerable<object[]> PostAsyncTestsData =>
    new List<object[]>
    {
        // Action is null
        new object[] {
            new Uri("https://test.com"),
            SoapVersion.Soap11,
            new[] { new XElement("body1"), new XElement("body2") },
            new[] { new XElement("header1") },
            null
        },

        // Soap 12
        new object[] {
            new Uri("https://test.com"),
            SoapVersion.Soap12,
            new[] { new XElement("body1") },
            new[] { new XElement("header1"), new XElement("header2") },
            "action"
        },

        // Headers are null
        new object[] {
            new Uri("https://test.com"),
            SoapVersion.Soap12,
            new[] { new XElement("body1") },
            null,
            "action"
        },

        // Headers are empty
        new object[] {
            new Uri("https://test.com"),
            SoapVersion.Soap12,
            new[] { new XElement("body1") },
            new XElement[] {},
            "action"
        },

        // One header and one body
        new object[] {
            new Uri("https://test.com"),
            SoapVersion.Soap11,
            new[] { new XElement("body1") },
            new[] { new XElement("header1") },
            "action"
        },
    };

[Theory]
[MemberData(nameof(PostAsyncTestsData))]
public async Task PostAsyncTests(
    Uri endpoint,
    SoapVersion version,
    IEnumerable<XElement> bodies,
    IEnumerable<XElement> headers,
    string action)
{ ... }

The snippet above relies on parameter order, which is not necessarily intuitive, but there are some things that can make this better, which I am not going to cover here.

The point I am trying to make is: don’t over-engineer things just for the sake of trying a cool new framework or tool, at least in your work environment. Your code should not be clever; it should be clear to its inhabitants, in other words, your colleagues.

You are more likely to get engagement in testing and in maintaining a healthy codebase when your tests are clear than when your colleagues don’t understand what you are trying to do.