
The Fundamentals of Testing with Persistence Layers


Key Takeaways

  • Isolation means tests are isolated from other tests, not their dependencies.
  • Tests should be runnable in any order without resetting the database.
  • Longer, more complex tests are often necessary to prove the persistence layer is working correctly.
  • It is better to add more assertions to a test than to repeat the test unnecessarily.
  • If tests are slow, look for performance problems in the application itself.

It is well known in our industry that mocking out dependencies such as databases and other persistence layers leads to ineffective tests. One simply cannot test the interactions between two components if one of those components isn’t actually part of the test. Unfortunately, our industry is also focused on function-level testing to the exclusion of all else, so few are trained in how to write any other type of test. This article seeks to correct that by reintroducing the concept of testing with databases. These techniques work equally well with other types of persistence, such as calls to a microservice.

In order to learn how to test with databases, one must first ‘unlearn’ a few things, starting with the concepts of unit tests and integration tests. To put it bluntly, the modern definitions of these terms are so far removed from their original meanings that they are no longer useful for conversation. So, for the remainder of this article, we aren’t going to use either of them.

The fundamental goal of testing is to produce information. A test should tell you something about the thing being tested that you may not have known before, and the more information you get, the better. So, we are going to ignore anyone who says, “A test should only have one assertion” and replace it with, “A test should have as many assertions as needed to prove a fact”.

The next problematic expression we need to deal with is, “All tests should be isolated”. This is often misunderstood to mean that each test should be full of mocks so the function you’re testing is segregated from its dependencies. This is nonsense, as that function won’t be segregated from its dependencies in production. Instead, you should make tests behave as closely to production as possible, so they are more likely to detect real problems.

What “All tests should be isolated” really means is that each test should be able to run independently of all other tests. In other words, you can run any test or set of tests in any order at any time.

Many go out of their way to make this more complicated than it needs to be. They’ll do things like completely recreate the database before each test run, or even before each individual test. This causes a couple of problems.

First, it makes the tests unreasonably slow. Creating a new database, and populating it with a reasonable amount of data, takes a non-trivial amount of time. This is often the source of the claim that testing with databases is too slow, which in turn leads people to not run the tests or even not write them in the first place.

Another problem is that size matters when it comes to databases. Code that works fine with only 1 record in the table may fail when there are hundreds or thousands of rows. In some cases, such as a missing WHERE clause, the test may fail with only 2 records.

For this reason, we have to write our database-backed tests to work regardless of the state of the database. At any time, you should be able to point your tests at a copy of the production data and watch them run successfully.

A good demonstration of this is the .NET ORM Cookbook. This project has over 1,600 database-backed tests, and all of them can be run in any order. To understand how it does that, we’ll build a series of simple CRUD tests to demonstrate the concept.

The next issue is determinism. It is often said that every test should be perfectly deterministic, which means every time you run it, you’ll get exactly the same result. To be deterministic, a test cannot use time-based or randomly generated values, nor can it be influenced by its environment. 

This is not achievable when testing a database. There is always the chance for unforeseen issues to get in the way such as network connectivity, drive space, pre-existing data, etc. 

But non-deterministic does not mean unreliable. You can write a test with non-deterministic attributes that will still return the same result the vast majority of the time. And any random failure will give you information about how your application performs under those circumstances. 

Note: all examples for this article can be found on GitHub.

Create Test

Our first test will simply create a record. For the sake of simplicity, we’ll choose the EmployeeClassification class, which has only four fields:

int EmployeeClassificationKey 
string? EmployeeClassificationName 
bool IsEmployee 
bool IsExempt  

Checking the database schema, we discover that EmployeeClassificationKey is an autonumber column, so we don’t have to worry about that. The EmployeeClassificationName column is constrained to be unique, which is where a lot of people get into trouble. 

[TestMethod]
public async Task Example1_Create()
{
	var repo = CreateEmployeeClassificationRepository();
	var row = new EmployeeClassification()
	{
		EmployeeClassificationName = "Test classification",
	};
	await repo.CreateAsync(row);
}

This test is not repeatable; the second time you run it, a record with that name will already exist. To address this, we add a differentiator, such as a timestamp or GUID.

[TestMethod]
public async Task Example2_Create()
{
	var repo = CreateEmployeeClassificationRepository();
	var row = new EmployeeClassification()
	{
		EmployeeClassificationName = "Test " + DateTime.Now.Ticks,
	};
	await repo.CreateAsync(row);
}

Unfortunately, this test doesn’t really test anything. We know CreateAsync didn’t throw an exception, but it could have been an empty method. To make a complete test, we need to pair every create with a read.

Create and Read Test

For our create-and-read test, we will start by making sure we got back a non-zero key from the database. Then we fetch the record by that key and verify the fields echoed back from the database match the originals.

[TestMethod]
public async Task Example3_Create_And_Read()
{
	var repo = CreateEmployeeClassificationRepository();
	var row = new EmployeeClassification()
	{
		EmployeeClassificationName = "Test " + DateTime.Now.Ticks,
	};
	var key = await repo.CreateAsync(row);
	Assert.IsTrue(key != 0);

	var echo = await repo.GetByKeyAsync(key);
	Assert.AreEqual(key, echo.EmployeeClassificationKey);
	Assert.AreEqual(row.EmployeeClassificationName, echo.EmployeeClassificationName);
	Assert.AreEqual(row.IsEmployee, echo.IsEmployee);
	Assert.AreEqual(row.IsExempt, echo.IsExempt);
}

Note: If your repository doesn’t throw an exception when a record isn’t found, then add an Assert.IsNotNull check before your property-level assertions. This will make it easier to understand test failures.

With this many assertions, there are a couple of issues. First, if an assertion fails you don’t know which field it was checking. IsEmployee and IsExempt are both Boolean, so you can’t even tell by context clues. This can be fixed by adding messages, if your unit testing framework supports them.

The second issue is harder to solve. If multiple assertions fail, only the first failure will be reported, which means information is lost. To work around this, we use a disposable AssertionScope object. Assertions made against it are combined and reported together at the end of the using block. For your convenience, an example implementation of an assertion scope is included in the GitHub repository that accompanies this article. For a more comprehensive implementation, consider using Fluent Assertions or NUnit’s Assert.Multiple.
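To give a sense of what such an object does, here is a minimal sketch of an assertion scope that matches the scope.AreEqual calls used below. It assumes MSTest’s Assert.Fail and is an illustration of the idea, not the exact code from the repository.

public sealed class AssertionScope : IDisposable
{
	// Failures are collected instead of thrown immediately, then reported
	// together when the scope is disposed at the end of the using block.
	private readonly List<string> _failures = new();
	private readonly string? _stepName;

	public AssertionScope(string? stepName = null) => _stepName = stepName;

	public void AreEqual<T>(T expected, T actual, string propertyName)
	{
		if (!EqualityComparer<T>.Default.Equals(expected, actual))
			_failures.Add($"{propertyName}: expected <{expected}> but was <{actual}>");
	}

	public void Dispose()
	{
		if (_failures.Count > 0)
			Assert.Fail($"Step '{_stepName}': " + string.Join("; ", _failures));
	}
}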

[TestMethod]
public async Task Example4_Create_And_Read()
{
	var repo = CreateEmployeeClassificationRepository();
	var row = new EmployeeClassification()
	{
		EmployeeClassificationName = "Test " + DateTime.Now.Ticks,
	};
	var key = await repo.CreateAsync(row);
	Assert.IsTrue(key != 0, "New key wasn't created or returned");

	var echo = await repo.GetByKeyAsync(key);

	row.EmployeeClassificationKey = key;
	using (var scope = new AssertionScope())
	{
		scope.AreEqual(row.EmployeeClassificationKey, echo.EmployeeClassificationKey, "EmployeeClassificationKey");
		scope.AreEqual(row.EmployeeClassificationName, echo.EmployeeClassificationName, "EmployeeClassificationName");
		scope.AreEqual(row.IsEmployee, echo.IsEmployee, "IsEmployee");
		scope.AreEqual(row.IsExempt, echo.IsExempt, "IsExempt");
	}
} 

As we write more tests, this is going to become tedious to repeat, so a helper method is in order.

row.EmployeeClassificationKey = key;
PropertiesAreEqual(row, echo); 

static void PropertiesAreEqual(EmployeeClassification expected, EmployeeClassification actual, string? stepName = null)
{
	Assert.IsNotNull(actual, $"Actual value for step {stepName} is null.");
	Assert.IsNotNull(expected, $"Expected value for step {stepName} is null.");

	using (var scope = new AssertionScope(stepName))
	{
		scope.AreEqual(expected.EmployeeClassificationKey, actual.EmployeeClassificationKey, "EmployeeClassificationKey");
		scope.AreEqual(expected.EmployeeClassificationName, actual.EmployeeClassificationName, "EmployeeClassificationName");
		scope.AreEqual(expected.IsEmployee, actual.IsEmployee, "IsEmployee");
		scope.AreEqual(expected.IsExempt, actual.IsExempt, "IsExempt");
	}
}

Instead of writing this method by hand, you can consider a library such as CompareNETObjects.
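As a rough illustration, PropertiesAreEqual could be replaced with something like the following, assuming the KellermanSoftware CompareNETObjects package; treat it as a sketch rather than the article’s reference implementation.

using KellermanSoftware.CompareNetObjects;

static void PropertiesAreEqual(EmployeeClassification expected, EmployeeClassification actual, string? stepName = null)
{
	// CompareLogic walks every public property and reports all differences at once.
	var comparer = new CompareLogic();
	var result = comparer.Compare(expected, actual);
	Assert.IsTrue(result.AreEqual, $"Step '{stepName}': {result.DifferencesString}");
}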

Create, Update, and Read Test

For our next test, we want to perform an update. This involves a create, two reads, and the update itself.

[TestMethod]
public async Task Example5_Create_And_Update()
{
	var repo = CreateEmployeeClassificationRepository();
	var version1 = new EmployeeClassification()
	{
		EmployeeClassificationName = "Test " + DateTime.Now.Ticks,
	};
	var key = await repo.CreateAsync(version1);
	Assert.IsTrue(key != 0, "New key wasn't created or returned");
	version1.EmployeeClassificationKey = key;

	var version2 = await repo.GetByKeyAsync(key);
	PropertiesAreEqual(version1, version2, "After created");

	version2.EmployeeClassificationName = "Modified " + DateTime.Now.Ticks;
	await repo.UpdateAsync(version2);

	var version3 = await repo.GetByKeyAsync(key);
	PropertiesAreEqual(version2, version3, "After update");
}

To make it clear which comparison failed, we pass a stepName argument to our PropertiesAreEqual method.

Create and Delete Test

At this point we have covered the C, R, and U steps of CRUD, so all that’s left is delete. In our delete test, we will again read from the database twice, but this time using a repository method that returns null when the record isn’t found. If your repository doesn’t offer such a method, see example seven.

[TestMethod]
public async Task Example6_Create_And_Delete()
{
	var repo = CreateEmployeeClassificationRepository();
	var version1 = new EmployeeClassification()
	{
		EmployeeClassificationName = "Test " + DateTime.Now.Ticks,
	};
	var key = await repo.CreateAsync(version1);
	Assert.IsTrue(key != 0, "New key wasn't created or returned");
	version1.EmployeeClassificationKey = key;

	var version2 = await repo.GetByKeyOrNullAsync(key);
	Assert.IsNotNull(version2, "Record wasn't created");
	PropertiesAreEqual(version1, version2, "After created");

	await repo.DeleteByKeyAsync(key);

	var version3 = await repo.GetByKeyOrNullAsync(key);
	Assert.IsNull(version3, "Record wasn't deleted");
}
[TestMethod]
public async Task Example7_Create_And_Delete()
{
	var repo = CreateEmployeeClassificationRepository();
	var version1 = new EmployeeClassification()
	{
		EmployeeClassificationName = "Test " + DateTime.Now.Ticks,
	};
	var key = await repo.CreateAsync(version1);
	Assert.IsTrue(key != 0, "New key wasn't created or returned");
	version1.EmployeeClassificationKey = key;

	var version2 = await repo.GetByKeyAsync(key);
	PropertiesAreEqual(version1, version2, "After created");

	await repo.DeleteByKeyAsync(key);

	try
	{
		await repo.GetByKeyAsync(key);
		Assert.Fail("Expected an exception. Record wasn't deleted");
	}
	catch (MissingDataException)
	{
		//Expected
	}
}

If your database uses soft deletes, then you also need to check that the row was flagged correctly rather than deleted outright. This can be accomplished by adding the following lines.

var version4 = await GetEmployeeClassificationIgnoringDeletedFlag(key);
Assert.IsNotNull(version4, "Record was hard deleted");
Assert.IsTrue(version4.IsDeleted);
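The GetEmployeeClassificationIgnoringDeletedFlag helper simply bypasses whatever soft-delete filter the repository normally applies. A minimal sketch using Dapper is shown below; the table name, column names, and ConnectionString property are assumptions about your schema and test class, not part of the original example.

using Dapper;
using Microsoft.Data.SqlClient;

static async Task<EmployeeClassification?> GetEmployeeClassificationIgnoringDeletedFlag(int key)
{
	// Deliberately omit the usual "WHERE IsDeleted = 0" filter so the row
	// is still visible after it has been soft deleted.
	const string sql = @"SELECT EmployeeClassificationKey, EmployeeClassificationName, IsEmployee, IsExempt, IsDeleted
		FROM HR.EmployeeClassification
		WHERE EmployeeClassificationKey = @Key";

	using var connection = new SqlConnection(ConnectionString);
	return await connection.QuerySingleOrDefaultAsync<EmployeeClassification>(sql, new { Key = key });
}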

Improving the Create Test

A flaw in our original create test is that it always leaves the optional columns with their default values. This can be solved by using a data-driven test. The example below is for MSTest, but all major unit testing frameworks have an equivalent.

[DataTestMethod, EmployeeClassificationSource]
public async Task Example9_Create_And_Read(bool isExempt, bool isEmployee)
{
	var repo = CreateEmployeeClassificationRepository();
	var row = new EmployeeClassification()
	{
		EmployeeClassificationName = "Test " + DateTime.Now.Ticks,
		IsExempt = isExempt,
		IsEmployee = isEmployee
	};
	var key = await repo.CreateAsync(row);
	Assert.IsTrue(key > 0);
	Debug.WriteLine("EmployeeClassificationKey: " + key);

	var echo = await repo.GetByKeyAsync(key);
	Assert.AreEqual(key, echo.EmployeeClassificationKey);
	Assert.AreEqual(row.EmployeeClassificationName, echo.EmployeeClassificationName);
	Assert.AreEqual(row.IsEmployee, echo.IsEmployee);
	Assert.AreEqual(row.IsExempt, echo.IsExempt);
}

public class EmployeeClassificationSourceAttribute : Attribute, ITestDataSource
{
	public IEnumerable<object[]> GetData(MethodInfo methodInfo)
	{
		for (var isExempt = 0; isExempt < 2; isExempt++)
			for (var isEmployee = 0; isEmployee < 2; isEmployee++)
				yield return new object[] { isExempt == 1, isEmployee == 1 };
	}

	public string GetDisplayName(MethodInfo methodInfo, object[] data)
	{
		return $"IsExempt = {data[0]}, IsEmployee = {data[1]}";
	}
}

Now that we are creating multiple rows in a single test, the ability to see which row was being created in the database becomes important. For MSTest, you can log this information using Debug.WriteLine. Check the documentation for your unit testing framework for its preferred method.

Filtered Read Tests

Until now we have only been working with single records, but some repository methods will return multiple records. This poses some additional challenges.

For this next test, we’re going to be looking for records that have IsEmployee = true and IsExempt = false. In order for this test to be meaningful, we need to first pre-populate the database with rows that match and rows that don’t match.

Then we need two types of assertions. 

  1. Assert that the matching rows we inserted were returned.
  2. Assert that no non-matching rows were returned.

Note the wording in the second rule. We’re not just checking to see if our newly created, non-matching rows were returned by mistake. Rather, we are looking for any non-matching row. This will account for data that existed before our test began.

[TestMethod]
public async Task Example10_Filtered_Read()
{
	var repo = CreateEmployeeClassificationRepository();

	var matchingSource = new List<EmployeeClassification>();
	for (var i = 0; i < 10; i++)
	{
		var row = new EmployeeClassification()
		{
			EmployeeClassificationName = "Test " + DateTime.Now.Ticks + "_A" + i,
			IsEmployee = true,
			IsExempt = false
		};
		matchingSource.Add(row);
	}

	var nonMatchingSource = new List<EmployeeClassification>();
	for (var i = 0; i < 10; i++)
	{
		var row = new EmployeeClassification()
		{
			EmployeeClassificationName = "Test " + DateTime.Now.Ticks + "_B" + i,
			IsEmployee = false,
			IsExempt = false
		};
		nonMatchingSource.Add(row);
	}
	for (var i = 0; i < 10; i++)
	{
		var row = new EmployeeClassification()
		{
			EmployeeClassificationName = "Test " + DateTime.Now.Ticks + "_C" + i,
			IsEmployee = true,
			IsExempt = true
		};
		nonMatchingSource.Add(row);
	}
	await repo.CreateBatchAsync(matchingSource);
	await repo.CreateBatchAsync(nonMatchingSource);

	var results = await repo.FindWithFilterAsync(isEmployee: true, isExempt: false);

	foreach (var expected in matchingSource)
		Assert.IsTrue(results.Any(x => x.EmployeeClassificationName == expected.EmployeeClassificationName));

	var nonMatchingRecords = results.Where(x => x.IsEmployee == false || x.IsExempt == true).ToList();
	Assert.IsTrue(nonMatchingRecords.Count == 0,
		$"Found unexpected row(s) with the following keys " +
		string.Join(", ", nonMatchingRecords.Take(10).Select(x => x.EmployeeClassificationKey)));
}

Something we didn’t check is the count. Unless you filter on a value unique to your test, asserting on a count will cause problems when other tests are running against the same database. This can happen with a shared database, or if your unit testing framework supports parallel test execution.
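If you really want a count assertion, one option is to embed a marker unique to the test run in every row name and count only those rows. This is a sketch of the idea, not part of the original example; it assumes the rows were named with the testId prefix when they were created.

	// Generate a marker unique to this test run and embed it in every row name at creation time.
	var testId = "Test " + Guid.NewGuid().ToString("N");
	// ... create the matching rows with names such as testId + "_A" + i ...

	var results = await repo.FindWithFilterAsync(isEmployee: true, isExempt: false);

	// Count only the rows belonging to this test run, so pre-existing or
	// concurrently created data cannot skew the number.
	var myRows = results.Where(x => x.EmployeeClassificationName?.StartsWith(testId) == true).ToList();
	Assert.AreEqual(10, myRows.Count, "Expected exactly the 10 matching rows created by this test");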

You will also notice that, over time, the amount of data returned by this test will continue to grow. If it reaches the point where the tests run excessively slowly, you have a few options to consider.

  1. Reset the database
  2. Improve the indexes
  3. Remove the repository method

Resetting the database is the quick fix, but I rarely recommend it. Though the database may contain a lot of sample data, chances are it is still many times smaller than the production database, which means resetting it will only mask performance problems.

Improving the indexes comes with its own set of challenges, as each new index reduces the write performance of the database. But if you can afford them, better indexes will result in happier users.

Removing the method also needs to be considered, especially if the method returns a substantial percentage of the database. A GetAll method on a table with only a dozen rows is sensible. If that table balloons to 10,000 rows, you probably can’t use it in production anyway, and the method should be removed.

A Note on Cleaning Up

Many people will recommend that you delete any records created at the end of the test. Some will even go so far as to wrap the entire test in a transaction to ensure the newly created rows are eliminated.

Generally speaking, I discourage this behavior. Non-production databases tend to have far too little data in them to begin with, and rolling back the transactions prevents the beneficial accumulation of data we want.

Furthermore, this cleanup activity can fail unexpectedly, especially if you are manually deleting records instead of rolling back a transaction. The first time this happens, you may find that one or more of your tests depend on the database being in a clean state. And that kind of brittle test is exactly what we’re trying to avoid.

Speaking of transactions, some people encourage you to hold a single transaction open for the entire duration of a test. This can be a significant anti-pattern that will prevent you from running your tests in parallel due to blocking (and possibly deadlocks) in the database. Also, rollbacks in some databases, such as SQL Server, are particularly slow.

That said, there is nothing fundamentally wrong with adding clean-up steps to a test. Just be careful to write them in a way that doesn’t add significant time to the test or cause the test to report a failure when only the cleanup failed.
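For example, a cleanup step might look something like the sketch below, where a failed delete is logged rather than allowed to fail an otherwise passing test. The try/finally structure is an illustration, not a prescription.

	var key = await repo.CreateAsync(row);
	try
	{
		// ... the actual reads and assertions for the test go here ...
	}
	finally
	{
		try
		{
			await repo.DeleteByKeyAsync(key);
		}
		catch (Exception ex)
		{
			// Log and move on; a failed cleanup should not fail the test itself.
			Debug.WriteLine($"Cleanup of key {key} failed: {ex.Message}");
		}
	}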

Conclusion

Testing with persistence layers requires a different mindset than testing simple classes and methods. The techniques are not difficult to learn but, like any skill, require practice to master. Start by testing simple CRUD scenarios before moving on to more complex ones such as parallel tests, random sampling, performance tests, and full read scans.

About the Author

Jonathan Allen got his start working on MIS projects for a health clinic in the late 90's, bringing them up from Access and Excel to an enterprise solution by degrees. After spending five years writing automated trading systems for the financial sector, he became a consultant on a variety of projects including the UI for a robotic warehouse, the middle tier for cancer research software, and the big data needs of a major real estate insurance company. In his free time he enjoys studying and writing about martial arts from the 16th century.


Community comments

  • Excellent article thank you

    by Arthur Gorr,


    We are doing tests similar to these. We do cleanup between tests. One thing I have found is the setups and assertions can be more complex when the database is already populated.

    One thing to watch out for is when using Entity Framework with tracking enabled, the read then assert can cause false negatives (test succeeds when it should fail). Because the entity is still in memory. So we are careful to use AsNoTracking on the queries.

    I love this AssertionScope. I have been using FluentAssertions for several years on multiple projects and that one is new to me! Thanks for the pro tip!

  • Re: Excellent article thank you

    by Jonathan Allen,


    I got caught by that myself recently; tests passing but a missing Commit meant the database updates were being rolled back.

    Now I use a second EF DbContext to verify my result was correctly written.
