Catching-up Test Coverage

The Agile Estimator application code has fallen behind on test coverage. In this post I’ll document a few things I discovered while adding the tests.

As a reminder, the following commands use the coverage module with Pytest to check how much of the code the unit tests exercise:

coverage run --source estimator,database,forms -m pytest
coverage report -m

Since I introduced coverage in the post Test Coverage With Pytest, I have added Flask-Login functionality and many new controller functions, so there are now a few gaps to fill in the tests.

Firstly, there are some additional relationships to set up before running the tests. In addition to a user and a group, I have added an issue and made the owner a voting member of the group by changing conftest.py.

db.create_all()
# create a user
user = User('default')
db.session.add(user)
db.session.flush()
# create a new group
test_group = Group('TestGroup', user)
db.session.add(test_group)
db.session.flush()
# add an issue to the group
issue = Issue('REF', 'Description', test_group.id)
db.session.add(issue)
# Make the user a member of the group
membership = Membership(test_group, user)
db.session.add(membership)
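
For context, this set-up code lives inside the client fixture that the tests below receive as a parameter. Here is a rough sketch of how that fixture might be structured; the create_app factory name and the import paths are assumptions about the project layout rather than code taken from the repository.

import pytest

# Assumed import locations for the application factory and the SQLAlchemy handle
from estimator import create_app
from database import db


@pytest.fixture
def client():
    app = create_app('config.TestConfig')
    with app.app_context():
        db.create_all()
        # ... user, group, issue and membership creation as shown above ...
        db.session.commit()
        # hand each test a fresh client, then drop the schema afterwards
        yield app.test_client()
        db.drop_all()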

Most of the tests for the controller require logging in before exercising an endpoint. Therefore I've added a simple helper method that takes a nickname and logs in using the post method. When I upgrade the login mechanism to take a password, this should be the only place in the test code that requires a change.

def login(client, nickname):
    return client.post('/login', data=dict(
        name=nickname
    ), follow_redirects=True)

The first test to use this helper verifies that the response returned after following the redirect contains the phrase “Sign Out” with the nickname.

def test_login_logout_accepted(client):
    response = login(client, 'bob')
    # will then show the option to log out on the page
    assert 'Sign Out bob' in response.get_data(as_text=True)

The next few lines of the test ensure that the logout confirmation screen shows up correctly.

    response = client.get('/confirm-logout')
    # verify the message on the logout screen
    assert 'Are you sure you want to log out' in response.get_data(as_text=True)

Finally, I tested that once the logout is submitted, the option to log in is shown again. Note that I did not put the logout call into a helper function, as it will not feature in many test cases; at the start of each test function, the user is logged out by default.

    response = client.post('/confirm-logout', follow_redirects=True)
    # the user is no longer logged in
    assert 'Sign Out bob' not in response.get_data(as_text=True)
    assert 'Click here to Login' in response.get_data(as_text=True)

I chose to do all of the above in one test function. Many would argue that a test should check only a single piece of functionality. I accept that, but there are times when writing fewer unit tests is the better choice, and I think this logic fits nicely together in one function.

That is mostly it. The tests get a little repetitive from here on out: log in, call an endpoint, and verify the response is as expected. For example, here is a check on creating an issue, the action with the most fields in its form object:

def test_create_issue(client):
    login(client, 'default')
    response = client.get('/group/1/issue')
    assert 'Create an issue:' in response.get_data(as_text=True)
    response = client.post('/group/1/issue', data=dict(
        story_ref='T-1', description='Story description'), follow_redirects=True)
    data = response.get_data(as_text=True)
    assert 'T-1' in data
    assert 'Story description' in data

Again I have chosen to check the GET and the POST actions in the same test function. It could just as easily have been done in two functions, albeit with some repetition of code.

I used the coverage report to target which tests to write. It also helped me to identify code that was no longer reachable. For example, since adding the @login_required decorator, I no longer needed the defensive checks within the code that verified the user was logged in.
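
As an illustration, a guarded controller now looks roughly like the sketch below. The route, view name and login endpoint are hypothetical, but the @login_required decorator is the standard Flask-Login one; with it in place, an in-function authentication check would never execute and shows up in the report as uncovered lines.

from flask import Flask, render_template
from flask_login import LoginManager, login_required

app = Flask(__name__)              # stand-in for the real application object
login_manager = LoginManager(app)
login_manager.login_view = 'login' # endpoint name of the real login view (assumed)


# Hypothetical view, for illustration only. Flask-Login redirects anonymous
# users before the body runs, so an old "is the user logged in?" branch
# inside the function would be dead code.
@app.route('/group/<int:group_id>/issue/<int:issue_id>/review')
@login_required
def review_issue(group_id, issue_id):
    return render_template('review_issue.html', group_id=group_id,
                           issue_id=issue_id)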

Going through the process also helped me to spot an issue with the make_estimate function, which checked that the user was logged in but not whether they were a member of the estimation group for the issue being estimated.
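
The fix itself is not shown here, but the kind of guard involved looks roughly like the following. The Membership query fields and the 403 response are my assumptions about how such a check could be written, based on the models used in the set-up code above.

from flask import abort
from flask_login import current_user

from database.models import Membership


def require_voting_member(group_id):
    # Sketch of a membership guard for make_estimate: reject the estimate
    # unless the logged-in user has a Membership row for the issue's group.
    # The filter field names are assumptions about the model definitions.
    membership = Membership.query.filter_by(
        group_id=group_id, user_id=current_user.id).first()
    if membership is None:
        abort(403)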

Note that I did need to make a small change to the application's test configuration to turn off CSRF protection. This is the mechanism that places a secure token in the form and validates that it is present when the form is submitted; the token is cryptographically signed with the application's SECRET_KEY.

Turning it off is straightforward and only affects the test environment. All you need to do is override the WTF_CSRF_ENABLED property to False in the configuration file config.py.

class TestConfig(Config):
	TESTING = True
	WTF_CSRF_ENABLED = False
	SQLALCHEMY_DATABASE_URI = 'sqlite:///:memory:'
	SECRET_KEY = 'Temporary not-very-secret key'

Altogether I added 23 new test functions, bringing the test line coverage to 93%, which is within my target range. Here is the report from the coverage tool:

(env-estimator-3.7) estimator$ coverage report
Name                           Stmts   Miss Branch BrPart  Cover   Missing
--------------------------------------------------------------------------
database/__init__.py               0      0      0      0   100%
database/models.py                49      2      0      0    96%   53, 67
estimator/__init__.py             21      0      0      0   100%
estimator/controllers.py         245     18     64      7    91%   98-99, 104, 162-163, 207, 289, 294-305, 24->30, 51->53, 97->98, 103->104, 161->162, 206->207, 284->289
estimator/rest_controller.py      24      0      4      0   100%
forms/__init__.py                  0      0      0      0   100%
forms/forms.py                    19      0      0      0   100%
--------------------------------------------------------------------------
TOTAL                            358     20     68      7    93%

One thing that test coverage will not really flag is whether the decorators are being exercised. For example, I have not added a test to ensure that a user who has not logged in is redirected to the login page if they try to access the review issue URL, and this is not flagged in the above results.

It could be argued that once I test one such function to ensure that the decorator works, I don't need to repeat the test for each occurrence. But there is the possibility that some later change removes or subtly alters the decorator and goes unnoticed. Just something to be aware of.
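
If I do add such a test, it would look something like the sketch below; the exact URL and the wording asserted on the login page are assumptions.

def test_review_issue_requires_login(client):
    # Not logged in, so the @login_required decorator should redirect
    # to the login page. The URL and expected text are assumptions.
    response = client.get('/group/1/issue/1/review', follow_redirects=True)
    assert 'Click here to Login' in response.get_data(as_text=True)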

There are still some gaps in coverage, but the code is in a good state to make changes going forward. Besides helping to identify bugs that might not be spotted by doing exploratory testing, having good code coverage will stop future updates from breaking the existing behaviour without the developer finding out.

There is the added advantage that, as I begin to refactor the code base, for example to separate controller logic from the application behaviour, I can run a quick test to ensure that the changes have not broken anything.

A good base of test coverage also makes test-driven development more likely, as the developer has a solid foundation on which to build.