Thursday, May 28, 2015

The perils of over testing

Test Driven Development (TDD) is a great thing. It lets us write code "safe" in the knowledge that if we accidentally break something in another part of the system, we will be alerted to it and can fix it accordingly.

It's such a good thing that in large teams people will often insist on 100% code coverage (i.e. every piece of production code has a corresponding test). Some people also feel compelled to test every single permutation of how someone might use their program or website.

TDD does have costs, however. Some are evident; others less so.

Cost 1 - Tests take time to run
True, automated testing is faster than manual testing, but it still takes time to run. Every time you make a change to the code, you need to wait for the suite to finish before you can feel that your code is "safe" for deployment.

Most of this time is a necessary evil.

A lot of it is unnecessary...

Should you write a feature that checks that a form is disabled when a checkbox is unchecked? Well, it depends...

If this is essential to the end user completing a process, then possibly yes.

If this is just to check a possible error condition that has little or no bearing on the final result, then possibly no.

You could write it while you are developing the feature and then remove it when you are done.

Alternatively, you can extract the logic into an object-oriented JavaScript component (FormDisabler) and unit test it with Jasmine.
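To make that concrete, here is a minimal sketch of such a component. The name FormDisabler comes from above, but the API (setChecked, isFormDisabled) is invented for illustration. The point is that the logic lives in a plain object with no DOM dependency, so a unit test can exercise it in milliseconds without booting the whole stack.

```javascript
// FormDisabler: holds the enable/disable rule as pure logic, no DOM.
// The method names here are assumptions, sketched for this example.
class FormDisabler {
  constructor() {
    this.checked = false; // assume the checkbox starts unchecked
  }

  // Call this from the checkbox's change handler.
  setChecked(checked) {
    this.checked = checked;
  }

  // The form should be disabled whenever the checkbox is unchecked.
  isFormDisabled() {
    return !this.checked;
  }
}

// Plain usage, no framework required:
const disabler = new FormDisabler();
console.log(disabler.isFormDisabled()); // true: unchecked, so form disabled
disabler.setChecked(true);
console.log(disabler.isFormDisabled()); // false: checked, so form enabled
```

A Jasmine spec for this is equally small (e.g. `expect(new FormDisabler().isFormDisabled()).toBe(true);` inside an `it` block), and it runs without a browser or a server.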

Bear in mind that testing JavaScript in a feature will bring up the whole stack, which takes time. Time is valuable.

One analogy is that testing is like packing for a trip. You might think that you will need to bring everything you have to cover every possible situation, but really you probably only need an overnight bag with a couple of changes of clothes. The cost is that the more you bring, the more you have to carry.

Cost 2 - Brittleness
Over testing can lead to a very brittle code base where one small change can lead to many hours of fixing broken tests. In Agile development you will constantly make changes to the code base, and often what is wanted in the end is not what was expressed in the beginning.

If you test every single possible scenario, then a small business change can lead to a domino effect of broken tests.

The key is to identify what aspects/methods of a feature are most important and to test those.

Classes should be made bulletproof, but not features.

Martin Fowler's Test Pyramid is a good guide.

http://martinfowler.com/bliki/TestPyramid.html

Summary
At the end of the day, you have to use your brain and judgement to determine which tests are important and which are not. It is a subjective thing and can vary from team to team (and application to application).

To return to the packing analogy: you should only pack what you need for the trip. If you are writing something mission critical that can never go down, where the slightest mistake can lead to potential loss of human life, then by all means go for 100% test coverage. On the other hand, if a mistake in part of your app means a comment or review might not get posted, then test accordingly.
