At work I found myself having to make some manually written unit tests work with a new version of the software. Besides being written by hand, the tests also used no framework at all, nothing like cunit/cppunit/cpptest ... [I'm talking about C++ code] (yes, it is a bad idea, but wait ... there's more).
After I finished this task I saw that:
- a function's body had changed, but its functionality was the same
- the function's prototype was unchanged, which meant the tests for that function needed no modification in order to run (and indeed they didn't)
- the tests for the previous version of the function passed
- the same tests failed this time, although the function clearly did *the same thing* (had the same functionality)
The tests actually worked by *also* counting the number of calls the tested function made to several procedures it invoked (by stubbing those procedures and using some barbarian global variables).
You guessed it: the updated function made more (or fewer) calls to those stubbed procedures, scrambling the expected call counts.
People confuse "unit testing" with "code coverage". Unit testing verifies the logic of a function, *not* which procedures it calls or which parts of the code get executed; the latter is a different testing metric called "code coverage".
The fact that the tools which help you write unit tests usually also report code coverage is no excuse for this confusion; "code coverage" has nothing to do with "unit testing".
As a general rule: when you change or optimize a function's algorithm without affecting its functionality, and the prototype stays the same, the unit tests that previously passed *should* still pass with the new implementation.
Is it more important just to deliver something, rather than to deliver something good? Where is the respect for the customer?