
Tuesday, April 5, 2016

Meaningful Software Tests

Software testing is vital to any project, though not all software tests are meaningful. A few extra automated tests getting written or run is not a big deal. Tests that cannot catch what they should are debilitating: errors get found outside the dev team, and the tests themselves become a maintenance nightmare. So what tests should you write?

Testing Types


There are roughly three types of software tests:

  • Unit Test - the lowest level of test, written against a single "unit" of code (a class, a group of methods, an algorithm, etc.). One unit is tested in isolation, and any other units are mocked to keep functionality going and maintain control of the test (a small sketch follows this list).
  • Integration Test - multiple units tested together, but still within the same software system. Can have dependencies outside the software, like a database or even, gasp, an internet connection. Much less is mocked at this level.
  • End-To-End Test - the whole deal. Testing from one end of the system (user interface) to the other (database, algorithms) and back. Runs real code in a real environment with (custom) test runners. Nothing is mocked - it doesn't need to be!
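
To make the unit-test end of that spectrum concrete, here is a minimal Python sketch. The OrderService and PaymentGateway names are hypothetical, invented just for this illustration; the point is that the outside dependency is mocked so only the one unit's logic gets exercised.

from unittest import TestCase, mock

class OrderService:
    """Hypothetical unit under test: applies one business rule, then delegates."""
    def __init__(self, gateway):
        self.gateway = gateway  # the real thing would be a payment gateway client

    def place_order(self, amount):
        # Business rule owned by this unit: reject non-positive amounts.
        if amount <= 0:
            return False
        return self.gateway.charge(amount)

class OrderServiceUnitTest(TestCase):
    def test_rejects_zero_amount(self):
        # Unit test: the gateway is a mock, so only OrderService logic runs.
        gateway = mock.Mock()
        service = OrderService(gateway)
        self.assertFalse(service.place_order(0))
        gateway.charge.assert_not_called()

    def test_charges_positive_amount(self):
        # The mock also lets us control what the "outside world" returns.
        gateway = mock.Mock()
        gateway.charge.return_value = True
        self.assertTrue(OrderService(gateway).place_order(25))
        gateway.charge.assert_called_once_with(25)

An integration test of the same flow would swap the mock for a real (test) gateway; an end-to-end test would drive it through the UI all the way to the back end.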

People might use different terms for these types of tests, and different approaches might blend somewhere in the middle of these. Want a long list to amaze your friends, or give you some good ideas? Here.

But honestly, the type of testing you do isn't the most important thing. That you have testing at all is what matters; varying opinions and success stories will advocate for one type or another.

Is There Meaning In These Tests?


The type of test is not the issue; the real question is "are these tests meaningful?" Each test that is written should have some meaning behind it: test a particular function or flow so you know it works. Test a particular business need or requirement so you know it's covered. Test a particular corner case so you know that bug will not come up again.
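
As a small sketch of that last point (the helper and the defect are hypothetical, made up for illustration), a corner-case regression test can carry its meaning right in its name and comments:

def parse_quantity(text):
    # Hypothetical helper: blank input once raised ValueError and crashed
    # order entry; it now falls back to zero.
    text = text.strip()
    return int(text) if text else 0

def test_parse_quantity_handles_blank_input_regression():
    # Regression test for that (hypothetical) past defect: a blank quantity
    # field crashed the order form. Anyone reading this knows what it proves.
    assert parse_quantity("   ") == 0
    assert parse_quantity("3") == 3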

You need to be confident in your testing. These are little bits of automated software that will prevent errors down the line (where they are much more expensive to fix). When you run your tests, you need to know the functionality they tested and be confident that functionality was performed correctly.

Two quick examples from the past.

Back in the defense industry, documentation was king. Requirement number to test number to test results. Who wrote 'em, who performed 'em, who witnessed 'em, and whether each test passed or failed. If you ever wondered how complex programs with hundreds or tens of thousands of requirements can get approved, it's because of the documentation backing it up. Each test was performed for a specific purpose, and that purpose was written down. Anyone looking at the resulting documents would know what was being proved when some test step was performed.

A second example comes from a past contract. They had pure unit tests that were never to turn into integration tests. One test for one unit. Code coverage was measured and the number of tests was counted. If you asked why we were writing all those tests, it was to get code coverage up and have all the code tested! The bad part was that every piece of code was tested in a vacuum (no database, no outside services, minimal component communication), so when the software verification group looked at something (or worse, when it was deployed!), lots of problems were found, because it was the first time those pieces had mingled outside of developer testing. There should not have been confidence that those unit tests were a predictor of safe software.
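
A minimal sketch of how that happens, assuming a hypothetical UserRepository: the fully mocked unit test below passes even though the code calls a method the real class does not have, a mismatch only an integration test (or a deployment) would expose.

from unittest import mock

class UserRepository:
    """Hypothetical real dependency - would hit the database."""
    def find_by_email(self, email):
        raise NotImplementedError("real query omitted in this sketch")

def greet_user(repo, email):
    # Bug: calls find_user(), but the real repository only has find_by_email().
    user = repo.find_user(email)
    return "Hello, " + user

def test_greet_user_fully_mocked():
    # Passes in a vacuum: Mock cheerfully answers find_user(), so the
    # mismatch with the real repository is never exercised here.
    repo = mock.Mock()
    repo.find_user.return_value = "Ada"
    assert greet_user(repo, "ada@example.com") == "Hello, Ada"

Something as simple as mock.Mock(spec=UserRepository), or one thin integration test, would have surfaced the mismatch during development instead of during verification.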


It Depends


Like many software answers, "It Depends." The type of testing you use and how much you test will depend on your goals and business needs. Make sure your code has tests, that those tests have meaning, and that your team understands why the tests are there.

Don't waste time writing tests just to fill metrics. If 100% code coverage is important to, or required in, the project, write those tests! But if you make up a number for coverage and then force yourself to reach it (or even better, just keep adjusting your target number!), you are wasting time. Don't test code just to check off some software engineering list - the number of tests you run is as meaningful as SLOC metrics. Tests need to be updated and maintained, forever, and are written with each new feature. If time is being wasted, find out sooner rather than later!

Meaningful tests will help the team too. Later workload will be reduced by catching real bugs early, avoiding those crazy weekend or late night sessions to "GET IT FIXED!!!1", not to mention freeing up your Verification or QA groups a bit. The dev team will have confidence and faster feedback that what they write works, or doesn't, boosting productivity. Continuous Integration is not possible without tests you can trust. Writing tests that have meaning makes test writing valuable, leading devs to avoid half-baked work (and insidious false negatives).

One final tip. Tests will have more meaning the closer you can get to real code on real hardware. This is also a more expensive environment to set up, and maybe cannot be automated, so it is another decision to be made. Where you are able, get as close as you can to real code on a real system.

Happy Testing.


