Minimum Viable Testing
This is a concept that may look ridiculous at first. It defies some well-accepted rules of the testing world. Be that as it may, let me introduce you to the concept of Minimum Viable Testing: a lightweight testing process that is faster and requires fewer testers.
In his legendary book Testing Computer Software, Cem Kaner writes about testers: "Finding problems is the core of your work. You should want to find as many as possible…" It is almost impossible to disagree with this. In the tester's eyes, it is fundamental. But I would ask you to look from the company's point of view. The company commissions the testing work, so the company gets to define the purpose of testing. Before we look at testing from the company's point of view, let me state two universal truths of testing.
— Production bugs cannot be avoided.
— Testing, like every other endeavour, has a commercial aspect.
The first universal truth needs no introduction. The second implies that testing must be measured in terms of its efficiency: for every extra hour of testing effort, how many effective bugs were found? (We will see the meaning of effective bugs later in this write-up.) To measure the efficiency of the testing process, we need the ratio of effective bugs found to time spent.
When a testing process starts, this ratio is always high: testers find effective bugs at a high rate. As time passes, the ratio goes down. The number of daily bugs decreases, and the bugs that do turn up become more exotic and rare. In other words, the bugs tend to be more ineffective. Before we discuss this any further, let me explain the idea of an effective bug. Say a bug occurs only when you do not follow the normal click order of select1 and select2: if you follow the click order select5, deselect5, select1, select2, then the submit button remains disabled, whereas with the normal click order it becomes enabled. This is a valid bug, but it is like a fortune cookie; you need some luck to come across it. It is an example of an ineffective bug. The effectiveness of a bug is proportional to the likelihood of its occurrence.
With the effectiveness of testing as the target, things start changing. We no longer need to focus on raw bug count; instead, we stop testing once we start seeing ineffective bugs. In a sense, ineffective bugs are signs of lower test efficiency, and they usually appear after an initial phase of testing. To be effective, Minimum Viable Testing must be a time-bound activity. An alternative is to stop testing when the daily ineffective-bug count overtakes the effective-bug count.
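The alternative stop rule above can be made concrete with a small sketch. The daily counts below are hypothetical; a real team would substitute its own bug-triage data.

```python
# Stop rule: end testing on the first day when ineffective bugs
# overtake effective ones. Counts here are invented for illustration.

def stop_day(daily_counts):
    """daily_counts: list of (effective, ineffective) tuples, one per day.
    Returns the 1-based day on which to stop, or None if the rule never fires."""
    for day, (effective, ineffective) in enumerate(daily_counts, start=1):
        if ineffective > effective:
            return day
    return None

counts = [(9, 1), (6, 2), (4, 3), (2, 5), (1, 6)]
print(stop_day(counts))  # 4
```

On day 4 the ineffective bugs (5) overtake the effective ones (2), so testing would stop there.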
To ensure that the test process remains effective, Minimum Viable Testing classifies tests into two types. The first is checklist tests: all the tests that verify the functionality the customer expects. Such test cases are rather easy to write. The second is sledgehammer tests, the tricky ones. Their purpose is to probe unimagined or uncommon conditions. To be efficient, the tester must try conditions that are uncommon yet possible in production. That is what makes writing sledgehammer scenarios an art.
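The two test types can be contrasted against the click-order bug from earlier. The `Form` class below is hypothetical, invented only to make the two kinds of test concrete.

```python
# A toy form model: submit is enabled once select1 and select2 are chosen.
# The Form class is a made-up stand-in for the system under test.

class Form:
    REQUIRED = {"select1", "select2"}

    def __init__(self):
        self.selected = set()

    def select(self, name):
        self.selected.add(name)

    def deselect(self, name):
        self.selected.discard(name)

    def submit_enabled(self):
        return self.REQUIRED <= self.selected

def test_checklist_normal_order():
    # Checklist test: the behaviour the customer expects.
    form = Form()
    form.select("select1")
    form.select("select2")
    assert form.submit_enabled()

def test_sledgehammer_odd_click_order():
    # Sledgehammer test: an unusual but possible click sequence.
    form = Form()
    form.select("select5")
    form.deselect("select5")
    form.select("select1")
    form.select("select2")
    assert form.submit_enabled()

test_checklist_normal_order()
test_sledgehammer_odd_click_order()
```

The checklist test follows the happy path; the sledgehammer test replays the fortune-cookie click order from the earlier example.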
For effective Minimum Viable Testing, here are a few things test teams can do.
Switch completely to automated testing - Automate all except the final pre-release test.
Segregate test case writing - Write checklist and sledgehammer tests in separate sessions.
Split test-case writing and test-script writing roles - Assign scenario writing to the test architect and test-script writing to the test engineer.
Do the pre-release test manually - Do ad-hoc, fresh (not pre-documented) testing every time, then document it and share the document with the team.
Testing is an art. The core job of a tester is to imagine and write test scenarios. Execution is secondary. That is what drives the role split among testers. That does not mean a test engineer should not write test scenarios. Perfecting the art of creating scenarios is the main goal of any test professional. So, as a test engineer gains experience, he moves more and more into scenario writing.
One way to ensure that is to set your timeline beforehand. Write the checklist scenarios first and estimate the testing effort, then add extra time for sledgehammer tests; extra time equal to the checklist estimate is a good starting point. With both types of tests budgeted, you can set your testing timeline. Once set, do not breach it. If you have to drop some sledgehammer tests to meet the timeline, so be it.
The test team may adopt any other method to ensure that testing is a time-bound effort.
Effective sledgehammer test scenarios take into account both user behaviour and production-system behaviour. For example, a slow or unresponsive server is a likely situation; the loss of a link, either within the production environment or with the external world, is another. A good tester needs a sense of what the developer is likely to miss, and an understanding of how data flows through the system. In short, a sense of testing, knowledge of user and production-system behaviour, and knowledge of data flow within the system are a few things that ensure effective test cases.
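The slow-server scenario above can be sketched as a sledgehammer test. A real test would exercise the actual service; here a hypothetical `fetch_with_timeout` function with an injectable delay stands in for the client call.

```python
# Sledgehammer scenario: the server stalls, and the client must fail
# fast rather than hang. Both the function and its delay parameter are
# invented stand-ins for a real client and server.

import time

class SlowServerError(Exception):
    pass

def fetch_with_timeout(server_delay: float, timeout: float) -> str:
    """Simulated client call that gives up if the server is too slow."""
    if server_delay > timeout:
        raise SlowServerError("server did not respond in time")
    time.sleep(server_delay)
    return "ok"

def test_slow_server_is_handled():
    # The timeout path, not the happy path, is what this test exercises.
    try:
        fetch_with_timeout(server_delay=5.0, timeout=0.1)
    except SlowServerError:
        pass  # expected: the client gave up instead of hanging
    else:
        raise AssertionError("expected a timeout")

test_slow_server_is_handled()
```

The same shape works for the lost-link scenario: inject the failure, then assert that the system degrades the way the design promises.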