Monday, August 25, 2008

Practical Estimation Lesson

I'm writing this post in response to a comment from Raj on one of my earlier posts. He asked me about the estimation techniques I use for testing efforts.
I've never used any standard estimation process such as function point analysis or COCOMO. Testing estimates are always tightly tied to the development estimates, so an independent estimate that ignores the development schedule always fails. At a high level, I ask team members and test leads to use a work breakdown structure (WBS) to estimate how long each task would take on both an optimistic and a pessimistic count. I've found the WBS method effective when testers have performed similar tasks before. These estimates are subjective, and that is fine; behavioural and emotional factors also come into play in WBS estimation. It always helps to meet as a group and ask 'why'/'what-if'/'how' questions to refine the estimates further. Probing questions can be: 'Why does testing this feature take longer on Unix than on Windows?', 'Did your estimate account for the fact that feature XXX may be unstable because it's based on libraries our developers inherited from a team that no longer exists?', 'How did you come up with this estimate?'. Many of these questions are intrusive in nature, and some testers may be uncomfortable with this style. The team lead or manager should therefore ask them within a team meeting context, and state very clearly that it is the process that is being questioned, not the person.
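One common way to combine an optimistic and a pessimistic count into a single number is a three-point (PERT-style) weighted average. This is a hypothetical sketch, not the exact method described above; the task names and day counts are illustrative only.

```python
# Hypothetical sketch: combining optimistic/most-likely/pessimistic
# WBS estimates with a PERT-style weighted average. All numbers and
# task names below are made up for illustration.

def pert_estimate(optimistic, likely, pessimistic):
    """Weighted average: the 'most likely' value counts four times."""
    return (optimistic + 4 * likely + pessimistic) / 6

tasks = {
    "test fund transfer on Unix": (2, 3, 6),   # days
    "verify defect fixes":        (1, 2, 4),
    "install build on UNIX box":  (0.5, 1, 2),
}

total = sum(pert_estimate(o, m, p) for o, m, p in tasks.values())
print(f"Expected effort: {total:.1f} days")
```

The pessimistic value pulls the estimate above the most-likely guess, which matches the intuition that testing tasks overrun more often than they finish early.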
What I typically do is prepare a 'long-term' estimate for management when the project is initiated. This is generally a one-time estimate, and as a test manager you need to convince the stakeholders that the estimate comes with risk. Identifying risks is important at this stage; the most significant ones are usually 'the feature is not delivered for testing on schedule' and 'there is no clarity on what the requirement is'.
What I also do is keep internal, 'near-term' estimates of individual tasks. These tasks are more granular and give a foresight of the work for typically the next three weeks. Tasks performed by individual team members should be broken down to a level not exceeding two to three days. These estimates are more realistic and capture dependencies. For example, they can be very specific: 'verify defects fixed in the last two weeks', 'test the fund transfer module for inoperative accounts', 'get the latest build installed on the UNIX box', 'review the latest modifications to the user doc'. For every task a team member takes up, the standard question to ask is 'how long will it take?'. In my experience, on a mid-sized project you may ask this question three or four times on a typical day. Soon everyone in the team gets so used to the question that they have the answer ready. Tracking the near-term estimates in a spreadsheet or a similar project tracking tool helps.
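A spreadsheet works fine for this; the same bookkeeping can also be sketched in a few lines of code. This is a hypothetical tracker, not any real tool: each task carries its answer to 'how long will it take?', and tasks estimated above the two-to-three-day granularity rule are flagged for further breakdown.

```python
# Hypothetical near-term task tracker. Task names and estimates are
# illustrative; the 3-day cutoff mirrors the granularity rule above.

tasks = [
    ("verify defects fixed in last 2 weeks", 2.0),
    ("test fund transfer for inoperative accounts", 3.0),
    ("get latest build installed on the UNIX box", 0.5),
    ("full regression of reporting module", 7.0),  # too coarse
]

too_coarse = [name for name, days in tasks if days > 3]
total_days = sum(days for _, days in tasks)

print(f"Planned effort: {total_days} days")
for name in too_coarse:
    print(f"break down further: {name}")
```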
In my experience, and in the context we work in, stakeholders are more interested in the progress of the work than in the estimates themselves. Communicating early and often with stakeholders about progress, issues, and risks matters more than the accuracy of the estimates.
With enough experience on the project, you can derive your next long-term estimate from your previous near-term estimates. I've also heard of "Wideband Delphi" estimation, which seems to be a promising technique for estimating testing efforts.
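The core of Wideband Delphi is that estimators submit estimates anonymously, discuss their assumptions, and repeat until the estimates converge. A much-simplified sketch of that convergence check, with made-up numbers and an illustrative convergence threshold:

```python
# Simplified sketch of the Wideband Delphi convergence idea: collect
# anonymous estimates, share the spread, and repeat until they agree.
# The 25% threshold and the estimates are illustrative only.

def round_summary(estimates):
    lo, hi = min(estimates), max(estimates)
    mean = sum(estimates) / len(estimates)
    return lo, mean, hi

round1 = [10, 18, 25, 12]   # days, after individual WBS work
round2 = [14, 16, 17, 15]   # after discussing assumptions

for label, ests in (("round 1", round1), ("round 2", round2)):
    lo, mean, hi = round_summary(ests)
    converged = (hi - lo) <= 0.25 * mean
    print(f"{label}: {lo}-{hi} days, mean {mean:.1f}, converged={converged}")
```

The real technique relies on the moderated discussion between rounds, not on the arithmetic; the code only shows the bookkeeping.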

Saturday, August 9, 2008

Test Cases as security blanket

There's a raging controversy going on, at least in the Indian software testing community, about "test-case-centric" vs "non-test-case-centric" testing. Pradeep has posted here at length on the subject, and the controversy has left its imprints all over the web, from forum posts to blogs to Yahoo groups.

My take on this is to consider a test case as a security blanket. Wikipedia defines a security blanket as "any familiar object whose presence provides comfort or security to its owner, such as the literal blankets often favoured by small children". It gives a sense of comfort to the person using it. By "person" I mean not just the testers running the test script, but also the developers building the product, the product managers, and the end customers. Everyone derives comfort from the fact that "test cases have been executed and passed".

Are security blankets necessarily bad? Certainly not! As a test manager, it gives me comfort (and thereby a sense of security) that the software I'm going to sign off will not fail when used within the confines of the test scenarios my team has found to be passing. Does that mean my test cases are foolproof, that the security blanket is without holes? Certainly not, and it is my responsibility as a test manager to make the stakeholders understand that test cases are never foolproof.

Finding bugs in software is the most important responsibility of a test team, and a set of predefined execution paths (aka test cases) will certainly not find many defects. But test cases have their rightful and important place as a security blanket within the confines of a software development cycle. During the final phases of the release cycle, several rounds of test case execution provide the necessary confidence to the stakeholders that the product is ready for release.

Then again, I've never seen anyone question "what is a test case?" throughout this controversy. When I mention a test case in this post, I should make clear what "I" mean by a "test case". That will be a topic for another blogpost.

So, equate test cases to a comfort object: certainly not harmful in any way, and they certainly provide psychological strength. To conclude, software development is all about human behaviour and interactions!

Monday, July 28, 2008

Lessons From an Expert.

For the last two days my team attended a class on Skilled Exploratory Testing from the expert himself, Pradeep Soundararajan. More than the classroom sessions, it was the interactions we had with Pradeep that made the difference. They reinforced many of my personal beliefs about testing and also busted some of the myths.

I'd always believed that testing a software product is an intellectual exercise: it's a skill that needs to be practiced, and practitioners get better at it through "riyaz", aka practice. For those who'd like to know about riyaz, here is the link to what it means to a musician; we can relate that to what it means to a software tester. Pradeep demonstrated what riyaz can achieve with one hour of exploratory testing of a product he'd never seen before!

I'd believed my team always did exploratory testing, and this was reinforced to some degree, but we also found flaws in the process we'd been following. Most of the testers did not follow a "written script" of any kind while testing; even those who had a script may not have followed it in letter or spirit. Most of our testers did not really document their work while they were 'exploring' the product. The mission statement for an exploratory testing session was often in the tester's head, and it kept changing as they tested. They were focused on the activity, but the testing "sessions" were often long, extending over days or even weeks, and we didn't hold debriefing meetings. Defect finding, investigation, debugging, environment creation, data creation, even status meetings were all part of these "sessions" (or whatever name we'd have given to this unit of time). On the other hand, we were not bound by detailed written scripts that said "click this", "click that", "if this then pass, if not then fail". At least we did not take away the testers' creativity, but then again we did not have a structure whereby we could focus that creativity. Importantly, we were finding defects in the product, critical ones, and all of us were getting better and better at finding them, and that's what's expected from a testing team.

There are tools we could use to amplify our effectiveness: pair testing, heuristics, oracles, and many more. And again, automated test execution has its rightful place, as does "intelligent human testing"; one is never a substitute for the other.

Saturday, July 12, 2008

The Final Days And The Finish Line!

The past several weeks were rather hectic. We were reaching the end of the release of our product. From the inception of the project to the final testing signoff, it's been two long years.
The final build arrived last week, and our testing team performed the routine sanity checks on all four supported operating systems.
The signoff itself was rather a non-event: a simple click on a "signed-off" checkbox.

Sunday, March 23, 2008

Regression Testing:What is it anyway?

My project is in the 'regression testing' phase, so I felt it was appropriate to write a blogpost on regression testing. A quick googling turned up some definitions here and here and here, and many more. These definitions do make sense, but the crux of the matter is: where do these explanations fit into the context of my project?
To put it in simple terms, regression testing is one step in the following sequence of events in the life of a software tester:
Testers find bugs, developers fix them, testers verify the defect fixes, testers test again, find new bugs, developers fix them, and so on and so on...
Regression testing is the "testers test again, find new bugs" step.
Testers on my project have been working for the past several months, testing various aspects of the application, finding defects, and getting them fixed. Eventually a time comes when the defect-finding rate comes down and the application is "relatively stable".

The questions from a person not involved in the project might go as below.

Q) So is the application ready for release?
A) For sure not!

Q) What else remains to be done? Haven't you completed testing all the application features you'd planned to test?
A) We've to test for regressions.

Q) Isn't that what you've been doing all these days?
A) Nope! Until now we were doing the first round of testing. We were hunting for defects in the application, and we've ensured that the defect fixes provided to us fix the specific problems we raised. That's all we've done.

Q) What more do you have to do?
A) The defect fixes made in the application may have introduced more defects, and we've got to find them. We call this regression testing.

Q) So what is it that you've not done in your first round of testing that you're going to do in regression testing?
A) One thing for sure: test the application again. Rerun the test cases and scenarios we've already executed.

Q) Why didn't you do the regression testing earlier?
A) Because the application had defects, defects were being fixed, and lots of code churn was happening. That's not a good time to start regression testing. Regression testing should start only when the defects are "under control": when the changes to the code are so much under control that the sanity test suites can be executed on the entire application to ensure no additional problems have been introduced by the defect fixes.

Q) What's this "under control" thing you keep referring to?
A) When the product was delivered to testing at the beginning of the testing phase, we were finding on average 15 defects a day, of which, say, 10 on average were getting fixed. With so many defects being fixed, and with new features even being sneaked into the code, nobody kept watch on how these code changes were going to affect the application. Testers were busy finding new defects and testing newer and newer areas of the application for the first time; they never had the time to check whether a particular defect fix had broken something that was already working. Developers never had the time to see how their fix in one area of the application would integrate with the rest of it. Now things are different. Testers have completed testing the application as a whole, and the average number of new defects found per day is very low, maybe one or two. Developers have fewer defects to deal with, so they can do a detailed impact analysis of how each fix will affect the rest of the application and pass that information on to the testers, who in turn know where to focus their efforts.
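The "under control" judgement can be made visible by simply charting the daily defect inflow. A hypothetical sketch (the defect counts and the 2-per-day threshold are illustrative, not from any real project):

```python
# Hypothetical sketch: deciding whether defect inflow is "under
# control" by looking at a rolling average of new defects per day.
# The data and the 2/day threshold below are illustrative only.

new_defects_per_day = [15, 14, 12, 9, 7, 5, 4, 3, 2, 1, 2, 1]

def rolling_avg(series, window=3):
    return [sum(series[i - window + 1:i + 1]) / window
            for i in range(window - 1, len(series))]

avgs = rolling_avg(new_defects_per_day)
under_control = avgs[-1] <= 2.0
print(f"latest 3-day average: {avgs[-1]:.2f}, under control: {under_control}")
```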

Q) And when are you going to release this?

A) As soon as the first regression pass is complete, we'll go in for a relatively faster second pass. That should give us a gut feel for the quality of the product. Once the second or third regression pass detects no major new defects, we are ready for release.

Friday, January 11, 2008

Automating Tests: A Suggested Classification

My last post described our team's experiences implementing automated build verification tests. There's been a lot of talk and blogging about the pros and cons of automated execution of test scripts during the past couple of weeks. This post on James Bach's blog and this post of Steve Rowe's are indicative of where the discussions are heading.

I've found automated test execution to be a significant productivity booster for testers. It relieves them from the drudgery of re-running tests that have already been executed and found to pass.

The current project I'm working on is the first version of a business intelligence suite. Being a first version, there's quite a lot of GUI redesign, and this is a challenge for developing automated tests. However, we've been successful in getting the initial level of tests automated and executing them on the daily image verifications.

We've been fortunate in our project to learn from the mistakes made by other project groups as far as the automation approach is concerned.

We've classified the tests to be automated under the following levels: Level-0, build verification tests; Level-1, happy path tests; Level-2, error conditions; Level-3, detailed user scenarios; Level-4, advanced automation strategies.

Level-0 tests check the testability of the product. They are preferably executed on the daily build before it is released for testing, and they are often the first tests to get automated. These tests are also known as smoke tests.

Level-1 tests cover the positive paths of the application, mostly focusing on testing the application features from the user interface. For example, in a banking application: 'check the correctness of a balance', 'check a transfer from one account to another', 'check the clearing of a check'. These tests check the correctness of your GUI and its integration with the overall system.
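A Level-1 check for the 'transfer from one account to another' example might look like the following. The `Account` class here is a stand-in for whatever the real application exposes; it is purely illustrative, and only the happy path (sufficient funds) is exercised, as the level demands.

```python
# Hypothetical Level-1 (happy path) check for the 'transfer between
# accounts' example. Account is an illustrative stand-in for the
# application under test.

class Account:
    def __init__(self, balance):
        self.balance = balance

    def transfer_to(self, other, amount):
        # Happy path only: assumes sufficient funds; insufficient-funds
        # handling would belong to a Level-2 (error condition) test.
        self.balance -= amount
        other.balance += amount

def test_transfer_happy_path():
    src, dst = Account(100), Account(50)
    src.transfer_to(dst, 30)
    assert src.balance == 70
    assert dst.balance == 80

test_transfer_happy_path()
print("level-1 transfer test passed")
```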

Level-2 tests are automated tests for error conditions, or conditions that do not occur in the normal working of the application. For the banking application, this may be something like 'test for check bouncing', 'test for incorrect signature matches', etc.

Level-3 tests are the detailed use cases. For example, create a scenario where a customer opens an account, makes a couple of deposits and withdrawals, and closes the account. A suite of these tests can also be used as user acceptance tests.

Level-4 tests are tests that typically cannot be carried out without the aid of a tool. High-volume test automation is a good example: for a banking application, simulating a typical year-end closing procedure, or executing hundreds of concurrent accounting transactions.
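The 'hundreds of concurrent transactions' idea can be sketched with a thread pool. This is illustrative only: a real Level-4 harness would drive the banking application itself, while here a lock-protected in-memory ledger stands in for it.

```python
# Illustrative Level-4 sketch: driving many concurrent transactions
# through a thread pool. Ledger is a stand-in for the application;
# the lock keeps the shared balance consistent across threads.

import threading
from concurrent.futures import ThreadPoolExecutor

class Ledger:
    def __init__(self):
        self.balance = 0
        self._lock = threading.Lock()

    def post(self, amount):
        with self._lock:
            self.balance += amount

ledger = Ledger()
with ThreadPoolExecutor(max_workers=20) as pool:
    for _ in range(500):
        pool.submit(ledger.post, 1)
# The 'with' block waits for all submitted transactions to finish.

print(f"posted 500 transactions, balance = {ledger.balance}")
```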

This classification needs to be looked at within the context of the project in which you are testing. The levels are not a prescription for the order in which the automation should be implemented; you may very well automate tests from Level-0 and Level-2, skip Level-1 and Level-3, and automate tests in Level-4.
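One way to make that selective automation practical is to tag each automated test with its level and let the runner pick the levels to execute. The registry below is a hypothetical sketch, not any team's actual tooling; real test frameworks offer similar tagging mechanisms.

```python
# Hypothetical sketch: tagging automated tests with a level and
# selecting which levels to run, mirroring the classification above
# (e.g. run Level-0 and Level-2 while skipping Level-1 and Level-3).

REGISTRY = []

def level(n):
    """Decorator that records a test function under level n."""
    def wrap(fn):
        REGISTRY.append((n, fn))
        return fn
    return wrap

@level(0)
def smoke_login(): return "ok"

@level(1)
def happy_path_transfer(): return "ok"

@level(2)
def bounced_check(): return "ok"

def run(levels):
    """Run only the tests registered under the given levels."""
    return [fn.__name__ for n, fn in REGISTRY if n in levels and fn()]

print(run({0, 2}))
```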
Creative Commons License
The Elusive Bug by Rajesh Kazhankodath is licensed under a Creative Commons Attribution-Share Alike 2.5 India License.