Wednesday, September 19, 2007

Looking at afterthoughts


With customer-reported defects in their inbox, testers have this sense of "if only we could have done this too", "why did we miss that?", "I thought we'd tested that". It's a mad scramble to find an explanation when things get ugly. Often at release time, the opposite is true. The feeling is "I know I didn't get to that, but it's not important", "I know it's not tested to my satisfaction, but it's too late to hold up a release now, I should have raised this earlier", "the test data is not comprehensive, customer data may uncover more issues".

In hindsight, the emotive and tense aspects are overlooked. With the countdown to release ticking, the challenge is to get the risks covered, and testers often overlook minor aspects of the product. Being at the lowest end of the "software food chain", it is the testing time that most of the other entities in the food chain eat up. Again and again we hear this around our cubicles and meeting rooms: "it's taking more than expected to complete this feature, but it's critical". The routine response under these circumstances is "let's take a few more days", and the "few more days" come at the cost of testing time. Of course, a delay in the project is not affordable since we can't miss the market!

In these days of agile and iterative development, a lot is being discussed about the importance of testing early and getting testers involved early in the development process. Easier said than done? Maybe, maybe not!
The projects I've worked on for the past few years have had a structure with a development manager looking after development activities, a test manager looking after testing activities, and a project manager managing the project timelines and the external liaison required by the project. The project manager often holds the key to the success of the project, and the effective delivery of the project depends on the effectiveness of the project manager. The project manager does not have anyone reporting to him or her. This person's primary responsibility is to maintain the timelines, make sure everyone in the project adheres to them, look out for possible risks and escalate issues. Often, coming from prior experience in development teams, the project manager unfortunately champions the cause of the development team, frequently at the expense of the testing team. The schedule prepared by the project manager puts undue emphasis on development delivery and ignores the fact that testing resources can be used optimally in an iterative development mode. The schedule the project manager prepares turns out to follow a waterfall method that leaves little time for the testing team.
Of course, there are project managers who consider themselves an independent authority and create the project plan with optimal utilization of all the resources in the project. Project managers who are firm on the timelines are the ones who deliver projects successfully.

Sunday, September 9, 2007

Due share for Test Management?

Over the past several months I've been keeping myself updated with blog activity on software testing. The Yahoo group on software testing has also been a very interesting read.
Overall, the majority of the blogs discuss testing processes and techniques, what I'd term the "last mile" in software testing. Certainly this is the most important aspect of our software testing field.
Very few of the blogs I've been reading discuss the "management" aspects of software testing. Aspects of testing such as "negotiation", "risk management", "analysis of what-if scenarios" and "work allocation" generally get very few posts in the blog world, at least in the blogs I keep track of regularly.
In general these areas come under project management, and several blogs discuss software project management. There are areas, though, where managing software development differs from managing testing projects. For example, there is negotiation for time and budget on both development and testing projects; however, the testing team generally finds itself between a rock and a hard place in negotiation situations. Testing risks generally get better attention from higher management than risks from the development team. Development teams have proven estimation techniques; testing teams generally estimate based on their prior experience.

Monday, July 30, 2007

Testing for performance

Ben Simo's post on performance testing is an interesting read. These lessons learned more or less map to my limited experience of testing applications for an aspect that functional testers often overlook.

This subject is subcategorized into various areas such as performance, scalability, load and volume testing. For the sake of simplicity I'll refer to all of these here under the general term performance testing.

Performance is the last thing that appears in many of the requirements documents I've come across. One reason for this could be that requirements are created by business analysts or product managers with limited exposure to the software development process. For them, performance is not a requirement; it is "implied that the system works to the satisfaction of the end users". For the end user, a non-performing system is a defective system. When developing a "custom-made solution" for "a specific client", also known as a consulting implementation, performance requirements can be defined specifically and become part of the service level agreements. Here the number of users, data volume and hardware specifications are known before the system is developed. It is often difficult to specify performance requirements when it comes to developing a "generic solution" that can be configured in complex ways on multiple operating platforms and implemented at clients ranging from large corporations to small and medium enterprises.
I've seen mixed reactions from developers on performance requirements. Developers fall into many categories based on their interpretation of the "implicit" requirement. Some ignore the requirement and assume their design takes care of performance. Some make implicit assumptions about the performance requirement: "My assumption is that on a typical setup about 100,000 records get uploaded to the transaction_history table by the nightly process", "I feel there may be about 10 users requesting a customer record search". A few others ask the business analyst or product manager about the performance expectations; they may or may not get the answers they are looking for, but with the inputs they do get, they design the system to take care of the worst-case scenario. Others realize their design may not scale to the performance expectations, but time pressure pushes them to attend to more important tasks. There are, of course, developers who go to great lengths to ensure their code meets any performance goal.

Testers have their own view of performance. Unless there is a mandate for performance to be tested, testers keep it low on their priority list. The testers asked to test for performance are usually functional testers with limited experience in performance testing. In iterative development cycles, performance can be tested only after the last iteration is delivered for testing, which leaves very little time for it. Creating sample data for performance testing is also a difficult proposition: functional testers create their own data for a specific scenario, or may use a 'limited' set of client data with which it is easier to detect functional data-integrity issues.
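To give a feel for what creating that kind of volume data involves, here is a small sketch of my own (not from any project described above); the table name transaction_history and the 100,000-record count are borrowed from the hypothetical developer quote earlier, and the column names are invented for illustration.

```python
# A minimal sketch of bulk test-data generation using only the standard library.
# transaction_history and its columns are hypothetical; adjust to your own schema.
import csv
import random
from datetime import datetime, timedelta

ROWS = 100_000  # the "nightly upload" volume assumed by the developer quoted above

def random_row(i):
    """Build one synthetic transaction record."""
    posted = datetime(2007, 1, 1) + timedelta(minutes=random.randint(0, 500_000))
    return {
        "txn_id": i,
        "customer_id": random.randint(1, 5_000),
        "amount": round(random.uniform(1.0, 10_000.0), 2),
        "posted_at": posted.isoformat(sep=" "),
    }

with open("transaction_history.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["txn_id", "customer_id", "amount", "posted_at"])
    writer.writeheader()
    for i in range(1, ROWS + 1):
        writer.writerow(random_row(i))
```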

Performance testing needs to be considered a specialized area of testing and not mixed up with functional testing. This again depends on the budget of the project. The 'low-end' performance tests can be done by testers huddled in a single room, banging away on the system with the intent of making something go wrong. This type of testing, though very labor intensive, may detect many of the concurrency issues. Unless there is development support for this type of effort, the defects it uncovers tend to be marked as non-reproducible.
A cost-conscious project can choose from a wide range of open-source tools available for performance testing.
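To give a flavour of the 'low end' of that range, here is a bare-bones sketch of my own using nothing but the Python standard library; the URL, user count and request count are hypothetical, and a real project would more likely reach for a dedicated open-source tool such as JMeter.

```python
# A minimal "poor man's" load test: fire concurrent requests at an endpoint and
# report response-time percentiles. Endpoint and volumes below are hypothetical.
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

TARGET_URL = "http://localhost:8080/customer/search?q=smith"  # hypothetical endpoint
CONCURRENT_USERS = 10
REQUESTS_PER_USER = 50

def one_user(user_id):
    """Simulate one user firing a burst of requests; return response times in seconds."""
    timings = []
    for _ in range(REQUESTS_PER_USER):
        start = time.perf_counter()
        with urllib.request.urlopen(TARGET_URL, timeout=30) as resp:
            resp.read()
        timings.append(time.perf_counter() - start)
    return timings

with ThreadPoolExecutor(max_workers=CONCURRENT_USERS) as pool:
    all_timings = [t for user in pool.map(one_user, range(CONCURRENT_USERS)) for t in user]

all_timings.sort()
print(f"requests: {len(all_timings)}")
print(f"median  : {all_timings[len(all_timings) // 2]:.3f}s")
print(f"95th pct: {all_timings[int(len(all_timings) * 0.95)]:.3f}s")
```

Even a crude script like this produces response-time numbers that a room full of testers banging away by hand cannot.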

Sunday, June 17, 2007

Why do we test?

This post is not by any chance a comprehensive look at the reasons why software needs to be tested. I'm discussing here two reasons why software needs to be tested. These may not be the only reasons software testing is required; there are endless reasons. However, these two reasons show very clearly the way in which testers and developers view software testing.

"To prove the software works". No prizes in guessing where this reason comes from. This blog post has its roots from an email from somewhere up in the management chain asking me to do something that was quite unassuming. "Rajesh, Can you test feature so-and-so just completed by developer John Doe. We'd like to take stock of how much of the features are complete". Reading between the lines, its obvious what management wants me to do. Prove to them that the developer had done with his code and checked them into the version control system. "To prove the software works" is something that developer of the feature has to do themselves. Unit-Test!!!. And the management has to as "John Doe, Can you show us the unit-test results?"

Given the background of the situation, I mentioned to the management that we do not have a build to test yet, the build scripts are broken, and we will not be able to replicate the customer environment. To this, the management's reply was "oh, just do this in the developer setup, we just want to know if the feature is complete".

The second reason for testing: "To find defects in the software so that your customers do not find them". (I'm mentioning this second not in order of importance, but simply as the second point discussed here.) Again, no guesses for who's the proponent of this viewpoint. For the testing group, the imperative mandate is to find defects and absolutely NOT to prove the software works. This mandate is something that needs to be reiterated to every tester every day, however experienced they are.
In summary: testers test "to find defects in the software so that your customers do not find them"; developers test "to prove the software works". The success of any software project, where the management bets its money, depends on both.

Wednesday, March 28, 2007

"Gutless Estimating" - Excerpts from "The Mythical Man-Month"

The following is an excerpt from the classic book "The Mythical Man-Month".
Read the following paragraphs carefully. Sit back, close your eyes and think for the next two minutes. Frederick P. Brooks, Jr., the "father of the IBM System/360," wrote these lines more than 20 years ago. So little has changed over these years...

Gutless Estimating

Observe that for the programmer, as for the chef, the urgency of the patron may govern the scheduled completion of the task, but it cannot govern the actual completion. An omelette, promised in two minutes, may appear to be progressing nicely. But when it has not set in two minutes, the customer has two choices—wait or eat it raw. Software customers have had the same choices.

The cook has another choice; he can turn up the heat. The result is often an omelette nothing can save—burned in one part, raw in another.

Now I do not think software managers have less inherent courage and firmness than chefs, nor than other engineering managers. But false scheduling to match the patron's desired date is much more common in our discipline than elsewhere in engineering. It is very difficult to make a vigorous, plausible, and job-risking defense of an estimate that is derived by no quantitative method, supported by little data, and certified chiefly by the hunches of the managers.

Clearly two solutions are needed. We need to develop and publicize productivity figures, bug-incidence figures, estimating rules, and so on. The whole profession can only profit from sharing such data.

Until estimating is on a sounder basis, individual managers will need to stiffen their backbones and defend their estimates with the assurance that their poor hunches are better than wish-derived estimates.

Thursday, March 15, 2007

A universal format for resumes

Most of the latter half of my last week went into scanning resumes for a test automation position. HR, for its part, had dumped into an MS Outlook folder quite a few resumes from internet job postings and from candidates who applied through the posting on our company website. Given the enormous amount of work it takes to scan through resumes for the right skill set and experience, I'm wondering if it's high time the software industry proposed a universal format for resumes.

Automated tools could scan machine-readable formats and save the drudgery of reading through pages of text to get simple information like years of experience and skill set. This prompted me to look into Wikipedia for a definition of resume. One format that caught my attention was hResume. Any new initiative requires big names to support it. It's time for the big companies to back a common initiative; the smaller ones will follow suit.
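To make the idea concrete, here is a rough sketch, entirely my own illustration and not part of the hResume specification or any real standard, of what automated screening over a structured resume could look like. The field names and JSON layout are hypothetical; a real tool would parse whatever common format the industry settles on.

```python
# Screen a hypothetical structured resume for minimum experience and required skills.
# The field names below are invented for illustration only.
import json

RAW_RESUME = """
{
  "name": "Jane Candidate",
  "years_of_experience": 6,
  "skills": ["test automation", "Java", "SQL", "performance testing"]
}
"""

REQUIRED_SKILLS = {"test automation", "java"}
MIN_YEARS = 4

def screen(resume_json, required_skills, min_years):
    """Return True if the resume meets the minimum experience and skill bar."""
    resume = json.loads(resume_json)
    skills = {s.lower() for s in resume.get("skills", [])}
    return (resume.get("years_of_experience", 0) >= min_years
            and required_skills <= skills)

print(screen(RAW_RESUME, REQUIRED_SKILLS, MIN_YEARS))  # True for this sample resume
```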

Saturday, March 3, 2007

Two Blogger bugs

Since I started blogging, I've been looking at everyday things more critically. I noticed two things on blogger.com that caught my attention.

I have this habit of creating a blog post and saving it as a draft. After many iterations of writing and rewriting, I finally publish the post. Often the gap between creating the post for the first time and publishing it is several days. I have several draft posts that I work on at any given point in time.

I created a post on February 26th and finally published it on March 4th. The published date on my blog still shows February 26th instead of March 4th.
The other bug is related to my blog title; I'm still drafting that one.

 
Creative Commons License
The Elusive Bug by Rajesh Kazhankodath is licensed under a Creative Commons Attribution-Share Alike 2.5 India License.