
Tuesday, 19 August 2008

Empowered Test Managers

The test function plays an important part within an organisation, yet too often it is asked to work in a manner that is restrictive and fails to give it the authority or accountability that it requires and deserves. Strangely, in some instances, even when testers are given the authority they fail to take it.

So let’s look at the Test Completion Report. This is the Test Team’s opportunity to give a full account of what occurred during testing and to make recommendations. Why is it, then, that we do not include as standard in every report a recommendation on whether the item is suitable for release into production? Part of the answer may be that stepping away from the black and white of scripts passing and failing, and the associated defects, makes some Test Managers uncomfortable. But the Test Manager knows what testing was due to occur, understands how much has occurred and what level of quality has been achieved. Why, then, is this not considered a logical and calculated recommendation?
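The reasoning above can be made concrete: the recommendation follows mechanically from figures the Test Manager already reports. The sketch below is purely illustrative; the function name, thresholds and defect-severity fields are my own assumptions, not part of any standard Test Completion Report.

```python
# Hypothetical sketch: deriving a release recommendation from the figures a
# Test Completion Report already contains. Thresholds are illustrative only.

def release_recommendation(scripts_planned, scripts_run, scripts_passed,
                           open_sev1, open_sev2):
    """Return a headline recommendation plus the reasoning behind it."""
    coverage = scripts_run / scripts_planned    # how much testing occurred
    pass_rate = scripts_passed / scripts_run    # what quality was achieved

    if open_sev1 > 0:
        return "DO NOT RELEASE", "severity-1 defects remain open"
    if coverage < 0.9:
        return "DO NOT RELEASE", f"only {coverage:.0%} of planned testing run"
    if pass_rate < 0.95 or open_sev2 > 3:
        return "RELEASE WITH CAVEATS", "quality below target; state residual risk"
    return "RELEASE", f"{coverage:.0%} coverage, {pass_rate:.0%} pass rate"

print(release_recommendation(200, 190, 186, 0, 1))
# → ('RELEASE', '95% coverage, 98% pass rate')
```

The point is not the particular thresholds, which each organisation would set for itself, but that the recommendation is a calculated output, not a leap of faith.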

On the flip side, why are Application Support departments not demanding proof of testing and a firm opinion on the level of quality achieved? In most organisations, if a poor-quality product is released into production, it is not the members of the project who are impacted by the problems, but those in Application Support. Not only will they be receiving calls through to the Service Desk, they will potentially be calling out support staff in the middle of the night to recover from the problems. On top of this, any service levels that the Application Support department is supposed to maintain are potentially being compromised. It is therefore imperative that these two departments work together. Application Support will often have more clout within the organisation and can therefore assist in the move to a more authoritative testing department.

Another area in which the Test Manager should be given more authority is at the exit of Integration in the Small and the entry of Integration in the Large. Two important events occur here. The first is the handover of the development agency’s testing assets and proof. A failure to provide this information is a red flag and should sound all sorts of alarm bells: it is indicative of a raft of problems about to be experienced during test execution. The second is the first delivery of code into testing and the running of those first few scripts. If these start to fail, there is a second indication of problems with quality. Yet too often the Test Manager is not in a position of authority to insist on the production of test assets from development, or indeed to bounce the code back for rectification of defects. If the Test Manager is engaged at the outset of the project, they should be able to insist on the supply of test assets from development. To avoid the problem of poor-quality code being delivered, take smaller drops of code and start them sooner, so that an indication of quality is gained early, or add a specialist phase of testing prior to Integration in the Large whose sole purpose is to sign off the quality of the code delivered, potentially by smoke/sanity testing the application using a selection of the scripts created.
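The two entry checks described above could be expressed as a simple gate that the Test Manager applies before accepting a drop. This is a minimal sketch under my own assumptions; the asset names, the zero-failure threshold and the function itself are hypothetical, not an established process artefact.

```python
# Hypothetical sketch: an entry gate for Integration in the Large. Both the
# handover of development test assets and an initial smoke run are checked
# before a code drop is accepted; otherwise it is bounced back to development.

def entry_gate(handover_assets, smoke_results,
               required_assets=("unit test results", "coverage report"),
               max_smoke_failures=0):
    """Accept or reject a code drop, returning the decision and the reason."""
    missing = [a for a in required_assets if a not in handover_assets]
    if missing:
        return False, f"bounce the drop: missing handover assets: {missing}"
    failed = [name for name, passed in smoke_results.items() if not passed]
    if len(failed) > max_smoke_failures:
        return False, f"bounce the drop: smoke scripts failed: {failed}"
    return True, "accept the drop into Integration in the Large"
```

Either rejection reason is exactly the early warning the paragraph describes: missing assets predict trouble during execution, and failing smoke scripts reveal poor code quality before the full test effort is committed.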

To summarise, ensure that the opinion of the Test Manager is sought. Don’t let them get away with a bare statement of scripts run and defects raised, open and closed in the Test Completion Report. Insist that they provide an opinion. Work with Application Support to bring additional credibility to the department. Once this has been achieved, you can start to think about how to ensure that all of the Test Managers in a department apply the same level of reasoning when making go-live recommendations. The subject of another article, I think.
