I am sure that most readers of this blog will instantly know what I mean when I say that different Test Managers arrive at different decisions. There can be many reasons for this: some managers are harder than others, some are nicer than others, some come from one country and some from another, some are permanent, some are supplied and some are contract, some are new to the role, some are new to the organisation.
The reality is that people are different, and our experiences drive us to behave in certain ways. To some extent, those who know us well can predict our reactions and actions. When running a department full of Test Managers with a mix of personalities and capabilities, it becomes important to bring some balance or levelling to this. You don't want projects seeking out a particular Test Manager because, with them, the project will always go live.
This means that we need to turn some of those decision-making activities into more of a science and remove the imbalance that can be introduced by emotion or experience. I can't really supply you with facts and figures that would let you draw a decision diagram, because it is not that easy. But I will try to point you in what I think is the right general direction.
I would start by breaking down the department's testing functions into business domains. It is more likely that you can assess the quality of what is being released at domain level as a starting point. For instance, releases to the web might enjoy a high degree of success and few support calls, while in Business Intelligence quality is always an issue and the first run of the month-end batch processes always crashes. This gives an indication of where to start defining some criteria. But there are other areas to consider as well. It could be that one development agency has a far higher success rate than another, with the volume of defects found consistently low, or that when a development agency or department develops using a particular language, the results are always much better.
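To make that concrete, here is a minimal sketch of what a domain-level assessment might look like; all of the field names and figures are hypothetical, and the point is simply to group historical release records by domain (the same grouping works for agency or language) and see where the quality problems cluster.

```python
from collections import defaultdict

# Hypothetical historical release records; in practice these would come
# from your defect tracker and support-call logs.
releases = [
    {"domain": "Web", "defects": 4, "support_calls": 2},
    {"domain": "Web", "defects": 6, "support_calls": 1},
    {"domain": "Business Intelligence", "defects": 23, "support_calls": 14},
    {"domain": "Business Intelligence", "defects": 31, "support_calls": 9},
]

# Average defects and support calls per release, grouped by domain.
totals = defaultdict(lambda: {"defects": 0, "support_calls": 0, "count": 0})
for r in releases:
    t = totals[r["domain"]]
    t["defects"] += r["defects"]
    t["support_calls"] += r["support_calls"]
    t["count"] += 1

for domain, t in totals.items():
    print(f"{domain}: avg defects {t['defects'] / t['count']:.1f}, "
          f"avg support calls {t['support_calls'] / t['count']:.1f}")
```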
These different criteria can be used to form some basic rules around final exit criteria, for instance, enabling decisions to be made within a set range and ensuring that all Test Managers reach the same conclusions. Perhaps looking at the ratio of defects found to scripts executed, and comparing this against some analysis of the application being developed, the development language, the volume of support calls pertaining to the application, and the domain of release, would provide good statistical evidence for decision-making criteria.
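As a sketch of how such a rule might work, assume, purely hypothetically, that each domain has an agreed acceptable range for the defects-per-script ratio, derived from that historical data. A release then passes the exit criterion only if its ratio falls inside the range, so every Test Manager applying the rule reaches the same conclusion.

```python
# Hypothetical acceptable defects-per-script ranges per domain,
# derived from historical release data.
EXIT_RANGES = {
    "Web": (0.0, 0.05),
    "Business Intelligence": (0.0, 0.15),
}

def meets_exit_criteria(domain: str, defects: int, scripts_executed: int) -> bool:
    """Return True if the defect-to-script ratio is within the agreed range."""
    low, high = EXIT_RANGES[domain]
    ratio = defects / scripts_executed
    return low <= ratio <= high

# Example: 12 defects across 300 executed scripts in the Web domain.
print(meets_exit_criteria("Web", defects=12, scripts_executed=300))  # 0.04 -> True
```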
Basically, I am suggesting that you use historical data to form basic guidelines that the Test Managers can use. It does not necessarily eradicate the problem, but it will at least enable easy identification of projects that fall outside the norm. The Test Manager must still have the power to state what they think, even if this flies in the face of the statistical evidence, but when doing so they should be able to substantiate their view.
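One simple way to spot a project outside the norm, again with hypothetical figures, is to compare its defect ratio against the mean and standard deviation of its domain's history; anything beyond, say, two standard deviations gets flagged for the Test Manager to substantiate.

```python
from statistics import mean, stdev

def outside_norm(ratio: float, historical_ratios: list, k: float = 2.0) -> bool:
    """Flag a release whose defect ratio is more than k standard
    deviations from the historical mean for its domain."""
    mu = mean(historical_ratios)
    sigma = stdev(historical_ratios)
    return abs(ratio - mu) > k * sigma

# Hypothetical history of defect-to-script ratios for one domain.
history = [0.03, 0.05, 0.04, 0.06, 0.05]
print(outside_norm(0.12, history))  # well above the norm -> True
```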
For projects run by inexperienced Test Managers, or by those inclined to shy away from awkward decisions, the decision becomes one of science, which in most cases is going to be correct.