Thursday, 21 August 2008
Test Managers – Balance of Opinions
The reality is that people are different and our experiences drive us to behave in certain ways. To some extent, those who know us well can predict our reactions and actions. When running a department full of Test Managers with a mix of personalities and capabilities, it becomes important to bring some balance or levelling to this. You don’t want a particular Test Manager to be sought out because, with them, the project will always go live.
Tuesday, 19 August 2008
Empowered Test Managers
So let’s look at the Test Completion Report. This is the opportunity for the Test Team to give a full account of what occurred during the testing and to make any recommendations. Why is it, then, that we do not include as standard within every report a recommendation on whether the item is suitable for release into production? Part of the answer may be that the Test Manager does not feel comfortable stepping away from the black and white of scripts passing and failing; the associated defects would appear to make some uncomfortable. But the Test Manager knows the testing that was due to occur, understands how much has occurred and what level of quality has been achieved. Why then is it felt that this is not a logical and calculated recommendation?
Another area where the Test Manager should be given more authority is on the exit of Integration in the Small and the entry of Integration in the Large. Two important events occur here. The first is the handover of the development agency’s testing assets and proof. A failure to provide this information is a red flag to the Test Manager and should sound all sorts of alarm bells; it is indicative of a lot of problems about to be experienced during test execution. The second is when the code is first delivered into testing and those first few scripts are run. If these start to fail, there is a second indication of problems with quality. Yet so often the Test Manager is not in a position of authority to insist on the production of test assets from development, or indeed to bounce the code back for rectification of defects. If the Test Manager is engaged at the outset of the project, they should be able to insist on the supply of test assets from development. To avoid the problems of poor quality code being delivered, take smaller drops of code and start them sooner, so that an indication of quality is understood, or add a specialist phase of testing prior to Integration in the Large whose sole purpose is the sign-off of the code quality delivered, potentially by smoke/sanity testing the application using a selection of the scripts created, as in the sketch below.
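By way of illustration, here is a minimal sketch of such a code-quality gate in Python. The script paths and the run_script helper are purely illustrative assumptions, not part of any standard tool:

```python
# Minimal sketch of a code-quality gate run when code is first delivered
# into testing. All names (SMOKE_SCRIPTS, run_script) are illustrative.
import subprocess
import sys

# A selection of existing test scripts chosen to exercise core paths.
SMOKE_SCRIPTS = [
    "scripts/login_smoke.py",
    "scripts/create_order_smoke.py",
    "scripts/batch_interface_smoke.py",
]

def run_script(path: str) -> bool:
    """Run one test script; treat a non-zero exit code as a failure."""
    result = subprocess.run([sys.executable, path])
    return result.returncode == 0

def main() -> None:
    failures = [s for s in SMOKE_SCRIPTS if not run_script(s)]
    if failures:
        print("Code drop REJECTED - bounce the code back to development:")
        for script in failures:
            print(f"  failed: {script}")
        sys.exit(1)
    print("Smoke tests passed - code accepted into Integration in the Large.")

if __name__ == "__main__":
    main()
```

The exact mechanism matters far less than the principle: the gate runs a handful of existing scripts against the freshly delivered code and gives the Test Manager an objective basis for accepting or bouncing the drop.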
To summarise, ensure that the opinion of the Test Manager is sought. Don’t let them get away with a statement of scripts run and defects raised, open and closed in the Test Completion Report. Insist that they provide an opinion. Work with Application Support to bring additional credibility to the department. Once this has been achieved, you can then start to think about how to ensure that all of the Test Managers in a department apply the same level of reasoning when making recommendations for go-live. The subject of another article, I think.
Sunday, 17 August 2008
T.I.G.E.R – What this acronym means to us!
T = Truthful
It is imperative to us that all members of the company operate in a truthful manner. We need to know that we can rely on what people are saying and that they will be honest with each other and with our clients. This is not always easy, but some wise person once said, “Honesty is the best policy”. Well, we believe this and have made it part of our policy.
I = Independent
Testing as a discipline needs to remain independent of other functions in order that it can provide an unbiased view on quality. A lack of independence places a testing department under the same time and cost constraints as those functions, and quality can therefore become compromised. We pride ourselves on the fact that Testing is all that we do. We live and breathe the subject and can always be relied upon to act independently.
G = Good Willed
We expect our staff to be good willed because this is a personality trait that we embody as an organisation. As a result we are affiliated with several organisations and as a group contribute charitably each year. We work with some local organisations and some as large as the NSPCC.
E = Energetic
Energy is incredibly important to us. We want our employees to work hard and play hard. We expect them to be passionate about testing and what it involves. We expect them to demonstrate an enthusiasm for their work and not view it as just a job.
R = Reliable
We need to be able to rely on our resources and we expect our clients to rely on us also. Reliability is a cornerstone on which we build, taking care of the basics by ensuring that we can be counted upon to be knowledgeable and dependable, providing value into our organisation and those of our clients.
Not only is the acronym easy to remember, but it is a strong image and one with which all of our employees are happy to associate. From a TCL India perspective, the Tiger is probably even more powerful an image, being so highly revered.
Friday, 15 August 2008
SOFTWARE TESTING GLOSSARY (Update)
A…………………………………………………………………………………………………
Accessibility: (Testing) Looks at an application to ensure that those with disabilities will be able to use the application as effectively as an able-bodied individual.
Beta: (Testing) Testing of an application which is in production, but not available for general use. This is normally performed by a select group of friendly external testers.
Case: See Scenario
Code: The software that has been produced by development and is being subjected to testing.
Data: Information pertaining to the use of a system or recorded through the use of a system. Data is used in order to test a system, potentially after data cleansing if personal information is involved.
Environment: The combination of hardware, software and data as used in development, testing and production. The platform/s upon which the testing occurs.
Functional: (Testing) The testing of a product’s functions against requirements and design.
Glass Box: See White Box
High Level Design: A design showing the major components and how these will interface to each other, defining the hardware to be used and the software that will be impacted.
Integration in the Large: Where the application or applications that have been developed are brought together with those which have remained unchanged, building a production-like system around the application/s. Testing is then applied, looking at the communication between the different applications.
K……………………………………………………………………………………………………
Key Performance Indicator: A mechanism built on one or more metrics, which determines a band of acceptable performance and which over time is often targeted towards improvement.
Live: See Production
Metric: A measure of an attribute or occurrence in connection with the performance of an organisation, department or function.
Non-functional: (Testing) Concerned with how an application performs, rather than what it does.
P……………………………………………………………………………………………………
Performance Centre: A tool used for measuring the performance of an application or series of applications. See: https://h10078.www1.hp.com/cda/hpms/display/main/hpms_content.jsp?zn=bto&cp=1-11-126-17_4000_100__
Quality: The suitability of an item for its intended purpose and how well it achieves this.
Regression: Retesting of a previously tested program following modification to ensure that faults have not been introduced or uncovered as a result of the changes made.
Safety Critical: Used to identify something whose use has an impact on personal safety, such as medical applications or those used by rescue services.
Technical Architecture Document: Definition of how the business requirements should be translated into a technical solution at the highest level.
U……………………………………………………………………………………………………
Unit: (Testing) The testing of the application by the developers at a component or modular level, looking at whether the code is performing as expected.
V-Model: A testing-focussed software development life-cycle where development activity occurs down the left side of the V and testing activity up the right side, with each step down the left having a corresponding point of testing on the right. It begins with a requirement and terminates with User Acceptance Testing.
W3C: World Wide Web Consortium – body creating web standards and guidelines. http://www.w3.org/
Wednesday, 13 August 2008
The Very First Bug
The similarity is strange, because only recently I had been recounting to colleagues an experience with my son (10 years old). A storm fly, one of those tiny little things about 1mm by 2mm, had somehow crept inside the LCD screen of my monitor and, to all appearances, was crawling all over my document. I called my son over, remarking, “Come take a look at this bug in my screen”. His response was naïve but endearing as he replied, “Wow, Dad, is that a virus?”, much to my delight.
The picture above relates to the following text. “While Grace Hopper was working on the Harvard Mark II Computer at Harvard University, her associates discovered a moth stuck in a relay and thereby impeding operation, whereupon she remarked that they were "debugging" the system. Though the term computer bug cannot be definitively attributed to Admiral Hopper, she did bring the term into popularity. The remains of the moth can be found in the group's log book at the Smithsonian Institution's National Museum of American History in Washington, D.C.”
Monday, 11 August 2008
How to Annoy a Test Manager
The coding has finished and we need to test – should we be talking to you now?
I know you were expecting the code today, but when do you really need it?
Can you please stop raising defects as we don’t have time to fix that many problems?
We have only got half the time we thought we had for testing so how are you going to get all of your work done?
We know the quality is bad, but we are releasing anyway.
UAT has been dropped because the business can’t free any resources.
The fact that this problem was fixed in an earlier release is not indicative of poor configuration management.
Why do we need developers, support and environment resources to work the weekend as well as testers?
The environment is nearly the same as live, we just don’t know how nearly!
Testing is supposed to make the software perfect. Why hasn’t it worked?
We have spent 20% of our budget on analysis, 60% on design and development and 10% on other items. You have 10% left, what can you do for us?
We are running out of money because the test execution is taking too long, so we are going to let the testers go and the developers can finish the job.
Don’t worry about getting testers. We have lots of people in the business that we can get to test at lunchtimes.
The supplier was not asked to perform Unit testing and we can’t afford to have it done now as it was not in the budget.
Testing is a service and as such we will let you know when we need you.
I know you asked for three environments, but we can only get one. Don’t worry. I am sure everyone can get on it at the same time.
The testing is occurring on the development environment. That way the developers can change things whenever they need to.
The problems in live are fixed out of hours so we can’t test them.
If we automate all of the testing how much time will it save us?
Why do we need testers on board before the testing starts?
The user spilt coffee on their system and it crashed. I bet you didn’t test for that did you?
We have too many high severity defects, so we are going to hold a meeting and see how many we can downgrade.
We need to do some performance testing. There are no non-functional requirements, so let’s just assume that everything must happen in under a second.
Testing is easy. Anyone can do it.
It’s not a bug. It’s a design feature.
The requirements were captured at the start of the project. But don’t read that; it’s way out of date.
Please feel free to add your own by way of comments.
Saturday, 9 August 2008
Test Process Structure
TMM (the Testing Maturity Model) suggests that as organisations mature in line with the Capability Maturity Model, the same is occurring within testing. TMM was introduced to apply CMM-type measures to an organisation’s testing capability. In brief, there are 5 stages of test maturity. At level 1, testing is undisciplined. At level 2, the testing is managed but still seen as a phase of the SDLC (Software Development Life Cycle) that occurs after development, although at this point process is being put in place on a programme or organisation-wide basis. Level 3 sees testing fully integrated into the SDLC and no longer a single phase, with processes becoming more mature and honed over time. Level 4 sees the testing fully defined, established and measurable, and level 5 looks at optimisation of testing efforts, processes and practices, where costs are controlled and continuous improvement occurs.
What I would like to suggest is that TMM can seem a trifle heavy for some organisations and, unless they are already pursuing CMM, may seem quite foreign. I am not suggesting that it does not have a place, but it focuses entirely on process and not on understanding the drivers of the business itself, which may make it impossible to move up through the maturity levels.
I think that, taking a more simplistic approach, organisations will follow something more like this:
Stage 1
Testing is being performed by available resources who are not professional testers. Defects are being recorded in basic form, using a tool like Excel. Some test cases or scripts may be produced.
Stage 2
Testing is being performed by test professionals and they are called in after development has occurred. Test Plans and Scripts are being produced and defects are being logged formally and reported on.
Stage 3
Testing exists as one or more independent functions, each having its own process. These small test teams are aligned to the business areas they serve. Projects now have approaches or strategies produced against them.
Stage 4
The disparate teams have now been pooled to form one function which is independent from the development side of the business. A Test Policy has been produced and testing is involved in all aspects of the software development life-cycle. Metrics and KPIs are being gathered and fed back to the organisation.
Stage 5
Testing is improving in its capability. The quality of releases to production is high and end users have a high opinion of the IT function. Releases to live occur only with approval from the testing function and testing is viewed as a discipline within the organisation. Metrics and KPIs are now giving the business sufficient information to base decisions on preferred development agencies and technologies, demonstrating those which are most successful.
The key is to ensure that your business is improved and enabled by testing and not crippled by a level of process which does not match that of the rest of the organisation. Learn to walk before you run and build over time.
Wednesday, 6 August 2008
Role of the Environment Manager
Test Environments are required in order to enable test execution to occur. The Test Environment Manager is responsible for the provision of the test environment, its configuration status, its stability and its maintenance. They also provide administrative activities such as backups and purging at the request of the project, monitor usage of environments and supply support throughout test execution, including out-of-hours working if required. They will be expected to manage change to the test environments and keep the programme and testing informed of status. They must be able to give advance warning of downtime of any component of the test environment, which may cover multiple platforms. Duties are predominantly based around the organisation’s infrastructure but can extend to cover interfacing with 3rd party organisations which are offering equipment to form part of the project environment.
The environment manager is responsible for ensuring the provision of suitable test environments for all aspects of the testing of the core system. A suitable test environment is one that meets the requirements for the phase of testing and those requirements will be documented and provided to the Environment Manager by the Test Manager.
They shall ensure that the interfacing systems’ test environments are available and of a known and controlled configuration. For the core system, the environment should be available two weeks prior to test execution commencement, so giving time for any difficulties to be resolved. The environment must remain available for the duration of the project. As many peripheral systems as possible should be delivered on similar timescales.
Thought must be given to the possibility of different environments being required, where phases of testing overlap, or where the requirements of the testing phase are such that the provided environment needs to be changed or upgraded. Each time a new environment is made available, this should also occur two weeks prior to test execution occurring in the new environment. The Environments will be placed under change control and will always be of a known state of configuration.
No upgrade of hardware or software may be made to a test environment without the approval of the Environment Manager. The Environment Manager must place change control over the environments and maintain that level of control. This may be a formalised mechanism but, as a minimum, requires that the Environment Manager’s approval has been received prior to the change occurring. All changes must be communicated to the following people as a minimum:
1. Development Manager
2. Testing Manager
3. UAT Manager
4. Project Manager
It is expected that this list will grow as time passes and the individual users of the system become known. It is felt that the above will form a minimum conduit for such communications until such time as all individuals are known.
A record of the configuration should be maintained for each environment or element of an environment if multiple environments are required in order to make a whole. The record should detail the hardware build that has been provided by Technical Operations, including versions of operating systems and other utilities provided as part of the basic provision. Over and above this, each item of software placed onto the environment for test, shall have its version recorded, so enabling a clear picture of the configuration at any given time. Any change to any aspect of the configuration should be approved under the change control mechanism and logged in the Configuration Record.
Should the software under test be configurable, a document with its own version control shall be supplied to the Environment Manager so that they can use this as a state of software configuration. If changes are required to that configuration, these should be recorded in an upgraded version of the document and the Configuration Record updated to reflect that this has happened.
Ad hoc changes to the configuration, be that hardware, software or software configuration, are not permissible. Alteration to the configuration must be controlled and the configuration must always be maintained in a known and managed state.
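To illustrate the kind of Configuration Record described above, here is a minimal sketch in Python. The field names and the apply_change helper are illustrative assumptions, not a prescribed format:

```python
# Sketch of a Configuration Record entry for one test environment.
# Field names are illustrative assumptions, not a prescribed format.
from dataclasses import dataclass, field
from datetime import date
from typing import Dict, List

@dataclass
class ConfigurationRecord:
    environment: str                     # e.g. "SIT-1"
    hardware_build: str                  # build reference from Technical Operations
    operating_system: str                # OS name and version
    utilities: Dict[str, str]            # utility name -> version
    software_under_test: Dict[str, str]  # application -> version
    config_document_version: str         # version of the software configuration document
    change_log: List[str] = field(default_factory=list)  # approved changes, in order

    def apply_change(self, change_ref: str, item: str, new_version: str) -> None:
        """Record an approved change so the environment stays in a known state."""
        self.software_under_test[item] = new_version
        self.change_log.append(
            f"{date.today().isoformat()} {change_ref}: {item} -> {new_version}"
        )
```

Any change applied through such a record carries its approval reference with it, so the configuration remains in a known and auditable state at any given time.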
The Environment Manager will ensure that there is support for the system during all times of test execution, be this late at night or over weekends and bank holidays. They will make known a list of all the different elements of the environments, together with the contact names and numbers of the people who support them.
A plan shall be maintained by the Environment Manager of who is using which environment, for what purpose and at what time. This needs to be maintained so as to ensure that changes in environment usage are managed and controlled, and so that, if multiple environments are required, their number can be kept to a minimum by the intelligent employment of the environment resources available. A simple sketch of such a plan is given below.
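As a minimal sketch of such a usage plan, assuming a simple booking structure (all names here are illustrative):

```python
# Sketch of an environment usage plan with a simple clash check.
# Structure and names are illustrative, not a prescribed format.
from dataclasses import dataclass
from datetime import date
from typing import List

@dataclass
class Booking:
    environment: str   # which environment is booked
    team: str          # who is using it
    purpose: str       # what for
    start: date        # from when
    end: date          # until when (inclusive)

def clashes(plan: List[Booking], new: Booking) -> List[Booking]:
    """Return existing bookings that overlap the new one on the same environment."""
    return [b for b in plan
            if b.environment == new.environment
            and b.start <= new.end and new.start <= b.end]

plan = [Booking("SIT-1", "Core Test Team", "Integration in the Large",
                date(2008, 8, 11), date(2008, 8, 22))]
# An overlapping request on the same environment is flagged for the
# Environment Manager to resolve.
print(clashes(plan, Booking("SIT-1", "UAT Team", "UAT dry run",
                            date(2008, 8, 20), date(2008, 8, 29))))
```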
The Environment Manager, as a result of the change control mechanism, will be able to notify the users of the test environments of any outages. These changes shall be recorded in an outage log, so that upon project completion, the details of outages can be formally reported. It will also enable monitoring of any problems that may occur in this area and be used to aid resolution of repetitive issues.
As part of the standard weekly output, the Environment Manager will distribute the following information:
1. Planned usage for the week.
2. Planned outages for the week.
3. Planned changes for the week.
4. The statement of configuration for each environment.
5. Planned backups.
The last item on the list is backups. These shall be available to all testers upon request; each backup shall be requested through, and recorded by, the Environment Manager.
I hope that this is found to be useful and informative.
Saturday, 2 August 2008
Importance of Accurate Test Environments
So what happens? Firstly all of the investment on a project is in the production equipment. As the project is creating application X for the first time, the to-be production environment can be used for testing purposes. It is ideal in this situation because it means that we can test on the most suitable infrastructure possible – what it will run on in live.
The problem comes the next time the application is updated. There was no investment in test infrastructure as part of the original project – they didn’t need it. So now the update project has to justify a massive expenditure on test equipment, which could potentially jeopardise the project’s financial benefit. This step is therefore avoided and the hunt begins. We need to find a piece of kit that can be used to test the application. Something which is similar to that in production would do. All right, something which resembles the production kit if you squint at it whilst wearing dark glasses. The good news comes through. Kit has been found and, once the rust is scraped off, it is found to be an earlier version of the kit in production. The RAM is reduced, the processor is two years older than the one in production and the operating system has fallen out of support. Never mind, it is all we have. Then the test manager explains that he needs three environments to be able to complete the testing on schedule. Don’t worry, we can whack in some more RAM and increase the disk space. It’s still not half as good as production, but we can split it into three for you.
This is seen as a pragmatic solution by all involved and the test manager must now do his best. (Agggghhhhh!!!!!!!) Let me explain my frustration. As soon as you compromise the test environment, you compromise the testing. So many times, defects are raised which are directly attributable to the environment; if that environment is not the same as production, time is going to be spent resolving defects which will not exist in production, and defects which would occur on the production environment will potentially not be located during testing. Certainly in terms of release testing, the process is likely to be different, and effectively the results during testing are false and cannot be relied upon. The lack of adequate test environments becomes a project risk in its own right and should be logged in the project risk register accordingly.
It is critical that the testing environment is as close to production as possible and that any deviations are fully understood. Only with the differences between production and testing environments being known and measurable can you rely on the testing results.
Take performance testing. If you are running with less RAM, less disk space and less processing power, how will this manifest itself in the results? As for security testing, the operating system version becomes paramount, as the level of inherent security is likely to change. Know and document the differences, and if this cannot be achieved, be careful about signing off the product as fit for release.
As a recommendation to all project managers, ensure that you leave an inheritance of test infrastructure for the following projects. Placing a new piece of equipment live is not sufficient. Consideration must be given to ongoing support, testing, development and training and what environments are going to be used for these.
Grading Exit Criteria
What I would like to draw attention to are two key points. The test manager who says, “I am not going to have any defects remaining open at the point I go live”, is living in “cloud cuckoo land”. This is the test manager who wants to be able to test forever and release perfect code. This does not happen. The reality of business is that certain levels of risk become acceptable in order to deliver the project to live, meet the requirements of the business and deliver the business benefits that were used to justify the project in the first instance.
So the next question becomes one of levels of risk and what is likely to be acceptable. This will depend on the nature of the project. One that involves safety critical applications is likely to have a far higher requirement for quality than one which is being used by 2 people in a small business. One which is exposed to the public or clients is also going to need to be of a significantly higher level of quality than one which is used internally only. The skill of the test manager is to assess the application and its use, and to define the level of defects that is going to be tolerable. As previously defined in http://tcl-india.blogspot.com/2008/06/entry-and-exit-criteria.html, the Project Manager should be bought into these levels in order to ensure support later in the project.
Now we need to look at the defects that are going to be acceptable. A common misconception is to specify that the acceptable level of critical defects is 0, be that by priority or severity, but to place no restraint on the level of insignificant defects. This could not be further from the truth. Whilst 1 insignificant defect is not going to stop the release, there comes a volume of insignificant defects that makes the release of the application unacceptable.
I would suggest that we concentrate initially on the severity of defects. We need to understand, proportionately, that a volume of insignificant defects is equal to 1 minor, a volume of minor defects is equal to 1 significant, and so on. This will change depending on the application’s use, so here are some suggestions for different application types, followed by a sketch of how they might be applied.
Safety Critical
Proportions: 1 Critical = 2 Severe : 1 Severe = 3 Significant : 1 Significant = 5 Minor : 1 Minor = 10 Insignificant
Final Exit Criteria: 0 Critical : 0 Severe : 0 Significant : 5 Minor : 10 Insignificant
Public/Client Facing
Proportions: 1 Critical = 3 Severe : 1 Severe = 5 Significant : 1 Significant = 10 Minor : 1 Minor = 20 Insignificant
Final Exit Criteria: 0 Critical : 0 Severe : 3 Significant : 10 Minor : 20 Insignificant
Internal Consumption (20 + Users)
Proportions: 1 Critical = 4 Severe : 1 Severe = 7 Significant : 1 Significant = 15 Minor : 1 Minor = 50 Insignificant
Final Exit Criteria: 0 Critical : 1 Severe : 5 Significant : 10 Minor : 40 Insignificant
Internal Consumption (0 to 20 Users)
Proportions: 1 Critical = 5 Severe : 1 Severe = 10 Significant : 1 Significant = 20 Minor : 1 Minor = 100 Insignificant
Final Exit Criteria: 0 Critical : 2 Severe : 5 Significant : 10 Minor : 50 Insignificant
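To show how such proportions and criteria might be applied mechanically, here is a minimal sketch in Python using the Public/Client Facing profile above. Rolling excess lower-severity defects up into the next level is one possible reading of the proportions, not a prescribed algorithm:

```python
# Sketch: evaluate open defect counts against graded exit criteria.
SEVERITIES = ["insignificant", "minor", "significant", "severe", "critical"]

# Proportions for a Public/Client Facing application, read as:
# 20 insignificant = 1 minor, 10 minor = 1 significant,
# 5 significant = 1 severe, 3 severe = 1 critical.
EQUIVALENTS = {"insignificant": 20, "minor": 10, "significant": 5, "severe": 3}

# Final exit criteria for a Public/Client Facing application (from above).
THRESHOLDS = {"critical": 0, "severe": 0, "significant": 3,
              "minor": 10, "insignificant": 20}

def exit_criteria_met(open_defects: dict) -> bool:
    """Roll excess lower-severity defects up a level, then check each threshold."""
    counts = {sev: open_defects.get(sev, 0) for sev in SEVERITIES}
    for lower, higher in zip(SEVERITIES, SEVERITIES[1:]):
        counts[higher] += counts[lower] // EQUIVALENTS[lower]
        counts[lower] = counts[lower] % EQUIVALENTS[lower]
    return all(counts[sev] <= THRESHOLDS[sev] for sev in SEVERITIES)

# Example: 1 significant, 8 minor and 45 insignificant defects remain open.
# The 45 insignificant roll up to 2 extra minors, which in turn roll up
# to 1 extra significant - still within the criteria, so this prints True.
print(exit_criteria_met({"significant": 1, "minor": 8, "insignificant": 45}))
```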
Testing in Live – Why you should not!
Production is designed to serve a business in operation. It is being used in anger every minute of the day. It may be internally used, it may be used by high street shops or it may be open to public use such as the internet.
The first problem is that the code that needs to be tested is not fit for live because its quality is not known. So by placing the code live you are compromising the production platform. You can cause code which is running perfectly in production to fail, as common routines are modified.
Testing is designed to break things. We need to understand the points of vulnerability and testing tries to identify these. This means that the application could be caused to crash, which could result in a domino effect and take out other elements of infrastructure or code.
In testing terms, the cost of resolving a defect is at its highest in production. One of the primary reasons for this is that outages which affect the production platform can be disastrous. Imagine 1,000 customer services personnel being unable to work for an hour. At an hourly pay rate of £10.00, the problem has already cost £10,000.00. But what if you have an on-line presence and you lose that for an hour? You have now lost sales revenue for that period of time, and this you cannot recover. Perhaps more damaging is that the perception of your organisation has taken some public damage. The potential new customer went somewhere else and you have now lost their trade for the foreseeable future. The existing client who was returning has now been forced elsewhere.
Think also of the press that can be received through public outages. Organisations that experience loss of service will often find this being reported in the media and this causes immeasurable brand damage.
Another consideration is that the data which exists on your production system is real. The testing therefore cannot modify, add or delete information without careful consideration of the consequences. Corruption of data exposes an organisation under the Data Protection Act but, worse, may invalidate information which is crucial to your business. What happens if a test performs a change of address on a real customer record? The client is potentially lost, along with their business.
A final point is that your system is in all likelihood linked to other organisations. When I first became involved in development, I had a story relayed to me where a developer had unit tested his code in a live environment. His code was a credit-checking piece of software and he used his own details each time he ran the test. As your credit score is negatively impacted each time a check is performed, the developer managed to take his credit rating from positive to negative in a matter of a day. He did not realise he had done this until such time as he tried to make a credit purchase and was refused. Fortunately, after some discussion with the credit reference agency, his rating was restored. The point is that any kind of financial transaction must be performed in a test environment, linking to other test systems. Imagine the damage that re-running a BACS transaction could cause, or payroll being sent out at the wrong time.
Production is very much live. You would not expect a doctor to test a new drug on a human without going through a certain degree of process first. Even then you would move into a clinical trial under strict process.
What about Beta testing, I hear you ask. Yes, there are examples where the software is deemed to have no negative impact on production, where sufficient testing has already been performed as to be confident of its capabilities, and where the software may be released with a “Health Warning” attached. It may be that a select group of users will be asked to perform a trial. But in these instances, the application will have been put through its paces in a test environment.
It is important that users of systems have good experiences. Failure to achieve this results in the loss of users from the system and a negative impact on the reputation of the department and individuals making the release, plus the potential to damage the organisation's brand and revenue-earning potential. All of this should make testing in live a course of action that is avoided.