Thursday 21 August 2008

Test Managers – Balance of Opinions

I am sure that most readers of this blog will instantly know what I mean when I say that different Test Managers arrive at different decisions. There can be many reasons for this: some are harder than others, some are nicer than others, some come from one country and some from another, some are permanent, some are supplied and some are contract, some are new to the role and some are new to the organisation.

The reality is that people are different and our experiences drive us to behave in certain ways. To some extent, those that know us well can predict our reactions and actions. When running a department full of Test Managers with a mix of personalities and capabilities, it becomes important to bring some balance or levelling to this. You don’t want particular Test Managers to be sought out because, with them, the project will always go live.

This means that we need to turn some of those decision-making activities into more of a science and remove the imbalance that can be introduced by emotion or experience. I can’t really supply you with facts and figures to enable you to draw a decision diagram, because it is not that easy, but I will try to point you in what I think is the right general direction.

I would start by breaking the department’s testing functions down into business domains. It is more likely that you can assess what is being released in terms of quality at domain level as a starting point. For instance, when releasing to the web you experience a high degree of success and few calls for support, yet when releasing in Business Intelligence, quality is always an issue and the first run of the month-end batch processes always crashes. This gives an indication of where to start defining some criteria. But there are other areas to be considered as well. It could be that one development agency has a far higher success rate than another, where the volume of defects found is always low, or perhaps when a development agency or department develops using a particular language, the results are always much better than others.

These different criteria can be used to form some basic rules around final exit criteria, for instance, enabling decisions to be made within a set range and ensuring that all Test Managers reach the same conclusions. Looking at the ratio of defects to scripts and comparing this with some analysis of the application being developed, the language of development, the volume of support calls pertaining to the application and the domain of release would provide good statistical evidence for decision-making criteria.

Basically I am suggesting that you use historical data to form basic guidelines that the Test Managers can use. It does not necessarily eradicate the problem, but at least it will enable easy identification of projects that are outside of the norm. The Test Manager must still have the power to state what they think, even if this flies in the face of the statistical evidence, but when doing so they should be able to substantiate their view.
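To make this a little more concrete, here is a minimal sketch of the kind of guideline check I have in mind, written in Python. It assumes you hold a historical defect-to-script ratio and a tolerance band for each business domain; the structure, the function names and all of the figures below are purely illustrative and not taken from any real department.

```python
# Illustrative only: flag projects whose defect-to-script ratio falls outside
# the historical norm for their business domain. The baseline figures below
# are invented for the example.

# Historical mean defect-per-script ratio and tolerance per domain.
BASELINES = {
    "web": {"mean_ratio": 0.15, "tolerance": 0.10},
    "business_intelligence": {"mean_ratio": 0.60, "tolerance": 0.20},
}

def assess_project(domain: str, defects_raised: int, scripts_executed: int) -> str:
    """Return a simple recommendation band based on historical data."""
    baseline = BASELINES[domain]
    ratio = defects_raised / scripts_executed
    deviation = ratio - baseline["mean_ratio"]
    if abs(deviation) <= baseline["tolerance"]:
        return "within historical norm - standard exit criteria apply"
    if deviation > 0:
        return "worse than historical norm - justify any go-live recommendation"
    return "better than historical norm - candidate for early exit, subject to review"

if __name__ == "__main__":
    print(assess_project("web", defects_raised=12, scripts_executed=100))
    print(assess_project("business_intelligence", defects_raised=95, scripts_executed=100))
```

A Test Manager would still be free to argue against the flag, but, as above, they would then need to substantiate that view.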

For projects running with inexperienced Test Managers, or those inclined to shy away from awkward decisions, the decision becomes one of science, which in most cases is going to be correct.

Tuesday 19 August 2008

Empowered Test Managers

The test function plays an important part within an organisation, yet too often it is asked to work in a manner which is restrictive and fails to give it the authority or accountability that it requires or deserves. Strangely, in some instances, even when Test Managers are given the authority, they fail to take it.

So let’s look at the Test Completion Report. This is the opportunity for the Test Team to give a full account of what has occurred during the testing and to make any recommendations. Why is it, then, that we do not include as standard within every report a recommendation on whether the item is suitable for release into production? Part of the answer may be that the Test Manager does not feel comfortable stepping away from the black and white of scripts passing and failing and the associated defect counts. But the Test Manager knows the testing which was due to occur, understands how much has occurred and what level of quality has been achieved. Why then is it felt that this is not a logical and calculated recommendation?

On the flip side, why are Application Support departments not demanding proof of testing and a solid opinion on the level of quality that has been achieved? In most organisations, if a poor quality product is released to production, it is not the members of the project that are impacted by the problems, but those in Application Support. Not only are they going to be receiving calls through to the Service Desk, they are potentially going to be calling out support staff in the middle of the night to recover from the problems. Not only this, but any service levels that the Application Support department are supposed to maintain are potentially being compromised. It is therefore imperative that these two departments work together. It is often the case that Application Support will have more clout within the organisation and will therefore be able to assist in the move to a more authoritative testing department.

Another area where the Test Manager should be given more authority is the exit of Integration in the Small and the entry into Integration in the Large. Two important events occur here. The first is the handover of the development agency’s testing assets and proof. A failure to provide this information is a red flag to the Test Manager and should sound all sorts of alarm bells; it is indicative of a lot of problems about to be experienced during test execution. The second is when the code is first delivered into testing and the first few scripts are run. If these start to fail then there is a second indication of problems with quality. Yet so often the Test Manager is not in a position of authority to insist on the production of test assets from development, or indeed to bounce the code back for rectification of defects. If the Test Manager is engaged at the outset of the project, they should be able to insist on the supply of test assets from development. To avoid the problem of poor quality code being delivered, take smaller drops of code and start them sooner, so that an indication of quality is understood, or add a specialist phase of testing prior to Integration in the Large, where the sole purpose is sign-off of the quality of the code delivered, potentially by smoke/sanity testing the application using a selection of the scripts created.

To summarize, ensure that the opinion of the Test Manager is sought. Don’t let them get away with a statement of scripts run and defects raised, open and closed in the Test Completion Report. Insist that they provide an opinion. Work with Application Support to bring additional credibility to the department. Once this has been achieved you can then start to think about how you ensure that all of the Test Managers in a department apply the same level of reasoning when making recommendations for go live. The subject of another article I think.

Sunday 17 August 2008

T.I.G.E.R – What this acronym means to us!

Transition Consulting Limited, as a group of companies, have a set of values that we expect all of our employees to demonstrate. These are embodied by the T.I.G.E.R acronym.

T = Truthful
It is imperative to us that all members of the company operate in a truthful manner. We need to know that we can rely on what people are saying and that they will be honest with each other and our clients. This is not always easy, but some wise person once said, “Honesty is the best policy”. Well we believe this and have made it part of our policy.


I = Independent
Testing as a discipline needs to remain independent of other functions in order that it can provide an unbiased view on quality. A lack of independence places a testing department under the same constraints in terms of time and cost, and therefore quality can become compromised. We pride ourselves on the fact that Testing is all that we do. We live and breathe the subject and can always be relied upon to act independently.


G = Good Willed
We expect our staff to be good willed because this is a personality trait that we embody as an organisation. As a result we are affiliated with several organisations and as a group contribute charitably each year. We work with some local organisations and some as large as the NSPCC.


E = Energetic
Energy is incredibly important to us. We want our employees to work hard and play hard. We expect them to be passionate about testing and what it involves. We expect them to demonstrate an enthusiasm for their work and not view it as just a job.


R = Reliable
We need to be able to rely on our resources and we expect our clients to rely on us also. Reliability is a cornerstone on which we build, taking care of the basics by ensuring that we can be counted upon to be knowledgeable and dependable, providing value into our organisation and those of our clients.


Not only is the acronym easy to remember, but it is a strong image and one with which all of our employees are happy to associate. From a TCL India perspective, the Tiger is probably even more powerful an image, being so highly revered.

Friday 15 August 2008

SOFTWARE TESTING GLOSSARY (Update)

This glossary is a living post – so will be edited as we come across terminology that is not included. If you have any suggestions or disagree with an explanation, drop us an e-mail and let us know. TCL India offer this as a means of establishing glossaries of your own or as a point of reference. This update includes 10 new definitions.

A…………………………………………………………………………………………………

Accessibility: (Testing) Looks at an application to ensure that those with disabilities will be able to use the application as well as an able-bodied individual would.
Agile: A development method, involving the creation of software, with the contributing parties, including testing, all working on the same item at the same time.
Alpha: (Testing) Testing of an application which is in production, but not available for general use. This is normally performed by users internal to the business.
Analyst: Person who interacts with the business in order to understand and document their requirements for an application or change to an existing one.
Analyst (Test): Person responsible for the preparation and execution of test scripts, recording and progressing of defects, reporting into the Test Team Leader or Test Manager.
Automation: The process of developing a script or program which can be run by software rather than a Test Engineer, to test an aspect of an application. Automation is used to increase the speed of test execution.
B……………………………………………………………………………………………………

Beta: (Testing) Testing of an application which is in production, but not available for general use. This is normally performed by a select group of friendly external testers.
Black Box: The process of testing without understanding the internal workings of an application, relying on inputs and outputs only.
Bug: See Defect
Business Requirement Specification: See Requirement Specification

C……………………………………………………………………………………………………

Case: See Scenario
Code: The software that has been produced by development and is being subjected to testing.
Completion Report: A document produced during the testing and presented to the project manager on completion of the testing activity. The document should detail the tests that have been performed along with the associated results and defects. Some reports may include a recommendation on the application’s suitability for release to production.
Component: The smallest item that is testable or producible. This often refers to a single file of code.
Configuration Management: The means of managing the components required to make a larger item. Applicable to files, documents, data, hardware, software, tools. Understanding of the versions of each component required in order to be able to re-build the larger item.
Criteria: See Entry Criteria and Exit Criteria

D……………………………………………………………………………………………………

Data: Information pertaining to the use of a system or recorded through the use of a system. Data is used in order to test a system, potentially after data cleansing if personal information is involved.
Data Generator: A tool used to generate high volumes of data in order to be able to test many permutations, or to load test an item.
Data Protection Act (DPA): The act that determines how personal data should be used and protected from abuse.
Data Scrambling: The process of altering personal information from data to be used for testing purposes.
Defect: Where the item under test has been found to be inaccurate as a result of testing. Primarily used in association with software, but equally valid for static testing of documentation.
Defect Detection: The means of identifying a defect. Can be a metric used to predict the volume of defects expected during the course of a project, or as a means of looking back at a project to understand where testing needs to be concentrated in future projects of a similar nature.
Defect Removal Efficiency: A metric used to assess the ability of testing to remove defects as they are introduced, during the software development life-cycle, keeping the cost of testing later phases to a minimum.
Defect Turnaround: The time taken from the identification of a defect, through to the point of resolution. Different levels of granularity may be used. e.g. A focus on the time taken by development.
Developer: A person responsible for the development of an application.
Development: The process of producing an application through the production of low level design and code, followed by unit testing and Integration in the Small testing.
Dynamic: Testing which occurs on the right hand side of the V-model, with the application present.

E……………………………………………………………………………………………………

Entry Criteria: The criteria that must be met before a deliverable can enter a phase of testing. This is normally associated with documented test assets and pre-agreed volumes of defects.
Environment: The combination of hardware, software and data as used in development, testing and production. The platform/s upon which the testing occurs.
Error: See Defect
Exit Criteria: The criteria that must be met prior to a deliverable leaving the current phase of testing. This is normally associated with documented test assets and pre-agreed volumes of defects.
Execution: The process of working through a script on the application under test, in the testing environment.

F……………………………………………………………………………………………………

Functional: (Testing) The testing of a product’s function, against requirements and design.
Functional Specification: A document which extracts all of the functional requirements from the requirement specification.

G……………………………………………………………………………………………………

Glass Box: See White Box
Grey Box: Testing performed by testers with some knowledge of the internal workings of an application. See also Black Box and White Box testing.

H……………………………………………………………………………………………………

High Level Design: A design showing the major components and how these will interface to each other, defining the hardware to be used and the software that will be impacted.

I…………………………………………………………………………………………………… 


Integration in the Large: Where the application or applications that have been developed are brought together along with those which have remained unchanged, building a production like system around the application/s. Testing is then applied looking at the communication between the different applications.
Integration in the Small: Where the components of the application that have been developed are brought together along with those which have remained unchanged, building the application or major component of a single application. Testing is then applied looking at the communication between the different components.
Integration: The act of bringing many parts together to form a whole.
ISEB: Information Systems Examination Board. This was historically the board that was used to certify test professionals at either Foundation (Entry Level) or Practitioner Level (Advanced). See: http://www.bcs.org/server.php?show=nav.5732
ISTQB: International Software Testing Qualifications Board. See: http://www.istqb.org/

J…………………………………………………………………………………………………….
K……………………………………………………………………………………………………


Key Performance Indicator: A mechanism built on one or more metrics, which determines a band of acceptable performance and which, over time, is often targeted towards improvement.

L……………………………………………………………………………………………………

Live: See Production
Load: One of the types of performance testing, this looks at testing for the maximum load on the system.
Load Runner: Tool used to performance test one or many applications, to understand how they handle increases in load. See: https://h10078.www1.hp.com/cda/hpms/display/main/hpms_content.jsp?zn=bto&cp=1-11-15-17%5e8_4000_100__
Low Level Design: Definition of exactly how the application/s will be modified or produced in order to meet the requirements and the high level design. This can extend in some examples to elements of pseudo code being defined.

M……………………………………………………………………………………………………

Metric: A measure of an attribute or occurrence in connection with an organisation’s, department’s or function’s performance.

N……………………………………………………………………………………………………..

Non-functional: How an application performs, rather than what it does.
Non-functional Specification: A document which details the non-functional requirements such as performance, security, operational elements.

O……………………………………………………………………………………………………
P……………………………………………………………………………………………………


Performance Centre: A tool used for measuring the performance of an application or series of applications. See: https://h10078.www1.hp.com/cda/hpms/display/main/hpms_content.jsp?zn=bto&cp=1-11-126-17_4000_100__
Performance: Used to describe many types of testing relating to the speed of an application. See Volume, Load and Stress.
Plan: A document produced for a type of testing defining how the testing will be performed, when the testing will be performed and who will be performing it. The test plan lists the test scenarios that will be scripted.
Preparation: The process of generating the test scripts.
Prince2: A project management methodology used by many blue-chip organisations to bring discipline to the software development life cycle.
Priority: The importance of fixing a defect from a business perspective. Defined by business representatives.
Production: The area of a computer network that contains applications which are in use by real users and contains real data. The area that applications are released to on completion of a project.

Q……………………………………………………………………………………………………

Quality: The suitability of an item for its intended purpose and how well it achieves this.
Quality Centre: Tool used to assist with the management of testing, recording and tracking scripts, logging and tracking defects and more. See: https://h10078.www1.hp.com/cda/hpms/display/main/hpms_content.jsp?zn=bto&cp=1-11-127-24_4000_100__

R……………………………………………………………………………………………………

Regression: Retesting of a previously tested program following modification to ensure that faults have not been introduced or uncovered as a result of the changes made.
Requirement Specification: A document normally produced by a business analyst, capturing the needs of the individual in a manner which means that they can be translated into a software solution.
Re-Test: Executing a previously failed test script again, once the associated defect has been resolved.

S……………………………………………………………………………………………………

Safety Critical: Used to identify something whose use has an impact on personal safety, such as medical applications or those used by rescue services.
Scenario: A high level description of how a requirement is going to be tested.
Script: Referable back to the scenario, the script defines each step that must be passed through in order to perform a test, along with the expected results. As the script is executed, results are recorded; if they match the expected result the step is marked as passed, otherwise as failed. A script containing a failure should have a resultant defect raised.
Schedule: A document, similar to a project plan, but detailing the activities associated with testing, when they are due to occur and who will be performing them.
Severity: The importance of a defect to testing and the application. Defined by testers.
Special Interest Group In Software Testing (SIGIST): A group that has been set up by the British Computer Society, made up of people interested in the subject of software testing.
Smoke: A process of proving an application or environment is ready to commence full testing, by running a sample set of scripts testing the primary functionality and/or connectivity.
Software Development Life Cycle (SDLC): The process of taking a requirement and translating it into a fully working application.
Static: The process of testing without the presence of software. Normally refers to the testing of documentation.
Strategy: A document produced at project or programme level, defining what testing is to be performed.
Stress: A form of performance testing, whereby the volume of testing is increased to a point at which the application is deemed to be failing to perform, either due to failure to meet performance criteria or system breakdown.

T……………………………………………………………………………………………………

Technical Architecture Document: Definition of how the business requirements should be translated into a technical solution at the highest level.
Test Bed: A collective term used for all of the test environments used by the test department.
Test Director: A tool used for test management, capture of requirements, scripts and defects, with the ability to manage defects through to resolution. Now replaced by Quality Centre.
Test Maturity Model (TMM): A means of measuring a testing department’s level of maturity in a similar manner to that of the capability maturity model (CMM).
Testing: The process of reviewing an item for accuracy and its ability to complete the desired objective.

U……………………………………………………………………………………………………

Unit: The testing of the application by the developers at a component or modular level, looking at whether the code is performing as expected.
User Acceptance: The means of testing applied, normally by members of the business or recipients of the system, whilst still in the test environment. This looks to ensure that business requirements have been understood by all involved in the application production as well as providing an opportunity to check the business processes that the application will be used in conjunction with.

V……………………………………………………………………………………………………

V-Model: A testing focussed software development life-cycle where the process of creativity occurs down the left side of the V and the process of testing up the right side of the V, where each step down the V has a corresponding point of testing up the right of the V. Begins with a requirement and terminates with User Acceptance Testing.
Version: An alphanumeric means of identifying a particular build of software, or a component thereof. The version changes incrementally to reflect each modification of the software, component, document etc.
Volume: (Testing) A type of performance testing which increases the volume of users on a system, in each cycle, taking the volume up to a prescribed limit, normally exceeding optimum load.

W……………………………………………………………………………………………………

W3C: World Wide Web Consortium – body creating web standards and guidelines. http://www.w3.org/
WAI: Web Accessibility Initiative. See: http://www.w3.org/WAI/
Waterfall: Project Management Method whereby each element of the software development life-cycle occurs sequentially.
Win Runner: A tool that was historically used for test automation. See Automation Centre.
White Box: Testing normally performed by developers, looking at the code and ensuring that each module of code is performing as expected. See Unit

X……………………………………………………………………………………………………

Y…………………………………………………………………………………………………….
Z…………………………………………………………………………………………………….

Wednesday 13 August 2008

The Very First Bug

I saw this a while ago on Wikipedia and included it in our company journal. I was amazed at how many people had not heard the story. I am posting it here now in recognition of a similar article on "help i found a bug in mycode", where the author draws a comparison to a more modern-day version.

The similarity is strange, because only recently I had been recounting to colleagues an experience with my son (10 years old). A storm fly, one of those tiny little things about 1mm by 2mm, had somehow crept into the LCD screen of my monitor and, to all appearances, was crawling all over my document. I called my son over, remarking, “Come take a look at this bug in my screen”. His response was naïve but endearing as he replied, “Wow, Dad, is that a virus?”, much to my delight.
The picture above relates to the following text. “While Grace Hopper was working on the Harvard Mark II Computer at Harvard University, her associates discovered a moth stuck in a relay and thereby impeding operation, whereupon she remarked that they were "debugging" the system. Though the term computer bug cannot be definitively attributed to Admiral Hopper, she did bring the term into popularity. The remains of the moth can be found in the group's log book at the Smithsonian Institution's National Museum of American History in Washington, D.C.”

Monday 11 August 2008

How to Annoy a Test Manager

A little light relief from all the serious blogging, but some of these do really get me angry.

The coding has finished and we need to test – should we be talking to you now?

I know you were expecting the code today, but when do you really need it?

Can you please stop raising defects as we don’t have time to fix that many problems?


We have only got half the time we thought we had for testing so how are you going to get all of your work done?

We know the quality is bad, but we are releasing anyway?


UAT has been dropped because the business can’t free any resources.


The fact that this problem was fixed in an earlier release is not indicative of poor configuration management.


Why do we need developers, support and environment resources to work the weekend as well as testers?


The environment is nearly the same as live, we just don’t know how nearly!


Testing is supposed to make the software perfect. Why hasn’t it worked?


We have spent 20% of our budget on analysis, 60% on design and development and 10% on other items. You have 10% left, what can you do for us?


We are running out of money because the test execution is taking too long, so we are going to let the testers go and the developers can finish the job.


Don’t worry about getting testers. We have lots of people in the business that we can get to test at lunchtimes.


The supplier was not asked to perform Unit testing and we can’t afford to have it done now as it was not in the budget.


Testing is a service and as such we will let you know when we need you.


I know you asked for three environments, but we can only get one. Don’t worry. I am sure everyone can get on it at the same time.


The testing is occurring on the development environment. That way the developers can change things whenever they need to.


The problems in live are fixed out of hours so we can’t test them.


If we automate all of the testing how much time will it save us?


Why do we need testers on board before the testing starts?


The user spilt coffee on their system and it crashed. I bet you didn’t test for that did you?


We have too many high severity defects, so we are going to hold a meeting and see how many we can downgrade?


We need to do some performance testing. There are no non-functional requirements, so let’s just assume that everything must happen in under a second.


Testing is easy. Anyone can do it.


It’s not a bug. It’s a design feature.


The requirements were captured at the start of the project. But don’t read that; it’s way out of date.

Please feel free to add your own by way of comments.

Saturday 9 August 2008

Test Process Structure

Different organisations will have different needs in terms of the levels of test process that are required and it is important that the correct levels are applied. The Testing Maturity Model is a good source of information for this and can be located at http://www.tmmifoundation.org/

TMM suggests that as organisations mature in line with the Capability Maturity Model, the same is occurring within testing; TMM was introduced to apply CMM-type measures to an organisation’s testing capability. In brief there are 5 levels of test maturity. At level 1 testing is undisciplined. At level 2 testing is managed, but still seen as a phase of the SDLC (Software Development Life Cycle) that occurs after development, although at this point process is being put in place on a programme or organisation-wide basis. Level 3 sees testing fully integrated in the SDLC and no longer a single phase, with processes becoming more mature and honed over time. Level 4 sees the testing fully defined, established and measurable, and level 5 looks at optimisation of testing efforts, processes and practices, where costs are controlled and continuous improvement occurs.

What I would like to suggest is that TMM can seem a trifle heavy for some organisations and, unless they are already pursuing CMM, may seem quite foreign. I am not suggesting that it does not have a place, but it focuses entirely on process and not on understanding the drivers of the business itself, which may make it impossible to move up through the maturity levels.

I think that from a more simplistic approach, organisations will follow something more as follows:

Stage 1
Testing is being performed by available resources that are not professional testers. Defects are being recorded in basic form, using a tool like Excel. Some test cases or scripts may be produced.

Stage 2
Testing is being performed by test professionals and they are called in after development has occurred. Test Plans and Scripts are being produced and defects are being logged formally and reported on.

Stage 3
Testing exists as one or more independent functions, each having its own process. These small test teams are aligned to the business areas they serve. Projects now have approaches or strategies produced against them.

Stage 4
The disparate teams have now been pooled to form one function which is independent from the development side of the business. A Test Policy has been produced and testing is involved in all aspects of the software development life-cycle. Metrics and KPIs are being gathered and fed back to the organisation.

Stage 5
Testing is improving in its capability. The quality of releases to production is high and end users have a high opinion of the IT function. Releases to live occur only with approval from the testing function and testing is viewed as a discipline within the organisation. Metrics and KPIs are now giving the business sufficient information to base decisions on preferred development agencies and technologies, demonstrating those which are most successful.

The key is to ensure that your business is improved and enabled by testing and not crippled by a level of process which does not match that of the rest of the organisation. Learn to walk before you run and build over time.

Wednesday 6 August 2008

Role of the Environment Manager

One area that I have regularly come across which needs further clarification is the role of the Test Environment Manager. I have produced this article for those looking to understand exactly what should be expected of an Environment Manager; it could also be used as the basis for a job description or role profile.

Test Environments are required in order to enable test execution to occur. The Test Environment Manager is responsible for the provision of the test environment, its configuration status, its stability and its maintenance. They also provide administrative services such as backups and purging at the request of the project, monitor usage of the environments, and supply support throughout test execution, including out-of-hours working if required. They will be expected to manage change to the test environments and keep the programme and testing informed of status. They must be able to give advance warning of downtime of any component of the test environment, which may cover multiple platforms. Duties are predominantly based around the organisation’s infrastructure but can extend to cover interfacing with 3rd party organisations which are offering equipment to form part of the project environment.

The environment manager is responsible for ensuring the provision of suitable test environments for all aspects of the testing of the core system. A suitable test environment is one that meets the requirements for the phase of testing and those requirements will be documented and provided to the Environment Manager by the Test Manager.

They shall ensure that the interfacing systems’ test environments are available and of a known and controlled configuration. For the core system, the environment should be available two weeks prior to test execution commencement, giving time for any difficulties to be resolved. The environment must remain available for the duration of the project. As many peripheral systems as possible should be delivered on similar timescales.

Thought must be given to the possibility of different environments being required, where phases of testing overlap, or where the requirements of the testing phase are such that the provided environment needs to be changed or upgraded. Each time a new environment is made available, this should also occur two weeks prior to test execution occurring in the new environment. The Environments will be placed under change control and will always be of a known state of configuration.

No upgrade of hardware or software may occur to a test environment without the approval of the Environment Manager. The Environment Manager must place change control over the environments and maintain that level of control. This may be a formalised mechanism, but as a minimum it requires that the Environment Manager’s approval has been received prior to the change occurring. All changes must be communicated to the following people as a minimum:

1. Development Manager
2. Testing Manager
3. UAT Manager
4. Project Manager

It is expected that this list will grow as time passes and the individual users of the system become known. It is felt that the above will form a minimum conduit for such communications until such time as all individuals are known.

A record of the configuration should be maintained for each environment or element of an environment if multiple environments are required in order to make a whole. The record should detail the hardware build that has been provided by Technical Operations, including versions of operating systems and other utilities provided as part of the basic provision. Over and above this, each item of software placed onto the environment for test, shall have its version recorded, so enabling a clear picture of the configuration at any given time. Any change to any aspect of the configuration should be approved under the change control mechanism and logged in the Configuration Record.
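To illustrate what such a Configuration Record might look like in practice, here is a minimal sketch in Python. The field names, the environment name and every example value are invented for the illustration; a real record would live in whatever configuration management tooling the organisation already uses, but the shape of the information would be much the same.

```python
# Illustrative Configuration Record for a single test environment.
# All field names and example values are invented for this sketch.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class SoftwareItem:
    name: str
    version: str
    installed_on: date

@dataclass
class ConfigurationRecord:
    environment: str
    hardware_build: str                    # as provided by Technical Operations
    operating_system: str
    utilities: list                        # utilities in the basic provision
    software_under_test: list = field(default_factory=list)
    change_log: list = field(default_factory=list)   # approved changes only

    def apply_change(self, description: str, approved_by: str, when: date) -> None:
        """Record an approved change; ad hoc changes are not permissible."""
        self.change_log.append(
            {"date": when, "change": description, "approved_by": approved_by}
        )

record = ConfigurationRecord(
    environment="System Test 1",
    hardware_build="TechOps build 4.2",
    operating_system="Windows Server 2003 SP2",
    utilities=["backup agent 1.1"],
    software_under_test=[SoftwareItem("Billing", "2.3.0", date(2008, 8, 1))],
)
record.apply_change("Billing patched to 2.3.1",
                    approved_by="Environment Manager", when=date(2008, 8, 6))
```

Any change that does not pass through something like apply_change is exactly the kind of ad hoc, unapproved alteration the process above forbids.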

Should the software under test be configurable, a document with its own version control shall be supplied to the Environment Manager so that they can use this as a state of software configuration. If changes are required to that configuration, these should be recorded in an upgraded version of the document and the Configuration Record updated to reflect that this has happened.

Ad hoc changes to the configuration, be that hardware, software or software configuration, are not permissible. Alteration to the configuration must be controlled and the configuration must always be maintained in a known and managed state.

The Environment Manager will ensure that there is support for the system during all times of test execution, be this late at night or over weekends and bank holidays. They will make known a list of all the different elements of the environments and the contact names and numbers of the people who support them.

A plan shall be maintained by the Environment Manager of who is using which environment, for what purpose and at what time. This needs to be maintained so as to ensure that changes in environment usage are managed and controlled, and so that if multiple environments are required, their number can be kept to a minimum by intelligent employment of the environment resources available.

The Environment Manager, as a result of the change control mechanism, will be able to notify the users of the test environments of any outages. These outages shall be recorded in an outage log, so that upon project completion the details of outages can be formally reported. This will also enable monitoring of any problems that may occur in this area and be used to aid resolution of repetitive issues.

As part of the standard weekly output, the Environment Manager will distribute the following information.

1. Planned usage for the week.
2. Planned outages for the week.
3. Planned changes for the week.
4. The statement of configuration for each environment.
5. Planned Backups

The last item on the list is backups. These shall be available to all testers upon request, and shall be arranged and recorded by the Environment Manager.

I hope that this is found to be useful and informative.

Saturday 2 August 2008

Importance of Accurate Test Environments

Look - my new test bed. What a brilliant image! How many of us go here for our next test environment.

Do people realise the importance of the environment to the tester? I don’t think they can. I have spent most of my testing life working with make-do arrangements and being expected to test to high standards. I had a conversation with a fellow tester today, who is currently working as an Environment Manager and he and I were both moaning about the problem.

So what happens? Firstly all of the investment on a project is in the production equipment. As the project is creating application X for the first time, the to-be production environment can be used for testing purposes. It is ideal in this situation because it means that we can test on the most suitable infrastructure possible – what it will run on in live.

The problem comes the next time the application is updated. There was no investment in test infrastructure as part of the original project – they didn’t need it. So now the update project has to justify a massive expenditure on test equipment, which could potentially jeopardise the project’s financial benefit. This step is therefore avoided and the hunt begins. We need to find a piece of kit that can be used to test the application. Something which is similar to that in production would do. Alright, something which resembles the production kit, if you squint at it whilst wearing dark glasses. The good news comes through. Kit has been found and, once the rust is scraped off, it is found to be an earlier version of the kit in production. The RAM is reduced, the processor is 2 years older than that which is in production and the operating system has fallen out of support. Never mind, it is all we have. Then the test manager explains that he needs three environments to be able to complete the testing on schedule. Don’t worry, we can whack in some more RAM and increase the disk space. It’s still not half as good as production, but we can split it into three for you.

This is seen as a pragmatic solution by all involved and the test manager must now do his best. (Agggghhhhh!!!!!!!). Let me explain my frustration. As soon as you compromise the test environment, you compromise the testing. So many times defects are raised which are directly attributable to the environment; if that environment is not the same as production, time is going to be spent resolving defects which will not exist in production, and potentially defects which would occur due to the production environment will not be located during testing. Certainly in terms of release testing, the process is likely to be different, so effectively the results during testing are false and cannot be relied upon. The lack of adequate test environments becomes a project risk in its own right and should be logged in the project risk register accordingly.

It is critical that the testing environment is as close to production as possible and that any deviations are fully understood. Only with the differences between production and testing environments being known and measurable can you rely on the testing results.

Take performance testing. If you are running with less RAM, less disk space and less processing power, how will this manifest itself in the results? As for security testing, the operating system version becomes paramount, as the level of inherent security is likely to change. Know and document the differences, and if this cannot be achieved be careful about signing off the product as fit for release.

As a recommendation to all project managers, ensure that you leave an inheritance of test infrastructure for the following projects. Placing a new piece of equipment live is not sufficient. Consideration must be given to ongoing support, testing, development and training and what environments are going to be used for these.

Grading Exit Criteria

For the sake of this article, let us assume that we have 5 different grades of defect, starting with insignificant, going on to minor, significant and severe, and ending in critical. When we come to exit one phase of a project and enter the next, we need to recognise that criteria will be set by the test manager which need to be achieved. It is normal that the exit criteria from one phase become the entry criteria for the next phase. For instance, a Test Manager for “Integration in the Large” testing will set his entry criteria, which by default become the exit criteria for the phase before. This remains true up to the point of release to live.


What I would like to draw attention to are two key points. The test manager who says “I am not going to have any defects remaining open at the point in time I go live” is living in “cloud cuckoo land”. This is the test manager that wants to be able to test forever and release perfect code. This does not happen. The reality of business is that certain levels of risk become acceptable in order to deliver the project to live, meet the requirements of the business and deliver the business benefits that were used to justify the project in the first instance.


So the next question becomes one of levels of risk and what is likely to be acceptable. This will depend on the nature of the project. One that involves safety critical applications is likely to have a far higher requirement for quality than one which is being used by 2 people in a small business. One which is exposed to the public or clients is also going to need to be of a significantly higher level of quality than one which is used internally only. The skill of the test manager is to assess the application and its use, and to define the level of defects that is going to be tolerable. As previously discussed in http://tcl-india.blogspot.com/2008/06/entry-and-exit-criteria.html, the Project Manager should be bought into these levels in order to ensure support later in the project.


Now we need to look at the defects that are going to be acceptable. A common misconception is to set the level of critical defects at 0, be that priority or severity, while placing no restraint on the level of insignificant defects. This could not be further from the truth. Whilst 1 insignificant defect is not going to stop the release, there comes a volume of insignificant defects that makes the release of the application unacceptable.


I would suggest that we concentrate initially on the severity of defects. We need to understand, proportionately, that some volume of insignificant defects is equivalent to 1 minor, some volume of minors to 1 significant, and so on. This will change depending on the application's use, so here are some suggestions for different application types.


Safety Critical
Proportions: 1 Critical = 2 Severe : 1 Severe = 3 Significant : 1 Significant = 5 Minor : 1 Minor = 10 Insignificant
Final Exit Criteria: 0 Critical : 0 Severe : 0 Significant : 5 Minor : 10 Insignificant


Public/Client Facing
Proportions: 1 Critical = 3 Severe : 1 Severe = 5 Significant : 1 Significant = 10 Minor : 1 Minor = 20 Insignificant
Final Exit Criteria: 0 Critical : 0 Severe : 3 Significant : 10 Minor : 20 Insignificant


Internal Consumption (20 + Users)
Proportions: 1 Critical = 4 Severe : 1 Severe = 7 Significant : 1 Significant = 15 Minor : 1 Minor = 50 Insignificant
Final Exit Criteria: 0 Critical : 1 Severe : 5 Significant : 10 Minor : 40 Insignificant


Internal Consumption (0 to 20 Users)
Proportions: 1 Critical = 5 Severe : 1 Severe = 10 Significant : 1 Significant = 20 Minor : 1 Minor = 100 Insignificant
Final Exit Criteria: 0 Critical : 2 Severe : 5 Significant : 10 Minor : 50 Insignificant



Please bear in mind that these are indicative and that the best solution is for the test manager and other members of the project team to determine the levels between them.
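For anyone who would like to turn tables like these into a mechanical check, here is a minimal sketch in Python. It simply applies the Public/Client Facing final exit criteria from above to a set of open defect counts; the function and variable names are illustrative, and a fuller version could also use the proportions to convert lower-severity counts into higher-severity equivalents.

```python
# Illustrative exit criteria check using the Public/Client Facing thresholds above.
EXIT_CRITERIA = {
    "critical": 0,
    "severe": 0,
    "significant": 3,
    "minor": 10,
    "insignificant": 20,
}

def meets_exit_criteria(open_defects: dict) -> bool:
    """True if the count of open defects at each severity is within the agreed limit."""
    return all(open_defects.get(severity, 0) <= limit
               for severity, limit in EXIT_CRITERIA.items())

open_defects = {"critical": 0, "severe": 0, "significant": 2, "minor": 8, "insignificant": 25}
print(meets_exit_criteria(open_defects))   # False - too many insignificant defects remain
```

Note that in the example the release fails purely on the volume of insignificant defects, which is exactly the point made above.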

Testing in Live – Why you should not!

As a tester I am often asked why we can’t test in live. There are so many positives that can be gained from it that it surely makes sense. After all, the environment is exactly the same as live, the data is real and the volume of users and traffic is real.

Production is designed to serve a business in operation. It is being used in anger every minute of the day. It may be internally used, it may be used by high street shops or it may be open to public use such as the internet.

The first problem is that the code that needs to be tested is not fit for live because its quality is not known. So by placing the code live you are compromising the production platform. You can cause code which is running perfectly in production to fail, as common routines are modified.

Testing is designed to break things. We need to understand the points of vulnerability and testing tries to identify these. This means that the application could be caused to crash, which could result in a domino effect and take out other elements of infrastructure or code.

In testing terms, the point at which the cost of resolving a defect is highest is in production. One of the primary reasons for this is that outages which affect the production platform can be disastrous. Imagine 1000 customer services personnel being unable to work for an hour. At an hourly pay rate of £10.00, the problem has already cost £10,000.00. But what if you have an on-line presence and you lose that for an hour? You have now lost sales revenue for that period of time and this you cannot recover. Perhaps more damaging is that the perception of your organisation has taken some public damage. The potential new customer went somewhere else and you have now lost their trade for the foreseeable future. The existing client who was returning has now been forced elsewhere.

Think also of the press that can be received through public outages. Organisations that experience loss of service will often find this being reported in the media and this causes immeasurable brand damage.

Another consideration is that the data which exists on your production system is real. The testing therefore cannot modify, add or delete information without careful consideration of the consequences. Corruption of data exposes an organisation to the Data Protection Act, but worse, may invalidate information which is crucial to your business. What happens if a change of address occurs which is a test? The client is potentially lost, along with their business.

A final point is that your system is in all likelihood linked to other organisations. When I first became involved in development, I had a story relayed to me where a developer had unit tested his code in a live environment. His code was a credit checking piece of software and he used his own details each time he ran the test. As your credit score is negatively impacted each time a check is performed, the developer managed to take his credit rating from positive to negative in a matter of a day. He did not realise he had done this until such time as he tried to make a credit purchase and was refused. Fortunately, after some discussion with the credit reference agency, his rating was restored. But any kind of financial transaction must be performed in a test environment, linking to other test systems. Imagine the damage that re-running a BACS transaction could cause, or payroll being sent out at the wrong time.

Production is very much live. You would not expect a doctor to test a new drug on a human without going through a certain degree of process first. Even then you would move into a clinical trial under strict process.

What about Beta testing, I hear you ask. Yes, there are examples when the software is deemed to have no negative impact on production, where sufficient testing has already been performed as to be confident of its capabilities, and the software may be released with a “Health Warning” attached. It may be that a select group of users will be asked to perform a trial. But in these instances, the application will have been put through its paces in a test environment.

It is important that users of systems have good experiences. Failure to achieve this results in loss of users from the system, a negative impact on the reputation of the department and the individuals making the release, and the potential to damage the organisation's brand and revenue-earning potential. All of which should make testing in live a course of action that is avoided.