
Thursday, 26 June 2008

Test Script Storage

Anything worth storing should be stored in a way that makes retrieval easy. If we look at a library, everyone is familiar with the process: find the section dealing with your subject matter and then search for the book by author. It has a proven track record and we are all able to go into a library and find what we are looking for (as long as it is there).

Test scripts are no different. As testers we generate vast volumes of scripts and need to be able to re-use them. It strikes me as odd, therefore, that scripts are so often stored against a project. There may be subdivisions within the project structure for each of the applications being tested, perhaps even a definition of which version the scripts were written against. Indeed, if the testers are a constant from one phase of the project to the next, the scripts are likely to be re-used.

But for each project there is normally a primary application being tested, and the others are interfacing systems. The scripts generated to deal with the interfaces do not necessarily cover the entire application. The first problem occurs when someone outside of the project needs to test the primary application. If they are unaware of the project name, the chances are that they will produce their own set of scripts. We now have a duplication of effort, and each new script that is written is potentially wasted effort.

The second problem is that the scripts written to test a change to an interfacing system will not be stored in a way that is usable or locatable by those testing it as a primary application. This creates two further issues: the test suite for the interfacing system is now out of date, and the regression pack has not been updated to reflect the changes, so future changes may introduce problems into the new code which will not be identified. At best, the difference in version may alert the testers to the fact that a change has occurred and time will be spent trying to locate the scripts; more likely, they will find out what the change was and generate their own scripts.

I believe the way to avoid this is to ensure that scripts are stored not by project, but always by application. The application should then be divided into a series of suites, one for each version. Lastly, a regression pack should exist as a permanent fixture. This allows scripts to be stored as deltas to the original suite, and as each new version of the application is developed, the scripts can be reviewed for those which would best augment the regression pack. It may be prudent to have each version contain all scripts, as some will have been modified or removed from the test suite.
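
By way of illustration only, a structure like this is trivial to navigate programmatically. The folder names and the helper below are hypothetical; this is a minimal sketch assuming the scripts sit on a shared drive, not a description of any particular tool.

```python
from pathlib import Path

# Hypothetical layout (names made up for illustration):
#   test_scripts/
#       billing_engine/
#           v1.0/
#           v1.1/
#           regression/
def script_folder(root, application, version=None):
    """Return the folder holding an application's scripts.

    With no version given, the permanent regression pack is returned.
    """
    base = Path(root) / application
    return base / (version if version else "regression")

# Anyone on any project can locate the scripts without knowing a project name.
print(script_folder("test_scripts", "billing_engine", "v1.1"))   # version suite
print(script_folder("test_scripts", "billing_engine"))           # regression pack
```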

This means that anyone, on any project, can look at the applications they are working with and know without doubt where to look for the scripts. I can only see one argument against this, which is around end-to-end tests and UAT, where the focus is not on the application but on the business process. In such instances these scripts may need to be stored against the project.

To my way of thinking, this is symptomatic of testing being seen as a service to a project; the need is to satisfy the project's needs by storing everything in a very project-centric manner. The reality is that this costs organisations a vast amount of time and money, as scripts are perpetually re-generated to meet the current requirement.
The situation becomes worse where large volumes of external resources are involved. Consistency suffers, knowledge leaves the client organisation, and the likelihood of finding a previously written set of scripts drops dramatically.

Lastly, the regression pack is key to ensuring the stability of the production platform, and anything which compromises the regression pack undermines the efficiency of the business.

So – to summarise, ensure that test scripts are written and stored by application wherever possible. Break down the scripts under the application folder by version, and have a generic regression pack sub-folder at either application level or version level.

W3C – 29 out of 30 Sites Fail

W3C makes the following statement:

“W3C primarily pursues its mission through the creation of Web standards and guidelines. Since 1994, W3C has published more than 110 such standards, called W3C Recommendations. W3C also engages in education and outreach, develops software, and serves as an open forum for discussion about the Web. In order for the Web to reach its full potential, the most fundamental Web technologies must be compatible with one another and allow any hardware and software used to access the Web to work together. W3C refers to this goal as “Web interoperability.” By publishing open (non-proprietary) standards for Web languages and protocols, W3C seeks to avoid market fragmentation and thus Web fragmentation.”


So how come so many sites are not following these standards? As a recent piece of work, we looked at 30 random websites and checked them for W3C compliance. We found that only 1 of the 30 passed the check.

What was perhaps more surprising was that around one third of the sites reviewed had an error count of less than 30. Would it not be reasonable to expect that an organisation so close to compliance with the foremost standards on the web would go the extra mile and achieve it? I must therefore assume that these organisations are unaware that they are nearly W3C compliant and are developing their websites in ignorance, relying solely upon the development agencies to abide by their own standards, some of which may happen to coincide with W3C.

So what problems are organisations that are not W3C compliant going to face? Foremost, the transfer of development from one agency to another is going to become more complex. Rather than being able to transfer code that is well written and understood by many, the site owner may become tied to a particular development agency, as they are the only ones that understand the code. Rebuilding a website completely is often far too expensive to consider, resulting in a reliance on one particular supplier.

Secondly, these standards, once employed, facilitate more effective and efficient crawling by web robots, which gather the information that search engines use. Poorly written code translates to hard work for robots and poor understanding by search engines, lowering the chances of the site being identified and reducing traffic to the site. Reduced traffic means reduced sales.

Thirdly, users with disabilities rely on assistive tools to surf the web, and those tools are hindered by poor code, making the site more difficult to use. We have already discussed elsewhere in the blog the importance of making web usage easy, yet here we find another example of poor user experience. Some organisations have even had lawsuits filed against them for failing to meet their obligations to disabled users.

Lastly, the site is less likely to transfer to other platforms used to surf the web, such as mobile phones and handheld devices. This again restricts the use of the site, barring individuals who do not come online using the most common mechanisms.

My last point on W3C for the moment, so as not to be taken as a complete hypocrite, is that when I checked a couple of blogs, including my own, the volume of errors was in the hundreds. I am not yet certain whether this is something that can be remedied, but I can assure you that we will be looking at it and will let you know the results.

In summary, get your site W3C checked. Know that you are giving all users the best possible chance to use your site, have a good experience and possibly even generate a sale or two.
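
If you wanted to automate the kind of spot check described above, the W3C's markup checker can be queried over HTTP. The sketch below is illustrative only: it assumes the public Nu HTML Checker endpoint at validator.w3.org/nu and its JSON output, and the sites in the list are placeholders rather than the ones we reviewed.

```python
import json
import urllib.parse
import urllib.request

# Minimal sketch: ask the W3C Nu HTML Checker to validate a page and
# count the messages it classes as errors. Endpoint and parameters are
# assumed from the public checker at https://validator.w3.org/nu/.
def w3c_error_count(url):
    query = urllib.parse.urlencode({"doc": url, "out": "json"})
    request = urllib.request.Request(
        "https://validator.w3.org/nu/?" + query,
        headers={"User-Agent": "w3c-spot-check/0.1"},
    )
    with urllib.request.urlopen(request) as response:
        messages = json.load(response).get("messages", [])
    return sum(1 for m in messages if m.get("type") == "error")

# Hypothetical sample of sites to spot check.
for site in ["http://www.example.com", "http://www.example.org"]:
    errors = w3c_error_count(site)
    print(site, "PASS" if errors == 0 else "FAIL ({} errors)".format(errors))
```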

Thursday, 19 June 2008

Middleware Regression Testing

I have been working for a client on an interesting problem recently, which I thought I would share on the blog. The problem has been around the regression testing of an item which is, by nature, intrinsic to many other projects. The item is classed as middleware, providing a translation between different systems, and it is being relied upon increasingly, with in excess of 50 systems now interfacing. Most projects now require the middleware to be updated, and the volume of change is placing the regression testing under increasing pressure as more releases occur.

The problem is amplified because the projects work individually, so the middleware changes are made independently. A group of projects is collated to form a release and, once normal testing is completed, the middleware changes are converged to form a single drop of code, which is then passed into the regression testing phase.

The first mistake has been made. The projects are not taking accountability for testing the converged code and in some instances have even been disbanded before this testing occurs. At this point the release has lost the testers who understand the end-to-end function, and more pressure is applied to the regression test team to trace the scripts from the project, interpret them and generate new scripts. The testing of the converged code is now taking up large amounts of the already pressured regression testing window. This needs to be pushed back to the projects and the convergence owned by them.

By removing the testing of the converged code from the regression process, more time is made available and full regression can be performed, but this does not provide a future-proof solution. The same window is now being fully utilised, and as the volume of change grows over time, the regression pack will grow and the window will become too small again.

This is an ideal opportunity to make the most of an automated solution. If we look to industry standards, automation makes a test roughly seven times faster to execute but seven times longer to write. Applied to the two-week (ten working day) regression window described below, this means the testing duration, once automated, can be reduced to around 1.5 days. Now we have a regression test that is only performing regression, that can be executed swiftly and that provides scope for future growth.

The testing environments were another level of complexity. Whilst each project completed its testing with all the required components, systems and applications, the regression team, within their two-week window, had to acquire their own equivalent. This was another drag on time but, more importantly, the regression was sometimes being performed on an environment that was incomplete. The result has been to make the regression middleware environment available to the projects for the convergence testing, but to utilise the projects' interfacing systems. To future-proof further, the regression testing will be moved onto a pre-production environment.

There were other aspects to this problem, but those listed were particularly pertinent. The following feedback was received from two sources as a result of the work and presentation of the findings:

"I was delighted that your slides prompted so much discussion, even if it made your presentation difficult, and that by all accounts we have got the main individuals on board."

"I was very appreciative of the clear way you presented where we are and your proposals for going forward, together with a clear capture of issues that are spoken about in lots of quarters but never pulled together into a single picture. Great."

In summary:
- Regression testing should encompass the entire system.
- Projects should retain responsibility past the point of release to production.
- Automation is ideally suited to regression testing.
- Testing should always occur on as complete an environment as possible.

Monday, 16 June 2008

Dealing with Delays Impacting Software Testing

We have all been there, on a project where the inevitable has happened. You guessed it: the code is late into testing, delivered at the 11th hour. Entry criteria have come under threat and possibly been ignored completely. Your test environment has been delivered but remains unproven, as the code has not been available. The Project Manager has been harassed by the business stakeholders over the development delay and is not interested in your problems. IT Management are pressing to see the project delivered on time and, to top it all, Marketing have arranged for a campaign launch on the prescribed delivery date. As the Test Manager, you are now the primary obstacle to go-live; the success of the project is resting on your shoulders. Oh yes, and your six-week testing window has been reduced to four.

This is the stress (no pun intended) of the test execution phase. Not only are you now required to think out of the box in order to complete the testing, but you must also be thinking at a far wider level than just testing. There are actions that can be taken at a project level which can make a massive difference to the work of the testers.

Let's start by looking at this from the wider perspective. I remember a situation on a very early project I was managing: sat in a project progress meeting, being asked how we could test with an execution window reduced from four weeks to two, I dug my heels in and refused to budge. The result was a separate meeting immediately afterwards with the PM, where I was instructed that this was not the right course of action and given some early tuition in the art of testing.

Back to our problem. There are several actions that can be taken at project level to help with the situation, and it may be possible to assist the PM by making some recommendations. (1) Does the whole application need to go live? Are there any aspects of the application that could be put live as part of a second release, reducing the scope of the testing required? (2) Can the release be made to internal users only? This minimises the risk of damage in the event of production defects, meaning that exit criteria may be reviewed. (3) Can additional development effort be applied to recover time whilst still in development?

Regardless of the answers at a project level, the following can be applied by the Test Manager to handle the situation from within testing:

(1) Insert the testers into the development team and increase the unit and integration in the small testing. Improve the quality before the code hits testing, reducing the volume of defects found and therefore the duration.
(2) Consider taking components of the application that are finished prior to the arranged delivery date. It is likely that only certain parts of the application are causing the delay, not the entirety of the application. Bringing some parts in early increases the testing window and enables recovery of some of the lost time.
(3) Apply a risk-based testing technique (see the sketch after this list). Test the high-risk elements of the application first and work through the testing in order of risk. When time runs out, this should mean that only the items of lower risk have been omitted.
(4) Increase the hours being worked by the team. Look at options around overtime and weekend working. If using offshore capabilities, then look at working two days in one. (A word of warning at this point: if the testers are working, they need to be supported by the environment support staff and the developers. Testing alone will only increase the flow of defects, making it harder for development to keep up, and an environment issue could stop all out-of-hours work.)
(5) Consider overlapping some of the testing phases. For example, if UAT is being run as a separate phase after functional testing, look at overlapping some of the UAT with the functional testing that is occurring.
(6) Ensure that there is a high focus on defect prioritisation by the business. Make sure the developers are fixing what needs to be fixed first. (Don’t ignore severity at this point.)
(7) Monitor defect turnaround. If the development has arrived late, it is indicative of problems, and a slow defect turnaround will cripple the project; testing may complete, but the exit criteria will have been compromised.
(8) Can more environments be made available? There is likely to be a requirement for multiple environments already, but if you begin overlapping test phases, functional with non-functional with UAT, then the number of environments needed may increase.
(9) Carry out a review of the exit criteria for the project. Bear in mind that these were set prior to the problems occurring and, although they are the desired outcome, some compromise may need to be reached. Work out what is acceptable and don't forget that if coverage is reduced, the number of defects found is indicative of only a percentage of the testing; i.e. 80% coverage means that you have possibly only discovered 80% of the defects, and it is prudent to assume that 20% remain unfound.
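
To make the risk-based ordering in point (3) concrete, here is a minimal sketch of working through a script backlog in risk order. The likelihood-times-impact scoring and the example tests are hypothetical, purely to show the idea.

```python
# Minimal sketch of risk-based ordering: score each test by
# likelihood x impact and execute in descending order, so that if the
# window closes early, only the lowest-risk items are left untested.
# The tests and scores below are invented for illustration.
tests = [
    {"name": "payment capture",  "likelihood": 3, "impact": 5},
    {"name": "order history",    "likelihood": 2, "impact": 2},
    {"name": "login",            "likelihood": 4, "impact": 5},
    {"name": "marketing banner", "likelihood": 1, "impact": 1},
]

for test in sorted(tests, key=lambda t: t["likelihood"] * t["impact"], reverse=True):
    print(test["name"], "risk =", test["likelihood"] * test["impact"])
```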

Actions (2), (3) and (5) above all increase risk in some manner. Ensure that the project and stakeholders have agreed to these risks and that, where possible, mitigating actions are in place.

To summarise, there are many actions that can be taken within testing to deal with a slippage whilst maintaining the original delivery date. Don't forget the project-level elements that can make a difference. I am sure there are more, but do ensure that all aspects are being looked at and only then start to compromise on the testing.

Thursday, 12 June 2008

Static Testing - Do you have a requirement?

Static Testing is the first form of testing that can be applied during the software development life-cycle. Once the requirement document has been produced, it should be checked against the 8 points identified below. This will ensure the removal of defects prior to other departments or resources becoming involved, and therefore minimise costs.

60% of all defects are attributable to the requirement specification.

The 8-point check applied in order to perform static testing against a requirement document is as follows (a simple recording sketch follows the list):
- Singular: One requirement that does not refer to others or use words like "and".
- Unambiguous: Not open to more than one interpretation. Clear and easy to understand.
- Measurable: Avoids the use of words like "instant" or "approximately". Specifies units, such as hours, minutes and seconds.
- Complete: The requirement is not lacking information or supporting data.
- Developable: The developers will be able to implement the requirement.
- Testable: The testers will be able to test the requirement.
- Achievement Driven: A tangible benefit is associated with the requirement.
- Business Owned: A member of the business owns each requirement, providing a point of reference and approval.
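
Purely as an illustration of how the check might be recorded, the sketch below captures the outcome of the eight points against a single requirement and reports the failures. It is not a tool we use; the structure and the example requirement are made up.

```python
# Illustrative sketch only: record the outcome of the 8-point check
# against a single requirement and report which points failed.
CHECKS = [
    "Singular", "Unambiguous", "Measurable", "Complete",
    "Developable", "Testable", "Achievement Driven", "Business Owned",
]

def review(requirement_id, results):
    """results maps each check name to True (passed) or False (failed)."""
    failures = [check for check in CHECKS if not results.get(check, False)]
    status = "PASS" if not failures else "FAIL: " + ", ".join(failures)
    print(requirement_id, status)

# Hypothetical review of one requirement.
review("REQ-042", {
    "Singular": True, "Unambiguous": False, "Measurable": True,
    "Complete": True, "Developable": True, "Testable": True,
    "Achievement Driven": True, "Business Owned": False,
})
```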

Within the testing discipline, 60% of defects are believed to be attributable to the requirement specification. This means that one of the largest contributors to poor software quality can be remedied at project initiation. It is also widely accepted that for each phase later in the software development life-cycle that a defect is found, it costs 10 times more to resolve. In the simplest form, this means that a defect costing £1 to resolve at the point the requirement is defined will, if missed until the resulting application is live, cost 10,000 times more. This is due to the effort that will have been spent designing, developing and testing something which was wrong, but the real cost comes when a problem in live causes an outage affecting large volumes of users, perhaps even stopping them working. Not only does this impact the individual but, dependent on what they are doing, it could stop income to other parts of the organisation: Sales, Billing, Payroll. All of these become disasters and, if in the public eye, can lead to bad press for the organisation.
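
The 10,000 figure simply comes from compounding the tenfold increase across each phase between requirements and live. A rough worked illustration, assuming a nominal requirements, design, development, test, live progression and a £1 baseline:

```python
# Rough illustration of the compounding cost of a requirement defect:
# each later phase multiplies the cost of resolution by roughly ten.
# The phase list and the £1 baseline are nominal assumptions.
phases = ["requirements", "design", "development", "test", "live"]
cost = 1  # £1 to fix while the requirement is being defined

for phase in phases:
    print("{:<12} £{:,}".format(phase, cost))
    cost *= 10
# requirements £1, design £10, development £100, test £1,000, live £10,000
```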

It is 10,000 times cheaper to fix a defect in the requirement compared to live!

We are trying to make static testing as easy and affordable as possible for organisations to achieve. Without the complexities of billing and non-disclosure, the document can simply be e-mailed to either of the contacts detailed below. Our pricing is fixed, so there are no surprises along the way.

Grant.Obermaier@TransitionConsulting.in or Mick.Morey@TransitionConsulting.in


Testing will be scheduled and will take between 5 and 30 working days. If a faster turnaround is required, please specify any deadlines and we will advise on achievability.

Early Defect Detection

They say that it is the early bird that catches the worm. The same is true in software testing.

The process starts with a requirement, and from this point onwards there is gradually more and more effort applied to the creation or modification of an application. More people become involved and the cost of the solution increases. This is the natural course of events. It is therefore a natural conclusion that if a mistake is made in the requirement, the cheapest time to fix the problem is as the requirement is being defined, or immediately afterwards. This keeps the cost to a minimum. The same logic applies to design, development and test. Yes, even testing has the ability to insert defects, through badly written scenarios or scripts, or by reporting defects that are not real.

Defects are introduced throughout the software development lifecycle and the art of testing is to find as many of them as possible at the point they are inserted. It is widely recognised that there is a parabolic curve of defect insertion: the starting point is the requirement specification, which inserts 60% of the defects, and the curve terminates with live or production, where the intended result is to find 0 defects. On completion of the project, the tester should report the defect detection efficiency. This looks at understanding, for each defect, where it was inserted and where it was detected. A perfect test process would identify each defect as it is inserted. This is highly unlikely, and the reality is that some defects will be found in later phases of testing, or indeed in live.
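
As a sketch of the sort of report I mean, the matrix below records, for each defect, the phase in which it was inserted and the phase in which it was found, and works out what proportion was caught at source. The phase names and counts are invented for illustration.

```python
# Sketch of a defect detection efficiency report: for each phase,
# what proportion of the defects inserted there were caught in the
# same phase? The counts below are invented for illustration.
inserted_and_found = {
    # (phase inserted, phase found): count of defects
    ("requirements", "requirements"): 30,
    ("requirements", "system test"): 15,
    ("requirements", "live"): 5,
    ("development", "development"): 20,
    ("development", "system test"): 10,
}

phases_inserted = {phase for phase, _ in inserted_and_found}
for phase in sorted(phases_inserted):
    total = sum(c for (ins, _), c in inserted_and_found.items() if ins == phase)
    caught_at_source = inserted_and_found.get((phase, phase), 0)
    print("{}: {:.0%} caught where inserted".format(phase, caught_at_source / total))
```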

It is important to understand that a project's testing budget can double when critical steps such as static testing are not performed. This forces 60% of the defects into later phases, incurring as a minimum the cost of the design. Projects also often suffer from slippages and, with test execution occurring at the end of the process, it is often squeezed or compromised. The project must also realise that reducing the testing duration or coverage is likely to increase the risk of a defect being found in live, where in comparison to being found and removed in the requirements phase it will cost up to 10,000 times more to resolve, and at best 10 times more. The reality here is that a defect undetected until live can be business impacting, brand damaging and could cause a business to fail.

It is therefore important that testing is involved in the project from the outset, not as something that is included if there is the time, the budget and the inclination. Only through the application of systematic testing throughout the project will the quality level be understood and the opportunity to remedy problems in a timely manner be presented.

Monday, 9 June 2008

Entry and Exit Criteria

As you pass from one phase of testing into the next, there is a need for control. For the purpose of this post, we will refer to the prior phase as the supplier and the current phase as the recipient. The supplier needs to retain control of their testing phase, until such time as it is deemed ready for release. The recipient needs to ensure that the testing performed by the supplier has achieved a sufficiently high standard as to be acceptable.

The means of achieving this is referred to as Exit Criteria for the supplier and Entry Criteria for the recipient. These criteria are documented in the test plan and define the standards that should be achieved on entering and exiting the test phase described by the document.

The criteria are set by the Test Manager or a nominated delegate. They may take any form that the Test Manager deems necessary, although they are most frequently based on volumes of defects of a certain severity and priority, along with test assets from the supplier.

Consideration should be given to the use of other information, drawing on experiences of dealing with particular project teams, development agencies etc. This requires the Test Manager to be able to think outside of the pure testing aspects of the project, to look and see what else may impact them. Closer to the heart of the Test Manager are subjects like the Test Environment/s and Configuration Management. The environment is one of the main enablers to testing and should be made available two weeks prior to test execution commencing. Without comprehensive Configuration Management the likelihood of controlled deliveries from development into testing is severely reduced. Such events and dependencies make excellent criteria and are sound reasons for not entering into a subsequent phase of testing.

Looking at the testing components only, Test Managers should ensure that the supplier of each testing phase is held accountable for the production of the associated test assets. The volume of defects is often seen as a point of contention. Test Managers should avoid making statements such as "zero defects allowed", or setting the bar so high that entry or exit becomes impossible to achieve. Having to relax these criteria because they are impractical corrupts the integrity of the testing function and undermines the credibility of the Test Manager. Work with the Project Manager when setting these criteria, ensuring that they are bought into them; don't rely on just a signature on a document. This gives a higher level of buy-in and ensures that any future change to the criteria will be resisted by both the Test Manager and the Project Manager. Always ensure that such changes are formally change controlled.

It is recommended that on entry into Integration in the Large, there should be no top priority or top severity defects outstanding. This is not to say that there can be no defects, only that items which are so important to the business as to warrant a top priority, should be resolved as a priority. Defects of the highest severity are perhaps more debatable, but delivering an item with high severity defects indicates that elements of the testing cannot be completed and the deliverable is therefore significantly below the level of functionality that has been specified and is expected.
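
By way of illustration, criteria of this kind can be written down as simple thresholds and checked against the open defect counts before the phase begins. The thresholds and counts below are hypothetical; this is a sketch of the bookkeeping, not a prescription.

```python
# Illustrative sketch: entry criteria for Integration in the Large
# expressed as the maximum allowed open defects per category.
# Thresholds and counts are hypothetical.
ENTRY_CRITERIA = {"priority 1": 0, "severity 1": 0, "severity 2": 5}

open_defects = {"priority 1": 0, "severity 1": 1, "severity 2": 3}

breaches = {
    level: count
    for level, count in open_defects.items()
    if count > ENTRY_CRITERIA.get(level, float("inf"))
}

if breaches:
    print("Entry criteria not met:", breaches)  # record on the risk register
else:
    print("Entry criteria met - proceed to Integration in the Large")
```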

If circumstances dictate that entry or exit criteria are going to be overruled, as a minimum the risk register needs to be updated to reflect that this has occurred, detailing the impact and mitigating actions. It is worth noting that this is normally indicative of a project in trouble and that timescales are now seriously at risk.

Saturday, 7 June 2008

SOFTWARE TESTING GLOSSARY

This glossary is a living post – so will be edited as we come across terminology that is not included. If you have any suggestions or disagree with an explanation, drop us an e-mail and let us know. TCL India offer this as a means of establishing glossaries of your own or as a point of reference.



A………………………………………………………………………………………………………………………………………………………………
Accessibility: (Testing) Looks at an application to ensure that those with disabilities will be able to use it as an able-bodied individual would.
Agile: A development method, involving the creation of software, with the contributing parties, including testing, all working on the same item at the same time.
Alpha: (Testing) Testing of an application which is in production, but not available for general use. This is normally performed by users internal to the business.
Analyst: Person who interacts with the business in order to understand and document their requirements for an application or change to an existing one.
Analyst (Test): Person responsible for the preparation and execution of test scripts, recording and progressing of defects, reporting into the Test Team Leader or Test Manager.
Automation: The process of developing a script or program which can be run by software rather than a Test Engineer, to test an aspect of an application. Automation is used to increase the speed of test execution.
Automation Centre: A tool used in the automation of software testing. See
https://h10078.www1.hp.com/cda/hpms/display/main/hpms_content.jsp?zn=bto&cp=1-11-127-24%5E1074_4000_100__&jumpid=reg_R1002_USEN
B………………………………………………………………………………………………………………………………………………………………
Beta: (Testing) Testing of an application which is in production, but not available for general use. This is normally performed by a select group of friendly external testers.
Black Box: The process of testing without understanding the internal workings of an application, relying on inputs and outputs only.
Bug: See Defect
Business Requirement Specification: See Requirement Specification
C………………………………………………………………………………………………………………………………………………………………
Case: See Scenario
Code: The software that has been produced by development and is being subjected to testing.
Completion Report: A document produced during the testing and presented to the project manager on completion of testing activity. The document should detail the tests that have been performed, along with the associated results and defects. Some reports may include a recommendation on the application's suitability for release to production.
Configuration Management: The means of managing the components required to make a larger item. Applicable to files, documents, data, hardware, software, tools. Understanding of the versions of each component required in order to be able to re-build the larger item.
Criteria: See Entry Criteria and Exit Criteria
D………………………………………………………………………………………………………………………………………………………………
Data: Information pertaining to the use of a system or recorded through the use of a system. Data is used in order to test a system, potentially after data cleansing if personal information is involved.
Data Generator: A tool used to generate high volumes of data in order to be able to test many permutations, or to load test an item.
Data Scrambling: The process of altering personal information from data to be used for testing purposes.
Defect: Where the item under test has been found to be inaccurate as a result of testing. Primarily used in association with software, but equally valid for static testing of documentation.
Defect Detection: The means of identifying a defect. This can be a metric used to predict the volume of defects expected during the course of a project, or a means of looking back at a project to understand where testing needs to be concentrated in future projects of a similar nature.
Defect Removal Efficiency: A metric used to assess the ability of testing to remove defects as they are introduced, during the software development life-cycle, keeping the cost of testing later phases to a minimum.
Defect Turnaround: The time taken from the identification of a defect, through to the point of resolution. Different levels of granularity may be used. e.g. A focus on the time taken by development.
Developer: A person responsible for the development of an application.
Development: A process of producing an application by production of low level design, code, unit testing and integration in the small testing.
Dynamic: Testing which occurs on the right hand side of the V-model, with the application present.
E………………………………………………………………………………………………………………………………………………………………
Environment: The combination of hardware, software and data as used in development, testing and production. The platform/s upon which the testing occurs.
Entry Criteria: The criteria that must be met before a deliverable can enter the next phase of testing. This is normally associated with documented test assets and pre-agreed volumes of defects.
Error: See Defect
Exit Criteria: The criteria that must be met prior to a deliverable leaving the current phase of testing. This is normally associated with documented test assets and pre-agreed volumes of defects.
Execution: The process of working through a script on the application under test, in the testing environment.
F………………………………………………………………………………………………………………………………………………………………
Functional: (Testing) The testing of a product's function, against requirements and design.
Functional Specification: A document which extracts all of the functional requirements from the requirement specification.
G………………………………………………………………………………………………………………………………………………………………
Glass Box: See White Box
Grey Box: Testing performed by testers with some knowledge of the internal workings of an application. See also Black Box and White Box testing.
H………………………………………………………………………………………………………………………………………………………………
High Level Design: A design showing the major components and how these will interface to each other, defining the hardware to be used and the software that will be impacted.
I………………………………………………………………………………………………………………………………………………………………
Integration in the Large: Where the application or applications that have been developed are brought together along with those which have remained unchanged, building a production like system around the application/s. Testing is then applied looking at the communication between the different applications.
Integration in the Small: Where the components of the application that have been developed are brought together along with those which have remained unchanged, building the application or major component of a single application. Testing is then applied looking at the communication between the different components.
Integration: The act of bringing many parts together to form a whole.
ISEB: Information Systems Examination Board. This was historically the board that was used to certify test professionals at either Foundation (Entry Level) or Practitioner Level (Advanced). See:
http://www.bcs.org/server.php?show=nav.5732
ISTQB: International Software Testing Qualifications Board. See:
http://www.istqb.org/
J………………………………………………………………………………………………………………………………………………………………
K………………………………………………………………………………………………………………………………………………………………
Key Performance Indicator: A mechanism built on one or more metrics, which determines a band of acceptable performance and is often, over time, targeted towards improvement.
L………………………………………………………………………………………………………………………………………………………………
Load: One of the types of performance testing, this looks at testing for the maximum load on the system.
Load Runner: Tool used to performance test one or many applications, to understand how it handles increases in load. See:
https://h10078.www1.hp.com/cda/hpms/display/main/hpms_content.jsp?zn=bto&cp=1-11-15-17%5e8_4000_100__
Low Level Design: Definition of exactly how the application/s will be modified or produced in order to meet the requirements and the high level design. This can extend in some examples to elements of pseudo code being defined.
M………………………………………………………………………………………………………………………………………………………………
Metric: A measure of an attribute or occurrence in connection with an organisation's, department's or function's performance.
N………………………………………………………………………………………………………………………………………………………………
Non-functional: How well an application performs, rather than what it does.
Non-functional Specification: A document which details the non-functional requirements such as performance, security, operational elements.
O………………………………………………………………………………………………………………………………………………………………
P………………………………………………………………………………………………………………………………………………………………
Performance Centre: A tool used for measuring the performance of an application or series of applications. See:
https://h10078.www1.hp.com/cda/hpms/display/main/hpms_content.jsp?zn=bto&cp=1-11-126-17_4000_100__
Performance: Used to describe many types of testing relating to the speed of an application. See Volume, Load and Stress.
Plan: A document produced for a type of testing defining how the testing will be performed, when the testing will be performed and who will be performing it. The test plan lists the test scenarios that will be scripted.
Preparation: The process of generating the test scripts.
Priority: The importance of fixing a defect from a business perspective. Defined by business representatives.
Q………………………………………………………………………………………………………………………………………………………………
Quality: The suitability of an item for its intended purpose and how well it achieves this.
Quality Centre: Tool used to assist with the management of testing, recording and tracking scripts, logging and tracking defects and more. See:
https://h10078.www1.hp.com/cda/hpms/display/main/hpms_content.jsp?zn=bto&cp=1-11-127-24_4000_100__
R………………………………………………………………………………………………………………………………………………………………
Regression: Retesting of a previously tested program following modification, to ensure that faults have not been introduced or uncovered as a result of the changes made.
Requirement Specification: A document normally produced by a business analyst, capturing the needs of the individual in a manner which means that they can be translated into a software solution.
Re-Test: Taking a test which has failed and, once the associated defect has been addressed, executing the test script again.
S………………………………………………………………………………………………………………………………………………………………
Scenario: A high level description of how a requirement is going to be tested.
Script: Referable back to the scenario, the script defines each step that must be passed through in order to perform a test, along with the expected results. As the script is executed, results are recorded and if they match the expected result are marked as passed, otherwise as failed. A script containing a failure should have a resultant defect raised.
Schedule: A document, similar to a project plan, but detailing the activities associated with testing, when they are due to occur and who will be performing them.
Severity: The importance of a defect to testing and the application. Defined by testers.
Smoke: A process of proving an application or environment is ready to commence full testing, by running a sample set of scripts testing the primary functionality and/or connectivity.
Static: The process of testing without the presence of software. Normally refers to the testing of documentation.
Strategy: A document produced at project or programme level, defining what testing is to be performed.
Stress: A form of performance testing, whereby the volume of testing is increased to a point at which the application is deemed to be failing to perform, either due to failure to meet performance criteria or system breakdown.
T………………………………………………………………………………………………………………………………………………………………
Technical Architecture Document: Definition of how the business requirements should be translated into a technical solution at the highest level.
Test Director: A tool used for test management, capture of requirements, scripts and defects, with the ability to manage defects through to resolution. Now replaced by Quality Centre.
Testing: The process of reviewing an item for accuracy and its ability to complete the desired objective.
U………………………………………………………………………………………………………………………………………………………………
Unit: The testing of the application by the developers at a component or modular level, looking at whether the code is performing as expected.
User Acceptance: The means of testing applied, normally by members of the business or recipients of the system, whilst still in the test environment. This looks to ensure that business requirements have been understood by all involved in the application production as well as providing an opportunity to check the business processes that the application will be used in conjunction with.
V………………………………………………………………………………………………………………………………………………………………
V-Model: A testing focussed software development life-cycle where the process of creativity occurs down the left side of the V and the process of testing up the right side of the V, where each step down the V has a corresponding point of testing up the right of the V. Begins with a requirement and terminates with User Acceptance Testing.
Version: An alphanumeric means of identifying a particular build of software, or a component thereof. The version changes incrementally to reflect each modification of the software, component, document etc.
Volume: (Testing) A type of performance testing which increases the volume of users on a system, in each cycle, taking the volume up to a prescribed limit, normally exceeding optimum load.
W………………………………………………………………………………………………………………………………………………………………
W3C: World Wide Web Consortium – body creating web standards and guidelines.
http://www.w3.org/
WAI: Web Accessibility Initiative. See:
http://www.w3.org/WAI/
Waterfall: A project management method whereby each element of the software development life-cycle occurs sequentially.
Win Runner: A tool that was historically used for test automation. See Automation Centre.
White Box: Testing normally performed by developers, looking at the code and ensuring that each module of code is performing as expected. See Unit.
X………………………………………………………………………………………………………………………………………………………………
Y………………………………………………………………………………………………………………………………………………………………
Z………………………………………………………………………………………………………………………………………………………………

Wednesday, 4 June 2008

Looking to Recruit a Tester

You have a vacancy or you may just want to breathe new energy into your test function. The logical option appears to be a new Software Test Engineer and you would like them sooner rather than later.

The employment of a UK Test Engineer is going to cost an organisation between £250 and £300 per day, ignoring the cost of recruitment. Why do this when you can have a four-man offshore team for the same price?

Think of the difference that you can make with a four-man team, in comparison to the one UK resource that you are currently seeking. Stop your own team members from having to do the mundane tasks. How many times do you see the look of pain appear on someone's face at the prospect of having to perform more scripting? Umpteen permutations of the same thing – yawn!!!! Relieve the boredom and place the work offshore. Then start thinking about what the rest of your team can do now that so much repetitive work is out of the way.

This paves the way for your current team to:

- Increase the innovation and use the time to find ways to improve the department's capability.
- Find time to work on the processes that always take second place to the projects.
- Increase the throughput of work and stop being seen as a bottleneck to progress.
- Apply more resource to existing projects and increase the coverage of testing.
- Increase staff satisfaction, resulting in greater resource and knowledge retention.

Ultimately, you have already invested in your existing team and they will relish the opportunity to move away from the tedium and onto more interesting and stimulating work. This will increase job satisfaction and naturally improve team morale.

If this can all be achieved at the price of one permanent resource, can you really afford not to try it?

TCL India will provide a four-man team, comprising a lead and three test engineers, for £300 per day. You will have a single point of contact offshore, and TCL India’s UK Management will ensure that the function works for you, lending our wealth of experience in offshore working. This can be put in place very quickly and you can start realising the benefits of having four people instead of one.

E-mail : Grant.Obermaier@TransitionConsulting.co.uk

If you have come across this as a result of looking for work, we will shortly be recruiting in India and currently looking in the UK. Please feel free to e-mail your CV.

Tuesday, 3 June 2008

A View on Test Automation

Testing has been a long and laborious task for many years, probably since its inception. I can't comment on that, as I was not around. I have seen the introduction of automation as a means of reducing the time taken to test but, as with all new ideas, people jump on the bandwagon and the results are not always as expected.

With vast sums of money invested, the cost of automation was high (when compared to a manual tester) and the skill set was hard to come by, but there was an expectation that the results of this labour would deliver far greater savings: one man would be able to do the work of ten, and so on. The results, however, often failed to live up to expectation over time. Early successes were not invested in and, as a result, a lot of automation started to gather dust.

One of the leading sellers of automation software once indicated that an automated script would take seven times longer to write, but would take one seventh of the time to execute. This would mean that a script that was automated would need to be run seven times without change in order to cover the cost of its creation. Here lies the main problem.

Initially, automated scripts were often based on a record-and-playback capability which, whilst extremely useful, had a finite life if the application was changed in any way. As time has moved on, this has become the basic form of automation and the capability is now based more around intelligence: relying not on the position in which an element appears on the screen, but on the fact that it is present as an object. It must still be understood that an application undergoing regular and significant change is far less suitable than something which is relatively stable and unchanging.
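
To illustrate the difference, a record-and-playback script typically replays screen coordinates, whereas an object-based script locates the element itself. The snippet below is only a sketch, using the Selenium WebDriver Python bindings as one example tool; the URL and element ID are made up.

```python
# Sketch only: locating a control as an object rather than by its
# position on screen, using Selenium WebDriver as one example tool.
# The URL and element ID are hypothetical.
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Firefox()
driver.get("http://www.example.com/login")

# Coordinate-based playback would click at, say, (412, 310) and break
# the moment the layout changes. Locating the object survives a redesign
# as long as the element itself still exists.
driver.find_element(By.ID, "submit-button").click()

driver.quit()
```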

Automation does, however, have a place. Regression testing, which relies on the same scripts being repeated, is an ideal candidate for automation. Another area is smoke testing or sanity testing: the process of ensuring that an application is performing as expected with a reduced regression set, perhaps for the purposes of checking that a test environment has been set up correctly. It should also be remembered that certain applications will go through many functional cycles of testing for a single release, again making them more suitable for automation.

This method of testing is often viable, and the greater the capability of the automation engineer, the more likely the success of the automation. Try to avoid a wholesale approach, which is likely to leave you with a lot of shelf-ware gathering dust. Look at each project and application individually and understand whether you are likely to obtain the returns you seek. Look at all the types of products that are available and select one that suits your purpose, but also one for which the skills are available. Last, but by no means least, ensure that you plan the automation from the start, allowing for the extra time to develop the scripts. Once started, continue to evaluate the approach, bearing in mind that the scripts are likely to need updating as the application changes, generating a maintenance cost.

If you are looking at automation of regression packs, or for smoke testing purposes, consider the offshore option. The scripts are pre-defined and therefore the work understood. The cost is as always significantly lower.

Monday, 2 June 2008

Severity vs Priority

When raising a defect, a common confusion arises for testers and others alike: what is the difference between a defect's severity and its priority?

So let’s start with looking at the dictionary definitions:
Severity - Causing very great pain, difficulty, anxiety, damage, etc.
Priority - Something that is very important and must be dealt with before other things.

We can see that the two words have completely different meanings. Why then is there confusion between them?

The severity is the domain of the tester and they should be capable of recording it. The severity, to the tester, is the impact of the defect on the application and their ability to continue testing. The priority is the domain of the business and should be entered by them against each defect raised, to reflect the importance of the change to them.

For instance, a spelling mistake would be deemed as a low severity by the tester, but if this mistake occurs in the company name or address, this would be classed as high priority by the business. An inability to access a rarely used menu option may be of low priority to the business, but the severity is high as a series of tests cannot be executed, all dependent on access to the option.

The mistake that we have seen made many times is to assume that the tester is also capable of recording the priority. Whilst it may be possible for the tester to make an educated assessment, the priority is the business’ means of defining what must be repaired prior to release to production and the order in which effort should be applied to the fixing of defects. Testers who have been involved with a particular application for some period of time may be able to do this, but it is essential to have adequate business representation on the project and their involvement in the life-cycle of a defect and the defect management process.

When a project enters test execution, the focus will be on fixing defects of the highest priority. This means that the application will be released with the minimum number of high-priority defects unresolved. Care should be taken by the Project Manager to ensure that, whilst the priority is paramount, severity is not ignored. What is needed is a balanced approach which favours the business priority. By the end of the project the volume of high severity and high priority defects should have at least been reduced, if not removed, in order to meet the exit criteria defined in the test strategy.
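
A minimal sketch of how the two fields might sit side by side on a defect record, with the fix order favouring business priority and falling back on severity as a tie-breaker. The defects listed are invented for illustration.

```python
# Sketch: a defect carries both a business-set priority and a
# tester-set severity (1 = highest). The fix order favours priority,
# using severity as the tie-breaker. Example defects are invented.
defects = [
    {"id": "D-101", "summary": "Spelling mistake in company name", "priority": 1, "severity": 4},
    {"id": "D-102", "summary": "Rarely used menu option inaccessible", "priority": 4, "severity": 1},
    {"id": "D-103", "summary": "Payment total rounded incorrectly", "priority": 1, "severity": 2},
]

for defect in sorted(defects, key=lambda d: (d["priority"], d["severity"])):
    print(defect["id"], "P{}/S{}".format(defect["priority"], defect["severity"]), defect["summary"])
```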

To summarise:
Priority = Business = Order of Fixing
Severity = Tester = Failure of Application