Saturday, 31 May 2008

Components of a Bug

Legs, body, head, eyes, mouth; or are we talking about software bugs? Bug, error, defect; it matters little what it is called, though some are touchier about the subject than others and prefer more politically correct terms such as issue or observation. The reality is that during the testing of an application or piece of software, something is identified as not performing in the expected manner, whether judged against experience or against the requirement specification, functional specification or design documents.

Call it what you will; we will use the term defect. Ensuring that you capture the right information at this point is critical. From a testing and development perspective, it is essential to be able to reproduce the defect at a later date, either as part of the remedial process or to prove that a remedy has worked. TCL India would recommend that the following points are recorded for each defect (a minimal record sketch follows the list):

· Unique defect identifier – a unique reference by which the defect can be identified.
· Application name or identifier and version – the software in which the defect was located.
· Environment specifics – the environment that was being used to test the application.
· Test script identifier (if applicable) – a unique reference from the test script, enabling each defect to be traced back to its script.
· Defect synopsis – a brief description of the defect that has been encountered.
· Detailed description of defect – a full definition of the defect, enabling it to be reproduced by developers or those trying to resolve it.
· Test steps executed – from the test script, details of the steps executed and of the step that has failed.
· Expected result for test step – the expected result for the step, e.g. the displayed logo should have been blue and white.
· Actual result for test step – the actual result encountered, e.g. the displayed logo was red and white.
· Evidence of defect – additional information showing the defect, such as screen shots.
· Severity estimate – the impact of the defect on the ability of the tester to complete testing.
· Priority – the importance of resolving the defect to the organisation.
· Tester name – the name of the tester who reported the defect.
· Test date – the date on which the tester located the defect.
· Defect reporting date – the date on which the defect is reported; this should be immediate.
· Defect assigned to – who is being asked to resolve the defect.
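
For illustration, here is that minimal record sketch as a Python data class. The field names are our own shorthand for the points above, not a prescribed schema; any real defect tracker will impose its own.

    from dataclasses import dataclass, field
    from datetime import date
    from typing import List, Optional

    @dataclass
    class Defect:
        defect_id: str                   # unique defect identifier
        application: str                 # application name/identifier and version
        environment: str                 # environment specifics
        synopsis: str                    # brief description of the defect
        description: str                 # full, reproducible definition
        steps_executed: List[str]        # test steps executed, including the failing step
        expected_result: str             # e.g. "logo should be blue and white"
        actual_result: str               # e.g. "logo was red and white"
        severity: str                    # impact on the tester's ability to continue
        priority: str                    # importance of resolution to the organisation
        tester_name: str
        test_date: date                  # when the defect was located
        reported_date: date              # should be the same day as test_date
        assigned_to: Optional[str] = None
        script_id: Optional[str] = None  # test script identifier, if applicable
        evidence: List[str] = field(default_factory=list)  # e.g. screenshot paths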

Each time responsibility for a defect is passed to another party, that fact should be recorded to form a history. The value of this becomes obvious when investigating why certain defects have not been resolved in a timely manner: why some bounce backwards and forwards between different people, or sit with an individual or team for a long period, lengthening the defect fix turn-around time and potentially breaking SLAs.

In some of the more advanced defect management tools you can prescribe the life-cycle of a defect, such as (a small state-machine sketch follows the list):
- Raised by the tester
- Passed to the business analyst to prioritise
- Passed to the environment manager to specify the environment and confirm the defect was not caused by environmental issues
- On to development to fix the code
- Back to the tester for re-testing
- On to the test manager or defect manager for closure.
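
As a sketch only, with state names mirroring the example life-cycle above rather than any particular tool, such a prescribed life-cycle amounts to a transition table that the tool enforces:

    # Allowed defect life-cycle transitions, mirroring the example list above.
    TRANSITIONS = {
        "raised": {"prioritised"},                  # tester -> business analyst
        "prioritised": {"environment_checked"},     # business analyst -> environment manager
        "environment_checked": {"in_development"},  # environment manager -> development
        "in_development": {"in_retest"},            # development -> tester
        "in_retest": {"in_development", "closed"},  # re-test fails or passes
    }

    def move(state: str, new_state: str) -> str:
        """Advance a defect, enforcing the prescribed life-cycle."""
        if new_state not in TRANSITIONS.get(state, set()):
            raise ValueError(f"illegal transition: {state} -> {new_state}")
        return new_state

    state = "raised"
    for step in ("prioritised", "environment_checked", "in_development", "in_retest", "closed"):
        state = move(state, step)
    print(state)  # closed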

The key then is to report on defects daily during test execution, keeping the project informed of the number of defects raised and closed that day and the resulting cumulative totals. This enables the Project Manager to make decisions on the project's likelihood of going live on time, whether to apply more resources to certain aspects of the project, whether overtime is required and so on (a sketch of this daily roll-up follows).
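
This is a minimal sketch, assuming only that each defect carries the dates on which it was raised and, if applicable, closed:

    from collections import Counter
    from datetime import date

    def daily_report(raised_dates, closed_dates):
        """Yield (day, raised, closed, cumulative raised, cumulative closed, open)."""
        raised = Counter(raised_dates)
        closed = Counter(closed_dates)
        cum_r = cum_c = 0
        for day in sorted(set(raised) | set(closed)):
            cum_r += raised[day]
            cum_c += closed[day]
            yield day, raised[day], closed[day], cum_r, cum_c, cum_r - cum_c

    # Illustrative figures only.
    raised = [date(2008, 5, 28), date(2008, 5, 28), date(2008, 5, 29)]
    closed = [date(2008, 5, 29)]
    for row in daily_report(raised, closed):
        print(row)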

Care should be taken not to raise the same defect over and over again. When volumes of defects are high, it is worth employing a defect manager to ensure that defects are not being repeatedly raised, wasting people's time. The defect manager should also ensure that all defects are being progressed. Sometimes defects that have been remedied are erroneously re-introduced: this is usually symptomatic of poor configuration management.

Friday, 30 May 2008

Offshore Communication

Communication is a critical point for a successful offshore arrangement. Our TCL COMS (Consultancy led Offshore Managed Service) is built around our understanding of this. But beware: communication can be difficult, even amongst people of our own nationality and culture. Speaking from personal experience, there is a huge difference between speaking to someone with a broad Scottish, Geordie or Wiltshire accent and someone speaking “the Queen's English”. I mean no offence to any of the above, but accents are hard to deal with face to face and even harder over the telephone.

When dealing with other nationalities, where English is not the first language, the problems are amplified; whilst some non-native speakers speak English perfectly, finding such people becomes more difficult.

What we have found is that when people from other countries come and work for extended periods in the UK, their language skills naturally increase. As a result, TCL India look to deploy resources locally that have long exposure to the UK, its people, their idiosyncrasies and the colloquialisms they use. These resources become highly useful because not only can they communicate with us, they can also communicate with those offshore, utilising combinations of their own language and English. This provides an excellent conduit for communication, but it cannot be relied upon as the sole answer to the problem.

When interacting directly with an offshore capability, written communication is far easier and less liable to be misunderstood than verbal. E-mail can be used, along with tools such as MSN Messenger if a more conversational approach is required.

Video conferencing offers the ability to see as well as hear, but whilst it is great to see the faces of those you are talking to, there are inherent problems. Not only are you dealing with accents, but you have a time lag and perhaps poor image quality, all of which tends to distract you. There is also the potential lack of visual and audio synchronisation that your brain will be trying to cope with. We suggest that if video conferencing is to be used, it does not occur at the beginning of the relationship, and that time is taken to become accustomed to the experience.

In all offshore communication, look for feedback to demonstrate understanding. Query their understanding of the requests and make sure that it is the same as yours. Controversially perhaps, for the offshore element of a relationship, use the same techniques back at the client: make sure that your own grasp is right and that they have understood your communication. Whilst this is written very much from a UK perspective, the rules apply the world over.

Beware of getting back exactly what you have asked for and not what you wanted. Be careful in your communication. You could be taken literally.

Thursday, 29 May 2008

Contractually Demanding Testing from Developers

It is an experience that testers all over the world will instantly recognise: the contract with the development agency makes no demand for them to perform unit and integration (in the small) testing.

How does this manifest itself? Firstly, the Test Manager will specify within the Test Strategy or Test Plans a series of entry criteria from development into testing. The evidence should comprise test plans, a full suite of test scripts and a record of the defects that have been raised. As development nears completion, the Test Manager will begin to ask for this evidence of testing and look to meet the entry criteria for Integration (in the large) or formal testing.

At this point the response can be that the unit and integration (in the small) testing was not contractually specified. The supplier often takes one of two stances: stating that the testing cannot be achieved, or insisting that it can be performed, but at additional cost and increased timescale. This immediately places the Project Manager in an awkward position: they will be unable to agree to the additional time and will not want to incur the increased costs. The normal action at this point is for the Project Manager to accept that a mistake has been made, log a risk and accept that the code will be delivered into testing without development testing having occurred (or been proven to have occurred).

The quality of the code is now likely to be of a lower standard and the volume of defects expected in the formal testing higher. This may result in additional cycles of testing and increased costs from this perspective (part of the risk that the Project Manager will have taken on board). As the volume of defects found during the black box testing grows, the demand on the development agency increases and care should be taken to ensure that defect fix times are monitored. These are likely to slow as the volume of defects increases, and this should be reported to the Project Manager as soon as it is seen. A good contract will have service level agreements (SLAs) in place to ensure that defect turn-around happens at a specified rate depending on severity and priority; a simple turnaround-monitoring sketch follows.
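
This assumes an SLA expressed as a maximum number of calendar days to fix per severity; the thresholds and defect records below are illustrative, not drawn from any real contract:

    from datetime import date

    # Illustrative SLA: maximum calendar days to fix, by severity.
    SLA_DAYS = {"critical": 1, "high": 3, "medium": 7, "low": 14}

    def breaches(defects, today):
        """Return the ids of defects whose fix turn-around has exceeded the SLA."""
        late = []
        for d in defects:
            limit = SLA_DAYS[d["severity"]]
            fixed = d.get("fixed_on") or today  # a still-open defect counts up to today
            if (fixed - d["raised_on"]).days > limit:
                late.append(d["id"])
        return late

    defects = [
        {"id": "D-101", "severity": "critical", "raised_on": date(2008, 5, 26), "fixed_on": None},
        {"id": "D-102", "severity": "low", "raised_on": date(2008, 5, 20), "fixed_on": date(2008, 5, 22)},
    ]
    print(breaches(defects, today=date(2008, 5, 29)))  # ['D-101']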

If the development agency has failed to perform their own testing, the demand is placed on the testing team or test supplier, and the estimates that they are working to should be revisited. If testing is working on a fixed price contract, failure to meet entry criteria should constitute a change request and grounds to re-price.

It may be possible to embed some of the test team with the developers, allowing them to perform unit and integration (in the small) testing in parallel with the developers. Alternatively, testers with a technical bent can be asked to look at the code, perhaps performing dry runs. If the testing is being performed but the deliverables are not being made available, then it may be possible to witness some of the testing that is occurring. It must be said that these last two points are geared more to damage limitation than to ensuring that the right level of quality has been built into the product.

We need to avoid this situation in the first place, so ensure that your organisation's Supply Chain has standard wording, inserted into all contracts with development agencies, protecting against such events. For smaller organisations that do not have a Supply Chain function, bring the Test Manager on board for up-front consultancy; they should look to ensure that this does not happen. A further measure is to ask the development agency to sign off the test strategy, which should detail the entry criteria into the formal (black box) testing phase, giving the buyer some level of recourse. Lastly, as a Test Manager, don't leave it until the end of development to find you have a problem - identify it as early as possible and get written agreement that the appropriate testing will take place and that the test assets produced will form part of a deliverable.

Wednesday, 28 May 2008

WebCheck - More Information

We thought we would give some more information on the Web Check process as an addendum to the previous Introduction posting.

Websites are critical to all of us. With 15% of all retail trade in the UK conducted online, and whilst the credit crunch is hitting retail elsewhere, the effectiveness of the web ensures that online trade remains buoyant.

Some organisations are happy to rely on automated tools to assess their website(s), but can such tools really experience the web site as its designers intended? Programs are only as intelligent as their programmers, and whilst they may be very good, recognising that the layout of a page jars on the eye is going to be hard to achieve, to say the least. Using the web is to some extent an emotional experience, and that needs people to experience it. Most of us use the internet to shop to some extent, be that a book from Amazon or groceries from Sainsbury's or Tesco. For a high street shopper, moving from one store to another takes time and effort, while asking a shop assistant for help is simple. On the internet it is the reverse: we hit problems and change site at the click of a button - opportunity missed! Zero time and effort, and zero tolerance for problems or challenges that make shopping harder.

WebCheck is performed by professional testers, familiar with web testing, experienced in understanding the needs of the user and, most of all…human. We don't ignore the capabilities of web crawlers, as that would be foolhardy: they do have a place, and we augment our manual process with them.

As an output from each of our WebChecks we provide a certificate showing the quality percentage achieved. Further, a detailed report of our findings, including defects, observations and recommendations, is provided. We also look at aspects such as Google and Alexa rankings, understanding the importance of your site to the internet community and the likelihood that people will come to your site. If desired, we can provide comparisons between your site and others, from a ranking or user experience perspective. We are not looking to develop, host or design your site. We are testers and want to help clients understand their site's problems and capabilities.
If you would like a quotation for the WebCheck service on your website, please e-mail us using the contact details on the left of the page.

Saturday, 24 May 2008

Raising Awareness of Software Testing

In organisations that have their own internal testing departments, those departments can suffer from a lack of awareness, and sometimes understanding, of testing. How do you communicate what testing is all about to the masses? An idea we had some time ago was to put posters up around the business: gentle reminders that the testers exist, memory joggers to get people thinking about testing. A trawl of the internet found various images that could assist, and we thought we would share some of these with you.

This one has to be “Load Testing”. Can your systems cope with the expected load? A great photograph that makes me smile every time I see it.

[image]
How about this one for “disaster recovery testing”?
Can your crew recover from this?

[image]
“Regression testing” stretches the mind a little but we came up with this. Do you understand the impact of change? Have you handled all of the ripples?

[image]
On the subject of “Stress Testing”: At what point will it break? This one is very visual and in our opinion instantly understandable.

[image]
What we did find was that different people had very different perceptions of what some images meant: it was very much a personal interpretation. Testing is still one of the least well known disciplines of the software development life-cycle, and departments run in a service-like manner, called upon when needed rather than seen as a “must have”, need to let organisations know of their existence, encourage thinking about testing and encourage engagement on projects.

Friday, 23 May 2008

Why Use External Testers

In the modern organisation, IT is intrinsic to most of what goes on. From a laptop running MS products to the multitude of systems used by a corporate, the need for software testing is growing as the uptake of IT solutions for business needs increases. The testing department is under increasing pressure to become involved in an ever-growing volume of projects. If the volume of work were static, the problem could be resolved simply by employing additional staff; the situation facing most test departments, however, is far from a balanced flow of work.

Lots of businesses have particular times during the calendar year when the volume of project activity increases. For some retailers this would be Christmas; for companies involved in education it could be the start of a term. The pressures come in peaks and troughs. A test department may be able to cope with some periodic increase in demand, but may find itself in a situation where it is perceived as a bottleneck to the desired throughput of projects. This reflects badly on the department and can be avoided by bringing in external testers.

By understanding the flow of work coming through the department and plotting this over the year ahead (if possible), the size of the core team can be understood. The core team is the size of team required at the lowest point of utilisation. External testers are then used to deal with the demand for resource over and above the core team, covering the peaks of activity. The external testers can be employed for the duration of the peak and then released, dropping the team back to its core size.

Recruitment of good quality test resources is becoming increasingly difficult. Fewer people are leaving university with IT-related degrees in this country: figures a couple of years old suggested only 30,000 per annum in the UK, and falling, whereas in India the number was 500,000 and increasing. The cost of recruitment is also prohibitive and can be avoided by the use of external resources. If activities can be performed offsite, then some of the additional overheads of resourcing, in terms of phones, computers, desk space, HR and training, can be avoided, making the use of some external resources here in the UK cheaper than employing them. Cost is an ever-present consideration, and if external resources can be used and released as required, provide the right level of expertise and be utilised offsite, then these factors begin to form a really strong case for the external tester.
Flexibility is key to the way we work, and suppliers of external test resources who can provide this at a price competitive with employment are worthy of consideration.

Thursday, 22 May 2008

Does Offshore Work

Simply – yes – if performed correctly. Where so many fail is in misunderstanding some of the key factors that need to be in place. The simplest way is to treat it as a black box scenario, offshoring well specified work packages with quantifiable and specified deliverables against known timescales. This reduced need for communication simplifies the process, but it is only as good as the inputs. If these can be well specified, then the chance of a good delivery is excellent.

A prime example of offshore capability being used in this manner is the generation of test scripts. This is often seen as a tedious activity that bores the UK Test Analyst, being repetitive in nature. Providing an offshore capability with a requirement specification that has been statically tested, perhaps even with the test scenarios defined, should enable a good offshore team to produce a well defined suite of scripts. Confidence can be obtained by sampling scripts as they are being produced, and example scripts can be provided by the client to define the standards to be followed.

Having mentioned static testing, the passing of a single document to an offshore team for review against the 8 point check is simple and requires minimal communication. Neither the test scripting nor the static testing requires anything more than e-mail to send information, so setting up links between the offshore destination and your own test environments is not required. If you are not currently utilising offshore, then we would strongly recommend giving one of these a try and sampling the option.

Once confidence to this level has been achieved, the next easiest element to place offshore is test execution. This does involve setting up links to the offshore capability, but software exists to achieve this quite simply. Access to defect management tools needs to be arranged to ensure that defects are reported as they are found and not gathered up for end-of-day reporting. Freeware such as Bugzilla is available to achieve this and only needs to be hosted. The Data Protection Act comes into play here, but it should be remembered that personal data should be scrambled prior to testing anyway, if simulated data is not being used. At this point it is worth pointing out that we do not recommend that all of the test execution is performed offshore, as interfacing with developers, business analysts, environment managers and project managers is required, so some onsite presence is needed. Again, if this is new to your organisation, try placing just a small fraction of the work offshore and increase the volume as confidence grows.

As more onus is placed on the offshore capability, the management and communication overhead increases. This should start moving the operation towards a service, and the management mechanisms should be introduced as defined in http://tcl-india.blogspot.com/2008/05/tcl-coms.html.

From a service perspective, it is critical that both the offshore and onshore sides of the operation understand each other: how they work, what their expectations are and how to get things done. Too many people place their trust in organisations that put people in place to manage an offshore setup that the client has no knowledge of or relationship with. This can work, but beware the teething troubles that can occur. A good mechanism is to have the head of offshore resource work onsite for a considerable period of time, at least 3 months if not 6, before moving into an offshore service model. Most importantly, rely on a team that has successfully managed offshore engagements and works well as a unit spread across the two countries.

To summarise, start simple, build confidence, know your offshore lead and gradually move to an offshore model. If moving straight into offshore, get expert assistance.

TCL India would be happy to discuss how to run an offshore trial with you. Please contact either Mick or Grant for further information or to set up a meeting.

Wednesday, 21 May 2008

Tester vs. Developer

Often a battle of many rounds: the tester is seen as the anti-developer, with the developer standing for creation and the tester for the destruction of the developer's work. On many occasions you will see the two departments almost fighting battles as each tries to prove its worth and capability. The smart developer will realise that the tester is there to enhance the impression given by the developer. A developer who is told where the bugs are is able to modify their code, and the result is a far happier end user. Not only that, but if the tester is involved in static testing of requirements, the difference to the developer can be huge, with each requirement being unambiguous and so far easier to develop in the first instance. When code goes live and the volume of bugs is minimal with a positive user experience, it is rarely the tester who is thanked for their diligence: more likely the developer is thanked for their fantastic software.

Yet all too frequently, the developer and their skill set are valued above the tester. Within the project life-cycle, development is considered essential, testing a "nice to have". The disciplines are two sides of the same coin, and occasionally someone migrates from the dark side to the light, or vice versa (up to you to decide which is which). The developer's mind-set is one of creation, whereas the tester's is one of destruction. There is no doubt that the tester becomes redundant if there is no development, whereas the opposite is not true. Many projects are still run today with no focus on testing, the onus being placed on the users to experience the problems and report them.

It is well cited within the testing industry that each progression of a defect to a later phase in the software development life-cycle (SDLC) makes it 10 times more expensive to resolve. So a defect found in requirements costs £1 to fix, in design £10, in development £100, in testing £1,000 and in live £10,000. Defects that translate into an outage for a business, where large volumes of people are stopped from working, are expensive indeed. Surely this makes the role of the tester incredibly valuable to the SDLC. A two-line sketch of the rule follows.
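
The rule of thumb reduces to cost = £1 × 10^n, where n is the number of phases the defect survives:

    # The oft-cited tenfold rule: each phase a defect survives multiplies its fix cost by 10.
    PHASES = ["requirements", "design", "development", "testing", "live"]
    for n, phase in enumerate(PHASES):
        print(f"defect fixed in {phase}: £{10 ** n}")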

Here are some other considerations on the value of the humble tester:

  1. The tester must be able to analyse requirements, taking on part of the guise of the business analyst during static testing
  2. Rather than looking singularly at how a function should be used, the tester looks at all of the permutations of how it should be used, then at all the permutations of how it should not
  3. Pressure is high, as the time allocated for testing on a project is regularly reduced when earlier phases slip, resulting in late code delivery
  4. The tester is trained in multiple testing disciplines and techniques, enabling them to perform sufficient testing
  5. There are as many types of testing for the tester to handle as there are development languages
  6. Automation, performance and security testing require levels of expertise akin to those of a developer or network specialist
  7. The go-live decision of a project relies primarily on the advice of the tester and the evidence that they can provide to support that decision
  8. The tester advises and works with the business to perform acceptance testing
  9. The tester is able to witness the unit and integration (in the small) testing performed by the developer
  10. The tester manages defects efficiently through to closure.

Tuesday, 20 May 2008

Test Estimation, or How Long is a Piece of String?

Estimation is an art form at the best of times, depending on the experience of the test manager, the stage of engagement in the software development life-cycle and many other considerations. Different organisations will have different expectations, and it is important to understand this as a key consideration. For example, if you start including every possible variety of non-functional testing in your estimate, in an organisation that does not normally look at this, you are going to produce a result which is wildly inaccurate.

Some companies will have rules in terms of percentages of the total project cost; 20% Analysis and Design, 40% Development and 40% Testing, for example. This will vary depending on an organisation's focus on quality, but could be as low as 10% for testing.


We would suggest that the following aspects are considered:

General Risk : How mature is the relationship with the development agency? How often is the application changed? Are more requirements anticipated? Do Service Level Agreements governing defect fix turn-around exist? Is the Configuration Management process mature?
Environments : Do they exist? Are there processes for environment management? Are the environments maintained in line with production? Will the environments be established and proven prior to test execution?
Quality : Have the requirements been static tested? How many defects or production issues are currently associated with the application/s? What level of unit and integration in the small testing is being performed? How many defects were found during earlier phases? Is the project confident that entry and exit criteria are being met? Is the project already slipping or maintaining schedule? How mature is the project team – is this something they are all familiar with doing? Is this a safety critical application/system?
Effort/Scale : How many requirements are to be tested? What percentage of the application/s is/are changing? How many cycles of execution are required? Do regression test packs already exist? What deliverables are needed/required?
Complexity : How many suppliers and development agencies are involved in the process? How many technologies and platforms are involved? How many systems are being changed? How many systems are interfacing? How many stakeholders are involved with the project/programme?
Timing : Is the project taking place over a drawn out period of time? Will the project stop and start? Is the testing expected to be performed over a very short time frame when compared to the effort required?


Any of the above can influence what is needed to complete the project, impacting different phases of the testing life-cycle or the volume of effort required from different resources. Don't forget that when project size or complexity increases, the strain on the test manager increases, and the introduction of additional resource such as defect managers may be necessary.


The base estimate should look at the volume of scripts that are required. From this you can calculate how many can be written and executed in a day and work out resource requirements accordingly. The management time can often equate to a further 33% of effort. As a guideline, a conservative estimate would look to write 5 scripts per day and execute 10 scripts per day. This is dependent on the content of the script and the experience of the testers, and can be significantly higher. Bear in mind that the cycles of execution will each act as a multiplier on the volume of test execution. Two cycles should be the minimum; more than four would be indicative of problems. A worked version of this base estimate follows.
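
This sketch uses the guideline rates above; the function and its parameter names are our own illustration:

    def base_estimate(scripts, cycles=2, write_rate=5, exec_rate=10, mgmt_overhead=0.33):
        """Return the estimated effort in person-days for preparation plus execution."""
        prep_days = scripts / write_rate            # scripts written per tester per day
        exec_days = (scripts * cycles) / exec_rate  # each cycle re-runs the pack
        return (prep_days + exec_days) * (1 + mgmt_overhead)

    # 200 scripts over two cycles: 40 + 40 = 80 engineering days, 106.4 with management.
    print(round(base_estimate(200), 1))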


TCL India have a mechanism for applying the above considerations to our estimates, with weightings associated with each of the different elements.

TCL India - Testing Help and Advice

The IT industry is becoming more co-operative, sharing its knowledge both with other IT professionals and with the business community as a whole. To this end, TCL India would like to offer advice on testing issues, derived from our experience and that of our colleagues.

If you have questions about testing, offshore or just about the blog, then please post your questions to Grant.Obermaier@TransitionConsulting.co.uk and we will do our best to respond.
Please note that there is no charge for this activity, but that neither TCL India nor any other organisation associated with the Transition Consulting Limited group of companies can be held liable for any consequence resulting from following the recommendations made.

Friday, 16 May 2008

An Introduction to Testing

It is amazing that, in this day and age, we are still asked why you should test. When you are really passionate about a subject, it is hard, to say the least, to go right back to basics and justify why you are needed as part of the software life-cycle.

Let's look at some definitions of testing:

http://www.answers.com/topic/test?cat=technology
“a procedure for critical evaluation; a means of determining the presence, quality, or truth of something”
http://www.askoxford.com/concise_oed/test_1?view=uk
“a procedure intended to establish the quality, performance, or reliability of something.”
“the procedure of submitting a statement to such conditions or operations as will lead to its proof or disproof or to its acceptance or rejection”

To put our own interpretation on this, we believe that testing helps to:
- Find problems early
- Identify if build meets requirements
- Improve user perception
- Avoid unnecessary outages
- Reduce maintenance
- Establish quality of purchases
- Understand the risks
It is not the complete answer and will not ensure that your code is bug free before it goes live, but it will enable you to understand the risk of going live. Testing looks to perform sufficient checks to understand the quality: it cannot on its own resolve the problems. Testing does not look at every single permutation, or we would be testing forever, but looks to test sufficient permutations to give confidence in the level of quality attained. In certain instances, such as safety critical applications, the volume of testing does increase, as the confidence required needs to be higher.

The above graphic shows a series of testing types. This is often an area of confusion for someone new to testing, but each of these has a specific function. Different names can also refer to the same thing, such as White Box and Unit from the above list. Part of the skill of the test manager is in knowing which test types to apply to a project and the level of intensity of the testing performed therein.

Different types of testing are performed by different people or roles associated with a project. The graphic below shows a series of testing types and the roles that would normally be expected to perform the testing. It is worth noting that the Test Manager associated with a project would take responsibility for all testing, to a greater or lesser degree.


From a testing perspective, the work is broken into 5 primary areas: Engagement, Planning, Preparation, Execution and Completion. A series of deliverables is produced through the life-cycle of a project:

Strategy: A document defining the types of testing to be applied, the criteria that allow you to start or stop testing, and shaping activities such as risk identification.

TTRM: Traceability to Requirements Matrix, providing traceability from each test case/scenario back to the original requirement specification and proving that testing covers the full set of requirements (a small coverage-check sketch follows this list of deliverables).

Test Plan: A document defining the type of testing to be performed, the test cases/scenarios to be covered, when and by whom.

Test Schedule: A project management type schedule showing all activities performed by what resources, at what time and for what duration. Further dependencies on other activities will also be included.

Test Scripts: Detail of the tests to be performed, the steps to go through and the expected results, with reference back to the test cases/scenarios.

Defect: Detail of each script that has failed, why it has failed and under what circumstances; the priority of the defect, its severity to testing and additional information allowing the defect to be replicated.

Completion/Exit Report: A summary of the findings of the testing team, showing evidence of issues and stipulating an opinion on the readiness of the item under test to be placed live.

Regression Pack: A suite of scripts that can be reused to test the primary functionality of the item in the future.
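
To illustrate the TTRM's role, here is that minimal coverage check; the requirement identifiers and the mapping of test cases to requirements are hypothetical:

    # Hypothetical requirements and a test-case-to-requirement mapping (the TTRM).
    requirements = {"REQ-001", "REQ-002", "REQ-003"}
    ttrm = {
        "TC-01": ["REQ-001"],
        "TC-02": ["REQ-001", "REQ-002"],
    }

    covered = {req for reqs in ttrm.values() for req in reqs}
    print(sorted(requirements - covered))  # ['REQ-003'] has no test coverage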

To summarise, we believe that testing is a discipline in the same manner as analysis, architecture, design or development. It helps the client understand the risks associated with the release of a newly developed or modified application. It is an essential means of introducing quality into the software development life-cycle. It protects the users of the system, looking to ensure that IT is delivering what was requested. Pre-defined outputs are produced naturally throughout the engagement.

Wednesday, 14 May 2008

TCL COMS

TCL COMS (Consultancy led Offshore Managed Service) offers our customers the perfect marriage of our consultancy capability with the economic advantage of global delivery units. It represents the way in which TCL believes offshore is best utilised and brings considerable industry experience to bear in its execution. There is a focus on ensuring that the management of the offshore capability is seamless and near invisible to the customer in terms of communication and management.

Onsite teams are made up of local and offshore resources, with the latter having a minimum of 4 years' experience in the country of delivery. This provides the customer with a consistent onsite communicative presence that is capable of overcoming any issues that may arise offshore. We also advocate the use of messaging services, such as MSN, to ensure that key communications are recorded and can be referred back to. TCL COMS is driven by a clear communication process, up and down through the management chain, and back and forth through peer level contact. In all instances the implementations are performed by resources familiar with distributed working practices and able to demonstrate a successful track record in such environments.

In the earlier diagram, the area surrounded by red dashes can be fulfilled by either the client or TCL resources. In this manner, the client can either make TCL accountable for the entire project testing, or can retain control and use the TCL resources purely to facilitate an offshore capability.

The solutions are further enhanced with the addition of a dedicated Offshore Delivery Management Team, with representation in the UK and India, which ensures efficiency in responding to demand, issue resolution and service improvement. We believe that whilst there are financial advantages to placing work offshore, roles which require high levels of communication are better resourced from the locale with which the role will interface. To this end a blended solution is offered, where the prime roles, those of the Account Manager and Test Manager(s), are sourced from the United Kingdom. Other roles may be sourced globally in order to offer high skill sets at economic cost.

As a preference, TCL work towards an optimisation of resource utilisation as defined below. It is recognised that some activities lend themselves to being performed offshore, but that there is always a need for onshore presence and communication. India is far away and the customer can often feel that they are too remote. This is especially important at the start and end of a project. By managing the testing activity carefully and over a period of time, this blend of resources should be achieved. The time taken to achieve this will vary from client to client and project to project.

The resources applying the effort change over the life-cycle of a project, starting and ending with a heavier focus on management, to define and shape the piece of work, and increasing the volume of engineers to take on the more prescribed roles of scripting and execution. The areas where the majority of effort occurs are the Preparation and Execution phases, and these are also the areas where we would have the majority of the effort offshore.

The team will naturally assume a set service structure as defined below. This looks to ensure that a core presence is maintained both onshore and offshore, with a management layer. This team is then flexed to cope with demand from the client, where the majority of resource increase or reduction is handled offshore. This ensures consistency in the touch points onsite for the project, but also that communication paths once established with offshore remain constant.

Those resources which remain constant will be responsible for interfacing with the project and for ensuring that the testing remains on track, that resources flex as demanded by the client and that the work undertaken remains on target, both in terms of financials and time.

Continuous improvement is key to the way TCL have grown and worked, and this is something that we look to bring to all client relationships. TCL COMS looks to implement service structures that deliver to the client's requirements and then surpass expectations. This is a highly professional service geared to deliver and excel.

Metrics & KPIs

Metrics and KPIs (Key Performance Indicators) are a key aspect of any successful test function and of any service delivery. TCL India recognise their importance and deliver metrics and KPIs as a natural part of their service capability, bringing them together to form a balanced scorecard. This accumulation of data is sometimes referred to as a dashboard and is designed to provide a single visual display of how the service is performing against expectation.

The TCL India Scorecard is broken into 7 key areas, although more areas of interest can be added if the client requires. These are the key components that are considered to be essential to the measurement of the service and the Testing it delivers.

Estimation : Accuracy : Overspend Areas : Environments : Code Delivery : Design : Requirements

Cost : Blended Rate : Cost per Defect : Onshore vs. Offshore : Cost per Project : Scale : Complexity

Responsiveness : Acknowledging Resource Requests : Identification of Candidates : Placement of Candidates : Staff Replacement : Issue Resolution

Satisfaction : Survey Responses : Project Feedback : Value Add : Innovations

Deliverables : Quality : Timely Delivery : Appropriate Sign Off : Resource Quality

Development Quality : Unit Testing : Code Delivery : Accuracy of Code : Defect Turnaround

Effectiveness : Defects by Phase : Root Cause Analysis : Defects in Live : Script Coverage

These metrics are grouped together and provided to the client in a single diagram. The rings represent the targeted achievements for each of the primary areas, converted onto a single scale. The points of the star represent how close the service has come to achieving each target. This means that the rings can be set at the required level for a period of time, or adjusted monthly, driving change through the service. A sketch of that conversion to a single scale follows.
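
Each point of the star is simply achievement expressed as a percentage of the ring's target. The area names match the scorecard above; the target and achieved figures are illustrative only:

    # Illustrative (target %, achieved %) pairs for each scorecard area.
    scorecard = {
        "Estimation": (90, 85),
        "Cost": (90, 92),
        "Responsiveness": (95, 88),
        "Satisfaction": (85, 85),
        "Deliverables": (90, 81),
        "Development Quality": (80, 72),
        "Effectiveness": (85, 80),
    }

    for area, (target, achieved) in scorecard.items():
        point = min(100.0, 100.0 * achieved / target)  # length of the star point
        print(f"{area:20s} {point:5.1f}% of target")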

Regression Testing

The SIGIST (Special Interest Group In Software Testing) has a glossary of terms located at http://www.testingstandards.co.uk/living_glossary.htm

Here we find regression testing defined as: “Retesting of a previously tested program following modification to ensure that faults have not been introduced or uncovered as a result of the changes made”

Regression Testing is a key component of any project and should be the final test performed prior to release to the production platform.

So what comprises a regression test?

The best way of describing this is that an application which has been previously released will be in a known state. Tests will have been written and executed against it, and a selection of them will have been extracted to form the regression test pack. The tests should exercise, as a minimum, the core functionality of the application in a positive manner. In this way, whenever a change is made to the application in the future, the regression pack can be run to ensure that faults have not been introduced or uncovered in functionality that was previously deemed to be working correctly. The regression pack should be updated as each new version of the application is released, building new functionality into the suite of regression tests and maintaining it in line with the application's capabilities.

Some debate occurs as to what a regression test should comprise, and it is fair to say that the larger the regression suite, the more confidence will exist in the new version of the application; but there is a balance to be achieved. The more tests, the longer they take to execute and the higher the cost of change. Pragmatism must be applied: rather than testing every permutation, coverage should ensure that the functionality of the application has not been impacted by change. Tests which apply different data to the same function, access the same function from a different point or negatively test the function (trying to break it) do not need to be part of the regression suite. A sketch of that selection follows.
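
This assumes each test has been tagged with whether it covers core functionality, is a positive test and has previously passed; the tags and test ids are our own illustration:

    def select_regression_pack(tests):
        """Keep positive, previously passed tests that cover core functionality."""
        return [t for t in tests if t["core"] and t["positive"] and t["passed_before"]]

    tests = [
        {"id": "TC-01", "core": True, "positive": True, "passed_before": True},
        {"id": "TC-02", "core": True, "positive": False, "passed_before": True},  # negative test
        {"id": "TC-03", "core": False, "positive": True, "passed_before": True},  # data variation
    ]
    print([t["id"] for t in select_regression_pack(tests)])  # ['TC-01']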

Experience shows that there is a distinct need for regression testing. Problems with Configuration Management or code control are often identified during regression, where previously fixed problems are re-introduced. Code which is common to different modules within the application is also a primary area of difficulty, as a change for the benefit of one module may adversely impact another.

It is key, from a regression point of view, that no test is performed that has not been run successfully on a previous occasion. In a similar manner, no code should be regression tested that has not been previously tested.

A recent experience we had with webMethods regression testing was one where, due to the nature of webMethods, several projects were being released together and the webMethods code was being converged and tested for the first time during the regression test. Because the convergence was seen as something over and above the needs of the projects, it was passed to a different team to build and to a different test team to regression test. Not only was this an inappropriate use of regression testing, but the testers with the most pertinent knowledge of what was expected were not involved.

As testers we must protect the regression testing activity. Ensure that it is full, up-to-date and complete. Failure to run regression properly increases the risk of faults in production.

Thursday, 8 May 2008

Functional Testing

Our primary offshore service is functional testing. Why?
· Progress can be systematically measured
· The testing is prescriptive and easily packaged
· The testing is as good as or better than the inputs received
· Communication is handled via reporting and defect identification
· Interfacing with other project members can be obviated

According to IEEE90, “Functional Testing ignores the internal mechanism of a system or component and focuses solely on the outputs generated in response to selected inputs and execution conditions.”

In this way, many of the perceived problems with offshore testing are removed from the equation. It is our express intention that the hurdles, obstacles and common reasons for offshore failure are removed.

Typically, 75% of testing effort goes into the production of test scripts and their subsequent execution. Our functional testing capability maximises the benefits of offshore, with offshore prices starting as low as £75 per day compared to permanent resources costing £250 per day. This gives a 70% saving on three quarters of the testing effort.

To put this into perspective, a project with a UK testing budget of £50,000 would spend £37,500 of that budget on scripting and execution. This can be reduced to £11,250 using our offshore setup, bringing the total testing budget to £23,750: less than half the original estimate (the arithmetic is shown below). The beauty of the solution is that all of the management activity and governance around the testing can remain onsite, where it can be controlled and interfaces managed. The offshore activity becomes near invisible.
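
The arithmetic behind those figures, as a quick check:

    budget = 50_000.0
    script_exec_share = 0.75   # scripting and execution: typically 75% of the effort
    offshore_saving = 0.70     # £75 per day versus £250 per day

    onshore = budget * (1 - script_exec_share)                     # 12,500 stays onshore
    offshore = budget * script_exec_share * (1 - offshore_saving)  # 37,500 becomes 11,250
    print(onshore + offshore)  # 23750.0 - less than half the original budget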


The process is simple: a series of inputs is required from the project. These are normally a Project Plan, High Level Design, Requirement Specification and Functional Specification. They are not mandatory, but the more of them that are available, the better suited the project is to effective offshoring.

From a testing perspective, the Test Plan, Schedule and Scenarios should be provided. Again, these are not mandatory and can be produced offshore if required, but in order that control is retained in the UK, the controlling documents should be specified and owned there.

Once all of the inputs have been received or created, the process of producing the scripts can begin, based on the test plan and the test scenarios. Reporting now begins, in terms of the anticipated volume of test scripts and how many have been produced against this. This phase is often referred to as the Preparation phase. During this period the onshore project team should be looking to ensure that the test environment is ready and available to the offshore team. Progress can be reported on a basis that makes the client feel comfortable, daily or weekly, depending on the length of the work.
As execution nears commencement, resources will look to smoke test, proving that the environment is ready for testing. Once testing commences, defects will be logged in the client's prescribed tool or, if none is provided, we will adopt a suitable mechanism for defect capture. Statistical data will now be produced on a daily basis and provided to the client, ensuring full awareness of issues as they arise.

On completion of the test execution, the team will generate a functional test completion or exit report, showing the full details of the testing that has been performed and the results of that testing. An executive summary is included which specifies our opinion on the project's readiness for go-live. All test assets will be delivered to the client, with the scripts forming the regression test pack highlighted. This ensures that all of the intellectual property associated with the project is returned to the client if they desire. It is our intention to ensure that the client is free to choose how future testing should be performed and not to tie them into a service that they may not want.

TCL India Introduce Web Check

Over the last couple of months we have developed a new solution, called Web Check, designed to service a client's web site and ensure that the user experience associated with its use is as the client would expect it, and as both the client and the user would like it to be.

Web sites are now a medium often used by organisations as their prime marketing tool, as reinforcement of their brand or as a means of communicating internally or externally. Some organisations exist only through their web site. The importance a web site brings to its owner is therefore potentially extremely high and could determine the success or failure of their business, or a large proportion of it. Because of this, web sites are often changing, with new products on offer, new goods to sell or a new message to put across. Some of these changes are made by adding new software or functionality, and many are handled by making changes to content. The reality is that web sites grow, alter and adjust regularly, and that finding the time to ensure that everything is as it should be is often difficult.

When things are important to us as individuals, like cars, we take care of them by performing regular services, ensuring that they are safe and will perform the next time they are needed. We do not like failures to occur: a breakdown, be it at home or on a journey, is something to be avoided. Web Check has been designed to give a web site just such a service, offering the site owner peace of mind that it is working correctly and that the user experience is a positive one.

Web Check looks at key attributes of each web site and, by employing formal testing techniques, looks to ensure that these work as expected. The output report identifies all of the checks that have been performed and the results of those checks. These are translated into a single quality percentage against the checks that have taken place, providing a singular view of the quality achieved (a one-function sketch follows).
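
The quality percentage itself is simply the number of passed checks over the total checks performed; as a sketch:

    def quality_percentage(results):
        """results: one boolean per check performed (True = passed)."""
        return 100.0 * sum(results) / len(results)

    checks = [True, True, True, False, True]  # illustrative check outcomes
    print(f"{quality_percentage(checks):.0f}%")  # 80%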


...................................................................................................................................................


TCL is a specialist consultancy in software testing. As a company, our core purpose is to Develop and Deliver World Class Solutions in Software Testing that are Innovative, Structured and Professional.
The company exists to meet the needs of our clients, who strive to ensure that their IT projects create value and demonstrate return on their investment. We are geared to deliver services in all areas of software testing, including functional, non-functional and process improvement.
The first TCL company was based in the UK (Transition Consulting Limited) and the group has now expanded to include enterprises in the US and India. A further expansion into Australia or New Zealand is expected within the next five years.
For more details visit: http://www.tcl-global.com/