Legs, body, head, eyes, mouth; or are we talking about software bugs? Bug, error, defect; it matters little what it is called, though some are more sensitive about the subject than others and prefer more politically correct terms such as 'issue' or 'observation'. The reality is that during the testing of an application or piece of software, something is identified as not performing in the expected manner, whether judged against experience or against the requirement specification, functional specification or design documents. A defect report typically captures the following information (a sketch of such a record follows the list):

· Unique defect identifier – A unique reference by which the defect can be identified.
· Application name or identifier and version – The software in which the defect was located.
· Environment specifics – The environment that was being used to test the application.
· Test script identifier (if applicable) – A unique reference from the test script, enabling the defect to be traced back to the script that found it.
· Defect synopsis – A brief description of the defect that has been encountered.
· Detailed description of defect – A full description of the defect, enabling it to be reproduced by the developers or others trying to resolve it.
· Test steps executed – From the test script, the steps that were executed and the specific step that failed.
· Expected result for the test step – The expected result of the step, e.g. the displayed logo should have been blue and white.
· Actual result for the test step – The result actually encountered, e.g. the displayed logo was red and white.
· Evidence of defect – Additional information showing the defect, such as screenshots.
· Severity estimate – The impact of the defect on the tester's ability to complete testing.
· Priority – The importance of resolving the defect to the organisation.
· Tester name – The name of the tester who reported the defect.
· Test date – The date on which the tester located the defect.
· Defect reporting date – The date on which the defect is reported; this should be the same day it is located.
· Defect assigned to – The person being asked to resolve the defect, or back to the tester for re-testing once a fix has been delivered.
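In a defect-tracking tool or database these fields become a simple record. The following is a minimal sketch in Python; the class and field names are illustrative assumptions, not a prescribed schema, and a real project would use whatever structure its chosen defect tool imposes.

```python
from dataclasses import dataclass, field
from datetime import date
from typing import List, Optional

@dataclass
class DefectReport:
    """Illustrative defect record mirroring the fields listed above."""
    defect_id: str                        # unique defect identifier
    application: str                      # application name/identifier and version
    environment: str                      # environment used for testing
    synopsis: str                         # brief description of the defect
    description: str                      # full detail, enough to reproduce the defect
    steps_executed: List[str]             # test steps executed, ending with the failing step
    expected_result: str                  # e.g. "logo displayed in blue and white"
    actual_result: str                    # e.g. "logo displayed in red and white"
    severity: str                         # impact on the tester's ability to complete testing
    priority: str                         # importance of resolution to the organisation
    tester_name: str
    test_date: date                       # date the defect was located
    reported_date: date                   # date the defect was reported (ideally the same day)
    assigned_to: Optional[str] = None     # developer resolving it, or tester for re-testing
    test_script_id: Optional[str] = None  # test script reference, if applicable
    evidence: List[str] = field(default_factory=list)  # e.g. paths to screenshots
```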
The key then is to report on defects on a daily basis during test execution, keeping the project informed of the number of defects raised and closed that day and the resulting cumulative totals. This enables the Project Manager to make decisions on the project's likelihood of going live on time, whether to apply more resources to certain aspects of the project, whether overtime is required, and so on.
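As a rough illustration of that daily summary, the sketch below counts defects raised and closed per day and keeps a running total of defects still open; the hard-coded event list stands in for whatever the defect-tracking tool actually exports.

```python
from collections import Counter
from datetime import date

# Hypothetical export from the defect tool: (date, event) pairs.
events = [
    (date(2024, 5, 1), "raised"), (date(2024, 5, 1), "raised"),
    (date(2024, 5, 1), "closed"), (date(2024, 5, 2), "raised"),
    (date(2024, 5, 2), "closed"), (date(2024, 5, 2), "closed"),
]

raised = Counter(day for day, kind in events if kind == "raised")
closed = Counter(day for day, kind in events if kind == "closed")

open_total = 0
for day in sorted(set(raised) | set(closed)):
    open_total += raised[day] - closed[day]
    print(f"{day}: raised={raised[day]}, closed={closed[day]}, still open={open_total}")
```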

[Figure: Testing types]

The above graphic shows a series of testing types. This is often an area of confusion for someone new to testing, but each type has a specific function. Different names can also refer to the same thing; White Box and Unit testing, from the list above, are often used interchangeably. Part of the skill of the test manager lies in knowing which test types to apply to a project and at what level of intensity.
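To make one of those types concrete, the sketch below shows a unit (white-box) test written with Python's built-in unittest module against a hypothetical discount function; the function and test names are invented purely for illustration.

```python
import unittest

def apply_discount(price: float, percent: float) -> float:
    """Hypothetical unit under test: reduce a price by the given percentage."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class ApplyDiscountTests(unittest.TestCase):
    """White-box/unit tests: written with knowledge of the function's internals."""

    def test_normal_discount(self):
        self.assertEqual(apply_discount(200.0, 25), 150.0)

    def test_invalid_percentage_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)

if __name__ == "__main__":
    unittest.main()
```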

As execution nears commencement, resources will carry out a smoke test to prove that the environment is ready for testing. Once testing commences, defects will be logged in the client's prescribed tool or, where none is provided, in a suitable defect capture mechanism that we will adopt. Statistical data will then be produced on a daily basis and provided to the client, ensuring full awareness of issues as they arise.
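A smoke test can be as simple as confirming that the key entry points of the environment respond at all before deeper testing begins. The sketch below assumes a web application and checks a couple of hypothetical URLs; the endpoints and the HTTP-200 pass criterion are assumptions to be replaced with whatever the real environment requires.

```python
import sys
from urllib.request import urlopen

# Hypothetical entry points of the environment under test.
SMOKE_CHECKS = [
    "http://test-env.example.com/health",
    "http://test-env.example.com/login",
]

def smoke_test() -> bool:
    """Return True only if every endpoint answers with HTTP 200 within 5 seconds."""
    all_passed = True
    for url in SMOKE_CHECKS:
        try:
            with urlopen(url, timeout=5) as response:
                passed = response.status == 200
        except OSError:  # covers connection failures and timeouts
            passed = False
        print(f"{'PASS' if passed else 'FAIL'}: {url}")
        all_passed = all_passed and passed
    return all_passed

if __name__ == "__main__":
    sys.exit(0 if smoke_test() else 1)
```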