Tuesday, 15 May 2012

ecommerce: Five things to know this week, May 14th 2012

Here is a copy of the ecommerce newsletter we send to some of our customers and prospects.
This is the second edition of the weekly newsletter.

Given the plethora of information already available on the internet, we thought curating just five stories or downloadable reports of interest would be a better approach than adding to the deluge!



Hope you like the articles.
Ecom newsletter 14052012






To know more about our ecommerce testing solutions contact:
Arun Kumar
arun.Kumar@tcl-asia.com




.....................................................................
TCL is a specialist consultancy in software testing.  As a company, our core purpose is to Develop and Deliver World Class Solutions in Software Testing that are Innovative, Structured and Professional.
The company exists to meet the needs of our clients who strive to ensure that their IT projects create value and demonstrate return for their investment.  We are geared to deliver services in all areas of software testing including functional, non-functional and process improvement.
The first TCL company was based in the UK (Transition Consulting Limited) and the group has now expanded to include enterprises in the US and India.  A further expansion into Australia or New Zealand is expected within the next five years.
For more details visit: 
http://www.tcl-global.com/

Monday, 14 May 2012

TCL presents 'Aligning QA needs to the start up journey and market forces'

Here is a great video on the quality journey for startups, and on why quality and
testing should never be left until the end.

There is always scope to improve quality and doing so
almost always pays off in terms of real dollars earned and saved.
 








Wednesday, 9 May 2012

Multi-stores seem to be catching on. What are the advantages?






Some trend reports have suggested that the fatigued e-commerce customer is now veering towards focused (niche) websites that offer a deeper range in fewer categories. It appears that large ecommerce players with many categories might be responding by exploring the “multi-store” option, which is not new but has so far never become very popular.

How can I tell? Because, for one, I am getting a string of promotional newsletters from a bunch of stores which fall under the same umbrella: watchkart, bagskart, sunglasseskart… not amusing.
So here is the warning: go easy on the “multi-promotions” unless you want your earnest marketing efforts to antagonise potential customers.

If you are a typical broad-spectrum online retailer, you will have multiple product categories and sub-categories, and some of your biggest challenges (apart from supply chain and inventory management) will be:
  •       Search Engine Optimisation: given the wide range and variety of stock, defining your range in a simple key phrase is next to impossible, let alone designing a phrase that people will actually use to search.
  •       Designing neat and easy site navigation: the range needs to be neatly ordered, categorised and placed so that products are easy for any visitor to find. You either end up with too many top-level categories or deeply nested ones, both of which are best avoided.

This is why most top e-commerce platforms now offer a multi-store solution, and hosting providers are happy to support it as well. A multi-store is basically an option that allows you to split your inventory into two or more stores, all run from a single platform instance.

These multi-stores have a single administration, a single list of orders, and a single hosting plan making them easier to handle at the back-end. But for visitor navigation and Search Engine Optimisation, the advantages are almost like having separate niche websites, because by splitting your inventory into smaller and focused areas, it becomes easier to target the necessary keywords and phrases.

So by having a site dedicated to a single large area of inventory, e-tailers can use fewer but more clearly defined categories. These focused category names and titles also tend to include the main keywords, with SEO in mind, while the category descriptions become more relevant and keyword-rich.

The rule of thumb when splitting your inventory into multiple stores is to do it from the customer's point of view. See your inventory as your customers would view it, and go with where there is a natural split.

What is highly dangerous and must be avoided is the temptation to put the same products in two stores. This will only ensure that search engines downgrade one of your stores for “duplicate content”, which defeats the purpose of having multiple stores.
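One simple safeguard is to audit the store catalogues for overlap before going live. Here is a minimal sketch of such a check; the store names and SKUs are invented for illustration, not taken from any real platform:

```python
# Flag products listed in more than one store, which risks
# "duplicate content" penalties from search engines.

def find_duplicates(stores):
    """Return {sku: [store names]} for SKUs appearing in two or more stores."""
    seen = {}
    for store, skus in stores.items():
        for sku in skus:
            seen.setdefault(sku, []).append(store)
    return {sku: names for sku, names in seen.items() if len(names) > 1}

catalogues = {
    "watch-store": {"W-100", "W-200", "S-300"},
    "sunglasses-store": {"S-300", "S-400"},
}

# S-300 appears in both stores and should be moved to exactly one of them.
print(find_duplicates(catalogues))  # {'S-300': ['watch-store', 'sunglasses-store']}
```

The same idea extends to shared category descriptions and landing-page copy: anything served verbatim from two store fronts deserves a review.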

By establishing a brand (your main website) you can also have a common set of terms and conditions, a common returns policy, a common guarantee and so on. But, again, the common pages have to be managed so that search engines do not consider your websites to have duplicate content.

Theoretically there is no limit to the number of stores you can add. There is, however, a practical limit: although you are using a single instance of a platform service, additional stores do add to the overheads.

A multi-store strategy comes with pros and cons that you need to weigh, but it allows large-format e-tailers to mimic niche websites in terms of design and merchandising, and thus attract more customers for a particular category.

TCL offers independent testing services for e-commerce websites, new or existing, and helps to improve the overall performance of your website.









Friday, 4 May 2012

A Great Infographic on the State of Web Performance in the US, 2012




Some important observations:
  •       Pages have gotten bigger.
  •       Repeat views are a whopping 20% slower than they were last year.
  •       Page Speed scores have gotten significantly worse.

To download the full report go here:
http://www.strangeloopnetworks.com/2012-SU-Report/







Thursday, 26 April 2012

Some digital trends for ecommerce, for 2012

Here are a couple of interesting videos on digital trends for ecommerce in 2012.

As Software Testing Consultants, we believe each of these trends will have large implications for ecommerce technology, the website front-end and back-end systems, and therefore for the testing strategy and effort!



In some of our future posts we plan to address a few of these trends and their implications for testing, in greater depth.




Wednesday, 25 April 2012

Why test an ecommerce website? The basics.



E-commerce or E-business is defined as the software and business processes required for businesses to operate solely via web portals, but e-commerce is much more than the provision of a web page as the customer interface. 

Why is testing important in the e-commerce environment? 
The first and primary reason is because technology for e-commerce is, by its very nature, business critical.  The second reason is that the history of e-commerce development is littered with expensive failures, at least some of which could have been avoided by better testing before the site was opened to the general public.

A successful e-commerce application should be usable, secure, reliable, maintainable and highly available to the user. These characteristics relate in part to the web technology that usually underlies e-commerce applications, but they are also dependent on effective integration with other applications.  E-commerce integrates high value, high risk, high performance business critical systems, and it is these characteristics that must dominate the approach to testing: do all the parts work, and do they work well together?

If we simplify matters and consider that an e-commerce site is fundamentally made up of a front end (the human-computer interface), a back end (the software applications underlying the key business processes) and some middleware (the integrating software to link all the relevant software applications), we can plan the independent testing of each of these components.

The front end of an e-commerce site is usually a website that needs testing in its own right.  
The site must be syntactically correct, but it must also offer an acceptable level of service on one or more platforms, and have portability between chosen platforms.  It should be tested against a variety of browsers, to ensure that the website is consistent across browsers.  Usability is a key issue and testing must adopt a user perspective.  The services offered to customers must be systematically explored, including the turnaround time for each service and the overall server response.  This, too, must be exercised across alternative platforms, browsers and network connections.
           
The back end of e-commerce systems will typically include ERP and database applications.  What is essential is to apply the key front end testing scenarios to the back end systems.  In other words, the back end systems should be driven by the same real transactions and data that will be used in front end testing.  The back end may well prove to be a bottleneck for user services, so performance under load and scalability are key issues to be addressed.  Security is an issue in its own right, but also has potential to impact on performance.

Integration is the key to e-commerce. An e-commerce application generally integrates components such as a database server, server-side application scripts, an application server, HTML forms for the user interface, client-side scripts, a payment server, and programs that integrate with legacy back-end systems. If the database server, web server and payment server come from different vendors, considerable effort goes into networking these components, understanding connectivity issues and integrating them into a single environment.
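The integration testing described above can be sketched as driving one order through stubbed components and checking that they agree afterwards. This is a minimal illustration with invented component names, not any real platform's API:

```python
# An end-to-end integration check: the same order drives stubbed
# inventory and payment components, and we verify they stay consistent.

class Inventory:
    def __init__(self, stock):
        self.stock = dict(stock)

    def reserve(self, sku, qty):
        if self.stock.get(sku, 0) < qty:
            raise ValueError(f"insufficient stock for {sku}")
        self.stock[sku] -= qty

class PaymentGateway:
    def __init__(self):
        self.captured = []

    def capture(self, order_id, amount):
        self.captured.append((order_id, amount))

def place_order(inventory, payments, order):
    # Reserve stock first, then capture payment -- the ordering used
    # in this sketch; real platforms vary.
    inventory.reserve(order["sku"], order["qty"])
    payments.capture(order["id"], order["qty"] * order["unit_price"])
    return "confirmed"

inv = Inventory({"SKU-1": 5})
pay = PaymentGateway()
status = place_order(inv, pay, {"id": "ORD-1", "sku": "SKU-1", "qty": 2, "unit_price": 10.0})

assert status == "confirmed"
assert inv.stock["SKU-1"] == 3            # stock was decremented
assert pay.captured == [("ORD-1", 20.0)]  # payment captured exactly once
print("integration check passed")
```

With real vendor components, the same transactions that drove the front-end tests would replace these stubs, which is exactly the point made above about reusing front-end scenarios against the back end.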
  
Contributed by:
Thanooj Kumar
Transition Consulting Limited






Friday, 20 April 2012

Two great Infographics on ecommerce website testing.








Tuesday, 17 April 2012

What do top ecommerce companies perceive as the top opportunities for improvement?


The key is not to prioritise what's on your schedule,
but to schedule your priorities.
 

Stephen Covey




No company, not even the largest ones, can do everything at once.
But how you prioritise speaks volumes about who you are; it can hugely affect how your brand is perceived, and that naturally affects business results.

So, when a prominent retail research company surveyed ecommerce leaders and laggards (across geographies, verticals and sizes) on what they consider the major opportunities, significant differences emerged in their thought patterns.




Some of this could reflect the fact that emerging or relatively new ecommerce players are still in the early phase, worried about nuts-and-bolts operations like payments, promotions and social integration.

But those are now considered the bare essentials: as technology improves, almost everyone will be able to offer a smooth and secure online shopping experience and enable sharing across social media.

Online, as in real life, what will set a brand apart is the store experience.
Spectacular merchandising, rich media, augmented reality, virtual assistants: the more memorable the customer’s experience, the more he or she values the brand.

Of course there are cross-channel opportunities, but once you have mastered the art of creating memorable experiences for your customers, you can port that skill across channels much more easily than by trying to do everything at once.

So, if you are a new ecommerce company, striving to get and retain customers, would you rather invest in

a) A great "in-house merchandiser" and a “customer experience designer” who can delight your customer and love doing what they do?
                                               Or
b) A large in-house technical team including a small army of testers, who are bored to death?


If you said (b) you must have your reasons, but if you said (a), then TCL can be a trusted and valuable partner. We offer end-to-end testing services for ecommerce portals and websites that can free up your capital to invest in the strategic resources who could change the future of your company.

We believe in the value of early engagement with our clients; that does not necessarily mean increased testing costs, but it almost always means an overall reduction in project costs, and of course the avoidance of potential revenue loss and damage to reputation.


To know more:
Contact K. V. Shashi Kiran
Shashi.Kiran@tcl-asia.com
+91 984 500 8696 


Tuesday, 10 April 2012

Testing everything, and how the future might look





When we talk testing (we presume most of you are software testers), your mind probably jumps to the last test case you ran or the last killer "bug" you detected and logged.

But the word testing covers a wide gamut of possibilities; everything could (some would say must) be tested, including content for the web.

Now, how does one test content?
Of course there are spell checks and grammar checks, but apart from that what can be done to ensure that the content is optimised to deliver what it is supposed to?


Turns out there's a lot that can be done, if you have the right resources and if you are determined to approach digital marketing in a structured, result-oriented manner.

SAP, for instance, has set up a digital test lab to standardise the testing of all content to be published on the company's web pages, including usability, design and registration forms, as well as on its recently launched mobile-enabled site.

In SAP's words:
“We take an idea [such as an offer] and look at all the things that lead up to a conversion—the process, Web pages, design, user experience, content, offer and actual registration process. Then we do a blend of A/B and multivariate testing, and image testing [such as putting an image next to the offer], to see where we get the most significant lift.”

Similarly, HP is making content testing a priority:
“We have a publishing calendar every week, which looks at all the content we're pushing out to meet our corporate goals. We use LinkedIn, Twitter and Facebook to see what topics resonate.”

We won't go into too much detail here, as this post is not about content testing, but imagine that: entire teams dedicated to testing content!

As our world becomes more digital, data-driven, cloud-hosted and mobile, and as software becomes ubiquitous, present everywhere from fridges to cars and household robots, the lines between different types of testing, analytics and strategy consulting might blur.
What we might have are large independent "testing" labs that test everything: design, content, products, software and everything else. They would not only test, but would predict future problems and suggest strategies to overcome them.

Is that another way of saying "full-service IT company"? We don't know, but we don't think so.
What we see as the next generation of independent testing labs are teams offering hybrid services of testing, analytics and strategy, but not necessarily design and development.

The skills in demand will be the ability to grasp new concepts, business models and consumer insights; to put information together in different ways; to create virtual proofs of concept through high-end predictive data analytics (or sheer intuition); and to foresee problems and point them out in advance.

Customers will pay a premium for this ability to predict scenarios and outcomes, not by crystal-gazing, but through a combination of intuition, testing and advanced analytics.










Tuesday, 24 August 2010

Zappers - 18th August

The 5th edition of ‘Zappers – Eventful Testing’ was yet another successful event organized by TCL, encouraging and discovering avid testers across the city. The event was wholly organized and sponsored by TCL and held at The Grand Magrath Hotel in Bangalore.
As attendees began walking into the venue by 6:30 pm, they were greeted with appetizing food and post-work beer. It was nice to see new faces and some familiar ones socialize with each other as they discussed the proceedings of the evening. By 7:45 most people were well settled and raring to go. There was the omnipresent connectivity glitch, which was soon rectified by the hotel staff.


The event saw an array of talented testers glued to their screens, battling it out, finding bugs and logging them into the bug reporting system. The event was conducted primarily to encourage peer sharing and to develop the potential of the many enthusiastic testers from companies around Bangalore. HP, SourceEdge, Canary’s, Thomson Reuters, EFI, Fidelity, and Collabera were some of the participating organizations. There were a number of well-known testers from the Weekend Testing community who were looking to establish their dominance and, to a certain level, did. The applications given to participants to test were IrfanView, Google Chrome 6 Beta, Songbird 1.8 Beta, Firefox 4 Beta and, finally, the popular game Quake 3 Arena. Teams logged a total of 190 defects in 60 minutes, which kept our judges on their toes.

The evening of competition and anticipation was brought to an end with participants from the Weekend Testing community bagging first place and walking away with a cash prize. The team consisted of Pradeep, Sharath and Santhosh. The prize for the Show Stopper bug went to Parimala, who almost couldn’t make it to the event but was quite evidently happy she did. The prizes were handed out by the Chairman, Stewart Noakes, and the COO of TCL India, Manoj Chandrappa. The event concluded with dinner and a few photo sessions.
The event can be termed a success purely because of the interest it generated. It is not aimed at drawing hundreds of participants but at providing a platform and creating a community. We can only hope new connections were formed, and we extend our gratitude to each and every participant.


Pictures of the event can be found on our Facebook page.

Marcel Hoover
TCL India

Friday, 9 July 2010

Early Testing: Always Consumer Friendly

As testing professionals, it is up to us to deliver a quality product that will satisfy our customers and lead to a more successful and prosperous organization. To do this we must locate and correct defects efficiently, select appropriate processes and technology (and lead others to apply them), and effectively communicate the results of our efforts to management.

A test scenario, and a bug report deferred as “as-designed”: according to the Associated Press, “Toyota is recalling 437,000 Prius and other hybrid vehicles worldwide to fix brake problem.” Toyota is the world’s largest automaker, with an impeccable quality reputation until now.

As it turns out, the cause of the problem is a bug in the software that controls the brake system. “There have been about 200 complaints in Japan and the U.S. about a delay when the brakes in the Prius were pressed in cold conditions and on some bumpy roads. The delay doesn’t indicate a brake failure. The company says the problem can be fixed in 40 minutes with new software that oversees the controls of the antilock brakes.”

I don’t mean to weigh in on Toyota’s disastrous situation. I just think this is an essential software testing lesson that illustrates some key issues testers like us have experienced over the years. “After receiving a similar report from the U.S. in October, Toyota’s tests concluded that a glitch in the Prius antilock brake system software could reduce braking force when drivers traveled across bumpy surfaces.
The company wrote that ‘although this system was operating as intended,’ it decided to make a change to its production line in January to address the problem,” Kareyama reported.

If I wrote a bug report, it would look like this:
  • Bug Report Summary: Braking force is reduced (the problem) when driving across bumpy surface (the scenario).
  • Reproducibility: Intermittent.
It seems that the Resolution can be treated as “As-designed” (although it does not work in certain scenarios, that was how Toyota designed it), or “Deferred” (the problem is acknowledged and will be fixed later), or “To-be-fixed” (but in the next release). We already know the outcome. So, what do we learn?
  • Scenario-based and exploratory testing during system-test is essential—this requires skill and creativity in test design, not test-driven development (which is all good and an essential element to software development) or process-driven testing such as CMMi or ISO-9000 (which is good for quality control).
  • Software testing is not quality assurance—testers can report bugs, but a decision for corrective action lies somewhere else. There is nothing wrong with that. Testing is an important information service provider. We find and report problems.
  • Software is everywhere; and software is buggy—our job as testers is to find bugs by breaking software in every possible way we can!
Last but not least, we testers don’t create the software or the product; we help improve software reliability through our defect-finding skills. By breaking software, we are saving consumers and our company time and money (even contributing to public safety in this case). So keep breaking software, testers… your job is much appreciated!

By Shri Vidhya
shri.vidhya@tcl-asia.com

A Thought on Test Effort Estimation

Introduction

Test estimation always starts with the following common questions.
  1. What is required to test software: resources? Infrastructure? Time? Proper planning and estimation?
  2. How many resources, and how much time and infrastructure, are required to understand requirements and plan the test strategy? To write and review test cases? For test execution? For reporting?
  3. Why estimate the testing process: to deliver a quality product on time? To complete the process on a low budget?

Commonly in a software development environment, the focus of estimation is on the development activity. The project estimation team tends to estimate the development effort carefully and underestimate the testing effort. Usually, effort for the testing phase is taken to be 30-40% of the total development effort. This approach results in insufficient testing effort and test coverage.

Test estimation is an iterative process. Usually we create estimates in terms of test cases, early in the test cycle. Here we need to make it clear to clients: “This estimation is based on the current understanding of the requirements and may change.” That provides flexibility later in the test cycle to ask for more resources and time.

Challenges:

Effort estimates produced by the testing team are often considered too high by other teams, because no formal estimation process exists. The real challenge for the testing team, at both the organization and project level, is to come up with an effective estimation technique for testing activity.
Effort estimation at proposal time is particularly difficult because requirements are not yet clear and end-to-end scenarios are hard to foresee; hence the industry-standard approach of taking around 40% of the development effort.
Sometimes even expert managers, especially those unfamiliar with software testing projects, have real trouble estimating the time and resources to allocate for testing. It is important to combine good estimation techniques with an understanding of the factors that can influence effort, time, dependencies, and resources.
The reality is that successful test estimation is a challenge for most organizations: few can accurately estimate software development effort, much less the testing effort of a project.


Solution?

Several estimation approaches are used in the industry. The following are some of them.

Implicit Risk Context Approach: A typical approach to test estimation is for a manager to implicitly use risk context, in combination with past personal experiences in the organization, to choose a level of resources to allocate to testing. This is essentially an intuitive guess based on experience.

Metrics-Based Approach: A useful approach is to track an organisation's past projects and the associated test effort that worked well for them. Once a set of data covering the characteristics of a reasonable number of projects exists, this 'past experience' information can be used for future test project planning. However, determining and collecting useful project metrics over time can be extremely difficult.

Iterative Approach: With this approach a rough estimate is made for a large test effort. When testing starts, a more refined estimate is made after a small percentage (e.g., 5%) of the first estimate's work is done. At this point testers have gained additional project knowledge and a better understanding of issues, general software quality, and risk. Test plans and schedules can be re-factored if necessary and a new estimate provided. A further refined estimate can be made after a somewhat larger percentage (e.g., 10%) of the new estimate's work is done.
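As an illustration of this iterative refinement, a minimal sketch (the helper name and all figures are hypothetical, not a prescribed formula) re-scales the remaining estimate by the productivity observed on the completed portion of the work:

```python
def refine_estimate(initial_estimate_hours, work_done_fraction, actual_hours_spent):
    """Refine a test-effort estimate once a fraction of the planned work is done.

    Scales the remaining estimate by the ratio of actual to planned effort
    for the completed portion of the work.
    """
    planned_for_done = initial_estimate_hours * work_done_fraction
    productivity_ratio = actual_hours_spent / planned_for_done
    remaining_planned = initial_estimate_hours - planned_for_done
    return actual_hours_spent + remaining_planned * productivity_ratio

# After 5% of a 400-hour estimate, 30 hours were actually spent
# (20 hours were planned), so the refined total rises to 600 hours.
refined = refine_estimate(400, 0.05, 30)  # → 600.0
```

If the actuals match the plan exactly, the refined estimate simply reproduces the original figure.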

Test Work Breakdown Approach: This approach separates the expected testing work into a collection of small tasks, for which estimates can be made with reasonable accuracy. In many large projects, however, such accuracy is hard to achieve: if large numbers of defects are found, the time required for testing, re-testing, defect analysis and reporting increases. This also adds to development time, and if development schedules and effort do not go as planned, testing is impacted further.

Since estimation for testing varies from project to project, and the test strategy depends on the project requirements, the testing team should be part of the estimation process from the requirements stage.

The test estimation technique should consist of the following major steps.

  1. Knowing the test life cycle: The main focus of this activity is to identify the different stages of the test life cycle where testing is to be done. The phases of the test life cycle can be:
  • Test requirement phase
  • Test case design phase
  • Test script development phase
  • Test execution phase
  • Test result analysis and documentation phase

The objective of identifying the different life cycle phases is to estimate size and effort for each phase and to document the estimation assumptions. The team can later trace any variation in the estimate back to these assumptions.

2. Output identification for each test phase: In this activity, the work product of each phase of the test life cycle is identified. The job of the testing team is to estimate the size of these work products. The following figure shows the different phases of testing and their respective products.



3. Estimating size for each phase: The main activity is to estimate the size of the work product for each phase. All assumptions made during estimation should be documented for future analysis if any variation in the estimate arises. Predefined techniques such as Wideband Delphi size estimation can be used, and data from previous projects can serve as a reference depending on the project's scope and nature.

Size estimation for each phase is detailed below.

I. Test requirement phase: The main goal is to analyse the project requirements and identify the test requirements based on the test strategy. The testing team should brainstorm the project requirements and categorise the test requirements.
Test requirements can be categorised mainly as functional, GUI, performance, security and compatibility.
Here, the testing team can identify the positive and negative test requirements for each category, which helps in identifying test data. The purpose of this activity is to make the testing team part of the estimation from the requirements stage. Any ambiguity in the requirements can also be addressed here.

Size estimation process:
Input: Project requirement, Use case
Process: Project requirement analysis, test requirements identification and categorization, documentation of estimation assumptions
Output: Number of test requirements in each category: functional, GUI, performance, compatibility and security.

Note:
1. Under each category there can be positive and negative conditions (an estimation assumption).
2. The platform for testing identified during estimation is assumed not to change during the test life cycle.

II. Test case design phase: The aim of this phase is to identify and generate test cases. Both manual and automated test cases are identified in this phase, and test data is identified for positive and negative conditions. The test strategy can be used as the basis for test case design.

Size estimation process:
Input: Test requirement
Process: Test requirement reanalysis; identification and categorisation of test cases based on test requirements; application of the conversion factor; documentation of estimation assumptions
Output: Number of functional, GUI, performance, compatibility, security test cases
Conversion Factor: The best way to identify the conversion factor is to club together test requirements that have similar scenarios.

Number of test cases = Number of test requirements / N
Here N is the conversion factor.
If N = 1 then Number of test cases = Number of test requirements
The conversion factor can also be used to calculate test coverage against the requirements; for example, with N = 2 one test case covers two requirements.
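The conversion-factor calculation can be sketched as follows (the category names, counts and the ceiling rounding are illustrative assumptions):

```python
def estimate_test_cases(test_requirements_by_category, conversion_factor):
    """Number of test cases = number of test requirements / N.

    `conversion_factor` (N) reflects how many requirements with similar
    scenarios can be clubbed into one test case; N = 1 means one test
    case per requirement. Uses ceiling division so no requirement is
    left uncovered by a fractional test case.
    """
    return {
        category: -(-count // conversion_factor)  # ceiling division
        for category, count in test_requirements_by_category.items()
    }

requirements = {"functional": 40, "GUI": 12, "performance": 6}
cases = estimate_test_cases(requirements, 2)  # one test case covers two requirements
# cases == {"functional": 20, "GUI": 6, "performance": 3}
```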

III. Test script development phase (if automating): The aim of this phase is to automate test cases using test tools. The test asset generated from this phase is the test script. The goal of the estimation team is to identify the number of test scripts and the source lines of code (SLOC).

Size estimation process:
Input: Test cases, external Interfaces, third party controls, verification points
Process: Identification of automatable test cases, external interfaces, third-party controls and reusable components based on the product features, and documentation of estimation assumptions
Output: Number of test scripts, reusable components, and total SLOC.

Note:
Total SLOC = Executable lines + Comment lines
1 Test script = N SLOC (where ‘N’ is average SLOC per Script)
Total SLOC = N x Number of Test scripts
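A minimal sketch of the SLOC arithmetic in the note above (the script count and average SLOC per script are assumed figures):

```python
def total_sloc(number_of_scripts, avg_sloc_per_script):
    """Total SLOC = N x number of test scripts, where N is the average
    SLOC per script and SLOC counts executable plus comment lines."""
    return number_of_scripts * avg_sloc_per_script

# 35 automated scripts averaging 120 lines each (executable + comments)
sloc = total_sloc(35, 120)  # → 4200
```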

IV. Test case execution phase: The main aim of this phase is to execute the manual and automated test cases. The test asset for this phase is the test log. The goal of the testing team is to identify the number of test cases to be executed.

Size estimation process:
Input: Number of manual test cases and automated test cases
Process: Identification of the number of manual and automated test cases, the number of defects expected to be detected (e.g. N test cases will detect one defect), and documentation of estimation assumptions
Output: Number of manual and automated test cases to be executed, and number of probable defects

Note:
The number of probable defects helps identify the number of test cases to be executed during regression
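The "N test cases will detect one defect" assumption from the process step can be sketched as follows (the figures are illustrative):

```python
def probable_defects(test_cases_to_execute, cases_per_defect):
    """Estimate probable defects, assuming every `cases_per_defect`
    executed test cases will detect one defect."""
    return test_cases_to_execute // cases_per_defect

# 200 test cases, with one defect assumed per 8 cases executed
defects = probable_defects(200, 8)  # → 25
```

The resulting defect count then feeds the sizing of the regression cycle below.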

V. Regression cycle: The aim of this activity is to re-execute test cases. It is quite difficult to identify how many iterations of testing need to be carried out. Depending on the number of defects, the estimation team can identify the number of test cases to be executed.

Size estimation process:
Input: Number of manual and automated test cases, number of defects uncovered
Process: Identification of the number of manual and automated test cases to be executed, identification of test coverage criteria, and documentation of estimation assumptions
Output: Number of regression cycles and effort estimation

4. Estimating effort: Based on the size estimated for each phase, the total testing effort for the project can be calculated.

Total effort for each phase = Size of the work product / Productivity
The productivity factor should be considered separately for manual and automated testing. Productivity is specific to each organisation, as it is influenced by the knowledge and skill level of the testing team.

e.g. Manual Testing
Total effort for test case generation = Number of manual test cases / Productivity
Where Productivity = Number of test case generated /Time

e.g. Automated Testing
Total effort for test script generation = Total SLOC / Productivity
Where Productivity = Total SLOC / Time
SLOC = executable lines + comment lines
Total testing effort for first iteration = Effort for test requirement + Effort for test development + Effort for test execution

Note:
Test development comprises test case and test script development
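The effort formulas above can be sketched as follows. Note that with productivity defined as output per unit time (test cases/hour, SLOC/hour), effort is size divided by productivity; all figures here are illustrative assumptions:

```python
def phase_effort(size, productivity_per_hour):
    """Effort (hours) = size of the work product / productivity,
    where productivity is units produced per hour."""
    return size / productivity_per_hour

# Manual: 120 test cases generated at 3 cases/hour
manual_effort = phase_effort(120, 3)        # → 40.0 hours
# Automated: 4200 SLOC of scripts at 25 SLOC/hour
automation_effort = phase_effort(4200, 25)  # → 168.0 hours
# Execution: 120 cases run at an assumed 6 cases/hour
execution_effort = phase_effort(120, 6)     # → 20.0 hours

# Total testing effort for the first iteration (requirements-phase
# effort omitted here for brevity)
total_first_iteration = manual_effort + automation_effort + execution_effort
```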
Effort for regression testing: this consists of two components, manual testing and automated testing. Each can be split into:

• Test case or test script development/enhancement
• Test execution

Total Regression Effort for manual testing = Test case development/Enhancement + Test Execution
Total Regression Effort for Automation = Test script development/Enhancement + Test Execution
Where Test script development = (Total SLOC * Reuse effectiveness) / Productivity
Based on this, the total test project effort will be:
Total test project effort = Total testing effort + Regression testing effort + Planning effort + Management effort + Rework effort
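The final roll-up can be sketched as below (all figures are illustrative; planning and management effort are often taken as percentages of the testing effort, and rework depends on project risk):

```python
def total_test_project_effort(testing_effort, regression_effort,
                              planning_effort, management_effort,
                              rework_effort):
    """Total test project effort = testing + regression + planning
    + management + rework (all in person-hours)."""
    return (testing_effort + regression_effort + planning_effort
            + management_effort + rework_effort)

# Hypothetical person-hour figures for a small test project
total = total_test_project_effort(228, 60, 23, 34, 20)  # → 365
```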

Note:
Rework Effort depends on the risk factor specific to the project.

Conclusion:

At the project level, this estimation technique lets the testing team arrive at a near-accurate estimate and gives management or the client a better understanding of the test cycle. The testing team can trace any variation in the estimate, since all estimation assumptions are documented for each test phase. The technique does not introduce complexity factors into the estimation, and it reduces estimation variation because all parameters relating to the requirements are addressed.
For an organisation, a standard estimation technique can help the testing team improve its existing estimation process. With it, the testing team can confidently defend and justify the effort required for testing. A standard technique also helps build organisational metrics data for testing that can be used for future test project estimation.
I conclude by saying that three factors decide test estimation: experience, manipulation and intelligent planning. We often rely more on negotiation and communication skills than on any proven method.

By: Kamaljeet Sinha
kamaljeet.sinha@tcl-asia.com

Thursday, 21 August 2008

Test Managers – Balance of Opinions

I am sure that most readers of this blog will instantly know what I mean when I say that different Test Managers arrive at different decisions. There can be many reasons for this: some are harder than others, some are nicer than others, some come from one country and some from another, some are permanent, some are supplied and some are contract, some are new to the role, some are new to the organisation.

The reality is that people are different and our experiences drive us to behave in certain ways. To some extent, those who know us well can predict our reactions and actions. When running a department full of Test Managers with a mix of personalities and capabilities, it becomes important to bring some balance or levelling to this. You don't want a particular Test Manager to be sought out because, with them in charge, the project will always go live.

This means that we need to turn some of those decision making activities into more of a science and remove the imbalance that can be introduced by emotion or experience. I can’t really supply you with facts and figures to enable you to draw a decision diagram, because it is not that easy. But I will try and point you in what I think is the right general direction.

I would start by breaking down the department's testing functions into business domains. It is more likely that you can assess what is being released in terms of quality at domain level as a starting point. For instance, when releasing to the web you experience a high degree of success and few calls for support, yet when releasing in Business Intelligence, quality is always an issue and the first run of the month-end batch processes always crashes. This gives an indication of where to start defining some criteria. But there are other areas to be considered as well. It could be that one development agency has a far higher success rate than another, where the volume of defects found is always low, or perhaps when a development agency or department develops in a particular language, the results are always much better.

These different criteria can be used to form some basic rules around final exit criteria, for instance, enabling decisions to be made within a set range and ensuring that all Test Managers reach the same conclusions. Looking at the ratio of defects to scripts and comparing this with some analysis of the application being developed, the development language, the volume of support calls pertaining to the application and the domain of release would provide good statistical evidence for decision-making criteria.
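As a sketch of such a data-driven criterion (the tolerance band, historical ratios and function name are illustrative assumptions, not recommended thresholds):

```python
def within_historical_norm(defects, scripts, historical_ratios, tolerance=0.25):
    """Compare a project's defects-per-script ratio with the historical
    ratios for the same domain; flag projects outside the norm so that
    different Test Managers reach the same conclusion from the same data."""
    ratio = defects / scripts
    baseline = sum(historical_ratios) / len(historical_ratios)
    return abs(ratio - baseline) <= tolerance * baseline

# A web-domain release: 12 defects across 100 scripts, against a domain
# that historically sees about 0.10 defects per script
ok = within_historical_norm(12, 100, [0.09, 0.11, 0.10])
```

A Test Manager can still override the flag, but as argued below, they should then be able to substantiate their view.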

Basically I am suggesting that you use historical data to form basic guidelines that the Test Managers can use. It does not necessarily eradicate the problem, but at least it will enable easy identification of projects that are outside of the norm. The Test Manager must still have the power to state what they think, even if this flies in the face of the statistical evidence, but when doing so they should be able to substantiate their view.

For projects running with inexperienced Test Managers, or those inclined to shy away from awkward decisions, the decision becomes one of science, which in most cases is going to be correct.

Tuesday, 19 August 2008

Empowered Test Managers

The test function plays an important part within an organisation, yet too often testers are asked to work in a manner which is restrictive and fails to give them the authority or accountability that they require or deserve. Strangely, in some instances, even when they are given the authority, they fail to take it.

So let’s look at the Test Completion Report. This is the opportunity for the Test Team to give a full account of what has occurred during the testing and make any recommendations. Why is it then, that we do not include as a standard within every report, a recommendation on whether the item is suitable for release into production. Part of the answer may be that the Test Manager does not feel comfortable stepping away from the black and white of scripts passing and failing and the associated defects would appear to make some uncomfortable. But the Test Manager knows the testing which was due to occur, understands how much has occurred and what level of quality has been achieved. Why then is it felt that this is not a logical and calculated recommendation.

On the flip side, why are Application Support departments not demanding proof of testing and a solid opinion on the level of quality that has been achieved? In most organisations, if a poor quality product is released to production, it is not the members of the project that are impacted by the problems, but those in Application Support. Not only are they going to be receiving calls through to the Service Desk, they are potentially going to be calling out support staff in the middle of the night to recover from the problems. Not only this, but any service levels that the Application Support department are supposed to maintain are potentially being compromised. It is therefore imperative that these two departments work together. It is often the case that Application Support will have more clout within the organisation and will therefore be able to assist in the move to a more authoritative testing department.

Another area where the Test Manager should be given more authority is the exit of Integration in the Small and the entry of Integration in the Large. Two important events occur here. The first is the handover of the development agency's testing assets and proof. A failure to provide this information is a red flag to the Test Manager and should sound all sorts of alarm bells; it is indicative of a lot of problems about to be experienced during test execution. The second is when the code is first delivered into testing and the first few scripts are run. If these start to fail, there is a second indication of problems with quality. Yet so often the Test Manager is not in a position of authority to insist on the production of test assets from development, or indeed to bounce the code back for rectification of defects. If the Test Manager is engaged at the outset of the project, they should be able to insist on the supply of test assets from development. To avoid the problem of poor quality code being delivered, take smaller drops of code and start them sooner, so that an indication of quality is understood, or add a specialist phase of testing prior to Integration in the Large, whose sole purpose is sign-off of the delivered code quality, potentially by smoke/sanity testing the application using a selection of the scripts created.

To summarize, ensure that the opinion of the Test Manager is sought. Don’t let them get away with a statement of scripts run and defects raised, open and closed in the Test Completion Report. Insist that they provide an opinion. Work with Application Support to bring additional credibility to the department. Once this has been achieved you can then start to think about how you ensure that all of the Test Managers in a department apply the same level of reasoning when making recommendations for go live. The subject of another article I think.

Sunday, 17 August 2008

T.I.G.E.R – What this acronym means to us!

Transition Consulting Limited, as a group of companies, has a set of values that we expect all of our employees to demonstrate. These are embodied in the T.I.G.E.R acronym.

T = Truthful
It is imperative to us that all members of the company operate in a truthful manner. We need to know that we can rely on what people are saying and that they will be honest with each other and our clients. This is not always easy, but some wise person once said, “Honesty is the best policy”. Well we believe this and have made it part of our policy.


I = Independent
Testing as a discipline needs to remain independent of other functions in order that it can provide an unbiased view on quality. Lack of independence places a testing department under the same constraints in terms of time and cost and therefore quality can become compromised. We pride ourselves on the fact that Testing is all that we do. We live and breathe the subject and can always be relied upon to act independently.


G = Good Willed
We expect our staff to be good willed because this is a personality trait that we embody as an organisation. As a result we are affiliated with several organisations and as a group contribute charitably each year. We work with some local organisations and some as large as the NSPCC.


E = Energetic
Energy is incredibly important to us. We want our employees to work hard and play hard. We expect them to be passionate about testing and what it involves. We expect them to demonstrate an enthusiasm for their work and not view it as just a job.


R = Reliable
We need to be able to rely on our resources and we expect our clients to rely on us also. Reliability is a cornerstone on which we build, taking care of the basics by ensuring that we can be counted upon to be knowledgeable and dependable, providing value into our organisation and those of our clients.


Not only is the acronym easy to remember, but it is a strong image and one with which all of our employees are happy to associate. From a TCL India perspective, the Tiger is probably even more powerful an image, being so highly revered.