Tuesday, 29 July 2008

Reinventing the Testing Wheel (Object Oriented Testing)

I have been thinking about this one for some time and wondered what other views existed. For years developers have taken a more modular approach to programming, applying techniques such as Object Oriented Design in order to improve efficiency. Why re-invent the wheel each time a certain piece of code is required? Take something like a VAT calculation. Once it has been written the first time, that should be enough: if stored and indexed correctly, each time someone else needs a VAT calculation they can call a pre-written routine from a library. Not only does this mean that the developer does not spend valuable time writing the same code again, but the routine should already have been tested and therefore be reliable, reducing the likelihood of errors.
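To illustrate, the kind of pre-written routine I have in mind might look like this. A minimal Python sketch; the function name is my own, and 17.5% was the UK standard rate at the time of writing:

```python
from decimal import Decimal, ROUND_HALF_UP

# A hypothetical shared-library routine: written and tested once, reused everywhere.
# The 17.5% rate is purely illustrative.
VAT_RATE = Decimal("0.175")

def add_vat(net_amount):
    """Return the gross amount (net plus VAT), rounded to the nearest penny."""
    net = Decimal(str(net_amount))
    gross = net * (1 + VAT_RATE)
    return gross.quantize(Decimal("0.01"), rounding=ROUND_HALF_UP)
```

Once a routine like this sits in the library, nobody needs to re-derive the rounding rules again, and every caller benefits from the testing already done against it.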

This was seen as a major breakthrough in the development world, but one which, to my knowledge, has not been replicated in the testing arena. When we write a manual test script, we write it longhand. So how many times have we written scripts to test VAT calculations? Why are we not creating a library of scripts that are re-usable by all testers? Why do we have to sit and think each time we test a field, working out all the permutations we need to deal with and how to test it positively and negatively? There are hundreds of thousands of testers all over the world repeating the same thought processes, writing the same scripts, duplicating their efforts.

How many times has an input of a person's age been tested? What are all of the parameters? How old before they are at school, how old before they can hold a credit card, bank account, retire? All of these questions have been thought about so often, but we still religiously recreate them each time. So testers are accused of being too slow or not moving with the times, being overly expensive. Well I suppose when you look at something like this, the answer has to be, "Yes".
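To make the reuse concrete: a shared boundary-value list for an age field might look like this. A Python sketch of my own; the thresholds are illustrative, and a real shared library would record the agreed figures once:

```python
# Hypothetical shared boundary values for an "age" input field.
AGE_BOUNDARIES = {
    "school_entry": 5,
    "credit_card": 18,
    "retirement": 65,
}

def boundary_values(threshold):
    """Classic boundary-value analysis: test just below, on and just above the edge."""
    return [threshold - 1, threshold, threshold + 1]

# Every tester who needs to test an age field reuses the same values
# instead of re-deriving them from scratch.
age_test_values = sorted({v for t in AGE_BOUNDARIES.values() for v in boundary_values(t)})
```

The thinking about which ages matter is done once; every subsequent script simply pulls the values from the library.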

So when we are asked what the future may hold for testers, I put on my rose-tinted spectacles and imagine…

I imagine a world where we are sharing our knowledge and expertise, generating a global testing community and capability. One where we are seen as slick operators, contributing massively to technology programmes the world over, with the minimum of input for the maximum return.

How does this happen? If there were a central repository for scripts to which every tester contributed, where we went as a first port of call to locate our script components, or store those components we have to create, we would quickly generate a wealth of stored expertise. A Wikipedia of test scripts – wow! We would need to be able to locate scripts in a fast and efficient manner and the database used to store the script components or objects would need to be referenced in a multitude of ways. But this is not new technology. Developers have been doing a lot of this for years and we can no doubt learn from the mistakes that they made along the way.

I would welcome comment on this. Is it feasible? How could it work? Am I barking mad? If you are a developer and have a story about OOD then share it and let us know if you think this is feasible. Drop me a line and let me know.

Saturday, 19 July 2008

Site Under Construction!

In the current climate of the credit crunch, and with so much positive press around the buoyancy of the internet, more organisations are looking to the World Wide Web to maintain or increase sales. Many are realising that their sites are perhaps not as good as they could be and are investing in updating or redeveloping them. For others, site maintenance is an ongoing activity with a dedicated team.

So with all of this development going on, it would be fair to assume that there must be a lot of testing as well? I would suggest that the answer is no. For some reason, when people request the development of a website, it is believed that they are best placed to ensure that it is fit for purpose.

This is a bad assumption for several reasons:
1. Finding the time to thoroughly test the website is difficult. The people responsible for the new website are normally being asked to do this as one of many tasks.
2. The attention to detail required to check the entire site is high and demands a certain type of individual. The tester has this mindset and is used to applying methodical and meticulous tests, whereas the business person may not be.
3. This may be the first time that the business person has been asked to test anything. Testers test websites on a regular basis, which enables them to make better use of their time, homing in on problem areas.
4. Business resources normally check things positively, making sure that expected situations are dealt with. The tester introduces negative testing: checking that the site can handle abnormal activity, that which deviates from the norm.
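The difference in point 4 is easy to show. Suppose the site validates a quantity field; the validation function below is a hypothetical sketch, not taken from any real site:

```python
def validate_quantity(raw):
    """Accept a whole number between 1 and 99; reject anything else."""
    try:
        qty = int(raw)
    except (TypeError, ValueError):
        return False
    return 1 <= qty <= 99

# Positive tests: the business user confirms expected input works.
assert validate_quantity("1")
assert validate_quantity("99")

# Negative tests: the abnormal input a tester adds as a matter of course.
assert not validate_quantity("0")
assert not validate_quantity("100")
assert not validate_quantity("-5")
assert not validate_quantity("abc")
assert not validate_quantity("")
assert not validate_quantity(None)
```

A business user will typically try the first two cases; it is the remaining six that find the defects.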

So what is happening? Companies are employing development agencies and placing themselves in their care. They are relying on the development agency to ensure the quality of the site. The business are doing their best to make sure that the site is okay and are paying for development when they think it is finished.

In a normal IT project, the business relies on the testers to ensure that the application is to a certain standard before it comes anywhere near them, and this step is not being performed. Three problems result from a lack of in-built quality. The first is that the development agency is paid for the delivery, even though problems may not manifest until later. The second is that the development agency is held in low esteem once it is found to have under-delivered. The third is that the organisation requesting the website finds it has a tool which does not meet its needs and therefore requires further investment to put the problems right.

The involvement of testing as part of the process ensures that the site is performing as was requested. It can be used as a means of safeguarding the payment, ensuring that the development agency have fully delivered prior to releasing funds. It also protects the development agency from losing their customer due to poor satisfaction.

The design is truly the prerogative of the business and only they can say if the look of the site is what they wanted to achieve. Everything else should be placed in the hands of the experts. Remember that better quality means customers are happier to use the site and more likely to find what you have to offer, so an increase in sales should result. I have said in many places that a poor web experience results in lost sales, and that users are highly likely to leave your site for one offering a better experience or easier-to-find items.

Friday, 11 July 2008

Why is Testing Sexy?

I suppose the answer depends on the person and for some not at all. But I will endeavour to give you 12 reasons why I think Testing is “sexy”.

1. The world of testing is governed by process and is incredibly black and white. A test passes or it fails; there is no shade of grey.
2. It is like being part of a big family, where the same individuals are involved and you keep bumping into old friends and faces.
3. You have to keep your wits about you and be prepared to change tack quickly, effectively and efficiently.
4. Testing is the underdog of IT. It is trying to bring to people’s attention the value that it brings to the software development life-cycle.
5. You have to be able to think around the developer: understand what they are trying to achieve and then work out how to get around it.
6. You need to be able to look at a project as a project manager, analyst, designer, architect, developer and tester, giving you a well rounded grasp of IT.
7. There are opportunities to specialise in testing, by technology, by testing type or by process, giving endless opportunities to grow and learn.
8. The work is endless, because finding perfect code from perfect projects is nigh on impossible, keeping us all in employment.
9. We create tangible assets and benefits for those employing our services, leaving them in a stronger position.
10. Whilst sometimes invisible to the end users, we make a massive contribution to their use of IT, without which life would be harder and more frustrating, and the helpdesk would be inundated.
11. We are growing as a discipline, becoming more recognised, with more accreditation available in universities across the world.
12. We get to break stuff on a very regular basis.

I hate the word “sexy” being used to describe a subject, but it seems to be used regularly these days. Do I dream of curvy defects at night? …No! But I am passionate about Testing. I do believe in the value that it brings and the value I can add as part of a project. More and more people are becoming involved and the testing community is growing. Fewer people are falling into Testing by accident; more are making it a career choice.

Thursday, 10 July 2008

Ranking Your Web Success

When talking to a wide group of web owners, it becomes apparent that whilst they are concerned about the quality of their website, they are equally interested in some of the other aspects of their web presence.

Different people have different ways of measuring a website’s success, but there are some which are universally accepted. I have yet to find a site that is not recognised by Google, for instance. Google offers a means of looking at websites in order to determine their importance to the internet community. It does this using a series of complex algorithms, and the results are provided on a scale of 0 to 10, with 0 the lowest and 10 the highest. I would suggest that an average Google PageRank is between 4 and 6. The Alexa Rank looks at the popularity of a website in comparison to all others. A score of under 100,000 is particularly good, and one above 10,000,000 is poor.

A general rule of thumb is that sites with a high Google PageRank will also have a high Alexa Rank, although at times this does not follow precisely. Once you have established your own rankings, you can then start looking at the competition. We all know who our closest competitors are: those we would like to think we are better than, and perhaps those we would aspire to match. From a web perspective, comparing each organisation’s Google and Alexa rankings against your own gives a good indication of your standing in the group.

Use of such competitor analysis helps organisations recognise that work is required, or that they are ahead of the game and need to maintain momentum. Whatever your line of business, the importance of being found quickly on the web is high. More traffic comes to your site and you can then reflect on what is happening to traffic when it arrives.

When checking a website, make sure that you do not forget to consider the simpler aspects. Your rankings are an excellent gauge of how you are performing.

TCL India ensure that information of this nature is included as part of our Web Check results. These are quality indicators, although of a different type. Supply us with your list of competitors and we will also include the results for them in your report.

Friday, 4 July 2008

Reasons for Offshore Failure

It is often the case that Indian resources are accused of failing to deliver what was requested. In my experience, whilst they are not faultless, more often than not the root cause of the problem is that what is required has been poorly specified. In all walks of life, when purchasing we expect the contents of the tin to be that which is described on the label. Why should testing be any different?

The reality is that people have been greedy for the savings that can be achieved from global economies. Regardless of a project’s suitability for offshore work, the project is forced down this path and may fail. The resources employed do their level best to deliver, but the reality is that with poor specifications and requirements, the chance of delivering to a high standard is significantly lowered.

Another area in which mistakes are made is the desire of the client to specify how the offshore operation will work. This may fly in the face of years of experience, but the desire to control and make sure it works is so high that these factors are ignored, having the opposite effect. The offshore capabilities may also be culpable for not being stronger or for refusing to engage in certain situations, but someone once said “the customer is always right”.

Whatever the culture that you are dealing with, be respectful of it. Accept it for its strengths and weaknesses and learn how to work with, through or around them. Recognise that as a nation you are likely to have your own idiosyncrasies and that these will be magnified to other nationalities.

The English reserve, the stiff upper lip, the desire to queue, being over-polite, being rude, thugs, hooligans: the English have been accused of them all, and no doubt there are those amongst us to whom elements remain true in part or in whole. We expect others to deal with our traits, so why should we not be able to deal with theirs?

Beware of getting what you ask for. If you ask someone to take the shortest route to the other side of the mountain, it is technically accurate that this involves tunnelling through the middle. This will take far longer, but it is what was ordered. The expectation may have been that someone would go around or over the top, but that was not what was requested. If your instructions to an offshore capability are specified precisely, the likelihood is that those instructions will be followed to the letter. Making sure that they are what you want takes some self-questioning before commencing.

Tuesday, 1 July 2008

Software Testing Interview Questions

I am not suggesting that these are the only questions that would be asked, but they are some samples for consideration. These are questions that I have used when interviewing others:

1. How many test scripts can you write in a day?
I am looking here for the ability to estimate. Whilst in reality the question is impractical, because the answer depends on so many different factors, I want to see the interviewee come up with some kind of answer. When estimating there is a need to make assumptions, and having a ballpark figure enables someone to provide rough estimates more quickly.

2. How many test scripts can you execute in a day?
This is a repetition of the first question; I normally ask one directly after the other. The more junior resources tend to struggle with the first question and then with the second too. The better resources learn from the experience of the first question and respond in a more positive manner to the second. Now I am looking not only at the ability to estimate, but also at the ability to learn, and getting an indication of the resource’s chances of seniority going forward.

3. Do you see testing as a service or a discipline?
Personally I am quite passionate about this one. I very much see testing as a discipline and a part of the software life-cycle that is as important as analysis, design and development. What I am trying to understand is the background the individual has come from. Consultancy can demand one or the other mindset, and someone coming from either background can adapt, but I would suggest it is easier to move from discipline to service than vice versa.

4. What is the most interesting defect you have found?
This is a passion question. I am looking to see if the individual can recount a particular incident and in what degree of detail. This begins to tell me whether they are a career tester or someone who is doing a job.

5. What are the key components of a test script?
Someone who is raising scripts every day should be able to define what information needs to be recorded. Different organisations have different standards, but there are key aspects to the answer. The requirement should be referenced, or potentially included, showing an understanding that scripts should always be traceable to their point of origin. The script should contain steps, each of which defines the action to be taken, the data to be used and whether the step has passed or failed. This is the most basic of information and fundamental to all testers, so an inability to provide it will probably fail the candidate.
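Those key components can be captured in a simple record structure. This is a sketch of my own; every organisation’s template will differ:

```python
from dataclasses import dataclass, field

@dataclass
class TestStep:
    action: str            # the action to be taken
    data: str              # the data to be used
    result: str = ""       # "pass" or "fail", recorded at execution time

@dataclass
class TestScript:
    script_id: str
    requirement_ref: str   # traceability back to the point of origin
    steps: list = field(default_factory=list)

    def passed(self):
        """A script passes only when it has steps and every step has passed."""
        return bool(self.steps) and all(s.result == "pass" for s in self.steps)
```

A candidate who can name these fields unprompted is almost certainly raising scripts day in, day out.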

6. Can you define Black, White and Glass box testing?
This question is used to understand whether people have a basic grasp of testing jargon, as well as whether they have attended the ISEB Foundation in Software Testing, where Black and White box are covered. The inclusion of Glass box testing throws some people completely; others will think about it and try to define an answer; some will immediately define Glass box as being the same as White box.

Scenario Based Questions

You are the test manager on a project. The code is going to be delivered late and you now need to complete four weeks’ testing in two. How are you going to cope with this?

I am looking for managerial-level thinking and understanding. There are numerous ways of dealing with this: more staff, overtime, weekend working, shift working, risk-based testing, a reduction in deliverables. The question is how many they can come up with, and whether they understand how to achieve them and the associated problems.

Risk Based Testing – How are they going to achieve this? Are the tests already grouped by risk? Are the requirements already grouped by risk, or do they have a risk identified against each?
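If the scripts already carry a risk rating, selecting the reduced set is quick; if not, that rating has to be added under time pressure. A sketch of the selection step, with an invented inventory and ratings:

```python
# Hypothetical script inventory; the tagging is what makes a fast reduction possible.
scripts = [
    {"id": "TS-001", "risk": "high"},
    {"id": "TS-002", "risk": "low"},
    {"id": "TS-003", "risk": "high"},
    {"id": "TS-004", "risk": "medium"},
]

def select_by_risk(inventory, accepted_levels):
    """Keep only the scripts whose risk rating is in the accepted set."""
    return [s["id"] for s in inventory if s["risk"] in accepted_levels]

# Two weeks instead of four: run high- and medium-risk scripts only.
reduced = select_by_risk(scripts, {"high", "medium"})
```

The candidate who can explain both the selection and the prerequisite tagging is thinking at the right level.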

More test effort – Whilst increasing test effort is a simple means of achieving more in the same time frame, there needs to be a realisation that other departments will need to apply similar resourcing: to cover out-of-hours working and to handle increased volumes of defect fixing.

Cost Implication – Does the person discuss the increase in cost that can be associated with their suggested course of action? Are they thinking at project level or something lower?