I have often been asked to estimate the test effort for a variety of test activities in different projects. In general, test estimation is a difficult skill to master, and it sometimes takes several iterations before I am confident about the result – even when I have gathered test metrics on time and effort from previous projects.
One of the test activities that can mess up your test estimates is the cost of fixing defects (from a test perspective this means defect management, retests and regression tests). You will not gain popularity, or gather useful insight, if you ask the project manager or the developers: “How many defects do you expect to put in the product?”
This week I read a new book about testing, and once again I came across estimates of how much it costs to fix defects depending on when they are discovered.
The first estimate I saw, long ago, of the cost of change over time was by Barry Boehm. (I don’t remember where I got the figure below from.)
Later I found the figures below:
McConnell, Steve (2004). Code Complete (2nd ed.). Microsoft Press. p. 29. ISBN 0-7356-1967-0.
And last week I read the following in Anne Mette Hass’ book:
(the graph is a reconstruction of the data in the book – not a copy)
Hass, Anne Mette (2014). Guide to Advanced Software Testing. Second edition. p. xxxii and chapter 3.4. ISBN-13: 978-1-60807-804-2.
When I read these different sources on the cost of fixing defects, I tend to get confused, and they do not really help me with my test estimates for fixing defects.
Some information suggests a linear progression in costs over time (or test phases); other information suggests an exponential progression.
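To make the difference concrete, here is a minimal sketch comparing a linear and an exponential cost model across test phases. It is my own illustration, not figures from Boehm, McConnell or Hass: the phase names, base cost and growth factor are made-up assumptions.

```python
# Illustrative comparison of a linear vs. an exponential defect-fix cost model.
# The phases, base cost, and growth factor below are invented for illustration,
# not taken from any of the cited sources.
phases = ["requirements", "design", "coding", "system test", "production"]
base_cost = 1.0   # assumed relative cost of a fix found in the earliest phase
growth = 2.5      # assumed per-phase multiplier for the exponential model

linear = [base_cost * (i + 1) for i in range(len(phases))]
exponential = [base_cost * growth ** i for i in range(len(phases))]

for phase, lin, exp in zip(phases, linear, exponential):
    print(f"{phase:>12}: linear {lin:5.1f}   exponential {exp:7.1f}")
```

With these assumed numbers, the two models agree closely in the early phases but differ by almost a factor of eight by production – which is exactly why it matters which shape you believe when you estimate.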
It is time to ask some questions:
Do the values take into account whether the product is a legacy product built from scratch, or whether it is built or customized within a well-known framework?
If a project builds the product entirely from scratch, making everything within the project, then I can see that the cost of fixing a defect in a production environment can be very high. Knowledge, tools, documentation etc. can be very difficult for anyone other than the original developer team to use.
Do the values apply to a distributed product or a centrally hosted product?
If, for example, a car manufacturer finds a defect in the car computer and has to recall one million cars to fix it, that will be costly. If Google finds a defect in Gmail, it can be fixed in the centrally hosted environment.
There are of course a lot more questions to be asked!
Here is a very short list of things I think need to be considered when estimating the cost of testing fixed defects:
- Defect severity/impact
- Product properties (centralized or distributed, legacy or known framework)
- Product interfaces
- Skills of the developer team
- Skills of Operations(!)
- Defect, Change and Release processes (and Test process for that matter)
- Required documentation
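One way to use a list like this is as a rough adjustment model: start from a base fix-and-retest cost and multiply in a factor per consideration. The sketch below is my own hypothetical construction; every factor value is an invented placeholder that you would replace with metrics from your own project.

```python
# Hypothetical adjustment model for the per-defect fix-and-retest cost.
# All factor values are invented placeholders, not measured data; the idea
# is only to show how the considerations above could feed an estimate.
base_fix_cost_hours = 4.0  # assumed average effort to fix and retest one defect

factors = {
    "severity": 1.5,      # high-impact defects need broader regression tests
    "distribution": 2.0,  # distributed products are costlier to patch
    "legacy": 1.8,        # legacy code built from scratch, sparse documentation
    "interfaces": 1.2,    # each external interface adds retest scope
    "team_skills": 0.9,   # an experienced team slightly reduces the cost
}

estimate = base_fix_cost_hours
for name, factor in factors.items():
    estimate *= factor

print(f"Adjusted estimate per defect: {estimate:.1f} hours")
```

Even a crude model like this forces the discussion the post is really about: which factors apply to *your* project, and how big they plausibly are.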
In general I have learned that you should start testing as early as possible, and that the cost of fixing defects only gets higher the later you find them (in time or in test phases). I have also learned not to copy values for my estimates from other projects without really considering whether they fit my current project. Does your current project have the same risk profile as the project you would like to compare it to?
I would really like to learn more about this subject so if you have comments or references that could be useful, please send them to me.