It pains me to think of someone spending time to make their estimates more accurate, instead of thinking how to deliver better quality software. I think your managers are misguided.
That said, the best way to ensure your estimates are, on average, a reasonable measure of the effort facing you is to divide the features you are about to develop into very small chunks. That helps you identify risks more specifically and address high-risk areas first, so that if something takes longer than you expected, you haven't left it to the very end.
It sounds like you might be estimating the time it takes you to do manual regression testing, though, rather than the time it takes to test a new feature? If this is the case, ask the programmers at your company for help in thinking how you can automate these regression checks, so that you have time for more important activities such as exploratory testing of new functionality.
Reply #2:
For the most accurate estimates, you have to divide the product into small features, prioritize them, and estimate each one. Then, when you do the planning for the next test round, you can simply sum the estimates for all the features in scope for that run.
Reply #4:
Start recording estimates and actual durations for every project, including past projects if the data is available.
Determine which attributes distinguish projects from each other in your shop (product area, size, complexity, risk, etc, etc) and record these as well.
Over time, you'll end up with a base of data to use for estimating upcoming projects.
And if you must avoid the human element and build an algorithm, your attributes become your input, and the baseline data becomes your output.
(This is not they way I would do it myself, since I believe estimations based on experience are best, but if you must build something to automate estimation this might be the way to do it.)
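The record-and-compare approach above can be sketched in a few lines: store (attributes, actual hours) pairs for past projects and estimate a new project by averaging the actuals of past projects with a matching attribute profile. The attribute names and numbers here are illustrative assumptions, not data from the thread.

```python
# Minimal sketch of metrics-based estimation: average the actual
# durations of past projects that share the same attribute profile.
# Attribute keys ("area", "size") and hours are made-up examples.
history = [
    ({"area": "billing", "size": "small"}, 40),
    ({"area": "billing", "size": "small"}, 55),
    ({"area": "reports", "size": "large"}, 160),
]

def estimate(attrs, records):
    """Average actual hours of past projects matching `attrs`."""
    matches = [hours for rec_attrs, hours in records if rec_attrs == attrs]
    if not matches:
        raise ValueError("no comparable past projects recorded")
    return sum(matches) / len(matches)

print(estimate({"area": "billing", "size": "small"}, history))  # 47.5
```

A real version would likely score partial attribute matches rather than require exact equality, but exact matching keeps the idea visible.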
Reply #5:
Thank you very much for your suggestion! It is not easy here... We had TATA here for an assessment of our test process (they used the TPI methodology), and based on their report, my company decided to improve according to that document. I have worked in software testing since 2002, and previously I was an analyst on the financial system development team. I have the Foundation Level certification and I attended the ISTQB Test Management course (I haven't had time yet to take the exam), and I read a lot about software testing. But the person leading this work (on improving the test process) is someone who arrived 33 months ago to join the team (with no previous experience in software testing), and it isn't very easy to convince him about how to do things. That's why I opened this discussion: to exchange ideas with people with experience in this area. Thanks for all the suggestions. After the estimating subject, I have to improve the way we organize our manual system tests, including implementing product risk analysis, prioritizing requirements, and how to build test cases. And what we have to work from here is a very poor requirements list (never more than half a dozen requirements, even in very extensive or complex projects) and lots of late requests from clients (many of which arrive during UAT testing :-().
Reply #6:
These may help (at least for amusement):
http://strazzere.blogspot.com/2011/01/estimation-guesstimation-and....
http://strazzere.blogspot.com/2010/04/there-are-always-requirements...
Reply #7:
Hi, my first post so go easy on me.
I also looked into estimation as part of our company TPI, here is a 'summary' of my findings:
Test work breakdown; break the expected work down into a series of small tasks, the theory being that estimates of small tasks will be more accurate. This is the method that most people currently use.
Pros:
- This is quick and easy to implement.
- Only requires the test lead's input.
- Could work well with ‘Test Specification’ papers.
Cons:
- This assumes that the breakdown is accurate and predictable.
- A whole series of small margins of error can add up to a large overall margin of error.
- Relies on the test lead's experience and understanding of the tests a requirement will need.
Metrics based; this requires us to track past estimates and actual test effort. Once there is a set of data from previous projects, that 'past information' can be used. This is a more quantitative approach based on prior experience.
Pros:
- Produces accurate estimates.
- Judgment is based upon documented experience.
- Quick to produce estimates once the relevant information has been collated.
Cons:
- What makes a reasonable history of projects?
- Need to collect relevant information to base estimates on e.g. function points, lines of code, number of requirements/use cases.
Worst case / Best case; this method can be used in combination with other estimation techniques. The estimator produces worst-case, most-likely, and best-case estimates for each of the areas being estimated. The spread between best and worst reflects the degree of confidence in the estimate, and the formula weights the estimate accordingly.
Expected time = (Best + (4 x Most likely) + Worst)/6
Pros:
- Same as test work break down.
- Added weighting depending upon confidence.
Cons:
- Relies on the test lead's experience.
- Have to produce 3 estimates.
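The expected-time formula above (the classic three-point, or PERT, formula) is simple enough to sketch directly. The example values are made up for illustration:

```python
# Three-point (PERT) estimate: the most-likely value is weighted 4x,
# and the best/worst cases pull the result toward the wider tail.
def pert_estimate(best, most_likely, worst):
    return (best + 4 * most_likely + worst) / 6

# Example: best 2h, most likely 4h, worst 12h
print(pert_estimate(2, 4, 12))  # 5.0
```

Note how the long worst-case tail (12h) nudges the estimate above the most-likely 4h, which is exactly the confidence weighting the method is after.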
Wideband Delphi; this works on the principle that more heads are better than one. Estimates are produced independently by several members of a team, and a meeting is then set up to discuss them. The estimators discuss why there are differences in the estimates (if any), and the estimates are refined until everyone is in agreement.
Pros:
- Multiple people are more likely to spot any missed/unnecessary tasks.
- Draws on experience from multiple people.
- Could work with the ‘Test Specification’ papers.
Cons:
- Could be quite time intensive; need estimates from multiple people, and a meeting where everyone needs to agree.
- Could be difficult to put together a committee with the relevant experience.
Reply #9:
Hi. In our organization we assume that Integration Testing takes 7% of development effort and System Testing takes 20% of development effort. We also consider five influential factors: Business Risk, Technology, Complexity, Development Team Efficiency, and Test Team Experience.
1. We classify the 5 Influential factors priority as either Major or Minor.
2. Then we classify the Impact of the Factors as High, Medium and Low.
3. We have a scale: if the priority is Major, then High is 8, Medium is 4, and Low is 2. If the priority is Minor, then High is 4, Medium is 2, and Low is 1.
4. Then Risk Factor = (sum of all the factor values) / (sum of the Medium values for each factor's priority).
5. Percentage Effort = 27% * Risk Factor.
6. Testing Hours = Percentage Effort * Development Hours.