Abstract
A number of effort estimation methods have been developed over recent decades. The key question software practitioners ask is “Which method is the best one for me?” The bad news is that there is no single “best” estimation method. The good news is that there are many useful estimation methods; yet, to be useful, an estimation method must be selected with a particular estimation context in mind.
Notes
- 1.
- 2.
- 3. Examples of such conflicting results can be found in Briand et al. (2000), Wieczorek and Ruhe (2002), and Mendes and Kitchenham (2004). The first two studies simply excluded projects with missing values from the data set, whereas the latter study replaced missing values with values sampled randomly from the other projects in the set (thus preserving projects that the first two studies excluded).
- 4.
- 5. The “z” statistic represents the ratio of estimated to actual effort.
- 6. For an example overview of the principles of empirical evaluation, refer to C. Wohlin et al., Experimentation in Software Engineering: An Introduction, Kluwer Academic Publishers, 2000.
- 7. The weight on a criterion should reflect two aspects: (1) the range of the criterion being weighted and (2) the relative importance of the criterion to the decision-maker. For example, when buying a car, the price is usually important. But it would not be important if the prices of the alternative cars under consideration ranged from 15,000€ to 15,100€. In this example, the importance of the price criterion obviously depends on the spread of values on that criterion.
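The interplay of range and importance described in note 7 can be sketched with swing weighting combined with simple additive weighting, a standard MCDA idea: each criterion is scored 0–1 on its actually observed range, and its weight is derived from how much the swing from worst to best on that range matters to the decision-maker. All function names, criteria, and numbers below are illustrative, not taken from the chapter.

```python
def swing_weights(swing_ratings):
    # Normalize swing ratings (importance of moving from worst to best
    # on each criterion, given its actual range) so they sum to 1.
    total = sum(swing_ratings.values())
    return {c: r / total for c, r in swing_ratings.items()}

def additive_value(alternative, best, worst, weights):
    # Value under simple additive weighting, with each criterion scored
    # 0 (worst observed level) .. 1 (best observed level) on its range.
    v = 0.0
    for c, w in weights.items():
        spread = best[c] - worst[c]
        v += w * ((alternative[c] - worst[c]) / spread if spread else 0.0)
    return v

# Criterion ranges observed among the candidate cars; for price,
# "best" is the cheapest, so best < worst numerically.
worst = {"price": 15100, "comfort": 3}
best = {"price": 15000, "comfort": 9}

# Price is important in the abstract, but the actual 100-EUR swing
# matters little next to the comfort swing, so it gets a low rating.
weights = swing_weights({"price": 10, "comfort": 90})

car_a = {"price": 15000, "comfort": 3}   # cheapest, least comfortable
car_b = {"price": 15100, "comfort": 9}   # priciest, most comfortable
print(additive_value(car_a, best, worst, weights))  # 0.1
print(additive_value(car_b, best, worst, weights))  # 0.9
```

With a naively fixed high weight on price, car A would look competitive; rating the swings on the observed ranges makes comfort decisive, which is exactly the point of the note.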
References
Ozernoy VM (1992) Choosing the ‘best’ multiple criteria decision-making method. Inform Syst Oper Res 30(2):159–171
Menzies T, Hihn J (2006) Evidence-based cost estimation for better-quality software. IEEE Softw 23(4):64–66
MacDonell SG, Shepperd MJ (2007) Comparing local and global software effort estimation models – reflections on a systematic review. In: Proceedings of the 1st international symposium on empirical software engineering and measurement, Madrid, Spain, 20–21 Sept 2007. IEEE Computer Society, Washington, DC, pp 401–409
Moløkken-Østvold K, Jørgensen M (2004) Group processes in software effort estimation. Empir Softw Eng 9(4):315–334
Basili VR, Weiss DM (1984) A methodology for collecting valid software engineering data. IEEE Trans Softw Eng SE-10(6):728–738
Ohsugi N, Tsunoda M, Monden A, Matsumoto K (2004) Effort estimation based on collaborative filtering. In: Proceedings of the 5th international conference on product focused software process improvement, Apr 2004, Kansai Science City, Japan. Springer, Berlin, pp 274–286
Mendes E, Kitchenham B (2004) Further comparison of cross-company and within-company effort estimation models for Web applications. In: Proceedings of the 10th international symposium on software metrics, 11–17 Sept 2004. IEEE Computer Society, Chicago, IL, pp 348–357
Wieczorek I, Ruhe M (2002) How valuable is company-specific data compared to multi-company data for software cost estimation? In: Proceedings of the 8th international symposium on software metrics, 4–7 June 2002, Ottawa, Canada. IEEE Computer Society, Washington, DC, pp 237–246
Dolado JJ (1999) Limits to the methods in software cost estimation. In: Proceedings of the 1st international workshop on soft computing applied to software engineering, 12–14 Apr 1999. Limerick University Press, Limerick, pp 63–68
Dolado JJ (2001) On the problem of the software cost function. Inform Softw Technol 43(1):61–72
Grimstad S, Jørgensen M (2006) A framework for the analysis of software cost estimation accuracy. In: Proceedings of the 2006 ACM/IEEE international symposium on empirical software engineering, 21–22 Sept 2006, Rio de Janeiro, Brazil. ACM Press, New York, pp 58–65
Grimstad S, Jørgensen M, Moløkken-Østvold K (2006) Software effort estimation terminology: the tower of Babel. Inform Softw Technol 48(4):302–310
Armstrong JS (2001) Principles of forecasting: a handbook for researchers and practitioners. Kluwer Academic, Dordrecht
Barron FH, Barrett BE (1996) Decision quality using ranked attribute weights. Manage Sci 42(11):1515–1523
Basili VR, Caldiera G, Rombach HD (1994b) The goal question metric approach. In: Marciniak JJ (ed) Encyclopedia of software engineering, vol 1, 2nd edn. Wiley, New York, pp 528–532
Briand LC, Langley T, Wieczorek I (2000) A replicated assessment and comparison of common software cost modeling techniques. In: Proceedings of the 22nd international conference on software engineering, 4–11 June 2000. IEEE Computer Society, Limerick, pp 377–386
Myrtveit I, Stensrud E, Shepperd M (2005) Reliability and validity in comparative studies of software prediction models. IEEE Trans Softw Eng 31(5):380–391
Briand LC, Emam KE, Wieczorek I (1999) Explaining the cost of European space and military projects. In: Proceedings of the 21st international conference on software engineering, Los Angeles, CA, pp 303–312
Conte SD, Dunsmore HE, Shen VY (1986) Software engineering metrics and models. Benjamin-Cummings Publishing Company, Menlo Park, CA
Kitchenham B, Jeffery R, Connaughton C (2007) Misleading metrics and unsound analyses. IEEE Softw 24(2):73–78
Foss T, Stensrud E, Kitchenham B, Myrtveit I (2003) A simulation study of the model evaluation criterion MMRE. IEEE Trans Softw Eng 29(11):985–995
Jørgensen M (2004c) Realism in assessment of effort estimation uncertainty: it matters how you ask. IEEE Trans Softw Eng 30(4):209–217
Kitchenham BA, Pickard LM, MacDonell SG, Shepperd MJ (2001) What accuracy statistics really measure [software estimation]. IEE Proc Softw 148(3):81–85
Pickard L, Kitchenham B, Linkman S (1999) An investigation of analysis techniques for software datasets. In: Proceedings of the 6th international symposium on software metrics, 4–6 Nov 1999, Boca Raton, FL. IEEE Computer Society, Washington, DC, pp 130–142
Lokan C (2005) What should you optimize when building an estimation model? In: Proceedings of the 11th international software metrics symposium, 19–22 September 2005, Como, Italy, p 34
Mendes E, Lokan C (2008) Replicating studies on cross- vs. single-company effort models using the ISBSG database. Empir Softw Eng 13(1):3–37
Mendes E, Lokan C, Harrison R, Triggs C (2005) A replicated comparison of cross-company and within-company effort estimation models using the ISBSG database. In: Proceedings of the 11th IEEE international symposium on software metrics, 19–22 Sept 2005. IEEE Computer Society, Como, pp 27–36
Mendes E, Mosley N, Counsell S (2003a) A replicated assessment of the use of adaptation rules to improve web cost estimation. In: Proceedings of the 2nd international symposium on empirical software engineering, 30 September–1 October, Rome, Italy, p 100
Myrtveit I, Stensrud E (1999) A controlled experiment to assess the benefits of estimating with analogy and regression models. IEEE Trans Softw Eng 25(4):510–525
Putnam LH, Myers W (1992) Measures for excellence: reliable software on time, within budget. Prentice-Hall Professional Technical Reference, Englewood Cliffs, NJ
Putnam LH, Myers W (2003) Five core metrics: the intelligence behind successful software management. Dorset House Publishing Company, New York
Saaty TL (1980) The analytic hierarchy process, planning, priority setting, resource allocation. McGraw-Hill, New York
Shepperd M (2005) Evaluating software project prediction systems. In: Proceedings of the 11th international software metrics symposium, 19–22 September 2005, Como, Italy, p 2
Shepperd M, Kadoda G (2001) Comparing software prediction techniques using simulation. IEEE Trans Softw Eng 27(11):1014–1022
Shepperd M, Cartwright M, Kadoda G (2000) On building prediction systems for software engineers. Empir Softw Eng 5(3):175–182
Srinivasan K, Fisher D (1995) Machine learning approaches to estimating software development effort. IEEE Trans Softw Eng 21(2):126–137
Trendowicz A (2008) Software effort estimation with well-founded causal models. Ph.D. thesis, Technical University Kaiserslautern, Kaiserslautern, Germany
Vincke P (1992) Multicriteria decision-aid. Wiley, New York
Wohlin C, Runeson P, Höst M, Ohlsson MC, Regnell B, Wesslén A (2000) Experimentation in software engineering: an introduction. Kluwer, Norwell, MA
Author information
Authors and Affiliations
Further Reading
-
B.A. Kitchenham, L.M. Pickard, S.G. MacDonell, and M.J. Shepperd (2001), “What accuracy statistics really measure,” IEE Proceedings Software, vol. 148, no. 3, pp. 81–85.
This article investigates two common aggregated measures of estimation performance: the mean magnitude of relative error (MMRE) and the proportion of estimates within 25 % of the actuals (Pred(25)). The authors analyze which aspect of estimation these metrics actually quantify and how they relate to the elementary metric z, computed as z = estimate/actual. Based on these example measures of accuracy, the authors discuss the benefits of using elementary versus aggregated metrics.
-
T. Foss, E. Stensrud, B. Kitchenham, and I. Myrtveit (2003), “A Simulation Study of the Model Evaluation Criterion MMRE,” IEEE Transactions on Software Engineering, vol. 29, no. 11, pp. 985–995.
The article investigates the mean magnitude of relative error (MMRE), the most widely used measure of estimation accuracy. The authors perform a simulation study to investigate the limitations of MMRE. As a result, they cast doubt on the reliability of assessing the performance of effort estimation methods using the MMRE metric. Finally, the authors propose alternative measures of estimation error.
-
J. S. Armstrong (2001), Principles of forecasting: A handbook for researchers and practitioners. Kluwer Academic Publishers, Dordrecht, The Netherlands.
Chapter 12 of the book considers common ways of selecting appropriate forecasting methods and discusses their threats and opportunities. Moreover, best-practice guidelines for selecting forecasting methods in specific contexts are formulated. The author summarizes the recommendations in the form of a decision tree for selecting an appropriate forecasting method depending on several characteristics of the estimation context.
-
J. R. Figueira, S. Greco, and M. Ehrgott (2005), Multiple Criteria Decision Analysis: State of the Art Surveys, vol. 78. Springer-Verlag.
The book presents the state of the art in multicriteria decision analysis (MCDA). It motivates MCDA and introduces its basic concepts. Furthermore, it provides an overview of the basic MCDA approaches, including multiattribute utility theory employed in this book for assessing the suitability of effort estimation methods for specific estimation contexts. Finally, the book presents in detail the most popular MCDA methods.
-
K. P. Yoon and Ch.-L. Hwang (1995), Multiple Attribute Decision Making. An Introduction. Quantitative Applications in the Social Sciences, vol. 104, Sage Publications, Thousand Oaks, California, USA.
The book provides an easy-to-understand introduction to multicriteria decision analysis (MCDA). The authors introduce the basic concepts of MCDA, such as weighting the relative importance of decision criteria. Moreover, they survey the basic types of MCDA approaches and illustrate each by discussing concrete MCDA methods that represent it.
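The accuracy statistics discussed in the first two readings above can be sketched in a few lines: the per-project z ratio (as defined in note 5), the magnitude of relative error (MRE), their mean (MMRE), and Pred(25), the fraction of estimates whose MRE is at most 25 %. The function name and the sample effort figures are illustrative only.

```python
def accuracy_metrics(estimates, actuals):
    # Per-project z ratio (z = estimate / actual, as in note 5).
    z = [e / a for e, a in zip(estimates, actuals)]
    # Per-project magnitude of relative error.
    mre = [abs(e - a) / a for e, a in zip(estimates, actuals)]
    # Aggregated metrics: MMRE and Pred(25).
    mmre = sum(mre) / len(mre)
    pred25 = sum(m <= 0.25 for m in mre) / len(mre)
    return {"z": z, "MMRE": mmre, "Pred(25)": pred25}

# Three hypothetical projects (effort in person-hours).
m = accuracy_metrics(estimates=[110, 80, 260], actuals=[100, 100, 200])
print(m["MMRE"])      # mean of MREs 0.10, 0.20, 0.30, i.e. about 0.20
print(m["Pred(25)"])  # 2 of 3 estimates within 25 % of the actuals
```

As Kitchenham et al. (2001) and Foss et al. (2003) argue, such aggregates can mislead: MMRE summarizes the spread of z rather than accuracy per se, which is one reason to inspect the elementary z values alongside the aggregated figures.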
Rights and permissions
Copyright information
© 2014 Springer International Publishing Switzerland
About this chapter
Cite this chapter
Trendowicz, A., Jeffery, R. (2014). Finding the Most Suitable Estimation Method. In: Software Project Effort Estimation. Springer, Cham. https://doi.org/10.1007/978-3-319-03629-8_7
DOI: https://doi.org/10.1007/978-3-319-03629-8_7
Published:
Publisher Name: Springer, Cham
Print ISBN: 978-3-319-03628-1
Online ISBN: 978-3-319-03629-8
eBook Packages: Computer Science, Computer Science (R0)