
Finding the Most Suitable Estimation Method


Abstract

A number of effort estimation methods have been developed in recent decades. The key question software practitioners ask is “Which method is the best one for me?” The bad news is that there is no single “best” estimation method. The good news is that there are many useful estimation methods; to be useful, however, an estimation method must be selected with a particular estimation context in mind.


Notes

  1. See, for example, Pickard et al. (1999), Shepperd and Kadoda (2001), and Mendes et al. (2003a).

  2. Systematic reviews of the studies comparing cross- versus within-company effort estimation are presented by Mendes et al. (2005), Kitchenham et al. (2007), MacDonell and Shepperd (2007), and Mendes and Lokan (2008).

  3. Examples of such conflicting results can be found in Briand et al. (2000), Wieczorek and Ruhe (2002), and Mendes and Kitchenham (2004). The first two studies simply excluded the projects with missing values from the data set, whereas the third replaced missing values with values sampled randomly from the other projects in the set (thus preserving projects that the first two studies had excluded).

  4. For example, Briand et al. (1999), Shepperd et al. (2000), Shepperd (2005), Kitchenham et al. (2001), Foss et al. (2003), and Myrtveit et al. (2005).

  5. The z statistic represents the ratio of estimated to actual effort, z = estimate/actual.

  6. For an overview of the principles of empirical evaluation, refer to C. Wohlin et al., Experimentation in Software Engineering: An Introduction, Kluwer Academic Publishers, 2000.

  7. The weight on a criterion should reflect two aspects: (1) the range of the criterion being weighted and (2) the relative importance of the criterion to the decision-maker. For example, when buying a car, the price is usually important. But it would not be important if the prices of the alternative cars under consideration ranged from 15,000€ to 15,100€. In this example, the importance of the price criterion obviously depends on the spread of values on that criterion; see the sketch after these notes.
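To make note 7 concrete, here is a minimal Python sketch of a simple additive multiattribute utility model, the kind of MCDA approach referred to in the Further Reading below, using swing-style weights that account for both the importance and the value range of each criterion. All method names, criteria, scores, and weights are hypothetical and chosen purely for illustration; they are not the chapter's actual evaluation data.

```python
# Minimal sketch: additive multiattribute utility with range-sensitive weights.
# All criteria, scores, and weights below are hypothetical.

criteria = ["accuracy", "data_needs", "transparency"]

# Raw scores (higher is better, 0-100) for two hypothetical estimation
# methods, e.g., as elicited from experts for a given estimation context.
scores = {
    "MethodA": {"accuracy": 70, "data_needs": 40, "transparency": 90},
    "MethodB": {"accuracy": 75, "data_needs": 80, "transparency": 50},
}

# Swing-style importance: how much the decision-maker cares about moving
# a criterion from its worst to its best value across the alternatives.
swing = {"accuracy": 100, "data_needs": 60, "transparency": 30}
weights = {c: swing[c] / sum(swing.values()) for c in criteria}

def utility(method):
    """Normalize each score to the observed [worst, best] range across
    alternatives, then aggregate with the swing-derived weights."""
    total = 0.0
    for c in criteria:
        vals = [scores[m][c] for m in scores]
        lo, hi = min(vals), max(vals)
        norm = (scores[method][c] - lo) / (hi - lo) if hi > lo else 0.0
        total += weights[c] * norm
    return total

for m in sorted(scores, key=utility, reverse=True):
    print(f"{m}: utility = {utility(m):.2f}")
```

Note how a criterion whose values barely differ across the alternatives contributes almost nothing to the ranking regardless of its nominal importance, which is exactly the car-price effect described in note 7.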

References

  1. Ozernoy VM (1992) Choosing the ‘best’ multiple criteria decision-making method. Inform Syst Oper Res 30(2):159–171

  2. Menzies T, Hihn J (2006) Evidence-based cost estimation for better-quality software. IEEE Softw 23(4):64–66

  3. MacDonell SG, Shepperd MJ (2007) Comparing local and global software effort estimation models – reflections on a systematic review. In: Proceedings of the 1st international symposium on empirical software engineering and measurement, Madrid, Spain, 20–21 Sept 2007. IEEE Computer Society, Washington, DC, pp 401–409

  4. Moløkken-Østvold K, Jørgensen M (2004) Group processes in software effort estimation. Empir Softw Eng 9(4):315–334

  5. Basili VR, Weiss DM (1984) A methodology for collecting valid software engineering data. IEEE Trans Softw Eng SE-10(6):728–738

  6. Ohsugi N, Tsunoda M, Monden A, Matsumoto K (2004) Effort estimation based on collaborative filtering. In: Proceedings of the 5th international conference on product focused software process improvement, Apr 2004, Kansai Science City, Japan. Springer, Berlin, pp 274–286

  7. Mendes E, Kitchenham B (2004) Further comparison of cross-company and within-company effort estimation models for Web applications. In: Proceedings of the 10th international symposium on software metrics, 11–17 Sept 2004. IEEE Computer Society, Chicago, IL, pp 348–357

  8. Wieczorek I, Ruhe M (2002) How valuable is company-specific data compared to multi-company data for software cost estimation? In: Proceedings of the 8th international symposium on software metrics, 4–7 June 2002, Ottawa, Canada. IEEE Computer Society, Washington, DC, pp 237–246

  9. Dolado JJ (1999) Limits to the methods in software cost estimation. In: Proceedings of the 1st international workshop on soft computing applied to software engineering, 12–14 Apr 1999. Limerick University Press, Limerick, pp 63–68

  10. Dolado JJ (2001) On the problem of the software cost function. Inform Softw Technol 43(1):61–72

  11. Grimstad S, Jørgensen M (2006) A framework for the analysis of software cost estimation accuracy. In: Proceedings of the 2006 ACM/IEEE international symposium on empirical software engineering, 21–22 Sept 2006, Rio de Janeiro, Brazil. ACM Press, New York, pp 58–65

  12. Grimstad S, Jørgensen M, Moløkken-Østvold K (2006) Software effort estimation terminology: the tower of Babel. Inform Softw Technol 48(4):302–310

  13. Armstrong JS (2001) Principles of forecasting: a handbook for researchers and practitioners. Kluwer Academic, Dordrecht

  14. Barron FH, Barrett BE (1996) Decision quality using ranked attribute weights. Manage Sci 42(11):1515–1523

  15. Basili VR, Caldiera G, Rombach HD (1994b) The goal question metric approach. In: Marciniak JJ (ed) Encyclopedia of software engineering, vol 1, 2nd edn. Wiley, New York, pp 528–532

  16. Briand LC, Langley T, Wieczorek I (2000) A replicated assessment and comparison of common software cost modeling techniques. In: Proceedings of the 22nd international conference on software engineering, 4–11 June 2000. IEEE Computer Society, Limerick, pp 377–386

  17. Myrtveit I, Stensrud E, Shepperd M (2005) Reliability and validity in comparative studies of software prediction models. IEEE Trans Softw Eng 31(5):380–391

  18. Briand LC, Emam KE, Wieczorek I (1999) Explaining the cost of European space and military projects. In: Proceedings of the 21st international conference on software engineering, Los Angeles, CA, pp 303–312

  19. Conte SD, Dunsmore HE, Shen VY (1986) Software engineering metrics and models. Benjamin-Cummings Publishing Company, Menlo Park, CA

  20. Kitchenham B, Jeffery R, Connaughton C (2007) Misleading metrics and unsound analyses. IEEE Softw 24(2):73–78

  21. Foss T, Stensrud E, Kitchenham B, Myrtveit I (2003) A simulation study of the model evaluation criterion MMRE. IEEE Trans Softw Eng 29(11):985–995

  22. Jørgensen M (2004c) Realism in assessment of effort estimation uncertainty: it matters how you ask. IEEE Trans Softw Eng 30(4):209–217

  23. Kitchenham BA, Pickard LM, MacDonell SG, Shepperd MJ (2001) What accuracy statistics really measure [software estimation]. IEE Proc Softw 148(3):81–85

  24. Pickard L, Kitchenham B, Linkman S (1999) An investigation of analysis techniques for software datasets. In: Proceedings of the 6th international symposium on software metrics, 4–6 Nov 1999, Boca Raton, FL. IEEE Computer Society, Washington, DC, pp 130–142

  25. Lokan C (2005) What should you optimize when building an estimation model? In: Proceedings of the 11th international software metrics symposium, 19–22 Sept 2005, Como, Italy, p 34

  26. Mendes E, Lokan C (2008) Replicating studies on cross- vs. single-company effort models using the ISBSG database. Empir Softw Eng 13(1):3–37

  27. Mendes E, Lokan C, Harrison R, Triggs C (2005) A replicated comparison of cross-company and within-company effort estimation models using the ISBSG database. In: Proceedings of the 11th IEEE international symposium on software metrics, 19–22 Sept 2005. IEEE Computer Society, Como, pp 27–36

  28. Mendes E, Mosley N, Counsell S (2003a) A replicated assessment of the use of adaptation rules to improve web cost estimation. In: Proceedings of the 2nd international symposium on empirical software engineering, 30 Sept–1 Oct 2003, Rome, Italy, p 100

  29. Myrtveit I, Stensrud E (1999) A controlled experiment to assess the benefits of estimating with analogy and regression models. IEEE Trans Softw Eng 25(4):510–525

  30. Putnam LH, Myers W (1992) Measures for excellence: reliable software on time, within budget. Prentice-Hall Professional Technical Reference, Englewood Cliffs, NJ

  31. Putnam LH, Myers W (2003) Five core metrics: the intelligence behind successful software management. Dorset House Publishing Company, New York

  32. Saaty TL (1980) The analytic hierarchy process: planning, priority setting, resource allocation. McGraw-Hill, New York

  33. Shepperd M (2005) Evaluating software project prediction systems. In: Proceedings of the 11th international software metrics symposium, 19–22 Sept 2005, Como, Italy, p 2

  34. Shepperd M, Kadoda G (2001) Comparing software prediction techniques using simulation. IEEE Trans Softw Eng 27(11):1014–1022

  35. Shepperd M, Cartwright M, Kadoda G (2000) On building prediction systems for software engineers. Empir Softw Eng 5(3):175–182

  36. Srinivasan K, Fisher D (1995) Machine learning approaches to estimating software development effort. IEEE Trans Softw Eng 21(2):126–137

  37. Trendowicz A (2008) Software effort estimation with well-founded causal models. Ph.D. thesis, Technical University Kaiserslautern, Kaiserslautern, Germany

  38. Vincke P (1992) Multicriteria decision-aid. Wiley, New York

  39. Wohlin C, Runeson P, Höst M, Ohlsson MC, Regnell B, Wesslén A (2000) Experimentation in software engineering: an introduction. Kluwer, Norwell, MA


Further Reading

  • B.A. Kitchenham, L.M. Pickard, S.G. MacDonell, and M.J. Shepperd (2001), “What accuracy statistics really measure,” IEE Proceedings Software, vol. 148, no. 3, pp. 81–85.

    This article investigates two common aggregated measures of estimation performance: the mean magnitude of relative error (MMRE) and the percentage of estimates within 25 % of the actuals (Pred.25). The authors analyze exactly which aspect of estimation these metrics quantify and how they relate to the elementary metric z of estimation uncertainty, computed as z = estimate/actual. Based on these example measures of accuracy, the authors discuss the benefits of using elementary versus aggregated metrics; a small computational illustration follows this list.

  • T. Foss, E. Stensrud, B. Kitchenham, and I. Myrtveit (2003), “A Simulation Study of the Model Evaluation Criterion MMRE,” IEEE Transactions on Software Engineering, vol. 29, no. 11, pp. 985–995.

    The article investigates the mean magnitude of relative error (MMRE), the most widely used measure of estimation accuracy. The authors perform a simulation study to investigate the limitations of MMRE. As a result, they cast doubt on the reliability of assessing the performance of effort estimation methods using the MMRE metric. Finally, the authors propose alternative measures of estimation error.

  • J. S. Armstrong (2001), Principles of forecasting: A handbook for researchers and practitioners. Kluwer Academic Publishers, Dordrecht, The Netherlands.

    In Chap. 12, the book considers common ways of selecting appropriate forecasting methods and discusses their threats and opportunities. Moreover, it formulates best-practice guidelines for selecting forecasting methods in specific contexts. The author summarizes the recommendations in the form of a tree graph for selecting an appropriate forecasting method depending on several characteristics of the estimation context.

  • J. R. Figueira, S. Greco, and M. Ehrgott (2005), Multiple Criteria Decision Analysis: State of the Art Surveys. vol. 78. Springer Verlag.

    The book presents the state of the art in multicriteria decision analysis (MCDA). It motivates MCDA and introduces its basic concepts. Furthermore, it provides an overview of the principal MCDA approaches, including the multiattribute utility theory employed in this book for assessing the suitability of effort estimation methods for specific estimation contexts. Finally, the book presents the most popular MCDA methods in detail.

  • K. P. Yoon and Ch.-L. Hwang (1995), Multiple Attribute Decision Making. An Introduction. Quantitative Applications in the Social Sciences, vol. 104, Sage Publications, Thousand Oaks, California, USA.

    The book provides an easy-to-understand introduction to multicriteria decision analysis (MCDA). The authors introduce the basic concepts of MCDA, such as weighting the relative importance of decision criteria. Moreover, they give an overview of the basic types of MCDA approaches and illustrate each with concrete MCDA methods.
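As a companion to the Kitchenham et al. and Foss et al. entries above, here is a minimal Python sketch computing the accuracy metrics they discuss: the elementary z statistic from note 5 (z = estimate/actual), MMRE, and Pred.25. The estimate/actual pairs are invented for illustration.

```python
# Minimal sketch of the accuracy metrics discussed above.
# The (estimate, actual) effort pairs are invented for illustration.

projects = [(120, 100), (300, 500), (80, 75), (500, 450)]

# Elementary metric (note 5): z = estimate / actual; z > 1 means overestimation.
z_values = [est / act for est, act in projects]

# Magnitude of relative error per project: |actual - estimate| / actual.
mre = [abs(act - est) / act for est, act in projects]

# Aggregated metrics: MMRE and Pred.25, the share of projects with MRE <= 0.25.
mmre = sum(mre) / len(mre)
pred_25 = sum(m <= 0.25 for m in mre) / len(mre)

print("z per project:", [round(z, 2) for z in z_values])
print(f"MMRE = {mmre:.2f}, Pred.25 = {pred_25:.0%}")
```

As the Kitchenham et al. article argues, such aggregates in effect summarize the distribution of z, so reporting the elementary z values alongside MMRE and Pred.25 gives a fuller picture of estimation performance.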


Copyright information

© 2014 Springer International Publishing Switzerland

About this chapter

Cite this chapter

Trendowicz, A., Jeffery, R. (2014). Finding the Most Suitable Estimation Method. In: Software Project Effort Estimation. Springer, Cham. https://doi.org/10.1007/978-3-319-03629-8_7


  • DOI: https://doi.org/10.1007/978-3-319-03629-8_7


  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-03628-1

  • Online ISBN: 978-3-319-03629-8

