Abstract
Recommendation systems find and summarize patterns, either in the structure of data or in how we visit that data. Such summarization can be implemented by data mining algorithms. While the rest of this book focuses specifically on recommendation systems in software engineering, this chapter provides a more general tutorial introduction to data mining.
Notes
- 1.
Given a mean value for \(x\) over \(n\) measurements, \(\bar{x} = \frac{1}{n}\sum_{i=1}^{n} x_i\), the total sum of squares is \(\mathit{SS}_{\text{tot}} = \sum_i (x_i - \bar{x})^2\) and the sum of squares of residuals is \(\mathit{SS}_{\text{err}} = \sum_i (x_i - f_i)^2\), where \(f_i\) is the value predicted for \(x_i\). From this, the amount by which \(x\) determines \(f\) is \(R^2 = 1 - \mathit{SS}_{\text{err}}/\mathit{SS}_{\text{tot}}\). (A code sketch of this calculation appears after these notes.)
- 2.
But it should be emphasized that this is more an issue with typical toolkit implementations than a fatal flaw in random forests themselves.
- 3.
Note that Farnstrom et al. use n = 1, but this parameter can be tuned. In the next section, we discuss incremental learners where, at least during the initial learning phase, all the data will be anomalous, since the learner has never seen anything before. For learning from very few examples, n should be greater than one. (A toy sketch of this idea appears after these notes.)
- 4.
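To ground the formula in note 1, here is a minimal Python sketch of the \(R^2\) calculation; the function name r_squared and the toy numbers are our own illustration, not from the chapter.

```python
def r_squared(actual, predicted):
    """Coefficient of determination: R^2 = 1 - SS_err / SS_tot."""
    n = len(actual)
    x_bar = sum(actual) / n                           # mean over n measurements
    ss_tot = sum((x - x_bar) ** 2 for x in actual)    # total sum of squares
    ss_err = sum((x - f) ** 2                         # sum of squares of residuals
                 for x, f in zip(actual, predicted))
    return 1 - ss_err / ss_tot

# A model whose predictions track the actuals closely scores near 1:
print(r_squared([1, 2, 3, 4], [1.1, 1.9, 3.2, 3.8]))  # ~0.98
```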
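Note 3 leaves n abstract; the toy Python sketch below shows one possible reading (our illustration, not Farnstrom et al.'s algorithm): an incremental learner that treats every example as anomalous until it has seen at least n of them, after which a point is anomalous if it falls far from the running mean.

```python
import math

class IncrementalAnomalyDetector:
    """Toy sketch: until n examples have been seen, everything is
    'anomalous' because the learner has no model of 'normal' yet."""

    def __init__(self, n=5, threshold=2.0):
        self.n = n                  # examples needed before trusting the model
        self.threshold = threshold  # standard deviations that count as anomalous
        self.seen = []

    def observe(self, x):
        """Return True if x looks anomalous, then learn from it."""
        if len(self.seen) < self.n:
            anomalous = True        # initial phase: no basis for 'normal' yet
        else:
            mean = sum(self.seen) / len(self.seen)
            var = sum((v - mean) ** 2 for v in self.seen) / len(self.seen)
            std = math.sqrt(var) or 1.0   # avoid a zero-width 'normal' band
            anomalous = abs(x - mean) > self.threshold * std
        self.seen.append(x)
        return anomalous
```

With n = 1 (the setting attributed to Farnstrom et al. in the note), the model is trusted after a single example; a larger n delays judgment until more evidence has accumulated.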
References
Agrawal, R., Imieliński, T., Swami, A.: Mining association rules between sets of items in large databases. In: Proceedings of the ACM SIGMOD International Conference on Management of Data, pp. 207–216 (1993). DOI 10.1145/170035.170072
Aha, D.W., Kibler, D., Albert, M.K.: Instance-based learning algorithms. Mach. Learn. 6(1), 37–66 (1991). DOI 10.1023/A:1022689900470
Boley, D.: Principal direction divisive partitioning. Data Min. Knowl. Discov. 2(4), 325–344 (1998). DOI 10.1023/A:1009740529316
Bradley, P.S., Fayyad, U.M., Reina, C.: Scaling clustering algorithms to large databases. In: Proceedings of the International Conference on Knowledge Discovery and Data Mining, pp. 9–15 (1998)
Brady, A., Menzies, T.: Case-based reasoning vs parametric models for software quality optimization. In: Proceedings of the International Conference on Predictor Models in Software Engineering, pp. 3:1–3:10 (2010). DOI 10.1145/1868328.1868333
Breiman, L., Friedman, J.H., Olshen, R.A., Stone, C.J.: Classification and Regression Trees. Chapman and Hall/CRC, Boca Raton, FL (1984)
Breiman, L.: Random forests. Mach. Learn. 45(1), 5–32 (2001). DOI 10.1023/A:1010933404324
Catlett, J.: Inductive learning from subsets, or, Disposal of excess training data considered harmful. In: Proceedings of the Australian Workshop on Knowledge Acquisition for Knowledge-Based Systems, pp. 53–67 (1991)
Chandola, V., Banerjee, A., Kumar, V.: Anomaly detection: A survey. ACM Comput. Surv. 41, 15:1–15:58 (2009). DOI 10.1145/1541880.1541882
Chang, C.L.: Finding prototypes for nearest neighbor classifiers. IEEE Trans. Comput. 23(11), 1179–1185 (1974). DOI 10.1109/T-C.1974.223827
Corazza, A., Di Martino, S., Ferrucci, F., Gravino, C., Sarro, F., Mendes, E.: How effective is tabu search to configure support vector regression for effort estimation? In: Proceedings of the International Conference on Predictor Models in Software Engineering, pp. 4:1–4:10 (2010). DOI 10.1145/1868328.1868335
Cortes, C., Vapnik, V.: Support-vector networks. Mach. Learn. 20(3), 273–297 (1995). DOI 10.1023/A:1022627411411
Deerwester, S., Dumais, S., Furnas, G., Landauer, T., Harshman, R.: Indexing by latent semantic analysis. J. Am. Soc. Inform. Sci. 41(6), 391–407 (1990). DOI 10.1002/(SICI)1097-4571(199009)41:6<391::AID-ASI1>3.0.CO;2-9
Dejaeger, K., Verbeke, W., Martens, D., Baesens, B.: Data mining techniques for software effort estimation: A comparative study. IEEE Trans. Software Eng. 38, 375–397 (2012). DOI 10.1109/TSE.2011.55
Domingos, P., Pazzani, M.J.: On the optimality of the simple Bayesian classifier under zero-one loss. Mach. Learn. 29(2–3), 103–130 (1997). DOI 10.1023/A:1007413511361
Dougherty, J., Kohavi, R., Sahami, M.: Supervised and unsupervised discretization of continuous features. In: Proceedings of the International Conference on Machine Learning, pp. 194–202 (1995)
Durstenfeld, R.: Algorithm 235: Random permutation. Comm. ACM 7(7), 420 (1964). DOI 10.1145/364520.364540
Ester, M., Kriegel, H.P., Sander, J., Xu, X.: A density-based algorithm for discovering clusters in large spatial databases with noise. In: Proceedings of the International Conference on Knowledge Discovery and Data Mining, pp. 226–231 (1996)
Farnstrom, F., Lewis, J., Elkan, C.: Scalability for clustering algorithms revisited. SIGKDD Explor. Newslett. 2(1), 51–57 (2000). DOI 10.1145/360402.360419
Fayyad, U.M., Irani, K.B.: Multi-interval discretization of continuous-valued attributes for classification learning. In: Proceedings of the International Joint Conference on Artificial Intelligence, pp. 1022–1029 (1993)
Freund, Y., Schapire, R.E.: A decision-theoretic generalization of on-line learning and an application to boosting. J. Comput. Syst. Sci. 55(1), 119–139 (1997). DOI 10.1006/jcss.1997.1504
Gama, J., Pinto, C.: Discretization from data streams: Applications to histograms and data mining. In: Proceedings of the ACM SIGAPP Symposium on Applied Computing, pp. 662–667 (2006). DOI 10.1145/1141277.1141429
Garcia, S., Derrac, J., Cano, J.R., Herrera, F.: Prototype selection for nearest neighbor classification: Taxonomy and empirical study. IEEE Trans. Pattern Anal. Mach. Intell. 34(3), 417–435 (2012). DOI 10.1109/TPAMI.2011.142
Gupta, C., Grossman, R.: GenIc: A single pass generalized incremental algorithm for clustering. In: Proceedings of the SIAM International Conference on Data Mining, pp. 147–153 (2004)
Hall, M.A., Holmes, G.: Benchmarking attribute selection techniques for discrete class data mining. IEEE Trans. Knowl. Data Eng. 15(6), 1437–1447 (2003). DOI 10.1109/TKDE.2003.1245283
Hand, D.J.: Classifier technology and the illusion of progress. Stat. Sci. 21(1), 1–14 (2006). DOI 10.1214/088342306000000060
Hart, P.: The condensed nearest neighbor rule. IEEE Trans. Inform. Theory 14(3), 515–516 (1968). DOI 10.1109/TIT.1968.1054155
Hedges, L.V., Olkin, I.: Nonparametric estimators of effect size in meta-analysis. Psychol. Bull. 96(3), 573–580 (1984)
Jain, A.K.: Data clustering: 50 years beyond K-means. Pattern Recogn. Lett. 31(8), 651–666 (2010). DOI 10.1016/j.patrec.2009.09.011
Jain, A.K., Murty, M.N., Flynn, P.J.: Data clustering: A review. ACM Comput. Surv. 31(3), 264–323 (1999). DOI 10.1145/331499.331504
Kampenes, V.B., Dybå, T., Hannay, J.E., Sjøberg, D.I.K.: A systematic review of effect size in software engineering experiments. Inform. Software Tech. 49(11–12), 1073–1086 (2007). DOI 10.1016/j.infsof.2007.02.015
Knuth, D.E.: The Art of Computer Programming, vol. 2: Seminumerical Algorithms, 3rd edn. Addison-Wesley, Boston, MA (1998)
Kocaguneli, E., Menzies, T., Bener, A., Keung, J.: Exploiting the essential assumptions of analogy-based effort estimation. IEEE Trans. Software Eng. 38(2), 425–438 (2012a). DOI 10.1109/TSE.2011.27
Kocaguneli, E., Menzies, T., Keung, J.: On the value of ensemble effort estimation. IEEE Trans. Software Eng. 38(6), 1403–1416 (2012b). DOI 10.1109/TSE.2011.111
Kocaguneli, E., Menzies, T., Keung, J., Cok, D., Madachy, R.: Active learning and effort estimation: Finding the essential content of software effort estimation data. IEEE Trans. Software Eng. 39(8), 1040–1053 (2013). DOI 10.1109/TSE.2012.88
Kohavi, R.: Scaling up the accuracy of naive-Bayes classifiers: A decision-tree hybrid. In: Proceedings of the International Conference on Knowledge Discovery and Data Mining, pp. 202–207 (1996)
Kohavi, R., John, G.H.: Wrappers for feature subset selection. Artif. Intell. 97(1–2), 273–324 (1997). DOI 10.1016/S0004-3702(97)00043-X
Levina, E., Bickel, P.J.: Maximum likelihood estimation of intrinsic dimension. In: Saul, L.K., Weiss, Y., Bottou, L. (eds.) Advances in Neural Information Processing Systems 17, pp. 777–784. MIT Press, Cambridge, MA (2005)
McCallum, A., Nigam, K., Ungar, L.H.: Efficient clustering of high-dimensional datasets with application to reference matching. In: Proceedings of the ACM SIGKDD Conference on Knowledge Discovery and Data Mining, pp. 169–178 (2000). DOI 10.1145/347090.347123
Menzies, T., Butcher, A., Cok, D., Marcus, A., Layman, L., Shull, F., Turhan, B., Zimmermann, T.: Local vs. global lessons for defect prediction and effort estimation. IEEE Trans. Software Eng. 39(6), 822–834 (2013). DOI 10.1109/TSE.2012.83
Menzies, T., Turhan, B., Bener, A., Gay, G., Cukic, B., Jiang, Y.: Implications of ceiling effects in defect predictors. In: Proceedings of the International Workshop on Predictor Models in Software Engineering (2008). DOI 10.1145/1370788.1370801
Minku, L.L., Yao, X.: DDD: A new ensemble approach for dealing with concept drift. IEEE Trans. Knowl. Data Eng. 24(4), 619–633 (2012). DOI 10.1109/TKDE.2011.58
Mittas, N., Angelis, L.: Ranking and clustering software cost estimation models through a multiple comparisons algorithm. IEEE Trans. Software Eng. 39(4), 537–551 (2013). DOI 10.1109/TSE.2012.45
Nagappan, N., Ball, T., Zeller, A.: Mining metrics to predict component failures. In: Proceedings of the ACM/IEEE International Conference on Software Engineering, pp. 452–461 (2006). DOI 10.1145/1134285.1134349
Pearson, K.: I. Mathematical contributions to the theory of evolution—VII. On the correlation of characters not quantitatively measurable. Phil. Trans. Roy. Soc. Lond. Ser. A 195, 1–47 & 405 (1900)
Pearson, K.: LIII. On lines and planes of closest fit to systems of points in space. Phil. Mag. 2(11), 559–572 (1901). DOI 10.1080/14786440109462720
Peters, F., Menzies, T., Gong, L., Zhang, H.: Balancing privacy and utility in cross-company defect prediction. IEEE Trans. Software Eng. 39(8), 1054–1068 (2013). DOI 10.1109/TSE.2013.6
Platt, J.C.: FastMap, MetricMap, and Landmark MDS are all Nyström algorithms. In: Proceedings of the International Workshop on Artificial Intelligence and Statistics, pp. 261–268 (2005)
Porter, M.F.: An algorithm for suffix stripping. Program Electron. Libr. Inform. Syst. 14(3), 130–137 (1980). DOI 10.1108/eb046814
Quinlan, J.R.: C4.5: Programs for Machine Learning. Morgan Kaufmann, San Francisco (1993)
Russell, S.J., Norvig, P.: Artificial Intelligence: A Modern Approach, 2nd edn. Prentice Hall, Englewood Cliffs, NJ (2003)
Russell, S.J., Norvig, P.: Artificial Intelligence: A Modern Approach, 3rd edn. Prentice Hall, Englewood Cliffs, NJ (2009)
Scott, A.J., Knott, M.: A cluster analysis method for grouping means in the analysis of variance. Biometrics 30(3), 507–512 (1974)
Sculley, D.: Web-scale k-means clustering. In: Proceedings of the International Conference on the World Wide Web, pp. 1177–1178 (2010). DOI 10.1145/1772690.1772862
Shannon, C.E.: A mathematical theory of communication. Bell Syst. Tech. J. 27(3), 379–423 (1948a)
Shannon, C.E.: A mathematical theory of communication. Bell Syst. Tech. J. 27(4), 623–656 (1948b)
Witten, I.H., Frank, E., Hall, M.A.: Data Mining: Practical Machine Learning Tools and Techniques, 3rd edn. Morgan Kaufmann, San Francisco, CA (2011)
Yang, Y., Webb, G.I.: Discretization for naive-Bayes learning: Managing discretization bias and variance. Mach. Learn. 74(1), 39–74 (2009). DOI 10.1007/s10994-008-5083-5