
Hybridization of feature selection and feature weighting for high dimensional data

Published in: Applied Intelligence

Abstract

The classification of high dimensional data is a challenging problem due to the large number of redundant and irrelevant features present. These unwanted features degrade accuracy and increase the computational complexity of machine learning algorithms. In this paper, we propose a hybrid method that integrates the complementary strengths of feature selection and feature weighting to improve the classification of high dimensional data with the Nearest Neighbor classifier. Specifically, we suggest four strategies that combine filter and wrapper methods of feature selection and feature weighting. Experiments are performed on 12 high dimensional datasets, and the outcomes are validated with the Friedman and Holm statistical tests. The Extended Adjusted Ratio of Ratios is used to identify the best method with respect to accuracy, feature selection, and runtime. The results show that two of the proposed strategies outperform other well-known methods in accuracy and feature reduction. The hybrid feature selection-feature weighting wrapper method is the best in accuracy, while the hybrid feature selection filter-feature weighting wrapper method is the most suitable for reducing features and runtime. These promising outcomes validate the importance of hybridizing feature selection and feature weighting when dealing with high dimensional data.
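The core idea described above, using a filter-style feature score to both discard weak features and weight the surviving ones inside a Nearest Neighbor distance, can be illustrated with a minimal sketch. This is not the paper's actual algorithm: the `class_separation_score` function below is an invented stand-in for the filter criteria studied in the paper, and the thresholding rule is an arbitrary illustrative choice.

```python
# Hedged sketch of hybrid feature selection + feature weighting for 1-NN.
# The score function and threshold are illustrative assumptions, not the
# method proposed in the paper.

def class_separation_score(X, y, j):
    # Toy filter score for feature j: absolute difference of per-class means
    # (a crude stand-in for filter criteria such as ReliefF).
    vals0 = [x[j] for x, label in zip(X, y) if label == 0]
    vals1 = [x[j] for x, label in zip(X, y) if label == 1]
    return abs(sum(vals0) / len(vals0) - sum(vals1) / len(vals1))

def weighted_1nn_predict(X_train, y_train, x, weights):
    # 1-NN using a feature-weighted squared Euclidean distance; a zero
    # weight effectively removes (deselects) that feature.
    def dist(a, b):
        return sum(w * (ai - bi) ** 2 for w, ai, bi in zip(weights, a, b))
    best = min(range(len(X_train)), key=lambda i: dist(X_train[i], x))
    return y_train[best]

# Toy data: feature 0 separates the classes, feature 1 is noise.
X = [[0.1, 5.0], [0.2, 1.0], [0.9, 4.8], [1.0, 1.2]]
y = [0, 0, 1, 1]

# Filter step: score every feature, zero out weak ones (selection), and use
# the remaining scores as distance weights (weighting).
scores = [class_separation_score(X, y, j) for j in range(2)]
threshold = 0.5 * max(scores)
weights = [s if s >= threshold else 0.0 for s in scores]

prediction = weighted_1nn_predict(X, y, [0.85, 3.0], weights)  # → 1
```

In a wrapper variant of this scheme, the weights would instead be tuned by an optimizer (e.g. an evolutionary algorithm) that evaluates candidate weight vectors by the classifier's accuracy, rather than by a fixed filter score.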




Acknowledgements

The authors would like to thank the anonymous reviewers for their valuable comments and suggestions.

Author information


Corresponding author

Correspondence to Birmohan Singh.

Ethics declarations

Conflict of interest

The authors declare that they have no conflict of interest.

Electronic supplementary material

Electronic supplementary material is available with the online version of this article (TEX 3.03 KB).


About this article


Cite this article

Singh, D., Singh, B. Hybridization of feature selection and feature weighting for high dimensional data. Appl Intell 49, 1580–1596 (2019). https://doi.org/10.1007/s10489-018-1348-2

