Minimizing data consumption with sequential online feature selection

  • Original Article
International Journal of Machine Learning and Cybernetics

Abstract

In most real-world information processing problems, data is not a free resource. Its acquisition is often expensive and time-consuming. We investigate how such cost factors can be included in supervised classification tasks by casting classification as a sequential decision process and making it accessible to reinforcement learning. Depending on the previously selected features and the internal belief of the classifier, the next feature is chosen by a sequential online feature selection method that learns which features are most informative at each time step. Experiments on toy datasets and a handwritten digit classification task show a significant reduction in the data required for correct classification, while a medical diabetes prediction task illustrates variable feature cost minimization as a further property of our algorithm.
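
The abstract compresses the entire control flow into two sentences; the following minimal, self-contained sketch makes it concrete. The Gaussian naive-Bayes belief and the confidence-threshold stopping rule are illustrative stand-ins for the paper's classifier and learned RL policy, and none of the names below come from the paper.

```python
import numpy as np

class NaiveBayesBelief:
    """Toy Gaussian naive-Bayes belief over class labels, updated one
    feature at a time; a stand-in for the paper's classifier belief."""

    def __init__(self, means, stds, priors):
        self.means, self.stds, self.priors = means, stds, priors

    def posterior(self, acquired):
        """Class posterior given the feature values observed so far."""
        log_p = np.log(self.priors).copy()
        for j, value in acquired.items():
            log_p += (-0.5 * ((value - self.means[:, j]) / self.stds[:, j]) ** 2
                      - np.log(self.stds[:, j]))
        p = np.exp(log_p - log_p.max())
        return p / p.sum()

def confidence_policy(belief, seen, n_features, threshold=0.9):
    """Placeholder policy: stop once the belief is confident, else take the
    lowest-indexed unseen feature (the paper learns this choice with RL)."""
    remaining = [j for j in range(n_features) if j not in seen]
    if belief.max() >= threshold or not remaining:
        return None                        # terminal "classify" action
    return remaining[0]

def run_episode(x, model, policy, feature_costs):
    """One sequential episode: acquire features one at a time, then classify."""
    acquired, total_cost = {}, 0.0
    belief = model.posterior(acquired)     # prior belief, no features yet
    while True:
        a = policy(belief, set(acquired), len(x))
        if a is None:
            return int(np.argmax(belief)), total_cost
        acquired[a] = x[a]                 # observe (and pay for) feature a
        total_cost += feature_costs[a]
        belief = model.posterior(acquired)

model = NaiveBayesBelief(means=np.array([[0.0, 0.0], [2.0, 2.0]]),
                         stds=np.ones((2, 2)), priors=np.array([0.5, 0.5]))
label, cost = run_episode(np.array([2.1, 1.8]), model, confidence_policy,
                          feature_costs=np.array([1.0, 3.0]))
print(label, cost)   # classifies after paying for only some of the features
```

In the paper, the placeholder stopping rule is replaced by a policy learned with reinforcement learning over the classifier's belief and the already-acquired features, so the feature chosen at each step is the one expected to be most informative.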



Notes

  1. e.g., Gartner’s survey at http://www.gartner.com/it/page.jsp?id=1460213.

  2. A partially observable MDP (POMDP) is an MDP with limited access to its states: the agent does not receive the full state but only an incomplete observation derived from the current state. A schematic type illustrating this distinction follows these notes.

  3. These costs are rough estimates of the time, in minutes, needed to acquire each feature from a real patient. The estimates are based on oral communication with a local GP. An illustrative cost-sensitive reward sketch follows these notes.

  4. With the exception of the 5 rfa experiment, which has only 8 features in total. All of them carry information, and an optimal static FS method would have to choose all 8.
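
For concreteness regarding note 2, here is a schematic type for the MDP/POMDP distinction; the field names are illustrative and not taken from the paper. The only structural difference from an MDP is the observation function standing between the agent and the state.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class POMDP:
    """Schematic POMDP: identical to an MDP except that the agent
    observes o = observe(s) rather than the state s itself."""
    transition: Callable[[object, object], object]  # (s, a) -> s'
    reward: Callable[[object, object], float]       # (s, a) -> r
    observe: Callable[[object], object]             # s -> o, a partial view of s
```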
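
Regarding note 3, one plausible way to fold per-feature acquisition costs into an RL reward is sketched below. The cost values and the cost weighting are invented for illustration; the note states only that the paper's costs are rough per-feature time estimates in minutes.

```python
# Hypothetical per-feature acquisition costs in minutes (invented values,
# in the spirit of note 3; the paper's actual numbers are not reproduced).
feature_costs = {
    "age": 0.1,                  # read from the patient file
    "blood_pressure": 2.0,       # quick bedside measurement
    "glucose_tolerance": 120.0,  # lab test with a long wait
}

def step_reward(acquired_feature=None, correct=None, cost_weight=0.01):
    """Cost-sensitive reward: each acquisition is penalised in proportion
    to its cost; the terminal classify action is rewarded iff correct."""
    if acquired_feature is not None:
        return -cost_weight * feature_costs[acquired_feature]
    return 1.0 if correct else -1.0
```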


Author information

Correspondence to Thomas Rückstieß.

Cite this article

Rückstieß, T., Osendorfer, C. & van der Smagt, P. Minimizing data consumption with sequential online feature selection. Int. J. Mach. Learn. & Cyber. 4, 235–243 (2013). https://doi.org/10.1007/s13042-012-0092-x