Estimation of the entropy based on its polynomial representation

Martin Vinck, Francesco P. Battaglia, Vladimir B. Balakirsky, A. J. Han Vinck, and Cyriel M. A. Pennartz
Phys. Rev. E 85, 051139 – Published 29 May 2012

Abstract

Estimating entropy from empirical samples of finite size is of central importance for information theory as well as for the analysis of complex statistical systems. Yet this delicate task is marred by intrinsic statistical bias. Here we decompose the entropy function into a polynomial approximation function and a remainder function. The approximation function is based on a Taylor expansion of the logarithm. Given n observations, we derive an unbiased, linear estimator of the first n power-series terms based on counting sets of k coincidences. For the remainder function we use nonlinear Bayesian estimation with the nearly flat prior distribution on the entropy developed by Nemenman, Shafee, and Bialek. Our simulations show that the combined entropy estimator has reduced bias in comparison with other available estimators.
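The linear part of the construction can be made concrete from the abstract alone. Expanding the logarithm around 1, ln p = -Σ_{k≥1} (1 - p)^k / k, turns the entropy H = -Σ_i p_i ln p_i into H = Σ_{k≥1} (1/k) Σ_i p_i (1 - p_i)^k, so each series term is a finite linear combination of the power sums F_k = Σ_i p_i^k. Each F_k up to order n admits a classical unbiased estimator: the number of k-element subsets of the sample that coincide on the same symbol, divided by C(n, k). The Python sketch below illustrates this coincidence-counting step; it is our own reading of the abstract, not the authors' code, the function names are ours, and the paper's Bayesian treatment of the remainder (the NSB estimator) is omitted.

    from math import comb
    from collections import Counter

    def unbiased_power_sums(samples, kmax):
        # Unbiased estimates of F_k = sum_i p_i^k for k = 1..kmax.
        # A k-subset of the n samples coincides on symbol i with
        # probability p_i^k, so E[sum_i C(n_i, k)] = C(n, k) * F_k,
        # where n_i is the observed count of symbol i.
        n = len(samples)
        counts = Counter(samples)
        return [sum(comb(c, k) for c in counts.values()) / comb(n, k)
                for k in range(1, kmax + 1)]

    def taylor_entropy_part(samples, m):
        # Plug the unbiased power-sum estimates into the first m Taylor
        # terms of H (in nats). Binomial expansion gives
        # sum_i p_i (1 - p_i)^k = sum_{j=0}^{k} (-1)^j C(k, j) F_{j+1},
        # so the whole expression stays linear (hence unbiased) in the
        # F-estimates. Requires m + 1 <= len(samples).
        F = unbiased_power_sums(samples, m + 1)  # F[j] estimates F_{j+1}
        return sum(
            sum((-1) ** j * comb(k, j) * F[j] for j in range(k + 1)) / k
            for k in range(1, m + 1))

For example, taylor_entropy_part([0, 1, 1, 2, 0, 1, 2, 0], 4) estimates the first four series terms. The tail of the series, which converges slowly when some p_i are small, is exactly the remainder function that the paper estimates with the Nemenman-Shafee-Bialek prior.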

Received 19 March 2012

DOI: https://doi.org/10.1103/PhysRevE.85.051139

©2012 American Physical Society

Authors & Affiliations

Martin Vinck1, Francesco P. Battaglia1, Vladimir B. Balakirsky2, A. J. Han Vinck2, and Cyriel M. A. Pennartz1

  • 1Cognitive and Systems Neuroscience Group, Center for Neuroscience, University of Amsterdam, Amsterdam, The Netherlands
  • 2Institut für Experimentelle Mathematik, Essen, Germany

Issue: Vol. 85, Iss. 5 (May 2012)
