Abstract
This erratum reports and corrects several errors in Gershman and Niv (2012), Learning & Behavior, 40, 255–268. In particular, the particle filter and several simulations were implemented incorrectly. A corrected particle filter model and new simulations are reported.
This erratum reports corrections to errors in Gershman and Niv (2012), which described simulations of a computational model of classical conditioning. The major error in the manuscript was an incorrect implementation of importance weighting in the particle filter. In Eq. 5 of the Supplemental Materials, the approximate posterior should be a weighted sum of delta functions defined at the particles:
where the particles are sampled from the Chinese restaurant process prior at each time step, and the importance weight is given by:
and then normalized by the sum of all weights (so that the weights add up to 1). We have fixed this error, so that the model is consistent with the implementation described in Gershman, Blei, and Niv (2010). The code implementing the corrected model is available at:
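The corrected scheme can be sketched in a few lines: particles are proposed from the Chinese restaurant process prior, each particle's unnormalized weight is the likelihood of the current observation under that particle, and the weights are normalized to sum to 1, yielding the weighted sum-of-delta-functions posterior approximation. This is a minimal illustration of that standard prior-proposal importance weighting, not the released code; `crp_sample`, `reweight`, and the toy likelihood are our own hypothetical names.

```python
import numpy as np

def crp_sample(counts, alpha, rng):
    """Sample a latent-cause assignment from the Chinese restaurant process
    prior: an existing cause k is chosen with probability proportional to
    counts[k], and a new cause with probability proportional to alpha."""
    probs = np.append(counts, alpha).astype(float)
    probs /= probs.sum()
    return rng.choice(len(probs), p=probs)

def reweight(particles, likelihood, observation):
    """Because particles are proposed from the prior, each particle's
    unnormalized importance weight is just the likelihood of the current
    observation under that particle; normalizing the weights to sum to 1
    gives the weights on the delta functions in the posterior
    approximation."""
    w = np.array([likelihood(observation, p) for p in particles], dtype=float)
    return w / w.sum()
```

For example, with two particles and a toy likelihood that favors particles matching the observation, `reweight([0, 1], lambda obs, p: 0.5 if p == obs else 0.25, 1)` returns normalized weights that sum to 1 and place more mass on the matching particle.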
There were three other errors in the code, all relating to how the model was applied to particular experimental phenomena. In the first two cases, the simulation did not match the experimental procedure (corrected figures appear below). One was in the simulation of conditioning with imperfect predictors (Fig. 1). Fixing this error qualitatively changed the results, so that our original conclusions no longer hold. The other was in the simulation of extinction of conditioned inhibition (Fig. 2). As in the original simulation, the highest response was produced in the X+ condition, but, unlike in the original simulation, the control condition produced higher responding than the X- condition. We also note that the text on p. 262 was confusing in relation to Fig. 2: it mentions AX+/X-, whereas the caption refers to AX-/A+ (which is what we simulated).
The third error pertains to Fig. 9 (simulation of the Hall-Pearce effect). The text describes a reward magnitude manipulation, but the model cannot simulate scalar changes in reward magnitude because it treats rewards as binary (present or absent). The original implementation used a non-binary reward value, which is conceptually incorrect.
References
Gershman, S.J., Blei, D.M., & Niv, Y. (2010). Context, learning, and extinction. Psychological Review, 117, 197–209.
Gershman, S.J., & Niv, Y. (2012). Exploring a latent cause model of classical conditioning. Learning & Behavior, 40, 255–268.
Acknowledgements
We are very grateful to Sashank Pisupati for assistance with code checking, and to Nathaniel Daw and Amy Cochran for pointing out the original errors.
Cite this article
Gershman, S., Niv, Y. Erratum to Gershman and Niv (2012), Learning & Behavior, 40, 255–268. Learn Behav 48, 453–455 (2020). https://doi.org/10.3758/s13420-020-00426-5