What Do Hebbian Learners Learn? Reduction Axioms for Iterated Hebbian Learning

Authors

  • Caleb Schultz Kisby, Department of Computer Science, Indiana University
  • Saúl A. Blanco, Department of Computer Science, Indiana University
  • Lawrence S. Moss, Department of Mathematics, Indiana University

DOI

https://doi.org/10.1609/aaai.v38i13.29409

Keywords

ML: Neuro-Symbolic Learning, PEAI: Philosophical Foundations of AI, KRR: Reasoning with Beliefs, ML: Transparent, Interpretable, Explainable ML, KRR: Nonmonotonic Reasoning

Abstract

This paper is a contribution to neural network semantics, a foundational framework for neuro-symbolic AI. The key insight of this theory is that logical operators can be mapped to operators on neural network states. In this paper, we do this for a neural network learning operator. We map a dynamic operator [φ] to iterated Hebbian learning, a simple learning policy that updates a neural network by repeatedly applying Hebb's learning rule until the net reaches a fixed point. Our main result is that we can "translate away" [φ]-formulas via reduction axioms. This means that completeness for the logic of iterated Hebbian learning follows from completeness of the base logic. These reduction axioms also provide (1) a human-interpretable description of iterated Hebbian learning as a kind of plausibility upgrade, and (2) an approach to building neural networks with guarantees on what they can learn.
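For readers unfamiliar with the learning policy the abstract refers to, here is a minimal sketch of iterated Hebbian learning in Python. It is not the paper's formal model: it assumes a net represented as a weight matrix and a clipped variant of Hebb's rule so that the iteration is guaranteed to reach a fixed point. The function name and the parameters eta (learning rate) and w_max (weight cap) are illustrative assumptions, not definitions from the paper.

```python
import numpy as np

def hebbian_fixed_point(weights, pattern, eta=0.5, w_max=1.0,
                        tol=1e-8, max_iter=10_000):
    """Repeatedly apply Hebb's rule until the weights stop changing.

    Hebb's rule strengthens the connection between each pair of
    co-active neurons: dw_ij = eta * x_i * x_j.  A plain Hebbian
    update grows without bound, so this sketch clips weights at
    w_max to force a fixed point; the cap, eta, and tol are
    illustrative assumptions, not parameters from the paper.
    """
    w = weights.astype(float)
    x = pattern.astype(float)
    coact = np.outer(x, x)  # co-activation of each neuron pair
    for _ in range(max_iter):
        w_next = np.clip(w + eta * coact, -w_max, w_max)
        if np.max(np.abs(w_next - w)) < tol:  # fixed point reached
            return w_next
        w = w_next
    return w

# Example: three neurons, the first two co-active on the input pattern.
w0 = np.zeros((3, 3))
x = np.array([1.0, 1.0, 0.0])
print(hebbian_fixed_point(w0, x))  # weights between co-active neurons saturate
```

In the toy run above, only the connections among co-active neurons are strengthened, and the update stabilizes once those weights saturate at the cap; the clipping is just one way to make the sketch terminate, whereas the paper establishes the existence of the fixed point within its own net model.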

Published

2024-03-24

How to Cite

Schultz Kisby, C., Blanco, S. A., & Moss, L. S. (2024). What Do Hebbian Learners Learn? Reduction Axioms for Iterated Hebbian Learning. Proceedings of the AAAI Conference on Artificial Intelligence, 38(13), 14894-14901. https://doi.org/10.1609/aaai.v38i13.29409

Section

AAAI Technical Track on Machine Learning IV