Information-theoretic limits on the memory capacity of neuronal and synaptic networks

Surya Ganguli
Assistant Professor, Stanford University
Given on: November 1, 2012

Abstract

Critical cognitive phenomena, such as planning and decision making, rely on the ability of the brain to hold information in working memory. Many proposals exist for the maintenance of such memories in persistent activity arising from stable fixed-point attractors in the dynamics of recurrent neural networks. However, such fixed points are incapable of storing temporal sequences of recent events. An alternative, and less explored, paradigm is the storage of arbitrary temporal input sequences in the transient responses of a neural network. We combine information theory with dynamical systems theory to develop new results governing fundamental limits on the duration of such transiently encoded memory traces, and their dependence on circuit connectivity and on signal and noise statistics. Furthermore, we extend recent results from the field of compressed sensing to show how recurrent neuronal processing can extend the duration of memory traces for sparse temporal sequences.
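The notion of a transiently encoded memory trace can be made concrete in a few lines of simulation. The sketch below is purely illustrative, not the analysis presented in the talk: a linear recurrent network with an assumed delay-line connectivity is driven by a scalar input sequence, and the memory trace is measured by how well a linear readout of the current noisy state recovers inputs presented k steps in the past. The network size, gain, noise level, and least-squares decoder are all assumptions chosen for demonstration.

import numpy as np

rng = np.random.default_rng(0)

N = 100           # neurons (assumed network size)
T = 4000          # simulation length
noise_std = 0.05  # per-neuron injected noise (assumed)

# A feedforward chain ("delay line"): neuron j receives from neuron j-1.
# Non-normal architectures like this are one way a linear network can
# hold a transient trace of past inputs well beyond its time constant.
W = np.zeros((N, N))
W[np.arange(1, N), np.arange(N - 1)] = 0.98   # slightly lossy shift

v = np.zeros(N)
v[0] = 1.0        # scalar input enters at the head of the chain

s = rng.standard_normal(T)                    # input sequence to remember
x = np.zeros(N)
states = np.empty((T, N))
for t in range(T):
    x = W @ x + v * s[t] + noise_std * rng.standard_normal(N)
    states[t] = x

# Memory trace: how well a linear readout of the current state recovers
# the input presented k steps ago, evaluated on held-out time points.
for k in (0, 10, 50, 90, 120):
    X, y = states[k:], s[: T - k]
    half = len(y) // 2
    w, *_ = np.linalg.lstsq(X[:half], y[:half], rcond=None)
    r = np.corrcoef(X[half:] @ w, y[half:])[0, 1]
    print(f"lag {k:3d}: held-out recovery correlation {r:+.2f}")

Because the chain has only N stages, recovery collapses for lags beyond N, and the held-out correlations trace out a decaying memory curve. For sparse input sequences, replacing the least-squares readout with a sparse (L1-based) decoder is the sense in which compressed sensing can lengthen such traces.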

We also consider the storage of long-term memories through synaptic modifications in existing networks. Recent experimental work suggests that single synapses are digital: from the perspective of extracellular physiology, they can take on only a finite number of discrete values for their strength. This imposes catastrophic limits on the memory capacity of classical models of memory, which have relied on a continuum of analog synaptic strengths. However, synapses have many internal molecular states, suggesting that we should model each synapse as a complex molecular network rather than as a single scalar strength. We develop new theorems bounding the memory capacity of such complex synaptic models, and describe the structural organization of internal molecular networks necessary for achieving these limits.
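The contrast between a scalar synapse and a synapse with internal molecular states can likewise be sketched in simulation. The toy model below is an illustrative assumption, not one of the models or theorems from the talk: each synapse carries a linear chain of M internal states but expresses only a binary strength, and we track the signal-to-noise ratio of one stored memory as a stream of random subsequent memories is written on top of it. The population size, chain depth, and SNR readout are all chosen for demonstration.

import numpy as np

rng = np.random.default_rng(1)

P = 20000   # population of synapses (assumed)
M = 6       # internal molecular states per synapse (assumed)
T = 2000    # number of subsequent, interfering memories

# Internal state is an integer in {0, ..., M-1}; plasticity events step
# it up (potentiation) or down (depression) along a linear chain.
state = rng.integers(0, M, size=P)

def express(s):
    # The measurable (extracellular) strength is binary, even though
    # the internal molecular state can take M distinct values.
    return np.where(s >= M // 2, 1.0, -1.0)

def plasticity(state, pattern):
    # pattern[i] = +1 requests potentiation of synapse i, -1 depression.
    return np.clip(state + pattern, 0, M - 1)

# Write the tracked memory, then bombard the population with random
# subsequent memories and watch the stored signal fade.
memory = rng.choice([-1, 1], size=P)
state = plasticity(state, memory)

for t in range(T + 1):
    if t in (0, 10, 100, 1000, 2000):
        # SNR of an ideal readout: mean overlap with the stored pattern,
        # scaled by sqrt(P) (the overlap's noise floor is ~ 1/sqrt(P)).
        snr = (express(state) * memory).mean() * np.sqrt(P)
        print(f"after {t:4d} interfering memories: SNR ~ {snr:5.1f}")
    state = plasticity(state, rng.choice([-1, 1], size=P))

In this toy model, deeper chains (larger M) begin with a weaker signal but lose it more slowly than a plain binary synapse (M = 2); the theorems described in the talk concern the fundamental limits of such tradeoffs over all possible internal molecular network structures.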

Biography

Surya began his academic career as an undergraduate at MIT, triple majoring in mathematics, physics, and EECS, and then moved to Berkeley to complete a PhD in string theory. There he worked on theories of how the geometry of space and time might holographically emerge from the statistical mechanics of large non-gravitational systems. He then chose to pursue theoretical neuroscience, a field in which theories can actually be tested against experiments. After completing a postdoc at UCSF, he recently started a theoretical neuroscience laboratory at Stanford. He and his lab study how networks of neurons and synapses cooperate to mediate important brain functions, such as sensory perception, motor control, and memory. He has been awarded a Swartz Fellowship in computational neuroscience, a Burroughs Wellcome Career Award at the Scientific Interface, and a Terman Award.