Sliding Global Attractors of Neural Learning and Memory
Yoram Baram
Computer Science Department, Technion - Israel Institute of Technology, Haifa 32000, Israel
Keywords: Neural Firing, Learning, Memory, Dynamics, Sliding Global Attractors.
Abstract: The highly variable nature of neural firing has been recognized by diverse empirical and analytic findings.
Here, the underlying morphology of neural firing is shown to be governed by a bilinear map, prescribing
eight types of neuronal global attractors and their points of local bifurcation. While synaptic learning gives
rise to irregular firing, membrane memory is shown to guarantee that, under the same external activation,
learning and retrieval end at the same global attractor. Forced and spontaneous changes in membrane
conductance are shown to cause sliding of the global attractors, switching them from passive to active state
and vice versa, and creating secondary firing modes. Selective activation of interacting neurons is shown to
create a shunting effect, yielding combinatorial retrieval, concealment and revelation of stored global
attractors. The utility of the global attractors is explained not only by their individual dynamic
characteristics, but also by their high power of combinatorial expression.
1 INTRODUCTION
Empirical and analytic evidence shows high dynamic
variability of neuronal firing. Individual neurons of
the same type are often capable of producing
different firing modes, switching from one to
another in a seemingly unpredictable manner. The
transition from one dynamic mode to another has
been called local bifurcation when caused by a
change in parameter values, and global bifurcation
when caused by the landscape of the underlying map
under fixed parameter values (Blanchard et al.,
2006). Variation in synaptic efficacy, widely
associated with learning and memory (Dudai, 1989),
has been shown to play a key role in the disorderly
dynamics of neural firing (Baram, 2012). While
almost all theoretical and experimental studies make
the implicit assumption that synaptic efficacy is both
necessary and sufficient to account for learning and
memory, it has been suggested that learning and
memory in neural networks result from an ongoing
interplay between changes in synaptic efficacy and
intrinsic membrane properties (Marder et al., 1996).
It seems equally plausible that changes in membrane
efficacy play a role in shaping the firing dynamics.
Employing widely accepted models of neuronal
firing arising from the conductance paradigm
(Hodgkin and Huxley, 1952), we show that the
firing rate process is governed by a bilinear discrete
iteration map. The map is shown to have a singular
value that defines the local bifurcation points
between eight global attractor types, comprising the
variable landscape of neural firing activity. The
global attractor types are grouped by elementary
firing modes into six classes, divided into two
categories: chaotic attractor (mixed), square attractor
(periodic), point attractor (constant) and attractor at
infinity (saturated), associated with positive
activation, form the active attractor category, while
attractor at zero (silent), and bipolar attractor at zero
and infinity (binary), associated with non-positive
activation, form the passive attractor category.
Changes in membrane conductance are shown to
cause sliding of the global attractors, modifying their
dynamic properties. In particular, sliding may
transform active attractors into passive ones
(concealment), and passive attractors into active
ones (revelation). In the case of time-
dependent conductance variation (Connor and
Stevens, 1971), such transformation may in itself
become a secondary dynamic mode. Membrane
memory, manifested by invariance to changes in
lateral feedback activity in the absence of external
intervention, guarantees that neuronal retrieval will
produce a stored global attractor. Selective
activation of interacting neurons is shown to create a
shunting effect, producing globally stable
combinatorial patterns of stored, concealed and
revealed neuronal global attractors.
Baram, Y., 2012. Sliding Global Attractors of Neural Learning and Memory. DOI: 10.5220/0004167705700575. In Proceedings of the 4th International Joint Conference on Computational Intelligence (NCTA-2012), pages 570-575. ISBN: 978-989-8565-33-4. Copyright © 2012 SCITEPRESS (Science and Technology Publications, Lda.)
2 GLOBAL ATTRACTORS OF
NEURONAL FIRING
The firing dynamics of interacting neurons have
been formulated as (Gerstner, 1995)

τ_i dν_i(t)/dt = −ν_i(t) + f_i(β_i(t) [ω_i^T υ(t) − θ_i(t)])    (1)
where ν_i(t) is the membrane current, or the firing
rate, of the i'th of n neurons in a neural network,
τ_i is a time constant, f_i is the neuronal kernel, ω_i is
the synaptic weights vector at the input to the i'th
neuron, β_i(t) is a function representing the membrane
speed of response to the input current, υ(t) is the
vector of firing rates corresponding to the pre-
neurons, and θ_i(t) is the equilibrium potential, or the
conductance threshold, which may also encompass
external activation and may generally take positive
or negative values (Dayan and Abbott, 2001).
Employing the exponential kernel α_i(t) = e^{−t/τ_i},
which incorporates the time constant τ_i of the
membrane potential response to an input pulse, a
discrete-time version of (1) is given by

ν_i(k) = (1/τ_i) Σ_{p=1}^{k} e^{−(k−p)/τ_i} f_i(β_i(p−1) [ω_i^T υ(p−1) − θ_i(p−1)])    (2)
(2)
The conductance-based rectification neuronal
kernel f_i, first derived from empirical data (Granit
et al., 1963; Connor and Stevens, 1971), then
formulated mathematically (Carandini and Ferster,
2000), and widely assumed in firing-rate models
(Dayan and Abbott, 2001), is

f_i(x) = f(x) = { x if x > 0 ; 0 if x ≤ 0 }    (3)
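As a minimal numerical sketch of the discrete leaky update with the rectification kernel (3) (a single neuron under constant drive; the parameter values and the name `drive` for the kernel argument are illustrative assumptions, not taken from the paper), the firing rate relaxes toward the rectified input:

```python
# Sketch: discrete leaky firing-rate update with the rectification
# kernel f(x) = max(x, 0). 'tau' stands in for tau_i; 'drive' stands in
# for the kernel argument; both are illustrative.

def f(x):
    """Rectification kernel (3): f(x) = x if x > 0, else 0."""
    return x if x > 0.0 else 0.0

def step(nu, tau, drive):
    """One leaky update: nu(k) = (1 - 1/tau) nu(k-1) + (1/tau) f(drive)."""
    return (1.0 - 1.0 / tau) * nu + (1.0 / tau) * f(drive)

nu, tau = 0.0, 5.0
for _ in range(200):
    nu = step(nu, tau, drive=2.0)   # constant positive drive
print(round(nu, 6))                 # 2.0: relaxes to f(2.0)

nu = 1.0
for _ in range(200):
    nu = step(nu, tau, drive=-1.0)  # negative drive is rectified to 0
print(round(nu, 6))                 # 0.0: decays to silence
```

The leaky term (1 − 1/τ) makes the rate a low-pass filter of the rectified input, which is the mechanism the discrete model (2) formalizes.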
As lateral feedback from other neurons can be
expected to be slower than self-feedback, the
ergodic nature of neural firing (Herve et al., 1990)
implies that (2) can be written as

ν_i(k) = (1 − 1/τ_i) ν_i(k−1) + (1/τ_i) f_i(β_i [ω_{i,i} ν_i(k−1) + ω̄_i ν̄ − θ_i])    (4)

where

ω̄_i ν̄ ≅ (1/τ_i) Σ_{j=1, j≠i}^{n} ω_{i,j} Σ_{p=1}^{k} e^{−(k−p)/τ_i} ν_j(p)    (5)

where ν̄ is the ensemble average of the neuronal
firing rate processes.
The map (4) divides into two parts. The first,
corresponding to the domain β_{i,i} ν_i(k−1) + u_i ≤ 0,
where β_{i,i} = β_i ω_{i,i}, is

ν_i(k) = f_1(ν_i(k−1)) = (1 − 1/τ_i) ν_i(k−1)    (6)
The second, corresponding to the
domain β_{i,i} ν_i(k−1) + u_i > 0, is

ν_i(k) = f_2(ν_i(k−1)) = (1 + (β_{i,i} − 1)/τ_i) ν_i(k−1) + u_i/τ_i    (7)

where

u_i = β_i (ω̄_i ν̄ − θ_i)    (8)
is the total activation. Clearly, the
line ν_i(k) = f_1(ν_i(k−1)) has a positive slope smaller
than 1; hence, it intersects the diagonal ν(k) = ν(k−1)
only at the origin. On the other
hand, when the line ν_i(k) = f_2(ν_i(k−1)) intersects
the diagonal, it will be at

p_i = u_i / (1 − β_{i,i})    (9)

The latter is, then, the only possible fixed point of
the map beside the origin. The dynamic nature of the
map is determined by its singular values. The
singular value of f_2(ν_i(k−1)) is its slope at p_i, that
is

σ_i = 1 + (β_{i,i} − 1)/τ_i    (10)
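The fixed point (9) and slope (10) can be probed numerically. The following sketch (parameter values are illustrative assumptions, chosen so that |σ_i| < 1) iterates the bilinear map and confirms convergence to p = u/(1 − β):

```python
# Sketch: iterate the bilinear map
#   nu(k) = (1 - 1/tau) nu(k-1) + (1/tau) max(0, beta*nu(k-1) + u)
# and check the fixed point p = u/(1 - beta) and the singular value
# sigma = 1 + (beta - 1)/tau. Parameter values are illustrative.

def iterate(nu, tau, beta, u, steps):
    for _ in range(steps):
        nu = (1.0 - 1.0 / tau) * nu + (1.0 / tau) * max(0.0, beta * nu + u)
    return nu

tau, beta, u = 4.0, 0.5, 1.0          # 1 - tau < beta < 1: contraction
sigma = 1.0 + (beta - 1.0) / tau      # = 0.875, slope of the f2 branch
p = u / (1.0 - beta)                  # = 2.0, predicted fixed point (9)

nu = iterate(0.1, tau, beta, u, 500)
print(round(sigma, 3), round(p, 3), round(nu, 3))  # 0.875 2.0 2.0
```

With |σ| < 1 the f_2 branch is a contraction, so any positive initial rate settles at p, as the classification below predicts for the point-attractor cases.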
An attractor (Abraham et al., 1997) is a subset A of
the state space which has a neighborhood, B(A),
called a basin, such that any trajectory originating
in B(A) stays in it, and no proper subset of A has
the same property. A global attractor is an attractor
whose basin is the entire state space. The domain of
(β_{i,i}, u_i) divides into eight subdomains, each defining
a type of global attractor. Positive total activation
(u_i > 0) defines the active global attractors:
(a) Chaotic attractor for β_{i,i} < 1 − 2τ_i, yielding σ_i < −1
(b) Square attractor for β_{i,i} = 1 − 2τ_i, yielding σ_i = −1
(c) Alternate point attractor for 1 − 2τ_i < β_{i,i} < 1 − τ_i, yielding −1 < σ_i < 0
(d) f_2-dominated monotone point attractor for 1 − τ_i < β_{i,i} < 1, yielding 0 < σ_i < 1
(e) Attractor at infinity for β_{i,i} ≥ 1, yielding σ_i ≥ 1
while non-positive total activation (u_i ≤ 0)
defines the passive global attractors:
(f) f_1-dominated attractor at zero for u_i ≤ 0 and β_{i,i} ≤ 0
(g) Bi-modal (piece-wise f_1- and f_2-dominated) attractor at zero for u_i ≤ 0 and 0 < β_{i,i} < 1, yielding 1 − 1/τ_i < σ_i < 1
(h) Bi-polar attractor at zero and infinity for u_i ≤ 0 and β_{i,i} ≥ 1, yielding σ_i ≥ 1
A diagram of the eight attractor types is posted at
http://www.cs.technion.ac.il/~baram/Attractors.pdf
Case (a) represents a so-called homoclinic orbit (Ott,
1994) which, initiating in a neighborhood of p, first
diverges, then snaps back to p, making the latter a
snap-back repeller (Marotto, 1978, 2005). It has
been shown for invertible smooth maps (Marotto,
1978, 2005), and extended to noninvertible piece-
wise smooth maps (Gardini and Tramontana, 2011),
that the existence of a snap-back repeller is a
sufficient condition for chaotic behavior in the Li-
Yorke sense (Li and Yorke, 1975).
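The repelling character of p in the σ_i < −1 regime can be probed directly. In this sketch (parameter values are illustrative assumptions), a trajectory started next to p is pushed away by the expanding f_2 branch, yet remains bounded because the contracting f_1 branch folds it back:

```python
# Sketch: probe the repeller regime of the bilinear map. With tau = 4
# and beta = -15 < 1 - 2*tau = -7, the f2-branch slope is
# sigma = 1 + (beta - 1)/tau = -3 < -1, so the fixed point
# p = u/(1 - beta) repels nearby trajectories, while the f1 branch
# (slope 1 - 1/tau = 0.75) keeps the orbit bounded. Values illustrative.
tau, beta, u = 4.0, -15.0, 1.0
p = u / (1.0 - beta)                     # = 0.0625

nu = p + 1e-6                            # start next to the fixed point
orbit = []
for _ in range(200):
    nu = (1.0 - 1.0 / tau) * nu + (1.0 / tau) * max(0.0, beta * nu + u)
    orbit.append(nu)

print(max(orbit) <= 1.0)                      # True: orbit stays bounded
print(max(abs(x - p) for x in orbit) > 0.01)  # True: p repels the orbit
```

A bounded orbit repeatedly ejected from a repelling fixed point is the ingredient behind the snap-back construction; establishing Li-Yorke chaos itself requires the snap-back condition of Marotto, not just σ_i < −1 at the fixed point.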
Case (b) represents a period-2 oscillation. A
trajectory initiating at any point in the state space
will converge to such oscillation within the interior
of a square, which is, then, an attractor (it might be
noted that, in general, a free oscillator, such as an
undamped pendulum, is not a cyclic attractor, as its
limit orbits, depending on initial conditions, are not
isolated).
Case (c) represents a point attractor at p, resulting in
periodic convergence (an increasing ν(k) step followed
by a decreasing ν(k) step).
Case (d) also represents a point attractor at p, but
the mode of convergence, dominated by f_2, is
monotonic.
Case (e) represents an attractor at infinity, which, in
reality, will be rectified at the maximal sustainable
physical firing limit, defining saturation.
Case (f) represents the passive, silent versions of the
active attractors (a-c).
Case (g) represents the passive, silent, bimodal
version of the active attractor (d).
Case (h) represents a bipolar attractor at zero and
infinity, which is the passive version of case (e). The
final destination of a trajectory of case (h) at zero or
infinity (or, rather, the saturation value) will be
determined by the initial condition, with
p the point
of separation between the two basins.
The attractors (a-h), each dominating the entire
state space, are global. As the total activation u is
represented by the point of contact of f_2 with the
coordinate ν(k), a change in u will have a sliding
effect, moving f_2, and its point of intersection, p,
with the diagonal ν(k) = ν(k−1), up or down in
parallel to the coordinate ν(k). This will change the
parameters of the global attractor, but, as long as
u does not change in sign, not its dynamic nature.
As u is changed from a positive to a non-positive value,
the corresponding active attractor will turn into a
passive attractor, and, in the case of an attractor at zero,
may be regarded as concealed in this state.
Conversely, as u is changed from a non-positive to a
positive value, the active state of the attractor is
revealed as one of the active attractor types.
Moreover, the sliding effect of time-dependent
conductance (Connor and Stevens, 1971), such as
post-inhibitory rebound (Perkel and Mulloney,
1974), can produce secondary firing modes, such as
oscillatory bursting, by switching elementary firing
modes, such as saturation, from passive to active
state and vice versa.
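The sliding mechanism can be illustrated numerically (a sketch with illustrative parameter values): flipping the sign of the total activation u slides f_2 and switches the same neuron between an active point attractor and silence:

```python
# Sketch: sliding by the total activation u. With 1 - tau < beta < 1,
# u > 0 yields an active point attractor at p = u/(1 - beta), while
# u <= 0 slides f2 down, leaving the attractor at zero (silence) and
# concealing the active state. Parameter values are illustrative.

def settle(tau, beta, u, nu=0.5, steps=500):
    for _ in range(steps):
        nu = (1.0 - 1.0 / tau) * nu + (1.0 / tau) * max(0.0, beta * nu + u)
    return nu

tau, beta = 4.0, 0.5
active = settle(tau, beta, u=1.0)    # converges to p = 1/(1 - 0.5) = 2.0
passive = settle(tau, beta, u=-1.0)  # slides below zero: silent
print(round(active, 3), round(passive, 6))  # 2.0 0.0
```

The same membrane parameters (τ, β) thus store one attractor whose active or passive face is selected purely by the sign of u.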
Combining the point attractor types, (c,d), into
one, and the attractor at zero types, (f,g), into one,
the group of eight global attractor types may be
rearranged into a group of six global attractor
classes, associated with different dynamic modes:
chaotic attractor (mixed), square attractor
(oscillatory), point attractor (constant), attractor at
zero (silent), attractor at infinity (saturated) and
bipolar attractor at zero and infinity (binary). We
call it the elementary code of global attractors.
Our analysis shows that the domain of the
singular value σ_i is divided into subdomains
corresponding to different global attractor types. As
the analysis involves statistical averaging
(manifested by ν̄, representing the ensemble
average of lateral activity), the boundaries between
the σ_i-subdomains may not precisely match the
empirical transition (or local bifurcation) points
between the firing modes. Yet, these analytic
bifurcation points seem to be highly valuable. For
instance, as our analysis implies the arousal of a
snap-back repeller when σ_i < −1, the latter may be
regarded as an analytic indicator of chaos. Such an
indicator seems highly desirable, in view of the often
reported inadequacy (Sprott, 2003) of empirical
measures of chaos, such as the first Lyapunov
exponent (Wright, 1984). In neighborhoods of the
σ_i-bifurcation points one might expect to find
combined, mixed and transient modes. It has been
suggested that chaotic neural firing gives rise to
multiplexed oscillatory modes (Baram, 2012).
3 COMBINATORIAL
RETRIEVAL BY
INTERACTING NEURONS
The learning process is characterized by variation of
the synaptic weights. Mathematical manifestations
of the Hebbian learning paradigm introduce products
of firing rates into the dynamic equations involved,
turning them into essentially polynomial maps,
prone to noninvertibility and chaos (Baram, 2012).
While such properties represent a high degree of
irregularity, it has been shown that the behavior of
the synaptic weights in certain manifestations of
Hebbian learning (Oja, 1982; Bienenstock et al.,
1982) is highly regular. In particular, it has been
shown that, under bounding conditions on the inputs,
the synaptic weights under the BCM rule converge
to final values (Cooper et al., 2004). Moreover, it
has been shown that, in the BCM framework, the
neuronal fixed points are not altered by lateral
connectivity if the neuronal kernel is invertible and
differentiable (Castellani et al., 1999). These
properties are shared by linear and sigmoidal kernels
but not by the rectification kernel (3). Yet, the bi-
linear map associated with the rectification kernel
implies that the fixed point p associated with f_2 is
not altered by lateral connectivity, as long as the
nature of the map is not changed (a change may
eliminate p altogether). This property may be
defined as local invariance of the global attractor to
lateral activity. Employing the index ℓ to denote
conductance and activation values during learning
and the index r to denote conductance and
activation values during retrieval, the equality
θ_{i,r} = θ_{i,ℓ},
coupled with the local invariance of the global
attractor to lateral activity, will guarantee that,
without external intervention, the retrieved global
attractor will be the same as the stored one. As the
equilibrium threshold, θ_i, is widely assumed to be
constant (Dayan and Abbott, 2001), the local
invariance of the global attractor to lateral activity
may be viewed as membrane memory.
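The membrane-memory property can be sketched numerically (all values are illustrative stand-ins for the model's τ_i, β_i, ω̄_i ν̄ and θ_i): with equal thresholds during learning and retrieval, retrieval reaches the same fixed point as learning even from a different initial firing rate:

```python
# Sketch: membrane memory. The retrieved attractor depends on the
# threshold theta (through u = beta*(w_bar_nu_bar - theta), as in (8)),
# not on the initial firing rate, so equal thresholds at learning and
# retrieval reproduce the stored fixed point. Values are illustrative.

def settle(tau, beta_self, u, nu, steps=1000):
    for _ in range(steps):
        nu = (1.0 - 1.0 / tau) * nu + (1.0 / tau) * max(0.0, beta_self * nu + u)
    return nu

tau, beta_self, beta = 4.0, 0.5, 1.0
w_bar_nu_bar, theta = 2.0, 1.0
u = beta * (w_bar_nu_bar - theta)               # total activation (8)

learned   = settle(tau, beta_self, u, nu=0.9)   # learning trajectory
retrieved = settle(tau, beta_self, u, nu=0.1)   # retrieval, same theta
print(round(learned, 6) == round(retrieved, 6)) # True: same attractor
```

The initial condition only selects the transient; the stored global attractor is fixed by the membrane parameters, which is the invariance the text calls membrane memory.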
The nature of the map, hence, the global
attractor, can only change if the sign of u_i changes.
It follows that changing the lateral activity, ν̄, has
the same effect on the nature of the map as changing
the conductance equilibrium threshold, θ_i. As noted
before, the definition of θ_i can be changed to include
external activation. The nature of the map, or the
global attractor, can be controlled, then, by external
activation, or by some internal mechanism,
enforcing ω̄_i ν̄ > θ_i for a positive total activation,
hence, an active attractor, or ω̄_i ν̄ < θ_i for a negative
total activation, hence, a passive attractor. In
particular, the state ω̄_i ν̄ < θ_i will enforce a strict
attractor at zero (case f), hence, silence, which may
be regarded as the ground state of the neuron.
External activation of the neuron at hand, or of
laterally connected neurons, can, by the sliding
effect, change the nature of the map, and, with it, the
very existence of the fixed point p. Specifically, the
transition of any of the active attractors (a-c) to the
passive attractor (f), and of the active attractor (d) to
the passive attractor (g), will eliminate the fixed
point associated with the respective attractor of any
of the types (a-d). The transition of the active
attractor (e) to the passive attractor (h) will give rise
to the fixed point p in (h). On the other hand, the
transition from the passive attractor (f) to any of the
active attractors (a-c) will give rise to the
corresponding fixed point p, as will the transition
from the passive attractor (g) to the active attractor
(d), while the transition from the passive attractor (h)
to the active attractor (e) will eliminate the fixed
point p in (h). The result will be concealment of a
stored active global attractor (if u_{i,ℓ} > 0 and
u_{i,r} ≤ 0), or revelation of the active state of a
stored passive global attractor (if u_{i,ℓ} ≤ 0 and
u_{i,r} > 0).
Applying a network-wide activation pattern, by
which some of the neurons receive positive external
activation and the others non-positive external
activation, will produce retrieval and concealment of
stored active global attractors, and revelation of the
SlidingGlobalAttractorsofNeuralLearningandMemory
573
active state of stored passive global attractors. For
instance, in learning, a neuron i may store a global
attractor of one of the active types (a-c), which, due
to the activation level, may slide and become a
passive global attractor of type (f), concealing the
nature of the active state of the stored attractor. On
the other hand, in selective retrieval, inhibitory
effects of the lateral feedback activity may be
eliminated by negative activation of an interacting
neuron j, causing an upward slide and revelation of
the active state of the global attractor stored in
neuron i. This shunting effect allows for the creation
of a large variety of network-wide patterns from the
stored active and passive neuronal patterns. A group
of n neurons can retrieve, by choice of neuron
activations, any permutation of neuronal stored
patterns, and their complementary active or passive
states. Assuming that neural information is
represented by firing mode, the expressive power of
a group of n neurons employing the elementary
code of global attractors alone is the retrieval
capacity of 6^n globally stable patterns, which may
be written as the set

M = A^n    (11)
where A is the set of firing modes associated with the
global attractor types and A^n = A^{n−1} × A, with ×
the Cartesian product. In general, A includes not only the
elementary firing modes but also the secondary
modes comprising combination, mixture and
multiplexity of elementary modes.
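The count of 6^n elementary patterns is simply the size of the n-fold Cartesian product of the six-mode alphabet, e.g. (mode names taken from the six classes above; n = 3 is an arbitrary illustration):

```python
# Sketch: the elementary code M = A^n as the n-fold Cartesian product
# of the six global-attractor classes.
from itertools import product

A = ["mixed", "oscillatory", "constant", "silent", "saturated", "binary"]
n = 3
M = list(product(A, repeat=n))   # all network-wide elementary patterns
print(len(M), 6 ** n)            # 216 216
```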
4 CONCLUSIONS
The neuronal global attractors can be directly related
to empirically observed firing modes. For instance,
seemingly random spiking can be represented by a
chaotic attractor, tonic spiking by a point attractor,
oscillatory spiking by a square attractor, and
bursting by saturation, representing an attractor at
infinity. The singularity parameter σ_i, defining local
bifurcation points between global attractors and their
corresponding firing modes, constitutes a valuable
tool for dynamic analysis of neural firing. For
instance, the arousal of chaos does not appear to
have been analytically identified with specific
parameter values. The empirical manifestation of the
first Lyapunov exponent (Wright, 1984) has been
known to produce highly unreliable results, even
when applied to data generated by simulating low
dimensional models (Sprott, 2003). We have shown
that, for bilinear maps, and, specifically, the
important class of such maps associated with
neuronal firing, the singular value σ_i < −1 provides,
in some statistical sense, an analytic characterization
of chaotic arousal. We have seen that the
neighborhoods of points of local bifurcation,
represented by certain values of the singularity
parameter σ_i, define regions of secondary firing
modes, comprising combination, mixture and
temporal multiplexing of elementary modes.
Secondary modes, such as periodic bursting, may
also arise from the sliding effect caused by time-
dependent conductance (Connor and Stevens, 1971),
such as post-inhibitory rebound (Perkel and
Mulloney, 1974), switching elementary firing
modes, such as saturation, from passive to active
state and vice versa. While there seems to be a clear
relationship between certain firing modes and neural
functions (e.g., oscillation, or periodic bursting,
seem related to heartbeat, walking and chewing) the
utility of others is not as commonly recognized or
understood. The chaotic trajectories of learning
(Baram, 2012), wandering over a wide range in the
state space, may serve the purpose of rapid search,
or formation, of a global attractor of memory. A
chaotic global attractor, mixing different firing rates
in a single sequence, may provide temporal
multiplexing for inter-neural communication
purposes. The maximum-energy response of a
neuron storing a bi-polar attractor, aroused by initial
condition at a threshold determined by memory, may
represent instinct. A global attractor at zero,
representing silence, may serve the purpose not only
of neural rest, but also of providing a common initial
condition for combinatorial learning and retrieval. The
combinatorial emergence of active and passive
global attractors may give rise not only to stored
subpatterns, but also to previously un-aroused
patterns, representing innovation.
REFERENCES
Abraham, R. H., Gardini, L. and Mira, C., 1997. Chaos in
Discrete Dynamical Systems. Springer- Verlag, Berlin.
Baram, Y., 2012. Noninvertibility, chaotic coding and
chaotic multiplexity in synaptically modulated neural
firing. Neural Computation 24(3): 676-699.
Bienenstock, E. L., Cooper, L. N. and Munro, P. W., 1982.
Theory for the development of neuron selectivity:
orientation specificity and binocular interaction in
visual cortex. J. Neurosci. 2, 32-48.
Blanchard, P., Devaney, R. L. and Hall, G. R., 2006.
Differential Equations. London: Thompson.
IJCCI2012-InternationalJointConferenceonComputationalIntelligence
574
Carandini, M. and Ferster, D., 2000. Orientation tuning of
membrane potential and firing rate in cat primary
visual cortex. J. Neurosci., 20(1), 470-484.
Connor, J. A. and Stevens, C. F., 1971. Prediction of
repetitive firing behaviour from voltage clamp data on
an isolated neuron soma. J. Physiol.; 213(1), 31–53.
Castellani, G. C., Intrator, N., Shouval, H., and Cooper, L.
N., 1999. Solutions of a BCM learning rule in a
network of lateral interacting non-linear neurons
Network 10:111-121.
Cooper, L. N., Intrator, N., Blais, B. S. and Shouval, H. Z.,
2004. Theory of Cortical Plasticity. New Jersey:
World Scientific.
Dayan, P. and Abbott, L. F., 2001. Theoretical
Neuroscience. MIT Press, Cambridge, MA.
Dudai, Y., 1989. Neurobiology of Memory. New York,
Oxford University Press.
Gardini, L. and Tramontana, F., 2011. Snap-back repellers
in non-smooth functions. Regular and Chaotic
Dynamics 2-3: 237-245.
Gerstner, W., 1995. Time structure of the activity in neural
network models. Phys. Rev. E 51: 738-758.
Granit, R., Kernell, D. and Shortess, G. K., 1963.
Quantitative aspects of repetitive firing of mammalian
motoneurons caused by injected currents. J. Physiol.
168, 911-931.
Herve, T., Dolmazon, J. M. and Demongeot, J., 1990.
Random field and neural information. Proc. Natl.
Acad. Sci. USA 87: 806-810, Biophysics.
Hodgkin, A., and Huxley, A., 1952. A quantitative
description of membrane current and its application to
conduction and excitation in nerve. J. Physiol.
117:500–544.
Li, T.-Y. and Yorke, J. A., 1975. Period three implies chaos.
Am. Math. Month., 82(10): 985-992.
Marder, E., Abbott, L. F., Turrigiano, G. G., Liu, Z. and
Golowasch, J., 1996. Memory from the dynamics of
intrinsic membrane currents. Proc. Nat. Acad. Sci.
USA 93: 13481-13486.
Marotto, F. R., 1978. Snap-back repellers imply chaos in
R^n. J. Math. Anal. Appl. 63(1): 199-223.
Marotto, F. R., 2005. On redefining a snap-back repeller.
Chaos, Solitons & Fractals 25: 25-28.
Oja, E., 1982. Simplified neuron model as a principal
component analyzer. J. Math. Biol. 15 (3): 267–273.
Ott, E., 1994. Chaos in Dynamical Systems. Cambridge
University Press.
Perkel, D. H. and Mulloney B., 1974. Motor pattern
production in reciprocally inhibitory neurons
exhibiting postinhibitory rebound. Science, 185, 181-
182.
Sprott, J. C., 2003. Chaos and Time-Series Analysis. New
York, Oxford University Press.
Wright, J., 1984. Method for calculating a Lyapunov
exponent, Phys. Rev. A, 29(5), 2924-2927.
SlidingGlobalAttractorsofNeuralLearningandMemory
575