Learning Temporally Encoded Patterns in Networks of Spiking Neurons
Networks of spiking neurons are powerful and versatile models of biological and artificial information-processing systems. Especially for modelling pattern-analysis tasks in a biologically plausible way that require short response times with high precision, they appear more appropriate than networks of threshold gates or models that encode analog values in average firing rates. We investigate the question of how neurons can learn on the basis of time differences between firing times. In particular, we provide learning rules of the Hebbian type, formulated in terms of single spiking events of the pre- and postsynaptic neuron, and show that the weights approach, with arbitrarily high precision, a value determined by the difference between the pre- and postsynaptic firing times.
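As an illustration only, the following Python sketch shows the general flavour of such a spike-timing-driven Hebbian update: a weight is nudged toward a target determined by the difference between a single pre- and postsynaptic firing time, so that repeated pairings make it converge to that target. The specific target function f, the learning rate eta, and the function name are assumptions for this sketch, not the rule analysed in the paper.

```python
import numpy as np

def hebbian_time_update(w, t_pre, t_post, eta=0.1,
                        f=lambda dt: np.exp(-abs(dt))):
    """One Hebbian-style update for a single pre/post spike pair.

    The weight w is moved a fraction eta toward f(t_post - t_pre),
    so repeated updates converge to a value determined solely by the
    firing-time difference.  (f and eta are illustrative assumptions.)
    """
    target = f(t_post - t_pre)      # hypothetical target set by the time difference
    return w + eta * (target - w)   # move the weight toward that target

# Minimal usage sketch: repeated pairings with a fixed time difference.
w = 0.0
for _ in range(50):
    w = hebbian_time_update(w, t_pre=2.0, t_post=5.0)
print(w)  # approaches f(3.0) = exp(-3) ~ 0.0498
```

With this kind of update the remaining distance to the target shrinks by a factor (1 - eta) per pairing, which mirrors the abstract's claim that the weights can approach the time-difference-determined value with arbitrarily high precision.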