On the Complexity of Learning for Spiking Neurons with Temporal Coding
Spiking neurons are models for the computational units in biological
neural systems where information is considered to be encoded mainly
in the temporal patterns of their activity. In a network of spiking
neurons, a new set of parameters becomes relevant that has no
counterpart in traditional neural network models: the time a pulse
needs to travel through a connection between two neurons, known as
the delay of the connection. It is known that these delays
are tuned in biological neural systems through a variety of
mechanisms. In this article we consider arguably the simplest
model of a spiking neuron, one that can also easily be implemented in
pulsed VLSI. We investigate the VC dimension of networks of spiking
neurons where the delays are treated as programmable parameters,
and we prove tight bounds on this VC dimension. Thus we obtain
quantitative estimates for the diversity of functions that a network
with fixed architecture can compute with different settings of its
delays. In particular, it turns out that a network of spiking
neurons with $k$ adjustable delays is able to compute a much richer
class of functions than a threshold circuit with $k$ adjustable
weights. The results also yield bounds for the number of training
examples that an algorithm needs for tuning the delays of a network
of spiking neurons. We also present results on the computational
complexity of such algorithms.
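
To make concrete how delays act as programmable parameters, the following is
a minimal illustrative sketch (not taken from the paper) of a single spiking
neuron whose firing time depends on the transmission delays of its incoming
connections. It assumes a simple piecewise-linear response function, unit
weights, and a discretized time axis; the names `eps` and `firing_time` and
all numeric constants are illustrative choices, not the paper's formal model.

```python
def eps(t):
    """Assumed piecewise-linear response to a presynaptic spike,
    t time units after the pulse arrives at the neuron."""
    return t if 0.0 <= t <= 1.0 else 0.0

def firing_time(input_spikes, delays, threshold, t_max=10.0, dt=0.01):
    """First time the summed postsynaptic potential reaches the threshold.

    input_spikes[i] is the firing time of presynaptic neuron i;
    delays[i] is the programmable delay of connection i, so the pulse
    from neuron i arrives at time input_spikes[i] + delays[i].
    Returns None if the neuron does not fire before t_max.
    """
    t = 0.0
    while t <= t_max:
        potential = sum(eps(t - s - d) for s, d in zip(input_spikes, delays))
        if potential >= threshold:
            return t
        t += dt
    return None

# The same input spike pattern yields different output firing times
# under different delay settings, i.e. the delays select which
# function of the input spike times the neuron computes.
print(firing_time([0.0, 0.2], delays=[0.1, 0.1], threshold=1.0))
print(firing_time([0.0, 0.2], delays=[0.9, 0.1], threshold=1.0))
```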