We study networks of spiking neurons that use the timing of pulses
to encode information. Nonlinear interactions model the spatial
groupings of synapses on the neural dendrites and describe the
computations performed at local branches. Within a theoretical
framework of learning, we analyze how many training
examples these networks must receive in order to generalize well.
Bounds for this sample complexity of learning can be obtained in
terms of a combinatorial parameter known as the pseudo-dimension.
This dimension characterizes the computational richness of a neural
network and is given in terms of the number of network parameters.
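For concreteness, the standard (Pollard) definition of the pseudo-dimension of a class $\mathcal{F}$ of real-valued functions on a domain $X$ can be sketched as follows; the formulation used in the body of the paper may differ in notation:
\[
\operatorname{pdim}(\mathcal{F}) \;=\; \max\bigl\{\, d \;:\; \exists\, x_1,\dots,x_d \in X,\; r_1,\dots,r_d \in \mathbb{R}\ \text{such that}\ \forall\, b \in \{0,1\}^d\ \exists\, f \in \mathcal{F}:\ f(x_i) \ge r_i \Leftrightarrow b_i = 1 \ \text{for all } i \,\bigr\}.
\]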
Two types of feedforward architectures are considered:
constant-depth networks and networks of unconstrained depth. We
derive asymptotically tight bounds for each of these network types:
constant-depth networks are shown to have an almost linear
pseudo-dimension, whereas the pseudo-dimension of general networks
is quadratic in the number of network parameters.
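If "almost linear" is taken to mean an extra logarithmic factor, and with $W$ denoting the number of adjustable network parameters, the two bounds can be summarized as
\[
\operatorname{pdim}(\text{constant-depth networks}) = \Theta(W \log W), \qquad \operatorname{pdim}(\text{general networks}) = \Theta(W^{2}).
\]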
Networks of spiking neurons that use temporal coding are becoming
increasingly important in practical tasks such as computer
vision, speech recognition, and motor control. The question of how
well these networks generalize from a given set of training examples
is a central issue for their successful application as adaptive
systems. The results show that, although coding and computation in
these networks differ markedly from those of traditional neural
network models and are in many cases more powerful, their
generalization capabilities are at least as good.