On Methods to Keep Learning Away from Intractability
We investigate the complexity of learning from restricted sets of training
examples. With the aim of making learning easier, we introduce two types
of restrictions on the permitted training examples. The strength of each
restriction can be tuned by choosing specific parameter values. We ask
how strictly these values must be limited in order to turn NP-complete
learning problems into polynomial-time solvable ones. Results are presented
for Perceptrons with binary and with arbitrary weights. We show that there
exist bounds on the parameters that sharply separate efficiently solvable
from intractable learning problems.
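For concreteness, one standard formalization of the hypothesis class in question is the Perceptron over $\{0,1\}^n$; the abstract does not fix notation, so the symbols below are our assumption:

$$
h_{w,\theta}(x) \;=\;
\begin{cases}
1 & \text{if } \sum_{i=1}^{n} w_i x_i \ge \theta,\\[2pt]
0 & \text{otherwise,}
\end{cases}
$$

with $w \in \{0,1\}^n$ in the binary-weight case and $w \in \mathbb{Z}^n$ in the arbitrary-weight case. The associated learning (consistency) problem is to find a pair $(w,\theta)$ that classifies all given training examples correctly; it is this problem whose complexity is studied under the parameterized restrictions on the training set.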