Efficient learning of linear perceptrons
Abstract. We introduce an efficient agnostic learning algorithm for the class of linear perceptrons, making no assumptions whatsoever on the example-generating distribution. Our performance guarantee is the following: given any margin parameter \eta > 0, the algorithm runs in time polynomial in the sample size and the dimension, and outputs a hypothesis perceptron that correctly classifies at least as many points as any perceptron classifies correctly with margin \eta. While the algorithm's running time is not polynomial in 1/\eta, we prove that, unless P = NP, no such "fully polynomial" approximation scheme exists.
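One way to state the guarantee formally is sketched below; the notation is an assumed convention not fixed by the abstract: a labeled sample (x_1, y_1), ..., (x_m, y_m) with labels y_i in {-1, +1}, the output hypothesis h, and unit-norm weight vectors w ranging over all perceptrons:

\[
  \bigl|\{\, i : h(x_i) = y_i \,\}\bigr|
  \;\ge\;
  \max_{\|w\| = 1}
  \bigl|\{\, i : y_i \,\langle w, x_i \rangle \ge \eta \,\}\bigr| .
\]

That is, the hypothesis is compared against the best achievable margin-\eta success count, rather than against the best raw (margin-zero) success count.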