Nearest neighbor rules PAC-approximate feedforward networks

Research output: Contribution to conference › Paper › peer-review

1 Scopus citation

Abstract

The problem of function estimation using feedforward neural networks, based on an independently and identically distributed sample, is addressed. Feedforward networks with a single hidden layer of 1/(1 + e^(−γz)) units and bounded parameters are considered. It is shown that, given a sufficiently large sample, a nearest neighbor rule approximates the best neural network in the sense that the expected error is within an arbitrarily small bound with arbitrarily high probability. The result extends to other neural networks whose hidden units satisfy a suitable Lipschitz condition. A result of practical interest is that computing a neural network that approximates (in the above sense) the best possible one is computationally difficult, whereas a nearest neighbor rule is computable in time linear in the sample size.
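The linear-time claim for the nearest neighbor rule can be seen from a minimal sketch: prediction requires only one pass over the stored sample. The code below is illustrative and not taken from the paper; the function name and data layout are assumptions.

```python
# Illustrative sketch (not from the paper): a 1-nearest-neighbor rule.
# Each query costs one pass over the n stored pairs, i.e. O(n) time
# in the sample size, in contrast to the hardness of fitting the
# best bounded-parameter sigmoidal network.

def nearest_neighbor_predict(sample, query):
    """Return the output value of the sample point closest to `query`.

    `sample` is a list of (x, y) pairs, where x is a tuple of floats
    and y is the observed function value at x.
    """
    best_y, best_d2 = None, float("inf")
    for x, y in sample:
        # Squared Euclidean distance; no square root needed for argmin.
        d2 = sum((xi - qi) ** 2 for xi, qi in zip(x, query))
        if d2 < best_d2:
            best_d2, best_y = d2, y
    return best_y
```

For example, with the sample [((0.0,), 0.0), ((1.0,), 1.0), ((2.0,), 4.0)], a query at (1.2,) returns 1.0, the value at its nearest stored point.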

Original language: English
Pages: 108-113
Number of pages: 6
State: Published - 1996
Event: Proceedings of the 1996 IEEE International Conference on Neural Networks, ICNN. Part 1 (of 4) - Washington, DC, USA
Duration: Jun 3, 1996 - Jun 6, 1996

Conference

Conference: Proceedings of the 1996 IEEE International Conference on Neural Networks, ICNN. Part 1 (of 4)
City: Washington, DC, USA
Period: 6/3/96 - 6/6/96
