## Abstract

The problem of estimating a function with feedforward neural networks from an independent and identically distributed (i.i.d.) sample is addressed. Feedforward networks with a single hidden layer of 1/(1 + e^{-γz}) units and bounded parameters are considered. It is shown that, given a sufficiently large sample, a nearest neighbor rule approximates the best neural network in the sense that the expected error is arbitrarily small with arbitrarily high probability. The result extends to other neural networks whose hidden units satisfy a suitable Lipschitz condition. A result of practical interest is that computing a neural network that approximates (in the above sense) the best possible one is computationally difficult, whereas a nearest neighbor rule is computable in time linear in the sample size.
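The abstract contrasts the hardness of fitting the sigmoid network with the linear-time cost of a nearest neighbor rule. A minimal sketch of the two ingredients is given below; the function names and the one-dimensional setting are illustrative assumptions, not the paper's actual construction.

```python
import math

def sigmoid_unit(z, gamma=1.0):
    # Hidden-unit activation 1/(1 + e^{-gamma*z}) from the abstract.
    return 1.0 / (1.0 + math.exp(-gamma * z))

def nearest_neighbor_predict(sample, x):
    """Predict f(x) by the label of the nearest sample point.

    `sample` is a list of (x_i, y_i) pairs. A single pass over the
    sample suffices, giving the linear-time behavior (in sample size)
    noted in the abstract. One-dimensional inputs are assumed here
    purely for illustration.
    """
    _, nearest_label = min(sample, key=lambda p: abs(p[0] - x))
    return nearest_label
```

For example, with the sample `[(0.0, 0.0), (1.0, 1.0), (2.0, 4.0)]`, a query at `x = 1.2` returns the label `1.0` of its nearest sample point.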

Original language | English
---|---
Pages | 108-113
Number of pages | 6
State | Published - 1996
Event | Proceedings of the 1996 IEEE International Conference on Neural Networks, ICNN. Part 1 (of 4) - Washington, DC, USA. Duration: Jun 3 1996 → Jun 6 1996

### Conference

Conference | Proceedings of the 1996 IEEE International Conference on Neural Networks, ICNN. Part 1 (of 4)
---|---
City | Washington, DC, USA
Period | 06/3/96 → 06/6/96