Abstract
We consider learning a function f: [0, 1]^d → [0, 1] using feedforward sigmoid networks with a single hidden layer and bounded weights. Using the Lipschitz property, we provide a simple sample size bound within the framework of probably approximately correct (PAC) learning. The sample size is linear in the number of network parameters and logarithmic in the bound on the weights. This brief note thus provides a non-trivial function estimation problem using sigmoid neural networks for which the sample size is a linear function of the number of network parameters.
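The abstract states the scaling only qualitatively. As an illustration, the sketch below evaluates a generic PAC-style sample-size expression of the assumed form m = ceil((c/ε²)(W·ln B + ln(1/δ))), where W is the number of network parameters, B the bound on the weights, and c an unspecified constant. This formula is not the paper's exact bound; it is a hypothetical stand-in chosen only to exhibit the linear dependence on W and logarithmic dependence on B described above.

```python
import math


def pac_sample_size(num_params: int, weight_bound: float,
                    eps: float, delta: float, c: float = 1.0) -> int:
    """Illustrative PAC-style sample-size estimate (NOT the paper's exact bound).

    Assumes a generic form m = (c / eps^2) * (W * ln(B) + ln(1/delta)),
    which is linear in the number of parameters W and logarithmic in the
    weight bound B, matching the scaling described in the abstract.
    """
    W, B = num_params, weight_bound
    return math.ceil((c / eps ** 2) * (W * math.log(B) + math.log(1.0 / delta)))


if __name__ == "__main__":
    # Single hidden layer with k sigmoid units on inputs in [0, 1]^d:
    # W = k*(d + 1) + (k + 1) weights and biases in total.
    d, k = 10, 5
    W = k * (d + 1) + (k + 1)
    # Doubling B adds only ~W*ln(2)/eps^2 samples (logarithmic in B) ...
    print(pac_sample_size(W, weight_bound=10.0, eps=0.1, delta=0.05))
    print(pac_sample_size(W, weight_bound=20.0, eps=0.1, delta=0.05))
    # ... while doubling W roughly doubles the sample size (linear in W).
    print(pac_sample_size(2 * W, weight_bound=10.0, eps=0.1, delta=0.05))
```

Running the sketch makes the contrast concrete: doubling the weight bound changes the estimate only slightly, while doubling the parameter count roughly doubles it.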
Original language | English |
---|---|
Pages (from-to) | 115-122 |
Number of pages | 8 |
Journal | Neurocomputing |
Volume | 29 |
Issue number | 1-3 |
DOIs | |
State | Published - Nov 1999 |
Funding
The critical and helpful comments of the anonymous reviewers, which improved the perspective and presentation of this paper, are gratefully acknowledged. This research is sponsored by the Engineering Research Program of the Office of Basic Energy Sciences, of the US Department of Energy, under Contract No. DE-AC05-96OR22464 with Lockheed Martin Energy Research Corp., the Seed Money Program of Oak Ridge National Laboratory, and the Office of Naval Research under order N00014-96-F-0415.
Funders | Funder number |
---|---|
Lockheed Martin Energy Research Corp. | |
Office of Basic Energy Sciences | |
US Department of Energy | DE-AC05-96OR22464 |
Office of Naval Research | N00014-96-F-0415 |
Oak Ridge National Laboratory | |
Keywords
- Neural networks
- Probably approximately correct
- Sample size
- Sigmoid feedforward networks