Abstract
Consider a system of N sensors S_1, S_2, …, S_N, where sensor S_j outputs Y^(j) in response to input X according to an unknown probability distribution P_{Y^(j)|X}. A training n-sample (X_1, Y_1), (X_2, Y_2), …, (X_n, Y_n) is given, where Y_i = (Y_i^(1), Y_i^(2), …, Y_i^(N)) and Y_i^(j) is the output of S_j in response to input X_i. The problem is to choose, based on the sample, a fusion function f from a family F so as to minimize the expected square error I(f), where Y = (Y^(1), Y^(2), …, Y^(N)). We take F to be the set of feedforward neural networks of sigmoid units with a single hidden layer and bounded weights. The computation of the f* ∈ F that exactly minimizes I(f) is not possible in general, since the underlying distributions are unknown. Under the boundedness of X and Y, we show that for a sufficiently large sample a neural network estimate f̂ can be obtained such that P[I(f̂) − I(f*) > ε] < δ for any ε > 0, any δ with 0 < δ < 1, and any distribution P_{Y,X}. Using various properties of feedforward neural networks, we obtain three different estimates for the required sample sizes.
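To make the setup concrete, the following is a minimal sketch, in Python with NumPy, of estimating a fusion rule by minimizing the empirical squared error over single-hidden-layer sigmoid networks with weights constrained to a bounded range. The sensor model, network size, gradient-descent training loop, and all names and hyperparameters are illustrative assumptions made for this example; the paper's sample-size estimates are not computed here.

```python
# Illustrative sketch (not the paper's algorithm): fit a fusion rule
# f: R^N -> R with a single hidden layer of sigmoid units and bounded
# weights, by minimizing the empirical squared error on a training
# n-sample of (sensor-output vector Y_i, input X_i) pairs.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fit_fusion_network(Y, X, hidden=8, weight_bound=5.0,
                       lr=0.1, epochs=2000, seed=0):
    """Minimize (1/n) * sum_i (f(Y_i) - X_i)^2 over networks with one
    hidden sigmoid layer, weights clipped to [-weight_bound, weight_bound].
    All hyperparameters here are assumptions for illustration."""
    rng = np.random.default_rng(seed)
    n, N = Y.shape
    W1 = rng.normal(scale=0.5, size=(N, hidden))   # input -> hidden weights
    b1 = np.zeros(hidden)
    w2 = rng.normal(scale=0.5, size=hidden)        # hidden -> output weights
    b2 = 0.0

    for _ in range(epochs):
        H = sigmoid(Y @ W1 + b1)          # (n, hidden) hidden activations
        pred = H @ w2 + b2                # (n,) fused estimates f(Y_i)
        err = pred - X                    # residuals
        # Gradients of the empirical squared error.
        g_w2 = 2.0 * H.T @ err / n
        g_b2 = 2.0 * err.mean()
        dH = 2.0 * np.outer(err, w2) / n * H * (1.0 - H)
        g_W1 = Y.T @ dH
        g_b1 = dH.sum(axis=0)
        # Gradient step, then project back onto the bounded-weight class.
        W1 = np.clip(W1 - lr * g_W1, -weight_bound, weight_bound)
        b1 = np.clip(b1 - lr * g_b1, -weight_bound, weight_bound)
        w2 = np.clip(w2 - lr * g_w2, -weight_bound, weight_bound)
        b2 = float(np.clip(b2 - lr * g_b2, -weight_bound, weight_bound))

    def fuser(Ynew):
        return sigmoid(Ynew @ W1 + b1) @ w2 + b2
    return fuser

# Toy usage: N = 3 noisy sensors observing a bounded input X in [0, 1].
rng = np.random.default_rng(1)
n, N = 500, 3
X = rng.uniform(0.0, 1.0, size=n)
Y = X[:, None] + rng.normal(scale=[0.05, 0.1, 0.2], size=(n, N))
f_hat = fit_fusion_network(Y, X)
print("empirical squared error of f_hat:", np.mean((f_hat(Y) - X) ** 2))
```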
Original language | English |
---|---|
Pages (from-to) | 21-30 |
Number of pages | 10 |
Journal | Intelligent Automation and Soft Computing |
Volume | 5 |
Issue number | 1 |
DOIs | |
State | Published - Jan 1 1999 |
Funding
Research sponsored by the Engineering Research Program of the Office of Basic Energy Sciences of the U.S. Department of Energy, under Contract No. DE-AC05-96OR22464 with Lockheed Martin Energy Research Corp., the Seed Money Fund Project of Oak Ridge National Laboratory, and the Office of Naval Research under orders No. N00014-96-F-0415 and No. N00014-97-F-0329.
Keywords
- Empirical estimation
- Feedforward
- Fusion rule estimation
- Lipschitz functions
- Neural networks
- Sensor fusion