Abstract
In a system of N sensors, sensor Sⱼ, j = 1, 2, ..., N, outputs Y(j) ∈ ℛ according to an unknown probability distribution P(Y(j)|X), corresponding to input X ∈ [0, 1]. A training n-sample (X₁, Y₁), (X₂, Y₂), ..., (Xₙ, Yₙ) is given, where Yᵢ = (Yᵢ(1), Yᵢ(2), ..., Yᵢ(N)) and Yᵢ(j) is the output of Sⱼ in response to input Xᵢ. The problem is to estimate, based on the sample, a fusion rule f : ℛᴺ → [0, 1] such that the expected square error is minimized over a family of functions ℱ that constitutes a vector space. The function f* that minimizes the expected error cannot be computed, since the underlying densities are unknown; only an approximation f̂ to f* is feasible. We estimate a sample size sufficient to ensure that f̂ provides a close approximation to f* with high probability. The advantages of vector space methods are twofold: (1) the sample size estimate is a simple function of the dimensionality of ℱ, and (2) the estimate f̂ can be computed by well-known least squares methods in polynomial time. The results are applicable to the classical potential function methods and also to a recently proposed special class of sigmoidal feedforward neural networks.
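The least squares computation described in the abstract can be illustrated with a minimal sketch: represent f̂ as a linear combination of basis functions of the N sensor outputs (so ℱ is a finite-dimensional vector space) and fit the coefficients to the training sample by ordinary least squares. The constant-plus-linear basis, the simulated sensor model, and all names below are illustrative assumptions, not the paper's construction.

```python
import numpy as np

# Illustrative setup (assumption): N sensors each report the input X
# corrupted by independent Gaussian noise.
rng = np.random.default_rng(0)
N, n = 3, 200                        # number of sensors, training sample size
X = rng.uniform(0.0, 1.0, size=n)    # inputs X_i in [0, 1]
Y = X[:, None] + 0.1 * rng.standard_normal((n, N))  # outputs Y_i(j)

# Design matrix Phi: one column per basis function of the sensor outputs.
# Here the basis is {1, y(1), ..., y(N)}, so dim(F) = N + 1 -- the quantity
# that drives the sample size estimate in the abstract.
Phi = np.column_stack([np.ones(n), Y])

# Empirical least squares estimate f_hat minimizing the sample squared error.
coef, *_ = np.linalg.lstsq(Phi, X, rcond=None)

def f_hat(y):
    """Fused estimate of X from one vector of N sensor outputs."""
    return float(coef[0] + coef[1:] @ np.asarray(y))
```

Because the family is a vector space spanned by a fixed basis, the minimization reduces to a single linear least squares solve, which is the source of the polynomial-time claim.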
| Original language | English |
| --- | --- |
| Pages (from-to) | 130-135 |
| Number of pages | 6 |
| Journal | Proceedings of SPIE - The International Society for Optical Engineering |
| Volume | 3067 |
| DOIs | |
| State | Published - 1997 |
| Event | Sensor Fusion: Architectures, Algorithms, and Applications - Orlando, FL, United States. Duration: Apr 24 1997 → Apr 24 1997 |
Keywords
- Empirical estimation
- Fusion rule estimation
- Sensor fusion
- Vector space methods