Abstract
Due to the curse of dimensionality and the limited availability of training data, approximating high-dimensional functions is a very challenging task even for powerful deep neural networks. Inspired by the Nonlinear Level-set Learning (NLL) method, which is built on the reversible residual network (RevNet), in this paper we propose a new method, Dimension Reduction via Learning Level Sets (DRiLLS), for function approximation. Our method consists of two major components: a pseudoreversible neural network (PRNN) module that effectively transforms high-dimensional input variables into low-dimensional active variables, and a synthesized regression module that approximates function values from the transformed data in the low-dimensional space. The PRNN not only relaxes the invertibility constraint on the nonlinear transformation that the NLL method inherits from RevNet, but also adaptively weights the influence of each sample and controls the sensitivity of the function to the learned active variables. The synthesized regression uses Euclidean distances in the input space to select neighboring samples, whose projections onto the space of active variables are then used for local least-squares polynomial fitting. This helps resolve the numerical oscillation issues present in traditional local and global regressions. Extensive experimental results demonstrate that DRiLLS outperforms both the NLL and active subspace methods, especially when the target function possesses critical points in the interior of its input domain.
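As a rough illustration of the two components described above, the sketch below pairs a generic encoder/decoder trained for approximate invertibility with a local least-squares polynomial fit in the leading active variable. This is not the authors' implementation: the class and function names, the network sizes, and the restriction to a single active variable are assumptions made for illustration, and the paper's sample-weighting and sensitivity-control loss terms are omitted.

```python
import numpy as np
import torch
import torch.nn as nn

class PRNN(nn.Module):
    """Generic encoder/decoder pair trained for approximate invertibility
    (a "pseudoreversible" map), with no RevNet-style architectural constraint.
    Layer sizes are placeholders, not the paper's settings."""
    def __init__(self, dim, hidden=64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(dim, hidden), nn.Tanh(), nn.Linear(hidden, dim))
        self.decoder = nn.Sequential(
            nn.Linear(dim, hidden), nn.Tanh(), nn.Linear(hidden, dim))

    def forward(self, x):
        z = self.encoder(x)      # z[:, :m] play the role of the active variables
        x_rec = self.decoder(z)  # approximate inverse; ||x_rec - x|| is penalized in training
        return z, x_rec

def synthesized_regression(x_query, z_query, X_train, Z_train, f_train, k=20, degree=2):
    """Predict f(x_query) by a local least-squares polynomial fit in the first
    active variable, over the k nearest neighbors of x_query selected by
    Euclidean distance in the ORIGINAL input space (single active variable shown)."""
    dist = np.linalg.norm(X_train - x_query, axis=1)
    idx = np.argsort(dist)[:k]                      # neighbor selection in input space
    z_active = Z_train[idx, 0]                      # neighbors' first active coordinate
    A = np.vander(z_active, degree + 1)             # local polynomial design matrix
    coeffs, *_ = np.linalg.lstsq(A, f_train[idx], rcond=None)
    return np.polyval(coeffs, z_query[0])           # evaluate at the query's active coordinate
```

In the full method, training the PRNN also involves terms that weight the influence of each sample and control the function's sensitivity to the learned active variables, as stated in the abstract; those terms are not reproduced in this sketch.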
Original language | English |
---|---|
Pages (from-to) | A1148-A1171 |
Journal | SIAM Journal on Scientific Computing |
Volume | 45 |
Issue number | 3 |
DOIs | https://doi.org/10.1137/21M1459198 |
State | Published - 2023 |
Funding
Submitted to the journal's Methods and Algorithms for Scientific Computing section November 15, 2021; accepted for publication (in revised form) December 15, 2022; published electronically June 5, 2023. https://doi.org/10.1137/21M1459198

This work is partially supported by the U.S. Department of Energy, Office of Science, Office of Advanced Scientific Computing Research, Office of Biological and Environmental Research, through the Applied Mathematics, Earth, and Environmental System Modeling and the Scientific Discovery through Advanced Computing programs under university grant DE-SC0022254 (third author) and contract ERKJ387 (fifth author) at Oak Ridge National Laboratory, and by the U.S. National Science Foundation under grants DMS-2012469 and DMS-2038080 (second author).
Keywords
- dimension reduction
- function approximation
- level set learning
- pseudoreversible neural network
- sparse data
- synthesized regression