A BACKWARD SDE METHOD FOR UNCERTAINTY QUANTIFICATION IN DEEP LEARNING

Richard Archibald, Feng Bao, Yanzhao Cao, He Zhang

Research output: Contribution to journal › Article › peer-review


Abstract

We develop a backward stochastic differential equation based probabilistic machine learning method, which formulates a class of stochastic neural networks as a stochastic optimal control problem. An efficient stochastic gradient descent algorithm is introduced with the gradient computed through a backward stochastic differential equation. Convergence analysis for stochastic gradient descent optimization and numerical experiments for applications of stochastic neural networks are carried out to validate our methodology in both theory and performance.
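The abstract's idea of computing the gradient for stochastic gradient descent through a backward (adjoint) equation can be sketched on a toy stochastic control problem. The following is a minimal illustrative sketch, not the authors' algorithm: the drift `b(x, u) = u`, the quadratic loss, and every parameter value are invented for this example. A forward Euler–Maruyama pass simulates the state, a backward pass propagates the adjoint variable `Y` from the terminal condition, and the accumulated integral gives a Monte Carlo gradient estimate used by plain SGD.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy problem (all values illustrative): forward SDE  dX_t = u dt + sigma dW_t,
# loss J(u) = E[(X_T - target)^2].  The analytic minimizer is u* = (target - x0)/T.
T, N, sigma, x0, target = 1.0, 50, 0.1, 0.0, 1.0
dt = T / N

def db_dx(x, u):
    """Drift sensitivity in the state x (zero for this constant drift)."""
    return np.zeros_like(x)

def db_du(x, u):
    """Drift sensitivity in the control u."""
    return np.ones_like(x)

def bsde_gradient(u, n_paths=256):
    """Monte Carlo estimate of dJ/du via a forward pass and a backward adjoint pass."""
    # Forward Euler-Maruyama pass, storing the whole trajectory.
    X = np.empty((N + 1, n_paths))
    X[0] = x0
    dW = rng.normal(0.0, np.sqrt(dt), size=(N, n_paths))
    for n in range(N):
        X[n + 1] = X[n] + u * dt + sigma * dW[n]
    # Backward pass: terminal condition Y_T = g'(X_T) = 2(X_T - target),
    # then step back with the adjoint recursion Y_n = Y_{n+1} + (db/dx) Y_{n+1} dt.
    Y = 2.0 * (X[-1] - target)
    grad = np.zeros(n_paths)
    for n in reversed(range(N)):
        grad += Y * db_du(X[n], u) * dt   # accumulate the integral E[∫ Y (db/du) dt]
        Y = Y + db_dx(X[n], u) * Y * dt
    return grad.mean()

# Plain stochastic gradient descent on the scalar control u.
u, lr = -0.5, 0.2
for _ in range(200):
    u -= lr * bsde_gradient(u)
```

With these values the iterates converge to the analytic optimum `u* = 1`; for a state-dependent drift the `db_dx` term would make the backward recursion nontrivial, which is where the BSDE formulation carries its weight.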

Original language: English
Pages (from-to): 2807-2835
Number of pages: 29
Journal: Discrete and Continuous Dynamical Systems - Series S
Volume: 15
Issue number: 10
DOIs
State: Published - Oct 2022

Funding

2020 Mathematics Subject Classification. 60H35, 68T07, 93E20. Key words and phrases. Probabilistic machine learning, stochastic neural networks, stochastic optimal control, stochastic gradient descent. The second and third authors are partially supported by the U.S. Department of Energy under grant numbers DE-SC0022297 and DE-SC0022253; the last author is supported by NSFC 12071175 and the Science and Technology Development of Jilin Province, China, no. 201902013020. ∗ Corresponding author: He Zhang.

Funders: Funder number
NSFC; Science and Technology Development of Jilin Province: 12071175; 201902013020
U.S. Department of Energy: DE-SC0022253, DE-SC0022297

Keywords

• Probabilistic machine learning
• Stochastic gradient descent
• Stochastic neural networks
• Stochastic optimal control
