Deep kernel methods learn better: from cards to process optimization

Mani Valleti, Rama K. Vasudevan, Maxim A. Ziatdinov, Sergei V. Kalinin

Research output: Contribution to journal › Article › peer-review

2 Scopus citations

Abstract

The ability of deep learning methods to perform classification and regression tasks relies heavily on their capacity to uncover manifolds in high-dimensional data spaces and project them into low-dimensional representation spaces. In this study, we investigate the structure and character of the manifolds generated by classical variational autoencoder (VAE) approaches and deep kernel learning (DKL). In the former case, the structure of the latent space is determined by the properties of the input data alone, while in the latter, the latent manifold forms as a result of an active learning process that balances the data distribution and target functionalities. We show that DKL with active learning can produce a more compact and smoother latent space that is more conducive to optimization than that of previously reported methods, such as the VAE. We demonstrate this behavior using a simple cards dataset and extend it to the optimization of domain-generated trajectories in physical systems. Our findings suggest that latent manifolds constructed through active learning have a more beneficial structure for optimization problems, especially in feature-rich, target-poor scenarios that are common in domain sciences, such as materials synthesis, energy storage, and molecular discovery. The Jupyter Notebooks that encapsulate the complete analysis accompany the article.
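To make the DKL idea in the abstract concrete, the following is a minimal, self-contained sketch (not the paper's implementation, which is provided in the accompanying Jupyter Notebooks). It combines a neural feature extractor with an RBF kernel to form a "deep kernel," runs exact Gaussian-process regression on top of it, and performs a simple uncertainty-driven active-learning loop. For brevity the network weights are fixed random draws rather than trained jointly with the kernel hyperparameters, and the acquisition rule is pure exploration (maximum posterior variance) rather than a Bayesian-optimization criterion; the function names (`phi`, `deep_kernel`, `gp_posterior`) and the 1-D toy target are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D target function the GP will model (illustrative assumption).
def target(x):
    return np.sin(3 * x) + 0.5 * x

# "Deep" feature extractor: a fixed random two-layer tanh network that
# maps inputs to a 2-D latent space. In real DKL these weights are
# trained jointly with the GP marginal likelihood; freezing them keeps
# the sketch short.
W1 = rng.normal(size=(1, 16))
b1 = rng.normal(size=16)
W2 = rng.normal(size=(16, 2))

def phi(x):
    h = np.tanh(x[:, None] @ W1 + b1)
    return h @ W2  # latent representation

def rbf(A, B, length=1.0):
    # Squared-exponential kernel on latent coordinates.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / length**2)

def deep_kernel(x1, x2):
    # Deep kernel: RBF applied to the network's latent embedding.
    return rbf(phi(x1), phi(x2))

def gp_posterior(X, y, Xs, noise=1e-4):
    # Exact GP posterior mean and variance at query points Xs.
    K = deep_kernel(X, X) + noise * np.eye(len(X))
    Ks = deep_kernel(Xs, X)
    Kss = deep_kernel(Xs, Xs)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    mu = Ks @ alpha
    v = np.linalg.solve(L, Ks.T)
    var = np.diag(Kss) - (v**2).sum(0)
    return mu, np.maximum(var, 1e-12)

# Active-learning loop: query the pool point with the largest posterior
# uncertainty. A Bayesian-optimization acquisition (e.g. expected
# improvement) would instead balance this against the target value.
X_pool = np.linspace(-2, 2, 200)
X_train = np.array([-1.0, 1.0])
y_train = target(X_train)
for _ in range(8):
    mu, var = gp_posterior(X_train, y_train, X_pool)
    x_next = X_pool[np.argmax(var)]
    X_train = np.append(X_train, x_next)
    y_train = np.append(y_train, target(x_next))

mu, var = gp_posterior(X_train, y_train, X_pool)
rmse = np.sqrt(np.mean((mu - target(X_pool)) ** 2))
print("RMSE over the pool after active learning:", rmse)
```

Because the kernel acts on the learned (here, random) latent coordinates rather than the raw inputs, both the surrogate model and the acquisition function operate in the low-dimensional representation space, which is the mechanism the paper argues yields a latent manifold better suited to optimization.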

Original language: English
Article number: 015012
Journal: Machine Learning: Science and Technology
Volume: 5
Issue number: 1
DOIs
State: Published - Mar 1 2024

Funding

This work was supported by the Energy Frontier Research Centers program: CSSAS–The Center for the Science of Synthesis Across Scales–under Award Number DE-SC0019288, located at the University of Washington, and the modeling and process optimization by the US Department of Energy Office of Science under the Materials Sciences and Engineering Division of the Basic Energy Sciences program. The Bayesian optimization and Deep Kernel Learning research was supported by the Center for Nanophase Materials Sciences (CNMS) which is a US Department of Energy, Office of Science User Facility at Oak Ridge National Laboratory.

Funders (funder number):
• CSSAS (DE-SC0019288)
• Center for Nanophase Materials Sciences
• U.S. Department of Energy
• Office of Science
• Oak Ridge National Laboratory
• University of Washington

Keywords

• Bayesian optimization
• deep kernel learning
• ferroelectric kinetic model
• high-dimensional optimization
• latent space
• process optimization
