Abstract
Principal component analysis (PCA) is by far the most widespread tool for unsupervised learning with high-dimensional data sets. It is popularly applied for exploratory data analysis and online process monitoring. Unfortunately, fine-tuning PCA models, and particularly selecting the number of components, remains a challenging task. Today, this selection is often based on a combination of guiding principles, experience, and process understanding. Unlike the case of regression, where cross-validation of the prediction error is a widespread and trusted approach for model selection, no tool for PCA model selection enjoys this level of acceptance. In this work, we address this challenge and evaluate the utility of the cross-validated ignorance score with both simulated and experimental data sets. Application of this model selection criterion is based on the interpretation of PCA as a density model, as in probabilistic principal component analysis. With simulation-based benchmarking, it is shown to be (a) the overall best performing criterion, (b) the preferred criterion at high noise levels, and (c) very robust to changes in noise level. Tests on experimental data sets indicate that the ignorance score is sensitive to deviations from the PCA model structure, suggesting the criterion is also useful to detect model-reality mismatch.
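The selection procedure described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: it uses scikit-learn, whose `PCA.score` returns the average log-likelihood of held-out samples under the fitted probabilistic-PCA density. Since the ignorance score is the negative log predictive density, maximizing the cross-validated log-likelihood corresponds to minimizing the ignorance score. The simulated data set (300 samples, 10 variables, rank-3 signal plus noise) is purely illustrative.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import cross_val_score

# Illustrative low-rank data: rank-3 signal in 10 dimensions plus noise.
rng = np.random.default_rng(0)
n_samples, n_features, true_rank = 300, 10, 3
W = rng.normal(size=(n_features, true_rank))   # loadings
Z = rng.normal(size=(n_samples, true_rank))    # latent scores
X = Z @ W.T + 0.1 * rng.normal(size=(n_samples, n_features))

# PCA.score is the average PPCA log-likelihood, so cross_val_score
# yields the cross-validated (held-out) log-likelihood for each k.
scores = {k: cross_val_score(PCA(n_components=k), X, cv=5).mean()
          for k in range(1, n_features)}

# The selected model maximizes held-out likelihood, i.e. minimizes
# the cross-validated ignorance score.
best_k = max(scores, key=scores.get)
print(best_k)
```

With a clear rank-3 signal and modest noise, the cross-validated likelihood peaks at the true number of components rather than growing monotonically, which is the behavior that makes this criterion usable for model selection.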
| Original language | English |
|---|---|
| Pages (from-to) | 13448-13468 |
| Number of pages | 21 |
| Journal | Industrial and Engineering Chemistry Research |
| Volume | 58 |
| Issue number | 30 |
| DOIs | |
| State | Published - Jul 31 2019 |
| Externally published | Yes |
Funding
The authors would like to thank Karin Rottermann, Sylvia Richter, and Kai Udert for their contributions to the work presented in this paper. The study has been made possible by the Swiss National Foundation (project: 157097) and Eawag Discretionary Funds (grant number: 5221.00492.012.02, project: DF2018/ADASen).