Machine learning (ML) models for molecules and materials commonly rely on a decomposition of the global target quantity into local, atom-centered contributions. This approach is convenient from a computational perspective, enabling large-scale ML-driven simulations with linear-scaling cost, and can also be used to deduce useful structure–property relations, as it associates simple atomic motifs with complicated macroscopic properties. However, even though there exist practical justifications for these decompositions, only the global quantity is rigorously defined, and thus it is unclear to what extent the atomistic terms predicted by the model can be trusted. Here, we introduce a quantitative metric, which we call the local prediction rigidity (LPR), to assess how robust the locally decomposed predictions of ML models are. We investigate the dependence of the LPR on the details of model training, e.g., the composition of the dataset, for several different problems ranging from simple toy models to real chemical systems. We present strategies to systematically enhance the LPR, which can be used to improve the robustness, interpretability, and transferability of the resulting atomistic ML models.
This repository contains the datasets and Jupyter notebooks used to reproduce the results of the above study.
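As a rough illustration of the kind of atom-centered decomposition the study builds on, the sketch below sets up a toy linear model in which a global target is the sum of per-atom contributions, fits it on global targets only, and then scores each local prediction with an LPR-like quantity taken here as the inverse posterior variance of that prediction under ridge regression. All names and formulas in the sketch are assumptions for illustration, not the definitions or implementation used in the study or in the notebooks of this repository.

```python
import numpy as np

# Illustrative sketch (assumed setup, not the paper's implementation):
# a linear atom-centered model where the global target of a structure is
#   y(structure) = sum_i  w . phi(atom_i),
# trained only on global targets. An LPR-like score is then estimated as the
# inverse posterior variance of each local (per-atom) prediction.

rng = np.random.default_rng(0)

n_structures, n_atoms, n_features = 200, 8, 5
sigma = 1e-2  # ridge regularization strength (assumed value)

# Random atom-centered features phi[s, i, :] and a hidden "true" local model
phi = rng.normal(size=(n_structures, n_atoms, n_features))
w_true = rng.normal(size=n_features)

# Only the global (per-structure) targets are observed
X_global = phi.sum(axis=1)  # structure features: sum of atomic features
y_global = X_global @ w_true + 0.01 * rng.normal(size=n_structures)

# Ridge fit on the global targets
H = X_global.T @ X_global + sigma**2 * np.eye(n_features)  # regularized Hessian
w_fit = np.linalg.solve(H, X_global.T @ y_global)

# Local (per-atom) predictions for one structure
phi_local = phi[0]           # atomic features of structure 0
y_local = phi_local @ w_fit  # atom-centered contributions

# LPR-like score: inverse of the posterior variance of each local prediction,
#   var_i = phi_i^T H^{-1} phi_i
# (a standard Bayesian-linear-regression estimate, used here as an assumed proxy)
H_inv = np.linalg.inv(H)
local_var = np.einsum("if,fg,ig->i", phi_local, H_inv, phi_local)
lpr = 1.0 / local_var

print("local contributions:", np.round(y_local, 3))
print("LPR-like scores:    ", np.round(lpr, 2))
```

In this toy setting, atoms whose features are well covered by the training set end up with small posterior variance and hence a large LPR-like score, which is the qualitative behavior the metric is meant to capture; the notebooks in this repository contain the actual analyses reported in the study.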