Benchmarking physics-inspired machine learning models for transition metal co...
Physics-inspired machine learning (ML) models can be categorized into two classes: those relying solely on three-dimensional structure and those incorporating...
Representing spherical tensors with scalar-based machine-learning models
Rotational symmetry plays a central role in physics, providing an elegant framework to describe how the properties of 3D objects – from atoms to the macroscopic scale –...
SPAᴴM(a,b): encoding the density information from guess Hamiltonian in quantu...
Recently, we introduced the spectrum of approximated Hamiltonian matrices (SPAᴴM), a class of molecular representations for kernel-based regression methods that takes...
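As a rough illustration of the general idea behind Hamiltonian-spectrum representations (not the SPAᴴM(a,b) variants described in the paper), the sketch below diagonalizes the cheapest guess Hamiltonian, the core Hamiltonian, for a water molecule with PySCF and uses the occupied eigenvalues as a feature vector; the molecule, basis set, and truncation to the occupied spectrum are assumptions made for the example.

```python
import numpy as np
from scipy.linalg import eigh
from pyscf import gto, scf

# Illustrative molecule and basis set (assumptions, not from the paper).
mol = gto.M(atom="O 0 0 0; H 0 0.757 0.587; H 0 -0.757 0.587", basis="sto-3g")

# Cheapest "guess" Hamiltonian: the core Hamiltonian (kinetic + nuclear attraction).
hcore = scf.hf.get_hcore(mol)
ovlp = mol.intor("int1e_ovlp")

# Generalized eigenvalue problem H C = S C diag(eps); the spectrum eps is the representation.
eps, _ = eigh(hcore, ovlp)

# Keep the occupied part of the spectrum (illustrative truncation choice).
n_occ = mol.nelectron // 2
features = eps[:n_occ]
print(features)
```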
Probing the effects of broken symmetries in machine learning
Symmetry is one of the most central concepts in physics, and it is no surprise that it has also been widely adopted as an inductive bias for machine-learning models applied to...
Automated prediction of ground state spin for transition metal complexes
Predicting the ground state spin of transition metal complexes is a challenging task. Previous attempts have focused on specific regions of chemical space, whereas a more...
Reconstructions and dynamics of β-lithium thiophosphate surfaces
Lithium thiophosphate (LPS) has demonstrated promising properties for use as a solid electrolyte in the next generation of lithium-ion batteries. However, the high...
Expectation consistency for calibration of neural networks (code)
Despite their incredible performance, deep neural networks are widely reported to be overoptimistic about their prediction confidence. Finding effective and efficient...
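To make the notion of overconfidence concrete, the sketch below compares the average top-class softmax confidence with the accuracy of held-out predictions; the synthetic logits are purely illustrative, and this is a generic diagnostic rather than the expectation-consistency procedure proposed in the paper.

```python
import numpy as np

def confidence_accuracy_gap(logits, labels):
    """Average top-class softmax confidence minus accuracy; a crude overconfidence signal."""
    z = logits - logits.max(axis=1, keepdims=True)   # numerical stability
    probs = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
    confidence = probs.max(axis=1)
    correct = probs.argmax(axis=1) == labels
    return confidence.mean() - correct.mean()

# Toy data: overly sharp logits with only a weak signal toward the true class.
rng = np.random.default_rng(0)
labels = rng.integers(0, 3, size=1000)
logits = rng.normal(size=(1000, 3)) * 5.0            # large scale -> overconfident softmax
logits[np.arange(1000), labels] += 1.0
print(f"confidence - accuracy = {confidence_accuracy_gap(logits, labels):+.3f}")
```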
Analysis of bootstrap and subsampling in high-dimensional regularized regress...
We investigate popular resampling methods for estimating the uncertainty of statistical models, such as subsampling, the bootstrap, and the jackknife, and their performance in...
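A minimal sketch of one such resampling scheme, the bootstrap, applied to a ridge regressor written in plain NumPy; the toy data, regularization strength, and number of resamples are illustrative assumptions.

```python
import numpy as np

def ridge_fit(X, y, lam=1.0):
    """Closed-form ridge estimator (any regularized regressor would do)."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

def bootstrap_prediction_std(X, y, x_new, n_boot=500, lam=1.0, seed=0):
    """Standard deviation of the prediction at x_new across bootstrap refits."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    preds = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, size=n)     # resample rows with replacement
        w = ridge_fit(X[idx], y[idx], lam)
        preds.append(x_new @ w)
    return np.std(preds)

# Toy regression problem (illustrative only).
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 5))
y = X @ rng.normal(size=5) + 0.5 * rng.normal(size=200)
print(bootstrap_prediction_std(X, y, x_new=np.ones(5)))
```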
A prediction rigidity formalism for low-cost uncertainties in trained neural ...
Quantifying the uncertainty of regression models is essential to ensure their reliability, particularly since their application often extends beyond their training domain. Based... -
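As a hedged illustration of the general flavor of such low-cost schemes (a generic last-layer feature-covariance estimate, not necessarily the prediction-rigidity formalism itself), one can score a query point by how well its last-layer features are covered by those of the training set:

```python
import numpy as np

def last_layer_uncertainty(F_train, f_test, reg=1e-6):
    """
    Low-cost uncertainty proxy from last-layer features of a trained network:
    sigma^2(x) proportional to f(x)^T (F^T F + reg*I)^(-1) f(x), up to a calibration constant.
    F_train: (n_train, d) last-layer features of the training set.
    f_test:  (n_test, d)  last-layer features of the query points.
    """
    d = F_train.shape[1]
    cov_inv = np.linalg.inv(F_train.T @ F_train + reg * np.eye(d))
    return np.einsum("id,de,ie->i", f_test, cov_inv, f_test)

# Toy features standing in for a trained network's penultimate layer (illustrative).
rng = np.random.default_rng(0)
F_train = rng.normal(size=(500, 16))
f_test = rng.normal(size=(10, 16)) * np.linspace(1, 3, 10)[:, None]
print(last_layer_uncertainty(F_train, f_test))  # larger for features outside the training bulk
```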
Benchmarking machine-readable vectors of chemical reactions on computed activ...
In recent years, there has been a surge of interest in predicting computed activation barriers in order to accelerate the automated exploration of reaction networks....
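One common way to obtain a machine-readable reaction vector, shown here only as an illustration and not necessarily among the representations benchmarked in the paper, is the difference of molecular fingerprints between products and reactants; the RDKit fingerprint settings and the toy reaction are assumptions.

```python
import numpy as np
from rdkit import Chem
from rdkit.Chem import AllChem

def morgan_bits(smiles, n_bits=2048, radius=2):
    """Morgan/ECFP bit vector of a molecule as a NumPy array."""
    mol = Chem.MolFromSmiles(smiles)
    fp = AllChem.GetMorganFingerprintAsBitVect(mol, radius, nBits=n_bits)
    return np.array([int(b) for b in fp.ToBitString()])

def reaction_difference_vector(reactant_smiles, product_smiles, **kwargs):
    """Sum of product fingerprints minus sum of reactant fingerprints."""
    r = sum(morgan_bits(s, **kwargs) for s in reactant_smiles)
    p = sum(morgan_bits(s, **kwargs) for s in product_smiles)
    return p - r

# Toy SN2-like substitution (illustrative, not from the benchmark set).
vec = reaction_difference_vector(["CBr", "[OH-]"], ["CO", "[Br-]"])
print(vec.shape, int(np.abs(vec).sum()))
```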
On double-descent in uncertainty quantification in overparametrized models (c...
Uncertainty quantification is a central challenge in reliable and trustworthy machine learning. Naive measures such as last-layer scores are well known to yield overconfident...
