Kernel machines have sustained continuous progress in the field of quantum
chemistry. In particular, they have proven to be successful in the low-data
regime of force field reconstruction. This is because many equivariances and
invariances due to physical symmetries can be incorporated into the kernel
function, which compensates for the lack of much larger datasets. So far,
however, the scalability of kernel machines has been hindered by their
quadratic memory and cubic runtime complexity in the number of training
points. While it is known that iterative Krylov subspace solvers can overcome
these burdens, their convergence crucially relies on effective
preconditioners, which are elusive in practice.
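To make the role of the preconditioner concrete (the notation below is ours,
introduced for illustration and not taken from the abstract): kernel
regression with regularization strength $\lambda$ amounts to solving

\[
  (K + \lambda I)\,\alpha = y, \qquad K_{ij} = k(x_i, x_j),
\]

where Krylov solvers such as conjugate gradients access $K$ only through
matrix-vector products, and a preconditioner $P \approx K + \lambda I$ with a
cheaply applicable inverse lowers the condition number of the preconditioned
system $P^{-1}(K + \lambda I)\,\alpha = P^{-1}y$, and with it the number of
iterations needed for convergence.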
Effective preconditioners need to partially pre-solve the learning problem in a
computationally cheap and numerically robust manner. Here, we consider the
broad class of Nyström-type methods to construct preconditioners based on
successively more sophisticated low-rank approximations of the original kernel
matrix, each of which provides a different set of computational trade-offs. All
considered methods aim to identify a representative subset of inducing (kernel)
columns to approximate the dominant kernel spectrum.
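
As a rough illustration of this idea (a minimal sketch, not the paper's
implementation: the RBF kernel, the uniform column sampling, and all
parameter values below are assumptions made for demonstration), the following
Python/NumPy snippet builds a Nyström preconditioner from a subset of kernel
columns and applies its inverse inside SciPy's conjugate-gradient solver via
the Woodbury identity:

```python
# Sketch: Nystroem-type preconditioner for CG on (K + lam*I) alpha = y.
# Kernel choice, column sampling, and constants are illustrative assumptions.
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg
from scipy.spatial.distance import cdist

rng = np.random.default_rng(0)
n, d, m, lam = 2000, 8, 200, 1e-3       # m: number of inducing columns

X = rng.standard_normal((n, d))
y = rng.standard_normal(n)
K = np.exp(-0.5 * cdist(X, X, "sqeuclidean"))   # full kernel matrix (n x n)

# Nystroem factor from a subset of kernel columns:
# K ~= K_nm K_mm^{-1} K_mn = L L^T.
idx = rng.choice(n, size=m, replace=False)      # uniform sampling here; more
K_nm, K_mm = K[:, idx], K[np.ix_(idx, idx)]     # sophisticated schemes refine
U, s, _ = np.linalg.svd(K_mm)                   # the choice of columns
L = K_nm @ (U / np.sqrt(np.maximum(s, 1e-10)))  # n x m, with L L^T ~= K

# Preconditioner P = L L^T + lam*I, inverted via the Woodbury identity:
# P^{-1} v = (v - L (lam*I_m + L^T L)^{-1} L^T v) / lam.
C = np.linalg.cholesky(lam * np.eye(m) + L.T @ L)

def apply_pinv(v):
    w = np.linalg.solve(C.T, np.linalg.solve(C, L.T @ v))
    return (v - L @ w) / lam

A = LinearOperator((n, n), matvec=lambda v: K @ v + lam * v, dtype=np.float64)
P = LinearOperator((n, n), matvec=apply_pinv, dtype=np.float64)

alpha, info = cg(A, y, M=P, maxiter=500)
print("CG info:", info,
      "residual:", np.linalg.norm(K @ alpha + lam * alpha - y))
```

In line with the abstract, successively more sophisticated variants of this
sketch would replace the uniform column choice with a selection scheme (e.g.,
leverage scores or pivoted Cholesky) that better identifies a representative
subset of inducing columns and thus better captures the dominant kernel
spectrum.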