This work puts forth low-complexity Riemannian subspace descent algorithms
for the minimization of functions over the symmetric positive definite (SPD)
manifold. Unlike existing Riemannian gradient descent variants, the
proposed approach utilizes carefully chosen subspaces that allow the update to
be written as a product of the Cholesky factor of the iterate and a sparse
matrix. The resulting updates avoid costly matrix operations such as matrix
exponentiation and dense matrix multiplication, which are required by
almost all other Riemannian optimization algorithms on the SPD manifold. We further
identify a broad class of functions, arising in diverse applications, such as
kernel matrix learning, covariance estimation of Gaussian distributions,
maximum likelihood parameter estimation of elliptically contoured
distributions, and parameter estimation in Gaussian mixture models,
over which the Riemannian gradients can be calculated efficiently. The proposed
uni-directional and multi-directional Riemannian subspace descent variants
incur per-iteration complexities of $\mathcal{O}(n)$ and $\mathcal{O}(n^2)$,
respectively, compared to the $\mathcal{O}(n^3)$ or higher complexity
incurred by all existing Riemannian gradient descent variants. The superior
runtime and low per-iteration complexity of the proposed algorithms are also
demonstrated via numerical tests on large-scale covariance estimation problems.
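As context for the complexity comparison, the following is a minimal sketch (assuming NumPy/SciPy; this is the standard baseline, not the paper's subspace algorithm) of one Riemannian gradient descent step on the SPD manifold under the affine-invariant metric, applied to Gaussian covariance estimation. For the negative log-likelihood $f(\Sigma) = \log\det\Sigma + \mathrm{tr}(\Sigma^{-1} S)$, the Riemannian gradient simplifies to $\Sigma - S$, yet the exponential-map update still requires a matrix exponential and dense congruence transforms, i.e., the $\mathcal{O}(n^3)$ per-iteration cost that the proposed sparse Cholesky-factor updates are designed to avoid. The step size and iteration count below are illustrative.

```python
import numpy as np
from scipy.linalg import cholesky, expm, solve_triangular

def riemannian_gd_step(Sigma, S, step=0.2):
    """One baseline Riemannian gradient descent step for
    f(Sigma) = log det(Sigma) + tr(Sigma^{-1} S) under the
    affine-invariant metric. The Riemannian gradient is Sigma - S,
    and with Sigma = L L^T the exponential-map update reads
        Sigma_+ = L expm(-step * L^{-1} (Sigma - S) L^{-T}) L^T.
    The expm and dense products make this O(n^3) per iteration.
    """
    L = cholesky(Sigma, lower=True)
    G = Sigma - S                               # Riemannian gradient
    M = solve_triangular(L, G, lower=True)      # L^{-1} G
    M = solve_triangular(L, M.T, lower=True).T  # L^{-1} G L^{-T}
    return L @ expm(-step * M) @ L.T

# Illustrative usage: the minimizer of f is Sigma* = S.
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))
S = A @ A.T + np.eye(5)   # a synthetic SPD "sample covariance"
Sigma = np.eye(5)
for _ in range(100):
    Sigma = riemannian_gd_step(Sigma, S)
print(np.linalg.norm(Sigma - S))  # approaches 0 as iterates reach the MLE
```

By contrast, the updates described in the abstract multiply the Cholesky factor of the iterate by a sparse matrix, sidestepping the matrix exponential and dense multiplications above.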