We present a novel framework for learning cost-efficient latent
representations in problems with high-dimensional state spaces through
nonlinear dimension reduction. By enriching linear state approximations with
low-order polynomial terms, we account for key nonlinear interactions present
in the data, thereby reducing the problem's intrinsic dimensionality. Two
methods are introduced for learning such low-dimensional polynomial manifold
representations for embedding the data. The manifold parametrization
coefficients can be obtained by regression via either a proper orthogonal
decomposition or an alternating-minimization-based approach. Our numerical
results focus on the one-dimensional Korteweg-de Vries equation, where
accounting for nonlinear correlations in the data was found to lower the
representation error by up to two orders of magnitude compared with linear
dimension-reduction techniques.
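
A minimal sketch of the quadratic special case of such a polynomial manifold, in NumPy: the linear basis is taken from a POD (truncated SVD of the centered snapshots), and the polynomial coefficient matrix is fit to the linear residual by regularized least squares. The function names, the synthetic data, and the regularization parameter below are illustrative assumptions, not the paper's implementation; the abstract's alternating-minimization variant, which would re-solve for latent coordinates and coefficients in turn, is not shown.

```python
import numpy as np

def quad_features(Q):
    """Unique quadratic monomials q_i * q_j (i <= j) for each column of Q."""
    r = Q.shape[0]
    return np.array([Q[i] * Q[j] for i in range(r) for j in range(i, r)])

def fit_quadratic_manifold(X, r, reg=1e-8):
    """Fit x ~ x_ref + V q + W h(q) to a snapshot matrix X (n x k)."""
    x_ref = X.mean(axis=1, keepdims=True)
    Xc = X - x_ref
    U, _, _ = np.linalg.svd(Xc, full_matrices=False)
    V = U[:, :r]                 # linear (POD) basis
    Q = V.T @ Xc                 # latent coordinates of the snapshots
    H = quad_features(Q)         # quadratic feature matrix (m x k)
    E = Xc - V @ Q               # residual not captured by the linear part
    # regularized least squares: min_W ||E - W H||_F^2 + reg * ||W||_F^2
    A = H @ H.T + reg * np.eye(H.shape[0])
    W = np.linalg.solve(A, H @ E.T).T
    return x_ref, V, W

def reconstruct(x_ref, V, W, q):
    """Map a latent vector q back to the full state."""
    h = quad_features(q.reshape(-1, 1))[:, 0]
    return x_ref[:, 0] + V @ q + W @ h

if __name__ == "__main__":
    # toy check on synthetic snapshots lying near a quadratic manifold
    rng = np.random.default_rng(0)
    n, k, r = 200, 50, 3
    V_true = np.linalg.qr(rng.standard_normal((n, r)))[0]
    W_true = 0.1 * rng.standard_normal((n, r * (r + 1) // 2))
    Q_true = rng.standard_normal((r, k))
    X = V_true @ Q_true + W_true @ quad_features(Q_true)
    x_ref, V, W = fit_quadratic_manifold(X, r)
    Xc = X - x_ref
    Q = V.T @ Xc
    lin = np.linalg.norm(Xc - V @ Q) / np.linalg.norm(Xc)
    quad = np.linalg.norm(Xc - V @ Q - W @ quad_features(Q)) / np.linalg.norm(Xc)
    print(f"relative error: linear {lin:.2e}, quadratic {quad:.2e}")
```

On data with genuine quadratic correlations, the quadratic reconstruction error should fall well below the rank-r linear (POD) error, mirroring the kind of gap the abstract reports for the Korteweg-de Vries example.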