The convergence of policy gradient algorithms hinges on the optimization
landscape of the underlying optimal control problem. Theoretical insights into
these algorithms can often be obtained by analyzing the landscape of linear
quadratic control. However, most of the existing literature considers only the
optimization landscape for static full-state or output feedback policies
(controllers). We investigate the more challenging case of dynamic
output-feedback policies for linear quadratic regulation (abbreviated as dLQR),
which is prevalent in practice but has a rather complicated optimization
landscape. We first show how the dLQR cost varies with the coordinate
transformation of the dynamic controller and then derive the optimal
transformation for a given observable stabilizing controller. One of our core
results is the uniqueness of the stationary point of dLQR when the controller is
observable, which provides an optimality certificate for computing dynamic
controllers with policy gradient methods. Moreover, we establish conditions
under which dLQR and linear quadratic Gaussian control are equivalent, thus
providing a unified viewpoint of optimal control of both deterministic and
stochastic linear systems. These results further shed light on designing policy
gradient algorithms for more general decision-making problems with partially
observed information.
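
To fix ideas, here is a minimal sketch of the dLQR setting in standard notation (the symbols and the initial-state convention below are illustrative assumptions, not quoted from the paper). The plant evolves with partial observation,
\[
x_{t+1} = A x_t + B u_t, \qquad y_t = C x_t,
\]
and a dynamic output-feedback controller with internal state \(\xi_t\) takes the form
\[
\xi_{t+1} = A_K \xi_t + B_K y_t, \qquad u_t = C_K \xi_t,
\]
with the quadratic cost
\[
J(A_K, B_K, C_K) \;=\; \mathbb{E}\!\left[\sum_{t=0}^{\infty} \big( x_t^\top Q x_t + u_t^\top R u_t \big)\right]
\]
minimized over the controller parameters. A coordinate transformation by an invertible matrix \(T\) maps \((A_K, B_K, C_K)\) to \((T A_K T^{-1},\; T B_K,\; C_K T^{-1})\); if the controller's initial state \(\xi_0\) is drawn from a fixed distribution that is not transformed along with the controller (the convention assumed in this sketch), the closed-loop trajectories, and hence the value of \(J\), generally depend on \(T\), which is the dependence that the first result above characterizes.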
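
The policy-gradient viewpoint can likewise be made concrete with a small numerical sketch. The snippet below runs a generic zeroth-order (two-point finite-difference) gradient descent over the flattened controller parameters \((A_K, B_K, C_K)\); the plant matrices, dimensions, horizon, and step sizes are all illustrative assumptions, not values from the paper, and the estimator is a standard derivative-free scheme rather than the authors' algorithm.

```python
import numpy as np

# Hypothetical plant data: a small stable system with partial observation.
# All matrices here are illustrative placeholders, not from the paper.
A = np.array([[0.9, 0.1], [0.0, 0.8]])
B = np.array([[0.0], [0.5]])
C = np.array([[1.0, 0.0]])
Q = np.eye(2)
R = 0.1 * np.eye(1)

n_x, n_u, n_y = 2, 1, 1
n_xi = 2          # controller state dimension (full-order controller)
T_horizon = 50    # rollout length approximating the infinite-horizon cost
n_rollouts = 20   # rollouts averaged per cost evaluation

rng = np.random.default_rng(0)

def unpack(theta):
    """Split a flat parameter vector into controller matrices (A_K, B_K, C_K)."""
    a = n_xi * n_xi
    b = a + n_xi * n_y
    return (theta[:a].reshape(n_xi, n_xi),
            theta[a:b].reshape(n_xi, n_y),
            theta[b:].reshape(n_u, n_xi))

def cost(theta):
    """Monte-Carlo estimate of the dLQR cost for a dynamic output-feedback controller."""
    A_K, B_K, C_K = unpack(theta)
    total = 0.0
    for _ in range(n_rollouts):
        x = rng.standard_normal(n_x)    # random initial plant state
        xi = rng.standard_normal(n_xi)  # random initial controller state (an assumption)
        for _ in range(T_horizon):
            y = C @ x
            u = C_K @ xi
            total += x @ Q @ x + u @ R @ u
            x = A @ x + B @ u
            xi = A_K @ xi + B_K @ y
    return total / n_rollouts

# Zeroth-order policy gradient: perturb the parameters along a random direction
# and form a two-point finite-difference estimate of the gradient.
theta = 0.1 * rng.standard_normal(n_xi * n_xi + n_xi * n_y + n_u * n_xi)
step, smoothing = 1e-4, 0.05
for it in range(200):
    d = rng.standard_normal(theta.size)
    g = (cost(theta + smoothing * d) - cost(theta - smoothing * d)) / (2 * smoothing) * d
    theta -= step * g
    if it % 50 == 0:
        print(f"iter {it:3d}  estimated cost {cost(theta):.2f}")
```

A two-point estimator is used here because exact gradients are rarely available from data in output-feedback settings; variance reduction (e.g., reusing the same random initial states across the two perturbed evaluations) is omitted for brevity.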