Importance sampling (IS) is a popular technique in off-policy evaluation that
re-weights the returns of trajectories in the replay buffer to improve sample
efficiency. However, training with IS can be unstable, and previous attempts
to address this instability have mainly focused on analyzing the variance of IS.
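
For reference, the standard per-trajectory IS estimator, written in generic
notation with target policy $\pi$ and behavior policy $\mu$ (the symbols here
are illustrative, not necessarily the paper's), is

$$\hat{J}_{\mathrm{IS}}(\pi) \;=\; \frac{1}{n}\sum_{i=1}^{n} \rho_i\, R(\tau_i),
\qquad
\rho_i \;=\; \prod_{t=0}^{T_i-1} \frac{\pi(a^i_t \mid s^i_t)}{\mu(a^i_t \mid s^i_t)},$$

where $R(\tau_i)$ is the return of buffered trajectory $\tau_i$; the product of
likelihood ratios $\rho_i$ is the usual source of the high variance mentioned above.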
In this paper, we reveal that the instability is also related to a new notion,
the Reuse Bias of IS: the bias in off-policy evaluation caused by reusing the
replay buffer for both evaluation and optimization.
We theoretically show that evaluating and optimizing the current policy with
data from the replay buffer results in an overestimation of the objective,
which may cause erroneous gradient updates and degrade performance.
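
Intuitively, this is a winner's-curse effect: although $\hat{J}_{\mathrm{IS}}(\pi)$
is unbiased for any fixed $\pi$ (given full support), selecting or updating $\pi$
to maximize the estimate on the same buffer favors policies whose estimates are
inflated by noise. In the idealized case of exact maximization over a policy
class $\Pi$ (a simplifying assumption for illustration),

$$\mathbb{E}\Big[\max_{\pi\in\Pi}\hat{J}_{\mathrm{IS}}(\pi)\Big]
\;\ge\; \max_{\pi\in\Pi}\mathbb{E}\big[\hat{J}_{\mathrm{IS}}(\pi)\big]
\;=\; \max_{\pi\in\Pi} J(\pi),$$

so the optimized objective is biased upward even though each individual
estimate is not.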
We further provide a high-probability upper bound on the Reuse Bias and, by
introducing a notion of stability for off-policy algorithms, show that
controlling one term of this bound suffices to control the Reuse Bias.
Based on these analyses, we present a novel Bias-Regularized Importance
Sampling (BIRIS) framework, along with practical algorithms, that alleviates
the negative impact of the Reuse Bias.
Experimental results show that our BIRIS-based methods significantly improve
sample efficiency on a range of continuous control tasks in MuJoCo.
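
To make the Reuse Bias concrete, here is a self-contained toy experiment (an
illustration, not code from the paper; the bandit setup and all names are
hypothetical): several candidate policies are evaluated by IS on one shared
buffer, the best-looking candidate is selected, and its estimate is compared
to its true value.

import numpy as np

# Toy demonstration of the Reuse Bias (illustrative only; not the paper's code).
# Setting: a 3-armed bandit. A fixed behavior policy mu fills a "buffer";
# candidate target policies are evaluated on that SAME buffer with importance
# sampling, and we pick the candidate whose IS estimate is highest.

rng = np.random.default_rng(0)
true_means = np.array([0.0, 0.1, 0.2])          # true expected reward per arm
mu = np.array([1/3, 1/3, 1/3])                  # behavior policy (uniform)

# Candidate target policies: a few fixed distributions over the arms.
candidates = np.array([
    [0.8, 0.1, 0.1],
    [0.1, 0.8, 0.1],
    [0.1, 0.1, 0.8],
])
true_values = candidates @ true_means           # J(pi) for each candidate

n, trials = 50, 2000
gap = []
for _ in range(trials):
    actions = rng.choice(3, size=n, p=mu)       # buffer collected under mu
    rewards = true_means[actions] + rng.normal(0.0, 1.0, size=n)
    # IS estimate of each candidate's value from the shared buffer
    weights = candidates[:, actions] / mu[actions]       # shape (3, n)
    estimates = (weights * rewards).mean(axis=1)
    best = estimates.argmax()                   # "optimization" reuses the buffer
    gap.append(estimates[best] - true_values[best])

print(f"mean overestimation of the selected policy: {np.mean(gap):+.3f}")
# Each estimate is unbiased for a FIXED policy, yet selecting the argmax on the
# same data yields a positive gap on average: the Reuse Bias in miniature.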