Federated learning (FL) has recently gained much attention due to its
effectiveness in speeding up supervised learning tasks under communication and
privacy constraints. However, whether similar speedups can be established for
reinforcement learning remains far less understood theoretically. Toward this
end, we study a federated policy evaluation problem in which agents
communicate via a central aggregator to expedite the evaluation of a common
policy. To capture typical communication constraints in FL, we consider
finite-capacity uplink channels that drop packets according to a Bernoulli erasure
model. In this setting, we propose and analyze QFedTD, a quantized
federated temporal difference learning algorithm with linear function
approximation. Our main technical contribution is to provide a finite-sample
analysis of QFedTD that (i) highlights the effect of quantization and erasures
on the convergence rate; and (ii) establishes a linear speedup with respect to the
number of agents under Markovian sampling. Notably, while different
quantization mechanisms and packet drop models have been extensively studied in
the federated learning, distributed optimization, and networked control systems
literature, our work is the first to provide a non-asymptotic analysis of their
effects in multi-agent and federated reinforcement learning.
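
To make the mechanics concrete, the following Python sketch illustrates one
communication round in the spirit of QFedTD. It is an illustrative assumption,
not the paper's construction: the random feature vectors stand in for genuine
Markovian samples from an MDP, and the quantizer, step size, erasure
probability, and all names are placeholders chosen for the example.

import numpy as np

rng = np.random.default_rng(0)
N, d = 10, 4             # number of agents, feature dimension (assumed)
gamma, alpha = 0.9, 0.1  # discount factor, step size (assumed)
p_drop = 0.2             # Bernoulli uplink erasure probability (assumed)

def td_direction(theta, phi_s, phi_next, reward):
    # TD(0) semi-gradient direction under linear function approximation
    td_error = reward + gamma * phi_next @ theta - phi_s @ theta
    return td_error * phi_s

def quantize(g, levels=16):
    # Unbiased stochastic quantization onto a uniform grid: E[output] = g
    scale = np.max(np.abs(g)) + 1e-12
    u = g / scale * levels                       # map into [-levels, levels]
    low = np.floor(u)
    q = low + (rng.random(u.shape) < (u - low))  # round up w.p. frac(u)
    return q / levels * scale

theta = np.zeros(d)
for _ in range(100):                  # communication rounds
    received = []
    for _ in range(N):                # each agent observes one transition
        phi_s, phi_next = rng.standard_normal(d), rng.standard_normal(d)
        reward = rng.standard_normal()
        g = quantize(td_direction(theta, phi_s, phi_next, reward))
        if rng.random() > p_drop:     # uplink survives the Bernoulli erasure
            received.append(g)
    if received:                      # server averages whatever arrived
        theta = theta + alpha * np.mean(received, axis=0)

In this sketch, averaging only the updates that arrive keeps the aggregate
direction unbiased when erasures are independent of the data; the quantization
and drops instead inflate the variance, which is the kind of effect a
finite-sample analysis of this setting would quantify.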