Deep Reinforcement Learning (DRL) is vital in various AI applications. DRL
algorithms comprise diverse compute kernels that cannot all be optimized
simultaneously on a homogeneous architecture. However, even when heterogeneous
architectures are available, optimizing DRL performance remains challenging due
to the complexity of the hardware and programming models employed in modern
data centers. To address this, we introduce PEARL, a toolkit for composing
parallel DRL systems on heterogeneous platforms consisting of general-purpose
processors (CPUs) and accelerators (GPUs, FPGAs). Our innovations include:
(1) a general training protocol agnostic of the underlying hardware, enabling
portable implementations across various processors and accelerators;
(2) DRL-specific scheduling optimizations incorporated within the protocol,
enabling parallelized training and improving overall system performance;
(3) a high-level API for productive development with the toolkit; and
(4) automatic optimization of DRL task-to-device assignments through
performance estimation, supporting optimization metrics including throughput
and power efficiency.
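To make innovation (4) concrete, the sketch below illustrates the general idea of assignment-by-estimation: enumerate candidate task-to-device mappings, score each with a performance model, and keep the best. This is a minimal illustrative sketch; the task names, device list, throughput numbers, and exhaustive-search strategy are all assumptions for exposition, not PEARL's actual API or algorithm.

```python
# Illustrative sketch of task-to-device assignment via performance
# estimation. All names and numbers below are hypothetical assumptions,
# not PEARL's actual API.
from itertools import product

TASKS = ["actor_inference", "replay_sampling", "learner_update"]
DEVICES = ["cpu", "gpu", "fpga"]

# Hypothetical per-(task, device) throughput estimates (samples/sec),
# e.g. from profiling or an analytical model.
EST_THROUGHPUT = {
    ("actor_inference", "cpu"): 1.0e4,
    ("actor_inference", "gpu"): 5.0e4,
    ("actor_inference", "fpga"): 8.0e4,
    ("replay_sampling", "cpu"): 2.0e4,
    ("replay_sampling", "gpu"): 1.5e4,
    ("replay_sampling", "fpga"): 1.0e4,
    ("learner_update", "cpu"): 5.0e3,
    ("learner_update", "gpu"): 6.0e4,
    ("learner_update", "fpga"): 4.0e4,
}

def estimate(assignment):
    """Score a mapping: a pipeline is bound by its slowest stage."""
    return min(EST_THROUGHPUT[(t, d)] for t, d in assignment.items())

def best_assignment():
    """Exhaustively search all task-to-device mappings."""
    best, best_score = None, float("-inf")
    for devices in product(DEVICES, repeat=len(TASKS)):
        assignment = dict(zip(TASKS, devices))
        score = estimate(assignment)
        if score > best_score:
            best, best_score = assignment, score
    return best, best_score

if __name__ == "__main__":
    mapping, score = best_assignment()
    print(mapping, f"-> estimated throughput: {score:.0f} samples/s")
```

The same search loop could optimize a different metric (e.g. power efficiency) by swapping the scoring function, which is why the abstract lists multiple optimization metrics.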
We showcase our toolkit through experiments with two widely used DRL
algorithms, DQN and DDPG, on two diverse heterogeneous platforms. The
generated implementations outperform state-of-the-art libraries for CPU-GPU
platforms, achieving up to 2.1$\times$ higher throughput and up to
3.4$\times$ better power efficiency.