Despite their promising results, state-of-the-art interactive
reinforcement learning schemes rely on passively receiving supervision signals
from advisor experts, in the form of either continuous monitoring or
pre-defined rules, which inevitably makes the learning process cumbersome and
expensive. In this paper, we introduce a novel initiative advisor-in-the-loop
actor-critic framework, termed Ask-AC, that replaces the unilateral
advisor-guidance mechanism with a bidirectional learner-initiative one,
thereby enabling a customized and efficacious message exchange between learner
and advisor. At the heart of Ask-AC are two complementary components, namely
the action requester and the adaptive state selector, which can be readily
incorporated into various discrete actor-critic architectures. The former
allows the agent to proactively seek advisor intervention in the presence of
uncertain states, while the latter identifies unstable states potentially
missed by the former, especially when the environment changes, and learns to
promote the ask action on such states. Experimental results on both stationary
and non-stationary environments, and across different actor-critic backbones,
demonstrate that the proposed framework significantly improves the learning
efficiency of the agent and achieves performance on par with that obtained by
continuous advisor monitoring.
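
To make the mechanism concrete, below is a minimal PyTorch sketch of a learner-initiated "ask" mechanism for a discrete action space. All names here (AskActor, advisor_policy) and the use of the absolute TD error as an instability signal are illustrative assumptions for exposition, not the authors' exact formulation; in the paper, both the action requester and the adaptive state selector are learned components.

```python
# Sketch: the actor's discrete action space carries one extra "ask" action;
# choosing it defers the decision to the advisor on the current state.
import torch
import torch.nn as nn

class AskActor(nn.Module):
    """Actor whose discrete action space carries one extra 'ask' action."""

    def __init__(self, obs_dim: int, n_actions: int, hidden: int = 64):
        super().__init__()
        self.n_actions = n_actions
        self.net = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.Tanh(),
            nn.Linear(hidden, n_actions + 1),  # last logit is the ask action
        )

    def forward(self, obs: torch.Tensor, ask_bonus: float = 0.0):
        logits = self.net(obs)
        # Stand-in for the adaptive state selector: add a bonus to the ask
        # logit on states flagged as unstable, promoting the ask action there.
        bonus = torch.zeros_like(logits)
        bonus[..., -1] = ask_bonus
        return torch.distributions.Categorical(logits=logits + bonus)

def advisor_policy(obs: torch.Tensor) -> int:
    """Stand-in advisor; in practice a human or scripted expert."""
    return 0  # placeholder expert action

def select_action(actor: AskActor, obs: torch.Tensor,
                  td_error: float = 0.0) -> int:
    """Sample an action; if the ask action fires, defer to the advisor.
    Using |TD error| as the instability signal is an assumption here."""
    dist = actor(obs, ask_bonus=abs(td_error))
    action = int(dist.sample())
    if action == actor.n_actions:      # the agent chose to ask
        action = advisor_policy(obs)   # advisor intervenes on this state
    return action

# Usage: 4-dimensional observations, two environment actions plus "ask".
actor = AskActor(obs_dim=4, n_actions=2)
action = select_action(actor, torch.randn(4), td_error=0.5)
```

Appending the ask action to the policy's own action space is one natural way to realize a learner-initiative mechanism, since the asking behavior can then be trained with the same actor-critic objective as the environment actions.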