The Transformer represents a recent success in speech enhancement. However,
its core component, self-attention, suffers from quadratic complexity, which
is computationally prohibitive for long speech recordings.
Moreover, it allows each time frame to attend to all time frames, neglecting
the strong local correlations of speech signals. This study presents a simple
yet effective sparse self-attention mechanism for speech enhancement, called
ripple attention, which simultaneously performs fine- and coarse-grained
modeling of local and global dependencies, respectively. Specifically, we
employ local band attention so that each frame attends to its nearest
neighboring frames within a window at fine granularity, while dilated
attention outside the window models global dependencies at coarse
granularity. We evaluate
the efficacy of our ripple attention for speech enhancement under two commonly
used training objectives. Extensive experimental results consistently confirm
the superior performance of the ripple attention design over standard full
self-attention, blockwise attention, and dual-path attention (SepFormer) in
terms of speech quality and intelligibility.
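To make the sparsity pattern concrete, the following is a minimal sketch (not taken from the paper) of how a ripple-style attention mask could be constructed: each query frame attends densely to all frames within a local window, and only to every few frames, at a fixed stride, outside that window. The function name ripple_attention_mask, the parameters window and dilation, and their default values are illustrative assumptions.

    import numpy as np

    def ripple_attention_mask(num_frames, window=16, dilation=4):
        """Boolean mask combining a local band (fine-grained) with dilated
        attention outside the band (coarse-grained).

        `window` and `dilation` and their defaults are illustrative
        assumptions, not values taken from the paper.
        """
        idx = np.arange(num_frames)
        mask = np.zeros((num_frames, num_frames), dtype=bool)
        for t in range(num_frames):
            dist = np.abs(idx - t)
            # Fine granularity: every frame within the local window is attended.
            band = dist <= window
            # Coarse granularity: outside the window, only every `dilation`-th
            # frame is attended, giving sparse global coverage.
            dilated = (dist > window) & (dist % dilation == 0)
            mask[t] = band | dilated
        return mask

    if __name__ == "__main__":
        m = ripple_attention_mask(64)
        # A mask like this can gate the attention scores before the softmax,
        # e.g. scores[~m] = -np.inf in standard scaled dot-product attention.
        print(f"{m.sum()} allowed query-key pairs out of {m.size}")

Masking the score matrix in this way keeps the number of attended positions per frame roughly window + num_frames / dilation rather than num_frames, which is how such a sparse pattern avoids the quadratic cost of full self-attention.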