Most existing audio-text retrieval (ATR) methods focus on constructing contrastive pairs between whole audio clips and complete caption sentences, while ignoring fine-grained cross-modal relationships, e.g., between short segments and phrases or between frames and words. In this paper, we introduce a hierarchical cross-modal interaction (HCI) method for ATR that simultaneously explores clip-sentence, segment-phrase, and frame-word relationships, achieving a comprehensive multi-modal semantic comparison.
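To make the three-level comparison concrete, the following is a minimal sketch of what such a hierarchical contrastive objective could look like in PyTorch. The max-mean pooling used at the segment-phrase and frame-word levels, the loss weights, and all function names are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def nce_from_logits(logits):
    """Symmetric InfoNCE over a (B, B) similarity matrix whose diagonal
    holds the matched audio-text pairs."""
    labels = torch.arange(logits.size(0), device=logits.device)
    return 0.5 * (F.cross_entropy(logits, labels) +
                  F.cross_entropy(logits.t(), labels))

def global_logits(a, t, tau=0.07):
    """Clip-sentence level: one embedding per clip (B, D) and sentence (B, D)."""
    a, t = F.normalize(a, dim=-1), F.normalize(t, dim=-1)
    return a @ t.t() / tau

def local_logits(x, y, tau=0.07):
    """Fine-grained level: unit sequences x (B, N, D) and y (B, M, D).
    Each unit of x_i is matched to its best unit in y_j, then averaged
    (max-mean pooling, one common aggregation choice)."""
    x, y = F.normalize(x, dim=-1), F.normalize(y, dim=-1)
    sim = torch.einsum('ind,jmd->ijnm', x, y)      # (B, B, N, M) unit sims
    return sim.amax(dim=-1).mean(dim=-1) / tau     # (B, B) pooled logits

def hci_loss(clip_e, sent_e, seg_e, phr_e, frm_e, wrd_e,
             weights=(1.0, 0.5, 0.5), tau=0.07):
    """Weighted sum of contrastive terms at the three granularities."""
    l_cs = nce_from_logits(global_logits(clip_e, sent_e, tau))  # clip-sentence
    l_sp = nce_from_logits(local_logits(seg_e, phr_e, tau))     # segment-phrase
    l_fw = nce_from_logits(local_logits(frm_e, wrd_e, tau))     # frame-word
    w_cs, w_sp, w_fw = weights
    return w_cs * l_cs + w_sp * l_sp + w_fw * l_fw
```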
In addition, we present a novel ATR framework that leverages auxiliary captions (AC) generated by a pretrained captioner. Feature interaction between the audio and the generated captions yields enhanced audio representations, which complement the original ATR matching branch. The audio clips and their generated captions can also form new audio-text pairs that serve as data augmentation during training.
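As a rough illustration of the AC branch, the sketch below fuses audio features with token embeddings of a machine-generated caption via cross-attention. The module name, dimensions, and the cross-attention fusion itself are assumptions for illustration, not the paper's actual design.

```python
import torch
import torch.nn as nn

class AuxCaptionBranch(nn.Module):
    """Enhances audio features by cross-attending to the token embeddings
    of a machine-generated caption (residual fusion)."""
    def __init__(self, dim=512, heads=8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, audio_feats, cap_feats):
        # audio_feats: (B, N, D) frame-level audio embeddings; cap_feats:
        # (B, M, D) token embeddings of the caption from a pretrained captioner.
        fused, _ = self.attn(query=audio_feats, key=cap_feats, value=cap_feats)
        return self.norm(audio_feats + fused)

# Illustrative usage: the enhanced features feed the retrieval head, while
# (audio, generated caption) can also be added to the training set as an
# extra positive pair alongside the human-annotated one.
branch = AuxCaptionBranch(dim=512)
audio = torch.randn(4, 32, 512)   # 4 clips, 32 frames each
caps = torch.randn(4, 20, 512)    # 4 generated captions, 20 tokens each
enhanced = branch(audio, caps)    # (4, 32, 512) enhanced audio features
```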
Experiments show that our HCI significantly improves ATR performance. Moreover, our AC framework delivers stable performance gains on multiple datasets.