Spoken language identification refers to the task of automatically predicting
the spoken language in a given utterance. Conventionally, it is modeled as a
speech-based language identification task. Prior techniques have been
constrained to a single modality; however, in the case of video data, there is
a wealth of other metadata that may be beneficial for this task. In this work, we
propose MuSeLI, a Multimodal Spoken Language Identification method, which
delves into the use of various metadata sources to enhance language
identification. Our study reveals that metadata such as video title,
description and geographic location provide substantial information to identify
the spoken language of the multimedia recording. We conduct experiments using
two diverse public datasets of YouTube videos, and obtain state-of-the-art
results on the language identification task. We additionally conduct an
ablation study that characterizes the distinct contribution of each modality to
language identification.