Video-to-speech synthesis involves reconstructing the speech signal of a speaker from a silent video. The implicit assumption behind this task is that the audio signal is either missing entirely or so corrupted by noise that it is unusable. Previous works either use video inputs alone, or employ both video and audio inputs during training but discard the audio pathway at inference. In this work we investigate the effect of using both video and audio inputs for video-to-speech synthesis during training and inference alike. In particular, we use pre-trained video-to-speech models to synthesize the missing speech signals, and then train an audio-visual-to-speech synthesis model that takes both the silent video and the synthesized speech as inputs and predicts the final reconstructed speech. Our
experiments demonstrate that this approach is successful with both raw
waveforms and mel spectrograms as target outputs.
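As a rough illustration of the two-stage pipeline described above, the sketch below assumes frame-aligned video and pseudo-speech features and uses placeholder architectures; the class names, dimensions, and GRU-based fusion are assumptions for exposition, not the authors' actual models.

```python
import torch
import torch.nn as nn

# Illustrative two-stage sketch: all names, dimensions, and architectures
# here are placeholders, not the models used in the paper.

class DummyVideoToSpeech(nn.Module):
    """Stands in for a pre-trained video-to-speech model (stage 1)."""
    def __init__(self, video_dim=512, n_mels=80):
        super().__init__()
        self.proj = nn.Linear(video_dim, n_mels)

    def forward(self, video_feats):            # (B, T, video_dim)
        return self.proj(video_feats)          # pseudo mel frames (B, T, n_mels)

class AudioVisualToSpeech(nn.Module):
    """Stage 2: fuses silent-video features with the synthesized speech."""
    def __init__(self, video_dim=512, n_mels=80, hidden=256):
        super().__init__()
        self.video_enc = nn.GRU(video_dim, hidden, batch_first=True)
        self.audio_enc = nn.GRU(n_mels, hidden, batch_first=True)
        self.decoder = nn.Linear(2 * hidden, n_mels)

    def forward(self, video_feats, pseudo_speech):
        v, _ = self.video_enc(video_feats)     # (B, T, hidden)
        a, _ = self.audio_enc(pseudo_speech)   # (B, T, hidden)
        fused = torch.cat([v, a], dim=-1)      # frame-level fusion; assumes aligned T
        return self.decoder(fused)             # final reconstructed mel frames

video = torch.randn(2, 100, 512)               # batch of video feature sequences
v2s = DummyVideoToSpeech().eval()              # pre-trained, kept frozen in stage 2
with torch.no_grad():
    pseudo = v2s(video)                        # stage 1: synthesize the missing speech
model = AudioVisualToSpeech()
recon = model(video, pseudo)                   # stage 2: audio-visual reconstruction
```

The same structure would apply with raw waveforms as targets, with the mel decoder swapped for a waveform head and the time axes resampled accordingly.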