Denoising diffusion probabilistic models (DDPMs) have shown promising
performance for speech synthesis. However, a large number of iterative steps
are required to achieve high sample quality, which restricts the inference
speed. Maintaining sample quality while increasing sampling speed has become a
challenging task. In this paper, we propose a "Co"nsistency "Mo"del-based
"Speech" synthesis method, CoMoSpeech, which performs speech synthesis in a
single diffusion sampling step while achieving high audio quality. The
consistency constraint is applied to distill a consistency model from a
well-designed diffusion-based teacher model, which ultimately yields superior
performance in the distilled CoMoSpeech. Our experiments show that, by
generating audio recordings in a single sampling step, CoMoSpeech achieves
an inference speed more than 150 times faster than real time on a single NVIDIA
A100 GPU, which is comparable to FastSpeech2, making diffusion-sampling-based
speech synthesis truly practical. Meanwhile, objective and subjective
evaluations on text-to-speech and singing voice synthesis show that the
proposed teacher models yield the best audio quality, and the
one-step-sampling-based CoMoSpeech achieves the best inference speed with
audio quality better than or comparable to other conventional multi-step
diffusion model baselines. Audio
samples are available at https://comospeech.github.io/.
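
For intuition, the sketch below shows the general idea of consistency distillation and one-step sampling in PyTorch. It is a minimal illustration under assumed placeholders, not the CoMoSpeech implementation: ConsistencyNet, teacher_denoise, and the sigma schedule are hypothetical stand-ins. The student is trained so that its output at a higher noise level matches a target network's output at the adjacent lower noise level reached by one teacher ODE step; inference then needs only a single forward pass from noise.

# Hypothetical sketch of consistency distillation and one-step sampling.
# NOT the CoMoSpeech implementation: ConsistencyNet, teacher_denoise, and
# the sigma schedule are illustrative placeholders.
import torch
import torch.nn as nn

class ConsistencyNet(nn.Module):
    # Toy student f_theta(x, sigma): maps a noisy feature vector plus its
    # noise level toward an estimate of the clean sample.
    def __init__(self, dim=80):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim + 1, 256), nn.SiLU(),
                                 nn.Linear(256, dim))

    def forward(self, x, sigma):
        s = sigma.expand(x.shape[0], 1)          # broadcast noise level per sample
        return self.net(torch.cat([x, s], dim=-1))

@torch.no_grad()
def teacher_denoise(x, sigma):
    # Stand-in for the pretrained diffusion teacher's denoiser D(x, sigma);
    # a real teacher would be a trained score/denoising network.
    return x / (1.0 + sigma)

def distillation_step(student, target, x0, sigmas, i, opt):
    # One consistency-distillation update between adjacent noise levels.
    sig_hi, sig_lo = sigmas[i + 1], sigmas[i]
    x_hi = x0 + sig_hi * torch.randn_like(x0)    # noisy sample at the higher level
    # One Euler step of the teacher's probability-flow ODE toward sig_lo.
    d = (x_hi - teacher_denoise(x_hi, sig_hi)) / sig_hi
    x_lo = x_hi + (sig_lo - sig_hi) * d
    # Consistency constraint: the student's output at sig_hi should match the
    # target's output at sig_lo, since both should map to the same clean sample.
    with torch.no_grad():
        ref = target(x_lo, sig_lo)
    loss = ((student(x_hi, sig_hi) - ref) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

if __name__ == "__main__":
    torch.manual_seed(0)
    student, target = ConsistencyNet(), ConsistencyNet()
    target.load_state_dict(student.state_dict())   # in practice, an EMA of the student
    opt = torch.optim.Adam(student.parameters(), lr=1e-4)
    sigmas = torch.linspace(0.02, 1.0, 10)          # discretized noise schedule
    x0 = torch.randn(16, 80)                        # stand-in "mel-spectrogram" batch
    for i in range(len(sigmas) - 1):
        distillation_step(student, target, x0, sigmas, i, opt)
    # One-step sampling: start from noise at sigma_max and run a single forward pass.
    x_T = sigmas[-1] * torch.randn(4, 80)
    print(student(x_T, sigmas[-1:]).shape)          # torch.Size([4, 80])

In this sketch, the single forward pass at inference replaces the iterative reverse-diffusion loop, which is the source of the speedup the abstract describes.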