Pre-trained language models have achieved impressive results in various music
understanding and generation tasks. However, existing pre-training methods for
symbolic melody generation struggle to capture multi-scale, multi-dimensional
structural information in note sequences, due to the domain knowledge
discrepancy between text and music. Moreover, the lack of available large-scale
symbolic melody datasets limits the benefits of pre-training. In this paper, we
propose MelodyGLM, a multi-task pre-training framework for generating melodies
with long-term structure. We design melodic n-gram and long-span sampling
strategies to create local and global blank infilling tasks for modeling the
local and global structures in melodies. Specifically, we incorporate pitch
n-grams, rhythm n-grams, and their combined n-grams into the melodic n-gram
blank infilling tasks for modeling the multi-dimensional structures in
melodies. To this end, we have constructed a large-scale symbolic melody
dataset, MelodyNet, containing more than 0.4 million melody pieces, which we
use for large-scale pre-training and domain-specific n-gram lexicon
construction. Both subjective and objective evaluations demonstrate that
MelodyGLM surpasses standard and previous pre-training methods. In
particular, subjective evaluations show that, on the melody continuation task,
MelodyGLM achieves average improvements of 0.82, 0.87, 0.78, and 0.94 in
consistency, rhythmicity, structure, and overall quality, respectively.
Notably, MelodyGLM nearly matches the quality of human-composed melodies on the
melody inpainting task.
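
To make the melodic n-gram and long-span blank infilling idea concrete, here is a
minimal Python sketch, not the authors' implementation: it assumes a melody is a
list of (pitch, duration) tuples, and the lexicon construction, masking budget,
and MASK-token handling are simplified illustrative assumptions.

```python
# Minimal sketch of melodic n-gram and long-span masking for blank infilling.
# Assumes melodies are lists of (pitch, duration) tuples; all names here
# (build_ngram_lexicon, mask_melodic_ngrams, MASK, ...) are hypothetical.
import random
from collections import Counter

MASK = "[MASK]"

def build_ngram_lexicon(corpus, n=3, top_k=1000):
    """Count pitch n-grams over a corpus and keep the most frequent ones."""
    counts = Counter()
    for melody in corpus:
        pitches = [note[0] for note in melody]
        for i in range(len(pitches) - n + 1):
            counts[tuple(pitches[i:i + n])] += 1
    return {gram for gram, _ in counts.most_common(top_k)}

def mask_melodic_ngrams(melody, lexicon, n=3, mask_ratio=0.15):
    """Local task: replace spans matching lexicon n-grams with one mask token."""
    budget = int(len(melody) * mask_ratio)
    masked, i = [], 0
    while i < len(melody):
        gram = tuple(note[0] for note in melody[i:i + n])
        if budget > 0 and len(gram) == n and gram in lexicon:
            masked.append(MASK)  # a single mask stands in for the whole span
            budget -= n
            i += n
        else:
            masked.append(melody[i])
            i += 1
    return masked

def mask_long_span(melody, min_frac=0.3, max_frac=0.5):
    """Global task: blank out one long contiguous span of the melody."""
    span = random.randint(int(len(melody) * min_frac),
                          int(len(melody) * max_frac))
    start = random.randint(0, len(melody) - span)
    return melody[:start] + [MASK] + melody[start + span:]
```

For brevity the sketch builds a lexicon over pitch n-grams only; per the
abstract, the actual framework uses pitch n-grams, rhythm n-grams, and their
combined n-grams, each of which would get an analogous lexicon and masking pass.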