VidMuse: A Simple Video-to-Music Generation Framework with Long-Short-Term Modeling
Paper: arXiv:2406.04321
The V2M dataset was introduced as part of the VidMuse project to advance research in video-to-music generation.
It comprises 360K video-music pairs spanning a variety of content types, including movie trailers, advertisements, and documentaries, giving researchers a rich resource for exploring the relationship between video content and music generation.
Download the dataset:
git clone https://huggingface.co/datasets/HKUSTAudio/VidMuse-V2M-Dataset
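Alternatively, the same repository can be fetched with the huggingface_hub Python client. The snippet below is a minimal sketch; the local target directory V2M is an assumption and can be changed freely:

```python
from huggingface_hub import snapshot_download

# Fetch the whole dataset repository from the Hugging Face Hub
# (requires `pip install huggingface_hub`).
local_path = snapshot_download(
    repo_id="HKUSTAudio/VidMuse-V2M-Dataset",
    repo_type="dataset",
    local_dir="V2M",  # assumed local target directory
)
print(f"Dataset files downloaded to {local_path}")
```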
Dataset structure:
V2M/
├── V2M.txt
├── V2M-20k.txt
└── V2M-bench.txt
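The three text files appear to hold the full video list, a 20K subset, and the benchmark split, respectively. As a minimal sketch, assuming each file lists one video identifier (or URL) per line, they can be loaded as follows (load_split is a hypothetical helper, not part of the dataset):

```python
from pathlib import Path

def load_split(path: str) -> list[str]:
    """Return the non-empty lines of a split file.

    Assumes one video identifier or URL per line; adjust the parsing
    if the files turn out to use a different format.
    """
    return [line.strip() for line in Path(path).read_text().splitlines() if line.strip()]

full_set  = load_split("V2M/V2M.txt")       # presumably the full video list
subset    = load_split("V2M/V2M-20k.txt")   # presumably the 20K subset
benchmark = load_split("V2M/V2M-bench.txt") # presumably the benchmark split
print(len(full_set), len(subset), len(benchmark))
```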
If you use the V2M dataset in your research, please consider citing:
@article{tian2024vidmuse,
  title={VidMuse: A Simple Video-to-Music Generation Framework with Long-Short-Term Modeling},
  author={Tian, Zeyue and Liu, Zhaoyang and Yuan, Ruibin and Pan, Jiahao and Liu, Qifeng and Tan, Xu and Chen, Qifeng and Xue, Wei and Guo, Yike},
  journal={arXiv preprint arXiv:2406.04321},
  year={2024}
}