
MuMoSPEE v2: A MUltiMOdal SPEEch Corpus, Version 2

Dataset Description

MuMoSPEE v2 is a large-scale multimodal speech dataset containing audio and video recordings with transcripts, aggregated from:

  1. European Council events — official speeches, interviews, and doorstep appearances.
  2. Public YouTube meetings — webinars, conferences, panel discussions, and institutional videos.

This release is built upon the first version of the MuMoSPEE dataset, significantly expanding both the scale and diversity of content, and unifying audio and video data into a consistent metadata format suitable for large-scale speech and multimodal research.

The original MuMoSPEE v1 dataset is available at:
👉 https://huggingface.co/datasets/meetween/mumospee

The dataset is designed for research in speech recognition, multimodal modeling, meeting analysis, and AI-driven content understanding.

All media are linked via URLs, with transcripts and metadata included. Audio and video recordings are stored as separate entries.

Dataset Structure

| Column | Type | Description |
| --- | --- | --- |
| `url` | string | URL to the audio or video file |
| `type` | string | `audio` or `video` |
| `duration` | float | Duration in seconds |
| `language` | string | Primary spoken language |
| `transcript` | string | Full transcript text |
| `tag` | string | Source tag (`EU_Council` or `YouTube_Meeting`) |
| `split` | string | Dataset split (`train`, `validation`, `test`) |
| `license` | string | License information for the content |
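To make the schema concrete, the sketch below shows one record as a Python dict. The URL, transcript text, and numeric values are placeholders for illustration, not actual dataset contents.

```python
# Illustrative record matching the schema above; the URL, transcript,
# and values are placeholders, not actual dataset contents.
example_row = {
    "url": "https://example.org/recording.mp4",
    "type": "audio",
    "duration": 290.5,
    "language": "English",
    "transcript": "Good afternoon, and welcome to today's press briefing.",
    "tag": "EU_Council",
    "split": "train",
    "license": "See source license",
}
```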

Summary Statistics

Audio (EU Council only)

  • Number of audio entries: 45,522
  • Total duration (hours): 3,673.86
  • Average duration (seconds): 290.54

Video

  • Number of video entries: 167,929
  • Total duration (hours): 83,801.51
  • Average duration (seconds): 1,796.51

Breakdown by tag:

| Tag | Count |
| --- | --- |
| EU_Council | 45,522 |
| YouTube_Meeting | 122,407 |

Language Distribution

| Language | Audio Count | Audio Hours | Video Count | Video Hours |
| --- | --- | --- | --- | --- |
| English | 23,635 | 1,907.4 | 119,531 | 76,000 |
| Spanish | 2,427 | 196.7 | 13,810 | 8,785 |
| French | 2,953 | 238.6 | 11,176 | 7,110 |
| German | 3,582 | 289.2 | 5,049 | 3,210 |
| Portuguese | 1,230 | 99.5 | 4,260 | 2,710 |
| Italian | 1,374 | 111.2 | 2,845 | 1,810 |
| Dutch | 1,064 | 86.2 | 1,846 | 1,174 |
| Swedish | 1,092 | 88.5 | 1,174 | 746 |
| Polish | 970 | 78.7 | 999 | 635 |
| Czech | 873 | 70.9 | 886 | 563 |
| Croatian | 826 | 67.1 | 828 | 526 |
| Danish | 787 | 63.9 | 792 | 503 |
| Slovak | 684 | 55.5 | 685 | 435 |
| Finnish | 647 | 52.6 | 652 | 414 |
| Greek | 604 | 49.1 | 617 | 392 |
| Slovenian | 519 | 42.2 | 519 | 330 |
| Bulgarian | 401 | 32.6 | 403 | 256 |
| Hungarian | 384 | 31.2 | 386 | 245 |
| Luxembourgish | 297 | 24.1 | 297 | 188 |
| Romanian | 295 | 23.9 | 296 | 187 |
| Maltese | 156 | 12.6 | 156 | 99 |
| Multilingual | 94 | 7.6 | 94 | 60 |
| Lithuanian | 92 | 7.4 | 92 | 58 |
| Arabic | 79 | 6.5 | 79 | 50 |
| Latvian | 73 | 6.0 | 73 | 46 |
| Ukrainian | 56 | 4.7 | 56 | 35 |
| Russian | 38 | 3.2 | 38 | 24 |
| Estonian | 37 | 3.0 | 37 | 23 |
| Serbian | 36 | 2.9 | 36 | 22 |
| Georgian | 33 | 2.6 | 33 | 20 |
| Norwegian | 29 | 2.3 | 29 | 17 |
| Albanian | 25 | 2.0 | 25 | 15 |
| Macedonian | 23 | 1.9 | 23 | 14 |
| Bosnian | 17 | 1.4 | 17 | 10 |
| Montenegrin | 16 | 1.3 | 16 | 9 |
| Turkish | 14 | 1.1 | 14 | 8 |
| Belarusian | 6 | 0.5 | 6 | 3 |
| Moldavian | 5 | 0.4 | 5 | 3 |
| Japanese | 4 | 0.3 | 4 | 2 |
| Persian | 4 | 0.3 | 4 | 2 |
| Catalan | 4 | 0.3 | 4 | 2 |
| Chinese | 3 | 0.2 | 3 | 1 |
| Korean | 3 | 0.2 | 3 | 1 |
| Vietnamese | 3 | 0.2 | 3 | 1 |
| Armenian | 3 | 0.2 | 3 | 1 |
| Indonesian | 3 | 0.2 | 3 | 1 |
| Swahili | 1 | 0.1 | 1 | 0.1 |
| Hindi | 1 | 0.1 | 1 | 0.1 |
| Tajik | 1 | 0.1 | 1 | 0.1 |
| Kazakh | 1 | 0.1 | 1 | 0.1 |
| Khmer | 1 | 0.1 | 1 | 0.1 |

  • Note:
    For recordings containing multiple spoken languages, the total duration is split equally among the detected languages, since precise language-level timestamps are not available. This avoids inflating the per-language totals beyond the actual recording time.
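The equal-split convention described above can be sketched as follows. The function name and signature are illustrative assumptions, not part of the dataset tooling:

```python
def split_duration_equally(total_seconds: float, languages: list[str]) -> dict[str, float]:
    """Assign each detected language an equal share of a recording's duration.

    Illustrates the card's convention for multilingual recordings;
    the helper name and signature are assumptions, not dataset tooling.
    """
    if not languages:
        raise ValueError("at least one language is required")
    share = total_seconds / len(languages)
    return {lang: share for lang in languages}

# A 600-second recording detected as English + French contributes
# 300 seconds to each language's total, never 600 to both.
print(split_duration_equally(600.0, ["English", "French"]))
```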

Notes on Transcripts

  • EU Council (audio & video):
    Transcripts are generated with the Whisper ASR model. A single recording may contain one or more spoken languages, and the transcript covers the full speech.

  • YouTube Meeting (video):
    Transcripts are extracted from YouTube subtitle tracks. They may be auto-generated, and some videos lack transcripts entirely.


Usage

```python
from datasets import load_dataset

dataset = load_dataset("meetween/mumospee_v2")

# Access the first audio sample
audio_sample = dataset["train"].filter(lambda x: x["type"] == "audio")[0]
print(audio_sample["transcript"])
print(audio_sample["url"])

# Access the first video sample
video_sample = dataset["train"].filter(lambda x: x["type"] == "video")[0]
print(video_sample["transcript"])
print(video_sample["url"])
```
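For more selective access, rows can be filtered on any combination of the schema columns. A minimal sketch, assuming the column values documented above (`type`, `language`, `tag`); the predicate name is illustrative:

```python
def is_english_eu_audio(row: dict) -> bool:
    # Predicate over the schema columns documented above.
    return (
        row["type"] == "audio"
        and row["language"] == "English"
        and row["tag"] == "EU_Council"
    )

# Usage (requires network access), with streaming=True so the full
# metadata table need not be downloaded up front:
#   from datasets import load_dataset
#   stream = load_dataset("meetween/mumospee_v2", split="train", streaming=True)
#   first_match = next(filter(is_english_eu_audio, stream))
#   print(first_match["url"], first_match["duration"])
```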

Licensing Information

Users must comply with the source license when accessing media via URLs.
