RealPDEBench
RealPDEBench is a benchmark of paired real-world measurements and matched numerical simulations for complex physical systems. It is designed for spatiotemporal forecasting and sim-to-real transfer evaluation on real data.
This Hub repository (AI4Science-WestlakeU/RealPDEBench) is the release repo for RealPDEBench.
- Website & documentation: realpdebench.github.io
- Benchmark codebase: AI4Science-WestlakeU/RealPDEBench
Figure 1. RealPDEBench provides paired real-world measurements and matched numerical simulations for sim-to-real evaluation.
What makes RealPDEBench different?
- Paired real + simulated data: each scenario provides experimental measurements and corresponding CFD/LES simulations.
- Real-world evaluation: models are evaluated on real trajectories to quantify the sim-to-real gap.
- Multi-modal mismatch: simulations include additional unmeasured modalities (e.g., pressure, species fields), enabling modality-masking and transfer strategies.
Data sources (high level)
- Fluid systems (`cylinder`, `controlled_cylinder`, `fsi`, `foil`):
  - Real: Particle Image Velocimetry (PIV) in a circulating water tunnel
  - Sim: CFD (2D finite-volume + immersed-boundary; 3D GPU solvers, depending on scenario)
- Combustion (`combustion`):
  - Real: high-speed OH* chemiluminescence imaging
  - Sim: Large Eddy Simulation (LES) with detailed chemistry (NH3/CH4/air co-firing)
Scenarios (5)
| Scenario | Real data (measured) | Numerical data (simulated) | Frames / trajectory | Spatial grid (after sub-sampling) | HDF5 trajectories (real / numerical) |
|---|---|---|---|---|---|
| cylinder | velocity (u,v) | (u,v,p) | 3990 | 64×128 | 92 / 92 |
| controlled_cylinder | (u,v) | (u,v,p) (+ control params in filenames) | 3990 | 64×128 | 96 / 96 |
| fsi | (u,v) | (u,v,p) | 2173 | 64×64 | 51 / 51 |
| foil | (u,v) | (u,v,p) | 3990 | 64×128 | 98 / 99 |
| combustion | OH* chemiluminescence intensity (1 channel) | intensity surrogate (1) + 15 simulated fields | 2001 | 128×128 | 30 / 30 |
Total trajectories (HDF5 files): ~735 (≈367 real + ≈368 numerical).
Physical parameter ranges (real experiments)
| Scenario | Key parameters (real) |
|---|---|
| cylinder | Reynolds number Re: 1800–12000 |
| controlled_cylinder | Re: 1781–9843; control frequency f: 0.5–1.4 Hz |
| fsi | Re: 3272–9068; mass ratio m*: 18.2–20.8 |
| foil | angle of attack α: 0°–20°; Re: 2968–17031 |
| combustion | CH4 ratio: 20–100%; equivalence ratio φ: 0.75–1.3 |
Data format on the Hub
Each split is stored as a Hugging Face `datasets.Dataset` saved with `Dataset.save_to_disk()`. Concretely, each split is a directory containing:
- `data-*.arrow` (sharded Arrow files; float32 payloads stored as bytes)
- `dataset_info.json`
- `state.json`
test_mode metadata (JSON)
RealPDEBench supports test_mode evaluation splits (in_dist, out_dist, seen, unseen). The group definitions are shipped as JSON dicts per scenario:
- `in_dist_test_params_{type}.json`
- `out_dist_test_params_{type}.json`
- `remain_params_{type}.json`

where `{type}` is `real` or `numerical`.
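The exact key/value layout inside these files is scenario-specific; a minimal sketch for inspecting one of them, assuming a local copy of a scenario folder and that each file deserializes to a plain dict:

```python
import json
import os

scenario_dir = "fsi"  # hypothetical local path to one scenario folder

# Group definition for the in-distribution test split of the real data.
with open(os.path.join(scenario_dir, "in_dist_test_params_real.json")) as f:
    in_dist_real = json.load(f)

# Assumption: the file deserializes to a dict whose entries identify the
# trajectories/parameters belonging to this group.
print(type(in_dist_real), list(in_dist_real)[:5])
```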
Temporal windowing (what an “example” means)
RealPDEBench is stored as sliding windows cut from longer trajectories. Each row corresponds to (sim_id, time_id):
- `sim_id`: which trajectory (HDF5 file)
- `time_id`: start index of the window
Typical window lengths (T):
- 40 frames for `cylinder`, `fsi`, `foil`, `combustion` (often used as 20-step input + 20-step output; see the sketch below)
- 20 frames for `controlled_cylinder` (often 10 + 10)
- 20 frames for `combustion/surrogate_train` (surrogate model training data)
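For example, a minimal sketch of the 20 + 20 split on an already-decoded window (decoding the byte columns is shown under "Schema (columns)" below; the array here is a stand-in):

```python
import numpy as np

# Stand-in for a decoded `u` window from a T=40 fluid scenario (e.g. a 64x128 grid).
window = np.zeros((40, 64, 128), dtype=np.float32)

# Common protocol for T=40 scenarios: first 20 frames as input, last 20 as target.
context, target = window[:20], window[20:]
print(context.shape, target.shape)  # (20, 64, 128) (20, 64, 128)
```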
Intended layout for the full release (mirrors the on-disk structure used by RealPDEBench loaders):
{repo_root}/
  cylinder/
    in_dist_test_params_real.json
    out_dist_test_params_real.json
    remain_params_real.json
    in_dist_test_params_numerical.json
    out_dist_test_params_numerical.json
    remain_params_numerical.json
    hf_dataset/
      real_train/  real_val/  real_test/
      numerical_train/  numerical_val/  numerical_test/
  fsi/
    in_dist_test_params_real.json
    out_dist_test_params_real.json
    remain_params_real.json
    in_dist_test_params_numerical.json
    out_dist_test_params_numerical.json
    remain_params_numerical.json
    hf_dataset/
      ...
  combustion/
    in_dist_test_params_real.json
    out_dist_test_params_real.json
    remain_params_real.json
    in_dist_test_params_numerical.json
    out_dist_test_params_numerical.json
    remain_params_numerical.json
    hf_dataset/
      real_train/  real_val/  real_test/
      numerical_train/             # (val/test intentionally empty)
      surrogate_train/             # combustion-only (surrogate model training)
      surrogate_train_sim_ids.txt
      surrogate_train_meta.json
  ...
How to download only what you need
For large downloads, use `snapshot_download(..., allow_patterns=...)` to avoid pulling the full repository.
import os

from huggingface_hub import snapshot_download
from datasets import load_from_disk

repo_id = "AI4Science-WestlakeU/RealPDEBench"

# Optional: disable Xet-backed transfers if they cause trouble in your environment.
os.environ["HF_HUB_DISABLE_XET"] = "1"

local_dir = snapshot_download(
    repo_id=repo_id,
    repo_type="dataset",
    allow_patterns=["fsi/**"],         # example: download only the FSI folder
    endpoint="https://hf-mirror.com",  # optional mirror endpoint; remove to use the default Hub endpoint
)

# Each split was saved with Dataset.save_to_disk(), so load it with load_from_disk().
ds = load_from_disk(os.path.join(local_dir, "fsi", "hf_dataset", "numerical_val"))
row = ds[0]
print(row.keys())
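Under the intended layout, the per-scenario JSON metadata is small enough to fetch on its own. A minimal sketch, assuming the group-definition files sit at the scenario root as shown above:

```python
from huggingface_hub import snapshot_download

# Fetch only the fsi group-definition JSONs (no Arrow shards).
meta_dir = snapshot_download(
    repo_id="AI4Science-WestlakeU/RealPDEBench",
    repo_type="dataset",
    allow_patterns=["fsi/*.json"],
)
print(meta_dir)
```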
Schema (columns)
Fluid datasets (cylinder, controlled_cylinder, fsi, foil)
- Keys:
  - `sim_id` (string): trajectory file name (e.g., `10031.h5`)
  - `time_id` (int): start frame index of the window
  - `u`, `v` (bytes): float32 arrays of shape `(T, H, W)`
  - `p` (bytes): float32 array of shape `(T, H, W)` (numerical splits only)
  - `shape_t`, `shape_h`, `shape_w` (int): shapes for decoding (see the sketch below)
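The float32 payloads are stored as raw bytes, so they need to be decoded with the shape columns. A minimal sketch with NumPy, reusing `ds` from the download example above (an `fsi` numerical split) and assuming native-endian float32 as described by the schema:

```python
import numpy as np

row = ds[0]
shape = (row["shape_t"], row["shape_h"], row["shape_w"])

# Decode the raw float32 bytes back into (T, H, W) arrays.
u = np.frombuffer(row["u"], dtype=np.float32).reshape(shape)
v = np.frombuffer(row["v"], dtype=np.float32).reshape(shape)
p = np.frombuffer(row["p"], dtype=np.float32).reshape(shape)  # numerical splits only

print(row["sim_id"], row["time_id"], u.shape, v.shape, p.shape)
```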
Combustion dataset (combustion)
- Keys:
  - `sim_id` (string): e.g., `40NH3_1.1.h5`
  - `time_id` (int): start frame index of the window
  - `observed` (bytes): float32 array of shape `(T, H, W)` (real: measured intensity; numerical: surrogate intensity)
  - `numerical` (bytes): float32 array of shape `(T, H, W, 15)` (numerical splits only)
  - `numerical_channels` (int): number of numerical channels (15)
  - `shape_t`, `shape_h`, `shape_w` (int): shapes for decoding (see the sketch below)
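Decoding follows the same pattern, with the channel dimension taken from `numerical_channels`; a sketch under the same assumptions (`combustion_ds` is a hypothetical combustion numerical split loaded with `load_from_disk`):

```python
import numpy as np

row = combustion_ds[0]  # hypothetical: one row of a combustion numerical split
t, h, w = row["shape_t"], row["shape_h"], row["shape_w"]
c = row["numerical_channels"]  # 15 simulated fields

observed = np.frombuffer(row["observed"], dtype=np.float32).reshape(t, h, w)
numerical = np.frombuffer(row["numerical"], dtype=np.float32).reshape(t, h, w, c)
print(observed.shape, numerical.shape)
```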
Combustion surrogate-train (combustion/surrogate_train)
Used to train a surrogate model mapping simulated modalities → real modality (combustion only).
- Keys:
  - `real` (bytes): float32 array of shape `(T, H, W)` (target intensity)
  - `numerical` (bytes): float32 array of shape `(T, H, W, C)` (input fields)
  - plus shape columns (`*_shape_*`) and `numerical_channels`
Current converted data size (local conversion; full release target)
These numbers refer to our current HF Arrow conversion outputs (not all uploaded to this test repo yet):
- Total: ~954GB across all scenarios
- Largest shard file: ~0.47GB (well below the Hub’s recommended <50GB per file)
- Total file count: ~2.1k files (well below the Hub’s recommended <100k files per repo)
Per-scenario totals (HF Arrow):
| Scenario | Total size |
|---|---|
| combustion | 622GB |
| cylinder | 116GB |
| fsi | 34GB |
| controlled_cylinder | 61GB |
| foil | 124GB |
Recommended benchmark protocols
RealPDEBench supports three standard training paradigms (all evaluated on real-world data):
- Simulated training (numerical only)
- Real-world training (real only)
- Simulated pretraining + real finetuning
License
This dataset is released under CC BY‑NC 4.0 (non‑commercial). Please credit the authors and the benchmark paper when using the dataset.
Citation
If you find our work and/or our code useful, please cite us via:
@misc{hu2026realpdebenchbenchmarkcomplexphysical,
  title={RealPDEBench: A Benchmark for Complex Physical Systems with Real-World Data},
  author={Peiyan Hu and Haodong Feng and Hongyuan Liu and Tongtong Yan and Wenhao Deng and Tianrun Gao and Rong Zheng and Haoren Zheng and Chenglei Yu and Chuanrui Wang and Kaiwen Li and Zhi-Ming Ma and Dezhi Zhou and Xingcai Lu and Dixia Fan and Tailin Wu},
  year={2026},
  eprint={2601.01829},
  archivePrefix={arXiv},
  primaryClass={cs.LG},
  url={https://arxiv.org/abs/2601.01829},
}
Contact
AI for Scientific Simulation and Discovery Lab, Westlake University
Maintainer: westlake-ai4s (Hugging Face)
Org: AI4Science-WestlakeU