---
license: mit
task_categories:
- robotics
- video-classification
tags:
- minecraft
- vla
- vision-language-action
- gaming
- behavioral-cloning
size_categories:
- 1M<n<10M
---

# Minecraft VLA Stage 1: Action Pretraining

Minecraft gameplay frames paired with low-level action strings, derived from OpenAI VPT contractor data, for behavioral-cloning pretraining of vision-language-action (VLA) models.

## Dataset Structure

### Action Format

Each action string covers 200ms of gameplay: one mouse delta plus four 50ms key chunks:

```
<|action_start|> mouse_x mouse_y scroll ; K1 ; K2 ; K3 ; K4 <|action_end|>
```

- `mouse_x`, `mouse_y`: Mouse delta (-1000 to 1000)
- `scroll`: Hotbar scroll (always 0; VPT uses number keys)
- `K1` to `K4`: Key combinations, one per 50ms chunk

**Example:**

```
<|action_start|> 45 -12 0 ; W ; W Space ; W LMB ; W LMB <|action_end|>
```

### Processing Details

- **Frame rate**: 5 FPS (downsampled from VPT's 20 FPS)
- **Action chunks**: 4 per frame (each 50ms, 200ms total)
- **Filtering**: Idle frames removed, loading screens filtered out
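For concreteness, here is a minimal parsing sketch for the action string format above; the `parse_action` function and `Action` dataclass are illustrative helpers, not part of the dataset:

```python
from dataclasses import dataclass

@dataclass
class Action:
    mouse_x: int           # horizontal mouse delta (-1000 to 1000)
    mouse_y: int           # vertical mouse delta (-1000 to 1000)
    scroll: int            # hotbar scroll (always 0 in this dataset)
    keys: list[list[str]]  # four 50ms chunks of held keys/buttons

def parse_action(text: str) -> Action:
    # Drop the <|action_start|> / <|action_end|> delimiters.
    body = text.replace("<|action_start|>", "").replace("<|action_end|>", "").strip()
    # First ;-separated field holds "mouse_x mouse_y scroll"; the rest are key chunks.
    fields = [f.strip() for f in body.split(";")]
    mouse_x, mouse_y, scroll = (int(v) for v in fields[0].split())
    keys = [chunk.split() for chunk in fields[1:]]
    return Action(mouse_x, mouse_y, scroll, keys)

# The example from this card:
act = parse_action("<|action_start|> 45 -12 0 ; W ; W Space ; W LMB ; W LMB <|action_end|>")
assert act.mouse_x == 45 and act.keys[1] == ["W", "Space"]
```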
## Usage

```python
from datasets import load_dataset

# Streaming (recommended - no download required)
ds = load_dataset("TESS-Computer/minecraft-vla-stage1", split="train", streaming=True)

for sample in ds:
    image = sample["image"]   # PIL Image or bytes
    action = sample["action"]
    # Process...
```
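When streaming, the `image` field may arrive as encoded bytes rather than a decoded image (the card notes "PIL Image or bytes"). A small normalization helper can smooth this over, assuming Pillow is installed; `to_pil` is an illustrative name, not part of the dataset API:

```python
from io import BytesIO
from PIL import Image

def to_pil(img) -> Image.Image:
    # Decode raw bytes to a PIL Image; pass already-decoded images through unchanged.
    if isinstance(img, (bytes, bytearray)):
        return Image.open(BytesIO(img)).convert("RGB")
    return img
```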
## Training Pipeline

This is Stage 1 of a 3-stage training pipeline:

1. **Stage 1** (this dataset): Action pretraining - learn the observation→action mapping
2. **Stage 2**: Instruction following - add task instructions from JARVIS-VLA
3. **Stage 3**: Reasoning - add chain-of-thought before complex actions

## Citation

If you use this dataset, please cite:

- [OpenAI VPT](https://arxiv.org/abs/2206.11795) - Original contractor data
- [JARVIS-VLA](https://craftjarvis.github.io/JarvisVLA/) - Instruction annotations
- [Lumine](https://www.lumine-ai.org/) - Training methodology

## License

MIT License. Original VPT data is released under MIT by OpenAI.