CVPR 2026
We present PhysInOne, the largest dataset built to address the critical scarcity of physically-grounded training data for AI systems.
- 2 million videos generated from 153,810 dynamic 3D scenes
- Covers 71 fundamental physical phenomena in everyday environments, spanning four major domains: Mechanics, Optics, Fluid Dynamics, and Magnetism
- Includes 2,231 common objects tailored to daily physical interactions
- Enriched with 623 materials across five categories: plastic, metal, wood, stone, and fabric
- Features 528 diverse 3D backgrounds to ensure realism and environmental variety
- Each scene involves 1–3 physical phenomena, reflecting real-world activities
- Supports complex multi-object interactions, with scene complexity growing as more phenomena are combined
- Average number of objects per scene: 3.9 (single-physics), 6.3 (double-physics), 7.8 (triple-physics)
- Each scene is captured from 13 viewpoints: 12 static cameras and 1 moving camera
Each scene is richly annotated with (see the loading sketch after the task list below):
- 3D geometry
- Semantic labels
- Object motion and dynamics
- Physical properties
- Natural-language scene descriptions
These annotations support a wide range of downstream tasks:
- Physics-aware video generation
- Short- and long-term future frame prediction
- Physical property estimation
- Motion transfer
- And more...
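
Until the official data-processing code is released, the sketch below shows one plausible way to load a single scene (multi-view videos plus annotations). Everything layout-specific here is an assumption made for illustration: the directory structure, the `cam_00.mp4`/`cam_move.mp4` naming, and the single `annotations.json` bundle are not confirmed by the release.

```python
# Hypothetical loading sketch -- the on-disk layout (directory names, camera
# file names, annotation keys) is an illustrative assumption, not the official
# PhysInOne schema; check the Hugging Face release for the real structure.
import json
from pathlib import Path

import imageio.v3 as iio


def load_scene(scene_dir: Path) -> dict:
    """Load one scene: up to 13 viewpoint videos plus its annotation bundle."""
    scene = {"videos": {}, "annotations": None}

    # Assumed naming: cam_00..cam_11 are the 12 static cameras and
    # cam_move is the single moving camera.
    for cam in [f"cam_{i:02d}" for i in range(12)] + ["cam_move"]:
        video_path = scene_dir / f"{cam}.mp4"
        if video_path.exists():
            # ndarray of shape (num_frames, height, width, 3)
            scene["videos"][cam] = iio.imread(video_path, plugin="pyav")

    # Assumed single JSON file bundling semantic labels, object motion and
    # dynamics, physical properties, and the natural-language description.
    annotation_path = scene_dir / "annotations.json"
    if annotation_path.exists():
        scene["annotations"] = json.loads(annotation_path.read_text())

    return scene


if __name__ == "__main__":
    scene = load_scene(Path("PhysInOne/scenes/000001"))
    print(f"Loaded {len(scene['videos'])} viewpoint videos")
```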
| Resource | Link |
|---|---|
| 📄 Paper | [arXiv](https://arxiv.org/abs/2604.09415) |
| 🌐 Project Page | [vlar-group.github.io/PhysInOne](https://vlar-group.github.io/PhysInOne) |
| 🤗 Dataset | Hugging Face |
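
Once the dataset is live, a standard way to fetch it locally is `huggingface_hub.snapshot_download`. The repository ID below is a placeholder, since the official one has not been announced in this README.

```python
# Hypothetical download sketch: the repo_id is a placeholder until the
# official Hugging Face dataset repository is announced.
from huggingface_hub import snapshot_download

local_path = snapshot_download(
    repo_id="VLAR-group/PhysInOne",  # placeholder; check the project page
    repo_type="dataset",
    local_dir="PhysInOne",
)
print(f"Dataset downloaded to {local_path}")
```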
🚧 Coming Soon 🚧
Data processing and benchmark evaluation code will be released soon. Stay tuned!
If you find this work useful, please cite:
```bibtex
@misc{zhou2026physinonevisualphysicslearning,
  title={PhysInOne: Visual Physics Learning and Reasoning in One Suite},
  author={Siyuan Zhou and Hejun Wang and Hu Cheng and Jinxi Li and Dongsheng Wang and Junwei Jiang and Yixiao Jin and Jiayue Huang and Shiwei Mao and Shangjia Liu and Yafei Yang and Hongkang Song and Shenxing Wei and Zihui Zhang and Peng Huang and Shijie Liu and Zhengli Hao and Hao Li and Yitian Li and Wenqi Zhou and Zhihan Zhao and Zongqi He and Hongtao Wen and Shouwang Huang and Peng Yun and Bowen Cheng and Pok Kazaf Fu and Wai Kit Lai and Jiahao Chen and Kaiyuan Wang and Zhixuan Sun and Ziqi Li and Haochen Hu and Di Zhang and Chun Ho Yuen and Bing Wang and Zhihua Wang and Chuhang Zou and Bo Yang},
  year={2026},
  eprint={2604.09415},
  archivePrefix={arXiv},
  primaryClass={cs.CV},
  url={https://arxiv.org/abs/2604.09415},
}
```

This project is licensed under the CC BY-NC-SA 4.0 license.
We sincerely thank all contributors who participated in the human evaluations and data collection efforts.
