Hyeonjoong Jang, Dongyoung Choi, Donggun Kim, Woohyun Kang, Min H. Kim
This repository provides the MA3DGS (Motion-Aware 3DGS) implementation of "Splat-based 3D Scene Reconstruction with Extreme Motion-blur", presented at ICCV 2025. If you find this work useful, please cite:
```bibtex
@InProceedings{jang2025iccv,
    author    = {Jang, Hyeonjoong and Choi, Dongyoung and Kim, Donggun and Kang, Woohyun and Kim, Min H.},
    title     = {{Splat-based 3D Scene Reconstruction with Extreme Motion-blur}},
    booktitle = {{IEEE/CVF International Conference on Computer Vision (ICCV)}},
    year      = {2025}
}
```

We recommend using Docker with CUDA. For other environments, you can jump to step 5 below after checking compatibility with the environment we used (Docker image `nvidia/cuda:11.8.0-cudnn8-devel-ubuntu22.04`). The required Python packages are listed in `install_req.sh`.
1. Go to the `env` directory and run `./run_docker.sh`. This will create the container `madgs`.
2. Open a shell inside the container (e.g., `docker exec -it madgs bash`).
3. Inside the container, run `cd /root/code/env && chmod +x install_base.sh && ./install_base.sh`. (This step takes some time.)
4. Close the current shell and re-open it to apply the initialization.
5. Inside the container, run `cd /root/code/env && chmod +x install_req.sh && ./install_req.sh`. (This step takes some time.)
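After installation, you can optionally verify that the GPU is visible from Python. A minimal sanity-check sketch, assuming `install_req.sh` installed a CUDA-enabled PyTorch build (see that script for the actual package list):

```python
import torch

# Check that PyTorch sees the container's CUDA runtime.
# (Assumes install_req.sh installed a CUDA-enabled PyTorch build.)
print("torch version:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("GPU:", torch.cuda.get_device_name(0))
```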
Inside the container, move to `/root/code` and follow the steps below.

1. Make a directory named `data` and put the unzipped example data in it. Your file structure should be as follows (a quick way to verify this layout is sketched after this list):

   ```
   .
   ├── arguments/
   ├── data/
   │   ├── Library/
   │   │   ├── camera.json
   │   │   ├── depth/
   │   │   └── images/
   │   └── Livingroom/
   │       ├── camera.json
   │       ├── depth/
   │       ├── images/
   │       └── pose_c2w.json
   ├── env/
   ├── gaussian_renderer/
   ├── run_ma3dgs.py
   ├── scene/
   ├── submodules/
   │   ├── diff-gaussian-rasterization/
   │   └── simple-knn/
   ├── trainer/
   ├── unimatch/
   └── utils/
   ```
2. Activate the conda environment:

   ```bash
   conda activate madgs
   ```
3. Run on the example data. You need to specify the `data_type`:

   - Synthetic data (`Bedroom`, `Livingroom`, `Office`):

     ```bash
     python run_ma3dgs.py -s data/Livingroom --mode train --data_type synthetic --resolution 1 --optical_flow True
     ```

   - Real data from Azure Kinect (`Desk`, `Library`, `Room`):

     ```bash
     python run_ma3dgs.py -s data/Library --mode train --data_type azure --resolution 1 --optical_flow True
     ```
4. Export the output poses and point cloud in COLMAP format. The exported data can also be used as input for running other 3DGS methods:

   ```bash
   python export_colmap_model.py --data data/Livingroom
   ```
5. (Synthetic data only) Evaluate the results. Evaluation requires the converted COLMAP format from the previous step. (A sketch of a typical pose-error metric follows after this list.)

   ```bash
   python evaluate.py --data data/Livingroom
   ```
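As noted in step 1, you can sanity-check a scene directory before training. A minimal sketch using only the file names from the tree above; it makes no assumptions about the JSON contents (`pose_c2w.json` appears only in some scenes, so it is not required here):

```python
from pathlib import Path

# Verify that a scene directory contains the entries shown in the tree above.
def check_scene(scene_dir: str) -> None:
    scene = Path(scene_dir)
    for name in ("camera.json", "images", "depth"):
        status = "ok" if (scene / name).exists() else "MISSING"
        print(f"{scene / name}: {status}")

check_scene("data/Livingroom")
check_scene("data/Library")
```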
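Regarding step 5, `evaluate.py` computes the repository's actual metrics. As background on what pose evaluation of this kind typically measures, below is a self-contained sketch of absolute trajectory error (ATE) with Umeyama similarity alignment; this is purely illustrative and not the code used by `evaluate.py`:

```python
import numpy as np

def umeyama_alignment(src, dst):
    """Least-squares similarity transform (s, R, t) mapping src onto dst.
    src, dst: (N, 3) arrays of corresponding camera positions."""
    mu_s, mu_d = src.mean(0), dst.mean(0)
    xs, xd = src - mu_s, dst - mu_d
    cov = xd.T @ xs / len(src)                      # 3x3 cross-covariance
    U, S, Vt = np.linalg.svd(cov)
    d = np.sign(np.linalg.det(U) * np.linalg.det(Vt))
    D = np.diag([1.0, 1.0, d])                      # guard against reflections
    R = U @ D @ Vt
    s = np.trace(np.diag(S) @ D) / xs.var(0).sum()  # optimal scale
    t = mu_d - s * R @ mu_s
    return s, R, t

def ate_rmse(est, gt):
    """RMSE between similarity-aligned estimated positions and ground truth."""
    s, R, t = umeyama_alignment(est, gt)
    aligned = (s * (R @ est.T)).T + t
    return float(np.sqrt(((aligned - gt) ** 2).sum(axis=1).mean()))

# Toy usage: est is a scaled and shifted copy of gt, so the error is ~0.
# Real inputs would be camera centers from the estimated and GT poses.
rng = np.random.default_rng(0)
gt = rng.normal(size=(10, 3))
est = 0.5 * gt + 0.1
print("ATE RMSE:", ate_rmse(est, gt))
```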
This code builds heavily on the following repositories: CF-3DGS, 3DGS (with modifications), and unimatch. We thank the authors for their work.