Splat-based 3D Scene Reconstruction with Extreme Motion-blur

Hyeonjoong Jang, Dongyoung Choi, Donggun Kim, Woohyun Kang, Min H. Kim

This repository provides the MA3DGS (Motion-Aware 3DGS) implementation of "Splat-based 3D Scene Reconstruction with Extreme Motion-blur", presented at ICCV 2025.

@InProceedings{jang2025iccv,
   author = {Jang, Hyeonjoong and Choi, Dongyoung and Kim, Donggun and Kang, Woohyun and Kim, Min H.},
   title = {{Splat-based 3D Scene Reconstruction with Extreme Motion-blur}},
   booktitle = {{IEEE/CVF International Conference on Computer Vision (ICCV)}},
   year = {2025}
} 

Install

We recommend using Docker with CUDA. For other environments, you can skip to step 5 after checking compatibility with the environment we used (the Docker image nvidia/cuda:11.8.0-cudnn8-devel-ubuntu22.04). The required Python packages are listed in install_req.sh. A consolidated version of the steps below is sketched after the list.
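
If you are not using Docker, a quick way to compare your setup against ours is to check the CUDA toolkit, driver, and OS versions (a minimal sketch; exact version strings will vary by machine):

    # CUDA toolkit version (we used CUDA 11.8 with cuDNN 8)
    nvcc --version

    # GPU driver and runtime
    nvidia-smi

    # OS release (we used Ubuntu 22.04)
    grep VERSION= /etc/os-release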

  1. Go to the env directory and run ./run_docker.sh. This creates a container named madgs.
  2. Open a shell inside the container. (e.g., docker exec -it madgs bash)
  3. Inside the container, run: cd /root/code/env && chmod +x install_base.sh && ./install_base.sh (This step takes some time)
  4. Close the current shell and re-open it so that the initialization takes effect.
  5. Inside the container, run: cd /root/code/env && chmod +x install_req.sh && ./install_req.sh (This step takes some time)
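
For reference, the whole install sequence looks roughly like this (a sketch assuming the defaults above; adjust paths or the container name if your run_docker.sh differs):

    # On the host: create and start the container named "madgs"
    cd env && ./run_docker.sh

    # Open a shell inside the container
    docker exec -it madgs bash

    # Inside the container: base setup (takes some time)
    cd /root/code/env && chmod +x install_base.sh && ./install_base.sh

    # Leave and re-enter the shell so the initialization takes effect
    exit
    docker exec -it madgs bash

    # Inside the container again: install the Python requirements (takes some time)
    cd /root/code/env && chmod +x install_req.sh && ./install_req.sh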

Demo

Inside the container, move to /root/code and follow the steps below.

  1. Make a directory named data and put the unzipped example data in it. Your file structure should look like this:

    .
    ├── arguments/
    ├── data/
    │   ├── Library/
    │   │   ├── camera.json
    │   │   ├── depth/
    │   │   └── images/
    │   └── Livingroom/
    │       ├── camera.json
    │       ├── depth/
    │       ├── images/
    │       └── pose_c2w.json
    ├── env/
    ├── gaussian_renderer/
    ├── run_ma3dgs.py
    ├── scene/
    ├── submodules/
    │   ├── diff-gaussian-rasterization/
    │   └── simple-knn/
    ├── trainer/
    ├── unimatch/
    └── utils/
    
    
  2. Activate the conda environment.

    conda activate madgs
  3. Run training on the example data. You must specify the data_type (an end-to-end run is sketched after this list).

    • Synthetic data (Bedroom, Livingroom, Office)

      python run_ma3dgs.py -s data/Livingroom --mode train --data_type synthetic --resolution 1 --optical_flow True
    • Real data from Azure Kinect (Desk, Library, Room)

      python run_ma3dgs.py -s data/Library --mode train --data_type azure --resolution 1 --optical_flow True
  4. Export the output poses and point cloud in COLMAP format. The exported data can also be used as input for other 3DGS methods.

    python export_colmap_model.py --data data/Livingroom
  5. (Synthetic data only) Evaluate the results. Evaluation requires the COLMAP-format export from the previous step.

    python evaluate.py --data data/Livingroom
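
Putting steps 2-5 together, a full run on the synthetic Livingroom scene looks like this (a sketch composed only of the commands above):

    # Inside the container, from /root/code
    conda activate madgs

    # Train MA3DGS on the synthetic Livingroom scene
    python run_ma3dgs.py -s data/Livingroom --mode train --data_type synthetic --resolution 1 --optical_flow True

    # Export poses and point cloud in COLMAP format
    python export_colmap_model.py --data data/Livingroom

    # Evaluate (synthetic data only; requires the COLMAP export above)
    python evaluate.py --data data/Livingroom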

Acknowledgements

This code builds heavily on the following repositories: CF-3DGS, 3DGS (with modifications), and unimatch. We thank the authors for their work.
