
Claude/build binary wheel 01 mz6q m xg slr yq x2h aj75 hv6 #76

Open

Aero-Ex wants to merge 18 commits into autonomousvision:main from Aero-Ex:claude/build-binary-wheel-01Mz6qMXgSLRYqX2hAJ75Hv6

Conversation

@Aero-Ex Aero-Ex commented Nov 16, 2025

No description provided.

claude and others added 18 commits November 16, 2025 05:15
This commit introduces a comprehensive setup for building and distributing
binary wheels for mip-splatting, following the pattern used in diffoctreerast.

Changes include:
- setup.py: Root-level build configuration that compiles both diff-gaussian-rasterization
  and simple-knn CUDA extensions with cross-platform support (Linux/Windows)
- pyproject.toml: Modern Python packaging configuration with project metadata
- MANIFEST.in: Source distribution manifest for including all necessary files
- build_wheel.sh/bat: Convenient build scripts for Linux and Windows
- BUILD.md: Comprehensive documentation for building binary wheels
- .github/workflows/build_binary_wheel.yml: Automated CI/CD workflow for
  building wheels across multiple Python versions (3.8-3.11) and CUDA versions

Key features:
- Support for CUDA architectures 6.0-9.0 (Pascal to Hopper)
- Windows-specific compiler flags for MSVC/CUDA compatibility
- Automated testing in GitHub Actions
- Single-file distribution for easier deployment

This enables users to install pre-compiled wheels instead of building from source,
significantly reducing installation time and complexity.
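The CUDA architecture range above translates into per-architecture `-gencode` flags passed to nvcc. A minimal sketch of how such flags are typically generated (the helper name and the exact architecture list are illustrative, not taken from this PR's setup.py):

```python
def gencode_flags(archs):
    """Build nvcc -gencode flags for a list of compute capabilities.

    Each capability like "8.6" becomes
    -gencode=arch=compute_86,code=sm_86, so the wheel ships native
    code for every listed GPU generation.
    """
    flags = []
    for arch in archs:
        num = arch.replace(".", "")  # "8.6" -> "86"
        flags.append(f"-gencode=arch=compute_{num},code=sm_{num}")
    return flags

# Pascal (6.0) through Hopper (9.0), per the commit message.
NVCC_FLAGS = gencode_flags(["6.0", "6.1", "7.0", "7.5", "8.0", "8.6", "9.0"])
```

Flags like these would be passed as `extra_compile_args` for nvcc when defining the CUDAExtension objects.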

Updated the build configuration to target Windows exclusively, as per user requirements.

Changes:
- setup.py: Added Windows-only check that exits with error on non-Windows platforms
- setup.py: Removed conditional logic - now uses Windows flags unconditionally
- .github/workflows/build_binary_wheel.yml: Removed Linux builds, only Windows matrix
- .github/workflows/build_binary_wheel.yml: Removed source distribution build step
- BUILD.md: Completely rewritten to focus on Windows installation and troubleshooting
- Removed build_wheel.sh (Linux build script, no longer needed)
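The Windows-only check could look roughly like this (a sketch; the actual message and structure in setup.py may differ):

```python
import sys

def ensure_windows(platform: str = sys.platform) -> None:
    """Abort the build on non-Windows platforms.

    The binary wheels target Windows only; Linux/macOS users build
    from source per the README.
    """
    if not platform.startswith("win"):
        raise SystemExit(
            "This package builds Windows-only binary wheels. "
            "On Linux/macOS, install from source as described in the README."
        )
```

Called at the top of setup.py, a guard like this fails fast before any CUDA compilation starts.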

The build now only supports:
- Windows (windows-latest)
- Python 3.8, 3.9, 3.10, 3.11
- CUDA 11.8, 12.1

Linux/Mac users should continue using the original installation method from README.
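The Python x CUDA support matrix above expands to eight wheel builds. The workflow defines this as a GitHub Actions matrix strategy; the combinations it enumerates can be sketched in Python as:

```python
from itertools import product

PYTHON_VERSIONS = ["3.8", "3.9", "3.10", "3.11"]
CUDA_VERSIONS = ["11.8", "12.1"]

# One wheel build per (python, cuda) combination, mirroring the
# GitHub Actions matrix strategy described above.
builds = list(product(PYTHON_VERSIONS, CUDA_VERSIONS))
```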

Update actions/upload-artifact from v3 to v4 to resolve a deprecation warning.

Use the full version format (11.8.0, 12.1.0) instead of the short format (11.8, 12.1), as required by the Jimver/cuda-toolkit action.

Changed the CUDA version from 12.1.0 to 12.1.1 for compatibility with the Jimver/cuda-toolkit action.

- Set both CUDA_HOME and CUDA_PATH env vars for the build step
- Upload wheel before testing (so artifacts are saved even if tests fail)

Changes to match the diffoctreerast pattern:
- Remove matrix strategy, only build Python 3.11 + CUDA 11.8
- Use python setup.py bdist_wheel instead of python -m build --wheel
- Add MSVC environment setup step
- Install PyTorch before building (brings numpy automatically)
- Update to newer action versions (setup-python@v5, cuda-toolkit@v0.2.21)
- Use PowerShell with verbose logging throughout
- Upload wheel before testing

This avoids the isolated build environment that was causing CUDA_HOME issues.

PyTorch 2.2.2 does not automatically install numpy in the workflow environment, causing build failures. Added an explicit numpy==1.26.1 installation.

Changes:
- Add GitHub Actions cache for pip packages (caches PyTorch 2.7GB download)
- Enable pip cache in setup-python action
- Combine pip install steps to reduce overhead
- First run: normal speed, subsequent runs: much faster

Changed from redirecting to build.log to using Tee-Object for real-time output. This shows the actual compilation errors instead of just the exit code.
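PowerShell's Tee-Object writes a stream to both the console and a file at once. A rough Python analogue of the behavior being relied on here (illustrative only, not code from this PR):

```python
import io
import sys

class Tee:
    """Write everything to several underlying streams at once."""

    def __init__(self, *streams):
        self.streams = streams

    def write(self, text):
        for s in self.streams:
            s.write(text)

    def flush(self):
        for s in self.streams:
            s.flush()

log = io.StringIO()
out = Tee(sys.stdout, log)
# A compiler error would now appear live in CI logs *and* be captured.
print("error C2039: example compiler diagnostic", file=out)
```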

Set CUDA_HOME and CUDA_PATH explicitly in the build step to ensure the CUDA toolkit is found during compilation.
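Setting both variables covers the two names build tooling commonly checks. A sketch of the lookup order a setup script might use (illustrative; torch.utils.cpp_extension performs a similar lookup internally, which is why the workflow exports both):

```python
import os

def find_cuda_home(env=os.environ):
    """Return the CUDA toolkit root, preferring CUDA_HOME over CUDA_PATH.

    Returns None if neither variable is set, in which case the build
    tooling would fall back to probing default install locations.
    """
    return env.get("CUDA_HOME") or env.get("CUDA_PATH")
```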

Removed all custom additions to match diffoctreerast exactly:
- Removed pip caching steps
- Removed explicit numpy installation (let PyTorch handle it)
- Changed output from Tee-Object back to > build.log 2>&1
- Removed CUDA_HOME and CUDA_PATH from env (rely on cuda-toolkit action)
- Separated PyTorch and build dependencies into two steps

Now identical to diffoctreerast workflow structure.

- Fixed spacing in the Build environment output (2 spaces)
- Fixed indentation in if/foreach blocks
- Added glm submodule comment
- Updated test imports for mip-splatting modules
- Fixed summary display indentation

Added CUDA_HOME and CUDA_PATH to the build step to ensure the CUDA toolkit is properly detected during compilation.

Added steps to exactly match the diffoctreerast workflow:
- Cache pip packages before CUDA installation
- Updated cuda-toolkit to v0.2.29
- Verify environment step (Python/NVCC/CUDA/MSVC)
- Cache PyTorch separately
- Verify PyTorch after installation
- pip install --upgrade pip before build dependencies
- Separate numpy==1.26.4 installation step
- Final verification showing PyTorch and CUDA versions
- Added CUDA_HOME and CUDA_PATH to build environment

Now matches diffoctreerast workflow structure exactly.

Changed from redirecting to build.log to using Tee-Object for real-time output. Added CUDA_HOME to the environment display.

Set ErrorActionPreference to Continue to avoid stopping on warnings.

Set DISTUTILS_USE_SDK=1 to prevent multiple activations of the VC environment when building CUDA extensions.
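In a setup.py this kind of guard is typically set before the extension build runs. A hedged sketch of the variable handling only (the function name is illustrative, not from this PR's file):

```python
import os

def configure_msvc(env=None):
    """Ensure DISTUTILS_USE_SDK is set so distutils reuses the
    already-activated MSVC environment instead of re-activating it
    for each CUDA extension compile on Windows."""
    env = os.environ if env is None else env
    env.setdefault("DISTUTILS_USE_SDK", "1")
    return env
```

Using `setdefault` means an explicit value exported by the CI workflow is left untouched.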