Paper: Similarity-Guided Layer-Adaptive ViT for UAV Tracking (SGLATrack)
ArXiv: https://arxiv.org/abs/2503.06625
GitHub: https://github.com/GXNU-ZhongLab/SGLATrack
Defense Score: 40/50 | Tier: T3
Wave: 10 — WARDOG (War Dog Breeds)
Focus: UAV/Drone Defense for Shenzhen Robot Fair
Similarity-guided adaptive layer pruning for efficient ViT-based UAV tracking
Key Result: state-of-the-art real-time tracking speed via dynamic layer pruning
```bash
# Install dependencies
uv pip install -e ".[dev]"

# Run synthetic inference
python -m anima_sloughi infer --backend auto --synthetic --with-yolo26-prior

# Run synthetic training/evaluation
python -m anima_sloughi train --backend cpu --steps 3 --batch-size 2
python -m anima_sloughi evaluate --backend cpu --dataset synthetic

# If the package is not installed in editable mode:
PYTHONPATH=src python -m anima_sloughi infer --backend cpu --synthetic
```

Project layout:

```
project_sloughi/
├── src/anima_sloughi/   # Source code
├── tests/               # Unit tests
├── configs/             # Configuration files
├── scripts/             # Utility scripts
├── papers/              # Paper PDF
├── docker/              # Docker setup
├── CLAUDE.md            # Agent instructions
├── PRD.md               # Production requirements
├── NEXT_STEPS.md        # Execution ledger
└── MODULE_TODO.md       # Implementation checklist
```
All code runs on both MLX (Apple Silicon) and CUDA (GPU server).
See src/anima_sloughi/device.py for the abstraction layer.
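The `--backend auto` resolution might look roughly like the sketch below. This is a minimal illustration of backend selection under stated assumptions; `select_backend`, its name, and its fallback order are guesses, not the actual `device.py` API.

```python
import importlib.util

def select_backend(preferred: str = "auto") -> str:
    """Resolve a compute backend name.

    Illustrative sketch: honor an explicit choice, otherwise prefer
    MLX (Apple Silicon), then CUDA (GPU server), then CPU.
    """
    if preferred != "auto":
        return preferred  # honor an explicit --backend flag
    if importlib.util.find_spec("mlx") is not None:
        return "mlx"      # Apple Silicon via MLX
    try:
        import torch
        if torch.cuda.is_available():
            return "cuda"  # NVIDIA GPU server
    except ImportError:
        pass
    return "cpu"          # portable fallback
```

Centralizing the choice this way keeps the train/infer/evaluate entry points backend-agnostic.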
Research use only. See paper for original license terms.
