AICL-Lab
Popular repositories
- the-book-of-secret-knowledge-zh (Public): 📚 The Book of Secret Knowledge, Chinese edition. A curated collection of tools, manuals, cheatsheets, and resources for SysAdmins, DevOps, Pentesters, and Security Researchers. Chinese translation of the-book-of-secret-knowledge. (Python · 4)
- brave-sync-notes (Public): 🔐 End-to-end encrypted note sync with real-time collaboration. (JavaScript · 4)
- diy-flash-attention (Public): Learn Triton by building FlashAttention from scratch — V2 kernels, persistent threads, mask DSL, profiling toolkit, bilingual docs. (Python · 4)
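FlashAttention's core trick is the online softmax: scores are processed block by block while a running row max and running denominator keep the result exact without ever materializing the full attention matrix. As an illustration only (the function names below are hypothetical, not from the repo), a NumPy sketch of that recurrence:

```python
import numpy as np

def attention_reference(q, k, v):
    """Plain softmax(Q K^T / sqrt(d)) V, used as ground truth."""
    s = q @ k.T / np.sqrt(q.shape[-1])
    p = np.exp(s - s.max(axis=-1, keepdims=True))
    return (p / p.sum(axis=-1, keepdims=True)) @ v

def attention_online(q, k, v, block=32):
    """FlashAttention-style pass: visit K/V in blocks, carrying a
    running row max `m` and running denominator `l` so the full
    score matrix is never materialized."""
    n, d = q.shape
    scale = 1.0 / np.sqrt(d)
    out = np.zeros_like(q)
    m = np.full((n, 1), -np.inf)  # running max per query row
    l = np.zeros((n, 1))          # running sum of exponentials
    for start in range(0, k.shape[0], block):
        kb, vb = k[start:start + block], v[start:start + block]
        s = (q @ kb.T) * scale
        m_new = np.maximum(m, s.max(axis=-1, keepdims=True))
        p = np.exp(s - m_new)
        corr = np.exp(m - m_new)  # rescale earlier partial results
        l = l * corr + p.sum(axis=-1, keepdims=True)
        out = out * corr + p @ vb
        m = m_new
    return out / l

rng = np.random.default_rng(0)
q, k, v = (rng.standard_normal((64, 16)) for _ in range(3))
```

A Triton V2 kernel assigns one program instance per query block and keeps `m` and `l` in registers; this sketch only checks the math.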
- heterogeneous-task-scheduler (Public): C++17 DAG scheduler for heterogeneous CPU/GPU workloads, production-ready with a CPU-only validation path. (C++ · 4)
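A DAG scheduler's core loop is topological: a task becomes ready only once all of its dependencies have finished. A minimal Python sketch of that dispatch logic using Kahn's algorithm (the names and example task graph are hypothetical, not taken from the repo):

```python
from collections import deque

def schedule(tasks, deps):
    """Kahn's algorithm: return one valid execution order for a DAG.
    `deps` maps a task to the set of tasks it depends on."""
    indegree = {t: len(deps.get(t, ())) for t in tasks}
    children = {t: [] for t in tasks}
    for t, ds in deps.items():
        for d in ds:
            children[d].append(t)
    ready = deque(t for t in tasks if indegree[t] == 0)
    order = []
    while ready:
        t = ready.popleft()  # a real scheduler dispatches to CPU/GPU here
        order.append(t)
        for c in children[t]:
            indegree[c] -= 1
            if indegree[c] == 0:
                ready.append(c)
    if len(order) != len(tasks):
        raise ValueError("cycle detected: not a DAG")
    return order

order = schedule(
    ["load", "cpu_op", "gpu_op", "merge"],
    {"cpu_op": {"load"}, "gpu_op": {"load"}, "merge": {"cpu_op", "gpu_op"}},
)
```

In a heterogeneous runtime the ready queue would feed per-device worker pools instead of a single list, but the dependency-counting logic is the same.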
- triton-fused-ops (Public): Fused Triton kernels for Transformer inference: RMSNorm+RoPE, Gated MLP, FP8 GEMM — CPU-testable references, autotuning, and benchmarking. (Python · 4)
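A fused RMSNorm+RoPE kernel computes RoPE(RMSNorm(x)) in a single memory pass; a CPU-testable reference only needs the two ops composed. A hedged NumPy sketch (function names are illustrative, not the repo's API; RoPE here uses the rotate-half channel pairing):

```python
import numpy as np

def rmsnorm(x, weight, eps=1e-6):
    """RMSNorm: scale by reciprocal RMS; no mean subtraction."""
    rms = np.sqrt((x * x).mean(axis=-1, keepdims=True) + eps)
    return x / rms * weight

def rope(x, base=10000.0):
    """Rotary position embedding: rotate (x1, x2) channel pairs by a
    position- and frequency-dependent angle."""
    seq, dim = x.shape
    half = dim // 2
    inv_freq = base ** (-np.arange(half) / half)
    angles = np.outer(np.arange(seq), inv_freq)  # (seq, half)
    cos, sin = np.cos(angles), np.sin(angles)
    x1, x2 = x[:, :half], x[:, half:]
    return np.concatenate([x1 * cos - x2 * sin,
                           x1 * sin + x2 * cos], axis=-1)

def rmsnorm_rope(x, weight):
    """The fused op computes RoPE(RMSNorm(x)) in one pass over x;
    the reference simply composes the two functions."""
    return rope(rmsnorm(x, weight))

x = np.random.default_rng(1).standard_normal((8, 16))
y = rmsnorm_rope(x, np.ones(16))
```

Since rotation is norm-preserving per row, the fused output's row norms must match those of the RMSNorm output alone, which makes a cheap sanity check for the kernel.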
Repositories
- modern-ai-kernels (Public): TensorCraft-HPC, a header-only C++/CUDA kernel library for learning high-performance AI operators with progressive optimization paths.
- triton-fused-ops (Public): Fused Triton kernels for Transformer inference: RMSNorm+RoPE, Gated MLP, FP8 GEMM — CPU-testable references, autotuning, and benchmarking.
- llm-speed (Public): CUDA kernels for LLM inference: FlashAttention forward, Tensor Core GEMM, and PyTorch bindings.
- cuda-kernel-academy (Public): Systematic CUDA kernel engineering, from SGEMM fundamentals to reusable kernels, advanced optimization, and inference components.
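SGEMM optimization starts from loop tiling: compute C in tiles so each tile of A and B is loaded once into fast memory and reused. A NumPy sketch of that blocking structure (illustrative only, not the repo's kernel; in CUDA the inner tile product runs in shared memory):

```python
import numpy as np

def sgemm_blocked(a, b, tile=16):
    """Cache-blocked matrix multiply: the three-level loop nest mirrors
    the tiling a CUDA SGEMM kernel does with shared memory."""
    m, k = a.shape
    k2, n = b.shape
    assert k == k2, "inner dimensions must match"
    c = np.zeros((m, n), dtype=a.dtype)
    for i in range(0, m, tile):          # tile rows of C
        for j in range(0, n, tile):      # tile columns of C
            for p in range(0, k, tile):  # walk the shared K dimension
                c[i:i + tile, j:j + tile] += (
                    a[i:i + tile, p:p + tile] @ b[p:p + tile, j:j + tile]
                )
    return c

a = np.random.default_rng(2).standard_normal((48, 32))
b = np.random.default_rng(3).standard_normal((32, 40))
```

NumPy slicing clips ragged edge tiles automatically; a real kernel handles those boundaries with predication or padding.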
- mini-image-pipe (Public): GPU-accelerated image processing pipeline with DAG scheduling, CUDA operators, and multi-stream execution.
- mini-opencv (Public): CUDA-accelerated GPU image processing library, 30-50x faster than CPU OpenCV. High-performance operators for computer vision: convolution, morphology, filters, geometric transforms, and more.
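Convolution is the canonical first GPU vision operator: each output pixel is a weighted sum over a kernel-sized window of the input. A minimal "valid"-mode reference in Python (illustrative only; like most vision libraries this is cross-correlation, and a real CUDA operator maps one thread per output pixel with the window staged in shared memory):

```python
import numpy as np

def conv2d(img, kernel):
    """'Valid' 2-D cross-correlation: slide the kernel over the image,
    producing an output shrunk by kernel_size - 1 on each axis."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # weighted sum over the window anchored at (i, j)
            out[i, j] = (img[i:i + kh, j:j + kw] * kernel).sum()
    return out

img = np.arange(25, dtype=float).reshape(5, 5)
ident = np.zeros((3, 3))
ident[1, 1] = 1.0  # identity kernel: output equals the cropped input
```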
- gpu-fft (Public): High-performance GPU-accelerated FFT library for JavaScript/TypeScript using WebGPU compute shaders. Zero runtime dependencies, dual GPU/CPU paths, TypeScript-first.
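FFT compute shaders build on the Cooley-Tukey recurrence: an N-point DFT splits into two N/2-point DFTs over the even and odd samples, combined with twiddle factors. For reference, the radix-2 recurrence sketched in Python rather than WGSL for brevity (function name is illustrative, not the library's API):

```python
import cmath

def fft(x):
    """Recursive radix-2 Cooley-Tukey FFT; len(x) must be a power of two."""
    n = len(x)
    if n == 1:
        return list(x)
    even = fft(x[0::2])  # DFT of even-indexed samples
    odd = fft(x[1::2])   # DFT of odd-indexed samples
    out = [0j] * n
    for k in range(n // 2):
        tw = cmath.exp(-2j * cmath.pi * k / n) * odd[k]  # twiddle factor
        out[k] = even[k] + tw
        out[k + n // 2] = even[k] - tw
    return out
```

A GPU implementation unrolls this recursion into log2(N) butterfly passes, one compute dispatch (or shared-memory stage) per pass.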
- tiny-llm (Public): CUDA-native C++ Transformer inference engine with W8A16 quantization, KV cache management, and optimized CUDA kernels.
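W8A16 means weights are stored as int8 with a per-row scale while activations stay in 16-bit float, and the kernel dequantizes weights on the fly inside the matmul. A hedged NumPy reference of that scheme (names are illustrative, not the engine's API; activations are fp32 here for simplicity):

```python
import numpy as np

def quantize_w8(w):
    """Symmetric per-row int8 quantization: returns int8 weights plus
    one float scale per output row, chosen so the row max maps to 127."""
    scale = np.maximum(np.abs(w).max(axis=1, keepdims=True), 1e-8) / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequant_matmul(x, q, scale):
    """Dequantize on the fly and multiply, as the fused kernel would;
    quantization error per weight is at most scale / 2."""
    return x @ (q.astype(np.float32) * scale).T

w = np.random.default_rng(4).standard_normal((8, 16)).astype(np.float32)
x = np.random.default_rng(5).standard_normal((4, 16)).astype(np.float32)
q, s = quantize_w8(w)
y = dequant_matmul(x, q, s)
```

Per-row (or per-group) scales keep the rounding error proportional to each row's own magnitude, which is why W8A16 typically costs little accuracy on linear layers.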
People
This organization has no public members.