CCCL: In-GPU Compression-Coupled Collective Communication

· ArXiv · AI/CL/LG ·

CCCL is an in-GPU compression-coupled collective communication library that fuses compression kernels directly into NCCL, running compression at up to 3x NVLink bandwidth, without requiring any user-side code changes.

Categories: OSS & Tools, Research

Excerpt

Collective communication incurs significant overhead in LLM workloads. Although overlapping communication with computation at the application level is a common strategy, it often requires substantial code modifications and is impractical for many workloads (e.g., tensor and expert parallelism). We present CCCL, a built-in compression-based collective communication library that supports operations such as allreduce, alltoall, and send/recv without requiring any user-side changes, thereby enabling seamless adoption in existing applications. CCCL tightly fuses compression kernels to minimize memory accesses and integrates with NCCL to eliminate the data-coalescing stage, making compression fast enough (up to 3x NVLink bandwidth) to sustain communication. Our evaluation shows that CCCL improves end-to-end throughput in vLLM PD-disaggregation workloads by up to 10.1% and microbenchmark throughput by up to 30%.
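
The excerpt stresses that CCCL sits behind the standard collective interface, so applications keep their existing NCCL calls. The sketch below shows such an unchanged call site: a plain ncclAllReduce on a single local GPU. That CCCL intercepts this path is taken from the abstract; the single-device setup and the idea of simply relinking or preloading against CCCL are assumptions for illustration, not details from the paper.

```cpp
// Minimal sketch of an unchanged NCCL call site, assuming CCCL acts as a
// drop-in replacement for NCCL (the excerpt does not specify the exact
// substitution mechanism, e.g. relinking or preloading the library).
#include <cuda_runtime.h>
#include <nccl.h>
#include <cstdio>

int main() {
  const size_t count = 1 << 20;   // number of float elements to reduce
  const int nDev = 1;             // single-process, single-GPU illustration
  ncclComm_t comm;
  float *sendbuf, *recvbuf;
  cudaStream_t stream;

  cudaSetDevice(0);
  cudaMalloc(&sendbuf, count * sizeof(float));
  cudaMalloc(&recvbuf, count * sizeof(float));
  cudaStreamCreate(&stream);

  // One communicator over local device(s); nullptr means devices 0..nDev-1.
  ncclCommInitAll(&comm, nDev, nullptr);

  // Standard NCCL allreduce; per the abstract, CCCL performs compression
  // inside this call path, so application code like this stays unchanged.
  ncclAllReduce(sendbuf, recvbuf, count, ncclFloat, ncclSum, comm, stream);
  cudaStreamSynchronize(stream);

  ncclCommDestroy(comm);
  cudaFree(sendbuf);
  cudaFree(recvbuf);
  cudaStreamDestroy(stream);
  printf("allreduce done\n");
  return 0;
}
```

The same reasoning applies to the other collectives the excerpt lists (alltoall, send/recv): the application issues the usual calls, and the compression work happens inside the library rather than in user code.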