LoopCTR: Unlocking the Loop Scaling Power for Click-Through Rate Prediction
LoopCTR introduces loop scaling for CTR prediction models, recursively reusing shared layers to decouple training-time computation from parameter growth while keeping inference to a single loop-free forward pass.
Excerpt
Jiakai Tang, Runfeng Zhang, Weiqiu Wang, Yifei Liu, Chuan Wang — Scaling Transformer-based click-through rate (CTR) models by stacking more parameters incurs growing computational and storage overhead, widening the gap between scaling ambitions and stringent industrial deployment constraints. We propose LoopCTR, which introduces a loop scaling paradigm that increases training-time computation through recursive reuse of shared model layers, decoupling computation from parameter growth. LoopCTR adopts a sandwich architecture enhanced with Hyper-Connected Residuals and Mixture-of-Experts, and employs process supervision at every loop depth to encode multi-loop benefits into the shared parameters. This enables a train-multi-loop, infer-zero-loop strategy in which a single forward pass without any loop already outperforms all baselines. Experiments on three public benchmarks and one industrial dataset demonstrate state-of-the-art performance. Oracle analysis further reveals 0.02--0.04 AUC of untapped headroom, with models trained with fewer loops exhibiting higher oracle ceilings, pointing to a promising frontier for adaptive inference.
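To make the train-multi-loop, infer-zero-loop idea concrete, here is a minimal PyTorch sketch under stated assumptions: the paper's sandwich architecture, Hyper-Connected Residuals, and Mixture-of-Experts layers are simplified to a plain residual feed-forward block, and all class and function names (`LoopedBlock`, `LoopCTRSketch`, `process_supervision_loss`) are hypothetical, not the authors' implementation. A shared block is applied recursively during training, a prediction head is supervised at every loop depth, and inference uses only the first pass.

```python
import torch
import torch.nn as nn


class LoopedBlock(nn.Module):
    """One shared block reused across loop iterations.

    Simplified stand-in for the paper's sandwich architecture with
    Hyper-Connected Residuals and Mixture-of-Experts (assumption).
    """

    def __init__(self, dim: int):
        super().__init__()
        self.norm = nn.LayerNorm(dim)
        self.ff = nn.Sequential(
            nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim)
        )

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        # Residual update; the same parameters serve every loop depth.
        return h + self.ff(self.norm(h))


class LoopCTRSketch(nn.Module):
    """Train-multi-loop, infer-zero-loop sketch (hypothetical names).

    Training runs the shared block extra times and collects a CTR
    prediction at every loop depth; inference uses only the first pass.
    """

    def __init__(self, dim: int, num_extra_loops: int = 3):
        super().__init__()
        self.block = LoopedBlock(dim)  # parameters shared across all loops
        self.head = nn.Linear(dim, 1)  # CTR prediction head
        self.num_extra_loops = num_extra_loops

    def forward(self, h: torch.Tensor, use_loops: bool = False):
        logits_per_depth = []
        h = self.block(h)  # depth 0: the single pass used at inference
        logits_per_depth.append(self.head(h).squeeze(-1))
        if use_loops:
            # Extra recursive passes add computation but no new parameters.
            for _ in range(self.num_extra_loops):
                h = self.block(h)
                logits_per_depth.append(self.head(h).squeeze(-1))
        return logits_per_depth


def process_supervision_loss(logits_per_depth, labels):
    """Average the CTR loss over all loop depths so multi-loop benefits
    are encoded into the shared weights (process supervision)."""
    bce = nn.BCEWithLogitsLoss()
    return sum(bce(l, labels) for l in logits_per_depth) / len(logits_per_depth)


# Training step supervises every depth; serving calls forward(h) with
# use_loops=False, i.e. one forward pass and zero extra loops.
model = LoopCTRSketch(dim=64)
features = torch.randn(32, 64)  # placeholder CTR feature embeddings
labels = torch.randint(0, 2, (32,)).float()
loss = process_supervision_loss(model(features, use_loops=True), labels)
loss.backward()
```

Because every loop depth shares one loss target, the single zero-loop pass is trained to absorb the refinement that extra loops would otherwise perform, which is what lets inference cost stay flat.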
Read at source: https://arxiv.org/abs/2604.19550