End-to-End Autoregressive Image Generation with 1D Semantic Tokenizer
Jointly trained 1D semantic tokenizer and autoregressive generator achieves FID 1.48 on ImageNet 256x256 without guidance, matching diffusion quality with an AR approach.
Excerpt
Wenda Chu, Bingliang Zhang, Jiaqi Han, Yizhuo Li, Linjie Yang

Autoregressive image modeling relies on visual tokenizers to compress images into compact latent representations. We design an end-to-end training pipeline that jointly optimizes reconstruction and generation, allowing generation results to directly supervise the tokenizer. This contrasts with prior two-stage approaches that train the tokenizer and the generative model separately. We further investigate leveraging vision foundation models to improve 1D tokenizers for autoregressive modeling. Our autoregressive generative model achieves strong empirical results, including a state-of-the-art FID of 1.48 without guidance on ImageNet 256x256 generation.
Read at source: https://arxiv.org/abs/2605.00503
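The key structural idea, joining the tokenizer's reconstruction objective and the autoregressive model's generation objective into a single loss so the tokenizer receives supervision from generation, can be sketched with a toy example. Everything here (the slot-based 1D tokenizer, the bigram AR model, the loss weight `lam`) is an illustrative assumption, not the paper's actual architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1D tokenizer: split a flat "image" vector into L latent slots and
# quantize each slot to its nearest codebook entry (shapes are assumptions).
D, L, K = 16, 4, 8                     # image dim, number of 1D tokens, codebook size
codebook = rng.normal(size=(K, D // L))

def tokenize(image):
    slots = image.reshape(L, D // L)   # L slots of dimension D // L
    dists = ((slots[:, None, :] - codebook[None]) ** 2).sum(axis=-1)
    return dists.argmin(axis=1)        # one discrete token index per slot

def detokenize(tokens):
    return codebook[tokens].reshape(D) # reconstruct the image from its codes

def ar_nll(tokens, trans):
    # Mean negative log-likelihood of the token sequence under a
    # stand-in bigram AR model (a real system would use a transformer).
    nll = -np.log(trans[tokens[:-1], tokens[1:]]).sum()
    return nll / (len(tokens) - 1)

image = rng.normal(size=D)
tokens = tokenize(image)
recon = detokenize(tokens)
trans = rng.dirichlet(np.ones(K), size=K)   # toy row-stochastic transition table

recon_loss = ((image - recon) ** 2).mean()  # tokenizer's reconstruction term
gen_loss = ar_nll(tokens, trans)            # generator's AR likelihood term
lam = 0.5                                   # generation-loss weight (assumed)

# End-to-end training optimizes one combined objective, so gradients from
# the generation term can reach the tokenizer parameters; a two-stage
# pipeline would instead minimize recon_loss first, then gen_loss alone.
total_loss = recon_loss + lam * gen_loss
```

In a real implementation the quantization step is non-differentiable, so gradients from `gen_loss` would reach the tokenizer through a straight-through estimator or a similar relaxation; the sketch above only shows how the two objectives are coupled into one loss.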