STARFlow2: Bridging Language Models and Normalizing Flows for Unified Multimodal Generation

· HF Daily Papers ·

STARFlow2 uses autoregressive normalizing flows (the same causal structure as LLMs) for unified multimodal generation, built on the Pretzel architecture with vertical VLM interleaving.

Categories: Research

Excerpt

Ying Shen, Tianrong Chen, Yuan Gao, Yizhe Zhang, Yuyang Wang

Deep generative models have advanced rapidly across text and vision, motivating unified multimodal systems that can understand, reason over, and generate interleaved text-image sequences. Most existing approaches combine autoregressive language modeling with diffusion-based image generators, inheriting a structural mismatch between causal text generation and iterative visual denoising. We observe that autoregressive normalizing flows are themselves autoregressive Transformers, sharing the same causal mask, KV-cache mechanism, and left-to-right structure as LLMs, which makes them the most natural paradigm for truly unified multimodal generation. We present STARFlow2, built on the Pretzel architecture, which vertically interleaves a pretrained VLM stream with a TarFlow stream via residual skip connections, both operating under the same causal mask. Combined with a deep-shallow flow design and a unified FAE latent space, STARFlow2 enables cache-friendly interleaved generation in which both text and visual outputs enter the KV-cache directly, without re-encoding. Experiments demonstrate strong performance across image generation and multimodal understanding benchmarks, validating autoregressive flows as a viable foundation for unified multimodal modeling.
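To make the "flows are causal Transformers" observation concrete, here is a minimal PyTorch sketch of one autoregressive flow step in the TarFlow style: a causal Transformer reads tokens before position t and predicts an affine transform for token t, so training is one parallel masked pass (like LLM teacher forcing) and sampling is the same sequential, KV-cache-friendly loop as LLM decoding. All class names, dimensions, and hyperparameters below are illustrative assumptions, not the paper's code.

```python
import torch
import torch.nn as nn

class CausalFlowBlock(nn.Module):
    """One autoregressive flow step: a causal Transformer predicts, from
    tokens < t, an affine transform applied to token t (TarFlow-style).
    Illustrative sketch only; wiring and sizes are assumptions."""

    def __init__(self, d_model: int = 64, n_heads: int = 4, n_layers: int = 2):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.backbone = nn.TransformerEncoder(layer, n_layers)
        self.to_affine = nn.Linear(d_model, 2 * d_model)     # -> (mu, log_sigma)
        self.bos = nn.Parameter(torch.zeros(1, 1, d_model))  # learned start token

    def forward(self, z: torch.Tensor):
        # z: (B, T, D). Shift the input right so position t only sees z_{<t},
        # under the same square causal mask an LLM uses.
        B, T, _ = z.shape
        ctx = torch.cat([self.bos.expand(B, -1, -1), z[:, :-1]], dim=1)
        mask = nn.Transformer.generate_square_subsequent_mask(T)
        h = self.backbone(ctx, mask=mask)
        mu, log_sigma = self.to_affine(h).chunk(2, dim=-1)
        u = (z - mu) * torch.exp(-log_sigma)   # normalize each token
        log_det = -log_sigma.sum(dim=(1, 2))   # change-of-variables term
        return u, log_det                      # train by maximizing log-likelihood

    @torch.no_grad()
    def invert(self, u: torch.Tensor):
        # Sampling inverts the flow token by token: the same left-to-right
        # loop as LLM decoding, so a KV-cache applies unchanged (context is
        # recomputed here only for brevity).
        B, T, _ = u.shape
        z = torch.zeros_like(u)
        for t in range(T):
            ctx = torch.cat([self.bos.expand(B, -1, -1), z[:, :t]], dim=1)
            mask = nn.Transformer.generate_square_subsequent_mask(t + 1)
            h = self.backbone(ctx, mask=mask)
            mu, log_sigma = self.to_affine(h[:, -1]).chunk(2, dim=-1)
            z[:, t] = u[:, t] * torch.exp(log_sigma) + mu
        return z
```

A quick sanity check that the two passes are mutual inverses (in eval mode, since the default layers use dropout):

```python
flow = CausalFlowBlock().eval()
z = torch.randn(2, 8, 64)
u, log_det = flow(z)
assert torch.allclose(flow.invert(u), z, atol=1e-4)
```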
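The abstract also describes the Pretzel architecture as vertically interleaving a pretrained VLM stream with a TarFlow stream via residual skip connections under one causal mask. Below is a similarly hedged sketch of that idea; the skip wiring, module names, and use of generic Transformer blocks are assumptions for illustration, not the paper's exact design.

```python
import torch
import torch.nn as nn

def _block(d_model: int = 64, n_heads: int = 4) -> nn.TransformerEncoderLayer:
    # Stand-in for one Transformer block in either stream.
    return nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)

class PretzelLayer(nn.Module):
    """One vertically interleaved layer: a VLM block and a flow block share
    a causal mask and exchange residual skips. Hypothetical wiring."""

    def __init__(self, d_model: int = 64):
        super().__init__()
        self.vlm_block = _block(d_model)   # pretrained VLM stream
        self.flow_block = _block(d_model)  # TarFlow stream
        self.skip_down = nn.Linear(d_model, d_model)  # VLM -> flow skip
        self.skip_up = nn.Linear(d_model, d_model)    # flow -> VLM skip

    def forward(self, h_vlm, h_flow, causal_mask):
        # Both streams run under the SAME causal mask, so interleaved text
        # and image tokens live in one left-to-right sequence and can share
        # a KV-cache without re-encoding earlier outputs.
        h_vlm = self.vlm_block(h_vlm, src_mask=causal_mask)
        h_flow = self.flow_block(h_flow + self.skip_down(h_vlm), src_mask=causal_mask)
        return h_vlm + self.skip_up(h_flow), h_flow

layer = PretzelLayer(d_model=64)
mask = nn.Transformer.generate_square_subsequent_mask(10)
h_vlm, h_flow = layer(torch.randn(2, 10, 64), torch.randn(2, 10, 64), mask)
```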