Online Self-Calibration Against Hallucination in Vision-Language Models
Researchers identify that LVLMs are more accurate at discriminative verification than at open-ended generation, and exploit this Generative-Discriminative Gap to create online self-supervision that outperforms offline GPT-distilled alignment.
Excerpt
Minghui Chen, Chenxu Yang, Hengjie Zhu, Dayan Wu, Zheng Lin

Large Vision-Language Models (LVLMs) often suffer from hallucinations, generating descriptions that include visual details absent from the input image. Recent preference alignment methods typically rely on supervision distilled from stronger models such as GPT. However, this offline paradigm introduces a Supervision-Perception Mismatch: the student model is forced to align with fine-grained details beyond its perceptual capacity, learning to guess rather than to see. To obtain reliable self-supervision for online learning, we identify a Generative-Discriminative Gap within LVLMs, where models exhibit higher accuracy on discriminative verification than on open-ended generation. Leveraging this capability, we propose Online Self-CAlibRation (OSCAR), a framework that integrates Monte Carlo Tree Search with a Dual-Granularity Reward Mechanism to construct preference data and iteratively refines the model via Direct Preference Optimization. Extensive experiments demonstrate that OSCAR achieves state-of-the-art performance on hallucination benchmarks while improving general multimodal capabilities.
Read at source: https://arxiv.org/abs/2605.00323
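
To make the pipeline concrete, here is a minimal sketch of the kind of online self-supervision loop the abstract describes: sample candidate descriptions, score each one using the model's own discriminative verification of its atomic claims, and train on the resulting best/worst preference pair with the standard DPO objective. This is an illustrative sketch, not the paper's implementation: the `generate`, `verify`, and `logprob` interfaces are assumptions, and OSCAR's actual MCTS search and Dual-Granularity Reward are more involved than this simple averaging of claim-level verification scores.

```python
import torch.nn.functional as F

# Hypothetical interface (assumed, not the paper's API):
#   lvlm.generate(image, prompt, n)   -> list of n candidate descriptions
#   lvlm.verify(image, claim)         -> P("yes" | "Is this claim true of the image?")
#   model.logprob(image, prompt, txt) -> sequence log-probability (scalar tensor)

def build_preference_pair(lvlm, image, prompt, claims_of, n=8):
    """Rank the model's own samples with its (more reliable) discriminative
    verification, exploiting the Generative-Discriminative Gap, and return
    the highest- and lowest-scoring descriptions as (chosen, rejected)."""
    candidates = lvlm.generate(image, prompt, n=n)
    scored = []
    for desc in candidates:
        # Decompose the description into atomic visual claims (claims_of is
        # a hypothetical decomposition helper), then verify each claim.
        claims = claims_of(desc)
        score = sum(lvlm.verify(image, c) for c in claims) / max(len(claims), 1)
        scored.append((score, desc))
    scored.sort(key=lambda pair: pair[0])
    return scored[-1][1], scored[0][1]  # chosen, rejected

def dpo_loss(policy, ref, image, prompt, chosen, rejected, beta=0.1):
    """Standard DPO objective on one preference pair:
    -log sigmoid(beta * [(log pi(y_w) - log ref(y_w)) - (log pi(y_l) - log ref(y_l))])."""
    margin = beta * (
        (policy.logprob(image, prompt, chosen) - ref.logprob(image, prompt, chosen))
        - (policy.logprob(image, prompt, rejected) - ref.logprob(image, prompt, rejected))
    )
    return -F.logsigmoid(margin)
```

The key idea this sketch captures is that yes/no verification of individual claims is an easier task than open-ended generation for the same model, so verification scores can reliably rank the model's own samples without any external teacher, yielding preference data for each round of DPO refinement.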