Retrieval from Within: An Intrinsic Capability of Attention-Based Models

· HF Daily Papers ·

INTRA enables attention-based encoder-decoders to retrieve directly from internal representations, unifying retrieval and generation while outperforming engineered RAG pipelines on QA benchmarks.

Categories: Research

Excerpt

Elad Hoffer, Yochai Blau, Edan Kinderman, Ron Banner, Daniel Soudry

Retrieval-augmented generation (RAG) typically treats retrieval and generation as separate systems. We ask whether an attention-based encoder-decoder can instead retrieve directly from its own internal representations. We introduce INTRA (INTrinsic Retrieval via Attention), a framework in which decoder attention queries score pre-encoded evidence chunks, which are then reused directly as context for generation. By construction, INTRA unifies retrieval and generation, eliminating the retriever-generator mismatch typical of RAG pipelines. This design also amortizes context encoding by reusing precomputed encoder states across queries. On question-answering benchmarks, INTRA outperforms strong engineered retrieval pipelines on both evidence recall and end-to-end answer quality. Our results demonstrate that attention-based models already possess a retrieval mechanism that can be elicited, rather than added as an external module.
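The core idea of scoring pre-encoded chunks with a decoder query can be sketched in a few lines. This is a toy illustration only, not the paper's implementation: the function name `intrinsic_retrieve`, the use of pooled per-chunk vectors, and the dot-product-plus-softmax scoring are assumptions standing in for the full attention machinery.

```python
import math

def softmax(xs):
    # Numerically stable softmax over a list of scores
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def intrinsic_retrieve(query, chunk_encodings, k=2):
    """Score pre-encoded evidence chunks with a decoder attention query.

    Hypothetical sketch: each chunk is represented by a single pooled
    encoder vector, scored by dot product with the query, and the top-k
    chunks are selected for reuse as generation context.
    """
    scores = [sum(q * c for q, c in zip(query, enc)) for enc in chunk_encodings]
    probs = softmax(scores)
    ranked = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)
    return ranked[:k], probs

# Toy pre-encoded chunks (e.g., pooled encoder states), computed once
# and reused across queries -- this is the amortization the abstract notes.
chunks = [[1.0, 0.0], [0.0, 1.0], [0.7, 0.7]]
query = [0.9, 0.1]
top, probs = intrinsic_retrieve(query, chunks, k=2)
print(top)  # indices of the highest-scoring chunks: [0, 2]
```

Because the chunk encodings are fixed, only the cheap scoring step runs per query; the selected chunks' encoder states can then be fed to the decoder as context without a separate retriever model.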