REVEAL: Retrieval-Augmented Visual-Language Pre-Training with Multi-Source Multimodal Knowledge Memory

Ziniu Hu¹, Ahmet Iscen², Chen Sun², Zirui Wang², Kai-Wei Chang¹, Yizhou Sun¹,
Cordelia Schmid², David A. Ross², Alireza Fathi²
¹University of California, Los Angeles, ²Google Research

REVEAL is an end-to-end Retrieval-Augmented Visual Language Model that learns to encode world knowledge into a large-scale memory, and to retrieve from it to answer knowledge-intensive queries.

Abstract

We propose an end-to-end Retrieval-Augmented Visual Language Model (REVEAL) that learns to encode world knowledge into a large-scale memory, and to retrieve from it to answer knowledge-intensive queries.

REVEAL consists of four key components: the memory, the encoder, the retriever, and the generator. The large-scale memory encodes various sources of multimodal world knowledge (e.g., image-text pairs, question-answering pairs, knowledge graph triplets) via a unified encoder. The retriever finds the most relevant knowledge entries in the memory, and the generator fuses the retrieved knowledge with the input query to produce the output. A key novelty of our approach is that the memory, encoder, retriever, and generator are all pre-trained end-to-end on a massive amount of data. Furthermore, our approach can draw on a diverse set of multimodal knowledge sources, which is shown to yield significant gains. We show that REVEAL achieves state-of-the-art results on visual question answering and image captioning.
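
To make the retrieve-then-generate flow concrete, below is a minimal sketch of retrieval-augmented generation in this setting. It is not the paper's implementation: the names query_embedding, memory_keys, memory_values, and generator are hypothetical placeholders for the unified encoder's output, the pre-computed memory, and the generator model, and the attentive knowledge fusion used in REVEAL is reduced here to a simple score-weighted concatenation for clarity.

import numpy as np

def retrieve(query_embedding, memory_keys, k=5):
    # Score every memory entry against the query with a dot product
    # (assumes memory keys and the query are L2-normalized at encoding time).
    scores = memory_keys @ query_embedding
    top = np.argsort(-scores)[:k]          # indices of the k highest-scoring entries
    return top, scores[top]

def generate_with_retrieval(query_embedding, memory_keys, memory_values,
                            generator, k=5):
    # 1. Retrieve the k most relevant knowledge entries from the memory.
    idx, scores = retrieve(query_embedding, memory_keys, k)
    # 2. Softmax the retrieval scores and weight the retrieved knowledge by them,
    #    so the retriever can receive gradient signal through the generator
    #    when everything is trained end-to-end.
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    fused = np.concatenate([query_embedding[None, :],
                            weights[:, None] * memory_values[idx]], axis=0)
    # 3. Condition the generator on the fused query + retrieved knowledge.
    return generator(fused)

Because the retrieval weights enter the generator's input, the retriever, encoder, and generator can all be updated from the same generation loss, which is the end-to-end pre-training property emphasized above.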

BibTeX

@inproceedings{hu2023reveal,
  author    = {Ziniu Hu and Ahmet Iscen and Chen Sun and Zirui Wang and Kai-Wei Chang and Yizhou Sun and Cordelia Schmid and David A. Ross and Alireza Fathi},
  title     = {REVEAL: Retrieval-Augmented Visual-Language Pre-Training with Multi-Source Multimodal Knowledge Memory},
  booktitle = {CVPR},
  year      = {2023},
}