2024-03-05_VILA - On Pre-training for Visual Language Models

| File | Date | Authors | Link | Source |
| --- | --- | --- | --- | --- |
| 2024_Lin_VILA.pdf | 2024-03-05 | Ji Lin, Hongxu Yin, Wei Ping, Yao Lu, Pavlo Molchanov, Andrew Tao, Huizi Mao, Jan Kautz, Mohammad Shoeybi, Song Han | URL | arxiv |

An exploratory/ablation study of VLM pre-training.

Image Resolution > num_visual_tokens

See [MM1] for comparison: raising the image resolution matters more than the raw number of visual tokens.
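A toy arithmetic sketch (not from the paper) of why resolution and visual-token count are separate knobs: the token count grows quadratically with resolution but can be held down by pooling. Patch size 14 and the downsample factor are illustrative assumptions.

```python
def num_visual_tokens(resolution: int, patch_size: int = 14, downsample: int = 1) -> int:
    """Number of visual tokens a ViT produces for a square image.

    Raising `resolution` increases the token count quadratically, but pooling
    (`downsample` > 1) can keep it fixed, so resolution and token count are
    independent knobs. Patch size 14 matches common ViT backbones; the
    downsample factor is purely illustrative.
    """
    grid = resolution // patch_size
    return (grid // downsample) ** 2

# 336 px, no pooling:   (336 // 14) ** 2          = 576 tokens
# 448 px, 2x2 pooling:  ((448 // 14) // 2) ** 2   = 256 tokens  (higher resolution, fewer tokens)
```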

Architecture and Frozen Training

The paper keeps the visual encoder frozen throughout and adds a stage 0: projector initialization. The LLM and ViT are separately pre-trained, while the projector is usually initialized from random weights and trained with paired data.

Projector: a simple linear layer works slightly better than a Transformer block.

freezing LLMs during pre-training can achieve decent zero-shot performance, but lack in-context learning capability, which requires unfreezing the LLM

During SFT, always unfreeze the LLM!
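A minimal PyTorch-style sketch of the staged freezing schedule described above, assuming illustrative module names (`visual_encoder`, `projector`, `llm`) rather than the paper's actual code:

```python
import torch.nn as nn


def set_trainable(module: nn.Module, trainable: bool) -> None:
    """Freeze or unfreeze all parameters of a module."""
    for p in module.parameters():
        p.requires_grad = trainable


def configure_stage(visual_encoder: nn.Module, projector: nn.Module,
                    llm: nn.Module, stage: str) -> None:
    """Apply the freezing schedule for each training stage.

    stage0:   projector initialization -- train the randomly initialized
              projector on paired data; ViT and LLM stay frozen.
    pretrain: interleaved pre-training -- unfreeze the LLM (a frozen LLM
              gives decent zero-shot but weak in-context learning).
    sft:      instruction fine-tuning -- the LLM must remain unfrozen.
    """
    set_trainable(visual_encoder, False)  # visual encoder is frozen in every stage
    if stage == "stage0":
        set_trainable(projector, True)
        set_trainable(llm, False)
    elif stage in ("pretrain", "sft"):
        set_trainable(projector, True)
        set_trainable(llm, True)
    else:
        raise ValueError(f"unknown stage: {stage}")
```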

The interleaved structure matters

1. Converting interleaved data into image-text pairs works poorly (the images and their surrounding text in interleaved corpora are not strongly correlated).
2. Also tried dropping the image-text ordering and placing all images up front as a prefix: <im1><im2><txt1><txt2> instead of <im1><txt1><im2><txt2> (see the sketch after the quote below).

most of the information is still from pure text modeling!
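A small sketch contrasting the two token orderings from item 2, using placeholder segment tags; this is only an illustration of the data layout, not the paper's data pipeline:

```python
from typing import List, Tuple


def build_sequence(segments: List[Tuple[str, str]],
                   image_as_prefix: bool = False) -> List[str]:
    """Assemble a multimodal token stream from (kind, content) segments.

    image_as_prefix=False keeps the natural interleaved order,
        <im1><txt1><im2><txt2>,
    while image_as_prefix=True moves every image to the front,
        <im1><im2><txt1><txt2>  (the ablated ordering).
    """
    if not image_as_prefix:
        return [content for _, content in segments]
    images = [c for kind, c in segments if kind == "image"]
    texts = [c for kind, c in segments if kind == "text"]
    return images + texts


segs = [("image", "<im1>"), ("text", "<txt1>"), ("image", "<im2>"), ("text", "<txt2>")]
assert build_sequence(segs) == ["<im1>", "<txt1>", "<im2>", "<txt2>"]
assert build_sequence(segs, image_as_prefix=True) == ["<im1>", "<im2>", "<txt1>", "<txt2>"]
```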

Text-only data in SFT

The paper does not try introducing text-only corpora in the pre-training stage; only the SFT stage is explored.

re-blending text-only instruction data to image-text data during instruction fine-tuning not only remedies the degradation of text-only tasks, but also boosts VLM task accuracy
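A hedged sketch of what this re-blending could look like in code; the `text_ratio` blending fraction and the helper name are purely illustrative, not values or code from the paper:

```python
import random
from typing import Dict, List


def blend_sft_data(image_text_samples: List[Dict],
                   text_only_samples: List[Dict],
                   text_ratio: float = 0.25,
                   seed: int = 0) -> List[Dict]:
    """Mix text-only instruction data back into the visual SFT set.

    text_ratio is a hypothetical blending fraction: the number of text-only
    samples added is text_ratio * len(image_text_samples), capped by how
    much text-only data is available.
    """
    rng = random.Random(seed)
    n_text = min(int(len(image_text_samples) * text_ratio), len(text_only_samples))
    blended = image_text_samples + rng.sample(text_only_samples, n_text)
    rng.shuffle(blended)  # interleave the two sources before fine-tuning
    return blended
```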

Abstract

Visual language models (VLMs) rapidly progressed with the recent success of large language models. There have been growing efforts on visual instruction tuning to extend the LLM with visual inputs, but lacks an in-depth study of the visual language pre-training process, where the model learns to perform joint modeling on both modalities. In this work, we examine the design options for VLM pre-training by augmenting LLM towards VLM through step-by-step controllable comparisons. We introduce three main findings: (1) freezing LLMs during pre-training can achieve decent zero-shot performance, but lack in-context learning capability, which requires unfreezing the LLM; (2) interleaved pre-training data is beneficial whereas image-text pairs alone are not optimal; (3) re-blending text-only instruction data to image-text data during instruction fine-tuning not only remedies the degradation of text-only tasks, but also boosts VLM task accuracy. With an enhanced pre-training recipe we build VILA, a Visual Language model family that consistently outperforms the state-of-the-art models, e.g., LLaVA-1.5, across main benchmarks without bells and whistles. Multi-modal pre-training also helps unveil appealing properties of VILA, including multi-image reasoning, enhanced in-context learning, and better world knowledge.