Spotlight Poster
Tuning LayerNorm in Attention: Towards Efficient Multi-Modal LLM Finetuning
Bingchen Zhao · Haoqin Tu · Chen Wei · Jieru Mei · Cihang Xie
Halle B
This paper introduces an efficient strategy to transform Large Language Models (LLMs) into Multi-Modal Large Language Models (MLLMs). By conceptualizing this transformation as a domain adaptation process, i.e., transitioning from text understanding to embracing multiple modalities, we intriguingly note that, within each attention block, tuning LayerNorm suffices to yield strong performance. Moreover, when benchmarked against other tuning approaches like full-parameter finetuning or LoRA, its efficiency benefits are substantial. For example, compared to LoRA at the 13B model scale, performance improves by an average of over 20% across five multi-modal tasks, while the number of trainable parameters is reduced by 41.9% and GPU memory usage by 17.6%. On top of this LayerNorm strategy, we showcase that selectively tuning with only conversational data can improve efficiency further. Beyond these empirical outcomes, we provide a comprehensive analysis to explore the role of LayerNorm in adapting LLMs to the multi-modal domain and improving the expressive power of the model.
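In practice, the recipe amounts to freezing the backbone and leaving only the LayerNorm affine parameters trainable. Below is a minimal sketch of that idea, not the authors' released code: it uses a toy PyTorch transformer stack in place of the LLM and assumes standard `nn.LayerNorm` modules (a real LLaMA-style backbone uses RMSNorm, which would be matched by module name instead).

```python
import torch
import torch.nn as nn

# Toy stand-in for an LLM decoder stack (assumption: real backbones differ).
backbone = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=512, nhead=8, batch_first=True),
    num_layers=4,
)

# 1) Freeze every parameter in the backbone.
for p in backbone.parameters():
    p.requires_grad = False

# 2) Re-enable gradients only for LayerNorm affine parameters (gain and bias)
#    inside each transformer block.
for module in backbone.modules():
    if isinstance(module, nn.LayerNorm):
        for p in module.parameters():
            p.requires_grad = True

trainable = sum(p.numel() for p in backbone.parameters() if p.requires_grad)
total = sum(p.numel() for p in backbone.parameters())
print(f"trainable: {trainable:,} / {total:,} ({100 * trainable / total:.3f}%)")

# 3) The optimizer then only updates the LayerNorm parameters.
optimizer = torch.optim.AdamW(
    (p for p in backbone.parameters() if p.requires_grad), lr=1e-4
)
```

The printed ratio makes the efficiency argument concrete: only a tiny fraction of the parameters receives gradients, which is what drives the reported savings in trainable parameters and GPU memory relative to LoRA or full-parameter finetuning.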