LVPruning: An Effective yet Simple Language-Guided Vision Token Pruning Approach for Multi-modal Large Language Models

Jingyuan Sun, Yizheng Sun, Riza Theresa Batista-Navarro, Chenghua Lin, Hao Li, Yanze Xin

Research output: Chapter in Book/Conference proceeding › Conference contribution › peer-review


Abstract

Multi-modal Large Language Models (MLLMs) have achieved remarkable success by integrating visual and textual modalities. However, they incur significant computational overhead due to the large number of vision tokens they process, which limits their practicality in resource-constrained environments. We introduce Language-Guided Vision Token Pruning (LVPruning) for MLLMs, an effective yet simple method that significantly reduces the computational burden while preserving model performance. LVPruning employs cross-attention modules to compute the importance of vision tokens based on their interaction with language tokens, determining which tokens to prune. Importantly, LVPruning can be integrated without modifying the original MLLM parameters, making it simple to apply or remove. Our experiments show that LVPruning can effectively remove up to 90% of vision tokens by the middle layer of LLaVA-1.5, yielding a 62.1% reduction in inference Tera Floating-Point Operations (TFLOPs) with an average performance loss of just 0.45% across nine multi-modal benchmarks.
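
The mechanism described in the abstract can be illustrated with a short sketch. The following PyTorch example is a minimal, hypothetical illustration of scoring vision tokens with cross-attention from language tokens and keeping only the highest-scoring ones; the names (CrossAttentionScorer, prune_vision_tokens) and the keep_ratio parameter are illustrative assumptions and do not reflect the authors' actual implementation or hyperparameters.

```python
# Hypothetical sketch of language-guided vision token pruning.
# CrossAttentionScorer, prune_vision_tokens, and keep_ratio are illustrative
# names/assumptions, not the paper's implementation.

import torch
import torch.nn as nn


class CrossAttentionScorer(nn.Module):
    """Scores vision tokens by how strongly language tokens attend to them."""

    def __init__(self, dim: int, num_heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, vision_tokens: torch.Tensor, language_tokens: torch.Tensor) -> torch.Tensor:
        # Language tokens act as queries; vision tokens as keys/values.
        # attn_weights: (batch, num_language_tokens, num_vision_tokens)
        _, attn_weights = self.attn(
            query=language_tokens,
            key=vision_tokens,
            value=vision_tokens,
            need_weights=True,
            average_attn_weights=True,
        )
        # Average over language queries -> one importance score per vision token.
        return attn_weights.mean(dim=1)  # (batch, num_vision_tokens)


def prune_vision_tokens(
    vision_tokens: torch.Tensor,
    language_tokens: torch.Tensor,
    scorer: CrossAttentionScorer,
    keep_ratio: float = 0.1,
) -> torch.Tensor:
    """Keep only the top `keep_ratio` fraction of vision tokens per sample."""
    scores = scorer(vision_tokens, language_tokens)              # (B, Nv)
    num_keep = max(1, int(vision_tokens.size(1) * keep_ratio))
    top_idx = scores.topk(num_keep, dim=1).indices               # (B, num_keep)
    top_idx = top_idx.sort(dim=1).values                         # preserve token order
    batch_idx = torch.arange(vision_tokens.size(0)).unsqueeze(1)
    return vision_tokens[batch_idx, top_idx]                     # (B, num_keep, D)


if __name__ == "__main__":
    B, Nv, Nt, D = 2, 576, 32, 256   # e.g. 576 vision tokens, as in LLaVA-1.5
    vision = torch.randn(B, Nv, D)
    language = torch.randn(B, Nt, D)
    scorer = CrossAttentionScorer(D)
    pruned = prune_vision_tokens(vision, language, scorer, keep_ratio=0.1)
    print(pruned.shape)  # torch.Size([2, 57, 256])
```

In this sketch, keep_ratio=0.1 retains 57 of 576 vision tokens per image, mirroring the roughly 90% reduction by the middle layer described in the abstract; the scorer is a separate module, so the backbone MLLM parameters are left untouched.
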
Original language: English
Title of host publication: 2024 Annual Conference of the North American Chapter of the Association for Computational Linguistics
Publication status: Accepted/In press - 23 Jan 2025
