Latest Research Papers
2025-01-21
arXiv
InternVideo2.5: Empowering Video MLLMs with Long and Rich Context Modeling
The paper introduces InternVideo2.5, which enhances video MLLMs by incorporating long and rich context (LRC) modeling, improving their ability to handle fine-grained details and long-form temporal structures. This approach significantly boosts performance on video understanding benchmarks and enables the model to process much longer video inputs. The work underscores the importance of multimodal context richness for advancing MLLM capabilities.
This paper aims to improve the performance of video multimodal large language models (MLLMs) via long and rich context (LRC) modeling. To this end, we develop InternVideo2.5, a new version focused on enhancing the original MLLMs' ability to perceive fine-grained details and capture long-form temporal structure in videos. Specifically, our approach incorporates dense vision task annotations into MLLMs using direct preference optimization and develops compact spatiotemporal representations through adaptive hierarchical token compression. Experimental results demonstrate that this LRC design substantially improves video MLLM results on mainstream video understanding benchmarks (both short and long), enables the MLLM to memorize significantly longer video inputs (at least 6x longer than the original), and lets it master specialized vision capabilities such as object tracking and segmentation. Our work highlights the importance of multimodal context richness (length and fineness) in empowering an MLLM's innate abilities (focus and memory), providing new insights for future research on video MLLMs. Code and models are available at https://github.com/OpenGVLab/InternVideo/tree/main/InternVideo2.5
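To make the abstract's "adaptive hierarchical token compression" concrete, here is a minimal sketch of the general idea: redundant visual tokens are merged spatially within each frame and then again across short temporal groups, so the language model receives a much shorter token sequence per video. All names (`merge_similar_tokens`, `compress_video_tokens`, `keep_ratio`, `temporal_group`) and the similarity-based merging rule are assumptions for illustration, not the paper's actual method; a fixed keep ratio also omits the "adaptive" part. See the linked repository for the real implementation.

```python
# Illustrative sketch only: NOT the InternVideo2.5 implementation.
import torch
import torch.nn.functional as F


def merge_similar_tokens(tokens: torch.Tensor, keep_ratio: float) -> torch.Tensor:
    """Merge each dropped token into its most similar kept token.

    tokens: [N, D] visual tokens for one frame (or one temporal group).
    keep_ratio: fraction of tokens to keep at this level of the hierarchy.
    """
    n = tokens.shape[0]
    n_keep = max(1, int(n * keep_ratio))

    # Rank tokens by mean cosine similarity to all others; highly redundant
    # tokens (high mean similarity) are the candidates for merging.
    normed = F.normalize(tokens, dim=-1)
    sim = normed @ normed.t()                  # [N, N] pairwise similarities
    order = sim.mean(dim=-1).argsort()         # least redundant first
    keep_idx, drop_idx = order[:n_keep], order[n_keep:]

    kept = tokens[keep_idx]
    if drop_idx.numel() == 0:
        return kept

    # Assign every dropped token to its nearest kept token and average it in.
    assign = sim[drop_idx][:, keep_idx].argmax(dim=-1)     # [n_drop]
    merged = kept.clone()
    counts = torch.ones(n_keep, device=tokens.device, dtype=tokens.dtype)
    merged.index_add_(0, assign, tokens[drop_idx])
    counts.index_add_(0, assign, torch.ones_like(assign, dtype=tokens.dtype))
    return merged / counts.unsqueeze(-1)


def compress_video_tokens(video_tokens: torch.Tensor,
                          keep_ratio: float = 0.5,
                          temporal_group: int = 2) -> torch.Tensor:
    """Two-level hierarchy: spatial merging per frame, then merging again
    across short temporal groups of frames.

    video_tokens: [T, N, D] patch tokens for T frames.
    Returns a flat [M, D] sequence to hand to the language model.
    """
    t = video_tokens.shape[0]
    # Level 1: compress within each frame.
    frames = [merge_similar_tokens(video_tokens[i], keep_ratio) for i in range(t)]
    # Level 2: compress across small groups of consecutive frames.
    groups = []
    for start in range(0, t, temporal_group):
        group = torch.cat(frames[start:start + temporal_group], dim=0)
        groups.append(merge_similar_tokens(group, keep_ratio))
    return torch.cat(groups, dim=0)


if __name__ == "__main__":
    toks = torch.randn(8, 196, 1024)           # 8 frames of ViT patch tokens
    compact = compress_video_tokens(toks)
    print(toks.shape, "->", compact.shape)     # far fewer tokens per video
```

In this toy setup, 8 x 196 = 1568 patch tokens shrink to a few hundred, which is the kind of budget reduction that lets a video MLLM fit many more frames into its context window.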