MambaMia: State-Space Hierarchical Compression with Gated Attention and Learnable Sampling for Hour-Long Video Understanding in Large Multimodal Models
Welcome to the official repository for MambaMia.
Understanding hour-long videos with Large Multimodal Models (LMMs) remains a significant challenge due to the immense computational cost and heavy information redundancy of long video inputs. To address this, we introduce State-Space Hierarchical Compression, a novel approach leveraging Gated Attention and Learnable Sampling.
MambaMia focuses on optimizing key components to balance long-context performance with manageable computational costs. By introducing a hierarchical compression mechanism based on State-Space Models (SSMs), we effectively filter and aggregate visual information over long temporal horizons. Our experiments demonstrate that MambaMia achieves state-of-the-art performance in hour-long video understanding tasks while significantly reducing memory usage and inference latency compared to existing LMMs.
Here, you will find the resources and codebase needed to replicate our findings. We hope this repository serves as a valuable resource for research in efficient long-video understanding.
Paper Title: State-Space Hierarchical Compression with Gated Attention and Learnable Sampling for Hour-Long Video Understanding in Large Multimodal Models
Authors: Geewook Kim and Minjoon Seo
Conference: AAAI 2026 (Oral Presentation)
- [2025-12-21] Slides and Poster are now available.
- [2025-11-08] 📢 Accepted to AAAI 2026 (Oral)!
- [2025-06-16] Preprint uploaded to arXiv.
To get started with MambaMia, execute the following installation command:

```bash
bash install.sh
```

This will:
- Initialize the LLaVA submodule
- Copy MambaMia modifications to LLaVA
- Install required dependencies
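
After the script finishes, a quick sanity check with standard git commands (nothing specific to this repository) can confirm the submodule step succeeded:

```bash
# After install.sh completes, the LLaVA submodule should show a commit hash
# with no leading '-' (i.e., it has been initialized and checked out).
git submodule status
```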
Requirements:

- Python >= 3.10
- PyTorch >= 2.0.1
- CUDA >= 11.8 (for mamba-ssm)
- transformers >= 4.44.0
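
To verify these version constraints in your environment, plain package introspection is enough (these are standard Python/PyTorch calls, not MambaMia utilities):

```bash
# Print the versions that the requirements above constrain
python -c "import sys; print('python      ', sys.version.split()[0])"
python -c "import torch; print('torch       ', torch.__version__, '| CUDA', torch.version.cuda)"
python -c "import transformers; print('transformers', transformers.__version__)"
python -c "import mamba_ssm; print('mamba-ssm import ok')"
```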
Model checkpoints:

| Model | Backbone | Link |
|---|---|---|
| MambaMia-Vicuna-7B | Vicuna-7B | Coming Soon |
| MambaMia-Qwen2-7B | Qwen2-7B | Coming Soon |
Train your own MambaMia model using the provided script:
```bash
bash train_mambamia.sh
```

Key parameters:

| Parameter | Description | Default |
|---|---|---|
| `PROJECTOR_TYPE` | MambaMia projector type | `video_projector_mambamiav7.1` |
| `PROJECTOR_MODE` | Compression mode | `per_from1to4frames_24patchwise_576tokperframe` |
| `PROJECTOR_NUM_LAYERS` | Number of SSM layers | 3 |
| `MAX_FRAME` | Maximum frames to process | 128 |
| `FPS` | Video sampling rate (frames per second) | 2.0 |
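
As a usage sketch, these variables can be adjusted for a shorter-context run. Whether `train_mambamia.sh` reads them from the environment is an assumption here; if it defines them internally, edit the corresponding lines at the top of the script instead:

```bash
# Hypothetical override: halve the frame budget and sample at 1 fps.
# Assumes train_mambamia.sh picks these variables up from the environment;
# otherwise, edit the same variables inside the script before launching.
PROJECTOR_TYPE=video_projector_mambamiav7.1 \
PROJECTOR_MODE=per_from1to4frames_24patchwise_576tokperframe \
PROJECTOR_NUM_LAYERS=3 \
MAX_FRAME=64 \
FPS=1.0 \
bash train_mambamia.sh
```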
Evaluate trained models using lmms-eval:
```bash
# First, copy the evaluation model file to lmms-eval
cp MambaMia/mambamia_vid.py /path/to/lmms-eval/lmms_eval/models/

# Run evaluation
bash eval_mambamia.sh
```
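
You can also call lmms-eval directly once the model file is in place. The registered model name (`mambamia_vid`), checkpoint path, and task below are placeholders for illustration; `eval_mambamia.sh` contains the arguments actually used:

```bash
# Illustrative direct call -- model name, checkpoint path, and task are placeholders;
# see eval_mambamia.sh for the exact settings it uses.
python -m lmms_eval \
    --model mambamia_vid \
    --model_args pretrained=/path/to/mambamia_checkpoint \
    --tasks videomme \
    --batch_size 1 \
    --output_path ./logs/
```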
MambaMia introduces a hierarchical compression mechanism:

- Spatiotemporal Compression (1st stage): Bi-directional Mamba2 with Gated Pooling Aggregation (GPA)
  - Compresses spatial tokens within each frame
  - Mode: `v4` (bi-directional + GPA)
- Temporal Compression (2nd stage): Uni-directional Mamba2 with delta extraction
  - Aggregates temporal information across frames
  - Mode: `v0delta` (uni-directional + delta output)
- Learnable Sampling: Delta-based importance scoring for adaptive frame selection
If you find our work helpful for your research, please consider citing:
```bibtex
@misc{kim2025mambamia,
      title={MambaMia: A State-Space-Model-Based Compression for Efficient Video Understanding in Large Multimodal Models},
      author={Geewook Kim and Minjoon Seo},
      year={2025},
      eprint={2506.13564},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2506.13564},
}
```

MambaMia
Copyright (c) 2026-present NAVER Cloud Corp.
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
This project builds upon:
- Transformers - Hugging Face Transformers
- LLaVA - Large Language and Vision Assistant
- LLaVA-NeXT - LLaVA-NeXT
- Mamba - State Space Models
- lmms-eval - Evaluation framework
See also our sister project:
- ELVA - Efficient Language and Vision Assistant