---
license: apache-2.0
task_categories:
- multiple-choice
language:
- en
- zh
tags:
- audio-visual
- omnimodality
- multi-modality
- benchmark
pretty_name: XModBench
size_categories:
- 10K<n<100K
---

<h1 align="center">
XModBench: Benchmarking Cross-Modal Capabilities and Consistency in Omni-Language Models
</h1>

<p align="center">
  <img src="https://xingruiwang.github.io/projects/XModBench/static/images/teaser.png" width="90%" alt="XModBench teaser">
</p>

<p align="center">
  <a href="https://arxiv.org/abs/2510.15148">
    <img src="https://img.shields.io/badge/Arxiv-Paper-b31b1b.svg" alt="Paper">
  </a>
  <a href="https://xingruiwang.github.io/projects/XModBench/">
    <img src="https://img.shields.io/badge/Website-Page-0a7aca?logo=globe&logoColor=white" alt="Website">
  </a>
  <a href="https://huggingface.co/datasets/RyanWW/XModBench">
    <img src="https://img.shields.io/badge/Huggingface-Dataset-FFD21E?logo=huggingface" alt="Dataset">
  </a>
  <a href="https://github.com/XingruiWang/XModBench">
    <img src="https://img.shields.io/badge/Github-Code-181717?logo=github&logoColor=white" alt="GitHub Repo">
  </a>
  <a href="https://opensource.org/licenses/MIT">
    <img src="https://img.shields.io/badge/License-MIT-green.svg" alt="License: MIT">
  </a>
</p>

XModBench is a comprehensive benchmark designed to evaluate the cross-modal capabilities and consistency of omni-language models. It systematically assesses model performance across multiple modalities (text, vision, audio) and a range of cognitive tasks, revealing critical gaps in current state-of-the-art models.

### Key Features

- **🎯 Multi-Modal Evaluation**: Comprehensive testing across text, vision, and audio modalities
- **🧩 5 Task Dimensions**: Perception, Spatial, Temporal, Linguistic, and Knowledge tasks
- **📊 13 SOTA Models Evaluated**: Including Gemini 2.5 Pro, Qwen2.5-Omni, EchoInk-R1, and more
- **🔄 Consistency Analysis**: Measures performance stability across different modal configurations
- **👥 Human Performance Baseline**: Establishes human-level benchmarks for comparison

## 🚀 Quick Start

### Installation

```bash
# Clone the repository
git clone https://github.com/XingruiWang/XModBench.git
cd XModBench

# Install dependencies
pip install -r requirements.txt
```

## 📂 Dataset Structure

### Download and Setup

After cloning the dataset repository from HuggingFace, extract the data archive:

```bash
# Download the dataset from HuggingFace
git clone https://huggingface.co/datasets/RyanWW/XModBench
cd XModBench

# Extract the Data.zip archive; this produces the directory layout shown below
unzip Data.zip
```
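
Alternatively, if you prefer not to use `git clone` (which typically needs `git-lfs` for the large `Data.zip` archive), the same files can be fetched with the `huggingface_hub` Python client; the target directory below is just an illustrative choice.

```python
from huggingface_hub import snapshot_download

# Download the full dataset repository (including Data.zip) into ./XModBench
snapshot_download(
    repo_id="RyanWW/XModBench",
    repo_type="dataset",
    local_dir="XModBench",
)
```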

### Directory Structure

```
XModBench/
├── Data/                                          # Unzipped from Data.zip
│   ├── landscape_audiobench/                      # Nature sound scenes
│   ├── emotions/                                  # Emotion classification data
│   ├── solos_processed/                           # Musical instrument solos
│   ├── gtzan-dataset-music-genre-classification/  # Music genre data
│   ├── singers_data_processed/                    # Singer identification
│   ├── temporal_audiobench/                       # Temporal reasoning tasks
│   ├── urbansas_samples_videos_filtered/          # Urban 3D movements
│   ├── STARSS23_processed_augmented/              # Spatial audio panorama
│   ├── vggss_audio_bench/                         # Fine-grained audio-visual
│   ├── URMP_processed/                            # Musical instrument arrangements
│   ├── ExtremCountAV/                             # Counting tasks
│   ├── posters/                                   # Movie posters
│   └── trailer_clips/                             # Movie trailers
│
└── tasks/                                         # Task configurations (ready to use)
    ├── 01_perception/                             # Perception tasks
    │   ├── finegrained/                           # Fine-grained recognition
    │   ├── natures/                               # Nature scenes
    │   ├── instruments/                           # Musical instruments
    │   ├── instruments_comp/                      # Instrument compositions
    │   └── general_activities/                    # General activities
    ├── 02_spatial/                                # Spatial reasoning tasks
    │   ├── 3D_movements/                          # 3D movement tracking
    │   ├── panaroma/                              # Panoramic spatial audio
    │   └── arrangements/                          # Spatial arrangements
    ├── 03_speech/                                 # Speech and language tasks
    │   ├── recognition/                           # Speech recognition
    │   └── translation/                           # Translation
    ├── 04_temporal/                               # Temporal reasoning tasks
    │   ├── count/                                 # Temporal counting
    │   ├── order/                                 # Temporal ordering
    │   └── calculation/                           # Temporal calculations
    └── 05_Exteral/                                # Additional classification tasks
        ├── emotion_classification/                # Emotion recognition
        ├── music_genre_classification/            # Music genre
        ├── singer_identification/                 # Singer identification
        └── movie_matching/                        # Movie matching
```

**Note**: All file paths in the task JSON files are relative (`./benchmark/Data/...`), so make sure your working directory is set so that these paths resolve when running evaluations.
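
As a minimal sketch of what that looks like in practice, the snippet below loads one task file and checks that its media paths resolve. It assumes the extracted dataset is reachable as `./benchmark` from the working directory (for instance via a symlink), and the task file name and field names (`audio`, `image`, `video`) are illustrative guesses; inspect the JSON files under `tasks/` for the actual schema.

```python
import json
from pathlib import Path

# Working directory from which evaluations are run; the extracted dataset is
# assumed to be reachable here as ./benchmark (e.g. via a symlink).
workdir = Path(".")

# Hypothetical task file; substitute any JSON file found under tasks/.
task_file = workdir / "benchmark" / "tasks" / "01_perception" / "natures" / "task.json"

with open(task_file) as f:
    samples = json.load(f)  # assumed to be a list of samples

for sample in samples[:3]:
    for key in ("audio", "image", "video"):  # assumed field names
        if key in sample:
            media = workdir / sample[key]    # e.g. ./benchmark/Data/...
            print(key, media, "exists:", media.exists())
```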

### Basic Usage

Evaluations are launched with `scripts/run.py` from the code repository. The example below is a SLURM batch script, but the same `python scripts/run.py` command can also be run directly on any machine with suitable GPUs.

```bash
#!/bin/bash
#SBATCH --job-name=VLM_eval
#SBATCH --output=log/job_%j.out
#SBATCH --error=log/job_%j.log
#SBATCH --ntasks-per-node=1
#SBATCH --gpus-per-node=4

echo "Running on host: $(hostname)"
echo "CUDA_VISIBLE_DEVICES=$CUDA_VISIBLE_DEVICES"

module load conda
conda activate omni

# Path to your local clone of the evaluation code
export XMODBENCH_ROOT='/path/to/XModBench'

# Example: Gemini on the audio -> text perception task
# (other variants include vggss_audio_vision, vggss_vision_audio, and vggss_vision_text)
# python $XMODBENCH_ROOT/scripts/run.py \
#     --model gemini \
#     --task_name perception/vggss_audio_text \
#     --sample 1000

# Example: Qwen2.5-Omni on the vision -> text perception task
python $XMODBENCH_ROOT/scripts/run.py \
    --model qwen2.5_omni \
    --task_name perception/vggss_vision_text \
    --sample 1000
```

## 📈 Benchmark Results

### Overall Performance Comparison

| Model | Perception | Spatial | Temporal | Linguistic | Knowledge | Average |
|-------|------------|---------|----------|------------|-----------|---------|
| **Gemini 2.5 Pro** | 75.9% | 50.1% | 60.8% | 76.8% | 89.3% | 70.6% |
| **Human Performance** | 91.0% | 89.7% | 88.9% | 93.9% | 93.9% | 91.5% |

### Key Findings

#### 1️⃣ Task Competence Gaps
- **Strong Performance**: Perception and linguistic tasks (~75% for the best models)
- **Weak Performance**: Spatial (50.1%) and temporal (60.8%) reasoning
- **Performance Drop**: 15-25 point decrease on spatial/temporal tasks relative to perception

#### 2️⃣ Modality Disparity
- **Audio vs. Text**: 20-49 point performance drop
- **Audio vs. Vision**: 33-point average gap
- **Vision vs. Text**: ~15-point disparity
- **Consistency**: Best models show a 10-12 point standard deviation across modality configurations (see the sketch below)
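
As a rough illustration of what that consistency number refers to, the sketch below computes the standard deviation of a model's accuracy across modality configurations. The accuracy values are made up, and the paper's exact aggregation protocol may differ.

```python
from statistics import mean, pstdev

# Hypothetical per-configuration accuracies (%) for one model on one task dimension.
accuracy_by_config = {
    "audio -> text": 58.0,
    "vision -> text": 73.0,
    "text -> text": 80.0,
}

scores = list(accuracy_by_config.values())
print(f"mean accuracy: {mean(scores):.1f}")                        # 70.3
print(f"consistency (std across configs): {pstdev(scores):.1f}")   # 9.2
```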

#### 3️⃣ Directional Imbalance
- **Vision↔Text**: 9-17 point gaps between directions
- **Audio↔Text**: 6-8 point asymmetries
- **Root Cause**: Training data imbalance favoring image-to-text over the inverse direction

## 📝 Citation

If you use XModBench in your research, please cite our paper:

```bibtex
@article{wang2025xmodbench,
  title={XModBench: Benchmarking Cross-Modal Capabilities and Consistency in Omni-Language Models},
  author={Wang, Xingrui and others},
  journal={arXiv preprint arXiv:2510.15148},
  year={2025}
}
```

## 📄 License

This project is licensed under the MIT License - see the [LICENSE](LICENSE) file for details.

## 🙏 Acknowledgments

We thank all contributors and the research community for their valuable feedback and suggestions.

## 📧 Contact

- **Project Lead**: Xingrui Wang
- **Email**: xwang378@jh.edu
- **Website**: [https://xingruiwang.github.io/projects/XModBench/](https://xingruiwang.github.io/projects/XModBench/)

## 🔗 Links

- [Project Website](https://xingruiwang.github.io/projects/XModBench/)
- [Paper](https://arxiv.org/abs/2510.15148)
- [Leaderboard](https://xingruiwang.github.io/projects/XModBench/leaderboard)
- [Documentation](https://xingruiwang.github.io/projects/XModBench/docs)

## Todo

- [ ] Release Huggingface data
- [x] Release data processing code
- [x] Release data evaluation code

---

**Note**: XModBench is actively maintained and regularly updated with new models and evaluation metrics. For the latest updates, please check our [releases](https://github.com/XingruiWang/XModBench/releases) page.