LongCat-Flash-Omni

LongCat Logo

Tech Report 📄

Model Introduction

We introduce LongCat-Flash-Omni, a state-of-the-art open-source omni-modal model with 560 billion total parameters (27B activated) that excels at real-time audio-visual interaction. It builds on LongCat-Flash's high-performance Shortcut-connected Mixture-of-Experts (MoE) architecture with zero-computation experts, augmented by efficient multimodal perception and speech reconstruction modules. Through an effective curriculum-inspired progressive training strategy, the model achieves comprehensive multimodal capabilities while maintaining strong unimodal capability. We open-source the model to foster future research and development in the community.
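To make the zero-computation-expert idea concrete, below is a minimal, self-contained PyTorch sketch of a top-1 MoE layer whose router can send a token either to a real FFN expert or to an identity ("zero-computation") expert that spends no FFN compute on it. The class name, sizes, and routing details are illustrative assumptions, not the released LongCat-Flash-Omni implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyMoEWithZeroExperts(nn.Module):
    """Toy top-1 MoE layer with extra identity ('zero-computation') experts."""

    def __init__(self, d_model=64, n_ffn_experts=4, n_zero_experts=2):
        super().__init__()
        self.n_ffn_experts = n_ffn_experts
        # Router scores over real FFN experts *plus* zero-computation experts.
        self.router = nn.Linear(d_model, n_ffn_experts + n_zero_experts)
        self.experts = nn.ModuleList(
            nn.Sequential(
                nn.Linear(d_model, 4 * d_model),
                nn.GELU(),
                nn.Linear(4 * d_model, d_model),
            )
            for _ in range(n_ffn_experts)
        )

    def forward(self, x):  # x: [num_tokens, d_model]
        probs = F.softmax(self.router(x), dim=-1)
        choice = probs.argmax(dim=-1)          # top-1 routing for simplicity
        out = x.clone()                        # zero-computation experts: pass-through
        for e in range(self.n_ffn_experts):    # only routed tokens pay FFN compute
            mask = choice == e
            if mask.any():
                out[mask] = self.experts[e](x[mask])
        return out

tokens = torch.randn(8, 64)
print(ToyMoEWithZeroExperts()(tokens).shape)   # torch.Size([8, 64])
```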

Model Architecture

LongCat-Flash-Omni

Key Features

🌟 SOTA and Unified Omni-Modal Model

LongCat-Flash-Omni is an open-source omni-modal model that achieves state-of-the-art cross-modal comprehension performance. It seamlessly integrates powerful offline multi-modal understanding with real-time audio–visual interaction within a single all-in-one framework.

🌟 Large-Scale with Low-Latency Audio–Visual Interaction

By leveraging an efficient LLM backbone, carefully designed lightweight modality encoders and decoder, and a chunk-wise audio–visual feature interleaving mechanism, LongCat-Flash-Omni achieves low-latency, high-quality audio–visual processing and streaming speech generation. It supports a context window of up to 128K tokens, enabling advanced capabilities in long-term memory, multi-turn dialogue, and temporal reasoning across multiple modalities.
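The snippet below sketches what chunk-wise audio-visual interleaving can look like in practice: audio features and video-frame features that cover the same time window are grouped into one chunk, and chunks are concatenated in temporal order before being fed to the LLM backbone. The chunk sizes and function name are hypothetical; the actual chunking scheme is described in the technical report.

```python
import torch

def interleave_av(audio_feats, video_feats, audio_per_chunk=10, frames_per_chunk=2):
    """Group audio features and video-frame features covering the same time window
    into one chunk, and concatenate chunks in temporal order.
    audio_feats: [T_audio, d], video_feats: [T_video, d] -> [T_audio + T_video, d]."""
    chunks, a, v = [], 0, 0
    while a < audio_feats.size(0) or v < video_feats.size(0):
        chunks.append(audio_feats[a:a + audio_per_chunk])
        chunks.append(video_feats[v:v + frames_per_chunk])
        a += audio_per_chunk
        v += frames_per_chunk
    return torch.cat([c for c in chunks if c.numel() > 0], dim=0)

d = 32
sequence = interleave_av(torch.randn(25, d), torch.randn(5, d))
print(sequence.shape)  # torch.Size([30, 32])
```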

🌟 Effective Early-Fusion Training

The model adopts an innovative multi-stage pretraining pipeline that progressively incorporates text, audio, and visual modalities under a balanced data strategy and early-fusion training paradigm, ensuring strong omni-modal performance without degradation in any single modality.

🌟 Efficient Training Infrastructure

Inspired by the concept of modality decoupling, we propose a Modality-Decoupled Parallelism training scheme that significantly enhances the efficiency of large-scale and highly challenging multimodal training.
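As an illustration only (the training code is not released), the sketch below shows the general idea of decoupling modalities at the parallelism level with `torch.distributed`: different subsets of ranks form their own process groups, so the modality encoders and the LLM backbone could each use their own parallel layout and collectives independently. The rank split and group names are hypothetical and do not describe the actual Modality-Decoupled Parallelism scheme.

```python
# Run with: torchrun --nproc_per_node=4 modality_groups_sketch.py
import torch
import torch.distributed as dist

def main():
    dist.init_process_group(backend="gloo")  # use "nccl" on GPU machines
    rank, world = dist.get_rank(), dist.get_world_size()

    # Hypothetical split: the first half of the ranks serve the modality encoders,
    # the second half hold the LLM backbone shards.
    encoder_ranks = list(range(world // 2))
    backbone_ranks = list(range(world // 2, world))
    encoder_group = dist.new_group(encoder_ranks)    # new_group is collective:
    backbone_group = dist.new_group(backbone_ranks)  # every rank calls both

    # Each group can run its own collectives and parallel layout without
    # synchronizing with ranks that belong to the other group.
    x = torch.ones(1) * rank
    group = encoder_group if rank in encoder_ranks else backbone_group
    dist.all_reduce(x, group=group)
    print(f"rank {rank}: group-local sum = {x.item()}")

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```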

🌟 Open-Source Contribution

We provide a comprehensive overview of the training methodology and data strategies behind LongCat-Flash-Omni, and release the model to accelerate future research and innovation in omni-modal intelligence.

For more details, please refer to the comprehensive LongCat-Flash-Omni Technical Report.

Evaluation Results

Omni-modality

| Benchmark | LongCat-Flash-Omni Instruct | Gemini-2.5-Pro (ThinkingBudget128) | Gemini-2.5-Flash (non-thinking) | Qwen3-Omni Instruct | Qwen2.5-Omni Instruct |
| --- | --- | --- | --- | --- | --- |
| OmniBench | 61.38 | 66.80 | 54.99 | 58.41 | 48.16 |
| WorldSense | 60.89 | 63.96 | 58.72 | 52.01 | 46.69 |
| DailyOmni | 82.38 | 80.61 | 80.78 | 69.33 | 47.45 |
| UNO-Bench | 49.90 | 64.48 | 54.30 | 42.10 | 32.60 |
Vision

Image-to-Text

| Benchmark | LongCat-Flash-Omni Instruct | Gemini-2.5-Pro (ThinkingBudget128) | Gemini-2.5-Flash (non-thinking) | Qwen3-Omni Instruct | Seed-1.6 | GPT-4o-1120 | Qwen3-VL-235B-A22B-Instruct | Qwen2.5-VL-72B-Instruct |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| **General** | | | | | | | | |
| MMBench-EN (test) | 87.5 | 89.8 | 89.3 | 86.8 | 88.5 | 83.7 | 88.3 | 88.6* |
| MMBench-ZH (test) | 88.7 | 89.2 | 88.5 | 86.4 | 83.8 | 82.8 | 89.8 | 87.9* |
| RealWorldQA | 74.8 | 76.0 | 73.9 | 72.9 | 74.5 | 74.1 | 79.3* | 75.7* |
| MMStar | 70.9 | 78.5* | 75.5 | 68.5* | 71.5 | 63.2 | 78.4* | 68.2 |
| **STEM & Reasoning** | | | | | | | | |
| MathVista (mini) | 77.9 | 77.7* | 77.1 | 75.9 | 78.7 | 62.8 | 84.9* | 74.8* |
| MMMU (val) | 70.7 | 80.9* | 76.3 | 69.1* | 74.9 | 69.4 | 78.7* | 70.2* |
| MMVet | 69.0 | 80.7 | 79.5 | 68.9 | 74.4 | 76.6 | 75.9 | 74.5 |
| **Multi-Image** | | | | | | | | |
| BLINK | 63.1 | 70.0* | 65.7 | 56.1 | 65.0 | 65.5 | 70.7* | 60.1 |
| MuirBench | 77.1 | 74.0* | 73.7 | 62.1 | 74.6 | 70.5 | 72.8* | 70.7* |
| Mantis | 84.8 | 83.9 | 83.4 | 80.7 | 81.1 | 79.3 | 79.7 | 82.0 |
| **Text Recognition & Chart/Document Understanding** | | | | | | | | |
| ChartQA | 87.6 | 71.7 | 77.6 | 86.8* | 82.4 | 74.5 | 89.2 | 89.5* |
| DocVQA | 91.8 | 94.0* | 93.6* | 95.7 | 94.3 | 80.9 | 94.6 | 96.4* |
| OCRBench | 84.9 | 87.2* | 85.6 | 85.5 | 85.6 | 82.3 | 91.2 | 88.5 |
| OmniDocBench (EN/ZH) | 22.8/29.0 | 31.9/24.5 | 22.8/32.9 | 28.4/40.5 | 22.0/27.6 | 25.9/37.7 | 13.6/17.5 | 22.6/32.4* |
| **Grounding & Counting** | | | | | | | | |
| RefCOCO-avg | 92.3 | 75.4 | 71.9 | 89.3 | 80.2 | - | 87.1 | 90.3 |
| CountBench | 92.4 | 91.0* | 78.6 | 90.0* | 94.1 | 85.6* | 94.3 | 93.6* |
| **Graphical User Interface (GUI)** | | | | | | | | |
| VisualWebBench | 78.7 | 81.1 | 73.5 | 79.3 | 81.1 | 77.1 | 80.8 | 82.3* |
| ScreenSpot-v2 | 91.2 | 75.8 | 63.9 | 94.7 | 91.7 | - | 93.4 | 92.9 |
| AndroidControl (low) | 91.2 | 79.2 | 79.1 | 90.5 | 84.6 | 65.2 | 90.0 | 93.7* |
| AndroidControl (high) | 75.6 | 60.8 | 55.5 | 70.8 | 55.2 | 41.7 | 74.1 | 67.4* |

Note: Values marked with * are sourced from public reports. As GPT-4o does not support image grounding, we do not report its results on RefCOCO-avg and ScreenSpot-v2.


Video-to-Text

| Benchmark | LongCat-Flash-Omni Instruct | Gemini-2.5-Pro (ThinkingBudget128) | Gemini-2.5-Flash (non-thinking) | Qwen3-Omni Instruct | Seed-1.6 | GPT-4o-1120 | Qwen3-VL (235B-A22B-Instruct) | Qwen2.5-VL-72B-Instruct |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| **Short Video** | | | | | | | | |
| MVBench | 75.2 | 66.4 | 63.0 | 69.3* | 68.4 | 62.1 | 71.3 | 70.4* |
| NextQA | 86.2 | 84.2 | 81.4 | 82.4 | 84.1 | 79.7 | 81.3 | 82.3 |
| TempCompass | 82.2 | 80.8 | 80.2 | 73.5 | 79.4 | 76.4 | 80.5 | 74.8* |
| **Long Video** | | | | | | | | |
| VideoMME (w/o audio) | 76.2 | - | - | 70.5* | 75.2 | 73.2 | 79.2* | 73.3* |
| VideoMME (w/ audio) | 78.2 | 80.6* | 78.5 | 73.0 | - | - | - | - |
| LongVideoBench | 69.3 | 69.4 | 66.4 | 65.4 | 64.8 | 63.9 | - | 60.7* |
| **STEM & Reasoning** | | | | | | | | |
| MMVU | 67.1 | 75.6 | 72.4 | 62.4 | 67.3 | 67.4 | 69.3 | 62.9* |
| Video-MMMU | 67.5 | 79.4* | 76.6 | 60.3 | 75.4 | 68.0 | 73.7 | 59.3 |

Note: Values marked with * are sourced from public reports.

Audio

Table 1: Automatic Speech Recognition (ASR) and Speech-to-Text Translation (S2TT)

| Benchmark | LongCat-Flash-Omni Instruct | Gemini-2.5-Pro (ThinkingBudget128) | GPT-4o-Audio | Qwen3-Omni Instruct | Kimi-Audio | Step-Audio-2-mini |
| --- | --- | --- | --- | --- | --- | --- |
| **ASR** | | | | | | |
| LibriSpeech (test-clean \| test-other) | 1.57 \| 4.01 | 1.74 \| 3.80 | 30.00 \| 41.83 | 1.22 \| 2.48 | 1.28 \| 2.42 | 1.33 \| 2.86 |
| AISHELL-1 | 0.63 | 3.11 | 34.81 | 0.84 | 0.60 | 0.78 |
| AISHELL-2 | 2.78 | 5.24 | 77.73 | 2.34 | 2.56 | 2.16 |
| Fleurs (zh \| en) | 3.99 \| 5.02 | 2.24 \| 4.77 | 3.91 \| 5.56 | 2.20 \| 2.72 | 2.69 \| 4.44 | 2.53 \| 3.05 |
| CommonVoice 15 (zh \| en) | 4.98 \| 13.59 | 47.30 \| 49.86 | 42.83 \| 23.88 | 4.31 \| 6.05 | 8.46 \| 7.92 | 5.00 \| 6.75 |
| WenetSpeech (test-meeting \| test-net) | 6.69 \| 6.09 | 136.13 \| 32.82 | 54.35 \| 67.90 | 5.89 \| 4.69 | 6.28 \| 5.37 | 4.87 \| 4.82 |
| **S2TT (BLEU)** | | | | | | |
| CoVost2 en→zh | 47.23 | 41.94 | 29.32 | 48.72 | - | 49.12 |
| CoVost2 zh→en | 27.32 | 25.38 | 16.01 | 21.51 | - | 29.47 |

Note: ASR results are reported in CER/WER (lower is better); S2TT results are reported in BLEU (higher is better).
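For reference, WER is the word-level edit distance between hypothesis and reference divided by the reference length, and CER is the same computation over characters. The snippet below implements this standard formula; it is not the evaluation code used for the numbers above.

```python
def edit_distance(ref, hyp):
    """Levenshtein distance between two sequences."""
    d = [[i + j if i * j == 0 else 0 for j in range(len(hyp) + 1)]
         for i in range(len(ref) + 1)]
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            d[i][j] = min(d[i - 1][j] + 1,                               # deletion
                          d[i][j - 1] + 1,                               # insertion
                          d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1]))  # substitution
    return d[len(ref)][len(hyp)]

def wer(reference: str, hypothesis: str) -> float:
    ref_words = reference.split()
    return edit_distance(ref_words, hypothesis.split()) / len(ref_words)

print(wer("the cat sat on the mat", "the cat sit on mat"))  # 2 errors / 6 words ≈ 0.33
```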


Table 2: Audio Understanding

| Benchmark | LongCat-Flash-Omni Instruct | Gemini-2.5-Pro (ThinkingBudget128) | GPT-4o-Audio | Qwen3-Omni Instruct | Kimi-Audio | Step-Audio-2-mini |
| --- | --- | --- | --- | --- | --- | --- |
| MMAU | 75.90 | 72.80 | 68.40 | 77.50 | 65.20 | 73.20 |
| VocalSound | 92.76 | 89.45 | 82.37 | 91.60 | 94.85 | 87.58 |
| TUT2017 | 65.43 | 33.15 | 20.74 | 40.74 | 65.25 | 30.67 |
| ClothoAQA | 72.83 | 69.67 | 61.87 | 75.16 | 72.21 | 68.39 |
| Nonspeech7k | 93.79 | 87.59 | 72.28 | 80.83 | 93.93 | 73.24 |
| CochlScene | 70.02 | 45.34 | 34.94 | 43.03 | 80.42 | 44.58 |
| MELD | 54.60 | 46.74 | 39.00 | 50.80 | 59.13 | 31.44 |

Table 3: Audio-to-Text Chat

| Benchmark | LongCat-Flash-Omni Instruct | Gemini-2.5-Pro (ThinkingBudget128) | GPT-4o-Audio | Qwen3-Omni Instruct | Kimi-Audio | Step-Audio-2-mini |
| --- | --- | --- | --- | --- | --- | --- |
| **OpenAudioBench** | | | | | | |
| LlamaQuestions | 83.33 | 83.00 | 86.30 | 83.30 | 79.33 | 69.70 |
| ReasoningQA | 79.71 | 80.30 | 68.71 | 84.16 | 58.02 | 55.64 |
| TriviaQA | 86.20 | 90.20 | 76.00 | 75.90 | 62.10 | 45.30 |
| Webquestions | 76.00 | 80.90 | 81.20 | 75.20 | 70.20 | 54.40 |
| AlpacaEval | 75.43 | 76.58 | 81.61 | 85.43 | 75.73 | 53.92 |
| **VoiceBench** | | | | | | |
| AlpacaEval | 4.94 | 4.70 | 4.73 | 4.74 | 4.46 | 3.84 |
| CommonEval | 4.32 | 4.11 | 4.37 | 4.54 | 3.97 | 3.19 |
| OpenBookQA | 93.41 | 95.16 | 87.90 | 89.70 | 83.52 | 72.97 |
| SDQA | 82.46 | 83.54 | 90.10 | 76.90 | 63.12 | 44.85 |
| MMSU | 81.95 | 88.32 | 78.90 | 69.00 | 62.17 | 52.00 |
| AdvBench | 100 | 97.69 | 99.23 | 99.30 | 100 | 97.00 |
| IFEval | 77.99 | 77.83 | 66.81 | 77.80 | 61.10 | 29.80 |
Text

| Benchmark | LongCat-Flash-Omni Instruct | LongCat-Flash | DeepSeek V3.1 | Qwen3 MoE-2507 | Kimi-K2 | GPT-4.1 | Claude Sonnet-4 | Gemini-2.5-Flash |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Architecture | MoE | MoE | MoE | MoE | MoE | - | - | - |
| # Total Params | 560B | 560B | 671B | 235B | 1043B | - | - | - |
| # Activated Params | 27B | 27B | 37B | 22B | 32B | - | - | - |
| **General Domains** | | | | | | | | |
| MMLU (acc) | 90.30 | 89.71 | 90.96 | 90.23 | 89.86 | 89.64 | 91.75 | 86.33 |
| MMLU-Pro (acc) | 82.73 | 82.68 | 84.45 | 84.83 | 82.06 | 81.72 | 83.74 | 81.95 |
| CEval (acc) | 91.68 | 90.44 | 89.21 | 92.70 | 91.26 | 79.53 | 86.63 | 78.78 |
| CMMLU (acc) | 89.39 | 84.34 | 88.04 | 88.14 | 89.66 | 77.65 | 86.51 | 78.30 |
| **Instruction Following** | | | | | | | | |
| IFEval (acc) | 82.44 | 89.65 | 86.69 | 88.54 | 88.91 | 85.58 | 88.35 | 83.92 |
| COLLIE (acc) | 45.69 | 57.10 | 43.80 | 49.71 | 56.34 | 50.00 | 51.22 | 48.60 |
| Meeseeks-zh (acc) | 39.05 | 43.03 | 33.83 | 35.32 | 42.79 | 41.54 | 35.07 | 34.84 |
| **Mathematical Reasoning** | | | | | | | | |
| MATH500 (acc) | 97.60 | 96.40 | 96.08 | 98.80 | 97.60 | 90.60 | 93.80 | 98.40 |
| AIME24 (avg@10) | 72.92 | 70.42 | 66.30* | 81.67 | 69.60* | 47.00 | 47.00 | 79.67 |
| BeyondAIME (avg@10) | 47.40 | 43.00 | 36.50 | 57.60 | 36.60 | 22.10 | 20.50 | 44.20 |
| **General Reasoning** | | | | | | | | |
| GPQA-diamond (acc) | 74.41 | 73.23 | 74.90* | 77.43 | 75.76 | 67.68 | 70.71 | 80.30 |
| DROP (f1) | 83.53 | 79.06 | 84.19 | 78.57 | 89.04 | 66.94 | 73.06 | 45.03 |
| ZebraLogic (acc) | 86.00 | 89.30 | 85.30 | 94.22 | 89.11 | 56.30* | 80.10 | 57.00 |
| GraphWalks-128k (precision) | 56.00 | 51.05 | 73.54 | 80.72 | 47.50 | 85.02 | 80.57 | 64.83 |
| **Coding** | | | | | | | | |
| LiveCodeBench (pass@1) | 52.64 | 48.02 | 56.40* | 46.48 | 46.70 | 39.21 | 45.59 | 39.65 |
| Humaneval+ (pass@1) | 90.85 | 88.41 | 92.68 | 94.51 | 85.98 | 93.29 | 94.51 | 87.80 |
| MBPP+ (pass@1) | 80.16 | 79.63 | 79.89 | 79.89 | 81.75 | 79.37 | 80.16 | 76.19 |

Note: Values marked with * are sourced from other public reports. DeepSeek-V3.1, Qwen3-235B-A22B, Gemini-2.5-Flash, and Claude Sonnet-4 are evaluated in their non-thinking mode.

Quick Start

Model Download

LongCat-Flash-Omni is a large MoE model whose weights must be sharded across multiple devices at serving time. When loading with Hugging Face Transformers or vLLM, the weights are downloaded automatically based on the model name. If your runtime environment cannot download weights during execution, you can use the following command to manually download the model weights to a local directory:

```bash
# Download through Hugging Face
pip install -U "huggingface_hub[cli]"
huggingface-cli download meituan-longcat/LongCat-Flash-Omni --local-dir ./LongCat-Flash-Omni
```
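If you prefer a programmatic download, the huggingface_hub Python API provides the same functionality as the CLI command above:

```python
from huggingface_hub import snapshot_download

# Download the full model repository to a local directory.
snapshot_download(
    repo_id="meituan-longcat/LongCat-Flash-Omni",
    local_dir="./LongCat-Flash-Omni",
)
```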

Usage

We have implemented basic adaptations in SGLang to support running the LongCat-Flash-Omni model. The official SGLang release does not yet natively support LongCat-Flash-Omni, so you can temporarily use our development branch for local installation and testing.

Due to its size of 560 billion parameters (560B), LongCat-Flash-Omni requires at least one node (e.g., 8×H20-141G) to host the model weights in FP8 format, and at least two nodes (e.g., 16×H800-80G) for BF16 weights. Detailed launch configurations are provided below.
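As a rough sanity check of these requirements (weights only; KV cache, activations, and framework overhead need additional headroom):

```python
total_params = 560e9  # 560B total parameters

fp8_weight_gb = total_params * 1 / 1e9   # ~560 GB,  vs. 8 x 141 GB = 1128 GB on one H20 node
bf16_weight_gb = total_params * 2 / 1e9  # ~1120 GB, vs. 8 x 80 GB = 640 GB on one H800 node,
                                         # hence two nodes (1280 GB) for BF16

print(f"FP8 weights:  ~{fp8_weight_gb:.0f} GB")
print(f"BF16 weights: ~{bf16_weight_gb:.0f} GB")
```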

Installation

  • python >= 3.10.0 (Anaconda is recommended)
  • PyTorch >= 2.8
  • CUDA >= 12.9
```bash
conda create -n longcat python=3.10
conda activate longcat

# install SGLang
git clone -b longcat_omni_v0.5.3.post3 https://github.com/XiaoBin1992/sglang.git
pushd sglang
pip install -e "python"
popd

# install longcat-flash-omni demo
git clone https://github.com/meituan-longcat/LongCat-Flash-Omni
pushd LongCat-Flash-Omni
git submodule update --init --recursive
pip install -r requirements.txt
popd
```

Demo

The model can be served on your cluster using a combination of Tensor Parallelism and Expert Parallelism. Once all dependencies are installed, you can launch the demo with one of the following commands.

  • single-node inference
```bash
python3 longcat_omni_demo.py \
  --tp-size 8 \
  --ep-size 8 \
  --model-path where_you_download_model_dir \
  --output-dir output
```
  • multi-node inference
```bash
python3 longcat_omni_demo.py \
  --tp-size 16 \
  --ep-size 16 \
  --nodes 2 \
  --node-rank $NODE_RANK \
  --dist-init-addr $MASTER_IP:5000 \
  --model-path where_you_download_model_dir \
  --output-dir output
```

NOTE: Replace $NODE_RANK and $MASTER_IP with the corresponding values for your GPU machines.

All test cases are defined in examples_dict.py, and additional test cases may be added as needed. After model execution, the generated results are saved in the directory specified by the --output-dir parameter.

Interaction with LongCat-Flash-Omni

Real-time Chat Website

You can use LongCat-Flash-Omni at https://longcat.ai (the web version currently supports audio interaction only). The full service will be provided in subsequent updates.

APP

We are excited to announce that the LongCat-Flash-Omni app is now available for both Android and iOS.

For Android, you can download the app by scanning the QR code below.

<img src="https://raw.githubusercontent.com/meituan-longcat/LongCat-Flash-Omni/main/figures/android_app_qrcode.jpg" width="200px">

For iOS, you can download the app by searching for "LongCat" on the App Store or by scanning the QR code below. Currently, only the Chinese App Store is supported.

<img src="https://raw.githubusercontent.com/meituan-longcat/LongCat-Flash-Omni/main/figures/ios_app_qrcode.jpg" width="200px">

License Agreement

The model weights are released under the MIT License.

Any contributions to this repository are licensed under the MIT License, unless otherwise stated. This license does not grant any rights to use Meituan trademarks or patents.

See the LICENSE file for the full license text.

Usage Considerations

This model has not been specifically designed or comprehensively evaluated for every possible downstream application.

Developers should take into account the known limitations of large language models, including performance variations across different languages, and carefully assess accuracy, safety, and fairness before deploying the model in sensitive or high-risk scenarios. It is the responsibility of developers and downstream users to understand and comply with all applicable laws and regulations relevant to their use case, including but not limited to data protection, privacy, and content safety requirements.

Nothing in this Model Card should be interpreted as altering or restricting the terms of the MIT License under which the model is released.

Citation

We kindly encourage citation of our work if you find it useful.

```bibtex
@misc{
  title={LongCat-Flash-Omni Technical Report},
  author={Meituan LongCat Team},
  year={2025},
  url={https://github.com/meituan-longcat/LongCat-Flash-Omni},
}
```

Contact

Please contact us at longcat-team@meituan.com or join our WeChat Group if you have any questions.

WeChat Group

<img src="https://raw.githubusercontent.com/meituan-longcat/LongCat-Flash-Omni/main/figures/wechat_qrcode.jpeg" width="200px">
