
A GPT-4o Level MLLM for Vision, Speech and Multimodal Live Streaming on Your Phone

GitHub | Online Demo | Technical Blog

News

  • [2025.03.01] 🚀🚀🚀 RLAIF-V, the alignment technique behind MiniCPM-o, has been accepted by CVPR 2025! The code, dataset, and paper are open-sourced!

  • [2025.01.24] 📢📢📢 MiniCPM-o 2.6 technical report is released! See Here.

  • [2025.01.19] ⭐️⭐️⭐️ MiniCPM-o tops GitHub Trending and reaches top-2 on Hugging Face Trending!

MiniCPM-o 2.6

MiniCPM-o 2.6 is the latest and most capable model in the MiniCPM-o series. The model is built in an end-to-end fashion based on SigLip-400M, Whisper-medium-300M, ChatTTS-200M, and Qwen2.5-7B with a total of 8B parameters. It exhibits a significant performance improvement over MiniCPM-V 2.6, and introduces new features for real-time speech conversation and multimodal live streaming. Notable features of MiniCPM-o 2.6 include:

  • 🔥 Leading Visual Capability. MiniCPM-o 2.6 achieves an average score of 70.2 on OpenCompass, a comprehensive evaluation over 8 popular benchmarks. With only 8B parameters, it surpasses widely used proprietary models like GPT-4o-202405, Gemini 1.5 Pro, and Claude 3.5 Sonnet for single image understanding. It also outperforms GPT-4V and Claude 3.5 Sonnet in multi-image and video understanding, and shows promising in-context learning capability.

  • 🎙 State-of-the-art Speech Capability. MiniCPM-o 2.6 supports bilingual real-time speech conversation with configurable voices in English and Chinese. It outperforms GPT-4o-realtime on audio understanding tasks such as ASR and STT translation, and shows state-of-the-art performance on speech conversation in both semantic and acoustic evaluations in the open-source community. It also allows for fun features such as emotion/speed/style control, end-to-end voice cloning, role play, etc.

  • 🎬 Strong Multimodal Live Streaming Capability. As a new feature, MiniCPM-o 2.6 can accept continuous video and audio streams independent of user queries, and supports real-time speech interaction. It outperforms GPT-4o-202408 and Claude 3.5 Sonnet and achieves state-of-the-art performance among open-source models on StreamingBench, a comprehensive benchmark for real-time video understanding, omni-source (video & audio) understanding, and multimodal contextual understanding.

  • 💪 Strong OCR Capability and Others. Advancing the popular visual capabilities of the MiniCPM-V series, MiniCPM-o 2.6 can process images with any aspect ratio and up to 1.8 million pixels (e.g., 1344x1344). It achieves state-of-the-art performance on OCRBench for models under 25B, surpassing proprietary models such as GPT-4o-202405. Based on the latest RLAIF-V and VisCPM techniques, it features trustworthy behaviors, outperforming GPT-4o and Claude 3.5 Sonnet on MMHal-Bench, and supports multilingual capabilities in more than 30 languages.

  • 🚀 Superior Efficiency. In addition to its friendly size, MiniCPM-o 2.6 also shows state-of-the-art token density (i.e., number of pixels encoded into each visual token). It produces only 640 tokens when processing a 1.8M pixel image, which is 75% fewer than most models. This directly improves the inference speed, first-token latency, memory usage, and power consumption. As a result, MiniCPM-o 2.6 can efficiently support multimodal live streaming on end-side devices such as iPad.

  • 💫 Easy Usage. MiniCPM-o 2.6 can be easily used in various ways: (1) llama.cpp support for efficient CPU inference on local devices, (2) int4 and GGUF format quantized models in 16 sizes, (3) vLLM support for high-throughput and memory-efficient inference, (4) fine-tuning on new domains and tasks with LLaMA-Factory, (5) quick local WebUI demo setup with Gradio, and (6) online web demo on server.

Model Architecture.

  • End-to-end Omni-modal Architecture. Different modality encoder/decoders are connected and trained in an end-to-end fashion to fully exploit rich multimodal knowledge.
  • Omni-modal Live Streaming Mechanism. (1) We change the offline modality encoder/decoders into online ones for streaming inputs/outputs. (2) We devise a time-division multiplexing (TDM) mechanism for omni-modality streaming processing in the LLM backbone. It divides parallel omni-modality streams into sequential information within small periodic time slices (see the sketch after this list).
  • Configurable Speech Modeling Design. We devise a multimodal system prompt, including a traditional text system prompt and a new audio system prompt that determines the assistant's voice. This enables flexible voice configuration at inference time, and also facilitates end-to-end voice cloning and description-based voice creation.
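To make the TDM idea concrete, here is a minimal, hypothetical sketch (not the model's internal implementation): parallel video and audio streams are cut into 1-second slices and serialized into one sequential stream of per-slice units, mirroring the `<unit>` packing used in the omni inference examples below.

```python
# Hypothetical sketch of time-division multiplexing (TDM), not the model's internal code.
# Parallel omni-modality streams are divided into periodic time slices and serialized
# so the LLM backbone sees one sequential stream of per-slice units.
def tdm_interleave(video_frames, audio_chunks):
    # video_frames[i] and audio_chunks[i] are assumed to cover the same 1-second slice
    sequence = []
    for frame, audio in zip(video_frames, audio_chunks):
        sequence.extend(["<unit>", frame, audio])  # modalities become sequential within the slice
    return sequence
```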

Evaluation

Visual understanding results

Image Understanding:

| Model | Size | Token Density+ | OpenCompass | OCRBench | MathVista mini | ChartQA | MMVet | MMStar | MME | MMB1.1 test | AI2D | MMMU val | HallusionBench | TextVQA val | DocVQA test | MathVerse mini | MathVision | MMHal Score |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| **Proprietary** | | | | | | | | | | | | | | | | | | |
| GPT-4o-20240513 | - | 1088 | 69.9 | 736 | 61.3 | 85.7 | 69.1 | 63.9 | 2328.7 | 82.2 | 84.6 | 69.2 | 55.0 | - | 92.8 | 50.2 | 30.4 | 3.6 |
| Claude3.5-Sonnet | - | 750 | 67.9 | 788 | 61.6 | 90.8 | 66.0 | 62.2 | 1920.0 | 78.5 | 80.2 | 65.9 | 49.9 | - | 95.2 | - | - | 3.4 |
| Gemini 1.5 Pro | - | - | 64.4 | 754 | 57.7 | 81.3 | 64.0 | 59.1 | 2110.6 | 73.9 | 79.1 | 60.6 | 45.6 | 73.5 | 86.5 | - | 19.2 | - |
| GPT-4o-mini-20240718 | - | 1088 | 64.1 | 785 | 52.4 | - | 66.9 | 54.8 | 2003.4 | 76.0 | 77.8 | 60.0 | 46.1 | - | - | - | - | 3.3 |
| **Open Source** | | | | | | | | | | | | | | | | | | |
| Cambrian-34B | 34B | 1820 | 58.3 | 591 | 50.3 | 75.6 | 53.2 | 54.2 | 2049.9 | 77.8 | 79.5 | 50.4 | 41.6 | 76.7 | 75.5 | - | - | - |
| GLM-4V-9B | 13B | 784 | 59.1 | 776 | 51.1 | - | 58.0 | 54.8 | 2018.8 | 67.9 | 71.2 | 46.9 | 45.0 | - | - | - | - | - |
| Pixtral-12B | 12B | 256 | 61.0 | 685 | 56.9 | 81.8 | 58.5 | 54.5 | - | 72.7 | 79.0 | 51.1 | 47.0 | 75.7 | 90.7 | - | - | - |
| DeepSeek-VL2-27B (4B) | 27B | 672 | 66.4 | 809 | 63.9 | 86.0 | 60.0 | 61.9 | 2253.0 | 81.2 | 83.8 | 54.0 | 45.3 | 84.2 | 93.3 | - | - | 3.0 |
| Qwen2-VL-7B | 8B | 784 | 67.1 | 866 | 58.2 | 83.0 | 62.0 | 60.7 | 2326.0 | 81.8 | 83.0 | 54.1 | 50.6 | 84.3 | 94.5 | 31.9 | 16.3 | 3.2 |
| LLaVA-OneVision-72B | 72B | 182 | 68.1 | 741 | 67.5 | 83.7 | 60.6 | 65.8 | 2261.0 | 85.0 | 85.6 | 56.8 | 49.0 | 80.5 | 91.3 | 39.1 | - | 3.5 |
| InternVL2.5-8B | 8B | 706 | 68.3 | 822 | 64.4 | 84.8 | 62.8 | 62.8 | 2344.0 | 83.6 | 84.5 | 56.0 | 50.1 | 79.1 | 93.0 | 39.5 | 19.7 | 3.4 |
| MiniCPM-V 2.6 | 8B | 2822 | 65.2 | 852* | 60.6 | 79.4 | 60.0 | 57.5 | 2348.4* | 78.0 | 82.1 | 49.8* | 48.1* | 80.1 | 90.8 | 25.7 | 18.3 | 3.6 |
| MiniCPM-o 2.6 | 8B | 2822 | 70.2 | 897* | 71.9* | 86.9* | 67.5 | 64.0 | 2372.0* | 80.5 | 85.8 | 50.4* | 51.9 | 82.0 | 93.5 | 41.4* | 23.1* | 3.8 |
* We evaluate this benchmark using chain-of-thought prompting. Specifically, for MME, we used this technique only for the Cognition set.

+ Token Density: number of pixels encoded into each visual token at maximum resolution, i.e., # pixels at maximum resolution / # visual tokens.

Note: For proprietary models, we calculate token density based on the image encoding charging strategy defined in the official API documentation, which provides an upper-bound estimation.
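For example, plugging in the numbers quoted above for MiniCPM-o 2.6 (a maximum-resolution image of roughly 1.8M pixels, e.g. 1344x1344, encoded into 640 visual tokens) reproduces the Token Density value in the table:

```python
# Token density = pixels at maximum resolution / number of visual tokens
max_pixels = 1344 * 1344           # ~1.8 million pixels
visual_tokens = 640
print(round(max_pixels / visual_tokens))   # 2822, matching the table entry for MiniCPM-o 2.6
```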

Multi-image and Video Understanding:

| Model | Size | BLINK val | Mantis Eval | MIRB | Video-MME (wo / w subs) |
|---|---|---|---|---|---|
| **Proprietary** | | | | | |
| GPT-4o-20240513 | - | 68.0 | - | - | 71.9 / 77.2 |
| GPT4V | - | 54.6 | 62.7 | 53.1 | 59.9 / 63.3 |
| **Open-source** | | | | | |
| LLaVA-NeXT-Interleave 14B | 14B | 52.6 | 66.4 | 30.2 | - |
| LLaVA-OneVision-72B | 72B | 55.4 | 77.6 | - | 66.2 / 69.5 |
| MANTIS 8B | 8B | 49.1 | 59.5 | 34.8 | - |
| Qwen2-VL-7B | 8B | 53.2 | 69.6* | 67.6* | 63.3 / 69.0 |
| InternVL2.5-8B | 8B | 54.8 | 67.7 | 52.5 | 64.2 / 66.9 |
| MiniCPM-V 2.6 | 8B | 53.0 | 69.1 | 53.8 | 60.9 / 63.6 |
| MiniCPM-o 2.6 | 8B | 56.7 | 71.9 | 58.6 | 63.9 / 67.9 |
* We evaluate officially released checkpoints by ourselves.

Audio understanding and speech conversation results.

Audio Understanding:

Metrics: CER↓ for ASR (zh) on AISHELL-1, Fleurs zh, and WenetSpeech test-net; WER↓ for ASR (en) on LibriSpeech test-clean, GigaSpeech, and TED-LIUM; BLEU↑ for AST on CoVoST en2zh and zh2en; ACC↑ for emotion recognition on MELD.

| Model | Size | AISHELL-1 | Fleurs zh | WenetSpeech test-net | LibriSpeech test-clean | GigaSpeech | TED-LIUM | CoVoST en2zh | CoVoST zh2en | MELD emotion |
|---|---|---|---|---|---|---|---|---|---|---|
| **Proprietary** | | | | | | | | | | |
| GPT-4o-Realtime | - | 7.3* | 5.4* | 28.9* | 2.6* | 12.9* | 4.8* | 37.1* | 15.7* | 33.2* |
| Gemini 1.5 Pro | - | 4.5* | 5.9* | 14.3* | 2.9* | 10.6* | 3.0* | 47.3* | 22.6* | 48.4* |
| **Open-Source** | | | | | | | | | | |
| Qwen2-Audio-7B | 8B | - | 7.5 | - | 1.6 | - | - | 45.2 | 24.4 | 55.3 |
| Qwen2-Audio-7B-Instruct | 8B | 2.6* | 6.9* | 10.3* | 3.1* | 9.7* | 5.9* | 39.5* | 22.9* | 17.4* |
| GLM-4-Voice-Base | 9B | 2.5 | - | - | 2.8 | - | - | - | - | - |
| MiniCPM-o 2.6 | 8B | 1.6 | 4.4 | 6.9 | 1.7 | 8.7 | 3.0 | 48.2 | 27.2 | 52.4 |
* We evaluate officially released checkpoints by ourselves.

Speech Generation:

Task: SpeechQA. Metrics: ACC↑ for Speech Llama Q., Speech Web Q., and Speech Trivia QA; G-Eval (10-point)↑ for Speech AlpacaEval; Semantic/Acoustic/Overall ELO score↑, UTMOS↑, and ASR-WER↓ on AudioArena.

| Model | Size | Speech Llama Q. | Speech Web Q. | Speech Trivia QA | Speech AlpacaEval | Semantic ELO | Acoustic ELO | Overall ELO | UTMOS | ASR-WER |
|---|---|---|---|---|---|---|---|---|---|---|
| **Proprietary** | | | | | | | | | | |
| GPT-4o-Realtime | - | 71.7 | 51.6 | 69.7 | 7.4 | 1157 | 1203 | 1200 | 4.2 | 2.3 |
| **Open-Source** | | | | | | | | | | |
| GLM-4-Voice | 9B | 50.0 | 32.0 | 36.4 | 5.1 | 999 | 1147 | 1035 | 4.1 | 11.7 |
| Llama-Omni | 8B | 45.3 | 22.9 | 10.7 | 3.9 | 960 | 878 | 897 | 3.2 | 24.3 |
| Moshi | 7B | 43.7 | 23.8 | 16.7 | 2.4 | 871 | 808 | 875 | 2.8 | 8.2 |
| Mini-Omni | 1B | 22.0 | 12.8 | 6.9 | 2.5 | 926 | 803 | 865 | 3.4 | 10.0 |
| MiniCPM-o 2.6 | 8B | 61.0 | 40.0 | 40.2 | 5.1 | 1088 | 1163 | 1131 | 4.2 | 9.8 |
All results are from AudioEvals, and the evaluation methods along with further details can be found in UltraEval-Audio.

End-to-end Voice Cloning

| Model | Seed-TTS test-zh (SIMO↑) | Seed-TTS test-en (SIMO↑) |
|---|---|---|
| F5-TTS | 76 | 67 |
| CosyVoice | 75 | 64 |
| FireRedTTS | 63 | 46 |
| MiniCPM-o 2.6 | 57 | 47 |

Multimodal live streaming results.

Multimodal Live Streaming: results on StreamingBench

| Model | Size | Real-Time Video Understanding | Omni-Source Understanding | Contextual Understanding | Overall |
|---|---|---|---|---|---|
| **Proprietary** | | | | | |
| Gemini 1.5 Pro | - | 77.4 | 67.8 | 51.1 | 70.3 |
| GPT-4o-202408 | - | 74.5 | 51.0 | 48.0 | 64.1 |
| Claude-3.5-Sonnet | - | 74.0 | 41.4 | 37.8 | 59.7 |
| **Open-source** | | | | | |
| VILA-1.5 | 8B | 61.5 | 37.5 | 26.7 | 49.5 |
| LongVA | 7B | 63.1 | 35.9 | 30.2 | 50.7 |
| LLaVA-Next-Video-34B | 34B | 69.8 | 41.7 | 34.3 | 56.7 |
| Qwen2-VL-7B | 8B | 71.2 | 40.7 | 33.1 | 57.0 |
| InternVL2-8B | 8B | 70.1 | 42.7 | 34.1 | 57.0 |
| VITA-1.5 | 8B | 70.9 | 40.8 | 35.8 | 57.4 |
| LLaVA-OneVision-7B | 8B | 74.3 | 40.8 | 31.0 | 58.4 |
| InternLM-XC2.5-OL-7B | 8B | 75.4 | 46.2 | 33.6 | 60.8 |
| MiniCPM-V 2.6 | 8B | 72.4 | 40.2 | 33.4 | 57.7 |
| MiniCPM-o 2.6 | 8B | 79.9 | 53.4 | 38.5 | 66.0 |

Examples

We deploy MiniCPM-o 2.6 on end devices. The demo videos are raw-speed recordings on an iPad Pro and a web demo.


(Demo videos: math, diagram, bike)

Online Demo

Click here to try the online demo of MiniCPM-o 2.6.

Usage

Inference using Hugging Face transformers on NVIDIA GPUs. Please ensure that transformers==4.44.2 is installed, as other versions may have compatibility issues; we are investigating this. Requirements (tested on Python 3.10):

```
Pillow==10.1.0
torch==2.3.1
torchaudio==2.3.1
torchvision==0.18.1
transformers==4.44.2
librosa==0.9.0
soundfile==0.12.1
vector-quantize-pytorch==1.18.5
vocos==0.1.0
decord
moviepy
```

Model initialization

```python
import torch
from PIL import Image
from transformers import AutoModel, AutoTokenizer

# Load the omni model by default; init_vision/init_audio/init_tts all default to True.
# To load a vision-only model, set init_audio=False and init_tts=False.
# To load an audio-only model, set init_vision=False.
model = AutoModel.from_pretrained(
    'openbmb/MiniCPM-o-2_6',
    trust_remote_code=True,
    attn_implementation='sdpa', # sdpa or flash_attention_2
    torch_dtype=torch.bfloat16,
    init_vision=True,
    init_audio=True,
    init_tts=True
)

model = model.eval().cuda()
tokenizer = AutoTokenizer.from_pretrained('openbmb/MiniCPM-o-2_6', trust_remote_code=True)

# Except in vision-only mode, the TTS processor and vocos also need to be initialized.
model.init_tts()
```

If you are using an older version of PyTorch, you might encounter the error `"weight_norm_fwd_first_dim_kernel" not implemented for 'BFloat16'`. In that case, convert the TTS module to float32:

```python
model.tts.float()
```

Omni mode

We provide two inference modes: chat and streaming.

Chat inference

```python
import math
import numpy as np
from PIL import Image
from moviepy.editor import VideoFileClip
import tempfile
import librosa
import soundfile as sf

def get_video_chunk_content(video_path, flatten=True):
    video = VideoFileClip(video_path)
    print('video_duration:', video.duration)

    with tempfile.NamedTemporaryFile(suffix=".wav", delete=True) as temp_audio_file:
        temp_audio_file_path = temp_audio_file.name
        video.audio.write_audiofile(temp_audio_file_path, codec="pcm_s16le", fps=16000)
        audio_np, sr = librosa.load(temp_audio_file_path, sr=16000, mono=True)
    num_units = math.ceil(video.duration)

    # 1 frame + 1 s audio chunk per unit
    contents = []
    for i in range(num_units):
        frame = video.get_frame(i+1)
        image = Image.fromarray((frame).astype(np.uint8))
        audio = audio_np[sr*i:sr*(i+1)]
        if flatten:
            contents.extend(["<unit>", image, audio])
        else:
            contents.append(["<unit>", image, audio])

    return contents

video_path = "assets/Skiing.mp4"
# if using the voice clone prompt, please set ref_audio
ref_audio_path = 'assets/demo.wav'
ref_audio, _ = librosa.load(ref_audio_path, sr=16000, mono=True)
sys_msg = model.get_sys_prompt(ref_audio=ref_audio, mode='omni', language='en')
# or use the default prompt
# sys_msg = model.get_sys_prompt(mode='omni', language='en')

contents = get_video_chunk_content(video_path)
msg = {"role": "user", "content": contents}
msgs = [sys_msg, msg]

# please set generate_audio=True and output_audio_path to save the tts result
generate_audio = True
output_audio_path = 'output.wav'

res = model.chat(
    msgs=msgs,
    tokenizer=tokenizer,
    sampling=True,
    temperature=0.5,
    max_new_tokens=4096,
    omni_input=True, # please set omni_input=True for omni inference
    use_tts_template=True,
    generate_audio=generate_audio,
    output_audio_path=output_audio_path,
    max_slice_nums=1,
    use_image_id=False,
    return_dict=True
)
print(res)

## You will get an answer like: "The person in the picture is skiing down a snowy slope."
# import IPython
# IPython.display.Audio('output.wav')
```

Streaming inference

```python
# A new conversation needs reset_session() first; it resets the KV cache.
model.reset_session()

contents = get_video_chunk_content(video_path, flatten=False)
session_id = '123'
generate_audio = True

# 1. prefill the system prompt
res = model.streaming_prefill(
    session_id=session_id,
    msgs=[sys_msg],
    tokenizer=tokenizer
)

# 2. prefill video/audio chunks
for content in contents:
    msgs = [{"role": "user", "content": content}]
    res = model.streaming_prefill(
        session_id=session_id,
        msgs=msgs,
        tokenizer=tokenizer
    )

# 3. generate
res = model.streaming_generate(
    session_id=session_id,
    tokenizer=tokenizer,
    temperature=0.5,
    generate_audio=generate_audio
)

audios = []
text = ""

if generate_audio:
    for r in res:
        audio_wav = r.audio_wav
        sampling_rate = r.sampling_rate
        txt = r.text

        audios.append(audio_wav)
        text += txt

    res = np.concatenate(audios)
    sf.write("output.wav", res, samplerate=sampling_rate)
    print("text:", text)
    print("audio saved to output.wav")
else:
    for r in res:
        text += r['text']
    print("text:", text)
```

Speech and Audio Mode

Model initialization

```python
import torch
import librosa
from transformers import AutoModel, AutoTokenizer

model = AutoModel.from_pretrained('openbmb/MiniCPM-o-2_6', trust_remote_code=True,
    attn_implementation='sdpa', torch_dtype=torch.bfloat16) # sdpa or flash_attention_2, no eager
model = model.eval().cuda()
tokenizer = AutoTokenizer.from_pretrained('openbmb/MiniCPM-o-2_6', trust_remote_code=True)

model.init_tts()
model.tts.float()
```

Mimick

The Mimick task reflects a model's end-to-end speech modeling capability. The model takes audio input, outputs an ASR transcription, and then reconstructs the original audio with high similarity. The higher the similarity between the reconstructed audio and the original, the stronger the model's foundational capability in end-to-end speech modeling.

```python
mimick_prompt = "Please repeat each user's speech, including voice style and speech content."
audio_input, _ = librosa.load('./assets/input_examples/Trump_WEF_2018_10s.mp3', sr=16000, mono=True) # load the audio to be mimicked

# can also try `./assets/input_examples/cxk_original.wav`,
# `./assets/input_examples/fast-pace.wav`,
# `./assets/input_examples/chi-english-1.wav`,
# `./assets/input_examples/exciting-emotion.wav`
# for different aspects of speech-centric features.

msgs = [{'role': 'user', 'content': [mimick_prompt, audio_input]}]
res = model.chat(
    msgs=msgs,
    tokenizer=tokenizer,
    sampling=True,
    max_new_tokens=128,
    use_tts_template=True,
    temperature=0.3,
    generate_audio=True,
    output_audio_path='output_mimick.wav', # save the tts result to output_audio_path
)
```

General Speech Conversation with Configurable Voices

A general usage scenario of MiniCPM-o 2.6 is role-playing a specific character based on an audio prompt. The model mimics the character's voice to some extent and acts like the character in text, including language style. In this mode, MiniCPM-o 2.6 sounds more natural and human-like. Self-defined audio prompts can be used to customize the character's voice in an end-to-end manner.

```python
ref_audio, _ = librosa.load('./assets/input_examples/icl_20.wav', sr=16000, mono=True) # load the reference audio
sys_prompt = model.get_sys_prompt(ref_audio=ref_audio, mode='audio_roleplay', language='en')

# round one
user_question = {'role': 'user', 'content': [librosa.load('xxx.wav', sr=16000, mono=True)[0]]}
msgs = [sys_prompt, user_question]
res = model.chat(
    msgs=msgs,
    tokenizer=tokenizer,
    sampling=True,
    max_new_tokens=128,
    use_tts_template=True,
    generate_audio=True,
    temperature=0.3,
    output_audio_path='result_roleplay_round_1.wav',
)

# round two: extend the history in place (list.append returns None)
msgs.append({'role': 'assistant', 'content': res})
user_question = {'role': 'user', 'content': [librosa.load('xxx.wav', sr=16000, mono=True)[0]]}
msgs.append(user_question)
res = model.chat(
    msgs=msgs,
    tokenizer=tokenizer,
    sampling=True,
    max_new_tokens=128,
    use_tts_template=True,
    generate_audio=True,
    temperature=0.3,
    output_audio_path='result_roleplay_round_2.wav',
)
print(res)
```

Speech Conversation as an AI Assistant

An enhanced feature of MiniCPM-o 2.6 is acting as an AI assistant, but only with a limited choice of voices. In this mode, MiniCPM-o 2.6 is less human-like and behaves more like a voice assistant; it is also more instruction-following. For demos, we suggest using assistant_female_voice, assistant_male_voice, or assistant_default_female_voice. Other voices may work, but are not as stable as the default voices.

Please note that assistant_female_voice and assistant_male_voice are more stable but sound robotic, while assistant_default_female_voice is more human-like but less stable; its voice often changes over multiple turns. We suggest trying the stable voices assistant_female_voice and assistant_male_voice.

```python
ref_audio, _ = librosa.load('./assets/input_examples/assistant_female_voice.wav', sr=16000, mono=True) # or use `./assets/input_examples/assistant_male_voice.wav`
sys_prompt = model.get_sys_prompt(ref_audio=ref_audio, mode='audio_assistant', language='en')
user_question = {'role': 'user', 'content': [librosa.load('xxx.wav', sr=16000, mono=True)[0]]} # load the user's audio question

# round one
msgs = [sys_prompt, user_question]
res = model.chat(
    msgs=msgs,
    tokenizer=tokenizer,
    sampling=True,
    max_new_tokens=128,
    use_tts_template=True,
    generate_audio=True,
    temperature=0.3,
    output_audio_path='result_assistant_round_1.wav',
)

# round two: extend the history in place (list.append returns None)
msgs.append({'role': 'assistant', 'content': res})
user_question = {'role': 'user', 'content': [librosa.load('xxx.wav', sr=16000, mono=True)[0]]}
msgs.append(user_question)
res = model.chat(
    msgs=msgs,
    tokenizer=tokenizer,
    sampling=True,
    max_new_tokens=128,
    use_tts_template=True,
    generate_audio=True,
    temperature=0.3,
    output_audio_path='result_assistant_round_2.wav',
)
print(res)
```

Instruction-to-Speech

MiniCPM-o 2.6 can also perform Instruction-to-Speech, a.k.a. Voice Creation. You can describe a voice in detail, and the model will generate a voice that matches the description. For more sample Instruction-to-Speech instructions, refer to https://voxinstruct.github.io/VoxInstruct/.

```python
instruction = 'Speak like a male charming superstar, radiating confidence and style in every word.'

msgs = [{'role': 'user', 'content': [instruction]}]

res = model.chat(
    msgs=msgs,
    tokenizer=tokenizer,
    sampling=True,
    max_new_tokens=128,
    use_tts_template=True,
    generate_audio=True,
    temperature=0.3,
    output_audio_path='result_voice_creation.wav',
)
```

Voice Cloning

MiniCPM-o 2.6 can also do zero-shot text-to-speech, a.k.a. Voice Cloning. In this mode, the model acts as a TTS model.

```python
ref_audio, _ = librosa.load('./assets/input_examples/icl_20.wav', sr=16000, mono=True) # load the reference audio
sys_prompt = model.get_sys_prompt(ref_audio=ref_audio, mode='voice_cloning', language='en')
text_prompt = "Please read the text below."
user_question = {'role': 'user', 'content': [text_prompt, "content that you want to read"]}

msgs = [sys_prompt, user_question]

res = model.chat(
    msgs=msgs,
    tokenizer=tokenizer,
    sampling=True,
    max_new_tokens=128,
    use_tts_template=True,
    generate_audio=True,
    temperature=0.3,
    output_audio_path='result_voice_cloning.wav',
)
```

Addressing Various Audio Understanding Tasks

MiniCPM-o 2.6 can also be used to address various audio understanding tasks, such as ASR, speaker analysis, general audio captioning, and sound scene tagging.

For audio-to-text tasks, you can use the following prompts:

  • ASR with ZH (same as AST en2zh): 请仔细听这段音频片段,并将其内容逐字记录。 (Listen carefully to this audio clip and transcribe its content verbatim.)
  • ASR with EN (same as AST zh2en): Please listen to the audio snippet carefully and transcribe the content.
  • Speaker Analysis: Based on the speaker's content, speculate on their gender, condition, age range, and health status.
  • General Audio Caption: Summarize the main content of the audio.
  • General Sound Scene Tagging: Utilize one keyword to convey the audio's content or the associated scene.
```python
task_prompt = "Please listen to the audio snippet carefully and transcribe the content." + "\n" # can change to other prompts
audio_input, _ = librosa.load('./assets/input_examples/audio_understanding.mp3', sr=16000, mono=True) # load the audio to be understood

msgs = [{'role': 'user', 'content': [task_prompt, audio_input]}]

res = model.chat(
    msgs=msgs,
    tokenizer=tokenizer,
    sampling=True,
    max_new_tokens=128,
    use_tts_template=True,
    generate_audio=True,
    temperature=0.3,
    output_audio_path='result_audio_understanding.wav',
)
print(res)
```

Vision-Only mode

MiniCPM-o-2_6 uses the same inference methods as MiniCPM-V-2_6.

Chat with single image

```python
# test.py
# assumes `model` and `tokenizer` are initialized as in "Model initialization" above
from PIL import Image

image = Image.open('xx.jpg').convert('RGB')
question = 'What is in the image?'
msgs = [{'role': 'user', 'content': [image, question]}]

res = model.chat(
    image=None,
    msgs=msgs,
    tokenizer=tokenizer
)
print(res)

## if you want streaming output, please make sure sampling=True and stream=True
## model.chat will then return a generator
res = model.chat(
    msgs=msgs,
    tokenizer=tokenizer,
    sampling=True,
    stream=True
)

generated_text = ""
for new_text in res:
    generated_text += new_text
    print(new_text, flush=True, end='')
```

Chat with multiple images

Python code for running MiniCPM-o 2.6 with multiple images as input:
```python
image1 = Image.open('image1.jpg').convert('RGB')
image2 = Image.open('image2.jpg').convert('RGB')
question = 'Compare image 1 and image 2, tell me about the differences between image 1 and image 2.'

msgs = [{'role': 'user', 'content': [image1, image2, question]}]

answer = model.chat(
    msgs=msgs,
    tokenizer=tokenizer
)
print(answer)
```

In-context few-shot learning

Python code for running MiniCPM-o 2.6 with few-shot input:
```python
question = "production date"
image1 = Image.open('example1.jpg').convert('RGB')
answer1 = "2023.08.04"
image2 = Image.open('example2.jpg').convert('RGB')
answer2 = "2007.04.24"
image_test = Image.open('test.jpg').convert('RGB')

msgs = [
    {'role': 'user', 'content': [image1, question]}, {'role': 'assistant', 'content': [answer1]},
    {'role': 'user', 'content': [image2, question]}, {'role': 'assistant', 'content': [answer2]},
    {'role': 'user', 'content': [image_test, question]}
]

answer = model.chat(
    msgs=msgs,
    tokenizer=tokenizer
)
print(answer)
```

Chat with video

Python code for running MiniCPM-o 2.6 with video input:
```python
from decord import VideoReader, cpu
from PIL import Image

MAX_NUM_FRAMES = 64 # if CUDA OOM, set a smaller number

def encode_video(video_path):
    def uniform_sample(l, n):
        gap = len(l) / n
        idxs = [int(i * gap + gap / 2) for i in range(n)]
        return [l[i] for i in idxs]

    vr = VideoReader(video_path, ctx=cpu(0))
    sample_fps = round(vr.get_avg_fps() / 1)  # sample roughly 1 frame per second
    frame_idx = [i for i in range(0, len(vr), sample_fps)]
    if len(frame_idx) > MAX_NUM_FRAMES:
        frame_idx = uniform_sample(frame_idx, MAX_NUM_FRAMES)
    frames = vr.get_batch(frame_idx).asnumpy()
    frames = [Image.fromarray(v.astype('uint8')) for v in frames]
    print('num frames:', len(frames))
    return frames

video_path = "video_test.mp4"
frames = encode_video(video_path)
question = "Describe the video"
msgs = [
    {'role': 'user', 'content': frames + [question]},
]

# Set decode params for video
params = {}
params["use_image_id"] = False
params["max_slice_nums"] = 2 # use 1 if CUDA OOM and video resolution > 448*448

answer = model.chat(
    msgs=msgs,
    tokenizer=tokenizer,
    **params
)
print(answer)
```

Please refer to GitHub for more details about usage.

Inference with llama.cpp

MiniCPM-o 2.6 (vision-only mode) can run with llama.cpp. See our fork of llama.cpp and its README for more details.

Int4 quantized version

Download the int4 quantized version for lower GPU memory (7GB) usage: MiniCPM-o-2_6-int4.
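As a rough sketch, and assuming the int4 checkpoint exposes the same `AutoModel` interface as the full-precision model (the MiniCPM-o-2_6-int4 model card is authoritative for the exact loading procedure and any extra quantization dependencies), loading might look like this:

```python
# Hedged sketch: the repo id and loading path below are assumed from the link above;
# consult the int4 model card, which may require additional quantization packages.
from transformers import AutoModel, AutoTokenizer

model = AutoModel.from_pretrained(
    'openbmb/MiniCPM-o-2_6-int4',   # assumed repo id for the int4 checkpoint
    trust_remote_code=True,
    attn_implementation='sdpa',
)
model = model.eval().cuda()
tokenizer = AutoTokenizer.from_pretrained('openbmb/MiniCPM-o-2_6-int4', trust_remote_code=True)
```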

License

Model License

  • The code in this repo is released under the Apache-2.0 License.
  • The usage of MiniCPM-o and MiniCPM-V series model weights must strictly follow MiniCPM Model License.md.
  • The models and weights of MiniCPM are completely free for academic research. After filling out a "questionnaire" for registration, MiniCPM-o 2.6 weights are also available for free commercial use.

Statement

  • As an LMM, MiniCPM-o 2.6 generates content by learning from a large amount of multimodal corpora, but it cannot comprehend, express personal opinions, or make value judgements. Anything generated by MiniCPM-o 2.6 does not represent the views or positions of the model developers.
  • We will not be liable for any problems arising from the use of the MiniCPM-o and MiniCPM-V models, including but not limited to data security issues, risks of public opinion, or any risks and problems arising from the misdirection, misuse, or dissemination of the model.

Key Techniques and Other Multimodal Projects

👏 Welcome to explore key techniques of MiniCPM-o 2.6 and other multimodal projects of our team:

VisCPM | RLHF-V | LLaVA-UHD | RLAIF-V

Citation

If you find our work helpful, please consider citing our papers 📝 and liking this project ❤️!

```bibtex
@article{yao2024minicpm,
  title={MiniCPM-V: A GPT-4V Level MLLM on Your Phone},
  author={Yao, Yuan and Yu, Tianyu and Zhang, Ao and Wang, Chongyi and Cui, Junbo and Zhu, Hongji and Cai, Tianchi and Li, Haoyu and Zhao, Weilin and He, Zhihui and others},
  journal={arXiv preprint arXiv:2408.01800},
  year={2024}
}
```