Ovis-Image

Built upon Ovis-U1, Ovis-Image is a 7B text-to-image model specifically optimized for high-quality text rendering, designed to operate efficiently under stringent computational constraints.


The overall architecture of Ovis-Image (cf. Fig.2 in our report).

🏆 Highlights

  • Strong text rendering at a compact 7B scale: Ovis-Image is a 7B text-to-image model that delivers text rendering quality comparable to much larger 20B-class systems such as Qwen-Image and competitive with leading closed-source models like GPT4o in text-centric scenarios, while remaining small enough to run on widely accessible hardware.
  • High fidelity on text-heavy, layout-sensitive prompts: The model excels on prompts that demand tight alignment between linguistic content and rendered typography (e.g., posters, banners, logos, UI mockups, infographics), producing legible, correctly spelled, and semantically consistent text across diverse fonts, sizes, and aspect ratios without compromising overall visual quality.
  • Efficiency and deployability: With its 7B parameter budget and streamlined architecture, Ovis-Image fits on a single high-end GPU with moderate memory, supports low-latency interactive use, and scales to batch production serving, bringing near-frontier text rendering to applications where tens-of-billions-parameter models are impractical (see the back-of-envelope memory sketch after this list).
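
As a rough sanity check on the single-GPU claim, 7B parameters stored in bfloat16 occupy about 13 GiB before activations and the VAE are counted. A back-of-envelope sketch (the 7B figure comes from this card; the rest is generic arithmetic, not a measured footprint):

# Back-of-envelope GPU memory for 7B parameters in bfloat16.
# Generic arithmetic only; not a measured footprint of Ovis-Image.
params = 7e9                 # parameter count reported for Ovis-Image
bytes_per_param = 2          # bfloat16 stores each weight in 2 bytes
weights_gib = params * bytes_per_param / 1024**3
print(f"weights alone: ~{weights_gib:.1f} GiB")  # -> ~13.0 GiB
# Activations, the Ovis2.5-2B text encoder, and the VAE add several more GiB,
# which is why a single high-end GPU suffices rather than a multi-GPU node.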

✨ Showcase

Here are some examples demonstrating the capabilities of Ovis-Image.

Ovis-Image examples

🛠️ Inference

Inference with Diffusers

First, install the diffusers library with support for Ovis-Image.

pip install git+https://github.com/DoctorKey/diffusers.git@ovis-image

Next, use the OvisImagePipeline to generate the image.

import torch
from diffusers import OvisImagePipeline

pipe = OvisImagePipeline.from_pretrained("AIDC-AI/Ovis-Image-7B", torch_dtype=torch.bfloat16)
pipe.to("cuda")

prompt = "A creative 3D artistic render where the text \"OVIS-IMAGE\" is written in a bold, expressive handwritten brush style using thick, wet oil paint. The paint is a mix of vibrant rainbow colors (red, blue, yellow) swirling together like toothpaste or impasto art. You can see the ridges of the brush bristles and the glossy, wet texture of the paint. The background is a clean artist's canvas. Dynamic lighting creates soft shadows behind the floating paint strokes. Colorful, expressive, tactile texture, 4k detail."
image = pipe(prompt, negative_prompt="", num_inference_steps=50, true_cfg_scale=5.0).images[0]
image.save("ovis_image.png")
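
If you need reproducible outputs or a non-square canvas, a minimal sketch follows. It assumes the OvisImagePipeline fork keeps the standard diffusers call signature (a generator argument and height/width overrides); if the fork diverges, check its pipeline code for the exact parameter names.

import torch
from diffusers import OvisImagePipeline

pipe = OvisImagePipeline.from_pretrained("AIDC-AI/Ovis-Image-7B", torch_dtype=torch.bfloat16)
pipe.to("cuda")

# Fix the seed so repeated runs yield the same image.
generator = torch.Generator(device="cuda").manual_seed(42)
image = pipe(
    'A street sign reading "OVIS AVENUE" at dusk, photorealistic',
    negative_prompt="",
    num_inference_steps=50,
    true_cfg_scale=5.0,
    height=1024,  # assumed override; confirm the fork exposes height/width
    width=768,
    generator=generator,
).images[0]
image.save("ovis_avenue_seed42.png")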

Inference with PyTorch

Ovis-Image has been tested with Python 3.10, Torch 2.6.0, and Transformers 4.57.1. For a full list of package dependencies, please see requirements.txt.

git clone git@github.com:AIDC-AI/Ovis-Image.git
conda create -n ovis-image python=3.10 -y
conda activate ovis-image
cd Ovis-Image
pip install -r requirements.txt
pip install -e .

For text-to-image generation, run:

python ovis_image/test.py \
    --model_path AIDC-AI/Ovis-Image-7B/ovis_image.safetensors \
    --vae_path AIDC-AI/Ovis-Image-7B/ae.safetensors \
    --ovis_path AIDC-AI/Ovis-Image-7B/Ovis2.5-2B \
    --image_size 1024 \
    --denoising_steps 50 \
    --cfg_scale 5.0 \
    --prompt "A creative 3D artistic render where the text \"OVIS-IMAGE\" is written in a bold, expressive handwritten brush style using thick, wet oil paint. The paint is a mix of vibrant rainbow colors (red, blue, yellow) swirling together like toothpaste or impasto art. You can see the ridges of the brush bristles and the glossy, wet texture of the paint. The background is a clean artist's canvas. Dynamic lighting creates soft shadows behind the floating paint strokes. Colorful, expressive, tactile texture, 4k detail."
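
To render several prompts against the same checkpoints, one option is to drive the script from Python. This is a minimal sketch that reuses only the flags shown above; where test.py writes its outputs (and whether it exposes an output-path flag) is not documented here, so inspect the script before relying on it.

import subprocess

# Hypothetical batch driver: one test.py invocation per prompt.
prompts = [
    'A neon sign reading "OPEN 24 HOURS" on a rainy street',
    'A chalkboard menu listing "Latte 4.50" and "Mocha 5.00"',
]
for p in prompts:
    subprocess.run([
        "python", "ovis_image/test.py",
        "--model_path", "AIDC-AI/Ovis-Image-7B/ovis_image.safetensors",
        "--vae_path", "AIDC-AI/Ovis-Image-7B/ae.safetensors",
        "--ovis_path", "AIDC-AI/Ovis-Image-7B/Ovis2.5-2B",
        "--image_size", "1024",
        "--denoising_steps", "50",
        "--cfg_scale", "5.0",
        "--prompt", p,
    ], check=True)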

Alternatively, you can try Ovis-Image directly in your browser on Hugging Face Spaces.

📊 Performance

Evaluation of text rendering ability on CVTG-2K.

| Model | #Params. | WA (2 regions) | WA (3 regions) | WA (4 regions) | WA (5 regions) | WA (average) | NED↑ | CLIPScore↑ |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Seedream 3.0 | - | 0.6282 | 0.5962 | 0.6043 | 0.5610 | 0.5924 | 0.8537 | 0.7821 |
| GPT4o | - | 0.8779 | 0.8659 | 0.8731 | 0.8218 | 0.8569 | 0.9478 | 0.7982 |
| SD3.5 Large | 11B+8B | 0.7293 | 0.6825 | 0.6574 | 0.5940 | 0.6548 | 0.8470 | 0.7797 |
| RAG-Diffusion | 11B+12B | 0.4388 | 0.3316 | 0.2116 | 0.1910 | 0.2648 | 0.4498 | 0.7797 |
| FLUX.1-dev | 11B+12B | 0.6089 | 0.5531 | 0.4661 | 0.4316 | 0.4965 | 0.6879 | 0.7401 |
| TextCrafter | 11B+12B | 0.7628 | 0.7628 | 0.7406 | 0.6977 | 0.7370 | 0.8679 | 0.7868 |
| Qwen-Image | 7B+20B | 0.8370 | 0.8364 | 0.8313 | 0.8158 | 0.8288 | 0.9116 | 0.8017 |
| Ovis-Image | 2B+7B | 0.9248 | 0.9239 | 0.9180 | 0.9166 | 0.9200 | 0.9695 | 0.8368 |
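
For intuition about the columns: WA is word accuracy over the given number of text regions, and NED↑ is an edit-distance-based similarity between rendered and target strings (higher is better). The sketch below illustrates this metric family; it is not necessarily the exact CVTG-2K scoring protocol.

# Normalized-edit-distance-style similarity (higher = better).
# Illustrative only; the exact CVTG-2K protocol may differ.
def levenshtein(a: str, b: str) -> int:
    # Classic dynamic-programming edit distance.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

def ned_score(rendered: str, target: str) -> float:
    if not rendered and not target:
        return 1.0
    return 1.0 - levenshtein(rendered, target) / max(len(rendered), len(target))

print(ned_score("OVIS-IMAGE", "OVIS-IMAGE"))  # 1.0 (perfect rendering)
print(ned_score("OV1S-IMAGE", "OVIS-IMAGE"))  # 0.9 (one wrong character)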

Evaluation of text rendering ability on LongText-Bench.

| Model | #Params. | LongText-Bench-EN | LongText-Bench-ZH |
| --- | --- | --- | --- |
| Kolors 2.0 | - | 0.258 | 0.329 |
| GPT4o | - | 0.956 | 0.619 |
| Seedream 3.0 | - | 0.896 | 0.878 |
| OmniGen2 | 3B+4B | 0.561 | 0.059 |
| Janus-Pro | 7B | 0.019 | 0.006 |
| BLIP3-o | 7B+1B | 0.021 | 0.018 |
| FLUX.1-dev | 11B+12B | 0.607 | 0.005 |
| BAGEL | 7B+7B | 0.373 | 0.310 |
| HiDream-I1-Full | 11B+17B | 0.543 | 0.024 |
| Qwen-Image | 7B+20B | 0.943 | 0.946 |
| Ovis-Image | 2B+7B | 0.922 | 0.964 |

Evaluation of text-to-image generation ability on DPG-Bench.

| Model | #Params. | Global | Entity | Attribute | Relation | Other | Overall |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Seedream 3.0 | - | 94.31 | 92.65 | 91.36 | 92.78 | 88.24 | 88.27 |
| GPT4o | - | 88.89 | 88.94 | 89.84 | 92.63 | 90.96 | 85.15 |
| Ovis-U1 | 2B+1B | 82.37 | 90.08 | 88.68 | 93.35 | 85.20 | 83.72 |
| OmniGen2 | 3B+4B | 88.81 | 88.83 | 90.18 | 89.37 | 90.27 | 83.57 |
| Janus-Pro | 7B | 86.90 | 88.90 | 89.40 | 89.32 | 89.48 | 84.19 |
| BAGEL | 7B+7B | 88.94 | 90.37 | 91.29 | 90.82 | 88.67 | 85.07 |
| HiDream-I1-Full | 11B+17B | 76.44 | 90.22 | 89.48 | 93.74 | 91.83 | 85.89 |
| UniWorld-V1 | 7B+12B | 83.64 | 88.39 | 88.44 | 89.27 | 87.22 | 81.38 |
| Qwen-Image | 7B+20B | 91.32 | 91.56 | 92.02 | 94.31 | 92.73 | 88.32 |
| Ovis-Image | 2B+7B | 82.37 | 92.38 | 90.42 | 93.98 | 91.20 | 86.59 |

Evaluation of text-to-image generation ability on GenEval.

| Model | #Params. | Single object | Two object | Counting | Colors | Position | Attribute binding | Overall |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Seedream 3.0 | - | 0.99 | 0.96 | 0.91 | 0.93 | 0.47 | 0.80 | 0.84 |
| GPT4o | - | 0.99 | 0.92 | 0.85 | 0.92 | 0.75 | 0.61 | 0.84 |
| Ovis-U1 | 2B+1B | 0.98 | 0.98 | 0.90 | 0.92 | 0.79 | 0.75 | 0.89 |
| OmniGen2 | 3B+4B | 1.00 | 0.95 | 0.64 | 0.88 | 0.55 | 0.76 | 0.80 |
| Janus-Pro | 7B | 0.99 | 0.89 | 0.59 | 0.90 | 0.79 | 0.66 | 0.80 |
| BAGEL | 7B+7B | 0.99 | 0.94 | 0.81 | 0.88 | 0.64 | 0.63 | 0.82 |
| HiDream-I1-Full | 11B+17B | 1.00 | 0.98 | 0.79 | 0.91 | 0.60 | 0.72 | 0.83 |
| UniWorld-V1 | 7B+12B | 0.99 | 0.93 | 0.79 | 0.89 | 0.49 | 0.70 | 0.80 |
| Qwen-Image | 7B+20B | 0.99 | 0.92 | 0.89 | 0.88 | 0.76 | 0.77 | 0.87 |
| Ovis-Image | 2B+7B | 1.00 | 0.97 | 0.76 | 0.86 | 0.67 | 0.80 | 0.84 |

Evaluation of text-to-image generation ability on OneIG-EN.

| Model | #Params. | Alignment | Text | Reasoning | Style | Diversity | Overall |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Kolors 2.0 | - | 0.820 | 0.427 | 0.262 | 0.360 | 0.300 | 0.434 |
| Imagen4 | - | 0.857 | 0.805 | 0.338 | 0.377 | 0.199 | 0.515 |
| Seedream 3.0 | - | 0.818 | 0.865 | 0.275 | 0.413 | 0.277 | 0.530 |
| GPT4o | - | 0.851 | 0.857 | 0.345 | 0.462 | 0.151 | 0.533 |
| Ovis-U1 | 2B+1B | 0.816 | 0.034 | 0.226 | 0.443 | 0.191 | 0.342 |
| CogView4 | 6B | 0.786 | 0.641 | 0.246 | 0.353 | 0.205 | 0.446 |
| Janus-Pro | 7B | 0.553 | 0.001 | 0.139 | 0.276 | 0.365 | 0.267 |
| OmniGen2 | 3B+4B | 0.804 | 0.680 | 0.271 | 0.377 | 0.242 | 0.475 |
| BLIP3-o | 7B+1B | 0.711 | 0.013 | 0.223 | 0.361 | 0.229 | 0.307 |
| FLUX.1-dev | 11B+12B | 0.786 | 0.523 | 0.253 | 0.368 | 0.238 | 0.434 |
| BAGEL | 7B+7B | 0.769 | 0.244 | 0.173 | 0.367 | 0.251 | 0.361 |
| BAGEL+CoT | 7B+7B | 0.793 | 0.020 | 0.206 | 0.390 | 0.209 | 0.324 |
| HiDream-I1-Full | 11B+17B | 0.829 | 0.707 | 0.317 | 0.347 | 0.186 | 0.477 |
| HunyuanImage-2.1 | 7B+17B | 0.835 | 0.816 | 0.299 | 0.355 | 0.127 | 0.486 |
| Qwen-Image | 7B+20B | 0.882 | 0.891 | 0.306 | 0.418 | 0.197 | 0.539 |
| Ovis-Image | 2B+7B | 0.858 | 0.914 | 0.308 | 0.386 | 0.186 | 0.530 |

Evaluation of text-to-image generation ability on OneIG-ZH.

| Model | #Params. | Alignment | Text | Reasoning | Style | Diversity | Overall |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Kolors 2.0 | - | 0.738 | 0.502 | 0.226 | 0.331 | 0.333 | 0.426 |
| Seedream 3.0 | - | 0.793 | 0.928 | 0.281 | 0.397 | 0.243 | 0.528 |
| GPT4o | - | 0.812 | 0.650 | 0.300 | 0.449 | 0.159 | 0.474 |
| CogView4 | 6B | 0.700 | 0.193 | 0.236 | 0.348 | 0.214 | 0.338 |
| Janus-Pro | 7B | 0.324 | 0.148 | 0.104 | 0.264 | 0.358 | 0.240 |
| BLIP3-o | 7B+1B | 0.608 | 0.092 | 0.213 | 0.369 | 0.233 | 0.303 |
| BAGEL | 7B+7B | 0.672 | 0.365 | 0.186 | 0.357 | 0.268 | 0.370 |
| BAGEL+CoT | 7B+7B | 0.719 | 0.127 | 0.219 | 0.385 | 0.197 | 0.329 |
| HiDream-I1-Full | 11B+17B | 0.620 | 0.205 | 0.256 | 0.304 | 0.300 | 0.337 |
| HunyuanImage-2.1 | 7B+17B | 0.775 | 0.896 | 0.271 | 0.348 | 0.114 | 0.481 |
| Qwen-Image | 7B+20B | 0.825 | 0.963 | 0.267 | 0.405 | 0.279 | 0.548 |
| Ovis-Image | 2B+7B | 0.805 | 0.961 | 0.273 | 0.368 | 0.198 | 0.521 |

📚 Citation

If you find Ovis-Image useful for your research or applications, please cite our technical report:

@article{wang2025ovis_image,
  title={Ovis-Image Technical Report},
  author={Wang, Guo-Hua and Cao, Liangfu and Cui, Tianyu and Fu, Minghao and Chen, Xiaohao and Zhan, Pengxin and Zhao, Jianshan and Li, Lan and Fu, Bowen and Liu, Jiaqi and Chen, Qing-Guo},
  journal={arXiv preprint arXiv:2511.22982},
  year={2025}
}

🙏 Acknowledgments

The code is built upon Ovis and FLUX. We thank their authors for open-sourcing their great work.

📄 License

This project is licensed under the Apache License, Version 2.0 (SPDX-License-Identifier: Apache-2.0).

🚨 Disclaimer

We used compliance-checking algorithms during training to ensure, to the best of our ability, the compliance of the trained model(s). Due to the complexity of the data and the diversity of model usage scenarios, we cannot guarantee that the model is completely free of copyright issues or improper content. If you believe anything infringes on your rights or generates improper content, please contact us, and we will promptly address the matter.

🔥 We are hiring!

We are looking for both interns and full-time researchers to join our team, focusing on multimodal understanding, generation, reasoning, AI agents, and unified multimodal models. If you are interested in exploring these exciting areas, please reach out to us at qingguo.cqg@alibaba-inc.com.