🤗 Hugging Face | 🖥️ Official Website | 🕖 HunyuanAPI | 🕹️ Demo | 🤖 ModelScope
Technical Report | GITHUB | cnb.cool | LICENSE
Welcome to the official repository of Hunyuan-A13B, an innovative and open-source large language model (LLM) built on a fine-grained Mixture-of-Experts (MoE) architecture. Designed for efficiency and scalability, Hunyuan-A13B delivers cutting-edge performance with minimal computational overhead, making it an ideal choice for advanced reasoning and general-purpose applications, especially in resource-constrained environments.
With the rapid advancement of artificial intelligence technology, large language models (LLMs) have achieved remarkable progress in natural language processing, computer vision, and scientific tasks. However, as model scales continue to expand, optimizing resource consumption while maintaining high performance has become a critical challenge. To address this, we have explored Mixture of Experts (MoE) architectures. The newly introduced Hunyuan-A13B model features a total of 80 billion parameters with 13 billion active parameters. It not only delivers high-performance results but also achieves optimal resource efficiency, successfully balancing computational power and resource utilization.
As a powerful yet computationally efficient large model, Hunyuan-A13B is an ideal choice for researchers and developers seeking high performance under resource constraints. Whether for academic research, cost-effective AI solution development, or innovative application exploration, this model provides a robust foundation for advancement.
Note: The following benchmarks were evaluated with the TRT-LLM backend.
| Model | Hunyuan-Large | Qwen2.5-72B | Qwen3-A22B | Hunyuan-A13B |
|---|---|---|---|---|
| MMLU | 88.40 | 86.10 | 87.81 | 88.17 |
| MMLU-Pro | 60.20 | 58.10 | 68.18 | 67.23 |
| MMLU-Redux | 87.47 | 83.90 | 87.40 | 87.67 |
| BBH | 86.30 | 85.80 | 88.87 | 87.56 |
| SuperGPQA | 38.90 | 36.20 | 44.06 | 41.32 |
| EvalPlus | 75.69 | 65.93 | 77.60 | 78.64 |
| MultiPL-E | 59.13 | 60.50 | 65.94 | 69.33 |
| MBPP | 72.60 | 76.00 | 81.40 | 83.86 |
| CRUX-I | 57.00 | 57.63 | - | 70.13 |
| CRUX-O | 60.63 | 66.20 | 79.00 | 77.00 |
| MATH | 69.80 | 62.12 | 71.84 | 72.35 |
| CMATH | 91.30 | 84.80 | - | 91.17 |
| GSM8k | 92.80 | 91.50 | 94.39 | 91.83 |
| GPQA | 25.18 | 45.90 | 47.47 | 49.12 |
Hunyuan-A13B-Instruct has achieved highly competitive performance across multiple benchmarks, particularly in mathematics, science, agent domains, and more. We compared it with several powerful models, and the results are shown below.
| Topic | Bench | OpenAI-o1-1217 | DeepSeek R1 | Qwen3-A22B | Hunyuan-A13B-Instruct |
|---|---|---|---|---|---|
| Mathematics | AIME 2024 | 74.3 | 79.8 | 85.7 | 87.3 |
| Mathematics | AIME 2025 | 79.2 | 70 | 81.5 | 76.8 |
| Mathematics | MATH | 96.4 | 94.9 | 94.0 | 94.3 |
| Science | GPQA-Diamond | 78 | 71.5 | 71.1 | 71.2 |
| Science | OlympiadBench | 83.1 | 82.4 | 85.7 | 82.7 |
| Coding | Livecodebench | 63.9 | 65.9 | 70.7 | 63.9 |
| Coding | Fullstackbench | 64.6 | 71.6 | 65.6 | 67.8 |
| Coding | ArtifactsBench | 38.6 | 44.6 | 44.6 | 43 |
| Reasoning | BBH | 80.4 | 83.7 | 88.9 | 89.1 |
| Reasoning | DROP | 90.2 | 92.2 | 90.3 | 91.1 |
| Reasoning | ZebraLogic | 81 | 78.7 | 80.3 | 84.7 |
| Instruction Following | IF-Eval | 91.8 | 88.3 | 83.4 | 84.7 |
| Instruction Following | SysBench | 82.5 | 77.7 | 74.2 | 76.1 |
| Text Creation | LengthCtrl | 60.1 | 55.9 | 53.3 | 55.4 |
| Text Creation | InsCtrl | 74.8 | 69 | 73.7 | 71.9 |
| NLU | ComplexNLU | 64.7 | 64.5 | 59.8 | 61.2 |
| NLU | Word-Task | 67.1 | 76.3 | 56.4 | 62.9 |
| Agent | BFCL v3 | 67.8 | 56.9 | 70.8 | 78.3 |
| Agent | τ-Bench | 60.4 | 43.8 | 44.6 | 54.7 |
| Agent | ComplexFuncBench | 47.6 | 41.1 | 40.6 | 61.2 |
| Agent | C3-Bench | 58.8 | 55.3 | 51.7 | 63.5 |
The following code snippet shows how to use the transformers library to load and run the model. It also demonstrates how to enable or disable the reasoning mode, and how to parse the reasoning process and the final answer from the generated text.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import os
import re

model_name_or_path = os.environ['MODEL_PATH']
# model_name_or_path = "tencent/Hunyuan-A13B-Instruct"

tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, trust_remote_code=True)
# You may want to use bfloat16 and/or move the model to GPU here
model = AutoModelForCausalLM.from_pretrained(model_name_or_path, device_map="auto", trust_remote_code=True)

messages = [
    {"role": "user", "content": "Write a short summary of the benefits of regular exercise"},
]
tokenized_chat = tokenizer.apply_chat_template(
    messages,
    tokenize=True,
    add_generation_prompt=True,
    return_tensors="pt",
    enable_thinking=True,  # Toggle thinking mode (default: True)
)

outputs = model.generate(tokenized_chat.to(model.device), max_new_tokens=4096)
output_text = tokenizer.decode(outputs[0])

# Split the reasoning (<think>...</think>) from the final answer (<answer>...</answer>).
# This assumes both tags are present in the generated text.
think_pattern = r'<think>(.*?)</think>'
think_matches = re.findall(think_pattern, output_text, re.DOTALL)
answer_pattern = r'<answer>(.*?)</answer>'
answer_matches = re.findall(answer_pattern, output_text, re.DOTALL)

think_content = [match.strip() for match in think_matches][0]
answer_content = [match.strip() for match in answer_matches][0]

print(f"thinking_content:{think_content}\n\n")
print(f"answer_content:{answer_content}\n\n")
```
For deployment, you can use frameworks such as vLLM, SGLang, or TensorRT-LLM to serve the model and create an OpenAI-compatible API endpoint.
We provide a pre-built Docker image containing vLLM 0.8.5 with full support for this model. Support in the official vLLM release is currently under development. Note: CUDA 12.8 is required for this Docker image.
https://hub.docker.com/r/hunyuaninfer/hunyuan-large/tags
```
docker pull hunyuaninfer/hunyuan-a13b:hunyuan-moe-A13B-vllm
```
Download the model files:

```
modelscope download --model Tencent-Hunyuan/Hunyuan-A13B-Instruct-FP8
```

Start the API server.

For a model downloaded from Hugging Face:
```
docker run --privileged --user root --net=host --ipc=host \
    -v ~/.cache:/root/.cache/ \
    --gpus=all -it --entrypoint python hunyuaninfer/hunyuan-a13b:hunyuan-moe-A13B-vllm \
    -m vllm.entrypoints.openai.api_server --host 0.0.0.0 --port 8000 \
    --tensor-parallel-size 2 --dtype bfloat16 --kv-cache-dtype fp8 \
    --model tencent/Hunyuan-A13B-Instruct-FP8 --trust-remote-code
```
For a model downloaded from ModelScope:
```
docker run --privileged --user root --net=host --ipc=host \
    -v ~/.cache/modelscope:/root/.cache/modelscope \
    --gpus=all -it --entrypoint python hunyuaninfer/hunyuan-a13b:hunyuan-moe-A13B-vllm \
    -m vllm.entrypoints.openai.api_server --host 0.0.0.0 --port 8000 \
    --tensor-parallel-size 2 --dtype bfloat16 --kv-cache-dtype fp8 \
    --model /root/.cache/modelscope/hub/models/Tencent-Hunyuan/Hunyuan-A13B-Instruct-FP8 --trust-remote-code
```
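Once the vLLM container is running, it serves an OpenAI-compatible API on the port passed above, so any OpenAI client can query it. A minimal client sketch, assuming the server listens on `http://localhost:8000/v1` and was started with the Hugging Face model id as shown (the `model` field must match the `--model` value used to launch the server):

```python
from openai import OpenAI

# vLLM's OpenAI-compatible server does not check the API key; any placeholder works.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="tencent/Hunyuan-A13B-Instruct-FP8",  # must match the --model used to start the server
    messages=[{"role": "user", "content": "Write a short summary of the benefits of regular exercise"}],
    max_tokens=1024,
)
print(response.choices[0].message.content)
```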
We also provide a pre-built Docker image based on the latest version of SGLang.
To get started:
```
# China mirror
docker pull docker.cnb.cool/tencent/hunyuan/hunyuan-a13b:hunyuan-moe-A13B-sglang

# Docker Hub
docker pull hunyuaninfer/hunyuan-a13b:hunyuan-moe-A13B-sglang
```
```
docker run --gpus all \
    --shm-size 32g \
    -p 30000:30000 \
    --ipc=host \
    hunyuaninfer/hunyuan-a13b:hunyuan-moe-A13B-sglang \
    -m sglang.launch_server --model-path hunyuan/Hunyuan-A13B-Instruct-FP8 --tp 2 --trust-remote-code --host 0.0.0.0 --port 30000
```
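`sglang.launch_server` also exposes OpenAI-compatible endpoints, so the same client pattern works against the SGLang container, this time with streaming enabled. A sketch under the assumption that the server above is listening on port 30000 and that the `model` field should match the `--model-path` it was launched with:

```python
from openai import OpenAI

client = OpenAI(base_url="http://localhost:30000/v1", api_key="EMPTY")

# Stream tokens as they are generated instead of waiting for the full completion.
stream = client.chat.completions.create(
    model="hunyuan/Hunyuan-A13B-Instruct-FP8",  # assumed to match the --model-path above
    messages=[{"role": "user", "content": "Explain the Mixture-of-Experts architecture in two sentences."}],
    max_tokens=512,
    stream=True,
)
for chunk in stream:
    delta = chunk.choices[0].delta.content
    if delta:
        print(delta, end="", flush=True)
print()
```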
If you would like to leave a message for our R&D and product teams, you are welcome to contact our open-source team. You can also reach us via email (hunyuan_opensource@tencent.com).