
Photographs captured during corpus creation efforts in Pakistan and Liberia.

Omnilingual ASR: Open-Source Multilingual Speech Recognition for 1600+ Languages

Omnilingual ASR is an open-source speech recognition system supporting over 1,600 languages — including hundreds never previously covered by any ASR technology. Designed for broad accessibility, it enables new languages to be added with just a few paired examples without requiring specialized expertise or large datasets. By combining scalable zero-shot learning with a flexible model family, Omnilingual ASR aims to make speech technology more inclusive and adaptable for communities and researchers worldwide.

Our 7B-LLM-ASR system achieves state-of-the-art performance across 1,600+ languages, with a character error rate (CER) below 10% for 78% of those languages.

Documentation

  • Quick Start
  • Models & Architecture
  • Training & Data Pipeline
      • Data Preparation - End-to-end guide for multilingual dataset preparation, HuggingFace integration, and parquet processing
      • Training Recipes - Pre-configured workflows for CTC and LLM model training

Installation

The models were developed using fairseq2, a research-focused sequence modeling toolkit. We provide a reference inference pipeline that works across platforms; audio support requires libsndfile (on macOS: brew install libsndfile; on Windows, additional setup may be required).

```bash
# using pip
pip install omnilingual-asr

# using uv
uv add omnilingual-asr
```
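If audio decoding fails after installation, one quick way to check that libsndfile is usable from Python is the soundfile package (its Python binding). This is our own optional sanity check, not part of the package, and the file path is hypothetical:

```python
import soundfile as sf  # Python binding for libsndfile

# Read a short test clip; a failure here usually points to a libsndfile issue
data, sample_rate = sf.read("/path/to/test.wav")
print(f"Decoded {len(data) / sample_rate:.1f}s of audio at {sample_rate} Hz")
```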

Inference

```python
from omnilingual_asr.models.inference.pipeline import ASRInferencePipeline

pipeline = ASRInferencePipeline(model_card="omniASR_LLM_7B")

audio_files = ["/path/to/eng_audio1.flac", "/path/to/deu_audio2.wav"]
lang = ["eng_Latn", "deu_Latn"]

transcriptions = pipeline.transcribe(audio_files, lang=lang, batch_size=2)
```

More details on running specific models can be found in the src/omnilingual_asr/models/inference directory.

⚠️ Important: Only audio files shorter than 40 seconds are currently accepted for inference. Support for transcribing unlimited-length audio is planned.
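In the meantime, longer recordings can be split into sub-40-second chunks before transcription. Below is a minimal sketch using the soundfile package; the 30 s chunk size, the mono-audio assumption, and the naive fixed-length boundaries (which may cut words) are our own choices, not part of the library:

```python
import soundfile as sf

from omnilingual_asr.models.inference.pipeline import ASRInferencePipeline

def chunk_audio(path, max_seconds=30.0):
    """Split an audio file into fixed-length chunks below the 40 s limit."""
    data, sample_rate = sf.read(path)  # mono audio assumed
    chunk_len = int(max_seconds * sample_rate)
    return [
        {"waveform": data[start:start + chunk_len], "sample_rate": sample_rate}
        for start in range(0, len(data), chunk_len)
    ]

# Transcribe each chunk and join the partial transcripts
chunks = chunk_audio("/path/to/long_audio.wav")
pipeline = ASRInferencePipeline(model_card="omniASR_LLM_7B")
transcriptions = pipeline.transcribe(chunks, batch_size=2)
print(" ".join(transcriptions))
```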

Supported Languages

The full list of 1600+ supported languages can be accessed programmatically:

```python
from omnilingual_asr.models.wav2vec2_llama.lang_ids import supported_langs

# Print all supported languages
print(f"Total supported languages: {len(supported_langs)}")
print(supported_langs)

# Check if a specific language is supported
if "eng_Latn" in supported_langs:
    print("English (Latin script) is supported!")
```

Language identifiers follow the format {language_code}_{script}; for example, eng_Latn is English in Latin script and cmn_Hans is Mandarin Chinese in Simplified script.
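Because each identifier is a plain string in this format, ordinary string operations are enough to slice the list. An illustrative sketch (only the documented import is part of the package API):

```python
from collections import Counter

from omnilingual_asr.models.wav2vec2_llama.lang_ids import supported_langs

# Count languages per script suffix, e.g. Latn, Arab, Cyrl
script_counts = Counter(lang.rsplit("_", 1)[1] for lang in supported_langs)
print(script_counts.most_common(5))

# List every script variant available for one language code
print([lang for lang in supported_langs if lang.startswith("cmn_")])
```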

Using the HuggingFace Dataset 🤗

We provide a large-scale multilingual speech dataset on HuggingFace under the CC-BY-4.0 license: facebook/omnilingual-asr-corpus. It can be used directly with our inference pipeline for evaluation or testing:

```bash
pip install "omnilingual-asr[data]"
```
```python
from datasets import load_dataset

from omnilingual_asr.models.inference.pipeline import ASRInferencePipeline

# Load dataset for a specific language (e.g., Ligurian)
omni_dataset = load_dataset(
    "facebook/omnilingual-asr-corpus",
    "lij_Latn",
    split="train",
    streaming=True,
)
batch = next(omni_dataset.iter(5))

# Convert to pipeline input format
audio_data = [
    {"waveform": x["array"], "sample_rate": x["sampling_rate"]}
    for x in batch["audio"]
]

# Run inference
pipeline = ASRInferencePipeline(model_card="omniASR_LLM_7B")
transcriptions = pipeline.transcribe(audio_data, batch_size=2)

# Display results
for i, (transcription, original_text) in enumerate(zip(transcriptions, batch["raw_text"]), 1):
    print(f"\nSample {i}:")
    print(f"  Ground Truth: {original_text}")
    print(f"  Predicted:    {transcription}")
```
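To turn such a batch into a quick evaluation, a character error rate can be computed with a plain edit-distance helper. The function below is our own sketch, not part of the package:

```python
def cer(reference: str, hypothesis: str) -> float:
    """Character error rate: Levenshtein distance / reference length."""
    prev = list(range(len(hypothesis) + 1))
    for i, r in enumerate(reference, 1):
        curr = [i]
        for j, h in enumerate(hypothesis, 1):
            curr.append(min(
                prev[j] + 1,             # deletion
                curr[j - 1] + 1,         # insertion
                prev[j - 1] + (r != h),  # substitution
            ))
        prev = curr
    return prev[-1] / max(len(reference), 1)

# Compare predictions against the ground-truth transcripts from the batch
for ref, hyp in zip(batch["raw_text"], transcriptions):
    print(f"CER: {cer(ref, hyp):.3f}")
```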

Model Architectures

| Model Name | Features | Parameters | Download Size (FP32) | Inference VRAM¹ | Real-Time Factor¹ (Relative Speed²) |
|---|---|---|---|---|---|
| omniASR_W2V_300M | SSL | 317_390_592 | 1.2 GiB | - | - |
| omniASR_W2V_1B | SSL | 965_514_752 | 3.6 GiB | - | - |
| omniASR_W2V_3B | SSL | 3_064_124_672 | 12.0 GiB | - | - |
| omniASR_W2V_7B | SSL | 6_488_487_168 | 25.0 GiB | - | - |
| omniASR_CTC_300M | ASR | 325_494_996 | 1.3 GiB | ~2 GiB | 0.001 (96x) |
| omniASR_CTC_1B | ASR | 975_065_300 | 3.7 GiB | ~3 GiB | 0.002 (48x) |
| omniASR_CTC_3B | ASR | 3_080_423_636 | 12.0 GiB | ~8 GiB | 0.003 (32x) |
| omniASR_CTC_7B | ASR | 6_504_786_132 | 25.0 GiB | ~15 GiB | 0.006 (16x) |
| omniASR_LLM_300M | ASR with optional language conditioning | 1_627_603_584 | 6.1 GiB | ~5 GiB | 0.090 (~1x) |
| omniASR_LLM_1B | ASR with optional language conditioning | 2_275_710_592 | 8.5 GiB | ~6 GiB | 0.091 (~1x) |
| omniASR_LLM_3B | ASR with optional language conditioning | 4_376_679_040 | 17.0 GiB | ~10 GiB | 0.093 (~1x) |
| omniASR_LLM_7B | ASR with optional language conditioning | 7_801_041_536 | 30.0 GiB | ~17 GiB | 0.092 (~1x) |
| omniASR_LLM_7B_ZS | Zero-Shot ASR | 7_810_900_608 | 30.0 GiB | ~20 GiB | 0.194 (~0.5x) |
| omniASR_tokenizer | Tokenizer for most architectures (except omniASR_LLM_7B) | - | 100 KiB | - | - |
| omniASR_tokenizer_v7 | Tokenizer for omniASR_LLM_7B | - | 100 KiB | - | - |

¹ Measured with batch size 1, 30 s audio, BF16, on an A100 GPU.

² Speed relative to omniASR_LLM_7B.
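To make the real-time factor concrete: transcription time ≈ RTF × audio duration. Illustrative arithmetic using two rows from the table above:

```python
audio_seconds = 30.0

# RTF values taken from the table above
rtf = {"omniASR_CTC_300M": 0.001, "omniASR_LLM_7B": 0.092}

for model, factor in rtf.items():
    # e.g. omniASR_LLM_7B: ~2.76s to transcribe 30s of audio
    print(f"{model}: ~{factor * audio_seconds:.2f}s to transcribe {audio_seconds:.0f}s of audio")
```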

Model Download & Storage

  • Automatic Download: Models are automatically downloaded on first use during training or inference
  • Storage Location: Models are saved to ~/.cache/fairseq2/assets/ (a sketch for inspecting this cache follows)
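Since models land in the fixed cache directory above, the standard library is enough to see what has already been downloaded. A small sketch; the exact file layout inside the cache may differ:

```python
from pathlib import Path

# Default fairseq2 asset cache location (see above)
cache = Path.home() / ".cache" / "fairseq2" / "assets"

if cache.exists():
    for entry in sorted(cache.iterdir()):
        # Sum file sizes, whether the entry is a single file or a directory
        files = [entry] if entry.is_file() else [f for f in entry.rglob("*") if f.is_file()]
        size_gib = sum(f.stat().st_size for f in files) / 2**30
        print(f"{entry.name}: {size_gib:.2f} GiB")
else:
    print("No models downloaded yet.")
```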

Architecture Documentation

We provide a high-level model architecture overview in the model directory (src/omnilingual_asr/models), with individual configurations for each model family in the respective subdirectories.

Training

To further finetune the released checkpoints on your own data, use our data preparation guide followed by the finetuning recipe guide.

License

Omnilingual ASR code and models are released under the Apache 2.0 license.

Citation

If you use the Omnilingual ASR model suite in your research and wish to cite us, please use the following BibTeX entry (an arXiv version will be added soon):

```bibtex
@misc{omnilingualasr2025,
  title={{Omnilingual ASR}: Open-Source Multilingual Speech Recognition for 1600+ Languages},
  author={{Omnilingual ASR Team} and Keren, Gil and Kozhevnikov, Artyom and Meng, Yen and Ropers, Christophe and Setzler, Matthew and Wang, Skyler and Adebara, Ife and Auli, Michael and Chan, Kevin and Cheng, Chierh and Chuang, Joe and Droof, Caley and Duppenthaler, Mark and Duquenne, Paul-Ambroise and Erben, Alexander and Gao, Cynthia and Mejia Gonzalez, Gabriel and Lyu, Kehan and Miglani, Sagar and Pratap, Vineel and Sadagopan, Kaushik Ram and Saleem, Safiyyah and Turkatenko, Arina and Ventayol-Boada, Albert and Yong, Zheng-Xin and Chung, Yu-An and Maillard, Jean and Moritz, Rashel and Mourachko, Alexandre and Williamson, Mary and Yates, Shireen},
  year={2025},
  url={https://ai.meta.com/research/publications/omnilingual-asr-open-source-multilingual-speech-recognition-for-1600-languages/},
}
```