This project is a real-time transcription application that uses the OpenAI Whisper model to convert speech into text. It can transcribe both live microphone input and pre-recorded audio files.
bash scripts/setup.sh
pip install whisper-live
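After installing, a quick import check confirms the package is available; a minimal sketch (the import path matches the client example later in this README):

# Sanity-check the installation; the import path is the same one used
# by the client example below.
from whisper_live.client import TranscriptionClient
print("whisper-live is importable")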
The server supports three backends: faster_whisper, tensorrt, and openvino. If running the tensorrt backend, follow the TensorRT_whisper README.
python3 run_server.py --port 9090 \
    --backend faster_whisper

# Run with a custom model
python3 run_server.py --port 9090 \
    --backend faster_whisper \
    -fw "/path/to/custom/faster/whisper/model"

# Run an English-only model
python3 run_server.py -p 9090 \
    -b tensorrt \
    -trt /home/TensorRT-LLM/examples/whisper/whisper_small_en

# Run a multilingual model
python3 run_server.py -p 9090 \
    -b tensorrt \
    -trt /home/TensorRT-LLM/examples/whisper/whisper_small \
    -m
Docker (recommended): Running WhisperLive with OpenVINO inside Docker automatically enables GPU support (iGPU/dGPU) without any additional host setup.
Native (non-Docker) use: If you prefer running outside Docker, make sure the Intel drivers and the OpenVINO runtime are installed and properly configured on your system. Refer to the OpenVINO documentation for installation instructions.
python3 run_server.py -p 9090 -b openvino
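Whichever backend you start, a quick TCP probe tells you when the server is accepting connections. A minimal sketch, assuming the default localhost:9090 from the commands above:

import socket

# Probe the port run_server.py was started on; this succeeds once the
# server is listening, whichever backend it uses.
try:
    with socket.create_connection(("localhost", 9090), timeout=2.0):
        print("server is up")
except OSError:
    print("server not reachable")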
To control the number of threads used by OpenMP, you can set the OMP_NUM_THREADS environment variable. This is useful for managing CPU resources and ensuring consistent performance. If not specified, OMP_NUM_THREADS defaults to 1. You can change this with the --omp_num_threads argument:
python3 run_server.py --port 9090 \
    --backend faster_whisper \
    --omp_num_threads 4
By default, when running the server without specifying a model, the server instantiates a new Whisper model for every client connection. This has the advantage that the server can serve different model sizes based on each client's requested size. On the other hand, it also means you have to wait for the model to load on each client connection, and (V)RAM usage increases.
When serving a custom TensorRT model using the -trt option or a custom faster_whisper model using the -fw option, the server instead instantiates the custom model once and reuses it for all client connections. If you don't want this behavior, set --no_single_model.
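For example, here is a minimal sketch of launching the server programmatically with these options; it assumes you run it from the repository root, the model path is a placeholder, and the flags mirror the commands shown above:

import subprocess

# Launch the server with a custom faster_whisper model but opt out of
# single-model mode, so each client still gets its own model instance.
server = subprocess.Popen([
    "python3", "run_server.py",
    "--port", "9090",
    "--backend", "faster_whisper",
    "-fw", "/path/to/custom/faster/whisper/model",
    "--no_single_model",
])
try:
    server.wait()  # serve until interrupted
except KeyboardInterrupt:
    server.terminate()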
- lang: Language of the input audio; applicable only when using a multilingual model.
- translate: If set to True, translate from any language to en.
- model: Whisper model size.
- use_vad: Whether to use Voice Activity Detection on the server.
- save_output_recording: Set to True to save the microphone input as a .wav file during live transcription. This option is helpful for recording sessions for later playback or analysis. Defaults to False.
- output_recording_filename: Specifies the .wav file path where the microphone input will be saved if save_output_recording is set to True.
- max_clients: Specifies the maximum number of clients the server should allow. Defaults to 4.
- max_connection_time: Maximum connection time for each client in seconds. Defaults to 600.
- mute_audio_playback: Whether to mute audio playback when transcribing an audio file. Defaults to False.

from whisper_live.client import TranscriptionClient
client = TranscriptionClient(
    "localhost",
    9090,
    lang="en",
    translate=False,
    model="small",  # also supports HF models, e.g. `Systran/faster-whisper-small`
    use_vad=False,
    save_output_recording=True,  # Only used for microphone input; False by default
    output_recording_filename="./output_recording.wav",  # Only used for microphone input
    max_clients=4,
    max_connection_time=600,
    mute_audio_playback=False,  # Only used for file input; False by default
)
This connects to the server running on localhost at port 9090. With a multilingual model, the language of the transcription is detected automatically; you can also use the lang option to specify a target language, in this case English ("en"). Set translate to True to translate from the source language to English, or leave it False to transcribe in the source language.
client("tests/jfk.wav")
client()
client(rtsp_url="rtsp://admin:admin@192.168.0.1/rtsp")
client(hls_url="http://as-hls-ww-live.akamaized.net/pool_904/live/ww/bbc_1xtra/bbc_1xtra.isml/bbc_1xtra-audio%3d96000.norewind.m3u8")
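To see the translate option from the list above in action, here is a minimal sketch that transcribes the same sample file but asks the server to translate the audio into English, assuming a server is already running on localhost:9090:

from whisper_live.client import TranscriptionClient

# translate=True asks the server to emit English text regardless of the
# audio's source language; the other arguments match the example above.
translating_client = TranscriptionClient(
    "localhost",
    9090,
    translate=True,
    model="small",
)
translating_client("tests/jfk.wav")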
GPU
docker run -it --gpus all -p 9090:9090 ghcr.io/collabora/whisperlive-gpu:latest
docker run -p 9090:9090 --runtime=nvidia --gpus all --entrypoint /bin/bash -it ghcr.io/collabora/whisperlive-tensorrt
# Build small.en engine
bash build_whisper_tensorrt.sh /app/TensorRT-LLM-examples small.en # float16
bash build_whisper_tensorrt.sh /app/TensorRT-LLM-examples small.en int8 # int8 weight only quantization
bash build_whisper_tensorrt.sh /app/TensorRT-LLM-examples small.en int4 # int4 weight only quantization
# Run the server with small.en; pass exactly one engine path, matching the precision you built
python3 run_server.py --port 9090 \
    --backend tensorrt \
    --trt_model_path "/app/TensorRT-LLM-examples/whisper/whisper_small_en_float16"
# or: --trt_model_path "/app/TensorRT-LLM-examples/whisper/whisper_small_en_int8"
# or: --trt_model_path "/app/TensorRT-LLM-examples/whisper/whisper_small_en_int4"
docker run -it --device=/dev/dri -p 9090:9090 ghcr.io/collabora/whisperlive-openvino
CPU
docker run -it -p 9090:9090 ghcr.io/collabora/whisperlive-cpu:latest
Note: By default we use the "small" model size. To build a Docker image for a different model size, change the size in server.py and then build the Docker image.
We are available to help you with both Open Source and proprietary AI projects. You can reach us via the Collabora website or vineet.suryan@collabora.com and marcus.edel@collabora.com.
@article{Whisper,
  title = {Robust Speech Recognition via Large-Scale Weak Supervision},
  url = {https://arxiv.org/abs/2212.04356},
  author = {Radford, Alec and Kim, Jong Wook and Xu, Tao and Brockman, Greg and McLeavey, Christine and Sutskever, Ilya},
  publisher = {arXiv},
  year = {2022}
}
@misc{SileroVAD,
  author = {Silero Team},
  title = {Silero VAD: pre-trained enterprise-grade Voice Activity Detector (VAD), Number Detector and Language Classifier},
  year = {2021},
  publisher = {GitHub},
  journal = {GitHub repository},
  howpublished = {\url{https://github.com/snakers4/silero-vad}},
  email = {hello@silero.ai}
}