RAGFlow is a leading open-source Retrieval-Augmented Generation (RAG) engine that fuses cutting-edge RAG with Agent capabilities to create a superior context layer for LLMs. It offers a streamlined RAG workflow adaptable to enterprises of any scale. Powered by a converged context engine and pre-built agent templates, RAGFlow enables developers to transform complex data into high-fidelity, production-ready AI systems with exceptional efficiency and precision.
Try our demo at https://demo.ragflow.io.
⭐️ Star our repository to stay up-to-date with exciting new features and improvements! Get instant notifications for new releases! 🌟
[!TIP] If you have not installed Docker on your local machine (Windows, Mac, or Linux), see Install Docker Engine.
Ensure vm.max_map_count >= 262144:
To check the value of `vm.max_map_count`:

```bash
$ sysctl vm.max_map_count
```

Reset `vm.max_map_count` to a value of at least 262144 if it is not:

```bash
# In this case, we set it to 262144:
$ sudo sysctl -w vm.max_map_count=262144
```

This change will be reset after a system reboot. To ensure your change remains permanent, add or update the `vm.max_map_count` value in /etc/sysctl.conf accordingly:

```
vm.max_map_count=262144
```
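If you would rather not wait for a reboot to pick up the /etc/sysctl.conf entry, the standard `sysctl -p` reload works here too (plain `sysctl` usage, nothing RAGFlow-specific):

```bash
# Reload kernel parameters from /etc/sysctl.conf so the persistent setting applies now
$ sudo sysctl -p
# Verify the value that is currently in effect
$ sysctl vm.max_map_count
```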
Clone the repo:
```bash
$ git clone https://github.com/infiniflow/ragflow.git
```
Start up the server using the pre-built Docker images:
[!CAUTION] All Docker images are built for x86 platforms. We don't currently offer Docker images for ARM64. If you are on an ARM64 platform, follow this guide to build a Docker image compatible with your system.
The command below downloads the `v0.22.1` edition of the RAGFlow Docker image. See the following table for descriptions of different RAGFlow editions. To download a RAGFlow edition different from `v0.22.1`, update the `RAGFLOW_IMAGE` variable accordingly in docker/.env before using `docker compose` to start the server.
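For example, pinning a particular edition comes down to one line in docker/.env. A sketch of editing it from the shell (the variable already exists in the template; adjust the tag to the edition you want):

```bash
# Illustrative: pin the RAGFlow image tag in docker/.env (run from the repository root)
sed -i 's|^RAGFLOW_IMAGE=.*|RAGFLOW_IMAGE=infiniflow/ragflow:v0.22.1|' docker/.env
```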
```bash
$ cd ragflow/docker

# Optional: check out a stable release tag (see https://github.com/infiniflow/ragflow/releases).
# This step ensures the entrypoint.sh file in the code matches the Docker image version.
# git checkout v0.22.1

# Use CPU for DeepDoc tasks:
$ docker compose -f docker-compose.yml up -d

# To use GPU to accelerate DeepDoc tasks:
# sed -i '1i DEVICE=gpu' .env
# docker compose -f docker-compose.yml up -d
```
Note: Prior to `v0.22.0`, we provided both full images (with embedding models) and slim images (without embedding models). Details as follows:
| RAGFlow image tag | Image size (GB) | Has embedding models? | Stable? |
|---|---|---|---|
| v0.21.1 | ≈9 | ✔️ | Stable release |
| v0.21.1-slim | ≈2 | ❌ | Stable release |
Starting with `v0.22.0`, we ship only the slim edition and no longer append the `-slim` suffix to the image tag.
Check the server status once the server is up and running:

```bash
$ docker logs -f docker-ragflow-cpu-1
```
The following output confirms a successful launch of the system:
```
     ____   ___    ______ ______ __
    / __ \ /   |  / ____// ____// /____  _      __
   / /_/ // /| | / / __ / /_   / // __ \| | /| / /
  / _, _// ___ |/ /_/ // __/  / // /_/ /|  |/ |/ /
 /_/ |_|/_/  |_|\____//_/    /_/ \____/ |__/|__/

 * Running on all addresses (0.0.0.0)
```
If you skip this confirmation step and directly log in to RAGFlow, your browser may prompt a `network anomaly` error because, at that moment, your RAGFlow may not be fully initialized.
In your web browser, enter the IP address of your server and log in to RAGFlow.
With the default settings, you only need to enter `http://IP_OF_YOUR_MACHINE` (sans port number): the default HTTP serving port `80` can be omitted when using the default configurations.
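If you want to confirm the web server is answering before opening the browser, a plain `curl` probe (not RAGFlow-specific) is enough:

```bash
# Sketch: check that the default HTTP port responds before logging in
curl -I http://IP_OF_YOUR_MACHINE
```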
In service_conf.yaml.template, select the desired LLM factory in `user_default_llm` and update the `API_KEY` field with the corresponding API key.

See llm_api_key_setup for more information.
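The relevant block looks roughly like the sketch below. The exact keys and the list of supported factories come from the service_conf.yaml.template in your checkout; the factory name and key shown here are placeholders:

```yaml
# service_conf.yaml.template (sketch): default LLM provider for new users
user_default_llm:
  factory: 'OpenAI'           # the LLM factory you selected
  api_key: 'sk-xxxxxxxxxxxx'  # placeholder; replace with your real API key
  base_url: ''                # optional custom endpoint, if your provider needs one
```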
The show is on!
When it comes to system configurations, you will need to manage settings such as `SVR_HTTP_PORT`, `MYSQL_PASSWORD`, and `MINIO_PASSWORD`, which are defined in docker/.env. The ./docker/README file provides a detailed description of these environment settings and of the service configurations, which can be referenced as `${ENV_VARS}` in the service_conf.yaml.template file.
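As an illustration of how the pieces fit together (the variable names are the ones listed above; the values are placeholders rather than defaults, so check docker/.env in your checkout):

```bash
# docker/.env (sketch): environment settings consumed as ${ENV_VARS} by service_conf.yaml.template
SVR_HTTP_PORT=<YOUR_SERVING_PORT>
MYSQL_PASSWORD=<YOUR_MYSQL_PASSWORD>
MINIO_PASSWORD=<YOUR_MINIO_PASSWORD>
```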
To update the default HTTP serving port (80), go to docker-compose.yml and change `80:80` to `<YOUR_SERVING_PORT>:80`.
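One way to make that edit from the command line; a sketch that uses 8080 purely as an example port:

```bash
# Map host port 8080 to the container's port 80 in docker-compose.yml
sed -i 's/80:80/8080:80/' docker-compose.yml
```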
Updates to the above configurations require a reboot of all containers to take effect:
```bash
$ docker compose -f docker-compose.yml up -d
```
RAGFlow uses Elasticsearch by default for storing full text and vectors. To switch to Infinity, follow these steps:
Stop all running containers:
```bash
$ docker compose -f docker/docker-compose.yml down -v
```
[!WARNING] `-v` will delete the Docker container volumes, and the existing data will be cleared.
Set `DOC_ENGINE` in docker/.env to `infinity`.
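If you prefer to script that change, a sketch (it assumes a `DOC_ENGINE` line already exists in docker/.env):

```bash
# Switch the document engine from Elasticsearch to Infinity
sed -i 's/^DOC_ENGINE=.*/DOC_ENGINE=infinity/' docker/.env
```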
Start the containers:
```bash
$ docker compose -f docker-compose.yml up -d
```
[!WARNING] Switching to Infinity on a Linux/arm64 machine is not yet officially supported.
This image is approximately 2 GB in size and relies on external LLM and embedding services.
```bash
git clone https://github.com/infiniflow/ragflow.git
cd ragflow/
docker build --platform linux/amd64 -f Dockerfile -t infiniflow/ragflow:nightly .
```
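If you then want the Docker Compose deployment described above to run this locally built image, one approach (a sketch; `RAGFLOW_IMAGE` is the same docker/.env variable mentioned earlier) is:

```bash
# Point docker/.env at the tag produced by the build above, then start as usual
sed -i 's|^RAGFLOW_IMAGE=.*|RAGFLOW_IMAGE=infiniflow/ragflow:nightly|' docker/.env
docker compose -f docker/docker-compose.yml up -d
```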
Install uv and pre-commit, or skip this step if they are already installed:
```bash
pipx install uv pre-commit
```
Clone the source code and install Python dependencies:
```bash
git clone https://github.com/infiniflow/ragflow.git
cd ragflow/
uv sync --python 3.10  # install RAGFlow dependent python modules
uv run download_deps.py
pre-commit install
```
Launch the dependent services (MinIO, Elasticsearch, Redis, and MySQL) using Docker Compose:
```bash
docker compose -f docker/docker-compose-base.yml up -d
```
Add the following line to /etc/hosts to resolve all hosts specified in docker/.env to 127.0.0.1:
```
127.0.0.1 es01 infinity mysql minio redis sandbox-executor-manager
```
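A sketch of adding that entry from the shell instead of an editor (plain `tee` usage, nothing RAGFlow-specific):

```bash
# Append the mapping so the service hostnames from docker/.env resolve to localhost
echo '127.0.0.1 es01 infinity mysql minio redis sandbox-executor-manager' | sudo tee -a /etc/hosts
```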
If you cannot access HuggingFace, set the HF_ENDPOINT environment variable to use a mirror site:
```bash
export HF_ENDPOINT=https://hf-mirror.com
```
If your operating system does not have jemalloc, please install it as follows:
```bash
# Ubuntu
sudo apt-get install libjemalloc-dev
# CentOS
sudo yum install jemalloc
# OpenSUSE
sudo zypper install jemalloc
# macOS (Homebrew should not be run with sudo)
brew install jemalloc
```
Launch the backend service:

```bash
source .venv/bin/activate
export PYTHONPATH=$(pwd)
bash docker/launch_backend_service.sh
```
Install the frontend dependencies:

```bash
cd web
npm install
```
Launch the frontend service:

```bash
npm run dev
```
The following output confirms a successful launch of the system:
Stop the RAGFlow front-end and back-end services after development is complete:

```bash
pkill -f "ragflow_server.py|task_executor.py"
```
See the RAGFlow Roadmap 2025.
RAGFlow flourishes via open-source collaboration. In this spirit, we embrace diverse contributions from the community. If you would like to be a part, review our Contribution Guidelines first.