
DeepWiki is my own implementation attempt of DeepWiki: it automatically creates beautiful, interactive wikis for any GitHub, GitLab, Bitbucket, or CNB repository. Just enter a repo name, and DeepWiki will:
English | 简体中文 | 繁體中文 | 日本語 | Español | 한국어 | Tiếng Việt | Português Brasileiro | Français | Русский
# Clone the repository
git clone https://github.com/AsyncFuncAI/deepwiki-open.git
cd deepwiki-open
# Create a .env file with your API keys
echo "GOOGLE_API_KEY=your_google_api_key" > .env
echo "OPENAI_API_KEY=your_openai_api_key" >> .env
# Optional: Add OpenRouter API key if you want to use OpenRouter models
echo "OPENROUTER_API_KEY=your_openrouter_api_key" >> .env
# Optional: Add Ollama host if not local (defaults to http://localhost:11434)
echo "OLLAMA_HOST=your_ollama_host" >> .env
# Optional: Add Azure OpenAI API key, endpoint, and version if you want to use Azure OpenAI models
echo "AZURE_OPENAI_API_KEY=your_azure_openai_api_key" >> .env
echo "AZURE_OPENAI_ENDPOINT=your_azure_openai_endpoint" >> .env
echo "AZURE_OPENAI_VERSION=your_azure_openai_version" >> .env
# Run with Docker Compose
docker-compose up
For detailed instructions on using DeepWiki with Ollama and Docker, see Ollama Instructions.
💡 Where to get these keys:
- Get a Google API key from Google AI Studio
- Get an OpenAI API key from OpenAI Platform
- Get Azure OpenAI credentials from Azure Portal - create an Azure OpenAI resource and get the API key, endpoint, and API version
Create a .env file in the project root with these keys:
GOOGLE_API_KEY=your_google_api_key
OPENAI_API_KEY=your_openai_api_key
# Optional: Add this if you want to use OpenRouter models
OPENROUTER_API_KEY=your_openrouter_api_key
# Optional: Add these if you want to use Azure OpenAI models
AZURE_OPENAI_API_KEY=your_azure_openai_api_key
AZURE_OPENAI_ENDPOINT=your_azure_openai_endpoint
AZURE_OPENAI_VERSION=your_azure_openai_version
# Optional: Add Ollama host if not local (default: http://localhost:11434)
OLLAMA_HOST=your_ollama_host
# Install Python dependencies
pip install -r api/requirements.txt
# Start the API server
python -m api.main
# Install JavaScript dependencies
npm install
# or
yarn install
# Start the web app
npm run dev
# or
yarn dev
For example: https://github.com/openai/codex, https://github.com/microsoft/autogen, https://gitlab.com/gitlab-org/gitlab, https://bitbucket.org/redradish/atlassian_app_versions, or https://cnb.cool/learning-docker/project-1-jupyter

DeepWiki uses AI to:
deepwiki/
├── api/                  # Backend API server
│   ├── main.py           # API entry point
│   ├── api.py            # FastAPI implementation
│   ├── rag.py            # Retrieval Augmented Generation
│   ├── data_pipeline.py  # Data processing utilities
│   └── requirements.txt  # Python dependencies
│
├── src/                  # Frontend Next.js app
│   ├── app/              # Next.js app directory
│   │   └── page.tsx      # Main application page
│   └── components/       # React components
│       └── Mermaid.tsx   # Mermaid diagram renderer
│
├── public/               # Static assets
├── package.json          # JavaScript dependencies
└── .env                  # Environment variables (create this)
DeepWiki now implements a flexible provider-based model selection system supporting multiple LLM providers:
- Google: default gemini-2.0-flash; also supports gemini-1.5-flash, gemini-1.0-pro, etc.
- OpenAI: default gpt-4o; also supports o4-mini, etc.
- OpenRouter: default gpt-4o; also supports o4-mini, etc.
- Ollama: default llama3

Each provider requires its corresponding API key environment variables:
# API Keys
GOOGLE_API_KEY=your_google_api_key          # Required for Google Gemini models
OPENAI_API_KEY=your_openai_api_key          # Required for OpenAI models
OPENROUTER_API_KEY=your_openrouter_api_key  # Required for OpenRouter models
AZURE_OPENAI_API_KEY=your_azure_openai_api_key    # Required for Azure OpenAI models
AZURE_OPENAI_ENDPOINT=your_azure_openai_endpoint  # Required for Azure OpenAI models
AZURE_OPENAI_VERSION=your_azure_openai_version    # Required for Azure OpenAI models

# OpenAI API Base URL Configuration
OPENAI_BASE_URL=https://custom-api-endpoint.com/v1  # Optional, for custom OpenAI API endpoints

# Ollama host
OLLAMA_HOST=your_ollama_host  # Optional, if Ollama is not local (default: http://localhost:11434)

# Configuration Directory
DEEPWIKI_CONFIG_DIR=/path/to/custom/config/dir  # Optional, for custom config file location
DeepWiki uses JSON configuration files to manage various aspects of the system:
generator.json: Configuration for text generation models
embedder.json: Configuration for embedding models and text processing
repo.json: Configuration for repository handling
By default, these files are located in the api/config/ directory. You can customize their location using the DEEPWIKI_CONFIG_DIR environment variable.
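The lookup described above can be sketched as a small helper. The default api/config/ location and the DEEPWIKI_CONFIG_DIR variable come from the README; the helper function itself is illustrative, not DeepWiki's actual code.

```python
import os
from pathlib import Path

# Illustrative: resolve a config file against DEEPWIKI_CONFIG_DIR,
# falling back to the documented default api/config/ directory.
def config_path(name: str) -> Path:
    base = Path(os.environ.get("DEEPWIKI_CONFIG_DIR", "api/config"))
    return base / name

print(config_path("generator.json"))  # api/config/generator.json when the variable is unset

os.environ["DEEPWIKI_CONFIG_DIR"] = "/etc/deepwiki"
print(config_path("embedder.json"))   # /etc/deepwiki/embedder.json
```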
The custom model selection feature is specifically designed for service providers who need to offer their users a choice of models.
Service providers can implement their model offerings by selecting from the predefined options or entering custom model identifiers in the frontend interface.
The OpenAI Client's base_url configuration is designed primarily for enterprise users with private API channels; setting OPENAI_BASE_URL points requests at a custom endpoint instead of the public OpenAI API.
Coming Soon: In future updates, DeepWiki will support a mode where users need to provide their own API keys in requests. This will allow enterprise customers with private channels to use their existing API arrangements without sharing credentials with the DeepWiki deployment.
If you want to use embedding models compatible with the OpenAI API (such as Alibaba Qwen), follow these steps:
Replace the contents of api/config/embedder.json with those from api/config/embedder_openai_compatible.json. Then, in your .env file, set the relevant environment variables, for example:
OPENAI_API_KEY=your_api_key
OPENAI_BASE_URL=your_openai_compatible_endpoint
This allows you to seamlessly switch to any OpenAI-compatible embedding service without code changes.
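Under these settings, any OpenAI-style client ends up pointed at the compatible endpoint. A minimal sketch of the environment resolution, assuming the variable names above (the DashScope URL and key are placeholder example values, and this is not DeepWiki's actual client construction):

```python
import os

# Illustrative: resolve embedding-client settings from the environment
# variables described above; falls back to the public OpenAI endpoint.
def embedder_settings() -> dict:
    return {
        "api_key": os.environ.get("OPENAI_API_KEY", ""),
        "base_url": os.environ.get("OPENAI_BASE_URL", "https://api.openai.com/v1"),
    }

# Example values only: an Alibaba Qwen (DashScope) OpenAI-compatible endpoint.
os.environ["OPENAI_API_KEY"] = "sk-demo"
os.environ["OPENAI_BASE_URL"] = "https://dashscope.aliyuncs.com/compatible-mode/v1"
print(embedder_settings()["base_url"])
```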
DeepWiki uses Python's built-in logging module for diagnostic output. You can configure the verbosity and log file destination via environment variables:
| Variable | Description | Default |
|---|---|---|
| LOG_LEVEL | Logging level (DEBUG, INFO, WARNING, ERROR, CRITICAL). | INFO |
| LOG_FILE_PATH | Path to the log file. If set, logs will be written to this file. | api/logs/application.log |
To enable debug logging and direct logs to a custom file:
export LOG_LEVEL=DEBUG
export LOG_FILE_PATH=./debug.log
python -m api.main
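To illustrate how LOG_LEVEL maps onto Python's logging module, here is a hedged sketch. The logger name is made up, and the demo writes to an in-memory buffer rather than LOG_FILE_PATH so it has no filesystem side effects; DeepWiki's actual setup may differ.

```python
import io
import logging
import os

# Resolve the level from LOG_LEVEL, falling back to INFO as documented above.
level = getattr(logging, os.environ.get("LOG_LEVEL", "INFO").upper(), logging.INFO)

buf = io.StringIO()
logger = logging.getLogger("deepwiki-demo")  # hypothetical logger name
logger.setLevel(level)
logger.addHandler(logging.StreamHandler(buf))

logger.debug("verbose detail")    # emitted only when LOG_LEVEL=DEBUG
logger.info("startup complete")   # emitted at INFO and below
print(buf.getvalue().strip())
```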
Or with Docker Compose:
LOG_LEVEL=DEBUG LOG_FILE_PATH=./debug.log docker-compose up
When running with Docker Compose, the container's api/logs directory is bind-mounted to ./api/logs on your host (see the volumes section in docker-compose.yml), ensuring log files persist across restarts.
Alternatively, you can store these settings in your .env file:
LOG_LEVEL=DEBUG
LOG_FILE_PATH=./debug.log
Then simply run:
docker-compose up
Logging Path Security Considerations: In production environments, ensure the api/logs directory and any custom log file path are secured with appropriate filesystem permissions and access controls. The application enforces that LOG_FILE_PATH resides within the project's api/logs directory to prevent path traversal or unauthorized writes.
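The containment rule described above can be sketched as a resolve-and-compare check. This is an illustrative implementation of the general technique, not DeepWiki's actual code; the function name is made up.

```python
from pathlib import Path

# Reject any log path that resolves outside the allowed base directory,
# defeating traversal attempts such as api/logs/../../etc/passwd.
def validate_log_path(requested: str, base: str = "api/logs") -> Path:
    base_dir = Path(base).resolve()
    candidate = Path(requested).resolve()
    try:
        candidate.relative_to(base_dir)  # raises ValueError if not a subpath
    except ValueError:
        raise ValueError(f"LOG_FILE_PATH must stay inside {base}") from None
    return candidate

print(validate_log_path("api/logs/application.log").name)  # application.log
```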
| Variable | Description | Required | Note |
|---|---|---|---|
| GOOGLE_API_KEY | Google Gemini API key for AI generation | No | Required only if you want to use Google Gemini models |
| OPENAI_API_KEY | OpenAI API key for embeddings | Yes | Note: This is required even if you're not using OpenAI models, as it's used for embeddings. |
| OPENROUTER_API_KEY | OpenRouter API key for alternative models | No | Required only if you want to use OpenRouter models |
| AZURE_OPENAI_API_KEY | Azure OpenAI API key | No | Required only if you want to use Azure OpenAI models |
| AZURE_OPENAI_ENDPOINT | Azure OpenAI endpoint | No | Required only if you want to use Azure OpenAI models |
| AZURE_OPENAI_VERSION | Azure OpenAI API version | No | Required only if you want to use Azure OpenAI models |
| OLLAMA_HOST | Ollama host (default: http://localhost:11434) | No | Required only if you want to use an external Ollama server |
| PORT | Port for the API server (default: 8001) | No | If you host the API and frontend on the same machine, make sure to change the port in SERVER_BASE_URL accordingly |
| SERVER_BASE_URL | Base URL for the API server (default: http://localhost:8001) | No | |
| DEEPWIKI_AUTH_MODE | Set to true or 1 to enable authorization mode. | No | Defaults to false. If enabled, DEEPWIKI_AUTH_CODE is required. |
| DEEPWIKI_AUTH_CODE | The secret code required for wiki generation when DEEPWIKI_AUTH_MODE is enabled. | No | Only used if DEEPWIKI_AUTH_MODE is true or 1. |
If you're not using Ollama mode, you need to configure an OpenAI API key for embeddings. Other API keys are only required when configuring and using models from the corresponding providers.
DeepWiki can be configured to run in an authorization mode, where wiki generation requires a valid authorization code. This is useful if you want to control who can use the generation feature. Restricts frontend initiation and protects cache deletion, but doesn't fully prevent backend generation if API endpoints are hit directly.
To enable authorization mode, set the following environment variables:
- DEEPWIKI_AUTH_MODE: Set this to true or 1. When enabled, the frontend will display an input field for the authorization code.
- DEEPWIKI_AUTH_CODE: Set this to the desired secret code.

If DEEPWIKI_AUTH_MODE is not set or is set to false (or any value other than true/1), the authorization feature will be disabled, and no code will be required.
You can use Docker to run DeepWiki:
# Pull the image from GitHub Container Registry
docker pull ghcr.io/asyncfuncai/deepwiki-open:latest
# Run the container with environment variables
docker run -p 8001:8001 -p 3000:3000 \
-e GOOGLE_API_KEY=your_google_api_key \
-e OPENAI_API_KEY=your_openai_api_key \
-e OPENROUTER_API_KEY=your_openrouter_api_key \
-e OLLAMA_HOST=your_ollama_host \
-e AZURE_OPENAI_API_KEY=your_azure_openai_api_key \
-e AZURE_OPENAI_ENDPOINT=your_azure_openai_endpoint \
-e AZURE_OPENAI_VERSION=your_azure_openai_version \
-v ~/.adalflow:/root/.adalflow \
ghcr.io/asyncfuncai/deepwiki-open:latest
This command also mounts ~/.adalflow on your host to /root/.adalflow in the container. This path is used to store:
- Cloned repositories (~/.adalflow/repos/)
- Embeddings and indexes (~/.adalflow/databases/)
- Generated wiki cache (~/.adalflow/wikicache/)

This ensures that your data persists even if the container is stopped or removed.
Or use the provided docker-compose.yml file:
# Edit the .env file with your API keys first
docker-compose up
(The docker-compose.yml file is pre-configured to mount ~/.adalflow for data persistence, similar to the docker run command above.)
You can also mount a .env file to the container:
# Create a .env file with your API keys
echo "GOOGLE_API_KEY=your_google_api_key" > .env
echo "OPENAI_API_KEY=your_openai_api_key" >> .env
echo "OPENROUTER_API_KEY=your_openrouter_api_key" >> .env
echo "AZURE_OPENAI_API_KEY=your_azure_openai_api_key" >> .env
echo "AZURE_OPENAI_ENDPOINT=your_azure_openai_endpoint" >> .env
echo "AZURE_OPENAI_VERSION=your_azure_openai_version" >> .env
echo "OLLAMA_HOST=your_ollama_host" >> .env
# Run the container with the .env file mounted
docker run -p 8001:8001 -p 3000:3000 \
-v $(pwd)/.env:/app/.env \
-v ~/.adalflow:/root/.adalflow \
ghcr.io/asyncfuncai/deepwiki-open:latest
If you want to build the Docker image locally:
# Clone the repository
git clone https://github.com/AsyncFuncAI/deepwiki-open.git
cd deepwiki-open
# Build the Docker image
docker build -t deepwiki-open .
# Run the container
docker run -p 8001:8001 -p 3000:3000 \
-e GOOGLE_API_KEY=your_google_api_key \
-e OPENAI_API_KEY=your_openai_api_key \
-e OPENROUTER_API_KEY=your_openrouter_api_key \
-e AZURE_OPENAI_API_KEY=your_azure_openai_api_key \
-e AZURE_OPENAI_ENDPOINT=your_azure_openai_endpoint \
-e AZURE_OPENAI_VERSION=your_azure_openai_version \
-e OLLAMA_HOST=your_ollama_host \
deepwiki-open
If you're in an environment that uses self-signed certificates, you can include them in the Docker build:
Create a directory named certs in your project root, then copy your .crt or .pem certificate files into this directory.
# Build with default certificates directory (certs)
docker build .
# Or build with a custom certificates directory
docker build --build-arg CUSTOM_CERT_DIR=my-custom-certs .
The API server provides:
For more details, see the API README.
DeepWiki now supports OpenRouter as a model provider, giving you access to hundreds of AI models through a single API:
To enable it, add OPENROUTER_API_KEY=your_key to your .env file.

OpenRouter is particularly useful if you want to:
The Ask feature allows you to chat with your repository using Retrieval Augmented Generation (RAG).
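As a rough illustration of the retrieval half of RAG (rank repository chunks against the question, pass the best matches to the generator), here is a toy version using bag-of-words cosine similarity. DeepWiki's real pipeline uses learned embeddings; the chunk texts below are invented examples.

```python
import math
from collections import Counter

# Toy retrieval: rank text "chunks" against a question by cosine similarity
# over word counts. A stand-in for real embedding-based retrieval.
def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(question: str, chunks: list[str], k: int = 1) -> list[str]:
    q = Counter(question.lower().split())
    ranked = sorted(
        chunks,
        key=lambda c: cosine(q, Counter(c.lower().split())),
        reverse=True,
    )
    return ranked[:k]

chunks = [
    "main.py starts the FastAPI server on port 8001",
    "Mermaid.tsx renders architecture diagrams",
]
print(retrieve("which file starts the server", chunks))
```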
DeepResearch takes repository analysis to the next level with a multi-turn research process.
To use DeepResearch, simply toggle the "Deep Research" switch in the Ask interface before submitting your question.
The main interface of DeepWiki
Access private repositories with personal access tokens
DeepResearch conducts multi-turn investigations for complex topics
Watch DeepWiki in action!
Make sure your .env file is in the project root and contains the required API keys.

Contributions are welcome! Feel free to:
This project is licensed under the MIT License - see the LICENSE file for details.