Chinese Documentation | Contributing | Documentation
Great news for the developer community! In our commitment to democratizing AI agent technology and fostering a vibrant ecosystem of innovation, we're thrilled to announce that Kode has transitioned from AGPLv3 to the Apache 2.0 license.
This change reflects our belief that the future of software development is collaborative, open, and augmented by AI. By removing licensing barriers, we're empowering developers worldwide to build the next generation of AI-assisted tools and workflows. Let's build the future together! 🚀
2025-08-29: We've added Windows support! All Windows users can now run Kode using Git Bash, Unix subsystems, or WSL (Windows Subsystem for Linux) on their computers.
Kode proudly supports the AGENTS.md standard protocol initiated by OpenAI - a simple, open format for guiding programming agents that's used by 20k+ open source projects.
Use `# Your documentation request` to generate and maintain your AGENTS.md file automatically, while staying fully compatible with existing Claude Code workflows.
Kode is a powerful AI assistant that lives in your terminal. It can understand your codebase, edit files, run commands, and handle entire workflows for you.
⚠️ Security Notice: Kode runs in YOLO mode by default (equivalent to Claude's `--dangerously-skip-permissions` flag), bypassing all permission checks for maximum productivity. YOLO mode is recommended only for trusted, secure environments and non-critical projects. If you're working with important files or using models of questionable capability, we strongly recommend running `kode --safe` to enable permission checks and manual approval for all operations.

📊 Model Performance: For optimal performance, we recommend newer, more capable models designed for autonomous task completion. Avoid older Q&A-focused models like GPT-4o or Gemini 2.5 Pro, which are optimized for answering questions rather than sustained independent task execution. Choose models specifically trained for agentic workflows and extended reasoning.
- `@ask-model-name` to consult specific AI models for specialized analysis
- `@run-agent-name` to delegate tasks to specialized subagents

Our state-of-the-art completion system provides unparalleled coding assistance:
- `dao` to match `run-agent-dao-qi-harmony-designer`
- `dq` matches `dao-qi`, `nde` matches `node`
- `py3` intelligently matches `python3`
- `gp5` directly matches `@ask-gpt-5`
- `@` for agents and models
- `#` for documentation requests to auto-generate and maintain project documentation

Install globally with npm:

npm install -g @shareai-lab/kode
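The abbreviation matching described above can be approximated as subsequence matching: each letter of the abbreviation must appear in the candidate, in order. A minimal shell sketch of this idea (not Kode's actual implementation; candidate names are taken from the examples above):

```shell
# Expand an abbreviation like "dq" into the regex "d.*q.*", then keep only
# candidates that contain the abbreviation's letters in order.
candidates="run-agent-dao-qi-harmony-designer node python3 ask-gpt-5"
abbrev="dq"
pattern=$(printf '%s\n' "$abbrev" | sed 's/./&.*/g')   # "dq" -> "d.*q.*"
matches=$(printf '%s\n' $candidates | grep -E "$pattern")
echo "$matches"
```

Here only `run-agent-dao-qi-harmony-designer` survives the filter, since the other candidates lack a `d` followed by a `q`.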
After installation, you can use any of these commands:
- `kode` - Primary command
- `kwa` - Kode With Agent (alternative)
- `kd` - Ultra-short alias

Start an interactive session:
kode
# or
kwa
# or
kd
Get a quick response:
kode -p "explain this function" main.js
# or
kwa -p "explain this function" main.js
Kode supports a powerful @ mention system for intelligent completions:
# Consult specific AI models for expert opinions
@ask-claude-sonnet-4 How should I optimize this React component for performance?
@ask-gpt-5 What are the security implications of this authentication method?
@ask-o1-preview Analyze the complexity of this algorithm
# Delegate tasks to specialized subagents
@run-agent-simplicity-auditor Review this code for over-engineering
@run-agent-architect Design a microservices architecture for this system
@run-agent-test-writer Create comprehensive tests for these modules
# Reference files and directories with auto-completion
@src/components/Button.tsx
@docs/api-reference.md
@.env.example
The @ mention system provides intelligent completions as you type, showing available models, agents, and files.
Use the `#` prefix to generate and maintain your AGENTS.md documentation:
# Generate setup instructions
# How do I set up the development environment?
# Create testing documentation
# What are the testing procedures for this project?
# Document deployment process
# Explain the deployment pipeline and requirements
This mode automatically formats responses as structured documentation and appends them to your AGENTS.md file.
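For instance, after asking `# How do I set up the development environment?`, the entry appended to AGENTS.md might look roughly like the following (hypothetical content based on this project's own setup steps; the exact structure depends on your project):

```markdown
## Development Environment Setup
- Install Bun: `curl -fsSL https://bun.sh/install | bash`
- Install dependencies: `bun install`
- Start in development mode: `bun run dev`
```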
# Clone the repository
git clone https://github.com/shareAI-lab/Kode.git
cd Kode
# Build the image locally
docker build --no-cache -t kode .
# Run in your project directory
cd your-project
docker run -it --rm \
-v $(pwd):/workspace \
-v ~/.kode:/root/.kode \
-v ~/.kode.json:/root/.kode.json \
-w /workspace \
kode
The Docker setup includes:
Volume Mounts:
- `$(pwd):/workspace` - Mounts your current project directory
- `~/.kode:/root/.kode` - Preserves your kode configuration directory between runs
- `~/.kode.json:/root/.kode.json` - Preserves your kode global configuration file between runs

Working Directory: Set to /workspace inside the container
Interactive Mode: Uses -it flags for interactive terminal access
Cleanup: --rm flag removes the container after exit
Note: Kode uses both ~/.kode directory for additional data (like memory files) and ~/.kode.json file for global configuration.
Building the image is a one-time step; subsequent runs reuse the cached image for faster startup.
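For convenience, the `docker run` invocation above can be wrapped in a small shell function (the name `kode_docker` is a hypothetical helper, and the setup assumes you built the image locally with the tag `kode` as shown above):

```shell
# Add to ~/.bashrc or ~/.zshrc: runs Kode against the current directory
# inside Docker, with the same mounts and flags as the command above.
kode_docker() {
  docker run -it --rm \
    -v "$(pwd)":/workspace \
    -v ~/.kode:/root/.kode \
    -v ~/.kode.json:/root/.kode.json \
    -w /workspace \
    kode "$@"
}
```

After sourcing your shell config, `kode_docker -p "explain this function" main.js` behaves like the plain `kode` command, but inside the container.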
You can set up models during onboarding, or at any time with the /model command.
If you don't see the models you want in the list, you can add them manually in /config.
As long as you have an OpenAI-compatible endpoint, it should work.
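As an illustration, a custom OpenAI-compatible endpoint might be configured with a profile along these lines (the profile name, `baseURL` field, and values here are hypothetical, loosely following the modelProfiles example later in this document; check /config for the actual schema):

```json
{
  "modelProfiles": {
    "local-qwen": {
      "provider": "openai",
      "model": "qwen2.5-coder",
      "baseURL": "http://localhost:8000/v1",
      "apiKey": "sk-..."
    }
  }
}
```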
- /help - Show available commands
- /model - Change AI model settings
- /config - Open configuration panel
- /cost - Show token usage and costs
- /clear - Clear conversation history
- /init - Initialize project context

Unlike official Claude, which supports only a single model, Kode implements true multi-model collaboration, allowing you to fully leverage the unique strengths of different AI models.
We designed a unified ModelManager system that supports:
The /model command manages four model pointers:

- main: Default model for the main Agent
- task: Default model for SubAgents
- reasoning: Reserved for future ThinkTool usage
- quick: Fast model for simple NLP tasks (security identification, title generation, etc.)

Our specially designed TaskTool (Architect tool) launches SubAgents that use the task pointer by default. We also specially designed the AskExpertModel tool for consulting a specific expert model on demand during a session.
/model Command: Use the /model command to configure and manage multiple model profiles and to set default models for different purposes.

Multi-model collaboration typically spans these phases:

- Architecture Design Phase
- Solution Refinement Phase
- Code Implementation Phase
- Problem Solving
# Example 1: Architecture Design
"Use o3 model to help me design a high-concurrency message queue system architecture"
# Example 2: Multi-Model Collaboration
"First use GPT-5 model to analyze the root cause of this performance issue, then use Claude Sonnet 4 model to write optimization code"
# Example 3: Parallel Task Processing
"Use Qwen Coder model as subagent to refactor these three modules simultaneously"
# Example 4: Expert Consultation
"This memory leak issue is tricky, ask Claude Opus 4.1 model separately for solutions"
# Example 5: Code Review
"Have Kimi k2 model review the code quality of this PR"
# Example 6: Complex Reasoning
"Use Grok 4 model to help me derive the time complexity of this algorithm"
# Example 7: Solution Design
"Have GLM-4.5 model design a microservice decomposition plan"
// Example of multi-model configuration support
{
"modelProfiles": {
"o3": { "provider": "openai", "model": "o3", "apiKey": "..." },
"claude4": { "provider": "anthropic", "model": "claude-sonnet-4", "apiKey": "..." },
"qwen": { "provider": "alibaba", "model": "qwen-coder", "apiKey": "..." }
},
"modelPointers": {
"main": "claude4", // Main conversation model
"task": "qwen", // Task execution model
"reasoning": "o3", // Reasoning model
"quick": "qwen" // Quick response model (points to a defined profile)
}
}
Use the /cost command to view token usage and costs for each model.

| Feature | Kode | Official Claude |
|---|---|---|
| Number of Supported Models | Unlimited, configurable for any model | Only supports single Claude model |
| Model Switching | ✅ Tab key quick switch | ❌ Requires session restart |
| Parallel Processing | ✅ Multiple SubAgents work in parallel | ❌ Single-threaded processing |
| Cost Tracking | ✅ Separate statistics for multiple models | ❌ Single model cost |
| Task Model Configuration | ✅ Different default models for different purposes | ❌ Same model for all tasks |
| Expert Consultation | ✅ AskExpertModel tool | ❌ Not supported |
This multi-model collaboration capability makes Kode a true AI Development Workbench, not just a single AI assistant.
Kode is built with modern tools and requires Bun for development.
# macOS/Linux
curl -fsSL https://bun.sh/install | bash
# Windows
powershell -c "irm bun.sh/install.ps1 | iex"
# Clone the repository
git clone https://github.com/shareAI-lab/kode.git
cd kode
# Install dependencies
bun install
# Run in development mode
bun run dev
# Build for production
bun run build
# Run tests
bun test
# Test the CLI
./cli.js --help
We welcome contributions! Please see our Contributing Guide for details.
Apache 2.0 License - see LICENSE for details.