简体中文 | English | Project Homepage | Documentation
[!IMPORTANT]
| Platform | Text | Images | Voice | Video | Animated Emojis/Stickers | Links (Sharing) | Quote | Forward | Location | Files |
|---|---|---|---|---|---|---|---|---|---|---|
| Telegram | ✅ | ✅ | ❌ | ❌ | ⚠️ Converted to Emoji | ❌ | ❌ | ✅ | ✅ | ❌ |
| | 🚧 | 🚧 | 🚧 | 🚧 | 🚧 | 🚧 | 🚧 | 🚧 | 🚧 | 🚧 |
| Discord | 🚧 | 🚧 | 🚧 | 🚧 | 🚧 | 🚧 | 🚧 | 🚧 | 🚧 | 🚧 |
| Slack | 🚧 | 🚧 | 🚧 | 🚧 | 🚧 | 🚧 | 🚧 | 🚧 | 🚧 | 🚧 |

| Platform | Deployment Support |
|---|---|
| Telegram | ✅ |
| | 🚧 |
| Discord | ✅ |
| Slack | ✅ |
[!IMPORTANT]
- WeClone is still in a rapid iteration phase; current performance does not represent final results.
- The effectiveness of LLM fine-tuning depends largely on the model size and on the quantity and quality of the chat data; in general, larger models with more data yield better results.
- The 7B model's performance is mediocre; models with 14B or more parameters tend to deliver better results.
- The Windows environment has not been rigorously tested; WSL is recommended as the runtime environment.
[25/07/10] Added Telegram as a data source
[25/06/05] Added support for fine-tuning on image-modality data
The project uses the Qwen2.5-VL-7B-Instruct model by default and fine-tunes it with the LoRA method in the SFT stage. You can also use other models and methods supported by LLaMA Factory.
Estimated VRAM requirements:
| Method | Precision (bits) | 7B | 14B | 30B | 70B | xB |
|---|---|---|---|---|---|---|
| Full (bf16 or fp16) | 32 | 120GB | 240GB | 600GB | 1200GB | 18xGB |
| Full (pure_bf16) | 16 | 60GB | 120GB | 300GB | 600GB | 8xGB |
| Freeze/LoRA/GaLore/APOLLO/BAdam | 16 | 16GB | 32GB | 64GB | 160GB | 2xGB |
| QLoRA | 8 | 10GB | 20GB | 40GB | 80GB | xGB |
| QLoRA | 4 | 6GB | 12GB | 24GB | 48GB | x/2GB |
| QLoRA | 2 | 4GB | 8GB | 16GB | 24GB | x/4GB |
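The xB column can be read as a rough formula in the parameter count x (in billions). Below is a minimal sketch of that rule of thumb, using only the multipliers from the table above; real usage also depends on sequence length, batch size, and optimizer settings, so treat the result as a ballpark figure:

```python
# Rough VRAM rule of thumb derived from the table above (x = params in billions).
# Ballpark only: actual usage also depends on sequence length, batch size, etc.
MULTIPLIERS_GB_PER_B = {
    ("full", 32): 18.0,   # Full fine-tuning, bf16/fp16 mixed precision
    ("full", 16): 8.0,    # Full fine-tuning, pure_bf16
    ("lora", 16): 2.0,    # Freeze/LoRA/GaLore/APOLLO/BAdam
    ("qlora", 8): 1.0,
    ("qlora", 4): 0.5,
    ("qlora", 2): 0.25,
}

def estimate_vram_gb(params_b: float, method: str, bits: int) -> float:
    """Approximate VRAM (GB) needed to fine-tune a params_b-billion model."""
    return params_b * MULTIPLIERS_GB_PER_B[(method, bits)]

# 7B with 4-bit QLoRA -> ~3.5GB by the formula; the table's concrete figure
# is 6GB, so always round up generously.
print(estimate_vram_gb(7, "qlora", 4))
```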
CUDA installation (skip if already installed; version 12.6 or above is required)
It is recommended to use uv, a very fast Python environment manager, to install dependencies. After installing uv, use the following commands to create a new Python environment and install them:
git clone https://github.com/xming521/WeClone.git && cd WeClone
uv venv .venv --python=3.10
source .venv/bin/activate # Windows: .venv\Scripts\activate
uv pip install --group main -e .
Copy the configuration template and rename it to settings.jsonc; make all subsequent configuration changes in this file:

cp examples/tg.template.jsonc settings.jsonc

[!NOTE] Training and inference related configurations are unified in the file settings.jsonc.
python -c "import torch; print('CUDA Available:', torch.cuda.is_available());"
(Optional) To accelerate training and inference, install FlashAttention: uv pip install flash-attn --no-build-isolation

It is recommended to download models from Hugging Face, or use the following command:
git lfs install
git clone https://huggingface.co/Qwen/Qwen2.5-VL-7B-Instruct models/Qwen2.5-VL-7B-Instruct
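If you'd rather not use git-lfs, the same snapshot can be fetched from Python via huggingface_hub. A sketch; it assumes huggingface_hub is installed in the environment and reuses the models/ path from the command above:

```python
# Download the default model into models/ without git-lfs.
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="Qwen/Qwen2.5-VL-7B-Instruct",
    local_dir="models/Qwen2.5-VL-7B-Instruct",  # same path as the git clone above
)
```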
Please use Telegram Desktop to export chat records: in the chat interface, click the menu in the top-right corner, then click "Export chat history". Select Photos as the message type and JSON as the format. You can export multiple contacts (exporting group chats is not recommended), then place the exported ChatExport_* folders together in the ./dataset/telegram directory, i.e., put different people's chat record folders side by side under ./dataset/telegram.
- Modify language, platform, and include_type in the configuration file according to your needs.
- Set telegram_args.my_id in the configuration file to your own Telegram user ID.
- The data preprocessing pipeline removes phone numbers, email addresses, credit card numbers, IP addresses, geographic location names, international bank account numbers, cryptocurrency wallet addresses, age information, and generic ID numbers from the data by default, but it cannot guarantee 100% identification.
- A blocked_words list is therefore provided in settings.jsonc, allowing users to manually add words or phrases they want to filter (the entire sentence containing a blocked word is removed by default; see the sketch below).

[!IMPORTANT] 🚨 Please be sure to protect personal privacy and do not leak personal information!
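As a sketch of the default behavior described above (whole messages containing a blocked word are dropped, not masked), here is an illustrative filter; the function and data shape are hypothetical, not WeClone's internal API:

```python
# Hypothetical illustration of blocked_words filtering: any message that
# contains a blocked word is removed entirely (the default behavior).
blocked_words = ["secret-project", "home-address"]

def filter_blocked(messages: list[str]) -> list[str]:
    return [m for m in messages if not any(w in m for w in blocked_words)]

print(filter_blocked(["see you at 8", "my home-address is 42 Foo St"]))
# -> ['see you at 8']
```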
Modify make_dataset_args in settings.jsonc according to your own chat style, then run:

weclone-cli make-dataset
More Parameter Details: Data Preprocessing
- Modify model_name_or_path, template, and lora_target in settings.jsonc to select another locally downloaded model if desired.
- Modify per_device_train_batch_size and gradient_accumulation_steps to adjust VRAM usage (see the sketch after these steps).
- You can modify num_train_epochs, lora_rank, and lora_dropout in train_sft_args based on your dataset's quantity and quality.
- Then run:

weclone-cli train-sft
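per_device_train_batch_size and gradient_accumulation_steps trade VRAM for update frequency without changing the effective batch size the optimizer sees. A quick sanity check of that arithmetic (plain math, not a WeClone API):

```python
# Effective batch size = per-device batch x accumulation steps x GPU count.
# Halving per_device_train_batch_size while doubling
# gradient_accumulation_steps keeps it constant but lowers activation VRAM.
per_device_train_batch_size = 2
gradient_accumulation_steps = 8
num_gpus = 1

print(per_device_train_batch_size * gradient_accumulation_steps * num_gpus)  # 16
```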
Uncomment the deepspeed line in settings.jsonc and use the following command for multi-GPU training:
uv pip install "deepspeed<=0.16.9"
deepspeed --num_gpus=number_of_gpus weclone/train/train_sft.py
Run a simple inference with the browser demo; at this step you can test suitable temperature and top_p values, then modify infer_args in settings.jsonc for subsequent inference:

weclone-cli webchat-demo

Use the API for inference:

weclone-cli server
Test the model with common chat questions; the test set does not include questions asking for personal information, only daily conversation. Test results are in test_result-my.txt. Start the API service first, then run the test:

weclone-cli server
weclone-cli test-model
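Since the service is consumed downstream as an OpenAI provider with model name gpt-3.5-turbo (see the AstrBot/LangBot sections below), you can also script checks against it with the official openai client. A sketch; the base_url is an assumption, so substitute the address that weclone-cli server actually prints:

```python
# Minimal check against the local WeClone API service (OpenAI-compatible).
# NOTE: base_url is assumed -- use the address printed by `weclone-cli server`.
from openai import OpenAI

client = OpenAI(base_url="http://127.0.0.1:8005/v1", api_key="any-key")

resp = client.chat.completions.create(
    model="gpt-3.5-turbo",  # model name the service is exposed under
    messages=[{"role": "user", "content": "What are you up to this weekend?"}],
    temperature=0.7,  # tune alongside top_p, then persist in infer_args
    top_p=0.9,
)
print(resp.choices[0].message.content)
```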
[!TIP] We're looking for interesting examples of native English speakers chatting with WeClone! Feel free to share them with us on Twitter.
AstrBot is an easy-to-use multi-platform LLM chatbot and development framework ✨ Supports Discord, Telegram, Slack, Feishu and other platforms.
Usage steps:
- Run weclone-cli server to start the API service.
- Send /tool off_all on the messaging platform to disable default tools, otherwise the fine-tuned effect won't be visible.

[!IMPORTANT] Check the api_service logs to ensure that the request parameters of the large model service are as consistent as possible with those used during fine-tuning, and turn off all tool plugin capabilities.
LangBot is an easy-to-use open-source LLM chatbot platform suitable for various scenarios. It connects to various global instant messaging platforms. You can set up your IM bot in just 5 minutes.
- Run weclone-cli server to start the WeClone API service.
- Add a new model named gpt-3.5-turbo, select OpenAI as the provider, fill in the request URL with WeClone's address, and enter any API Key. For detailed connection methods, refer to the documentation.

It is also recommended to use DeepWiki for problem solving.
Any Issues/Pull Requests are welcome!
You can contribute by checking Issues or helping review PRs (Pull Requests). For new feature additions, please discuss through Issues first.
Development environment:
uv pip install --group dev -e .
pre-commit install
The project uses pytest for testing, pyright for type checking, and ruff for code formatting.
Before submitting your code, run pytest tests to ensure all tests pass.
Thanks to the following code contributors and other community members for their contributions.
This project also benefits from excellent open source projects such as PyWxDump, LLaMA-Factory, AstrBot, LangBot, and others.
[!CAUTION] This project is for learning, research, and experimental purposes only. Using it in production environments carries significant risks; please assess carefully. Do not use it for illegal purposes; you bear all consequences yourself.
[!IMPORTANT]
WeClone is currently not partnered with any platform and has not issued any cryptocurrency. The only official website is: weclone.love. Beware of imitations.
When using digital avatars generated by this project, it is strongly recommended to:
If you must use in production environments, it is recommended to:
This disclaimer may be revised as the project is updated; users should check the latest version regularly. Continued use of this project indicates agreement with the latest disclaimer terms.
By downloading, cloning, modifying, distributing, or using the code or models of this project in any way, you indicate that you have fully read, understood, and agreed to unconditionally accept all terms of this disclaimer.
Please carefully read and understand all contents of this disclaimer, ensuring strict compliance with relevant regulations when using this project.
[!TIP] If this project is helpful to you, or if you are interested in its future development, please give the project a Star. Thank you!