"BettaFish" is an innovative multi-agent public opinion analysis system built from scratch. It helps break information cocoons, restore the original public sentiment, predict future trends, and assist decision-making. Users only need to raise analysis needs like chatting; the agents automatically analyze 30+ mainstream social platforms at home and abroad and millions of public comments.
Betta is a small yet combative and beautiful fish, symbolizing "small but powerful, fearless of challenges".
See the system-generated research report on "Wuhan University Public Opinion": In-depth Analysis Report on Wuhan University's Brand Reputation
See a complete system run example on "Wuhan University Public Opinion": Video - In-depth Analysis Report on Wuhan University's Brand Reputation
Beyond report quality alone, we offer 🚀 six major advantages over similar products:
AI-Driven Comprehensive Monitoring: AI crawler clusters run 24/7, comprehensively covering 10+ key domestic and international social media platforms including Weibo, Xiaohongshu, TikTok, and Kuaishou. They not only capture trending content in real time but also drill down into massive volumes of user comments, letting you hear the most authentic and widespread public voice.
Composite Analysis Engine Beyond a Single LLM: We rely not only on 5 types of professionally designed Agents, but also integrate middleware such as fine-tuned models and statistical models. Multi-model collaboration ensures the depth, accuracy, and multi-dimensional perspective of the analysis results.
Powerful Multimodal Capabilities: Going beyond text and images, the system can deeply analyze short-video content from TikTok, Kuaishou, and similar platforms, and precisely extract structured multimodal information cards (weather, calendar, stocks, etc.) from modern search engines, giving you comprehensive command of public opinion dynamics.
Agent "Forum" Collaboration Mechanism: Endowing different Agents with unique toolsets and thinking patterns, introducing a debate moderator model, conducting chain-of-thought collision and debate through the "forum" mechanism. This not only avoids the thinking limitations of single models and homogenization caused by communication, but also catalyzes higher-quality collective intelligence and decision support.
Seamless Integration of Public and Private Domain Data: The platform not only analyzes public opinion but also provides high-security interfaces for seamlessly integrating your internal business databases with public opinion data. It breaks through data barriers, delivering powerful "external trends + internal insights" analysis for vertical businesses.
Lightweight and Highly Extensible Framework: A pure-Python modular design enables lightweight, one-click deployment. The clear code structure lets developers easily integrate custom models and business logic for rapid platform extension and deep customization.
Starting with public opinion, but not limited to it. The goal of "WeiYu" is to become a simple, universal data analysis engine that drives all business scenarios.
For example, simply modifying the API parameters and prompts of the Agent toolset can turn it into a financial market analysis system, as sketched below.
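As a rough illustration (the file path follows the repo layout, but the prompt text and keyword list are purely hypothetical), re-targeting can be as small as editing a prompt template:

```python
# QueryEngine/prompts/prompts.py (hypothetical edit, not the shipped prompt)
# Before: a public-opinion-oriented system prompt
SYSTEM_PROMPT = "You are a public opinion analyst. Track sentiment shifts ..."

# After: a finance-oriented one, plus finance keywords for the search tools
SYSTEM_PROMPT = "You are a financial market analyst. Focus on price-moving news ..."
DEFAULT_SEARCH_KEYWORDS = ["earnings", "guidance", "regulatory filing"]
```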
Here's a relatively active Linux.do project discussion thread: https://linux.do/t/topic/1009280
Check out the comparison by a Linux.do fellow: Open Source Project (BettaFish) vs manus|minimax|ChatGPT Comparison
Say goodbye to traditional data dashboards. In "WeiYu", everything starts with a simple question: you just state your analysis needs as if having a conversation.

Solomon LionCC × BettaFish (WeiYu) benefits: open the codecodex.ai Lion Programming Channel, scan the QR code to join the WeChat community, register for VibeCoding api.ai Lion Computing, and receive 20 USD of API quota (limited to the first 1,000 users).

302.AI is a pay-as-you-go enterprise AI resource hub that offers the latest and most comprehensive AI models and APIs on the market, along with a variety of ready-to-use online AI applications.

- **Insight Agent** (Private Database Mining): AI agent for in-depth analysis of private public opinion databases
- **Media Agent** (Multimodal Content Analysis): AI agent with powerful multimodal capabilities
- **Query Agent** (Precise Information Search): AI agent with domestic and international web search capabilities
- **Report Agent** (Intelligent Report Generation): multi-round report generation AI agent with built-in templates
| Step | Phase Name | Main Operations | Participating Components | Cycle Nature |
|---|---|---|---|---|
| 1 | User Query | Flask main application receives the query | Flask Main Application | - |
| 2 | Parallel Launch | Three Agents start working simultaneously | Query Agent, Media Agent, Insight Agent | - |
| 3 | Preliminary Analysis | Each Agent uses dedicated tools for overview search | Each Agent + Dedicated Toolsets | - |
| 4 | Strategy Formulation | Develop segmented research strategies based on preliminary results | Internal Decision Modules of Each Agent | - |
| 5-N | Iterative Phase | Forum Collaboration + In-depth Research | ForumEngine + All Agents | Multi-round cycles |
| 5.1 | In-depth Research | Each Agent conducts specialized search guided by forum host | Each Agent + Reflection Mechanisms + Forum Guidance | Each cycle |
| 5.2 | Forum Collaboration | ForumEngine monitors Agent communications and generates host guidance | ForumEngine + LLM Host | Each cycle |
| 5.3 | Communication Integration | Each Agent adjusts research directions based on discussions | Each Agent + forum_reader tool | Each cycle |
| N+1 | Result Integration | Report Agent collects all analysis results and forum content | Report Agent | - |
| N+2 | IR Assembly | Dynamically select templates and styles, generate metadata over multiple rounds, and assemble the IR (Intermediate Representation) | Report Agent + Template Engine | - |
| N+3 | Report Generation | Perform quality checks on chunks, render into interactive HTML report based on IR | Report Agent + Stitching Engine | - |
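Condensed into toy code, the loop in steps 5-N looks roughly like this. The stubs below only mirror the sequencing in the table; class and method names are illustrative stand-ins, not the project's actual Flask/ForumEngine APIs:

```python
# Toy sketch of the workflow table above (illustrative, not the real code).
class StubAgent:
    def __init__(self, name):
        self.name, self.last = name, ""

    def preliminary_search(self, query):            # steps 3-4
        self.last = f"{self.name}: overview of '{query}'"
        return self.last

    def deep_research(self, guidance):              # step 5.1
        self.last = f"{self.name}: deeper dive ({guidance})"

class StubForum:
    def __init__(self):
        self.log = []

    def host_guidance(self):                        # step 5.2: the LLM host speaks
        return "host: compare sentiment across platforms"

    def collect(self, outputs):                     # step 5.3 input for the next round
        self.log.extend(outputs)

agents = [StubAgent(n) for n in ("QueryAgent", "MediaAgent", "InsightAgent")]
forum = StubForum()

findings = [a.preliminary_search("Wuhan University public opinion") for a in agents]
for _ in range(2):                                  # steps 5-N: multi-round cycles
    for a in agents:
        a.deep_research(forum.host_guidance())
    forum.collect(a.last for a in agents)
# steps N+1..N+3: the Report Agent would integrate findings + forum.log here
print(len(findings), len(forum.log))
```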
```
BettaFish/
├── QueryEngine/                         # Domestic and international news breadth search Agent
│   ├── agent.py                         # Agent main logic, coordinates search and analysis workflow
│   ├── llms/                            # LLM interface wrapper
│   ├── nodes/                           # Processing nodes: search, formatting, summarization, etc.
│   ├── tools/                           # Domestic and international news search toolkit
│   ├── utils/                           # Utility functions
│   ├── state/                           # State management
│   ├── prompts/                         # Prompt templates
│   └── ...
├── MediaEngine/                         # Powerful multimodal understanding Agent
│   ├── agent.py                         # Agent main logic, handles video/image multimodal content
│   ├── llms/                            # LLM interface wrapper
│   ├── nodes/                           # Processing nodes: search, formatting, summarization, etc.
│   ├── tools/                           # Multimodal search toolkit
│   ├── utils/                           # Utility functions
│   ├── state/                           # State management
│   ├── prompts/                         # Prompt templates
│   └── ...
├── InsightEngine/                       # Private database mining Agent
│   ├── agent.py                         # Agent main logic, coordinates database queries and analysis
│   ├── llms/                            # LLM interface wrapper
│   │   └── base.py                      # Unified OpenAI-compatible client
│   ├── nodes/                           # Processing nodes: search, formatting, summarization, etc.
│   │   ├── base_node.py                 # Base node class
│   │   ├── search_node.py               # Search node
│   │   ├── formatting_node.py           # Formatting node
│   │   ├── report_structure_node.py     # Report structure node
│   │   └── summary_node.py              # Summary node
│   ├── tools/                           # Database query and analysis toolkit
│   │   ├── keyword_optimizer.py         # Qwen keyword optimization middleware
│   │   ├── search.py                    # Database operation toolkit (topic search, comment retrieval, etc.)
│   │   └── sentiment_analyzer.py        # Sentiment analysis integration tool
│   ├── utils/                           # Utility functions
│   │   ├── config.py                    # Configuration management
│   │   ├── db.py                        # SQLAlchemy async engine + read-only query wrapper
│   │   └── text_processing.py           # Text processing utilities
│   ├── state/                           # State management
│   │   └── state.py                     # Agent state definition
│   ├── prompts/                         # Prompt templates
│   │   └── prompts.py                   # Various prompt templates
│   └── __init__.py
├── ReportEngine/                        # Multi-round report generation Agent
│   ├── agent.py                         # Master orchestrator: template selection → layout → budget → chapter → render
│   ├── flask_interface.py               # Flask/SSE entry point, manages task queuing and streaming events
│   ├── llms/                            # OpenAI-compatible LLM wrappers
│   │   └── base.py                      # Unified streaming/retry client
│   ├── core/                            # Core functionalities: template parsing, chapter storage, document stitching
│   │   ├── template_parser.py           # Markdown template slicer and slug generator
│   │   ├── chapter_storage.py           # Chapter run directory, manifest, and raw stream writer
│   │   └── stitcher.py                  # Document IR stitcher, adds anchors/metadata
│   ├── ir/                              # Report Intermediate Representation (IR) contract & validation
│   │   ├── schema.py                    # Block/mark schema constant definitions
│   │   └── validator.py                 # Chapter JSON structure validator
│   ├── nodes/                           # Full workflow reasoning nodes
│   │   ├── base_node.py                 # Node base class + logging/state hooks
│   │   ├── template_selection_node.py   # Template candidate collection and LLM selection
│   │   ├── document_layout_node.py      # Title/TOC/theme designer
│   │   ├── word_budget_node.py          # Word budget planning and chapter directive generation
│   │   └── chapter_generation_node.py   # Chapter-level JSON generation + validation
│   ├── prompts/                         # Prompt library and schema descriptions
│   │   └── prompts.py                   # Template selection/layout/budget/chapter prompts
│   ├── renderers/                       # IR renderers
│   │   ├── html_renderer.py             # Document IR → interactive HTML
│   │   ├── pdf_renderer.py              # HTML → PDF export (WeasyPrint)
│   │   ├── pdf_layout_optimizer.py      # PDF layout optimizer
│   │   └── chart_to_svg.py              # Chart to SVG conversion tool
│   ├── state/                           # Task/metadata state models
│   │   └── state.py                     # ReportState and serialization utilities
│   ├── utils/                           # Configuration and helper utilities
│   │   ├── config.py                    # Pydantic settings + printer helper
│   │   ├── dependency_check.py          # Dependency checking tool
│   │   ├── json_parser.py               # JSON parsing utilities
│   │   ├── chart_validator.py           # Chart validation tool
│   │   └── chart_repair_api.py          # Chart repair API
│   ├── report_template/                 # Markdown template library
│   │   ├── 企业品牌声誉分析报告.md
│   │   └── ...
│   └── __init__.py
├── ForumEngine/                         # Forum engine: Agent collaboration mechanism
│   ├── monitor.py                       # Log monitoring and forum management core
│   ├── llm_host.py                      # Forum moderator LLM module
│   └── __init__.py
├── MindSpider/                          # Social media crawler system
│   ├── main.py                          # Crawler main program entry
│   ├── config.py                        # Crawler configuration file
│   ├── BroadTopicExtraction/            # Topic extraction module
│   │   ├── main.py                      # Topic extraction main program
│   │   ├── database_manager.py          # Database manager
│   │   ├── get_today_news.py            # Today's news fetcher
│   │   └── topic_extractor.py           # Topic extractor
│   ├── DeepSentimentCrawling/           # Deep sentiment crawling module
│   │   ├── main.py                      # Deep crawling main program
│   │   ├── keyword_manager.py           # Keyword manager
│   │   ├── platform_crawler.py          # Platform crawler manager
│   │   └── MediaCrawler/                # Media crawler core
│   │       ├── main.py
│   │       ├── config/                  # Platform configurations
│   │       ├── media_platform/          # Platform crawler implementations
│   │       └── ...
│   └── schema/                          # Database schema definitions
│       ├── db_manager.py                # Database manager
│       ├── init_database.py             # Database initialization script
│       ├── mindspider_tables.sql        # Database table structure SQL
│       ├── models_bigdata.py            # SQLAlchemy mappings for large-scale media opinion tables
│       └── models_sa.py                 # ORM models for DailyTopic/Task extension tables
├── SentimentAnalysisModel/              # Sentiment analysis model collection
│   ├── WeiboSentiment_Finetuned/        # Fine-tuned BERT/GPT-2 models
│   │   ├── BertChinese-Lora/            # BERT Chinese LoRA fine-tuning
│   │   │   ├── train.py
│   │   │   ├── predict.py
│   │   │   └── ...
│   │   └── GPT2-Lora/                   # GPT-2 LoRA fine-tuning
│   │       ├── train.py
│   │       ├── predict.py
│   │       └── ...
│   ├── WeiboMultilingualSentiment/      # Multilingual sentiment analysis
│   │   ├── train.py
│   │   ├── predict.py
│   │   └── ...
│   ├── WeiboSentiment_SmallQwen/        # Small parameter Qwen3 fine-tuning
│   │   ├── train.py
│   │   ├── predict_universal.py
│   │   └── ...
│   └── WeiboSentiment_MachineLearning/  # Traditional machine learning methods
│       ├── train.py
│       ├── predict.py
│       └── ...
├── SingleEngineApp/                     # Individual Agent Streamlit applications
│   ├── query_engine_streamlit_app.py    # QueryEngine standalone app
│   ├── media_engine_streamlit_app.py    # MediaEngine standalone app
│   └── insight_engine_streamlit_app.py  # InsightEngine standalone app
├── query_engine_streamlit_reports/      # QueryEngine standalone app outputs
├── media_engine_streamlit_reports/      # MediaEngine standalone app outputs
├── insight_engine_streamlit_reports/    # InsightEngine standalone app outputs
├── templates/                           # Flask frontend templates
│   └── index.html                       # Main interface HTML
├── static/                              # Static resources
│   └── image/                           # Image resources
│       ├── logo_compressed.png
│       ├── framework.png
│       └── ...
├── logs/                                # Runtime log directory
├── final_reports/                       # Final generated report files
│   ├── ir/                              # Report IR JSON files
│   └── *.html                           # Final HTML reports
├── utils/                               # Common utility functions
│   ├── forum_reader.py                  # Agent inter-communication forum tool
│   ├── github_issues.py                 # Unified GitHub issue link generator and error formatter
│   └── retry_helper.py                  # Network request retry mechanism utility
├── tests/                               # Unit tests and integration tests
│   ├── run_tests.py                     # pytest entry script
│   ├── test_monitor.py                  # ForumEngine monitoring unit tests
│   ├── test_report_engine_sanitization.py  # ReportEngine security tests
│   └── ...
├── app.py                               # Flask main application entry point
├── config.py                            # Global configuration file
├── .env.example                         # Environment variable example file
├── docker-compose.yml                   # Docker multi-service orchestration config
├── Dockerfile                           # Docker image build file
├── requirements.txt                     # Python dependency list
├── regenerate_latest_pdf.py             # PDF regeneration utility script
├── report_engine_only.py                # Report Engine CLI version
├── README.md                            # Chinese documentation
├── README-EN.md                         # English documentation
├── CONTRIBUTING.md                      # Chinese contribution guide
├── CONTRIBUTING-EN.md                   # English contribution guide
└── LICENSE                              # GPL-2.0 open source license
```
Run Command: Execute the following command to start all services in the background:
```bash
docker compose up -d
```
Note: if image pulls are slow, the original docker-compose.yml file provides alternative mirror registry addresses as comments; simply swap them in.
Configure the database connection information with the following parameters. The system also supports MySQL, so you can adjust the settings as needed:
| Configuration Item | Value to Use | Description |
|---|---|---|
| DB_HOST | db | Database service name (as defined in docker-compose.yml) |
| DB_PORT | 5432 | Default PostgreSQL port |
| DB_USER | bettafish | Database username |
| DB_PASSWORD | bettafish | Database password |
| DB_NAME | bettafish | Database name |
| Others | Keep defaults | Keep other parameters, such as database connection pool settings, at their default values. |
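With the Docker defaults above, the corresponding entries in your environment file look like this (variable names follow the .env.example shown later in this README):

```bash
DB_HOST=db
DB_PORT=5432
DB_USER=bettafish
DB_PASSWORD=bettafish
DB_NAME=bettafish
DB_DIALECT=postgresql
```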
All LLM calls follow the OpenAI API standard. After finishing the database configuration, configure all LLM-related parameters so the system can connect to your chosen LLM service.
Once you complete and save the configurations above, the system will be ready to run normally.
If you are new to building Agent systems, you can start with a very simple demo: Deep Search Agent Demo
```bash
# Create conda environment
conda create -n your_conda_name python=3.11
conda activate your_conda_name
```

```bash
# Create uv environment
uv venv --python 3.11  # Create a Python 3.11 environment
```
This section has detailed configuration instructions: Configure the dependencies
If Step 2 is skipped, the WeasyPrint library may not install correctly, and the PDF functionality may be unavailable.
```bash
# Basic dependency installation
pip install -r requirements.txt

# uv equivalent (faster installation)
uv pip install -r requirements.txt

# If you do not want to use the local sentiment analysis model (it has low
# computational requirements and defaults to the CPU version), comment out the
# 'Machine Learning' section of requirements.txt before running the command above.

# Install browser drivers (for crawler functionality)
playwright install chromium
```
Copy the .env.example file in the project root directory and rename it to .env.
Edit the .env file and fill in your API keys (you can also choose your own models and search proxies; see .env.example in the project root directory or config.py for details):
```bash
# ====================== Database Configuration ======================
# Database host, e.g., localhost or 127.0.0.1
DB_HOST=your_db_host
# Database port number, default is 3306
DB_PORT=3306
# Database username
DB_USER=your_db_user
# Database password
DB_PASSWORD=your_db_password
# Database name
DB_NAME=your_db_name
# Database character set, utf8mb4 is recommended for emoji compatibility
DB_CHARSET=utf8mb4
# Database type: postgresql or mysql
DB_DIALECT=postgresql
# Database initialization is not required, as it will be checked automatically upon executing app.py

# ====================== LLM Configuration ======================
# You can switch each Engine's LLM provider as long as it follows the OpenAI-compatible request format
# The configuration file provides recommended LLMs for each Agent. For initial deployment, please refer to the recommended settings first
# Insight Agent
INSIGHT_ENGINE_API_KEY=
INSIGHT_ENGINE_BASE_URL=
INSIGHT_ENGINE_MODEL_NAME=
# Media Agent
...
```
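As a quick sanity check after filling in these values, here is a minimal standalone script (not part of the repository; the variable names match the .env above) that builds an OpenAI-compatible client from the environment:

```python
# Minimal connectivity check for one Engine's LLM settings (illustrative only).
import os

from dotenv import load_dotenv  # pip install python-dotenv
from openai import OpenAI

load_dotenv()  # reads .env from the current working directory

client = OpenAI(
    api_key=os.environ["INSIGHT_ENGINE_API_KEY"],
    base_url=os.environ["INSIGHT_ENGINE_BASE_URL"],
)
reply = client.chat.completions.create(
    model=os.environ["INSIGHT_ENGINE_MODEL_NAME"],
    messages=[{"role": "user", "content": "ping"}],
)
print(reply.choices[0].message.content)
```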
```bash
# In project root directory, activate conda environment
conda activate your_conda_name

# Start main application
python app.py
```
uv version startup command:
```bash
# In project root directory, activate uv environment
.venv\Scripts\activate

# Start main application
python app.py
```
Note 1: After a run is terminated, the Streamlit app might not shut down correctly and may still be occupying the port. If this occurs, find the process that is holding the port and kill it.
Note 2: Data scraping needs to be performed as a separate operation. Please refer to the instructions in section 5.3.
Visit http://localhost:5000 to use the complete system
```bash
# Start QueryEngine
streamlit run SingleEngineApp/query_engine_streamlit_app.py --server.port 8503

# Start MediaEngine
streamlit run SingleEngineApp/media_engine_streamlit_app.py --server.port 8502

# Start InsightEngine
streamlit run SingleEngineApp/insight_engine_streamlit_app.py --server.port 8501
```
This section has detailed configuration documentation: MindSpider Usage Guide
MindSpider Running Example
```bash
# Enter crawler directory
cd MindSpider

# Project initialization
python main.py --setup

# Run topic extraction (get hot news and keywords)
python main.py --broad-topic

# Run complete crawler workflow
python main.py --complete --date 2024-01-20

# Run topic extraction only
python main.py --broad-topic --date 2024-01-20

# Run deep crawling only
python main.py --deep-sentiment --platforms xhs dy wb
```
This tool skips the execution phase of all three analysis engines: it directly loads their most recent log files and generates a consolidated report without the Web interface (it also skips the incremental file-validation steps). It is typically used for rapid retries when a report turns out unsatisfactory, or when debugging the Report Engine.
```bash
# Basic usage (automatically extract topic from filename)
python report_engine_only.py

# Specify report topic
python report_engine_only.py --query "Civil Engineering Industry Analysis"

# Skip PDF generation (even if the system supports it)
python report_engine_only.py --skip-pdf

# Show verbose logging
python report_engine_only.py --verbose

# Show help information
python report_engine_only.py --help
```
Features:
- Automatically loads the most recent log files from the three engines' report directories (insight_engine_streamlit_reports, media_engine_streamlit_reports, query_engine_streamlit_reports)
- Asks for confirmation before generating (input y to continue, input n to exit)
- Writes the final HTML report to the final_reports/ directory and the PDF version to the final_reports/pdf/ directory
- Output files are named final_report_{topic}_{timestamp}.html/pdf

Notes:
- The three engines' .md report files must already exist from a previous run
- LLM settings are read from the .env file in the project root directory, and the sub-agents automatically inherit the root directory configuration

Each agent has dedicated configuration files that can be adjusted according to needs:
```python
# QueryEngine/utils/config.py
class Config:
    max_reflections = 2        # Reflection rounds
    max_search_results = 15    # Maximum search results
    max_content_length = 8000  # Maximum content length
```

```python
# MediaEngine/utils/config.py
class Config:
    comprehensive_search_limit = 10  # Comprehensive search limit
    web_search_limit = 15            # Web search limit
```

```python
# InsightEngine/utils/config.py
class Config:
    default_search_topic_globally_limit = 200  # Global search limit
    default_get_comments_limit = 500           # Comment retrieval limit
    max_search_results_for_llm = 50            # Max results passed to the LLM
```

```python
# InsightEngine/tools/sentiment_analyzer.py
SENTIMENT_CONFIG = {
    'model_type': 'multilingual',  # Options: 'bert', 'multilingual', 'qwen'
    'confidence_threshold': 0.8,   # Confidence threshold
    'batch_size': 32,              # Batch size
    'max_sequence_length': 512,    # Max sequence length
}
```
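For intuition, here is a minimal sketch (not the project's code) of how options like these are typically applied around a model call:

```python
# Illustrative only: how SENTIMENT_CONFIG-style options usually gate a classifier.
def classify_batch(texts, model, config):
    """`model` is any callable mapping a list of strings to (label, confidence) pairs."""
    results = []
    for i in range(0, len(texts), config['batch_size']):
        # Truncate to the max sequence length (characters here for simplicity;
        # the real models count tokens), then process one batch at a time.
        batch = [t[:config['max_sequence_length']] for t in texts[i:i + config['batch_size']]]
        for label, score in model(batch):
            # Fall back to 'neutral' when the model is insufficiently confident.
            results.append(label if score >= config['confidence_threshold'] else 'neutral')
    return results
```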
The system supports any LLM provider that follows the OpenAI request format. You only need to fill in KEY, BASE_URL, and MODEL_NAME in config.py.
What is the OpenAI request format? Here's a simple example:
```python
from openai import OpenAI

client = OpenAI(api_key="your_api_key", base_url="https://api.siliconflow.cn/v1")
response = client.chat.completions.create(
    model="Qwen/Qwen2.5-72B-Instruct",
    messages=[
        {'role': 'user', 'content': "What new opportunities will reasoning models bring to the market?"}
    ],
)
complete_response = response.choices[0].message.content
print(complete_response)
```
The system integrates multiple sentiment analysis methods, selectable based on needs:
```bash
# Multilingual sentiment analysis
cd SentimentAnalysisModel/WeiboMultilingualSentiment
python predict.py --text "This product is amazing!" --lang "en"

# Small-parameter Qwen3 fine-tuned model
cd SentimentAnalysisModel/WeiboSentiment_SmallQwen
python predict_universal.py --text "This event was very successful"

# Use BERT Chinese model
cd SentimentAnalysisModel/WeiboSentiment_Finetuned/BertChinese-Lora
python predict.py --text "This product is really great"

# GPT-2 LoRA fine-tuned model
cd SentimentAnalysisModel/WeiboSentiment_Finetuned/GPT2-Lora
python predict.py --text "I'm not feeling great today"

# Traditional machine learning methods
cd SentimentAnalysisModel/WeiboSentiment_MachineLearning
python predict.py --model_type "svm" --text "Service attitude needs improvement"
```
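If you want to call one of these predictors from your own Python code, a small wrapper around the CLI shown above works. This helper is illustrative, not part of the repo, and assumes the CLI prints its prediction to stdout:

```python
import subprocess

def predict_sentiment(text: str, lang: str = "zh") -> str:
    """Shell out to the multilingual predictor CLI and return its stdout."""
    result = subprocess.run(
        ["python", "predict.py", "--text", text, "--lang", lang],
        cwd="SentimentAnalysisModel/WeiboMultilingualSentiment",
        capture_output=True, text=True, check=True,
    )
    return result.stdout.strip()

print(predict_sentiment("This product is amazing!", lang="en"))
```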
```python
# Add your business database configuration in config.py
BUSINESS_DB_HOST = "your_business_db_host"
BUSINESS_DB_PORT = 3306
BUSINESS_DB_USER = "your_business_user"
BUSINESS_DB_PASSWORD = "your_business_password"
BUSINESS_DB_NAME = "your_business_database"
```
```python
# InsightEngine/tools/custom_db_tool.py
import config  # the root-level config.py holding the BUSINESS_DB_* settings

class CustomBusinessDBTool:
    """Custom business database query tool"""

    def __init__(self):
        self.connection_config = {
            'host': config.BUSINESS_DB_HOST,
            'port': config.BUSINESS_DB_PORT,
            'user': config.BUSINESS_DB_USER,
            'password': config.BUSINESS_DB_PASSWORD,
            'database': config.BUSINESS_DB_NAME,
        }

    def search_business_data(self, query: str, table: str):
        """Query business data"""
        # Implement your business logic
        pass

    def get_customer_feedback(self, product_id: str):
        """Get customer feedback data"""
        # Implement customer feedback query logic
        pass
```
```python
# Integrate custom tools in InsightEngine/agent.py
from .tools.custom_db_tool import CustomBusinessDBTool

class DeepSearchAgent:
    def __init__(self, config=None):
        # ... other initialization code
        self.custom_db_tool = CustomBusinessDBTool()

    def execute_custom_search(self, query: str):
        """Execute custom business data search"""
        return self.custom_db_tool.search_business_data(query, "your_table")
```
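Hypothetical usage of the integration above (the method names follow the sketch, not necessarily the real DeepSearchAgent API, and the query strings are placeholders):

```python
agent = DeepSearchAgent()

# "external trends + internal insights" in one place
search_results = agent.execute_custom_search("after-sales complaints, last 30 days")
feedback = agent.custom_db_tool.get_customer_feedback(product_id="YOUR_PRODUCT_ID")
```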
The system supports uploading custom template files (.md or .txt format), selectable when generating reports.
Create new templates in the ReportEngine/report_template/ directory, and our Agent will automatically select the most appropriate template.
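A minimal hypothetical template, just to show the shape; the shipped templates (such as 企业品牌声誉分析报告.md) define the chapter headings that the template parser slices on:

```markdown
# Financial Market Analysis Report

## Executive Summary

## Market Sentiment Overview

## Risk Signals

## Outlook and Recommendations
```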
We welcome all forms of contributions!
Please read the following contribution guidelines:
The system currently completes only the first two steps of the "three-step approach" (requirement input -> in-depth analysis -> prediction). The missing step is prediction, and handing it directly to an LLM lacks persuasiveness.
Through long-term crawling and collection we have accumulated massive data on topic popularity trends over time, trending events, and other change patterns across the entire network, so the conditions for building prediction models are now in place. Our team will apply its technical reserves in time-series models, graph neural networks, multimodal fusion, and other prediction techniques to deliver truly data-driven public opinion prediction.
Important Notice: This project is for educational, academic research, and learning purposes only
Compliance Statement:
Web Scraping Disclaimer:
Data Usage Disclaimer:
Technical Disclaimer:
Liability Limitation:
Please carefully read and understand the above disclaimer before using this project. Using this project indicates that you have agreed to and accepted all the above terms.
This project is licensed under the GPL-2.0 License. Please see the LICENSE file for details.
FAQ: https://github.com/666ghj/BettaFish/issues/185
Thanks to these excellent contributors: