# Agent Swarm - Multi-Agent Cluster Orchestration

## 🚨 Mandatory Entry - Must Execute First!

No matter what task the user requests, you must execute the entry script before using the agent cluster:

```bash
python3 scripts/swarm_entry.py
```

### Determine Next Steps Based on the Returned `status`

The script returns JSON; act according to the `status` field:

| status | Meaning | Next Action |
|---|---|---|
| `need_config` | Not initialized | Display the `display` and `prompt` content to the user, wait for the user to select A/B/C |
| `ready` | Ready | Directly enter task orchestration, use the agents in the `agents` list |
### Example Workflow

```python
# Step 1: Execute the entry script (via the exec tool)
result = exec("python3 scripts/swarm_entry.py")

# Step 2: Parse the returned JSON and act on `status`
if result.status == "need_config":
    # Display configuration options to the user
    print(result.display)   # Detected models
    print(result.prompt)    # "Please select A/B/C"
    # Wait for the user's response...
elif result.status == "ready":
    # Start task orchestration directly
    agents = result.agents
    # Continue executing the user's task...
```
### Complete Initialization After User Selection

After the user selects a configuration method, perform initialization:

```bash
# After the user selects A (Auto-assign)
python3 scripts/swarm_entry.py --action init
```

### Reset Configuration

```bash
python3 scripts/swarm_entry.py --action reset
```
## Overview

This skill makes you the commander of the agent team, enabling you to intelligently schedule multiple specialized agents to collaborate on work according to task complexity.

**Core Process**: Entry Check → Task Analysis → Sub-task Breakdown → Select Suitable Agents → Parallel/Serial Execution → Result Integration
## ⚡ Configuration Wizard Details

When the entry script returns `need_config`, execute the following configuration process:

### Step 1: Display Detection Results

The script has already detected the models automatically; display the `display` content to the user directly:
```markdown
## 📦 Your OpenClaw Has the Following Models Configured

### 🔴 High-performance Models (Suitable for: coder, writer, analyst, reviewer)
- Claude Opus 4.5 (`vendor-claude-opus-4-5/aws-claude-opus-4-5`)

### 🟡 Medium Models (Suitable for: pm, designer)
- Gemini 3 Pro (`vendor-gemini-3-pro/gemini-3-pro-preview`)

### 🟢 Lightweight Models (Suitable for: researcher, assistant)
- GLM-4.7 (`lixiang-glm-4-7/Kivy-GLM-4.7`)
```
### Step 2: Display Configuration Options

```markdown
Please select a configuration method:

**A. Auto-assign** — Automatically configure the agent team based on your existing models
**B. Add new model** — I will recommend mainstream models for you to choose from
**C. Custom configuration** — You manually specify the model for each agent

Please reply with A/B/C
```
### Step 3: Execute Based on User Selection

**Select A (Auto-assign)**:

```bash
python3 scripts/swarm_entry.py --action init
```

**Select B (Add new model)**:
- Display mainstream model options and configuration guidelines
- After the user provides the configuration, update the OpenClaw configuration
- Then execute init

**Select C (Custom configuration)**:
- Ask the user to specify the model for each agent
- Execute init after collection is complete

### Step 4: Confirm Initialization Completion

After successful initialization, inform the user:

> ✅ Agent Swarm configuration completed! You can now start using the agent team.
## Legacy Configuration Method (Compatible)

### Step 1: Detect Models via the Gateway Tool

If you need to detect models manually, you can also use the gateway tool:

```javascript
// Use the gateway tool to get the current configuration
gateway({ action: "config.get" })
```

Extract all available models under `models.providers` from the returned configuration.
### Step 2: Display Available Models to User

Display by performance level:

```markdown
## 📦 Your OpenClaw Has the Following Models Configured

### 🔴 High-performance Models (Suitable for: coder, writer, analyst, reviewer)
- Claude Opus 4.5 (`claude-opus-4-5/claude-opus-4-5`)

### 🟡 Medium Models (Suitable for: pm, designer)
- Gemini 3 Pro (`vendor-gemini-3-pro/gemini-3-pro-preview`)

### 🟢 Lightweight Models (Suitable for: researcher, assistant)
- GLM-4.7 (`glm-4-7/Kivy-GLM-4.7`)

### 🖼️ Image Models (Suitable for: designer)
- Gemini 3 Pro Image (`gemini-3-pro-image/gemini-3-pro-image-preview`)
```
### Step 3: Ask User for Configuration Method

```markdown
Please select a configuration method:

**A. Auto-assign** — Automatically configure the agent team based on your existing models
- High-complexity tasks (coding/writing/analysis) → Use your most powerful model
- Medium-complexity tasks (planning/design) → Use medium models
- Lightweight tasks (search/Q&A) → Use the lowest-cost model

**B. Add new model** — I will recommend mainstream models for you to choose from
- Claude (Anthropic)
- GPT-4o (OpenAI)
- Gemini (Google)
- DeepSeek V3 (DeepSeek)
- Qwen Max (Alibaba Cloud)
- GLM-4 (Zhipu)

**C. Custom configuration** — You manually specify the model for each agent

Please reply with A/B/C or tell me your choice directly.
```
### Step 4: Execute Configuration Based on User Selection

**Select A (Auto-assign)**:
- Analyze existing models and assign them to each agent by capability level
- Generate a configuration patch and apply it

**Select B (Add new model)**:
- Display mainstream model options and API configuration guidelines
- After the user provides the API Key, generate the model configuration
- Update the OpenClaw configuration

**Select C (Custom configuration)**:
- List all agents and their recommended model levels
- Ask the user to specify each one
### Configuration Wizard Script

You can run the configuration wizard script to assist with detection:

```bash
python3 scripts/setup_wizard.py
```

The script will:
- Automatically read the OpenClaw configuration
- Analyze the configured models
- Suggest agent allocation schemes
- Generate configuration patch files
### Mainstream Model Recommendations

| Model | Provider | Recommended For | API Type |
|---|---|---|---|
| Claude Opus 4/4.5 | Anthropic | High-complexity tasks | anthropic-messages |
| Claude Sonnet 4 | Anthropic | Medium-complexity tasks | anthropic-messages |
| GPT-4o | OpenAI | General tasks | openai-completions |
| Gemini 2.5 Pro | Google | Long document processing | google-generative-ai |
| DeepSeek V3 | DeepSeek | Cost-effective option | openai-completions |
| Qwen Max | Alibaba Cloud | Chinese tasks | openai-completions |
| GLM-4 | Zhipu | Lightweight tasks | openai-completions |
### Model Addition Example

If the user chooses to add a new model, generate a configuration similar to:

```json
{
  "models": {
    "providers": {
      "my-deepseek": {
        "baseUrl": "https://api.deepseek.com/v1",
        "apiKey": "sk-xxx (provided by user)",
        "api": "openai-completions",
        "authHeader": "Authorization",
        "models": [{
          "id": "deepseek-chat",
          "name": "DeepSeek V3",
          "contextWindow": 64000,
          "maxTokens": 8192
        }]
      }
    }
  }
}
```
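When applying such a patch, it must be merged into the existing OpenClaw configuration without clobbering providers that are already there. A minimal Python sketch of that merge (the `deep_merge` helper is hypothetical, not part of this skill's scripts):

```python
def deep_merge(base: dict, patch: dict) -> dict:
    """Recursively merge `patch` into a copy of `base`; nested dicts merge, other values overwrite."""
    merged = dict(base)
    for key, value in patch.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = deep_merge(merged[key], value)
        else:
            merged[key] = value
    return merged

config = {"models": {"providers": {"existing": {"api": "openai-completions"}}}}
patch = {"models": {"providers": {"my-deepseek": {"baseUrl": "https://api.deepseek.com/v1"}}}}
config = deep_merge(config, patch)
# config["models"]["providers"] now contains both "existing" and "my-deepseek"
```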
## Available Agent Teams

| Agent ID | Emoji | Role Positioning | Core Competencies | Available Tools |
|---|---|---|---|---|
| pm | 📋 | Planner | Requirements analysis, task breakdown, priority sorting | read, write, edit, web_search, web_fetch, memory |
| researcher | 🔍 | Information Hunter | Broad search, cross-validation, structured output | web_search, web_fetch, read, write, memory |
| coder | 👨💻 | Code Artisan | Coding, debugging, testing, refactoring | read, write, edit, exec, process |
| writer | ✍️ | Text Artisan | Documentation, reports, copywriting, translation | read, write, edit, memory |
| designer | 🎨 | Visual Creator | Illustration, data visualization | read, write |
| analyst | 📊 | Data Detective | Data processing, statistical analysis, trend prediction | read, write, edit, exec |
| reviewer | 🔎 | Quality Gatekeeper | Code review, content audit, compliance check | read, memory |
| assistant | 💬 | Communication Bridge | Simple Q&A, message forwarding, reminders | message, read, sessions_send |
| automator | 🤖 | Efficiency Expert | Scheduled tasks, web automation, scripting | exec, process, cron, browser, read, write |
| github-tracker | 🔥 | GitHub Hunter | Track popular projects, analyze trends, generate daily reports | web_search, web_fetch, read, write, memory |
## Agent Personality Overview

| Agent | One-sentence Positioning | Core Principles |
|---|---|---|
| 📋 pm | Turn vague requirements into clear plans | User perspective, goal-oriented, priority thinking |
| 🔍 researcher | Find information others can't | Breadth-first, multi-source verification, cite sources |
| 👨💻 coder | Write elegant and efficient programs | Understand first, then act; simplicity over complexity; readability first |
| ✍️ writer | Turn information into valuable content | Reader-first, clear structure, substance over form |
| 🎨 designer | Turn ideas into images | Clear purpose, concise and clear, consistent style |
| 📊 analyst | Discover stories in numbers | Data quality, hypothesis-driven, insight-oriented |
| 🔎 reviewer | Ensure output meets standards | Objective and fair, constructive feedback, no direct modification |
| 💬 assistant | Deliver information and respond quickly | Concise and clear, know boundaries, friendly and polite |
| 🤖 automator | Automate repetitive tasks | ROI thinking, stable and reliable, with monitoring |
| 🔥 github-tracker | Discover popular GitHub projects | Data-driven, focus on value, trend insight |
## Model Cost Reference

| Model | Input ($/M) | Output ($/M) | Used For |
|---|---|---|---|
| Claude Opus 4.5 | $5.00 | $25.00 | main, coder, writer, analyst, reviewer, automator |
| Gemini 3 Pro | $1.25 | $10.00 | pm, researcher |
| Gemini 3 Pro Image | $1.25 | $10.00 | designer |
| GLM-4.7 | ~$0.014 | ~$0.014 | assistant, github-tracker |

**Cost Optimization Principle**: Use cheap models for simple tasks; reserve expensive models for complex tasks.
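The table above is enough to estimate a task's cost before dispatching it. A small sketch, with the prices hard-coded from the table (the model keys are illustrative, not OpenClaw identifiers):

```python
# $ per million tokens (input, output), from the cost reference table above.
PRICES = {
    "claude-opus-4.5": (5.00, 25.00),
    "gemini-3-pro": (1.25, 10.00),
    "glm-4.7": (0.014, 0.014),
}

def estimate_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Dollar cost of a single call at the listed per-million-token prices."""
    price_in, price_out = PRICES[model]
    return (input_tokens * price_in + output_tokens * price_out) / 1_000_000

print(round(estimate_cost("claude-opus-4.5", 15_000, 2_500), 4))  # prints 0.1375
```

The same 15k-in / 2.5k-out call on GLM-4.7 costs well under a cent, which is the whole argument for routing lightweight tasks to cheap models.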
## Orchestration Process

### Step 1: Task Analysis

```
Receive task → Judge complexity
├── Simple task → Execute directly
└── Complex task → Enter orchestration mode
```
### Step 2: Task Breakdown

Decompose complex tasks into independent sub-tasks, clarifying:
- The goal and output format of each sub-task
- Input data and context
- Dependencies (which sub-tasks can run in parallel, which must run serially)
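The parallel/serial split in that dependency analysis can be computed mechanically: group sub-tasks into "waves" in which every task has all prerequisites satisfied. A sketch using Python's standard `graphlib` (the task names are hypothetical):

```python
from graphlib import TopologicalSorter

# Hypothetical sub-task map: task -> set of prerequisite tasks.
deps = {
    "research-a": set(),
    "research-b": set(),
    "write": {"research-a", "research-b"},
    "review": {"write"},
}

sorter = TopologicalSorter(deps)
sorter.prepare()
waves = []
while sorter.is_active():
    ready = sorted(sorter.get_ready())  # every task here can be spawned in parallel
    waves.append(ready)
    sorter.done(*ready)

print(waves)  # [['research-a', 'research-b'], ['write'], ['review']]
```

Each wave maps directly to one round of parallel spawns, with the next wave dispatched only after all reports from the current one arrive.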
### Step 3: Agent Selection

Select the most suitable agent based on the nature of the sub-task:

| Task Type | Recommended Agent | Description |
|---|---|---|
| Project planning, requirements analysis | 📋 pm | Outputs task list and priorities |
| Information collection, data organization | 🔍 researcher | Multi-source search, structured output |
| Code writing, bug fixing, scripting | 👨💻 coder | Can execute shell commands |
| Article writing, documentation, reports | ✍️ writer | Creates based on materials |
| Illustration, charts | 🎨 designer | Image generation |
| Data analysis, statistics | 📊 analyst | Can execute data processing scripts |
| Code review, content audit | 🔎 reviewer | Read-only, provides suggestions |
| Message forwarding, simple Q&A | 💬 assistant | Quick response |
| Scheduled tasks, automation | 🤖 automator | Can set cron jobs |
### Step 4: Execution Scheduling

Use `sessions_spawn` to schedule sub-agents. Spawning is asynchronous; sub-tasks automatically report their results on completion.
**Parallel Execution Example** (multiple spawns dispatched at once, executed independently):

```javascript
// Spawn repeatedly in the same round; these tasks execute in parallel.
// Sub-tasks report results separately on completion; the main agent collects and summarizes them.

// Method 1: Direct consecutive spawns
sessions_spawn({ task: "Search for LangChain materials...", agentId: "researcher", label: "research-langchain" })
sessions_spawn({ task: "Search for AutoGPT materials...", agentId: "researcher", label: "research-autogpt" })
sessions_spawn({ task: "Search for CrewAI materials...", agentId: "researcher", label: "research-crewai" })
// The three tasks execute in parallel and report results separately

// Method 2: Dispatch in a loop (clearer)
const frameworks = ["LangChain", "AutoGPT", "CrewAI"]
frameworks.forEach(name => {
  sessions_spawn({
    task: `Search for the features, advantages, disadvantages, and applicable scenarios of the ${name} framework, and output a structured summary to /workspace/research/${name.toLowerCase()}.md`,
    agentId: "researcher",
    label: `research-${name.toLowerCase()}`
  })
})
// Sub-tasks automatically report results on completion; the main agent summarizes them all
```
**Serial Execution Example** (wait for the previous step's results before continuing):

```javascript
// Serial execution waits for each task to complete, spawning the next only after receiving its report.
// Process: Research → (wait for report) → Writing → (wait for report) → Illustration → (wait for report) → Review

// Step 1: Dispatch the research task first
sessions_spawn({ task: "Research AI Agent frameworks...", agentId: "researcher" })
// Wait for the researcher to report results...

// Step 2: After receiving the research results, dispatch the writing task
sessions_spawn({
  task: "Based on the research materials in /workspace/research/, write a comparative analysis article...",
  agentId: "writer"
})
// Wait for the writer to report...

// Step 3: After the article is completed, dispatch the illustration task
sessions_spawn({ task: "Generate illustrations for the article...", agentId: "designer" })
```
**Hybrid Orchestration Example** (parallel first, then serial):

```javascript
// Phase 1: Parallel research (dispatched simultaneously)
sessions_spawn({ task: "Search for LangChain...", agentId: "researcher", label: "r1" })
sessions_spawn({ task: "Search for AutoGPT...", agentId: "researcher", label: "r2" })
sessions_spawn({ task: "Search for CrewAI...", agentId: "researcher", label: "r3" })
// Wait for all 3 research tasks to complete...

// Phase 2: Serial processing (based on the summarized results)
sessions_spawn({ task: "Integrate research materials and write a report...", agentId: "writer" })
// Wait for the writer to complete...
sessions_spawn({ task: "Review report quality...", agentId: "reviewer" })
```
### Step 5: Result Integration

- Collect outputs from all sub-agents
- Integrate, deduplicate, and format them
- Output the final deliverables
- Always output execution statistics (see the template below)
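The deduplication step can be as simple as dropping exact repeats while preserving the order in which results arrived. A minimal sketch:

```python
def integrate(outputs: list[str]) -> str:
    """Merge sub-agent outputs: drop exact duplicates, keep first-seen order."""
    seen = set()
    unique = []
    for chunk in outputs:
        key = chunk.strip()
        if key and key not in seen:
            seen.add(key)
            unique.append(key)
    return "\n\n".join(unique)

merged = integrate(["LangChain notes", "LangChain notes", "CrewAI notes"])
# merged == "LangChain notes\n\nCrewAI notes"
```

Real integration usually also involves rewriting for coherence; this only handles the mechanical duplicate-dropping part.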
## Orchestration Examples

### Example 1: Technical Research Report

User: "Research mainstream AI Agent frameworks and write a comparative analysis article"

```
Orchestration Plan:
├── 🔍 researcher × 3 (parallel)
│   ├── Search LangChain - organize features, pros/cons, cases
│   ├── Search AutoGPT - organize features, pros/cons, cases
│   └── Search CrewAI - organize features, pros/cons, cases
├── ✍️ writer (serial, waits for research to complete)
│   └── Integrate materials and write the comparative analysis article
├── 🎨 designer (serial)
│   └── Generate framework comparison/architecture diagrams
└── 🔎 reviewer (serial)
    └── Review article quality and propose improvements
```

### Example 2: Code Project

User: "Help me refactor the authentication module of this project"

```
Orchestration Plan:
├── 📋 pm (optional)
│   └── Analyze requirements and break down refactoring steps
├── 👨💻 coder
│   └── Analyze existing code and implement the refactoring
└── 🔎 reviewer (serial)
    └── Code review to ensure quality
```

### Example 3: Data Analysis Report

User: "Analyze this sales data and generate a monthly report"

```
Orchestration Plan:
├── 📊 analyst
│   └── Data cleaning, statistical analysis, insight discovery
├── ✍️ writer (serial)
│   └── Write the analysis report
└── 🎨 designer (serial)
    └── Generate data visualization charts
```

### Example 4: Automation Task

User: "Help me set up automatic daily checks of GitHub trending every morning"

```
Orchestration Plan:
└── 🤖 automator
    └── Write script + set cron scheduled task
```
## Orchestration Principles

- **Do not over-orchestrate simple tasks** — if you can do it directly, do so; don't orchestrate for its own sake
- **Reasonable parallelism** — run independent tasks in parallel to improve efficiency
- **Clear handover** — sub-task outputs should be clear and complete for downstream use
- **Failure handling** — when a sub-task fails, decide whether to retry or skip
- **Result integration** — the final output should be coherent, not a simple concatenation
- **Cost awareness** — prefer cheap models; use expensive models only for complex tasks
## 🔧 Batch Output Strategy for Ultra-Long Text

When generating long files (such as complete reports or long documents), a single output may be truncated by model token limits, causing the tool call to fail.

### Problem Manifestation

```
Validation failed for tool "write":
- content: must have required property 'content'
```

Or the output is truncated mid-stream, resulting in incomplete file content.
### Solution: Segmented Generation + Script Merge

**Strategy 1: Dispatch a writer per chapter (recommended)**

Split the long report into chapters, dispatch different writers to draft them in parallel, then splice the parts together with a script:

```javascript
// Phase 1: Write each chapter in parallel
sessions_spawn({ task: "Write Chapters 1-2: Abstract and Background...", agentId: "writer", label: "ch01" })
sessions_spawn({ task: "Write Chapters 3-4: Core Content...", agentId: "writer", label: "ch02" })
sessions_spawn({ task: "Write Chapters 5-6: Conclusion...", agentId: "writer", label: "ch03" })

// Phase 2: After all chapters are complete, splice them with exec
exec(`
cat sections/ch01.md > FINAL-REPORT.md
cat sections/ch02.md >> FINAL-REPORT.md
cat sections/ch03.md >> FINAL-REPORT.md
`)
```
**Strategy 2: exec + heredoc append writing**

Within a single agent task, if the content is too long for one write, write it in segments:

```bash
# Write the file header first
cat > output.md << 'PART1'
# Title
## Part 1 content...
PART1

# Append subsequent content
cat >> output.md << 'PART2'
## Part 2 content...
PART2

# Continue appending
cat >> output.md << 'PART3'
## Part 3 content...
PART3
```
### Best Practices

| Report Length | Recommended Strategy |
|---|---|
| < 3000 words | Single writer outputs directly |
| 3000-8000 words | Split into 2-4 chapters, write in parallel, merge with a script |
| > 8000 words | Split into 5+ chapters, multiple writers in parallel + script merge |

**Core Principle**: Don't try to cap the length of a single output; instead, solve the long-text problem through task splitting and parallel execution.
## 🆘 Error Reporting Mechanism for Sub-Agents

Sub-agents may hit various errors during task execution (tool call failures, model limits, insufficient resources, etc.). To improve the task success rate, establish an error reporting mechanism.

### Mechanism Description

When a sub-agent task fails or returns an exception, the main agent should:

1. **Analyze the error type**:
   - Output truncation → adopt the segmented strategy
   - Tool call failure → check parameters or change the plan
   - Model not supported (e.g., Gemini Image does not support thinking) → adjust the configuration
   - Timeout → split the task or increase the time limit

2. **Select a solution**:
   - Dispatch additional sub-agents to share the load: split large tasks into small pieces across multiple sub-agents
   - Have the main agent handle it directly: complete simple tasks in the main agent
   - Adjust parameters and retry: retry after modifying the task description, timeout, or model configuration
### Error Handling Process

```
Sub-agent task fails
        ↓
Main agent receives failure notification
        ↓
Analyze error cause
├── Output too long → split into multiple sub-tasks, dispatch more writers in parallel
├── Tool unavailable → switch to exec or other solutions
├── Model limitation → adjust thinking/model configuration
└── Timeout → split task or extend the timeout
        ↓
Execute solution
        ↓
Summarize results
```
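The branch on error cause can be encoded as a tiny classifier. A sketch; the matched substrings are illustrative, not the exact messages the tools emit:

```python
def pick_strategy(error: str) -> str:
    """Map a failure message to one of the recovery strategies above."""
    error = error.lower()
    if "truncat" in error or "too long" in error:
        return "split"        # more writers in parallel, smaller chunks
    if "validation failed" in error or "tool" in error:
        return "fallback"     # switch to exec or another tool
    if "not supported" in error or "thinking" in error:
        return "reconfigure"  # adjust thinking/model configuration
    if "timeout" in error:
        return "extend"       # split the task or extend the timeout
    return "retry"            # default: retry once with a clarified task

print(pick_strategy("Output truncated at token limit"))  # split
```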
### Example: Handling Truncated Writer Output

```javascript
// The original task failed (output too long and truncated).
// On receiving the notification, the main agent switches to the segmented strategy.

// Solution: split into 3 sub-tasks
sessions_spawn({
  task: "Write Chapters 1-2 of the report (Abstract, Background), limit to 1500 words...",
  agentId: "writer",
  label: "report-part1"
})
sessions_spawn({
  task: "Write Chapters 3-4 of the report (Core Content), limit to 1500 words...",
  agentId: "writer",
  label: "report-part2"
})
sessions_spawn({
  task: "Write Chapters 5-6 of the report (Conclusion, References), limit to 1000 words...",
  agentId: "writer",
  label: "report-part3"
})
// Merge with exec after all parts are complete
```
### Add Reporting Guidelines to Sub-agent AGENTS.md

It is recommended to add the following to each sub-agent's AGENTS.md:

```markdown
## When Encountering Problems

If you hit any of the following situations, state it clearly in your output so the main agent can handle it:

1. **Task too large**: state "Task content is too much; recommend splitting into X sub-tasks"
2. **Tool unavailable**: state "Tool X call failed; the reason is Y"
3. **Insufficient information**: state "Missing X information; unable to complete the task"
4. **Beyond capability**: state "This task requires X capability; recommend assigning it to Y agent"

Do not fail silently; clear reporting helps the main agent find a solution.
```
## Calling Syntax

```javascript
sessions_spawn({
  task: "Specific task description, including necessary context and expected output format",
  agentId: "researcher",     // Specify the agent ID
  model: "glm",              // Optional: override the agent's default model
  thinking: "off",           // Optional: control thinking mode (off/minimal/low/medium/high)
  label: "task-name",        // Optional: for tracking
  runTimeoutSeconds: 300     // Optional: timeout in seconds
})
```
### ⚠️ Special Note: Designer Agent

**Important**: When calling the designer agent, you must explicitly set `thinking: "off"`, because the Gemini Image model does not support thinking mode:

```javascript
sessions_spawn({
  task: "Generate illustrations for the article...",
  agentId: "designer",
  thinking: "off"  // Mandatory! Gemini Image does not support thinking
})
```
### Task Description Best Practices

```markdown
A good task description should include:
1. A clear goal - what to do
2. Necessary context - background information
3. Output requirements - format, save location
4. Constraints - limits and precautions

Example:
"Search for the latest information on the LangChain framework and organize the following content:
1. Core features and architecture
2. Advantages and disadvantages
3. Typical use cases
4. Comparison with other frameworks
Output format: Markdown
Save to: /workspace/research/langchain.md
Language: Chinese"
```
## Task Completion Statistics

After completing an agent team collaboration task, you must output statistics:

```markdown
## 📊 Agent Team Execution Statistics

### Execution Details
| Agent | Task | Duration | Tokens (in/out) | Status |
|--------|------|------|-----------------|------|
| 🔍 researcher | LangChain Research | 2m30s | 8k/1.2k | ✅ |
| 🔍 researcher | AutoGPT Research | 2m45s | 9k/1.0k | ✅ |
| ✍️ writer | Write Report | 3m12s | 15k/2.5k | ✅ |
| 🎨 designer | Generate Illustrations | 45s | 2k/- | ✅ |

### Cost Summary
- **Total Duration**: 9m12s (actual after parallel optimization: 6m30s)
- **Total Tokens**: 34k input / 4.7k output
- **Actual Cost**: $0.12
- **Cost if using the main model for everything**: $0.29
- **Savings**: 59%

### Efficiency Analysis
- **Parallel Tasks**: 2 researchers in parallel
- **Serial Time Saved**: ~2m45s saved through parallelism
```

For the detailed template, see references/statistics-template.md
## Agent Work Directory

Each agent has an independent work directory containing its personality configuration:

```
/workspace/agents/
├── pm/             # 📋 Product Manager
│   ├── SOUL.md     # Personality definition
│   └── AGENTS.md   # Work specifications
├── researcher/     # 🔍 Researcher
├── coder/          # 👨💻 Programmer
├── writer/         # ✍️ Writer
├── designer/       # 🎨 Designer
├── analyst/        # 📊 Analyst
├── reviewer/       # 🔎 Reviewer
├── assistant/      # 💬 Assistant
└── automator/      # 🤖 Automator
```
## Agent Configuration Management

Use the `agent_manager.py` script to manage the agent cluster:

```bash
# List all agents
python3 scripts/agent_manager.py list

# View agent details
python3 scripts/agent_manager.py show researcher

# Add a new agent (from a template)
python3 scripts/agent_manager.py add my_agent --template researcher --name "My Agent" --emoji "🚀"

# Delete an agent (backed up by default)
python3 scripts/agent_manager.py remove my_agent

# Update an agent's configuration
python3 scripts/agent_manager.py update my_agent --name "New Name"
```
### Available Templates

| Template | Description | Default Model |
|---|---|---|
| (default) | General-purpose agent | claude-opus-4 |
| researcher | Research and investigation | glm-4 |
| coder | Programming development | claude-opus-4 |
| writer | Content writing | gemini-2.5-pro |
## Agent Experience Memory

Each agent can accumulate task experience to improve the quality of subsequent task execution.

### Experience Record Structure

```
/workspace/agents/<agent_id>/
└── memory/
    ├── experience.md    # Human-readable experience records
    └── experience.json  # Structured experience data
```
### Using experience_logger.py

```bash
# Record an experience
python3 scripts/experience_logger.py log researcher "When searching for technical information, English keywords work better" --task "LangChain Research"

# View an agent's experience
python3 scripts/experience_logger.py show researcher --limit 10

# Generate an experience summary
python3 scripts/experience_logger.py summary researcher

# Output experience that can be injected into a prompt (for injection during spawn)
python3 scripts/experience_logger.py inject researcher --limit 5
```
### Using Experience in Tasks

**Method 1: Inject experience into the task description**

```python
import subprocess

# Get historical experience
result = subprocess.run(
    ["python3", "scripts/experience_logger.py", "inject", "researcher", "--limit", "5"],
    capture_output=True, text=True
)
experiences = result.stdout

# Inject during spawn
sessions_spawn({
    task: f"""Search for xxx information...
{experiences}
""",
    agentId: "researcher"
})
```
**Method 2: The agent actively reads its experience**

Add guidelines to the agent's AGENTS.md:

```markdown
## Pre-Task Preparation
Before executing the task, read the historical experience in memory/experience.md.

## Post-Task Summary
After completing the task, summarize 1-3 valid experiences and record them in memory/experience.md.
```
### Experience Record Best Practices

✅ Good experience records:
- Specific and actionable: "Adding a language:python filter when searching GitHub is more accurate"
- Causal: "JSON output is more convenient for downstream processing than plain text"
- Targeted: "Read large files in chunks to avoid memory overflow"

❌ Records to avoid:
- Too general: "Work carefully"
- Too specific: "User A likes blue" (unless it's a personalized agent)
- Duplicates of existing content: "Output in Markdown format" (already in AGENTS.md)
### Automatic Experience Summary (Recommended)

Add the following to the end of each agent's AGENTS.md:

```markdown
## After Task Completion
1. Check whether the output meets the requirements
2. Summarize 1-3 valid experiences from this task
3. Append each experience to memory/experience.md in the format:
   - [YYYY-MM-DD] Experience description (Task name)
```

This way, the agent automatically summarizes its experience after completing a task, without manual intervention.
## Configuration and Deployment

For configuring new agent teams or adding new models, see references/setup-guide.md.

Use the initialization script to quickly create the work directory:

```bash
python3 scripts/init_agents.py --base-path /workspace/agents
```