model-council
Model Council: Multi-Model Consensus
Run the same problem through multiple AI models in parallel, collect their analysis only, then Claude Code synthesizes and decides the best approach.
Unlike code-council (which uses one model with multiple approaches), model-council leverages different model architectures for true ensemble diversity.
Critical: Analysis Only Mode
IMPORTANT: External models provide analysis and recommendations ONLY. They do NOT make code changes.
- External models: Analyze, suggest, reason, compare options
- Claude Code: Synthesizes all inputs, makes final decision, implements changes
This ensures:
- Claude Code remains in control of the codebase
- No conflicting changes from multiple sources
- Best ideas from all models, unified execution
Why Multi-Model?
Different models have different:
- Training data and knowledge cutoffs
- Reasoning patterns and biases
- Strengths (math, code, creativity, etc.)
When multiple independent models agree → High confidence the answer is correct.
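As a back-of-the-envelope illustration of why agreement raises confidence (assuming independent errors, which real models trained on overlapping data do not fully have):

```python
def all_wrong_probability(error_rate: float, n_models: int) -> float:
    """Chance that every one of n independent models errs at once.

    Real models share training data, so errors correlate and the true
    risk is higher than this optimistic bound suggests.
    """
    return error_rate ** n_models

# With a 20% per-model error rate, three independent models all erring
# at once drops below 1%.
single = all_wrong_probability(0.2, 1)  # 0.2
trio = all_wrong_probability(0.2, 3)    # ~0.008
```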
Execution Modes
Mode 1: CLI Agents (Uses Your Existing Accounts)
Call CLI tools that use your logged-in accounts, leveraging your existing subscriptions!
| CLI Tool | Model | Status |
|---|---|---|
| `claude` | Claude (this session) | ✅ Already running |
| `codex` | OpenAI Codex | Requires setup |
| `gemini` | Google Gemini | Requires setup |
| `aider` | Multi-model | Requires setup |
CLI Setup Instructions
**OpenAI Codex CLI:**
```bash
# Install via npm
npm install -g @openai/codex

# Login (uses browser auth)
codex auth

# Verify
codex --version
```

**Google Gemini CLI:**
```bash
# Install via npm
npm install -g @google/gemini-cli

# Or use gcloud with Vertex AI
gcloud auth application-default login

# Verify
gemini --version
```

**Aider (Multi-model, Recommended):**
```bash
# Install via pip
pip install aider-chat

# Configure with your API keys
export OPENAI_API_KEY="sk-..."
export ANTHROPIC_API_KEY="sk-ant-..."

# Run with specific model
aider --model gpt-4o --message "analyze this code"
```

**Check What's Installed:**
```bash
python3 ${CLAUDE_PLUGIN_ROOT}/skills/model-council/scripts/detect_clis.py
```

Mode 2: API Calls (Pay per token)
Direct API calls - more reliable, works without CLI setup, but costs money.
Required environment variables:
- `ANTHROPIC_API_KEY` - For Claude API (https://console.anthropic.com/)
- `OPENAI_API_KEY` - For GPT-4 API (https://platform.openai.com/api-keys)
- `GOOGLE_API_KEY` - For Gemini API (https://aistudio.google.com/apikey)
- `XAI_API_KEY` - For Grok API (https://console.x.ai/)
Configuration
User Model Selection
Users can specify models inline:
```
model council with claude, gpt-4o, gemini: solve this problem
model council (claude + codex): fix this bug
model council all: use all available models
```

Default Models
If not specified, use all available:
- Check which CLI tools are installed
- Check which API keys are set
- Use what's available
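That availability check could be sketched as follows; `detect_available_models` is an illustrative helper, not one of the plugin's scripts, and the CLI names and environment variables are taken from the sections above:

```python
import os
import shutil

# CLI tools (Mode 1) and API keys (Mode 2) named in this document.
CLI_TOOLS = ["claude", "codex", "gemini", "aider"]
API_KEY_MODELS = {
    "ANTHROPIC_API_KEY": "claude-sonnet",
    "OPENAI_API_KEY": "gpt-4o",
    "GOOGLE_API_KEY": "gemini-flash",
    "XAI_API_KEY": "grok",
}

def detect_available_models() -> list[str]:
    """Installed CLIs first, then API providers whose keys are set."""
    available = [tool for tool in CLI_TOOLS if shutil.which(tool)]
    available += [model for env, model in API_KEY_MODELS.items()
                  if os.environ.get(env)]
    return available
```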
Config File (Optional)
Users can create `~/.model-council.yaml`:

```yaml
# Preferred models (in order)
models:
  - claude       # Use Claude Code CLI (current session)
  - codex        # Use Codex CLI if installed
  - gemini-cli   # Use Gemini CLI if installed

# Fallback to APIs if CLIs not available
fallback_to_api: true

# API models to use when falling back
api_models:
  anthropic: claude-sonnet-4-20250514
  openai: gpt-4o
  google: gemini-2.0-flash
  xai: grok-3

# Timeout per model (seconds)
timeout: 120

# Run in parallel or sequential
parallel: true
```
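Loading this file with fallbacks to sensible defaults might look like the following sketch (assumes PyYAML for parsing; `load_council_config` is an illustrative name, and the defaults mirror the file above):

```python
import os

DEFAULTS = {
    "models": ["claude"],   # the current session is always available
    "fallback_to_api": True,
    "timeout": 120,         # seconds per model
    "parallel": True,
}

def load_council_config(path: str = "~/.model-council.yaml") -> dict:
    """Merge the user's YAML config (if present) over these defaults."""
    config = dict(DEFAULTS)
    expanded = os.path.expanduser(path)
    if os.path.exists(expanded):
        import yaml  # PyYAML, only needed when a config file actually exists
        with open(expanded) as f:
            config.update(yaml.safe_load(f) or {})
    return config
```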
Workflow
Step 1: Parse Model Selection
Determine which models to use:
- Check user's inline specification (e.g., "with claude, gpt-4o")
- If none specified, check config file
- If no config, detect available CLIs and APIs
Step 2: Prepare the Prompt
Format the problem for each model with analysis-only instructions:
```
Analyze the following problem and provide your recommendations.
DO NOT output code changes directly.

Instead, provide:
1. Your analysis of the problem
2. Recommended approach(es)
3. Potential issues or edge cases to consider
4. Trade-offs between different solutions

Problem:
[user's problem here]
```

Key rules:
- Keep the core problem identical across models
- Explicitly request analysis, not implementation
- Include relevant context (code snippets, error messages)
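Applying that template could be sketched as follows (`build_analysis_prompt` is an illustrative name):

```python
ANALYSIS_TEMPLATE = """\
Analyze the following problem and provide your recommendations.
DO NOT output code changes directly.

Instead, provide:
1. Your analysis of the problem
2. Recommended approach(es)
3. Potential issues or edge cases to consider
4. Trade-offs between different solutions

Problem:
{problem}"""

def build_analysis_prompt(problem: str, context: str = "") -> str:
    """Wrap the problem (plus optional code/error context) identically
    for every model, so the responses stay comparable."""
    body = f"{problem}\n\nContext:\n{context}" if context else problem
    return ANALYSIS_TEMPLATE.format(problem=body)
```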
Step 3: Execute in Parallel
Use the API council script to query multiple models:
```bash
python3 ${CLAUDE_PLUGIN_ROOT}/skills/model-council/scripts/api_council.py \
  --prompt "Analyze this problem and recommend solutions (do not implement): [problem]" \
  --models "claude-sonnet,gpt-4o,gemini-flash"
```

Available models:
- `claude-sonnet`, `claude-opus` - Anthropic
- `gpt-4o`, `gpt-4-turbo`, `o1` - OpenAI
- `gemini-flash`, `gemini-pro` - Google
- `grok` - xAI

List all models:
```bash
python3 ${CLAUDE_PLUGIN_ROOT}/skills/model-council/scripts/api_council.py --list-models
```

Step 4: Collect Responses
Gather all model responses with metadata:
- Model name and version
- Response time
- Token usage (if available)
- Full response
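Steps 3 and 4 together amount to fanning the same prompt out in parallel and collecting each response with metadata; a sketch using a thread pool, with `query_model` as a hypothetical stand-in for the real API or CLI call:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def query_model(model: str, prompt: str) -> str:
    """Hypothetical stand-in for a real API or CLI call."""
    return f"{model} analysis of: {prompt}"

def timed_query(model: str, prompt: str) -> dict:
    """Run one query and record the metadata Step 4 asks for."""
    start = time.monotonic()
    response = query_model(model, prompt)
    return {
        "model": model,
        "response": response,
        "seconds": time.monotonic() - start,
    }

def run_council(models: list[str], prompt: str, timeout: float = 120.0) -> list[dict]:
    """Fan the same prompt out to every model in parallel."""
    with ThreadPoolExecutor(max_workers=len(models)) as pool:
        futures = [pool.submit(timed_query, m, prompt) for m in models]
        return [f.result(timeout=timeout) for f in futures]
```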
Step 5: Analyze Consensus
Compare responses looking for:
- Agreement: Do models produce the same answer/approach?
- Unique insights: Does one model catch something others missed?
- Disagreements: Where do models differ and why?
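One crude way to surface agreement before Claude Code reads the full texts is to count which models mention each candidate recommendation; a sketch (this only flags overlap, it does not replace actual synthesis):

```python
def keyword_consensus(responses: dict[str, str],
                      keywords: list[str]) -> dict[str, list[str]]:
    """Map each candidate recommendation to the models whose analysis mentions it.

    A keyword hit by every model signals agreement (high confidence);
    a keyword hit by exactly one model is a candidate unique insight.
    """
    return {
        kw: [model for model, text in responses.items()
             if kw.lower() in text.lower()]
        for kw in keywords
    }
```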
Step 6: Claude Code Synthesizes and Decides
Claude Code (this session) uses ultrathink to:
- Evaluate each model's analysis
- Identify the strongest reasoning and recommendations
- Note where models agree (high confidence) vs disagree (investigate further)
- Make the final decision on approach
- Implement the solution - only Claude Code makes code changes
This is the key difference from just asking one model:
- Multiple perspectives inform the decision
- Claude Code remains the single source of truth for implementation
- No conflicting changes from different models
Step 7: Deliver Results
Provide:
- Final synthesized answer (best combined solution)
- Consensus score (how many models agreed)
- Individual responses (for transparency)
- Insights (what each model contributed)
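Those four deliverables map naturally onto a small record type; one possible shape (field names and consensus thresholds are illustrative choices, not part of the plugin):

```python
from dataclasses import dataclass, field

@dataclass
class CouncilReport:
    """Illustrative container for Step 7's deliverables."""
    final_answer: str        # synthesized best combined solution
    agreeing: int            # models that agreed on the approach
    total: int               # models consulted
    responses: dict[str, str] = field(default_factory=dict)  # transparency
    insights: dict[str, str] = field(default_factory=dict)   # per-model contribution

    @property
    def consensus(self) -> str:
        # Thresholds are arbitrary choices for this sketch.
        ratio = self.agreeing / self.total
        return "HIGH" if ratio >= 0.75 else "MEDIUM" if ratio >= 0.5 else "LOW"
```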
CLI Detection
To check available CLIs:
```bash
python3 ${CLAUDE_PLUGIN_ROOT}/skills/model-council/scripts/detect_clis.py
```

This checks for:
- `claude` - Claude Code CLI
- `codex` - OpenAI Codex CLI
- `gemini` - Gemini CLI
- `aider` - Aider (multi-model)
- `cursor` - Cursor AI (if applicable)
Comparison: code-council vs model-council
| Aspect | code-council | model-council |
|---|---|---|
| Models used | Claude only | Multiple (Claude, GPT, Gemini, etc.) |
| Diversity source | Different approaches | Different architectures |
| Cost | Free (uses current session) | Free (CLIs) or paid (APIs) |
| Speed | Fast (single model) | Slower (parallel calls) |
| Best for | Quick iterations | High-stakes decisions |
When to Use Each
Use code-council when:
- You want fast iterations
- The problem is well-defined
- You trust Claude's reasoning
Use model-council when:
- High-stakes code (production, security)
- You want architectural diversity
- Models might have different knowledge
- You want to verify Claude's answer
Error Handling
CLI not found: Skip that model, log warning, continue with others.
API key missing: Skip that provider, try CLI fallback if available.
Timeout: Return partial results, note which models timed out.
No models available: Error with setup instructions.
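That skip-and-continue policy can be sketched as a wrapper around each model call (`query_model` is again a hypothetical stand-in for the real backend call):

```python
def query_model(model: str, prompt: str) -> str:
    """Hypothetical per-model call; real code would hit a CLI or API."""
    if model == "broken":
        raise RuntimeError("CLI not found")
    return f"{model}: analysis"

def query_all_tolerant(models: list[str],
                       prompt: str) -> tuple[dict[str, str], list[str]]:
    """Skip failing models, keep partial results, report who was skipped."""
    results, skipped = {}, []
    for model in models:
        try:
            results[model] = query_model(model, prompt)
        except Exception as exc:  # CLI missing, key missing, timeout...
            skipped.append(f"{model}: {exc}")
    if not results:
        raise RuntimeError("No models available - see setup instructions above")
    return results, skipped
```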
Example Output
```
Model Council Analysis Results

Consensus: HIGH (3/3 models agree on approach)

Summary of Recommendations:
All models recommend using a hash map for O(1) lookup.
Key considerations raised:
- Handle null/empty input (Claude, GPT-4o)
- Consider memory vs speed tradeoff (Gemini)
- Add input validation (all models)

Individual Analyses:

Claude Sonnet (API)
  Analysis: The bug is caused by an off-by-one error in the loop boundary.
  Recommendation: Change `i <= len` to `i < len`
  Edge cases noted: Empty array, single element
  Confidence: High

GPT-4o (API)
  Analysis: Loop iterates one element past array bounds.
  Recommendation: Fix loop condition, add bounds check
  Additional insight: Could also use forEach to avoid index errors
  Confidence: High

Gemini Flash (API)
  Analysis: Array index out of bounds on final iteration.
  Recommendation: Adjust loop termination condition
  Reference: Similar to common off-by-one patterns
  Confidence: High

Claude Code Decision:
Based on consensus, implementing fix with:
- Loop condition change (i < len)
- Added null check for robustness
- Unit test for edge cases

[Claude Code now implements the solution]
```