# Brains Trust
Consult other leading AI models for a second opinion. Not limited to code — works for architecture, strategy, prompting, debugging, writing, or any question where a fresh perspective helps.
## Setup
Set at least one API key as an environment variable:
```bash
# Recommended — one key covers all providers
export OPENROUTER_API_KEY="your-key"

# Optional — direct access (often faster/cheaper)
export GEMINI_API_KEY="your-key"
export OPENAI_API_KEY="your-key"
```

OpenRouter is the universal path — one key gives access to Gemini, GPT, Qwen, DeepSeek, Llama, Mistral, and more.
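A minimal sketch of detecting which of these keys are set, assuming only the three variables above matter (`available_keys` is a hypothetical helper name, not part of the skill):

```python
import os

# Env vars the skill recognises, in preference order.
KEY_VARS = ["OPENROUTER_API_KEY", "GEMINI_API_KEY", "OPENAI_API_KEY"]

def available_keys(env=None):
    """Return the recognised API-key variables that are set and non-empty."""
    env = os.environ if env is None else env
    return [k for k in KEY_VARS if env.get(k)]
```

If this returns an empty list, show the setup instructions above and stop.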
## Current Models
Do not use hardcoded model IDs. Before every consultation, fetch the current leading models:
`https://models.flared.au/llms.txt`

This is a live-updated, curated list of ~40 leading models from 11 providers, filtered from OpenRouter's full catalogue. Use it to pick the right model for the task.
For programmatic use in the generated Python script:
`https://models.flared.au/json`
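For illustration, a minimal sketch of fetching that JSON list and filtering it by provider. The response schema assumed here (a list of objects with an `id` field like `provider/model`) is an assumption; verify it against the live endpoint:

```python
import json
import urllib.request

def fetch_models(url="https://models.flared.au/json"):
    """Fetch the curated model list. Assumes the endpoint returns JSON."""
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)

def pick_model(models, provider):
    """Return the first model ID whose prefix matches the given provider."""
    for entry in models:
        model_id = entry.get("id", "")
        if model_id.startswith(provider + "/"):
            return model_id
    return None

# Demonstration with hypothetical entries; real data comes from fetch_models().
sample = [{"id": "google/some-pro-model"}, {"id": "openai/some-flash-model"}]
print(pick_model(sample, "openai"))
```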
## Consultation Patterns
| Pattern | When | What happens |
|---|---|---|
| Single (default) | Quick second opinion | Ask one model, synthesise with your own view |
| Consensus | Important decision, want confidence | Ask 2-3 diverse models in parallel, compare where they agree/disagree |
| Devil's advocate | Challenge an assumption | Ask a model to explicitly argue against your current position |
For consensus, pick models from different providers (e.g. one Google, one OpenAI, one Qwen) for maximum diversity of perspective.
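The consensus pattern can be sketched as a parallel fan-out. The `consult` stub below stands in for a real provider call (see references/provider-api-patterns.md), so only the structure is shown:

```python
from concurrent.futures import ThreadPoolExecutor

def consult(model_id, prompt):
    # Stub: a real implementation would call the provider's API here.
    return f"[{model_id}] opinion on: {prompt}"

def consensus(model_ids, prompt):
    """Ask each model the same prompt in parallel; return {model_id: answer}."""
    with ThreadPoolExecutor(max_workers=len(model_ids)) as pool:
        futures = {m: pool.submit(consult, m, prompt) for m in model_ids}
        return {m: f.result() for m, f in futures.items()}

# Hypothetical model IDs; pick real ones from models.flared.au at run time.
answers = consensus(
    ["google/model-a", "openai/model-b", "qwen/model-c"],
    "Should we split this service in two?",
)
for model, answer in answers.items():
    print(model, "->", answer)
```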
## Modes
| Mode | When | Model tier |
|---|---|---|
| Code Review | Review files for bugs, patterns, security | Flash |
| Architecture | Design decisions, trade-offs | Pro |
| Debug | Stuck after 2+ failed attempts | Flash |
| Security | Vulnerability scan | Pro |
| Strategy | Business, product, approach decisions | Pro |
| Prompting | Improve prompts, system prompts, KB files | Flash |
| General | Any question, brainstorm, challenge | Flash |
Pro tier: The most capable model from the chosen provider (e.g. `google/gemini-3.1-pro-preview`, `openai/gpt-5.4`).
Flash tier: Fast, cheaper models for straightforward analysis (e.g. `google/gemini-3-flash-preview`, `qwen/qwen3.5-flash-02-23`).
## Workflow
1. Detect available keys — check `OPENROUTER_API_KEY`, `GEMINI_API_KEY`, `OPENAI_API_KEY` in the environment. If none found, show setup instructions and stop.
2. Fetch current models — `WebFetch https://models.flared.au/llms.txt` and pick appropriate models based on mode (pro vs flash) and consultation pattern (single vs consensus). If the user requested a specific provider ("ask gemini"), use that.
3. Read target files into context (if code-related). For non-code questions (strategy, prompting, general), skip file reading.
4. Build prompt using the AI-to-AI template from references/prompt-templates.md. Include file contents inline with `--- filename ---` separators. Do not set output token limits — let models reason fully.
5. Write prompt to file at `.claude/artifacts/brains-trust-prompt.txt` — never pass code inline via bash arguments (shell escaping breaks it).
6. Generate and run a Python script at `.claude/scripts/brains-trust.py` using patterns from references/provider-api-patterns.md. The script:
   - Reads the prompt from `.claude/artifacts/brains-trust-prompt.txt`
   - Calls the selected API(s)
   - For consensus mode: calls multiple APIs in parallel using `concurrent.futures`
   - Saves each response to `.claude/artifacts/brains-trust-{model}.md`
   - Prints results to stdout
7. Synthesise — read the responses, present findings to the user. Note where models agree and disagree. Add your own perspective (agree/disagree with reasoning). Let the user decide what to act on.
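As an illustration of the middle steps, a trimmed sketch of what the generated script's OpenRouter path could look like. The endpoint and payload shape follow OpenRouter's public chat-completions API; `build_payload` and `run_one` are hypothetical helper names, and `max_tokens` is deliberately omitted:

```python
import json
import os
import pathlib
import urllib.request

PROMPT_FILE = pathlib.Path(".claude/artifacts/brains-trust-prompt.txt")
OUT_DIR = pathlib.Path(".claude/artifacts")

def build_payload(model_id, prompt_text):
    """Chat-completions payload. Deliberately sets no max_tokens."""
    return {
        "model": model_id,
        "messages": [{"role": "user", "content": prompt_text}],
    }

def call_openrouter(model_id, prompt_text):
    """One OpenRouter chat-completion call; returns the answer text."""
    req = urllib.request.Request(
        "https://openrouter.ai/api/v1/chat/completions",
        data=json.dumps(build_payload(model_id, prompt_text)).encode(),
        headers={
            "Authorization": f"Bearer {os.environ['OPENROUTER_API_KEY']}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]

def run_one(model_id):
    prompt_text = PROMPT_FILE.read_text()  # prompt comes from file, never argv
    answer = call_openrouter(model_id, prompt_text)
    safe_name = model_id.replace("/", "-")
    (OUT_DIR / f"brains-trust-{safe_name}.md").write_text(answer)
    print(answer)
```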
## When to Use
Good use cases:
- Before committing major architectural changes
- When stuck debugging after multiple attempts
- Architecture decisions with multiple valid options
- Reviewing security-sensitive code
- Challenging your own assumptions on strategy or approach
- Improving system prompts or KB files
- Any time you want a fresh perspective
Avoid using for:
- Simple syntax checks (Claude handles these)
- Every single edit (too slow, costs money)
- Questions with obvious, well-known answers
## Critical Rules
- Never hardcode model IDs — always fetch from `models.flared.au` first
- Never cap output tokens — don't set `max_tokens` or `maxOutputTokens`
- Always write prompts to file — never pass via bash arguments
- Include file contents inline — attach code context directly in the prompt
- Use AI-to-AI framing — the model is advising Claude, not talking to the human
## Reference Files
| When | Read |
|---|---|
| Building prompts for any mode | references/prompt-templates.md |
| Generating the Python API call script | references/provider-api-patterns.md |