# brains-trust
Get a second opinion from leading AI models on code, architecture, strategy, prompting, or anything. Queries models via OpenRouter, Gemini, or OpenAI APIs. Supports single opinion, multi-model consensus, and devil's advocate patterns. Trigger with 'brains trust', 'second opinion', 'ask gemini', 'ask gpt', 'peer review', 'consult', 'challenge this', or 'devil's advocate'.
Source: jezweb/claude-skills
Install:

```bash
npx skill4agent add jezweb/claude-skills brains-trust
```
# Brains Trust
Consult other leading AI models for a second opinion. Not limited to code — works for architecture, strategy, prompting, debugging, writing, or any question where a fresh perspective helps.
## Setup
Set at least one API key as an environment variable:
```bash
# Recommended — one key covers all providers
export OPENROUTER_API_KEY="your-key"

# Optional — direct access (often faster/cheaper)
export GEMINI_API_KEY="your-key"
export OPENAI_API_KEY="your-key"
```

OpenRouter is the universal path — one key gives access to Gemini, GPT, Qwen, DeepSeek, Llama, Mistral, and more.
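The key-detection step (step 1 of the Workflow below) can be sketched in Python. This is a minimal sketch; only the three environment variable names above are given by the skill itself:

```python
import os

# Map the skill's environment variables to provider names.
PROVIDERS = {
    "OPENROUTER_API_KEY": "openrouter",
    "GEMINI_API_KEY": "gemini",
    "OPENAI_API_KEY": "openai",
}

def detect_providers(env=os.environ):
    """Return providers whose API key is set and non-empty."""
    return [name for var, name in PROVIDERS.items() if env.get(var)]

if __name__ == "__main__":
    found = detect_providers()
    if not found:
        raise SystemExit("No API key found. Set OPENROUTER_API_KEY (recommended).")
    print("Available providers:", ", ".join(found))
```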
## Current Models

Do not use hardcoded model IDs. Before every consultation, fetch the current leading models:

```
https://models.flared.au/llms.txt
```

This is a live-updated, curated list of ~40 leading models from 11 providers, filtered from OpenRouter's full catalogue. Use it to pick the right model for the task.
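A fetch-and-parse sketch for that list follows. The skill does not document the exact llms.txt layout, so the one-model-ID-per-line assumption here is hypothetical:

```python
import urllib.request

MODELS_URL = "https://models.flared.au/llms.txt"

def parse_model_list(text):
    """Parse the plain-text model list.

    ASSUMPTION: one model ID per line, ignoring blanks and '#' comments.
    The real llms.txt format may differ.
    """
    return [
        line.strip()
        for line in text.splitlines()
        if line.strip() and not line.strip().startswith("#")
    ]

def fetch_models(url=MODELS_URL, timeout=10):
    """Download and parse the live model list."""
    with urllib.request.urlopen(url, timeout=timeout) as resp:
        return parse_model_list(resp.read().decode("utf-8"))
```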
For programmatic use in the generated Python script:

```
https://models.flared.au/json
```

## Consultation Patterns
| Pattern | When | What happens |
|---|---|---|
| Single (default) | Quick second opinion | Ask one model, synthesise with your own view |
| Consensus | Important decision, want confidence | Ask 2-3 diverse models in parallel, compare where they agree/disagree |
| Devil's advocate | Challenge an assumption | Ask a model to explicitly argue against your current position |
For consensus, pick models from different providers (e.g. one Google, one OpenAI, one Qwen) for maximum diversity of perspective.
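The consensus pattern is a parallel fan-out and gather, which `concurrent.futures` (the same mechanism the generated script uses) handles directly. In this sketch, `ask_model` is a placeholder for whichever real provider call the script makes:

```python
from concurrent.futures import ThreadPoolExecutor

def consult_consensus(ask_model, models, prompt):
    """Ask several models the same prompt in parallel.

    ask_model(model_id, prompt) -> str is a placeholder for the real
    provider call. Returns {model_id: response or error message}.
    """
    with ThreadPoolExecutor(max_workers=max(len(models), 1)) as pool:
        futures = {m: pool.submit(ask_model, m, prompt) for m in models}
        results = {}
        for model, fut in futures.items():
            try:
                results[model] = fut.result()
            except Exception as exc:  # one failed provider shouldn't sink the rest
                results[model] = f"ERROR: {exc}"
        return results
```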
## Modes
| Mode | When | Model tier |
|---|---|---|
| Code Review | Review files for bugs, patterns, security | Flash |
| Architecture | Design decisions, trade-offs | Pro |
| Debug | Stuck after 2+ failed attempts | Flash |
| Security | Vulnerability scan | Pro |
| Strategy | Business, product, approach decisions | Pro |
| Prompting | Improve prompts, system prompts, KB files | Flash |
| General | Any question, brainstorm, challenge | Flash |
Pro tier: The most capable model from the chosen provider (e.g. `google/gemini-3.1-pro-preview`, `openai/gpt-5.4`).

Flash tier: Fast, cheaper models for straightforward analysis (e.g. `google/gemini-3-flash-preview`, `qwen/qwen3.5-flash-02-23`).

## Workflow
1. **Detect available keys** — check `OPENROUTER_API_KEY`, `GEMINI_API_KEY`, `OPENAI_API_KEY` in the environment. If none found, show setup instructions and stop.
2. **Fetch current models** — WebFetch `https://models.flared.au/llms.txt` and pick appropriate models based on mode (pro vs flash) and consultation pattern (single vs consensus). If the user requested a specific provider ("ask gemini"), use that.
3. **Read target files** into context (if code-related). For non-code questions (strategy, prompting, general), skip file reading.
4. **Build prompt** using the AI-to-AI template from references/prompt-templates.md. Include file contents inline with `--- filename ---` separators. Do not set output token limits — let models reason fully.
5. **Write prompt to file** at `.claude/artifacts/brains-trust-prompt.txt` — never pass code inline via bash arguments (shell escaping breaks it).
6. **Generate and run Python script** at `.claude/scripts/brains-trust.py` using patterns from references/provider-api-patterns.md. The script:
   - Reads the prompt from `.claude/artifacts/brains-trust-prompt.txt`
   - Calls the selected API(s)
   - For consensus mode: calls multiple APIs in parallel using `concurrent.futures`
   - Saves each response to `.claude/artifacts/brains-trust-{model}.md`
   - Prints results to stdout
7. **Synthesise** — read the responses and present findings to the user. Note where models agree and disagree. Add your own perspective (agree/disagree, with reasoning). Let the user decide what to act on.
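A skeleton of the kind of script step 6 generates, assuming the OpenRouter chat-completions endpoint (OpenAI-compatible). Treat the request and response shapes as assumptions to verify against references/provider-api-patterns.md:

```python
import json
import os
import pathlib
import urllib.request

PROMPT_PATH = pathlib.Path(".claude/artifacts/brains-trust-prompt.txt")
OUT_DIR = pathlib.Path(".claude/artifacts")

def build_request(model, prompt):
    """OpenAI-compatible chat payload. Note: no max_tokens, per the critical rules."""
    return {"model": model, "messages": [{"role": "user", "content": prompt}]}

def consult(model):
    """Send the prompt file to one model via OpenRouter and save the reply."""
    prompt = PROMPT_PATH.read_text()
    req = urllib.request.Request(
        "https://openrouter.ai/api/v1/chat/completions",  # assumed endpoint
        data=json.dumps(build_request(model, prompt)).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {os.environ['OPENROUTER_API_KEY']}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        text = json.load(resp)["choices"][0]["message"]["content"]
    OUT_DIR.mkdir(parents=True, exist_ok=True)
    (OUT_DIR / f"brains-trust-{model.replace('/', '-')}.md").write_text(text)
    print(text)
```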
## When to Use
Good use cases:
- Before committing major architectural changes
- When stuck debugging after multiple attempts
- Architecture decisions with multiple valid options
- Reviewing security-sensitive code
- Challenging your own assumptions on strategy or approach
- Improving system prompts or KB files
- Any time you want a fresh perspective
Avoid using for:
- Simple syntax checks (Claude handles these)
- Every single edit (too slow, costs money)
- Questions with obvious, well-known answers
## Critical Rules

- **Never hardcode model IDs** — always fetch from `models.flared.au` first
- **Never cap output tokens** — don't set `max_tokens` or `maxOutputTokens`
- **Always write prompts to file** — never pass via bash arguments
- **Include file contents inline** — attach code context directly in the prompt
- **Use AI-to-AI framing** — the model is advising Claude, not talking to the human
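The prompt-to-file and inline-contents rules can be sketched together. The paths match the workflow above; the helper names are illustrative:

```python
import pathlib

def build_prompt(question, files):
    """Inline each file's contents under a '--- filename ---' separator."""
    parts = [question]
    for path in files:
        parts.append(f"--- {path} ---")
        parts.append(pathlib.Path(path).read_text())
    return "\n\n".join(parts)

def write_prompt(text, out=".claude/artifacts/brains-trust-prompt.txt"):
    """Write the prompt to a file; never pass code via bash arguments."""
    out_path = pathlib.Path(out)
    out_path.parent.mkdir(parents=True, exist_ok=True)
    out_path.write_text(text)
    return out_path
```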
## Reference Files
| When | Read |
|---|---|
| Building prompts for any mode | references/prompt-templates.md |
| Generating the Python API call script | references/provider-api-patterns.md |