agentica-prompts

Agentica Prompt Engineering

Write prompts that Agentica agents reliably follow. Standard natural language prompts fail ~35% of the time due to LLM instruction ambiguity.

The Orchestration Pattern

Proven workflow for context-preserving agent orchestration:
1. RESEARCH (Nia)     → Output to .claude/cache/agents/research/
2. PLAN (RP-CLI)      → Reads research, outputs .claude/cache/agents/plan/
3. VALIDATE           → Checks plan against best practices
4. IMPLEMENT (TDD)    → Failing tests first, then pass
5. REVIEW (Jury)      → Compare impl vs plan vs research
6. DEBUG (if needed)  → Research via Nia, don't assume
Key: Use Task (not TaskOutput) + directory handoff = clean context
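The six-phase flow above can be sketched as a small phase runner. Everything here (the `PHASES` list doubling as directory names, the `run_phase` callable) is illustrative scaffolding, not Agentica API:

```python
from pathlib import Path

# Illustrative: phase names double as output directory names.
PHASES = ["research", "plan", "validate", "implement", "review", "debug"]
BASE = Path(".claude/cache/agents")

def run_pipeline(run_phase):
    """Run each phase in order; run_phase(name, input_dir, output_dir)
    is supplied by the caller. Each phase reads the previous phase's
    output directory -- directory handoff, not TaskOutput."""
    prev_out = None
    for name in PHASES:
        out = BASE / name
        out.mkdir(parents=True, exist_ok=True)
        run_phase(name, prev_out, out)
        prev_out = out  # the next phase reads this directory
    return prev_out
```

Each phase's `summary.md` in its output directory is the handoff contract the next phase reads.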

Agent System Prompt Template

Inject this into each agent's system prompt for rich context understanding:

AGENT IDENTITY

You are {AGENT_ROLE} in a multi-agent orchestration system.
Your output will be consumed by: {DOWNSTREAM_AGENT}
Your input comes from: {UPSTREAM_AGENT}
SYSTEM ARCHITECTURE

You are part of the Agentica orchestration framework:
  • Memory Service: remember(key, value), recall(query), store_fact(content)
  • Task Graph: create_task(), complete_task(), get_ready_tasks()
  • File I/O: read_file(), write_file(), edit_file(), bash()
Session ID: {SESSION_ID} (all your memory/tasks scoped here)

DIRECTORY HANDOFF

Read your inputs from: {INPUT_DIR}
Write your outputs to: {OUTPUT_DIR}
Output format: Write a summary file and any artifacts.
  • {OUTPUT_DIR}/summary.md - What you did, key findings
  • {OUTPUT_DIR}/artifacts/ - Any generated files
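A helper that enforces this handoff layout might look like the following sketch (the `write_handoff` name is an assumption, not an Agentica function; the layout is from the template above):

```python
from pathlib import Path

def write_handoff(output_dir, summary, artifacts):
    """Write the summary.md + artifacts/ layout the next agent expects.
    (Hypothetical helper; layout follows the DIRECTORY HANDOFF template.)"""
    out = Path(output_dir)
    (out / "artifacts").mkdir(parents=True, exist_ok=True)
    (out / "summary.md").write_text(summary)
    for name, content in artifacts.items():
        (out / "artifacts" / name).write_text(content)
    return out / "summary.md"
```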

CODE CONTEXT

{CODE_MAP} <- Inject RepoPrompt codemap here

YOUR TASK

{TASK_DESCRIPTION}

CRITICAL RULES

  1. RETRIEVE means read existing content - NEVER generate hypothetical content
  2. WRITE means create/update file - specify exact content
  3. When stuck, output what you found and what's blocking you
  4. Your summary.md is your handoff to the next agent - be precise

Pattern-Specific Prompts

Swarm (Research)

SWARM AGENT: {PERSPECTIVE}

You are researching: {QUERY}
Your unique angle: {PERSPECTIVE}
Other agents are researching different angles. You don't need to be comprehensive. Focus ONLY on your perspective. Be specific, not broad.
Output format:
  • 3-5 key findings from YOUR perspective
  • Evidence/sources for each finding
  • Uncertainties or gaps you identified
Write to: {OUTPUT_DIR}/{PERSPECTIVE}/findings.md
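Instantiating this template once per perspective could look like the sketch below (template abridged; the `swarm_prompts` helper is illustrative):

```python
# Abridged version of the swarm template above; the helper is illustrative.
SWARM_TEMPLATE = """\
SWARM AGENT: {perspective}
You are researching: {query}
Your unique angle: {perspective}
Focus ONLY on your perspective. Be specific, not broad.
Write to: {output_dir}/{perspective}/findings.md"""

def swarm_prompts(query, perspectives, output_dir):
    """One focused prompt per perspective; agents can then run in parallel."""
    return [
        SWARM_TEMPLATE.format(perspective=p, query=query, output_dir=output_dir)
        for p in perspectives
    ]
```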

Hierarchical (Coordinator)

COORDINATOR

Task to decompose: {TASK}
Available specialists (use EXACTLY these names): {SPECIALIST_LIST}
Rules:
  1. ONLY use specialist names from the list above
  2. Each subtask should be completable by ONE specialist
  3. 2-5 subtasks maximum
  4. If task is simple, return empty list and handle directly
Output: JSON list of {specialist, task} pairs
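Rules 1, 3, and 4 are mechanically checkable on the coordinator's output. A minimal validator, assuming the `{specialist, task}` JSON shape above (the validator itself is a sketch, not how Agentica necessarily checks this):

```python
import json

def validate_plan(raw, specialists):
    """Enforce rules 1, 3, and 4 on the coordinator's JSON output."""
    plan = json.loads(raw)
    if not plan:
        return []  # rule 4: simple task, coordinator handles it directly
    if not 2 <= len(plan) <= 5:
        raise ValueError("expected 2-5 subtasks")  # rule 3
    for item in plan:
        if item["specialist"] not in specialists:  # rule 1
            raise ValueError(f"unknown specialist: {item['specialist']}")
    return plan
```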

Generator/Critic (Generator)

GENERATOR

Task: {TASK} {PREVIOUS_FEEDBACK}
Produce your solution. The Critic will review it.
Output structure (use EXACTLY these keys):
{
  "solution": "your main output",
  "code": "if applicable",
  "reasoning": "why this approach"
}
Write to: {OUTPUT_DIR}/solution.json
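The consumer can reject generator output that drifts from this key contract. A sketch (the `check_solution` name is hypothetical):

```python
import json

REQUIRED_KEYS = {"solution", "code", "reasoning"}

def check_solution(raw):
    """Reject generator output missing any of the exact required keys.
    (Hypothetical consumer-side check, not an Agentica function.)"""
    data = json.loads(raw)
    missing = REQUIRED_KEYS - data.keys()
    if missing:
        raise ValueError(f"missing keys: {sorted(missing)}")
    return data
```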

Generator/Critic (Critic)

CRITIC

Reviewing solution at: {SOLUTION_PATH}
Evaluation criteria:
  1. Correctness - Does it solve the task?
  2. Completeness - Any missing cases?
  3. Quality - Is it well-structured?
If APPROVED: Write {"approved": true, "feedback": "why approved"}
If NOT approved: Write {"approved": false, "feedback": "specific issues to fix"}
Write to: {OUTPUT_DIR}/critique.json
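The generator/critic handshake amounts to a bounded refinement loop. A sketch with the two roles as plain callables (the signatures are assumptions):

```python
def refine(generate, critique, max_rounds=3):
    """Alternate generator and critic until approved or rounds run out.
    Assumed signatures: generate(feedback) -> solution,
    critique(solution) -> (approved: bool, feedback: str)."""
    feedback = ""
    solution = None
    for _ in range(max_rounds):
        solution = generate(feedback)
        approved, feedback = critique(solution)
        if approved:
            return solution
    return solution  # best effort after max_rounds
```

Bounding the rounds matters: an unappeasable critic would otherwise loop forever.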

Jury (Voter)

JUROR #{N}

Question: {QUESTION}
Vote independently. Do NOT try to guess what others will vote. Your vote should be based solely on the evidence.
Output: Your vote as {RETURN_TYPE}
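Aggregating the independent votes is a simple majority tally. A sketch (the quorum threshold is a design choice here, not specified by the prompt above):

```python
from collections import Counter

def tally(votes, quorum=0.5):
    """Return the winning vote, or None if nothing clears the quorum."""
    if not votes:
        return None
    winner, count = Counter(votes).most_common(1)[0]
    return winner if count / len(votes) > quorum else None
```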

Verb Mappings

| Action | Bad (ambiguous)        | Good (explicit)                                |
| ------ | ---------------------- | ---------------------------------------------- |
| Read   | "Read the file at X"   | "RETRIEVE contents of: X"                      |
| Write  | "Put this in the file" | "WRITE to X: {content}"                        |
| Check  | "See if file has X"    | "RETRIEVE contents of: X. Contains Y? YES/NO." |
| Edit   | "Change X to Y"        | "EDIT file X: replace 'old' with 'new'"        |
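The mapping can be applied programmatically as a small template map; the `explicit` helper and its slot names are illustrative, not part of Agentica:

```python
# The verb table as a template map; helper and slot names are illustrative.
VERB_TEMPLATES = {
    "read":  "RETRIEVE contents of: {path}",
    "write": "WRITE to {path}: {content}",
    "check": "RETRIEVE contents of: {path}. Contains {needle}? YES/NO.",
    "edit":  "EDIT file {path}: replace '{old}' with '{new}'",
}

def explicit(action, **slots):
    """Render an unambiguous instruction for the given action."""
    return VERB_TEMPLATES[action].format(**slots)
```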

Directory Handoff Mechanism

Agents communicate via filesystem, not TaskOutput:
```python
from pathlib import Path

# Pattern implementation
OUTPUT_BASE = ".claude/cache/agents"

def get_agent_dirs(agent_id: str, phase: str) -> tuple[Path, Path]:
    """Return (input_dir, output_dir) for an agent."""
    input_dir = Path(OUTPUT_BASE) / f"{phase}_input"
    output_dir = Path(OUTPUT_BASE) / agent_id
    output_dir.mkdir(parents=True, exist_ok=True)
    return input_dir, output_dir

def chain_agents(phase1_id: str, phase2_id: str):
    """Phase 2 reads from phase 1's output."""
    phase1_output = Path(OUTPUT_BASE) / phase1_id
    phase2_input = phase1_output  # Direct handoff
    return phase2_input
```

Anti-Patterns

| Pattern                   | Problem                        | Fix                         |
| ------------------------- | ------------------------------ | --------------------------- |
| "Tell me what X contains" | May summarize or hallucinate   | "Return the exact text"     |
| "Check the file"          | Ambiguous action               | Specify RETRIEVE or VERIFY  |
| Question form             | Invites generation             | Use imperative "RETRIEVE"   |
| "Read and confirm"        | May just say "confirmed"       | "Return the exact text"     |
| TaskOutput for handoff    | Floods context with transcript | Directory-based handoff     |
| "Be thorough"             | Subjective, inconsistent       | Specify exact output format |
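Several of these anti-patterns are detectable with plain substring checks. A toy prompt linter (the phrase list is abridged and the warning messages are illustrative):

```python
# Toy linter for a few of the anti-patterns above; phrases and messages
# are illustrative, not an exhaustive or official list.
ANTI_PATTERNS = {
    "tell me what": "may summarize or hallucinate - ask for the exact text",
    "check the": "ambiguous action - specify RETRIEVE or VERIFY",
    "be thorough": "subjective - specify the exact output format",
}

def lint_prompt(prompt):
    """Return one warning per anti-pattern phrase found in the prompt."""
    low = prompt.lower()
    return [msg for phrase, msg in ANTI_PATTERNS.items() if phrase in low]
```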

Expected Improvement

  • Without fixes: ~60% success rate
  • With RETRIEVE + explicit return: ~95% success rate
  • With structured tool schemas: ~98% success rate
  • With directory handoff: Context preserved, no transcript pollution

Code Map Injection

Use RepoPrompt to generate code map for agent context:
```bash
# Generate codemap for agent context
rp-cli --path . --output .claude/cache/agents/codemap.md

# Inject into agent system prompt
codemap=$(cat .claude/cache/agents/codemap.md)
```
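Filling the `{CODE_MAP}` slot of the system-prompt template could equally be done in Python. `build_system_prompt` is a sketch, not an Agentica function, and it uses `str.format`, so literal braces elsewhere in a real template would need escaping:

```python
from pathlib import Path

def build_system_prompt(template, codemap_path, **fields):
    """Fill {CODE_MAP} (and any other placeholders) in a prompt template.
    Sketch only: str.format requires literal braces to be escaped."""
    code_map = Path(codemap_path).read_text()
    return template.format(CODE_MAP=code_map, **fields)
```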

Memory Context Injection

Explain the memory system to agents:

MEMORY SYSTEM

You have access to a 3-tier memory system:
  1. Core Memory (in-context): remember(key, value), recall(query)
    • Fast key-value store for current session facts
  2. Archival Memory (searchable): store_fact(content), search_memory(query)
    • FTS5-indexed long-term storage
    • Use for findings that should persist
  3. Recall (unified): recall(query)
    • Searches both core and archival
    • Returns formatted context string
All memory is scoped to session_id: {SESSION_ID}
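A toy in-process stand-in for this interface can make the tiers concrete. The real service is FTS5-backed; this stub only does substring matching and exists purely for illustration:

```python
class MemoryStub:
    """Toy stand-in for the 3-tier memory API (NOT the real FTS5 service)."""

    def __init__(self, session_id):
        self.session_id = session_id  # all entries scoped to this session
        self.core = {}      # tier 1: in-context key-value facts
        self.archive = []   # tier 2: searchable long-term store

    def remember(self, key, value):
        self.core[key] = value

    def store_fact(self, content):
        self.archive.append(content)

    def recall(self, query):
        """Tier 3: unified search over core and archival memory."""
        hits = [v for v in self.core.values() if query in v]
        hits += [f for f in self.archive if query in f]
        return "\n".join(hits)
```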

References

  • ToolBench (2023): Models fail ~35% retrieval tasks with ambiguous descriptions
  • Gorilla (2023): Structured schemas improve reliability by 3x
  • ReAct (2022): Explicit reasoning before action reduces errors by ~25%