benchmark-agents
Benchmark Agents — Advanced AI Systems
Launch real Claude Code sessions with the plugin installed, verify skill injection, monitor PostToolUse validation catches, and produce a coverage report. This skill covers the full eval loop: setup → launch → monitor → verify → fix → release → repeat.
How Evals Work (The Only Correct Method)
Evals are run by you, in this conversation, not by scripts. The process is:
- You create directories and install the plugin via Bash tool calls
- You spawn WezTerm panes with `wezterm cli spawn` — each pane runs an independent Claude Code interactive session
- You wait, then check debug logs and claim dirs to see what the plugin injected
- You inspect the generated source code for correctness
- You read conversation logs to find what the user had to correct
- You update skills/hooks, run `/release`, and spawn more evals

Never use `claude --print`, eval scripts, or `Bun.spawn(["claude", ...])`. These do not work because:
- Plugin hooks (PreToolUse, PostToolUse, UserPromptSubmit) only fire during interactive tool-calling sessions
- `--print` mode generates text without executing tools — no files are created, no deps installed, no dev servers started
- No `session_id` means dedup, profiler, and claim files don't work
The WezTerm interactive approach is the only method that exercises the plugin correctly. Every eval in our history (60+ sessions) used this approach.
DO NOT (Hard Rules)
These are absolute prohibitions. Violating any of them wastes the entire eval run:
- DO NOT use `claude --print` or the `-p` flag — hooks don't fire, no files created
- DO NOT use `--dangerously-skip-permissions` — changes agent behavior
- DO NOT create projects in `/tmp/` — always use `~/dev/vercel-plugin-testing/`
- DO NOT manually create `settings.local.json` or wire hooks by hand — use `npx add-plugin`
- DO NOT set `CLAUDE_PLUGIN_ROOT` manually — the plugin manages this
- DO NOT use `bash -c` or `bash -lc` in WezTerm — always use `/bin/zsh -ic`
- DO NOT use the full path to claude — use the `x` alias (it's configured in zsh)
- DO NOT create custom `debug.log` files with stderr redirects — debug logs go to `~/.claude/debug/`
- DO NOT write eval runner scripts in TypeScript/JavaScript — do everything as Bash tool calls in the conversation
- DO NOT try to `git init` or create `package.json` manually — `npx add-plugin` + the WezTerm session handle all scaffolding
- DO NOT use uppercase letters in directory names — npm rejects them (e.g. `T` in timestamps breaks `create-next-app`)
Copy the exact commands below. Do not improvise.
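The uppercase rule can be checked mechanically before anything touches npm. A minimal guard sketch (the check itself is ours, not part of the plugin):

```bash
# Reject slugs containing uppercase letters before npm ever sees them
SLUG="tarot-card-deck-$(date +%Y%m%d-%H%M)"
case "$SLUG" in
  *[A-Z]*) echo "ERROR: uppercase in slug: $SLUG" >&2; exit 1 ;;
  *)       echo "slug ok: $SLUG" ;;
esac
```

`date +%Y%m%d-%H%M` emits only digits and a hyphen, so slugs built from it pass as long as the prefix is lowercase.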
Setup & Launch (Exact Commands)
Naming convention
Always append a timestamp to directory names so reruns don't overwrite old projects:
`<slug>-<yyyymmdd>-<hhmm>`

Example: `tarot-card-deck-20260309-1227`, `interior-designer-20260309-1227`

Generate the timestamp with:

```bash
date +%Y%m%d-%H%M
```

1. Create test directory and install plugin
```bash
TS=$(date +%Y%m%d-%H%M)
SLUG="my-app-$TS"
mkdir -p ~/dev/vercel-plugin-testing/$SLUG
cd ~/dev/vercel-plugin-testing/$SLUG
npx add-plugin https://github.com/vercel/vercel-plugin -s project -y
```

2. Launch session via WezTerm
```bash
wezterm cli spawn --cwd /Users/johnlindquist/dev/vercel-plugin-testing/$SLUG -- /bin/zsh -ic \
  "unset CLAUDECODE; VERCEL_PLUGIN_LOG_LEVEL=debug x '<PROMPT>' --settings .claude/settings.json; exec zsh"
```

Key flags:
- `unset CLAUDECODE` — prevents nested session detection error
- `VERCEL_PLUGIN_LOG_LEVEL=debug` — enables hook debug output in `~/.claude/debug/`
- `x` — alias for the `claude` CLI
- `--settings .claude/settings.json` — loads project-level plugin settings
3. Find the debug log (wait ~25s for SessionStart hooks)
```bash
find ~/.claude/debug -name "*.txt" -mmin -2 -exec grep -l "$SLUG" {} +
```

4. Launch multiple sessions in parallel
Create dirs and install plugin in a loop, then spawn each WezTerm pane:
```bash
TS=$(date +%Y%m%d-%H%M)
cd ~/dev/vercel-plugin-testing
for name in tarot-deck interior-designer superhero-origin; do
  d="${name}-${TS}"
  mkdir -p "$d" && (cd "$d" && npx add-plugin https://github.com/vercel/vercel-plugin -s project -y)
done
```

Then spawn each (these run in separate terminal panes):
```bash
wezterm cli spawn --cwd .../tarot-deck-$TS -- /bin/zsh -ic "unset CLAUDECODE; VERCEL_PLUGIN_LOG_LEVEL=debug x '...' --settings .claude/settings.json; exec zsh"
wezterm cli spawn --cwd .../interior-designer-$TS -- /bin/zsh -ic "unset CLAUDECODE; VERCEL_PLUGIN_LOG_LEVEL=debug x '...' --settings .claude/settings.json; exec zsh"
wezterm cli spawn --cwd .../superhero-origin-$TS -- /bin/zsh -ic "unset CLAUDECODE; VERCEL_PLUGIN_LOG_LEVEL=debug x '...' --settings .claude/settings.json; exec zsh"
```

Monitoring
Skill injection claims (the key metric)
```bash
TMPDIR=$(node -e "import {tmpdir} from 'os'; console.log(tmpdir())" --input-type=module)
CLAIMDIR="$TMPDIR/vercel-plugin-<session-id>-seen-skills.d"

# List all injected skills
ls "$CLAIMDIR"

# Count
ls "$CLAIMDIR" | wc -l

# Check specific skill
ls "$CLAIMDIR/workflow" && echo "YES" || echo "NO"
```

Hook firing
```bash
LOG=~/.claude/debug/<session-id>.txt

# SessionStart hooks
grep -c 'SessionStart.*success' "$LOG"

# PreToolUse calls and injections
grep -c 'executePreToolHooks' "$LOG"        # total calls
grep -c 'provided additionalContext' "$LOG" # actual injections

# PostToolUse validation catches
grep 'VALIDATION' "$LOG" | head -10

# UserPromptSubmit
grep -c 'UserPromptSubmit.*success' "$LOG"
```

Quick status check for multiple sessions
```bash
TMPDIR=$(node -e "import {tmpdir} from 'os'; console.log(tmpdir())" --input-type=module 2>/dev/null)
for label_id in "slug1:SESSION_ID_1" "slug2:SESSION_ID_2" "slug3:SESSION_ID_3"; do
  label="${label_id%%:*}"
  id="${label_id##*:}"
  claimdir="$TMPDIR/vercel-plugin-$id-seen-skills.d"
  echo "=== $label ==="
  count=$(ls "$claimdir" 2>/dev/null | wc -l | tr -d ' ')
  claims=$(ls "$claimdir" 2>/dev/null | sort | tr '\n' ', ')
  echo "Skills ($count): $claims"
done
```

Verification — What to Check in Generated Code
After sessions build, verify these patterns in the generated projects:
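The snippets below reference `$base`; point it at one of the generated project directories first (the slug here is illustrative, following the naming convention above):

```bash
# Path to the generated project under test
base=~/dev/vercel-plugin-testing/tarot-card-deck-20260309-1227
```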
Project structure
```bash
echo -n "src/: "; test -d "$base/src" && echo YES || echo NO   # Should be NO for WDK projects
echo -n "workflows/: "; test -d "$base/workflows" && echo YES || echo NO
echo -n "withWorkflow: "; grep -q "withWorkflow" "$base"/next.config.* && echo YES || echo NO
echo -n "components.json: "; test -f "$base/components.json" && echo YES || echo NO
```

Image generation model
```bash
# Should use gemini-3.1-flash-image-preview, NOT dall-e-3 or older gemini models
grep -rnE "gemini.*image|dall-e|experimental_generateImage|result.files" "$base/workflows/" "$base/app/" 2>/dev/null | grep "\.ts"
```

Gateway vs direct provider
```bash
# Should use gateway() or plain "provider/model" strings, NOT openai("gpt-4o") directly
grep -rnE 'from.@ai-sdk/openai|openai\(' "$base" 2>/dev/null | grep '\.ts' | grep -v node_modules
grep -rnE 'gateway\(|model:.*"openai/' "$base" 2>/dev/null | grep '\.ts' | grep -v node_modules
```

AI Elements installed
```bash
find "$base" -path "*/ai-elements/*.tsx" 2>/dev/null | grep -v node_modules | wc -l
```

Workflow API usage
```bash
wf=$(find "$base" -name "*.ts" -path "*/workflow*" 2>/dev/null | grep -v node_modules | head -1)
head -5 "$wf"   # Should show: import { getWritable } from "workflow"
```

Prompt Design Rules
Describe products, not technologies. Let the plugin infer which skills to inject. This tests whether the plugin's pattern matching and prompt signals work from natural language.
DO:
- "runs a multi-step creation pipeline that streams each phase"
- "generates a portrait image"
- "users can chat with an AI advisor"
- "store all designs in a gallery"
DON'T:
- "use Vercel Workflow DevKit with getWritable"
- "use gateway('google/gemini-3.1-flash-image-preview')"
- "install npx ai-elements"
- "add withWorkflow to next.config.ts"
Always end prompts with:
"Link the project to my vercel-labs team so we can deploy it later. Skip any planning and just build it. Get the dev server running."
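Putting these rules together, a complete eval prompt might look like the following. The product description is an invented example; only the closing sentences are the required verbatim ending:

```bash
# Example eval prompt: product language only, no technology names
PROMPT='Build a tarot reading app. Users pick three cards, then it runs a
multi-step creation pipeline that streams each phase: it generates a portrait
image for each card, and users can chat with an AI advisor about the spread.
Store all readings in a gallery. Link the project to my vercel-labs team so we
can deploy it later. Skip any planning and just build it. Get the dev server
running.'
echo "$PROMPT"
```

Note the prompt names behaviors ("streams each phase", "generates a portrait image") rather than APIs, so skill injection is exercised through pattern matching alone.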
Phrases that trigger key skills (via promptSignals):
- workflow: "multi-step pipeline", "streams progress", "streams each phase", "durable pipeline", "creation pipeline"
- ai-sdk: Triggered by imports/install patterns (very broad)
- shadcn: Triggered by `create-next-app` bash pattern
- ai-elements: Triggered when ai-sdk is active + chat UI patterns
Common Issues Found in Evals (and Fixes Applied)
| Issue | Cause | Plugin Fix (version) |
|---|---|---|
| Workflow not triggered from natural language | promptSignals too narrow | Broadened phrases, lowered minScore 6→4 (v0.9.5) |
| Agent uses the direct openai provider | Agent's training data defaults to openai | PostToolUse validate warns "your knowledge is outdated" (v0.9.9) |
| Agent uses dall-e-3 / older gemini image models | Agent doesn't know about gemini image gen | PostToolUse validate warns, capabilities table in ai-sdk (v0.9.7) |
| Agent uses a deprecated API | Old API | PostToolUse validate warns, recommends the replacement |
| Raw markdown rendering (…) | Agent skips AI Elements | … |
| Workflows outside `workflows/` | … | Canonical structure docs: no `src/` |
| `withWorkflow` missing from next.config | Agent skipped setup step | Marked as "Required" in workflow skill (v0.8.1) |
| … | Agent didn't wire the 3-piece pattern | Documented as 3 required pieces (v0.9.3) |
| Uses … removed in v6 | Agent's training data | PostToolUse validate catches as error (v0.9.3) |
| … inside workflow scope | Sandbox violation | Strengthened warning in skill (v0.8.1) |
| Missing … | No OIDC credentials | Added as "Required" setup step (v0.9.1) |
| … on first run | WDK quirk | Documented: guard with … |
| shadcn not installed | No trigger for scaffolding | Added … |
| Skill cap too low (3) | Only 3 skills injected per tool call | Raised to 5 with 18KB budget (v0.8.0) |
Agent-Browser Verification
After the dev server starts, verify with agent-browser. Note: agents currently DO NOT self-verify despite the skill being injected. You must launch verification manually:

```bash
agent-browser open http://localhost:<port>
agent-browser wait --load networkidle
agent-browser screenshot
agent-browser snapshot -i
```

Coverage Report
Write results to `.notes/COVERAGE.md` with:
- Session index — slug, session ID, unique skills, dedup status
- Hook coverage matrix — which hooks fired in which sessions
- Skill injection table — which of the 43 skills triggered
- Code quality checks — gateway vs direct, image model, withWorkflow, AI Elements
- PostToolUse validation catches — outdated models, deprecated APIs
- Issues found — bugs, pattern gaps, new findings to feed back into skills
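A skeleton for the report can be scaffolded up front so every run fills in the same sections. The headings mirror the list above; the exact layout is a suggestion, not a plugin convention:

```bash
# Scaffold an empty coverage report with one heading per required section
mkdir -p .notes
cat > .notes/COVERAGE.md <<'EOF'
# Eval Coverage Report

## Session Index
| Slug | Session ID | Unique Skills | Dedup |
|---|---|---|---|

## Hook Coverage Matrix
## Skill Injection Table
## Code Quality Checks
## PostToolUse Validation Catches
## Issues Found
EOF
```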
Release → Eval Loop
The standard improvement cycle:
- Run evals — launch 3 sessions with natural language prompts
- Check results — skill claims, project structure, code quality
- Identify gaps — what skills didn't trigger, what patterns are wrong
- Read conversation logs — find user follow-up corrections
- Fix skills — update SKILL.md content, patterns, validate rules
- Run gates — `bun run typecheck && bun test && bun run validate`
- Release — bump version, `bun run build`, commit, push
- Repeat — launch 3 more evals to verify fixes
Scenario Table
| # | Slug | Prompt Summary | Expected Skills |
|---|---|---|---|
| 01 | doc-qa-agent | PDF Q&A with embeddings, citations, multi-step reasoning | ai-sdk, nextjs, vercel-storage, ai-elements |
| 02 | customer-support-agent | Durable support agent, escalation, confidence tracking | ai-sdk, workflow, nextjs, ai-elements |
| 03 | deploy-monitor | Uptime monitoring, AI incident responder, durable investigation | workflow, cron-jobs, observability, ai-sdk |
| 04 | multi-model-router | Side-by-side model comparison, parallel streaming, cost tracking | ai-gateway, ai-sdk, nextjs, ai-elements |
| 05 | slack-pr-reviewer | Multi-platform chat bot, PR review, threaded conversations | chat-sdk, ai-sdk, nextjs |
| 06 | content-pipeline | Durable multi-step content production with image generation | workflow, ai-sdk, satori, nextjs |
| 07 | feature-rollout | Feature flags, A/B testing, AI experiment analysis | vercel-flags, ai-sdk, nextjs |
| 08 | event-driven-crm | Event-driven CRM, churn prediction, re-engagement emails | vercel-queues, workflow, ai-sdk, email |
| 09 | code-sandbox-tutor | AI coding tutor with sandbox execution, auto-fix | vercel-sandbox, ai-sdk, nextjs, ai-elements |
| 10 | multi-agent-research | Parallel sub-agents, durable orchestration, streaming synthesis | workflow, ai-sdk, ai-elements, nextjs |
| 11 | discord-game-master | RPG bot, persistent game state, scene illustration generation | chat-sdk, ai-sdk, vercel-storage, nextjs |
| 12 | compliance-auditor | Scheduled AI audits, durable approval workflow, deploy blocking | workflow, cron-jobs, ai-sdk, vercel-firewall |
Complexity Tiers
Tier 1 — Core AI (30-45 min, `--quick`)

Scenarios 01, 04, 09 — AI SDK, Gateway, Sandbox, AI Elements without durable workflows.
Tier 2 — Durable Agents (45-60 min)
Scenarios 02, 03, 06, 10 — Workflow DevKit, multi-step durability, agent orchestration.
Tier 3 — Platform Integration (45-60 min)
Scenarios 05, 07, 08, 11, 12 — Chat SDK, Queues, Flags, Firewall, cross-platform messaging.
Full Suite
All 12 scenarios, ~3-4 hours.
Cleanup
```bash
rm -rf ~/dev/vercel-plugin-testing
```
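To keep other runs around for comparison, you can instead remove a single project; the slug is whichever one you launched (shown here with a placeholder):

```bash
# Delete one eval run, leaving sibling runs under vercel-plugin-testing intact
SLUG="my-app-20260309-1227"   # substitute the real slug
rm -rf ~/dev/vercel-plugin-testing/"$SLUG"
```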