event-prospecting


Event Prospecting


Take a conference URL → get a ranked list of people the AE should talk to, with a "why reach out" rationale per person.

Required: `BROWSERBASE_API_KEY` env var, `bb` CLI installed (`@browserbasehq/cli`), and `browse` CLI installed (`@browserbasehq/browse-cli`) for JS-heavy speaker pages (most modern event sites).

Path rules: Always use the full literal path in all Bash commands — NOT `~` or `$HOME` (both trigger "shell expansion syntax" approval prompts). Resolve the home directory once and use it everywhere. When constructing subagent prompts, replace `{SKILL_DIR}` with the full literal path (typically `/Users/jay/skills/skills/event-prospecting`).

Output directory: All event prospecting output goes to `~/Desktop/{event_slug}_prospects_{YYYY-MM-DD-HHMM}/`. The final deliverable is `index.html` (people grouped by company, ranked by company ICP), with `companies.html` and `people.html` (filterable) as alternate views, plus `results.csv` for cold-outbound import.
CRITICAL — Tool restrictions (applies to main agent AND all subagents):
  • All web searches: use `bb search`. NEVER use WebSearch.
  • All page content extraction: use `node {SKILL_DIR}/scripts/extract_page.mjs "<url>"`. This script fetches via `bb fetch`, parses title + meta tags + visible body text, and automatically falls back to `bb browse` when the page is JS-rendered or over 1MB. NEVER hand-roll a `bb fetch | sed` pipeline. NEVER use WebFetch.
  • All research output: subagents write one markdown file per company OR per person to `{OUTPUT_DIR}/companies/{slug}.md` or `{OUTPUT_DIR}/people/{slug}.md` using bash heredoc. NEVER use the Write tool or `python3 -c`. See `references/example-research.md` for both file formats.
  • Report compilation: use `node {SKILL_DIR}/scripts/compile_report.mjs {OUTPUT_DIR} --open`.
  • Subagents must use ONLY the Bash tool. No other tools allowed.
  • HARD TOOL-CALL CAPS: ICP triage = 1 call/company; deep research = 5 calls/company; person enrichment = 4 calls/person. See `references/workflow.md` for enforcement detail.
CRITICAL — Anti-hallucination rules (applies to main agent AND all subagents):
  • NEVER infer `product_description`, `industry`, or a person's `role_reason` from a site's fonts, framework, design system, or typography. These are cosmetic and say nothing about what the company sells or what the person does.
  • NEVER let the user's own ICP leak into a target's description. If you don't know what the target does, write `Unknown` — do not pattern-match them onto the ICP.
  • `product_description` MUST quote or paraphrase a specific phrase from `extract_page.mjs` output. If none of TITLE/META/OG/HEADINGS/BODY yield a recognizable product statement, write `Unknown — homepage content not accessible` and cap `icp_fit_score` at 3.
  • A person's `hook` MUST quote or paraphrase a specific finding from a `bb search` result (podcast title, blog headline, GitHub repo, talk abstract). If no public signal exists in the last 6 months, fall back to event context (their talk title at this event).
CRITICAL — Minimize permission prompts:
  • Subagents MUST batch ALL file writes into a SINGLE Bash call using chained heredocs. One Bash call = one permission prompt.
  • Batch ALL searches and ALL fetches into single Bash calls using `&&` chaining.
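To make the batching rule concrete, here is a minimal sketch of two file writes chained into one Bash invocation (the paths and contents are placeholders, not the real research format; bash consumes heredoc bodies in the order their redirections appear):

```shell
mkdir -p /tmp/demo_companies
# Two file writes, one chained command: heredoc bodies are consumed in order
cat > /tmp/demo_companies/acme-ai.md <<'EOF' && cat > /tmp/demo_companies/globex.md <<'EOF'
placeholder research for acme-ai
EOF
placeholder research for globex
EOF
```

The same pattern extends to any number of writes, so a subagent triggers one permission prompt per batch instead of one per file.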

Pipeline Overview


Follow these 10 steps in order. Do not skip steps or reorder.
  1. Setup — output dir + clean slate
  2. Load profile — read `profiles/{user_slug}.json`
  3. Recon — detect event platform
  4. Extract people — produces `people.jsonl`
  5. Group by company — produces `seed_companies.txt`
  6. ICP triage — fast company-level scoring (1 call/company)
  7. Filter — companies with `icp_fit_score >= --icp-threshold`
  8. Deep research — full Plan→Research→Synthesize on ICP fits
  9. Enrich speakers — ask user: ICP-fit only (default) or all speakers
  10. Compile report — HTML + CSV, open in browser

The user invokes the skill with a URL like `/event-prospecting <URL>`. Parse `EVENT_URL` from that invocation message. Defaults: `DEPTH=deep`, `ICP_THRESHOLD=6`. The `USER_SLUG` (ICP profile) is auto-resolved in Step 1 from whatever profile files exist locally — there is no built-in default profile. Do NOT ask the user to confirm the URL — they already gave it to you.
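As one possible sketch of that parsing (the function below is an assumption, not part of the skill; only the flag names `--user-company` and `--icp-threshold` and the defaults come from this document):

```javascript
// Hypothetical invocation parser; flag names and defaults from this doc, logic assumed
function parseInvocation(msg) {
  const tokens = msg.trim().split(/\s+/).slice(1); // drop "/event-prospecting"
  const opts = { eventUrl: null, depth: "deep", icpThreshold: 6, userSlug: null };
  for (let i = 0; i < tokens.length; i++) {
    const t = tokens[i];
    if (t === "--user-company") opts.userSlug = tokens[++i];
    else if (t === "--icp-threshold") opts.icpThreshold = Number(tokens[++i]);
    else if (!opts.eventUrl && /^https?:\/\//.test(t)) opts.eventUrl = t;
  }
  return opts;
}

console.log(parseInvocation("/event-prospecting https://sessions.example.com --icp-threshold 7"));
```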


Step 0: Setup Output Directory


Derive the output directory from the URL the user gave you. Do NOT hardcode any event name.

```bash
# EVENT_URL came from the invocation message (whatever the user typed after /event-prospecting)
EVENT_SLUG=$(node -e 'const h = new URL(process.argv[1]).hostname.replace(/^www\./,""); console.log(h.split(".")[0])' "$EVENT_URL")
TIMESTAMP=$(date +%Y-%m-%d-%H%M)
OUTPUT_DIR=/Users/jay/Desktop/${EVENT_SLUG}_prospects_${TIMESTAMP}
mkdir -p "$OUTPUT_DIR/companies" "$OUTPUT_DIR/people"
```

Use the full literal home path — never `~` or `$HOME`. Pass `{OUTPUT_DIR}` as the full literal path to all subagent prompts.

Step 1: Load User Profile


The profile defines the ICP that ICP triage and deep research score against. Load from `{SKILL_DIR}/profiles/{user_slug}.json` (interchangeable across all GTM skills — same shape as company-research). `example.json` is a template, not a real profile — never use it.

DO NOT look outside `{SKILL_DIR}/profiles/` for profiles — never reach into other skills' directories. If a profile is needed elsewhere, the user copies it explicitly.

Resolution order:
  1. If the user invoked with `--user-company <slug>`, use that slug.
  2. Else, list `profiles/*.json` excluding `example.json`. If exactly one profile exists, use it (and tell the user which one). If multiple exist, ask the user (plain chat) which one.
  3. If zero profiles exist, fail loudly and instruct the user to create one (copy `profiles/example.json` to `profiles/<your_slug>.json` and fill it in, or run the company-research skill, which builds one automatically).
```bash
PROFILES=$(ls {SKILL_DIR}/profiles/*.json 2>/dev/null | xargs -n1 basename | sed 's/\.json$//' | grep -v '^example$')
COUNT=$(echo "$PROFILES" | grep -c .)

if [ -z "$USER_SLUG" ]; then
  if [ "$COUNT" -eq 0 ]; then
    echo "No profiles found in {SKILL_DIR}/profiles/. Copy profiles/example.json to profiles/<your_slug>.json and fill it in, or run the company-research skill to build one."
    exit 1
  elif [ "$COUNT" -eq 1 ]; then
    USER_SLUG=$PROFILES
    echo "Using the only profile available: ${USER_SLUG}"
  else
    echo "Multiple profiles found:"
    echo "$PROFILES" | sed 's/^/  - /'
    echo "Re-invoke with --user-company <slug> to pick one."
    exit 1
  fi
fi

test -f {SKILL_DIR}/profiles/${USER_SLUG}.json || {
  echo "Profile not found: profiles/${USER_SLUG}.json"
  exit 1
}
cat {SKILL_DIR}/profiles/${USER_SLUG}.json
```

The profile yields: `company`, `product`, `icp_description`, `existing_customers`. These get embedded verbatim in every subagent prompt downstream.

Step 2: Recon


Detect the event platform and extraction strategy. One command:

```bash
node {SKILL_DIR}/scripts/recon.mjs {EVENT_URL} {OUTPUT_DIR}
```

Writes `{OUTPUT_DIR}/recon.json` with `platform`, `strategy`, and (for Next.js) `nextDataPaths`. See `references/event-platforms.md` for the platform catalog and detection priority.

Expected outcomes:
  • Stripe Sessions class (Next.js): `platform: "next-data"`, 1-3 paths
  • Sessionize: `platform: "sessionize"`
  • Lu.ma / Eventbrite: `platform: "luma" | "eventbrite"`
  • Anything else: `platform: "custom"`, `strategy: "markdown"` (best-effort fallback)

Step 3: Extract People


```bash
node {SKILL_DIR}/scripts/extract_event.mjs {OUTPUT_DIR} --user-company {USER_SLUG}
```

Reads `recon.json`, dispatches to the platform-specific extractor, writes `people.jsonl` (one speaker per line) and `seed_companies.txt` (deduped companies).

The `--user-company` flag also drops the host org's own employees (a Stripe-hosted event drops Stripe employees) and the user's own employees from the speaker list — those aren't prospects.

Sanity-check the output:

```bash
wc -l {OUTPUT_DIR}/people.jsonl {OUTPUT_DIR}/seed_companies.txt
head -3 {OUTPUT_DIR}/people.jsonl
```

If `people.jsonl` is empty or under ~10 lines, recon picked the wrong platform — see `references/event-platforms.md` and re-run with an adjusted strategy.

Step 4: Group by Company


`extract_event.mjs` already emits `seed_companies.txt` (one company per line, deduped, sorted). This step is informational — verify the count looks reasonable before fanning out:

```bash
wc -l {OUTPUT_DIR}/seed_companies.txt
```

Expected: roughly 0.4-0.6× the speaker count (most events average ~2 speakers per company; some companies send 5+, many send 1).

Step 5: ICP Triage


Fast pass — one tool call per company, no deep research. Score every company in `seed_companies.txt` against the user's ICP and write a thin triage stub to `companies/{slug}.md`. Companies with `icp_fit_score >= --icp-threshold` (default 6) advance to Step 7's deep research; the rest stay as triage stubs.

Dispatch pattern: split `seed_companies.txt` into batches of ~10 and fan out N subagents in a SINGLE Agent batch (multiple Agent tool calls in one message). Each subagent runs the prompt from `references/workflow.md` → "ICP Triage" section. Hard cap: 1 tool call per company (just `extract_page.mjs` on the homepage), enforced via the `# bb call N/1` comment pattern.
```bash
# Build batch files: each batch line is "name|guessed_homepage|slug".
# extract_event.mjs only emits company NAMES (no URLs), so we slugify and guess
# https://{slug-without-spaces}.com as the canonical homepage. The triage subagent
# is allowed to write product_description: "Unknown — homepage content not accessible"
# and cap score at 3 if the guessed URL 404s — that's the documented fallback in
# workflow.md (rule 3 of the ICP Triage prompt). Burning a real bb search to
# discover the URL would bust the 1-call-per-company HARD CAP.
node -e '
const fs = require("fs");
const slugify = (s) => (s || "").toLowerCase().replace(/[^a-z0-9]+/g, "-").replace(/^-+|-+$/g, "");
const seed = fs.readFileSync("{OUTPUT_DIR}/seed_companies.txt", "utf-8").split("\n").filter(Boolean);
const lines = seed.map(c => {
  const slug = slugify(c);
  const guessedHost = c.toLowerCase().replace(/[^a-z0-9]/g, "");
  return `${c}|https://${guessedHost}.com|${slug}`;
});
fs.writeFileSync("{OUTPUT_DIR}/_seed_with_urls.txt", lines.join("\n") + "\n");
'

# Split into ~10-company batches
split -l 10 {OUTPUT_DIR}/_seed_with_urls.txt {OUTPUT_DIR}/_batch_triage_

# Count batches → number of subagents to dispatch (cap at 6 per message; second wave for the rest)
ls {OUTPUT_DIR}/_batch_triage_* | wc -l
```

Then in a single message, dispatch one Agent call per batch (up to 6 in parallel; subsequent waves after the first returns). Each Agent gets the prompt from `references/workflow.md` → "ICP Triage" with these substitutions before sending:
- `{SKILL_DIR}` → full literal skill path (e.g. `/Users/jay/skills/skills/event-prospecting`)
- `{OUTPUT_DIR}` → full literal output path
- `{USER_COMPANY}`, `{USER_PRODUCT}`, `{ICP_DESCRIPTION}` → from the loaded profile
- `{EVENT_NAME}` → `recon.json` `.title`
- `{COMPANY_LIST}` → contents of the batch file (e.g. `cat {OUTPUT_DIR}/_batch_triage_aa`)
- `{TOTAL}` → number of lines in this batch (substitute into `# bb call N/{TOTAL}`)

**Agent dispatch (skeleton, repeat per batch in one message)**:

```
Agent(
  description: "ICP triage batch aa",
  prompt: <ICP Triage prompt from workflow.md with all placeholders substituted>,
  subagent_type: "general-purpose"
)
Agent(
  description: "ICP triage batch ab",
  prompt: <same prompt template, COMPANY_LIST swapped to batch ab>,
  subagent_type: "general-purpose"
)
... up to 6 per message
```

After all subagents return, verify every company in `seed_companies.txt` has a corresponding `companies/{slug}.md`:

```bash
ls {OUTPUT_DIR}/companies/*.md | wc -l   # should equal wc -l {OUTPUT_DIR}/seed_companies.txt
```


Clean up the batch files: `rm {OUTPUT_DIR}/_batch_triage_*`.


Step 6: Filter by ICP Threshold


Read each `companies/*.md` frontmatter, keep those with `icp_fit_score >= 6` (or whatever `--icp-threshold` is). Write the surviving company slugs to `{OUTPUT_DIR}/icp_fits.txt`:

```bash
THRESHOLD=6   # from --icp-threshold flag
for f in {OUTPUT_DIR}/companies/*.md; do
  score=$(awk '/^icp_fit_score:/{print $2; exit}' "$f")
  if [ -n "$score" ] && [ "$score" -ge "$THRESHOLD" ]; then
    basename "$f" .md
  fi
done > {OUTPUT_DIR}/icp_fits.txt

wc -l {OUTPUT_DIR}/icp_fits.txt
```

Expected: 20-40% of `seed_companies.txt`. If the survival rate is < 10%, the threshold may be too high or the ICP description too narrow — surface a warning to the user.

Step 7: Deep Research


Full Plan→Research→Synthesize on ICP-fit companies only. Hard cap: 5 tool calls per company (homepage extract + 2-3 sub-question searches + 1-2 supplementary fetches). Subagents OVERWRITE the existing `companies/{slug}.md` triage stub with the richer deep-research version (frontmatter `triage_only: false`).

Dispatch pattern: split `icp_fits.txt` into batches of ~5 (deep mode default) and fan out one Agent per batch in a SINGLE message (up to 6 Agents per message). Each Agent gets the prompt from `references/workflow.md` → "Deep Research" with these substitutions:
  • `{SKILL_DIR}`, `{OUTPUT_DIR}`, `{USER_COMPANY}`, `{USER_PRODUCT}`, `{ICP_DESCRIPTION}`
  • `{EVENT_NAME}` (from `recon.json` `.title`), `{EVENT_CONTEXT}` (track / topic, manually inferred from the event homepage)
  • `{COMPANY_LIST}` → contents of the batch file (each line `slug|website`)
```bash
# Build {company-slug|website} pairs by reading frontmatter from each triage stub
while read slug; do
  website=$(awk '/^website:/{print $2; exit}' {OUTPUT_DIR}/companies/${slug}.md)
  echo "${slug}|${website}"
done < {OUTPUT_DIR}/icp_fits.txt > {OUTPUT_DIR}/_deep_targets.txt

# Split into ~5-company batches (deep mode)
split -l 5 {OUTPUT_DIR}/_deep_targets.txt {OUTPUT_DIR}/_batch_deep_
ls {OUTPUT_DIR}/_batch_deep_* | wc -l
```

**Agent dispatch (skeleton, repeat per batch in one message)**:

```
Agent(
  description: "Deep research batch aa",
  prompt: <Deep Research prompt from workflow.md with all placeholders substituted; COMPANY_LIST = cat _batch_deep_aa>,
  subagent_type: "general-purpose"
)
Agent(
  description: "Deep research batch ab",
  prompt: <same template, COMPANY_LIST = cat _batch_deep_ab>,
  subagent_type: "general-purpose"
)
... up to 6 per message; second wave after the first returns
```

After all subagents return, verify the deep-research files exist and have `triage_only: false`:

```bash
grep -l "triage_only: false" {OUTPUT_DIR}/companies/*.md | wc -l   # should equal wc -l icp_fits.txt
```

Step 8: Enrich Speakers


Per person: harvest LinkedIn URL and recent activity (podcast / blog / talk / GitHub / X), and write `people/{slug}.md`. Hard cap: 4 tool calls per person, across four lanes:
  1. `bb search "{name} {company} linkedin"` (always)
  2. `bb search "{name} podcast OR talk OR blog 2026"` (deep+)
  3. `bb search "{name} github"` (deeper)
  4. `bb search "{name} site:x.com OR site:twitter.com"` (deeper)

Quick mode: skip Step 8 entirely. Deep mode: lanes 1-2. Deeper mode: lanes 1-4.
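Assuming `people/{slug}.md` uses the same slugify rule as the Step 5 batch-builder (an assumption; the Person Enrichment prompt in `references/workflow.md` is authoritative), a person slug would be derived like this:

```javascript
// Same slugify as the Step 5 batch-builder; applying it to people is an assumption
const slugify = (s) => (s || "").toLowerCase().replace(/[^a-z0-9]+/g, "-").replace(/^-+|-+$/g, "");

console.log(slugify("Jane O'Doe"));   // → jane-o-doe
console.log(slugify("José García"));  // accented chars are non-[a-z0-9] and get dropped: jos-garc-a
```

Note the second example: non-ASCII letters collapse into hyphens, so two people with similar accented names could collide on the same slug.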

Step 8a — Ask the user: scope of enrichment


Before dispatching, compute the two candidate counts and ask the user to choose. The default is ICP-fit only (faster, cheaper, what most users want); enriching every speaker is opt-in because cost scales linearly with the number of people enriched.

```bash
TOTAL=$(wc -l < {OUTPUT_DIR}/people.jsonl)
ICP_FITS=$(node -e '
const fs = require("fs");
const fits = new Set(fs.readFileSync("{OUTPUT_DIR}/icp_fits.txt", "utf-8").split("\n").filter(Boolean));
const slug2name = {};
for (const slug of fits) {
  const md = fs.readFileSync(`{OUTPUT_DIR}/companies/${slug}.md`, "utf-8");
  const m = md.match(/^company_name:\s*(.+)$/m);
  if (m) slug2name[slug] = m[1].trim();
}
const want = new Set(Object.values(slug2name).map(s => s.toLowerCase()));
const ppl = fs.readFileSync("{OUTPUT_DIR}/people.jsonl","utf-8").split("\n").filter(Boolean).map(JSON.parse);
console.log(ppl.filter(p => p.company && want.has(p.company.toLowerCase())).length);
')

# Lanes per person: 2 (deep) or 4 (deeper) — match {DEPTH}
LANES=2   # or 4 for deeper
echo "ICP fits: ${ICP_FITS} speakers × ${LANES} = $((ICP_FITS * LANES)) calls"
echo "All: ${TOTAL} speakers × ${LANES} = $((TOTAL * LANES)) calls"
```

Then ask via `AskUserQuestion` — a clean two-option choice with the quantified cost on each:

```
AskUserQuestion(questions: [
  {
    question: "Enrich which speakers?",
    header: "Enrichment scope",
    multiSelect: false,
    options: [
      { label: "ICP fits only", description: "${ICP_FITS} speakers, ~$((ICP_FITS * LANES)) calls (recommended)" },
      { label: "All speakers", description: "${TOTAL} speakers, ~$((TOTAL * LANES)) calls" }
    ]
  }
])
```

Save the chosen scope as `ENRICH_SCOPE=icp_fits` or `ENRICH_SCOPE=all`. If the user picks "All speakers" and `TOTAL × LANES > 600`, print a warning and ask once more — that's a 10+ minute run with hundreds of tool calls.
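The > 600 re-confirmation can be a plain arithmetic guard. The variable values below are examples for illustration (in the real run, `TOTAL` and `LANES` come from the computation above):

```shell
# Example values; in the pipeline these are computed in Step 8a
ENRICH_SCOPE=all
TOTAL=400
LANES=2
if [ "$ENRICH_SCOPE" = "all" ] && [ $((TOTAL * LANES)) -gt 600 ]; then
  echo "Warning: ${TOTAL} speakers x ${LANES} lanes = $((TOTAL * LANES)) calls (10+ minute run). Confirm once more before dispatching."
fi
```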

Step 8b — Filter and batch


```bash
# Build _people_to_enrich.jsonl based on ENRICH_SCOPE
if [ "$ENRICH_SCOPE" = "all" ]; then
  cp {OUTPUT_DIR}/people.jsonl {OUTPUT_DIR}/_people_to_enrich.jsonl
else
  node -e '
  const fs = require("fs");
  const fits = new Set(fs.readFileSync("{OUTPUT_DIR}/icp_fits.txt", "utf-8").split("\n").filter(Boolean));
  const slug2name = {};
  for (const slug of fits) {
    const md = fs.readFileSync(`{OUTPUT_DIR}/companies/${slug}.md`, "utf-8");
    const m = md.match(/^company_name:\s*(.+)$/m);
    if (m) slug2name[slug] = m[1].trim();
  }
  const wantNames = new Set(Object.values(slug2name).map(s => s.toLowerCase()));
  const lines = fs.readFileSync("{OUTPUT_DIR}/people.jsonl", "utf-8").split("\n").filter(Boolean);
  const keep = lines.filter(l => {
    const p = JSON.parse(l);
    return p.company && wantNames.has(p.company.toLowerCase());
  });
  fs.writeFileSync("{OUTPUT_DIR}/_people_to_enrich.jsonl", keep.join("\n") + "\n");
  console.error(`Enriching ${keep.length} of ${lines.length} speakers`);
  '
fi

# Split into ~5-person batches
split -l 5 {OUTPUT_DIR}/_people_to_enrich.jsonl {OUTPUT_DIR}/_batch_people_
```

Then in a single message, dispatch one Agent call per batch (up to 6 per message) with the prompt from `references/workflow.md` → "Person Enrichment". Each subagent's prompt should include:
- `{SKILL_DIR}`, `{OUTPUT_DIR}`, `{DEPTH}` (`deep` | `deeper`)
- `{USER_COMPANY}`, `{USER_PRODUCT}`, `{ICP_DESCRIPTION}`
- `{EVENT_NAME}` (from `recon.json` `.title`)
- `{LANES}` → `2` for deep mode, `4` for deeper mode (substituted into `# bb call N/{LANES}`)
- `{PEOPLE_BATCH}` → contents of `_batch_people_aa` (each line a JSON record from `people.jsonl`)

**Agent dispatch (skeleton, repeat per batch in one message)**:

```
Agent(
  description: "Person enrichment batch aa",
  prompt: <Person Enrichment prompt from workflow.md with all placeholders substituted; PEOPLE_BATCH = cat _batch_people_aa>,
  subagent_type: "general-purpose"
)
Agent(
  description: "Person enrichment batch ab",
  prompt: <same template, PEOPLE_BATCH = cat _batch_people_ab>,
  subagent_type: "general-purpose"
)
... up to 6 per message
```

After all subagents return, verify the people files exist:

```bash
ls {OUTPUT_DIR}/people/*.md | wc -l   # should equal wc -l _people_to_enrich.jsonl
```

Step 9: Compile Report


Generate the company-grouped HTML index, alternate views, and CSV in one command:

```bash
node {SKILL_DIR}/scripts/compile_report.mjs {OUTPUT_DIR} --open
```

This generates:
  • `{OUTPUT_DIR}/index.html` — people grouped by company, ranked by company ICP score (opens in browser)
  • `{OUTPUT_DIR}/people.html` — filterable speaker list (alternate view)
  • `{OUTPUT_DIR}/companies.html` — ICP-ranked company table with attendees
  • `{OUTPUT_DIR}/results.csv` — cold-outbound-ready spreadsheet

Then present a summary in chat:

```
Event Prospecting Complete — {Event Name}

  • Total speakers extracted: {count}
  • Unique companies: {count}
  • ICP fits (score ≥ {threshold}): {count}
  • Speakers enriched: {count}
  • Score distribution (companies):
    • Strong fit (8-10): {count}
    • Partial fit (5-7): {count}
    • Weak fit (1-4): {count}
  • Report opened in browser: {OUTPUT_DIR}/index.html
```

Show the **top 5 people cards** as a markdown table sorted by company ICP score, then offer to:
- Adjust `--icp-threshold` and re-run Steps 6-9
- Export the CSV to a CRM