company-research

Company Research

Discover and deeply research companies to sell to. Uses Browserbase Search API for discovery and a Plan→Research→Synthesize pattern for deep enrichment — outputting a scored research report and CSV.
**Required:** `BROWSERBASE_API_KEY` env var and the `bb` CLI installed.

**First-run setup:** On the first run you'll be prompted to approve `bb fetch`, `bb search`, `cat`, `mkdir`, `sed`, etc. Select "Yes, and don't ask again for: bb fetch:*" (or equivalent) for each to auto-approve for the session. To permanently approve, add these to your `~/.claude/settings.json` under `permissions.allow`:

```json
"Bash(bb:*)", "Bash(bunx:*)", "Bash(bun:*)", "Bash(node:*)",
"Bash(cat:*)", "Bash(mkdir:*)", "Bash(sed:*)", "Bash(head:*)", "Bash(tr:*)", "Bash(rm:*)"
```
**Path rules:** Always use the full literal path in all Bash commands — NOT `~` or `$HOME` (both trigger "shell expansion syntax" approval prompts). Resolve the home directory once and use it everywhere. When constructing subagent prompts, replace `{SKILL_DIR}` with the full literal path.

**Output directory:** All research output goes to `~/Desktop/{company_slug}_research_{YYYY-MM-DD}/`. This directory contains one `.md` file per researched company plus a final `.csv`. The user gets both the scored spreadsheet and the full research files on their Desktop.
CRITICAL — Tool restrictions (applies to main agent AND all subagents):
  • All web searches: use `bb search`. NEVER use WebSearch.
  • All page content extraction: use `node {SKILL_DIR}/scripts/extract_page.mjs "<url>"`. This script fetches via `bb fetch`, parses title + meta tags + visible body text, and automatically falls back to `bb browse` when the page is JS-rendered or over 1MB. NEVER hand-roll a `bb fetch | sed` pipeline — it silently strips meta tags and doesn't handle the JSON envelope. NEVER use WebFetch.
  • All research output: subagents write one markdown file per company to `{OUTPUT_DIR}/{company-slug}.md` using a bash heredoc. NEVER use the Write tool or `python3 -c`. See `references/example-research.md` for the file format.
  • Report + CSV compilation: use `node {SKILL_DIR}/scripts/compile_report.mjs {OUTPUT_DIR} --open` — generates the HTML report and CSV in one step and opens the overview in the browser.
  • URL deduplication: use `node {SKILL_DIR}/scripts/list_urls.mjs /tmp` after discovery.
  • Subagents must use ONLY the Bash tool. No other tools allowed.
  • The main agent NEVER reads raw discovery JSON batch files. Use `list_urls.mjs` for dedup.
CRITICAL — Anti-hallucination rules (applies to main agent AND all subagents):
  • NEVER infer `product_description`, `industry`, or `target_audience` from a site's fonts, framework (Framer/Next.js/React), design system, or typography. These are cosmetic and say nothing about what the company sells.
  • NEVER let the user's own ICP leak into a target's description. If you don't know what the target does, write `Unknown` — do not pattern-match them onto the ICP.
  • `product_description` MUST quote or paraphrase a specific phrase from `extract_page.mjs` output (TITLE, META_DESCRIPTION, OG_DESCRIPTION, HEADINGS, or BODY). If none of those fields yield a recognizable product statement, write `Unknown — homepage content not accessible`.
  • If `product_description` is `Unknown`, cap `icp_fit_score` at 3 and set `icp_fit_reasoning` to `Insufficient evidence — homepage returned no readable content`.
CRITICAL — Minimize permission prompts:
  • Subagents MUST batch ALL file writes into a SINGLE Bash call using chained heredocs. One Bash call = one permission prompt.
  • Batch ALL searches and ALL fetches into single Bash calls using `&&` chaining.
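The single-call heredoc batching can be sketched as follows (the directory and file contents here are illustrative stand-ins; a real run uses the full literal `{OUTPUT_DIR}` and the file format from `references/example-research.md`):

```shell
# Illustrative output dir; a real run uses the full literal {OUTPUT_DIR}
OUT=/tmp/demo_research
mkdir -p "$OUT"

# Both heredocs live in the same Bash invocation, so they trigger at most
# one permission prompt instead of one prompt per file write.
cat > "$OUT/acme-corp.md" <<'EOF'
# Acme Corp
product_description: Unknown — homepage content not accessible
EOF
cat > "$OUT/globex.md" <<'EOF'
# Globex
product_description: Unknown — homepage content not accessible
EOF

ls "$OUT"
```

The same pattern extends to any number of companies per subagent: keep appending `cat > file <<'EOF' … EOF` stanzas inside the one call.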

Pipeline Overview

Follow these 5 steps in order. Do not skip steps or reorder.
  1. Company Research — Deeply understand the user's company, product, and who they sell to
  2. Depth Mode Selection — Choose research depth based on how many targets they want
  3. Discovery — Find target companies using diverse search queries
  4. Deep Research & Scoring — Research each company, score ICP fit
  5. Report & CSV — Present findings, compile scored CSV


Step 0: Setup Output Directory

Before starting, create the output directory on the user's Desktop:

```bash
OUTPUT_DIR=~/Desktop/{company_slug}_research_{YYYY-MM-DD}
mkdir -p "$OUTPUT_DIR"
```

Replace `{company_slug}` with the user's company name (lowercase, hyphenated) and `{YYYY-MM-DD}` with today's date. Pass `{OUTPUT_DIR}` (as a full literal path, not with `~`) to all subagent prompts so they write research files there.

Also clean up discovery batch files from prior runs:

```bash
rm -f /tmp/company_discovery_batch_*.json
```
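The slug and dated directory name can be derived mechanically; a minimal sketch (the company name is illustrative, and only `tr`, `date`, and `mkdir` from the approved command list are used):

```shell
COMPANY="Acme Corp"   # illustrative company name

# lowercase, hyphenate spaces -> "acme-corp"
SLUG=$(printf '%s' "$COMPANY" | tr '[:upper:]' '[:lower:]' | tr ' ' '-')

# a real run substitutes the literal Desktop path here
OUTPUT_DIR="/tmp/${SLUG}_research_$(date +%F)"
mkdir -p "$OUTPUT_DIR"
echo "$SLUG"   # -> acme-corp
```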

Step 1: Deep Company Research

This is the most important step. The quality of everything downstream depends on deeply understanding the user's company.
  1. Ask the user for their company name or URL.
  2. Check for an existing profile:
    • List files in `{SKILL_DIR}/profiles/` (ignore `example.json`)
    • If a matching profile exists → load it and present it to the user: "I have your profile from {researched_at}. Still accurate?" If yes → skip to Step 2.
    • If no profile exists → proceed with the deep research below.
  3. Run full deep research on the user's company using the Plan→Research→Synthesize pattern. See `references/research-patterns.md` for sub-question templates and research methodology.
    Key research steps:
    • Search: `bb search "{company name}" --num-results 10`
    • Fetch homepage: `node {SKILL_DIR}/scripts/extract_page.mjs "{company website}"`
    • Discover site pages via sitemap (do NOT hardcode paths like `/about` or `/customers`):
      1. `bb fetch --allow-redirects "{company website}/sitemap.xml"` — the sitemap is small, so raw `bb fetch` is fine
      2. Scan for URLs containing the keywords `customer`, `case-stud`, `pricing`, `about`, `use-case`, `industry`, `solution`
      3. Optionally also fetch `/llms.txt` for page descriptions
      4. Pick the 3-5 most relevant URLs and extract them with `extract_page.mjs` (NOT raw `bb fetch`)
    • Search for external context and competitors
    • Accumulate findings with confidence levels
    Synthesize into a profile: Company, Product, Existing Customers, Competitors, Use Cases. Do NOT include ICP or sub-verticals — those are per-run decisions.
  4. Present the profile to the user for confirmation. Do not proceed until confirmed.
  5. Save the confirmed profile to `{SKILL_DIR}/profiles/{company-slug}.json`.
  6. Ask clarifying questions using `AskUserQuestion` with checkboxes:
    • "Which segments are you targeting?" — options derived from the company research
    • "Company stage?" — Startups, Mid-market, Enterprise, All
    • "How many companies / depth?" — Quick (~100), Deep (~50), Deeper (~25)
    • This is the ONLY user interaction. After this, execute silently until results are ready.
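The sitemap keyword scan in step 3 can be sketched like this (the sitemap snippet is a hypothetical stand-in for `bb fetch ... /sitemap.xml` output; the keyword list is the one above):

```shell
# Hypothetical sitemap content standing in for `bb fetch` output
SITEMAP='<url><loc>https://acme.example/pricing</loc></url>
<url><loc>https://acme.example/blog/post-1</loc></url>
<url><loc>https://acme.example/customers</loc></url>'

# pull out URLs, keep only those matching the page-type keywords
printf '%s\n' "$SITEMAP" \
  | grep -oE 'https?://[^<"]+' \
  | grep -Ei 'customer|case-stud|pricing|about|use-case|industry|solution'
# keeps /pricing and /customers, drops the blog post
```

This keeps sitemap handling inside the already-approved `bb`/`grep`-style tooling rather than fetching pages speculatively.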

Step 2: Depth Mode Selection

| Mode | Research per company | Best for |
| --- | --- | --- |
| quick | Homepage + 1-2 searches | ~100 companies, broad scan |
| deep | 2-3 sub-questions, 5-8 tool calls | ~50 companies, solid research |
| deeper | 4-5 sub-questions, 10-15 tool calls | ~25 companies, full intelligence |

Step 3: Discovery

Formula: `ceil(requested_companies / 35)` search queries needed. Over-discover by ~2-3x because filtering typically drops 50-70% of results.

Generate search queries with these patterns:
  • Industry + company stage + geography ("fintech startups series A Bay Area")
  • Technology stack + use case ("companies using Selenium for web scraping")
  • Competitor adjacency ("alternatives to {known company in ICP}")
  • Buyer persona + pain point ("engineering teams struggling with browser automation")

Process:
  1. Launch ALL discovery subagents at once (up to ~6 per message). Each runs its queries in a SINGLE Bash call:

     ```bash
     bb search "{query}" --num-results 25 --output /tmp/company_discovery_batch_{N}.json
     ```
  2. After all waves complete, deduplicate: `node {SKILL_DIR}/scripts/list_urls.mjs /tmp`
  3. Filter the URL list — remove:
    • Blog posts and news articles (globenewswire.com, techcrunch.com, etc.)
    • Directories/aggregators (tracxn.com, crunchbase.com, g2.com)
    • The user's own competitors and existing customers (from the profile)
    Keep only company homepages.

See `references/workflow.md` for subagent prompt templates and wave management.
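The query-count formula is plain integer arithmetic; a sketch with illustrative numbers:

```shell
REQUESTED=100                       # companies the user asked for
# integer ceiling: ceil(a/b) == (a + b - 1) / b
QUERIES=$(( (REQUESTED + 34) / 35 ))
echo "$QUERIES"                     # -> 3 base queries for 100 companies

# with 3x over-discovery, plan query count against the inflated target
TARGET=$(( REQUESTED * 3 ))
QUERIES=$(( (TARGET + 34) / 35 ))
echo "$QUERIES"                     # -> 9
```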

Step 4: Deep Research & Scoring

Launch subagents to research companies in parallel. See `references/workflow.md` for the enrichment subagent prompt template and `references/research-patterns.md` for the full research methodology.

Process:
  1. Split the filtered URLs into groups per subagent (quick: ~10, deep: ~5, deeper: ~2-3)
  2. Launch ALL enrichment subagents at once (up to ~6 per message)
  3. Each subagent uses ONLY Bash — for each company:
    Phase A — Plan (skip in quick mode): decompose into 2-5 sub-questions based on the ICP and enrichment fields.
    Phase B — Research loop: search and fetch pages, extract findings. Respect the step budget (quick: 2-3, deep: 5-8, deeper: 10-15).
    Phase C — Synthesize: score ICP fit 1-10 with evidence. Fill the enrichment fields from the findings.
  4. Subagents write ALL markdown files to `{OUTPUT_DIR}/` in a SINGLE Bash call using chained heredocs
  5. After ALL subagents complete, proceed to Step 5

Critical: include the confirmed ICP description verbatim in every subagent prompt. Pass the full literal `{OUTPUT_DIR}` path to every subagent.
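The grouping in step 1 can be sketched with `split` (the URL file here is hypothetical; the real list comes from `list_urls.mjs`, and the handoff format is up to the run):

```shell
# 12 hypothetical homepage URLs, one per line
printf 'https://c%02d.example\n' $(seq 1 12) > /tmp/company_urls.txt

# deep mode: ~5 URLs per subagent -> /tmp/url_group_aa, _ab, _ac
split -l 5 /tmp/company_urls.txt /tmp/url_group_
ls /tmp/url_group_*
```

Each resulting group file then becomes the URL list pasted into one enrichment subagent prompt.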

Step 5: Report & CSV

  1. Generate the HTML report + CSV (opens the overview in the browser automatically):

     ```bash
     node {SKILL_DIR}/scripts/compile_report.mjs {OUTPUT_DIR} --open
     ```

    This generates:
    • `{OUTPUT_DIR}/index.html` — overview page with the scored table (opens in the browser)
    • `{OUTPUT_DIR}/companies/*.html` — individual company pages (linked from the overview)
    • `{OUTPUT_DIR}/results.csv` — scored spreadsheet for import into sheets/CRM
  2. Present a summary in chat too:

Company Research Complete

  • Total companies researched: {count}
  • Depth mode: {mode}
  • Score distribution:
    • Strong fit (8-10): {count}
    • Partial fit (5-7): {count}
    • Weak fit (1-4): {count}
  • Report opened in browser: ~/Desktop/{company_slug}_research_{date}/index.html

3. Show the **top companies** sorted by ICP score in a table:
| Company | Score | Product | Industry | Fit Reasoning |
| --- | --- | --- | --- | --- |
| Acme | 9 | AI inventory management | E-commerce SaaS | Series A, uses Selenium, expanding to EU |

4. For the top 3-5 companies, show a brief research summary — key findings, why they're a good fit, and what specific angle to approach them with.

Offer to dig deeper into specific companies, adjust scoring criteria, or re-run discovery with different queries.