
# Product Analysis

Multi-path parallel product analysis that combines Claude Code agent teams and Codex CLI for cross-model test-time compute scaling.
Core principle: Same analysis task, multiple AI perspectives, deep synthesis.

## How It Works


```
/product-analysis full
         ├─ Step 0: Auto-detect available tools (codex? competitors?)
    ┌────┼──────────────┐
    │    │              │
 Claude Code         Codex CLI (auto-detected)
 Task Agents         (background Bash)
 (Explore ×3-5)      (×2-3 parallel)
    │                   │
    └────────┬──────────┘
      Synthesis (main context)
      Structured Report
```

## Step 0: Auto-Detect Available Tools


Before launching any agents, detect what tools are available:

```bash
# Check if Codex CLI is installed
which codex 2>/dev/null && codex --version
```

**Decision logic**:
- If `codex` is found: Inform the user — "Codex CLI detected (version X). Will run cross-model analysis for richer perspectives."
- If `codex` is not found: Silently proceed with Claude Code agents only. Do NOT ask the user to install anything.

Also detect the project type to tailor agent prompts:
```bash
# Detect project type
ls package.json 2>/dev/null    # Node.js/React
ls pyproject.toml 2>/dev/null  # Python
ls Cargo.toml 2>/dev/null      # Rust
ls go.mod 2>/dev/null          # Go
```
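The detection and decision steps above can be combined into one shell sketch; the echoed wording is illustrative, not a required message:

```shell
# Decide whether to include Codex CLI in this run.
if command -v codex >/dev/null 2>&1; then
  version="$(codex --version 2>/dev/null)"
  echo "Codex CLI detected (${version:-unknown version}). Will run cross-model analysis."
  use_codex=true
else
  # Silently fall back to Claude Code agents only; do not prompt for installs.
  use_codex=false
fi
```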

## Scope Modes


Parse `$ARGUMENTS` to determine analysis scope:

| Scope | What it covers | Typical agents |
|---|---|---|
| `full` | UX + API + Architecture + Docs (default) | 5 Claude + Codex (if available) |
| `ux` | Frontend navigation, information density, user journey, empty state, onboarding | 3 Claude + Codex (if available) |
| `api` | Backend API coverage, endpoint health, error handling, consistency | 2 Claude + Codex (if available) |
| `arch` | Module structure, dependency graph, code duplication, separation of concerns | 2 Claude + Codex (if available) |
| `compare X Y` | Self-audit + competitive benchmarking (invokes `/competitors-analysis`) | 3 Claude + competitors-analysis |
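A minimal sketch of the scope-parsing step, assuming `$ARGUMENTS` arrives as positional shell arguments (variable names hypothetical):

```shell
# Default to the full audit when no scope is given.
scope="${1:-full}"
case "$scope" in
  full|ux|api|arch) ;;                   # recognized single-repo scopes
  compare) shift; competitors="$*" ;;    # remaining args name competitors
  *) echo "Unknown scope '$scope', using full"; scope=full ;;
esac
echo "scope=$scope"
```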

## Phase 1: Parallel Exploration


Launch all exploration agents simultaneously using Task tool (background mode).

### Claude Code Agents (always)


For each dimension, spawn a Task agent with `subagent_type: Explore` and `run_in_background: true`:
**Agent A — Frontend Navigation & Information Density**
Explore the frontend navigation structure and entry points:
1. App.tsx: How many top-level components are mounted simultaneously?
2. Left sidebar: How many buttons/entries? What does each link to?
3. Right sidebar: How many tabs? How many sections per tab?
4. Floating panels: How many drawers/modals? Which overlap in functionality?
5. Count total first-screen interactive elements for a new user.
6. Identify duplicate entry points (same feature accessible from 2+ places).
Give specific file paths, line numbers, and element counts.
**Agent B — User Journey & Empty State**
Explore the new user experience:
1. Empty state page: What does a user with no sessions see? Count clickable elements.
2. Onboarding flow: How many steps? What information is presented?
3. Prompt input area: How many buttons/controls surround the input box? Which are high-frequency vs low-frequency?
4. Mobile adaptation: How many nav items? How does it differ from desktop?
5. Estimate: Can a new user complete their first conversation in 3 minutes?
Give specific file paths, line numbers, and UX assessment.
**Agent C — Backend API & Health**
Explore the backend API surface:
1. List ALL API endpoints (method + path + purpose).
2. Identify endpoints that are unused or have no frontend consumer.
3. Check error handling consistency (do all endpoints return structured errors?).
4. Check authentication/authorization patterns (which endpoints require auth?).
5. Identify any endpoints that duplicate functionality.
Give specific file paths and line numbers.
**Agent D — Architecture & Module Structure (full/arch scope only)**
Explore the module structure and dependencies:
1. Map the module dependency graph (which modules import which).
2. Identify circular dependencies or tight coupling.
3. Find code duplication across modules (same pattern in 3+ places).
4. Check separation of concerns (does each module have a single responsibility?).
5. Identify dead code or unused exports.
Give specific file paths and line numbers.
**Agent E — Documentation & Config Consistency (full scope only)**
Explore documentation and configuration:
1. Compare README claims vs actual implemented features.
2. Check config file consistency (base.yaml vs .env.example vs code defaults).
3. Find outdated documentation (references to removed features/files).
4. Check test coverage gaps (which modules have no tests?).
Give specific file paths and line numbers.

### Codex CLI Agents (auto-detected)


If Codex CLI was detected in Step 0, launch parallel Codex analyses via background Bash.
Each Codex invocation gets the same dimensional prompt but from a different model's perspective:
```bash
codex -m o4-mini \
  -c model_reasoning_effort="high" \
  --full-auto \
  "Analyze the frontend navigation structure of this project. Count all interactive elements visible to a new user on first screen. Identify duplicate entry points where the same feature is accessible from 2+ places. Give specific file paths and counts."
```
Run 2-3 Codex commands in parallel (background Bash), one per major dimension.

**Important**: Codex runs in the project's working directory and has full filesystem access. The `--full-auto` flag (or `--dangerously-bypass-approvals-and-sandbox` for older versions) enables autonomous execution.
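The parallel launch can be sketched as below; the stub function keeps the sketch runnable where Codex CLI is absent, and the prompts are abbreviated placeholders:

```shell
# Stub so the sketch degrades gracefully when Codex CLI is not installed.
command -v codex >/dev/null 2>&1 || codex() { echo "[stub] codex $*"; }

prompts=(
  "Analyze frontend navigation; count first-screen interactive elements."
  "List backend API endpoints and flag those with no frontend consumer."
)
pids=()
for i in "${!prompts[@]}"; do
  # One background Codex run per dimension, each writing its own report.
  codex -m o4-mini --full-auto "${prompts[$i]}" > "codex-report-$i.md" 2>&1 &
  pids+=("$!")
done
wait "${pids[@]}"   # block until every analysis completes
```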

## Phase 2: Competitive Benchmarking (compare scope only)


When scope is `compare`, invoke the competitors-analysis skill for each competitor:

Use the Skill tool to invoke: `/competitors-analysis {competitor-name} {competitor-url}`

This delegates to the orthogonal `competitors-analysis` skill, which handles:
- Repository cloning and validation
- Evidence-based code analysis (file:line citations)
- Competitor profile generation

## Phase 3: Synthesis


After all agents complete, synthesize findings in the main conversation context.

### Cross-Validation


Compare findings across agents (Claude vs Claude, Claude vs Codex):
- Agreement = high-confidence finding
- Disagreement = investigate deeper (one agent may have missed context)
- Codex-only finding = different model perspective; validate manually
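One way to mechanize this comparison, assuming each agent's findings have been normalized to one `file:line issue` entry per line (the file contents below are toy examples):

```shell
# Toy normalized finding lists from two agents (illustrative).
printf '%s\n' "App.tsx:12 duplicate entry point" "api.ts:40 missing auth check" | sort > claude.txt
printf '%s\n' "App.tsx:12 duplicate entry point" "routes.ts:7 unused endpoint" | sort > codex.txt

comm -12 claude.txt codex.txt > agreed.txt       # both agents: high confidence
comm -23 claude.txt codex.txt > claude_only.txt  # investigate deeper
comm -13 claude.txt codex.txt > codex_only.txt   # validate manually
cat agreed.txt
```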

### Quantification


Extract hard numbers from agent reports:

| Metric | What to measure |
|---|---|
| First-screen interactive elements | Total count of buttons/links/inputs visible to a new user |
| Feature entry point duplication | Number of features with 2+ entry points |
| API endpoints without frontend consumer | Count of unused backend routes |
| Onboarding steps to first value | Steps from launch to first successful action |
| Module coupling score | Number of circular or bidirectional dependencies |
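As an illustration of how one of these metrics can be computed mechanically, here is a rough count of backend routes with no frontend consumer, assuming an Express-style route file (all paths and patterns below are hypothetical):

```shell
# Toy project layout to demonstrate the counting approach.
mkdir -p demo/src demo/web
cat > demo/src/routes.js <<'EOF'
app.get('/api/sessions', listSessions);
app.post('/api/sessions', createSession);
app.get('/api/legacy/export', exportData);
EOF
echo "fetch('/api/sessions')" > demo/web/app.js

# Unique endpoint paths defined in the backend:
backend=$(grep -rhoE "'/api/[^']+'" demo/src | tr -d "'" | sort -u)

# Count endpoints that no frontend file references:
unused=0
for ep in $backend; do
  grep -rq "$ep" demo/web || unused=$((unused + 1))
done
echo "unused endpoints: $unused"
```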

### Structured Output


Produce a layered optimization report:

```markdown
# Product Analysis Report

## Executive Summary
[1-2 sentences: key finding]

## Quantified Findings
| Metric | Value | Assessment |
|---|---|---|
| ... | ... | ... |

## P0: Critical (block launch)
[Issues that prevent basic usability]

## P1: High Priority (launch week)
[Issues that significantly degrade experience]

## P2: Medium Priority (next sprint)
[Issues worth addressing but not blocking]

## Cross-Model Insights
[Findings that only one model identified — worth investigating]

## Competitive Position (if compare scope)
[How we compare on key dimensions]
```

## Workflow Checklist


- Parse `$ARGUMENTS` for scope
- Auto-detect Codex CLI availability (`which codex`)
- Auto-detect project type (package.json / pyproject.toml / etc.)
- Launch Claude Code Explore agents (3-5 parallel, background)
- Launch Codex CLI commands (2-3 parallel, background) if detected
- Invoke `/competitors-analysis` if `compare` scope
- Collect all agent results
- Cross-validate findings
- Quantify metrics
- Generate structured report with P0/P1/P2 priorities

## References


- references/analysis_dimensions.md — Detailed audit dimension definitions and prompts
- references/synthesis_methodology.md — How to weight and merge multi-agent findings
- references/codex_patterns.md — Codex CLI invocation patterns and flag reference