vendor-rip


vendor.rip Agent Skill

Scan, assess, plan, and replace SaaS tools in any codebase. Works with any AI agent: Claude Code, Cursor, Windsurf, or any tool with codebase access.

Version & Prerequisites

  • Skill version: 0.1.1
  • Spec: design/agent-skill-spec.md v0.1.1
  • Requires: file system access, code search, web search, package installation (npm install, pip install, etc.), code generation, command execution
  • Optional: tool profiles in profiles/{tool}/ for enriched analysis

Quick Start

Three entry points:
  • Scan: vendor-rip scan <tool> — deep assessment of one tool
  • Discover: vendor-rip discover — find all rippable SaaS in this codebase
  • Plan: vendor-rip plan <tool> — generate migration plan (runs scan first if needed)

How This Skill Works

This skill follows a five-step journey:
┌─────────────────────────────────────────────────────────────┐
│  ○ Scan   ○ Plan   ○ Execute   ○ Validate   ○ Done        │
└─────────────────────────────────────────────────────────────┘
Between Scan and Plan, the developer can get community context by exchanging scan data at vendor.rip/calibrate. This is a natural part of entering planning — not a separate step. After Done, teams often run discover to find more candidates.
Each phase produces structured output. Community exchange is always manual — the developer copies a file and pastes it at vendor.rip/calibrate. You never send data anywhere. Every phase can be entered independently, though later phases benefit from earlier ones.
All output files are written to the project root. Session state persists in ~/.vendor-rip/session.


Phase 1: Scan

Goal

Analyze this codebase's integration with a specific SaaS tool. Produce a structured report of integration points, feature usage, and complexity assessment.

Session & Handoff

Before starting any work, check for existing context:
  1. Arguments (Claude Code): If invoked with --token=vrip_h_..., use that token.
  2. Session file: Read ~/.vendor-rip/session. If it contains a vrip_h_ token, use it.
  3. Interactive prompt: Ask the developer: Do you have a vendor.rip token from a previous assessment? (paste token or press Enter to skip)
If a token is provided:
  • Call GET https://vendor.rip/api/handoff/{token}
  • The response contains: tool name, features assessed, annual spend, team size, Rip Score, recommended stack, feature breakdown, risk flags
  • Write the token to ~/.vendor-rip/session
  • Display: "I have context from your vendor.rip assessment. {toolName} (Rip Score: {ripScore}). Features: {featuresUsed}. Scanning your codebase now..."
  • Use the features list to focus the scan — prioritize finding integration points for these specific features
  • Use the recommended stack to inform the planning phase
If no token, token is invalid, or network is unavailable:
  • Proceed normally (cold start). All functionality works without a token.
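A minimal Python sketch of this resolution order (the --token argument format and session-file path follow this spec; the parsing itself is illustrative):

```python
import os
import re

def resolve_token(argv, session_file=None):
    """Resolve a handoff token in the documented order: CLI argument,
    then session file, then None (fall back to the interactive prompt)."""
    # 1. --token=vrip_h_... argument (Claude Code invocation)
    for arg in argv:
        m = re.match(r"--token=(vrip_h_\S+)$", arg)
        if m:
            return m.group(1)
    # 2. Session file containing a vrip_h_ token
    if session_file is None:
        session_file = os.path.expanduser("~/.vendor-rip/session")
    if os.path.exists(session_file):
        with open(session_file) as f:
            token = f.read().strip()
        if token.startswith("vrip_h_"):
            return token
    # 3. No token — caller prompts the developer (cold start if skipped)
    return None
```

A missing or non-handoff token simply falls through to the interactive prompt, so every path degrades to the cold start.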

Methodology

  1. Find the SDK
    Search the project's dependency files (package.json, requirements.txt, pyproject.toml, go.mod, Gemfile, build.gradle, pom.xml, Cargo.toml) for the tool's SDK package.
    If a tool profile exists in profiles/{tool}/, load it for known package names and import patterns. If no profile, search the web for "{tool} SDK {language}" to identify the package.
    Record: package name, version, source.
  2. Map integration points
    Grep for all imports of the SDK package across the codebase. For each import, trace the usage:
    • Which SDK methods or classes are called
    • How many times each method is called
    • How many files reference each method
    • The surrounding code context (is this in a utility wrapper, scattered inline, or deeply embedded in business logic?)
    Record each integration point as {method, calls_count, files_count, pattern_type}, where pattern_type is one of: wrapper (centralized), scattered (inline across many files), embedded (intertwined with business logic).
  3. Map to features
    If a tool profile exists: use the feature mapping from profiles/{tool}/features.yaml to classify each method into a feature category.
    If no profile: use your knowledge of the tool's API to classify methods into feature categories. For example:
    • amplitude.track() maps to event_tracking
    • amplitude.identify() maps to user_identification
    • amplitude.Experiment.fetch() maps to experimentation
    If you are unsure about a method's feature category, search the tool's SDK documentation online. Mark any uncertain classifications with a note.
  4. Assess complexity per feature
    Rate each feature: trivial (mature OSS alternatives, straightforward swap), moderate (some design work, partial OSS), hard (significant effort, limited OSS, complex state), or very_hard (deep integration, no alternatives, architectural changes needed).
    Base the rating on: code patterns observed, OSS alternatives available, integration depth, and whether the feature involves stored state or real-time behavior. Provide brief reasoning for each. Be honest — if something looks hard, say so.
  5. Collect evidence
    Read the lock file for the tool's packages. Compute lock_hash: sort all related package names alphabetically, concatenate as {name}@{version} joined by newlines, and SHA-256 the result. Also record: scan_duration_seconds (wall time for the full scan), dependency_count, and the full dependencies list.
  6. Produce the scan report
    Save to ./vendor-rip-report.json with this schema:
    {
      "tool": {
        "name": "<tool_name>",
        "slug": "<tool_slug>",
        "sdk_package": "<package_name>",
        "sdk_version": "<version>"
      },
      "integration_points": [
        {
          "method": "<method_name>",
          "calls_count": 14,
          "files_count": 4,
          "pattern_type": "wrapper|scattered|embedded",
          "feature_category": "<feature_category>",
          "complexity_assessment": "trivial|moderate|hard|very_hard",
          "reasoning": "<brief justification>"
        }
      ],
      "feature_coverage": {
        "detected_features": ["<feature_1>", "<feature_2>"],
        "estimated_total_features": 12,
        "coverage_pct": 25
      },
      "data_dependencies": [
        { "type": "env_var|config_file|data_store|api_key", "name": "<name>", "description": "<desc>" }
      ],
      "dependencies": ["<pkg_1>", "<pkg_2>"],
      "agent_assessment": {
        "overall_complexity": "trivial|moderate|hard|very_hard",
        "estimated_days_min": 3,
        "estimated_days_max": 5,
        "key_risks": ["<risk_1>"],
        "recommended_approach": "<approach and rationale>"
      },
      "evidence": {
        "package_lock_hash": "sha256:<hash>",
        "scan_duration_seconds": 12.4,
        "sdk_dependency_tree": ["<pkg_1>@<version>", "<pkg_2>@<version>"]
      }
    }
    Note on data_dependencies: Search for environment variables (grep for process.env, os.environ, .env files), config files (tool-specific like amplitude.config.js, .amplituderc), API keys, and data stores related to this tool. These are critical for the cleanup phase.
    Include one entry in integration_points per detected SDK method. Populate feature_coverage.estimated_total_features using the tool profile (if available) or your best estimate of the tool's total feature surface.
  7. Display the scan report and transition to planning
    Display the full output in two parts: the scan report, then the community context step. The progress bar appears first.
    ┌─────────────────────────────────────────────────────────────┐
    │  ✓ Scan   ▸ Plan   ○ Execute   ○ Validate   ○ Done        │
    └─────────────────────────────────────────────────────────────┘
    
    {tool} | {sdk_package} v{sdk_version}
    
      Method                   Calls   Files   Feature                Complexity   Pattern
      -------------------------------------------------------------------------------------
      {method}                 {n}     {n}     {feature}              {complexity} {pattern}
      ...
    
      Features: {n} of ~{total} ({pct}%) | Complexity: {level} | Est: {min}-{max} days
    
      Risks:
        - {risk_1}
        - {risk_2}
    
      Saved: ./vendor-rip-report.json
    
    ────────────────────────────────────────────────────────────────
      Next: Get community context for your migration plan.
    
      1. Copy the block below
      2. Paste at vendor.rip/calibrate
      3. Paste the insights code you get back here
    
      tool: {tool_name}
      sdk: {sdk_package}@{sdk_version}
      features:
        - {feature}: {calls_count} calls
      complexity: {overall_complexity}
      depth: {methods} methods across {files} files
      evidence:
        lock_hash: {sha256}
        scan_duration: {seconds}s
      session: {token or empty}
    
      (saved to ./vendor-rip-scan-summary.txt)
    ────────────────────────────────────────────────────────────────
    
    Insights code: ___ (or Enter to plan with local analysis only)
    If no tool profile was used, append after "Saved:":
      Note: Agent assessment — no community-calibrated rules for this tool.
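The pattern_type and feature classifications from steps 2 and 3 can be sketched as helpers. The thresholds and mapping entries below are illustrative assumptions (the example mappings come from step 3); the agent's reading of the surrounding code takes precedence over any fixed rule:

```python
# Hypothetical helpers for steps 2-3. Thresholds and mappings are
# illustrative, not part of the spec.

def pattern_type(files_count, in_business_logic):
    """Classify an integration point per the step 2 definitions."""
    if in_business_logic:
        return "embedded"    # intertwined with business logic
    if files_count <= 2:
        return "wrapper"     # centralized in a utility module
    return "scattered"       # inline across many files

# Example mappings from step 3; unknown methods get flagged for a
# documentation lookup and an uncertainty note.
KNOWN_FEATURES = {
    "amplitude.track": "event_tracking",
    "amplitude.identify": "user_identification",
    "amplitude.Experiment.fetch": "experimentation",
}

def classify_method(method):
    """Return (feature_category, certain)."""
    if method in KNOWN_FEATURES:
        return KNOWN_FEATURES[method], True
    return "unknown", False
```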
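Step 5's lock_hash computation is fully specified above and can be sketched directly:

```python
import hashlib

def lock_hash(packages):
    """packages: {name: version} for the tool's related packages.
    Sort names alphabetically, join as {name}@{version} with newlines,
    and SHA-256 the result, per step 5."""
    lines = [f"{name}@{packages[name]}" for name in sorted(packages)]
    digest = hashlib.sha256("\n".join(lines).encode("utf-8")).hexdigest()
    return f"sha256:{digest}"
```

Sorting before joining makes the hash independent of the order packages appear in the lock file.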


Phase 2: Community Context (between Scan and Plan)

This is not a separate step in the journey — it's the transition between Scan and Plan. The community exchange happens here, feeding calibration data into the plan. The progress bar stays on ▸ Plan throughout.

Generate the scan summary

Save to ./vendor-rip-scan-summary.txt alongside the scan report:
tool: {tool_name}
sdk: {sdk_package}@{sdk_version}
features:
  - {feature}: {calls_count} calls
complexity: {overall_complexity}
depth: {methods} methods across {files} files
evidence:
  deps: {comma-separated dependency list}
  lock_hash: {sha256}
  scan_duration: {seconds}s
session: {read from ~/.vendor-rip/session if it exists, otherwise leave empty}
The summary text is printed inline in the scan output — the developer copies it directly from the terminal. The file is a backup for programmatic use or later reference.
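A minimal Python sketch of rendering this summary from scan-report fields (the function signature and inputs are assumptions; the output format follows the template above):

```python
import os

def render_summary(tool, sdk, features, complexity, methods, files,
                   deps, lock_hash, duration, session_file=None):
    """features: list of (name, calls_count) pairs from the scan report."""
    if session_file is None:
        session_file = os.path.expanduser("~/.vendor-rip/session")
    session = ""  # left empty when no session exists, per the template
    if os.path.exists(session_file):
        with open(session_file) as f:
            session = f.read().strip()
    feature_lines = "".join(f"  - {n}: {c} calls\n" for n, c in features)
    return (
        f"tool: {tool}\n"
        f"sdk: {sdk}\n"
        f"features:\n{feature_lines}"
        f"complexity: {complexity}\n"
        f"depth: {methods} methods across {files} files\n"
        f"evidence:\n"
        f"  deps: {', '.join(deps)}\n"
        f"  lock_hash: {lock_hash}\n"
        f"  scan_duration: {duration}s\n"
        f"session: {session}\n"
    )
```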

Handling the response

If the developer pastes an insights code:
The code is a short string like vrip_8nK3qM2pL4....
  1. Make a GET request to https://api.vendor.rip/insights/{code}
  2. The response contains: peer comparison data, calibrated estimates, and a session token
  3. Store the session token to ~/.vendor-rip/session (create the directory and file if needed)
  4. Display with updated progress bar:
┌─────────────────────────────────────────────────────────────┐
│  ✓ Scan   ▸ Plan   ○ Execute   ○ Validate   ○ Done        │
└─────────────────────────────────────────────────────────────┘

Community context loaded.

  Compared to {N} similar scans:
    Your integration depth: {percentile}th percentile
    Calibrated estimate: {days} days (vs local estimate: {local_days})
    Recommended approach: {approach} ({success_rate}% success rate)
    Watch out: {common_gotcha}

Generate migration plan? (y/n)
  5. Use this calibration data when generating the migration plan in Phase 3.
If the developer presses Enter (plan with local analysis):
┌─────────────────────────────────────────────────────────────┐
│  ✓ Scan   ▸ Plan   ○ Execute   ○ Validate   ○ Done        │
└─────────────────────────────────────────────────────────────┘

Generate migration plan? (y/n)
Do not create or update any session file.
If you cannot make HTTP requests (agent capability limitation):
Display: "Cannot fetch community context in this environment. The scan summary is saved to ./vendor-rip-scan-summary.txt — you can paste it at vendor.rip/calibrate manually." Then proceed to the "Generate migration plan?" prompt.
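For agents that script this exchange, a minimal Python sketch of the fetch-and-store flow (the endpoint is from this spec; the response field name session_token is an assumption):

```python
import json
import os
import urllib.request

def store_session(token, session_file=None):
    """Persist the session token as a single line, creating the
    ~/.vendor-rip directory if needed."""
    if session_file is None:
        session_file = os.path.expanduser("~/.vendor-rip/session")
    os.makedirs(os.path.dirname(session_file), exist_ok=True)
    with open(session_file, "w") as f:
        f.write(token)

def fetch_insights(code):
    """GET the insights payload; return None when HTTP is unavailable
    so the caller can fall back to local analysis."""
    url = f"https://api.vendor.rip/insights/{code}"
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            data = json.load(resp)
    except OSError:
        return None  # no network / capability limitation — cold path
    token = data.get("session_token")  # field name is an assumption
    if token:
        store_session(token)
    return data
```

A None return maps onto the "Cannot fetch community context" path above: print the fallback message and continue to the plan prompt.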

Session management

  • Check for ~/.vendor-rip/session at scan start. If it exists, read the token.
  • Sessions can be pre-seeded from the website chat via a handoff token (vrip_h_ prefix), or created after the first community calibration exchange. Both use the same session file.
  • Include the token in the session: field of every calibration summary. The developer can see it and delete it before pasting for an anonymous submission.
  • If the file does not exist and the developer completes a community exchange, store the token from the insights code response.
  • The session file is a single line containing only the token string.


Phase 3: Plan

Goal

Generate a phased migration plan. Use community calibration data if available from Phase 2, otherwise use local analysis.

Methodology

  1. Select approach
    Choose the migration approach based on what you observed in the scan:

    Integration pattern                          | Recommended approach
    ---------------------------------------------|---------------------------------------------------------------
    Few calls, concentrated in one or two files  | Direct replacement — rewrite the files
    Many calls scattered across many files       | Adapter pattern — create a wrapper, replace behind it
    Deep integration with business logic         | Incremental replacement — swap feature by feature
    Mixed patterns                               | Adapter + incremental — wrap first, then replace incrementally

    If community data is available from Phase 2, prefer the approach with the highest observed success rate among similar integrations.
    Present the chosen approach to the developer with your rationale. Let them override your choice.
  2. Generate phased plan
    For each phase, include: tasks, estimated days, pitfalls to watch for, and validation criteria.
    Standard phases (adapt as needed): Setup (adapter/module structure, no behavior change) -> Core build (per feature, easiest first) -> Data migration (export/transform/import if needed) -> Parallel run (old + new side by side) -> Cutover (switch, monitor) -> Cleanup (remove SDK, config, imports).
  3. Save the plan
    Save to ./vendor-rip-plan.json:
    {
      "phases": [
        {
          "phase_name": "<name>",
          "estimated_days": 0.5,
          "tasks": ["<task_1>"],
          "pitfalls": ["<pitfall_1>"],
          "validation_criteria": ["<criterion_1>"]
        }
      ],
      "total_estimated_days": 6.0,
      "approach": { "name": "<approach>", "rationale": "<why>" },
      "risk_flags": ["<risk_1>"],
      "parallel_run_strategy": "<strategy>"
    }
  4. Display the plan
    ┌─────────────────────────────────────────────────────────────┐
    │  ✓ Scan   ▸ Plan   ○ Execute   ○ Validate   ○ Done        │
    └─────────────────────────────────────────────────────────────┘
    
    Migration Plan: amplitude -> custom replacement
    
    Approach: Adapter pattern
      Wrap all Amplitude calls behind src/lib/analytics.ts
      Replace implementation behind the adapter
      Rationale: 14 track() calls across 4 files — easier to change one adapter than 14 call sites
    
    Phase 1: Setup (0.5 days)
      Tasks:
        - Create src/lib/analytics.ts adapter module
        - Define interface matching current usage patterns
        - Point adapter to existing Amplitude SDK (no behavior change)
      Validation: All existing tests pass. No behavior change.
    
    Phase 2: Core build (2-3 days)
      Tasks:
        - Implement event tracking behind adapter (trivial — 0.5 days)
        - Implement user identification behind adapter (trivial — 0.3 days)
        - Implement experimentation replacement (hard — 1.5-2 days)
          WARNING: experiment assignment logic often has hidden state
      Validation: Each feature works independently behind the adapter.
    
    Phase 3: Data migration (1 day)
      Tasks:
        - Export historical events if needed
        - Set up new event storage (or decide to start fresh)
      Validation: Historical data accessible in new system (or conscious decision to skip).
    
    Phase 4: Parallel run (1-2 days)
      Tasks:
        - Run both Amplitude and replacement simultaneously
        - Compare outputs for consistency
        - Monitor for edge cases
      Validation: 48+ hours of parallel operation with no discrepancies.
      IMPORTANT: Do NOT cancel the Amplitude subscription until parallel run completes.
    
    Phase 5: Cutover & cleanup (0.5 days)
      Tasks:
        - Switch adapter to new implementation only
        - Remove @amplitude/* packages from dependencies
        - Remove Amplitude-specific config and env vars
        - Clean up unused imports
      Validation: Full test suite passes. No Amplitude references remain in codebase.
    
    Total: 5-7 days
    Risk flags:
      - Experimentation module is the hardest part — budget extra time
      - Session replay plugin needs an alternative or conscious removal
    
    Ready to start? (y/n)
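The approach table in step 1 can be sketched as a selection function over the scan report's integration points. The thresholds here are illustrative assumptions, not part of the spec, and the developer can always override the result:

```python
# Hypothetical selector for step 1. Input: integration_points entries
# from vendor-rip-report.json (each with files_count and pattern_type).
def select_approach(points):
    patterns = {p["pattern_type"] for p in points}
    total_files = sum(p["files_count"] for p in points)
    if patterns == {"embedded"}:
        return "incremental"          # deep business-logic coupling
    if len(patterns) > 1:
        return "adapter+incremental"  # mixed patterns: wrap, then replace
    if total_files <= 2:
        return "direct"               # few, concentrated call sites
    return "adapter"                  # many scattered call sites
```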


Phase 4: Execute

Goal

Perform the actual replacement. You write the code, the developer reviews.
Display the progress bar at each phase boundary:
┌─────────────────────────────────────────────────────────────┐
│  ✓ Scan   ✓ Plan   ▸ Execute   ○ Validate   ○ Done        │
│                     Phase 2/5: Core Build                   │
└─────────────────────────────────────────────────────────────┘

Guidelines

Follow the plan phase by phase. Do not skip ahead.
At each phase boundary:
  • Run the validation criteria before proceeding to the next phase
  • Present the code changes to the developer for review
  • If a validation fails, stop and discuss with the developer before continuing
When making implementation decisions:
  • Search the web for current best libraries. Do not rely on stale knowledge.
  • When multiple libraries could work, present options with tradeoffs. Let the developer choose. You execute.
When the plan hits reality:
  • If you discover something the scan missed, stop, report it, and adjust the plan.
  • If a phase is taking significantly longer than estimated, say so.
Commits: At each phase boundary, ask the developer if they want a git commit. These serve as rollback points — if a later phase fails, the developer can revert to the last checkpoint. Never commit without asking. Never force-push or amend.

Decision points to surface

Stop and ask rather than deciding silently: library choices, data decisions (migrate or start fresh?), architecture decisions (simple random split vs. targeting rules?), scope decisions (skip features only used in tests?).

Phase 5: Validate

Goal

Verify the replacement works correctly and the old tool is fully removed.

Checklist

Run these checks in order. If any fail, fix and re-check before proceeding.
  1. Run existing tests — execute the project's test suite (npm test, pytest, etc.). Report pass/fail counts.
  2. Grep for old SDK references — imports, API key references, config variables. Flag comments mentioning the old tool but do not auto-remove them.
  3. Check configuration — .env files, tool-specific config files (amplitude.config.js, .amplituderc), CI config, build tool plugins.
  4. Verify dependency removal — confirm old packages are gone from the dependency file and lock file. If not, remove and reinstall.
  5. Prompt for manual verification — list the key behaviors the developer should verify (feature-specific, based on what was replaced).
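Check 2 can be scripted. A minimal Python sketch — the file extensions, skip list, and search patterns below are illustrative, and it reports matches rather than editing anything (comments are flagged, never auto-removed):

```python
import os
import re

def find_references(root, patterns, exts=(".js", ".ts", ".py")):
    """Return (path, line_no, line) for every leftover match of the
    old tool's patterns (imports, API key names, config variables)."""
    regex = re.compile("|".join(re.escape(p) for p in patterns))
    hits = []
    for dirpath, dirnames, filenames in os.walk(root):
        # Skip vendored dependencies (illustrative skip list)
        dirnames[:] = [d for d in dirnames if d != "node_modules"]
        for name in filenames:
            if not name.endswith(exts):
                continue
            path = os.path.join(dirpath, name)
            with open(path, errors="ignore") as f:
                for i, line in enumerate(f, 1):
                    if regex.search(line):
                        hits.append((path, i, line.strip()))
    return hits
```

An empty result corresponds to the "Old SDK references: 0 remaining" line in the display format below.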

Display format

┌─────────────────────────────────────────────────────────────┐
│  ✓ Scan   ✓ Plan   ✓ Execute   ▸ Validate   ○ Done        │
└─────────────────────────────────────────────────────────────┘

Validation Results:

  Tests: 47 passed, 0 failed
  Old SDK references: 0 remaining
  Old config entries: 0 remaining
  Dependencies removed: <list>

  Manual checks needed:
    - <check_1>
    - <check_2>

All automated checks passed.


Phase 6: Complete

Goal

Produce the completion report and offer to exchange outcomes with the community.

Completion report

Save to ./vendor-rip-completion.json. This extends the scan report (all fields from vendor-rip-report.json are included) plus:
{
  "tool": { "name": "...", "slug": "...", "sdk_package": "...", "sdk_version": "..." },
  "integration_points": [ "... (same as scan report)" ],
  "feature_coverage": { "... (same as scan report)" },
  "data_dependencies": [ "... (same as scan report)" ],
  "dependencies": ["..."],
  "agent_assessment": { "... (same as scan report)" },
  "evidence": {
    "package_lock_hash_before": "sha256:<hash>",
    "package_lock_hash_after": "sha256:<hash>",
    "scan_duration_seconds": 12.4,
    "sdk_dependency_tree": ["<removed packages>"]
  },
  "migration": {
    "approach": "<approach_used>",
    "duration_days": 5.2,
    "features_replaced": ["event_tracking", "user_identification", "experimentation"],
    "outcome": "success|partial|reverted"
  },
  "per_feature_timing": [
    { "feature": "<feature>", "days": 0.5, "notes": "" }
  ],
  "gotchas_encountered": ["<gotcha_1>"],
  "validation_results": {
    "tests_passed": 47, "tests_failed": 0,
    "manual_checks": ["<check_1>"]
  }
}
Include one entry in phases per migration phase actually executed, with honest actual_days. Record every gotcha — these are the most valuable data points for the community.
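The lock-file hashes in the evidence block can be produced with a sketch like the one below, assuming hashing the raw lock file bytes; the function name is illustrative, while the sha256:&lt;hex&gt; output format matches the report fields above.

```python
import hashlib
from pathlib import Path

def lock_hash(lock_file: str) -> str:
    """Hash the raw lock file bytes in the report's sha256:<hex> format."""
    digest = hashlib.sha256(Path(lock_file).read_bytes()).hexdigest()
    return f"sha256:{digest}"

# Capture once before the migration starts and again after validation passes,
# e.g. evidence["package_lock_hash_before"] = lock_hash("package-lock.json")
```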

Completion summary

Generate and save to ./vendor-rip-completion-summary.txt:
tool: amplitude
sdk: @amplitude/analytics-browser@2.8.0
features:
  - event_tracking: 14 calls -> replaced (0.5 days)
  - user_identification: 3 calls -> replaced (0.3 days)
  - experimentation: 4 calls -> replaced (2.5 days)
approach: adapter pattern
duration: 5.2 days
outcome: success
gotcha: experiment assignment had hidden state in localStorage — needed migration
evidence:
  lock_hash_before: sha256:a1b2c3d4e5f6
  lock_hash_after: sha256:x7y8z9a0b1c2
  scan_duration: 12.4s
session: {read from ~/.vendor-rip/session, or empty}
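Assuming the completion report JSON is already in memory, the summary text can be rendered mechanically. A minimal sketch: the field names follow the report schema above, everything else (function name, omitted fields such as gotchas and evidence) is illustrative.

```python
def render_summary(report: dict) -> str:
    """Render the core of the plain-text completion summary from the report dict."""
    m = report["migration"]
    lines = [
        f"tool: {report['tool']['name']}",
        f"sdk: {report['tool']['sdk_package']}@{report['tool']['sdk_version']}",
        "features:",
    ]
    for entry in report.get("per_feature_timing", []):
        lines.append(f"  - {entry['feature']}: replaced ({entry['days']} days)")
    lines += [
        f"approach: {m['approach']}",
        f"duration: {m['duration_days']} days",
        f"outcome: {m['outcome']}",
    ]
    return "\n".join(lines)
```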

Display the completion report and transition

┌─────────────────────────────────────────────────────────────┐
│  ✓ Scan   ✓ Plan   ✓ Execute   ✓ Validate   ▸ Done        │
└─────────────────────────────────────────────────────────────┘

{tool} replaced in {days} days | {approach} | {outcome}

  Feature               Calls   Actual Days   Notes
  -----------------------------------------------------------
  {feature}             {n}     {days}        {notes or blank}
  ...

  Gotchas:
    - {gotcha_1}
    - {gotcha_2}

  Validation: {passed} passed, {failed} failed | {remaining} old references remaining

  Saved: ./vendor-rip-completion.json

────────────────────────────────────────────────────────────────
  Next: See what teams like yours did after this.

  1. Copy the block below
  2. Paste at vendor.rip/calibrate
  3. Paste the insights code you get back here

  tool: {tool_name}
  sdk: {sdk_package}@{sdk_version}
  features:
    - {feature}: {calls_count} calls -> replaced ({days} days)
  approach: {approach}
  duration: {total_days} days
  outcome: {outcome}
  gotcha: {gotcha or "none"}
  evidence:
    lock_hash_before: {sha256}
    lock_hash_after: {sha256}
    scan_duration: {seconds}s
  session: {token or empty}

  (saved to ./vendor-rip-completion-summary.txt)
────────────────────────────────────────────────────────────────

Insights code: ___ (or Enter to finish)

Handling the response

If the developer pastes an insights code:
  1. Make a GET request to https://api.vendor.rip/insights/{code}
  2. The response contains peer cluster data and recommendations
  3. Update ~/.vendor-rip/session with the refreshed token
  4. Display with all-done progress bar:
┌─────────────────────────────────────────────────────────────┐
│  ✓ Scan   ✓ Plan   ✓ Execute   ✓ Validate   ✓ Done        │
└─────────────────────────────────────────────────────────────┘

Your Rip Receipt:
  RIP {tool} — {days} days — {approach}

Teams in your cluster typically find opportunities in:
  {category_1}    — {pct}% had rippable tools
  {category_2}    — {pct}% had rippable tools
  {category_3}    — {pct}% had rippable tools

{total_tools} tools across these categories.

Run a discovery scan? (y/n)
If the developer presses Enter (finish):
┌─────────────────────────────────────────────────────────────┐
│  ✓ Scan   ✓ Plan   ✓ Execute   ✓ Validate   ✓ Done        │
└─────────────────────────────────────────────────────────────┘

Done. Reports saved to ./vendor-rip-completion.json and ./vendor-rip-completion-summary.txt
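The insights request itself is a single GET. A minimal sketch, assuming a JSON response body (the spec does not define the payload shape); the empty-dict fallback mirrors the error-handling rule that the skill never blocks on network.

```python
import json
import urllib.parse
import urllib.request

INSIGHTS_BASE = "https://api.vendor.rip/insights/"

def insights_url(code: str) -> str:
    """Build the GET URL for a pasted insights code."""
    return INSIGHTS_BASE + urllib.parse.quote(code.strip())

def fetch_insights(code: str, timeout: float = 10.0) -> dict:
    """Fetch peer cluster data; return {} on any network failure."""
    try:
        with urllib.request.urlopen(insights_url(code), timeout=timeout) as resp:
            return json.loads(resp.read().decode("utf-8"))
    except OSError:
        return {}  # "Cannot reach vendor.rip. Skipping community insights."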


Phase 7: Discover

Goal

Broad scan of the codebase to find all SaaS tool dependencies that might be replacement candidates.

Entry points

  • vendor-rip discover (direct)
  • After a completion exchange, when peer cluster data suggests categories to explore

Methodology

  1. Read dependency files
    Read all dependency manifests in the project root (package.json, requirements.txt, pyproject.toml, go.mod, Gemfile, build.gradle, pom.xml, Cargo.toml, composer.json).
  2. Identify SaaS SDKs
    For each dependency, determine whether it is a SaaS tool's SDK (vs. a framework, library, or utility). Heuristics: package name matches a known SaaS company (@amplitude/*, @datadog/*, launchdarkly-*), requires API keys, or describes itself as a client for a hosted service.
    If unsure, search the web. Do not guess — a false positive is worse than a miss.
    Exclude: frameworks (React, Django), utilities (lodash), infrastructure SDKs (aws-sdk), databases (pg, redis).
  3. Quick assessment per tool
    For each SaaS SDK: count integration points (grep for imports/calls), assess complexity, estimate days, categorize (analytics, feature flags, monitoring, auth, CMS, internal tools, etc.), and estimate savings if a tool profile with pricing.yaml is available. If no pricing data, omit savings — do not guess.
  4. Save the discovery report
    Save to ./vendor-rip-discovery.json:
    {
      "tools_detected": [
        {
          "name": "<tool>",
          "package": "<pkg>",
          "version": "<version>",
          "integration_points_count": 8,
          "quick_assessment": "<one-line summary>",
          "rip_score_estimate": 85,
          "savings_estimate": "$12k/yr or null"
        }
      ],
      "categorized": {
        "quick_wins": [{ "name": "<tool>", "package": "<pkg>", "reason": "<why>" }],
        "bigger_projects": [{ "name": "<tool>", "package": "<pkg>", "reason": "<why>" }],
        "probably_keep": [{ "name": "<tool>", "package": "<pkg>", "reason": "<why>" }]
      },
      "total_addressable_savings_estimate": "$40k-80k/yr or null"
    }
  5. Categorize and display
    Sort into: Quick wins (< 3 days, trivial-moderate), Bigger projects (3+ days, moderate-hard), Probably keep (very_hard, deeply integrated, or high value). Show savings only when pricing data is available.
    ┌──────────────────────────────────────┐
    │  ▸ Discover   ○ Scan   ○ Plan  ...  │
    └──────────────────────────────────────┘
    
    Found 6 SaaS tools:
    
      Tool             Category         Points   Complexity    Est. Days   Savings
      ---------------------------------------------------------------------------
      LaunchDarkly     feature flags      8      trivial         1-2       $12k/yr
      Datadog          monitoring        14      moderate        6-8       $50k/yr
      Auth0            auth               6      moderate        3-5
      Retool           internal tools     3      moderate        4-6
      Contentful       CMS                4      trivial         2-3
      Statuspage       status page        1      trivial         0.5       $3k/yr
    
    Quick wins (< 3 days):
      LaunchDarkly, Contentful, Statuspage
    
    Bigger projects:
      Datadog (RUM replacement is the hard part), Retool (internal tool rebuild)
    
    Probably keep:
      Auth0 — deep integration, good value for money
    
    Addressable savings: $65k+/yr (partial — pricing unknown for 3 tools)
    
    Saved: ./vendor-rip-discovery.json
    
    Deep scan a tool? (type name, or Enter to finish)
    If the developer names a tool, update the progress bar and run Phase 1 (Scan):
    ┌─────────────────────────────────────────────────────────────┐
    │  ✓ Discover   ▸ Scan   ○ Plan   ○ Execute   ...           │
    └─────────────────────────────────────────────────────────────┘
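The heuristic in step 2 can be sketched as a first-pass filter over package.json. This is a sketch under stated assumptions: the pattern and exclusion lists are illustrative seeds (extend them from tool profiles), and anything classified "unknown" still requires a web search rather than a guess.

```python
import json
import re
from pathlib import Path

# Illustrative seed lists; never treat these as exhaustive.
SAAS_PATTERNS = [r"^@amplitude/", r"^@datadog/", r"^launchdarkly-", r"^@sentry/"]
EXCLUDED = {"react", "django", "lodash", "aws-sdk", "pg", "redis"}

def classify(package: str) -> str:
    """First-pass label: 'saas', 'exclude', or 'unknown' (needs a web search)."""
    if package in EXCLUDED:
        return "exclude"
    if any(re.match(p, package) for p in SAAS_PATTERNS):
        return "saas"
    return "unknown"  # do not guess; a false positive is worse than a miss

def saas_candidates(manifest_path: str) -> list:
    """List dependencies from a package.json that match a known SaaS pattern."""
    manifest = json.loads(Path(manifest_path).read_text())
    deps = {**manifest.get("dependencies", {}), **manifest.get("devDependencies", {})}
    return sorted(d for d in deps if classify(d) == "saas")
```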


Error Handling

Handle these situations gracefully — degrade, don't crash.
  • Tool not found in dependencies — "No SDK detected for {tool}." List the dependency files you checked. Suggest retrying with the exact package name, or running discover to see what's installed.
  • Scan interrupted — Do not save partial results. Report what happened. The developer can re-run — scans are idempotent.
  • Migration fails (tests break) — Show the failing tests. Do NOT overwrite the developer's code without confirmation. Offer: fix the issue, revert to last checkpoint (git commit), or abort.
  • Validation fails — Show specific failures. Offer: fix and re-validate, accept as partial outcome, or abandon.
  • Network unavailable — "Cannot reach vendor.rip. Skipping community insights." All local functionality works — the skill never blocks on network.
  • Invalid insights code — "That code didn't work. Try copying it again, or press Enter to skip." Allow retry.
  • Tool profile malformed — Log a warning. Fall back to agent-only reasoning. Do not fail the scan.
  • Insufficient file permissions — Report which files are inaccessible. Proceed with what's available. Note the gap in the report.


Appendix A: Working with Tool Profiles

Tool profiles are optional YAML files that make scans faster and more accurate. They are the community-calibrated rules for a specific tool.

If a profile exists (profiles/{tool-slug}/)

Load: features.yaml (method-to-feature mapping), patterns.yaml (import/call signatures), complexity.yaml (pattern-to-tier mapping), migration.yaml (known paths per feature), pricing.yaml (cost ranges), data.yaml (export capabilities). Profiles accelerate scans and enrich reports with community-calibrated data.
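Loading can be sketched as below; the function name is illustrative. Missing files degrade silently, matching the "tool profile malformed" rule, and the sketch stays stdlib-only by returning raw YAML text (parse with yaml.safe_load if PyYAML is installed).

```python
from pathlib import Path
from typing import Optional

PROFILE_FILES = ("features", "patterns", "complexity", "migration", "pricing", "data")

def load_profile(profiles_dir: str, tool_slug: str) -> Optional[dict]:
    """Return raw YAML text per profile file, or None when no profile exists."""
    root = Path(profiles_dir) / tool_slug
    if not root.is_dir():
        return None  # no profile: fall back to agent-only reasoning
    profile = {}
    for name in PROFILE_FILES:
        f = root / f"{name}.yaml"
        if f.exists():
            profile[name] = f.read_text()  # parse with yaml.safe_load if available
    return profile
```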

If no profile exists

You operate in reasoning mode: identify the SDK from dependencies, search the web for documentation, read TypeScript definitions if available, map methods to features using general knowledge, assess complexity from code patterns. Mark results: "Agent assessment — no community-calibrated rules for this tool."
The results are still useful — a capable agent reasoning about code patterns beats a generic article. But profiles produce more precise and consistent results.

Contributing new patterns

When a scan or migration produces new data — method classifications, validated complexity ratings, discovered gotchas — note them. These can be contributed back through the community calibration exchange or as PRs to the profiles repository.

Appendix B: Files Created by This Skill

  File                                  Created when                       Purpose
  --------------------------------------------------------------------------------------------------
  ./vendor-rip-report.json              After scan (Phase 1)               Full local scan report
  ./vendor-rip-scan-summary.txt         After scan (Phase 2)               Scan summary for community calibration exchange
  ./vendor-rip-plan.json                After planning (Phase 3)           Migration plan with phases, tasks, validation criteria
  ./vendor-rip-completion.json          After migration (Phase 6)          Full completion report extending the scan report
  ./vendor-rip-completion-summary.txt   After migration (Phase 6)          Completion summary for community calibration exchange
  ./vendor-rip-discovery.json           After discovery (Phase 7)          All detected SaaS tools with quick assessments
  ~/.vendor-rip/session                 After first calibration exchange   Session token for linking exchanges across projects

All project files are written to the project root (next to package.json or equivalent). The session file is the only file written outside the project.


Appendix C: Trust and Transparency

This skill is designed around a simple principle: the developer sees everything.
  • No hidden data. The calibration summaries are plain text. What the developer reads is exactly what gets exchanged. No JSON blobs, no encoded payloads, no hidden fields.
  • No network calls without consent. The only network call this skill makes is a GET to https://api.vendor.rip/insights/{code} when the developer explicitly pastes an insights code. The skill never phones home, never sends telemetry, never posts data.
  • Session is visible and deletable. The session token appears in every calibration summary. The developer can delete it from the text before pasting (anonymous exchange) or delete ~/.vendor-rip/session entirely.
  • Local-first. Every phase produces useful output without any network interaction. Sharing is a bonus, not a requirement.