Initial Analysis
Step 1 of 6 in the Reverse Engineering to Spec-Driven Development process.
Estimated Time: 5 minutes
Output: analysis-report.md
When to Use This Skill
Use this skill when:
- Starting reverse engineering on a new or existing codebase
- Need to understand tech stack and architecture before making changes
- Want to assess project completeness and identify gaps
- First time analyzing this project with the toolkit
- User asks "analyze this codebase" or "what's in this project?"
Trigger Phrases:
- "Analyze this codebase"
- "What tech stack is this using?"
- "How complete is this application?"
- "Run initial analysis"
- "Start reverse engineering process"
What This Skill Does
This skill performs comprehensive initial analysis by:
- Asking which path you want - Greenfield (new app) or Brownfield (manage existing)
- Auto-detecting application context - Identifies programming languages, frameworks, and build systems
- Analyzing directory structure - Maps architecture patterns and key components
- Scanning existing documentation - Assesses current documentation quality
- Estimating completeness - Evaluates how complete the implementation is
- Generating analysis report - Creates analysis-report.md with all findings
- Storing path choice - Saves your selection to guide subsequent steps
Choose Your Path
FIRST: Determine which path aligns with your goals.
Path A: Greenfield (Build New App from Business Logic)
Use when:
- Building a new application based on existing app's business logic
- Migrating to a different tech stack
- Want flexibility in implementation choices
- Need platform-agnostic specifications
Result:
- Specifications focus on WHAT, not HOW
- Business requirements only
- Can implement in any technology
- Tech-stack agnostic
Example: "Extract the business logic from this Rails app so we can rebuild it in Next.js"
Path B: Brownfield (Manage Existing with Spec Kit)
Use when:
- Managing an existing codebase with GitHub Spec Kit
- Want spec-code validation with /speckit.analyze
- Planning upgrades or refactoring
- Need specs that match current implementation exactly
Result:
- Specifications include both WHAT and HOW
- Business logic + technical implementation
- Tech-stack prescriptive
- /speckit.analyze can validate alignment
Example: "Add GitHub Spec Kit to this Next.js app so we can manage it with specs going forward"
Batch Session Auto-Configuration
Before showing questions, check for batch session by walking up directories:

```bash
# Function to find batch session file (walks up like .git search)
find_batch_session() {
  local current_dir="$(pwd)"
  while [[ "$current_dir" != "/" ]]; do
    # Stop at git root to prevent path traversal
    if [[ -d "$current_dir/.git" ]] && [[ ! -f "$current_dir/.stackshift-batch-session.json" ]]; then
      return 1
    fi
    if [[ -f "$current_dir/.stackshift-batch-session.json" ]]; then
      echo "$current_dir/.stackshift-batch-session.json"
      return 0
    fi
    current_dir="$(dirname "$current_dir")"
  done
  return 1
}

# Check if batch session exists
BATCH_SESSION=$(find_batch_session)
if [[ -n "$BATCH_SESSION" ]]; then
  echo "✅ Using batch session configuration from: $BATCH_SESSION"
  jq '.answers' "$BATCH_SESSION"
  # Auto-apply answers from batch session
  # Skip questionnaire entirely
fi
```
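As a concrete sketch of the walk-up behavior, the function can be exercised against a throwaway directory tree (the session file contents and `ws-hours` layout here are illustrative; only the `.answers` key mirrors what the skill reads):

```shell
#!/usr/bin/env bash
set -euo pipefail

# Same lookup as above: walk up until a session file or a bare git root is hit
find_batch_session() {
  local current_dir="$(pwd)"
  while [[ "$current_dir" != "/" ]]; do
    if [[ -d "$current_dir/.git" ]] && [[ ! -f "$current_dir/.stackshift-batch-session.json" ]]; then
      return 1
    fi
    if [[ -f "$current_dir/.stackshift-batch-session.json" ]]; then
      echo "$current_dir/.stackshift-batch-session.json"
      return 0
    fi
    current_dir="$(dirname "$current_dir")"
  done
  return 1
}

# Hypothetical layout: session file at the batch root, agent working in a child dir
ROOT=$(mktemp -d)
mkdir -p "$ROOT/ws-hours"
echo '{"answers":{"route":"greenfield"}}' > "$ROOT/.stackshift-batch-session.json"

cd "$ROOT/ws-hours"
SESSION=$(find_batch_session)
echo "Found: $SESSION"
```

The agent in `ws-hours/` never sees the file locally; the loop finds it one level up, exactly as in the directory diagram below.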
**If batch session exists:**
1. Walk up directory tree to find `.stackshift-batch-session.json`
2. Load answers from found batch session file
3. Show: "Using batch session configuration: route=osiris, spec_output=~/git/specs, ..."
4. Skip all questions below
5. Proceed directly to analysis with pre-configured answers
6. Save answers to local `.stackshift-state.json` as usual
**Example directory structure:**
```
~/git/osiris/
├── .stackshift-batch-session.json ← Batch session here
├── ws-vehicle-details/
│   └── [agent working here finds parent session]
├── ws-hours/
│   └── [agent working here finds parent session]
└── ws-contact/
    └── [agent working here finds parent session]
```
**If no batch session:**
- Continue with normal questionnaire below
---
Step 1: Auto-Detect Application Type
Before asking questions, detect what kind of application this is:

```bash
# Check repository name and structure
REPO_NAME=$(basename "$(pwd)")
PARENT_DIR=$(basename "$(dirname "$(pwd)")")

# Detection patterns (in priority order)
# Add your own patterns here for your framework/architecture!

# Monorepo service detection
if [[ "$PARENT_DIR" == "services" || "$PARENT_DIR" == "apps" ]] && [ -f "../../package.json" ]; then
  DETECTION="monorepo-service"
  echo "📦 Detected: Monorepo Service (services/* or apps/* directory)"

# Nx workspace detection
elif [ -f "nx.json" ] || [ -f "../../nx.json" ]; then
  DETECTION="nx-app"
  echo "⚡ Detected: Nx Application"

# Turborepo detection
elif [ -f "turbo.json" ] || [ -f "../../turbo.json" ]; then
  DETECTION="turborepo-package"
  echo "🚀 Detected: Turborepo Package"

# Lerna package detection
elif [ -f "lerna.json" ] || [ -f "../../lerna.json" ]; then
  DETECTION="lerna-package"
  echo "📦 Detected: Lerna Package"

# Generic application (default)
else
  DETECTION="generic"
  echo "🔍 Detected: Generic Application"
fi

echo "Detection type: $DETECTION"
```
**How Detection Patterns Work:**
Detection identifies WHAT patterns to look for during analysis:
- **monorepo-service**: Look for shared packages, inter-service calls, monorepo structure
- **nx-app**: Look for project.json, workspace deps, Nx-specific patterns
- **generic**: Standard application analysis
**Add Your Own Patterns:**
```bash
# Example: Custom framework detection
elif [[ "$REPO_NAME" =~ ^my-widget- ]]; then
  DETECTION="my-framework-widget"
  echo "🎯 Detected: My Framework Widget"
```
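The priority order of the chain can be sanity-checked in a scratch directory. This simplified `detect` drops the `../../` lookups from the script above and keeps only the local-file checks, purely for illustration:

```shell
#!/usr/bin/env bash
set -euo pipefail

# Simplified priority chain: nx.json outranks turbo.json, default is generic
detect() {
  if [ -f "nx.json" ]; then
    echo "nx-app"
  elif [ -f "turbo.json" ]; then
    echo "turborepo-package"
  else
    echo "generic"
  fi
}

WORK=$(mktemp -d)
cd "$WORK"
[ "$(detect)" = "generic" ]            # nothing present → default

touch turbo.json
[ "$(detect)" = "turborepo-package" ]  # turbo.json alone

touch nx.json
[ "$(detect)" = "nx-app" ]             # nx.json wins once both exist
echo "detection order OK"
```

Because the branches are an `if`/`elif` chain, whichever marker file matches first wins; custom patterns should be inserted where they belong in that priority order.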
**Detection determines what to analyze, but NOT how to spec it!**
---
Step 2: Initial Questionnaire
Now that we know what kind of application this is, let's configure the extraction approach:
Question 1: Choose Your Route
Which path best aligns with your goals?
A) Greenfield: Extract for migration to new tech stack
→ Extract business logic only (tech-agnostic)
→ Can implement in any stack
→ Suitable for platform migrations
→ Example: Extract Rails app business logic → rebuild in Next.js
B) Brownfield: Extract for maintaining existing codebase
→ Extract business logic + technical details (tech-prescriptive)
→ Manage existing codebase with specs
→ Suitable for in-place improvements
→ Example: Add specs to Express API for ongoing maintenance
This applies to ALL detection types:
- Monorepo Service + Greenfield = Business logic for platform migration
- Monorepo Service + Brownfield = Full implementation for maintenance
- Nx App + Greenfield = Business logic for rebuild
- Nx App + Brownfield = Full Nx/Angular details for refactoring
- Generic + Greenfield = Business logic for rebuild
- Generic + Brownfield = Full implementation for management
Question 2: Implementation Framework
Which implementation framework do you want to use?
A) GitHub Spec Kit (Recommended for most projects)
→ Feature-level specifications in .specify/
→ Task-driven implementation with /speckit.* commands
→ Simpler, lightweight workflow
→ Best for: small-medium projects, focused features
B) BMAD Auto-Pilot (Recommended for BMAD users)
→ Auto-generates BMAD artifacts (PRD, Architecture, Epics) from reverse-eng docs
→ Three modes: YOLO (fully automatic), Guided (ask on ambiguities), Interactive
→ Optionally hand off to BMAD agents for collaborative refinement
→ Best for: projects that want BMAD format without the full conversation
C) BMAD Method (Full collaborative workflow)
→ Uses same reverse-engineering docs as other frameworks
→ Hands off to BMAD's collaborative PM/Architect agents
→ BMAD creates PRD + Architecture through conversation
→ Best for: large projects needing deep collaborative refinement
D) Architecture Only
→ Generates architecture document with your constraints
→ Asks about tech stack, cloud, scale, hard constraints
→ Includes Mermaid diagrams, ADRs, infrastructure recommendations
→ Best for: when you already know what to build, need architecture
After StackShift extracts documentation (Gear 2):
- All frameworks get the same 11 docs in docs/reverse-engineering/
- Spec Kit: Gears 3-6 create .specify/ specs, use /speckit.implement
- BMAD Auto-Pilot: /stackshift.bmad-synthesize generates BMAD artifacts automatically
- BMAD: Skip to Gear 6, hand off to *workflow-init with rich context
- Architecture Only: /stackshift.architect generates architecture.md with your constraints
Question 3: Brownfield Mode (If Brownfield selected)
Do you want to upgrade dependencies after establishing specs?
A) Standard - Just create specs for current state
→ Document existing implementation as-is
→ Specs match current code exactly
→ Good for maintaining existing versions
B) Upgrade - Create specs + upgrade all dependencies
→ Spec current state first (100% coverage)
→ Then upgrade all dependencies to latest versions
→ Fix breaking changes with spec guidance
→ Improve test coverage to spec standards
→ End with modern, fully-spec'd application
→ Perfect for modernizing legacy apps
**Upgrade mode includes:**
- npm update / pip upgrade / go get -u (based on tech stack)
- Automated breaking change detection
- Test-driven upgrade fixes
- Spec updates for API changes
- Coverage improvement to 85%+
Question 4: Choose Your Transmission
How do you want to shift through the gears?
A) Manual - Review each gear before proceeding
→ You're in control
→ Stop at each step
→ Good for first-time users
B) Cruise Control - Shift through all gears automatically
→ Hands-free
→ Unattended execution
→ Good for experienced users or overnight runs
Question 5: Specification Thoroughness
How thorough should specification generation be in Gear 3?
A) Specs only (30 min - fast)
→ Generate specs for all features
→ Create plans manually with /speckit.plan as needed
→ Good for: quick assessment, flexibility
B) Specs + Plans (45-60 min - recommended)
→ Generate specs for all features
→ Auto-generate implementation plans for incomplete features
→ Ready for /speckit.tasks when you implement
→ Good for: most projects, balanced automation
C) Specs + Plans + Tasks (90-120 min - complete roadmap)
→ Generate specs for all features
→ Auto-generate plans for incomplete features
→ Auto-generate comprehensive task lists (300-500 lines each)
→ Ready for immediate implementation
→ Good for: large projects, maximum automation
Question 6: Clarifications Strategy (If Cruise Control selected)
How should [NEEDS CLARIFICATION] markers be handled?
A) Defer - Mark them, continue implementation around them
→ Fastest
→ Can clarify later with /speckit.clarify
B) Prompt - Stop and ask questions interactively
→ Most thorough
→ Takes longer
C) Skip - Only implement fully-specified features
→ Safest
→ Some features won't be implemented
Question 7: Implementation Scope (If Cruise Control selected)
What should be implemented in Gear 6?
A) None - Stop after specs are ready
→ Just want specifications
→ Will implement manually later
B) P0 only - Critical features only
→ Essential features
→ Fastest implementation
C) P0 + P1 - Critical + high-value features
→ Good balance
→ Most common choice
D) All - Every feature (may take hours/days)
→ Complete implementation
→ Longest runtime
Question 8: Spec Output Location (If Greenfield selected)
Where should specifications and documentation be written?
A) Current repository (default)
→ Specs in: ./docs/reverse-engineering/, ./.specify/
→ Simple, everything in one place
→ Good for: small teams, single repo
B) New application repository
→ Specs in: ~/git/my-new-app/.specify/
→ Specs live with NEW codebase
→ Good for: clean separation, NEW repo already exists
C) Separate documentation repository
→ Specs in: ~/git/my-app-docs/.specify/
→ Central docs repo for multiple apps
→ Good for: enterprise, multiple related apps
D) Custom location
→ Your choice: [specify path]
Default: Current repository (A)
Question 9: Target Stack (If Greenfield + Implementation selected)
What tech stack for the new implementation?
Examples:
- Next.js 15 + TypeScript + Prisma + PostgreSQL
- Python/FastAPI + SQLAlchemy + PostgreSQL
- Go + Gin + GORM + PostgreSQL
- Your choice: [specify your preferred stack]
Question 10: Build Location (If Greenfield + Implementation selected)
Where should the new application be built?
A) Subfolder (recommended for Web)
→ Examples: greenfield/, v2/, new-app/
→ Keeps old and new in same repo
→ Works in Claude Code Web
B) Separate directory (local only)
→ Examples: ~/git/my-new-app, ../my-app-v2
→ Completely separate location
→ Requires local Claude Code (doesn't work in Web)
C) Replace in place (destructive)
→ Removes old code as new is built
→ Not recommended
Then ask for the specific path:
If subfolder (A):
Folder name within this repo? (default: greenfield/)
Examples: v2/, new-app/, nextjs-version/, rebuilt/
Your choice: [or press enter for greenfield/]
If separate directory (B):
Full path to new application directory:
Examples:
- ~/git/my-new-app
- ../my-app-v2
- /Users/you/projects/new-version
Your choice: [absolute or relative path]
⚠️ Note: Directory will be created if it doesn't exist.
Claude Code Web users: This won't work in Web - use subfolder instead.
All answers are stored in .stackshift-state.json and guide the entire workflow.
State file example:

```json
{
  "detection_type": "monorepo-service",         // What kind of app: monorepo-service, nx-app, generic, etc.
  "route": "greenfield",                        // How to spec it: greenfield or brownfield
  "implementation_framework": "speckit",        // speckit, bmad-autopilot, bmad, or architect-only
  "config": {
    "spec_output_location": "~/git/my-new-app", // Where to write specs/docs
    "build_location": "~/git/my-new-app",       // Where to build new code (Gear 6)
    "target_stack": "Next.js 15 + React 19 + Prisma",
    "clarifications_strategy": "defer",
    "implementation_scope": "p0_p1"
  }
}
```

Key fields:
- detection_type - What we're analyzing (monorepo-service, nx-app, turborepo-package, generic)
- route - How to spec it (greenfield = tech-agnostic, brownfield = tech-prescriptive)
- implementation_framework - Which tool for implementation (speckit = GitHub Spec Kit, bmad = BMAD Method)
Examples:
- Monorepo Service + Greenfield = Extract business logic for platform migration
- Monorepo Service + Brownfield = Extract full implementation for maintenance
- Nx App + Greenfield = Extract business logic (framework-agnostic)
- Nx App + Brownfield = Extract full Nx/Angular implementation details
How it works:
Spec Output Location:
- Gear 2 writes to: {spec_output_location}/docs/reverse-engineering/
- Gear 3 writes to: {spec_output_location}/.specify/memory/
- If not set: defaults to current directory
Build Location:
- Gear 6 writes code to: {build_location}/src/, {build_location}/package.json, etc.
- Can be same as spec location OR different
- If not set: defaults to greenfield/ subfolder
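A sketch of how a later gear might read these fields back, using `jq`'s `//` operator to apply the documented defaults (the state values below are illustrative, and the unset-location fallbacks mirror the defaults described above):

```shell
#!/usr/bin/env bash
set -euo pipefail

cd "$(mktemp -d)"

# Hypothetical state file; spec_output_location and build_location are deliberately unset
cat > .stackshift-state.json <<'EOF'
{
  "detection_type": "generic",
  "route": "greenfield",
  "implementation_framework": "speckit",
  "config": { "implementation_scope": "p0_p1" }
}
EOF

ROUTE=$(jq -r '.route' .stackshift-state.json)
# Unset spec output defaults to the current directory
SPEC_OUT=$(jq -r '.config.spec_output_location // "."' .stackshift-state.json)
# Unset build location defaults to the greenfield/ subfolder
BUILD_LOC=$(jq -r '.config.build_location // "greenfield/"' .stackshift-state.json)

echo "route=$ROUTE spec_out=$SPEC_OUT build=$BUILD_LOC"
```

Note that `jq` cannot parse the `//`-commented example above; a real state file must be plain JSON like this one.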
Implementing the Questionnaire
Present the questions conversationally and collect answers through natural dialogue. Ask questions one at a time (or in small groups of related questions) and wait for the user to respond before continuing.
Based on answers, ask follow-up questions conditionally:
- If cruise control: Ask clarifications strategy, implementation scope
- If greenfield + implementing: Ask target stack
- If greenfield subfolder: Ask folder name (or accept default: greenfield/)
- If BMAD Auto-Pilot selected: Skip spec thoroughness question (BMAD Synthesize handles artifact creation)
- If BMAD Auto-Pilot + cruise control: After Gear 2, runs /stackshift.bmad-synthesize in YOLO mode
- If BMAD selected: Skip spec thoroughness question (BMAD handles its own planning)
- If BMAD + cruise control: Gear 6 hands off to BMAD instead of /speckit.implement
- If Architecture Only selected: Skip spec thoroughness, clarifications, implementation scope questions
- If Architecture Only + cruise control: After Gear 2, runs /stackshift.architect
For custom folder name: Use free-text input or accept default.
Example:
StackShift: "What folder name for the new application? (default: greenfield/)"
User: "v2/" (or just press enter for greenfield/)
StackShift: "✅ New app will be built in: v2/"

Stored in state as:

```json
{
  "config": {
    "greenfield_location": "v2/"              // Relative (subfolder)
    // OR
    "greenfield_location": "~/git/my-new-app" // Absolute (separate)
  }
}
```

How it works:

Subfolder (relative path):
```bash
# Building in: /Users/you/git/my-app/greenfield/
cd /Users/you/git/my-app
# StackShift creates: ./greenfield/
# Everything in one repo
```
**Separate directory (absolute path):**
```bash
# Current repo: /Users/you/git/my-app
# New app:     /Users/you/git/my-new-app
# StackShift:
# - Reads specs from: /Users/you/git/my-app/.specify/
# - Builds new app in: /Users/you/git/my-new-app/
# - Two completely separate repos
```
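One way to collapse the two cases into a single build path (a sketch; StackShift's actual resolution logic may differ): treat an absolute `greenfield_location` as a separate directory and anything else as a subfolder of the current repo.

```shell
#!/usr/bin/env bash
set -euo pipefail

resolve_build_dir() {
  local repo_root="$1" location="$2"
  case "$location" in
    /*) echo "$location" ;;              # absolute path: separate directory
    *)  echo "$repo_root/$location" ;;   # relative path: subfolder of this repo
  esac
}

resolve_build_dir "/Users/you/git/my-app" "v2/"
# → /Users/you/git/my-app/v2/
resolve_build_dir "/Users/you/git/my-app" "/Users/you/git/my-new-app"
# → /Users/you/git/my-new-app
```

A `~/git/my-new-app` answer would be tilde-expanded by the shell before it reaches the function, so it also lands in the absolute branch.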
---
Step 0: Install Slash Commands (FIRST!)
Before any analysis, ensure /speckit.* commands are available:

```bash
# Create project commands directory
mkdir -p .claude/commands

# Copy StackShift's slash commands to project
cp ~/.claude/plugins/stackshift/.claude/commands/speckit.*.md .claude/commands/
cp ~/.claude/plugins/stackshift/.claude/commands/stackshift.modernize.md .claude/commands/

# Verify installation
ls .claude/commands/speckit.*.md
```
**You should see:**
- ✅ speckit.analyze.md
- ✅ speckit.clarify.md
- ✅ speckit.implement.md
- ✅ speckit.plan.md
- ✅ speckit.specify.md
- ✅ speckit.tasks.md
- ✅ stackshift.modernize.md
**Why this is needed:**
- Claude Code looks for slash commands in project `.claude/commands/` directory
- Plugin-level commands are not automatically discovered
- This copies them to the current project so they're available
- Only needs to be done once per project
**After copying:**
- `/speckit.*` commands will be available for this project
- No need to restart Claude Code
- Commands work immediately
Critical: Commit Commands to Git
Add to .gitignore (or create if missing):

```bash
# Allow .claude directory structure
!.claude/
!.claude/commands/

# Track slash commands (team needs these!)
!.claude/commands/*.md

# Ignore user-specific settings
.claude/settings.json
.claude/mcp-settings.json
```
**Then commit:**

```bash
git add .claude/commands/
git commit -m "chore: add StackShift and Spec Kit slash commands

Adds /speckit.* and /stackshift.* slash commands for team use.

Commands added:
- /speckit.specify - Create feature specifications
- /speckit.plan - Create technical plans
- /speckit.tasks - Generate task lists
- /speckit.implement - Execute implementation
- /speckit.clarify - Resolve ambiguities
- /speckit.analyze - Validate specs match code
- /stackshift.modernize - Upgrade dependencies

These commands enable the spec-driven development workflow.
All team members will have access after cloning."
```

**Why this is critical:**
- ✅ Teammates get the commands when they clone
- ✅ Commands are versioned with the project
- ✅ No setup needed for new team members
- ✅ Commands always available

**Without committing:**
- ❌ Each developer needs to run StackShift or copy the commands manually
- ❌ Confusion: "Why don't slash commands work?"
- ❌ Inconsistent developer experience
Process Overview
The analysis follows 5 steps:
Step 1: Auto-Detect Application Context
- Run detection commands for all major languages/frameworks
- Identify the primary technology stack
- Extract version information
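The detection step can be sketched as a set of manifest probes. This is a minimal illustration, not the skill's actual implementation; the `detect_stack` name is hypothetical and the manifest list is representative, not exhaustive.

```shell
#!/bin/sh
# Probe for well-known manifest files; each hit suggests a primary stack.
# detect_stack is a hypothetical helper name used for this sketch.
detect_stack() {
  dir="${1:-.}"
  [ -f "$dir/package.json" ]   && echo "javascript/typescript (npm)"
  [ -f "$dir/go.mod" ]         && echo "go (modules)"
  [ -f "$dir/Cargo.toml" ]     && echo "rust (cargo)"
  [ -f "$dir/pom.xml" ]        && echo "java (maven)"
  [ -f "$dir/pyproject.toml" ] && echo "python (pyproject)"
  [ -f "$dir/Gemfile" ]        && echo "ruby (bundler)"
  true  # missing manifests are normal, not an error
}

# Demo against a throwaway fixture
fixture="$(mktemp -d)"
touch "$fixture/go.mod" "$fixture/Cargo.toml"
detect_stack "$fixture"   # prints "go (modules)" and "rust (cargo)"
rm -rf "$fixture"
```

Note the final `true`: a failed probe must not become a script error, matching the error-handling rule in the Technical Notes below.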
Step 2: Extract Core Metadata
- Application name from manifest or directory
- Version number from package manifests
- Description from README or manifest
- Git repository URL if available
- Technology stack summary
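Metadata extraction might look like the following sketch for a Node project, using `sed` so it works without `jq`. The `extract_field` helper is hypothetical, and the patterns assume simple `"key": "value"` fields:

```shell
#!/bin/sh
# Sketch: pull name/version from a package.json without jq.
# extract_field is a hypothetical helper; it assumes one-line JSON fields.
extract_field() {
  sed -n "s/.*\"$2\"[[:space:]]*:[[:space:]]*\"\([^\"]*\)\".*/\1/p" "$1" | head -1
}

# Demo on a throwaway manifest
manifest="$(mktemp)"
printf '{\n  "name": "demo-app",\n  "version": "1.2.3"\n}\n' > "$manifest"
echo "name:    $(extract_field "$manifest" name)"     # name:    demo-app
echo "version: $(extract_field "$manifest" version)"  # version: 1.2.3
rm -f "$manifest"
```

The Git repository URL, when available, can come from `git config --get remote.origin.url`.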
Step 3: Analyze Directory Structure
- Identify architecture patterns (MVC, microservices, monolith, etc.)
- Find configuration files
- Count source files by type
- Map key components (backend, frontend, database, API, infrastructure)
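Counting source files by type, with the standard exclusions applied, can be sketched like this (the `count_sources` name is hypothetical; files without an extension are skipped in this simplified version):

```shell
#!/bin/sh
# Sketch: count source files by extension while pruning vendored/build trees
# (node_modules, vendor, .git, build, dist, target).
count_sources() {
  find "${1:-.}" \
    \( -name node_modules -o -name vendor -o -name .git \
       -o -name build -o -name dist -o -name target \) -prune \
    -o -type f -name '*.*' -print |
  sed 's/.*\.//' | sort | uniq -c | sort -rn
}

# Demo on a throwaway tree
root="$(mktemp -d)"
mkdir -p "$root/src" "$root/node_modules"
touch "$root/src/a.ts" "$root/src/b.ts" "$root/node_modules/x.js"
count_sources "$root"   # the two .ts files are counted; node_modules is pruned
rm -rf "$root"
```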
Step 4: Check for Existing Documentation
- Scan for docs folders and markdown files
- Assess documentation quality
- Identify what's documented vs. what's missing
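A minimal sketch of the documentation scan (hypothetical `scan_docs` helper; a real pass would also grade doc quality, which a script alone cannot do):

```shell
#!/bin/sh
# Sketch: inventory markdown docs and flag the most common gap (no README).
scan_docs() {
  dir="${1:-.}"
  find "$dir" -name node_modules -prune -o -type f -name '*.md' -print
  if [ -f "$dir/README.md" ]; then
    echo "README: present"
  else
    echo "README: missing"
  fi
}

# Demo
root="$(mktemp -d)"
mkdir -p "$root/docs"
touch "$root/docs/setup.md"
scan_docs "$root"   # lists the markdown file, then "README: missing"
rm -rf "$root"
```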
Step 5: Assess Completeness
- Look for placeholder files (TODO, WIP, etc.)
- Check README for mentions of incomplete features
- Count test files and estimate test coverage
- Verify deployment/CI setup
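The completeness signals above can be gathered with a few commands. This is an illustrative sketch (hypothetical `assess` helper); interpreting the numbers into a percentage is left to the generated report:

```shell
#!/bin/sh
# Sketch: rough completeness signals - TODO/FIXME markers, test files, CI config.
assess() {
  dir="${1:-.}"
  todos=$(grep -r -l -e 'TODO' -e 'FIXME' "$dir" 2>/dev/null | wc -l | tr -d ' ')
  tests=$(find "$dir" -name node_modules -prune -o -type f \
            \( -name '*test*' -o -name '*spec*' \) -print | wc -l | tr -d ' ')
  [ -d "$dir/.github/workflows" ] && ci="yes" || ci="no"
  echo "files-with-todos=$todos test-files=$tests ci=$ci"
}

# Demo
root="$(mktemp -d)"
printf 'TODO: finish auth\n' > "$root/auth.ts"
touch "$root/auth.test.ts"
assess "$root"   # files-with-todos=1 test-files=1 ci=no
rm -rf "$root"
```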
Output Format
This skill generates `analysis-report.md` in the project root with:
- Application Metadata - Name, version, description, repository
- Technology Stack - Languages, frameworks, libraries, build system
- Architecture Overview - Directory structure, key components
- Existing Documentation - What docs exist and their quality
- Completeness Assessment - Estimated % completion with evidence
- Source Code Statistics - File counts, lines of code estimates
- Recommended Next Steps - Focus areas for reverse engineering
- Notes - Additional observations
Success Criteria
After running this skill, you should have:
- ✅ `analysis-report.md` file created in the project root
- ✅ Technology stack clearly identified
- ✅ Directory structure and architecture understood
- ✅ Completeness estimated (% done for backend, frontend, tests, docs)
- ✅ Ready to proceed to Step 2 (Reverse Engineer)
Next Step
Once `analysis-report.md` is created and reviewed, proceed to:
Step 2: Reverse Engineer - Use the reverse-engineer skill to generate comprehensive documentation.
Common Workflows
New Project Analysis:
- User asks to analyze codebase
- Run all detection commands in parallel
- Generate analysis report
- Present summary and ask if ready for Step 2
Re-analysis:
- Check if analysis-report.md already exists
- Ask user if they want to update it or skip to Step 2
- If updating, re-run analysis and show diff
Partial Analysis:
- User already knows tech stack
- Skip detection, focus on completeness assessment
- Generate abbreviated report
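The re-analysis branch boils down to one existence check. A sketch, with a hypothetical `check_existing` helper (the real skill asks the user interactively rather than printing a prompt):

```shell
#!/bin/sh
# Sketch: choose between fresh analysis and re-analysis based on an existing report.
check_existing() {
  if [ -f "$1/analysis-report.md" ]; then
    echo "report exists: ask to update or skip to Step 2"
  else
    echo "no report yet: run full analysis"
  fi
}

# Demo
root="$(mktemp -d)"
check_existing "$root"                 # no report yet: run full analysis
touch "$root/analysis-report.md"
check_existing "$root"                 # report exists: ask to update or skip to Step 2
rm -rf "$root"
```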
Technical Notes
- Parallel execution: Run all language detection commands in parallel for speed
- Error handling: Missing manifest files are normal (return empty), don't error
- File limits: Use `head` to limit output for large codebases
- Exclusions: Always exclude node_modules, vendor, .git, build, dist, target
- Platform compatibility: Commands work on macOS, Linux, WSL
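The parallel-execution and file-limit notes can be combined in one sketch: independent probes run as background jobs, and `head` caps long listings. The `probe_parallel` name is hypothetical:

```shell
#!/bin/sh
# Sketch: run manifest probes as background jobs, then cap a long listing.
probe_parallel() {
  for manifest in package.json go.mod Cargo.toml pom.xml; do
    ( [ -f "$1/$manifest" ] && echo "found: $manifest" ) &
  done
  wait  # collect all probes before continuing
}

# Demo
root="$(mktemp -d)"
touch "$root/go.mod"
probe_parallel "$root"                    # found: go.mod
find "$root" -type f -print | head -20    # head keeps huge trees manageable
rm -rf "$root"
```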
Example Invocation
When a user says:
"I need to reverse engineer this application and create specifications. Let's start."
This skill auto-activates and:
- Detects tech stack (e.g., Next.js, TypeScript, Prisma, AWS)
- Analyzes directory structure (identifies app/, lib/, prisma/, infrastructure/)
- Scans documentation (finds README.md, basic setup docs)
- Assesses completeness (estimates backend 100%, frontend 60%, tests 30%)
- Generates analysis-report.md
- Presents summary and recommends proceeding to Step 2
Remember: This is Step 1 of 6. After analysis, you'll proceed to reverse-engineer, create-specs, gap-analysis, complete-spec, and implement. Each step builds on the previous one.