resume-tailoring


Resume Tailoring Skill

Overview

Generates high-quality, tailored resumes optimized for specific job descriptions while maintaining factual integrity. Builds resumes around the holistic person by surfacing undocumented experiences through conversational discovery.
Core Principle: Truth-preserving optimization - maximize fit while maintaining factual integrity. Never fabricate experience, but intelligently reframe and emphasize relevant aspects.
Mission: A person's ability to get a job should be based on their experiences and capabilities, not on their resume writing skills.

When to Use

Use this skill when:
  • User provides a job description and wants a tailored resume
  • User has multiple existing resumes in markdown format
  • User wants to optimize their application for a specific role/company
  • User needs help surfacing and articulating undocumented experiences
DO NOT use for:
  • Generic resume writing from scratch (user needs existing resume library)
  • Cover letters (different skill)
  • LinkedIn profile optimization (different skill)

Quick Start

Required from user:
  1. Job description (text or URL)
  2. Resume library location (defaults to
    resumes/
    in current directory)
Workflow:
  1. Build library from existing resumes
  2. Research company/role
  3. Create template (with user checkpoint)
  4. Optional: Branching experience discovery
  5. Match content with confidence scoring
  6. Generate MD + DOCX + PDF + Report
  7. User review → Optional library update

Implementation

See supporting files:
  • research-prompts.md
    - Structured prompts for company/role research
  • matching-strategies.md
    - Content matching algorithms and scoring
  • branching-questions.md
    - Experience discovery conversation patterns

Workflow Details

Multi-Job Detection

Triggers when user provides:
  • Multiple JD URLs (comma or newline separated)
  • Phrases: "multiple jobs", "several positions", "batch", "3 jobs"
  • List of companies/roles: "Microsoft PM, Google TPM, AWS PM"
Detection Logic (pseudo-code):

```python
def detect_multi_job(user_input):
    indicators = [
        len(extract_urls(user_input)) > 1,
        any(phrase in user_input.lower()
            for phrase in ["multiple jobs", "several positions",
                           "batch of", "3 jobs", "5 jobs"]),
        count_company_mentions(user_input) > 1,
    ]
    return any(indicators)
```

**If detected:**
"I see you have multiple job applications. Would you like to use multi-job mode?
BENEFITS:
  • Shared experience discovery (faster - ask questions once for all jobs)
  • Batch processing with progress tracking
  • Incremental additions (add more jobs later)
TIME COMPARISON (3 similar jobs):
  • Sequential single-job: ~45 minutes (15 min × 3)
  • Multi-job mode: ~40 minutes (15 min discovery + 8 min per job)
Use multi-job mode? (Y/N)"

**If user confirms Y:**
- Use multi-job workflow (see multi-job-workflow.md)

**If user confirms N or single job detected:**
- Use existing single-job workflow (Phase 0 onwards)

**Backward Compatibility:** Single-job workflow completely unchanged.
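The detection logic relies on two helpers the skill leaves unspecified. A minimal runnable sketch, with `extract_urls` and `count_company_mentions` implemented as illustrative assumptions (a naive URL regex and a small hypothetical company list):

```python
import re

# Illustrative list; a real implementation would use a broader source.
KNOWN_COMPANIES = {"microsoft", "google", "aws", "amazon", "meta", "apple"}

def extract_urls(text):
    # Naive URL matcher; sufficient for counting JD links.
    return re.findall(r"https?://\S+", text)

def count_company_mentions(text):
    words = set(re.findall(r"[a-z]+", text.lower()))
    return len(KNOWN_COMPANIES & words)

def detect_multi_job(user_input):
    indicators = [
        len(extract_urls(user_input)) > 1,
        any(phrase in user_input.lower()
            for phrase in ["multiple jobs", "several positions",
                           "batch of", "3 jobs", "5 jobs"]),
        count_company_mentions(user_input) > 1,
    ]
    return any(indicators)
```

For example, "Microsoft PM, Google TPM, AWS PM" triggers multi-job mode via the company-mention indicator, while a single pasted JD does not.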

**Multi-Job Workflow:**

When multi-job mode is activated, see `multi-job-workflow.md` for complete workflow.

**High-Level Multi-Job Process:**
PHASE 0: Intake & Batch Initialization
- Collect 3-5 job descriptions
- Initialize batch structure
- Run library initialization (once)
        ↓
PHASE 1: Aggregate Gap Analysis
- Extract requirements from all JDs
- Cross-reference against library
- Build unified gap map (deduplicate)
- Prioritize: Critical → Important → Job-specific
        ↓
PHASE 2: Shared Experience Discovery
- Single branching interview covering ALL gaps
- Multi-job context for each question
- Tag experiences with job relevance
- Enrich library with discoveries
        ↓
PHASE 3: Per-Job Processing (Sequential)
For each job:
- Research (company + role benchmarking)
- Template generation
- Content matching (uses enriched library)
- Generation (MD + DOCX + Report)
Interactive or Express mode
        ↓
PHASE 4: Batch Finalization
- Generate batch summary
- User reviews all resumes together
- Approve/revise individual or batch
- Update library with approved resumes

**Time Savings:**
- 3 jobs: ~40 min (vs 45 min sequential) = 11% savings
- 5 jobs: ~55 min (vs 75 min sequential) = 27% savings

**Quality:** Same depth as single-job workflow (research, matching, generation)

**See `multi-job-workflow.md` for complete implementation details.**


Phase 0: Library Initialization

Always runs first - builds fresh resume database
Process:
  1. Locate resume directory:
    User provides path OR default to ./resumes/
    Validate directory exists
  2. Scan for markdown files:
    Use Glob tool: pattern="*.md" path={resume_directory}
    Count files found
    Announce: "Building resume library... found {N} resumes"
  3. Parse each resume: For each resume file:
    • Use Read tool to load content
    • Extract sections: roles, bullets, skills, education
    • Identify patterns: bullet structure, length, formatting
  4. Build experience database structure:
    json
    {
      "roles": [
        {
          "role_id": "company_title_year",
          "company": "Company Name",
          "title": "Job Title",
          "dates": "YYYY-YYYY",
          "description": "Role summary",
          "bullets": [
            {
              "text": "Full bullet text",
              "themes": ["leadership", "technical"],
              "metrics": ["17x improvement", "$3M revenue"],
              "keywords": ["cross-functional", "program"],
              "source_resumes": ["resume1.md"]
            }
          ]
        }
      ],
      "skills": {
        "technical": ["Python", "Kusto", "AI/ML"],
        "product": ["Roadmap", "Strategy"],
        "leadership": ["Stakeholder mgmt"]
      },
      "education": [...],
      "user_preferences": {
        "typical_length": "1-page|2-page",
        "section_order": ["summary", "experience", "education"],
        "bullet_style": "pattern"
      }
    }
  5. Tag content automatically:
    • Themes: Scan for keywords (leadership, technical, analytics, etc.)
    • Metrics: Extract numbers, percentages, dollar amounts
    • Keywords: Frequent technical terms, action verbs
Output: In-memory database ready for matching
Code pattern (pseudo-code, for reference):

```python
def build_library():
    library = {"roles": [], "skills": {}, "education": []}
    for resume_file in glob("resumes/*.md"):
        content = read(resume_file)
        for role in extract_roles(content):
            role["bullets"] = tag_bullets(role["bullets"])
            library["roles"].append(role)
    return library
```
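The `tag_bullets` step from the pattern above can be sketched with simple keyword and regex heuristics. The theme keyword lists and the metrics pattern are illustrative assumptions, not part of the skill:

```python
import re

# Illustrative theme vocabularies; a real library would be richer.
THEME_KEYWORDS = {
    "leadership": ["led", "managed", "mentored", "stakeholder"],
    "technical": ["python", "pipeline", "api", "kusto"],
    "analytics": ["analyzed", "metrics", "dashboard"],
}

def tag_bullets(bullets):
    """Attach themes and metrics to each bullet dict ({'text': ...})."""
    for bullet in bullets:
        text = bullet["text"].lower()
        bullet["themes"] = [theme for theme, words in THEME_KEYWORDS.items()
                            if any(w in text for w in words)]
        # Numbers, percentages, multipliers ("17x"), and dollar amounts ("$3M").
        bullet["metrics"] = re.findall(r"\$?\d[\d,.]*\s*(?:%|x\b|[mk]\b)?", text)
    return bullets
```

A bullet like "Led team to deliver 17x improvement and $3M revenue" would be tagged with the `leadership` theme and the metrics `17x` and `$3m`.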

Phase 1: Research Phase

Goal: Build comprehensive "success profile" beyond just the job description
Inputs:
  • Job description (text or URL from user)
  • Optional: Company name if not in JD
Process:
1.1 Job Description Parsing:
Use research-prompts.md JD parsing template
Extract: requirements, keywords, implicit preferences, red flags, role archetype
1.2 Company Research:
WebSearch queries:
- "{company} mission values culture"
- "{company} engineering blog"
- "{company} recent news"

Synthesize: mission, values, business model, stage
1.3 Role Benchmarking:
WebSearch: "site:linkedin.com {job_title} {company}"
WebFetch: Top 3-5 profiles
Analyze: common backgrounds, skills, terminology

If sparse results, try similar companies
1.4 Success Profile Synthesis:
Combine all research into structured profile (see research-prompts.md template)

Include:
- Core requirements (must-have)
- Valued capabilities (nice-to-have)
- Cultural fit signals
- Narrative themes
- Terminology map (user's background → their language)
- Risk factors + mitigations
Checkpoint:
Present success profile to user:

"Based on my research, here's what makes candidates successful for this role:

{SUCCESS_PROFILE_SUMMARY}

Key findings:
- {Finding 1}
- {Finding 2}
- {Finding 3}

Does this match your understanding? Any adjustments?"

Wait for user confirmation before proceeding.
Output: Validated success profile document

Phase 2: Template Generation

Goal: Create resume structure optimized for this specific role
Inputs:
  • Success profile (from Phase 1)
  • User's resume library (from Phase 0)
Process:
2.1 Analyze User's Resume Library:
Extract from library:
- All roles, titles, companies, date ranges
- Role archetypes (technical contributor, manager, researcher, specialist)
- Experience clusters (what domains/skills appear frequently)
- Career progression and narrative
2.2 Role Consolidation Decision:
When to consolidate:
  • Same company, similar responsibilities
  • Target role values continuity over granular progression
  • Combined narrative stronger than separate
  • Page space constrained
When to keep separate:
  • Different companies (ALWAYS separate)
  • Dramatically different responsibilities that both matter
  • Target role values specific progression story
  • One position has significantly more relevant experience
Decision template:
For {Company} with {N} positions:

OPTION A (Consolidated):
Title: "{Combined_Title}"
Dates: "{First_Start} - {Last_End}"
Rationale: {Why consolidation makes sense}

OPTION B (Separate):
Position 1: "{Title}" ({Dates})
Position 2: "{Title}" ({Dates})
Rationale: {Why separate makes sense}

RECOMMENDED: Option {A/B} because {reasoning}
2.3 Title Reframing Principles:
Core rule: Stay truthful to what you did, emphasize aspect most relevant to target
Strategies:
  1. Emphasize different aspects:
    • "Graduate Researcher" → "Research Software Engineer" (if coding-heavy)
    • "Data Science Lead" → "Technical Program Manager" (if leadership)
  2. Use industry-standard terminology:
    • "Scientist III" → "Senior Research Scientist" (clearer seniority)
    • "Program Coordinator" → "Project Manager" (standard term)
  3. Add specialization when truthful:
    • "Engineer" → "ML Engineer" (if ML work substantial)
    • "Researcher" → "Computational Ecologist" (if computational methods)
  4. Adjust seniority indicators:
    • "Lead" vs "Senior" vs "Staff" based on scope
Constraints:
  • NEVER claim work you didn't do
  • NEVER inflate seniority beyond defensible
  • Company name and dates MUST be exact
  • Core responsibilities MUST be accurate
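The factual constraints above are mechanically checkable; judgment calls (seniority, responsibility accuracy) are not. A minimal sketch, assuming role dicts shaped like the Phase 0 library structure:

```python
def validate_title_reframe(original_role, reframed_role):
    """Hard constraints: company and dates must be exact; only the title may change.

    Seniority and responsibility accuracy still require human judgment.
    """
    errors = []
    if reframed_role["company"] != original_role["company"]:
        errors.append("company changed: %r -> %r"
                      % (original_role["company"], reframed_role["company"]))
    if reframed_role["dates"] != original_role["dates"]:
        errors.append("dates changed: %r -> %r"
                      % (original_role["dates"], reframed_role["dates"]))
    return errors  # empty list = passes the hard constraints
```

Running this before presenting title options catches accidental edits to the fields that must never change.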
2.4 Generate Template Structure:

```markdown
# Professional Summary
[GUIDANCE: {X} sentences emphasizing {themes from success profile}]
[REQUIRED ELEMENTS: {keywords from JD}]

# Key Skills
[STRUCTURE: {2-4 categories based on JD structure}]
[SOURCE: Extract from library matching success profile]

# Professional Experience

## [ROLE 1 - Most Recent/Relevant]
[CONSOLIDATION: {merge X positions OR keep separate}]
[TITLE OPTIONS:
  A: {emphasize aspect 1}
  B: {emphasize aspect 2}
  Recommended: {option with rationale}]
[BULLET ALLOCATION: {N bullets based on relevance + recency}]
[GUIDANCE: Emphasize {themes}, look for {experience types}]
Bullet 1: [SEEKING: {requirement type}]
Bullet 2: [SEEKING: {requirement type}]
...

## [ROLE 2]
...

# Education
[PLACEMENT: {top if required/recent, bottom if experience-heavy}]

# [Optional Sections]
[INCLUDE IF: {criteria from success profile}]
```

**Checkpoint:**
Present template to user:
"Here's the optimized resume structure for this role:
STRUCTURE: {Section order and rationale}
ROLE CONSOLIDATION: {Decisions with options}
TITLE REFRAMING: {Proposed titles with alternatives}
BULLET ALLOCATION: Role 1: {N} bullets (most relevant) Role 2: {N} bullets ...
Does this structure work? Any adjustments to:
  • Role consolidation?
  • Title reframing?
  • Bullet allocation?"
Wait for user approval before proceeding.

**Output:** Approved template skeleton with guidance for each section

Phase 2.5: Experience Discovery (OPTIONAL)

Goal: Surface undocumented experiences through conversational discovery
When to trigger:
After template approval, if gaps identified:

"I've identified {N} gaps or areas where we have weak matches:
- {Gap 1}: {Current confidence}
- {Gap 2}: {Current confidence}
...

Would you like to do a structured brainstorming session to surface
any experiences you haven't documented yet?

This typically takes 10-15 minutes and often uncovers valuable content."

User can accept or skip.
Branching Interview Process:
Approach: Conversational with follow-up questions based on answers
For each gap, conduct branching dialogue (see branching-questions.md):
  1. Start with open probe:
    • Technical gap: "Have you worked with {skill}?"
    • Soft skill gap: "Tell me about times you've {demonstrated_skill}"
    • Recent work: "What have you worked on recently?"
  2. Branch based on answer:
    • YES/Strong → Deep dive (scale, challenges, metrics)
    • INDIRECT → Explore role and transferability
    • ADJACENT → Explore related experience
    • PERSONAL → Assess recency and substance
    • NO → Try broader category or move on
  3. Follow-up systematically:
    • Ask "what," "how," "why" to get details
    • Quantify: "Any metrics?"
    • Contextualize: "Was this production?"
    • Validate: "Does this address the gap?"
  4. Capture immediately:
    • Document experience as shared
    • Ask clarifying questions (dates, scope, impact)
    • Help articulate as resume bullet
    • Tag which gap(s) it addresses
Capture Structure:

```markdown
## Newly Discovered Experiences

### Experience 1: {Brief description}
- Context: {Where/when}
- Scope: {Scale, duration, impact}
- Addresses: {Which gaps}
- Bullet draft: "{Achievement-focused bullet}"
- Confidence: {How well fills gap - percentage}

### Experience 2: ...
```


**Integration Options:**

After discovery session:
"Great! I captured {N} new experiences. For each one:
  1. ADD TO CURRENT RESUME - Integrate now
  2. ADD TO LIBRARY ONLY - Save for future, not needed here
  3. REFINE FURTHER - Think more about articulation
  4. DISCARD - Not relevant enough
Let me know for each experience."

**Important Notes:**
- Keep truthfulness bar high - help articulate, NEVER fabricate
- Focus on gaps and weak matches, not strong areas
- Time-box if needed (10-15 minutes typical)
- User can skip entirely if confident in library
- Recognize when to move on - don't exhaust user

**Output:** New experiences integrated into library, ready for matching


Phase 3: Assembly Phase

Goal: Fill approved template with best-matching content, with transparent scoring
Inputs:
  • Approved template (from Phase 2)
  • Resume library + discovered experiences (from Phase 0 + 2.5)
  • Success profile (from Phase 1)
Process:
3.1 For Each Template Slot:
  1. Extract all candidate bullets from library
    • All bullets from library database
    • All newly discovered experiences
    • Include source resume for each
  2. Score each candidate (see matching-strategies.md)
    • Direct match (40%): Keywords, domain, technology, outcome
    • Transferable (30%): Same capability, different context
    • Adjacent (20%): Related tools, methods, problem space
    • Impact (10%): Achievement type alignment
    Overall = (Direct × 0.4) + (Transfer × 0.3) + (Adjacent × 0.2) + (Impact × 0.1)
  3. Rank candidates by score
    • Sort high to low
    • Group by confidence band:
      • 90-100%: DIRECT
      • 75-89%: TRANSFERABLE
      • 60-74%: ADJACENT
      • <60%: WEAK/GAP
  4. Present top 3 matches with analysis:
    TEMPLATE SLOT: {Role} - Bullet {N}
    SEEKING: {Requirement description}
    
    MATCHES:
    [DIRECT - 95%] "{bullet_text}"
      ✓ Direct: {what matches directly}
      ✓ Transferable: {what transfers}
      ✓ Metrics: {quantified impact}
      Source: {resume_name}
    
    [TRANSFERABLE - 78%] "{bullet_text}"
      ✓ Transferable: {what transfers}
      ✓ Adjacent: {what's adjacent}
      ⚠ Gap: {what's missing}
      Source: {resume_name}
    
    [ADJACENT - 62%] "{bullet_text}"
      ✓ Adjacent: {what's related}
      ⚠ Gap: {what's missing}
      Source: {resume_name}
    
    RECOMMENDATION: Use DIRECT match (95%)
    ALTERNATIVE: If avoiding repetition, use TRANSFERABLE (78%) with reframing
  5. Handle gaps (confidence <60%):
    GAP IDENTIFIED: {Requirement}
    
    BEST AVAILABLE: {score}% - "{bullet_text}"
    
    REFRAME OPPORTUNITY: {If applicable}
    Original: "{text}"
    Reframed: "{adjusted_text}" (truthful because {reason})
    New confidence: {score}%
    
    OPTIONS:
    1. Use reframed version ({new_score}%)
    2. Acknowledge gap in cover letter
    3. Omit bullet slot (reduce allocation)
    4. Use best available with disclosure
    
    RECOMMENDATION: {Most appropriate option}
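The weighted score and confidence bands from step 2 and step 3 can be computed directly. A sketch assuming each component score is on a 0-100 scale:

```python
WEIGHTS = {"direct": 0.4, "transferable": 0.3, "adjacent": 0.2, "impact": 0.1}

def overall_score(components):
    """Weighted overall match score; components are 0-100 per dimension."""
    # Rounded to one decimal to avoid floating-point band-boundary surprises.
    return round(sum(components[k] * w for k, w in WEIGHTS.items()), 1)

def confidence_band(score):
    if score >= 90:
        return "DIRECT"
    if score >= 75:
        return "TRANSFERABLE"
    if score >= 60:
        return "ADJACENT"
    return "WEAK/GAP"
```

For instance, component scores of 100/90/80/70 yield an overall score of 90.0, landing in the DIRECT band.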
3.2 Content Reframing:
When good match (>60%) but terminology misaligned:
Apply strategies from matching-strategies.md:
  • Keyword alignment (preserve meaning, adjust terms)
  • Emphasis shift (same facts, different focus)
  • Abstraction level (adjust technical specificity)
  • Scale emphasis (highlight relevant aspects)
Show before/after for transparency:
REFRAMING APPLIED:
Bullet: {template_slot}

Original: "{original_bullet}"
Source: {resume_name}

Reframed: "{reframed_bullet}"
Changes: {what changed and why}
Truthfulness: {why this is accurate}
Checkpoint:
"I've matched content to your template. Here's the complete mapping:

COVERAGE SUMMARY:
- Direct matches: {N} bullets ({percentage}%)
- Transferable: {N} bullets ({percentage}%)
- Adjacent: {N} bullets ({percentage}%)
- Gaps: {N} ({percentage}%)

REFRAMINGS APPLIED: {N}
- {Example 1}
- {Example 2}

GAPS IDENTIFIED:
- {Gap 1}: {Recommendation}
- {Gap 2}: {Recommendation}

OVERALL JD COVERAGE: {percentage}%

Review the detailed mapping below. Any adjustments to:
- Match selections?
- Reframings?
- Gap handling?"

[Present full detailed mapping]

Wait for user approval before generation.
Output: Complete bullet-by-bullet mapping with confidence scores and reframings

Phase 4: Generation Phase

Goal: Create professional multi-format outputs
Inputs:
  • Approved content mapping (from Phase 3)
  • User's formatting preferences (from library analysis)
  • Target role information (from Phase 1)
Process:
4.1 Markdown Generation:
Compile mapped content into clean markdown:

```markdown
# {User_Name}
{Contact_Info}

## Professional Summary
{Summary_from_template}

## Key Skills
**{Category_1}:**
- {Skills_from_library_matching_profile}
**{Category_2}:**
- {Skills_from_library_matching_profile}
{Repeat for all categories}

## Professional Experience

### {Job_Title}
{Company} | {Location} | {Dates}
{Role_summary_if_applicable}
- {Bullet_1_from_mapping}
- {Bullet_2_from_mapping}
...

### {Next_Role}
...

## Education
{Degree} | {Institution} ({Year})
{Degree} | {Institution} ({Year})
```

**Use user's preferences:**
- Formatting style from library analysis
- Bullet structure pattern
- Section ordering
- Typical length (1-page vs 2-page)

**Output:** `{Name}_{Company}_{Role}_Resume.md`
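A stdlib-only sketch of the 4.1 compile step and the output-filename convention. The dict field names (`name`, `contact`, `bullets`, etc.) are illustrative assumptions about how the mapped content is passed in:

```python
def render_resume_md(person, roles, skills, education):
    """Compile mapped content into a markdown resume string."""
    lines = ["# %s" % person["name"], person["contact"], ""]
    lines += ["## Professional Summary", person["summary"], ""]
    lines.append("## Key Skills")
    for category, items in skills.items():
        lines.append("**%s:** %s" % (category, ", ".join(items)))
    lines += ["", "## Professional Experience"]
    for role in roles:
        lines.append("### %s" % role["title"])
        lines.append("%s | %s | %s" % (role["company"], role["location"], role["dates"]))
        lines += ["- %s" % b for b in role["bullets"]]
        lines.append("")
    lines.append("## Education")
    lines += ["%s | %s (%s)" % (e["degree"], e["school"], e["year"]) for e in education]
    return "\n".join(lines)

def output_filename(name, company, role):
    # {Name}_{Company}_{Role}_Resume.md, with internal spaces removed.
    compact = lambda s: s.replace(" ", "")
    return "%s_%s_%s_Resume.md" % (compact(name), compact(company), compact(role))
```

A DOCX/PDF pass (4.2/4.3) would then render from this markdown via the document sub-skills.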

**4.2 DOCX Generation:**

**Use document-skills:docx:**
REQUIRED SUB-SKILL: Use document-skills:docx
Create Word document with:
  • Professional fonts (Calibri 11pt body, 12pt headers)
  • Proper spacing (single within sections, space between)
  • Clean bullet formatting (proper numbering config, NOT unicode)
  • Header with contact information
  • Appropriate margins (0.5-1 inch)
  • Bold/italic emphasis (company names, titles, dates)
  • Page breaks if 2-page resume
See docx skill documentation for:
  • Paragraph and TextRun structure
  • Numbering configuration for bullets
  • Heading levels and styles
  • Spacing and margins

**Output:** `{Name}_{Company}_{Role}_Resume.docx`

**4.3 PDF Generation (Optional):**

**If user requests PDF:**
OPTIONAL SUB-SKILL: Use document-skills:pdf
Convert DOCX to PDF OR generate directly
Ensure formatting preservation
Professional appearance for direct submission

**Output:** `{Name}_{Company}_{Role}_Resume.pdf`

**4.4 Generation Summary Report:**

**Create metadata file:**

```markdown
# Resume Generation Report
## {Role} at {Company}
Date Generated: {timestamp}

## Target Role Summary
- Company: {Company}
- Position: {Role}
- IC Level: {If known}
- Focus Areas: {Key areas}

## Success Profile Summary
- Key Requirements: {top 5}
- Cultural Fit Signals: {themes}
- Risk Factors Addressed: {mitigations}

## Content Mapping Summary
- Total bullets: {N}
- Direct matches: {N} ({percentage}%)
- Transferable: {N} ({percentage}%)
- Adjacent: {N} ({percentage}%)
- Gaps identified: {list}

## Reframing Applied
- {bullet}: {original} → {reframed} [Reason: {why}]
- ...

## Source Resumes Used
- {resume1}: {N} bullets
- {resume2}: {N} bullets
- ...

## Gaps Addressed
Before Experience Discovery:
{Gap analysis showing initial state}

After Experience Discovery:
{Gap analysis showing final state}

Remaining Gaps:
{Any unresolved gaps with recommendations}

## Key Differentiators for This Role
{What makes user uniquely qualified}

## Recommendations for Interview Prep
- Stories to prepare
- Questions to expect
- Gaps to address
```
**Output:** `{Name}_{Company}_{Role}_Resume_Report.md`

**Present to user:**
"Your tailored resume has been generated!
FILES CREATED:
  • {Name}_{Company}_{Role}_Resume.md
  • {Name}_{Company}_{Role}_Resume.docx
  • {Name}_{Company}_{Role}_Resume_Report.md
  • {Name}_{Company}_{Role}_Resume.pdf (if requested)
QUALITY METRICS:
  • JD Coverage: {percentage}%
  • Direct Matches: {percentage}%
  • Newly Discovered: {N} experiences
Review the files and let me know:
  1. Save to library (recommended)
  2. Need revisions
  3. Save but don't add to library"
  • 需要准备的故事
  • 可能遇到的问题
  • 需要弥补的差距

**输出:** `{姓名}_{公司}_{职位}_Resume_Report.md`

**向用户交付:**
"你的定制化简历已生成!
创建的文件:
  • {姓名}_{公司}_{职位}_Resume.md
  • {姓名}_{公司}_{职位}_Resume.docx
  • {姓名}_{公司}_{职位}_Resume_Report.md
  • {姓名}_{公司}_{职位}_Resume.pdf(若用户要求)
质量指标:
  • JD覆盖度:{percentage}%
  • 直接匹配占比:{percentage}%
  • 新发掘的经历:{N}个
请查看文件并告知:
  1. 保存到简历库(推荐)
  2. 需要修订
  3. 保存但不添加到简历库"

Phase 5: Library Update (CONDITIONAL)

阶段5:简历库更新(可选)

Goal: Optionally add successful resume to library for future use
When: After user reviews and approves generated resume
Checkpoint Question:
"Are you satisfied with this resume?

OPTIONS:
1. YES - Save to library
   → Adds resume to permanent location
   → Rebuilds library database
   → Makes new content available for future resumes

2. NO - Need revisions
   → What would you like to adjust?
   → Make changes and re-present

3. SAVE BUT DON'T ADD TO LIBRARY
   → Keep files in current location
   → Don't enrich database
   → Useful for experimental resumes

Which option?"
If Option 1 (YES - Save to library):
Process:
  1. Move resume to library:
    Source: {current_directory}/{Name}_{Company}_{Role}_Resume.md
    Destination: {resume_library}/{Name}_{Company}_{Role}_Resume.md
    
    Also move:
    - .docx file
    - .pdf file (if exists)
    - _Report.md file
  2. Rebuild library database:
    Re-run Phase 0 library initialization
    Parse newly created resume
    Add bullets to experience database
    Update keyword/theme indices
    Tag with metadata:
      - target_company: {Company}
      - target_role: {Role}
      - generated_date: {timestamp}
      - jd_coverage: {percentage}
      - success_profile: {reference to profile}
  3. Preserve generation metadata:
    ```json
    {
      "resume_id": "{Name}_{Company}_{Role}",
      "generated": "{timestamp}",
      "source_resumes": ["{resume1}", "{resume2}"],
      "reframings": [
        {
          "original": "{text}",
          "reframed": "{text}",
          "reason": "{why}"
        }
      ],
      "match_scores": {
        "bullet_1": 95,
        "bullet_2": 87,
        ...
      },
      "newly_discovered": [
        {
          "experience": "{description}",
          "bullet": "{text}",
          "addresses_gap": "{gap}"
        }
      ]
    }
    ```
  4. Announce completion:
    "Resume saved to library!
    
    Library updated:
    - Total resumes: {N}
    - New content variations: {N}
    - Newly discovered experiences added: {N}
    
    This resume and its new content are now available for future tailoring sessions."
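Steps 1 and 3 of the Option 1 process (moving artifacts into the library and persisting generation metadata) can be sketched as below. The function name `save_to_library` and the flat library layout are assumptions for illustration; the Phase 0 database rebuild in step 2 is not shown.

```python
import json
import shutil
from pathlib import Path

def save_to_library(stem: str, work_dir: Path, library: Path, metadata: dict) -> list[str]:
    """Move all generated artifacts ({stem}_Resume.*) into the library and
    persist generation metadata alongside them. Returns the files moved."""
    library.mkdir(parents=True, exist_ok=True)
    moved = []
    for suffix in ("_Resume.md", "_Resume.docx", "_Resume.pdf", "_Resume_Report.md"):
        src = work_dir / f"{stem}{suffix}"
        if src.exists():  # the .pdf is optional, so silently skip missing files
            shutil.move(str(src), str(library / src.name))
            moved.append(src.name)
    (library / f"{stem}_metadata.json").write_text(json.dumps(metadata, indent=2))
    return moved
```

After the move, the skill re-runs the Phase 0 library initialization so the new bullets and reframings become searchable in future sessions.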
If Option 2 (NO - Need revisions):
"What would you like to adjust?"

[Collect user feedback]
[Make requested changes]
[Re-run relevant phases]
[Re-present for approval]

[Repeat until satisfied or user cancels]
If Option 3 (SAVE BUT DON'T ADD TO LIBRARY):
"Resume files saved to current directory:
- {Name}_{Company}_{Role}_Resume.md
- {Name}_{Company}_{Role}_Resume.docx
- {Name}_{Company}_{Role}_Resume_Report.md

Not added to library - you can manually move later if desired."
Benefits of Library Update:
  • Grows library with each successful resume
  • New bullet variations become available
  • Reframings that work can be reused
  • Discovered experiences permanently captured
  • Future sessions start with richer library
  • Self-improving system over time
Output: Updated library database + metadata preservation (if Option 1)
目标: 可选将获批简历添加到简历库供未来使用
时机: 用户审核并批准生成的简历后
确认问题:
"你对这份简历满意吗?

选项:
1. 是 - 保存到简历库
   → 将简历添加到永久存储位置
   → 重建简历库数据库
   → 新内容可用于未来简历定制

2. 否 - 需要修订
   → 请告知需要调整的内容
   → 修改后重新交付

3. 保存但不添加到简历库
   → 将文件保存在当前目录
   → 不更新数据库
   → 适用于实验性简历

你的选择是?"
若选择选项1(是 - 保存到简历库):
流程:
  1. 将简历移动到简历库:
    来源:{current_directory}/{姓名}_{公司}_{职位}_Resume.md
    目标:{resume_library}/{姓名}_{公司}_{职位}_Resume.md
    
    同时移动:
    - .docx文件
    - .pdf文件(若存在)
    - _Report.md文件
  2. 重建简历库数据库:
    重新运行阶段0的简历库初始化
    解析新生成的简历
    将bullet添加到经历数据库
    更新关键词/主题索引
    添加元数据标签:
      - target_company: {Company}
      - target_role: {Role}
      - generated_date: {timestamp}
      - jd_coverage: {percentage}
      - success_profile: {画像引用}
  3. 保留生成元数据:
    ```json
    {
      "resume_id": "{姓名}_{公司}_{职位}",
      "generated": "{timestamp}",
      "source_resumes": ["{resume1}", "{resume2}"],
      "reframings": [
        {
          "original": "{text}",
          "reframed": "{text}",
          "reason": "{why}"
        }
      ],
      "match_scores": {
        "bullet_1": 95,
        "bullet_2": 87,
        ...
      },
      "newly_discovered": [
        {
          "experience": "{description}",
          "bullet": "{text}",
          "addresses_gap": "{gap}"
        }
      ]
    }
    ```
  4. 完成通知:
    "简历已保存到简历库!
    
    简历库已更新:
    - 总简历数:{N}
    - 新增内容变体:{N}
    - 新增发掘的经历:{N}
    
    这份简历及其新内容现在可用于未来的定制会话。"
若选择选项2(否 - 需要修订):
"你想调整哪些内容?"

[收集用户反馈]
[进行所需修改]
[重新运行相关阶段]
[重新交付供用户批准]

[重复直到用户满意或取消]
若选择选项3(保存但不添加到简历库):
"简历文件已保存到当前目录:
- {姓名}_{公司}_{职位}_Resume.md
- {姓名}_{公司}_{职位}_Resume.docx
- {姓名}_{公司}_{职位}_Resume_Report.md

未添加到简历库——你可在之后手动移动。"
更新简历库的优势:
  • 每一份获批简历都能丰富简历库
  • 新增的bullet变体可复用
  • 有效的重构表述可重复使用
  • 发掘的经历被永久记录
  • 未来会话可从更丰富的简历库开始
  • 系统随使用不断自我优化
输出: 更新后的简历库数据库 + 元数据保留(若选择选项1)

Error Handling & Edge Cases

错误处理与边缘场景

Edge Case 1: Insufficient Resume Library
SCENARIO: User has only 1-2 resumes, limited content

HANDLING:
"⚠️ Limited resume library detected ({N} resumes).

This may result in:
- Fewer matching options
- More gaps in coverage
- Less variety in bullet phrasing

RECOMMENDATIONS:
- Proceed with available content (I'll do my best!)
- Consider adding more resumes after this generation
- Experience Discovery phase will be especially valuable

Continue? (Y/N)"
Edge Case 2: No Good Matches (confidence <60% for critical requirement)
SCENARIO: Template slot requires experience user doesn't have

HANDLING:
"❌ GAP: {Requirement}

No matches found with confidence >60%

OPTIONS:
1. Run Experience Discovery - might surface undocumented work
2. Reframe best available ({score}%) - I'll show you the reframing
3. Omit bullet slot - reduce template allocation
4. Note for cover letter - emphasize learning ability

Which approach?"

[Don't force matches - be transparent about gaps]
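The 60% gap threshold from Edge Case 2 amounts to a simple check over candidate match scores: a requirement is a gap when no candidate bullet exceeds the threshold. This is a minimal sketch; the input shape (requirement mapped to a list of candidate confidence scores) is an assumption.

```python
GAP_THRESHOLD = 60  # per the skill: no match with confidence above 60% => flag a gap

def find_gaps(requirements: dict[str, list[int]]) -> list[str]:
    """Map each requirement to its candidate match scores; return requirements
    where even the best candidate falls at or below the gap threshold."""
    return [req for req, scores in requirements.items()
            if not scores or max(scores) <= GAP_THRESHOLD]

print(find_gaps({"Kubernetes": [45, 58], "Program management": [95, 87], "Payments": []}))
# ['Kubernetes', 'Payments']
```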
Edge Case 3: Research Phase Failures
SCENARIO: WebSearch fails, LinkedIn unavailable, company info sparse

HANDLING:
"⚠️ Limited company research available.

What I found:
- {Available info}

What's missing:
- {Missing areas}

Falling back to job description-only analysis.
Result: Template will be optimized for JD but may miss cultural nuances.

Do you have additional context about:
- Company culture?
- Team structure?
- Technologies used?

(Optional - you can share or skip)"

[Proceed with best-effort approach]
Edge Case 4: Job Description Quality Issues
SCENARIO: Vague JD, missing requirements, poorly written

HANDLING:
"⚠️ Job description lacks detail in areas:
- {Missing area 1}
- {Missing area 2}

This may limit template optimization.

Do you have additional context about:
- {Question 1}
- {Question 2}

(Optional - I'll work with what's available)"

[Extract what's possible, proceed]
Edge Case 5: Ambiguous Role Consolidation
SCENARIO: Unclear whether to merge roles or keep separate

HANDLING:
"🤔 Ambiguous consolidation decision for {Company}

OPTION A (Consolidated): {Rationale}
OPTION B (Separate): {Rationale}

Both are defensible. Which do you prefer?

(This becomes your preference for similar situations)"

[Remember preference for future]
Edge Case 6: Resume Length Constraints
SCENARIO: Too much good content, exceeds 2 pages

HANDLING:
"⚠️ Content exceeds 2 pages (current: {N} bullets, ~{pages} pages)

PRUNING SUGGESTIONS (ranked by relevance):
Remove:
- {Bullet X}: {score}% match (lowest)
- {Bullet Y}: {score}% match
...

Keep all bullets and accept >2 pages?
OR
Remove {N} bullets to fit 2 pages?

Your preference?"

[User decides priority]
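The pruning suggestions in Edge Case 6 rank bullets by match score and nominate the lowest-scoring ones for removal. A minimal sketch, assuming `(bullet, score)` pairs and a bullet budget rather than a true page estimate; as the handling above states, these are suggestions only and the user decides.

```python
def pruning_suggestions(bullets: list[tuple[str, int]], max_bullets: int) -> list[tuple[str, int]]:
    """Given (bullet, match_score) pairs, suggest the lowest-scoring bullets to
    remove so the remainder fits the page budget."""
    if len(bullets) <= max_bullets:
        return []
    ranked = sorted(bullets, key=lambda b: b[1])  # lowest relevance first
    return ranked[: len(bullets) - max_bullets]

print(pruning_suggestions([("A", 95), ("B", 62), ("C", 88), ("D", 70)], max_bullets=2))
# [('B', 62), ('D', 70)]
```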
Error Recovery:
  • All checkpoints allow going back to previous phase
  • User can request adjustments at any checkpoint
  • Generation failures (DOCX/PDF) fall back to markdown-only
  • Progress saved between phases (can resume if interrupted)
Graceful Degradation:
  • Research limited → Fall back to JD-only analysis
  • Library small → Work with available + emphasize discovery
  • Matches weak → Transparent gap identification
  • Generation fails → Provide markdown + error details
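The "generation fails → provide markdown + error details" degradation is essentially a try/except around the optional converter. `generate_docx` here is a hypothetical callable standing in for the document-skills:docx sub-skill, not its real API.

```python
def generate_outputs(markdown: str, generate_docx) -> dict:
    """Always produce markdown; attempt DOCX via the supplied generator and
    degrade to markdown-only (with the error captured) if it fails."""
    outputs = {"md": markdown}
    try:
        outputs["docx"] = generate_docx(markdown)
    except Exception as exc:  # generation failure -> markdown + error details
        outputs["error"] = f"DOCX generation failed: {exc}"
    return outputs

def broken(_):
    raise RuntimeError("converter unavailable")

print(generate_outputs("# Resume", broken)["error"])
# DOCX generation failed: converter unavailable
```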
边缘场景1:简历库内容不足
场景:用户仅有1-2份简历,内容有限

处理方式:
"⚠️ 检测到简历库内容有限({N}份简历)。

这可能导致:
- 匹配选项更少
- 覆盖度差距更多
- Bullet表述多样性不足

推荐:
- 使用现有内容继续(我会尽力优化!)
- 考虑在本次生成后补充更多简历
- 经历挖掘阶段将尤为重要

是否继续?(Y/N)"
边缘场景2:无优质匹配(关键需求匹配度<60%)
场景:模板空位要求的经历用户未拥有

处理方式:
"❌ 差距:{需求}

未找到匹配度>60%的内容

选项:
1. 运行经历挖掘——可能发掘未记录的工作经历
2. 重构最佳可用内容({score}%)——我会展示重构方案
3. 省略该bullet空位——减少模板分配数量
4. 在求职信中说明——强调学习能力

你选择哪种方式?"

[绝不强行匹配——透明告知差距]
边缘场景3:研究阶段失败
场景:WebSearch失败、LinkedIn不可用、公司信息稀少

处理方式:
"⚠️ 公司研究信息有限。

已找到的信息:
- {可用信息}

缺失的信息:
- {缺失领域}

将回退到仅基于职位描述的分析。
结果:模板将针对JD优化,但可能缺失文化适配细节。

你是否有额外信息可以提供:
- 公司文化?
- 团队结构?
- 使用的技术?

(可选——你可以分享或跳过)"

[尽最大努力继续]
边缘场景4:职位描述质量问题
场景:JD模糊、缺失要求、撰写质量差

处理方式:
"⚠️ 职位描述在以下领域缺乏细节:
- {缺失领域1}
- {缺失领域2}

这可能限制模板优化效果。

你是否有额外信息可以提供:
- {问题1}
- {问题2}

(可选——我会基于现有内容继续)"

[提取可用信息,继续流程]
边缘场景5:职位合并决策模糊
场景:无法明确是否合并职位

处理方式:
"🤔 {Company}的职位合并决策模糊

选项A(合并):{理由}
选项B(单独):{理由}

两种选择均合理。你偏好哪种?

(该选择将作为未来类似场景的偏好)"

[记住用户偏好供未来使用]
边缘场景6:简历长度限制
场景:优质内容过多,超过2页

处理方式:
"⚠️ 内容超过2页(当前:{N}个bullet,约{pages}页)

修剪建议(按相关性排序):
移除:
- {Bullet X}:{score}%匹配度(最低)
- {Bullet Y}:{score}%匹配度
...

保留所有bullet并接受超过2页?
移除{N}个bullet以适配2页?

你的选择是?"

[由用户决定优先级]
错误恢复:
  • 所有检查点允许回到上一阶段
  • 用户可在任何检查点要求调整
  • 生成失败(DOCX/PDF)时回退到仅生成Markdown
  • 阶段间保存进度(中断后可恢复)
优雅降级:
  • 研究受限 → 仅基于JD分析
  • 简历库内容少 → 使用现有内容 + 强调挖掘环节
  • 匹配度低 → 透明识别差距
  • 生成失败 → 提供Markdown文件 + 错误详情

Usage Examples

使用示例

Example 1: Internal Role (Same Company)
USER: "I want to apply for Principal PM role in 1ES team at Microsoft.
      Here's the JD: {paste}"

SKILL:
1. Library Build: Finds 29 resumes
2. Research: Microsoft 1ES team, internal culture, role benchmarking
3. Template: Features PM2 Azure Eng Systems role (most relevant)
4. Discovery: Surfaces VS Code extension, Bhavana AI side project
5. Assembly: 92% JD coverage, 75% direct matches
6. Generate: MD + DOCX + Report
7. User approves → Library updated with new resume + 6 discovered experiences

RESULT: Highly competitive application leveraging internal experience
Example 2: Career Transition (Different Domain)
USER: "I'm a TPM trying to transition to ecology PM role. JD: {paste}"

SKILL:
1. Library Build: Finds existing TPM resumes
2. Research: Ecology sector, sustainability focus, cross-domain transfers
3. Template: Reframes "Technical Program Manager" → "Program Manager,
             Environmental Systems" emphasizing systems thinking
4. Discovery: Surfaces volunteer conservation work, graduate research in
             environmental modeling
5. Assembly: 65% JD coverage - flags gaps in domain-specific knowledge
6. Generate: Resume + gap analysis with cover letter recommendations

RESULT: Bridges technical skills with environmental domain
Example 3: Career Gap Handling
USER: "I have a 2-year gap while starting a company. JD: {paste}"

SKILL:
1. Library Build: Finds pre-gap resumes
2. Research: Standard analysis
3. Template: Includes startup as legitimate role
4. Discovery: Surfaces skills developed during startup (fundraising,
             product development, team building)
5. Assembly: Frames gap as entrepreneurial experience
6. Generate: Resume presenting gap as valuable experience

RESULT: Gap becomes strength showing initiative and diverse skills
Example 4: Multi-Job Batch (3 Similar Roles)
USER: "I want to apply for these 3 TPM roles:
      1. Microsoft 1ES Principal PM
      2. Google Cloud Senior TPM
      3. AWS Container Services Senior PM
      Here are the JDs: {paste 3 JDs}"

SKILL:
1. Multi-job detection: Triggered (3 JDs detected)
2. Intake: Collects all 3 JDs, initializes batch
3. Library Build: Finds 29 resumes (once)
4. Gap Analysis: Identifies 14 gaps, 8 unique after deduplication
5. Shared Discovery: 30-minute session surfaces 5 new experiences
   - Kubernetes CI/CD for nonprofits
   - Azure migration for university lab
   - Cross-functional team leadership examples
   - Recent hackathon project
   - Open source contributions
6. Per-Job Processing (×3):
   - Job 1 (Microsoft): 85% coverage, emphasizes Azure/1ES alignment
   - Job 2 (Google): 88% coverage, emphasizes technical depth
   - Job 3 (AWS): 78% coverage, addresses AWS gap in cover letter recs
7. Batch Finalization: All 3 resumes reviewed, approved, added to library

RESULT: 3 high-quality resumes in 40 minutes vs 45 minutes sequential
        5 new experiences captured, available for future applications
        Average coverage: 84%, all critical gaps resolved
Example 5: Incremental Batch Addition
WEEK 1:
USER: "I want to apply for 3 jobs: {Microsoft, Google, AWS}"
SKILL: [Processes batch as above, completes in 40 min]

WEEK 2:
USER: "I found 2 more jobs: Stripe and Meta. Add them to my batch?"
SKILL:
1. Load existing batch (includes 5 previously discovered experiences)
2. Intake: Adds Job 4 (Stripe), Job 5 (Meta)
3. Incremental Gap Analysis: Only 3 new gaps (vs 14 original)
   - Payment systems (Stripe-specific)
   - Social networking (Meta-specific)
   - React/frontend (both)
4. Incremental Discovery: 10-minute session for new gaps only
   - Surfaces payment processing side project
   - React work from bootcamp
   - Large-scale system design course
5. Per-Job Processing (×2): Jobs 4, 5 processed independently
6. Updated Batch Summary: Now 5 jobs total, 8 experiences discovered

RESULT: 2 additional resumes in 20 minutes (vs 30 min if starting from scratch)
        Time saved by not re-asking 8 previous gaps: ~20 minutes
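The incremental gap analysis in Example 5 (only 3 new gaps instead of 14, because earlier discoveries carry over) is at heart a set difference between a new job's gaps and the experiences already resolved in the batch. Names below are illustrative.

```python
def incremental_gaps(new_job_gaps: set[str], already_resolved: set[str]) -> set[str]:
    """Return only the gaps not covered by earlier discovery sessions,
    so the user is never re-asked about the same gap."""
    return new_job_gaps - already_resolved

week1_resolved = {"Kubernetes CI/CD", "Azure migration", "Team leadership"}
week2_gaps = {"Payment systems", "React/frontend", "Kubernetes CI/CD"}
print(sorted(incremental_gaps(week2_gaps, week1_resolved)))
# ['Payment systems', 'React/frontend']
```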
示例1:内部职位(同一家公司)
用户:"我想申请微软1ES团队的Principal PM职位。这是JD:{粘贴内容}"

技能处理:
1. 简历库构建:找到29份简历
2. 研究:微软1ES团队、内部文化、职位基准
3. 模板:突出PM2 Azure Eng Systems职位(相关性最高)
4. 挖掘:发掘VS Code扩展、Bhavana AI副业项目
5. 匹配:92% JD覆盖度,75%直接匹配
6. 生成:MD + DOCX + 报告
7. 用户批准 → 简历库更新,添加新简历 + 6个新发掘的经历

结果:利用内部经历打造极具竞争力的申请材料
示例2:职业转型(不同领域)
用户:"我是TPM,想转型为生态领域PM。JD:{粘贴内容}"

技能处理:
1. 简历库构建:找到现有TPM简历
2. 研究:生态行业、可持续发展重点、跨领域迁移
3. 模板:将"Technical Program Manager"重构为"Program Manager,
             Environmental Systems",突出系统思维
4. 挖掘:发掘志愿者环保工作、研究生阶段的环境建模研究
5. 匹配:65% JD覆盖度——标记领域知识差距
6. 生成:简历 + 差距分析 + 求职信建议

结果:将技术技能与环境领域需求建立关联
示例3:职业空白期处理
用户:"我有2年创业空白期。JD:{粘贴内容}"

技能处理:
1. 简历库构建:找到空白期前的简历
2. 研究:标准分析
3. 模板:将创业经历作为合法职位包含
4. 挖掘:发掘创业期间培养的技能(融资、产品开发、团队建设)
5. 匹配:将空白期重构为创业经历
6. 生成:将空白期呈现为宝贵经历的简历

结果:空白期转化为展现主动性与多元技能的优势
示例4:多职位批量处理(3个相似职位)
用户:"我想申请这3个TPM职位:
      1. 微软1ES Principal PM
      2. Google Cloud Senior TPM
      3. AWS Container Services Senior TPM
      这是JD:{粘贴3份JD}"

技能处理:
1. 多职位检测:触发(检测到3份JD)
2. 信息收集:收集所有3份JD,初始化批次
3. 简历库构建:找到29份简历(仅执行一次)
4. 差距分析:识别14个差距,去重后8个独特差距
5. 共享挖掘:30分钟会话发掘5个新经历
   - 为非营利组织搭建的Kubernetes CI/CD
   - 为大学实验室完成的Azure迁移
   - 跨职能团队领导力案例
   - 近期黑客松项目
   - 开源贡献
6. 按职位处理(×3):
   - 职位1(微软):85%覆盖度,突出Azure/1ES适配性
   - 职位2(谷歌):88%覆盖度,突出技术深度
   - 职位3(AWS):78%覆盖度,在求职信建议中说明AWS差距
7. 批次收尾:所有3份简历审核通过,添加到简历库

结果:40分钟内生成3份高质量简历(vs 依次处理45分钟)
        5个新经历被记录,可用于未来申请
        平均覆盖度:84%,所有关键差距已解决
示例5:增量添加职位到批次
第1周:
用户:"我想申请3个职位:{微软、谷歌、AWS}"
技能:[按上述流程处理批次,40分钟完成]

第2周:
用户:"我又找到2个职位:Stripe和Meta。可以添加到我的批次吗?"
技能:
1. 加载现有批次(包含5个已发掘的经历)
2. 信息收集:添加职位4(Stripe)、职位5(Meta)
3. 增量差距分析:仅3个新差距(vs 原14个)
   - 支付系统(Stripe特定)
   - 社交网络(Meta特定)
   - React/前端(两者均要求)
4. 增量挖掘:仅针对新差距的10分钟会话
   - 发掘支付处理副业项目
   - 训练营中的React学习经历
   - 大规模系统设计课程
5. 按职位处理(×2):独立处理职位4、5
6. 更新批次汇总:共5个职位,发掘8个经历

结果:20分钟内生成2份额外简历(vs 从零开始的30分钟)
        因无需重复询问8个已有差距,节省约20分钟

Testing Guidelines

测试指南

Manual Testing Checklist:
Test 1: Happy Path
- Provide JD with clear requirements
- Library with 10+ resumes
- Run all phases without skipping
- Verify generated files
- Check library update
PASS CRITERIA:
- All files generated correctly
- JD coverage >70%
- No errors in any phase
Test 2: Minimal Library
- Provide only 2 resumes
- Run through workflow
- Verify gap handling
PASS CRITERIA:
- Graceful warning about limited library
- Still produces reasonable output
- Gaps clearly identified
Test 3: Research Failures
- Use obscure company with minimal online presence
- Verify fallback to JD-only
PASS CRITERIA:
- Warning about limited research
- Proceeds with JD analysis
- Template still reasonable
Test 4: Experience Discovery Value
- Run with deliberate gaps in library
- Conduct experience discovery
- Verify new experiences integrated
PASS CRITERIA:
- Discovers genuine undocumented experiences
- Integrates into final resume
- Improves JD coverage
Test 5: Title Reframing
- Test various role transitions
- Verify title reframing suggestions
PASS CRITERIA:
- Multiple options provided
- Truthfulness maintained
- Rationales clear
Test 6: Multi-format Generation
- Generate MD, DOCX, PDF, Report
- Verify formatting consistency
PASS CRITERIA:
- All formats readable
- Formatting professional
- Content identical across formats
Regression Testing:
After any SKILL.md changes:
1. Re-run Test 1 (happy path)
2. Verify no functionality broken
3. Commit only if passes
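The "JD coverage >70%" pass criterion in Test 1 can be checked mechanically. Defining coverage as the share of JD requirements matched by at least one bullet is one plausible reading, stated here as an assumption.

```python
def jd_coverage(requirements: list[str], matched: set[str]) -> float:
    """Percentage of JD requirements covered by at least one resume bullet."""
    if not requirements:
        return 0.0
    return 100.0 * sum(r in matched for r in requirements) / len(requirements)

coverage = jd_coverage(["python", "k8s", "leadership", "azure"], {"python", "leadership", "azure"})
assert coverage > 70, f"Test 1 fails: coverage {coverage:.0f}% below threshold"
print(f"{coverage:.0f}%")  # 75%
```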
手动测试清单:
测试1:正常流程
- 提供要求明确的JD
- 简历库包含10+份简历
- 完整运行所有阶段,不跳过
- 验证生成的文件
- 检查简历库更新
通过标准:
- 所有文件生成正确
- JD覆盖度>70%
- 所有阶段无错误
测试2:简历库内容极少
- 仅提供2份简历
- 运行完整流程
- 验证差距处理
通过标准:
- 清晰提示简历库内容有限
- 仍能生成合理输出
- 差距被明确识别
测试3:研究失败
- 使用在线信息极少的小众公司
- 验证回退到仅JD分析
通过标准:
- 提示研究信息有限
- 继续基于JD分析
- 模板仍合理
测试4:经历挖掘价值
- 提供存在刻意差距的简历库
- 运行经历挖掘
- 验证新经历被整合
通过标准:
- 发掘真实的未记录经历
- 整合到最终简历
- 提升JD覆盖度
测试5:头衔重构
- 测试多种职业转型场景
- 验证头衔重构建议
通过标准:
- 提供多个选项
- 保持真实性
- 理由清晰
测试6:多格式生成
- 生成MD、DOCX、PDF、报告
- 验证格式一致性
通过标准:
- 所有格式可读
- 排版专业
- 各格式内容一致
回归测试:
修改SKILL.md后:
1. 重新运行测试1(正常流程)
2. 验证功能未被破坏
3. 仅在通过后提交