graph TB
subgraph Client["Client Layer"]
MOBILE["Mobile App<br/>React Native + Expo<br/>Flash cards, voice drills,<br/>progress dashboard"]
DESKTOP["Desktop Web<br/>Next.js<br/>Full sessions, whiteboard,<br/>proctoring"]
end
subgraph Engines["Engine Layer"]
VOICE["Voice Engine<br/>Hume AI EVI<br/>Emotion-sensitive<br/>interviewer voice"]
BOARD["Whiteboard Engine<br/>tldraw + Claude Vision<br/>Diagram evaluation<br/>and scoring"]
PROCTOR["Proctor Engine<br/>MediaPipe Face Mesh<br/>Gaze tracking,<br/>attention monitoring"]
end
subgraph Orchestrator["Session Orchestrator — Node.js"]
ROUND["Round Selector<br/>Weakness-weighted<br/>random selection"]
ADAPT["Adaptive Difficulty<br/>Performance-based<br/>question scaling"]
DEBRIEF["Debrief Generator<br/>Transcript + emotion +<br/>proctor + whiteboard<br/>scored rubric"]
SM2["SM-2 Scheduler<br/>Spaced repetition<br/>for concepts and stories"]
end
subgraph Data["Data Layer — Supabase"]
SESSIONS[("sessions<br/>recordings, transcripts")]
SCORES[("scores<br/>per-dimension breakdowns")]
STORIES[("story_bank<br/>STAR-L entries")]
CARDS[("flash_cards<br/>SM-2 intervals")]
end
MOBILE --> Orchestrator
DESKTOP --> Orchestrator
Orchestrator --> VOICE
Orchestrator --> BOARD
Orchestrator --> PROCTOR
Orchestrator --> Data
VOICE --> DEBRIEF
BOARD --> DEBRIEF
PROCTOR --> DEBRIEF

flowchart TD
V{Voice AI?}
V -->|"Emotion detection needed"| HUME["Hume AI EVI<br/>Emotion callbacks,<br/>adaptive persona,<br/>WebSocket streaming"]
V -->|"Voice only, no emotion"| ELEVEN["ElevenLabs<br/>Fallback: high-quality<br/>TTS, no affect reading"]
V -->|"Cost-constrained"| OPENAI_RT["OpenAI Realtime API<br/>Cheaper per minute,<br/>no emotion detection"]
W{Whiteboard?}
W -->|"React ecosystem, extensible"| TLDRAW["tldraw<br/>MIT license, React native,<br/>rich API, snapshot export"]
W -->|"Simpler, self-hosted"| EXCALI["Excalidraw<br/>Good but harder to<br/>integrate programmatic<br/>screenshot capture"]
P{Proctoring?}
P -->|"Privacy-first, free"| MEDIAPIPE["MediaPipe Face Mesh<br/>Browser-based, 468 landmarks,<br/>iris tracking, no cloud"]
P -->|"Commercial accuracy"| COMMERCIAL["Commercial proctoring<br/>Expensive, privacy concerns,<br/>overkill for self-practice"]
style HUME fill:#2d5016,stroke:#333,color:#fff
style TLDRAW fill:#2d5016,stroke:#333,color:#fff
style MEDIAPIPE fill:#2d5016,stroke:#333,color:#fff
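The proctoring branch lands on MediaPipe Face Mesh, whose iris landmarks reduce the gaze check to simple geometry. A minimal TypeScript sketch of the horizontal-deviation test — the coordinate inputs and the 0.18 tolerance are illustrative assumptions, and the MediaPipe landmark extraction itself is not shown:

```typescript
// Illustrative gaze-deviation check (pure geometry; MediaPipe integration not shown).
// Inputs are x-coordinates of the eye corners and iris center, as Face Mesh
// landmarks would provide after detection.

interface EyeFrame {
  innerCornerX: number; // corner toward the nose
  outerCornerX: number; // corner toward the ear
  irisCenterX: number;
}

/** Normalized horizontal iris position: ~0.5 when looking straight ahead. */
function gazeRatio(eye: EyeFrame): number {
  return (eye.irisCenterX - eye.innerCornerX) /
         (eye.outerCornerX - eye.innerCornerX);
}

/** Flag a frame when the iris drifts beyond a tolerance from center. */
function isGazeDeviation(eye: EyeFrame, tolerance = 0.18): boolean {
  return Math.abs(gazeRatio(eye) - 0.5) > tolerance;
}
```

In practice the proctor engine would debounce this over many frames before raising a second-monitor flag, rather than reacting to a single glance.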

sequenceDiagram
participant U as User
participant O as Orchestrator
participant V as Voice Engine
participant W as Whiteboard
participant P as Proctor
participant D as Debrief
U->>O: Start session
O->>O: Select round type<br/>(weakness-weighted)
O->>U: Confirm: ML Design, Difficulty 3/5,<br/>Persona: Collaborative
U->>O: Accept / override
O->>V: Initialize interviewer persona
O->>P: Activate gaze tracking
alt Design or Coding Round
O->>W: Open whiteboard
end
loop During Session (30-45 min)
V->>U: Ask question / follow-up
U->>V: Respond (voice)
V->>O: Emotion data (confidence, hesitation)
O->>V: Adjust difficulty / tone
P->>O: Gaze flags (second monitor, notes)
alt Design Round
W-->>O: Periodic screenshot (every 30s active)
O-->>W: Evaluate diagram (Claude Vision)
end
end
U->>O: End session
O->>D: Compile transcript + emotion<br/>timeline + proctor flags +<br/>whiteboard evaluations
D->>U: Scored debrief with<br/>strengths, weaknesses,<br/>specific improvement actions
O->>O: Update weakness tracker,<br/>adjust next session focus
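The "Adjust difficulty / tone" step in the session loop can be sketched as a pure function over the emotion data the voice engine streams back. The signal names and thresholds below are assumptions for illustration, not Hume's actual emotion schema:

```typescript
// Sketch of the orchestrator's mid-session difficulty adjustment, assuming the
// voice engine reports per-answer confidence and hesitation in [0, 1].

interface EmotionSample {
  confidence: number;
  hesitation: number;
}

function adjustDifficulty(current: number, sample: EmotionSample): number {
  let next = current;
  // Candidate is cruising: step the questions up a notch.
  if (sample.confidence > 0.7 && sample.hesitation < 0.2) next += 1;
  // Candidate is struggling: ease off so the session stays productive.
  if (sample.confidence < 0.3 || sample.hesitation > 0.6) next -= 1;
  return Math.min(5, Math.max(1, next)); // clamp to the 1-5 scale
}
```

The same sample could also drive tone (e.g. a more encouraging persona after repeated struggle); the diagram treats both as one feedback edge.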

| Parameter | Options | Default |
|---|---|---|
| Round type | Coding, ML Design, Behavioral, Tech Presentation, HM, Technical Deep Dive | Auto (weakness-weighted) |
| Difficulty | 1 (warm-up) to 5 (adversarial) | 3 |
| Interviewer persona | Friendly, Neutral, Adversarial, Socratic | Neutral |
| Proctor strictness | Off, Training (lenient), Simulation (strict) | Training |
| Session length | 15 / 30 / 45 / 60 min | 45 min |
| Whiteboard | On / Off | Auto (on for design rounds) |
| Recording | Audio only / Audio + Video / Off | Audio only |
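The table above maps directly onto a typed session config. A TypeScript sketch with the listed defaults — the field and value names are illustrative, not the product's actual schema:

```typescript
// Session parameters from the table above, with defaults. Names are assumptions.

type RoundType =
  | "coding" | "ml-design" | "behavioral"
  | "tech-presentation" | "hiring-manager" | "deep-dive"
  | "auto"; // auto = weakness-weighted selection

interface SessionConfig {
  roundType: RoundType;
  difficulty: 1 | 2 | 3 | 4 | 5;
  persona: "friendly" | "neutral" | "adversarial" | "socratic";
  proctor: "off" | "training" | "simulation";
  lengthMin: 15 | 30 | 45 | 60;
  whiteboard: "on" | "off" | "auto";   // auto = on for design rounds
  recording: "audio" | "audio+video" | "off";
}

const defaults: SessionConfig = {
  roundType: "auto",
  difficulty: 3,
  persona: "neutral",
  proctor: "training",
  lengthMin: 45,
  whiteboard: "auto",
  recording: "audio",
};
```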

07:00 Open mobile app
07:00 3 flash cards — spaced repetition surfaces weakest concepts
(ML concepts, system design patterns, Anthropic-specific topics)
07:05 1 behavioral story rehearsal — voice, 3 minutes max
App plays the prompt, you respond aloud, app records duration
07:08 Quick self-check — rate confidence 1-5 on today's cards
07:10 Done — push notification schedules evening session
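The flash-card step runs on the SM-2 scheduler named in the architecture. A minimal sketch of the interval update; mapping the app's 1-5 self-check directly onto SM-2's quality grade is an assumption (SM-2 natively grades 0-5):

```typescript
// Minimal SM-2 update, as the morning scheduler would apply it after each self-check.

interface CardState {
  repetitions: number;  // consecutive successful reviews
  intervalDays: number; // days until the card surfaces again
  easiness: number;     // SM-2 easiness factor, floor 1.3
}

function sm2Review(card: CardState, quality: number): CardState {
  if (quality < 3) {
    // Failed recall: restart the repetition sequence, keep easiness.
    return { ...card, repetitions: 0, intervalDays: 1 };
  }
  const easiness = Math.max(
    1.3,
    card.easiness + 0.1 - (5 - quality) * (0.08 + (5 - quality) * 0.02),
  );
  const repetitions = card.repetitions + 1;
  const intervalDays =
    repetitions === 1 ? 1 :
    repetitions === 2 ? 6 :
    Math.round(card.intervalDays * easiness);
  return { repetitions, intervalDays, easiness };
}
```

This is why the weakest concepts keep surfacing: low grades reset a card to a 1-day interval while strong ones get pushed weeks out.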

19:00 Open desktop app, orchestrator selects round type
19:02 Configure: confirm round, set proctor to Training mode
19:05 Session begins — voice AI drives conversation
Whiteboard opens for design rounds
Proctor tracks gaze, flags second monitor use
19:35 Session ends (30 min) or 19:50 (45 min)
19:35 Debrief displays: scored rubric, emotion timeline,
proctor flags, whiteboard evaluation (if applicable)
19:45 Review debrief — spend 1/3 of practice time here
19:55 Update story bank with any new insights
20:00 Done — weakness tracker updated automatically

10:00 Full loop: 2-3 back-to-back rounds (different types)
5-minute breaks between rounds (no phone, no notes)
Proctor set to Simulation (strict) mode
11:30 Energy management practice — track cognitive fatigue
11:45 Cross-round story coherence review
Did you tell the same project consistently across rounds?
12:00 Comprehensive weekly debrief — pattern analysis across sessions

| Dimension | Weight | Measurement Source |
|---|---|---|
| Technical accuracy | 25% | Debrief AI evaluation of transcript |
| Communication clarity | 20% | Emotion data (hesitation rate, filler words) |
| Time management | 15% | Section timing vs target budget |
| Structured thinking | 15% | Whiteboard evaluation (design rounds) or verbal structure |
| Composure under pressure | 10% | Emotion timeline stability, recovery from stumbles |
| Question handling | 10% | Follow-up depth reached (levels 1-6 per values-behavioral) |
| Proctor compliance | 5% | Flag count (gaze deviations, note references) |
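The rubric reduces to a weighted sum over per-dimension scores. A sketch with the table's weights; the camelCase dimension keys and the 0-100 score scale are assumptions:

```typescript
// Overall session score as the weighted sum from the rubric above.
// Weights sum to 1.0, matching the table.

const weights: Record<string, number> = {
  technicalAccuracy: 0.25,
  communicationClarity: 0.20,
  timeManagement: 0.15,
  structuredThinking: 0.15,
  composure: 0.10,
  questionHandling: 0.10,
  proctorCompliance: 0.05,
};

/** Combine per-dimension scores (0-100); missing dimensions score 0. */
function overallScore(scores: Record<string, number>): number {
  return Object.entries(weights)
    .reduce((sum, [dim, w]) => sum + w * (scores[dim] ?? 0), 0);
}
```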

| Component | What You Need | Where to Get It |
|---|---|---|
| Hume AI API key | EVI access for voice + emotion | https://hume.ai — apply for developer access |
| Anthropic API key | Claude for debrief + whiteboard eval | https://console.anthropic.com |
| Supabase project | Database + auth + storage | https://supabase.com — free tier works initially |
| Node.js 20+ | Session orchestrator runtime | https://nodejs.org |
| React Native + Expo | Mobile companion app | |

| Component | Monthly Usage | Unit Cost | Monthly Total |
|---|---|---|---|
| Hume AI EVI | 20 evening sessions x 35 min + 30 morning drills x 3 min | ~$0.07/min | $60-80 |
| Claude (debrief) | 20 sessions x 1 debrief | ~$0.15/debrief | $3 |
| Claude Vision (whiteboard) | 10 design sessions x 5 evals | ~$0.03/eval | $1.50 |
| Supabase | Free tier (< 500MB, < 50K auth) | $0 free / $25 pro | $0-25 |
| MediaPipe | All sessions, runs locally | $0 | $0 |
| ElevenLabs (mobile fallback) | 30 morning voice drills x 3 min | ~$0.05/min | $4.50 |
| Total | | | $70-115/mo |
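The dominant voice line item follows from the table's own figures (actual Hume pricing may differ; the listed $60-80 leaves headroom for longer sessions and retries):

```typescript
// Sanity check on the Hume AI EVI line item, using the table's usage and rate.
const eveningMin = 20 * 35; // 700 min of full evening sessions
const morningMin = 30 * 3;  // 90 min of morning voice drills
const humeCost = (eveningMin + morningMin) * 0.07; // 790 min * $0.07 = $55.30
```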

| Round Type | Content Skill | What Simulator Gets |
|---|---|---|
| Coding | | Problem archetypes, follow-up ladders, senior signals checklist |
| ML System Design | | 7-stage framework, canonical problems, whiteboard strategy |
| Behavioral / Values | | Follow-up ladder depth, STAR-L format, negative framing patterns |
| Tech Presentation | | Narrative arc, depth calibration, Q&A stress test questions |
| Hiring Manager | | Scope-of-impact evaluation, leadership signal rubric |
| Anthropic Technical | | Topic areas, opinion evaluation criteria, safety depth |
| Full Loop | | Round sequencing, energy management, story coherence matrix |
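The Round Selector's weakness-weighted random selection can be sketched by weighting each round type by its gap from a perfect recent score, so weak rounds come up more often without starving the rest. The weighting scheme here is an illustrative choice, not the product's actual algorithm:

```typescript
// Weakness-weighted random round selection. The rng parameter makes the
// function deterministic for testing; production would pass Math.random.

interface RoundStats {
  round: string;
  recentAvg: number; // recent average score, 0-100
}

function selectRound(stats: RoundStats[], rng: () => number): string {
  // Weight each round by how far it falls short of perfect (floor of 1
  // so even mastered rounds keep a nonzero chance of appearing).
  const weights = stats.map(s => Math.max(1, 100 - s.recentAvg));
  const total = weights.reduce((a, b) => a + b, 0);
  let r = rng() * total;
  for (let i = 0; i < stats.length; i++) {
    r -= weights[i];
    if (r < 0) return stats[i].round;
  }
  return stats[stats.length - 1].round;
}
```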

| File | Consult When |
|---|---|
| | Integrating Hume AI EVI, configuring interviewer personas, emotion-adaptive logic, WebSocket connection setup, ElevenLabs fallback |
| | Setting up tldraw for diagram evaluation, Claude Vision scoring prompts, periodic screenshot strategy, cost per evaluation |
| | MediaPipe Face Mesh setup, gaze vector calculation, suspicion thresholds, privacy configuration, flag integration with debrief |
| | React Native + Expo stack, SM-2 spaced repetition implementation, push notifications, offline mode, data sync strategy |
| | Round selection algorithm, adaptive difficulty, performance tracking schema, SM-2 details, debrief generation prompts, weakness detection |