morphiq-track
Pipeline Position
Step 4 of 4 — measurement + flywheel.
- Input: Build Output (JSON) from morphiq-build + MORPHIQ-TRACKER.md (persistent state).
- Output: Delta Report (JSON) → loops back to morphiq-rank.
- Owns: MORPHIQ-TRACKER.md — generates on first run, updates every run.
- Owns: `morphiq-track/` state directory — JSON state layer for prompts, results, citations.
- Drives: 3 ongoing workflows (Content Optimization, Content Creation, Query Fanout Expansion).
- Data contract: See PIPELINE.md §4 for the Delta Report, §5 for MORPHIQ-TRACKER.md, §6 for the JSON State Layer.
Purpose
Morphiq Track is the measurement and flywheel skill. It queries AI providers to measure brand visibility, computes GEO scores and Share of Voice, tracks deltas over time, and drives three ongoing workflows that feed back into the pipeline.
Workflow
Step 0: Initialize or Load State
Check if `morphiq-track/manifest.json` exists in the project root.
- Missing (first run): Proceed to Step 1. The state directory will be created.
- Present (subsequent run): Load `morphiq-track/prompts.json` directly — this contains the full prompt set with config, metadata, and tracking state. Skip to Step 2. If `recommendations.cooldown_days` has elapsed since `recommendations.last_generated`, generate 20 new recommendations via `create-prompts.py --state-dir morphiq-track/ --refresh`.
- Migration (tracker exists but no state dir): Parse MORPHIQ-TRACKER.md §8 to bootstrap `prompts.json`, parse §7 to bootstrap `citations.json`. See `references/state-layer.md` Migration section.
For state layer specification, read `references/state-layer.md`.
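The load-or-initialize branching above can be sketched as follows. The manifest, prompts, and tracker paths mirror the files named in this step, but the helper names (`load_state`, `cooldown_over`) are illustrative, not part of the skill's scripts:

```python
import json
from datetime import datetime, timedelta
from pathlib import Path

def load_state(state_dir: Path):
    """Return ("init" | "load" | "migrate", prompts_or_None) per the Step 0 branches."""
    manifest = state_dir / "manifest.json"
    tracker = state_dir.parent / "MORPHIQ-TRACKER.md"   # tracker lives in the project root
    if manifest.exists():
        prompts = json.loads((state_dir / "prompts.json").read_text())
        return "load", prompts
    if tracker.exists():
        return "migrate", None   # bootstrap prompts.json / citations.json from the tracker
    return "init", None          # first run: Step 1 creates the state directory

def cooldown_over(recommendations: dict, now: datetime) -> bool:
    """True once recommendations.cooldown_days have elapsed since last_generated."""
    last = datetime.fromisoformat(recommendations["last_generated"])
    return now - last >= timedelta(days=recommendations["cooldown_days"])
```

The cooldown check keys directly off the `recommendations.last_generated` and `recommendations.cooldown_days` fields documented above.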
Step 1: Generate Prompts
First run only. Generate 50 prompts across 5 GEO categories:
| Category | Share | Brand Name? |
|---|---|---|
| Organic | 45% | No |
| Competitor | 11% | Mixed |
| How-to | 14% | No |
| Brand-Specific | 13% | Yes |
| FAQ | 17% | No |
Apply quality rules per category. Add temporal markers to 70%+ prompts. Include entities in comparison/technical prompts.
Run `scripts/create-prompts.py --state-dir morphiq-track/ --brand {brand} --category {category} --competitors {competitors}`. This writes `morphiq-track/prompts.json` and initializes `morphiq-track/manifest.json`.
For taxonomy, fanout profiles, and generation rules, read `references/prompt-taxonomy.md`.
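Turning the category shares above into whole prompt counts needs a rounding rule; a minimal sketch, assuming largest-remainder apportionment (the actual rule lives in the generation script):

```python
def allocate_prompts(total: int, shares: dict[str, float]) -> dict[str, int]:
    # Floor each category's exact share, then hand leftover slots to the
    # largest fractional remainders so counts always sum to `total`.
    exact = {c: total * s for c, s in shares.items()}
    counts = {c: int(v) for c, v in exact.items()}
    leftover = total - sum(counts.values())
    for c in sorted(exact, key=lambda c: exact[c] - counts[c], reverse=True)[:leftover]:
        counts[c] += 1
    return counts

# Shares from the table above (sum to 100%)
SHARES = {"organic": 0.45, "competitor": 0.11, "how_to": 0.14,
          "brand_specific": 0.13, "faq": 0.17}
```

For the documented 50 prompts this yields 22-23 Organic, 7 How-to, 8-9 FAQ, and so on, always summing to exactly 50.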
Step 2: Query AI Providers
Distribute prompts evenly across 4 providers. Execute using `scripts/run-queries.py --state-dir morphiq-track/ --mode execute`. This reads prompts from `morphiq-track/prompts.json`, writes versioned results to `morphiq-track/results/track-{date}.json`, and updates `morphiq-track/manifest.json`.
| Provider | Model | Concurrency |
|---|---|---|
| OpenAI | gpt-4o | Full |
| Perplexity | sonar-pro | 2 concurrent |
| Anthropic | claude-sonnet-4-5-20250514 → claude-sonnet-4-20250514 | Serialized |
| Gemini | gemini-2.5-flash | 3 concurrent |
Mandatory requirements for every query:
- Full response text. Store the complete response — never truncate. morphiq-build's content creation workflow requires the full text for analysis.
- Sub-query extraction. For each provider that exposes tool calls, extract the search queries the model issued. These feed Workflow C (Query Fanout Expansion) and invisible SoV.
- Citation deduplication. After collecting citations per response, strip UTM/tracking params from URLs and deduplicate. Track `citation_weight` (number of times each URL was cited).
- Retry on transient failure. Retry once with a 2-second delay before marking as error. This handles rate limits.
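The deduplication requirement above can be sketched with the standard library. The list of tracking keys here is illustrative (the doc only specifies UTM params); `citation_weight` matches the field named above:

```python
from collections import Counter
from urllib.parse import urlparse, parse_qsl, urlencode, urlunparse

TRACKING_KEYS = {"gclid", "fbclid", "ref"}   # assumed extras beyond utm_*

def canonicalize(url: str) -> str:
    """Drop tracking params so the same page dedupes to one URL."""
    parts = urlparse(url)
    kept = [(k, v) for k, v in parse_qsl(parts.query, keep_blank_values=True)
            if not k.lower().startswith("utm_") and k.lower() not in TRACKING_KEYS]
    return urlunparse(parts._replace(query=urlencode(kept)))

def dedupe_citations(urls: list[str]) -> dict[str, int]:
    """Map each canonical URL to its citation_weight (times cited)."""
    return dict(Counter(canonicalize(u) for u in urls))
```

Two citations that differ only in `utm_source` collapse into one entry with `citation_weight == 2`.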
Provider-specific requirements:
- OpenAI: Iterate `response.output` for items with `type == "web_search_call"` — extract the `query` field into `sub_queries[]`. This reveals GPT's `site:` operator searches and two-phase research pattern.
- Perplexity: Citations are a Perplexity-specific field. Check `response.citations`, then `response.model_extra["citations"]`, then `response.__dict__["citations"]`, then `response.choices[0].message.model_extra["citations"]`. The OpenAI-compatible client puts unknown API fields in `model_extra`.
- Anthropic: Tool config must be `{"type": "web_search_20250305", "name": "web_search", "max_uses": 5}`. Try model `claude-sonnet-4-5-20250514` first, fall back to `claude-sonnet-4-20250514`. Response content blocks include `text` (final answer), `web_search_tool_result` (search results with URLs), and `server_tool_use` (the search call). Extract text only from `text` blocks; extract citations from both `text` block inline citations and `web_search_tool_result` block content.
- Gemini: Grounding metadata returns `vertexaisearch.cloud.google.com` redirect URLs. Follow the redirect to get the real URL. If the redirect fails, use the `grounding_chunk.web.title` as fallback domain: `{url: proxy_url, title: title, resolved_domain: title}`.
For full provider config and response pipeline, read `references/provider-strategies.md`.
For selection rules and distribution, read `references/query-targets.md`.
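The Perplexity fallback chain above can be probed defensively, since the field's location varies by client version. This is a sketch against the response shapes the doc describes, not an official SDK helper:

```python
def extract_perplexity_citations(response) -> list[str]:
    """Walk the documented fallback chain for the Perplexity-only `citations` field."""
    probes = (
        lambda r: getattr(r, "citations", None),
        lambda r: (getattr(r, "model_extra", None) or {}).get("citations"),
        lambda r: r.__dict__.get("citations"),
        lambda r: (getattr(r.choices[0].message, "model_extra", None) or {}).get("citations"),
    )
    for probe in probes:
        try:
            found = probe(response)
        except (AttributeError, IndexError, TypeError):
            found = None
        if found:
            return list(found)
    return []
```

Each probe is wrapped so a missing attribute simply falls through to the next location in the chain.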
Step 3: Analyze Responses
5-step pipeline: extract raw response/citations/sub-queries → structured analysis (using the agent's reasoning capabilities) → brand mention validation (exact → TLD → LLM judge) → competitor filtering → entity normalization.
Input requirements for analysis: Each response must include the full response text, deduplicated citations with `citation_weight`, and extracted sub-queries. The analysis uses the `config` block from the prompts file for brand, domain, and competitors — never hardcoded values.
Step 4: Compute GEO Score
GEO = mean(provider_scores)
Weighted GEO = (Organic × 0.45) + (Competitor × 0.22) + (How-to × 0.22) + (Brand × 0.11)
Thresholds: ≥60 Excellent, ≥40 Good, ≥20 Fair, ≥10 Poor, <10 Very Poor.
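The weighted formula and thresholds above translate directly; the category key names are illustrative:

```python
def weighted_geo(cat_scores: dict[str, float]) -> float:
    """Weighted GEO per the formula above (weights sum to 1.0)."""
    weights = {"organic": 0.45, "competitor": 0.22, "how_to": 0.22, "brand": 0.11}
    return sum(cat_scores[c] * w for c, w in weights.items())

def geo_label(score: float) -> str:
    """Map a score onto the documented thresholds."""
    for floor, label in [(60, "Excellent"), (40, "Good"), (20, "Fair"), (10, "Poor")]:
        if score >= floor:
            return label
    return "Very Poor"
```

For example, category scores of 50/40/30/20 give 22.5 + 8.8 + 6.6 + 2.2 = 40.1, landing in the "Good" band.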
For GEO methodology, read `references/query-targets.md`.
Step 5: Compute Share of Voice
Three SoV tiers:
| Metric | What It Measures |
|---|---|
| Mention SoV | Brand name in final responses |
| Fanout-Weighted SoV | Weighted by prompt type fan-out depth |
| Influence SoV | Brand presence in sub-queries (invisible influence) |
Track Conversion Gap = Influence SoV − Citation SoV.
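A minimal sketch of the share and gap arithmetic. The exact counting rules live in `references/share-of-voice.md`, so treat the denominator here (brand plus competitor appearances) as an illustrative assumption; the Conversion Gap line matches the definition above:

```python
def share_of_voice(brand_hits: int, competitor_hits: dict[str, int]) -> float:
    """Brand share (%) of all brand-or-competitor appearances."""
    total = brand_hits + sum(competitor_hits.values())
    return 0.0 if total == 0 else 100.0 * brand_hits / total

def conversion_gap(influence_sov: float, citation_sov: float) -> float:
    """Conversion Gap = Influence SoV − Citation SoV."""
    return influence_sov - citation_sov
```

A large positive gap means the brand shapes sub-queries (invisible influence) without earning citations in final answers.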
For SoV methodology, read `references/share-of-voice.md`.
Step 6: Compute Deltas
Compare against the previous snapshot using `scripts/diff-results.py --state-dir morphiq-track/`. The script reads `manifest.json` to auto-resolve the current (`runs[0]`) and previous (`runs[1]`) results paths, and reads `morphiq-track/citations.json` for previous citation state. Flag changes >5 points. Generate flagged actions for regressions, losses, displacement, and conversion gaps.
For delta methodology, read `references/delta-scoring.md`.
Step 7: Update State Layer and MORPHIQ-TRACKER.md
State layer updates (JSON — source of truth for track-owned data):
- Rebuild `morphiq-track/citations.json` from current results + previous citation state (gained/lost/stable)
- Update `morphiq-track/prompts.json` tracking fields (mentioned, cited, best_provider, runs_tracked, last_run)
- Update `morphiq-track/manifest.json` `updated_at`
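The gained/lost/stable split used in the `citations.json` rebuild is plain set arithmetic over the current and previous citation URL sets:

```python
def classify_citations(current: set[str], previous: set[str]) -> dict[str, list[str]]:
    """Split cited URLs into gained/lost/stable for the citations.json rebuild."""
    return {
        "gained": sorted(current - previous),   # cited now, not before
        "lost":   sorted(previous - current),   # cited before, not now
        "stable": sorted(current & previous),   # cited in both runs
    }
```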
Tracker updates (markdown — user-facing dashboard):
Project state layer data into MORPHIQ-TRACKER.md sections 5-9 and 14 (SoV, SoV Trend, Citations, Prompts, Competitors, Run History). Update remaining sections (1-4, 10-13) per tracker-spec.md rules.
For tracker specification, read `references/tracker-spec.md`.
For state layer specification, read `references/state-layer.md`.
Step 8: Produce Delta Report
Assemble the Delta Report JSON (PIPELINE.md §4): SoV metrics, citations, per-provider data, competitors, flagged actions, content queue. Loops back to morphiq-rank.
Three Ongoing Workflows
Workflow A: Content Optimization
- Identify pages with declining SoV or lost citations
- Feed to morphiq-build (existing content path)
- Re-track to measure impact
Workflow B: Content Creation
- Collect prompts where brand is absent
- Identify competitor citation sources
- Generate content briefs for missing coverage
- Feed to morphiq-build (new content path)
Workflow C: Query Fanout Expansion
- Run `scripts/analyze-fanout.py --state-dir morphiq-track/` with optional `--scan-report` for page inventory and simulated queries
- Script extracts sub-queries from latest track results, merges with scan simulated queries (fills Perplexity/Gemini gap)
- Compares against site page inventory to identify unanswered sub-queries
- Extracts competitor citation sources for each unanswered sub-query
- Generates content briefs prioritized by citation weight (`site:` 2x, citation-producing 1.5x, silent 0.5x)
- Output feeds the Delta Report `content_creation_queue` via the `--fanout` flag on `generate-report.py`
- Update MORPHIQ-TRACKER.md §12 (Query Fanout Coverage) and §13 (Content Creation Queue) with new entries
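The brief prioritization above can be sketched with the stated multipliers; the base score (citation weight, floored at 1) and the `site:` detection are illustrative assumptions:

```python
def brief_priority(sub_query: str, citation_weight: int, produced_citation: bool) -> float:
    """Score a content brief using the fanout multipliers documented above."""
    if "site:" in sub_query.lower():
        multiplier = 2.0          # site: operator sub-queries weigh 2x
    elif produced_citation:
        multiplier = 1.5          # citation-producing sub-queries weigh 1.5x
    else:
        multiplier = 0.5          # silent sub-queries weigh 0.5x
    return max(citation_weight, 1) * multiplier
```

Briefs would then be sorted by this score descending before entering the content creation queue.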
Reference Files
| File | Purpose |
|---|---|
| `references/prompt-taxonomy.md` | Prompt types, GEO categories, fanout depth, generation rules |
| `references/share-of-voice.md` | SoV formulas, mention types, invisible SoV, competitive tracking |
| `references/provider-strategies.md` | Provider config, models, response analysis pipeline |
| `references/query-targets.md` | Provider selection, distribution, citation categories, GEO score |
| `references/delta-scoring.md` | Delta calculation, significance thresholds, flagged actions |
| `references/tracker-spec.md` | Full MORPHIQ-TRACKER.md specification (14 sections) |
| `references/state-layer.md` | JSON state layer: directory structure, file schemas, read/write rules, sync rules |