morphiq-track


Pipeline Position


Step 4 of 4 — measurement + flywheel.
  • Input: Build Output (JSON) from morphiq-build + MORPHIQ-TRACKER.md (persistent state).
  • Output: Delta Report (JSON) → loops back to morphiq-rank.
  • Owns: MORPHIQ-TRACKER.md — generated on first run, updated every run.
  • Owns: the morphiq-track/ state directory — JSON state layer for prompts, results, citations.
  • Drives: 3 ongoing workflows (Content Optimization, Content Creation, Query Fanout Expansion).
  • Data contract: See PIPELINE.md §4 for the Delta Report, §5 for MORPHIQ-TRACKER.md, §6 for the JSON State Layer.

Purpose


Morphiq Track is the measurement and flywheel skill. It queries AI providers to measure brand visibility, computes GEO scores and Share of Voice, tracks deltas over time, and drives three ongoing workflows that feed back into the pipeline.

Workflow


Step 0: Initialize or Load State


Check if morphiq-track/manifest.json exists in the project root.
  • Missing (first run): Proceed to Step 1. The state directory will be created.
  • Present (subsequent run): Load morphiq-track/prompts.json directly — this contains the full prompt set with config, metadata, and tracking state. Skip to Step 2. If recommendations.cooldown_days has elapsed since recommendations.last_generated, generate 20 new recommendations via create-prompts.py --state-dir morphiq-track/ --refresh.
  • Migration (tracker exists but no state dir): Parse MORPHIQ-TRACKER.md §8 to bootstrap prompts.json, and §7 to bootstrap citations.json. See the Migration section of references/state-layer.md.
For the state layer specification, read references/state-layer.md.

Step 1: Generate Prompts


First run only. Generate 50 prompts across 5 GEO categories:

| Category | Share | Brand Name? |
| --- | --- | --- |
| Organic | 45% | No |
| Competitor | 11% | Mixed |
| How-to | 14% | No |
| Brand-Specific | 13% | Yes |
| FAQ | 17% | No |

Apply quality rules per category. Add temporal markers to 70%+ of prompts. Include entities in comparison/technical prompts.
Run scripts/create-prompts.py --state-dir morphiq-track/ --brand {brand} --category {category} --competitors {competitors}. This writes morphiq-track/prompts.json and initializes morphiq-track/manifest.json.
For taxonomy, fanout profiles, and generation rules, read references/prompt-taxonomy.md.
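Turning those category shares into whole prompt counts that still sum to 50 is an apportionment problem. A minimal sketch using largest-remainder allocation — an illustrative approach, not necessarily how create-prompts.py distributes its counts:

```python
def allocate_prompts(total: int, shares: dict[str, int]) -> dict[str, int]:
    """Apportion `total` prompts across categories by percentage share,
    handing leftovers to the largest fractional remainders."""
    exact = {c: total * s / 100 for c, s in shares.items()}
    counts = {c: int(v) for c, v in exact.items()}
    leftover = total - sum(counts.values())
    # Python's sort is stable, so ties fall back to category order.
    for c in sorted(exact, key=lambda c: exact[c] - counts[c], reverse=True)[:leftover]:
        counts[c] += 1
    return counts
```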
Step 2: Query AI Providers


Distribute prompts evenly across the 4 providers. Execute using scripts/run-queries.py --state-dir morphiq-track/ --mode execute. This reads prompts from morphiq-track/prompts.json, writes versioned results to morphiq-track/results/track-{date}.json, and updates morphiq-track/manifest.json.

| Provider | Model | Concurrency |
| --- | --- | --- |
| OpenAI | gpt-4o | Full |
| Perplexity | sonar-pro | 2 concurrent |
| Anthropic | claude-sonnet-4-5-20250514 → claude-sonnet-4-20250514 | Serialized |
| Gemini | gemini-2.5-flash | 3 concurrent |

Mandatory requirements for every query:
  1. Full response text. Store the complete response — never truncate. morphiq-build's content creation workflow requires the full text for analysis.
  2. Sub-query extraction. For each provider that exposes tool calls, extract the search queries the model issued. These feed Workflow C (Query Fanout Expansion) and invisible SoV.
  3. Citation deduplication. After collecting citations per response, strip UTM/tracking params from URLs and deduplicate. Track citation_weight (the number of times each URL was cited).
  4. Retry on transient failure. Retry once with a 2-second delay before marking as error. This handles rate limits.
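Requirement 3 can be sketched as follows. The utm_ prefix comes from the text above; the extra tracking keys (gclid, fbclid) are an assumed list — the skill's real strip set may differ:

```python
from collections import Counter
from urllib.parse import parse_qsl, urlencode, urlsplit, urlunsplit

TRACKING_PREFIXES = ("utm_",)
TRACKING_KEYS = {"gclid", "fbclid"}  # assumption; not specified by the skill


def canonicalize(url: str) -> str:
    """Drop tracking params and the fragment so duplicate citations collapse."""
    parts = urlsplit(url)
    kept = [(k, v) for k, v in parse_qsl(parts.query, keep_blank_values=True)
            if not k.startswith(TRACKING_PREFIXES) and k not in TRACKING_KEYS]
    return urlunsplit((parts.scheme, parts.netloc, parts.path, urlencode(kept), ""))


def dedupe_citations(urls: list[str]) -> dict[str, int]:
    """Map each canonical URL to its citation_weight (times cited)."""
    return dict(Counter(canonicalize(u) for u in urls))
```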
Provider-specific requirements:
  • OpenAI: Iterate response.output for items with type == "web_search_call" — extract the query field into sub_queries[]. This reveals GPT's site: operator searches and two-phase research pattern.
  • Perplexity: Citations are a Perplexity-specific field. Check response.citations, then response.model_extra["citations"], then response.__dict__["citations"], then response.choices[0].message.model_extra["citations"]. The OpenAI-compatible client puts unknown API fields in model_extra.
  • Anthropic: Tool config must be {"type": "web_search_20250305", "name": "web_search", "max_uses": 5}. Try model claude-sonnet-4-5-20250514 first, fall back to claude-sonnet-4-20250514. Response content blocks include text (final answer), web_search_tool_result (search results with URLs), and server_tool_use (the search call). Extract text only from text blocks; extract citations from both text block inline citations and web_search_tool_result block content.
  • Gemini: Grounding metadata returns vertexaisearch.cloud.google.com redirect URLs. Follow the redirect to get the real URL. If the redirect fails, use grounding_chunk.web.title as the fallback domain: {url: proxy_url, title: title, resolved_domain: title}.
For full provider config and response pipeline, read references/provider-strategies.md. For selection rules and distribution, read references/query-targets.md.
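The Perplexity fallback chain above lends itself to a defensive accessor. A sketch — the attribute names follow the chain as documented, and the function accepts any response-shaped object:

```python
def extract_perplexity_citations(response) -> list[str]:
    """Walk the fallback chain described above; return the first non-empty
    citations list found, else []."""
    candidates = (
        lambda r: r.citations,
        lambda r: r.model_extra["citations"],
        lambda r: r.__dict__["citations"],
        lambda r: r.choices[0].message.model_extra["citations"],
    )
    for get in candidates:
        try:
            cites = get(response)
        except (AttributeError, KeyError, IndexError, TypeError):
            continue  # this location doesn't exist on this response shape
        if cites:
            return list(cites)
    return []
```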

Step 3: Analyze Responses


5-step pipeline: extract raw response/citations/sub-queries → structured analysis (using the agent's reasoning capabilities) → brand mention validation (exact → TLD → LLM judge) → competitor filtering → entity normalization.
Input requirements for analysis: Each response must include the full response text, deduplicated citations with citation_weight, and extracted sub-queries. The analysis uses the config block from the prompts file for brand, domain, and competitors — never hardcoded values.
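The brand mention validation cascade (exact → TLD → LLM judge) can be sketched as below. This is a simplified illustration: the checks are plain case-insensitive substring tests, and llm_judge stands in for the agent's reasoning step; the real tiers are defined by the pipeline, not this code.

```python
from typing import Callable, Optional


def validate_mention(text: str, brand: str, domain: str,
                     llm_judge: Optional[Callable[[str, str], bool]] = None) -> str:
    """Return the cheapest tier that confirms a brand mention:
    "exact", "tld", "llm", or "none"."""
    lower = text.lower()
    if brand.lower() in lower:
        return "exact"                       # literal brand name present
    if domain.lower() in lower:
        return "tld"                         # bare domain counts as a mention
    if llm_judge is not None and llm_judge(text, brand):
        return "llm"                         # fall back to the judge
    return "none"
```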

Step 4: Compute GEO Score


GEO = mean(provider_scores)
Weighted GEO = (Organic × 0.45) + (Competitor × 0.22) + (How-to × 0.22) + (Brand × 0.11)
Thresholds: ≥60 Excellent, ≥40 Good, ≥20 Fair, ≥10 Poor, <10 Very Poor.
For GEO methodology, read references/query-targets.md.
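The two formulas and the threshold bands transcribe directly to code:

```python
WEIGHTS = {"Organic": 0.45, "Competitor": 0.22, "How-to": 0.22, "Brand": 0.11}


def geo_score(provider_scores: dict[str, float]) -> float:
    """GEO = mean(provider_scores)."""
    return sum(provider_scores.values()) / len(provider_scores)


def weighted_geo(category_scores: dict[str, float]) -> float:
    """Weighted GEO = Organic*0.45 + Competitor*0.22 + How-to*0.22 + Brand*0.11."""
    return sum(category_scores[c] * w for c, w in WEIGHTS.items())


def geo_band(score: float) -> str:
    """Map a score onto the threshold bands above."""
    for floor, label in ((60, "Excellent"), (40, "Good"), (20, "Fair"), (10, "Poor")):
        if score >= floor:
            return label
    return "Very Poor"
```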

Step 5: Compute Share of Voice


Three SoV tiers:

| Metric | What It Measures |
| --- | --- |
| Mention SoV | Brand name in final responses |
| Fanout-Weighted SoV | Weighted by prompt type fan-out depth |
| Influence SoV | Brand presence in sub-queries (invisible influence) |

Track Conversion Gap = Influence SoV − Citation SoV.
For SoV methodology, read references/share-of-voice.md.
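As a sketch of the arithmetic: the conversion gap is the stated subtraction, and mention SoV is assumed here to be the brand's share of all brand mentions as a percentage — the authoritative formulas live in references/share-of-voice.md.

```python
def mention_sov(brand_mentions: int, all_brand_mentions: int) -> float:
    """Assumed definition: brand's share of all brand mentions, as a percent."""
    return 100.0 * brand_mentions / all_brand_mentions if all_brand_mentions else 0.0


def conversion_gap(influence_sov: float, citation_sov: float) -> float:
    # A positive gap means the brand shapes sub-queries but isn't being cited.
    return influence_sov - citation_sov
```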

Step 6: Compute Deltas


Compare against the previous snapshot using scripts/diff-results.py --state-dir morphiq-track/. The script reads manifest.json to auto-resolve the current (runs[0]) and previous (runs[1]) results paths, and reads morphiq-track/citations.json for previous citation state. Flag changes >5 points. Generate flagged actions for regressions, losses, displacement, and conversion gaps.
For delta methodology, read references/delta-scoring.md.
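The >5-point flagging rule reduces to a simple comparison over matching metrics. A sketch — diff-results.py's actual output schema is defined by the skill, not by this snippet:

```python
SIGNIFICANCE = 5  # points; per the >5-point rule above


def flag_deltas(current: dict[str, float], previous: dict[str, float]) -> dict[str, float]:
    """Return metric -> delta for every metric that moved more than the threshold."""
    flagged = {}
    for metric, now in current.items():
        before = previous.get(metric)
        if before is None:
            continue  # no prior snapshot for this metric
        delta = now - before
        if abs(delta) > SIGNIFICANCE:
            flagged[metric] = delta
    return flagged
```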

Step 7: Update State Layer and MORPHIQ-TRACKER.md


State layer updates (JSON — source of truth for track-owned data):
  1. Rebuild morphiq-track/citations.json from current results + previous citation state (gained/lost/stable)
  2. Update morphiq-track/prompts.json tracking fields (mentioned, cited, best_provider, runs_tracked, last_run)
  3. Update updated_at in morphiq-track/manifest.json
Tracker updates (markdown — user-facing dashboard): Project state layer data into MORPHIQ-TRACKER.md sections 5-9 and 14 (SoV, SoV Trend, Citations, Prompts, Competitors, Run History). Update remaining sections (1-4, 10-13) per tracker-spec.md rules.
For the tracker specification, read references/tracker-spec.md. For the state layer specification, read references/state-layer.md.
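The gained/lost/stable classification in update 1 is set arithmetic over previous and current citation URLs:

```python
def citation_transitions(previous: set[str], current: set[str]) -> dict[str, set[str]]:
    """Classify each cited URL for the citations.json rebuild above."""
    return {
        "gained": current - previous,   # newly cited this run
        "lost": previous - current,     # cited before, missing now
        "stable": current & previous,   # cited in both runs
    }
```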

Step 8: Produce Delta Report


Assemble JSON (PIPELINE.md §4): SoV metrics, citations, per-provider data, competitors, flagged actions, content queue. Loops back to morphiq-rank.

Three Ongoing Workflows


Workflow A: Content Optimization


  1. Identify pages with declining SoV or lost citations
  2. Feed to morphiq-build (existing content path)
  3. Re-track to measure impact

Workflow B: Content Creation


  1. Collect prompts where brand is absent
  2. Identify competitor citation sources
  3. Generate content briefs for missing coverage
  4. Feed to morphiq-build (new content path)

Workflow C: Query Fanout Expansion


  1. Run scripts/analyze-fanout.py --state-dir morphiq-track/ with optional --scan-report for page inventory and simulated queries
  2. Script extracts sub-queries from the latest track results and merges them with scan simulated queries (fills the Perplexity/Gemini gap)
  3. Compares against the site page inventory to identify unanswered sub-queries
  4. Extracts competitor citation sources for each unanswered sub-query
  5. Generates content briefs prioritized by citation weight (site: 2x, citation-producing 1.5x, silent 0.5x)
  6. Output feeds the Delta Report content_creation_queue via the --fanout flag on generate-report.py
  7. Update MORPHIQ-TRACKER.md §12 (Query Fanout Coverage) and §13 (Content Creation Queue) with new entries
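The prioritization in step 5 can be sketched as a weight lookup. The multipliers come from the text above; how a sub-query is assigned to a bucket, and the brief schema, are assumptions of this sketch:

```python
# site: operator queries 2x, citation-producing 1.5x, silent 0.5x (per step 5)
MULTIPLIERS = {"site": 2.0, "citation_producing": 1.5, "silent": 0.5}


def brief_priority(citation_weight: int, bucket: str) -> float:
    """Priority score for a content brief: citation weight times bucket multiplier."""
    return citation_weight * MULTIPLIERS[bucket]


def rank_briefs(briefs: list[dict]) -> list[dict]:
    """Highest-priority briefs first (hypothetical brief dicts for illustration)."""
    return sorted(briefs,
                  key=lambda b: brief_priority(b["citation_weight"], b["bucket"]),
                  reverse=True)
```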

Reference Files


| File | Purpose |
| --- | --- |
| references/prompt-taxonomy.md | Prompt types, GEO categories, fanout depth, generation rules |
| references/share-of-voice.md | SoV formulas, mention types, invisible SoV, competitive tracking |
| references/provider-strategies.md | Provider config, models, response analysis pipeline |
| references/query-targets.md | Provider selection, distribution, citation categories, GEO score |
| references/delta-scoring.md | Delta calculation, significance thresholds, flagged actions |
| references/tracker-spec.md | Full MORPHIQ-TRACKER.md specification (14 sections) |
| references/state-layer.md | JSON state layer: directory structure, file schemas, read/write rules, sync rules |