morphiq-build


Pipeline Position


Step 3 of 4 — consumes morphiq-rank output.
  • Input: Prioritized Roadmap (JSON) OR user prompt OR existing content.
  • Output: Build Output (JSON + artifacts) → consumed by morphiq-track.
  • Data contract: See PIPELINE.md §3 for the Build Output schema.

Purpose


Morphiq Build fixes issues identified by morphiq-rank. It creates new content, optimizes existing content, generates schema markup, builds policy files, and produces artifacts that improve AI visibility. The core engine is a 6-step content lab pipeline.

Entry Points


Path A — From Prioritized Roadmap: Process issues by tier and priority. Route each to the appropriate fix workflow.
Path B — From User Prompt: Accept topic, optional source URLs (up to 5), optional ICP/brand context. Route to content lab pipeline.
Path C — From Existing Content: Accept content URL or raw text. Route to quality rewrite workflow.
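The three entry paths above can be sketched as a simple dispatcher. This is an illustrative sketch only; the function and field names are assumptions, not the actual morphiq-build API.

```python
# Hypothetical entry-point router; field names are illustrative.
def route(request: dict) -> str:
    if "roadmap" in request:                   # Path A: prioritized roadmap JSON
        return "fix-workflows"
    if "topic" in request:                     # Path B: user prompt (+ up to 5 source URLs)
        return "content-lab-pipeline"
    if "url" in request or "text" in request:  # Path C: existing content
        return "quality-rewrite"
    raise ValueError("no recognized entry point")
```

Path A is checked first so a roadmap that also carries a topic still routes to the fix workflows.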

Content Lab Pipeline (6 Steps)


Step 1: Ingest Sources


Validate URLs, filter blocked domains, deduplicate, cap at 10. Accept raw text or PDF alternatives. Halt if zero valid sources.
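The validate → filter → dedupe → cap sequence can be sketched as follows. The blocklist contents and error handling are assumptions for illustration; only the ordering and the halt-on-zero rule come from the step description.

```python
from urllib.parse import urlparse

BLOCKED_DOMAINS = {"blocked.example.com"}  # illustrative blocklist


def ingest(urls: list[str], cap: int = 10) -> list[str]:
    """Validate URLs, filter blocked domains, dedupe, cap; halt on zero."""
    seen, valid = set(), []
    for u in urls:
        p = urlparse(u)
        if p.scheme not in ("http", "https") or not p.netloc:
            continue                 # invalid URL
        if p.netloc in BLOCKED_DOMAINS:
            continue                 # blocked domain
        if u in seen:
            continue                 # duplicate
        seen.add(u)
        valid.append(u)
    valid = valid[:cap]              # cap at 10
    if not valid:
        raise RuntimeError("halt: zero valid sources")
    return valid
```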

Step 2: Extract Content


Crawl each URL → clean markdown. Extract title, content, outbound links, publish date. Halt if zero successful extractions.

Step 3: Analyze Gaps


Analyze against query space. Identify 5 gap types:
  • Content: unanswered questions, missing perspectives
  • Data: missing statistics, quantitative evidence
  • Format: wrong format for LLM retrieval
  • Depth: surface-level, no expert insight
  • Fanout coverage: sub-queries AI would chain but the site cannot answer
Detect comparative intent. Evaluate fanout coverage using content type → sub-query rules. Generate up to 5 search queries.
For gap taxonomy and severity, read references/gap-taxonomy.md.
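The fanout-coverage gap above reduces to a simple ratio: the fraction of AI-chained sub-queries the site can answer. A toy version of that check, with everything beyond the five gap-type names assumed for illustration (the real taxonomy lives in references/gap-taxonomy.md):

```python
# Illustrative gap-type constants and a toy fanout-coverage ratio.
GAP_TYPES = ("content", "data", "format", "depth", "fanout-coverage")


def fanout_coverage(sub_queries: list[str], answered: set[str]) -> float:
    """Fraction of chained sub-queries the site can already answer."""
    if not sub_queries:
        return 1.0  # nothing to chain means nothing is missing
    return sum(q in answered for q in sub_queries) / len(sub_queries)
```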

Step 4: Research to Fill Gaps


Run up to 5 live web searches. Collect authoritative sources, statistics (number + source + URL), expert quotes (speaker + credential), industry insights. If comparative intent, dedicate 1 search to brand data.
For citation rules, read references/enrichment-sources.md.
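One way to read the search-budget rule above: out of at most 5 searches, one slot is reserved for brand data when comparative intent is detected. A hedged sketch, where the brand-query string and allocation mechanics are assumptions:

```python
# Hypothetical search-budget allocator for Step 4.
def allocate_searches(queries: list[str], comparative: bool, budget: int = 5) -> list[str]:
    """Cap searches at `budget`; reserve one for brand data if comparative."""
    if comparative:
        plan = queries[: budget - 1]
        plan.append("brand comparison data")  # dedicated brand search (illustrative)
        return plan
    return queries[:budget]
```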

Step 5: Generate / Rewrite


Produce final content applying Morphiq standard:
  • E-E-A-T signals, name-drop citations, expert quotes
  • Heading hierarchy, 50–75 word paragraphs, direct-answer blocks
  • Brand positioning (comparative or authority mode)
  • 1,200–1,600 words, 5–7 H2 sections, FAQ with 3–5 Q&As
  • Minimum 3 statistics, 1 expert quote, sources section
  • No fabricated case studies
For full pipeline spec, read references/content-lab-pipeline.md.
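Several of the standard's requirements are mechanically checkable. A minimal lint sketch, assuming plain-markdown input; the `%`-based statistic detection is a crude stand-in for whatever the real pipeline uses:

```python
import re


def check_standard(md: str) -> list[str]:
    """Toy checks against the word-count, H2, and statistics floors above."""
    problems = []
    words = len(md.split())
    if not 1200 <= words <= 1600:
        problems.append(f"word count {words} outside 1200-1600")
    h2s = len(re.findall(r"^## ", md, re.M))
    if not 5 <= h2s <= 7:
        problems.append(f"{h2s} H2 sections, expected 5-7")
    stats = len(re.findall(r"\d+%", md))  # crude proxy for statistics
    if stats < 3:
        problems.append(f"only {stats} statistics, need >= 3")
    return problems
```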

Step 6: Validate Fanout Coverage (Fanout Issues Only)


For fanout-* issues with fanout_context: validate that generated content addresses all triggering sub-queries and meets the competitive quality floor. If coverage < 80% or the quality floor is not met, revise once. Skip for non-fanout content.
Run scripts/validate-coverage.py with generated content + triggering sub-queries + quality floor.
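The gate logic above (pass at ≥ 80%, one revision allowed, then flag) can be sketched as a small state function. This is a sketch of the decision only; the real check is scripts/validate-coverage.py.

```python
# Illustrative Step 6 gate: pass / revise once / flag.
def coverage_gate(covered: int, total: int, revised: bool = False) -> str:
    """Decide the next action given sub-query coverage and revision state."""
    coverage = covered / total if total else 1.0
    if coverage >= 0.8:
        return "pass"
    return "revise" if not revised else "flag"
```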

Post-Pipeline Processing


  • Schema Injection: classify content type, generate JSON-LD. New content: embed schema in the content artifact. Existing content: separate schema artifact with implementation tracking. (references/schema-templates.md)
  • Metadata Optimization: meta description, slug, OG tags. (references/metadata-patterns.md)
  • llms.txt Generation: full autonomous pipeline: scrape → LLM → validate → repair → template fallback. (references/llms-txt-spec.md)
  • Content Restructuring: fix headings, split paragraphs.
  • Internal Linking: link related pages for site: coverage. (references/content-lab-pipeline.md)
  • Enrichment: additional search for missing stats/citations. (references/enrichment-sources.md)
  • FAQ Generation: generate FAQ from gap analysis. (references/faq-guidelines.md)

Issue Type → Fix Routing


  • agentic-* schema: Schema Injection — generate JSON-LD
  • agentic-* metadata: Metadata Optimization — generate tags
  • content-* quality: Quality Rewrite — Step 5 pipeline
  • chunking-buried-answer: Quality Rewrite — Step 5 pipeline (Claude-driven rewrite to answer-first structure)
  • chunking-* structure (other): Content Restructuring
  • policy-* files: Policy file generation
  • fanout-* coverage: Full 6-step pipeline for new content. When fanout_context is present, pass triggering_sub_queries to Step 3 and competitor_sources to Step 4. Run Step 6 (coverage validation) before post-pipeline processing.
  • visibility-*: Enrich existing content via pipeline
visibility-*
Enrich existing content via pipeline
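The routing table above is effectively a prefix match with one exact-match exception (chunking-buried-answer before the chunking-* catch-all). A sketch, with workflow names abbreviated for illustration:

```python
# Illustrative prefix → workflow map mirroring the routing table above.
ROUTES = [
    ("chunking-buried-answer", "quality-rewrite"),   # exact case before prefix
    ("agentic-", "schema-or-metadata"),
    ("content-", "quality-rewrite"),
    ("chunking-", "content-restructuring"),
    ("policy-", "policy-file-generation"),
    ("fanout-", "full-6-step-pipeline"),
    ("visibility-", "enrichment"),
]


def route_issue(issue_type: str) -> str:
    """Return the first workflow whose prefix matches the issue type."""
    for prefix, workflow in ROUTES:
        if issue_type.startswith(prefix):
            return workflow
    return "unrouted"
```

Ordering matters: moving chunking-buried-answer below chunking-* would shadow it.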

Build Output


Artifacts with type: "content", "schema", "metadata", "policy_file". Each includes placement instructions.
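A minimal sketch of an artifact record. Only the four type values come from this section; the other field names are assumptions, not the actual schema from PIPELINE.md §3.

```python
from dataclasses import dataclass

ARTIFACT_TYPES = {"content", "schema", "metadata", "policy_file"}


@dataclass
class Artifact:
    type: str       # one of ARTIFACT_TYPES
    body: str       # the generated payload (hypothetical field)
    placement: str  # placement instructions (hypothetical field)

    def __post_init__(self) -> None:
        if self.type not in ARTIFACT_TYPES:
            raise ValueError(f"unknown artifact type: {self.type}")
```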

Reference Files


  • references/content-lab-pipeline.md: full 6-step pipeline with I/O formats
  • references/gap-taxonomy.md: gap types, severity, search query rules
  • references/enrichment-sources.md: citation format, source preferences
  • references/schema-templates.md: JSON-LD templates, skip conditions
  • references/metadata-patterns.md: SEO metadata rules
  • references/llms-txt-spec.md: llms.txt spec and generation
  • references/faq-guidelines.md: FAQ generation rules