# morphiq-build
## Pipeline Position
Step 3 of 4 — consumes morphiq-rank output.
- Input: Prioritized Roadmap (JSON) OR user prompt OR existing content.
- Output: Build Output (JSON + artifacts) → consumed by morphiq-track.
- Data contract: See PIPELINE.md §3 for the Build Output schema.
## Purpose
Morphiq Build fixes issues identified by morphiq-rank. It creates new content, optimizes existing content, generates schema markup, builds policy files, and produces artifacts that improve AI visibility. The core engine is a 6-step content lab pipeline.
## Entry Points
- Path A — From Prioritized Roadmap: Process issues by tier and priority. Route each to the appropriate fix workflow.
- Path B — From User Prompt: Accept a topic, optional source URLs (up to 5), and optional ICP/brand context. Route to the content lab pipeline.
- Path C — From Existing Content: Accept a content URL or raw text. Route to the quality rewrite workflow.
## Content Lab Pipeline (6 Steps)
### Step 1: Ingest Sources
Validate URLs, filter blocked domains, deduplicate, cap at 10. Accept raw text or PDF alternatives. Halt if zero valid sources.
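Step 1's rules (validate, filter blocked domains, deduplicate, cap at 10, halt on zero) could be sketched as follows; the blocklist entries and function name are illustrative assumptions, not taken from the skill's actual implementation:

```python
from urllib.parse import urlparse

BLOCKED_DOMAINS = {"pinterest.com", "quora.com"}  # hypothetical blocklist
MAX_SOURCES = 10

def ingest_sources(urls):
    """Validate, filter, deduplicate, and cap source URLs (Step 1)."""
    seen = set()
    valid = []
    for url in urls:
        parsed = urlparse(url)
        if parsed.scheme not in ("http", "https") or not parsed.netloc:
            continue  # drop malformed URLs
        if parsed.netloc.lower().removeprefix("www.") in BLOCKED_DOMAINS:
            continue  # drop blocked domains
        if url not in seen:  # deduplicate
            seen.add(url)
            valid.append(url)
    if not valid:
        raise ValueError("no valid sources; halting pipeline")
    return valid[:MAX_SOURCES]  # cap at 10
```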
### Step 2: Extract Content
Crawl each URL → clean markdown. Extract title, content, outbound links, publish date. Halt if zero successful extractions.
### Step 3: Analyze Gaps
Analyze against query space. Identify 5 gap types:
| Gap Type | What Is Missing |
|---|---|
| Content | Unanswered questions, missing perspectives |
| Data | Missing statistics, quantitative evidence |
| Format | Wrong format for LLM retrieval |
| Depth | Surface-level, no expert insight |
| Fanout coverage | Sub-queries AI would chain but site cannot answer |
Detect comparative intent. Evaluate fanout coverage using content type → sub-query rules. Generate up to 5 search queries.
For gap taxonomy and severity, read `references/gap-taxonomy.md`.
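The gap taxonomy and the 5-query cap might be modeled like this; the enum values and helper are an illustrative sketch, and the real severity and query rules live in `references/gap-taxonomy.md`:

```python
from enum import Enum

class GapType(Enum):
    CONTENT = "content"  # unanswered questions, missing perspectives
    DATA = "data"        # missing statistics, quantitative evidence
    FORMAT = "format"    # wrong format for LLM retrieval
    DEPTH = "depth"      # surface-level, no expert insight
    FANOUT = "fanout"    # sub-queries AI would chain but site cannot answer

MAX_QUERIES = 5

def plan_queries(gaps: list[tuple[GapType, str]]) -> list[str]:
    """Turn identified gaps into at most 5 deduplicated search queries."""
    queries = []
    for _, query in gaps:
        if query not in queries:
            queries.append(query)
    return queries[:MAX_QUERIES]
```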
### Step 4: Research to Fill Gaps
Run up to 5 live web searches. Collect authoritative sources, statistics (number + source + URL), expert quotes (speaker + credential), industry insights. If comparative intent, dedicate 1 search to brand data.
For citation rules, read `references/enrichment-sources.md`.
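The search-budget rule (up to 5 live searches, 1 reserved for brand data under comparative intent) can be sketched as below; the placeholder brand-data query string is hypothetical:

```python
MAX_SEARCHES = 5

def allocate_searches(queries: list[str], comparative_intent: bool) -> list[str]:
    """Allocate up to 5 live searches; reserve 1 for brand data when
    comparative intent was detected in Step 3."""
    budget = MAX_SEARCHES - (1 if comparative_intent else 0)
    plan = queries[:budget]
    if comparative_intent:
        plan.append("brand data comparison")  # placeholder brand-data query
    return plan
```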
### Step 5: Generate / Rewrite
Produce final content applying Morphiq standard:
- E-E-A-T signals, name-drop citations, expert quotes
- Heading hierarchy, 50–75 word paragraphs, direct-answer blocks
- Brand positioning (comparative or authority mode)
- 1,200–1,600 words, 5–7 H2 sections, FAQ with 3–5 Q&As
- Minimum 3 statistics, 1 expert quote, sources section
- No fabricated case studies
For the full pipeline spec, read `references/content-lab-pipeline.md`.
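The checklist above lends itself to a mechanical gate. This sketch assumes a pre-computed draft summary dict; its keys are illustrative, not the skill's actual data model:

```python
def check_morphiq_standard(draft: dict) -> list[str]:
    """Return violations of the Morphiq content standard (empty list = pass)."""
    problems = []
    if not 1200 <= draft["word_count"] <= 1600:
        problems.append("word count outside 1,200-1,600")
    if not 5 <= draft["h2_sections"] <= 7:
        problems.append("need 5-7 H2 sections")
    if not 3 <= draft["faq_items"] <= 5:
        problems.append("FAQ needs 3-5 Q&As")
    if draft["statistics"] < 3:
        problems.append("need at least 3 statistics")
    if draft["expert_quotes"] < 1:
        problems.append("need at least 1 expert quote")
    return problems
```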
### Step 6: Validate Fanout Coverage (Fanout Issues Only)
For `fanout-*` issues with `fanout_context`: validate that the generated content addresses all triggering sub-queries and meets the competitive quality floor. If coverage < 80% or the quality floor is not met, revise once. Skip for non-fanout content. Run `scripts/validate-coverage.py` with the generated content + triggering sub-queries + quality floor.
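A rough shape for the Step 6 check, with simple keyword containment standing in for whatever relevance scoring `scripts/validate-coverage.py` actually performs:

```python
def validate_fanout_coverage(content: str, sub_queries: list[str],
                             quality_ok: bool, threshold: float = 0.8) -> dict:
    """Check that content addresses the triggering sub-queries (Step 6).
    Keyword containment is an illustrative stand-in for the real check."""
    covered = [q for q in sub_queries if q.lower() in content.lower()]
    coverage = len(covered) / len(sub_queries) if sub_queries else 1.0
    return {
        "coverage": coverage,
        "passed": coverage >= threshold and quality_ok,
        "revise": coverage < threshold or not quality_ok,  # at most one revision
    }
```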
## Post-Pipeline Processing
| Process | What It Does | Reference |
|---|---|---|
| Schema Injection | Classify content type, generate JSON-LD. New content: embed schema in content artifact. Existing content: separate schema artifact with implementation tracking. | |
| Metadata Optimization | Meta description, slug, OG tags | |
| llms.txt Generation | Full autonomous pipeline: scrape → LLM → validate → repair → template fallback | |
| Content Restructuring | Fix headings, split paragraphs | — |
| Internal Linking | Link related pages for | |
| Enrichment | Additional search for missing stats/citations | |
| FAQ Generation | Generate FAQ from gap analysis | |
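Schema Injection's JSON-LD output might look like the following minimal Article example; the property choices beyond `@context` and `@type` are illustrative, not the skill's actual templates:

```python
import json

def build_article_jsonld(title: str, description: str, author: str,
                         date_published: str) -> str:
    """Emit a minimal Article JSON-LD block, ready to embed in a
    <script type="application/ld+json"> tag."""
    data = {
        "@context": "https://schema.org",
        "@type": "Article",
        "headline": title,
        "description": description,
        "author": {"@type": "Person", "name": author},
        "datePublished": date_published,
    }
    return json.dumps(data, indent=2)
```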
## Issue Type → Fix Routing
| Issue Category | Fix Approach |
|---|---|
| | Schema Injection — generate JSON-LD |
| | Metadata Optimization — generate tags |
| | Quality Rewrite — Step 5 pipeline |
| | Quality Rewrite — Step 5 pipeline (Claude-driven rewrite to answer-first structure) |
| | Content Restructuring |
| | Policy file generation |
| | Full 6-step pipeline for new content. When |
| | Enrich existing content via pipeline |
## Build Output
Artifacts carry a `type` field: "content", "schema", "metadata", or "policy_file". Each includes placement instructions.
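A hypothetical artifact instance is shown below; only the four `type` values come from the contract above, and every other field name is an assumption for illustration:

```python
# Hypothetical Build Output artifact; only the `type` values are specified above.
artifact = {
    "type": "schema",  # one of: "content", "schema", "metadata", "policy_file"
    "target_url": "https://example.com/pricing",
    "payload": {"@context": "https://schema.org", "@type": "FAQPage"},
    "placement": "embed in <head> as application/ld+json",
}

assert artifact["type"] in {"content", "schema", "metadata", "policy_file"}
```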
## Reference Files
| File | Purpose |
|---|---|
| `references/content-lab-pipeline.md` | Full 6-step pipeline with I/O formats |
| `references/gap-taxonomy.md` | Gap types, severity, search query rules |
| `references/enrichment-sources.md` | Citation format, source preferences |
| | JSON-LD templates, skip conditions |
| | SEO metadata rules |
| | llms.txt spec and generation |
| | FAQ generation rules |