nocobase-workflow-manage

Goal

Orchestrate NocoBase workflows end-to-end through NocoBase MCP tools: design trigger logic, build node chains, manage versions, and inspect execution results.

Dependency Gate

  • Related helper skills: `nocobase-mcp-setup`, `nocobase-data-modeling`.
  • Check whether NocoBase MCP tools are available before planning write operations.
  • If MCP is not configured, guide the user to use `nocobase-mcp-setup`.
  • If MCP tools return authentication errors such as `Auth required`, stop and ask the user to complete MCP authentication or refresh the MCP connection before continuing.
  • The data modeling skill may be used to understand related collections and fields when configuring workflow triggers and nodes.

Mandatory MCP Gate

Confirm the NocoBase MCP server is reachable and authenticated before attempting workflow operations. Do not proceed with any workflow mutation until the MCP server exposes the relevant workflow endpoints.

Hard Rules

  1. Never create a workflow with `enabled: true` — always create with `enabled: false`, complete all trigger and node configuration, then enable.
  2. Never edit a frozen version directly — if `versionStats.executed > 0`, create a new revision first via `workflows:revision`. The `filter` parameter must include `{"key":"<key>"}` (the workflow's `key`) to ensure the new version belongs to the same workflow; omitting `key` will create an independent copy instead. Use the returned new `id` for all subsequent operations; discard the old `id`.
  3. Never use an empty `filter` — node update and destroy operations require a `filter` with at least one condition. Confirm the filter is non-empty before calling the API.
  4. Always chain nodes via `upstreamId` — every node (except the first) must reference its upstream node. Do not skip or leave `upstreamId` unset.
  5. Never create nodes concurrently — node creation calls must be executed one at a time, sequentially. Wait for the previous node to be fully created before creating the next one, because the server adjusts internal link relationships during each creation. Batch/parallel node creation is not supported.
  6. Always wrap filter in `$and` or `$or` — the root of any filter object must be a condition group. See Common Conventions - filter.
  7. Always reference node results by `key`, not `id` — use `{{$jobsMapByNodeKey.<nodeKey>.<path>}}`, where `nodeKey` is the node's `key` property (a short random string). Never use the numeric `id`, and never invent a key — always read the actual `key` from the node record after creating it. See Common Conventions - Variable Expressions.
  8. Always verify after mutation — after creating, updating, or deleting a workflow or node, read back the result to confirm the change took effect.
  9. Do not auto-enable without user confirmation — always ask the user before setting `enabled: true`.
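Several of the hard rules above can be enforced as pre-flight checks on request payloads before any MCP call is made. A minimal Python sketch — the function names and sample payloads are illustrative, not part of the NocoBase API:

```python
# Pre-flight checks mirroring Hard Rules 1, 3, 6, and 7 (illustrative helpers).

def check_create_payload(values: dict) -> None:
    # Rule 1: workflows must always be created disabled.
    assert values.get("enabled") is False, "create with enabled: false"

def check_filter(filter_obj: dict) -> None:
    # Rule 3: filter must be non-empty; Rule 6: its root must be a $and/$or group.
    assert filter_obj, "filter must not be empty"
    assert set(filter_obj) <= {"$and", "$or"}, "wrap conditions in $and/$or"
    assert any(filter_obj.values()), "condition group must not be empty"

def job_result_expr(node_key: str, path: str) -> str:
    # Rule 7: reference node output by the node's key, never its numeric id.
    assert not node_key.isdigit(), "use the node's key, not its numeric id"
    return f"{{{{$jobsMapByNodeKey.{node_key}.{path}}}}}"

check_create_payload({"type": "collection", "title": "Demo", "sync": False, "enabled": False})
check_filter({"$and": [{"status": "pending"}]})
print(job_result_expr("a1b2c3d", "data.id"))
# → {{$jobsMapByNodeKey.a1b2c3d.data.id}}
```

Running these checks before each mutation turns a silent misconfiguration (an empty filter, a numeric id used as a node key) into an immediate, local failure.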

Orchestration Process

Planning Phase

Before making any MCP calls, clarify with the user:
  1. Trigger type — what event starts the workflow? → see Trigger Reference
  2. Node chain — what processing steps are needed? → see Node Reference
  3. Execution mode — synchronous or async? See sync vs async
  4. Key parameters — collection names, filter conditions, field mappings, variable expressions
Summarize the complete plan in natural language and confirm with the user before making any MCP calls.
Then map the requested action to the corresponding MCP-exposed endpoint:
  • Workflow CRUD and revisions → `workflows:*`
  • Node operations → `workflows/<workflowId>/nodes:create` and `flow_nodes:*`
  • Execution inspection → `executions:*`
  • Job detail inspection → `jobs:get`
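The mapping above can be kept as a small routing table so every planned action is checked against a known endpoint family before any call is issued. A sketch in Python — the action labels on the left are illustrative; the endpoint patterns are the ones listed above:

```python
# Route planned actions to the MCP-exposed endpoint families listed above.
ENDPOINTS = {
    "workflow-crud": "workflows:*",
    "node-create": "workflows/<workflowId>/nodes:create",
    "node-ops": "flow_nodes:*",
    "execution-inspect": "executions:*",
    "job-detail": "jobs:get",
}

def endpoint_for(action: str) -> str:
    """Fail fast if a planned action has no known endpoint mapping."""
    if action not in ENDPOINTS:
        raise ValueError(f"no MCP endpoint mapped for action: {action}")
    return ENDPOINTS[action]

print(endpoint_for("node-create"))  # → workflows/<workflowId>/nodes:create
```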

Creating a New Workflow

  1. Create workflow → `POST /api/workflows:create` with `type`, `title`, `sync`, `enabled: false`
  2. Configure trigger → `POST /api/workflows:update?filterByTk=<id>` with `config`
  3. Add nodes in order → `POST /api/workflows/<workflowId>/nodes:create` for each node, chaining via `upstreamId`
  4. Configure each node → `POST /api/flow_nodes:update?filterByTk=<nodeId>` with `config`
  5. Verify → read back the workflow with nodes to confirm trigger config, node count, order, and each node's config are correct
  6. Enable workflow → confirm with the user, then `POST /api/workflows:update?filterByTk=<id>` with `enabled: true`
  7. Test / verify → `POST /api/workflows:execute?filterByTk=<id>&autoRevision=1`
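Step 3 above (sequential node creation chained via `upstreamId`) can be sketched as a helper that emits the create calls in order. The tuples stand in for real MCP invocations; in a real run the `upstreamId` for each call must come from the previous create response, which this sketch only simulates with placeholders:

```python
def plan_node_creates(workflow_id: int, node_specs: list) -> list:
    """Emit sequential node:create calls, chaining via upstreamId (Hard Rules 4 & 5)."""
    calls = []
    upstream_id = None  # the first node has no upstream
    for i, spec in enumerate(node_specs):
        values = {**spec, "upstreamId": upstream_id}
        calls.append((f"POST /api/workflows/{workflow_id}/nodes:create", values))
        # In a real run, read the created node's id from the response before
        # issuing the next call; here a placeholder simulates it.
        upstream_id = f"<id-of-node-{i}>"
    return calls

for url, values in plan_node_creates(7, [
    {"type": "query", "title": "Load order"},
    {"type": "calculation", "title": "Compute total"},
]):
    print(url, values["upstreamId"])
```

Because the server rewires link relationships during each creation, the loop must stay strictly sequential: never fire the next create until the previous one has returned.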

Editing an Existing Workflow

  1. Fetch workflow with nodes and version stats → `GET /api/workflows:get?filterByTk=<id>&appends[]=nodes&appends[]=versionStats`
  2. Check whether the version is frozen (`versionStats.executed > 0`)
    • Yes → create a new revision first: `POST /api/workflows:revision?filterByTk=<id>&filter={"key":"<key>"}`. The `key` is the workflow's `key` field (obtained from the workflow record in step 1). It must be provided to create a revision of the same workflow; omitting `key` creates an independent copy instead. Use the returned new `id` for all subsequent operations and discard the old `id`.
    • No → proceed directly
  3. Edit as needed:
    • Update trigger config → `POST /api/workflows:update?filterByTk=<id>` with `config`
    • Add node → `POST /api/workflows/<workflowId>/nodes:create`
    • Update node config → `POST /api/flow_nodes:update?filterByTk=<nodeId>`
    • Delete node → `POST /api/flow_nodes:destroy?filterByTk=<nodeId>`
    • Move node → `POST /api/flow_nodes:move?filterByTk=<nodeId>` with body `{ "values": { "upstreamId": <targetId>, "branchIndex": null } }` (`upstreamId: null` moves the node to the front; `branchIndex` specifies a branch, `null` for the main chain)
    • Copy node → `POST /api/flow_nodes:duplicate?filterByTk=<nodeId>` with body `{ "values": { "upstreamId": <targetId>, "branchIndex": null } }`
  4. Verify → read back the modified nodes to confirm the changes took effect
  5. Enable (if needed) → confirm with the user, then `POST /api/workflows:update?filterByTk=<id>` with `enabled: true`
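The frozen-version decision in step 2 can be sketched as a small gate that inspects the fetched record and, when needed, builds the revision call with the mandatory `key` filter. The record shape is assumed from the fields named above (`id`, `key`, `versionStats.executed`), not from a documented schema:

```python
def next_edit_target(workflow: dict) -> dict:
    """Decide whether edits may go to this version or a new revision is needed first.

    `workflow` is assumed to be the record fetched with appends[]=versionStats.
    """
    executed = workflow.get("versionStats", {}).get("executed", 0)
    if executed > 0:
        # Frozen: revise first, keeping the same workflow `key` so the new
        # version belongs to the same workflow (Hard Rule 2).
        return {
            "action": "revise",
            "call": f'POST /api/workflows:revision?filterByTk={workflow["id"]}'
                    f'&filter={{"key":"{workflow["key"]}"}}',
        }
    return {"action": "edit-in-place", "call": None}

frozen = {"id": 7, "key": "ab12cd", "versionStats": {"executed": 3}}
print(next_edit_target(frozen)["action"])  # → revise
```

After a `revise`, remember to carry the returned new `id` forward; reusing the old one silently edits the frozen version.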

Diagnosing a Failed Execution

  1. List executions to find the failed one: `GET /api/executions:list?filter[workflowId]=<id>&sort=-id`
  2. Get execution detail with jobs (exclude result to reduce size): `GET /api/executions:get?filterByTk=<execId>&appends[]=jobs&appends[]=workflow.nodes&except[]=jobs.result`
  3. Find the failed job — look for `job.status` values of `-1` (FAILED), `-2` (ERROR), or `-3` (ABORTED)
  4. Get full job detail to see the error: `GET /api/jobs:get?filterByTk=<jobId>`. Inspect `result` for the error message or output that caused the failure.
  5. Fix the issue (update the node config, or create a new revision if the version is frozen), then re-execute.
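Step 3 reduces to scanning the jobs list for the negative status codes named above. A short Python sketch — the sample job records are illustrative, only the status codes come from the list in step 3:

```python
# Map the failure status codes from step 3 to their categories.
FAILURE_STATUSES = {-1: "FAILED", -2: "ERROR", -3: "ABORTED"}

def failed_jobs(jobs: list) -> list:
    """Return (jobId, category) pairs for jobs that did not succeed."""
    return [
        (job["id"], FAILURE_STATUSES[job["status"]])
        for job in jobs
        if job.get("status") in FAILURE_STATUSES
    ]

jobs = [
    {"id": 101, "status": 1},   # illustrative: a resolved job
    {"id": 102, "status": -2},  # illustrative: an errored job
]
print(failed_jobs(jobs))  # → [(102, 'ERROR')]
```

Each returned `jobId` is then fed to `GET /api/jobs:get?filterByTk=<jobId>` in step 4 to read the full error.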

Error Handling

  • MCP returns 400/422: Read the error message carefully. Common causes: an invalid node `type`, missing required config fields, or referencing a non-existent `upstreamId`. Fix the parameter and retry.
  • MCP returns 401/403: Stop all operations. Ask the user to re-authenticate or refresh the MCP connection.
  • Node creation fails: Do not continue adding downstream nodes. Fix or remove the failed node first, then resume.
  • Revision creation fails: The original workflow may be in an inconsistent state. Re-fetch the workflow to verify its current state before retrying.

Verification Checklist

After completing any workflow operation, verify:
  1. The workflow exists and has the correct `type`, `title`, and `sync` mode
  2. The trigger `config` matches the planned configuration
  3. Node count and order match the plan (check the `upstreamId` chain)
  4. Each node's `type` and `config` are correct
  5. Filter conditions are non-empty where required (node update and destroy)
  6. The `enabled` status matches the intended state
  7. For edits on frozen versions: the new revision `id` is being used, not the old one
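The first four checklist items can be partially automated by diffing the plan against the workflow read back after the mutation. A hedged Python sketch — the `plan` and `fetched` record shapes are assumptions based on the fields above, not a documented schema:

```python
def verify_workflow(plan: dict, fetched: dict) -> list:
    """Return human-readable mismatches between the plan and the fetched workflow.

    `fetched` is assumed to be the record from workflows:get with appends[]=nodes.
    """
    problems = []
    for field in ("type", "title", "sync", "enabled"):
        if fetched.get(field) != plan.get(field):
            problems.append(
                f"{field}: expected {plan.get(field)!r}, got {fetched.get(field)!r}"
            )
    # Checklist item 3: node count and order, via the node type sequence.
    planned_nodes = [n["type"] for n in plan.get("nodes", [])]
    fetched_nodes = [n["type"] for n in fetched.get("nodes", [])]
    if planned_nodes != fetched_nodes:
        problems.append(f"node chain: expected {planned_nodes}, got {fetched_nodes}")
    return problems

plan = {"type": "collection", "title": "Demo", "sync": False, "enabled": False,
        "nodes": [{"type": "query"}, {"type": "calculation"}]}
print(verify_workflow(plan, {**plan, "enabled": True}))
# → ['enabled: expected False, got True']
```

An empty result means the read-back matches the plan; any entry names the field to re-check before enabling the workflow.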

Reference Index

  • Common conventions (collection, filter format) → references/conventions/index.md
  • Architecture, data model & concepts → references/modeling/index.md
  • Triggers → references/triggers/index.md
  • Nodes → references/nodes/index.md
  • Endpoint mapping used through MCP → references/http-api/index.md