sf-datacloud: Salesforce Data Cloud Orchestrator
Use this skill when the user needs product-level Data Cloud workflow guidance rather than a single isolated command family: pipeline setup, cross-phase troubleshooting, data spaces, data kits, or deciding whether a task belongs in Connect, Prepare, Harmonize, Segment, Act, or Retrieve.
This skill intentionally follows sf-skills house style while using the external `sf data360` command surface as the runtime. The plugin is not vendored into this repo.
When This Skill Owns the Task
Use when the work involves:
- multi-phase Data Cloud setup or remediation
- data spaces (`sf data360 data-space *`)
- data kits (`sf data360 data-kit *`)
- health checks (`sf data360 doctor`)
- CRM-to-unified-profile pipeline design
- deciding how to move from ingestion → harmonization → segmentation → activation
- cross-phase troubleshooting where the root cause is not yet clear
Delegate to a phase-specific skill when the user is focused on one area:
| Phase | Use this skill | Typical scope |
|---|---|---|
| Connect | sf-datacloud-connect | connections, connectors, source discovery |
| Prepare | sf-datacloud-prepare | data streams, DLOs, transforms, DocAI |
| Harmonize | sf-datacloud-harmonize | DMOs, mappings, identity resolution, data graphs |
| Segment | sf-datacloud-segment | segments, calculated insights |
| Act | sf-datacloud-act | activations, activation targets, data actions |
| Retrieve | sf-datacloud-retrieve | SQL, search indexes, vector search, async query |
Delegate outside the family when the user is:
- extracting Session Tracing / STDM telemetry → sf-ai-agentforce-observability
- writing CRM SOQL only → sf-soql
- loading CRM source data → sf-data
- creating missing CRM schema → sf-metadata
- implementing downstream Apex or Flow logic → sf-apex, sf-flow
Required Context to Gather First
Ask for or infer:
- target org alias
- whether the plugin is already installed and linked
- whether the user wants design guidance, read-only inspection, or live mutation
- data sources involved: CRM objects, external databases, file ingestion, knowledge, etc.
- desired outcome: unified profiles, segments, activations, vector search, analytics, or troubleshooting
- whether the user is working in the default data space or a custom one
- whether the org has already been classified with `scripts/diagnose-org.mjs`
- which command family is failing today, if any
If plugin availability or org readiness is uncertain, start with:
- references/plugin-setup.md
- references/feature-readiness.md
- scripts/verify-plugin.sh
- scripts/diagnose-org.mjs
- scripts/bootstrap-plugin.sh
Core Operating Rules
- Use the external `sf data360` plugin runtime; do not reimplement or vendor the command layer.
- Prefer the smallest phase-specific skill once the task is localized.
- Run readiness classification before mutation-heavy work. Prefer `scripts/diagnose-org.mjs` over guessing from one failing command.
- For `sf data360` commands, suppress linked-plugin warning noise with `2>/dev/null` unless the stderr output is needed for debugging.
- Distinguish Data Cloud SQL from CRM SOQL.
- Do not treat `sf data360 doctor` as a full-product readiness check; the current upstream command only checks the search-index surface.
- Do not treat `query describe` as a universal tenant probe; only use it with a known DMO/DLO table after broader readiness is confirmed.
- Preserve Data Cloud-specific API-version workarounds when they matter.
- Prefer generic, reusable JSON definition files over org-specific workshop payloads.
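The stderr-suppression rule can be seen in isolation with a stub standing in for the real CLI (the warning text below is illustrative, not the plugin's exact message):

```shell
#!/usr/bin/env sh
# Stand-in for an `sf data360` command: data on stdout, warning noise on stderr.
fake_sf() {
  echo "Warning: data360 is a linked plugin" >&2
  echo "MyDMO__dlm"
}

# Without redirection both lines reach the terminal; with `2>/dev/null`
# only the stdout data line survives, which is what downstream parsing wants.
fake_sf 2>/dev/null
```

Keep the redirection off while debugging, since the suppressed stderr is exactly where plugin and auth errors surface.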
Recommended Workflow
1. Verify the runtime and auth
Confirm:
- `sf` is installed
- the community Data Cloud plugin is linked
- the target org is authenticated
Recommended checks:
```bash
sf data360 man
sf org display -o <alias>
bash ~/.claude/skills/sf-datacloud/scripts/verify-plugin.sh <alias>
```
Treat `sf data360 doctor` as a broad health signal, not the sole gate. On partially provisioned orgs it can fail even when read-only command families like connectors, DMOs, or segments still work.
2. Classify readiness before changing anything
Run the shared classifier first:
```bash
node ~/.claude/skills/sf-datacloud/scripts/diagnose-org.mjs -o <org> --json
```
Only use a query-plane probe after you know the table name is real:
```bash
node ~/.claude/skills/sf-datacloud/scripts/diagnose-org.mjs -o <org> --phase retrieve --describe-table MyDMO__dlm --json
```
Use the classifier to distinguish:
- empty-but-enabled modules
- feature-gated modules
- query-plane issues
- runtime/auth failures
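As a sketch of how that classification drives the next move, using the readiness labels from this skill's Output Format section (ready / ready_empty / partial / feature_gated / blocked); the action mapping itself is illustrative, not prescribed by the script:

```shell
#!/usr/bin/env sh
# Map a readiness classification to a next action. Labels come from this
# skill's Output Format section; the suggested actions are illustrative.
next_step() {
  case "$1" in
    ready)         echo "proceed with phase-specific work" ;;
    ready_empty)   echo "module enabled but empty: seed data first" ;;
    partial)       echo "run targeted per-phase smoke checks" ;;
    feature_gated) echo "stop: feature must be enabled in Setup" ;;
    blocked)       echo "fix runtime/auth before anything else" ;;
    *)             echo "unknown classification: $1" >&2; return 1 ;;
  esac
}

next_step feature_gated   # prints "stop: feature must be enabled in Setup"
```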
3. Discover existing state with read-only commands
Use targeted inspection after classification:
```bash
sf data360 doctor -o <org> 2>/dev/null
sf data360 data-space list -o <org> 2>/dev/null
sf data360 data-stream list -o <org> 2>/dev/null
sf data360 dmo list -o <org> 2>/dev/null
sf data360 identity-resolution list -o <org> 2>/dev/null
sf data360 segment list -o <org> 2>/dev/null
sf data360 activation platforms -o <org> 2>/dev/null
```
4. Localize the phase
Route the task:
- source/connector issue → Connect
- ingestion/DLO/stream issue → Prepare
- mapping/IR/unified profile issue → Harmonize
- audience or insight issue → Segment
- downstream push issue → Act
- SQL/search/index issue → Retrieve
5. Choose deterministic artifacts when possible
Prefer JSON definition files and repeatable scripts over one-off manual steps. Generic templates live in:
- assets/definitions/data-stream.template.json
- assets/definitions/dmo.template.json
- assets/definitions/mapping.template.json
- assets/definitions/relationship.template.json
- assets/definitions/identity-resolution.template.json
- assets/definitions/data-graph.template.json
- assets/definitions/calculated-insight.template.json
- assets/definitions/segment.template.json
- assets/definitions/activation-target.template.json
- assets/definitions/activation.template.json
- assets/definitions/data-action-target.template.json
- assets/definitions/data-action.template.json
- assets/definitions/search-index.template.json
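A minimal sketch of turning one of these generic templates into an org-specific definition; the `{{...}}` placeholder convention and the field names are assumptions for illustration, not necessarily what the real template files use:

```shell
#!/usr/bin/env sh
# Instantiate a generic JSON definition from a template string.
# The placeholder tokens and field names here are hypothetical.
template='{"dataSpace": "{{DATA_SPACE}}", "label": "{{LABEL}}"}'

printf '%s\n' "$template" \
  | sed -e 's/{{DATA_SPACE}}/default/' \
        -e 's/{{LABEL}}/Demo Stream/'
# prints: {"dataSpace": "default", "label": "Demo Stream"}
```

Substituting into a copy of a checked-in template keeps the definition reviewable and reusable, which is the point of preferring deterministic artifacts over one-off manual steps.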
6. Verify after each phase
Typical verification:
- stream/DLO exists
- DMO/mapping exists
- identity resolution run completed
- unified records or segment counts look correct
- activation/search index status is healthy
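The per-phase checks above can be wrapped in a small helper that only asks whether a read-only command returns any rows; the demo calls use `echo` and `true` as stand-ins so the pattern is visible without an authenticated org:

```shell
#!/usr/bin/env sh
# Smoke-check helper: report OK when a command prints at least one line.
check_nonempty() {
  label=$1; shift
  if "$@" 2>/dev/null | grep -q .; then
    echo "OK   $label"
  else
    echo "MISS $label"
  fi
}

# Demo with stubs; in practice the command would be e.g.
#   check_nonempty "data streams" sf data360 data-stream list -o <org>
check_nonempty "demo rows" echo "one row"   # prints: OK   demo rows
check_nonempty "empty set" true             # prints: MISS empty set
```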
High-Signal Gotchas
- `connection list` requires `--connector-type`.
- `dmo list --all` is useful when you need the full catalog, but first-page `dmo list` is often enough for readiness checks and much faster.
- Segment creation may need `--api-version 64.0`.
- `segment members` returns opaque IDs; use SQL joins for human-readable details.
- `sf data360 doctor` can fail on partially provisioned orgs even when some read-only commands still work; fall back to targeted smoke checks.
- `query describe` errors such as `Couldn't find CDP tenant ID` or `DataModelEntity ... not found` are query-plane clues, not automatic proof that the whole product is disabled.
- Many long-running jobs are asynchronous in practice even when the command returns quickly.
- Some Data Cloud operations still require UI setup outside the CLI runtime.
Output Format
When finishing, report in this order:
- Task classification
- Runtime status
- Readiness classification
- Phase(s) involved
- Commands or artifacts used
- Verification result
- Next recommended step
Suggested shape:
```text
Data Cloud task: <setup / inspect / troubleshoot / migrate>
Runtime: <plugin ready / missing / partially verified>
Readiness: <ready / ready_empty / partial / feature_gated / blocked>
Phases: <connect / prepare / harmonize / segment / act / retrieve>
Artifacts: <json files, commands, scripts>
Verification: <passed / partial / blocked>
Next step: <next phase, setup guidance, or cross-skill handoff>
```
Cross-Skill Integration
| Need | Delegate to | Reason |
|---|---|---|
| load or clean CRM source data | sf-data | seed or fix source records before ingestion |
| create missing CRM schema | sf-metadata | Data Cloud expects existing objects/fields |
| deploy permissions or bundles | sf-deploy | environment preparation |
| write Apex against Data Cloud outputs | sf-apex | code implementation |
| Flow automation after segmentation/activation | sf-flow | declarative orchestration |
| session tracing / STDM / parquet analysis | sf-ai-agentforce-observability | different Data Cloud use case |
Reference Map
Start here
- README.md
- references/plugin-setup.md
- references/feature-readiness.md
- UPSTREAM.md
Phase skills
- sf-datacloud-connect
- sf-datacloud-prepare
- sf-datacloud-harmonize
- sf-datacloud-segment
- sf-datacloud-act
- sf-datacloud-retrieve
Deterministic helpers
- scripts/bootstrap-plugin.sh
- scripts/verify-plugin.sh
- scripts/diagnose-org.mjs
- assets/definitions/