sf-datacloud-prepare
Salesforce Data Cloud Prepare phase. TRIGGER when: user creates or manages Data Cloud data streams, DLOs, transforms, or Document AI configurations, or asks about ingestion into Data Cloud. DO NOT TRIGGER when: the task is connection setup only (use sf-datacloud-connect), DMOs and identity resolution (use sf-datacloud-harmonize), or query/search work (use sf-datacloud-retrieve).
Source: `jaganpro/sf-skills`

Install:

```bash
npx skill4agent add jaganpro/sf-skills sf-datacloud-prepare
```
# sf-datacloud-prepare: Data Cloud Prepare Phase
Use this skill when the user needs ingestion and lake preparation work: data streams, Data Lake Objects, transforms, or DocAI-based extraction.
## When This Skill Owns the Task
Use when the work involves:
- `sf data360 data-stream *`
- `sf data360 dlo *`
- `sf data360 transform *`
- `sf data360 docai *`
- choosing how data should enter Data Cloud
Delegate elsewhere when the user is:
- still creating/testing source connections → sf-datacloud-connect
- mapping to DMOs or designing IR/data graphs → sf-datacloud-harmonize
- querying ingested data → sf-datacloud-retrieve
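The ownership and delegation rules above can be sketched as a small dispatcher. This is a hedged illustration only: the keyword patterns are assumptions for the sketch, not an official taxonomy from the skill.

```shell
#!/usr/bin/env bash
# Hypothetical dispatcher mirroring the routing rules above.
# The keyword patterns are illustrative assumptions.
route_task() {
  case "$1" in
    connect*|connection*)     echo "sf-datacloud-connect" ;;   # source connection setup
    dmo*|identity*|graph*)    echo "sf-datacloud-harmonize" ;; # DMO mapping / IR / data graphs
    query*|search*|retrieve*) echo "sf-datacloud-retrieve" ;;  # querying ingested data
    *)                        echo "sf-datacloud-prepare" ;;   # streams, DLOs, transforms, DocAI
  esac
}
```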
## Required Context to Gather First
Ask for or infer:
- target org alias
- source connection name
- source object / dataset
- desired stream type
- DLO naming expectations
- whether the user is creating, updating, running, or deleting a stream
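A minimal pre-flight sketch for the first item, assuming the standard `sf config get target-org --json` output shape; the grep/sed JSON extraction is rough illustrative glue, not part of the skill:

```shell
#!/usr/bin/env bash
# resolve_org: use an explicit alias if given, else fall back to the
# sf CLI's default target-org. JSON parsing here is a crude sketch.
resolve_org() {
  local org="${1:-}"
  if [ -n "$org" ]; then
    printf '%s\n' "$org"
    return 0
  fi
  if command -v sf >/dev/null 2>&1; then
    sf config get target-org --json 2>/dev/null \
      | grep -o '"value": *"[^"]*"' | head -n 1 \
      | sed 's/.*"\(.*\)"/\1/'
    return 0
  fi
  echo "no org alias provided and sf CLI not found" >&2
  return 1
}
```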
## Core Operating Rules
- Verify the external plugin runtime before running Data Cloud commands.
- Run the shared readiness classifier before mutating ingestion assets: `node ~/.claude/skills/sf-datacloud/scripts/diagnose-org.mjs -o <org> --phase prepare --json`.
- Prefer inspecting existing streams and DLOs before creating new ingestion assets.
- Suppress linked-plugin warning noise with `2>/dev/null` for normal usage.
- Treat DLO naming and field naming as Data Cloud-specific, not CRM-native.
- Hand off to Harmonize only after ingestion assets are clearly healthy.
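The first rule can be sketched as a fail-fast check, split so the plugin-name match is testable without the CLI. `sf plugins` is a real sf command; the `data360` match string is an assumption about how the linked plugin lists itself.

```shell
#!/usr/bin/env bash
# has_data360_plugin: scan a plugin listing (stdin) for the Data Cloud plugin.
# Reading stdin keeps the check testable without an installed sf CLI.
has_data360_plugin() {
  grep -qi 'data360'
}

# verify_runtime: fail fast before any data360 command runs.
verify_runtime() {
  command -v sf >/dev/null 2>&1 \
    || { echo "sf CLI not installed" >&2; return 1; }
  sf plugins 2>/dev/null | has_data360_plugin \
    || { echo "data360 plugin not linked (see plugin setup reference)" >&2; return 1; }
}
```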
## Recommended Workflow
1. Classify readiness for prepare work
```bash
node ~/.claude/skills/sf-datacloud/scripts/diagnose-org.mjs -o <org> --phase prepare --json
```

2. Inspect existing ingestion assets
```bash
sf data360 data-stream list -o <org> 2>/dev/null
sf data360 dlo list -o <org> 2>/dev/null
```

3. Create or inspect streams intentionally
```bash
sf data360 data-stream get -o <org> --name <stream> 2>/dev/null
sf data360 data-stream create-from-object -o <org> --object Contact --connection SalesforceDotCom_Home 2>/dev/null
sf data360 data-stream create -o <org> -f stream.json 2>/dev/null
```

4. Check DLO shape
```bash
sf data360 dlo get -o <org> --name Contact_Home__dll 2>/dev/null
```

5. Only then move into harmonization
Once the stream and DLO are healthy, hand off to sf-datacloud-harmonize.
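The gate between step 1 and the handoff can be sketched as below. The `"ready": true` field is an assumption about the classifier's JSON output, not a documented contract, so adjust the match to the real shape.

```shell
#!/usr/bin/env bash
# is_ready: crude check of a classifier JSON verdict read on stdin.
# The "ready" field name is an assumed output shape.
is_ready() {
  grep -Eq '"ready"[[:space:]]*:[[:space:]]*true'
}

# Intended use (requires node and the sf-datacloud scripts):
#   node ~/.claude/skills/sf-datacloud/scripts/diagnose-org.mjs \
#     -o "$ORG" --phase prepare --json | is_ready \
#     && echo "hand off to sf-datacloud-harmonize"
```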
## High-Signal Gotchas
- CRM-backed stream behavior is not the same as fully custom connector-framework ingestion.
- Stream deletion can also delete the associated DLO unless the delete mode says otherwise.
- DLO field naming differs from CRM field naming.
- Query DLO record counts with Data Cloud SQL instead of assuming list output is sufficient.
- A `CdpDataStreams` error means the stream module is gated for the current org/user; guide the user to provisioning/permissions review instead of retrying blindly.
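The DLO naming gotcha can be made concrete with a quick suffix check: the `__dll` suffix follows the `Contact_Home__dll` example above, while `__c` is the CRM custom-object convention. A minimal sketch:

```shell
#!/usr/bin/env bash
# is_dlo_name: true when a name carries the Data Cloud DLO __dll suffix
# (as in Contact_Home__dll above), unlike CRM's __c custom suffix.
is_dlo_name() {
  case "$1" in
    *__dll) return 0 ;;
    *)      return 1 ;;
  esac
}
```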
## Output Format
```text
Prepare task: <stream / dlo / transform / docai>
Source: <connection + object>
Target org: <alias>
Artifacts: <stream names / dlo names / json definitions>
Verification: <passed / partial / blocked>
Next step: <harmonize or retrieve>
```

## References
- README.md
- ../sf-datacloud/assets/definitions/data-stream.template.json
- ../sf-datacloud/references/plugin-setup.md
- ../sf-datacloud/references/feature-readiness.md