sf-datamapper
sf-datamapper: OmniStudio Data Mapper Creation and Validation
Expert OmniStudio Data Mapper developer specializing in Extract, Transform, Load, and Turbo Extract configurations. Generate production-ready, performant, and maintainable Data Mapper definitions with proper field mappings, query optimization, and data integrity safeguards.
Core Responsibilities
- Generation: Create Data Mapper configurations (Extract, Transform, Load, Turbo Extract) from requirements
- Field Mapping: Design object-to-output field mappings with proper type handling, lookup resolution, and null safety
- Dependency Tracking: Identify related OmniStudio components (Integration Procedures, OmniScripts, FlexCards) that consume or feed Data Mappers
- Validation & Scoring: Score Data Mapper configurations against 5 categories (0-100 points)
CRITICAL: Orchestration Order
sf-omnistudio-analyze -> sf-datamapper -> sf-integration-procedure -> sf-omniscript -> sf-flexcard (you are here: sf-datamapper)
Data Mappers are the data access layer of the OmniStudio stack. They must be created and deployed before Integration Procedures or OmniScripts that reference them. Use sf-omnistudio-analyze FIRST to understand existing component dependencies.
Key Insights
| Insight | Details |
|---|---|
| Extract vs Turbo Extract | Extract uses standard SOQL with relationship queries. Turbo Extract uses server-side compiled queries for read-heavy, high-volume scenarios (10x+ faster). Turbo Extract does not support formula fields, related lists, or write operations. |
| Transform is in-memory | Transform Data Mappers operate entirely in memory with no DML or SOQL. They reshape data structures between steps in an Integration Procedure. Use for JSON-to-JSON transformations, field renaming, and data flattening. |
| Load = DML | Load Data Mappers perform insert, update, upsert, or delete operations. They require proper FLS checks and error handling. Always validate field-level security before deploying Load Data Mappers to production. |
| OmniDataTransform metadata | Data Mappers are stored as OmniDataTransform and OmniDataTransformItem records. Retrieve and deploy using these metadata type names, not the legacy DataRaptor API names. |
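As a conceptual illustration of the Transform row above, a JSON-to-JSON flattening step can be sketched in plain Python. This stands in for what the in-memory Transform engine does; it is not actual Data Mapper code, and the field names are hypothetical:

```python
# Illustrative only: mimics what a Transform Data Mapper's flatten mapping
# does conceptually -- pure in-memory reshaping, no DML and no SOQL.
def flatten(record: dict, prefix: str = "", sep: str = ":") -> dict:
    """Flatten nested dicts into delimiter-joined key paths."""
    out = {}
    for key, value in record.items():
        path = f"{prefix}{sep}{key}" if prefix else key
        if isinstance(value, dict):
            out.update(flatten(value, path, sep))
        else:
            out[path] = value
    return out

# Hypothetical nested input, e.g. a Lead with its Owner expanded:
lead = {"LastName": "Doe", "Owner": {"Name": "A. Admin", "Email": "a@example.com"}}
print(flatten(lead))
# {'LastName': 'Doe', 'Owner:Name': 'A. Admin', 'Owner:Email': 'a@example.com'}
```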
Workflow (5-Phase Pattern)
Phase 1: Requirements Gathering
Ask the user to provide:
- Data Mapper type (Extract, Transform, Load, Turbo Extract)
- Target Salesforce object(s) and fields
- Target org alias
- Consuming component (Integration Procedure, OmniScript, or FlexCard name)
- Data volume expectations (record counts, frequency)
Then:
- Check existing Data Mappers: Glob: `**/OmniDataTransform*`
- Check existing OmniStudio metadata: Glob: `**/omnistudio/**`
- Create a task list
Phase 2: Design & Type Selection
| Type | Use Case | Naming Prefix | Supports DML | Supports SOQL |
|---|---|---|---|---|
| Extract | Read data from one or more objects with relationship queries | DR_Extract_ | No | Yes |
| Turbo Extract | High-volume read-only queries, server-side compiled | DR_TurboExtract_ | No | Yes (compiled) |
| Transform | In-memory data reshaping between procedure steps | DR_Transform_ | No | No |
| Load | Write data (insert, update, upsert, delete) | DR_Load_ | Yes | No |
Naming Format: [Prefix][Object]_[Purpose] using PascalCase
Examples:
- DR_Extract_Account_Details -- Extract Account with related Contacts
- DR_TurboExtract_Case_List -- High-volume Case list for FlexCard
- DR_Transform_Lead_Flatten -- Flatten nested Lead data structure
- DR_Load_Opportunity_Create -- Insert Opportunity records
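The naming convention above can be checked mechanically. A minimal sketch, assuming the DR_ prefix and PascalCase segments shown in the examples (the regex is a hypothetical helper, not part of OmniStudio):

```python
import re

# Checks [Prefix][Object]_[Purpose]: DR_<Type>_<Object>_<Purpose>,
# with Object and Purpose as PascalCase segments.
NAME_RE = re.compile(
    r"^DR_(Extract|TurboExtract|Transform|Load)_[A-Z][A-Za-z0-9]*_[A-Z][A-Za-z0-9]*$"
)

def valid_name(name: str) -> bool:
    return NAME_RE.match(name) is not None

for name in ["DR_Extract_Account_Details", "DR_TurboExtract_Case_List",
             "DR_Transform_Lead_Flatten", "DR_Load_Opportunity_Create",
             "accountExtract"]:
    print(name, valid_name(name))
```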
Phase 3: Generation & Validation
For Generation:
- Define the OmniDataTransform record (Name, Type, Active status)
- Define OmniDataTransformItem records (field mappings, input/output paths)
- Configure query filters, sort order, and limits for Extract types
- Set up lookup mappings and default values for Load types
- Validate field-level security for all mapped fields
For Review:
- Read existing Data Mapper configuration
- Run validation against best practices
- Generate improvement report with specific fixes
Run Validation:
Score: XX/100 Rating
|- Design & Naming: XX/20
|- Field Mapping: XX/25
|- Data Integrity: XX/25
|- Performance: XX/15
|- Documentation: XX/15
Generation Guardrails (MANDATORY)
BEFORE generating ANY Data Mapper configuration, Claude MUST verify no anti-patterns are introduced.
If ANY of these patterns would be generated, STOP and ask the user:
"I noticed [pattern]. This will cause [problem]. Should I: A) Refactor to use [correct pattern] B) Proceed anyway (not recommended)"
| Anti-Pattern | Detection | Impact |
|---|---|---|
| Extracting all fields | No field list specified, wildcard selection | Performance degradation, excessive data transfer |
| Missing lookup mappings | Load references lookup field without resolution | DML failure, null foreign key |
| Writing without FLS check | Load Data Mapper with no security validation | Security violation, data corruption in restricted profiles |
| Unbounded Extract query | No LIMIT or filter on Extract | Governor limit failure, timeout on large objects |
| Transform with side effects | Transform attempting DML or callout | Runtime error, Transform is in-memory only |
| Hardcoded record IDs | 15/18-char ID literal in filter or mapping | Deployment failure across environments |
| Nested relationship depth >3 | Extract with deeply nested parent traversal | Query performance degradation, SOQL complexity limits |
| Load without error handling | No upsert key or duplicate rule consideration | Silent data corruption, duplicate records |
DO NOT generate anti-patterns even if explicitly requested. Ask user to confirm the exception with documented justification.
See: references/best-practices.md for detailed patterns
See: references/naming-conventions.md for naming rules
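Two of the detections from the anti-pattern table can be sketched as a simple lint pass. This is a hypothetical simplification: the `config` dict with `filters` and `limit` keys stands in for the real OmniDataTransform/OmniDataTransformItem schema:

```python
import re

def looks_like_record_id(value: str) -> bool:
    """Heuristic: a 15- or 18-char alphanumeric token that contains digits."""
    return bool(re.fullmatch(r"[A-Za-z0-9]{15}|[A-Za-z0-9]{18}", value)) \
        and any(c.isdigit() for c in value)

def lint_extract(config: dict) -> list[str]:
    """Flag unbounded queries and hardcoded record IDs in filter values."""
    problems = []
    if not config.get("limit") and not config.get("filters"):
        problems.append("Unbounded Extract query: add LIMIT or a filter")
    for f in config.get("filters", []):
        if looks_like_record_id(str(f.get("value", ""))):
            problems.append(f"Hardcoded record ID in filter: {f['value']}")
    return problems

bad = {"filters": [{"field": "AccountId", "value": "001000000000001AAA"}]}
print(lint_extract(bad))   # flags the hardcoded 18-char ID
print(lint_extract({}))    # flags the unbounded query
```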
Phase 4: Deployment
Step 1: Validation
Use the sf-deploy skill: "Deploy OmniDataTransform [Name] to [target-org] with --dry-run"
Step 2: Deploy (only if validation succeeds)
Use the sf-deploy skill: "Proceed with actual deployment to [target-org]"
Post-Deploy: Activate the Data Mapper in the target org. Verify it appears in OmniStudio Designer.
Phase 5: Testing & Documentation
Completion Summary:
Data Mapper Complete: [Name]
Type: [Extract|Transform|Load|Turbo Extract]
Target Object(s): [Object1, Object2]
Field Count: [N mapped fields]
Validation: PASSED (Score: XX/100)
Next Steps: Test in Integration Procedure, verify data output, monitor performance
Testing Checklist:
- Preview data output in OmniStudio Designer
- Verify field mappings produce expected JSON structure
- Test with representative data volume (not just 1 record)
- Validate FLS enforcement with restricted profile user
- Confirm consuming Integration Procedure/OmniScript receives correct data shape
Best Practices (100-Point Scoring)
| Category | Points | Key Rules |
|---|---|---|
| Design & Naming | 20 | Correct type selection; naming follows the [Prefix][Object]_[Purpose] convention |
| Field Mapping | 25 | Explicit field list (no wildcards); correct input/output paths; proper type conversions; null-safe default values |
| Data Integrity | 25 | FLS validation on all fields; lookup resolution for Load types; upsert keys defined; duplicate handling configured |
| Performance | 15 | Bounded queries with LIMIT/filters; Turbo Extract for read-heavy scenarios; minimal relationship depth; indexed filter fields |
| Documentation | 15 | Description on OmniDataTransform record; field mapping rationale documented; consuming components identified |
Thresholds: ✅ 90+ (Deploy) | ⚠️ 67-89 (Review) | ❌ <67 (Block - fix required)
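The threshold gate above can be expressed directly; the category names and maximum points mirror the scoring table (a minimal sketch, not part of the actual validator):

```python
# Map a per-category score breakdown to the deploy/review/block decision.
def gate(scores: dict) -> tuple[int, str]:
    total = sum(scores.values())
    if total >= 90:
        return total, "Deploy"
    if total >= 67:
        return total, "Review"
    return total, "Block - fix required"

scores = {"Design & Naming": 18, "Field Mapping": 22,
          "Data Integrity": 23, "Performance": 13, "Documentation": 12}
print(gate(scores))  # (88, 'Review')
```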
CLI Commands
Query Existing Data Mappers
```bash
sf data query -q "SELECT Id,Name,Type FROM OmniDataTransform" -o <org>
```
Query Data Mapper Field Mappings
```bash
sf data query -q "SELECT Id,Name,InputObjectName,OutputObjectName,LookupObjectName FROM OmniDataTransformItem WHERE OmniDataTransformationId='<id>'" -o <org>
```
Retrieve Data Mapper Metadata
```bash
sf project retrieve start -m OmniDataTransform:<Name> -o <org>
```
Deploy Data Mapper Metadata
```bash
sf project deploy start -m OmniDataTransform:<Name> -o <org>
```
Cross-Skill Integration
| From Skill | To sf-datamapper | When |
|---|---|---|
| sf-omnistudio-analyze | -> sf-datamapper | "Analyze dependencies before creating Data Mapper" |
| sf-metadata | -> sf-datamapper | "Describe target object fields before mapping" |
| sf-soql | -> sf-datamapper | "Validate Extract query logic" |
| From sf-datamapper | To Skill | When |
|---|---|---|
| sf-datamapper | -> sf-integration-procedure | "Create Integration Procedure that calls this Data Mapper" |
| sf-datamapper | -> sf-deploy | "Deploy Data Mapper to target org" |
| sf-datamapper | -> sf-omniscript | "Wire Data Mapper output into OmniScript" |
| sf-datamapper | -> sf-flexcard | "Display Data Mapper Extract results in FlexCard" |
Edge Cases
| Scenario | Solution |
|---|---|
| Large data volume (>10K records) | Use Turbo Extract; add pagination via Integration Procedure; warn about heap limits |
| Polymorphic lookup fields | Specify the concrete object type in the mapping; test each type separately |
| Formula fields in Extract | Standard Extract supports formula fields; Turbo Extract does not -- fall back to standard Extract |
| Cross-object Load (master-detail) | Insert parent records first, then child records in a separate Load step; use Integration Procedure to orchestrate sequence |
| Namespace-prefixed fields | Include the namespace prefix in field paths |
| Multi-currency orgs | Map CurrencyIsoCode explicitly; do not rely on default currency assumption |
| RecordType-dependent mappings | Filter by RecordType in Extract; set RecordTypeId in Load; document which RecordTypes are supported |
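The master-detail sequencing rule from the table above, parents load before children so that child lookups can resolve, can be sketched as a two-pass load. The record shape (`key`/`parent_key`) and the `ID-` stand-in for returned Salesforce IDs are hypothetical; in practice an Integration Procedure orchestrates the two Load steps:

```python
def run_load(records: list[dict]) -> list[dict]:
    """Insert parents first, then children, resolving each child's parent lookup."""
    ids = {}
    out = []
    for r in records:                          # pass 1: parent records
        if "parent_key" not in r:
            ids[r["key"]] = f"ID-{r['key']}"   # stand-in for the returned record ID
            out.append({**r, "id": ids[r["key"]]})
    for r in records:                          # pass 2: child records
        if "parent_key" in r:
            out.append({**r, "parent_id": ids[r["parent_key"]]})
    return out

loaded = run_load([{"key": "acct1"},
                   {"key": "opp1", "parent_key": "acct1"}])
print(loaded)
```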
Notes
- Metadata Type: OmniDataTransform (not DataRaptor -- legacy name deprecated)
- API Version: Requires OmniStudio managed package or Industries Cloud
- Scoring: Block deployment if score < 67
- Dependencies (optional): sf-deploy, sf-metadata, sf-omnistudio-analyze, sf-integration-procedure
- Turbo Extract Limitations: No formula fields, no related lists, no aggregate queries, no polymorphic fields
- Activation: Data Mappers must be activated after deployment to be callable from Integration Procedures
- Draft DMs can't be retrieved: `sf project retrieve start -m OmniDataTransform:<Name>` only works for active Data Mappers. Draft DMs return "Entity cannot be found".
- Creating via Data API: Use `sf api request rest --method POST --body @file.json` to create OmniDataTransform and OmniDataTransformItem records. The `sf data create record --values` flag cannot handle JSON in textarea fields; write the JSON body to a temp file first.
- Foreign key field name: The parent lookup on `OmniDataTransformItem` is `OmniDataTransformationId` (full word "Transformation"), not `OmniDataTransformId`.
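The temp-file workaround for the Data API note can be sketched as follows. The record fields ("Name", "Type") and the REST endpoint path in the comment are illustrative assumptions; consult the OmniDataTransform object schema for the full required field set:

```python
import json
import tempfile

# Hypothetical minimal body; real OmniDataTransform records need more fields.
body = {"Name": "DR_Extract_Account_Details", "Type": "Extract"}

# Write the JSON body to a temp file so it can be passed via --body @<path>,
# avoiding the --values flag's inability to handle JSON in textarea fields.
with tempfile.NamedTemporaryFile("w", suffix=".json", delete=False) as fh:
    json.dump(body, fh)
    path = fh.name

# Then (not executed here; endpoint path is an assumption):
#   sf api request rest --method POST --body @<path> \
#     /services/data/v61.0/sobjects/OmniDataTransform
print(path)
```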
License
MIT License.