sf-datamapper

sf-datamapper: OmniStudio Data Mapper Creation and Validation

Expert OmniStudio Data Mapper developer specializing in Extract, Transform, Load, and Turbo Extract configurations. Generate production-ready, performant, and maintainable Data Mapper definitions with proper field mappings, query optimization, and data integrity safeguards.

Core Responsibilities

  1. Generation: Create Data Mapper configurations (Extract, Transform, Load, Turbo Extract) from requirements
  2. Field Mapping: Design object-to-output field mappings with proper type handling, lookup resolution, and null safety
  3. Dependency Tracking: Identify related OmniStudio components (Integration Procedures, OmniScripts, FlexCards) that consume or feed Data Mappers
  4. Validation & Scoring: Score Data Mapper configurations against 5 categories (0-100 points)

CRITICAL: Orchestration Order

sf-omnistudio-analyze -> sf-datamapper -> sf-integration-procedure -> sf-omniscript -> sf-flexcard (you are here: sf-datamapper)
Data Mappers are the data access layer of the OmniStudio stack. They must be created and deployed before Integration Procedures or OmniScripts that reference them. Use sf-omnistudio-analyze FIRST to understand existing component dependencies.

Key Insights

  • Extract vs Turbo Extract: Extract uses standard SOQL with relationship queries. Turbo Extract uses server-side compiled queries for read-heavy, high-volume scenarios (10x+ faster), but does not support formula fields, related lists, or write operations.
  • Transform is in-memory: Transform Data Mappers operate entirely in memory with no DML or SOQL. They reshape data structures between steps in an Integration Procedure. Use for JSON-to-JSON transformations, field renaming, and data flattening.
  • Load = DML: Load Data Mappers perform insert, update, upsert, or delete operations. They require proper FLS checks and error handling. Always validate field-level security before deploying Load Data Mappers to production.
  • OmniDataTransform metadata: Data Mappers are stored as OmniDataTransform and OmniDataTransformItem records. Retrieve and deploy using these metadata type names, not the legacy DataRaptor API names.
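
To make the storage model concrete, here is a hedged sketch of what the underlying records for a simple Extract might look like. Field names are limited to those that appear in the SOQL queries later in this document (Name, Type, InputObjectName, OutputObjectName, OmniDataTransformationId); the real schema has many more fields and varies by package version.

```json
{
  "OmniDataTransform": {
    "Name": "DR_Extract_Account_Details",
    "Type": "Extract"
  },
  "OmniDataTransformItem": [
    {
      "Name": "AccountName",
      "InputObjectName": "Account",
      "OutputObjectName": "AccountDetails",
      "OmniDataTransformationId": "<parent OmniDataTransform Id>"
    }
  ]
}
```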

Workflow (5-Phase Pattern)

Phase 1: Requirements Gathering

Gather the following from the user:
  • Data Mapper type (Extract, Transform, Load, Turbo Extract)
  • Target Salesforce object(s) and fields
  • Target org alias
  • Consuming component (Integration Procedure, OmniScript, or FlexCard name)
  • Data volume expectations (record counts, frequency)
Then:
  1. Check existing Data Mappers:
    Glob: **/OmniDataTransform*
  2. Check existing OmniStudio metadata:
    Glob: **/omnistudio/**
  3. Create a task list


Phase 2: Design & Type Selection

  • Extract: Read data from one or more objects with relationship queries. Prefix: DR_Extract_ | DML: No | SOQL: Yes
  • Turbo Extract: High-volume read-only queries, server-side compiled. Prefix: DR_TurboExtract_ | DML: No | SOQL: Yes (compiled)
  • Transform: In-memory data reshaping between procedure steps. Prefix: DR_Transform_ | DML: No | SOQL: No
  • Load: Write data (insert, update, upsert, delete). Prefix: DR_Load_ | DML: Yes | SOQL: No
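The selection logic in the table above reduces to a small decision function. This is an illustrative sketch -- the function name and parameters are not part of any OmniStudio API:

```python
def choose_datamapper_type(writes_data: bool, touches_database: bool,
                           high_volume_read: bool) -> str:
    """Pick a Data Mapper type per the table above (illustrative helper)."""
    if writes_data:
        return "Load"           # only Load performs DML
    if not touches_database:
        return "Transform"      # in-memory reshaping, no SOQL or DML
    if high_volume_read:
        return "Turbo Extract"  # server-side compiled, read-only
    return "Extract"            # standard SOQL with relationship queries
```
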
Naming Format: [Prefix][Object]_[Purpose] in PascalCase
Examples:
  • DR_Extract_Account_Details
    -- Extract Account with related Contacts
  • DR_TurboExtract_Case_List
    -- High-volume Case list for FlexCard
  • DR_Transform_Lead_Flatten
    -- Flatten nested Lead data structure
  • DR_Load_Opportunity_Create
    -- Insert Opportunity records

Phase 3: Generation & Validation

For Generation:
  1. Define the OmniDataTransform record (Name, Type, Active status)
  2. Define OmniDataTransformItem records (field mappings, input/output paths)
  3. Configure query filters, sort order, and limits for Extract types
  4. Set up lookup mappings and default values for Load types
  5. Validate field-level security for all mapped fields
For Review:
  1. Read existing Data Mapper configuration
  2. Run validation against best practices
  3. Generate improvement report with specific fixes
Run Validation:
Score: XX/100 Rating
|- Design & Naming: XX/20
|- Field Mapping: XX/25
|- Data Integrity: XX/25
|- Performance: XX/15
|- Documentation: XX/15

Generation Guardrails (MANDATORY)

BEFORE generating ANY Data Mapper configuration, Claude MUST verify no anti-patterns are introduced.
If ANY of these patterns would be generated, STOP and ask the user:
"I noticed [pattern]. This will cause [problem]. Should I: A) Refactor to use [correct pattern] B) Proceed anyway (not recommended)"
  • Extracting all fields -- Detection: no field list specified, wildcard selection. Impact: performance degradation, excessive data transfer.
  • Missing lookup mappings -- Detection: Load references a lookup field without resolution. Impact: DML failure, null foreign key.
  • Writing without FLS check -- Detection: Load Data Mapper with no security validation. Impact: security violation, data corruption in restricted profiles.
  • Unbounded Extract query -- Detection: no LIMIT or filter on Extract. Impact: governor limit failure, timeout on large objects.
  • Transform with side effects -- Detection: Transform attempting DML or a callout. Impact: runtime error; Transform is in-memory only.
  • Hardcoded record IDs -- Detection: 15/18-char ID literal in a filter or mapping. Impact: deployment failure across environments.
  • Nested relationship depth >3 -- Detection: Extract with deeply nested parent traversal. Impact: query performance degradation, SOQL complexity limits.
  • Load without error handling -- Detection: no upsert key or duplicate rule consideration. Impact: silent data corruption, duplicate records.
DO NOT generate anti-patterns even if explicitly requested. Ask user to confirm the exception with documented justification.
See: references/best-practices.md for detailed patterns
See: references/naming-conventions.md for naming rules
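
As one example of mechanical detection, the "Hardcoded record IDs" pattern can be caught with a simple check. This is an illustrative sketch, not part of the skill: the regex will also flag other quoted 15/18-character alphanumeric strings, so treat hits as warnings rather than hard failures.

```python
import re

# A quoted 15- or 18-character alphanumeric literal -- the shape of a
# hardcoded Salesforce record Id inside a filter or mapping expression.
ID_LITERAL = re.compile(r"['\"][A-Za-z0-9]{15}(?:[A-Za-z0-9]{3})?['\"]")

def has_hardcoded_id(expression: str) -> bool:
    """Warn if a filter/mapping expression embeds a record-Id-shaped literal."""
    return bool(ID_LITERAL.search(expression))
```
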

Phase 4: Deployment

阶段4:部署

Step 1: Validation -- use the sf-deploy skill: "Deploy OmniDataTransform [Name] to [target-org] with --dry-run"
Step 2: Deploy (only if validation succeeds) -- use the sf-deploy skill: "Proceed with actual deployment to [target-org]"
Post-Deploy: Activate the Data Mapper in the target org. Verify it appears in OmniStudio Designer.

Phase 5: Testing & Documentation

Completion Summary:
Data Mapper Complete: [Name]
  Type: [Extract|Transform|Load|Turbo Extract]
  Target Object(s): [Object1, Object2]
  Field Count: [N mapped fields]
  Validation: PASSED (Score: XX/100)

Next Steps: Test in Integration Procedure, verify data output, monitor performance
Testing Checklist:
  • Preview data output in OmniStudio Designer
  • Verify field mappings produce expected JSON structure
  • Test with representative data volume (not just 1 record)
  • Validate FLS enforcement with restricted profile user
  • Confirm consuming Integration Procedure/OmniScript receives correct data shape

Best Practices (100-Point Scoring)

  • Design & Naming (20): Correct type selection; naming follows the DR_[Type]_[Object]_[Purpose] convention; single responsibility per Data Mapper
  • Field Mapping (25): Explicit field list (no wildcards); correct input/output paths; proper type conversions; null-safe default values
  • Data Integrity (25): FLS validation on all fields; lookup resolution for Load types; upsert keys defined; duplicate handling configured
  • Performance (15): Bounded queries with LIMIT/filters; Turbo Extract for read-heavy scenarios; minimal relationship depth; indexed filter fields
  • Documentation (15): Description on the OmniDataTransform record; field mapping rationale documented; consuming components identified
Thresholds: ✅ 90+ (Deploy) | ⚠️ 67-89 (Review) | ❌ <67 (Block - fix required)
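
The thresholds above can be sketched as a small gate function (illustrative code, not part of the skill):

```python
def deployment_gate(score: int) -> str:
    """Map a 0-100 validation score to the deploy/review/block thresholds."""
    if score >= 90:
        return "Deploy"   # ready to ship
    if score >= 67:
        return "Review"   # needs human review first
    return "Block"        # fix required before deployment
```
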

CLI Commands

Query Existing Data Mappers

```bash
sf data query -q "SELECT Id,Name,Type FROM OmniDataTransform" -o <org>
```

Query Data Mapper Field Mappings

```bash
sf data query -q "SELECT Id,Name,InputObjectName,OutputObjectName,LookupObjectName FROM OmniDataTransformItem WHERE OmniDataTransformationId='<id>'" -o <org>
```

Retrieve Data Mapper Metadata

```bash
sf project retrieve start -m OmniDataTransform:<Name> -o <org>
```

Deploy Data Mapper Metadata

```bash
sf project deploy start -m OmniDataTransform:<Name> -o <org>
```

Cross-Skill Integration

Inbound (other skill -> sf-datamapper):
  • sf-omnistudio-analyze -> sf-datamapper: "Analyze dependencies before creating Data Mapper"
  • sf-metadata -> sf-datamapper: "Describe target object fields before mapping"
  • sf-soql -> sf-datamapper: "Validate Extract query logic"
Outbound (sf-datamapper -> other skill):
  • sf-datamapper -> sf-integration-procedure: "Create Integration Procedure that calls this Data Mapper"
  • sf-datamapper -> sf-deploy: "Deploy Data Mapper to target org"
  • sf-datamapper -> sf-omniscript: "Wire Data Mapper output into OmniScript"
  • sf-datamapper -> sf-flexcard: "Display Data Mapper Extract results in FlexCard"

Edge Cases

  • Large data volume (>10K records): Use Turbo Extract; add pagination via Integration Procedure; warn about heap limits
  • Polymorphic lookup fields: Specify the concrete object type in the mapping; test each type separately
  • Formula fields in Extract: Standard Extract supports formula fields; Turbo Extract does not -- fall back to standard Extract
  • Cross-object Load (master-detail): Insert parent records first, then child records in a separate Load step; use an Integration Procedure to orchestrate the sequence
  • Namespace-prefixed fields: Include the namespace prefix in field paths (e.g., ns__Field__c); verify the prefix matches the target org
  • Multi-currency orgs: Map CurrencyIsoCode explicitly; do not rely on the default currency assumption
  • RecordType-dependent mappings: Filter by RecordType in Extract; set RecordTypeId in Load; document which RecordTypes are supported

Notes

  • Metadata Type: OmniDataTransform (not DataRaptor -- legacy name deprecated)
  • API Version: Requires OmniStudio managed package or Industries Cloud
  • Scoring: Block deployment if score < 67
  • Dependencies (optional): sf-deploy, sf-metadata, sf-omnistudio-analyze, sf-integration-procedure
  • Turbo Extract Limitations: No formula fields, no related lists, no aggregate queries, no polymorphic fields
  • Activation: Data Mappers must be activated after deployment to be callable from Integration Procedures
  • Draft DMs can't be retrieved: sf project retrieve start -m OmniDataTransform:<Name> only works for active Data Mappers. Draft DMs return "Entity cannot be found".
  • Creating via Data API: Use sf api request rest --method POST --body @file.json to create OmniDataTransform and OmniDataTransformItem records. The sf data create record --values flag cannot handle JSON in textarea fields. Write the JSON body to a temp file first.
  • Foreign key field name: The parent lookup on OmniDataTransformItem is OmniDataTransformationId (full word "Transformation"), not OmniDataTransformId.
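
The "Creating via Data API" note can be sketched as follows. The record name, temp-file path, REST endpoint path, and API version below are illustrative assumptions to adapt to your org:

```shell
# Write the OmniDataTransform body to a temp file first -- the
# `sf data create record --values` flag cannot handle JSON in textarea fields.
cat > /tmp/odt-body.json <<'EOF'
{
  "Name": "DR_Extract_Account_Details",
  "Type": "Extract"
}
EOF

# Then POST it via the REST Data API (endpoint path and API version are
# assumptions -- adjust for your environment and org alias):
# sf api request rest "/services/data/v60.0/sobjects/OmniDataTransform" \
#   --method POST --body @/tmp/odt-body.json -o <org>
```
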

License

MIT License.