
# Skill: Personize No-Code Pipelines

Generate importable n8n workflow JSON files that sync data between Personize and 400+ apps. The user describes their source/destination and you produce ready-to-import `.json` files they paste into n8n.

## When to Use This Skill

- User wants to connect Personize to HubSpot, Salesforce, Google Sheets, Slack, Postgres, or any app with an API
- User asks for "no-code", "n8n", or "workflow automation" integration
- User wants scheduled syncs without writing code
- User needs webhook-triggered data flows

## When NOT to Use This Skill

- Want TypeScript/SDK code you can test and version → use entity-memory (CRM sync section)
- Need durable pipelines with retries, waits, and complex orchestration → use code-pipelines
- Only need a single API call → use entity-memory directly

## Prerequisites

- An n8n instance (cloud or self-hosted)
- A Personize secret key (`sk_live_...`) configured as an HTTP Header Auth credential in n8n:
  - Name: `Authorization`
  - Value: `Bearer sk_live_...`
  - Credential name: `Personize API Key` (referenced in all templates)

## Personize API Endpoints for n8n

All requests go to `https://agent.personize.ai` with an `Authorization: Bearer sk_live_...` header.

| Action | Method | Path | Use Case |
| --- | --- | --- | --- |
| Batch sync in (structured + AI) | POST | `/api/v1/batch-memorize` | Sync CRM records into Personize with per-property AI control |
| Single memorize (AI) | POST | `/api/v1/memorize` | Store one content item with AI extraction + vectors |
| Structured upsert | POST | `/api/v1/upsert` | Store properties without AI extraction |
| Semantic search | POST | `/api/v1/smart-recall` | Search memories by meaning |
| Export/filter records | POST | `/api/v1/search` | Query records by property conditions |
| Entity context digest | POST | `/api/v1/smart-memory-digest` | Get compiled context for an entity |
| Smart context | POST | `/api/v1/ai/smart-guidelines` | Get relevant variables for a message |
| Auth check | GET | `/api/v1/me` | Verify key and read plan limits |

## n8n Workflow JSON Structure

Every workflow is a JSON object with `nodes`, `connections`, and `settings`. Users import it via Menu > Import from File or by pasting (Ctrl+V) onto the n8n canvas.

```json
{
  "name": "Workflow Name",
  "nodes": [ /* array of node objects */ ],
  "connections": { /* source node name → target connections */ },
  "settings": { "executionOrder": "v1" }
}
```
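As a sketch of how these pieces fit together, the structure above can be assembled programmatically. The node names, the `/api/v1/me` target, and the helper itself are illustrative, not part of any template:

```javascript
// Sketch: assemble a minimal two-node workflow object.
// Note that "connections" is keyed by the *name* of the source node, not its id.
function buildMinimalWorkflow() {
  const trigger = {
    id: 'pz-trigger-001',
    name: 'Manual Trigger',
    type: 'n8n-nodes-base.manualTrigger',
    typeVersion: 1,
    position: [200, 300],
    parameters: {},
  };
  const request = {
    id: 'pz-api-001',
    name: 'Personize: Auth Check',
    type: 'n8n-nodes-base.httpRequest',
    typeVersion: 4.2,
    position: [420, 300], // ~220px to the right of the trigger
    parameters: { method: 'GET', url: 'https://agent.personize.ai/api/v1/me' },
  };
  return {
    name: 'Personize Auth Check',
    nodes: [trigger, request],
    connections: {
      'Manual Trigger': {
        main: [[{ node: 'Personize: Auth Check', type: 'main', index: 0 }]],
      },
    },
    settings: { executionOrder: 'v1' },
  };
}
```

Serializing this object with `JSON.stringify(workflow, null, 2)` yields a file the user can paste straight into the n8n canvas.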

## Constraints

Keywords follow RFC 2119: MUST = non-negotiable, SHOULD = strong default (override with stated reasoning), MAY = agent discretion.

1. MUST use unique node names -- because node names serve as connection keys in the `connections` object; duplicates cause wiring failures on import.
2. MUST set `"settings": { "executionOrder": "v1" }` -- because omitting it causes non-deterministic node execution order in n8n.
3. MUST reference credentials by the display name the user created in n8n -- because n8n resolves credentials by name, not by ID; mismatched names cause auth failures on import.
4. SHOULD space nodes ~220px horizontally in `[x, y]` position values -- because consistent spacing produces readable canvas layouts that users can navigate without rearranging.
5. SHOULD use short, predictable strings for node IDs (e.g., `"pz-sync-001"`) -- because predictable IDs simplify debugging and log correlation.
6. MUST connect Loop Over Items output 0 to the loop body and output 1 to the post-loop node -- because reversed connections cause infinite loops or skipped processing.
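The MUST rules above are mechanical enough to lint before handing the JSON to the user. A minimal sketch (the checks and messages are illustrative, not an official n8n validator):

```javascript
// Illustrative lint for the MUST constraints above; not an official n8n validator.
function lintWorkflow(wf) {
  const errors = [];
  // Rule 1: unique node names (names are the connection keys).
  const names = wf.nodes.map(n => n.name);
  if (new Set(names).size !== names.length) errors.push('duplicate node names');
  // Rule 2: deterministic execution order.
  if (wf.settings?.executionOrder !== 'v1') errors.push('missing settings.executionOrder = "v1"');
  // Connections must reference nodes that actually exist.
  for (const source of Object.keys(wf.connections || {})) {
    if (!names.includes(source)) errors.push(`connection source "${source}" is not a node`);
  }
  return errors;
}
```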

## Common Node Reference

| Node | `type` | `typeVersion` |
| --- | --- | --- |
| Schedule Trigger | `n8n-nodes-base.scheduleTrigger` | 1.1 |
| Manual Trigger | `n8n-nodes-base.manualTrigger` | 1 |
| Webhook | `n8n-nodes-base.webhook` | 2 |
| HTTP Request | `n8n-nodes-base.httpRequest` | 4.2 |
| Loop Over Items | `n8n-nodes-base.splitInBatches` | 3 |
| Code (JavaScript) | `n8n-nodes-base.code` | 2 |
| IF | `n8n-nodes-base.if` | 2 |
| Set / Edit Fields | `n8n-nodes-base.set` | 3.4 |
| No Operation | `n8n-nodes-base.noOp` | 1 |
| HubSpot | `n8n-nodes-base.hubspot` | 2 |
| Salesforce | `n8n-nodes-base.salesforce` | 1 |
| Google Sheets | `n8n-nodes-base.googleSheets` | 4.5 |
| Slack | `n8n-nodes-base.slack` | 2.2 |
| Postgres | `n8n-nodes-base.postgres` | 2.5 |
| MySQL | `n8n-nodes-base.mySql` | 2.4 |

## HTTP Request Node Pattern for Personize

All Personize API calls use the same HTTP Request node pattern:

```json
{
  "id": "pz-api-001",
  "name": "Personize: Batch Memorize",
  "type": "n8n-nodes-base.httpRequest",
  "typeVersion": 4.2,
  "position": [860, 300],
  "parameters": {
    "method": "POST",
    "url": "https://agent.personize.ai/api/v1/batch-memorize",
    "authentication": "predefinedCredentialType",
    "nodeCredentialType": "httpHeaderAuth",
    "sendBody": true,
    "specifyBody": "json",
    "jsonBody": "={{ JSON.stringify($json.payload) }}",
    "options": {
      "response": { "response": { "fullResponse": true } }
    }
  },
  "credentials": {
    "httpHeaderAuth": {
      "id": "pz-cred",
      "name": "Personize API Key"
    }
  }
}
```

## Expression Syntax

- `{{ $json.fieldName }}` -- field from previous node
- `{{ $('Node Name').item.json.field }}` -- field from a specific node
- `{{ JSON.stringify($json) }}` -- entire previous output as a JSON string
- `={{ expression }}` -- prefix for expression mode in parameter values

## Workflow Pattern 1: Sync IN (Source App → Personize)

Flow: `[Trigger] → [Fetch Records] → [Build Payload] → [Batch Memorize] → [Done]`

For sources that return many records, use the `batch-memorize` endpoint, which handles both structured storage and AI extraction in one call.

### When to use Loop Over Items vs. Single Call

- Single `batch-memorize` call (preferred): when the source returns all rows at once and total rows < 500. The Code node builds the entire `{ source, mapping, rows }` payload and one HTTP Request sends it.
- Loop Over Items with chunking: when the source returns thousands of rows. Split into chunks of 50-100 and send each chunk as a separate `batch-memorize` call, with a Wait node for rate limits.
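For the chunked variant, a Code node can pre-split the rows so each loop iteration carries one payload. A sketch (the chunk size and the `{ source, mapping, rows }` shape follow the batch-memorize pattern in this skill; `mapping` is supplied by the caller):

```javascript
// Sketch: split source rows into chunks so each batch-memorize call stays small.
// Each returned item becomes one iteration in Loop Over Items.
function chunkRows(rows, mapping, chunkSize = 100) {
  const items = [];
  for (let i = 0; i < rows.length; i += chunkSize) {
    items.push({
      json: {
        payload: {
          source: 'n8n-sync',
          mapping,
          rows: rows.slice(i, i + chunkSize),
        },
      },
    });
  }
  return items;
}
```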

### Code Node: Build batch-memorize Payload

This Code node transforms source records into the Personize `batch-memorize` payload:

```javascript
// Input: array of source records from previous node
const items = $input.all();

const rows = items.map(item => item.json);

const payload = {
  source: 'n8n-sync',
  mapping: {
    entityType: 'contact',
    email: 'email',           // source field name containing email
    runName: 'n8n-sync-' + Date.now(),
    properties: {
      // Structured fields — stored directly
      email:      { sourceField: 'email',      collectionId: 'YOUR_COL_ID', collectionName: 'Contacts Standard Schema' },
      first_name: { sourceField: 'firstname',  collectionId: 'YOUR_COL_ID', collectionName: 'Contacts Standard Schema' },
      last_name:  { sourceField: 'lastname',   collectionId: 'YOUR_COL_ID', collectionName: 'Contacts Standard Schema' },
      company:    { sourceField: 'company',    collectionId: 'YOUR_COL_ID', collectionName: 'Contacts Standard Schema' },
      // AI extraction fields — extractMemories: true
      notes: {
        sourceField: 'notes',
        collectionId: 'YOUR_GEN_COL_ID',
        collectionName: 'Generated Content',
        extractMemories: true,
      },
    },
  },
  rows: rows,
  chunkSize: 1,
};

return [{ json: { payload } }];
```

Important: Replace `YOUR_COL_ID` and `YOUR_GEN_COL_ID` with actual collection IDs. The user can find these by calling `GET /api/v1/collections` or using `client.collections.list()` from the SDK.

### Per-Property `extractMemories` Decision

Rule of thumb: set `extractMemories: true` on free-form text (notes, transcripts, emails). Omit it for structured fields (email, name, dates, counts). See the entity-memory skill's `reference/memorize.md` for the complete decision table and examples.
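The rule of thumb can be sketched as a small helper. The field lists below are illustrative assumptions, not the complete decision table from `reference/memorize.md`:

```javascript
// Illustrative heuristic: free-form text fields get extractMemories: true,
// structured fields are stored as-is. These field names are examples only.
const FREE_FORM_FIELDS = ['notes', 'transcript', 'email_body', 'call_summary'];

function needsExtraction(fieldName) {
  return FREE_FORM_FIELDS.includes(fieldName);
}

function propertyConfig(fieldName, collectionId, collectionName) {
  const config = { sourceField: fieldName, collectionId, collectionName };
  if (needsExtraction(fieldName)) config.extractMemories = true;
  return config;
}
```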

## Workflow Pattern 2: Sync OUT (Personize → Destination App)

Flow: `[Trigger] → [Export from Personize] → [Loop Over Items] → [Push to Destination] → [Done]`

Use the export endpoint to query records by filters, then push each record to the destination.

### Code Node: Build Export Query

```javascript
return [{
  json: {
    groups: [{
      id: 'g1',
      logic: 'AND',
      conditions: [
        { property: 'company', operator: 'IS_SET' }
      ]
    }],
    type: 'Contact',
    returnRecords: true,
    pageSize: 100,
    includeMemories: false
  }
}];
```

### Export Filter Operators

| Operator | Description |
| --- | --- |
| `EQ` | Equals |
| `NEQ` | Not equals |
| `CONTAINS` | Contains substring |
| `GT` / `LT` | Greater / less than |
| `IS_SET` | Field has a value |
| `IS_NOT_SET` | Field is empty |
| `IN` | Value in array |
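To sanity-check a filter before sending it, the operators can be mirrored client-side against a sample record. A sketch (an illustrative local approximation of the semantics above, not the server's implementation):

```javascript
// Illustrative local mirror of the export filter operators; not server code.
function matches(record, condition) {
  const value = record[condition.property];
  switch (condition.operator) {
    case 'EQ':         return value === condition.value;
    case 'NEQ':        return value !== condition.value;
    case 'CONTAINS':   return typeof value === 'string' && value.includes(condition.value);
    case 'GT':         return value > condition.value;
    case 'LT':         return value < condition.value;
    case 'IS_SET':     return value !== undefined && value !== null && value !== '';
    case 'IS_NOT_SET': return value === undefined || value === null || value === '';
    case 'IN':         return Array.isArray(condition.value) && condition.value.includes(value);
    default:           throw new Error(`unknown operator: ${condition.operator}`);
  }
}
```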

## Workflow Pattern 3: Per-Record AI Enrichment

Flow: `[Trigger] → [Fetch Record] → [Smart Digest] → [AI Prompt] → [Push Result]`

Use `smart-memory-digest` to compile all known context for a contact, then feed it to the Personize prompt API for AI-generated content.

### Code Node: Build Smart Digest Request

```javascript
const email = $json.email;
return [{
  json: {
    email: email,
    type: 'Contact',
    token_budget: 2000,
    include_properties: true,
    include_memories: true
  }
}];
```

### Code Node: Build AI Prompt Request

```javascript
const context = $('Personize: Smart Digest').item.json.data.compiledContext;
const email = $('Personize: Smart Digest').item.json.data.properties?.email || '';

return [{
  json: {
    prompt: `Using the context below, write a personalized outreach email for ${email}.\n\nContext:\n${context}`,
    model: 'claude-sonnet-4-6'
  }
}];
```

## Workflow Pattern 4: Webhook → Memorize (Real-time Sync)

Flow: `[Webhook] → [Transform] → [Memorize Pro] → [Respond]`

For real-time data capture: CRM webhooks, form submissions, Zapier/Make triggers.

### Webhook Node

```json
{
  "id": "pz-webhook-001",
  "name": "Webhook",
  "type": "n8n-nodes-base.webhook",
  "typeVersion": 2,
  "position": [200, 300],
  "parameters": {
    "path": "personize-ingest",
    "httpMethod": "POST",
    "responseMode": "lastNode"
  }
}
```

### Code Node: Build memorize_pro Payload

```javascript
const data = $json.body || $json;
return [{
  json: {
    content: `Name: ${data.name}\nEmail: ${data.email}\nCompany: ${data.company}\nNotes: ${data.notes || ''}`,
    speaker: data.source || 'webhook',
    enhanced: true,
    tags: ['webhook', 'real-time'],
    email: data.email
  }
}];
```

## Rate Limit Handling in n8n

Personize returns HTTP 429 when rate limited. Handle this in n8n:

### Option A: Retry on Fail (Recommended)

Set on the HTTP Request node:

```json
{
  "retryOnFail": true,
  "maxTries": 3,
  "waitBetweenTries": 60000
}
```

### Option B: Wait Node Between Batches

Add a Wait node (type `n8n-nodes-base.wait`) after each batch call:

```json
{
  "id": "pz-wait-001",
  "name": "Rate Limit Wait",
  "type": "n8n-nodes-base.wait",
  "typeVersion": 1.1,
  "position": [1080, 300],
  "parameters": {
    "amount": 62,
    "unit": "seconds"
  }
}
```

Always call `GET /api/v1/me` first to read the actual limits for the user's plan.
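When sizing the Wait node, a back-of-envelope estimate helps set expectations for long syncs. A sketch (the request budget here is a made-up example; read the real limits from `GET /api/v1/me`):

```javascript
// Sketch: estimate call count and wall-clock time for a chunked sync.
// requestsPerMinute is an assumed example; read real limits from /api/v1/me.
function estimateSync(totalRows, chunkSize, requestsPerMinute) {
  const calls = Math.ceil(totalRows / chunkSize);
  const waitSeconds = 60 / requestsPerMinute;   // spacing between calls
  const totalSeconds = Math.ceil(calls * waitSeconds);
  return { calls, waitSeconds, totalSeconds };
}
```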

## Building a Workflow: Step-by-Step

When the user asks for an n8n workflow, follow these steps:

### Step 1: Identify the direction

- Sync IN = data flows into Personize (use `batch-memorize` or `memorize_pro`)
- Sync OUT = data flows out of Personize (use `export`, `recall_pro`, or `smart-memory-digest`)
- Bidirectional = combine both patterns in one workflow

### Step 2: Choose the trigger

- Schedule -- for periodic batch syncs (hourly, daily)
- Webhook -- for real-time, event-driven ingestion
- Manual -- for testing and one-time imports

### Step 3: Choose source/destination nodes

If n8n has a built-in node for the app (HubSpot, Salesforce, Google Sheets, Slack, Postgres), use it. Otherwise, use the HTTP Request node.

### Step 4: Build the mapping

Ask the user which fields to sync. For each field, determine:

- The source field name (from the source app)
- The target property name (in Personize)
- Whether it needs AI extraction (`extractMemories: true`)
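The answers from this step can be folded into the mapping mechanically. A sketch, where the `{ source, target, ai }` spec format is a hypothetical intermediate of this helper, not an API shape:

```javascript
// Sketch: turn the user's field answers into the batch-memorize
// mapping.properties object. The spec format { source, target, ai } is
// a hypothetical intermediate used only by this helper.
function buildProperties(fieldSpecs, collectionId, collectionName) {
  const properties = {};
  for (const spec of fieldSpecs) {
    properties[spec.target] = {
      sourceField: spec.source,
      collectionId,
      collectionName,
      ...(spec.ai ? { extractMemories: true } : {}),  // AI extraction only when requested
    };
  }
  return properties;
}
```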

### Step 5: Generate the workflow JSON

Use the templates in `templates/` as starting points. Customize:

- Node names (descriptive of the actual source/destination)
- API URLs and endpoints
- Code node logic for field mapping
- Credential references
- Schedule interval

### Step 6: Add error handling

- Set `retryOnFail: true` on all HTTP Request nodes hitting Personize
- Add Wait nodes between batches if syncing > 50 records
- Optionally add an IF node after the API call to check `{{ $json.success }}` and handle errors
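For the optional success check, a small Code node can normalize the response before the IF node. A sketch that assumes the response carries a boolean `success` field and, when `fullResponse` is on, a `body` wrapper; both are assumptions to verify against the actual API response:

```javascript
// Sketch: normalize an API response for the IF node. Assumes a boolean
// "success" field and a fullResponse-style "body" wrapper (both assumptions).
function checkResponse(response) {
  const body = response.body ?? response;   // fullResponse wraps the payload in .body
  if (body.success === true) return { ok: true, data: body };
  return { ok: false, error: body.error ?? `HTTP ${response.statusCode ?? 'unknown'}` };
}
```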

### Step 7: Output the JSON

Save the workflow JSON to a `.json` file. Tell the user to:

1. Open n8n
2. Create a new workflow
3. Menu > Import from File (or Ctrl+V to paste)
4. Create the Personize API Key credential (Header Auth with `Authorization: Bearer sk_live_...`)
5. Create credentials for the source/destination app
6. Test with Manual Trigger first, then activate the Schedule Trigger

## Template Workflows

Ready-to-customize templates are in the `templates/` folder:

| Template | File | Description |
| --- | --- | --- |
| HubSpot → Personize | `templates/hubspot-to-personize.json` | Sync HubSpot contacts into Personize memory |
| Personize → Slack | `templates/personize-to-slack.json` | Export records and post a digest to a Slack channel |
| Webhook → Personize | `templates/webhook-to-personize.json` | Real-time ingest from any webhook source |
| Google Sheets → Personize | `templates/gsheets-to-personize.json` | Batch import rows from a Google Sheet |

Each template is a complete, importable n8n workflow JSON with placeholder credentials.

## n8n App Integrations (400+)

n8n has built-in nodes for these categories; use them instead of HTTP Request when available:

| Category | Apps |
| --- | --- |
| CRM | HubSpot, Salesforce, Pipedrive, Zoho CRM |
| Spreadsheets | Google Sheets, Airtable, Microsoft Excel |
| Communication | Slack, Microsoft Teams, Discord, Telegram, Email (SMTP/IMAP) |
| Project Mgmt | Jira, Asana, Trello, Monday.com, Notion, Linear |
| Databases | Postgres, MySQL, MongoDB, Redis |
| Dev Tools | GitHub, GitLab |
| Marketing | Mailchimp, SendGrid, ActiveCampaign |
| E-Commerce | Shopify, Stripe, WooCommerce |
| Cloud | AWS S3/SES/Lambda, Google Cloud |
| AI | OpenAI, Anthropic Claude, Google Gemini |

For any app not listed, use the HTTP Request node with the app's REST API.

## Reference Documentation & Search Queries

For n8n official docs URLs, app-specific node docs, trigger node docs, community templates, suggested search queries, and version notes, see `reference/n8n-reference.md`.