langfuse-prompt-migration


Langfuse Prompt Migration


Migrate hardcoded prompts to Langfuse for version control, A/B testing, and deployment-free iteration.

Prerequisites


Verify credentials before starting:

```bash
echo $LANGFUSE_PUBLIC_KEY   # pk-...
echo $LANGFUSE_SECRET_KEY   # sk-...
echo $LANGFUSE_HOST         # https://cloud.langfuse.com or self-hosted
```

If they are not set, ask the user to configure them first.

Migration Flow


1. Scan codebase for prompts
2. Analyze templating compatibility
3. Propose structure (names, subprompts, variables)
4. User approves
5. Create prompts in Langfuse
6. Refactor code to use get_prompt()
7. Link prompts to traces (if tracing enabled)
8. Verify application works

Step 1: Find Prompts


Search for these patterns:

| Framework | Look for |
|---|---|
| OpenAI | `messages=[{"role": "system", "content": "..."}]` |
| Anthropic | `system="..."` |
| LangChain | `ChatPromptTemplate`, `SystemMessage` |
| Vercel AI | `system: "..."`, `prompt: "..."` |
| Raw | Multi-line strings near LLM calls |

Step 2: Check Templating Compatibility


CRITICAL: Langfuse only supports simple `{{variable}}` substitution. No conditionals, loops, or filters.

| Template Feature | Langfuse Native | Action |
|---|---|---|
| `{{variable}}` | ✅ | Direct migration |
| `{var}` / `${var}` | ⚠️ | Convert to `{{var}}` |
| `{% if %}` / `{% for %}` | ❌ | Move logic to code |
| `{{ var \| filter }}` | ❌ | Apply filter in code |
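Where prompts use `{var}` or `${var}` placeholders, a small helper can rewrite them into Langfuse's `{{var}}` form during migration. A sketch (the helper name and regex patterns are illustrative; adjust to your codebase):

```python
import re

def to_langfuse_vars(template: str) -> str:
    """Rewrite {var} and ${var} placeholders as {{var}}.

    Hypothetical migration helper: placeholders already in
    {{var}} form are left untouched."""
    template = re.sub(r"\$\{(\w+)\}", r"{{\1}}", template)
    return re.sub(r"(?<!\{)\{(\w+)\}(?!\})", r"{{\1}}", template)

print(to_langfuse_vars("Hello ${name}, you asked: {query}"))
# Hello {{name}}, you asked: {{query}}
```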

Decision Tree


```
Contains {% if %}, {% for %}, or filters?
├─ No → Direct migration
└─ Yes → Choose:
    ├─ Option A (RECOMMENDED): Move logic to code, pass pre-computed values
    └─ Option B: Store raw template, compile client-side with Jinja2
        └─ ⚠️ Loses: Playground preview, UI experiments
```
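Under Option B, the raw Jinja2 template stays in Langfuse and rendering happens client-side. A minimal sketch, assuming the `jinja2` package is installed; the template string is hardcoded here, but in practice it would come from `langfuse.get_prompt(...).prompt`:

```python
from jinja2 import Template

# In practice: raw = langfuse.get_prompt("support/triage", label="production").prompt
raw = "Hello {{ user.name }}!{% if user.is_premium %} Thanks for subscribing.{% endif %}"

rendered = Template(raw).render(user={"name": "Ada", "is_premium": True})
print(rendered)
# Hello Ada! Thanks for subscribing.
```

The trade-off is as the tree states: Langfuse never sees the rendered output, so Playground preview and UI experiments are unavailable for these prompts.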

Simplifying Complex Templates


**Conditionals** → Pre-compute in code. Instead of `{% if user.is_premium %}...{% endif %}` in the prompt, use `{{tier_message}}` and compute the value in code before `compile()`:

```python
# `user` and `prompt` come from the surrounding application code;
# the message text is illustrative.
tier_message = "You are on the premium plan." if user.is_premium else ""
messages = prompt.compile(tier_message=tier_message)
```


**Loops** → Pre-format in code. Instead of `{% for tool in tools %}...{% endfor %}` in the prompt, use `{{tools_list}}` and format the list in code before `compile()`:

```python
# `tools` and `prompt` come from the surrounding application code;
# the attribute names are illustrative.
tools_list = "\n".join(f"- {tool.name}: {tool.description}" for tool in tools)
messages = prompt.compile(tools_list=tools_list)
```

For external templating details, fetch: https://langfuse.com/faq/all/using-external-templating-libraries


Step 3: Propose Structure


Naming Conventions


| Rule | Example | Bad |
|---|---|---|
| Lowercase, hyphenated | `chat-assistant` | `ChatAssistant_v2` |
| Feature-based | `document-summarizer` | `prompt1` |
| Hierarchical for related prompts | `support/triage` | `supportTriage` |
| Prefix subprompts with `_` | `_base-personality` | `shared-personality` |

Identify Subprompts


Extract when:
  • Same text in 2+ prompts
  • Represents distinct component (personality, safety rules, format)
  • Would need to change together

Variable Extraction


| Make Variable | Keep Hardcoded |
|---|---|
| User-specific (`{{user_name}}`) | Output format instructions |
| Dynamic content (`{{context}}`) | Safety guardrails |
| Per-request (`{{query}}`) | Persona/personality |
| Environment-specific (`{{company_name}}`) | Static examples |
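As a concrete illustration (the prompt text and company name are hypothetical), extracting an environment-specific value moves it from the prompt body into a `compile()` argument:

```python
# Before: environment-specific value hardcoded in the prompt text.
before = "You are a support agent for Acme Corp. Answer politely."

# After: the value becomes a {{variable}}, supplied at call time via
# prompt.compile(company_name="Acme Corp"). For a simple variable that
# substitution is equivalent to a plain string replace, simulated here:
after = "You are a support agent for {{company_name}}. Answer politely."

assert after.replace("{{company_name}}", "Acme Corp") == before
```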

Step 4: Present Plan to User


Format:

```
Found N prompts across M files:

src/chat.py:
  - System prompt (47 lines) → 'chat-assistant'

src/support/triage.py:
  - Triage prompt (34 lines) → 'support/triage'
    ⚠️ Contains {% if %} - will simplify

Subprompts to extract:
  - '_base-personality' - used by: chat-assistant, support/triage

Variables to add:
  - {{user_name}} - hardcoded in 2 prompts

Proceed?
```

Step 5: Create Prompts in Langfuse


Use `langfuse.create_prompt()` with:
  • `name`: Your chosen name
  • `prompt`: Template text (or message array for chat type)
  • `type`: `"text"` or `"chat"`
  • `labels`: `["production"]` (they're already live)
  • `config`: Optional model settings

Labeling strategy:
  • `production` → All migrated prompts
  • `staging` → Add later for testing
  • `latest` → Auto-applied by Langfuse
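A sketch of the call for a chat-type prompt. The prompt name, message text, and config values are illustrative, and the Langfuse call only runs when credentials are configured (see Prerequisites):

```python
import os

# Chat-type prompt body: a message array with {{variable}} placeholders.
chat_messages = [
    {"role": "system", "content": "You are a support assistant for {{company_name}}."},
    {"role": "user", "content": "{{query}}"},
]

if os.getenv("LANGFUSE_SECRET_KEY"):  # guard: credentials configured?
    from langfuse import Langfuse

    langfuse = Langfuse()  # reads LANGFUSE_* environment variables
    langfuse.create_prompt(
        name="chat-assistant",
        prompt=chat_messages,
        type="chat",
        labels=["production"],  # migrated prompts are already live
        config={"model": "gpt-4o", "temperature": 0.3},  # optional, illustrative
    )
```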

Step 6: Refactor Code


Replace hardcoded prompts with:

```python
prompt = langfuse.get_prompt("name", label="production")
messages = prompt.compile(var1=value1, var2=value2)
```

Key points:
  • Always use `label="production"` (not `latest`) for stability
  • Call `.compile()` to substitute variables
  • For chat prompts, the result is a message array ready for the API

For SDK examples (Python/JS/TS): fetch https://langfuse.com/docs/prompts/get-started
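The Python SDK's `get_prompt()` also accepts a `fallback`, so the app degrades gracefully when Langfuse is unreachable. A guarded sketch (the prompt name and fallback text are illustrative; check your SDK version supports the parameter):

```python
import os

FALLBACK = "You are a helpful assistant. Answer this question: {{query}}"

if os.getenv("LANGFUSE_SECRET_KEY"):  # guard: credentials configured?
    from langfuse import Langfuse

    prompt = Langfuse().get_prompt(
        "chat-assistant",       # illustrative name
        label="production",
        fallback=FALLBACK,      # used if Langfuse is unreachable
    )
    text = prompt.compile(query="How do I reset my password?")
```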

Step 7: Link Prompts to Traces


If codebase uses Langfuse tracing, link prompts so you can see which version produced each response.

Detect Existing Tracing


Look for:
  • `@observe()` decorators
  • `langfuse.trace()` calls
  • `from langfuse.openai import openai` (instrumented client)

Link Methods


| Setup | How to Link |
|---|---|
| `@observe()` decorator | `langfuse_context.update_current_observation(prompt=prompt)` |
| Manual tracing | `trace.generation(prompt=prompt, ...)` |
| OpenAI integration | `openai.chat.completions.create(..., langfuse_prompt=prompt)` |
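The decorator row in context, as a sketch: the function and prompt names are illustrative, and the block only exercises the SDK when credentials are present (see Prerequisites):

```python
import os

HAS_CREDENTIALS = bool(os.getenv("LANGFUSE_SECRET_KEY"))  # guard: see Prerequisites

if HAS_CREDENTIALS:
    from langfuse import Langfuse
    from langfuse.decorators import langfuse_context, observe

    langfuse = Langfuse()

    @observe()
    def answer(query):
        prompt = langfuse.get_prompt("chat-assistant", label="production")
        # Link this generation to the prompt version that produced it.
        langfuse_context.update_current_observation(prompt=prompt)
        return prompt.compile(query=query)  # pass the result to the LLM client
```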

Verify in UI


  1. Go to Traces → select a trace
  2. Click on Generation
  3. Check Prompt field shows name and version

Step 8: Verify Migration


Checklist


  • All prompts created with `production` label
  • Code fetches with `label="production"`
  • Variables compile without errors
  • Subprompts resolve correctly
  • Application behavior unchanged
  • Generations show linked prompt in UI (if tracing)

Common Issues


| Issue | Solution |
|---|---|
| `PromptNotFoundError` | Check name spelling |
| Variables not replaced | Use `{{var}}` not `{var}`; call `.compile()` |
| Subprompt not resolved | Must exist with the same label |
| Old prompt cached | Restart the app |
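On the caching row: if your SDK version exposes it, `get_prompt()`'s `cache_ttl_seconds` parameter controls how long a fetched prompt is reused before re-fetching, which avoids restarts after prompt edits. A guarded sketch (the TTL value and prompt name are illustrative; verify the parameter exists in your SDK version):

```python
import os

CACHE_TTL_SECONDS = 60  # illustrative: re-fetch at most once per minute

if os.getenv("LANGFUSE_SECRET_KEY"):  # guard: credentials configured?
    from langfuse import Langfuse

    prompt = Langfuse().get_prompt(
        "chat-assistant",
        label="production",
        cache_ttl_seconds=CACHE_TTL_SECONDS,
    )
```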

Out of Scope


  • Prompt engineering (writing better prompts)
  • Evaluation setup
  • A/B testing workflow
  • Non-LLM string templates