# Langfuse Prompt Migration
Migrate hardcoded prompts to Langfuse for version control, A/B testing, and deployment-free iteration.
## Prerequisites
Verify credentials before starting:

```bash
echo $LANGFUSE_PUBLIC_KEY  # pk-...
echo $LANGFUSE_SECRET_KEY  # sk-...
echo $LANGFUSE_HOST        # https://cloud.langfuse.com or self-hosted
```

If not set, ask the user to configure them first.
## Migration Flow
1. Scan codebase for prompts
2. Analyze templating compatibility
3. Propose structure (names, subprompts, variables)
4. User approves
5. Create prompts in Langfuse
6. Refactor code to use get_prompt()
7. Link prompts to traces (if tracing enabled)
8. Verify application works

## Step 1: Find Prompts
Search for these patterns:
| Framework | Look for |
|---|---|
| OpenAI | `messages=[{"role": "system", ...}]` |
| Anthropic | `system=` parameter |
| LangChain | `PromptTemplate`, `ChatPromptTemplate` |
| Vercel AI | `system:` in `generateText()` / `streamText()` |
| Raw | Multi-line strings near LLM calls |
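As a rough illustration, the scan can be automated with a few regexes approximating the patterns in the table above (a sketch, not part of any Langfuse tooling; tune the patterns per codebase):

```python
import re
from pathlib import Path

# Regexes approximating the table above; adjust per codebase.
PATTERNS = {
    "OpenAI system message": re.compile(r'"role"\s*:\s*"system"'),
    "Anthropic system param": re.compile(r"\bsystem\s*="),
    "LangChain template": re.compile(r"\b(Chat)?PromptTemplate\b"),
    "Long multi-line string": re.compile(r'"""(?:.|\n){80,}?"""'),
}

def scan_for_prompts(root: str) -> list[tuple[str, str]]:
    """Return (file, pattern-name) hits for likely hardcoded prompts."""
    hits = []
    for path in Path(root).rglob("*.py"):
        text = path.read_text(errors="ignore")
        for label, pattern in PATTERNS.items():
            if pattern.search(text):
                hits.append((str(path), label))
    return hits
```

Treat the output as candidates for manual review, not a definitive list.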
## Step 2: Check Templating Compatibility
**CRITICAL:** Langfuse only supports simple `{{variable}}` substitution. No conditionals, loops, or filters.

| Template Feature | Langfuse Native | Action |
|---|---|---|
| `{{variable}}` | ✅ | Direct migration |
| `{variable}` (f-string style) | ⚠️ | Convert to `{{variable}}` |
| `{% if %}` / `{% for %}` | ❌ | Move logic to code |
| Filters (e.g. `{{ name\|upper }}`) | ❌ | Apply filter in code |
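To make the "simple substitution" constraint concrete, here is a minimal pure-Python sketch of `{{variable}}` replacement (an illustration of the semantics only, not the SDK's implementation):

```python
import re

def compile_template(template: str, **variables) -> str:
    """Replace {{name}} slots with values; anything fancier
    (conditionals, loops, filters) must happen in code first."""
    def substitute(match: re.Match) -> str:
        name = match.group(1).strip()
        # Leave unknown variables untouched so they are easy to spot.
        return str(variables[name]) if name in variables else match.group(0)
    return re.sub(r"\{\{([^{}]+)\}\}", substitute, template)

print(compile_template("Hello {{user_name}}!", user_name="Ada"))  # Hello Ada!
```

Anything the template cannot express this way has to be pre-computed before the call, which is what the decision tree below is about.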
### Decision Tree
```
Contains {% if %}, {% for %}, or filters?
├─ No → Direct migration
└─ Yes → Choose:
   ├─ Option A (RECOMMENDED): Move logic to code, pass pre-computed values
   └─ Option B: Store raw template, compile client-side with Jinja2
      └─ ⚠️ Loses: Playground preview, UI experiments
```

### Simplifying Complex Templates
**Conditionals** → Pre-compute in code:

```python
# Instead of {% if user.is_premium %}...{% endif %} in the prompt,
# use {{tier_message}} and compute the value in code before compile()
tier_message = "You have access to priority support." if user.is_premium else ""
messages = prompt.compile(tier_message=tier_message)
```
**Loops** → Pre-format in code:

```python
# Instead of {% for tool in tools %}...{% endfor %} in the prompt,
# use {{tools_list}} and format the list in code before compile()
tools_list = "\n".join(f"- {tool.name}: {tool.description}" for tool in tools)
messages = prompt.compile(tools_list=tools_list)
```
For external templating details, fetch: https://langfuse.com/faq/all/using-external-templating-libraries
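The two pre-computation moves above can be sketched end-to-end in pure Python (no SDK required; the field names are illustrative):

```python
def precompute_prompt_variables(user: dict, tools: list[dict]) -> dict:
    """Pre-compute values in code so the stored prompt needs only
    simple {{variable}} slots -- no conditionals or loops."""
    # Conditional → plain string decided in code
    tier_message = "You have access to priority support." if user["is_premium"] else ""
    # Loop → pre-formatted list
    tools_list = "\n".join(f"- {t['name']}: {t['description']}" for t in tools)
    return {"tier_message": tier_message, "tools_list": tools_list}

variables = precompute_prompt_variables(
    {"is_premium": True},
    [{"name": "search", "description": "Search the web"}],
)
# variables can now be passed straight to prompt.compile(**variables)
```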
## Step 3: Propose Structure
### Naming Conventions
| Rule | Example | Bad |
|---|---|---|
| Lowercase, hyphenated | `chat-assistant` | `ChatAssistant` |
| Feature-based | `support/triage` | `prompt-v2-final` |
| Hierarchical for related | `support/triage`, `support/escalation` | `triage-and-escalation` |
| Prefix subprompts with `_` | `_base-personality` | `base-personality` |
### Identify Subprompts
Extract when:
- Same text in 2+ prompts
- Represents distinct component (personality, safety rules, format)
- Would need to change together
### Variable Extraction
| Make Variable | Keep Hardcoded |
|---|---|
| User-specific (`{{user_name}}`) | Output format instructions |
| Dynamic content (`{{context}}`) | Safety guardrails |
| Per-request (`{{query}}`) | Persona/personality |
| Environment-specific (`{{app_name}}`) | Static examples |
## Step 4: Present Plan to User
Format:

```
Found N prompts across M files:

src/chat.py:
- System prompt (47 lines) → 'chat-assistant'

src/support/triage.py:
- Triage prompt (34 lines) → 'support/triage'
  ⚠️ Contains {% if %} - will simplify

Subprompts to extract:
- '_base-personality' - used by: chat-assistant, support/triage

Variables to add:
- {{user_name}} - hardcoded in 2 prompts

Proceed?
```

## Step 5: Create Prompts in Langfuse
Use `langfuse.create_prompt()` with:

- `name`: Your chosen name
- `prompt`: Template text (or message array for chat type)
- `type`: `"text"` or `"chat"`
- `labels`: `["production"]` (they're already live)
- `config`: Optional model settings

Labeling strategy:

- `production` → All migrated prompts
- `staging` → Add later for testing
- `latest` → Auto-applied by Langfuse
For full API: fetch https://langfuse.com/docs/prompts/get-started
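A sketch of the arguments for one migrated prompt (the name and template text are hypothetical examples, not values from any real codebase):

```python
# Arguments for langfuse.create_prompt(); name and template are
# hypothetical examples, not real project values.
prompt_spec = {
    "name": "chat-assistant",
    "type": "text",
    "prompt": "You are a helpful assistant. Address the user as {{user_name}}.",
    "labels": ["production"],        # serve immediately
    "config": {"temperature": 0.7},  # optional model settings
}

# With a configured client (reads the LANGFUSE_* env vars):
#   from langfuse import Langfuse
#   Langfuse().create_prompt(**prompt_spec)
```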
## Step 6: Refactor Code
Replace hardcoded prompts with:

```python
prompt = langfuse.get_prompt("name", label="production")
messages = prompt.compile(var1=value1, var2=value2)
```

Key points:

- Always use `label="production"` (not `latest`) for stability
- Call `.compile()` to substitute variables
- For chat prompts, the result is a message array ready for the API
For SDK examples (Python/JS/TS): fetch https://langfuse.com/docs/prompts/get-started
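For the chat case, the compiled result is a plain message array. This pure-Python sketch (an illustration of the shape, not the SDK itself) shows what to expect:

```python
# Illustrative chat template; Langfuse stores chat prompts as message arrays.
template = [
    {"role": "system", "content": "You are a support agent for {{app_name}}."},
    {"role": "user", "content": "{{query}}"},
]

def compile_chat(template: list[dict], **variables) -> list[dict]:
    """Substitute {{name}} slots in each message (sketch of compile())."""
    compiled = []
    for message in template:
        content = message["content"]
        for name, value in variables.items():
            content = content.replace("{{" + name + "}}", str(value))
        compiled.append({"role": message["role"], "content": content})
    return compiled

messages = compile_chat(template, app_name="Acme", query="Reset my password")
# messages can be passed directly to a chat completion API
```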
## Step 7: Link Prompts to Traces
If codebase uses Langfuse tracing, link prompts so you can see which version produced each response.
### Detect Existing Tracing
Look for:

- `@observe()` decorators
- `langfuse.trace()` calls
- `from langfuse.openai import openai` (instrumented client)
### Link Methods
| Setup | How to Link |
|---|---|
| `@observe()` decorator | Update the current observation with `prompt=prompt` |
| Manual tracing | Pass `prompt=prompt` when creating the generation |
| OpenAI integration | Pass `langfuse_prompt=prompt` to the completion call |

Exact helper names vary by SDK version; confirm against the tracing docs linked below.
### Verify in UI
- Go to Traces → select a trace
- Click on Generation
- Check Prompt field shows name and version
For tracing details: fetch https://langfuse.com/docs/prompts/get-started#link-with-langfuse-tracing
## Step 8: Verify Migration
### Checklist
- All prompts created with `production` label
- Code fetches with `label="production"`
- Variables compile without errors
- Subprompts resolve correctly
- Application behavior unchanged
- Generations show linked prompt in UI (if tracing)
### Common Issues
| Issue | Solution |
|---|---|
| Prompt not found | Check name spelling |
| Variables not replaced | Call `.compile()` with all variables |
| Subprompt not resolved | Must exist with same label |
| Old prompt cached | Restart app |
## Out of Scope
- Prompt engineering (writing better prompts)
- Evaluation setup
- A/B testing workflow
- Non-LLM string templates