# Fix Your Broken AI
Systematic approach to diagnosing and fixing AI features that aren't working. Run through these checks in order.
## Quick Diagnostic Checklist
### 1. Is the AI provider configured?
```python
import dspy

# Check current config
print(dspy.settings.lm)  # Should show your LM, not None

# If None, configure it:
lm = dspy.LM("openai/gpt-4o-mini")
dspy.configure(lm=lm)
```

**Common issues:**
- Forgot to call `dspy.configure(lm=lm)`
- API key not set in environment
- Wrong model name format (should be `provider/model-name`)
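The last two issues can be caught up front with a couple of plain-Python checks before you ever call the provider. A sketch, assuming the standard `OPENAI_API_KEY` environment variable (the variable name is provider-specific):

```python
import os

def api_key_present(var: str = "OPENAI_API_KEY") -> bool:
    """True if the provider API key env var is set and non-empty."""
    return bool(os.environ.get(var, "").strip())

def looks_like_model_name(name: str) -> bool:
    """True if the name follows the provider/model-name format."""
    provider, _, model = name.partition("/")
    return bool(provider) and bool(model)

if not api_key_present():
    print("Set OPENAI_API_KEY before configuring the LM")
```

Running these first turns a cryptic provider error into an immediate, actionable message.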
### 2. Does the AI respond at all?
```python
# Test the AI provider directly
lm = dspy.LM("openai/gpt-4o-mini")
response = lm("Hello, respond with just 'OK'")
print(response)
```
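If that direct call errors out or hangs, it can help to wrap it in a small smoke test so the failure is reported instead of crashing your remaining checks. A sketch, where `lm` is any callable that takes a prompt string:

```python
def smoke_test(lm, probe: str = "Hello, respond with just 'OK'") -> bool:
    """Call the LM once and report whether it returned anything."""
    try:
        response = lm(probe)
    except Exception as exc:  # auth, network, and rate-limit errors all land here
        print(f"LM call failed: {exc}")
        return False
    if not response:
        print("LM returned an empty response")
        return False
    return True
```

A `False` result tells you the problem is the provider connection, not your DSPy program.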
### 3. Is the task definition correct?
```python
# Check your signature defines the right fields
class MySignature(dspy.Signature):
    """Clear task description here."""

    input_field: str = dspy.InputField(desc="what this contains")
    output_field: str = dspy.OutputField(desc="what to produce")

# Verify by inspecting
print(MySignature.fields)
```

**Common issues:**
- Missing `dspy.InputField()` / `dspy.OutputField()` annotations
- Wrong type hints (use `str`, `list[str]`, `Literal[...]`, Pydantic models)
- Vague or missing docstring (the docstring IS the task instruction)
### 4. Are you passing the right inputs?
```python
# Check that input field names match
result = my_program(question="test")  # field name must match signature

# Wrong:
result = my_program(q="test")   # 'q' doesn't match 'question'
result = my_program("test")     # positional args don't work
```
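This is ordinary Python keyword-argument matching, so the failure can be reproduced with a plain function, which is useful for building intuition about the error message:

```python
def forward(question: str) -> str:
    """Stand-in for a module whose signature declares a 'question' field."""
    return f"answered: {question}"

print(forward(question="test"))  # name matches, works

try:
    forward(q="test")  # 'q' doesn't match 'question'
except TypeError as exc:
    print(f"TypeError: {exc}")
```

The `TypeError` you see from a DSPy module names the mismatched keyword the same way.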
### 5. Is the output being parsed?
```python
result = my_program(question="test")
print(result)               # see all fields
print(result.answer)        # access specific field
print(type(result.answer))  # check type
```

**Common issues with typed outputs:**
- `Literal` type doesn't match any of the provided options
- Pydantic model validation fails
- List output returns string instead of list
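For the stringified-list case, one defensive pattern is to coerce the value before using it downstream. This is a standard-library sketch of that pattern, not DSPy behavior:

```python
import ast

def coerce_list(value) -> list:
    """Return value as a list, parsing stringified lists like "['a', 'b']"."""
    if isinstance(value, list):
        return value
    if isinstance(value, str):
        try:
            parsed = ast.literal_eval(value)
            if isinstance(parsed, list):
                return parsed
        except (ValueError, SyntaxError):
            pass
        return [value]  # fall back to a single-item list
    raise TypeError(f"Cannot coerce {type(value).__name__} to list")
```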
## Inspect What the AI Actually Sees
The most powerful debugging tool: it shows exactly what prompts were sent and what came back.

```python
# Show the last 3 AI calls
dspy.inspect_history(n=3)
```

This shows:
- The full prompt sent to the AI
- The AI's raw response
- How DSPy parsed the response

**What to look for:**
- Is the prompt clear? Does it describe the task well?
- Is the AI's response in the expected format?
- Are few-shot examples (if any) helpful or misleading?
## Common Errors and Fixes
### `AttributeError: 'NoneType' has no attribute ...`

**Cause:** AI provider not configured.

**Fix:** Call `dspy.configure(lm=lm)` before using any module.

### `ValueError: Could not parse output`

**Cause:** AI output doesn't match the expected format.

**Fix:**
- Check `dspy.inspect_history()` to see what the AI returned
- Simplify your output types
- Add clearer field descriptions
- Use `dspy.ChainOfThought` instead of `dspy.Predict` (reasoning helps formatting)
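For `Literal` outputs specifically, parse failures are often just whitespace, trailing punctuation, or casing, so normalizing the raw string before comparison can recover them. A sketch; the allowed labels here are illustrative:

```python
from typing import Literal, get_args

Sentiment = Literal["positive", "negative", "neutral"]

def normalize_label(raw: str) -> str:
    """Map a raw model string onto one of the allowed Literal options."""
    cleaned = raw.strip().strip(".").lower()
    if cleaned in get_args(Sentiment):
        return cleaned
    raise ValueError(f"{raw!r} is not one of {get_args(Sentiment)}")
```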
### `TypeError: forward() got an unexpected keyword argument`

**Cause:** Input field name mismatch.

**Fix:** Make sure you're passing keyword arguments that match your signature's `InputField` names.
### Search/retriever returns empty results
**Cause:** Retriever not configured or wrong endpoint.

**Fix:**

```python
# Check retriever config
print(dspy.settings.rm)

# Test retriever directly
rm = dspy.ColBERTv2(url="http://...")
results = rm("test query", k=3)
print(results)
```
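A small wrapper makes the empty-result case fail loudly instead of silently degrading downstream answers. A sketch, where `rm` is any retriever callable:

```python
def check_retriever(rm, query: str = "test query", k: int = 3) -> list:
    """Run one retrieval and fail loudly if nothing comes back."""
    results = rm(query, k=k)
    if not results:
        raise RuntimeError(
            f"Retriever returned no passages for {query!r}; "
            "check the endpoint URL and that the index is loaded"
        )
    return list(results)
```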
### Optimizer makes things worse
**Cause:** Bad metric, too little data, or overfitting.

**Fix:**
- Manually verify your metric on 10-20 examples
- Add more training data
- Reduce `max_bootstrapped_demos`
- Use a validation set to check for overfitting
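Manually verifying the metric can be as simple as looping over a handful of examples and printing disagreements. A sketch with a toy exact-match metric over dict-based examples; your metric and example types will differ:

```python
def exact_match(example: dict, prediction: dict) -> bool:
    """Toy metric: prediction answer equals gold answer, ignoring case."""
    return prediction["answer"].strip().lower() == example["answer"].strip().lower()

def spot_check(metric, pairs) -> float:
    """Print each failing (example, prediction) pair and return the pass rate."""
    passed = 0
    for example, prediction in pairs:
        ok = metric(example, prediction)
        passed += ok
        if not ok:
            print(f"FAIL: expected {example['answer']!r}, got {prediction['answer']!r}")
    return passed / len(pairs)
```

If the pass rate disagrees with your own judgment of those examples, fix the metric before blaming the optimizer.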
### `dspy.Assert` / `dspy.Suggest` failures

**Cause:** AI output doesn't meet constraints.

**Fix:**
- Check if constraints are reasonable (not too strict)
- Make constraint messages more descriptive
- Ensure the AI can reasonably satisfy the constraints
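Constraints are easier to debug when each one is a small predicate that returns both a verdict and a descriptive message you can pass along to the model. A sketch of that shape; the word limit is illustrative:

```python
def within_word_limit(answer: str, max_words: int = 50) -> tuple[bool, str]:
    """Check one constraint and explain it in the failure message."""
    n = len(answer.split())
    ok = n <= max_words
    return ok, f"Answer has {n} words; keep it at or under {max_words}."
```

Descriptive messages like this give the model something concrete to act on when it retries.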
## Advanced Debugging
### Enable verbose tracing
```python
dspy.configure(lm=lm, trace=[])

# Now run your program — trace will be populated
result = my_program(question="test")
```
### Inspect module structure
```python
# Print the module tree
print(my_program)

# See all named predictors
for name, predictor in my_program.named_predictors():
    print(f"{name}: {predictor}")
```
### Test individual components
Break your pipeline into pieces and test each one:
```python
class MyPipeline(dspy.Module):
    def __init__(self):
        super().__init__()
        self.step1 = dspy.ChainOfThought("question -> search_query")
        self.step2 = dspy.Retrieve(k=3)
        self.step3 = dspy.ChainOfThought("context, question -> answer")

    def forward(self, question):
        query = self.step1(question=question)
        print(f"Step 1 output: {query.search_query}")  # Debug
        context = self.step2(query.search_query)
        print(f"Step 2 retrieved: {len(context.passages)} passages")  # Debug
        answer = self.step3(context=context.passages, question=question)
        print(f"Step 3 output: {answer.answer}")  # Debug
        return answer
```
### Compare prompts before/after optimization
```python
# Before optimization
baseline = MyProgram()
baseline(question="test")
print("=== BASELINE PROMPT ===")
dspy.inspect_history(n=1)

# After optimization
optimized = MyProgram()
optimized.load("optimized.json")
optimized(question="test")
print("=== OPTIMIZED PROMPT ===")
dspy.inspect_history(n=1)
```
undefinedAdditional resources
额外资源
- For complete error index, see reference.md
- To measure and improve accuracy, use
/ai-improving-accuracy - Use to trace a specific request end-to-end (every LM call, retrieval, latency)
/ai-tracing-requests - For DSPy API details, see
docs/dspy-reference.md
- 完整错误索引请查看reference.md
- 如需衡量并提升准确率,请使用
/ai-improving-accuracy - 使用进行端到端追踪特定请求(包括所有LM调用、检索操作、延迟信息)
/ai-tracing-requests - DSPy API详情请查看
docs/dspy-reference.md