langchain-sdk-patterns
LangChain SDK Patterns
Overview
Production-ready patterns for LangChain applications including LCEL chains, structured output, and error handling.
Prerequisites
- Completed setup in langchain-install-auth
- Familiarity with async/await patterns
- Understanding of error-handling best practices
Core Patterns
Pattern 1: Type-Safe Chain with Pydantic
```python
from pydantic import BaseModel, Field
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate

class SentimentResult(BaseModel):
    """Structured output for sentiment analysis."""
    sentiment: str = Field(description="positive, negative, or neutral")
    confidence: float = Field(description="Confidence score 0-1")
    reasoning: str = Field(description="Brief explanation")

llm = ChatOpenAI(model="gpt-4o-mini")
structured_llm = llm.with_structured_output(SentimentResult)

prompt = ChatPromptTemplate.from_template(
    "Analyze the sentiment of: {text}"
)
chain = prompt | structured_llm

# Returns a typed SentimentResult
result: SentimentResult = chain.invoke({"text": "I love LangChain!"})
print(f"Sentiment: {result.sentiment} ({result.confidence})")
```
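The payoff of structured output is that downstream code works with typed attributes instead of raw strings. The same typed-result idea can be illustrated without LangChain or Pydantic, using only a stdlib dataclass (all names below are illustrative):

```python
from dataclasses import dataclass

@dataclass
class Sentiment:
    """Typed result mirroring the Pydantic model above (illustrative)."""
    sentiment: str
    confidence: float
    reasoning: str

def parse_sentiment(raw: dict) -> Sentiment:
    """Validate a raw model response into a typed object."""
    if raw["sentiment"] not in {"positive", "negative", "neutral"}:
        raise ValueError(f"unexpected sentiment: {raw['sentiment']!r}")
    return Sentiment(**raw)

result = parse_sentiment(
    {"sentiment": "positive", "confidence": 0.97, "reasoning": "Enthusiastic wording"}
)
print(result.sentiment, result.confidence)  # positive 0.97
```

Validation at the boundary means the rest of the application never has to re-check the shape of the model's answer.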
Pattern 2: Retry with Fallback
```python
from langchain_openai import ChatOpenAI
from langchain_anthropic import ChatAnthropic

primary = ChatOpenAI(model="gpt-4o")
fallback = ChatAnthropic(model="claude-3-5-sonnet-20241022")

# Automatically falls back on failure
robust_llm = primary.with_fallbacks([fallback])
response = robust_llm.invoke("Hello!")
```
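The primary-then-fallback control flow is simple enough to sketch without any SDK; the helper below is a generic stdlib illustration of what fallback chaining does, not LangChain API:

```python
def invoke_with_fallbacks(callables, *args):
    """Try each callable in order; return the first successful result."""
    last_exc = None
    for fn in callables:
        try:
            return fn(*args)
        except Exception as exc:  # in practice, catch provider-specific errors
            last_exc = exc
    raise last_exc

def flaky(_msg):
    raise RuntimeError("primary provider down")

def stable(msg):
    return f"fallback says: {msg}"

print(invoke_with_fallbacks([flaky, stable], "Hello!"))  # fallback says: Hello!
```

Catching bare `Exception` is deliberately loose here; a production version would only fall back on retryable errors (timeouts, rate limits), not on bad input.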
Pattern 3: Async Batch Processing
```python
import asyncio
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate

llm = ChatOpenAI(model="gpt-4o-mini")
prompt = ChatPromptTemplate.from_template("Summarize: {text}")
chain = prompt | llm

async def process_batch(texts: list[str]) -> list:
    """Process multiple texts concurrently."""
    inputs = [{"text": t} for t in texts]
    results = await chain.abatch(inputs, config={"max_concurrency": 5})
    return results

# Usage
results = asyncio.run(process_batch(["text1", "text2", "text3"]))
```
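`max_concurrency` bounds how many requests are in flight at once, which keeps batch jobs under provider rate limits. The underlying mechanism can be sketched with a plain `asyncio.Semaphore` (stdlib only, illustrative):

```python
import asyncio

async def bounded_gather(coros, max_concurrency=5):
    """Run coroutines concurrently, at most max_concurrency at a time."""
    sem = asyncio.Semaphore(max_concurrency)

    async def run(coro):
        async with sem:
            return await coro

    # gather preserves input order regardless of completion order
    return await asyncio.gather(*(run(c) for c in coros))

async def fake_summarize(text):
    await asyncio.sleep(0.01)  # stand-in for a network call
    return f"summary of {text}"

results = asyncio.run(
    bounded_gather([fake_summarize(t) for t in ["a", "b", "c"]], max_concurrency=2)
)
print(results)  # ['summary of a', 'summary of b', 'summary of c']
```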
Pattern 4: Streaming with Callbacks
```python
from langchain_openai import ChatOpenAI
from langchain_core.callbacks import StreamingStdOutCallbackHandler

llm = ChatOpenAI(
    model="gpt-4o-mini",
    streaming=True,
    callbacks=[StreamingStdOutCallbackHandler()],
)

# Streams tokens to stdout as they arrive
for chunk in llm.stream("Tell me a story"):
    # Each chunk contains partial content
    pass
```
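Consuming a stream usually means accumulating partial chunks into the final text while updating a UI incrementally. A stdlib generator stands in for `llm.stream` below (illustrative):

```python
def fake_stream(text, size=4):
    """Yield text in small chunks, like tokens arriving from an API."""
    for i in range(0, len(text), size):
        yield text[i:i + size]

pieces = []
for chunk in fake_stream("Once upon a time"):
    pieces.append(chunk)  # e.g. render each chunk to the UI here

full = "".join(pieces)
print(full)  # Once upon a time
```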
Pattern 5: Caching for Cost Reduction
```python
from langchain_openai import ChatOpenAI
from langchain_core.globals import set_llm_cache
from langchain_community.cache import SQLiteCache

# Enable SQLite caching
set_llm_cache(SQLiteCache(database_path=".langchain_cache.db"))

llm = ChatOpenAI(model="gpt-4o-mini")

# First call hits the API
response1 = llm.invoke("What is 2+2?")

# Second identical call uses the cache (no API cost)
response2 = llm.invoke("What is 2+2?")
```
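Only byte-identical prompts (with the same model configuration) are served from cache; any variation triggers a fresh API call. The core idea is plain memoization, sketched here with stdlib `lru_cache` (illustrative, not the LangChain cache API):

```python
from functools import lru_cache

CALLS = {"n": 0}  # counts how often the "API" is actually hit

@lru_cache(maxsize=None)
def cached_invoke(prompt: str) -> str:
    """Stand-in for a paid API call; memoized on the exact prompt string."""
    CALLS["n"] += 1
    return f"answer to {prompt!r}"

cached_invoke("What is 2+2?")  # first call hits the "API"
cached_invoke("What is 2+2?")  # identical call served from cache
print(CALLS["n"])  # 1
```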
Output
- Type-safe chains with Pydantic models
- Robust error handling with fallbacks
- Efficient async batch processing
- Cost-effective caching strategies
Error Handling
Standard Error Pattern
```python
import time

from langchain_core.exceptions import OutputParserException
from openai import RateLimitError, APIError

def safe_invoke(chain, input_data, max_retries=3):
    """Invoke a chain with retries and error handling."""
    for attempt in range(max_retries):
        try:
            return chain.invoke(input_data)
        except RateLimitError:
            if attempt < max_retries - 1:
                time.sleep(2 ** attempt)  # exponential backoff
                continue
            raise
        except OutputParserException as e:
            # Handle parsing failures
            return {"error": str(e), "raw": e.llm_output}
        except APIError as e:
            raise RuntimeError(f"API error: {e}") from e
```

Resources
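The backoff schedule above sleeps `2 ** attempt` seconds between attempts and re-raises once retries are exhausted. The same retry loop, stripped of the LangChain and OpenAI specifics, can be sketched with the stdlib alone (names illustrative):

```python
import time

def retry_with_backoff(fn, max_retries=3, base_delay=1.0):
    """Call fn; on failure, sleep base_delay * 2**attempt and retry."""
    for attempt in range(max_retries):
        try:
            return fn()
        except Exception:
            if attempt == max_retries - 1:
                raise  # out of retries: surface the last error
            time.sleep(base_delay * 2 ** attempt)

failures = {"left": 2}

def sometimes_fails():
    """Fails twice, then succeeds, like a transient rate limit."""
    if failures["left"] > 0:
        failures["left"] -= 1
        raise RuntimeError("rate limited")
    return "ok"

print(retry_with_backoff(sometimes_fails, base_delay=0.01))  # ok
```

In production, catch only retryable exception types; retrying on every `Exception` can mask real bugs.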
Next Steps
Proceed to langchain-core-workflow-a for the chains and prompts workflow.