# addon-langchain-llm
Add-on: LangChain LLM
Use this skill when an existing project needs LangChain primitives for chat, retrieval, or summarization.
## Compatibility
- Works with `architect-python-uv-fastapi-sqlalchemy`, `architect-python-uv-batch`, and `architect-nextjs-bun-app`.
- Can be combined with `addon-rag-ingestion-pipeline`.
- Can be combined with `addon-langgraph-agent` when graph orchestration is required.
- Can be combined with `addon-llm-judge-evals`; when used together, declare `langchain` in `config/skill_manifest.json` so the judge runner can resolve the backend without guessing.
## Inputs
Collect:

- `LLM_PROVIDER`: `openai`|`anthropic`|`ollama`.
- `DEFAULT_MODEL`: provider model id.
- `ENABLE_STREAMING`: `yes`|`no` (default `yes`).
- `USE_RAG`: `yes`|`no`.
- `MAX_INPUT_TOKENS`: default `8000`.
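A minimal stdlib sketch of resolving these inputs at startup (field names and defaults mirror the list above; a `pydantic-settings` model from the dependency step would be the drop-in equivalent):

```python
import os
from dataclasses import dataclass

# Provider allow-list matching the LLM_PROVIDER input.
ALLOWED_PROVIDERS = {"openai", "anthropic", "ollama"}


@dataclass(frozen=True)
class LLMConfig:
    """Resolved add-on inputs; names mirror the environment variables above."""

    llm_provider: str
    default_model: str
    enable_streaming: bool
    use_rag: bool
    max_input_tokens: int

    @classmethod
    def from_env(cls) -> "LLMConfig":
        provider = os.getenv("LLM_PROVIDER", "openai")
        if provider not in ALLOWED_PROVIDERS:
            raise ValueError(f"unsupported LLM_PROVIDER: {provider}")
        return cls(
            llm_provider=provider,
            default_model=os.getenv("DEFAULT_MODEL", ""),
            enable_streaming=os.getenv("ENABLE_STREAMING", "yes") == "yes",
            use_rag=os.getenv("USE_RAG", "no") == "yes",
            max_input_tokens=int(os.getenv("MAX_INPUT_TOKENS", "8000")),
        )
```

Failing fast on an unknown provider here keeps the allow-list guardrail below in one place.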
## Integration Workflow
- Add dependencies:
  - Python:

    ```bash
    uv add langchain langchain-core langchain-community pydantic-settings tiktoken
    ```

  - Next.js:

    ```bash
    bun add langchain zod
    ```

  - Provider packages (as needed):

    ```bash
    uv add langchain-openai langchain-anthropic langchain-ollama
    bun add @langchain/openai @langchain/anthropic @langchain/ollama
    ```

- Add files by architecture:
  - Python API:

    ```text
    src/{{MODULE_NAME}}/llm/provider.py
    src/{{MODULE_NAME}}/llm/chains.py
    src/{{MODULE_NAME}}/api/routes/llm.py
    ```

  - Next.js:

    ```text
    src/lib/llm/langchain.ts
    src/lib/llm/chains.ts
    src/app/api/llm/chat/route.ts
    ```

- Enforce typed request/response contracts:
  - Validate input lengths before chain invocation.
  - Return a stable schema for streaming and non-streaming modes.
- If `USE_RAG=yes`, compose a retriever + prompt + model chain:
  - Keep retrieval source metadata in outputs.
  - Bound document count and token budget.
- If `addon-llm-judge-evals` is also selected:
  - Emit `config/skill_manifest.json` with `addon-langchain-llm` in `addons`.
  - Declare `"judge_backends": ["langchain"]` in `capabilities`.
  - Allow the judge runner to reuse `DEFAULT_MODEL` when `JUDGE_MODEL` is unset.
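The `provider.py` step can be sketched as a small factory over the allow-listed providers. The chat-model classes come from the provider packages in the dependency step; imports are deferred so only the package for the configured provider needs to be installed:

```python
def make_chat_model(provider: str, model: str):
    """Return a LangChain chat model for an allow-listed provider.

    Provider packages are imported lazily so an unused backend's
    package does not have to be installed.
    """
    if provider == "openai":
        from langchain_openai import ChatOpenAI

        return ChatOpenAI(model=model)
    if provider == "anthropic":
        from langchain_anthropic import ChatAnthropic

        return ChatAnthropic(model=model)
    if provider == "ollama":
        from langchain_ollama import ChatOllama

        return ChatOllama(model=model)
    # Allow-list enforcement: reject anything not explicitly supported.
    raise ValueError(f"provider not in allow-list: {provider}")
```

Chains in `chains.py` can then be built against the returned model without caring which backend is active.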
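The manifest declaration for the judge-evals case can be sketched as follows; only the `addons` and `capabilities` entries come from this skill, and any other fields should follow your repository's actual `config/skill_manifest.json` schema:

```python
import json

# Minimal manifest fragment: addon registration plus the judge backend
# declaration the judge runner resolves against.
manifest = {
    "addons": ["addon-langchain-llm"],
    "capabilities": {"judge_backends": ["langchain"]},
}

print(json.dumps(manifest, indent=2))
```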
## Required Template
Chat response shape:

```json
{
  "outputText": "string",
  "model": "string",
  "provider": "string"
}
```
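One way to pin this shape on the Python side is a `TypedDict` mirroring the JSON above (a sketch; the Next.js route would use an equivalent `zod` schema):

```python
from typing import TypedDict


class ChatResponse(TypedDict):
    """Stable chat response contract for streaming and non-streaming modes."""

    outputText: str
    model: str
    provider: str


def make_chat_response(output_text: str, model: str, provider: str) -> ChatResponse:
    # Single constructor so every code path emits the same keys.
    return {"outputText": output_text, "model": model, "provider": provider}
```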
## Guardrails
- Documentation contract for generated code:
  - Python: write module docstrings and docstrings for public classes, methods, and functions.
  - Next.js/TypeScript: write JSDoc for exported components, hooks, utilities, and route handlers.
  - Add concise rationale comments only for non-obvious logic, invariants, or safety constraints.
  - Apply this contract even when using the template snippets; expand templates as needed.
- Enforce provider/model allow-lists.
- Add timeout and retry limits around provider calls.
- Never log secrets or raw auth headers.
- On streaming disconnect, stop upstream generation promptly.
- If judge evals are enabled, keep the judge path on the same provider abstraction instead of bypassing it with ad hoc SDK calls.
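The timeout/retry guardrail can be kept provider-agnostic. The wrapper below is a minimal stdlib sketch, assuming the provider client enforces its own per-request timeout and raises `TimeoutError`:

```python
import time


def call_with_retries(fn, *, attempts: int = 3, base_delay_s: float = 0.5):
    """Invoke a provider call with a bounded retry budget and backoff.

    `fn` is any zero-argument callable wrapping the provider request.
    """
    for attempt in range(attempts):
        try:
            return fn()
        except TimeoutError:
            if attempt == attempts - 1:
                raise  # retry budget exhausted; surface the timeout
            # Exponential backoff between attempts.
            time.sleep(base_delay_s * (2**attempt))
```

Keeping the limits explicit (`attempts`, delay) makes them easy to tune per provider without touching chain code.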
## Validation Checklist
- Confirm generated code includes required docstrings/JSDoc and rationale comments for non-obvious logic.

```bash
uv run ruff check . || true
uv run mypy src || true
bun run lint || true
rg -n "langchain|outputText|provider" src
```

- Manual checks:
  - Typed chat route returns a valid response.
  - Invalid payloads fail with controlled validation errors.
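The "controlled validation errors" check can be exercised against a minimal validator; the `message` field name and the rough 4-characters-per-token estimate are assumptions for illustration (real code would count with `tiktoken` from the dependency list):

```python
def validate_chat_payload(payload: dict, max_input_tokens: int = 8000) -> str:
    """Reject malformed chat payloads before any chain is invoked."""
    msg = payload.get("message")
    if not isinstance(msg, str) or not msg.strip():
        raise ValueError("message must be a non-empty string")
    # Crude token estimate (~4 chars/token); swap in tiktoken for real counts.
    if len(msg) // 4 > max_input_tokens:
        raise ValueError("message exceeds MAX_INPUT_TOKENS budget")
    return msg
```

Raising `ValueError` here lets the route layer map it to a stable 4xx response instead of a provider error.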
## Decision Justification Rule
- Every non-trivial decision must include a concrete justification.
- Capture the alternatives considered and why they were rejected.
- State tradeoffs and residual risks for the chosen option.
- If justification is missing, treat the task as incomplete and surface it as a blocker.