dify-llm-platform
When to Use This Skill
Use this skill when you need to work with Dify, including:
- Building LLM-powered applications with visual workflows
- Creating AI chatbots and conversational agents
- Implementing RAG (Retrieval-Augmented Generation) systems
- Developing AI agents with tool calling capabilities
- Deploying production-ready LLM applications
- Managing knowledge bases and document processing
- Integrating LLMs with existing tools and data sources
- Monitoring and optimizing LLM application performance
Overview
Dify (Do It For You) is an open-source platform for building agentic workflows and LLM applications. It provides a visual interface for designing complex AI processes without extensive coding, supporting integration with hundreds of LLM models and existing tools.
Key Resources:
- https://github.com/langgenius/dify
- https://docs.dify.ai/
- https://cloud.dify.ai (Cloud version with 200 free GPT-4 calls)
- Discord: https://discord.gg/dify
Platform Name: Do It For You - reflecting its purpose of simplifying LLM application development.
Core Capabilities
1. Visual Workflow Engine
- Node-based interface for building AI workflows
- Drag-and-drop components for rapid development
- Real-time preview and testing
- Version control for workflow iterations
2. Model Support
- Hundreds of LLMs: Integration with proprietary and open-source models
- Dozens of providers: OpenAI, Anthropic, Google, Hugging Face, local models, etc.
- Model comparison: Side-by-side testing in Prompt IDE
- Flexible switching: Easy model provider changes
3. RAG Pipeline
- Document processing: PDF, PPT, TXT, Markdown support
- Knowledge bases: Create and manage multiple knowledge sources
- Chunking strategies: Configurable text segmentation
- Retrieval methods: Vector search, keyword search, hybrid retrieval
- Embedding models: Multiple embedding provider support
4. Agent Capabilities
- Function calling: ReAct-based agent architecture
- 50+ built-in tools: Pre-configured integrations
- Custom tools: Add your own API and function tools
- Multi-step reasoning: Complex task decomposition
5. LLMOps
- Application monitoring: Track usage and performance
- Analytics dashboard: User interactions and metrics
- Debugging tools: Step-by-step execution inspection
- Variable tracking: Monitor data flow through workflows
6. Backend-as-a-Service
- REST APIs: Programmatic access to applications
- Web applications: Deploy as standalone web apps
- Embedded components: Integrate into existing applications
- API documentation: Auto-generated API specs
Installation
Prerequisites
System Requirements:
- CPU: ≥2 cores
- RAM: ≥4 GB
- Docker and Docker Compose installed
Option 1: Docker Compose (Recommended)
```bash
# Clone the repository
git clone https://github.com/langgenius/dify.git
cd dify

# Navigate to docker directory
cd docker

# Copy environment example
cp .env.example .env

# Edit .env with your configuration
vim .env

# Start Dify
docker compose up -d

# Access the dashboard
```
Option 2: Dify Cloud
- Sign up at https://cloud.dify.ai
- Get 200 free GPT-4 calls
- No installation required
Option 3: Deploy to Cloud Platforms
**Kubernetes:**
```bash
helm repo add dify https://langgenius.github.io/dify-helm
helm install dify dify/dify
```
**AWS:**
```bash
# Using AWS CDK
cdk deploy DifyStack
```
**Azure/Google Cloud/Alibaba Cloud:**
See deployment guides in official documentation.
Environment Configuration
Key environment variables in `.env`:
```bash
# API Service
API_URL=http://localhost:5001

# Web Service
WEB_URL=http://localhost:3000

# Database
DB_USERNAME=postgres
DB_PASSWORD=your_password
DB_HOST=db
DB_PORT=5432
DB_DATABASE=dify

# Redis
REDIS_HOST=redis
REDIS_PORT=6379
REDIS_PASSWORD=your_redis_password

# Storage (S3, Azure Blob, or local)
STORAGE_TYPE=local
STORAGE_LOCAL_PATH=storage

# Vector Database
VECTOR_STORE=weaviate  # or pgvector, qdrant, milvus

# API Keys for LLM Providers
OPENAI_API_KEY=your_openai_key
ANTHROPIC_API_KEY=your_anthropic_key
```
Quick Start
1. Create Your First Application
After installation:
- Access http://localhost/install
- Complete the initial setup wizard
- Create your first workspace
**Application Types:**
1. **Chatbot** - Conversational AI with memory
2. **Text Generator** - Single-turn text generation
3. **Agent** - Task-oriented AI with tools
4. **Workflow** - Custom multi-step processes
2. Build a Simple Chatbot
Via Web Interface:
- Click "Create Application"
- Select "Chatbot"
- Choose your LLM model (e.g., GPT-4, Claude)
- Configure system prompt
- Test in playground
- Publish as API or web app
Example System Prompt:
```
You are a helpful customer service assistant for TechCorp.
You can help users with:
- Product information
- Order tracking
- Technical support
- Account management
Be friendly, professional, and concise.
```
3. Add Knowledge Base (RAG)
Create Knowledge Base:
- Go to "Knowledge" section
- Click "Create Knowledge Base"
- Upload documents (PDF, TXT, Markdown)
- Configure chunking settings:
  - Chunk size: 500 tokens (recommended)
  - Overlap: 50 tokens
- Select embedding model
- Process documents
Connect to Application:
- Open your chatbot
- Add "Knowledge Retrieval" node
- Select your knowledge base
- Configure retrieval settings:
  - Top K: 3-5 results
  - Score threshold: 0.7
  - Reranking: Enable for better results
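These retrieval settings can also be exercised programmatically. A minimal sketch, assuming the Knowledge API exposes a `POST /v1/datasets/{dataset_id}/retrieve` hit-testing endpoint with a `retrieval_model` payload shape as below — both are assumptions to verify against your Dify version's API reference (note that dataset endpoints use a dataset-scoped API key, not an app key):

```python
import requests


def build_retrieval_payload(query: str, top_k: int = 3,
                            score_threshold: float = 0.7) -> dict:
    """Mirror the UI retrieval settings (Top K, score threshold) as an API payload."""
    return {
        "query": query,
        "retrieval_model": {
            "search_method": "semantic_search",  # or "keyword_search", "hybrid_search"
            "reranking_enable": False,
            "top_k": top_k,
            "score_threshold_enabled": True,
            "score_threshold": score_threshold,
        },
    }


def retrieve(api_url: str, api_key: str, dataset_id: str, query: str) -> dict:
    """Query a knowledge base directly (hypothetical endpoint path; verify in docs)."""
    resp = requests.post(
        f"{api_url}/datasets/{dataset_id}/retrieve",
        headers={"Authorization": f"Bearer {api_key}"},
        json=build_retrieval_payload(query),
    )
    resp.raise_for_status()
    return resp.json()
```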
4. Using the API
Python Example:
```python
import requests

API_KEY = "your_dify_api_key"
API_URL = "http://localhost/v1"

# Send a chat message
response = requests.post(
    f"{API_URL}/chat-messages",
    headers={
        "Authorization": f"Bearer {API_KEY}",
        "Content-Type": "application/json"
    },
    json={
        "inputs": {},
        "query": "What are your business hours?",
        "response_mode": "streaming",
        "conversation_id": "",
        "user": "user-123"
    },
    stream=True  # stream the response incrementally instead of buffering it
)
for line in response.iter_lines():
    if line:
        print(line.decode('utf-8'))
```
**Node.js Example:**
```javascript
const fetch = require('node-fetch');

const API_KEY = 'your_dify_api_key';
const API_URL = 'http://localhost/v1';

async function sendMessage(query) {
    const response = await fetch(`${API_URL}/chat-messages`, {
        method: 'POST',
        headers: {
            'Authorization': `Bearer ${API_KEY}`,
            'Content-Type': 'application/json'
        },
        body: JSON.stringify({
            inputs: {},
            query: query,
            response_mode: 'blocking',
            user: 'user-123'
        })
    });
    return await response.json();
}

sendMessage('Hello!').then(console.log);
```
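In streaming mode the response arrives as server-sent events, each payload line prefixed with `data: `. A sketch of parsing them into answer text — the `event`/`answer` field names are taken from the chat-messages streaming format and should be verified against the API reference:

```python
import json


def parse_sse_line(raw: bytes):
    """Decode one server-sent-event line from a Dify streaming response."""
    line = raw.decode("utf-8").strip()
    if not line.startswith("data: "):
        return None  # skip blank keep-alive lines and comments
    return json.loads(line[len("data: "):])


def collect_answer(lines) -> str:
    """Accumulate the streamed answer chunks from an iterable of raw lines."""
    parts = []
    for raw in lines:
        event = parse_sse_line(raw)
        if event and event.get("event") == "message":
            parts.append(event.get("answer", ""))
    return "".join(parts)
```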
Workflow Development
Visual Workflow Builder
Core Nodes:
- LLM Node - Call language models
- Knowledge Retrieval - Query knowledge bases
- Tool Node - Execute functions and APIs
- Code Node - Run custom Python/JavaScript
- Conditional - Branch based on logic
- Variable Aggregator - Combine data
- HTTP Request - Call external APIs
- Template Transform - Format text
Example: Customer Support Workflow
```yaml
Workflow Steps:
  1. Start → User Message
  2. Knowledge Retrieval → Search documentation
  3. Conditional Branch:
     - If relevant docs found → Use context
     - If not found → Use general knowledge
  4. LLM Node → Generate response
  5. Tool Node → Create ticket (if needed)
  6. End → Return response
```
Implementation:
- Drag "LLM" node to canvas
- Connect to "Knowledge Retrieval" node
- Add "Conditional" node with logic:
```
{{retrieval.score}} > 0.7
```
- Configure LLM prompt:
```
Context: {{retrieval.context}}
User question: {{user.query}}
Provide a helpful answer based on the context.
If the context doesn't contain the answer, say so politely.
```
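Conceptually, the conditional node substitutes workflow variables into the expression and evaluates the resulting comparison. A rough illustration of that idea (this is not Dify's actual template engine; the restricted `eval` is only for demonstrating the semantics):

```python
import re


def evaluate_condition(expression: str, variables: dict) -> bool:
    """Substitute {{node.key}} references with values, then evaluate the comparison."""
    def lookup(match: re.Match) -> str:
        node, key = match.group(1).split(".", 1)
        return repr(variables[node][key])

    substituted = re.sub(r"\{\{([\w.]+)\}\}", lookup, expression)
    # Restricted eval: no builtins, only the substituted comparison remains
    return bool(eval(substituted, {"__builtins__": {}}, {}))
```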
Code Node Examples
Python Code Node:
```python
def main(data: dict) -> dict:
    # Access workflow variables
    user_query = data.get('query', '')
    # Custom processing
    processed = user_query.upper()
    # Return results
    return {
        'result': processed,
        'length': len(user_query)
    }
```
JavaScript Code Node:
```javascript
function main(data) {
    const query = data.query || '';
    // Process data
    const words = query.split(' ');
    return {
        word_count: words.length,
        first_word: words[0]
    };
}
```
Agent Development
Building an Agent with Tools
Built-in Tools:
- Google Search
- Wikipedia
- Weather APIs
- Calculator
- Web Scraper
- Database queries
- File operations
- Email sending
Example: Research Agent
```yaml
Agent Configuration:
  Model: gpt-4
  Reasoning Mode: ReAct
  Tools:
    - Google Search
    - Wikipedia
    - Web Scraper
  System Prompt: |
    You are a research assistant that helps users find
    accurate information from reliable sources.
    Always:
    1. Search for current information
    2. Cite your sources
    3. Verify facts from multiple sources
```
Custom Tool Definition:
```json
{
    "name": "check_inventory",
    "description": "Check product inventory levels",
    "parameters": {
        "type": "object",
        "properties": {
            "product_id": {
                "type": "string",
                "description": "Product identifier"
            }
        },
        "required": ["product_id"]
    }
}
```
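On the backend, a tool schema like the one above pairs with a dispatcher that runs the named function when the agent requests it. A generic sketch — the `check_inventory` implementation and its stock data are hypothetical:

```python
def check_inventory(product_id: str) -> dict:
    # Hypothetical backend lookup; replace with a real inventory query
    stock = {"sku-123": 42}
    return {"product_id": product_id, "in_stock": stock.get(product_id, 0)}


# Registry mapping tool names (as declared in the schema) to implementations
TOOLS = {"check_inventory": check_inventory}


def dispatch_tool_call(name: str, arguments: dict) -> dict:
    """Route a model-issued tool call to its Python implementation."""
    if name not in TOOLS:
        raise ValueError(f"Unknown tool: {name}")
    return TOOLS[name](**arguments)
```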
Knowledge Base Management
Document Processing
Supported Formats:
- Microsoft Word (.docx)
- PowerPoint (.pptx)
- Text files (.txt)
- Markdown (.md)
- HTML
- CSV
Chunking Strategies
1. Fixed Size Chunking:
```yaml
Strategy: fixed_size
Chunk Size: 500 tokens
Overlap: 50 tokens
Use Case: General documents, articles
```
2. Paragraph Chunking:
```yaml
Strategy: paragraph
Min Size: 100 tokens
Max Size: 800 tokens
Use Case: Well-formatted documents
```
3. Semantic Chunking:
```yaml
Strategy: semantic
Model: text-embedding-ada-002
Similarity Threshold: 0.8
Use Case: Complex technical documents
```
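The fixed-size strategy is straightforward to reason about. An illustrative implementation, using whitespace-separated words as a stand-in for tokens (an assumption — a real chunker would use the embedding model's tokenizer, and Dify's own chunker is configured in the UI):

```python
def chunk_fixed_size(text: str, chunk_size: int = 500, overlap: int = 50) -> list:
    """Split text into fixed-size chunks with overlap, using words as a token proxy."""
    if chunk_size <= overlap:
        raise ValueError("chunk_size must exceed overlap")
    words = text.split()
    step = chunk_size - overlap  # each chunk starts `overlap` words before the last ended
    chunks = []
    for start in range(0, len(words), step):
        chunks.append(" ".join(words[start:start + chunk_size]))
        if start + chunk_size >= len(words):
            break  # the final chunk already reaches the end of the text
    return chunks
```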
Retrieval Configuration
Vector Search:
```yaml
Type: vector
Top K: 5
Score Threshold: 0.7
Embedding Model: text-embedding-3-large
```
Hybrid Retrieval:
```yaml
Type: hybrid
Vector Weight: 0.7
Keyword Weight: 0.3
Reranking: enabled
Reranking Model: cross-encoder/ms-marco-MiniLM-L-12-v2
```
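What the hybrid weights compute is a weighted combination of the two retrieval scores. A sketch of the idea, assuming both scores are already normalized to 0-1 (the exact normalization and fusion Dify applies is an implementation detail):

```python
def hybrid_score(vector_score: float, keyword_score: float,
                 vector_weight: float = 0.7, keyword_weight: float = 0.3) -> float:
    """Weighted combination of normalized vector and keyword relevance scores."""
    return vector_weight * vector_score + keyword_weight * keyword_score


def rank_hybrid(candidates: list) -> list:
    """Sort candidate chunks by fused score, highest first."""
    return sorted(
        candidates,
        key=lambda c: hybrid_score(c["vector_score"], c["keyword_score"]),
        reverse=True,
    )
```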
API Integration
REST API Endpoints
Chat Messages:
```bash
POST /v1/chat-messages
Content-Type: application/json
Authorization: Bearer {api_key}

{
  "inputs": {},
  "query": "Your question here",
  "response_mode": "streaming",
  "user": "user-identifier"
}
```
Completion Messages:
```bash
POST /v1/completion-messages
Content-Type: application/json
Authorization: Bearer {api_key}

{
  "inputs": {
    "name": "John",
    "topic": "AI"
  },
  "response_mode": "blocking",
  "user": "user-123"
}
```
Feedback:
```bash
POST /v1/messages/{message_id}/feedbacks
Content-Type: application/json
Authorization: Bearer {api_key}

{
  "rating": "like",
  "user": "user-123"
}
```
SDK Usage
Python SDK:
```python
from dify_client import DifyClient

client = DifyClient(api_key="your_api_key")

# Chat completion
response = client.chat(
    query="What is Dify?",
    user="user-123",
    conversation_id=None
)
print(response.answer)
```
**Streaming Response:**
```python
for chunk in client.chat_stream(
    query="Explain quantum computing",
    user="user-123"
):
    print(chunk.delta, end="", flush=True)
```
Best Practices
Development
- Start Simple: Begin with basic chatbots before complex workflows
- Test Incrementally: Validate each node before connecting
- Use Variables: Properly scope and name workflow variables
- Version Control: Save workflow versions before major changes
- Monitor Costs: Track token usage and API costs
Production
- **Security:**
  - Use environment variables for API keys
  - Implement rate limiting
  - Enable authentication on public endpoints
  - Regular security updates
- **Performance:**
  - Cache frequent queries
  - Optimize knowledge base chunking
  - Use streaming for long responses
  - Implement connection pooling
- **Monitoring:**
```yaml
Metrics to Track:
  - Response latency
  - Token usage
  - Error rates
  - User satisfaction scores
  - Knowledge retrieval accuracy
```
- **Scalability:**
  - Use load balancers
  - Scale workers horizontally
  - Separate databases for different services
  - Implement queue systems for async tasks
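"Cache frequent queries" can be as simple as an in-process cache with per-entry expiry, keyed on the query text. A minimal sketch (a production deployment would more likely use Redis, which Dify already runs):

```python
import time


class TTLCache:
    """Tiny in-process cache with per-entry expiry, for repeated identical queries."""

    def __init__(self, ttl_seconds: float = 3600):
        self.ttl = ttl_seconds
        self._store = {}  # key -> (expires_at, value)

    def get(self, key: str):
        entry = self._store.get(key)
        if entry is None:
            return None
        expires_at, value = entry
        if time.monotonic() > expires_at:
            del self._store[key]  # evict lazily on read
            return None
        return value

    def set(self, key: str, value) -> None:
        self._store[key] = (time.monotonic() + self.ttl, value)
```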
Knowledge Base Optimization
- **Document Quality:**
  - Clean and format documents before upload
  - Remove duplicates and outdated content
  - Use consistent terminology
- **Chunking:**
  - Start with 500-token chunks
  - Adjust based on document structure
  - Use overlap for context preservation
- **Retrieval Tuning:**
  - Monitor retrieval accuracy
  - Adjust score thresholds
  - Use reranking for critical applications
  - A/B test retrieval configurations
Troubleshooting
Common Issues
Issue 1: Docker Containers Won't Start
Symptoms:
- `docker compose up` fails
- Services crash on startup
Solutions:
```bash
# Check logs
docker compose logs

# Verify environment variables
cat .env

# Ensure ports are available
lsof -i :3000
lsof -i :5001

# Reset and restart
docker compose down -v
docker compose up -d
```
Issue 2: Knowledge Retrieval Returns No Results
Symptoms:
- Queries return empty results
- Low relevance scores
Solutions:
- Check document processing status
- Verify embedding model configuration
- Lower score threshold temporarily
- Test with exact document text
- Rebuild knowledge base index
```bash
# Via API - trigger reindex
POST /v1/datasets/{dataset_id}/documents/{document_id}/processing
```
Issue 3: High Latency on API Calls
Symptoms:
- Slow response times
- Timeout errors
Solutions:
- Enable caching:
```yaml
cache:
  enabled: true
  ttl: 3600
```
- Use streaming mode:
```python
response_mode: "streaming"
```
- Optimize LLM settings:
```yaml
max_tokens: 500  # Reduce if possible
temperature: 0.7
top_p: 0.9
```
- Check database performance:
```bash
# Monitor PostgreSQL
docker exec -it dify-db psql -U postgres -c \
  "SELECT pid, query, state, wait_event_type \
   FROM pg_stat_activity WHERE state != 'idle';"
```
Issue 4: Model API Errors
Symptoms:
- "Invalid API key" errors
- Rate limit exceeded
Solutions:
```bash
# Verify API keys in .env
cat .env | grep API_KEY
```
- Check the provider's status page
- Implement retry logic:
```yaml
max_retries: 3
retry_delay: 1000  # milliseconds
```
Debugging Tips
- **Use Workflow Debugger:**
  - Step through each node
  - Inspect variable values
  - Check node execution time
- **Enable Detailed Logging:**
```bash
# In .env
LOG_LEVEL=DEBUG
```
- **Test Components Individually:**
  - Test LLM calls separately
  - Verify knowledge retrieval standalone
  - Test tools in isolation
- **Monitor System Resources:**
```bash
docker stats
```
Advanced Topics
Custom Model Providers
Add custom LLM providers:
```python
# model_providers/custom_provider.py
from dify.core.model_runtime import ModelProvider

class CustomProvider(ModelProvider):
    def get_models(self):
        return [
            {
                'model': 'custom-gpt',
                'label': 'Custom GPT Model',
                'model_type': 'llm'
            }
        ]

    def invoke(self, model, credentials, prompt, **kwargs):
        # Custom API call logic
        response = your_api_call(prompt)
        return response
```
Workflow Optimization
Parallel Execution:
```yaml
Workflow:
  - Node1: LLM Call
  - Parallel:
      - Node2a: Knowledge Retrieval
      - Node2b: External API Call
  - Node3: Combine Results
```
Conditional Caching:
```python
# Cache expensive operations
if cache.exists(query_hash):
    return cache.get(query_hash)
else:
    result = expensive_operation()
    cache.set(query_hash, result, ttl=3600)
    return result
```
Enterprise Deployment
High Availability Setup:
```yaml
Services:
  API:
    replicas: 3
    load_balancer: nginx
  Worker:
    replicas: 5
    queue: redis
  Database:
    primary: postgres-main
    replicas: 2
    backup: daily
```
Monitoring Stack:
```yaml
Monitoring:
  - Prometheus: Metrics collection
  - Grafana: Visualization
  - Loki: Log aggregation
  - Alertmanager: Alerts
```
Resources
Official Documentation
- Main Docs: https://docs.dify.ai/
- GitHub: https://github.com/langgenius/dify
- API Reference: https://docs.dify.ai/api-reference
- Cloud Platform: https://cloud.dify.ai
Community
- Discord: https://discord.gg/dify
- GitHub Discussions: https://github.com/langgenius/dify/discussions
- Issue Tracker: https://github.com/langgenius/dify/issues
Deployment Guides
- Kubernetes: https://github.com/langgenius/dify-helm
- AWS: https://github.com/langgenius/dify-aws
- Azure: https://github.com/langgenius/dify-azure
- Google Cloud: https://github.com/langgenius/dify-gcp
Related Tools
- LangChain: Alternative framework for LLM apps
- LlamaIndex: Data framework for LLMs
- Flowise: Visual LLM workflow builder
- LangFlow: Drag-and-drop LLM pipeline builder
Contributing
Dify is an active open-source project with 8,220+ commits.
Ways to Contribute:
- Report bugs via GitHub Issues
- Submit pull requests for features
- Improve documentation
- Share use cases and examples
- Help with translations
Development Setup:
```bash
# Clone repository
git clone https://github.com/langgenius/dify.git
cd dify

# See deployment guide
```
**Security Issues:**
Email: security@dify.ai
Version Information
Last Updated: 2025-12-15
Skill Version: 1.0.0
Dify GitHub: 8,220+ commits, actively maintained
Note: Dify is rapidly evolving. Always check the official documentation for the latest features and best practices. This skill is based on official documentation and repository information as of December 2025.