# DeepSeek API
Use the DeepSeek API via direct calls to access powerful AI language models for chat, reasoning, and code generation.
Official docs: https://api-docs.deepseek.com/
## When to Use
Use this skill when you need:
- Chat completions with DeepSeek-V3.2 model
- Deep reasoning tasks using the reasoning model
- Code generation and completion (FIM - Fill-in-the-Middle)
- OpenAI-compatible API as a cost-effective alternative
## Prerequisites
- Sign up at DeepSeek Platform and create an account
- Go to API Keys and generate a new API key
- Top up your balance (no free tier, but very affordable pricing)
Then export your API key:

```bash
export DEEPSEEK_API_KEY="your-api-key"
```

## Pricing (per 1M tokens)
| Type | Price |
|---|---|
| Input (cache hit) | $0.028 |
| Input (cache miss) | $0.28 |
| Output | $0.42 |
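The cache discount is substantial: cached input is 10× cheaper than uncached input. A minimal sketch of the cost arithmetic, using the prices from the table above:

```python
# Per-1M-token prices from the pricing table above.
PRICE_PER_M = {
    "input_cache_hit": 0.028,
    "input_cache_miss": 0.28,
    "output": 0.42,
}

def estimate_cost(cache_hit_tokens: int, cache_miss_tokens: int, output_tokens: int) -> float:
    """Return the estimated cost in USD for one request."""
    return (
        cache_hit_tokens * PRICE_PER_M["input_cache_hit"]
        + cache_miss_tokens * PRICE_PER_M["input_cache_miss"]
        + output_tokens * PRICE_PER_M["output"]
    ) / 1_000_000

# 100K cached prompt tokens + 2K uncached tokens + 1K output:
print(round(estimate_cost(100_000, 2_000, 1_000), 6))  # → 0.00378
```

The same 100K-token prompt costs $0.028 when fully cached versus $0.28 uncached, which is why stable prompt prefixes pay off.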
## Rate Limits
DeepSeek does not enforce strict rate limits. They will try to serve every request. During high traffic, connections are maintained with keep-alive signals.
**Important:** When using `$VAR` in a command that pipes to another command, wrap the command containing `$VAR` in `bash -c '...'`. Due to a Claude Code bug, environment variables are silently cleared when pipes are used directly.

```bash
bash -c 'curl -s "https://api.example.com" -H "Authorization: Bearer $API_KEY"'
```
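Even though DeepSeek tries to serve every request, long-held connections can still drop under load, so scripts benefit from a retry with exponential backoff. A generic sketch (not part of the DeepSeek API) that wraps any callable, such as a function that runs the curl request:

```python
import time

def with_retries(fn, attempts: int = 3, base_delay: float = 1.0):
    """Call fn(); on exception, retry with exponential backoff (1s, 2s, 4s, ...)."""
    for attempt in range(attempts):
        try:
            return fn()
        except Exception:
            if attempt == attempts - 1:
                raise  # exhausted: surface the last error
            time.sleep(base_delay * (2 ** attempt))

# Demo with a callable that fails twice, then succeeds:
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient failure")
    return "ok"

print(with_retries(flaky, attempts=5, base_delay=0.0))  # → ok
```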
## How to Use
All examples below assume you have set `DEEPSEEK_API_KEY`.

The base URL for the DeepSeek API is:

- `https://api.deepseek.com` (recommended)
- `https://api.deepseek.com/v1` (OpenAI-compatible)
### 1. Basic Chat Completion
Send a simple chat message:
Write to `/tmp/deepseek_request.json`:

```json
{
  "model": "deepseek-chat",
  "messages": [
    {
      "role": "system",
      "content": "You are a helpful assistant."
    },
    {
      "role": "user",
      "content": "Hello, who are you?"
    }
  ]
}
```

Then run:

```bash
bash -c 'curl -s "https://api.deepseek.com/chat/completions" -X POST -H "Content-Type: application/json" -H "Authorization: Bearer ${DEEPSEEK_API_KEY}" -d @/tmp/deepseek_request.json'
```

Available models:

- `deepseek-chat`: DeepSeek-V3.2 non-thinking mode (128K context, 8K max output)
- `deepseek-reasoner`: DeepSeek-V3.2 thinking mode (128K context, 64K max output)
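The same call can be made from Python with only the standard library. A minimal sketch that mirrors the curl command above (no third-party SDK); it only sends the request if `DEEPSEEK_API_KEY` is set:

```python
import json
import os
import urllib.request

API_URL = "https://api.deepseek.com/chat/completions"

def build_request(payload: dict, api_key: str) -> urllib.request.Request:
    """Build the POST request with the same headers the curl example sends."""
    return urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
        method="POST",
    )

payload = {
    "model": "deepseek-chat",
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Hello, who are you?"},
    ],
}

if __name__ == "__main__" and os.environ.get("DEEPSEEK_API_KEY"):
    req = build_request(payload, os.environ["DEEPSEEK_API_KEY"])
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    print(body["choices"][0]["message"]["content"])
```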
### 2. Chat with Temperature Control
Adjust creativity/randomness with temperature:
Write to `/tmp/deepseek_request.json`:

```json
{
  "model": "deepseek-chat",
  "messages": [
    {
      "role": "user",
      "content": "Write a short poem about coding."
    }
  ],
  "temperature": 0.7,
  "max_tokens": 200
}
```

Then run:

```bash
bash -c 'curl -s "https://api.deepseek.com/chat/completions" -X POST -H "Content-Type: application/json" -H "Authorization: Bearer ${DEEPSEEK_API_KEY}" -d @/tmp/deepseek_request.json' | jq -r '.choices[0].message.content'
```

Parameters:

- `temperature` (0-2, default 1): Higher = more creative, lower = more deterministic
- `top_p` (0-1, default 1): Nucleus sampling threshold
- `max_tokens`: Maximum tokens to generate
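To build intuition for what `top_p` does, here is a tiny conceptual illustration of nucleus sampling (not DeepSeek's actual implementation): tokens are ranked by probability, and only the smallest set whose cumulative probability reaches `top_p` stays eligible for sampling.

```python
def nucleus_filter(probs: dict, top_p: float) -> list:
    """Return the tokens kept by nucleus (top-p) sampling."""
    ranked = sorted(probs.items(), key=lambda kv: kv[1], reverse=True)
    kept, cumulative = [], 0.0
    for token, p in ranked:
        kept.append(token)
        cumulative += p
        if cumulative >= top_p:
            break  # the nucleus now covers top_p of the mass
    return kept

probs = {"the": 0.5, "a": 0.25, "an": 0.125, "qux": 0.125}
print(nucleus_filter(probs, 0.75))  # → ['the', 'a']
print(nucleus_filter(probs, 1.0))   # keeps every token
```

Lowering `top_p` trims the long tail of unlikely tokens, which is why it behaves like a second creativity knob alongside `temperature`.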
### 3. Streaming Response
Get real-time token-by-token output:
Write to `/tmp/deepseek_request.json`:

```json
{
  "model": "deepseek-chat",
  "messages": [
    {
      "role": "user",
      "content": "Explain quantum computing in simple terms."
    }
  ],
  "stream": true
}
```

Then run:

```bash
bash -c 'curl -s "https://api.deepseek.com/chat/completions" -X POST -H "Content-Type: application/json" -H "Authorization: Bearer ${DEEPSEEK_API_KEY}" -d @/tmp/deepseek_request.json'
```

Streaming returns Server-Sent Events (SSE) with delta chunks, ending with `data: [DONE]`.
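The SSE chunks can be reassembled with a few lines of standard-library Python. A sketch that parses the `data:` lines and concatenates the deltas, shown here on a canned sample rather than a live stream:

```python
import json

def collect_stream(lines) -> str:
    """Accumulate delta content from SSE 'data:' lines until [DONE]."""
    text = []
    for line in lines:
        if not line.startswith("data: "):
            continue  # skip blank lines and keep-alives
        data = line[len("data: "):]
        if data == "[DONE]":
            break
        chunk = json.loads(data)
        delta = chunk["choices"][0]["delta"]
        text.append(delta.get("content") or "")
    return "".join(text)

# Canned sample in the shape of the streaming chunks:
sample = [
    'data: {"choices": [{"delta": {"role": "assistant", "content": "Hel"}}]}',
    'data: {"choices": [{"delta": {"content": "lo!"}}]}',
    "data: [DONE]",
]
print(collect_stream(sample))  # → Hello!
```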
### 4. Deep Reasoning (Thinking Mode)
Use the reasoner model for complex reasoning tasks:
Write to `/tmp/deepseek_request.json`:

```json
{
  "model": "deepseek-reasoner",
  "messages": [
    {
      "role": "user",
      "content": "What is 15 * 17? Show your work."
    }
  ]
}
```

Then run:

```bash
bash -c 'curl -s "https://api.deepseek.com/chat/completions" -X POST -H "Content-Type: application/json" -H "Authorization: Bearer ${DEEPSEEK_API_KEY}" -d @/tmp/deepseek_request.json' | jq -r '.choices[0].message.content'
```

The reasoner model excels at math, logic, and multi-step problems.
### 5. JSON Output Mode
Force the model to return valid JSON:
Write to `/tmp/deepseek_request.json`:

```json
{
  "model": "deepseek-chat",
  "messages": [
    {
      "role": "system",
      "content": "You are a JSON generator. Always respond with valid JSON."
    },
    {
      "role": "user",
      "content": "List 3 programming languages with their main use cases."
    }
  ],
  "response_format": {
    "type": "json_object"
  }
}
```

Then run:

```bash
bash -c 'curl -s "https://api.deepseek.com/chat/completions" -X POST -H "Content-Type: application/json" -H "Authorization: Bearer ${DEEPSEEK_API_KEY}" -d @/tmp/deepseek_request.json' | jq -r '.choices[0].message.content'
```
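Even in JSON mode it is worth validating the output before using it downstream. A small sketch that parses the returned message content and falls back cleanly, shown on a canned response in the standard chat-completion shape:

```python
import json

def parse_json_reply(response: dict):
    """Extract and parse the JSON-mode message content; return None if invalid."""
    content = response["choices"][0]["message"]["content"]
    try:
        return json.loads(content)
    except json.JSONDecodeError:
        return None

# Canned response in the chat-completion envelope:
response = {
    "choices": [
        {"message": {"role": "assistant",
                     "content": '{"languages": ["Python", "Rust", "Go"]}'}}
    ]
}
result = parse_json_reply(response)
print(result["languages"])  # → ['Python', 'Rust', 'Go']
```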
### 6. Multi-turn Conversation
Continue a conversation with message history:
Write to `/tmp/deepseek_request.json`:

```json
{
  "model": "deepseek-chat",
  "messages": [
    {
      "role": "user",
      "content": "My name is Alice."
    },
    {
      "role": "assistant",
      "content": "Nice to meet you, Alice."
    },
    {
      "role": "user",
      "content": "What is my name?"
    }
  ]
}
```

Then run:

```bash
bash -c 'curl -s "https://api.deepseek.com/chat/completions" -X POST -H "Content-Type: application/json" -H "Authorization: Bearer ${DEEPSEEK_API_KEY}" -d @/tmp/deepseek_request.json' | jq -r '.choices[0].message.content'
```
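The API is stateless, so the client must resend the full history on every turn. A tiny helper (a generic sketch, not part of any SDK) that accumulates the message list in the shape shown above:

```python
class Conversation:
    """Accumulates the message history the API expects on every call."""

    def __init__(self, system=None):
        self.messages = []
        if system:
            self.messages.append({"role": "system", "content": system})

    def add_user(self, content: str) -> list:
        self.messages.append({"role": "user", "content": content})
        return self.messages  # send this list as the "messages" field

    def add_assistant(self, content: str) -> None:
        """Record the model's reply so the next turn has full context."""
        self.messages.append({"role": "assistant", "content": content})

convo = Conversation()
convo.add_user("My name is Alice.")
convo.add_assistant("Nice to meet you, Alice.")
payload = {"model": "deepseek-chat", "messages": convo.add_user("What is my name?")}
print(len(payload["messages"]))  # → 3
```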
### 7. Code Completion (FIM)
Use Fill-in-the-Middle for code completion (beta endpoint):
Write to `/tmp/deepseek_request.json`:

```json
{
  "model": "deepseek-chat",
  "prompt": "def add(a, b):\n ",
  "max_tokens": 20
}
```

Then run:

```bash
bash -c 'curl -s "https://api.deepseek.com/beta/completions" -X POST -H "Content-Type: application/json" -H "Authorization: Bearer ${DEEPSEEK_API_KEY}" -d @/tmp/deepseek_request.json' | jq -r '.choices[0].text'
```

FIM is useful for:
- Code completion in editors
- Filling gaps in documents
- Context-aware text generation
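FIM is most useful with both a prefix and a suffix around the gap. The beta completions endpoint is documented to accept a `suffix` field alongside `prompt`, but treat the exact field name as an assumption to verify against the current API reference. A sketch that builds such a payload:

```python
import json

# Hypothetical FIM payload filling the gap between a prefix and a suffix.
# The "suffix" field is assumed from the FIM docs; verify against the
# current DeepSeek API reference before relying on it.
fim_payload = {
    "model": "deepseek-chat",
    "prompt": "def fib(n):\n    ",   # code before the gap
    "suffix": "\n    return a",       # code after the gap
    "max_tokens": 64,
}

with open("/tmp/deepseek_request.json", "w") as f:
    json.dump(fim_payload, f)

print(sorted(fim_payload))  # → ['max_tokens', 'model', 'prompt', 'suffix']
```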
### 8. Function Calling (Tools)
Define functions the model can call:
Write to `/tmp/deepseek_request.json`:

```json
{
  "model": "deepseek-chat",
  "messages": [
    {
      "role": "user",
      "content": "What is the weather in Tokyo?"
    }
  ],
  "tools": [
    {
      "type": "function",
      "function": {
        "name": "get_weather",
        "description": "Get the current weather for a location",
        "parameters": {
          "type": "object",
          "properties": {
            "location": {
              "type": "string",
              "description": "The city name"
            }
          },
          "required": ["location"]
        }
      }
    }
  ]
}
```

Then run:

```bash
bash -c 'curl -s "https://api.deepseek.com/chat/completions" -X POST -H "Content-Type: application/json" -H "Authorization: Bearer ${DEEPSEEK_API_KEY}" -d @/tmp/deepseek_request.json'
```

The model will return a `tool_calls` array when it wants to use a function.
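After the model returns `tool_calls`, the client executes each function locally and sends the results back as `tool` messages. A dispatch sketch over a canned response; the `get_weather` implementation here is a stand-in:

```python
import json

def get_weather(location: str) -> str:
    """Stand-in implementation; a real client would call a weather API."""
    return f"Sunny in {location}"

TOOLS = {"get_weather": get_weather}

def run_tool_calls(message: dict) -> list:
    """Execute each tool call and build the follow-up 'tool' messages."""
    results = []
    for call in message.get("tool_calls", []):
        fn = TOOLS[call["function"]["name"]]
        args = json.loads(call["function"]["arguments"])  # arguments arrive as a JSON string
        results.append({
            "role": "tool",
            "tool_call_id": call["id"],
            "content": fn(**args),
        })
    return results

# Canned assistant message in the tool_calls shape:
message = {
    "role": "assistant",
    "tool_calls": [
        {"id": "call_1", "type": "function",
         "function": {"name": "get_weather", "arguments": '{"location": "Tokyo"}'}}
    ],
}
print(run_tool_calls(message)[0]["content"])  # → Sunny in Tokyo
```

The resulting `tool` messages are appended to the history and sent back so the model can produce its final answer.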
### 9. Check Token Usage
Extract usage information from response:
Write to `/tmp/deepseek_request.json`:

```json
{
  "model": "deepseek-chat",
  "messages": [
    {
      "role": "user",
      "content": "Hello"
    }
  ]
}
```

Then run:

```bash
bash -c 'curl -s "https://api.deepseek.com/chat/completions" -X POST -H "Content-Type: application/json" -H "Authorization: Bearer ${DEEPSEEK_API_KEY}" -d @/tmp/deepseek_request.json' | jq '.usage'
```

Response includes:

- `prompt_tokens`: Input token count
- `completion_tokens`: Output token count
- `total_tokens`: Sum of both
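Combining the usage fields with the pricing table gives a per-request cost estimate. A sketch over a canned `usage` object; whether cached input is broken out separately depends on the API version, so this conservatively treats all input as cache misses:

```python
def usage_cost(usage: dict) -> float:
    """Estimate USD cost from a usage object, treating all input as cache misses."""
    input_cost = usage["prompt_tokens"] * 0.28 / 1_000_000
    output_cost = usage["completion_tokens"] * 0.42 / 1_000_000
    return input_cost + output_cost

usage = {"prompt_tokens": 10, "completion_tokens": 37, "total_tokens": 47}
assert usage["total_tokens"] == usage["prompt_tokens"] + usage["completion_tokens"]
print(f"${usage_cost(usage):.8f}")  # → $0.00001834
```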
## OpenAI SDK Compatibility
DeepSeek is fully compatible with OpenAI SDKs. Just change the base URL:
Python:

```python
from openai import OpenAI

client = OpenAI(api_key="your-deepseek-key", base_url="https://api.deepseek.com")
```

Node.js:

```javascript
import OpenAI from 'openai';

const client = new OpenAI({ apiKey: 'your-deepseek-key', baseURL: 'https://api.deepseek.com' });
```
## Tips: Complex JSON Payloads
For complex requests with nested JSON (like function calling), use a temp file to avoid shell escaping issues:
Write to `/tmp/deepseek_request.json`:

```json
{
  "model": "deepseek-chat",
  "messages": [{"role": "user", "content": "What is the weather in Tokyo?"}],
  "tools": [{
    "type": "function",
    "function": {
      "name": "get_weather",
      "description": "Get current weather",
      "parameters": {
        "type": "object",
        "properties": {"location": {"type": "string"}},
        "required": ["location"]
      }
    }
  }]
}
```

Then run:

```bash
bash -c 'curl -s "https://api.deepseek.com/chat/completions" -X POST -H "Content-Type: application/json" -H "Authorization: Bearer ${DEEPSEEK_API_KEY}" -d @/tmp/deepseek_request.json'
```
## Guidelines
- **Choose the right model**: Use `deepseek-chat` for general tasks, `deepseek-reasoner` for complex reasoning
- **Use caching**: Repeated prompts with the same prefix benefit from cache pricing ($0.028 vs $0.28)
- **Set max_tokens**: Prevent runaway generation by setting appropriate limits
- **Use streaming for long responses**: Better UX for real-time applications
- **JSON mode requires a system prompt**: When using `response_format`, include JSON instructions in the system message
- **FIM uses the beta endpoint**: The code completion endpoint is at `api.deepseek.com/beta`
- **Complex JSON**: Use temp files with `-d @filename` to avoid shell quoting issues