pinme-llm
# PinMe Worker OpenRouter API Integration

This guide shows how to call the PinMe platform's OpenRouter proxy APIs from a PinMe Worker (TypeScript). Workers use the PinMe project API key; they never hold the real OpenRouter API key.

## Environment Variables
The following environment variables are automatically injected when the Worker is created — no manual configuration needed:
```typescript
// backend/src/worker.ts
export interface Env {
  DB: D1Database;
  API_KEY: string;      // Project API Key from create_worker
  PROJECT_NAME: string; // Actual project_name from create_worker; must match API_KEY
  BASE_URL?: string;    // Optional override for PinMe API base URL, defaults to https://pinme.cloud
}
```

`API_KEY` authenticates the Worker to PinMe. `PROJECT_NAME` is required for `chat/completions` and must belong to the same project as the `API_KEY`. When `BASE_URL` is not set, `https://pinme.cloud` is used.
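Every chat call below builds the same URL with an encoded `project_name`; a small helper keeps that logic in one place. `chatCompletionsUrl` and `EnvLike` are illustrative names for this sketch, not part of the PinMe API:

```typescript
// Illustrative helper (not part of the PinMe API): build the
// chat/completions URL with the project_name query parameter encoded.
interface EnvLike {
  PROJECT_NAME: string;
  BASE_URL?: string;
}

function chatCompletionsUrl(env: EnvLike): string {
  const baseUrl = env.BASE_URL ?? 'https://pinme.cloud';
  // encodeURIComponent handles spaces and non-ASCII project names.
  return `${baseUrl}/api/v1/chat/completions?project_name=${encodeURIComponent(env.PROJECT_NAME)}`;
}
```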
## Models API

- Endpoint: `GET {BASE_URL}/api/v1/models`
- Authentication: `X-API-Key` header (using `env.API_KEY`)
- Request Body: none

Use this when the Worker needs to list available OpenRouter models. The response body, status, and headers are passed through from OpenRouter's `/models` endpoint.

```typescript
async function listModels(env: Env): Promise<unknown> {
  const baseUrl = env.BASE_URL ?? 'https://pinme.cloud';
  const resp = await fetch(`${baseUrl}/api/v1/models`, {
    headers: { 'X-API-Key': env.API_KEY },
  });
  if (!resp.ok) {
    throw new Error(await extractPinmeOpenRouterError(resp));
  }
  return await resp.json();
}
```
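If the Worker only needs model IDs from the passthrough response, the `unknown` result can be narrowed. The `data: [{ id }]` shape below is OpenRouter's list format as we understand it; treat it as an assumption and validate before relying on it:

```typescript
// Assumed shape of the passthrough /models response: an object with a
// `data` array of model entries, each carrying an `id`.
interface ModelsResponse {
  data: Array<{ id: string }>;
}

// Extract just the model IDs, e.g. to populate a model picker.
function modelIds(resp: ModelsResponse): string[] {
  return resp.data.map((m) => m.id);
}
```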
## Chat Completions API
- Endpoint: `POST {BASE_URL}/api/v1/chat/completions?project_name={project_name}`
- Authentication: `X-API-Key` header (using `env.API_KEY`)
- Request Body: OpenRouter chat/completions format, passed through as-is after a 1MB size check
- Streaming: Supports SSE (`stream: true`)
- Web Search: Supports the OpenRouter `openrouter:web_search` server tool via the `tools` array

### Request Format
```json
{
  "model": "openai/gpt-4o-mini",
  "messages": [
    { "role": "system", "content": "You are a helpful assistant." },
    { "role": "user", "content": "Hello!" }
  ],
  "stream": true
}
```

Use the `project_name` returned by `create_worker` from `env.PROJECT_NAME`; always URL-encode it in the query string. For available models, call `GET /api/v1/models` or refer to OpenRouter model IDs.
## OpenRouter Web Search
PinMe does not provide a raw search endpoint. To search the web, pass OpenRouter's `openrouter:web_search` server tool to `chat/completions`; the model decides whether and when to search.

Always set `max_results` and `max_total_results` to keep search volume and cost bounded.

```typescript
async function searchWithLLM(env: Env, query: string): Promise<string> {
  const baseUrl = env.BASE_URL ?? 'https://pinme.cloud';
  const resp = await fetch(
    `${baseUrl}/api/v1/chat/completions?project_name=${encodeURIComponent(env.PROJECT_NAME)}`,
    {
      method: 'POST',
      headers: {
        'Content-Type': 'application/json',
        'X-API-Key': env.API_KEY,
      },
      body: JSON.stringify({
        model: 'openai/gpt-5.2',
        messages: [{ role: 'user', content: query }],
        tools: [
          {
            type: 'openrouter:web_search',
            parameters: {
              engine: 'auto',
              max_results: 5,
              max_total_results: 10,
            },
          },
        ],
      }),
    },
  );
  if (!resp.ok) {
    throw new Error(await extractPinmeOpenRouterError(resp));
  }
  const data = await resp.json() as { choices: Array<{ message?: { content?: string } }> };
  return data.choices[0]?.message?.content ?? '';
}
```
## Response Format
Successful requests return OpenRouter's raw response body.
**Non-streaming Success (200):**

```json
{
  "id": "chatcmpl-...",
  "choices": [{ "message": { "role": "assistant", "content": "Hello!" }, "finish_reason": "stop" }],
  "usage": { "prompt_tokens": 10, "completion_tokens": 5, "total_tokens": 15 }
}
```

**Streaming Success (200):** SSE format

```
data: {"choices":[{"delta":{"content":"Hello"}}]}
data: {"choices":[{"delta":{"content":" there"}}]}
data: [DONE]
```

**Errors:**
| HTTP Status | Meaning | data.error Example |
|---|---|---|
| 401 | API Key missing, invalid, or mismatched with project_name | |
| 400 | project_name missing or OpenRouter key not configured | |
| 403 | LLM balance insufficient or disabled | |
| 413 | Request body exceeds 1MB | |
| 500 | Proxy failed before upstream request | |
| 502 | LLM service unavailable | |
If OpenRouter receives the request and returns a 4xx/5xx, PinMe passes through OpenRouter's status, headers, and response body instead of wrapping it.
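Because a 413 only comes back after the body has been uploaded, a Worker can pre-check the encoded size before proxying. The 1 MiB constant below is our reading of the limit in the table above; the authoritative check remains server-side:

```typescript
// Sketch: pre-check a request body against PinMe's 1MB limit before
// proxying. MAX_BODY_BYTES (1 MiB) is our assumption about the threshold.
const MAX_BODY_BYTES = 1024 * 1024;

function exceedsPinmeLimit(body: string): boolean {
  // Count encoded bytes, not characters: multi-byte UTF-8 counts fully.
  return new TextEncoder().encode(body).length > MAX_BODY_BYTES;
}
```

A route can then fail fast, e.g. `if (exceedsPinmeLimit(body)) return json({ error: 'Request body exceeds 1MB' }, 413);`.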
## Worker Example Code — Non-streaming
```typescript
async function callLLM(
  env: Env,
  messages: Array<{ role: string; content: string }>,
  model = 'openai/gpt-4o-mini',
): Promise<{ content: string; error?: string }> {
  const baseUrl = env.BASE_URL ?? 'https://pinme.cloud';
  const resp = await fetch(
    `${baseUrl}/api/v1/chat/completions?project_name=${encodeURIComponent(env.PROJECT_NAME)}`,
    {
      method: 'POST',
      headers: {
        'Content-Type': 'application/json',
        'X-API-Key': env.API_KEY,
      },
      body: JSON.stringify({ model, messages }),
    },
  );
  if (!resp.ok) {
    return { content: '', error: await extractPinmeOpenRouterError(resp) };
  }
  const data = await resp.json() as { choices: Array<{ message: { content: string } }> };
  return { content: data.choices[0]?.message?.content || '' };
}

// Usage in routes (json() is the Worker's own JSON-response helper)
async function handleChat(request: Request, env: Env): Promise<Response> {
  const { question } = await request.json() as { question: string };
  const result = await callLLM(env, [
    { role: 'system', content: 'You are a helpful assistant.' },
    { role: 'user', content: question },
  ]);
  if (result.error) {
    return json({ error: result.error }, 502);
  }
  return json({ answer: result.content });
}
```
## Worker Example Code — Streaming (SSE Passthrough)
```typescript
async function handleChatStream(request: Request, env: Env): Promise<Response> {
  const body = await request.text();
  const baseUrl = env.BASE_URL ?? 'https://pinme.cloud';
  // Ensure stream=true in the request
  const parsed = JSON.parse(body) as Record<string, unknown>;
  parsed.stream = true;
  const resp = await fetch(
    `${baseUrl}/api/v1/chat/completions?project_name=${encodeURIComponent(env.PROJECT_NAME)}`,
    {
      method: 'POST',
      headers: {
        'Content-Type': 'application/json',
        'X-API-Key': env.API_KEY,
      },
      body: JSON.stringify(parsed),
    },
  );
  if (!resp.ok) {
    return json({ error: await extractPinmeOpenRouterError(resp) }, resp.status);
  }
  // Pass through SSE stream directly (CORS_HEADERS is the Worker's own constant)
  return new Response(resp.body, {
    status: 200,
    headers: {
      'Content-Type': 'text/event-stream',
      'Cache-Control': 'no-cache',
      'Connection': 'keep-alive',
      ...CORS_HEADERS,
    },
  });
}
```
## Frontend SSE Stream Consumer Example
```typescript
// getApiUrl() is the frontend's own helper for resolving the Worker base URL.
async function streamChat(question: string, onChunk: (text: string) => void): Promise<void> {
  const resp = await fetch(getApiUrl('/api/chat/stream'), {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ question }),
  });
  const reader = resp.body!.getReader();
  const decoder = new TextDecoder();
  let buffer = '';
  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    buffer += decoder.decode(value, { stream: true });
    const lines = buffer.split('\n');
    buffer = lines.pop()!; // Keep incomplete line
    for (const line of lines) {
      if (!line.startsWith('data: ')) continue;
      const payload = line.slice(6);
      if (payload === '[DONE]') return;
      const chunk = JSON.parse(payload) as { choices: Array<{ delta: { content?: string } }> };
      const content = chunk.choices[0]?.delta?.content;
      if (content) onChunk(content);
    }
  }
}
```
## Error Handling Pattern
For `/api/v1/models` and `/api/v1/chat/completions`, successful responses are raw OpenRouter responses. Proxy failures before the OpenRouter request use PinMe's wrapped error format:

```typescript
interface PinmeResponse<T = unknown> {
  code: number; // 200=success, other=failure
  msg: string;  // "ok" | "error" | "invalid params"
  data?: T;     // Business data on success, may contain { error: string } on failure
}
```
## Recommended Error Extractor
```typescript
async function extractPinmeOpenRouterError(resp: Response): Promise<string> {
  const fallback = `HTTP ${resp.status}`;
  try {
    const body = await resp.clone().json() as PinmeResponse | { error?: { message?: string } } | { error?: string };
    if ('data' in body && body.data && typeof body.data === 'object' && 'error' in body.data) {
      return String((body.data as { error: unknown }).error);
    }
    if ('msg' in body && typeof body.msg === 'string' && body.msg) {
      return body.msg;
    }
    if ('error' in body) {
      const error = body.error;
      if (typeof error === 'string') return error;
      if (error && typeof error === 'object' && 'message' in error) {
        return String((error as { message: unknown }).message);
      }
    }
  } catch {
    try {
      const text = await resp.text();
      if (text) return text;
    } catch {
      // Ignore and return fallback below.
    }
  }
  return fallback;
}
```
## Optional JSON Helper
Use this helper for non-streaming `POST` calls. It returns the raw OpenRouter JSON on success.

```typescript
async function callOpenRouterJSON<T>(url: string, apiKey: string, body: unknown): Promise<{ data?: T; error?: string }> {
  let resp: Response;
  try {
    resp = await fetch(url, {
      method: 'POST',
      headers: { 'Content-Type': 'application/json', 'X-API-Key': apiKey },
      body: JSON.stringify(body),
    });
  } catch {
    return { error: 'Network error' };
  }
  if (!resp.ok) {
    return { error: await extractPinmeOpenRouterError(resp) };
  }
  return { data: await resp.json() as T };
}
```
## Usage Example
```typescript
const baseUrl = env.BASE_URL ?? 'https://pinme.cloud';
// Call LLM (non-streaming)
const llmResult = await callOpenRouterJSON<{ choices: Array<{ message: { content: string } }> }>(
  `${baseUrl}/api/v1/chat/completions?project_name=${encodeURIComponent(env.PROJECT_NAME)}`,
  env.API_KEY,
  { model: 'openai/gpt-4o-mini', messages: [{ role: 'user', content: 'Hi' }] },
);
if (llmResult.error) return json({ error: llmResult.error }, 502);
```