pinme-llm

PinMe Worker OpenRouter API Integration

This guide shows how to call the PinMe platform's OpenRouter proxy APIs from a PinMe Worker (TypeScript). Workers use the PinMe project API key; they never hold the real OpenRouter API key.

Environment Variables

The following environment variables are injected automatically when the Worker is created; no manual configuration is needed:
typescript
// backend/src/worker.ts
export interface Env {
  DB: D1Database;
  API_KEY: string;       // Project API Key from create_worker
  PROJECT_NAME: string;  // Actual project_name from create_worker; must match API_KEY
  BASE_URL?: string;     // Optional override for PinMe API base URL, defaults to https://pinme.cloud
}
API_KEY authenticates the Worker to PinMe. PROJECT_NAME is required for chat/completions and must belong to the same project as API_KEY. When BASE_URL is not set, use https://pinme.cloud.
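The BASE_URL fallback appears in every call below, so it can be centralized in one small helper that all call sites share. A minimal sketch; the name resolveBaseUrl is illustrative, not part of the PinMe API:
typescript
// Resolve the PinMe API base URL, falling back to the public default
// when BASE_URL is not injected. The helper name is illustrative.
function resolveBaseUrl(env: Env): string {
  return env.BASE_URL ?? 'https://pinme.cloud';
}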

Models API

Endpoint: GET {BASE_URL}/api/v1/models
Authentication: X-API-Key header (using env.API_KEY)
Request Body: none
Use this when the Worker needs to list available OpenRouter models. The response body, status, and headers are passed through from OpenRouter /models.
typescript
async function listModels(env: Env): Promise<unknown> {
  const baseUrl = env.BASE_URL ?? 'https://pinme.cloud';
  const resp = await fetch(`${baseUrl}/api/v1/models`, {
    headers: { 'X-API-Key': env.API_KEY },
  });

  if (!resp.ok) {
    throw new Error(await extractPinmeOpenRouterError(resp));
  }

  return await resp.json();
}
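For example, to pull just the model IDs out of the passthrough response, a sketch that assumes OpenRouter's usual { data: [...] } list shape:
typescript
// Illustrative only: assumes the OpenRouter /models response shape
// { data: [{ id: string, ... }] }.
const models = await listModels(env) as { data: Array<{ id: string }> };
const modelIds = models.data.map((m) => m.id);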

Chat Completions API

Endpoint: POST {BASE_URL}/api/v1/chat/completions?project_name={project_name}
Authentication: X-API-Key header (using env.API_KEY)
Request Body: OpenRouter chat/completions format, passed through as-is after a 1MB size check
Streaming: supports SSE (stream: true)
Web Search: supports OpenRouter's openrouter:web_search server tool via the tools array
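Because bodies over 1MB are rejected with 413 (see Errors below), a Worker that forwards user-supplied payloads may want to check the size before sending. A minimal sketch; the MAX_BODY_BYTES constant is ours, mirroring the documented cap:
typescript
// Reject oversized payloads locally instead of round-tripping to the proxy.
// MAX_BODY_BYTES mirrors the documented 1MB cap; the constant name is ours.
const MAX_BODY_BYTES = 1024 * 1024;

function assertBodyWithinLimit(body: string): void {
  const size = new TextEncoder().encode(body).byteLength;
  if (size > MAX_BODY_BYTES) {
    throw new Error(`Request body too large (${size} bytes, max ${MAX_BODY_BYTES})`);
  }
}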

Request Format

json
{
  "model": "openai/gpt-4o-mini",
  "messages": [
    { "role": "system", "content": "You are a helpful assistant." },
    { "role": "user", "content": "Hello!" }
  ],
  "stream": true
}
Use env.PROJECT_NAME from create_worker; always URL-encode it in the query string. For available models, call GET /api/v1/models or refer to OpenRouter model IDs.
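Since every chat call builds the same URL, the encoding can also live in one helper (chatCompletionsUrl is an illustrative name, not part of the PinMe API):
typescript
// Build the chat/completions URL with project_name safely encoded.
// The helper name is illustrative.
function chatCompletionsUrl(env: Env): string {
  const baseUrl = env.BASE_URL ?? 'https://pinme.cloud';
  return `${baseUrl}/api/v1/chat/completions?project_name=${encodeURIComponent(env.PROJECT_NAME)}`;
}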

OpenRouter Web Search

PinMe does not provide a raw search endpoint. To search the web, pass OpenRouter's openrouter:web_search server tool to chat/completions; the model decides whether and when to search. Always set max_results and max_total_results to keep search volume and cost bounded.
typescript
async function searchWithLLM(env: Env, query: string): Promise<string> {
  const baseUrl = env.BASE_URL ?? 'https://pinme.cloud';
  const resp = await fetch(
    `${baseUrl}/api/v1/chat/completions?project_name=${encodeURIComponent(env.PROJECT_NAME)}`,
    {
      method: 'POST',
      headers: {
        'Content-Type': 'application/json',
        'X-API-Key': env.API_KEY,
      },
      body: JSON.stringify({
        model: 'openai/gpt-5.2',
        messages: [{ role: 'user', content: query }],
        tools: [
          {
            type: 'openrouter:web_search',
            parameters: {
              engine: 'auto',
              max_results: 5,
              max_total_results: 10,
            },
          },
        ],
      }),
    },
  );

  if (!resp.ok) {
    throw new Error(await extractPinmeOpenRouterError(resp));
  }

  const data = await resp.json() as { choices: Array<{ message?: { content?: string } }> };
  return data.choices[0]?.message?.content ?? '';
}
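One possible call site inside a Worker route, assuming the same json(...) response helper used in the routing examples below:
typescript
// Hypothetical route handler: forwards a user query to the search-enabled call.
async function handleSearch(request: Request, env: Env): Promise<Response> {
  const { query } = await request.json() as { query: string };
  try {
    return json({ answer: await searchWithLLM(env, query) });
  } catch (err) {
    return json({ error: err instanceof Error ? err.message : 'LLM error' }, 502);
  }
}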

Response Format

Successful requests return OpenRouter's raw response body.
Non-streaming Success (200):
json
{
  "id": "chatcmpl-...",
  "choices": [{ "message": { "role": "assistant", "content": "Hello!" }, "finish_reason": "stop" }],
  "usage": { "prompt_tokens": 10, "completion_tokens": 5, "total_tokens": 15 }
}
Streaming Success (200): SSE format
data: {"choices":[{"delta":{"content":"Hello"}}]}
data: {"choices":[{"delta":{"content":" there"}}]}
data: [DONE]
Errors:
HTTP Status | Meaning | data.error Example
401 | API Key missing, invalid, or mismatched with project_name | "X-API-Key header is required" / "Invalid API key" / "Invalid API key or project name"
400 | project_name missing or OpenRouter key not configured | "project_name is required" / "LLM service not configured for this project"
403 | LLM balance insufficient or disabled | "Insufficient balance, please recharge to continue using LLM service"
413 | Request body exceeds 1MB | "Request body too large (max 1MB)"
500 | Proxy failed before upstream request | "Failed to build request"
502 | LLM service unavailable | "LLM service unavailable"
If OpenRouter receives the request and returns a 4xx/5xx, PinMe passes through OpenRouter's status, headers, and response body instead of wrapping it.
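When surfacing these errors to callers, it can help to separate caller or configuration problems from transient upstream failures. A sketch of one possible split; the retry classification is our judgment, not an API guarantee:
typescript
// 400/401/403/413 indicate a caller or configuration problem, so retrying
// the same request will not help; 500/502 may be transient. This split is
// our own judgment call, not part of the API contract.
function isRetryable(status: number): boolean {
  return status >= 500;
}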

Worker Example Code — Non-streaming

typescript
async function callLLM(
  env: Env,
  messages: Array<{ role: string; content: string }>,
  model = 'openai/gpt-4o-mini',
): Promise<{ content: string; error?: string }> {
  const baseUrl = env.BASE_URL ?? 'https://pinme.cloud';
  const resp = await fetch(
    `${baseUrl}/api/v1/chat/completions?project_name=${encodeURIComponent(env.PROJECT_NAME)}`,
    {
      method: 'POST',
      headers: {
        'Content-Type': 'application/json',
        'X-API-Key': env.API_KEY,
      },
      body: JSON.stringify({ model, messages }),
    },
  );

  if (!resp.ok) {
    return { content: '', error: await extractPinmeOpenRouterError(resp) };
  }

  const data = await resp.json() as { choices: Array<{ message: { content: string } }> };
  return { content: data.choices[0]?.message?.content || '' };
}

// Usage in routes
async function handleChat(request: Request, env: Env): Promise<Response> {
  const { question } = await request.json() as { question: string };

  const result = await callLLM(env, [
    { role: 'system', content: 'You are a helpful assistant.' },
    { role: 'user', content: question },
  ]);

  if (result.error) {
    return json({ error: result.error }, 502);
  }
  return json({ answer: result.content });
}
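The route examples above assume a small json(...) response helper; one possible definition, inferred from how it is called:
typescript
// Minimal JSON response helper matching the json(data, status?) call sites above.
function json(data: unknown, status = 200): Response {
  return new Response(JSON.stringify(data), {
    status,
    headers: { 'Content-Type': 'application/json' },
  });
}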

Worker Example Code — Streaming (SSE Passthrough)

typescript
async function handleChatStream(request: Request, env: Env): Promise<Response> {
  const body = await request.text();
  const baseUrl = env.BASE_URL ?? 'https://pinme.cloud';

  // Ensure stream=true in the request
  const parsed = JSON.parse(body);
  parsed.stream = true;

  const resp = await fetch(
    `${baseUrl}/api/v1/chat/completions?project_name=${encodeURIComponent(env.PROJECT_NAME)}`,
    {
      method: 'POST',
      headers: {
        'Content-Type': 'application/json',
        'X-API-Key': env.API_KEY,
      },
      body: JSON.stringify(parsed),
    },
  );

  if (!resp.ok) {
    return json({ error: await extractPinmeOpenRouterError(resp) }, resp.status);
  }

  // Pass through SSE stream directly
  return new Response(resp.body, {
    status: 200,
    headers: {
      'Content-Type': 'text/event-stream',
      'Cache-Control': 'no-cache',
      'Connection': 'keep-alive',
      ...CORS_HEADERS,
    },
  });
}
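The streaming handler also references a CORS_HEADERS constant it does not define; an illustrative definition follows (restrict the allowed origin for production use):
typescript
// Illustrative CORS headers for the streaming handler above; tighten the
// allowed origin in production.
const CORS_HEADERS = {
  'Access-Control-Allow-Origin': '*',
  'Access-Control-Allow-Headers': 'Content-Type',
};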

Frontend SSE Stream Consumer Example

typescript
async function streamChat(question: string, onChunk: (text: string) => void): Promise<void> {
  const resp = await fetch(getApiUrl('/api/chat/stream'), {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ question }),
  });

  const reader = resp.body!.getReader();
  const decoder = new TextDecoder();
  let buffer = '';

  while (true) {
    const { done, value } = await reader.read();
    if (done) break;

    buffer += decoder.decode(value, { stream: true });
    const lines = buffer.split('\n');
    buffer = lines.pop()!; // Keep incomplete line

    for (const line of lines) {
      if (!line.startsWith('data: ')) continue;
      const payload = line.slice(6);
      if (payload === '[DONE]') return;

      const chunk = JSON.parse(payload) as { choices: Array<{ delta: { content?: string } }> };
      const content = chunk.choices[0]?.delta?.content;
      if (content) onChunk(content);
    }
  }
}
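An illustrative call site that streams the answer into a page element (the element ID is hypothetical):
typescript
// Append chunks to a DOM node as they arrive; 'answer' is a hypothetical ID.
const output = document.getElementById('answer')!;
await streamChat('What is PinMe?', (text) => {
  output.textContent += text;
});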

Error Handling Pattern

For /api/v1/models and /api/v1/chat/completions, successful responses are raw OpenRouter responses. Proxy failures before the OpenRouter request use PinMe's wrapped error format:
typescript
interface PinmeResponse<T = unknown> {
  code: number;   // 200=success, other=failure
  msg: string;    // "ok" | "error" | "invalid params"
  data?: T;       // Business data on success, may contain { error: string } on failure
}
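An illustrative wrapped error body, using the 502 message from the table above (the exact code value is an assumption; only the shape is given by the interface):
json
{
  "code": 502,
  "msg": "error",
  "data": { "error": "LLM service unavailable" }
}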

Recommended Error Extractor

typescript
async function extractPinmeOpenRouterError(resp: Response): Promise<string> {
  const fallback = `HTTP ${resp.status}`;
  try {
    const body = await resp.clone().json() as PinmeResponse | { error?: { message?: string } } | { error?: string };
    if ('data' in body && body.data && typeof body.data === 'object' && 'error' in body.data) {
      return String((body.data as { error: unknown }).error);
    }
    if ('msg' in body && typeof body.msg === 'string' && body.msg) {
      return body.msg;
    }
    if ('error' in body) {
      const error = body.error;
      if (typeof error === 'string') return error;
      if (error && typeof error === 'object' && 'message' in error) {
        return String((error as { message: unknown }).message);
      }
    }
  } catch {
    try {
      const text = await resp.text();
      if (text) return text;
    } catch {
      // Ignore and return fallback below.
    }
  }
  return fallback;
}

Optional JSON Helper

Use this helper for non-streaming POST calls. It returns the raw OpenRouter JSON on success.
typescript
async function callOpenRouterJSON<T>(url: string, apiKey: string, body: unknown): Promise<{ data?: T; error?: string }> {
  let resp: Response;
  try {
    resp = await fetch(url, {
      method: 'POST',
      headers: { 'Content-Type': 'application/json', 'X-API-Key': apiKey },
      body: JSON.stringify(body),
    });
  } catch {
    return { error: 'Network error' };
  }

  if (!resp.ok) {
    return { error: await extractPinmeOpenRouterError(resp) };
  }

  return { data: await resp.json() as T };
}

Usage Example

typescript
const baseUrl = env.BASE_URL ?? 'https://pinme.cloud';

// Call LLM (non-streaming)
const llmResult = await callOpenRouterJSON<{ choices: Array<{ message: { content: string } }> }>(
  `${baseUrl}/api/v1/chat/completions?project_name=${encodeURIComponent(env.PROJECT_NAME)}`,
  env.API_KEY,
  { model: 'openai/gpt-4o-mini', messages: [{ role: 'user', content: 'Hi' }] },
);
if (llmResult.error) return json({ error: llmResult.error }, 502);
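
// Read the completion text from the raw OpenRouter response; optional
// chaining guards against an empty choices array.
const answer = llmResult.data?.choices[0]?.message?.content ?? '';
return json({ answer });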