freebuff2api-openai-proxy

Freebuff2API OpenAI Proxy

Skill by ara.so — Daily 2026 Skills collection.
Freebuff2API is an OpenAI-compatible proxy server written in Go that translates standard OpenAI API requests into Freebuff's backend format. It enables any OpenAI-compatible client, SDK, or CLI tool to use Freebuff's free models without modification.

What It Does

  • Exposes standard OpenAI endpoints (/v1/chat/completions, etc.)
  • Dynamically randomizes client fingerprints to mimic official Freebuff SDK behavior
  • Rotates multiple auth tokens on a configurable interval
  • Routes outbound traffic through an optional HTTP proxy
  • Ships as a multi-arch Docker image for easy deployment

Getting Auth Tokens

Before deploying, obtain one or more Freebuff auth tokens:

Method 1 — Web (easiest): Visit https://freebuff.llm.pm, log in with your Freebuff account, and copy the displayed token.

Method 2 — Freebuff CLI:

```bash
npm i -g freebuff
freebuff  # follow login prompts
```

The token is stored at:
  • Linux/macOS: ~/.config/manicode/credentials.json
  • Windows: C:\Users\<username>\.config\manicode\credentials.json

```json
{
  "default": {
    "authToken": "fa82b5c1-e39d-4c7a-961f-d2b3c4e5f6a7"
  }
}
```

Copy the authToken value — this is your AUTH_TOKENS value.
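After logging in with the CLI, the token can also be pulled out of the credentials file programmatically. A minimal sketch (the helper name `extract_auth_token` is ours, not part of the project):

```python
import json

def extract_auth_token(credentials_json: str) -> str:
    """Return the authToken from a manicode credentials.json payload."""
    creds = json.loads(credentials_json)
    return creds["default"]["authToken"]

# Payload in the format shown above:
sample = '{"default": {"authToken": "fa82b5c1-e39d-4c7a-961f-d2b3c4e5f6a7"}}'
print(extract_auth_token(sample))  # → fa82b5c1-e39d-4c7a-961f-d2b3c4e5f6a7
```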

Installation & Deployment

Docker (Recommended)

Single token:

```bash
docker run -d --name freebuff2api \
  -p 8080:8080 \
  -e AUTH_TOKENS="$FREEBUFF_TOKEN" \
  ghcr.io/quorinex/freebuff2api:latest
```

Multiple tokens (comma-separated for higher throughput):

```bash
docker run -d --name freebuff2api \
  -p 8080:8080 \
  -e AUTH_TOKENS="$FREEBUFF_TOKEN_1,$FREEBUFF_TOKEN_2,$FREEBUFF_TOKEN_3" \
  ghcr.io/quorinex/freebuff2api:latest
```
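Supplying several tokens matters because the proxy rotates through them on ROTATION_INTERVAL. The idea can be sketched as follows (illustrative Python, not the project's actual Go implementation):

```python
import itertools
import time

class TokenRotator:
    """Illustrative interval-based rotation over a token pool."""

    def __init__(self, tokens, interval_seconds):
        self._cycle = itertools.cycle(tokens)
        self._interval = interval_seconds
        self._current = next(self._cycle)
        self._rotated_at = time.monotonic()

    def current(self):
        # Advance to the next token once the interval has elapsed.
        if time.monotonic() - self._rotated_at >= self._interval:
            self._current = next(self._cycle)
            self._rotated_at = time.monotonic()
        return self._current

rotator = TokenRotator(["token1", "token2", "token3"], interval_seconds=6 * 3600)
print(rotator.current())  # → token1 (until 6h elapse)
```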

With HTTP proxy and API key protection:

```bash
docker run -d --name freebuff2api \
  -p 8080:8080 \
  -e AUTH_TOKENS="$FREEBUFF_TOKEN" \
  -e API_KEYS="$MY_PROXY_API_KEY" \
  -e HTTP_PROXY="$HTTP_PROXY_URL" \
  ghcr.io/quorinex/freebuff2api:latest
```

Docker Compose

docker-compose.yml:

```yaml
version: "3.8"
services:
  freebuff2api:
    image: ghcr.io/quorinex/freebuff2api:latest
    container_name: freebuff2api
    restart: unless-stopped
    ports:
      - "8080:8080"
    environment:
      AUTH_TOKENS: "${FREEBUFF_TOKENS}"
      API_KEYS: "${PROXY_API_KEYS}"
      ROTATION_INTERVAL: "6h"
      REQUEST_TIMEOUT: "15m"
    # Or mount a config file:
    # volumes:
    #   - ./config.json:/app/config.json
    # command: ["-config", "/app/config.json"]
```

.env file:

```bash
FREEBUFF_TOKENS=token1,token2,token3
PROXY_API_KEYS=my-secret-key
```

Build from Source

Requirements: Go 1.23+

```bash
git clone https://github.com/Quorinex/Freebuff2API.git
cd Freebuff2API
go build -o freebuff2api .
```

Run with config file:

```bash
./freebuff2api -config config.json
```

Run with environment variables:

```bash
AUTH_TOKENS="$FREEBUFF_TOKEN" ./freebuff2api
```

Build and run the Docker image locally:

```bash
docker build -t freebuff2api .
docker run -d -p 8080:8080 -e AUTH_TOKENS="$FREEBUFF_TOKEN" freebuff2api
```

Configuration

config.json:

```json
{
  "LISTEN_ADDR": ":8080",
  "UPSTREAM_BASE_URL": "https://codebuff.com",
  "AUTH_TOKENS": ["token1", "token2"],
  "ROTATION_INTERVAL": "6h",
  "REQUEST_TIMEOUT": "15m",
  "API_KEYS": ["my-proxy-api-key"],
  "HTTP_PROXY": ""
}
```

Configuration Reference

| Key / Env Var     | Default              | Description                                           |
|-------------------|----------------------|-------------------------------------------------------|
| LISTEN_ADDR       | :8080                | Proxy listen address                                  |
| UPSTREAM_BASE_URL | https://codebuff.com | Freebuff backend URL                                  |
| AUTH_TOKENS       | (required)           | Freebuff auth tokens (JSON array or comma-separated)  |
| ROTATION_INTERVAL | 6h                   | How often to rotate tokens                            |
| REQUEST_TIMEOUT   | 15m                  | Upstream request timeout                              |
| API_KEYS          | []                   | Client API keys for proxy auth (empty = open access)  |
| HTTP_PROXY        | ""                   | HTTP proxy for outbound requests                      |

Note: Environment variables override JSON file values when both are set.
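The override rule in the note can be made concrete. A sketch of the documented precedence (`merge_config` is our illustration, not the project's actual loader):

```python
def merge_config(file_config: dict, env: dict) -> dict:
    """Environment variables override JSON file values when both are set."""
    merged = dict(file_config)
    for key in merged:
        if key in env:
            value = env[key]
            # List-valued keys (AUTH_TOKENS, API_KEYS) accept comma-separated env values.
            if isinstance(merged[key], list):
                value = [v.strip() for v in value.split(",")]
            merged[key] = value
    return merged

file_cfg = {"LISTEN_ADDR": ":8080", "AUTH_TOKENS": ["token1"]}
print(merge_config(file_cfg, {"AUTH_TOKENS": "token2, token3"}))
# → {'LISTEN_ADDR': ':8080', 'AUTH_TOKENS': ['token2', 'token3']}
```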

Using the Proxy

Once running, point any OpenAI-compatible client at http://localhost:8080:

curl

```bash
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $PROXY_API_KEY" \
  -d '{
    "model": "claude-3-5-sonnet",
    "messages": [{"role": "user", "content": "Hello!"}]
  }'
```

If API_KEYS is empty (open access), omit the Authorization header or use any value:

```bash
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "claude-3-5-sonnet", "messages": [{"role": "user", "content": "Hello!"}]}'
```

Python (openai SDK)

```python
from openai import OpenAI
import os

client = OpenAI(
    base_url="http://localhost:8080/v1",
    api_key=os.environ.get("PROXY_API_KEY", "unused"),  # any value if API_KEYS is empty
)

response = client.chat.completions.create(
    model="claude-3-5-sonnet",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Explain async/await in Python."},
    ],
)
print(response.choices[0].message.content)
```

Streaming

```python
from openai import OpenAI
import os

client = OpenAI(
    base_url="http://localhost:8080/v1",
    api_key=os.environ.get("PROXY_API_KEY", "unused"),
)

stream = client.chat.completions.create(
    model="claude-3-5-sonnet",
    messages=[{"role": "user", "content": "Write a short poem."}],
    stream=True,
)

for chunk in stream:
    if chunk.choices[0].delta.content is not None:
        print(chunk.choices[0].delta.content, end="", flush=True)
print()
```

Node.js (openai SDK)

```javascript
import OpenAI from "openai";

const client = new OpenAI({
  baseURL: "http://localhost:8080/v1",
  apiKey: process.env.PROXY_API_KEY ?? "unused",
});

const response = await client.chat.completions.create({
  model: "claude-3-5-sonnet",
  messages: [{ role: "user", content: "Hello from Node.js!" }],
});

console.log(response.choices[0].message.content);
```

LangChain (Python)

```python
from langchain_openai import ChatOpenAI
import os

llm = ChatOpenAI(
    model="claude-3-5-sonnet",
    openai_api_base="http://localhost:8080/v1",
    openai_api_key=os.environ.get("PROXY_API_KEY", "unused"),
)

result = llm.invoke("What is the capital of France?")
print(result.content)
```

Common Patterns

Multi-Token Setup for Higher Throughput

Collect tokens from multiple Freebuff accounts.

config.json approach:

```json
{
  "AUTH_TOKENS": [
    "token-account-1",
    "token-account-2",
    "token-account-3"
  ],
  "ROTATION_INTERVAL": "3h"
}
```

Environment variable approach (comma-separated):

```bash
export AUTH_TOKENS="token1,token2,token3"
```

Securing the Proxy with API Keys

```json
{
  "API_KEYS": ["secret-key-for-team", "another-key-for-ci"]
}
```

Clients must then include Authorization: Bearer secret-key-for-team.
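The access rule amounts to: an empty key list means open access, otherwise the Bearer token must match a configured key. A sketch of that check (`is_authorized` is our illustration, not the proxy's code):

```python
def is_authorized(auth_header, api_keys):
    """Empty API_KEYS means open access; otherwise require a matching Bearer key."""
    if not api_keys:
        return True
    if not auth_header or not auth_header.startswith("Bearer "):
        return False
    return auth_header[len("Bearer "):] in api_keys

print(is_authorized("Bearer secret-key-for-team", ["secret-key-for-team"]))  # → True
print(is_authorized(None, []))  # → True (open access)
print(is_authorized("Bearer wrong-key", ["secret-key-for-team"]))  # → False
```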

Using with HTTP Proxy (Corporate/Regional)

```json
{
  "HTTP_PROXY": "http://proxy.company.com:3128"
}
```

Or via environment:

```bash
docker run -d --name freebuff2api \
  -p 8080:8080 \
  -e AUTH_TOKENS="$FREEBUFF_TOKEN" \
  -e HTTP_PROXY="$CORPORATE_PROXY" \
  ghcr.io/quorinex/freebuff2api:latest
```

Health Check / Verify Running

Verify the proxy is responding with a minimal chat request:

```bash
curl -s http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model":"claude-3-5-sonnet","messages":[{"role":"user","content":"ping"}]}' \
  | jq '.choices[0].message.content'
```

Troubleshooting

Proxy returns 401 Unauthorized

  • If API_KEYS is configured, ensure your client sends Authorization: Bearer <key> matching one of the configured keys.
  • If API_KEYS is empty ([]), the proxy is open — no auth header needed.

Auth token rejected / 403 from upstream

  • Token may be expired — re-fetch from https://freebuff.llm.pm or re-run the freebuff CLI.
  • Check AUTH_TOKENS is set correctly (not empty, no extra whitespace).

Request timeouts

  • Increase REQUEST_TIMEOUT beyond the default 15m for very long completions:

```json
{ "REQUEST_TIMEOUT": "30m" }
```

Connection refused on localhost:8080

  • Confirm the container/process is running: docker ps or ps aux | grep freebuff2api
  • Check LISTEN_ADDR — if set to a specific interface (e.g., 127.0.0.1:8080), ensure you're connecting to the right address.
  • For Docker, verify the port mapping: -p 8080:8080

Docker container exits immediately

Check the logs:

```bash
docker logs freebuff2api
```

Common cause: AUTH_TOKENS not set. Re-run with the token provided:

```bash
docker run --rm -e AUTH_TOKENS="$FREEBUFF_TOKEN" ghcr.io/quorinex/freebuff2api:latest
```

Multiple tokens not rotating

  • Verify tokens are comma-separated (env var) or a JSON array (config file).
  • Check ROTATION_INTERVAL is a valid Go duration string: "1h", "30m", "6h", etc.
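To pre-check a ROTATION_INTERVAL value before deploying, a rough validator for Go duration syntax can help (a sketch only; Go's time.ParseDuration is the authority, and the helper name is ours):

```python
import re

# Go durations: one or more number+unit groups, e.g. "1h", "30m", "1h30m".
_GO_DURATION = re.compile(r"^([0-9]+(\.[0-9]+)?(ns|us|µs|ms|s|m|h))+$")

def looks_like_go_duration(value: str) -> bool:
    """Rough syntax check for Go time.ParseDuration strings."""
    return bool(_GO_DURATION.fullmatch(value))

print(looks_like_go_duration("6h"))     # → True
print(looks_like_go_duration("1h30m"))  # → True
print(looks_like_go_duration("6 hrs"))  # → False
```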

Project Structure (Source)

```
Freebuff2API/
├── main.go           # Entry point, config loading, server startup
├── config.json       # Default config file (gitignored if contains secrets)
├── Dockerfile        # Multi-arch Docker build
├── go.mod / go.sum   # Go module dependencies
├── README.md
└── README_zh.md
```

Key CLI Flag

```bash
./freebuff2api -config /path/to/config.json
```

The only CLI flag is -config, which specifies an alternate config file path. All other configuration is via the JSON file or environment variables.