Vercel Functions

You are an expert in Vercel Functions — the compute layer of the Vercel platform.

Function Types

Serverless Functions (Node.js)

  • Full Node.js runtime, all npm packages available
  • Default for Next.js API routes, Server Actions, Server Components
  • Cold starts: 800ms–2.5s (with DB connections)
  • Max duration: 10s (Hobby), 300s (Pro default), 800s (Fluid Compute Pro/Enterprise)
```ts
// app/api/hello/route.ts
export async function GET() {
  return Response.json({ message: 'Hello from Node.js' })
}
```

Edge Functions (V8 Isolates)

  • Lightweight V8 runtime, Web Standard APIs only
  • Ultra-low cold starts (<1ms globally)
  • Limited API surface (no full Node.js)
  • Best for: auth checks, redirects, A/B testing, simple transformations
```ts
// app/api/hello/route.ts
export const runtime = 'edge'

export async function GET() {
  return new Response('Hello from the Edge')
}
```

Bun Runtime (Public Beta)

Add `"bunVersion": "1.x"` to `vercel.json` to run Node.js functions on Bun instead. ~28% lower latency for CPU-bound workloads. Supports Next.js, Express, Hono, Nitro.
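The opt-in described above is a one-line `vercel.json` change, a minimal sketch (`"1.x"` tracks the latest Bun 1 release, per the text above):

```json
{
  "bunVersion": "1.x"
}
```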

Rust Runtime (Public Beta)

Rust functions run on Fluid Compute with HTTP streaming and Active CPU pricing. Built on the community Rust runtime. Supports environment variables up to 64 KB.

Node.js 24 LTS

Node.js 24 LTS is now GA on Vercel for both builds and functions. Features V8 13.6, global `URLPattern`, Undici v7 for faster `fetch()`, and npm v11.

Choosing Runtime

| Need | Runtime | Why |
| --- | --- | --- |
| Full Node.js APIs, npm packages | `nodejs` | Full compatibility |
| Lower latency, CPU-bound work | `nodejs` + Bun | ~28% latency reduction |
| Ultra-low latency, simple logic | `edge` | <1ms cold start, global |
| Database connections, heavy deps | `nodejs` | Edge lacks full Node.js |
| Auth/redirect at the edge | `edge` | Fastest response |
| AI streaming | Either | Both support streaming |
| Systems-level performance | `rust` (beta) | Native speed, Fluid Compute |

Fluid Compute

Fluid Compute is the unified execution model for all Vercel Functions (both Node.js and Edge).
Key benefits:
  • Optimized concurrency: Multiple invocations on a single instance — up to 85% cost reduction for high-concurrency workloads
  • Extended durations: Default 300s for all plans; up to 800s on Pro/Enterprise
  • Active CPU pricing: Charges only while CPU is actively working, not during idle/await time. Enabled by default for all plans. Memory-only periods billed at a significantly lower rate.
  • Background processing: `waitUntil` / `after` for post-response tasks
  • Dynamic scaling: Automatic during traffic spikes
  • Bytecode caching: Reduces cold starts via Rust-based runtime with pre-compiled function code
  • Multi-region failover: Default for Enterprise when Fluid is activated

Instance Sizes

| Size | CPU | Memory |
| --- | --- | --- |
| Standard (default) | 1 vCPU | 2 GB |
| Performance | 2 vCPU | 4 GB |

Hobby projects use Standard CPU. The Basic CPU instance has been removed.

Background Processing with waitUntil

```ts
// Continue work after sending the response
import { waitUntil } from '@vercel/functions'

export async function POST(req: Request) {
  const data = await req.json()

  // Send response immediately
  const response = Response.json({ received: true })

  // Continue processing in the background.
  // waitUntil expects a promise, so invoke the async function.
  waitUntil(
    (async () => {
      await processAnalytics(data)
      await sendNotification(data)
    })()
  )

  return response
}
```

Next.js after (equivalent)

```ts
import { after } from 'next/server'

export async function POST(req: Request) {
  const data = await req.json()

  after(async () => {
    await logToAnalytics(data)
  })

  return Response.json({ ok: true })
}
```

Streaming

Zero-config streaming for both runtimes. Essential for AI applications.
```ts
export async function POST(req: Request) {
  const data = ['Hello', ' from', ' a', ' stream'] // chunks to send
  const encoder = new TextEncoder()
  const stream = new ReadableStream({
    async start(controller) {
      for (const chunk of data) {
        controller.enqueue(encoder.encode(chunk))
        await new Promise(r => setTimeout(r, 100))
      }
      controller.close()
    },
  })

  return new Response(stream, {
    headers: { 'Content-Type': 'text/event-stream' },
  })
}
```
For AI streaming, use the AI SDK's `toUIMessageStreamResponse()` (for chat UIs with `useChat`), which handles SSE formatting automatically.
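The `text/event-stream` framing that such helpers produce can be sketched in plain TypeScript; `formatSSE` below is a hypothetical illustration of the wire format, not part of any SDK:

```ts
// Frame a chunk of text as a Server-Sent Events message.
// An SSE message is optional `event:`/`data:` field lines ended by a blank line.
function formatSSE(data: string, event?: string): string {
  const lines: string[] = []
  if (event) lines.push(`event: ${event}`)
  // Multi-line payloads need one `data:` field per line.
  for (const line of data.split('\n')) lines.push(`data: ${line}`)
  return lines.join('\n') + '\n\n'
}

// Example: two chunks of an assistant reply, framed for the stream above.
const frames = ['Hello', 'world'].map((chunk) => formatSSE(chunk))
```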

Cron Jobs

Schedule function invocations via `vercel.json`:
```json
{
  "crons": [
    {
      "path": "/api/daily-report",
      "schedule": "0 8 * * *"
    },
    {
      "path": "/api/cleanup",
      "schedule": "0 */6 * * *"
    }
  ]
}
```
The cron endpoint receives a normal HTTP request. Verify it's from Vercel:
```ts
export async function GET(req: Request) {
  const authHeader = req.headers.get('authorization')
  if (authHeader !== `Bearer ${process.env.CRON_SECRET}`) {
    return new Response('Unauthorized', { status: 401 })
  }
  // Do scheduled work
  return Response.json({ ok: true })
}
```

Configuration via vercel.json

Deprecation notice: Support for the legacy `now.json` config file will be removed on March 31, 2026. Rename `now.json` to `vercel.json` (no content changes required).
```json
{
  "functions": {
    "app/api/heavy/**": {
      "maxDuration": 300,
      "memory": 1024
    },
    "app/api/edge/**": {
      "runtime": "edge"
    }
  }
}
```

Timeout Limits

All plans now default to 300s execution time with Fluid Compute.
| Plan | Default | Max |
| --- | --- | --- |
| Hobby | 300s | 300s |
| Pro | 300s | 800s |
| Enterprise | 300s | 800s |
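In the Next.js App Router, these limits can also be set per route with the `maxDuration` segment config export instead of `vercel.json` (the route path and handler below are illustrative):

```ts
// app/api/report/route.ts
// Route segment config: allow this handler to run for up to 300 seconds.
export const maxDuration = 300

export async function GET() {
  // ...long-running work here...
  return Response.json({ ok: true })
}
```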

Common Pitfalls

  1. Cold starts with DB connections: Use connection pooling (e.g., Neon's `@neondatabase/serverless`)
  2. Edge limitations: No `fs`, no native modules, limited `crypto`; use the Node.js runtime if needed
  3. Timeout exceeded: Use Fluid Compute for long-running tasks, or Workflow DevKit for very long processes
  4. Bundle size: Python runtime supports up to 500MB; Node.js has smaller limits
  5. Environment variables: Available in all functions automatically; use `vercel env pull` for local dev
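The pooling advice in item 1 comes down to caching the client at module scope so warm invocations reuse it; a driver-agnostic sketch, where `createClient` is a hypothetical stand-in for a real driver such as Neon's:

```ts
type Client = { query: (sql: string) => Promise<string> }

let connections = 0
let cached: Client | undefined

// Hypothetical driver factory; a real driver would open a connection here.
function createClient(): Client {
  connections++
  return { query: async (sql) => `ok: ${sql}` }
}

// Module-scope memoization: the client survives across warm invocations,
// so only cold starts pay the connection cost.
function getClient(): Client {
  cached ??= createClient()
  return cached
}
```

Each handler then calls `getClient()` instead of constructing a new client per request.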

Function Runtime Diagnostics

Timeout Diagnostics

```
504 Gateway Timeout?
├─ All plans default to 300s with Fluid Compute
├─ Pro/Enterprise: configurable up to 800s
├─ Long-running task?
│  ├─ Under 5 min → Use Fluid Compute with streaming
│  ├─ Up to 15 min → Use Vercel Functions with `maxDuration` in vercel.json
│  └─ Hours/days → Use Workflow DevKit (DurableAgent or workflow steps)
└─ DB query slow? → Add connection pooling, check cold start, use Edge Config
```

500 Error Diagnostics

```
500 Internal Server Error?
├─ Check Vercel Runtime Logs (Dashboard → Deployments → Functions tab)
├─ Missing env vars? → Compare `.env.local` against Vercel dashboard settings
├─ Import error? → Verify package is in `dependencies`, not `devDependencies`
└─ Uncaught exception? → Wrap handler in try/catch, use `after()` for error reporting
```
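The last branch can be sketched as a generic wrapper; `withErrorHandling` is a hypothetical helper, not a Vercel API:

```ts
// Wrap a route handler so uncaught exceptions become a 500 response
// instead of a failed invocation.
type Handler = (req: Request) => Promise<Response>

function withErrorHandling(
  handler: Handler,
  report: (err: unknown) => void
): Handler {
  return async (req) => {
    try {
      return await handler(req)
    } catch (err) {
      report(err) // in a real route, defer reporting with after()/waitUntil
      return new Response('Internal Server Error', { status: 500 })
    }
  }
}
```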

Invocation Failure Diagnostics

"FUNCTION_INVOCATION_FAILED"?
├─ Memory exceeded? → Increase `memory` in vercel.json (up to 3008 MB on Pro)
├─ Crashed during init? → Check top-level await or heavy imports at module scope
└─ Edge Function crash? → Check for Node.js APIs not available in Edge runtime
提示 "FUNCTION_INVOCATION_FAILED"?
├─ 内存超限? → 在 vercel.json 中提升 `memory` 配置(Pro 套餐最高支持 3008 MB)
├─ 初始化时崩溃? → 检查顶层 await 或模块层面的重度导入
└─ Edge Function 崩溃? → 检查是否使用了 Edge 运行时不支持的 Node.js API

Cold Start Diagnostics

```
Cold start latency > 1s?
├─ Using Node.js runtime? → Consider Edge Functions for latency-sensitive routes
├─ Large function bundle? → Audit imports, use dynamic imports, tree-shake
├─ DB connection in cold start? → Use connection pooling (Neon serverless driver)
└─ Enable Fluid Compute to reuse warm instances across requests
```

Edge Function Timeout Diagnostics

"EDGE_FUNCTION_INVOCATION_TIMEOUT"?
├─ Edge Functions have 25s hard limit (not configurable)
├─ Move heavy computation to Node.js Serverless Functions
└─ Use streaming to start response early, process in background with `waitUntil`
提示 "EDGE_FUNCTION_INVOCATION_TIMEOUT"?
├─ Edge Functions 有 25s 的硬限制(不可配置)
├─ 将重度计算逻辑移到 Node.js Serverless Functions
└─ 使用流式响应尽早返回响应,通过 `waitUntil` 在后台处理逻辑

Official Documentation
