Vercel Functions

You are an expert in Vercel Functions — the compute layer of the Vercel platform.

Function Types


Serverless Functions (Node.js)


  • Full Node.js runtime, all npm packages available
  • Default for Next.js API routes, Server Actions, Server Components
  • Cold starts: 800ms–2.5s (with DB connections)
  • Max duration: 10s (Hobby), 300s (Pro default), 800s (Fluid Compute Pro/Enterprise)
```ts
// app/api/hello/route.ts
export async function GET() {
  return Response.json({ message: 'Hello from Node.js' })
}
```

Edge Functions (V8 Isolates)


  • Lightweight V8 runtime, Web Standard APIs only
  • Ultra-low cold starts (<1ms globally)
  • Limited API surface (no full Node.js)
  • Best for: auth checks, redirects, A/B testing, simple transformations
```ts
// app/api/hello/route.ts
export const runtime = 'edge'

export async function GET() {
  return new Response('Hello from the Edge')
}
```

Bun Runtime (Public Beta)


Add `"bunVersion": "1.x"` to `vercel.json` to run Node.js functions on Bun instead. ~28% lower latency for CPU-bound workloads. Supports Next.js, Express, Hono, Nitro.
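For instance, the opt-in is a single field in `vercel.json` (shown alone here; merge it into your existing config):

```json
{
  "bunVersion": "1.x"
}
```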

Rust Runtime (Public Beta)


Rust functions run on Fluid Compute with HTTP streaming and Active CPU pricing. Built on the community Rust runtime. Supports environment variables up to 64 KB.

Node.js 24 LTS


Node.js 24 LTS is now GA on Vercel for both builds and functions. It ships V8 13.6, a global `URLPattern`, Undici v7 for faster `fetch()`, and npm v11.

Choosing Runtime


| Need | Runtime | Why |
| --- | --- | --- |
| Full Node.js APIs, npm packages | `nodejs` | Full compatibility |
| Lower latency, CPU-bound work | `nodejs` + Bun | ~28% latency reduction |
| Ultra-low latency, simple logic | `edge` | <1ms cold start, global |
| Database connections, heavy deps | `nodejs` | Edge lacks full Node.js |
| Auth/redirect at the edge | `edge` | Fastest response |
| AI streaming | Either | Both support streaming |
| Systems-level performance | `rust` (beta) | Native speed, Fluid Compute |
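In Next.js App Router, the runtime choice can be pinned per route with segment config exports; a minimal sketch (the route path and values are illustrative):

```typescript
// app/api/report/route.ts: route segment config pins the runtime per route
export const runtime = 'nodejs'   // or 'edge' for the Edge runtime
export const maxDuration = 300    // seconds, within the plan's limit

export async function GET() {
  return Response.json({ runtime: 'nodejs' })
}
```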

Fluid Compute


Fluid Compute is the unified execution model for all Vercel Functions (both Node.js and Edge).
Key benefits:
  • Optimized concurrency: Multiple invocations on a single instance — up to 85% cost reduction for high-concurrency workloads
  • Extended durations: Default 300s for all plans; up to 800s on Pro/Enterprise
  • Active CPU pricing: Charges only while CPU is actively working, not during idle/await time. Enabled by default for all plans. Memory-only periods billed at a significantly lower rate.
  • Background processing: `waitUntil`/`after` for post-response tasks
  • Dynamic scaling: Automatic during traffic spikes
  • Bytecode caching: Reduces cold starts via Rust-based runtime with pre-compiled function code
  • Multi-region failover: Default for Enterprise when Fluid is activated

Instance Sizes


| Size | CPU | Memory |
| --- | --- | --- |
| Standard (default) | 1 vCPU | 2 GB |
| Performance | 2 vCPU | 4 GB |

Hobby projects use Standard CPU. The Basic CPU instance has been removed.

Background Processing with `waitUntil`

```ts
// Continue work after sending the response
import { waitUntil } from '@vercel/functions'

export async function POST(req: Request) {
  const data = await req.json()

  // Send the response immediately
  const response = Response.json({ received: true })

  // Continue processing in the background.
  // waitUntil expects a Promise, so invoke the async function.
  waitUntil(
    (async () => {
      await processAnalytics(data)
      await sendNotification(data)
    })()
  )

  return response
}
```
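The contract matters: `waitUntil` takes a promise, not a callback, so the async function must be invoked before being passed in. A minimal stand-in illustrating the pattern (our own sketch, not the platform implementation):

```typescript
// Stand-in for the platform's waitUntil: collect promises and keep the
// "instance" alive until they all settle (illustrative only)
const pending: Promise<unknown>[] = []

function waitUntil(promise: Promise<unknown>): void {
  pending.push(promise)
}

let backgroundDone = false

// Note the trailing (): the async function is invoked, yielding a Promise
waitUntil(
  (async () => {
    backgroundDone = true
  })()
)

// The platform awaits all registered work before recycling the instance
await Promise.all(pending)
```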

Next.js `after` (equivalent)

```ts
import { after } from 'next/server'

export async function POST(req: Request) {
  const data = await req.json()

  after(async () => {
    await logToAnalytics(data)
  })

  return Response.json({ ok: true })
}
```

Streaming


Zero-config streaming for both runtimes. Essential for AI applications.
```ts
export async function POST(req: Request) {
  const data = ['Hello, ', 'streaming ', 'world'] // example chunks
  const encoder = new TextEncoder()
  const stream = new ReadableStream({
    async start(controller) {
      for (const chunk of data) {
        controller.enqueue(encoder.encode(chunk))
        await new Promise(r => setTimeout(r, 100))
      }
      controller.close()
    },
  })

  return new Response(stream, {
    headers: { 'Content-Type': 'text/event-stream' },
  })
}
```
For AI streaming, use the AI SDK's `toUIMessageStreamResponse()` (for chat UIs with `useChat`), which handles SSE formatting automatically.
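Under the hood, `text/event-stream` frames each event as a `data:` line terminated by a blank line. A minimal framing helper, sketching what the SDK automates (our own illustration, not SDK code):

```typescript
// Frame events in the text/event-stream wire format: one "data:" line per
// event, each event terminated by a blank line
function toSSE(events: unknown[]): string {
  return events.map((e) => `data: ${JSON.stringify(e)}\n\n`).join('')
}
```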

Cron Jobs


Schedule function invocations via `vercel.json` (cron expressions are evaluated in UTC):
```json
{
  "crons": [
    {
      "path": "/api/daily-report",
      "schedule": "0 8 * * *"
    },
    {
      "path": "/api/cleanup",
      "schedule": "0 */6 * * *"
    }
  ]
}
```
The cron endpoint receives a normal HTTP request. Verify it's from Vercel:
```ts
export async function GET(req: Request) {
  const authHeader = req.headers.get('authorization')
  if (authHeader !== `Bearer ${process.env.CRON_SECRET}`) {
    return new Response('Unauthorized', { status: 401 })
  }
  // Do scheduled work
  return Response.json({ ok: true })
}
```

Configuration via vercel.json


Deprecation notice: support for the legacy `now.json` config file will be removed on March 31, 2026. Rename `now.json` to `vercel.json` (no content changes required).
```json
{
  "functions": {
    "app/api/heavy/**": {
      "maxDuration": 300,
      "memory": 1024
    },
    "app/api/edge/**": {
      "runtime": "edge"
    }
  }
}
```

Timeout Limits


All plans now default to 300s execution time with Fluid Compute.
| Plan | Default | Max |
| --- | --- | --- |
| Hobby | 300s | 300s |
| Pro | 300s | 800s |
| Enterprise | 300s | 800s |

Common Pitfalls


  1. Cold starts with DB connections: Use connection pooling (e.g., Neon's `@neondatabase/serverless`)
  2. Edge limitations: No `fs`, no native modules, limited `crypto` — use the Node.js runtime if needed
  3. Timeout exceeded: Use Fluid Compute for long-running tasks, or the Workflow DevKit for very long processes
  4. Bundle size: The Python runtime supports up to 500MB; Node.js has smaller limits
  5. Environment variables: Available in all functions automatically; use `vercel env pull` for local dev
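For the bundle-size and cold-start pitfalls, heavy dependencies can be kept off the cold-start path with a dynamic import. A sketch, using `node:zlib` as a stand-in for any heavy module:

```typescript
// Load the heavy module only on the code path that needs it, so module-scope
// initialization (and therefore the cold start) stays lean
export async function compress(input: string): Promise<Buffer> {
  const { gzipSync } = await import('node:zlib') // resolved on first call, not at boot
  return gzipSync(Buffer.from(input))
}
```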

Function Runtime Diagnostics


Timeout Diagnostics


```
504 Gateway Timeout?
├─ All plans default to 300s with Fluid Compute
├─ Pro/Enterprise: configurable up to 800s
├─ Long-running task?
│  ├─ Under 5 min → Use Fluid Compute with streaming
│  ├─ Up to 15 min → Use Vercel Functions with `maxDuration` in vercel.json
│  └─ Hours/days → Use Workflow DevKit (DurableAgent or workflow steps)
└─ DB query slow? → Add connection pooling, check cold start, use Edge Config
```

500 Error Diagnostics


```
500 Internal Server Error?
├─ Check Vercel Runtime Logs (Dashboard → Deployments → Functions tab)
├─ Missing env vars? → Compare `.env.local` against Vercel dashboard settings
├─ Import error? → Verify package is in `dependencies`, not `devDependencies`
└─ Uncaught exception? → Wrap handler in try/catch, use `after()` for error reporting
```

Invocation Failure Diagnostics


"FUNCTION_INVOCATION_FAILED"?
├─ Memory exceeded? → Increase `memory` in vercel.json (up to 3008 MB on Pro)
├─ Crashed during init? → Check top-level await or heavy imports at module scope
└─ Edge Function crash? → Check for Node.js APIs not available in Edge runtime
出现"FUNCTION_INVOCATION_FAILED"?
├─ 内存超出? → 在vercel.json中增加`memory`(专业版最高3008 MB)
├─ 初始化时崩溃? → 检查顶层await或模块级别的重依赖导入
└─ Edge函数崩溃? → 检查Edge运行时不支持的Node.js API

Cold Start Diagnostics


```
Cold start latency > 1s?
├─ Using Node.js runtime? → Consider Edge Functions for latency-sensitive routes
├─ Large function bundle? → Audit imports, use dynamic imports, tree-shake
├─ DB connection in cold start? → Use connection pooling (Neon serverless driver)
└─ Enable Fluid Compute to reuse warm instances across requests
```

Edge Function Timeout Diagnostics


"EDGE_FUNCTION_INVOCATION_TIMEOUT"?
├─ Edge Functions have 25s hard limit (not configurable)
├─ Move heavy computation to Node.js Serverless Functions
└─ Use streaming to start response early, process in background with `waitUntil`
出现"EDGE_FUNCTION_INVOCATION_TIMEOUT"?
├─ Edge Functions有25s硬限制(不可配置)
├─ 将重计算逻辑移至Node.js Serverless Functions
└─ 使用流式传输提前返回响应,用`waitUntil`在后台处理

Official Documentation
