Twelve-Factor App Patterns


Core factors (config, dependencies, backing services, logs) apply to any deployed application — services, frontends, workers, and CLI tools. Server-specific factors (port binding, concurrency, disposability) apply only to backend services that run as long-lived processes.
Based on 12factor.net. All 12 factors are covered below. Factors that primarily affect code (config, dependencies, backing services, stateless processes, disposability, logging, concurrency) get full treatment with code examples. Factors that are primarily operational (codebase, build/release/run) get brief guidance on the code-level implications.
See the `typescript-strict` skill for schema-first patterns at trust boundaries. See the `testing` skill for how to TDD these patterns — config validation, shutdown behavior, and backing service integration are all testable through behavior-driven tests.

When to Apply


  • Greenfield projects: All 12-factor rules are mandatory. Structure the application to follow every applicable factor from the start.
  • Brownfield projects: Aim to follow as many factors as possible. Adopt incrementally in this priority order:
    1. Config (Factor III) — add env var validation without restructuring
    2. Logs (Factor XI) — switch to structured stdout logging
    3. Disposability (Factor IX) — add graceful shutdown handlers
    4. Backing services (Factor IV) — abstract connections behind config URLs
    5. Stateless processes (Factor VI) — migrate in-memory state to backing services

Codebase (Factor I)


One codebase tracked in revision control, many deploys. Each deployable service has its own codebase. Shared code between services is extracted into libraries managed via the package manager, not copy-pasted.
In a monorepo, each service should have its own entry point, its own deploy pipeline, and its own set of backing service connections. A single repo is fine as long as each service deploys independently.
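One illustrative monorepo layout under these constraints (directory names are examples, not a prescription):

```
apps/
  web/        # HTTP service: own entry point, own deploy pipeline
  worker/     # queue consumer, deploys independently of web
packages/
  shared/     # shared code, consumed via the package manager
```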

Config (Factor III)


Store all configuration in environment variables. Never hardcode URLs, credentials, or per-environment values.
Validate config at startup with a schema. Fail fast if config is invalid:
```typescript
import { z } from 'zod';

const ConfigSchema = z.object({
  PORT: z.coerce.number().default(3000),
  DATABASE_URL: z.string().url(),
  REDIS_URL: z.string().url(),
  API_URL: z.string().url(),
  LOG_LEVEL: z.enum(['debug', 'info', 'warn', 'error']).default('info'),
  API_KEY: z.string().min(1),
  SENTRY_DSN: z.string().url().optional(),
  ALLOWED_ORIGINS: z.string().default('').transform((s) => s === '' ? [] : s.split(',')),
});

type Config = z.infer<typeof ConfigSchema>;

export const createConfig = (env: Record<string, string | undefined> = process.env): Config => {
  const result = ConfigSchema.safeParse(env);
  if (!result.success) {
    console.error(JSON.stringify({ level: 'error', message: 'Invalid config', errors: result.error.flatten() }));
    process.exit(1);
  }
  return result.data;
};
```
Inject config via options objects — never import `process.env` deep in the call tree:
```typescript
const UserSchema = z.object({ id: z.string(), name: z.string(), email: z.string().email() });
type User = z.infer<typeof UserSchema>;

export const createUserService = ({ config }: { config: Pick<Config, 'API_URL'> }) => ({
  async getUser(id: string): Promise<User> {
    const response = await fetch(`${config.API_URL}/users/${id}`);
    if (!response.ok) throw new Error(`Failed to fetch user: ${response.status}`);
    const data: unknown = await response.json();
    return UserSchema.parse(data);
  },
});
```
Provide `.env.example` as documentation (never `.env` with real values):
```
PORT=3000
DATABASE_URL=postgres://localhost:5432/myapp
REDIS_URL=redis://localhost:6379
API_URL=http://localhost:8080
LOG_LEVEL=info
API_KEY=your-api-key-here
SENTRY_DSN=
ALLOWED_ORIGINS=http://localhost:3000,http://localhost:5173
```

Config Anti-Patterns


```typescript
// ❌ Hardcoded per-environment value
const DB_HOST = 'prod-db.internal.example.com';

// ❌ Branching on environment name
if (process.env.NODE_ENV === 'production') {
  connectTo('prod-db');
} else {
  connectTo('localhost');
}

// ❌ Environment-named config files shipped with the code
const config = require(`./config.${process.env.NODE_ENV}.json`);
```
Why these are wrong: Config that varies by deploy belongs in env vars, not code. Environment-name branching creates combinatorial explosion and breaks dev/prod parity.

Dependencies (Factor II)


Explicitly declare all dependencies. Never rely on implicit system-wide packages.
```typescript
import which from 'which';

export const checkSystemDependencies = (required: readonly string[]) => {
  const missing = required.filter((cmd) => !which.sync(cmd, { nothrow: true }));
  if (missing.length > 0) {
    throw new Error(`Missing required system dependencies: ${missing.join(', ')}`);
  }
};
```
Rules:
  • Every dependency in `package.json` (or equivalent manifest)
  • Lockfile (`package-lock.json`, `pnpm-lock.yaml`) committed to the repo
  • Dependencies are isolated — the app does not leak from or depend on the system environment (use `node_modules`, not global installs)
  • No `exec('imagemagick ...')` or `child_process` calls to assumed system tools
  • If a system tool is required, document it explicitly and check for it at startup

Backing Services (Factor IV)


Treat every backing service (database, cache, queue, email, storage) as an attached resource identified by a URL in config.
```typescript
export const createApp = ({ config }: { config: Pick<Config, 'DATABASE_URL' | 'REDIS_URL'> }) => {
  const db = createDbPool({ connectionString: config.DATABASE_URL });
  const cache = createRedisClient({ url: config.REDIS_URL });

  return {
    db,
    cache,
    async shutdown() {
      await Promise.all([db.end(), cache.quit()]);
    },
  } as const;
};
```
The code makes no distinction between local and third-party services. Swapping a local PostgreSQL for a managed cloud database requires only a config change, never a code change.
For projects using hexagonal architecture, backing services map naturally to ports (interfaces) and adapters (implementations). See the `hexagonal-architecture` skill.

Stateless Processes (Factor VI)


Execute the app as stateless, share-nothing processes. Any data that must persist lives in a backing service.
```typescript
export const createSessionStore = <T>({
  redis,
  schema,
}: {
  redis: RedisClient;
  schema: z.ZodType<T>;
}) => ({
  async get(sessionId: string): Promise<T | undefined> {
    const data = await redis.get(`session:${sessionId}`);
    return data ? schema.parse(JSON.parse(data)) : undefined;
  },
  async set({ sessionId, data, ttlSeconds }: { sessionId: string; data: T; ttlSeconds: number }) {
    await redis.setex(`session:${sessionId}`, ttlSeconds, JSON.stringify(data));
  },
});
```

Stateless Anti-Patterns


```typescript
// ❌ In-memory session state: lost on restart, invisible to other instances
const sessions = new Map<string, UserSession>();

// ❌ Local filesystem as storage: not shared across processes
app.post('/upload', (req, res) => {
  fs.writeFileSync(`/tmp/uploads/${req.file.name}`, req.file.data);
});

// ❌ In-process counter: each instance counts separately
let requestCount = 0;
app.use(() => { requestCount++; });

// ❌ In-process scheduler: runs in every instance, dies with the process
setInterval(() => sendReport(), 60_000);
```
Why these are wrong: In-memory state is lost on restart and invisible to other process instances. Local filesystem state cannot be shared across processes. In-process schedulers run in only one instance. Use backing services (Redis, S3, database) and external schedulers instead.
See the `functional` skill for immutable data patterns that naturally support statelessness.

Concurrency (Factor VIII)


Scale out via the process model. Design the app so work can be divided across process types.
```typescript
// web.ts — handles HTTP requests
const config = createConfig();
const app = createApp({ config });
await startServer({ app, config });

// worker.ts — processes background jobs from a queue backed by Redis
const config = createConfig();
const queue = createQueueConsumer({ url: config.REDIS_URL });
await queue.process('email', sendEmail);
await queue.process('report', generateReport);
```
Rules:
  • Separate entry points for each process type (web, worker, scheduler)
  • HTTP handlers dispatch background work to a queue, never process it inline
  • Each process type scales independently
  • Use a `Procfile` or equivalent to define process types:

```
web: node dist/web.js
worker: node dist/worker.js
```
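The dispatch rule above hinges on the web process never doing background work itself. A minimal sketch of the producer side (names like `createReportHandler` are illustrative; the in-memory array stands in for a Redis-backed queue, which a real deployment would use per Factor VI):

```typescript
type Job = { type: string; payload: Record<string, unknown> };

// In-memory stand-in for a queue producer. Production code would publish to a
// Redis-backed queue; an in-memory array would violate statelessness.
const createQueue = () => {
  const jobs: Job[] = [];
  return {
    enqueue(job: Job): void {
      jobs.push(job);
    },
    size(): number {
      return jobs.length;
    },
  };
};

// The HTTP handler enqueues and returns 202 Accepted immediately; a worker
// process consumes the job later.
const createReportHandler = ({ queue }: { queue: ReturnType<typeof createQueue> }) =>
  (userId: string): { status: number } => {
    queue.enqueue({ type: 'report', payload: { userId } });
    return { status: 202 };
  };
```

Because the handler only enqueues, the web tier stays fast and the worker tier scales independently of request volume.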

Disposability (Factor IX)


Maximize robustness with fast startup and graceful shutdown.

Health Check Endpoints


```typescript
export const createHealthRoutes = ({ db }: { db: DbPool }) => ({
  '/health': async () => ({ status: 'ok' }),
  '/ready': async () => {
    await db.query('SELECT 1');
    return { status: 'ready' };
  },
});
```

Graceful Shutdown


```typescript
const SHUTDOWN_TIMEOUT_MS = 30_000;

export const startServer = async ({ app, config }: { app: App; config: Pick<Config, 'PORT'> }) => {
  const server = app.listen(config.PORT);

  const shutdown = async (signal: 'SIGTERM' | 'SIGINT') => {
    const forceExit = setTimeout(() => process.exit(1), SHUTDOWN_TIMEOUT_MS);

    try {
      await new Promise<void>((resolve) => server.close(() => resolve()));
      await app.shutdown();
      clearTimeout(forceExit);
      process.exit(0);
    } catch (err: unknown) {
      const message = err instanceof Error ? err.message : String(err);
      const stack = err instanceof Error ? err.stack : undefined;
      console.error(JSON.stringify({ level: 'error', message: 'Shutdown error', signal, error: message, stack }));
      process.exit(1);
    }
  };

  process.on('SIGTERM', () => shutdown('SIGTERM'));
  process.on('SIGINT', () => shutdown('SIGINT'));

  return server;
};
```
Rules:
  • Handle SIGTERM and SIGINT for graceful shutdown
  • Set a drain timeout — force exit if shutdown hangs
  • Await `server.close()` to drain in-flight connections
  • Close database pools, Redis connections, and queue consumers
  • Exit with a non-zero code on shutdown failure
  • Keep startup fast — defer heavy initialization to first request if needed
  • Design background jobs to be reentrant/idempotent so interrupted work can be safely retried
  • Provide `/health` and `/ready` endpoints for orchestrator probes
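The reentrancy rule can be sketched with an idempotency check around the job body. This is a hypothetical wrapper: the `Set` stands in for a durable record of completed job IDs (e.g. a Redis `SETNX`), since in-memory state would not survive the very restart this pattern exists to handle:

```typescript
// A job interrupted by SIGTERM can be re-delivered and retried safely:
// completed job IDs are recorded, so a duplicate delivery becomes a no-op.
const createIdempotentRunner = () => {
  const completed = new Set<string>();
  return async (jobId: string, run: () => Promise<void>): Promise<'done' | 'skipped'> => {
    if (completed.has(jobId)) return 'skipped';
    await run();
    completed.add(jobId);
    return 'done';
  };
};
```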

Logs (Factor XI)


Treat logs as event streams. Write structured output to stdout. Never route or store logs from within the app.
For internet-facing servers, RFC 6302 (BCP 162) specifies minimum logging requirements: source and destination addresses and ports, timestamps (preferably UTC), and transport protocol. These should be captured at the server/framework level in addition to application-level structured logging.

Semantic Requirements


Regardless of which logging library or implementation a project uses, all loggers must satisfy these properties:
  • Structured output — logs are machine-parseable (JSON preferred), not free-form strings
  • stdout/stderr only — the app never writes to log files, never configures file transports
  • Standard levels — at minimum `debug`, `info`, `warn`, `error` — configurable via environment
  • Contextual data — logs accept structured metadata (key-value pairs), not just message strings
  • Timestamp included — every log entry includes an ISO 8601 timestamp
  • Request correlation — include a `requestId` or trace ID to correlate logs across a single request
Projects may use any logging library (pino, winston with console transport, OpenTelemetry, custom) as long as these semantics are met. The specific interface may vary per project. If an existing logger is missing levels or structured data support, it should be adapted to meet these requirements.

Example (illustrative — adapt to project conventions)


```typescript
const LOG_LEVELS = { debug: 0, info: 1, warn: 2, error: 3 } as const;

export const createLogger = ({ config }: { config: Pick<Config, 'LOG_LEVEL'> }) => {
  const shouldLog = (level: keyof typeof LOG_LEVELS) =>
    LOG_LEVELS[level] >= LOG_LEVELS[config.LOG_LEVEL];

  const log = (level: keyof typeof LOG_LEVELS, message: string, data?: Record<string, unknown>) => {
    if (!shouldLog(level)) return;
    const output = JSON.stringify({ timestamp: new Date().toISOString(), level, message, context: data });
    (level === 'error' ? console.error : console.log)(output);
  };

  return {
    debug: (message: string, data?: Record<string, unknown>) => log('debug', message, data),
    info: (message: string, data?: Record<string, unknown>) => log('info', message, data),
    warn: (message: string, data?: Record<string, unknown>) => log('warn', message, data),
    error: (message: string, data?: Record<string, unknown>) => log('error', message, data),
  };
};
```
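Request correlation (one of the semantic requirements) can be layered on any logger of this shape by wrapping its log functions so every entry carries the request's ID. A sketch — `withRequestId` is an illustrative name, and the ID would typically come from an `x-request-id` header or `crypto.randomUUID()`:

```typescript
type LogFn = (message: string, data?: Record<string, unknown>) => void;

// Wraps a log function so every entry includes the request's correlation ID.
// Caller-supplied metadata is merged in and may not override requestId... actually
// spread order below lets callers override; keep requestId last to prevent that
// if strict correlation is required.
const withRequestId = (log: LogFn, requestId: string): LogFn =>
  (message, data) => log(message, { requestId, ...data });
```

A per-request instance is created in middleware and passed down, so deep call sites never touch the raw request object.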

Logging Anti-Patterns


```typescript
// ❌ App routes its own logs to a file
import fs from 'fs';
fs.appendFileSync('/var/log/app.log', message);

// ❌ File transport: same problem, via the logging library
import winston from 'winston';
const logger = winston.createLogger({
  transports: [new winston.transports.File({ filename: 'error.log' })],
});

// ❌ Unstructured string interpolation: cannot be parsed or queried
console.log(`User ${userId} logged in`);
```
Why these are wrong: File transports mean the app is routing its own logs. Unstructured string interpolation produces logs that cannot be parsed or queried. The execution environment (container orchestrator, PaaS) captures stdout and routes it to the appropriate destination.

Build, Release, Run (Factor V)


Strictly separate build and run stages. Config is injected at release/run time, never baked into the build.
Code-level implications:
  • No environment-specific build outputs — the same build artifact deploys to every environment
  • Config comes from env vars at runtime, not from compile-time substitution
  • Releases are immutable — code changes require a new build, not runtime patching
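A container build makes the separation concrete: nothing environment-specific enters the image, so the same image serves every deploy. A sketch (base image and paths are assumptions, not a prescription):

```dockerfile
FROM node:22-slim
WORKDIR /app
COPY package.json package-lock.json ./
RUN npm ci --omit=dev
COPY dist/ ./dist/
# No config baked in here: DATABASE_URL etc. are injected at run time
# (e.g. docker run -e DATABASE_URL=...), never at build time.
CMD ["node", "dist/web.js"]
```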

Port Binding (Factor VII)


The app is self-contained and exports its service by binding to a port.
```typescript
const server = app.listen(config.PORT, () => {
  logger.info('Server started', { port: config.PORT });
});
```
Do not rely on runtime injection of a web server (e.g., a separate Apache/Nginx process serving your app). The app includes its own HTTP server library as a dependency.

Dev/Prod Parity (Factor X)


Keep development and production as similar as possible. Use the same type of backing services in all environments.
Rules:
  • If production uses PostgreSQL, develop against PostgreSQL (not SQLite)
  • If production uses Redis, develop against Redis (not in-memory maps)
  • Use containers (Docker Compose) to run backing services locally
  • Config schema validation (Factor III) catches mismatches at startup
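A short Compose file is usually enough to mirror production service types locally (image versions and names here are illustrative):

```yaml
services:
  postgres:
    image: postgres:16
    environment:
      POSTGRES_DB: myapp
      POSTGRES_HOST_AUTH_METHOD: trust   # local development only
    ports:
      - "5432:5432"
  redis:
    image: redis:7
    ports:
      - "6379:6379"
```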

Admin Processes (Factor XII)


Run admin tasks (migrations, data fixes, console sessions) as one-off processes using the same codebase and config.
```typescript
const config = createConfig();
const db = createDbPool({ connectionString: config.DATABASE_URL });
try {
  await runMigrations(db);
} finally {
  await db.end();
}
```
Admin scripts live in the repo alongside application code (e.g. `scripts/migrate.ts`). They are not separate tools or ad-hoc shell commands. Admin processes run in an identical environment to the app — same release, same config, same dependencies.

Testing 12-Factor Patterns


12-factor patterns are testable through behavior-driven tests:
  • Config: test that `createConfig` fails fast on missing required vars and applies correct defaults
  • Disposability: test that shutdown closes all connections (inject test doubles for db/cache)
  • Backing services: test that services work with any backing service URL (inject via config)
  • Statelessness: test that request handlers do not depend on prior request state
Config injection via options objects makes all of these patterns naturally testable without mocking `process.env` or global state. See the `testing` skill for factory patterns and behavior-driven test examples.

Checklist


  • One codebase per deployable service; shared code extracted as libraries
  • Same build artifact deploys to every environment (no env-specific builds)
  • All config comes from environment variables, validated at startup with a schema
  • Startup fails fast with a clear error message if config is invalid
  • `.env.example` documents required variables (no real credentials)
  • All dependencies explicitly declared in manifest with lockfile committed
  • Backing services connected via config URLs, swappable without code changes
  • No in-memory session state, no local filesystem state between requests
  • Separate entry points for web and worker process types
  • SIGTERM/SIGINT handlers with drain timeout for graceful shutdown
  • Database pools and connections closed on shutdown
  • `/health` and `/ready` endpoints for orchestrator probes
  • Logs written as structured JSON to stdout, no file transports
  • Logs include request correlation IDs
  • App binds to a port from config, includes its own HTTP server
  • Same backing service types used in development and production
  • Admin scripts live in the repo and use the same config/dependencies