
Open Multi-Agent Orchestration

Skill by ara.so, from the Daily 2026 Skills collection. `open-multi-agent` is a TypeScript framework for building AI agent teams in which agents with different roles, models, and tools collaborate on complex goals. The framework handles task dependency resolution (DAG scheduling), parallel execution, shared memory, and inter-agent communication, all in-process with no subprocess overhead.

Installation

```bash
npm install @jackchen_me/open-multi-agent
```

or

```bash
pnpm add @jackchen_me/open-multi-agent
```

Set environment variables:

```bash
export ANTHROPIC_API_KEY=your_key_here
export OPENAI_API_KEY=your_key_here   # optional, only if using OpenAI models
```

Core Concepts

| Concept | Description |
| --- | --- |
| `OpenMultiAgent` | Top-level orchestrator; the entry point for all operations |
| `Team` | A named group of agents sharing a message bus, task queue, and optional shared memory |
| `AgentConfig` | Defines an agent's name, model, provider, system prompt, and allowed tools |
| `Task` | A unit of work with a title, description, assignee, and optional `dependsOn` list |
| `LLMAdapter` | Pluggable interface; built-in adapters for Anthropic and OpenAI |
| `ToolRegistry` | Registry of available tools; built-ins plus custom tools via `defineTool()` |

Quick Start — Single Agent

```typescript
import { OpenMultiAgent } from '@jackchen_me/open-multi-agent'

const orchestrator = new OpenMultiAgent({ defaultModel: 'claude-sonnet-4-6' })

const result = await orchestrator.runAgent(
  {
    name: 'coder',
    model: 'claude-sonnet-4-6',
    tools: ['bash', 'file_write'],
  },
  'Write a TypeScript function that reverses a string, save it to /tmp/reverse.ts, and run it.',
)

console.log(result.output)
```

Multi-Agent Team

```typescript
import { OpenMultiAgent } from '@jackchen_me/open-multi-agent'
import type { AgentConfig } from '@jackchen_me/open-multi-agent'

const architect: AgentConfig = {
  name: 'architect',
  model: 'claude-sonnet-4-6',
  systemPrompt: 'You design clean API contracts and file structures.',
  tools: ['file_write'],
}

const developer: AgentConfig = {
  name: 'developer',
  model: 'claude-sonnet-4-6',
  systemPrompt: 'You implement what the architect designs.',
  tools: ['bash', 'file_read', 'file_write', 'file_edit'],
}

const reviewer: AgentConfig = {
  name: 'reviewer',
  model: 'claude-sonnet-4-6',
  systemPrompt: 'You review code for correctness and clarity.',
  tools: ['file_read', 'grep'],
}

const orchestrator = new OpenMultiAgent({
  defaultModel: 'claude-sonnet-4-6',
  onProgress: (event) => console.log(event.type, event.agent ?? event.task ?? ''),
})

const team = orchestrator.createTeam('api-team', {
  name: 'api-team',
  agents: [architect, developer, reviewer],
  sharedMemory: true,
})

const result = await orchestrator.runTeam(
  team,
  'Create a REST API for a todo list in /tmp/todo-api/',
)

console.log(`Success: ${result.success}`)
console.log(`Output tokens: ${result.totalTokenUsage.output_tokens}`)
```

Task Pipeline — Explicit DAG Control

Use `runTasks()` when you need precise control over task ordering, assignments, and parallelism:

```typescript
const result = await orchestrator.runTasks(team, [
  {
    title: 'Design the data model',
    description: 'Write a TypeScript interface spec to /tmp/spec.md',
    assignee: 'architect',
  },
  {
    title: 'Implement the module',
    description: 'Read /tmp/spec.md and implement the module in /tmp/src/',
    assignee: 'developer',
    dependsOn: ['Design the data model'], // blocked until design completes
  },
  {
    title: 'Write tests',
    description: 'Read the implementation and write Vitest tests.',
    assignee: 'developer',
    dependsOn: ['Implement the module'],
  },
  {
    title: 'Review code',
    description: 'Review /tmp/src/ and produce a structured code review.',
    assignee: 'reviewer',
    dependsOn: ['Implement the module'], // runs in parallel with "Write tests"
  },
])
```

Tasks with no unresolved `dependsOn` entries run in parallel automatically. The framework cascades failures: if a task fails, its dependent tasks are skipped.
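That scheduling behavior can be sketched independently of the framework: repeatedly collect tasks whose dependencies have all succeeded, run that wave in parallel, and skip anything left behind a failure. A minimal, framework-independent sketch (the task shape and status names here are illustrative, not the library's internal API):

```typescript
type SketchTask = { title: string; dependsOn?: string[] }
type Status = 'done' | 'failed' | 'skipped'

// Simulate DAG scheduling: execute ready tasks in waves, cascade failures to dependents.
function schedule(tasks: SketchTask[], failing: Set<string>): Map<string, Status> {
  const status = new Map<string, Status>()
  let progressed = true
  while (progressed) {
    // A task is ready when it has no status yet and every dependency succeeded.
    const ready = tasks.filter(
      (t) => !status.has(t.title) && (t.dependsOn ?? []).every((d) => status.get(d) === 'done'),
    )
    progressed = ready.length > 0
    // In the real framework the whole wave runs concurrently; here we just mark results.
    for (const t of ready) status.set(t.title, failing.has(t.title) ? 'failed' : 'done')
  }
  // Anything never scheduled sits behind a failed dependency: skip it.
  for (const t of tasks) if (!status.has(t.title)) status.set(t.title, 'skipped')
  return status
}

const result = schedule(
  [
    { title: 'design' },
    { title: 'implement', dependsOn: ['design'] },
    { title: 'test', dependsOn: ['implement'] },
    { title: 'review', dependsOn: ['implement'] },
  ],
  new Set(['implement']),
)
// 'design' is done; 'implement' failed; 'test' and 'review' are skipped.
```

This mirrors the four-task pipeline above: a failure in the implement step skips both of its dependents while completed upstream work is preserved.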

Multi-Model Teams (Claude + GPT)

```typescript
const claudeAgent: AgentConfig = {
  name: 'strategist',
  model: 'claude-opus-4-6',
  provider: 'anthropic',
  systemPrompt: 'You plan high-level approaches.',
  tools: ['file_write'],
}

const gptAgent: AgentConfig = {
  name: 'implementer',
  model: 'gpt-5.4',
  provider: 'openai',
  systemPrompt: 'You implement plans as working code.',
  tools: ['bash', 'file_read', 'file_write'],
}

const team = orchestrator.createTeam('mixed-team', {
  name: 'mixed-team',
  agents: [claudeAgent, gptAgent],
  sharedMemory: true,
})

const result = await orchestrator.runTeam(team, 'Build a CLI tool that converts JSON to CSV.')
```

Custom Tools with Zod Schemas

```typescript
import { z } from 'zod'
import {
  defineTool,
  Agent,
  ToolRegistry,
  ToolExecutor,
  registerBuiltInTools,
} from '@jackchen_me/open-multi-agent'

// Define the tool
const weatherTool = defineTool({
  name: 'get_weather',
  description: 'Get current weather for a city.',
  inputSchema: z.object({
    city: z.string().describe('The city name.'),
    units: z.enum(['celsius', 'fahrenheit']).optional().describe('Temperature units.'),
  }),
  execute: async ({ city, units = 'celsius' }) => {
    // Replace with your actual weather API call
    const data = await fetchWeatherAPI(city, units)
    return { data: JSON.stringify(data), isError: false }
  },
})

// Wire up registry
const registry = new ToolRegistry()
registerBuiltInTools(registry)        // adds bash, file_read, file_write, file_edit, grep
registry.register(weatherTool)        // add your custom tool

const executor = new ToolExecutor(registry)
const agent = new Agent(
  {
    name: 'weather-agent',
    model: 'claude-sonnet-4-6',
    tools: ['get_weather', 'file_write'],
  },
  registry,
  executor,
)

const result = await agent.run('Get the weather for Tokyo and save a report to /tmp/weather.txt')
```

Streaming Output

```typescript
import { Agent, ToolRegistry, ToolExecutor, registerBuiltInTools } from '@jackchen_me/open-multi-agent'

const registry = new ToolRegistry()
registerBuiltInTools(registry)
const executor = new ToolExecutor(registry)

const agent = new Agent(
  { name: 'writer', model: 'claude-sonnet-4-6', maxTurns: 3 },
  registry,
  executor,
)

for await (const event of agent.stream('Explain dependency injection in two paragraphs.')) {
  if (event.type === 'text' && typeof event.data === 'string') {
    process.stdout.write(event.data)
  }
}
```

Progress Monitoring

```typescript
const orchestrator = new OpenMultiAgent({
  defaultModel: 'claude-sonnet-4-6',
  onProgress: (event) => {
    switch (event.type) {
      case 'task:start':
        console.log(`▶ Task started: ${event.task}`)
        break
      case 'task:complete':
        console.log(`✓ Task done: ${event.task}`)
        break
      case 'task:failed':
        console.error(`✗ Task failed: ${event.task}`)
        break
      case 'agent:thinking':
        console.log(`  [${event.agent}] thinking...`)
        break
      case 'agent:tool_use':
        console.log(`  [${event.agent}] using tool: ${event.tool}`)
        break
    }
  },
})
```

Built-in Tools Reference

| Tool | Key Options | Notes |
| --- | --- | --- |
| `bash` | `command`, `timeout`, `cwd` | Returns stdout + stderr |
| `file_read` | `path`, `offset`, `limit` | Use `offset`/`limit` for large files |
| `file_write` | `path`, `content` | Auto-creates parent directories |
| `file_edit` | `path`, `old_string`, `new_string` | Exact string match replacement |
| `grep` | `pattern`, `path`, `flags` | Uses ripgrep if available, falls back to Node.js |
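The exact-match semantics of `file_edit` are worth internalizing: the edit targets one literal occurrence of `old_string`, with no regex or fuzzy matching. A rough in-memory sketch of that behavior (rejecting a missing match is an assumption about the tool here, not confirmed library behavior):

```typescript
// Illustrative only: exact string match replacement over file content in memory.
function exactEdit(content: string, oldString: string, newString: string): string {
  const index = content.indexOf(oldString)
  // Assumed safeguard: refuse the edit if the literal string is absent.
  if (index === -1) throw new Error('old_string not found; edit rejected')
  // Replace the first exact occurrence and leave everything else untouched.
  return content.slice(0, index) + newString + content.slice(index + oldString.length)
}
```

The practical consequence: include enough surrounding context in `old_string` to make the match unambiguous.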

AgentConfig Options

```typescript
interface AgentConfig {
  name: string                    // unique within a team
  model: string                   // e.g. 'claude-sonnet-4-6', 'gpt-5.4'
  provider?: 'anthropic' | 'openai'  // inferred from the model name if omitted
  systemPrompt?: string           // the agent's persona and instructions
  tools?: string[]                // names of tools the agent can use
  maxTurns?: number               // max conversation turns (default: unlimited)
}
```
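The `provider` field is inferred from the model name when omitted. The exact heuristic is not documented here; a plausible sketch of such inference (the prefix rules below are an assumption, not the library's verified logic):

```typescript
// Hypothetical prefix-based inference from model name to provider.
function inferProvider(model: string): 'anthropic' | 'openai' {
  if (model.startsWith('claude')) return 'anthropic'
  if (model.startsWith('gpt')) return 'openai'
  throw new Error(`cannot infer provider for model '${model}'; set provider explicitly`)
}
```

Under rules like these, `claude-sonnet-4-6` resolves to `anthropic` and `gpt-5.4` to `openai`; any other model name would need an explicit `provider`.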

Custom LLM Adapter

Implement two methods to add any LLM provider:

```typescript
import type {
  LLMAdapter,
  ChatMessage,
  ChatOptions,
  ChatResponse,
  StreamEvent,
} from '@jackchen_me/open-multi-agent'

class OllamaAdapter implements LLMAdapter {
  async chat(messages: ChatMessage[], options?: ChatOptions): Promise<ChatResponse> {
    // Ollama's /api/chat streams by default, so disable streaming for a single response
    const response = await fetch('http://localhost:11434/api/chat', {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ model: options?.model ?? 'llama3', messages, stream: false }),
    })
    const data = await response.json()
    return {
      content: data.message.content,
      usage: { input_tokens: 0, output_tokens: 0 },
    }
  }

  async *stream(messages: ChatMessage[], options?: ChatOptions): AsyncIterable<StreamEvent> {
    // implement streaming from Ollama's /api/chat with stream: true
  }
}
```

Common Patterns

Pattern: Research → Write → Review pipeline

```typescript
const team = orchestrator.createTeam('content-team', {
  name: 'content-team',
  agents: [
    { name: 'researcher', model: 'claude-sonnet-4-6', tools: ['bash', 'file_write'] },
    { name: 'writer', model: 'claude-sonnet-4-6', tools: ['file_read', 'file_write'] },
    { name: 'editor', model: 'claude-sonnet-4-6', tools: ['file_read', 'file_edit'] },
  ],
  sharedMemory: true,
})

await orchestrator.runTasks(team, [
  {
    title: 'Research topic',
    description: 'Research TypeScript 5.6 features, save findings to /tmp/research.md',
    assignee: 'researcher',
  },
  {
    title: 'Write article',
    description: 'Read /tmp/research.md and write a blog post to /tmp/article.md',
    assignee: 'writer',
    dependsOn: ['Research topic'],
  },
  {
    title: 'Edit article',
    description: 'Read /tmp/article.md and improve clarity and tone in-place',
    assignee: 'editor',
    dependsOn: ['Write article'],
  },
])
```

Pattern: Fan-out then merge

```typescript
// Three agents work on separate modules in parallel, then one integrates
await orchestrator.runTasks(team, [
  { title: 'Build auth module', assignee: 'dev-1', description: '...' },
  { title: 'Build data module', assignee: 'dev-2', description: '...' },
  { title: 'Build api module',  assignee: 'dev-3', description: '...' },
  {
    title: 'Integrate modules',
    assignee: 'architect',
    description: 'Wire auth, data, and api modules together.',
    dependsOn: ['Build auth module', 'Build data module', 'Build api module'],
  },
])
```

Troubleshooting

**`ANTHROPIC_API_KEY` not found.** Ensure the env var is exported in the shell running your script, or use a `.env` loader such as `dotenv` before importing from the framework.

**Tasks not running in parallel.** Check that tasks don't share a circular `dependsOn` chain. Only tasks with all dependencies resolved become eligible for parallel execution.

**Agent exceeds token limit.** Set `maxTurns` on the `AgentConfig` to cap conversation length. For large file operations, use `file_read` with `offset`/`limit` instead of reading entire files.

**Tool not found error.** Ensure the tool name in `AgentConfig.tools[]` exactly matches the name registered in the `ToolRegistry`. Built-in tools are registered via `registerBuiltInTools(registry)`.

**OpenAI adapter not initializing.** `OPENAI_API_KEY` must be set when any agent uses `provider: 'openai'`. The framework initializes the adapter lazily but throws if the key is missing at first use.

**Type errors with `defineTool`.** Ensure `zod` is installed as a direct dependency (`npm install zod`); the framework uses Zod for schema validation but does not re-export it.
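For the parallelism issue above, a circular `dependsOn` chain can be caught before submitting tasks with a standard topological check. A minimal framework-independent sketch (the task shape is illustrative):

```typescript
type TaskSpec = { title: string; dependsOn?: string[] }

// Kahn-style elimination: if we cannot order every task, a dependency cycle exists.
function findCycle(tasks: TaskSpec[]): string[] {
  const remaining = new Map(
    tasks.map((t): [string, Set<string>] => [t.title, new Set(t.dependsOn ?? [])]),
  )
  let changed = true
  while (changed) {
    changed = false
    for (const [title, deps] of remaining) {
      if (deps.size === 0) {
        remaining.delete(title)                          // task is schedulable
        for (const d of remaining.values()) d.delete(title) // unblock its dependents
        changed = true
      }
    }
  }
  return [...remaining.keys()] // titles stuck in a cycle (empty if the DAG is valid)
}
```

If `findCycle` returns a non-empty list, restructure those tasks before calling `runTasks()`.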