
# CrewAI Getting Started & Architecture


How to choose the right abstraction, scaffold a project, and wire everything together.


## MANDATORY WORKFLOW — Read This First


**NEVER manually create crewAI project files.** Always scaffold with the CLI:

```bash
crewai create flow <project_name>
```

This is not optional. Even if you only need one crew, even if you know the file structure by heart — run the CLI first, then modify the generated files. Do NOT write `main.py`, `crew.py`, `agents.yaml`, `tasks.yaml`, or `pyproject.toml` by hand from scratch.

**Why:** The CLI sets up correct imports, directory structure, `pyproject.toml` config, and boilerplate that is easy to get subtly wrong when done manually. The reference material below teaches you how the pieces work so you can modify scaffolded code, not so you can replace the scaffolding step.

**Workflow:**

1. Run `crewai create flow <name>` (use underscores, not hyphens)
2. Edit the generated YAML and Python files to match your use case
3. Run `crewai install`, then `crewai run`


## 1. Choosing the Right Abstraction


crewAI has four levels of abstraction. Pick the simplest one that fits your need:

| Level | When to Use | Overhead | Example |
|---|---|---|---|
| `LLM.call()` | Single prompt, no tools, structured extraction | Lowest | Parse an email into fields |
| `Agent.kickoff()` | One agent with tools and reasoning, no multi-agent coordination | Low | Research a topic with web search |
| `Crew.kickoff()` | Multiple agents collaborating on related tasks | Medium | Research + write + review pipeline |
| `Flow` wrapping crews/agents/LLM calls | Production app with state, routing, conditionals, error handling | Full | Multi-step workflow with branching logic |

### Decision Flowchart


```
Do you need tools or multi-step reasoning?
├── No  → LLM.call()
└── Yes
    └── Do you need multiple agents collaborating?
        ├── No  → Agent.kickoff()
        └── Yes
            └── Do you need state management, routing, or multiple crews?
                ├── No  → Crew (but still scaffold as a Flow for future-proofing)
                └── Yes → Flow + Crew(s)
```

**Rule of thumb:** For any production application, always start with a Flow. You can embed `LLM.call()`, `Agent.kickoff()`, or `Crew.kickoff()` inside Flow steps. This gives you state management, error handling, and room to grow.


## 2. LLM.call() — Direct LLM Invocation


Use for simple, single-turn tasks where you don't need tools or agent reasoning.

```python
from crewai import LLM
from pydantic import BaseModel

class EmailFields(BaseModel):
    sender: str
    subject: str
    urgency: str

llm = LLM(model="openai/gpt-4o")
```

### Without response_format — returns a string


```python
raw = llm.call(messages=[{"role": "user", "content": "Summarize this text..."}])
print(raw)  # str
```

### With response_format — returns the Pydantic object directly


```python
result = llm.call(
    messages=[{"role": "user", "content": f"Extract fields from this email: {email_text}"}],
    response_format=EmailFields,
)
print(result.sender)   # str — access Pydantic fields directly
print(result.urgency)  # str
```
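Structured output is driven by the model's JSON schema — the kind of schema structured-output backends consume. If you're curious what `EmailFields` asks the LLM to match, you can inspect it with plain Pydantic (v2's `model_json_schema`; this inspection is optional and not a crewAI API):

```python
from pydantic import BaseModel

class EmailFields(BaseModel):
    sender: str
    subject: str
    urgency: str

# Every field becomes a typed property, and all fields without
# defaults land in "required" — this is what constrains the LLM output.
schema = EmailFields.model_json_schema()
print(sorted(schema["properties"]))  # ['sender', 'subject', 'urgency']
print(schema["required"])            # ['sender', 'subject', 'urgency']
```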

**When NOT to use:** If you need tools, multi-step reasoning, or retries — use an Agent instead.

---

## 3. Agent.kickoff() — Single Agent Execution


Use when you need one agent with tools and reasoning, but don't need multi-agent coordination.

```python
from crewai import Agent
from crewai_tools import SerperDevTool
from pydantic import BaseModel

class ResearchFindings(BaseModel):
    main_points: list[str]
    key_technologies: list[str]

researcher = Agent(
    role="AI Researcher",
    goal="Research the latest AI developments",
    backstory="Expert AI researcher with deep technical knowledge.",
    llm="openai/gpt-4o",       # Optional: defaults to OPENAI_MODEL_NAME env var or "gpt-4"
    tools=[SerperDevTool()],
)
```

### Unstructured output


```python
result = researcher.kickoff("What are the latest LLM developments?")
print(result.raw)            # str
print(result.usage_metrics)  # token usage
```

### Structured output with response_format


```python
result = researcher.kickoff(
    "Summarize latest AI developments",
    response_format=ResearchFindings,
)
print(result.pydantic.main_points)
```

> **Note:** `Agent.kickoff()` wraps results — access structured output via `result.pydantic`. This differs from `LLM.call()`, which returns the Pydantic object directly.

**When NOT to use:** If you need multiple agents passing context to each other — use a Crew.

---

## 4. CLI Scaffold Reference


As stated above: **NEVER skip `crewai create flow`.** This section documents what the CLI generates so you know what to modify — not so you can recreate it by hand.

```bash
crewai create flow my_project
```

**Warning:** Always use underscores in project names, not hyphens. `crewai create flow my-project` creates a directory that is not a valid Python identifier, causing `ModuleNotFoundError` on import. Use `my_project` instead.

This generates:

```
my_project/
├── src/my_project/
│   ├── crews/
│   │   └── my_crew/
│   │       ├── config/
│   │       │   ├── agents.yaml    # Agent definitions (role, goal, backstory)
│   │       │   └── tasks.yaml     # Task definitions (description, expected_output)
│   │       └── my_crew.py         # Crew class with @CrewBase
│   ├── tools/
│   │   └── custom_tool.py
│   ├── main.py                    # Flow class with @start/@listen
│   └── ...
├── .env                           # API keys (OPENAI_API_KEY, etc.)
└── pyproject.toml
```

Do not use `crewai create crew` unless you are certain you will never need routing, state, or multiple crews. Prefer `crewai create flow` as the default.
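The hyphen failure mode comes down to Python's identifier rules: `src/<name>/` must be importable as a package. A quick sanity check you can run yourself (`valid_project_name` is a hypothetical helper, not part of the crewAI CLI):

```python
def valid_project_name(name: str) -> bool:
    # src/<name>/ must be importable, so the directory name
    # has to be a valid Python identifier (no hyphens, no leading digit).
    return name.isidentifier()

print(valid_project_name("my_project"))  # True
print(valid_project_name("my-project"))  # False — hyphens break imports
print(valid_project_name("123agents"))   # False — can't start with a digit
```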


## 5. YAML Configuration (agents.yaml & tasks.yaml)


The scaffold uses YAML files for agent and task definitions. This separates configuration from code and supports `{variable}` interpolation.

### agents.yaml


```yaml
researcher:
  role: >
    {topic} Senior Data Researcher
  goal: >
    Uncover cutting-edge developments in {topic}
  backstory: >
    You're a seasoned researcher with a knack for uncovering
    the latest developments in {topic}.
  # Optional overrides:
  # llm: openai/gpt-4o
  # max_iter: 20
  # max_rpm: 10

reporting_analyst:
  role: >
    {topic} Reporting Analyst
  goal: >
    Create detailed reports based on {topic} research findings
  backstory: >
    You're a meticulous analyst known for turning complex data
    into clear, actionable reports.
```

### tasks.yaml


```yaml
research_task:
  description: >
    Conduct thorough research about {topic}.
    Identify key trends, breakthrough technologies,
    and potential industry impacts.
  expected_output: >
    A detailed report with analysis of the top 5
    developments in {topic}, with sources and implications.
  agent: researcher

reporting_task:
  description: >
    Review the research and create a comprehensive report about {topic}.
  expected_output: >
    A polished report formatted in markdown with sections
    for each key finding.
  agent: reporting_analyst
  output_file: output/report.md
```

**Key rules:**

- `{variable}` placeholders are replaced at runtime via `crew.kickoff(inputs={...})`
- `expected_output` is always a string (never a Pydantic class name)
- `agent` value must match an agent key in `agents.yaml`
- In `Process.sequential`, each task auto-receives all prior task outputs as context
- For non-sequential deps, use `context=[other_task]` to explicitly pass output
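The first rule behaves much like Python's `str.format` applied to the YAML strings — an analogy for illustration, not crewAI's actual implementation:

```python
# Task description as it appears in tasks.yaml (placeholder still literal)
description = "Conduct thorough research about {topic}."

# At crew.kickoff(inputs={...}), each placeholder is filled from inputs
inputs = {"topic": "AI Agents"}
print(description.format(**inputs))  # Conduct thorough research about AI Agents.

# Where the analogy diverges: str.format raises KeyError on a missing key,
# while crewAI leaves the literal {placeholder} in the prompt (see section 8).
```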


## 6. Wiring It Together — crew.py


The `@CrewBase` decorator auto-loads YAML config files and collects `@agent` and `@task` methods.

```python
from crewai import Agent, Crew, Process, Task
from crewai.project import CrewBase, agent, crew, task
from crewai_tools import SerperDevTool

@CrewBase
class ResearchCrew:
    """Research and reporting crew."""

    agents_config = "config/agents.yaml"
    tasks_config = "config/tasks.yaml"

    @agent
    def researcher(self) -> Agent:
        return Agent(
            config=self.agents_config["researcher"],
            tools=[SerperDevTool()],
        )

    @agent
    def reporting_analyst(self) -> Agent:
        return Agent(
            config=self.agents_config["reporting_analyst"],
        )

    @task
    def research_task(self) -> Task:
        return Task(config=self.tasks_config["research_task"])

    @task
    def reporting_task(self) -> Task:
        return Task(
            config=self.tasks_config["reporting_task"],
            context=[self.research_task()],  # Explicit dependency (optional in sequential)
            output_file="output/report.md",
        )

    @crew
    def crew(self) -> Crew:
        return Crew(
            agents=self.agents,  # auto-collected by @agent
            tasks=self.tasks,    # auto-collected by @task
            process=Process.sequential,
            verbose=True,
        )
```

**Important:** Method names must match YAML keys. `def researcher(self)` maps to the `researcher:` key in `agents.yaml`.


## 7. Flows — The Production Foundation


Flows are the recommended way to build production crewAI applications. They provide state management, conditional routing, human-in-the-loop, and persistence — wrapping crews, agents, and LLM calls into a coherent workflow.

### Basic Flow — main.py


```python
from crewai.flow.flow import Flow, listen, start
from pydantic import BaseModel
from .crews.research_crew.research_crew import ResearchCrew

class ResearchState(BaseModel):
    topic: str = ""
    report: str = ""

class ResearchFlow(Flow[ResearchState]):

    @start()
    def begin(self):
        print(f"Starting research on: {self.state.topic}")

    @listen(begin)
    def run_research(self):
        result = ResearchCrew().crew().kickoff(
            inputs={"topic": self.state.topic}
        )
        self.state.report = result.raw

def kickoff():
    flow = ResearchFlow()
    flow.kickoff(inputs={"topic": "AI Agents"})

if __name__ == "__main__":
    kickoff()
```

**Key points:**

- `flow.kickoff(inputs={"topic": "AI Agents"})` populates `self.state.topic` (keys must match Pydantic field names). The YAML `{variable}` substitution happens later, when you call `crew.kickoff(inputs={"topic": self.state.topic})` inside a Flow step. The chain is: flow inputs → state → crew inputs → YAML substitution.
- Each `@listen` method runs after its dependency completes
- State persists across all Flow steps — use it to pass data between crews

### State Management — Structured vs Unstructured


**Structured (recommended for production):**

```python
from pydantic import BaseModel

class MyState(BaseModel):
    topic: str = ""
    research: str = ""
    draft: str = ""
    approved: bool = False

class MyFlow(Flow[MyState]):
    ...
```

**Unstructured (quick prototyping):**

```python
class MyFlow(Flow):  # No type parameter — state is a dict
    @start()
    def begin(self):
        self.state["topic"] = "AI"  # dict-style access
```

Use structured state for type safety, IDE autocompletion, and validation. Use unstructured only for throwaway prototypes.
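The validation benefit is concrete: a structured state model rejects ill-typed values immediately instead of carrying them forward silently. A minimal sketch with plain Pydantic (`DemoState` is a hypothetical stand-in for your Flow state):

```python
from pydantic import BaseModel, ValidationError

class DemoState(BaseModel):
    topic: str = ""
    approved: bool = False

state = DemoState(topic="AI", approved=True)  # validated on construction
print(state.approved)  # True

try:
    DemoState(approved=[1, 2])  # a list is not a bool — rejected immediately
except ValidationError:
    print("rejected bad value")
```

With dict-style state, the same mistake would sit in `self.state` until some downstream step crashed on it.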

### Using Agent.kickoff() Inside Flows (Common Pattern)


Many production Flows skip Crews entirely and orchestrate individual agents via `Agent.kickoff()`. This gives you fine-grained control — each Flow step calls a specific agent, passes state, and stores the result. The Flow handles orchestration; agents handle reasoning.

```python
from crewai import Agent, LLM
from crewai.flow.flow import Flow, listen, start
from crewai_tools import SerperDevTool, ScrapeWebsiteTool
from pydantic import BaseModel

class ResearchState(BaseModel):
    query: str = ""
    raw_research: str = ""
    analysis: str = ""
    report: str = ""

class DeepResearchFlow(Flow[ResearchState]):

    @start()
    def gather_research(self):
        """Agent with tools does the actual searching."""
        researcher = Agent(
            role="Senior Research Analyst",
            goal="Find comprehensive, factual information about the given topic",
            backstory="You're an expert researcher who always cites sources and flags uncertainty.",
            tools=[SerperDevTool(), ScrapeWebsiteTool()],
            llm="openai/gpt-4o",
        )
        result = researcher.kickoff(
            f"Research this topic thoroughly: {self.state.query}"
        )
        self.state.raw_research = result.raw

    @listen(gather_research)
    def analyze_findings(self):
        """A different agent analyzes the raw research — no tools needed."""
        analyst = Agent(
            role="Data Analyst",
            goal="Extract key insights, patterns, and actionable recommendations",
            backstory="You turn raw data into clear, structured analysis.",
            llm="openai/gpt-4o",
        )
        result = analyst.kickoff(
            f"Analyze these research findings and extract key insights:\n\n{self.state.raw_research}"
        )
        self.state.analysis = result.raw

    @listen(analyze_findings)
    def write_report(self):
        """A writer agent produces the final deliverable."""
        writer = Agent(
            role="Technical Writer",
            goal="Produce clear, actionable reports for non-technical readers",
            backstory="You specialize in making complex information accessible.",
            llm="openai/gpt-4o",
        )
        result = writer.kickoff(
            f"Write a comprehensive report based on this analysis:\n\n{self.state.analysis}"
        )
        self.state.report = result.raw
```

**Why this pattern works well:**

- Each agent is purpose-built for its step — narrow role, specific tools
- The Flow manages state and sequencing — no crew overhead
- Easy to add routing, human review, or retry logic between steps
- You can mix `Agent.kickoff()`, `LLM.call()`, and `Crew.kickoff()` freely

**When to use Agent.kickoff() vs Crew.kickoff() in a Flow:**

| Use `Agent.kickoff()` when | Use `Crew.kickoff()` when |
|---|---|
| Each step is a distinct agent with different tools | Multiple agents need to collaborate on ONE task |
| You want the Flow to control sequencing | Agents need to pass context to each other within a step |
| Steps are independent and don't need inter-agent delegation | You need hierarchical process with a manager |
| You want maximum control over what data flows between steps | The sub-workflow is self-contained and reusable |

### Agent.kickoff() with Structured Output in Flows


Combine `response_format` with state for typed data flow between agents:

```python
from crewai import Agent
from crewai.flow.flow import Flow, start
from crewai_tools import SerperDevTool
from pydantic import BaseModel

class Insights(BaseModel):
    key_points: list[str]
    recommendations: list[str]
    confidence: float

class AnalysisFlow(Flow[AnalysisState]):

    @start()
    def research(self):
        researcher = Agent(role="Researcher", goal="...", backstory="...", tools=[SerperDevTool()])
        result = researcher.kickoff(
            f"Research {self.state.topic}",
            response_format=Insights,
        )
        # result.pydantic gives you the typed Insights object
        self.state.key_points = result.pydantic.key_points
        self.state.recommendations = result.pydantic.recommendations
```

### Mixing Abstractions in a Flow


A Flow can combine all crewAI abstractions in a single workflow:

```python
from crewai import Agent, LLM
from crewai.flow.flow import Flow, listen, router, start

class ProductFlow(Flow[ProductState]):

    @start()
    def classify_request(self):
        # LLM.call() for simple classification
        llm = LLM(model="openai/gpt-4o")
        self.state.category = llm.call(
            messages=[{"role": "user", "content": f"Classify: {self.state.request}"}],
            response_format=Category
        ).category

    @router(classify_request)
    def route_by_category(self):
        if self.state.category == "simple":
            return "quick_answer"
        return "deep_research"

    @listen("quick_answer")
    def handle_simple(self):
        # Agent.kickoff() for single-agent work
        agent = Agent(role="Helper", goal="Answer quickly", backstory="...")
        result = agent.kickoff(self.state.request)
        self.state.answer = result.raw

    @listen("deep_research")
    def handle_complex(self):
        # Crew.kickoff() for multi-agent collaboration
        result = ResearchCrew().crew().kickoff(
            inputs={"topic": self.state.request}
        )
        self.state.answer = result.raw
```

### Flow Routing with `@router`


Use `@router` for conditional branching — return a string label, and `@listen("label")` binds to branches:

```python
from crewai.flow.flow import Flow, listen, router, start, or_

class QualityFlow(Flow[QAState]):

    @start()
    def generate_content(self):
        result = WriterCrew().crew().kickoff(inputs={"topic": self.state.topic})
        self.state.draft = result.raw

    @router(generate_content)
    def check_quality(self):
        llm = LLM(model="openai/gpt-4o")
        score = llm.call(
            messages=[{"role": "user", "content": f"Rate 1-10: {self.state.draft}"}],
            response_format=QualityScore
        )
        if score.rating >= 7:
            return "approved"
        return "needs_revision"

    @listen("approved")
    def publish(self):
        self.state.published = True

    @listen("needs_revision")
    def revise(self):
        result = EditorCrew().crew().kickoff(
            inputs={"draft": self.state.draft}
        )
        self.state.draft = result.raw
```

### Converging Branches with `or_()` and `and_()`


```python
from crewai.flow.flow import Flow, listen, start, or_, and_

class ParallelFlow(Flow[MyState]):

    @start()
    def fetch_data_a(self):
        ...

    @start()
    def fetch_data_b(self):
        ...

    # Runs when BOTH fetches complete
    @listen(and_(fetch_data_a, fetch_data_b))
    def merge_results(self):
        ...

    # Runs when EITHER source provides data
    @listen(or_(fetch_data_a, fetch_data_b))
    def process_first_available(self):
        ...
```

### Flow Persistence with `@persist`

For long-running workflows that need to survive restarts:

```python
from crewai.flow.flow import Flow, start, listen, persist
from crewai.flow.persistence import SQLiteFlowPersistence

@persist(SQLiteFlowPersistence())  # Class-level: persists all methods
class LongRunningFlow(Flow[MyState]):

    @start()
    def step_one(self):
        self.state.data = "processed"

    @listen(step_one)
    def step_two(self):
        # If the process crashes here, restarting with the same
        # state ID will resume from after step_one
        ...
```

### Human-in-the-Loop with `@human_feedback`


```python
from crewai.flow.flow import Flow, start, listen, router
from crewai.flow.human_feedback import human_feedback

class ApprovalFlow(Flow[ReviewState]):

    @start()
    def generate_draft(self):
        result = WriterCrew().crew().kickoff(inputs={"topic": self.state.topic})
        self.state.draft = result.raw

    @human_feedback(
        message="Review the draft and provide feedback",
        emit=["approved", "needs_revision"],
        llm="openai/gpt-4o",
        default_outcome="approved"
    )
    @listen(generate_draft)
    def review_step(self):
        return self.state.draft

    @listen("approved")
    def publish(self):
        ...

    @listen("needs_revision")
    def revise(self):
        feedback = self.last_human_feedback
        # Use feedback.feedback_text for revision
        ...
```

Flow Visualization


```python
flow = MyFlow()
flow.plot()             # Display in notebook
flow.plot("my_flow")    # Save as my_flow.html
```


8. Variable Interpolation with `inputs`

The `{variable}` pattern is how you make crews reusable.

Variables flow through: kickoff → YAML templates → agent/task prompts


```python
crew.kickoff(inputs={
    "topic": "AI Agents",
    "current_year": "2025",
    "target_audience": "developers",
})
```

In YAML, `{topic}` and `{current_year}` get replaced:

```yaml
research_task:
  description: >
    Research {topic} trends for {current_year},
    targeting {target_audience}.
```
Common mistakes:
  • Forgetting to pass a variable that's referenced in YAML → results in literal `{variable}` in the prompt
  • Using Jinja2 syntax `{{ }}` instead of single-brace `{ }` → crewAI uses single braces
  • Passing variables that don't match any YAML placeholder → silently ignored
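The three failure modes above can be seen in a minimal sketch of single-brace substitution. This approximates the observable behavior; it is not crewAI's actual implementation:

```python
import re

def interpolate(template: str, inputs: dict) -> str:
    # Replace {name} with inputs["name"]; unknown placeholders stay literal,
    # and extra inputs are silently ignored
    return re.sub(r"\{(\w+)\}",
                  lambda m: str(inputs.get(m.group(1), m.group(0))),
                  template)

desc = "Research {topic} trends for {current_year}, targeting {target_audience}."
result = interpolate(desc, {"topic": "AI Agents", "current_year": "2025", "unused": "x"})
print(result)
# target_audience was never passed, so the literal {target_audience} survives
```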


9. Running Your Project




Install dependencies:

```bash
crewai install
```

Run the flow:

```bash
crewai run
```

Or run directly:

```bash
cd my_project
uv run src/my_project/main.py
```

10. Quick Diagnostic Checklist


| Symptom | Likely Cause | Fix |
|---|---|---|
| `{topic}` appears literally in agent output | Missing `inputs=` in `kickoff()` | Pass `crew.kickoff(inputs={"topic": "..."})` |
| `KeyError` on `self.agents_config['name']` | Method name doesn't match YAML key | Ensure `@agent def researcher` matches `researcher:` in YAML |
| `ModuleNotFoundError` on import | Wrong path or hyphens in project name | Use underscores; check `from .crews.crew_name.crew_name import CrewClass` |
| Crew runs but Flow state is empty | Not writing results back to `self.state` | Assign crew output to `self.state.field` in the `@listen` method |
| `Process.SEQUENTIAL` raises `AttributeError` | Uppercase enum | Use lowercase: `Process.sequential` |
| Agent ignores tools | Tools assigned to agent but task needs them | Move tools to task level or verify agent has the right tools |
| Agent fabricates search results | No tools assigned, so the agent can't actually search | Add `tools=[SerperDevTool()]` or equivalent; an agent with no tools will hallucinate data |
| `@listen` never fires | Listener string doesn't match router return value, or passed a string instead of method reference | `@router` must return the exact string `@listen("label")` expects; for method chaining use `@listen(method_ref)`, not `@listen("method_name")` |
| Flow step runs twice unexpectedly | Multiple `@start()` methods or `or_` listener | Use `and_()` if you need all upstream steps to complete first |
| `AuthenticationError` or `API key not found` | Missing env var | Set `OPENAI_API_KEY` (and `SERPER_API_KEY` for search tools) in `.env` |
| Agent retries endlessly on structured output | Pydantic model too complex for the LLM | Simplify the model, reduce nesting, or use a more capable `llm` |
| Agent loops to `max_iter` without finishing | Task description too vague or conflicting with `expected_output` | Make `expected_output` specific and achievable; lower `max_iter` to fail faster |
| Flow state not updating across steps | Using unstructured state without proper key access | Switch to structured Pydantic state or ensure dict keys are consistent |
| `@router` return value ignored | Method not decorated with `@router` | Use `@router(condition)`, not `@listen(condition)`, for branching methods |
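For the "Flow state not updating across steps" row: with a plain dict, a typo'd key fails silently, while a fixed set of fields fails loudly. A stdlib sketch of the difference (crewAI's structured state actually uses Pydantic models, not `__slots__`):

```python
# Unstructured state: a typo'd key silently creates a new entry
state = {"results": ""}
state["resuls"] = "crew output"      # typo goes unnoticed
assert state["results"] == ""        # the real key was never updated

# Structured state: a fixed set of fields rejects unknown attributes
class MyState:
    __slots__ = ("results",)
    def __init__(self):
        self.results = ""

s = MyState()
try:
    s.resuls = "crew output"         # the same typo now raises
except AttributeError as e:
    print(f"caught: {e}")
```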


References


For deeper dives into specific topics, see:
  • Flow Routing, Persistence, Streaming & Human Feedback — complete `@router`, `or_()`, `and_()`, `@persist`, streaming, and `@human_feedback` patterns
  • MCP Servers — prefer official MCP servers over native tools; setup, DSL integration, and known official servers
  • Tools Catalog — all 80+ built-in tools with imports, env vars, and common combos (use as fallback when no MCP server exists)
For related skills:
  • design-agent — agent Role-Goal-Backstory framework, parameter tuning, tool assignment, memory & knowledge configuration
  • design-task — task description/expected_output best practices, guardrails, structured output, dependencies