CrewAI architecture decisions and project scaffolding. Use when starting a new crewAI project, choosing between LLM.call() vs Agent.kickoff() vs Crew.kickoff() vs Flow, scaffolding with 'crewai create flow', setting up YAML config (agents.yaml, tasks.yaml), wiring @CrewBase crew.py, writing Flow main.py with @start/@listen, or using {variable} interpolation.
npx skill4agent add crewaiinc/skills getting-started

Always scaffold with `crewai create flow <project_name>` instead of hand-writing `main.py`, `crew.py`, `agents.yaml`, `tasks.yaml`, and `pyproject.toml`. Why: The CLI sets up correct imports, directory structure, pyproject.toml config, and boilerplate that is easy to get subtly wrong when done manually. The reference material below teaches you how the pieces work so you can modify scaffolded code, not so you can replace the scaffolding step.
Scaffold with `crewai create flow <name>`, then `crewai install`, then `crewai run`.

| Level | When to Use | Overhead | Example |
|---|---|---|---|
| `LLM.call()` | Single prompt, no tools, structured extraction | Lowest | Parse an email into fields |
| `Agent.kickoff()` | One agent with tools and reasoning, no multi-agent coordination | Low | Research a topic with web search |
| `Crew.kickoff()` | Multiple agents collaborating on related tasks | Medium | Research + write + review pipeline |
| `Flow` | Production app with state, routing, conditionals, error handling | Full | Multi-step workflow with branching logic |
Do you need tools or multi-step reasoning?
├── No → LLM.call()
└── Yes
    └── Do you need multiple agents collaborating?
        ├── No → Agent.kickoff()
        └── Yes
            └── Do you need state management, routing, or multiple crews?
                ├── No → Crew (but still scaffold as a Flow for future-proofing)
                └── Yes → Flow + Crew(s)

**LLM.call()**

from crewai import LLM
from pydantic import BaseModel

class EmailFields(BaseModel):
    sender: str
    subject: str
    urgency: str

llm = LLM(model="openai/gpt-4o")

# Without response_format — returns a string
raw = llm.call(messages=[{"role": "user", "content": "Summarize this text..."}])
print(raw)  # str

# With response_format — returns the Pydantic object directly
result = llm.call(
    messages=[{"role": "user", "content": f"Extract fields from this email: {email_text}"}],
    response_format=EmailFields
)
print(result.sender)   # str — access Pydantic fields directly
print(result.urgency)  # str

**Agent.kickoff()**

from crewai import Agent
from crewai_tools import SerperDevTool
from pydantic import BaseModel
class ResearchFindings(BaseModel):
    main_points: list[str]
    key_technologies: list[str]

researcher = Agent(
    role="AI Researcher",
    goal="Research the latest AI developments",
    backstory="Expert AI researcher with deep technical knowledge.",
    llm="openai/gpt-4o",  # Optional: defaults to OPENAI_MODEL_NAME env var or "gpt-4"
    tools=[SerperDevTool()],
)
# Unstructured output
result = researcher.kickoff("What are the latest LLM developments?")
print(result.raw) # str
print(result.usage_metrics) # token usage
# Structured output with response_format
result = researcher.kickoff(
    "Summarize latest AI developments",
    response_format=ResearchFindings,
)
print(result.pydantic.main_points)

Note: `Agent.kickoff()` wraps results — access structured output via `result.pydantic`. This differs from `LLM.call()`, which returns the Pydantic object directly.
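The difference in result shape can be illustrated without CrewAI at all. The classes below are hypothetical stand-ins (not CrewAI types) that mimic the two return shapes described in the note:

```python
from dataclasses import dataclass

@dataclass
class EmailFields:
    """Stand-in for a response_format Pydantic model."""
    sender: str

@dataclass
class AgentResult:
    """Stand-in for the wrapper Agent.kickoff() returns."""
    raw: str            # unstructured text
    pydantic: EmailFields  # structured result lives one level down

# LLM.call(response_format=...) hands you the model object directly:
direct = EmailFields(sender="alice@example.com")
print(direct.sender)

# Agent.kickoff() wraps it — go through .pydantic:
wrapped = AgentResult(raw="...", pydantic=EmailFields(sender="alice@example.com"))
print(wrapped.pydantic.sender)
```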
**Scaffolding with crewai create flow**

crewai create flow my_project

Warning: Always use underscores in project names, not hyphens. `crewai create flow my-project` creates a directory that is not a valid Python identifier, causing `ModuleNotFoundError` on import. Use `my_project` instead.
my_project/
├── src/my_project/
│ ├── crews/
│ │ └── my_crew/
│ │ ├── config/
│ │ │ ├── agents.yaml # Agent definitions (role, goal, backstory)
│ │ │ └── tasks.yaml # Task definitions (description, expected_output)
│ │ └── my_crew.py # Crew class with @CrewBase
│ ├── tools/
│ │ └── custom_tool.py
│ ├── main.py # Flow class with @start/@listen
│ └── ...
├── .env # API keys (OPENAI_API_KEY, etc.)
└── pyproject.toml

Do not use `crewai create crew` unless you are certain you will never need routing, state, or multiple crews. Prefer `crewai create flow` as the default.
**agents.yaml with {variable} interpolation**

researcher:
  role: >
    {topic} Senior Data Researcher
  goal: >
    Uncover cutting-edge developments in {topic}
  backstory: >
    You're a seasoned researcher with a knack for uncovering
    the latest developments in {topic}.
  # Optional overrides:
  # llm: openai/gpt-4o
  # max_iter: 20
  # max_rpm: 10

reporting_analyst:
  role: >
    {topic} Reporting Analyst
  goal: >
    Create detailed reports based on {topic} research findings
  backstory: >
    You're a meticulous analyst known for turning complex data
    into clear, actionable reports.

**tasks.yaml**

research_task:
  description: >
    Conduct thorough research about {topic}.
    Identify key trends, breakthrough technologies,
    and potential industry impacts.
  expected_output: >
    A detailed report with analysis of the top 5
    developments in {topic}, with sources and implications.
  agent: researcher

reporting_task:
  description: >
    Review the research and create a comprehensive report about {topic}.
  expected_output: >
    A polished report formatted in markdown with sections
    for each key finding.
  agent: reporting_analyst
  output_file: output/report.md

`{variable}` placeholders are filled from `crew.kickoff(inputs={...})`. Every task needs an `expected_output` and an `agent` matching a key in `agents.yaml`. With `Process.sequential`, tasks run in listed order; use `context=[other_task]` for explicit dependencies. A `@CrewBase` class wires the YAML to code with `@agent` and `@task` decorators.

**crew.py with @CrewBase**

from crewai import Agent, Crew, Process, Task
from crewai.project import CrewBase, agent, crew, task
from crewai_tools import SerperDevTool
@CrewBase
class ResearchCrew:
    """Research and reporting crew."""

    agents_config = "config/agents.yaml"
    tasks_config = "config/tasks.yaml"

    @agent
    def researcher(self) -> Agent:
        return Agent(
            config=self.agents_config["researcher"],
            tools=[SerperDevTool()],
        )

    @agent
    def reporting_analyst(self) -> Agent:
        return Agent(
            config=self.agents_config["reporting_analyst"],
        )

    @task
    def research_task(self) -> Task:
        return Task(config=self.tasks_config["research_task"])

    @task
    def reporting_task(self) -> Task:
        return Task(
            config=self.tasks_config["reporting_task"],
            context=[self.research_task()],  # Explicit dependency (optional in sequential)
            output_file="output/report.md",
        )

    @crew
    def crew(self) -> Crew:
        return Crew(
            agents=self.agents,  # auto-collected by @agent
            tasks=self.tasks,    # auto-collected by @task
            process=Process.sequential,
            verbose=True,
        )

Each method name, e.g. `def researcher(self)`, must match its key (`researcher:`) in `agents.yaml`.

**main.py Flow with @start/@listen**

from crewai.flow.flow import Flow, listen, start
from pydantic import BaseModel
from .crews.research_crew.research_crew import ResearchCrew
class ResearchState(BaseModel):
    topic: str = ""
    report: str = ""

class ResearchFlow(Flow[ResearchState]):
    @start()
    def begin(self):
        print(f"Starting research on: {self.state.topic}")

    @listen(begin)
    def run_research(self):
        result = ResearchCrew().crew().kickoff(
            inputs={"topic": self.state.topic}
        )
        self.state.report = result.raw

def kickoff():
    flow = ResearchFlow()
    flow.kickoff(inputs={"topic": "AI Agents"})

if __name__ == "__main__":
    kickoff()

`flow.kickoff(inputs={"topic": "AI Agents"})` populates `self.state.topic`. Inside a `@listen` step, pass state into the YAML `{variable}` templates with `crew.kickoff(inputs={"topic": self.state.topic})`.

**Structured vs unstructured state**

from pydantic import BaseModel
class MyState(BaseModel):
    topic: str = ""
    research: str = ""
    draft: str = ""
    approved: bool = False

class MyFlow(Flow[MyState]):
    ...

class MyFlow(Flow):  # No type parameter — state is a dict
    @start()
    def begin(self):
        self.state["topic"] = "AI"  # dict-style access

**Agent.kickoff() inside Flow steps**

from crewai import Agent, LLM
from crewai.flow.flow import Flow, listen, start
from crewai_tools import SerperDevTool, ScrapeWebsiteTool
from pydantic import BaseModel
class ResearchState(BaseModel):
    query: str = ""
    raw_research: str = ""
    analysis: str = ""
    report: str = ""

class DeepResearchFlow(Flow[ResearchState]):
    @start()
    def gather_research(self):
        """Agent with tools does the actual searching."""
        researcher = Agent(
            role="Senior Research Analyst",
            goal="Find comprehensive, factual information about the given topic",
            backstory="You're an expert researcher who always cites sources and flags uncertainty.",
            tools=[SerperDevTool(), ScrapeWebsiteTool()],
            llm="openai/gpt-4o",
        )
        result = researcher.kickoff(
            f"Research this topic thoroughly: {self.state.query}"
        )
        self.state.raw_research = result.raw

    @listen(gather_research)
    def analyze_findings(self):
        """A different agent analyzes the raw research — no tools needed."""
        analyst = Agent(
            role="Data Analyst",
            goal="Extract key insights, patterns, and actionable recommendations",
            backstory="You turn raw data into clear, structured analysis.",
            llm="openai/gpt-4o",
        )
        result = analyst.kickoff(
            f"Analyze these research findings and extract key insights:\n\n{self.state.raw_research}"
        )
        self.state.analysis = result.raw

    @listen(analyze_findings)
    def write_report(self):
        """A writer agent produces the final deliverable."""
        writer = Agent(
            role="Technical Writer",
            goal="Produce clear, actionable reports for non-technical readers",
            backstory="You specialize in making complex information accessible.",
            llm="openai/gpt-4o",
        )
        result = writer.kickoff(
            f"Write a comprehensive report based on this analysis:\n\n{self.state.analysis}"
        )
        self.state.report = result.raw

Choosing between `Agent.kickoff()`/`LLM.call()` steps and `Crew.kickoff()` sub-workflows:

| Use Agent.kickoff() steps | Use Crew.kickoff() |
|---|---|
| Each step is a distinct agent with different tools | Multiple agents need to collaborate on ONE task |
| You want the Flow to control sequencing | Agents need to pass context to each other within a step |
| Steps are independent and don't need inter-agent delegation | You need hierarchical process with a manager |
| You want maximum control over what data flows between steps | The sub-workflow is self-contained and reusable |
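The left column's "maximum control over what data flows between steps" can be illustrated without CrewAI: a Flow is, at heart, explicit function sequencing over shared state. This is a plain-Python analogy, not CrewAI code:

```python
from dataclasses import dataclass

@dataclass
class State:
    query: str = ""
    research: str = ""
    report: str = ""

def gather(state: State) -> None:
    # In a real Flow this step would be an Agent.kickoff() call
    state.research = f"findings about {state.query}"

def write(state: State) -> None:
    # Each step reads exactly the state fields you choose to pass it
    state.report = f"report based on: {state.research}"

state = State(query="AI agents")
for step in (gather, write):  # a Flow derives this order from @start/@listen
    step(state)
print(state.report)
```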
**Structured output with response_format**

class Insights(BaseModel):
    key_points: list[str]
    recommendations: list[str]
    confidence: float

class AnalysisFlow(Flow[AnalysisState]):
    @start()
    def research(self):
        researcher = Agent(role="Researcher", goal="...", backstory="...", tools=[SerperDevTool()])
        result = researcher.kickoff(
            f"Research {self.state.topic}",
            response_format=Insights,
        )
        # result.pydantic gives you the typed Insights object
        self.state.key_points = result.pydantic.key_points
        self.state.recommendations = result.pydantic.recommendations

**Mixing all three levels in one Flow**

class ProductFlow(Flow[ProductState]):
    @start()
    def classify_request(self):
        # LLM.call() for simple classification
        llm = LLM(model="openai/gpt-4o")
        self.state.category = llm.call(
            messages=[{"role": "user", "content": f"Classify: {self.state.request}"}],
            response_format=Category
        ).category

    @router(classify_request)
    def route_by_category(self):
        if self.state.category == "simple":
            return "quick_answer"
        return "deep_research"

    @listen("quick_answer")
    def handle_simple(self):
        # Agent.kickoff() for single-agent work
        agent = Agent(role="Helper", goal="Answer quickly", backstory="...")
        result = agent.kickoff(self.state.request)
        self.state.answer = result.raw

    @listen("deep_research")
    def handle_complex(self):
        # Crew.kickoff() for multi-agent collaboration
        result = ResearchCrew().crew().kickoff(
            inputs={"topic": self.state.request}
        )
        self.state.answer = result.raw

A `@router` method returns a string label; `@listen("label")` steps fire when the router returns that exact string.

**Quality gate with @router**

from crewai.flow.flow import Flow, listen, router, start, or_
class QualityFlow(Flow[QAState]):
    @start()
    def generate_content(self):
        result = WriterCrew().crew().kickoff(inputs={"topic": self.state.topic})
        self.state.draft = result.raw

    @router(generate_content)
    def check_quality(self):
        llm = LLM(model="openai/gpt-4o")
        score = llm.call(
            messages=[{"role": "user", "content": f"Rate 1-10: {self.state.draft}"}],
            response_format=QualityScore
        )
        if score.rating >= 7:
            return "approved"
        return "needs_revision"

    @listen("approved")
    def publish(self):
        self.state.published = True

    @listen("needs_revision")
    def revise(self):
        result = EditorCrew().crew().kickoff(
            inputs={"draft": self.state.draft}
        )
        self.state.draft = result.raw

**Parallel execution with or_() and and_()**

from crewai.flow.flow import Flow, listen, start, or_, and_
class ParallelFlow(Flow[MyState]):
    @start()
    def fetch_data_a(self):
        ...

    @start()
    def fetch_data_b(self):
        ...

    # Runs when BOTH fetches complete
    @listen(and_(fetch_data_a, fetch_data_b))
    def merge_results(self):
        ...

    # Runs when EITHER source provides data
    @listen(or_(fetch_data_a, fetch_data_b))
    def process_first_available(self):
        ...

**Resumable flows with @persist**

from crewai.flow.flow import Flow, start, listen, persist
from crewai.flow.persistence import SQLiteFlowPersistence
@persist(SQLiteFlowPersistence())  # Class-level: persists all methods
class LongRunningFlow(Flow[MyState]):
    @start()
    def step_one(self):
        self.state.data = "processed"

    @listen(step_one)
    def step_two(self):
        # If the process crashes here, restarting with the same
        # state ID will resume from after step_one
        ...

**Human-in-the-loop with @human_feedback**

from crewai.flow.flow import Flow, start, listen, router
from crewai.flow.human_feedback import human_feedback
class ApprovalFlow(Flow[ReviewState]):
    @start()
    def generate_draft(self):
        result = WriterCrew().crew().kickoff(inputs={"topic": self.state.topic})
        self.state.draft = result.raw

    @human_feedback(
        message="Review the draft and provide feedback",
        emit=["approved", "needs_revision"],
        llm="openai/gpt-4o",
        default_outcome="approved"
    )
    @listen(generate_draft)
    def review_step(self):
        return self.state.draft

    @listen("approved")
    def publish(self):
        ...

    @listen("needs_revision")
    def revise(self):
        feedback = self.last_human_feedback
        # Use feedback.feedback_text for revision
        ...

**Visualizing a Flow**

flow = MyFlow()
flow.plot()           # Display in notebook
flow.plot("my_flow")  # Save as my_flow.png

**How inputs reach {variable} templates**

# Variables flow through: kickoff → YAML templates → agent/task prompts
crew.kickoff(inputs={
    "topic": "AI Agents",
    "current_year": "2025",
    "target_audience": "developers",
})

The `{topic}` and `{current_year}` placeholders are then available in the YAML:

research_task:
  description: >
    Research {topic} trends for {current_year},
    targeting {target_audience}.

Interpolation uses single braces (`{variable}`), not Jinja-style double braces (`{{ }}`).

**Install and run**

# Install dependencies
crewai install
# Run the flow
crewai run

Or run directly:

cd my_project
uv run src/my_project/main.py

**Troubleshooting**

| Symptom | Likely Cause | Fix |
|---|---|---|
| `{variable}` appears literally in output | Missing `inputs` key | Pass every template variable in `kickoff(inputs={...})` |
| `KeyError` reading agent config | Method name doesn't match YAML key | Ensure `def researcher(self)` matches `researcher:` in `agents.yaml` |
| `ModuleNotFoundError` on import | Wrong path or hyphens in project name | Use underscores; check the package path in `pyproject.toml` |
| Crew runs but Flow state is empty | Not writing results back to `self.state` | Assign crew output to state, e.g. `self.state.report = result.raw` |
| `AttributeError` on `Process` | Uppercase enum | Use lowercase: `Process.sequential` |
| Agent ignores tools | Tools assigned to agent but task needs them | Move tools to task level or verify agent has the right tools |
| Agent fabricates search results | No tools assigned — agent can't actually search | Add a search tool such as `SerperDevTool()` |
| `@listen("label")` step never fires | Listener string doesn't match router return value, or passed a string instead of method reference | Match the listener string to the router's return value exactly |
| Flow step runs twice unexpectedly | Multiple listeners trigger it | Use `and_()` if it should wait for all of them |
| Authentication error from the provider | Missing env var | Set `OPENAI_API_KEY` (or your provider's key) in `.env` |
| Agent retries endlessly on structured output | Pydantic model too complex for the LLM | Simplify the model, reduce nesting, or use a more capable model |
| Agent loops to `max_iter` without finishing | Task description too vague or conflicting with `expected_output` | Make `description` and `expected_output` specific and consistent |
| Flow state not updating across steps | Using unstructured state without proper key access | Switch to structured Pydantic state or ensure dict keys are consistent |
| Flow step never runs | Method not decorated with `@start`/`@listen` | Use `@start()` or `@listen(...)` on every step |
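The first troubleshooting row comes down to template interpolation. A plain-Python sketch of the behavior: `{variable}` filling works like `str.format` over the `inputs` dict (an analogy for illustration; CrewAI performs the substitution internally):

```python
# YAML task description with {variable} placeholders
template = "Research {topic} trends for {current_year}, targeting {target_audience}."

# The dict passed to crew.kickoff(inputs={...})
inputs = {
    "topic": "AI Agents",
    "current_year": "2025",
    "target_audience": "developers",
}
rendered = template.format(**inputs)
print(rendered)

# Omitting a template variable fails, which is why every {variable}
# in the YAML must be supplied via kickoff(inputs={...}).
try:
    template.format(topic="AI Agents")
except KeyError as missing:
    print(f"missing input: {missing}")
```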