langgraph-tools


LangGraph Tool Calling

Integrate tool calling into LangGraph workflows.

Basic Tool Binding


```python
from langchain_core.tools import tool
from langchain_anthropic import ChatAnthropic

@tool
def search_database(query: str) -> str:
    """Search the database for information."""
    return db.search(query)

@tool
def send_email(to: str, subject: str, body: str) -> str:
    """Send an email to a recipient."""
    email_service.send(to, subject, body)
    return f"Email sent to {to}"

# Bind tools to the model
tools = [search_database, send_email]
model = ChatAnthropic(model="claude-sonnet-4-20250514")
model_with_tools = model.bind_tools(tools)

# Agent node
def agent_node(state: State):
    response = model_with_tools.invoke(state["messages"])
    return {"messages": [response]}
```

ToolNode for Execution


```python
from langgraph.prebuilt import ToolNode
from langgraph.graph import StateGraph, START, END, MessagesState

# Create the tool execution node
tool_node = ToolNode(tools)

# Build the agent graph
builder = StateGraph(MessagesState)
builder.add_node("agent", agent_node)
builder.add_node("tools", tool_node)

# Routing based on tool calls
def should_continue(state: MessagesState) -> str:
    last_message = state["messages"][-1]
    if last_message.tool_calls:
        return "tools"
    return END

builder.add_edge(START, "agent")
builder.add_conditional_edges("agent", should_continue, {"tools": "tools", END: END})
builder.add_edge("tools", "agent")  # Return to the agent after tool execution

graph = builder.compile()
```

Force Tool Calling


```python
# Force the model to call at least one tool
model.bind_tools(tools, tool_choice="any")

# Force a specific tool
model.bind_tools(tools, tool_choice="search_database")

# Structured output via tool (guaranteed schema)
from pydantic import BaseModel

class SearchResult(BaseModel):
    query: str
    results: list[str]
    confidence: float

model.bind_tools([SearchResult], tool_choice="SearchResult")
```
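When the forced tool is a Pydantic model, the tool-call arguments validate straight into the schema. A minimal sketch of that validation step, with a hand-written args dict standing in for the arguments the model would actually return:

```python
from pydantic import BaseModel

class SearchResult(BaseModel):
    query: str
    results: list[str]
    confidence: float

# Stand-in for tool_call["args"] extracted from the model's response
args = {"query": "pricing", "results": ["doc1", "doc2"], "confidence": "0.9"}

# Pydantic coerces compatible values ("0.9" -> 0.9) and raises a
# ValidationError if the arguments violate the schema
result = SearchResult(**args)
```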

Dynamic Tool Selection


```python
import numpy as np
from sentence_transformers import SentenceTransformer

embedder = SentenceTransformer("all-MiniLM-L6-v2")

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Pre-compute tool embeddings
TOOL_EMBEDDINGS = {tool.name: embedder.encode(tool.description) for tool in all_tools}

def select_relevant_tools(query: str, all_tools: list, top_k: int = 5) -> list:
    """Select the most relevant tools for a query."""
    query_embedding = embedder.encode(query)
    similarities = [
        (tool, cosine_similarity(query_embedding, TOOL_EMBEDDINGS[tool.name]))
        for tool in all_tools
    ]
    sorted_tools = sorted(similarities, key=lambda x: x[1], reverse=True)
    return [tool for tool, _ in sorted_tools[:top_k]]

def agent_with_dynamic_tools(state: State):
    """Bind only relevant tools to reduce context."""
    relevant_tools = select_relevant_tools(
        state["messages"][-1].content, all_tools, top_k=5
    )
    model_bound = model.bind_tools(relevant_tools)
    response = model_bound.invoke(state["messages"])
    return {"messages": [response]}
```
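The ranking step itself needs nothing model-specific; any similarity over tool descriptions works. A dependency-free sketch using bag-of-words cosine over hypothetical tool descriptions:

```python
import math
from collections import Counter

def bow_vector(text: str) -> Counter:
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Illustrative tool inventory: name -> description
TOOLS = {
    "search_database": "search the database for records",
    "send_email": "send an email message to a recipient",
}

def rank_tools(query: str, top_k: int = 1) -> list[str]:
    q = bow_vector(query)
    ranked = sorted(TOOLS, key=lambda name: cosine(q, bow_vector(TOOLS[name])), reverse=True)
    return ranked[:top_k]

best = rank_tools("search records in the database")  # ["search_database"]
```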

Tool Interrupts (Approval Gates)


```python
from langgraph.types import interrupt

@tool
def delete_user(user_id: str) -> str:
    """Delete a user account. Requires approval."""
    # Interrupt for human approval
    response = interrupt({
        "action": "delete_user",
        "user_id": user_id,
        "message": f"Approve deletion of user {user_id}?",
        "risk_level": "high"
    })

    if response.get("approved"):
        db.delete_user(user_id)
        return f"User {user_id} deleted successfully"
    return "Deletion cancelled by user"

@tool
def transfer_funds(from_account: str, to_account: str, amount: float) -> str:
    """Transfer funds between accounts. Requires approval for large amounts."""
    if amount > 1000:
        response = interrupt({
            "action": "transfer_funds",
            "from": from_account,
            "to": to_account,
            "amount": amount,
            "message": f"Approve transfer of ${amount}?"
        })

        if not response.get("approved"):
            return "Transfer cancelled"

    execute_transfer(from_account, to_account, amount)
    return f"Transferred ${amount} from {from_account} to {to_account}"
```

Streaming from Tools


```python
from langgraph.config import get_stream_writer

@tool
def long_running_analysis(data: str) -> str:
    """Analyze data with progress updates."""
    writer = get_stream_writer()

    writer({"status": "starting", "progress": 0})

    for i, chunk in enumerate(process_chunks(data)):
        writer({
            "status": "processing",
            "progress": (i + 1) * 10,
            "current_chunk": i
        })

    writer({"status": "complete", "progress": 100})
    return "Analysis complete"
```
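On the consumer side, each writer(...) payload surfaces through the graph's custom stream (in LangGraph, by iterating graph.stream(..., stream_mode="custom")). A stdlib stand-in for that producer/consumer flow, modeling the writer as a thread-safe queue, with the event shapes matching the tool above:

```python
import queue
import threading

events = queue.Queue()

def writer(event: dict) -> None:
    """Stand-in for the stream writer: hand the event to the consumer."""
    events.put(event)

def long_running_analysis(chunks) -> None:
    writer({"status": "starting", "progress": 0})
    for i, _chunk in enumerate(chunks):
        writer({"status": "processing", "progress": (i + 1) * 10, "current_chunk": i})
    writer({"status": "complete", "progress": 100})
    events.put(None)  # sentinel: stream finished

# Producer runs concurrently; the consumer sees events as they are emitted
threading.Thread(target=long_running_analysis, args=(["a", "b"],)).start()

progress = []
while (event := events.get()) is not None:
    progress.append(event["progress"])
# progress == [0, 10, 20, 100]
```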

Error Handling in Tools


```python
import time

import requests

@tool
def api_call_with_retry(endpoint: str) -> str:
    """Call an external API with automatic retry."""
    for attempt in range(3):
        try:
            response = requests.get(endpoint, timeout=10)
            response.raise_for_status()
            return response.json()
        except requests.RequestException as e:
            if attempt == 2:
                return f"Error: Failed after 3 attempts - {str(e)}"
            time.sleep(2 ** attempt)  # Exponential backoff: 1s, 2s
```

Parallel Tool Execution


```python
from langgraph.prebuilt import ToolNode

# ToolNode executes multiple tool calls in parallel by default.
# If the agent returns multiple tool_calls, they run concurrently,
# and results come back in the same order as the tool_calls.
tool_node = ToolNode(tools)
```
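The order guarantee can be illustrated with stdlib primitives. This is a sketch of the pattern, not ToolNode's internals; the registry and call shapes are illustrative:

```python
from concurrent.futures import ThreadPoolExecutor

def run_tool_calls(tool_calls: list[dict], registry: dict) -> list:
    """Execute tool calls concurrently; results return in call order."""
    def run_one(call):
        return registry[call["name"]](**call["args"])
    with ThreadPoolExecutor() as pool:
        # pool.map preserves input order regardless of completion order
        return list(pool.map(run_one, tool_calls))

registry = {"add": lambda a, b: a + b, "upper": lambda s: s.upper()}
calls = [
    {"name": "upper", "args": {"s": "ok"}},
    {"name": "add", "args": {"a": 2, "b": 3}},
]
results = run_tool_calls(calls, registry)  # ["OK", 5]
```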

Key Decisions


| Decision | Recommendation |
| --- | --- |
| Tool count | 5-10 tools max per agent (use dynamic selection for more) |
| Approval gates | Use interrupt() for destructive/high-risk operations |
| Error handling | Return error strings, don't raise (lets the agent recover) |
| Streaming | Use get_stream_writer() for long-running tools |
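The "return error strings, don't raise" rule can be enforced uniformly with a small decorator rather than per-tool try/except blocks (an illustrative sketch; the names are hypothetical):

```python
import functools

def errors_as_strings(fn):
    """Wrap a tool so exceptions become error strings the agent can read."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        try:
            return fn(*args, **kwargs)
        except Exception as e:
            return f"Error in {fn.__name__}: {e}"
    return wrapper

@errors_as_strings
def flaky_lookup(key: str) -> str:
    raise KeyError(key)

msg = flaky_lookup("missing")  # "Error in flaky_lookup: 'missing'"
```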

Common Mistakes

  • Too many tools (context overflow, poor selection)
  • Raising exceptions in tools (crashes the agent loop)
  • Missing tool descriptions (the LLM can't choose correctly)
  • Not using tool_choice when a specific tool is required

Evaluations

See references/evaluations.md for test cases.

Related Skills

  • langgraph-supervisor - Supervisor agents with tool-calling workers
  • langgraph-human-in-loop - Approval gates for dangerous tools
  • langgraph-streaming - Stream tool execution progress
  • langgraph-routing - Route based on tool results
  • langgraph-state - Track tool call history in state
  • function-calling - General LLM function calling patterns

Capability Details


bind-tools

Keywords: bind_tools, tool calling, function calling, LLM tools
Solves:
  • Attach tools to language models
  • Enable function calling in agents
  • Configure tool selection behavior

tool-node

Keywords: ToolNode, execute tools, tool execution, prebuilt
Solves:
  • Execute tool calls from LLM responses
  • Handle parallel tool execution
  • Integrate tools into graph workflows

dynamic-tools

Keywords: dynamic, select tools, many tools, relevance
Solves:
  • Handle large tool inventories
  • Select relevant tools per query
  • Reduce context usage

tool-interrupts

Keywords: interrupt, approval, gate, human review, dangerous
Solves:
  • Add approval gates to dangerous tools
  • Implement human oversight
  • Control high-risk operations