Implement LangGraph error handling with current v1 patterns. Use when users need to classify failures, add `RetryPolicy` for transient issues, build LLM recovery loops with `Command` routing, add human-in-the-loop with `interrupt()`/resume, handle `ToolNode` errors, or choose a safe strategy among retry, recovery, and escalation.
Install the skill:

```shell
npx skill4agent add lubu-labs/langchain-agent-skills langgraph-error-handling
```

Classify the failure first, then pick the matching mechanism:

| Error Type | Owner | Primary Mechanism |
|---|---|---|
| Transient (timeouts, 429/5xx) | System | `RetryPolicy` |
| LLM-recoverable | LLM | State update + `Command` |
| User-fixable | Human | `interrupt()` |
| Unexpected | Developer | Raise/log/debug |
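As a rough illustration of the table, a classifier can dispatch on exception type. The mapping below is an assumption for illustration, not the skill's actual `classify_error.py`:

```python
# Hypothetical classifier: map a raised exception onto the four categories
# from the table above. The category-to-exception mapping is illustrative.
TRANSIENT = (TimeoutError, ConnectionError)  # timeouts, dropped connections

def classify_error(exc: Exception) -> str:
    if isinstance(exc, TRANSIENT):
        return "transient"        # system-owned: retry via RetryPolicy
    if isinstance(exc, ValueError):
        return "llm_recoverable"  # e.g. bad tool arguments the LLM can fix
    if isinstance(exc, PermissionError):
        return "user_fixable"     # escalate to a human via interrupt()
    return "unexpected"           # developer-owned: raise and log
```

Dispatching on type keeps the policy decision out of the nodes themselves, so each node only reports the exception and the graph decides what to do with it.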
For transient failures, attach a `RetryPolicy` when adding the node:

```python
from langgraph.types import RetryPolicy

builder.add_node(
    "call_api",
    call_api,
    retry_policy=RetryPolicy(max_attempts=3, initial_interval=1.0),
)
```

The JS equivalent:

```typescript
builder.addNode("callApi", callApi, {
  retryPolicy: { maxAttempts: 3, initialInterval: 1.0 },
});
```

Use `retry_on` (`retryOn` in JS) to control which exceptions count as retryable.
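Beyond exception classes, `retry_on` also accepts a predicate. A sketch, assuming the callable form is available in your LangGraph version:

```python
# Predicate form for RetryPolicy(retry_on=...): return True only for
# failures worth retrying; everything else surfaces immediately.
def is_transient(exc: Exception) -> bool:
    return isinstance(exc, (TimeoutError, ConnectionError))

# Usage sketch (assumes langgraph is installed):
# from langgraph.types import RetryPolicy
# policy = RetryPolicy(max_attempts=5, initial_interval=0.5, retry_on=is_transient)
```

A predicate lets you inspect the exception instance (status codes, error messages) rather than just its class.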
For LLM-recoverable errors, extend `MessagesState` with error-tracking fields and let the agent route recovery with `Command`:

```python
from typing import Literal

from typing_extensions import NotRequired

from langgraph.graph import MessagesState
from langgraph.types import Command


class State(MessagesState):
    error: NotRequired[str]
    retry_count: NotRequired[int]


def agent(state: State) -> Command[Literal["tool", "__end__"]]:
    # Terminal branch: give up after three recovery attempts.
    if state.get("retry_count", 0) >= 3:
        return Command(goto="__end__")
    # Recoverable error: clear it, bump the counter, and retry the tool.
    if state.get("error"):
        return Command(
            update={"error": "", "retry_count": state.get("retry_count", 0) + 1},
            goto="tool",
        )
    return Command(goto="tool")
```
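The loop above assumes the tool node records failures in state rather than raising. A minimal sketch of that counterpart, with `make_tool_node` and the wrapped function being illustrative names, not a LangGraph API:

```python
def make_tool_node(run_tool):
    """Wrap an application function so failures land in state['error']."""
    def tool(state: dict) -> dict:
        try:
            result = run_tool(state)
        except ValueError as exc:
            # Surface the failure for the agent node to inspect,
            # instead of crashing the whole run.
            return {"error": str(exc)}
        return {"result": result, "error": ""}
    return tool
```

Returning the error as a state update keeps the graph running, so the agent node gets a chance to fix the input and route back.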
In JS, a node that returns `Command` must declare its possible targets with `ends`:

```typescript
import { StateGraph, Command, END } from "@langchain/langgraph";

// If a node returns Command in JS, add `ends` on addNode.
builder.addNode("agent", agentNode, { ends: ["tool", END] });
```
For user-fixable errors, pause with `interrupt()` and resume with `Command(resume=...)`:

```python
from langgraph.types import interrupt, Command


def human_review(state):
    approved = interrupt({
        "question": "Proceed?",
        "payload": state["pending_action"],
    })
    return Command(goto="execute" if approved else "cancel")


# resume
graph.invoke(Command(resume=True), config={"configurable": {"thread_id": "t-1"}})
```
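Recent LangGraph releases surface a pending interrupt in the invoke result under the `__interrupt__` key. A small extraction helper; the helper itself is hypothetical and the key's exact payload shape may vary by version:

```python
def pending_interrupt(result: dict):
    # Return the first pending interrupt payload, or None if the run finished.
    interrupts = result.get("__interrupt__") or []
    return interrupts[0] if interrupts else None
```

Checking for this key after every `invoke` is how a caller distinguishes "the graph is waiting for a human" from "the graph is done".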
The JS equivalent:

```typescript
import { Command, interrupt } from "@langchain/langgraph";

const approved = interrupt({ question: "Proceed?" });

// later
await graph.invoke(new Command({ resume: true }), {
  configurable: { thread_id: "t-1" },
});
```

Resume only works when the graph is compiled with a checkpointer and the resume call reuses the same `thread_id`.
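Both the pause and the resume must share one `thread_id`; a tiny helper makes that harder to get wrong (the helper name is illustrative):

```python
# Illustrative helper: pin every call for one conversation to the same
# thread_id, so interrupt and resume hit the same checkpoint history.
def thread_config(thread_id: str) -> dict:
    return {"configurable": {"thread_id": thread_id}}

# graph.invoke(inputs, config=thread_config("t-1"))
# graph.invoke(Command(resume=True), config=thread_config("t-1"))
```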
For tool failures, `ToolNode` can catch exceptions and report them back to the LLM as tool messages:

```python
from langgraph.prebuilt import ToolNode

# Catch all tool errors and return the default error template to the LLM.
tool_node = ToolNode(tools, handle_tool_errors=True)

# Replace the error content with a custom message.
tool_node = ToolNode(tools, handle_tool_errors="Please try again.")

# Only catch these exception types; anything else propagates.
tool_node = ToolNode(tools, handle_tool_errors=(ValueError, TypeError))
```
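`handle_tool_errors` also accepts a callable that turns the exception into the message the LLM sees. A sketch, assuming your LangGraph version supports the callable form:

```python
def on_tool_error(exc: Exception) -> str:
    # The returned string becomes the error tool message content, so write it
    # as an instruction the LLM can act on next turn.
    if isinstance(exc, ValueError):
        return f"Invalid arguments: {exc}. Check the tool schema and retry."
    return "Tool failed unexpectedly; try a different approach."

# Usage sketch:
# tool_node = ToolNode(tools, handle_tool_errors=on_tool_error)
```

Writing the message as a corrective instruction, rather than a raw traceback, is what makes the LLM-recovery loop converge.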
The bundled scripts classify exceptions and wrap nodes with retry:

```shell
uv run skills/langgraph-error-handling/scripts/classify_error.py TimeoutError --verbose
uv run skills/langgraph-error-handling/scripts/wrap_with_retry.py call_llm --with-llm-recovery
```

Worked examples live in `assets/examples/retry-example/` and `assets/examples/human-loop-example/`; deeper guides in `references/error-types.md`, `references/retry-strategies.md`, `references/llm-recovery.md`, and `references/human-escalation.md`.

Troubleshooting:

| Symptom | Root Cause | Fix |
|---|---|---|
| `interrupt()` never pauses | no checkpointer | compile with a checkpointer |
| Resume starts a new run | different `thread_id` | reuse the same `thread_id` |
| JS `Command` route not taken | missing `ends` | add `ends` to `addNode(...)` |
| Infinite loop | no termination counter/condition | add retry counter + terminal branch |
| Retry never triggers | exception excluded by retry filter | set an explicit `retry_on` |
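For the infinite-loop row, the fix is a routing function with a terminal branch. A minimal sketch with illustrative names, usable from a conditional edge or a `Command`-returning node:

```python
MAX_RETRIES = 3

def route_after_error(state: dict) -> str:
    # Terminal branch: stop once the retry budget is spent.
    if state.get("retry_count", 0) >= MAX_RETRIES:
        return "__end__"
    return "tool"  # otherwise try the tool again
```

The counter only works if some node actually increments `retry_count` on each failed attempt; the guard and the increment together are what break the cycle.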