agent-debugger
Use when a program crashes, a test fails, or code produces wrong results and reading the source isn't enough to see why. Lets you pause execution at any line and inspect the actual runtime state (variable values, types, call stacks) to find what went wrong.
Agent Debugger
A debugger for AI agents. Set breakpoints, inspect state, evaluate expressions, test fixes in-place.
Philosophy
The debugger is a scalpel, not a flashlight. You don't turn it on to look around. You turn it on to make one precise cut — confirm or kill a specific hypothesis about why the program is broken. If you're "exploring" in the debugger, you've already lost.
Every session starts before the debugger. Read the code. Read the traceback. Form a theory. Know exactly what breakpoint you'll set and what eval you'll run before you type a single command. The debugger is the experiment, not the investigation.
Half of all bugs don't need a debugger. Read the traceback. Read the code. Check the types. Grep for the error message. Look at git blame. Most bugs surrender to careful reading. Reach for the debugger only when the bug depends on runtime state you can't determine statically.
The Rules
- **Read first, debug second.** Never start a debug session without reading the relevant code and forming a hypothesis. The debugger confirms theories; it doesn't generate them.
- **One breakpoint, one question.** Each breakpoint should answer a specific question. "Is `balance` a string here?" "Is `x` negative after this call?" "Does this branch execute?" If you can't articulate the question, you're not ready to debug.
- **Eval, don't dump.** `vars` dumps everything and answers nothing. `eval "type(data['age'])"` answers exactly one question. Prefer eval. Always.
- **Never step through loops.** A loop with 100 iterations is 100 step commands. A conditional breakpoint is 1 command. Use `--break "file:line:i == 50"` to jump straight to the iteration that matters.
- **Two strikes, new theory.** If your hypothesis was wrong twice, stop. Your mental model of the code is broken, not the debugger session. Close, re-read the code, form a completely different theory, then start a new session with different breakpoints. Continuing to probe the same area has sharply diminishing returns.
- **Test the fix before writing it.** The debugger gives you a live REPL in the exact context of the bug. Use `eval` to run your proposed fix expression before editing any code. If it works in eval, it'll work in the code.
- **Prove the fix, write the test.** After fixing, re-run the program to verify. Then write the smallest possible test that catches the bug. A fix without a test is a fix that will regress.
- **Close the session. Always.** A stale session blocks the next one.
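To make the last two rules concrete: for the string-typed `age` bug used in the playbook below, the smallest test that catches the regression might look like this. A sketch only; `coerce_age` and the record shape are hypothetical, not part of agent-debugger.

```python
# Minimal regression test for the "age arrives as a CSV string" bug.
# The helper and record shape are illustrative, not from the tool.

def coerce_age(record):
    """The fix proven via `eval` before it was written into the code."""
    age = record["age"]
    return int(age) if isinstance(age, str) else age

def test_age_is_coerced_from_csv_strings():
    # CSV loaders return strings; the fix must handle both forms.
    assert coerce_age({"name": "Charlie", "age": "35"}) == 35
    assert coerce_age({"name": "Alice", "age": 30}) == 30

test_age_is_coerced_from_csv_strings()
```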
Bootstrap
- If `agent-debugger` is available globally, use it directly.
- Otherwise, use `npx -y agent-debugger` (zero-install, no prompts).
Commands
```bash
# If installed globally:
agent-debugger start <script> --break file:line[:condition] [--runtime path] [--args ...]
# If not installed:
npx -y agent-debugger start <script> --break file:line[:condition] [--runtime path] [--args ...]

agent-debugger eval <expression>       # Run any expression in the current frame
agent-debugger vars                    # List local variables (prefer eval)
agent-debugger step [into|out]         # Step over / into function / out of function
agent-debugger continue                # Run to next breakpoint or termination
agent-debugger stack                   # Show call stack
agent-debugger break file:line[:cond]  # Add breakpoint mid-session
agent-debugger source                  # Show source around current line
agent-debugger status                  # Show session state and location
agent-debugger close                   # Kill session, clean up
```

Multiple `--break` flags are supported. Conditions are expressions: `--break "app.py:42:len(items) > 10"`.

Supported Languages
| Language | Extension | Adapter | Requirement |
|---|---|---|---|
| Python | `.py` | debugpy | `pip install debugpy` |
| JavaScript/TypeScript | `.js`/`.ts` | Node Inspector | Node.js |
| Go | `.go` | Delve | |
| Rust/C/C++ | `.rs`/`.c`/`.cpp` | CodeLLDB | |
The Playbook
These are not suggestions. These are the right way to handle each class of bug.
Type Bugs
A value has the wrong type somewhere in the pipeline. Don't step through — go straight to the suspect and ask.
```bash
agent-debugger start app.py --break "app.py:25"
agent-debugger eval "type(data['age'])"  # <class 'str'> — found it
agent-debugger eval "int(data['age'])"   # 35 — fix is safe
agent-debugger close
```

Two commands after the breakpoint. Done.
Data Pipeline Bugs
Something in a batch is wrong. Don't look at individual records — assert the shape of the whole batch.
```bash
agent-debugger start etl.py --break "etl.py:90"  # after the transformation
agent-debugger eval "all(isinstance(v, int) for v in result.values())"         # False
agent-debugger eval "[k for k,v in result.items() if not isinstance(v, int)]"  # ['quantity']
agent-debugger close
```

One breakpoint, two evals. The first asks "is anything wrong?"; the second asks "what exactly?"
Loop Bugs (The Wolf Fence)
A loop processes N items and something goes wrong at an unknown iteration. Binary search it.
```bash
agent-debugger start app.py --break "app.py:45:i == 500"  # midpoint
agent-debugger eval "is_valid(result)"                    # True → bug is after 500
agent-debugger close
agent-debugger start app.py --break "app.py:45:i == 750"  # narrow
agent-debugger eval "is_valid(result)"                    # False → bug is between 500-750
agent-debugger close
agent-debugger start app.py --break "app.py:45:i == 625"  # narrow again
```

~10 iterations to find the bug in 1000 items. Not 1000 step commands.
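The bisection bookkeeping is easy to fumble mid-session. A minimal sketch of the arithmetic in plain Python, independent of the tool (function names are illustrative):

```python
import math

def next_probe(lo, hi):
    """Given 'valid at iteration lo' and 'invalid at hi', pick the next break condition."""
    return (lo + hi) // 2

def probes_needed(n):
    """Upper bound on debug sessions needed to isolate one bad iteration among n."""
    return math.ceil(math.log2(n))

print(next_probe(500, 1000))   # 750 — the "narrow" step above
print(next_probe(500, 750))    # 625 — the "narrow again" step
print(probes_needed(1000))     # 10 — matching "~10 iterations" above
```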
Invariant Violations
You know what should never happen. Tell the debugger to catch the exact moment it does.
```bash
# "balance should never go negative"
agent-debugger start bank.py --break "bank.py:68:account.balance < 0"
# "every value should be numeric"
agent-debugger start pipeline.py --break "pipeline.py:30:not isinstance(value, (int, float))"
# "list should never exceed 100 items"
agent-debugger start app.py --break "app.py:55:len(results) > 100"
```

If it hits, you've caught the crime in progress. If it doesn't hit, your theory was wrong; move on.
Recursion / Deep Call Chains
The stack tells you how you arrived. The eval tells you why you're wrong.
```bash
agent-debugger start tree.py --break "tree.py:22"
agent-debugger stack                 # see the recursion depth
agent-debugger eval "current_depth"  # 3
agent-debugger eval "max_depth"      # 3 — off-by-one, should be <, not <=
agent-debugger close
```

"Where Does This Bad Data Come From?"
You found bad data downstream. Pivot upstream.
```bash
agent-debugger start app.py --break "handler.py:55"
agent-debugger eval "data['age']"  # '35' — string, wrong. But handler didn't create this.
agent-debugger close               # pivot to the source
agent-debugger start app.py --break "loader.py:22"
agent-debugger eval "raw_row"      # CSV parser returns strings. Root cause.
agent-debugger close
```

Don't fix the symptom at the handler. Fix the cause at the loader.
"Which of These 3 Functions Is the Culprit?"
Set breakpoints at all suspects. The runtime tells you which one fires.
```bash
agent-debugger start app.py \
  --break "auth.py:30" \
  --break "validate.py:55" \
  --break "handler.py:80"
# Hits validate.py:55 — now you know where to focus
agent-debugger eval "request.payload"
agent-debugger close
```

Testing a Fix In-Place
You think you know the fix. Prove it before editing.
```bash
# Paused at the crash: total + data['age'] where age is a string
agent-debugger eval "total + int(data['age'])"  # 90 — works
agent-debugger eval "int(data['age'])"          # 35 — safe cast
# Prove it works for the entire dataset
agent-debugger eval "sum(int(d['age']) if isinstance(d['age'], str) else d['age'] for d in users)"
agent-debugger close
# NOW edit the code, with confidence
```

Falsifying Your Theory
Design evals that would break your hypothesis, not confirm it. Confirmation bias is the #1 debugging trap.
```bash
# Theory: "age is a string only in the third record"

# BAD — only confirms
agent-debugger eval "isinstance(data['age'], str)"      # True. But so what?

# GOOD — tries to disprove
agent-debugger eval "isinstance(users[0]['age'], str)"  # False — not all records
agent-debugger eval "isinstance(users[1]['age'], str)"  # False — so it IS specific to record 3
agent-debugger eval "users[2]"  # {'name': 'Charlie', 'age': '35'} — source data is wrong
```

Never Do This
**Never step blindly.** If you're running `step` more than 3 times in a row, you need a breakpoint, not more steps.

**Never start without reading code.** The debugger doesn't find bugs. You find bugs by reading code and forming theories. The debugger just confirms them.

**Never dump vars when you have a question.** `vars` is for the rare case when you genuinely don't know what variables exist. If you have a theory, `eval` tests it directly.

**Never debug timing bugs with the debugger.** Pausing execution changes timing. Race conditions disappear under observation. Use logging.

**Never keep going after 2 failed hypotheses.** Close. Re-read. Rethink. Your mental model is wrong, and more debugger commands won't fix your mental model.

**Never leave a session open.** `agent-debugger close`. Always. Every time.

**Never fix without verifying.** Run the program after the fix. If you can, toggle the fix to prove causation. Then write a test.
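For the timing rule above, a sketch of the logging alternative. The shared counter and worker are hypothetical; the point is that timestamped log records capture the interleaving a paused debugger would destroy.

```python
# Sketch: diagnose thread interleaving with timestamped logs instead of pausing.
# The shared counter and worker function are illustrative, not part of agent-debugger.
import logging
import threading

logging.basicConfig(
    level=logging.DEBUG,
    format="%(relativeCreated)8.1fms %(threadName)-10s %(message)s",
)
log = logging.getLogger("race")

counter = 0  # shared state under suspicion

def worker(iterations):
    global counter
    for _ in range(iterations):
        snapshot = counter        # read
        counter = snapshot + 1    # write; lost updates show up as repeated reads in the log
        log.debug("read=%d wrote=%d", snapshot, snapshot + 1)

threads = [threading.Thread(target=worker, args=(200,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
log.debug("final=%d expected=%d", counter, 400)
```

Reading the timestamps back shows where the two threads overlapped, without changing the program's timing the way a breakpoint would.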
Notes
- Use absolute paths for breakpoints.
- One session at a time: `close` before starting another.
- Python requires `debugpy` (`pip install debugpy`).
- Program stdout goes to the daemon; use `eval` to inspect output values.