Found 16 Skills
Show real token usage and estimated savings for the current session. Reads directly from the Claude Code session log — no AI estimation. Triggers on /caveman-stats. Output is injected by the mode-tracker hook; the model itself does not compute the numbers.
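A minimal sketch of how such a hook could inject the numbers, assuming Claude Code's UserPromptSubmit hook contract (JSON payload on stdin; stdout is added to the model's context). The mode-tracker internals, stats file path, and field names below are hypothetical:

```python
#!/usr/bin/env python3
# Hypothetical mode-tracker hook: when the user types /caveman-stats,
# print precomputed session stats so they are injected into context.
# Assumes the UserPromptSubmit hook contract (JSON on stdin, stdout
# appended to context). Stats file and fields are illustrative only.
import json
import sys
from pathlib import Path

payload = json.load(sys.stdin)
if payload.get("prompt", "").strip() == "/caveman-stats":
    stats_file = Path.home() / ".claude" / "caveman-stats.json"  # hypothetical path
    if stats_file.exists():
        stats = json.loads(stats_file.read_text())
        print(f"Session tokens: {stats['total_tokens']:,} "
              f"(saved ~{stats['saved_tokens']:,})")
sys.exit(0)
```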
Analyze claude-trace JSONL files for session health, patterns, and actionable insights. Use when debugging session issues, understanding token usage, or identifying failure patterns.
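For orientation, a sketch of the kind of aggregation such an analysis performs, assuming the common transcript shape where assistant entries carry a message.usage object; field names vary across trace formats:

```python
import json
from collections import Counter

def summarize_usage(jsonl_path: str) -> Counter:
    """Sum per-turn token counters from a session transcript.

    Assumes each assistant line carries a message.usage dict with
    input/output/cache token fields; adjust names for other formats.
    """
    totals = Counter()
    with open(jsonl_path, encoding="utf-8") as f:
        for line in f:
            try:
                entry = json.loads(line)
            except json.JSONDecodeError:
                continue  # skip partial or corrupt lines
            message = entry.get("message")
            usage = message.get("usage") if isinstance(message, dict) else None
            if usage:
                for key in ("input_tokens", "output_tokens",
                            "cache_read_input_tokens",
                            "cache_creation_input_tokens"):
                    totals[key] += usage.get(key, 0)
    return totals

print(summarize_usage("session.jsonl"))
```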
Show full session token usage, costs, TLDR savings, and hook activity
Track Clawdbot AI model usage and estimate costs. Use when reporting daily/weekly costs, analyzing token usage across sessions, or monitoring AI spending. Supports Claude (opus/sonnet), GPT, and Codex models.
Expert in observing, benchmarking, and optimizing AI agents. Specializes in token usage tracking, latency analysis, and quality evaluation metrics. Use when optimizing agent costs, measuring performance, or implementing evals. Triggers include "agent performance", "token usage", "latency optimization", "eval", "agent metrics", "cost optimization", "agent benchmarking".
Track AI token consumption, costs, and usage trends using the orbit CLI. Use this skill whenever the user asks about token usage, AI costs, Claude Code spending, how many tokens were used, cost breakdown by model, session history, or token analytics. Trigger on phrases like 'how much have I spent', 'token usage', 'show me costs', 'what's my AI spending', 'how many tokens today', 'cost per model', 'list sessions', 'track usage', 'token report', 'weekly usage', 'monthly costs', or any token/cost tracking task — even casual references like 'am I spending too much on Claude', 'what did that session cost', 'show me the dashboard', or 'how much is opus costing us'.
Analyzes Claude Code session transcripts (JSONL files) to reveal context window content, token usage patterns, and decision-making processes using the view_session_context.py tool. Use when debugging Claude behavior, investigating token patterns, tracking agent delegation, or analyzing context exhaustion. Triggers on "why did Claude do X", "analyze session", "check session logs", "context window exhaustion", or "track agent delegation".
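The view_session_context.py interface itself is not reproduced here; as a stand-in for one analysis it enables, this sketch counts subagent delegations by scanning for tool_use blocks named Task, assuming the usual transcript layout:

```python
import json
from collections import Counter

def count_delegations(transcript_path: str) -> Counter:
    """Count Task (subagent) invocations per subagent type.

    Assumes assistant entries hold message.content as a list of
    blocks and that delegation appears as tool_use blocks named
    "Task" with a subagent_type in their input.
    """
    counts = Counter()
    with open(transcript_path, encoding="utf-8") as f:
        for line in f:
            try:
                entry = json.loads(line)
            except json.JSONDecodeError:
                continue
            message = entry.get("message")
            content = message.get("content") if isinstance(message, dict) else None
            if not isinstance(content, list):
                continue
            for block in content:
                if (isinstance(block, dict)
                        and block.get("type") == "tool_use"
                        and block.get("name") == "Task"):
                    sub = block.get("input", {}).get("subagent_type", "unknown")
                    counts[sub] += 1
    return counts
```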
Show detailed token ROI report across all tracked sessions
Generate a cost report showing token usage and USD costs by agent and model
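The underlying arithmetic is just tokens multiplied by a per-million-token price; a sketch with placeholder prices, not current published rates:

```python
# Illustrative cost arithmetic: tokens * USD-per-million-tokens.
# Prices below are placeholders, not current published rates.
PRICE_PER_MTOK = {
    "claude-opus": {"input": 15.00, "output": 75.00},
    "claude-sonnet": {"input": 3.00, "output": 15.00},
}

def usd_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    p = PRICE_PER_MTOK[model]
    return (input_tokens * p["input"] + output_tokens * p["output"]) / 1_000_000

# e.g. 200k input + 30k output tokens on sonnet:
print(f"${usd_cost('claude-sonnet', 200_000, 30_000):.2f}")  # -> $1.05
```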
Profiles DAG execution performance including latency, token usage, cost, and resource consumption. Identifies bottlenecks and optimization opportunities. Activate on 'performance profile', 'execution metrics', 'latency analysis', 'token usage', 'cost analysis'. NOT for execution tracing (use dag-execution-tracer) or failure analysis (use dag-failure-analyzer).
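A minimal sketch of per-node profiling under assumed node objects (each with a name and a run() returning a dict with a tokens count); a real profiler would also track cost and resource consumption:

```python
import time

def profile_dag(nodes):
    """Run DAG nodes in already-topologically-sorted order, recording
    wall-clock latency and reported token usage per node. Node objects
    here are hypothetical: each has .name and a .run() method.
    """
    report = []
    for node in nodes:
        start = time.perf_counter()
        result = node.run()
        latency = time.perf_counter() - start
        report.append({
            "node": node.name,
            "latency_s": round(latency, 3),
            "tokens": result.get("tokens", 0),
        })
    # Flag the slowest node as the primary bottleneck candidate.
    bottleneck = max(report, key=lambda r: r["latency_s"], default=None)
    return report, bottleneck
```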
Query Langfuse traces for debugging LLM calls, analyzing token usage, and investigating workflow executions. Use when debugging AI/LLM behavior, checking trace data, or analyzing observability metrics.
Reference documentation for analyzing Claude Code conversation history files