# phoenix-cli

Debug LLM applications using the Phoenix CLI: fetch traces, analyze errors, review experiments, and inspect datasets. Use when debugging AI/LLM applications, analyzing trace data, working with Phoenix observability, or investigating LLM performance issues.
## Installation

Add as an agent skill:

```bash
npx skill4agent add arize-ai/phoenix phoenix-cli
```

Or install the `px` CLI directly:

```bash
npm install -g @arizeai/phoenix-cli

# Or run directly with npx
npx @arizeai/phoenix-cli
```

## Configuration

```bash
export PHOENIX_HOST=http://localhost:6006
export PHOENIX_PROJECT=my-project
```
```bash
export PHOENIX_API_KEY=your-api-key  # if authentication is enabled
```

## Usage

Fetch recent traces:

```bash
px traces --limit 10
```

Find traces with errors:

```bash
px traces --limit 50 --format raw --no-progress | jq '.[] | select(.status == "ERROR")'
```

Inspect a single trace:

```bash
px trace <trace-id>
```

Show failing spans within a trace:

```bash
px trace <trace-id> --format raw | jq '.spans[] | select(.status_code != "OK")'
```

Find the slowest traces:

```bash
px traces --limit 20 --format raw --no-progress | jq 'sort_by(-.duration) | .[0:5]'
```

Find the slowest spans in a trace:

```bash
px trace <trace-id> --format raw | jq '.spans | sort_by(-.duration_ms) | .[0:5] | .[] | {name, duration_ms, span_kind}'
```

Summarize LLM token usage across traces:

```bash
px traces --limit 50 --format raw --no-progress | \
  jq -r '.[].spans[] | select(.span_kind == "LLM") | {model: .attributes["llm.model_name"], prompt_tokens: .attributes["llm.token_count.prompt"], completion_tokens: .attributes["llm.token_count.completion"]}'
```

List datasets and their experiments:

```bash
px datasets
px experiments --dataset my-dataset
```

Show failing experiment runs:

```bash
px experiment <experiment-id> --format raw --no-progress | \
  jq '.[] | select(.error != null) | {input: .input, error}'
```

Compute mean experiment latency:

```bash
px experiment <experiment-id> --format raw --no-progress | \
  jq '[.[].latency_ms] | add / length'
```

## Commands

### px traces

```
px traces [directory] [options]
```

| Option | Description |
|---|---|
| `[directory]` | Save traces as JSON files to directory |
| `--limit` | Number of traces to fetch (default: 10) |
| | Filter by time window |
| | Fetch since ISO timestamp |
| | Include span annotations |
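Because `px traces <directory>` saves traces as JSON files, exported data can also be post-processed without `jq`. A minimal Python sketch, assuming one trace object per file (the file layout is an assumption, not documented here) with the top-level `traceId` and `status` fields shown in the trace JSON structure:

```python
import json
from pathlib import Path

def error_traces(directory):
    """Yield (traceId, status) pairs for saved traces whose status is ERROR."""
    for path in sorted(Path(directory).glob("*.json")):
        trace = json.loads(path.read_text())
        if trace.get("status") == "ERROR":
            yield trace["traceId"], trace["status"]
```

This is equivalent to the `jq 'select(.status == "ERROR")'` filter in the usage examples, applied to files on disk instead of a pipe.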
### px trace

```
px trace <trace-id> [options]
```

| Option | Description |
|---|---|
| | Save to file |
| | Include span annotations |
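A trace fetched with `--format raw` (or saved to a file) can be analyzed in Python as well. This sketch mirrors the slowest-spans `jq` query from the usage examples; the field names (`duration_ms`, `span_kind`) follow those examples:

```python
def slowest_spans(trace, n=5):
    """Return the n slowest spans as (name, duration_ms, span_kind) tuples.

    `trace` is a decoded trace object with a `spans` list, as in the
    trace JSON structure shown in this document.
    """
    spans = sorted(trace.get("spans", []),
                   key=lambda s: s.get("duration_ms", 0),
                   reverse=True)
    return [(s["name"], s.get("duration_ms", 0), s["span_kind"])
            for s in spans[:n]]
```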
### px datasets / px dataset

```
px datasets [options]
px dataset <dataset-name> [options]
```

| Option | Description |
|---|---|
| | Filter by split (repeatable) |
| | Specific dataset version |
| | Save to file |
### px experiments

```
px experiments --dataset <name> [directory]
```

| Option | Description |
|---|---|
| `--dataset` | Dataset name or ID (required) |
| `[directory]` | Export experiment JSON to directory |
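Exported experiment runs can be summarized the same way. This sketch assumes the run shape implied by the `jq` examples in Usage: a list of objects with `latency_ms` and a nullable `error` field:

```python
def experiment_summary(runs):
    """Compute run count, mean latency, and error count for an experiment.

    `runs` is the decoded experiment JSON: a list of dicts with
    `latency_ms` and an `error` field that is null on success.
    """
    latencies = [r["latency_ms"] for r in runs if r.get("latency_ms") is not None]
    errors = [r for r in runs if r.get("error") is not None]
    return {
        "runs": len(runs),
        "mean_latency_ms": sum(latencies) / len(latencies) if latencies else 0.0,
        "errors": len(errors),
    }
```

The `mean_latency_ms` value matches the `jq '[.[].latency_ms] | add / length'` pipeline above.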
### px experiment

```
px experiment <experiment-id> [options]
```

### px prompts / px prompt

```
px prompts [options]
px prompt <prompt-name> [options]
```

## Output Formats

The CLI supports three output formats: `pretty`, `json`, and `raw`. When piping to `jq`, use `--format raw --no-progress`.

## Trace JSON Structure

```json
{
  "traceId": "abc123",
  "spans": [{
    "name": "chat_completion",
    "span_kind": "LLM",
    "status_code": "OK",
    "attributes": {
      "llm.model_name": "gpt-4",
      "llm.token_count.prompt": 512,
      "llm.token_count.completion": 256,
      "input.value": "What is the weather?",
      "output.value": "The weather is sunny..."
    }
  }],
  "duration": 1250,
  "status": "OK"
}
```

## Span Kinds

`LLM`, `CHAIN`, `TOOL`, `RETRIEVER`, `EMBEDDING`, `AGENT`

## Key Attributes

- `llm.model_name`
- `llm.provider`
- `llm.token_count.prompt`
- `llm.token_count.completion`
- `llm.input_messages.*`
- `llm.output_messages.*`
- `input.value`
- `output.value`
- `exception.message`
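With the span kinds and attribute names above, token accounting over exported traces is a short script. A sketch (attribute and field names are taken directly from the trace JSON structure; nothing else is assumed):

```python
from collections import defaultdict

def token_usage(traces):
    """Total prompt and completion tokens per model across all LLM spans."""
    totals = defaultdict(lambda: {"prompt": 0, "completion": 0})
    for trace in traces:
        for span in trace.get("spans", []):
            if span.get("span_kind") != "LLM":
                continue  # skip CHAIN, TOOL, RETRIEVER, EMBEDDING, AGENT spans
            attrs = span.get("attributes", {})
            model = attrs.get("llm.model_name", "unknown")
            totals[model]["prompt"] += attrs.get("llm.token_count.prompt", 0)
            totals[model]["completion"] += attrs.get("llm.token_count.completion", 0)
    return dict(totals)
```

This is the Python counterpart of the token-usage `jq` pipeline in the usage examples, aggregated per model instead of printed per span.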