
ethpandaops Query Guide

Query Ethereum network data through the ethpandaops tools. Execute Python code in sandboxed containers with access to ClickHouse blockchain data, Prometheus metrics, Loki logs, and Dora explorer APIs.

Workflow

  1. Discover - Find available datasources and schemas
  2. Find patterns - Search for query examples and runbooks
  3. Execute - Run Python using the `ethpandaops` library

Access Methods

This skill works with either the CLI (`panda`) or the MCP server. Use whichever is available.

CLI (`panda` binary)



Discovery

```bash
panda datasources                            # List all datasources
panda datasources --type clickhouse          # Filter by type
panda schema                                 # List ClickHouse tables
panda schema beacon_api_eth_v1_events_block  # Show table schema
panda docs                                   # List Python API modules
panda docs clickhouse                        # Show module docs
```

Search

```bash
panda search examples "block arrival time"
panda search examples "attestation" --category attestations --limit 5
panda search runbooks "finality delay"
panda search runbooks "validator" --tag performance
```

Execute

```bash
panda execute --code 'from ethpandaops import clickhouse; print(clickhouse.list_datasources())'
panda execute --file script.py
panda execute --code '...' --session <id>  # Reuse session
echo 'print("hello")' | panda execute
```

Sessions

```bash
panda session list
panda session create
panda session destroy <session-id>
```

All commands support `--json` for structured output.

MCP Server (when available as plugin)

| Resource | Description |
| --- | --- |
| `datasources://list` | All configured datasources |
| `datasources://clickhouse` | ClickHouse clusters |
| `datasources://prometheus` | Prometheus instances |
| `datasources://loki` | Loki instances |
| `networks://active` | Active Ethereum networks |
| `clickhouse://tables` | Available tables |
| `clickhouse://tables/{table}` | Table schema details |
| `python://ethpandaops` | Python library API docs |

Tools:

```python
search_examples(query="block arrival time")
search_runbooks(query="network not finalizing")
execute_python(code="...")
manage_session(operation="list")
```

The ethpandaops Python Library

ClickHouse - Blockchain Data

```python
from ethpandaops import clickhouse

# List available clusters
clusters = clickhouse.list_datasources()
# Returns: [{"name": "xatu", "database": "default"}, {"name": "xatu-cbt", ...}]

# Query data (returns pandas DataFrame)
df = clickhouse.query("xatu-cbt", """
    SELECT slot, avg(seen_slot_start_diff) as avg_arrival_ms
    FROM mainnet.fct_block_first_seen_by_node
    WHERE slot_start_date_time >= now() - INTERVAL 1 HOUR
    GROUP BY slot
    ORDER BY slot DESC
""")

# Parameterized queries
df = clickhouse.query("xatu", "SELECT * FROM blocks WHERE slot > {slot}", {"slot": 1000})
```

**Cluster selection:**
- `xatu-cbt` - Pre-aggregated tables (faster, use for metrics)
- `xatu` - Raw event data (use for detailed analysis)

**Required filters:**
- ALWAYS filter on partition key: `slot_start_date_time >= now() - INTERVAL X HOUR`
- Filter by network: `meta_network_name = 'mainnet'` or use schema like `mainnet.table_name`
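Both required filters can be combined in a single parameterized query. A minimal sketch, assuming the `beacon_api_eth_v1_events_block` table in the `default` database; the table and the 6-hour window are illustrative, and the final call is shown commented out because it only runs inside the sandbox:

```python
# Build a query that satisfies both required filters: a partition-key time
# bound and a network filter. Swap in your own target table and window.
sql = """
SELECT slot, count(*) AS events
FROM default.beacon_api_eth_v1_events_block
WHERE slot_start_date_time >= now() - INTERVAL {hours} HOUR
  AND meta_network_name = {network}
GROUP BY slot
ORDER BY slot DESC
"""
params = {"hours": 6, "network": "mainnet"}

# In the sandbox this would run as:
# from ethpandaops import clickhouse
# df = clickhouse.query("xatu", sql, params)
```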

Prometheus - Infrastructure Metrics

```python
from ethpandaops import prometheus

# List instances
instances = prometheus.list_datasources()

# Instant query
result = prometheus.query("ethpandaops", "up")

# Range query
result = prometheus.query_range(
    "ethpandaops",
    "rate(http_requests_total[5m])",
    start="now-1h",
    end="now",
    step="1m",
)
```

**Time formats:** RFC3339 or relative (`now`, `now-1h`, `now-30m`)
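As a sanity check on the relative forms, a small local helper can resolve them to absolute timestamps. This is an illustrative sketch only, not part of the `ethpandaops` library (which accepts these strings directly):

```python
import re
from datetime import datetime, timedelta, timezone

def parse_relative(expr, now=None):
    """Resolve 'now', 'now-1h', 'now-30m' style expressions to a datetime."""
    now = now or datetime.now(timezone.utc)
    if expr == "now":
        return now
    m = re.fullmatch(r"now-(\d+)([hm])", expr)
    if not m:
        raise ValueError(f"unsupported expression: {expr}")
    amount, unit = int(m.group(1)), m.group(2)
    delta = timedelta(hours=amount) if unit == "h" else timedelta(minutes=amount)
    return now - delta

t0 = datetime(2024, 1, 1, 12, 0, tzinfo=timezone.utc)
start = parse_relative("now-1h", now=t0)  # 2024-01-01 11:00 UTC
```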

Loki - Log Data

**Always discover labels first.** Before querying logs, fetch the available labels and their values so you can add the right filters. Unfiltered Loki queries are slow and may time out; label filters narrow the search at the storage level and are essential for efficient log retrieval.
```python
from ethpandaops import loki

# Step 1: List instances
instances = loki.list_datasources()

# Step 2: Fetch all available labels
labels = loki.get_labels("ethpandaops")
print(labels)
# Example: ['app', 'cluster', 'ethereum_cl', 'ethereum_el', 'ethereum_network',
#           'instance', 'namespace', 'node', 'testnet', 'validator_client', ...]

# Step 3: Get values for a specific label to build your filter
networks = loki.get_label_values("ethpandaops", "testnet")
print(networks)  # e.g. ['fusaka-devnet-3', 'hoodi', 'sepolia', ...]
cl_clients = loki.get_label_values("ethpandaops", "ethereum_cl")
print(cl_clients)  # e.g. ['lighthouse', 'prysm', 'teku', 'nimbus', 'lodestar', 'grandine']

# Step 4: Query logs with label filters
logs = loki.query(
    "ethpandaops",
    '{testnet="hoodi", ethereum_cl="lighthouse"} |= "error"',
    start="now-1h",
    limit=100,
)
```

**Key labels for Ethereum log queries:**
- `testnet` — network/devnet name (e.g. `hoodi`, `fusaka-devnet-3`)
- `ethereum_cl` — consensus layer client (e.g. `lighthouse`, `prysm`, `teku`)
- `ethereum_el` — execution layer client (e.g. `geth`, `nethermind`, `besu`)
- `ethereum_network` — Ethereum network name
- `instance` — specific node instance
- `validator_client` — validator client name

**Log level formats vary by client.** When filtering logs by severity, be aware that Ethereum clients format log levels differently:
- Keywords: `CRIT`, `ERR`, `ERROR`, `WARN`, `INFO`, `DEBUG`
- Structured fields: `level=error`, `"level":"error"`, `"severity":"ERROR"`
- Shorthand: `E`, `W`, `C`

Start with `|~ "(?i)(CRIT|ERR)"` as a default filter. If it returns no results, fetch a few unfiltered log lines to identify the client's format, then adapt the regex (e.g. `|~ "level=(error|fatal)"`).
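The fallback logic above can be tried locally with Python's `re` module on a few made-up lines, one per format:

```python
import re

# Hypothetical log lines in the formats described above
lines = [
    "ERRO[0042] beacon: head update failed",         # keyword style
    '{"level":"error","msg":"attestation failed"}',  # structured JSON
    'level=warn msg="peer disconnected"',            # logfmt severity field
    "INFO: chain synced",                            # non-error line
]

# First pass: the default |~ "(?i)(CRIT|ERR)" filter
default_filter = re.compile(r"(?i)(CRIT|ERR)")
hits = [l for l in lines if default_filter.search(l)]
# Catches the keyword and JSON lines but misses the logfmt warn line,
# so a client-specific pattern is needed for that format:
logfmt_filter = re.compile(r"level=(error|fatal|warn)")
```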

Dora - Beacon Chain Explorer

**Discovering all Dora API endpoints:**

Before using Dora, discover the full set of available API endpoints by fetching the Swagger documentation. The swagger page is always at `<dora-url>/api/swagger/index.html`.

  1. First, get the Dora base URL for the network:

```python
from ethpandaops import dora

base_url = dora.get_base_url("mainnet")
print(f"Swagger docs: {base_url}/api/swagger/index.html")
```

  2. Then use WebFetch to read the swagger page at `{base_url}/api/swagger/index.html` to discover all supported API endpoints for that Dora instance. This is important because different Dora deployments may support different endpoints.
  3. Use the discovered endpoints to make targeted API calls via the Python `dora` module or direct HTTP requests.

**Common API usage:**

```python
from ethpandaops import dora

# Get network health
overview = dora.get_network_overview("mainnet")
print(f"Current epoch: {overview['current_epoch']}")
print(f"Active validators: {overview['active_validator_count']}")

# Check finality
epochs_behind = overview['current_epoch'] - overview.get('finalized_epoch', 0)
if epochs_behind > 2:
    print(f"Warning: {epochs_behind} epochs behind finality")

# Generate explorer links
link = dora.link_validator("mainnet", "12345")
link = dora.link_slot("mainnet", "9000000")
link = dora.link_epoch("mainnet", 280000)
```

**Direct HTTP calls for endpoints not in the Python module:**

```python
from ethpandaops import dora
import httpx

base_url = dora.get_base_url("mainnet")

# Call any endpoint discovered from swagger
with httpx.Client(timeout=30) as client:
    resp = client.get(f"{base_url}/api/v1/<endpoint>")
    data = resp.json()
```

Storage - Upload Outputs

```python
import matplotlib.pyplot as plt
from ethpandaops import storage

# Save visualization
plt.savefig("/workspace/chart.png")

# Upload for public URL
url = storage.upload("/workspace/chart.png")
print(f"Chart URL: {url}")

# List uploaded files
files = storage.list_files()
```

Session Management

**Critical:** Each execution runs in a fresh Python process. Variables do NOT persist.

**Files persist:** Save to `/workspace/` to share data between calls.

**Reuse sessions:** Pass `--session <id>` (CLI) or `session_id` (MCP) for faster startup and workspace persistence.

Multi-Step Analysis Pattern

```python
# Call 1: Query and save
from ethpandaops import clickhouse

df = clickhouse.query("xatu-cbt", "SELECT ...")
df.to_parquet("/workspace/data.parquet")
```

```python
# Call 2: Load and visualize (reuse session from Call 1)
import pandas as pd
import matplotlib.pyplot as plt
from ethpandaops import storage

df = pd.read_parquet("/workspace/data.parquet")
plt.figure(figsize=(12, 6))
plt.plot(df["slot"], df["value"])
plt.savefig("/workspace/chart.png")
url = storage.upload("/workspace/chart.png")
print(f"Chart: {url}")
```

Error Handling

ClickHouse errors include actionable suggestions:

  • Missing date filter → "Add `slot_start_date_time >= now() - INTERVAL X HOUR`"
  • Wrong cluster → "Use xatu-cbt for aggregated metrics"
  • Query timeout → Break into smaller time windows

Default execution timeout is 60s, max 600s. For large analyses:

  • Search for optimized patterns first (`panda search examples "..."`)
  • Break work into smaller time windows
  • Save intermediate results to `/workspace/`
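One way to break a large analysis into smaller windows is to precompute the window boundaries and run one query per window, saving each result to `/workspace/`. A minimal sketch; the query call itself is commented out because it assumes the sandbox's `ethpandaops` library, and the table/parameter names are placeholders:

```python
from datetime import datetime, timedelta

def time_windows(start, end, hours=1):
    """Yield (window_start, window_end) pairs covering [start, end)."""
    step = timedelta(hours=hours)
    cur = start
    while cur < end:
        yield cur, min(cur + step, end)
        cur += step

windows = list(time_windows(datetime(2024, 1, 1, 0), datetime(2024, 1, 1, 6), hours=2))
for i, (ws, we) in enumerate(windows):
    # One small query per window, persisted before moving on:
    # from ethpandaops import clickhouse
    # df = clickhouse.query("xatu-cbt", "...", {"start": ws, "end": we})
    # df.to_parquet(f"/workspace/part_{i}.parquet")
    pass
```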

Notes

  • Always filter ClickHouse queries on partition keys (`slot_start_date_time`)
  • Use `xatu-cbt` for pre-aggregated metrics, `xatu` for raw event data
  • Use `panda docs` or the `python://ethpandaops` resource for complete API documentation
  • Search for examples before writing complex queries from scratch
  • Search for runbooks to find common investigation workflows
  • Upload visualizations with `storage.upload()` for shareable URLs
  • NEVER just copy/paste/recite base64 of images. You MUST save the image to the workspace and upload it to give it back to the user.