# ethpandaops Query Guide

Query Ethereum network data through the ethpandaops tools. Execute Python code in sandboxed containers with access to ClickHouse blockchain data, Prometheus metrics, Loki logs, and Dora explorer APIs.
## Workflow

- **Discover** - Find available datasources and schemas
- **Find patterns** - Search for query examples and runbooks
- **Execute** - Run Python using the `ethpandaops` library
## Access Methods

This skill works with either the CLI (`panda` binary) or the MCP server. Use whichever is available.

### CLI (`panda` binary)

#### Discovery

```bash
panda datasources                               # List all datasources
panda datasources --type clickhouse             # Filter by type
panda schema                                    # List ClickHouse tables
panda schema beacon_api_eth_v1_events_block     # Show table schema
panda docs                                      # List Python API modules
panda docs clickhouse                           # Show module docs
```
#### Search

```bash
panda search examples "block arrival time"
panda search examples "attestation" --category attestations --limit 5
panda search runbooks "finality delay"
panda search runbooks "validator" --tag performance
```
#### Execute

```bash
panda execute --code 'from ethpandaops import clickhouse; print(clickhouse.list_datasources())'
panda execute --file script.py
panda execute --code '...' --session <id>       # Reuse session
echo 'print("hello")' | panda execute
```
#### Sessions

```bash
panda session list
panda session create
panda session destroy <session-id>
```

All commands support `--json` for structured output.
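Since every command supports `--json`, its output can be consumed programmatically. A minimal sketch of parsing it in Python; the payload shape shown here is an assumption, so inspect real output before relying on specific field names:

```python
import json

# Sample output in the style of `panda datasources --json`.
# The "name"/"type" fields are assumed, not documented.
raw = """
[
  {"name": "xatu", "type": "clickhouse"},
  {"name": "xatu-cbt", "type": "clickhouse"},
  {"name": "ethpandaops", "type": "prometheus"}
]
"""

datasources = json.loads(raw)
clickhouse_names = [d["name"] for d in datasources if d["type"] == "clickhouse"]
print(clickhouse_names)  # → ['xatu', 'xatu-cbt']
```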
### MCP Server (when available as plugin)
| Resource | Description |
|---|---|
| | All configured datasources |
| | ClickHouse clusters |
| | Prometheus instances |
| | Loki instances |
| | Active Ethereum networks |
| | Available tables |
| | Table schema details |
| `python://ethpandaops` | Python library API docs |

```
search_examples(query="block arrival time")
search_runbooks(query="network not finalizing")
execute_python(code="...")
manage_session(operation="list")
```
## The ethpandaops Python Library
### ClickHouse - Blockchain Data
```python
from ethpandaops import clickhouse

# List available clusters
clusters = clickhouse.list_datasources()
# Returns: [{"name": "xatu", "database": "default"}, {"name": "xatu-cbt", ...}]

# Query data (returns a pandas DataFrame)
df = clickhouse.query("xatu-cbt", """
    SELECT
        slot,
        avg(seen_slot_start_diff) as avg_arrival_ms
    FROM mainnet.fct_block_first_seen_by_node
    WHERE slot_start_date_time >= now() - INTERVAL 1 HOUR
    GROUP BY slot
    ORDER BY slot DESC
""")

# Parameterized queries
df = clickhouse.query("xatu", "SELECT * FROM blocks WHERE slot > {slot}", {"slot": 1000})
```

**Cluster selection:**
- `xatu-cbt` - Pre-aggregated tables (faster, use for metrics)
- `xatu` - Raw event data (use for detailed analysis)

**Required filters:**
- ALWAYS filter on the partition key: `slot_start_date_time >= now() - INTERVAL X HOUR`
- Filter by network: `meta_network_name = 'mainnet'` or use a schema prefix like `mainnet.table_name`
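The partition-key rule can be checked locally before a query is ever submitted. A small pre-flight sketch (this helper is local, not part of the `ethpandaops` library):

```python
import re

def has_partition_filter(sql: str) -> bool:
    """True if the query constrains the slot_start_date_time partition key."""
    return bool(re.search(r"slot_start_date_time\s*(>=|<=|=|>|<|BETWEEN)", sql, re.IGNORECASE))

good = """
    SELECT slot FROM mainnet.fct_block_first_seen_by_node
    WHERE slot_start_date_time >= now() - INTERVAL 1 HOUR
"""
print(has_partition_filter(good))  # → True
print(has_partition_filter("SELECT slot FROM mainnet.fct_block_first_seen_by_node"))  # → False
```

Running a check like this before `clickhouse.query(...)` fails fast instead of waiting on a full-scan timeout.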
### Prometheus - Infrastructure Metrics
```python
from ethpandaops import prometheus

# List instances
instances = prometheus.list_datasources()

# Instant query
result = prometheus.query("ethpandaops", "up")

# Range query
result = prometheus.query_range(
    "ethpandaops",
    "rate(http_requests_total[5m])",
    start="now-1h",
    end="now",
    step="1m"
)
```

**Time formats:** RFC3339 or relative (`now`, `now-1h`, `now-30m`)
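If you need those relative-time strings outside of `query_range` (for example, to log the concrete window you queried), they are simple to expand. A local sketch that assumes only the formats listed above; the library's own parser is the source of truth:

```python
import re
from datetime import datetime, timedelta, timezone

def parse_relative(ts, now=None):
    """Expand 'now', 'now-1h', 'now-30m' style strings into aware datetimes."""
    now = now or datetime.now(timezone.utc)
    if ts == "now":
        return now
    m = re.fullmatch(r"now-(\d+)([smhd])", ts)
    if not m:
        raise ValueError(f"unsupported time format: {ts}")
    unit_seconds = {"s": 1, "m": 60, "h": 3600, "d": 86400}
    return now - timedelta(seconds=int(m.group(1)) * unit_seconds[m.group(2)])

anchor = datetime(2024, 1, 1, tzinfo=timezone.utc)
print(parse_relative("now-1h", anchor).isoformat())  # → 2023-12-31T23:00:00+00:00
```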
### Loki - Log Data
**Always discover labels first.** Before querying logs, fetch the available labels and their values so you can add the right filters. Unfiltered Loki queries are slow and may time out — label filters narrow the search at the storage level and are essential for efficient log retrieval.

```python
from ethpandaops import loki

# Step 1: List instances
instances = loki.list_datasources()

# Step 2: Fetch all available labels
labels = loki.get_labels("ethpandaops")
print(labels)
# Example: ['app', 'cluster', 'ethereum_cl', 'ethereum_el', 'ethereum_network',
#           'instance', 'namespace', 'node', 'testnet', 'validator_client', ...]

# Step 3: Get values for a specific label to build your filter
networks = loki.get_label_values("ethpandaops", "testnet")
print(networks)  # e.g. ['fusaka-devnet-3', 'hoodi', 'sepolia', ...]
cl_clients = loki.get_label_values("ethpandaops", "ethereum_cl")
print(cl_clients)  # e.g. ['lighthouse', 'prysm', 'teku', 'nimbus', 'lodestar', 'grandine']

# Step 4: Query logs with label filters
logs = loki.query(
    "ethpandaops",
    '{testnet="hoodi", ethereum_cl="lighthouse"} |= "error"',
    start="now-1h",
    limit=100
)
```

**Key labels for Ethereum log queries:**
- `testnet` — network/devnet name (e.g. `hoodi`, `fusaka-devnet-3`)
- `ethereum_cl` — consensus layer client (e.g. `lighthouse`, `prysm`, `teku`)
- `ethereum_el` — execution layer client (e.g. `geth`, `nethermind`, `besu`)
- `ethereum_network` — Ethereum network name
- `instance` — specific node instance
- `validator_client` — validator client name

**Log level formats vary by client.** When filtering logs by severity, be aware that Ethereum clients format log levels differently:
- Keywords: `CRIT`, `ERR`, `ERROR`, `WARN`, `INFO`, `DEBUG`
- Structured fields: `level=error`, `"level":"error"`, `"severity":"ERROR"`
- Shorthand: `E`, `W`, `C`

Start with `|~ "(?i)(CRIT|ERR)"` as a default filter. If it returns no results, fetch a few unfiltered log lines to identify the client's format, then adapt the regex (e.g. `|~ "level=(error|fatal)"`).
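Those patterns can be sanity-checked locally with Python's `re` before sending them to Loki (LogQL `|~` applies the regex server-side). The sample lines below are illustrative, not real client output:

```python
import re

lines = [
    "Jan 01 12:00:00 ERRO Block import failed",   # keyword style
    'level=error msg="attestation rejected"',     # logfmt style
    '{"level":"error","msg":"bad block"}',        # JSON style
    "INFO Slot processed",                        # should not match
]

default_filter = re.compile(r"(?i)(CRIT|ERR)")
logfmt_filter = re.compile(r"level=(error|fatal)")

print([bool(default_filter.search(line)) for line in lines])  # → [True, True, True, False]
print([bool(logfmt_filter.search(line)) for line in lines])   # → [False, True, False, False]
```

Note that the broad default filter also catches structured formats (any substring `err`), while the logfmt pattern only matches `level=` fields; this is why starting broad and then narrowing works.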
### Dora - Beacon Chain Explorer
**Discovering all Dora API endpoints:**

Before using Dora, discover the full set of available API endpoints by fetching the Swagger documentation. The swagger page is always at `<dora-url>/api/swagger/index.html`.

- First, get the Dora base URL for the network:

  ```python
  from ethpandaops import dora
  base_url = dora.get_base_url("mainnet")
  print(f"Swagger docs: {base_url}/api/swagger/index.html")
  ```

- Then use WebFetch to read the swagger page at `{base_url}/api/swagger/index.html` to discover all supported API endpoints for that Dora instance. This is important because different Dora deployments may support different endpoints.

- Use the discovered endpoints to make targeted API calls via the Python `dora` module or direct HTTP requests.

**Common API usage:**
```python
from ethpandaops import dora

# Get network health
overview = dora.get_network_overview("mainnet")
print(f"Current epoch: {overview['current_epoch']}")
print(f"Active validators: {overview['active_validator_count']}")

# Check finality
epochs_behind = overview['current_epoch'] - overview.get('finalized_epoch', 0)
if epochs_behind > 2:
    print(f"Warning: {epochs_behind} epochs behind finality")

# Generate explorer links
link = dora.link_validator("mainnet", "12345")
link = dora.link_slot("mainnet", "9000000")
link = dora.link_epoch("mainnet", 280000)
```

**Direct HTTP calls for endpoints not in the Python module:**

```python
from ethpandaops import dora
import httpx

base_url = dora.get_base_url("mainnet")

# Call any endpoint discovered from swagger
with httpx.Client(timeout=30) as client:
    resp = client.get(f"{base_url}/api/v1/<endpoint>")
    data = resp.json()
```
### Storage - Upload Outputs
```python
from ethpandaops import storage

# Save visualization
import matplotlib.pyplot as plt
plt.savefig("/workspace/chart.png")

# Upload for public URL
url = storage.upload("/workspace/chart.png")
print(f"Chart URL: {url}")

# List uploaded files
files = storage.list_files()
```
## Session Management
**Critical:** Each execution runs in a fresh Python process. Variables do NOT persist.

**Files persist:** Save to `/workspace/` to share data between calls.

**Reuse sessions:** Pass `--session <id>` (CLI) or `session_id` (MCP) for faster startup and workspace persistence.
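The variables-don't-persist / files-do rule can be sketched without the sandbox. Here a temporary directory stands in for `/workspace/`, and the sample values are made up for illustration:

```python
import json
import tempfile
from pathlib import Path

workspace = Path(tempfile.mkdtemp())  # stand-in for /workspace/

# "Call 1": compute something and save it to the workspace
results = {"slots_checked": 128, "late_blocks": 3}
(workspace / "summary.json").write_text(json.dumps(results))

# "Call 2": a fresh process has no variables; only the file carries state over
restored = json.loads((workspace / "summary.json").read_text())
print(restored["late_blocks"])  # → 3
```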
## Multi-Step Analysis Pattern
```python
# Call 1: Query and save
from ethpandaops import clickhouse

df = clickhouse.query("xatu-cbt", "SELECT ...")
df.to_parquet("/workspace/data.parquet")
```

```python
# Call 2: Load and visualize (reuse session from Call 1)
import pandas as pd
import matplotlib.pyplot as plt
from ethpandaops import storage

df = pd.read_parquet("/workspace/data.parquet")
plt.figure(figsize=(12, 6))
plt.plot(df["slot"], df["value"])
plt.savefig("/workspace/chart.png")
url = storage.upload("/workspace/chart.png")
print(f"Chart: {url}")
```
## Error Handling
ClickHouse errors include actionable suggestions:

- Missing date filter → "Add `slot_start_date_time >= now() - INTERVAL X HOUR`"
- Wrong cluster → "Use xatu-cbt for aggregated metrics"
- Query timeout → Break into smaller time windows

The default execution timeout is 60s; the maximum is 600s. For large analyses:

- Search for optimized patterns first (`panda search examples "..."`)
- Break work into smaller time windows
- Save intermediate results to `/workspace/`
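"Break into smaller time windows" can be mechanized: split the range, run one query per window, and combine the pieces. A sketch of the splitting step (a local helper, not a library function):

```python
from datetime import datetime, timedelta, timezone

def split_windows(start, end, hours=1):
    """Split [start, end) into consecutive windows of at most `hours` hours."""
    windows = []
    cursor = start
    step = timedelta(hours=hours)
    while cursor < end:
        windows.append((cursor, min(cursor + step, end)))
        cursor += step
    return windows

start = datetime(2024, 1, 1, 0, 0, tzinfo=timezone.utc)
end = datetime(2024, 1, 1, 3, 30, tzinfo=timezone.utc)
print(len(split_windows(start, end)))  # → 4 (three full hours plus a 30-minute remainder)
```

Each window can then drive one `clickhouse.query(...)` call, with intermediate results saved under `/workspace/` before combining.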
## Notes
- Always filter ClickHouse queries on partition keys (`slot_start_date_time`)
- Use `xatu-cbt` for pre-aggregated metrics, `xatu` for raw event data
- Use `panda docs` or the `python://ethpandaops` resource for complete API documentation
- Search for examples before writing complex queries from scratch
- Search for runbooks to find common investigation workflows
- Upload visualizations with `storage.upload()` for shareable URLs
- NEVER just copy/paste/recite base64 of images. You MUST save the image to the workspace and upload it to give it back to the user.