Scry Skill
Scry gives you read-only SQL access to the ExoPriors public corpus (229M+ entities)
via a single HTTP endpoint. You write Postgres SQL against a curated schema (`scry.*`)
and get JSON rows back. There is no ORM, no GraphQL, no pagination token -- just SQL.

Skill generation: `20260313`

A) When to use / not use
Use this skill when:
- Searching, filtering, or aggregating content across the ExoPriors corpus
- Running lexical (BM25) or hybrid searches
- Exploring author networks, cross-platform identities, or publication patterns
- Navigating the OpenAlex academic graph (authors, citations, institutions, concepts)
- Creating shareable artifacts from query results
- Emitting structured agent judgements about entities or external references
Do NOT use this skill when:
- The user wants semantic/vector search composition or embedding algebra (use the scry-vectors skill)
- The user wants LLM-based reranking (use the scry-rerank skill)
- The user is querying their own local database
B) Golden Rules
- Context handshake first. At session start, call `GET /v1/scry/context?skill_generation=20260313`. Use the returned `offerings` block for the current product summary, budgets, canonical env var, default skill, and specialized skill catalog. If you cache descriptive bootstrap context across turns or sessions, also track `surface_context_generation` and refresh when it changes. If `should_update_skill=true`, tell the user to run `npx skills update`.
- Schema first. ALWAYS call `GET /v1/scry/schema` before writing SQL. Never guess column names or types. The schema endpoint returns live column metadata and row-count estimates for every view.
- Check operational status when search looks wrong. If lexical search, materialized-view freshness, or corpus behavior seems off, call `GET /v1/scry/index-view-status` before assuming the query or schema is wrong.
- Clarify ambiguous intent before heavy queries. If the request is vague ("search Reddit for X", "find things about Y"), ask one short clarification question about the goal/output format before running expensive SQL.
- Start with a cheap probe. Before any query likely to run >5s, run `/v1/scry/estimate` and/or a tight exploratory query (`LIMIT 20` plus scoped source/window filters), then scale only after confirming relevance.
- Choose lexical vs semantic explicitly. Use lexical (`scry.search*`) for exact terms and named entities. For conceptual intent ("themes", "things like", "similar to"), route to scry-vectors first, then optionally hybridize.
- LIMIT always. Every query MUST include a LIMIT clause. Max 10,000 rows. Queries without LIMIT are rejected by the SQL validator.
- Prefer materialized views. `scry.entities` has 229M+ rows; scanning it without filters is slow. Use `scry.mv_lesswrong_posts`, `scry.mv_arxiv_papers`, `scry.mv_hackernews_posts`, etc. for targeted access. They are pre-filtered and often have embeddings pre-joined.
- Filter dangerous content. Always include `WHERE content_risk IS DISTINCT FROM 'dangerous'` unless the user explicitly asks for unfiltered results. Dangerous content contains adversarial prompt-injection content.
- Raw SQL, not JSON. `POST /v1/scry/query` takes `Content-Type: text/plain` with raw SQL in the body. Not JSON-wrapped SQL.
- File rough edges promptly. If Scry blocks the task, misses an obvious result set, or exposes a rough edge, submit a brief note to `POST /v1/feedback?feedback_type=suggestion|bug|other&channel=scry_skill` using `Content-Type: text/plain` by default (`text/markdown` also works). Do not silently work around it. Logged-in users can review their submissions with `GET /v1/feedback`.
For full tier limits, timeout policies, and degradation strategies, see Shared Guardrails.
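The LIMIT rule is enforced server-side by the SQL validator, but catching a missing LIMIT before the round trip saves a guaranteed 400. A minimal client-side pre-check can be sketched as follows (hypothetical helper, not part of the Scry API):

```python
import re

def has_limit(sql: str) -> bool:
    """Return True if the query contains an explicit LIMIT <n>.
    Hypothetical client-side pre-check; the server-side validator
    remains authoritative."""
    return re.search(r"\bLIMIT\s+\d+\b", sql, re.IGNORECASE) is not None
```

If `has_limit` returns False, fix the query locally rather than sending it.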
B.1 API Key Setup (Canonical)
Recommended default for less-technical users: in the directory where you launch the agent, store `EXOPRIORS_API_KEY` in `.env` so skills and copied prompts use the same place.

Canonical key naming for this skill:
- Env var: `EXOPRIORS_API_KEY`
- Personal key format: `exopriors_*` with Scry access

```bash
printf '%s\n' 'EXOPRIORS_API_KEY=exopriors_...' >> .env
set -a && source .env && set +a
```

Verify:

```bash
echo "$EXOPRIORS_API_KEY"
```

If using packaged skills, keep them current:

```bash
npx skills add exopriors/skills
npx skills update
```
B.1b x402 Query-Only Access
`POST /v1/scry/query` also supports standard x402 when there is no `Authorization` header. Use it when the user already has an x402-compatible wallet/client and only needs direct pay-per-query access. For schema/context, shares, judgements, feedback, or repeated multi-endpoint calls, prefer a personal `exopriors_*` API key.

If the user wants wallet-native durable identity plus a reusable key, use `POST /v1/auth/agent/signup` first. That binds the wallet to a user and returns a session token plus API key in one flow.

Minimal client shape:

```js
import { wrapFetchWithPayment } from 'x402-fetch';
const paidFetch = wrapFetchWithPayment(fetch, walletClient);
const resp = await paidFetch('https://api.exopriors.com/v1/scry/query', {
  method: 'POST',
  headers: { 'content-type': 'text/plain' },
  body: 'SELECT 1 LIMIT 1',
});
```
C) Quickstart
One end-to-end example: find recent high-scoring LessWrong posts about RLHF.
Step 1: Get dynamic context + update advisory

```
GET https://api.exopriors.com/v1/scry/context?skill_generation=20260313
Authorization: Bearer $EXOPRIORS_API_KEY
```

Step 2: Get schema

```
GET https://api.exopriors.com/v1/scry/schema
Authorization: Bearer $EXOPRIORS_API_KEY
```

Step 3: Run query

```
POST https://api.exopriors.com/v1/scry/query
Authorization: Bearer $EXOPRIORS_API_KEY
Content-Type: text/plain

WITH hits AS (
  SELECT id FROM scry.search('RLHF reinforcement learning human feedback',
                             kinds=>ARRAY['post'], limit_n=>100)
)
SELECT e.uri, e.title, e.original_author, e.original_timestamp, e.score
FROM hits h
JOIN scry.entities e ON e.id = h.id
WHERE e.source = 'lesswrong'
  AND e.content_risk IS DISTINCT FROM 'dangerous'
ORDER BY e.score DESC NULLS LAST
LIMIT 20
```

Response shape:

```json
{
  "columns": ["uri", "title", "original_author", "original_timestamp", "score"],
  "rows": [["https://...", "My RLHF Post", "author", "2025-01-15T...", 142], ...],
  "row_count": 20,
  "duration_ms": 312,
  "truncated": false
}
```
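The `columns`/`rows` arrays zip naturally into per-row dicts; a minimal sketch, assuming only the response shape shown above:

```python
def rows_to_dicts(result: dict) -> list[dict]:
    """Convert a /v1/scry/query response into a list of per-row dicts."""
    cols = result["columns"]
    return [dict(zip(cols, row)) for row in result["rows"]]

sample = {
    "columns": ["uri", "title", "score"],
    "rows": [["https://...", "My RLHF Post", 142]],
}
records = rows_to_dicts(sample)
# records[0]["title"] == "My RLHF Post"
```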
D) Decision Tree

```
User wants to search the ExoPriors corpus?
|
+-- Ambiguous / conceptual ask? --> Clarify intent first, then use
|     scry-vectors for semantic search (optionally hybridize with lexical)
|
+-- By keywords/phrases? --> scry.search() (BM25 lexical over canonical content_text)
|     +-- Specific forum? --> pass mode='mv_lesswrong_posts' or kinds filter
|     +-- Reddit? --> START with scry.reddit_subreddit_stats /
|     |     scry.reddit_clusters() / scry.reddit_embeddings
|     |     and trust /v1/scry/schema status before
|     |     using direct retrieval helpers
|     +-- Large result? --> scry.search_ids() (id+uri+kind, up to 2000)
|
+-- By structured filters (source, date, author)? --> Direct SQL on MVs
|
+-- By semantic similarity? --> (scry-vectors skill, not this one)
|
+-- Hybrid (keywords + semantic rerank)? --> scry.hybrid_search() or
|     lexical CTE + JOIN scry.embeddings
|
+-- Author/people lookup? --> scry.actors, scry.people, scry.person_accounts
|
+-- Academic graph (OpenAlex)? --> scry.openalex_find_authors(),
|     scry.openalex_find_works(), etc. (see schema-guide.md)
|
+-- Need to share results? --> POST /v1/scry/shares
|
+-- Need to emit a structured observation? --> POST /v1/scry/judgements
|
+-- Scry blocked / missing obvious results? --> POST /v1/feedback
```

E) Recipes
E0. Context handshake + skill update advisory
```bash
curl -s "https://api.exopriors.com/v1/scry/context?skill_generation=20260313" \
  -H "Authorization: Bearer $EXOPRIORS_API_KEY"
```

If the response includes `"should_update_skill": true`, ask the user to run `npx skills update`.

E0b. Submit feedback when Scry blocks the task
```bash
curl -s "https://api.exopriors.com/v1/feedback?feedback_type=bug&channel=scry_skill" \
  -H "Authorization: Bearer $EXOPRIORS_API_KEY" \
  -H "Content-Type: text/plain" \
  --data $'## What happened\n- Query: ...\n- Problem: ...\n\n## Why it matters\n- ...\n\n## Suggested fix\n- ...'
```

A success response includes a receipt `id`. Logged-in users can review their own submissions with:

```bash
curl -s "https://api.exopriors.com/v1/feedback?limit=10" \
  -H "Authorization: Bearer $EXOPRIORS_API_KEY"
```

E1. Lexical search (BM25)
```sql
WITH c AS (
  SELECT id FROM scry.search('your query here',
                             kinds=>ARRAY['post'], limit_n=>100)
)
SELECT e.uri, e.title, e.original_author, e.original_timestamp
FROM c JOIN scry.entities e ON e.id = c.id
WHERE e.content_risk IS DISTINCT FROM 'dangerous'
LIMIT 50
```

Default `kinds` if omitted: `['post','paper','document','webpage','twitter_thread','grant']`.
`scry.search()` broadens once to `kinds=>ARRAY['comment']` if that default returns 0 rows.
Pass explicit `kinds` for strict scope (for example comment-only or tweet-only).
Pass `mode=>'mv_lesswrong_posts'` to scope to LessWrong posts.

E2. Reddit-specific discovery
```sql
SELECT subreddit, total_count, latest
FROM scry.reddit_subreddit_stats
WHERE subreddit IN ('MachineLearning', 'LocalLLaMA')
ORDER BY total_count DESC
LIMIT 10
```

For semantic Reddit retrieval over the embedding-covered subset, use `scry.reddit_embeddings` or `scry.search_reddit_posts_semantic(...)`.

Direct retrieval helpers (`scry.reddit_posts`, `scry.reddit_comments`, `scry.mv_reddit_*`, `scry.search_reddit_posts(...)`, `scry.search_reddit_comments(...)`) are currently degraded on the public instance. Check `/v1/scry/schema` status before using them.

E3. Source-filtered materialized view query
```sql
SELECT entity_id, uri, title, original_author, score, original_timestamp
FROM scry.mv_arxiv_papers
WHERE original_timestamp >= '2025-01-01'
ORDER BY original_timestamp DESC
LIMIT 50
```

E4. Author activity across sources
```sql
SELECT e.source::text, COUNT(*) AS docs, MAX(e.original_timestamp) AS latest
FROM scry.entities e
WHERE e.original_author ILIKE '%yudkowsky%'
  AND e.content_risk IS DISTINCT FROM 'dangerous'
GROUP BY e.source::text
ORDER BY docs DESC
LIMIT 20
```

E5. Entity kind distribution for a source
```sql
SELECT kind::text, COUNT(*)
FROM scry.entities
WHERE source = 'hackernews'
GROUP BY kind::text
ORDER BY 2 DESC
LIMIT 20
```

E6. Hybrid search (lexical + semantic rerank in SQL)
```sql
WITH c AS (
  SELECT id FROM scry.search('deceptive alignment',
                             kinds=>ARRAY['post'], limit_n=>200)
)
SELECT e.uri, e.title, e.original_author,
       emb.embedding_voyage4 <=> @p_deadbeef_topic AS distance
FROM c
JOIN scry.entities e ON e.id = c.id
JOIN scry.embeddings emb ON emb.entity_id = c.id AND emb.chunk_index = 0
WHERE e.content_risk IS DISTINCT FROM 'dangerous'
ORDER BY distance
LIMIT 50
```

Requires a stored embedding handle (`@p_deadbeef_topic`). See the scry-vectors skill for creating handles.

E7. Cost estimation before execution
```bash
curl -s -X POST https://api.exopriors.com/v1/scry/estimate \
  -H "Authorization: Bearer $EXOPRIORS_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"sql": "SELECT id, title FROM scry.mv_arxiv_papers LIMIT 1000"}'
```

Returns EXPLAIN (FORMAT JSON) output. Use this for expensive queries before committing.
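If the endpoint relays Postgres's native `EXPLAIN (FORMAT JSON)` shape (a one-element list with a top-level `Plan` node -- an assumption, not confirmed by this doc), the planner's cost estimate can be pulled out like this:

```python
def total_cost(explain_output) -> float:
    """Extract the planner's total cost from EXPLAIN (FORMAT JSON) output,
    assuming Postgres's native [{"Plan": {"Total Cost": ...}}] shape."""
    return explain_output[0]["Plan"]["Total Cost"]

sample = [{"Plan": {"Node Type": "Limit", "Total Cost": 1234.56}}]
```

Compare the cost against a known-cheap baseline before deciding to scale a query up.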
E8. Create a shareable artifact
1. Run query and capture results
2. POST share

```bash
curl -s -X POST https://api.exopriors.com/v1/scry/shares \
  -H "Authorization: Bearer $EXOPRIORS_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{ "kind": "query", "title": "Top RLHF posts on LessWrong", "summary": "20 highest-scored LW posts mentioning RLHF.", "payload": { "sql": "...", "result": {"columns": [...], "rows": [...]} } }'
```

Kinds: `query`, `rerank`, `insight`, `chat`, `markdown`.
Progressive update: create stub immediately, then `PATCH /v1/scry/shares/{slug}`.
Rendered at: `https://scry.io/scry/share/{slug}`.

E9. Emit a structured agent judgement
```bash
curl -s -X POST https://api.exopriors.com/v1/scry/judgements \
  -H "Authorization: Bearer $EXOPRIORS_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "emitter": "my-agent",
    "judgement_kind": "topic_classification",
    "target_external_ref": "arxiv:2401.12345",
    "summary": "Paper primarily about mechanistic interpretability.",
    "payload": {"primary_topic": "mech_interp", "confidence_detail": "title+abstract match"},
    "confidence": 0.88,
    "tags": ["arxiv", "mech_interp"],
    "privacy_level": "public"
  }'
```

Exactly one target required: `target_entity_id`, `target_actor_id`, `target_judgement_id`, or `target_external_ref`.
Judgement-on-judgement: use `target_judgement_id` to chain observations.
target_entity_idtarget_actor_idtarget_judgement_idtarget_external_reftarget_judgement_idE10. People / author lookup
```sql
-- Per-source author grouping
SELECT a.handle, a.display_name, a.source::text, COUNT(*) AS docs
FROM scry.entities e
JOIN scry.actors a ON a.id = e.author_actor_id
WHERE e.source = 'twitter'
GROUP BY a.handle, a.display_name, a.source::text
ORDER BY docs DESC
LIMIT 50
```
LIMIT 50E11. Thread navigation (replies)
```sql
-- Find all replies to a root post
SELECT id, uri, title, original_author, original_timestamp
FROM scry.entities
WHERE anchor_entity_id = 'ROOT_ENTITY_UUID'
ORDER BY original_timestamp
LIMIT 100
```

Use `anchor_entity_id` for the thread root and `parent_entity_id` for the immediate parent.

E12. Count estimation (safe pattern)
Avoid `COUNT(*)` on large tables. Instead, use schema endpoint row estimates or:

```sql
SELECT reltuples::bigint AS estimated_rows
FROM pg_class
WHERE relname = 'mv_lesswrong_posts'
LIMIT 1
```

Note: `pg_class` access is blocked on the public Scry SQL surface. Use `/v1/scry/schema` instead.

F) Error Handling
See `references/error-reference.md` for the full catalogue. Key patterns:

| HTTP | Code | Meaning | Action |
|---|---|---|---|
| 400 | | SQL parse error, missing LIMIT, bad params | Fix query |
| 401 | | Missing or invalid API key | Check key |
| 402 | | Token budget exhausted | Notify user |
| 429 | | Too many requests | Respect `Retry-After` |
| 503 | | Scry pool down or overloaded | Wait and retry |

Auth + timeout diagnostics for CLI users:
- If curl shows HTTP `000`, that is a client-side timeout/network abort, not a server HTTP status. Check `--max-time` and retry with `/v1/scry/estimate` first.
- If you see `401` with `"Invalid authorization format"`, check for whitespace/newlines in the key: `KEY_CLEAN="$(printf '%s' "$EXOPRIORS_API_KEY" | tr -d '\r\n')"`, then use `Authorization: Bearer $KEY_CLEAN`.

Quota fallback strategy:
- If 429: wait `Retry-After` seconds, retry once.
- If 402: tell the user their token budget is exhausted.
- If 503: retry after 30s with exponential backoff (max 3 attempts).
- If the query times out: simplify (use an MV instead of the full table, reduce LIMIT, add tighter WHERE filters).
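The quota fallback strategy can be sketched as a small wrapper (hypothetical helper, not part of any Scry client library; `send` is any callable returning a `(status, payload)` pair, and `sleep` is injected so the policy is testable):

```python
import time

def run_with_fallback(send, max_503_retries=3, sleep=time.sleep):
    """Apply the quota fallback strategy: one retry on 429 after
    Retry-After, a hard error on 402, exponential backoff on 503."""
    status, payload = send()
    if status == 429:
        sleep(int(payload.get("retry_after", 1)))
        status, payload = send()          # retry once
    elif status == 402:
        raise RuntimeError("Token budget exhausted; notify the user.")
    elif status == 503:
        delay = 30
        for _ in range(max_503_retries):
            sleep(delay)
            status, payload = send()
            if status != 503:
                break
            delay *= 2                    # exponential backoff
    return status, payload
```

Injecting `sleep` keeps the backoff schedule (30s, 60s, 120s) observable in tests without real waiting.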
G) Output Contract
When this skill completes a query task, return a consistent structure:

```
Scry Result

Query: <natural language description>
SQL:
<the SQL that ran>
Rows returned: <N> (truncated: <yes/no>)
Duration: <N>ms

<formatted results table or summary>

Share: <share URL if created>
Caveats: <any data quality notes, e.g., "score is NULL for arXiv">
```
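The formatted-results slot can be filled straight from `columns`/`rows`; a minimal Markdown renderer (sketch; does not escape `|` inside values):

```python
def to_markdown_table(columns, rows):
    """Render query results as a Markdown table for the output contract."""
    header = "| " + " | ".join(columns) + " |"
    separator = "|" + "|".join("---" for _ in columns) + "|"
    body = ["| " + " | ".join(str(v) for v in row) + " |" for row in rows]
    return "\n".join([header, separator, *body])
```

For wide result sets, summarize instead: a 10,000-row table is rarely what the user wants rendered inline.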
H) Handoff Contract

Produces: JSON with `columns`, `rows`, `row_count`, `duration_ms`, `truncated`
Feeds into:
- `rerank`: ensure SQL returns `id` and `content_text` columns for candidate sets
- `scry-vectors`: save entity IDs for embedding lookup and semantic reranking
Receives from: none (entry point for SQL-based corpus access)
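Projecting a query result down to the `id` + `content_text` columns a rerank candidate set needs can be sketched as follows (hypothetical helper; raises `ValueError` when the SQL did not select both columns):

```python
def rerank_candidates(result: dict) -> list[dict]:
    """Keep only id and content_text from a /v1/scry/query result,
    in the shape a rerank candidate set needs."""
    cols = result["columns"]
    missing = {"id", "content_text"} - set(cols)
    if missing:
        raise ValueError(f"SQL must select: {sorted(missing)}")
    i_id, i_text = cols.index("id"), cols.index("content_text")
    return [{"id": r[i_id], "content_text": r[i_text]} for r in result["rows"]]
```

Failing fast here surfaces a missing column before the handoff, instead of mid-rerank.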
Related Skills
- scry-vectors -- embed concepts as @handles, search by cosine distance, debias with vector algebra
- scry-rerank -- LLM-powered multi-attribute reranking of candidate sets via pairwise comparison
For detailed schema documentation, see `references/schema-guide.md`.
For the full pattern library, see `references/query-patterns.md`.
For error codes and quota details, see `references/error-reference.md`.