Query the ExoPriors Scry API -- SQL-over-HTTPS search across 229M+ entities spanning forums, papers, social media, government records, and prediction markets. Includes cross-platform author identity resolution (actors, people, aliases), OpenAlex academic graph navigation (authors, citations, institutions, concepts), shareable artifacts, and structured agent judgements. Use when the task involves: Scry API, ExoPriors, /v1/scry/query, scry.search, scry.entities, materialized views, corpus search, epistemic infrastructure, 229M entities, lexical search, BM25, structured agent judgements, scry shares, cross-corpus analysis, who is this person, cross-platform identity, OpenAlex, citation graph, coauthor graph, academic papers, author lookup. NOT for: semantic/vector search composition or embedding algebra (use scry-vectors), LLM-based reranking (use scry-rerank), or the user's own local Postgres / non-ExoPriors data sources.
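The SQL-over-HTTPS call described above can be sketched in Python. This builds (but does not send) the request; the endpoint, header names, and plain-text SQL body come from this document, while the `build_scry_request` helper name and the fallback key value are illustrative.

```python
import os
import urllib.request

def build_scry_request(sql: str, api_key: str) -> urllib.request.Request:
    """Build (without sending) a POST /v1/scry/query request; SQL travels as plain text."""
    return urllib.request.Request(
        'https://api.exopriors.com/v1/scry/query',
        data=sql.encode('utf-8'),
        headers={
            'Authorization': f'Bearer {api_key}',
            'Content-Type': 'text/plain',
        },
        method='POST',
    )

req = build_scry_request('SELECT 1 LIMIT 1',
                         os.environ.get('EXOPRIORS_API_KEY', 'exopriors_example'))
# urllib.request.urlopen(req) would send it; always keep a LIMIT in the SQL.
```

`urlopen` is left commented out so the sketch stays side-effect free.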
Install:

npx skill4agent add exopriors/skills scry

All capabilities live under `scry.*`; current skill generation is `20260313`.

Key endpoints:
- `GET /v1/scry/context?skill_generation=20260313` -- dynamic context (`offerings`, `surface_context_generation`); if it returns `should_update_skill=true`, run `npx skills update`
- `GET /v1/scry/schema` -- current SQL surface
- `GET /v1/scry/index-view-status` -- materialized-view freshness
- `/v1/scry/estimate` -- cost estimate before large queries
- `POST /v1/scry/query` (`Content-Type: text/plain`) -- run SQL; always include a `LIMIT` (examples below use `LIMIT 20`)
- `POST /v1/feedback?feedback_type=suggestion|bug|other&channel=scry_skill` (`Content-Type: text/plain` or `text/markdown`) -- report problems
- `GET /v1/feedback` -- list prior feedback

Core SQL surface: `scry.search*` functions, the `scry.entities` table, and materialized views such as `scry.mv_lesswrong_posts`, `scry.mv_arxiv_papers`, and `scry.mv_hackernews_posts`. Always filter `WHERE content_risk IS DISTINCT FROM 'dangerous'`.

Auth: set `EXOPRIORS_API_KEY` (keys look like `exopriors_*`) in `.env`:

printf '%s\n' 'EXOPRIORS_API_KEY=exopriors_...' >> .env
set -a && source .env && set +a
echo "$EXOPRIORS_API_KEY"
npx skills add exopriors/skills
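A quick key sanity check before any call can save a failed request; the `exopriors_` prefix is the documented key format, and the CR/LF stripping mirrors the 401 "Invalid authorization format" troubleshooting note later in this document. The `clean_key` helper name is illustrative.

```python
def clean_key(raw: str) -> str:
    """Strip stray CR/LF (a common cause of 401 'Invalid authorization format')."""
    key = raw.replace('\r', '').replace('\n', '').strip()
    if not key.startswith('exopriors_'):
        raise ValueError('EXOPRIORS_API_KEY should start with exopriors_')
    return key

print(clean_key('exopriors_abc123\r\n'))  # -> exopriors_abc123
```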
Refresh the skill with `npx skills update`. `POST /v1/scry/query` requires an `Authorization` header carrying an `exopriors_*` key; agents can sign up via `POST /v1/auth/agent/signup`, or pay per request with x402:

import { wrapFetchWithPayment } from 'x402-fetch';

const paidFetch = wrapFetchWithPayment(fetch, walletClient);
const resp = await paidFetch('https://api.exopriors.com/v1/scry/query', {
  method: 'POST',
  headers: { 'content-type': 'text/plain' },
  body: 'SELECT 1 LIMIT 1',
});

Step 1: Get dynamic context + update advisory
GET https://api.exopriors.com/v1/scry/context?skill_generation=20260313
Authorization: Bearer $EXOPRIORS_API_KEY
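The Step 1 update advisory can be acted on with a small check. The `should_update_skill` field name comes from this document's context endpoint; the exact response layout beyond that field is an assumption, and the `needs_update` helper is illustrative.

```python
import json

def needs_update(context_body: str) -> bool:
    """True when the GET /v1/scry/context response advises refreshing the skill."""
    return bool(json.loads(context_body).get('should_update_skill'))

print(needs_update('{"should_update_skill": true, "offerings": []}'))  # -> True
# When True, run: npx skills update
```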
Step 2: Get schema
GET https://api.exopriors.com/v1/scry/schema
Authorization: Bearer $EXOPRIORS_API_KEY
Step 3: Run query
POST https://api.exopriors.com/v1/scry/query
Authorization: Bearer $EXOPRIORS_API_KEY
Content-Type: text/plain
WITH hits AS (
SELECT id FROM scry.search('RLHF reinforcement learning human feedback',
kinds=>ARRAY['post'], limit_n=>100)
)
SELECT e.uri, e.title, e.original_author, e.original_timestamp, e.score
FROM hits h
JOIN scry.entities e ON e.id = h.id
WHERE e.source = 'lesswrong'
AND e.content_risk IS DISTINCT FROM 'dangerous'
ORDER BY e.score DESC NULLS LAST
LIMIT 20

Response shape:

{
"columns": ["uri", "title", "original_author", "original_timestamp", "score"],
"rows": [["https://...", "My RLHF Post", "author", "2025-01-15T...", 142], ...],
"row_count": 20,
"duration_ms": 312,
"truncated": false
}

User wants to search the ExoPriors corpus?
|
+-- Ambiguous / conceptual ask? --> Clarify intent first, then use
| scry-vectors for semantic search (optionally hybridize with lexical)
|
+-- By keywords/phrases? --> scry.search() (BM25 lexical over canonical content_text)
| +-- Specific forum? --> pass mode='mv_lesswrong_posts' or kinds filter
| +-- Reddit? --> START with scry.reddit_subreddit_stats /
| scry.reddit_clusters() / scry.reddit_embeddings
| and trust /v1/scry/schema status before
| using direct retrieval helpers
| +-- Large result? --> scry.search_ids() (id+uri+kind, up to 2000)
|
+-- By structured filters (source, date, author)? --> Direct SQL on MVs
|
+-- By semantic similarity? --> (scry-vectors skill, not this one)
|
+-- Hybrid (keywords + semantic rerank)? --> scry.hybrid_search() or
| lexical CTE + JOIN scry.embeddings
|
+-- Author/people lookup? --> scry.actors, scry.people, scry.person_accounts
|
+-- Academic graph (OpenAlex)? --> scry.openalex_find_authors(),
| scry.openalex_find_works(), etc. (see schema-guide.md)
|
+-- Need to share results? --> POST /v1/scry/shares
|
+-- Need to emit a structured observation? --> POST /v1/scry/judgements
|
+-- Scry blocked / missing obvious results? --> POST /v1/feedback

Check the update advisory:

curl -s "https://api.exopriors.com/v1/scry/context?skill_generation=20260313" \
  -H "Authorization: Bearer $EXOPRIORS_API_KEY"

If the response contains `"should_update_skill": true`, run `npx skills update`.

File feedback:

curl -s "https://api.exopriors.com/v1/feedback?feedback_type=bug&channel=scry_skill" \
-H "Authorization: Bearer $EXOPRIORS_API_KEY" \
-H "Content-Type: text/plain" \
--data $'## What happened\n- Query: ...\n- Problem: ...\n\n## Why it matters\n- ...\n\n## Suggested fix\n- ...'

The response includes an `id`. List recent feedback:

curl -s "https://api.exopriors.com/v1/feedback?limit=10" \
-H "Authorization: Bearer $EXOPRIORS_API_KEY"

Basic lexical search:

WITH c AS (
SELECT id FROM scry.search('your query here',
kinds=>ARRAY['post'], limit_n=>100)
)
SELECT e.uri, e.title, e.original_author, e.original_timestamp
FROM c JOIN scry.entities e ON e.id = c.id
WHERE e.content_risk IS DISTINCT FROM 'dangerous'
LIMIT 50

Valid `kinds` values include `['post','paper','document','webpage','twitter_thread','grant']`. Pass `kinds=>ARRAY['comment']` to `scry.search()` for comments, or `mode=>'mv_lesswrong_posts'` to restrict to one materialized view.

For Reddit, check subreddit coverage first:

SELECT subreddit, total_count, latest
FROM scry.reddit_subreddit_stats
WHERE subreddit IN ('MachineLearning', 'LocalLLaMA')
ORDER BY total_count DESC

Reddit retrieval surfaces: `scry.reddit_embeddings` / `scry.search_reddit_posts_semantic(...)` for semantic search; `scry.reddit_posts`, `scry.reddit_comments`, and `scry.mv_reddit_*` for direct access; `scry.search_reddit_posts(...)` / `scry.search_reddit_comments(...)` for lexical search. Check their status in `/v1/scry/schema` before using them.

Structured filters on a materialized view:

SELECT entity_id, uri, title, original_author, score, original_timestamp
FROM scry.mv_arxiv_papers
WHERE original_timestamp >= '2025-01-01'
ORDER BY original_timestamp DESC
LIMIT 50

Author footprint across sources:

SELECT e.source::text, COUNT(*) AS docs, MAX(e.original_timestamp) AS latest
FROM scry.entities e
WHERE e.original_author ILIKE '%yudkowsky%'
AND e.content_risk IS DISTINCT FROM 'dangerous'
GROUP BY e.source::text
ORDER BY docs DESC
LIMIT 20

Kind distribution within a source:

SELECT kind::text, COUNT(*)
FROM scry.entities
WHERE source = 'hackernews'
GROUP BY kind::text
ORDER BY 2 DESC
LIMIT 20

Hybrid search (lexical CTE + embedding rerank):

WITH c AS (
SELECT id FROM scry.search('deceptive alignment',
kinds=>ARRAY['post'], limit_n=>200)
)
SELECT e.uri, e.title, e.original_author,
emb.embedding_voyage4 <=> @p_deadbeef_topic AS distance
FROM c
JOIN scry.entities e ON e.id = c.id
JOIN scry.embeddings emb ON emb.entity_id = c.id AND emb.chunk_index = 0
WHERE e.content_risk IS DISTINCT FROM 'dangerous'
ORDER BY distance
LIMIT 50

`@p_deadbeef_topic` is a named vector parameter placeholder (see the scry-vectors skill).

Estimate cost before running a large query:

curl -s -X POST https://api.exopriors.com/v1/scry/estimate \
-H "Authorization: Bearer $EXOPRIORS_API_KEY" \
-H "Content-Type: application/json" \
-d '{"sql": "SELECT id, title FROM scry.mv_arxiv_papers LIMIT 1000"}'

Share results:

# 1. Run query and capture results
# 2. POST share
curl -s -X POST https://api.exopriors.com/v1/scry/shares \
-H "Authorization: Bearer $EXOPRIORS_API_KEY" \
-H "Content-Type: application/json" \
-d '{
"kind": "query",
"title": "Top RLHF posts on LessWrong",
"summary": "20 highest-scored LW posts mentioning RLHF.",
"payload": {
"sql": "...",
"result": {"columns": [...], "rows": [...]}
}
}'

Share `kind` values: `query`, `rerank`, `insight`, `chat`; summaries may use `markdown`. Update an existing share with `PATCH /v1/scry/shares/{slug}`; the public URL is `https://scry.io/scry/share/{slug}`.

Emit a structured judgement:

curl -s -X POST https://api.exopriors.com/v1/scry/judgements \
-H "Authorization: Bearer $EXOPRIORS_API_KEY" \
-H "Content-Type: application/json" \
-d '{
"emitter": "my-agent",
"judgement_kind": "topic_classification",
"target_external_ref": "arxiv:2401.12345",
"summary": "Paper primarily about mechanistic interpretability.",
"payload": {"primary_topic": "mech_interp", "confidence_detail": "title+abstract match"},
"confidence": 0.88,
"tags": ["arxiv", "mech_interp"],
"privacy_level": "public"
}'

A judgement targets one of `target_entity_id`, `target_actor_id`, `target_judgement_id`, or `target_external_ref`; use `target_judgement_id` to reference another judgement.

-- Per-source author grouping
SELECT a.handle, a.display_name, a.source::text, COUNT(*) AS docs
FROM scry.entities e
JOIN scry.actors a ON a.id = e.author_actor_id
WHERE e.source = 'twitter'
GROUP BY a.handle, a.display_name, a.source::text
ORDER BY docs DESC
LIMIT 50

-- Find all replies to a root post
SELECT id, uri, title, original_author, original_timestamp
FROM scry.entities
WHERE anchor_entity_id = 'ROOT_ENTITY_UUID'
ORDER BY original_timestamp
LIMIT 100

`anchor_entity_id` points to the thread root and `parent_entity_id` to the direct parent. Avoid exact `COUNT(*)` over large tables; use the planner estimate:

SELECT reltuples::bigint AS estimated_rows
FROM pg_class
WHERE relname = 'mv_lesswrong_posts'
LIMIT 1

`pg_class` estimates can lag; confirm view freshness via `/v1/scry/schema`. Full error details are in `references/error-reference.md`.

| HTTP | Code | Meaning | Action |
|---|---|---|---|
| 400 | | SQL parse error, missing LIMIT, bad params | Fix query |
| 401 | | Missing or invalid API key | Check key |
| 402 | | Token budget exhausted | Notify user |
| 429 | | Too many requests | Respect `Retry-After` |
| 503 | | Scry pool down or overloaded | Wait and retry |
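The status table maps directly onto a small client-side decision function. This is a sketch: the action strings and the `retry_decision` name are illustrative, and the `Retry-After` handling follows standard HTTP semantics rather than a documented Scry guarantee.

```python
from typing import Optional

def retry_decision(status: int, retry_after: Optional[str] = None) -> str:
    """Map a Scry HTTP status to a client action, per the error table."""
    if status in (400, 401):
        return 'fix'            # bad SQL or bad key; retrying will not help
    if status == 402:
        return 'notify-user'    # token budget exhausted
    if status == 429:
        return f'wait-{retry_after or "1"}s'  # honor Retry-After
    if status == 503:
        return 'backoff-retry'  # pool down or overloaded; wait and retry
    return 'ok' if status < 400 else 'report'

print(retry_decision(429, '30'))  # -> wait-30s
```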
Troubleshooting:
- curl exit code `000` usually means a timeout; set `--max-time` and pre-check cost with `/v1/scry/estimate`.
- `401 "Invalid authorization format"`: strip stray newlines from the key, e.g. `KEY_CLEAN="$(printf '%s' "$EXOPRIORS_API_KEY" | tr -d '\r\n')"`, then send `Authorization: Bearer $KEY_CLEAN`.
- On `429`, honor the `Retry-After` header.

## Scry Result
**Query**: <natural language description>
**SQL**: ```sql <the SQL that ran> ```
**Rows returned**: <N> (truncated: <yes/no>)
**Duration**: <N>ms
<formatted results table or summary>
**Share**: <share URL if created>
**Caveats**: <any data quality notes, e.g., "score is NULL for arXiv">

Response envelope fields: `columns`, `rows`, `row_count`, `duration_ms`, `truncated`. For LLM reranking of hits (by `id` / `content_text`), use the scry-rerank skill; for semantic search, use scry-vectors. See `references/schema-guide.md`, `references/query-patterns.md`, and `references/error-reference.md`.
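As a closing sketch, the response envelope can be rendered into the report template above. The envelope field names (`columns`, `rows`, `row_count`, `duration_ms`, `truncated`) come from this document; the `format_scry_result` helper and its formatting choices are illustrative, and the SQL, Share, and Caveats fields of the template are omitted for brevity.

```python
def format_scry_result(query_desc: str, resp: dict) -> str:
    """Render the columns/rows envelope into the Scry Result report template."""
    header = ' | '.join(resp['columns'])
    body = '\n'.join(' | '.join(str(v) for v in row) for row in resp['rows'])
    truncated = 'yes' if resp['truncated'] else 'no'
    return (f"## Scry Result\n"
            f"**Query**: {query_desc}\n"
            f"**Rows returned**: {resp['row_count']} (truncated: {truncated})\n"
            f"**Duration**: {resp['duration_ms']}ms\n"
            f"{header}\n{body}")

resp = {"columns": ["uri", "title"],
        "rows": [["https://...", "My RLHF Post"]],
        "row_count": 1, "duration_ms": 312, "truncated": False}
print(format_scry_result("Top RLHF posts on LessWrong", resp))
```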