# Building with Subconscious Platform

Subconscious is a platform for running AI agents with external tool use and long-horizon reasoning. Its key differentiator: you kick off an agent with a single API call, defining goals and tools, and Subconscious handles orchestration, context management, and multi-hop reasoning automatically. No multi-agent framework is needed.
## Quick Start

Use the native Subconscious SDK (the recommended approach):

Python:

```python
from subconscious import Subconscious

client = Subconscious(api_key="your-api-key")  # Get a key from https://subconscious.dev/platform

run = client.run(
    engine="tim-gpt",
    input={
        "instructions": "Research quantum computing breakthroughs in 2025",
        "tools": []  # Optional: see the Tools section below
    },
    options={"await_completion": True}
)

# Extract the answer for display
answer = run.result.answer  # Clean text response
print(answer)
```
Node.js/TypeScript:

```typescript
import { Subconscious } from "subconscious";

const client = new Subconscious({
  apiKey: process.env.SUBCONSCIOUS_API_KEY!,
});

const run = await client.run({
  engine: "tim-gpt",
  input: {
    instructions: "Research quantum computing breakthroughs in 2025",
    tools: [], // Optional: see the Tools section below
  },
  options: { awaitCompletion: true },
});

// Extract the answer for display
const answer = run.result?.answer; // Clean text response
console.log(answer);
```
## Response Structure

Critical: the Subconscious SDK returns a different structure than OpenAI's:

```typescript
{
  runId: "run_abc123...",
  status: "succeeded",
  result: {
    answer: "The clean text response for display", // ← Use this for chat UIs
    reasoning: [ // Optional: step-by-step reasoning
      {
        title: "Step 1",
        thought: "I need to search for...",
        conclusion: "Found relevant information"
      }
    ]
  },
  usage: {
    inputTokens: 1234,
    outputTokens: 567,
    durationMs: 45000
  }
}
```
For chat UIs, always use `result.answer` - this is the clean text response. The `reasoning` field contains internal reasoning steps (useful for debugging, but not for display).
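A chat backend can guard this extraction defensively. A minimal Python sketch, assuming the response shape shown above; it operates on a plain dict for illustration (the real SDK returns typed objects):

```python
def extract_answer(run: dict) -> str:
    """Return the display text from a run response, or a safe fallback.

    Assumes the documented response shape: a top-level "status" and a
    "result" object whose "answer" field holds the display text.
    """
    if run.get("status") != "succeeded":
        error = (run.get("error") or {}).get("message", "unknown error")
        return f"Run did not succeed ({run.get('status')}): {error}"
    result = run.get("result") or {}
    return result.get("answer", "")

ok = {"runId": "run_abc", "status": "succeeded", "result": {"answer": "Hello"}}
bad = {"runId": "run_def", "status": "failed", "error": {"message": "boom"}}
print(extract_answer(ok))   # Hello
print(extract_answer(bad))  # Run did not succeed (failed): boom
```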
## Simple Chat Example (No Tools)

For conversational chat without tools:

Python:

```python
from subconscious import Subconscious

client = Subconscious(api_key="your-api-key")

# OpenAI-style message history
messages = [
    {"role": "user", "content": "Hello!"},
    {"role": "assistant", "content": "Hi there! How can I help?"},
    {"role": "user", "content": "Tell me about quantum computing"}
]

# Convert to a single instructions string
instructions = "\n\n".join(
    f"{'User' if m['role'] == 'user' else 'Assistant'}: {m['content']}"
    for m in messages
) + "\n\nRespond to the user's latest message."

run = client.run(
    engine="tim-gpt",
    input={"instructions": instructions, "tools": []},
    options={"await_completion": True}
)
print(run.result.answer)  # Clean text response
```
Node.js/TypeScript:

```typescript
import { Subconscious } from "subconscious";

const client = new Subconscious({
  apiKey: process.env.SUBCONSCIOUS_API_KEY!,
});

const messages = [
  { role: "user", content: "Hello!" },
  { role: "assistant", content: "Hi there! How can I help?" },
  { role: "user", content: "Tell me about quantum computing" }
];

// Convert to a single instructions string
const instructions = messages
  .map(m => `${m.role === "user" ? "User" : "Assistant"}: ${m.content}`)
  .join("\n\n") + "\n\nRespond to the user's latest message.";

const run = await client.run({
  engine: "tim-gpt",
  input: { instructions, tools: [] },
  options: { awaitCompletion: true },
});
console.log(run.result?.answer); // Clean text response
```
## Instructions Format vs Messages

Important: Subconscious takes `instructions` (a single string), not a `messages` array like OpenAI.

- OpenAI format: `messages: [{role: "user", content: "..."}]`
- Subconscious format: `input: {instructions: "..."}` (a single string)
### Converting Messages to Instructions

```typescript
function buildInstructions(
  systemPrompt: string,
  messages: Array<{ role: string; content: string }>
): string {
  const conversation = messages
    .map(m => `${m.role === "user" ? "User" : "Assistant"}: ${m.content}`)
    .join("\n\n");
  return `${systemPrompt}

## Conversation History

${conversation}

## Instructions
Respond to the user's latest message.`;
}

// Usage
const instructions = buildInstructions(
  "You are a helpful coding assistant. Be concise and use code examples.",
  messages
);
```
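The same helper can be sketched in Python, mirroring the TypeScript version above:

```python
def build_instructions(system_prompt: str, messages: list[dict]) -> str:
    """Flatten an OpenAI-style message list into a single instructions string."""
    conversation = "\n\n".join(
        f"{'User' if m['role'] == 'user' else 'Assistant'}: {m['content']}"
        for m in messages
    )
    return (
        f"{system_prompt}\n\n"
        "## Conversation History\n\n"
        f"{conversation}\n\n"
        "## Instructions\n"
        "Respond to the user's latest message."
    )

instructions = build_instructions(
    "You are a helpful coding assistant.",
    [{"role": "user", "content": "Hello!"}],
)
print(instructions)
```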
## System Prompts

Subconscious has no separate system-prompt field. Prepend your system prompt to the instructions:

```typescript
const systemPrompt = "You are a helpful assistant. Always be concise.";
const userMessage = "Explain quantum computing";

const instructions = `${systemPrompt}

User: ${userMessage}

Respond to the user's message.`;
```
## Choosing an Engine

| Engine | Type | Best For |
|---|---|---|
| TIM | Unified | Flagship unified agent engine for a wide range of tasks |
| TIM-Edge | Unified | Speed, efficiency, search-heavy tasks |
| TIMINI | Compound (Gemini-3 Flash backed) | Long context and tool use, strong reasoning |
| TIM-GPT | Compound (GPT-4.1 backed) | Most use cases, good balance of cost and performance |
| TIM-GPT-Heavy | Compound (GPT-5.2 backed) | Maximum capability, complex reasoning |

Recommendation: start with TIM-GPT (`tim-gpt`) for most applications.
## Tools: The Key Differentiator

Subconscious tools are remote HTTP endpoints. When the agent needs to use a tool, Subconscious makes an HTTP POST to the URL you specify. This is fundamentally different from OpenAI function calling, where you handle tool execution in a loop.
### Tool Definition Format

```python
tools = [
    {
        "type": "function",
        "name": "SearchTool",
        "description": "A general search engine that returns the title, URL, and description of 10 webpages",
        "url": "https://your-server.com/search",  # YOUR hosted endpoint
        "method": "POST",
        "timeout": 10,  # seconds
        "parameters": {
            "type": "object",
            "properties": {
                "query": {
                    "type": "string",
                    "description": "A natural language query for the search engine."
                }
            },
            "required": ["query"],
            "additionalProperties": False
        }
    }
]
```
Key fields unique to Subconscious:

- `url`: The HTTP endpoint Subconscious will call when the agent uses this tool
- `method`: The HTTP method (typically POST)
- `timeout`: How long to wait for the tool's response, in seconds

The agent decides when and how to call tools; you don't manage a tool-call loop. Subconscious handles multi-hop reasoning internally via TIMRUN.
## Building a Tool Server

Your tool endpoint receives POST requests with the parameters as JSON and returns JSON results.

FastAPI (Python):

```python
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class SearchRequest(BaseModel):
    query: str

@app.post("/search")
async def search(req: SearchRequest):
    # Your search logic here
    return {
        "results": [
            {"title": "Result 1", "url": "https://example.com/1", "description": "..."}
        ]
    }

# Run with: uvicorn server:app --host 0.0.0.0 --port 8000
```
Express.js (Node.js):

```typescript
import express from "express";

const app = express();
app.use(express.json());

app.post("/search", (req, res) => {
  const { query } = req.body;
  // Your search logic here
  res.json({
    results: [
      { title: "Result 1", url: "https://example.com/1", description: "..." }
    ]
  });
});

app.listen(8000, () => console.log("Tool server running on :8000"));
```
Important: your endpoint must be publicly accessible. For local development, use ngrok or Cloudflare Tunnel.
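Because Subconscious, not your code, constructs the tool call, it is worth validating incoming parameters before acting on them. A minimal standard-library sketch (the schema mirrors the SearchTool definition above; a production server would more likely use a full validator such as the jsonschema package or Pydantic, as in the FastAPI example):

```python
def validate_params(params: dict, schema: dict) -> list[str]:
    """Return a list of problems with a tool-call payload (empty = valid).

    Checks only required fields and primitive types; this is not a full
    JSON Schema validator.
    """
    type_map = {"string": str, "number": (int, float), "boolean": bool,
                "object": dict, "array": list}
    problems = []
    for field in schema.get("required", []):
        if field not in params:
            problems.append(f"missing required field: {field}")
    for field, spec in schema.get("properties", {}).items():
        expected = type_map.get(spec.get("type"))
        if field in params and expected and not isinstance(params[field], expected):
            problems.append(f"wrong type for {field}: expected {spec['type']}")
    return problems

schema = {
    "type": "object",
    "properties": {"query": {"type": "string"}},
    "required": ["query"],
}
print(validate_params({"query": "quantum computing"}, schema))  # []
print(validate_params({}, schema))  # ['missing required field: query']
```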
## Structured Output

Structured output lets you define the exact shape of the agent's response using JSON Schema, so you receive data in a predictable, parseable format.

### When to Use Structured Output

Use structured output when you need:

- Responses that integrate with other systems
- Consistent data formats for downstream processing
- Type-safe responses in your application
### Using answerFormat

The `answerFormat` field accepts a JSON Schema that defines the structure of the agent's answer:
Python:

```python
from subconscious import Subconscious

client = Subconscious(api_key="your-api-key")

run = client.run(
    engine="tim-gpt",
    input={
        "instructions": "Analyze the sentiment of this review: 'Great product, fast shipping!'",
        "tools": [],
        "answerFormat": {
            "type": "object",
            "title": "SentimentAnalysis",
            "properties": {
                "sentiment": {
                    "type": "string",
                    "enum": ["positive", "negative", "neutral"],
                    "description": "The overall sentiment"
                },
                "confidence": {
                    "type": "number",
                    "description": "Confidence score from 0 to 1"
                },
                "keywords": {
                    "type": "array",
                    "items": {"type": "string"},
                    "description": "Key phrases that influenced the sentiment"
                }
            },
            "required": ["sentiment", "confidence", "keywords"]
        }
    },
    options={"await_completion": True},
)

# The response is already a dict matching your schema - no parsing needed
result = run.result.answer
print(result["sentiment"])   # "positive"
print(result["confidence"])  # 0.95
print(result["keywords"])    # ["Great product", "fast shipping"]
```
Node.js/TypeScript:

```typescript
import { Subconscious } from "subconscious";

const client = new Subconscious({
  apiKey: process.env.SUBCONSCIOUS_API_KEY!,
});

const run = await client.run({
  engine: "tim-gpt",
  input: {
    instructions: "Analyze the sentiment of this review: 'Great product, fast shipping!'",
    tools: [],
    answerFormat: {
      type: "object",
      title: "SentimentAnalysis",
      properties: {
        sentiment: {
          type: "string",
          enum: ["positive", "negative", "neutral"],
          description: "The overall sentiment"
        },
        confidence: {
          type: "number",
          description: "Confidence score from 0 to 1"
        },
        keywords: {
          type: "array",
          items: { type: "string" },
          description: "Key phrases that influenced the sentiment"
        }
      },
      required: ["sentiment", "confidence", "keywords"]
    }
  },
  options: { awaitCompletion: true },
});

// The response is already an object matching your schema - no parsing needed
const result = run.result?.answer;
console.log(result.sentiment);  // "positive"
console.log(result.confidence); // 0.95
console.log(result.keywords);   // ["Great product", "fast shipping"]
```
Important: when using `answerFormat`, `result.answer` is a parsed object (a dict in Python, an object in JavaScript), not a JSON string. You can access fields directly without parsing.
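Even with a schema, it is prudent to verify the parsed answer before downstream use. A small Python sketch (the field names follow the sentiment example above; the dataclass is illustrative, not part of the SDK):

```python
from dataclasses import dataclass

@dataclass
class SentimentAnalysis:
    sentiment: str
    confidence: float
    keywords: list

def parse_answer(answer: dict) -> SentimentAnalysis:
    """Coerce a structured answer dict into a typed object, failing loudly."""
    missing = [k for k in ("sentiment", "confidence", "keywords") if k not in answer]
    if missing:
        raise ValueError(f"structured answer missing keys: {missing}")
    return SentimentAnalysis(**answer)

result = parse_answer({"sentiment": "positive", "confidence": 0.95,
                       "keywords": ["Great product", "fast shipping"]})
print(result.sentiment)  # positive
```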
### Using Pydantic Models (Python)

The Python SDK automatically converts Pydantic models to JSON Schema:

```python
from subconscious import Subconscious
from pydantic import BaseModel

class SentimentAnalysis(BaseModel):
    sentiment: str
    confidence: float
    keywords: list[str]

client = Subconscious(api_key="your-api-key")

run = client.run(
    engine="tim-gpt",
    input={
        "instructions": "Analyze the sentiment of: 'Great product!'",
        "answerFormat": SentimentAnalysis,  # Pass the class directly
    },
    options={"await_completion": True},
)
print(run.result.answer["sentiment"])
```
### Using Zod (Node.js/TypeScript)

For TypeScript, we recommend using Zod to define your schema:

```typescript
import { z } from 'zod';
import { Subconscious, zodToJsonSchema } from 'subconscious';

const AnalysisSchema = z.object({
  summary: z.string().describe('A brief summary of the findings'),
  keyPoints: z.array(z.string()).describe('Main takeaways'),
  sentiment: z.enum(['positive', 'neutral', 'negative']),
  confidence: z.number().describe('Confidence score from 0 to 1'),
});

const client = new Subconscious({
  apiKey: process.env.SUBCONSCIOUS_API_KEY!,
});

const run = await client.run({
  engine: 'tim-gpt',
  input: {
    instructions: 'Analyze the latest news about electric vehicles',
    tools: [{ type: 'platform', id: 'fast_search' }],
    answerFormat: zodToJsonSchema(AnalysisSchema, 'Analysis'),
  },
  options: { awaitCompletion: true },
});

// The result is typed according to your schema
const result = run.result?.answer as z.infer<typeof AnalysisSchema>;
console.log(result.summary);
console.log(result.keyPoints);
```
### Structured Reasoning (Optional)

You can also structure the reasoning output using `reasoningFormat`:

```typescript
const ReasoningSchema = z.object({
  steps: z.array(z.object({
    thought: z.string(),
    action: z.string(),
  })),
  conclusion: z.string(),
});

const run = await client.run({
  engine: 'tim-gpt',
  input: {
    instructions: 'Research and analyze a topic',
    tools: [],
    reasoningFormat: zodToJsonSchema(ReasoningSchema, 'Reasoning'),
  },
  options: { awaitCompletion: true },
});

const reasoning = run.result?.reasoning; // Structured reasoning
```
### Schema Requirements

- Must be valid JSON Schema
- Use `"type": "object"` at the top level for structured responses
- Include a `title` field for better results
- Define a `description` for each field
- Use the `required` array for mandatory fields
- Set `additionalProperties: false` to prevent extra fields

See references/api-reference.md for more details on structured output.
## run() vs stream(): A Critical Difference

### Use run() for Chat UIs (Recommended)

Method: `run({ options: { awaitCompletion: true } })`
Behavior: waits for completion, returns the clean answer
What you get: `result.answer` = clean text for display
Best for: chat UIs, simple responses, production apps

```typescript
const run = await client.run({
  engine: "tim-gpt",
  input: { instructions: "Your prompt", tools: [] },
  options: { awaitCompletion: true }
});

const answer = run.result?.answer;       // Clean text - use this for display
const reasoning = run.result?.reasoning; // Optional: step-by-step reasoning
```
### Use stream() for Real-Time Reasoning Display

Method: `stream()`
Behavior: streams JSON incrementally as it is built
What you get: raw JSON chunks building toward `{"reasoning": [...], "answer": "..."}`

WARNING: The stream content is raw JSON characters, not clean text. You must parse it.

What the stream looks like:

```text
delta: {"rea
delta: soning": [{"th
delta: ought": "Analyzing
...
delta: "}], "answer": "Here's the answer"}
done: {runId: "run_xxx"}
```
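The parsing this implies can be sketched in Python: accumulate the deltas into a buffer and pull out completed "thought" strings with a regex. This is a simplified sketch; it handles only the common escapes and is not a general JSON-escape decoder:

```python
import re

THOUGHT_RE = re.compile(r'"thought"\s*:\s*"((?:[^"\\]|\\.)*)"')

def extract_thoughts(buffer: str) -> list[str]:
    """Return all fully received thought strings from accumulated delta text."""
    return [m.replace('\\n', ' ').replace('\\"', '"')
            for m in THOUGHT_RE.findall(buffer)]

# Simulate accumulating deltas; a partial thought does not match until complete
buffer = ""
for delta in ['{"reasoning": [{"th', 'ought": "Analyzing',
              ' the question"}], "answer": "Done"}']:
    buffer += delta
print(extract_thoughts(buffer))  # ['Analyzing the question']
```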
When to use stream():

| Use Case | Method | Why |
|---|---|---|
| Show thinking in real-time | `stream()` | Users see reasoning as it happens (like ChatGPT) |
| Simple chat, fast response | `run()` | Easier; returns a clean `answer` directly |
| Background processing | `run()` without `awaitCompletion` | Poll for status |
How to use stream() for a reasoning UI: see references/streaming-and-reasoning.md for a complete implementation, including:

- How to extract thoughts from the JSON stream
- A Next.js API route example
- A React component for displaying reasoning
- CSS styling
Quick example:

```typescript
const stream = client.stream({
  engine: "tim-gpt",
  input: { instructions: "Your prompt", tools: [] }
});

let fullContent = "";
for await (const event of stream) {
  if (event.type === "delta") {
    fullContent += event.content;
    // Extract thoughts using a regex (see streaming-and-reasoning.md)
    const thoughts = extractThoughts(fullContent);
    // Send to the UI
  } else if (event.type === "done") {
    const final = JSON.parse(fullContent);
    const answer = final.answer; // Extract the final answer
  }
}
```
For most chat UIs, use `run()` instead - it's simpler and returns clean text directly.
## API Modes

### Sync Mode (Recommended for Chat)

```python
run = client.run(
    engine="tim-gpt",
    input={"instructions": "Your task", "tools": tools},
    options={"await_completion": True}
)
answer = run.result.answer  # Clean text
```
### Async Mode

For long-running jobs, don't set `await_completion`:

```python
import time

run = client.run(
    engine="tim-gpt",
    input={"instructions": "Long task", "tools": tools}
    # No await_completion - returns immediately
)
run_id = run.run_id

# Poll for status
status = client.get(run_id)
while status.status not in ["succeeded", "failed"]:
    time.sleep(2)
    status = client.get(run_id)

answer = status.result.answer
```
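For very long jobs, a fixed 2-second poll can be wasteful. An exponential backoff schedule is a common refinement; a sketch of computing the delays (the parameter names here are illustrative, not SDK options):

```python
def backoff_schedule(base: float = 1.0, factor: float = 2.0,
                     cap: float = 30.0, attempts: int = 6) -> list[float]:
    """Delays (seconds) between polls: base, base*factor, ... capped at `cap`."""
    return [min(base * factor ** i, cap) for i in range(attempts)]

# Sleep for each delay in turn between client.get() calls
print(backoff_schedule())  # [1.0, 2.0, 4.0, 8.0, 16.0, 30.0]
```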
### Streaming (Advanced)

See references/streaming-and-reasoning.md for streaming examples. Note: streaming returns raw JSON, not clean text.
## SDK Methods Reference

| Method | Description | When to Use |
|---|---|---|
| `run()` | Create a run (sync or async) | Most common - create agent runs |
| `stream()` | Stream run events in real-time | Chat UIs, live demos |
| `get()` | Get the current status of a run | Check async run status |
| `wait()` | Poll until a run completes | Background jobs, dashboards |
| `cancel()` | Cancel a running or queued run | User cancellation, timeouts |
### client.get()

Get the current status of a run:

```python
status = client.get(run.run_id)
print(status.status)  # 'queued' | 'running' | 'succeeded' | 'failed'

if status.status == "succeeded":
    print(status.result.answer)
```

```typescript
const status = await client.get(run.runId);
console.log(status.status);

if (status.status === "succeeded") {
  console.log(status.result?.answer);
}
```
### client.wait()

Automatically poll until a run completes:

```python
result = client.wait(
    run.run_id,
    options={
        "interval_ms": 2000,  # Poll every 2 seconds (default)
        "max_attempts": 60,   # Max attempts before giving up (default: 60)
    },
)
```

```typescript
const result = await client.wait(run.runId, {
  intervalMs: 2000, // Poll every 2 seconds
  maxAttempts: 60,  // Max attempts before giving up
});
```
### client.cancel()

Cancel a run that's still in progress:

```python
client.cancel(run.run_id)
```

```typescript
await client.cancel(run.runId);
```
## Common Patterns

### Research Agent

```python
tools = [
    {
        "type": "function",
        "name": "web_search",
        "description": "Search the web for current information",
        "url": "https://your-server.com/search",
        "method": "POST",
        "parameters": {
            "type": "object",
            "properties": {
                "query": {"type": "string", "description": "Search query"}
            },
            "required": ["query"]
        }
    }
]

run = client.run(
    engine="tim-gpt",
    input={
        "instructions": "Research the latest AI breakthroughs",
        "tools": tools
    },
    options={"await_completion": True}
)
print(run.result.answer)
```
### Multi-Tool Agent

Define multiple tools; the agent will chain them as needed:

```python
tools = [
    {
        "type": "function",
        "name": "search",
        "description": "Search the web",
        "url": "https://your-server.com/search",
        "method": "POST",
        "parameters": {...}
    },
    {
        "type": "function",
        "name": "save_to_db",
        "description": "Save results to database",
        "url": "https://your-server.com/save",
        "method": "POST",
        "parameters": {...}
    }
]
```
## TypeScript Types

### SDK Exports

```typescript
import {
  Subconscious,
  type RunResponse,
  type StreamEvent,
  type ReasoningStep,
  type Tool,
  type SubconsciousError
} from "subconscious";
```
### Response Types

```typescript
interface RunResponse {
  runId: string;
  status: "queued" | "running" | "succeeded" | "failed" | "canceled" | "timed_out";
  result?: {
    answer: string;              // Clean text response
    reasoning?: ReasoningStep[]; // Optional: step-by-step reasoning
  };
  usage?: {
    inputTokens: number;
    outputTokens: number;
    durationMs: number;
    toolCalls?: { [toolName: string]: number };
  };
  error?: {
    code: string;
    message: string;
  };
}

interface ReasoningStep {
  title?: string;
  thought?: string;
  conclusion?: string;
  tooluse?: {
    tool_name: string;
    parameters: Record<string, unknown>;
    tool_result: unknown;
  };
  subtasks?: ReasoningStep[];
}

interface StreamEvent {
  type: "delta" | "done" | "error";
  content?: string; // Raw JSON chunk for delta events
  runId?: string;   // Present on done
  message?: string; // Present on error
}
```
## Error Handling

### SDK Errors

```typescript
import { SubconsciousError } from "subconscious";

try {
  const run = await client.run({
    engine: "tim-gpt",
    input: { instructions: "...", tools: [] },
    options: { awaitCompletion: true }
  });
} catch (error) {
  if (error instanceof SubconsciousError) {
    switch (error.code) {
      case "invalid_api_key":
        // Redirect to settings
        console.error("Invalid API key");
        break;
      case "rate_limited":
        // Show a retry message
        console.error("Rate limited, retry later");
        break;
      case "insufficient_credits":
        // Prompt to add credits
        console.error("Insufficient credits");
        break;
      case "invalid_request":
        // Log for debugging
        console.error("Invalid request:", error.message);
        break;
      case "timeout":
        // Offer to retry with a longer timeout
        console.error("Request timed out");
        break;
      default:
        console.error("Error:", error.message);
    }
  } else {
    // Network or other errors
    console.error("Unexpected error:", error);
  }
}
```
### HTTP Status Codes

| Status | Code | Meaning | Action |
|---|---|---|---|
| 400 | `invalid_request` | Bad request parameters | Fix the request |
| 401 | `invalid_api_key` | Invalid or missing API key | Check the API key |
| 402 | `insufficient_credits` | Account needs credits | Add credits |
| 429 | `rate_limited` | Too many requests | Retry after a delay |
| 500 | | Server error | Retry with backoff |
| 503 | | Service down | Retry later |
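The retry guidance in the table can be encoded directly; a small Python sketch of the decision (the retryable set follows the table above):

```python
# Transient failures worth retrying, per the status-code table
RETRYABLE = {429, 500, 503}

def should_retry(status_code: int, attempt: int, max_attempts: int = 3) -> bool:
    """Retry only transient failures, up to max_attempts."""
    return status_code in RETRYABLE and attempt < max_attempts

print(should_retry(429, 0))  # True
print(should_retry(401, 0))  # False - fix the API key instead of retrying
```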
### Run-Level Errors

Runs can fail after being accepted, so always check the status:

```typescript
const run = await client.run({...});

if (run.status === "succeeded") {
  console.log(run.result?.answer);
} else if (run.status === "failed") {
  console.error("Run failed:", run.error?.message);
} else if (run.status === "timed_out") {
  console.error("Run timed out");
}
```
## Request Cancellation

### Using AbortController

```typescript
const controller = new AbortController();

// Start the request, passing the controller's signal so abort() has effect
// (assumes your SDK version accepts a fetch-style `signal` request option)
const runPromise = client.run(
  {
    engine: "tim-gpt",
    input: { instructions: "...", tools: [] },
    options: { awaitCompletion: true }
  },
  { signal: controller.signal }
);

// Cancel after 10 seconds
setTimeout(() => controller.abort(), 10000);

// Or cancel on a user action
cancelButton.onclick = () => controller.abort();

try {
  const run = await runPromise;
} catch (error) {
  if (error.name === "AbortError") {
    console.log("Request cancelled by user");
  }
}
```
### Cancelling Async Runs

```typescript
// Start an async run
const run = await client.run({
  engine: "tim-gpt",
  input: { instructions: "...", tools: [] }
  // No awaitCompletion
});

// Cancel it
await client.cancel(run.runId);
```
## Common Gotchas

### CRITICAL: Streaming Returns Raw JSON, Not Text

The #1 mistake: displaying `event.content` from `stream()` directly in the UI shows ugly raw JSON like `{"reasoning":[{"thought":"I need to...`.
The fix: extract the thoughts and answer from the JSON:

```typescript
// BAD - shows raw JSON in the UI
for await (const event of stream) {
  if (event.type === "delta") {
    displayToUser(event.content); // Shows: {"rea... (ugly!)
  }
}

// GOOD - extract thoughts and show clean text
let fullContent = "";
let sentThoughts: string[] = [];

for await (const event of stream) {
  if (event.type === "delta") {
    fullContent += event.content;
    // Extract thoughts using a regex
    const thoughtPattern = /"thought"\s*:\s*"((?:[^"\\]|\\.)*)"/g;
    let match;
    while ((match = thoughtPattern.exec(fullContent)) !== null) {
      const thought = match[1].replace(/\\n/g, " ").replace(/\\"/g, '"');
      if (!sentThoughts.includes(thought)) {
        displayThinking(thought); // Shows: "I need to search for movies..."
        sentThoughts.push(thought);
      }
    }
  } else if (event.type === "done") {
    const parsed = JSON.parse(fullContent);
    displayAnswer(parsed.answer); // Shows the clean final answer
  }
}
```
See references/streaming-and-reasoning.md for the complete implementation.

- Use `result.answer` for display - not `choices[0].message.content` (that's OpenAI format)
- `stream()` returns raw JSON - use `run()` for clean text answers in chat UIs; see references/streaming-and-reasoning.md for parsing
- No OpenAI-compatible endpoint - use the native SDK, not the OpenAI SDK
- Instructions format, not messages - convert message history to a single string
- Tools must be publicly accessible - use ngrok for local development
- The response is nested - the clean text is in `result.answer`, not at the top level
- The `reasoning` field is optional - it contains internal steps, useful for debugging
- Engine names - use the lowercase API identifiers (e.g. `tim-gpt`) from the engine table
- Streaming shows raw JSON - you must parse `{"reasoning": [...], "answer": "..."}` yourself; for simple chat, use `run()` instead
- `tools` is required - even if you have no tools, pass an empty array
- No system message field - prepend the system prompt to your instructions string
- Always check `status` - don't access `result` without checking the status first
## Next.js/Vercel Example

See references/streaming-and-reasoning.md for a complete Next.js API route example with Server-Sent Events.
## Production Checklist

Review these areas before shipping:

- Security
- Reliability
- Monitoring
- UX
- Cost Control
## Reference Files

For detailed information, see:

- references/api-reference.md - Complete API documentation with correct response formats
- references/streaming-and-reasoning.md - CRITICAL: how to stream and display reasoning steps (solves the raw JSON problem)
- references/typescript-types.md - Complete TypeScript type definitions
- references/error-handling.md - Error handling patterns and best practices
- references/tools-guide.md - Deep dive on the tool system
- Complete working examples, including Next.js and reasoning display
## Resources

When in doubt, check the official docs at docs.subconscious.dev for the latest information.