
Honcho Integration Guide


This skill helps you integrate Honcho into existing Python or TypeScript applications. Honcho provides AI-native memory for stateful agents—it uses custom reasoning models to learn continually.

Integration Workflow


Follow these phases in order:

Phase 1: Codebase Exploration


Before asking the user anything, explore the codebase to understand:
  1. Language & Framework: Is this Python or TypeScript? What frameworks are used (FastAPI, Express, Next.js, etc.)?
  2. Existing AI/LLM code: Search for existing LLM integrations (OpenAI, Anthropic, LangChain, etc.)
  3. Entity structure: Identify users, agents, bots, or other entities that interact
  4. Session/conversation handling: How does the app currently manage conversations?
  5. Message flow: Where are messages sent/received? What's the request/response cycle?
Use Glob and Grep to find:
  • `**/*.py` or `**/*.ts` files containing "openai", "anthropic", "llm", "chat", "message"
  • User/session models or types
  • API routes handling chat or conversation endpoints
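Outside of those tools, roughly equivalent searches can be run from a shell. This sketch creates a throwaway demo directory purely so the search command below has something to match; in a real project you would run the `grep` from the repository root instead:

```shell
# Demo tree (only so the search below is reproducible)
mkdir -p /tmp/honcho-scan-demo
printf 'import openai\nclient = openai.OpenAI()\n' > /tmp/honcho-scan-demo/chat.py

# List *.py / *.ts files mentioning LLM-related keywords
grep -rli --include='*.py' --include='*.ts' \
    -E 'openai|anthropic|llm|chat|message' /tmp/honcho-scan-demo
```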

Phase 2: Interview (REQUIRED)


After exploring the codebase, use the AskUserQuestion tool to clarify integration requirements. Ask these questions (adapt based on what you learned in Phase 1):

Question Set 1 - Entities & Peers


Ask about which entities should be Honcho peers:
  • header: "Peers"
  • question: "Which entities should Honcho track and build representations for?"
  • options based on what you found (e.g., "End users only", "Users + AI assistant", "Users + multiple AI agents", "All participants including third-party services")
  • Include a follow-up if they have multiple AI agents: should any AI peers be observed?

Question Set 2 - Integration Pattern


Ask how they want to use Honcho context:
  • header: "Pattern"
  • question: "How should your AI access Honcho's user context?"
  • options:
    • "Tool call (Recommended)" - "Agent queries Honcho on-demand via function calling"
    • "Pre-fetch" - "Fetch user context before each LLM call with predefined queries"
    • "context()" - "Include conversation history and representations in prompt"
    • "Multiple patterns" - "Combine approaches for different use cases"

Question Set 3 - Session Structure


Ask about conversation structure:
  • header: "Sessions"
  • question: "How should conversations map to Honcho sessions?"
  • options based on their app (e.g., "One session per chat thread", "One session per user", "Multiple users per session (group chat)", "Custom session logic")

Question Set 4 - Specific Queries (if using pre-fetch pattern)


If they chose pre-fetch, ask what context matters:
  • header: "Context"
  • question: "What user context should be fetched for the AI?"
  • multiSelect: true
  • options: "Communication style", "Expertise level", "Goals/priorities", "Preferences", "Recent activity summary", "Custom queries"
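The bullets above map directly onto an AskUserQuestion payload. As a sketch, Question Set 4 expressed as data (field names follow the bullets above; the tool's exact schema may differ):

```python
# Hypothetical AskUserQuestion payload for Question Set 4
context_question = {
    "header": "Context",
    "question": "What user context should be fetched for the AI?",
    "multiSelect": True,
    "options": [
        "Communication style",
        "Expertise level",
        "Goals/priorities",
        "Preferences",
        "Recent activity summary",
        "Custom queries",
    ],
}
```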

Phase 3: Implementation


Based on interview responses, implement the integration:
  1. Install the SDK
  2. Create Honcho client initialization
  3. Set up peer creation for identified entities
  4. Implement the chosen integration pattern(s)
  5. Add message storage after exchanges
  6. Update any existing conversation handlers
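The steps above can be sketched end-to-end. The `FakeHoncho` stand-in below is hypothetical (it only mirrors the client calls used in this guide) so the wiring is runnable without a live Honcho instance; real code would import `Honcho` from the SDK instead:

```python
class FakePeer:
    """Stand-in for a Honcho peer (illustration only)."""
    def __init__(self, peer_id, config=None):
        self.id = peer_id
        self.config = config or {}

    def message(self, content):
        return {"peer_id": self.id, "content": content}

class FakeSession:
    """Stand-in for a Honcho session; stores messages in memory."""
    def __init__(self, session_id):
        self.id = session_id
        self.messages = []

    def add_messages(self, msgs):
        self.messages.extend(msgs)

class FakeHoncho:
    """Mirrors the client surface used in this guide: peer() and session()."""
    def __init__(self, workspace_id):
        self.workspace_id = workspace_id

    def peer(self, peer_id, config=None):
        return FakePeer(peer_id, config)

    def session(self, session_id):
        return FakeSession(session_id)

# Steps 2-6: initialize client, create peers, set up a session, store messages
honcho = FakeHoncho(workspace_id="my-app")
user = honcho.peer("user-123")
assistant = honcho.peer("assistant", config={"observe_me": False})

session = honcho.session("conversation-123")
session.add_messages([
    user.message("Hello"),
    assistant.message("Hi! How can I help?"),
])
```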

Phase 4: Verification


  • Ensure all message exchanges are stored to Honcho
  • Verify AI peers have `observe_me=False` (unless the user specifically wants AI observation)
  • Check that the workspace ID is consistent across the codebase
  • Confirm the environment variable for the API key is documented
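The API-key check can also be enforced at startup. A minimal sketch (`require_api_key` is an illustrative helper, not part of the SDK):

```python
import os

def require_api_key(env=None):
    """Return HONCHO_API_KEY from the environment, failing fast if unset."""
    env = os.environ if env is None else env
    key = env.get("HONCHO_API_KEY")
    if not key:
        raise RuntimeError("HONCHO_API_KEY is not set; add it to your environment")
    return key
```

Calling this once during client initialization surfaces a missing key immediately rather than on the first API call.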

Before You Start

  1. Check the latest SDK versions at https://docs.honcho.dev/changelog/introduction
    • Python SDK: `honcho-ai`
    • TypeScript SDK: `@honcho-ai/sdk`
  2. Get an API key: ask the user to get a Honcho API key from https://app.honcho.dev and add it to the environment.

Installation

Python (use uv)

```bash
uv add honcho-ai
```

TypeScript (use bun)

```bash
bun add @honcho-ai/sdk
```

Core Integration Patterns


1. Initialize with a Single Workspace

Use ONE workspace for your entire application. The workspace name should reflect your app/product.

Python:

```python
from honcho import Honcho
import os

honcho = Honcho(
    workspace_id="your-app-name",
    api_key=os.environ["HONCHO_API_KEY"],
    environment="production"
)
```

TypeScript:

```typescript
import { Honcho } from '@honcho-ai/sdk';

const honcho = new Honcho({
    workspaceId: "your-app-name",
    apiKey: process.env.HONCHO_API_KEY,
    environment: "production"
});
```

2. Create Peers for ALL Entities

Create peers for every entity in your business logic - users AND AI assistants.

Python:

```python
# Human users
user = honcho.peer("user-123")

# AI assistants - set observe_me=False so Honcho doesn't model the AI
assistant = honcho.peer("assistant", config={"observe_me": False})
support_bot = honcho.peer("support-bot", config={"observe_me": False})
```

TypeScript:

```typescript
// Human users
const user = await honcho.peer("user-123");

// AI assistants - set observe_me: false so Honcho doesn't model the AI
const assistant = await honcho.peer("assistant", { config: { observe_me: false } });
const supportBot = await honcho.peer("support-bot", { config: { observe_me: false } });
```

3. Multi-Peer Sessions

Sessions can have multiple participants. Configure observation settings per peer.

Python:

```python
from honcho import SessionPeerConfig

session = honcho.session("conversation-123")

# User is observed (Honcho builds a model of them)
user_config = SessionPeerConfig(observe_me=True, observe_others=True)

# AI is NOT observed (no model built of the AI)
ai_config = SessionPeerConfig(observe_me=False, observe_others=True)

session.add_peers([
    (user, user_config),
    (assistant, ai_config)
])
```

TypeScript:

```typescript
const session = await honcho.session("conversation-123");

await session.addPeers([
    [user, { observeMe: true, observeOthers: true }],
    [assistant, { observeMe: false, observeOthers: true }]
]);
```

4. Add Messages to Sessions

Python:

```python
session.add_messages([
    user.message("I'm having trouble with my account"),
    assistant.message("I'd be happy to help. What seems to be the issue?"),
    user.message("I can't reset my password")
])
```

TypeScript:

```typescript
await session.addMessages([
    user.message("I'm having trouble with my account"),
    assistant.message("I'd be happy to help. What seems to be the issue?"),
    user.message("I can't reset my password")
]);
```

Using Honcho for AI Agents


Pattern A: Dialectic Chat as a Tool Call (Recommended for Agents)

Make Honcho's chat endpoint available as a tool for your AI agent. This lets the agent query user context on-demand.

Python (OpenAI function calling):

```python
import json
import os

import openai
from honcho import Honcho

honcho = Honcho(workspace_id="my-app", api_key=os.environ["HONCHO_API_KEY"])

# Define the tool for your agent
honcho_tool = {
    "type": "function",
    "function": {
        "name": "query_user_context",
        "description": (
            "Query Honcho to retrieve relevant context about the user based on "
            "their history and preferences. Use this when you need to understand "
            "the user's background, preferences, past interactions, or goals."
        ),
        "parameters": {
            "type": "object",
            "properties": {
                "query": {
                    "type": "string",
                    "description": (
                        "A natural language question about the user, e.g. "
                        "'What are this user's main goals?' or "
                        "'What communication style does this user prefer?'"
                    )
                }
            },
            "required": ["query"]
        }
    }
}

def handle_honcho_tool_call(user_id: str, query: str) -> str:
    """Execute the Honcho chat tool call."""
    peer = honcho.peer(user_id)
    return peer.chat(query)

# Use in your agent loop
def run_agent(user_id: str, user_message: str):
    messages = [{"role": "user", "content": user_message}]

    response = openai.chat.completions.create(
        model="gpt-4",
        messages=messages,
        tools=[honcho_tool]
    )

    # Handle tool calls
    if response.choices[0].message.tool_calls:
        for tool_call in response.choices[0].message.tool_calls:
            if tool_call.function.name == "query_user_context":
                args = json.loads(tool_call.function.arguments)
                result = handle_honcho_tool_call(user_id, args["query"])
                # Continue conversation with tool result...
```

TypeScript (OpenAI function calling):

```typescript
import OpenAI from 'openai';
import { Honcho } from '@honcho-ai/sdk';

const honcho = new Honcho({
    workspaceId: "my-app",
    apiKey: process.env.HONCHO_API_KEY
});

const honchoTool: OpenAI.ChatCompletionTool = {
    type: "function",
    function: {
        name: "query_user_context",
        description: "Query Honcho to retrieve relevant context about the user based on their history and preferences.",
        parameters: {
            type: "object",
            properties: {
                query: {
                    type: "string",
                    description: "A natural language question about the user"
                }
            },
            required: ["query"]
        }
    }
};

async function handleHonchoToolCall(userId: string, query: string): Promise<string> {
    const peer = await honcho.peer(userId);
    return await peer.chat(query);
}
```
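The "continue conversation with tool result" step elided above is standard OpenAI tool-call bookkeeping: append the assistant's tool-call message, append a `role: "tool"` result keyed by `tool_call_id`, then call the model again. The pure-data part of that can be sketched as (`append_tool_result` is an illustrative helper):

```python
def append_tool_result(messages, tool_call_id, result):
    """Append a tool-result message in the OpenAI chat format."""
    messages.append({
        "role": "tool",
        "tool_call_id": tool_call_id,
        "content": result,
    })
    return messages

msgs = append_tool_result(
    [{"role": "user", "content": "Help me plan my day"}],
    tool_call_id="call_123",  # taken from tool_call.id in the agent loop
    result="User prefers morning deep-work blocks.",
)
```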

Pattern B: Pre-fetch Context with Targeted Queries

For simpler integrations, fetch user context before the LLM call using predefined queries.

Python:

```python
def get_user_context_for_prompt(user_id: str) -> dict:
    """Fetch key user attributes via targeted Honcho queries."""
    peer = honcho.peer(user_id)

    return {
        "communication_style": peer.chat("What communication style does this user prefer? Be concise."),
        "expertise_level": peer.chat("What is this user's technical expertise level? Be concise."),
        "current_goals": peer.chat("What are this user's current goals or priorities? Be concise."),
        "preferences": peer.chat("What key preferences should I know about this user? Be concise.")
    }

def build_system_prompt(user_context: dict) -> str:
    return f"""You are a helpful assistant. Here's what you know about this user:

Communication style: {user_context['communication_style']}
Expertise level: {user_context['expertise_level']}
Current goals: {user_context['current_goals']}
Key preferences: {user_context['preferences']}

Tailor your responses accordingly."""
```

TypeScript:

```typescript
async function getUserContextForPrompt(userId: string): Promise<Record<string, string>> {
    const peer = await honcho.peer(userId);

    const [style, expertise, goals, preferences] = await Promise.all([
        peer.chat("What communication style does this user prefer? Be concise."),
        peer.chat("What is this user's technical expertise level? Be concise."),
        peer.chat("What are this user's current goals or priorities? Be concise."),
        peer.chat("What key preferences should I know about this user? Be concise.")
    ]);

    return {
        communicationStyle: style,
        expertiseLevel: expertise,
        currentGoals: goals,
        preferences: preferences
    };
}
```

Pattern C: Get Context for LLM Integration

Use `context()` for conversation history with built-in LLM formatting.

Python:

```python
import openai

session = honcho.session("conversation-123")
user = honcho.peer("user-123")
assistant = honcho.peer("assistant", config={"observe_me": False})

# Get context formatted for your LLM
context = session.context(
    tokens=2000,
    peer_target=user.id,  # Include representation of this user
    summary=True          # Include conversation summaries
)

# Convert to OpenAI format
messages = context.to_openai(assistant=assistant)

# Or Anthropic format
# messages = context.to_anthropic(assistant=assistant)

# Add the new user message
messages.append({"role": "user", "content": "What should I focus on today?"})

response = openai.chat.completions.create(
    model="gpt-4",
    messages=messages
)

# Store the exchange
session.add_messages([
    user.message("What should I focus on today?"),
    assistant.message(response.choices[0].message.content)
])
```

Streaming Responses

Python:

```python
response_stream = peer.chat("What do we know about this user?", stream=True)

for chunk in response_stream.iter_text():
    print(chunk, end="", flush=True)
```
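If you need the full response text as well as live output, the chunks can be accumulated while printing. A small sketch (a plain iterator of strings stands in here for a real `iter_text()` stream):

```python
def collect_stream(chunks):
    """Print chunks as they arrive and return the assembled response."""
    parts = []
    for chunk in chunks:
        print(chunk, end="", flush=True)
        parts.append(chunk)
    return "".join(parts)

# Stand-in for response_stream.iter_text()
demo_chunks = iter(["The user ", "prefers ", "concise answers."])
full_text = collect_stream(demo_chunks)
```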

Integration Checklist

When integrating Honcho into an existing codebase:
  • Install the SDK with `uv add honcho-ai` (Python) or `bun add @honcho-ai/sdk` (TypeScript)
  • Set up the `HONCHO_API_KEY` environment variable
  • Initialize the Honcho client with a single workspace ID
  • Create peers for all entities (users AND AI assistants)
  • Set `observe_me=False` for AI peers
  • Configure sessions with appropriate peer observation settings
  • Choose an integration pattern:
    • Tool call pattern for agentic systems
    • Pre-fetch pattern for simpler integrations
    • `context()` for conversation history
  • Store messages after each exchange to build user models

Common Mistakes to Avoid

  1. Multiple workspaces: Use ONE workspace per application
  2. Forgetting AI peers: Create peers for AI assistants, not just users
  3. Observing AI peers: Set `observe_me=False` for AI peers unless you specifically want Honcho to model your AI's behavior
  4. Not storing messages: Always call `add_messages()` to feed Honcho's reasoning engine
  5. Blocking on processing: Messages are processed asynchronously; use `get_deriver_status()` if you need to wait
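For the last point, waiting is just polling until processing settles. A generic helper like the one below keeps that logic out of your handlers; the commented usage is a sketch, since the exact return shape of `get_deriver_status()` is not shown in this guide:

```python
import time

def wait_until(check, timeout=30.0, interval=0.5):
    """Poll check() until it returns True or the timeout elapses."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if check():
            return True
        time.sleep(interval)
    return False

# Hypothetical usage (assumed status shape, adjust to the real SDK):
# wait_until(lambda: honcho.get_deriver_status().pending_work == 0)
```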

Resources
