
assistant-ui Streaming

Always consult assistant-ui.com/llms.txt for the latest API. The assistant-stream package handles streaming from AI backends.

References

  • ./references/data-stream.md -- AI SDK data stream format
  • ./references/assistant-transport.md -- Native assistant-ui format
  • ./references/encoders.md -- Encoders and decoders

When to Use

Using Vercel AI SDK?
├─ Yes → toUIMessageStreamResponse() (no assistant-stream needed)
└─ No → assistant-stream for custom backends

Installation

```bash
npm install assistant-stream
```

Custom Streaming Response

```ts
import { createAssistantStreamResponse } from "assistant-stream";

export async function POST(req: Request) {
  return createAssistantStreamResponse(async (stream) => {
    stream.appendText("Hello ");
    stream.appendText("world!");

    // Tool call example
    const tool = stream.addToolCallPart({ toolCallId: "1", toolName: "get_weather" });
    tool.argsText.append('{"city":"NYC"}');
    tool.argsText.close();
    tool.setResponse({ result: { temperature: 22 } });

    stream.close();
  });
}
```

With useLocalRuntime

useLocalRuntime expects ChatModelRunResult chunks. Yield content parts for streaming:
```tsx
import { useLocalRuntime } from "@assistant-ui/react";

const runtime = useLocalRuntime({
  model: {
    async *run({ messages, abortSignal }) {
      const response = await fetch("/api/chat", {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({ messages }),
        signal: abortSignal,
      });

      const reader = response.body?.getReader();
      if (!reader) return;

      const decoder = new TextDecoder();
      let buffer = "";
      let text = "";

      while (true) {
        const { done, value } = await reader.read();
        if (done) break;

        buffer += decoder.decode(value, { stream: true });
        const parts = buffer.split("\n");
        buffer = parts.pop() ?? "";

        for (const chunk of parts.filter(Boolean)) {
          // Each yielded result replaces the message content,
          // so accumulate and yield the full text so far.
          text += chunk;
          yield { content: [{ type: "text", text }] };
        }
      }

      // Flush any trailing data left in the buffer.
      if (buffer) {
        text += buffer;
        yield { content: [{ type: "text", text }] };
      }
    },
  },
});
```
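The newline-splitting buffer used in the run() loop above can be exercised in isolation. A minimal sketch of the same technique (splitChunks is a hypothetical helper name, not part of any package):

```typescript
// Accumulate incoming text onto a carry-over buffer, split on newlines,
// and keep the trailing partial line for the next read.
function splitChunks(buffer: string, incoming: string): { lines: string[]; rest: string } {
  const parts = (buffer + incoming).split("\n");
  const rest = parts.pop() ?? "";
  return { lines: parts.filter(Boolean), rest };
}
```

Feeding it a chunk boundary that falls mid-line shows why the carry-over matters: splitChunks("par", "tial\nfull\nnext") yields the complete lines "partial" and "full" and holds "next" back until more data arrives.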

Debugging Streams

```ts
import { AssistantStream, DataStreamDecoder } from "assistant-stream";

const stream = AssistantStream.fromResponse(response, new DataStreamDecoder());
for await (const event of stream) {
  console.log("Event:", JSON.stringify(event, null, 2));
}
```

Stream Event Types

  • part-start with part.type = "text" | "reasoning" | "tool-call" | "source" | "file"
  • text-delta with streamed text
  • result with tool results
  • step-start, step-finish, message-finish
  • error strings
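A simple dispatcher over these event names can help when inspecting a stream. The union type below is hand-written to mirror the list above for illustration; the real event types ship with assistant-stream and their exact field names (part, textDelta, error) are assumptions here:

```typescript
// Minimal event shapes assumed for illustration; not the package's actual types.
type StreamEvent =
  | { type: "part-start"; part: { type: "text" | "reasoning" | "tool-call" | "source" | "file" } }
  | { type: "text-delta"; textDelta: string }
  | { type: "result"; result: unknown }
  | { type: "step-start" }
  | { type: "step-finish" }
  | { type: "message-finish" }
  | { type: "error"; error: string };

// Produce a one-line description for each event kind.
function describeEvent(event: StreamEvent): string {
  switch (event.type) {
    case "part-start":
      return `new ${event.part.type} part`;
    case "text-delta":
      return `text: ${event.textDelta}`;
    case "result":
      return "tool result received";
    case "error":
      return `error: ${event.error}`;
    default:
      return event.type; // step-start, step-finish, message-finish
  }
}
```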

Common Gotchas

Stream not updating UI
  • Check Content-Type is text/event-stream
  • Check for CORS errors
Tool calls not rendering
  • addToolCallPart needs both toolCallId and toolName
  • Register tool UI with makeAssistantToolUI
Partial text not showing
  • Use text-delta events for streaming
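For the first gotcha, createAssistantStreamResponse normally sets the response headers for you; they only need manual attention if you hand-roll a streaming Response. A minimal sketch of the headers to check (streamingHeaders is a hypothetical helper, and Cache-Control is an extra precaution, not something the source mandates):

```typescript
// Headers worth verifying when the stream is not updating the UI.
function streamingHeaders(): Headers {
  return new Headers({
    "Content-Type": "text/event-stream",
    "Cache-Control": "no-cache",
  });
}
```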