# ai-model-web


## When to use this skill

Use this skill for calling AI models in browser/web applications with `@cloudbase/js-sdk`. Use it when you need to:

- Integrate AI text generation in a frontend web app
- Stream AI responses for a better user experience
- Call Hunyuan or DeepSeek models from the browser

Do NOT use it for:

- Node.js backends or cloud functions → use the `ai-model-nodejs` skill
- WeChat Mini Programs → use the `ai-model-wechat` skill
- Image generation → use the `ai-model-nodejs` skill (Node SDK only)
- HTTP API integration → use the `http-api` skill


## Available Providers and Models

CloudBase provides these built-in providers and models:

| Provider | Models | Recommended |
| --- | --- | --- |
| `hunyuan-exp` | `hunyuan-turbos-latest`, `hunyuan-t1-latest`, `hunyuan-2.0-thinking-20251109`, `hunyuan-2.0-instruct-20251111` | `hunyuan-2.0-instruct-20251111` |
| `deepseek` | `deepseek-r1-0528`, `deepseek-v3-0324`, `deepseek-v3.2` | `deepseek-v3.2` |


## Installation

```bash
npm install @cloudbase/js-sdk
```

## Initialization

```js
import cloudbase from "@cloudbase/js-sdk";

const app = cloudbase.init({
  env: "<YOUR_ENV_ID>",
  accessKey: "<YOUR_PUBLISHABLE_KEY>", // get this from the CloudBase console
});

const auth = app.auth();
await auth.signInAnonymously();

const ai = app.ai();
```
Important notes:

- Always use synchronous initialization with a top-level import
- The user must be authenticated before using AI features
- Get the `accessKey` from the CloudBase console
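The "initialize early" and "authenticate first" notes can be combined by memoizing the sign-in promise, so concurrent callers share a single anonymous sign-in. This is a sketch, not an SDK API; `once` and `signInOnce` are hypothetical names, and the pattern works for any promise-returning function:

```js
// Sketch: run an async initializer at most once, even when several
// components call it concurrently (e.g. wrapping auth.signInAnonymously()).
function once(fn) {
  let pending = null;
  return () => {
    if (!pending) pending = fn();
    return pending;
  };
}

// Hypothetical wiring with the snippet above:
//   const signInOnce = once(() => auth.signInAnonymously());
//   await signInOnce(); // every caller awaits the same sign-in
```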


## generateText() - Non-streaming

```js
const model = ai.createModel("hunyuan-exp");

const result = await model.generateText({
  model: "hunyuan-2.0-instruct-20251111", // recommended model
  messages: [{ role: "user", content: "你好,请你介绍一下李白" }],
});

console.log(result.text);         // generated text string
console.log(result.usage);        // { prompt_tokens, completion_tokens, total_tokens }
console.log(result.messages);     // full message history
console.log(result.rawResponses); // raw model responses
```
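Because `result.messages` contains the full history including the assistant reply, multi-turn chat is a matter of threading it into the next call. A minimal sketch (`continueChat` is a hypothetical helper, not an SDK API; it works with any object exposing `generateText()`, such as `ai.createModel(...)`):

```js
// Sketch: append a user turn to an existing history, call the model,
// and return the updated history (ending with the assistant reply).
async function continueChat(model, history, userContent) {
  const result = await model.generateText({
    model: "hunyuan-2.0-instruct-20251111",
    messages: [...history, { role: "user", content: userContent }],
  });
  return result.messages;
}

// Hypothetical usage:
//   let history = [{ role: "system", content: "You are concise." }];
//   history = await continueChat(model, history, "Introduce Li Bai");
//   history = await continueChat(model, history, "Now compare him to Du Fu");
```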


## streamText() - Streaming

```js
const model = ai.createModel("hunyuan-exp");

const res = await model.streamText({
  model: "hunyuan-2.0-instruct-20251111", // recommended model
  messages: [{ role: "user", content: "你好,请你介绍一下李白" }],
});

// Option 1: iterate the text stream (recommended)
for await (const text of res.textStream) {
  console.log(text); // incremental text chunks
}

// Option 2: iterate the data stream for full response data
for await (const data of res.dataStream) {
  console.log(data); // full response chunk with metadata
}

// Option 3: await the final results
const messages = await res.messages; // full message history
const usage = await res.usage;       // token usage
```
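A common UI pattern is to render each chunk as it arrives while also keeping the full reply. The sketch below works with any `AsyncIterable<string>`, such as `res.textStream` above (`collectStream` and `onChunk` are hypothetical names, not SDK APIs):

```js
// Sketch: consume a text stream chunk by chunk, invoking a callback
// per chunk (e.g. to append to a chat bubble), and return the full text.
async function collectStream(textStream, onChunk) {
  let full = "";
  for await (const chunk of textStream) {
    full += chunk;
    if (onChunk) onChunk(chunk);
  }
  return full;
}

// Hypothetical DOM wiring:
//   const bubble = document.querySelector("#reply");
//   const full = await collectStream(res.textStream, (c) => (bubble.textContent += c));
```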


## Type Definitions

```ts
interface BaseChatModelInput {
  model: string;                        // required: model name
  messages: Array<ChatModelMessage>;    // required: message array
  temperature?: number;                 // optional: sampling temperature
  topP?: number;                        // optional: nucleus sampling
}

type ChatModelMessage =
  | { role: "user"; content: string }
  | { role: "system"; content: string }
  | { role: "assistant"; content: string };

interface GenerateTextResult {
  text: string;                          // generated text
  messages: Array<ChatModelMessage>;     // full message history
  usage: Usage;                          // token usage
  rawResponses: Array<unknown>;          // raw model responses
  error?: unknown;                       // error, if any
}

interface StreamTextResult {
  textStream: AsyncIterable<string>;     // incremental text stream
  dataStream: AsyncIterable<DataChunk>;  // full data stream
  messages: Promise<ChatModelMessage[]>; // final message history
  usage: Promise<Usage>;                 // final token usage
  error?: unknown;                       // error, if any
}

interface Usage {
  prompt_tokens: number;
  completion_tokens: number;
  total_tokens: number;
}
```
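When the message history is assembled from user input or storage, a small runtime guard mirroring the `ChatModelMessage` union can catch malformed entries before they reach the SDK. A sketch (`isChatModelMessage` is a hypothetical helper, not part of the typings above):

```js
// Sketch: runtime check matching the ChatModelMessage union type.
function isChatModelMessage(m) {
  return (
    m !== null &&
    typeof m === "object" &&
    ["user", "system", "assistant"].includes(m.role) &&
    typeof m.content === "string"
  );
}

// Hypothetical usage before a call:
//   if (!messages.every(isChatModelMessage)) throw new Error("bad history");
```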


## Best Practices

1. **Use streaming for long responses** - better user experience
2. **Handle errors gracefully** - wrap AI calls in try/catch
3. **Keep the `accessKey` secure** - use the publishable key, never the secret key
4. **Initialize early** - initialize the SDK at the app entry point
5. **Ensure authentication** - the user must be signed in before making AI calls
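Practice 2 ("handle errors gracefully") can be sketched as a thin wrapper; `safeGenerate` is a hypothetical helper, not an SDK API, and works with any object exposing `generateText()`:

```js
// Sketch: wrap an AI call in try/catch and return a fallback message
// instead of throwing, so the UI can always render something.
async function safeGenerate(model, input) {
  try {
    const result = await model.generateText(input);
    return { ok: true, text: result.text };
  } catch (err) {
    console.error("AI call failed:", err);
    return { ok: false, text: "Sorry, something went wrong. Please try again." };
  }
}
```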