ai-model-web

When to use this skill
Use this skill for calling AI models in browser/Web applications with @cloudbase/js-sdk. Use it when you need to:
- Integrate AI text generation in a frontend Web app
- Stream AI responses for better user experience
- Call Hunyuan or DeepSeek models from browser
Do NOT use for:
- Node.js backend or cloud functions → use skill ai-model-nodejs
- WeChat Mini Program → use skill ai-model-wechat
- Image generation → use skill ai-model-nodejs (Node SDK only)
- HTTP API integration → use skill http-api
Available Providers and Models
CloudBase provides these built-in providers and models:
| Provider | Models | Recommended |
|---|---|---|
| hunyuan-exp | hunyuan-2.0-instruct-20251111 | ✅ |
| DeepSeek | | ✅ |
Installation
```bash
npm install @cloudbase/js-sdk
```

Initialization
```js
import cloudbase from "@cloudbase/js-sdk";

const app = cloudbase.init({
  env: "<YOUR_ENV_ID>",
  accessKey: "<YOUR_PUBLISHABLE_KEY>" // Get from CloudBase console
});

const auth = app.auth();
await auth.signInAnonymously();

const ai = app.ai();
```

Important notes:
- Always use synchronous initialization with a top-level import
- The user must be authenticated before using AI features
- Get the accessKey from the CloudBase console
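Because "initialize early" and "authenticate first" must hold no matter which component calls the AI first, it is common to wrap initialization in a run-once helper so concurrent callers share a single sign-in. This is a sketch, not part of the SDK; `once` and `getAI` are names chosen here for illustration.

```js
// Generic "run once" helper: wraps an async init function so every
// caller shares a single in-flight (or already-completed) promise.
function once(initFn) {
  let promise = null;
  return function () {
    if (promise === null) {
      promise = initFn();
    }
    return promise;
  };
}

// Hypothetical usage with the initialization snippet above:
// const getAI = once(async () => {
//   const app = cloudbase.init({ env: "<YOUR_ENV_ID>", accessKey: "<YOUR_PUBLISHABLE_KEY>" });
//   await app.auth().signInAnonymously();
//   return app.ai();
// });
// const ai = await getAI(); // safe to call from many components
```

With this pattern, `cloudbase.init` and `signInAnonymously` run at most once even if several components request the AI instance simultaneously.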
generateText() - Non-streaming
```js
const model = ai.createModel("hunyuan-exp");

const result = await model.generateText({
  model: "hunyuan-2.0-instruct-20251111", // Recommended model
  messages: [{ role: "user", content: "你好,请你介绍一下李白" }],
});

console.log(result.text);         // Generated text string
console.log(result.usage);        // { prompt_tokens, completion_tokens, total_tokens }
console.log(result.messages);     // Full message history
console.log(result.rawResponses); // Raw model responses
```
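Since `result.messages` contains the full history including the assistant's reply, multi-turn chat is just a matter of feeding it back in on the next call. A minimal sketch, assuming only the `generateText({ model, messages })` shape shown above; `chatTurn` is a hypothetical helper, not an SDK function.

```js
// Hypothetical multi-turn helper: `model` is any object exposing
// generateText({ model, messages }), e.g. ai.createModel("hunyuan-exp").
async function chatTurn(model, modelName, history, userText) {
  const messages = [...history, { role: "user", content: userText }];
  const result = await model.generateText({ model: modelName, messages });
  // result.messages already includes the assistant reply, so it can be
  // passed straight back in as the history for the next turn.
  return { reply: result.text, history: result.messages };
}
```

Each turn returns the updated history, so the caller only needs to keep one array between requests.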
streamText() - Streaming
```js
const model = ai.createModel("hunyuan-exp");

const res = await model.streamText({
  model: "hunyuan-2.0-instruct-20251111", // Recommended model
  messages: [{ role: "user", content: "你好,请你介绍一下李白" }],
});

// Option 1: Iterate the text stream (recommended)
for await (let text of res.textStream) {
  console.log(text); // Incremental text chunks
}

// Option 2: Iterate the data stream for full response data
for await (let data of res.dataStream) {
  console.log(data); // Full response chunk with metadata
}

// Option 3: Await the final results
const messages = await res.messages; // Full message history
const usage = await res.usage;       // Token usage
```
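In a UI, Option 1 is usually wired up by accumulating chunks and re-rendering on each one. A sketch, assuming only that `textStream` is an `AsyncIterable<string>` as documented above; `renderStream` and `onUpdate` are illustrative names, not SDK APIs.

```js
// Hypothetical UI hook-up: drain `textStream`, calling `onUpdate` with the
// text accumulated so far (e.g. to assign element.textContent each chunk).
async function renderStream(textStream, onUpdate) {
  let full = "";
  for await (const chunk of textStream) {
    full += chunk;
    onUpdate(full);
  }
  return full; // complete text once the stream ends
}
```

For example, `renderStream(res.textStream, (t) => outputEl.textContent = t)` gives the user a progressively growing answer instead of a long blank wait.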
Type Definitions
```ts
interface BaseChatModelInput {
  model: string;                     // Required: model name
  messages: Array<ChatModelMessage>; // Required: message array
  temperature?: number;              // Optional: sampling temperature
  topP?: number;                     // Optional: nucleus sampling
}

type ChatModelMessage =
  | { role: "user"; content: string }
  | { role: "system"; content: string }
  | { role: "assistant"; content: string };

interface GenerateTextResult {
  text: string;                      // Generated text
  messages: Array<ChatModelMessage>; // Full message history
  usage: Usage;                      // Token usage
  rawResponses: Array<unknown>;      // Raw model responses
  error?: unknown;                   // Error if any
}

interface StreamTextResult {
  textStream: AsyncIterable<string>;     // Incremental text stream
  dataStream: AsyncIterable<DataChunk>;  // Full data stream
  messages: Promise<ChatModelMessage[]>; // Final message history
  usage: Promise<Usage>;                 // Final token usage
  error?: unknown;                       // Error if any
}

interface Usage {
  prompt_tokens: number;
  completion_tokens: number;
  total_tokens: number;
}
```
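The `ChatModelMessage` union admits exactly three roles, each with string content. When message arrays are built from user input at runtime, a small guard mirroring that union can catch malformed messages before they are sent. This is a sketch, not part of the SDK; `isChatModelMessage` is a name chosen here.

```js
// Runtime guard mirroring the ChatModelMessage union above: one of the
// three documented roles, with string content.
function isChatModelMessage(msg) {
  return (
    msg !== null &&
    typeof msg === "object" &&
    ["user", "system", "assistant"].includes(msg.role) &&
    typeof msg.content === "string"
  );
}
```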
Best Practices
- Use streaming for long responses - better user experience
- Handle errors gracefully - wrap AI calls in try/catch
- Keep accessKey secure - use the publishable key, never the secret key
- Initialize early - initialize the SDK at the app entry point
- Ensure authentication - the user must be signed in before AI calls
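The "handle errors gracefully" advice can look like this in practice. A sketch under the assumptions of this document (a `model` object with `generateText`); `safeGenerate` and the fallback string are illustrative, not SDK APIs.

```js
// Hypothetical wrapper: never lets a failed AI call (network, auth,
// quota) crash the rendering code; returns a fallback string instead.
async function safeGenerate(model, input, fallback = "Sorry, please try again.") {
  try {
    const result = await model.generateText(input);
    return result.text;
  } catch (err) {
    console.error("AI call failed:", err);
    return fallback;
  }
}
```

The UI can then always render whatever `safeGenerate` returns, keeping error handling in one place instead of at every call site.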