prompt-api

Prompt API

Procedures

Step 1: Identify the integration surface
  1. Inspect the workspace for browser entry points, UI handlers, and any existing AI abstraction layer.
  2. When a Node runtime is available, run `node scripts/find-frontend-targets.mjs .` to inventory likely frontend files and existing Prompt API usage.
  3. If a Node runtime is unavailable, inspect the nearest `package.json`, HTML entry point, and framework entry files manually to identify the browser app boundary.
  4. If the workspace contains multiple frontend apps, prefer the app that contains the active route, component, or user-requested feature surface.
  5. If the inventory still leaves multiple plausible frontend targets, stop and ask the user which app should receive the Prompt API integration.
  6. If the project is not a browser web app, stop and explain that this skill does not apply.
Step 2: Confirm Prompt API viability
  1. Read `references/prompt-api-reference.md` before writing code.
  2. Read `references/examples.md` when the feature needs a spec-valid message shape for text, multimodal, prefix, or tool-enabled sessions.
  3. Read `references/compatibility.md` when the feature must support multiple browser generations or decide between native support and polyfills.
  4. Read `references/polyfills.md` when the feature needs concrete package installation or backend configuration examples for Prompt API or Task API polyfills.
  5. Verify that the feature runs in a secure window context and that the `language-model` permissions policy allows access from the current frame.
  6. If the integration must run in a Web Worker or other non-window context, stop and explain the platform limitation.
  7. Choose the session shape the feature needs: `prompt()`, `promptStreaming()`, `initialPrompts`, `append()`, `measureContextUsage()`, `tools`, or `responseConstraint`.
  8. If the project uses TypeScript, add or preserve typings that cover the Prompt API surface used by the project.
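The viability check in this step can be sketched as a small pure mapping the app branches on before any session exists. This is a hedged sketch: the four availability values and the minimal `LanguageModel` typing follow the built-in AI explainer and are assumptions, not guaranteed shapes; prefer the typings described in `references/prompt-api-reference.md` when they differ.

```typescript
// Minimal typings for the surface this step relies on (assumption: the
// workspace has no richer typings yet).
type Availability = "unavailable" | "downloadable" | "downloading" | "available";

declare const LanguageModel: {
  availability(options?: Record<string, unknown>): Promise<Availability>;
};

// Pure mapping from availability to a coarse capability state the app can
// branch on before creating any session.
function toCapabilityState(
  status: Availability
): "ai-off" | "ai-needs-download" | "ai-ready" {
  switch (status) {
    case "available":
      return "ai-ready";
    case "downloadable":
    case "downloading":
      return "ai-needs-download";
    default:
      return "ai-off";
  }
}
```

In the browser this would be driven by `toCapabilityState(await LanguageModel.availability(options))`, passing the same options the feature will later give `create()`.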
Step 3: Implement a guarded session wrapper
  1. Read `assets/language-model-service.template.ts` and adapt it to the framework, state model, and file layout in the workspace.
  2. Gate session creation behind `LanguageModel.availability()`, passing the same creation options that the feature will use at runtime, including expected modalities and tools.
  3. Create sessions only after user activation when model download or instantiation may begin.
  4. Use `AbortController` for cancelable prompts, and call `destroy()` when the session is no longer needed.
  5. If the feature runs in a cross-origin iframe, require `allow="language-model"` on the embedding iframe.
  6. Do not depend on `params()`, `topK`, or `temperature`; current integrations must work without them.
  7. Treat `availability()` as a passive capability check: if it reports `downloading` before user activation, do not assume the current page initiated that download, and do not lock the UI into an app-started busy state.
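The guarded wrapper described above can be sketched roughly as follows. This is a framework-neutral sketch, not the template in `assets/language-model-service.template.ts`: the `SessionLike` and `ModelLike` interfaces are local assumptions covering only the calls the sketch uses, and the model is injected so the wrapper stays testable outside a browser.

```typescript
interface SessionLike {
  prompt(input: string, options?: { signal?: AbortSignal }): Promise<string>;
  destroy(): void;
}

interface ModelLike {
  availability(options?: unknown): Promise<string>;
  create(options?: unknown): Promise<SessionLike>;
}

class GuardedSession {
  private session: SessionLike | null = null;
  private controller = new AbortController();

  // Keep one set of creation options so availability() and create() always
  // see the same modalities and tools.
  constructor(private model: ModelLike, private createOptions: unknown) {}

  // Gate creation behind availability(); call this only after user activation.
  async ensureSession(): Promise<SessionLike> {
    if (this.session) return this.session;
    const status = await this.model.availability(this.createOptions);
    if (status === "unavailable") {
      throw new Error("Language model is unavailable on this device");
    }
    this.session = await this.model.create(this.createOptions);
    return this.session;
  }

  async ask(text: string): Promise<string> {
    const session = await this.ensureSession();
    return session.prompt(text, { signal: this.controller.signal });
  }

  // Cancel any in-flight prompt and release the session; the wrapper is
  // not reusable after this.
  dispose(): void {
    this.controller.abort();
    this.session?.destroy();
    this.session = null;
  }
}
```

In the browser the real global would be passed in, e.g. `new GuardedSession(LanguageModel, { expectedInputs: [...] })`, with the options adapted to the feature.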
Step 4: Wire UX and fallback behavior
  1. Surface distinct states for unavailable devices, model download, ready sessions, and in-flight prompts.
  2. If download progress matters to the feature, attach a `monitor` listener during `LanguageModel.create()` and render progress in the UI.
  3. Keep a non-AI fallback for unsupported browsers, unsupported devices, or blocked iframe contexts.
  4. If the feature needs structured output, pass a JSON Schema through `responseConstraint`, use `omitResponseConstraintInput` only when the prompt already carries the required format instructions, and parse the returned string before using it.
  5. Respect prompt-shape validation rules: `system` messages belong in `initialPrompts`, `prefix: true` applies only to the final `assistant` message, and `assistant` message content must remain text-only.
  6. If `availability()` reports `downloading` before the app has called `create()`, present that as informational browser state rather than a page-owned active download, and keep controls usable unless the app itself is busy.
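The structured-output rule in item 4 can be sketched as below. The schema and the `parseSentiment` helper are hypothetical examples introduced here for illustration; the key point from the step is that even under `responseConstraint` the model returns a string, so the app must parse and validate it before use.

```typescript
interface SentimentResult {
  sentiment: "positive" | "negative" | "neutral";
}

// A hypothetical JSON Schema passed through responseConstraint.
const sentimentSchema = {
  type: "object",
  properties: {
    sentiment: { type: "string", enum: ["positive", "negative", "neutral"] },
  },
  required: ["sentiment"],
} as const;

// Parse and validate the raw string before trusting it; constrained output
// is still just text until the app checks it.
function parseSentiment(raw: string): SentimentResult {
  const data = JSON.parse(raw);
  if (!["positive", "negative", "neutral"].includes(data?.sentiment)) {
    throw new Error("Model returned a value outside the schema enum");
  }
  return data as SentimentResult;
}

// Usage in the browser (sketch, assuming a ready session):
// const raw = await session.prompt("Classify: 'I love this!'",
//   { responseConstraint: sentimentSchema });
// const { sentiment } = parseSentiment(raw);
```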
Step 5: Validate behavior
  1. Test short responses with `prompt()` and long responses with `promptStreaming()` when applicable.
  2. Verify that repeated prompts reuse context intentionally, that destroyed sessions are not reused, and that the app uses compatibility checks for context measurement and overflow handling across browser versions.
  3. Read `references/troubleshooting.md` if the integration throws `NotSupportedError` or behaves differently across frames or execution contexts.
  4. Run the workspace build, typecheck, or tests after editing.
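When exercising the streaming path from item 1, a small helper that drains the stream into one string keeps long-response checks easy to assert on. This sketch assumes `promptStreaming()` returns a `ReadableStream` of text chunks; verify the exact chunk semantics against `references/prompt-api-reference.md` for the targeted browser versions.

```typescript
// Drain a ReadableStream of text chunks (the shape promptStreaming()
// returns) into a single string for assertions.
async function collectStream(stream: ReadableStream<string>): Promise<string> {
  const reader = stream.getReader();
  let out = "";
  for (;;) {
    const { done, value } = await reader.read();
    if (done) return out;
    out += value;
  }
}

// Usage in a test (sketch, assuming a ready session):
// const text = await collectStream(session.promptStreaming("Summarize..."));
```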

Error Handling

  • If `LanguageModel` is missing, prefer progressive enhancement with a maintained Prompt API polyfill or a non-AI fallback instead of inventing a custom compatibility layer.
  • If `availability()` returns `downloading` before the app has called `create()`, treat it as passive browser state. Only surface live progress and block prompt submission when the app itself has started `LanguageModel.create()`.
  • If `availability()` or `prompt()` throws `NotSupportedError`, align the creation and prompt options with the actual modalities, languages, message roles, and tools used by the feature.
  • If the feature must run in Web Workers, redirect the integration to a window context because the Prompt API is not available in workers.
  • If the feature lives in a cross-origin iframe, require `allow="language-model"` from the embedding page before continuing.
  • If `node scripts/find-frontend-targets.mjs .` cannot run, identify the browser app boundary manually and continue only after a single target app is clear.
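The first bullet's detection step can be sketched as follows; the global object is injected only so the check is testable outside a browser, and the fallback branches are placeholders for whatever this workspace already provides.

```typescript
// Detect the Prompt API global before wiring any AI code path.
function hasPromptApi(globalObj: object): boolean {
  return "LanguageModel" in globalObj;
}

// Usage in the browser (sketch):
// if (hasPromptApi(globalThis)) {
//   // wire the AI-enhanced path
// } else {
//   // load a maintained polyfill or keep the non-AI fallback
// }
```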