WebNN
Procedures
Step 1: Identify the browser integration surface
- Inspect the workspace for browser entry points, UI handlers, worker entry files, and any existing model-loading or inference abstraction layer.
- Execute `node scripts/find-webnn-targets.mjs .` to inventory likely frontend files and existing WebNN markers when a Node runtime is available.
- If a Node runtime is unavailable, inspect the nearest `package.json`, HTML entry point, framework bootstrap files, and worker entry files manually to identify the browser app boundary.
- If the workspace contains multiple frontend apps, prefer the app that contains the active route, component, or user-requested feature surface.
- If the inventory still leaves multiple plausible frontend targets, stop and ask which app should receive the WebNN integration.
- If the project is not a browser web app, stop and explain that this skill does not apply.
Step 2: Confirm WebNN viability and choose the runtime shape
- Read `references/webnn-reference.md` before writing code.
- Read `references/examples.md` when choosing between a direct WebNN graph flow and an adapter around an existing browser ML runtime.
- Read `references/compatibility.md` when native support, preview flags, device behavior, or backend differences matter.
- Read `references/troubleshooting.md` when context creation, graph build, tensor readback, or device selection fails.
- Verify that the feature runs in a secure context and in a `Window` or `DedicatedWorker` context.
- If the feature must run on the server, train models, or depend on cloud inference, stop and explain the platform mismatch.
- Choose device intent deliberately: use `powerPreference: "high-performance"` for throughput, `powerPreference: "low-power"` for power-efficient acceleration, or `accelerated: false` to prefer CPU inference for maximum reach.
- Treat `accelerated` and `powerPreference` as preferences, not guarantees. Browser backends can still partition graphs or fall back per operator.
- Choose a direct `MLGraphBuilder` flow when the application owns graph construction or can keep a small deterministic graph path.
- Choose an adapter around an existing local runtime only when the application already loads models through that runtime and the task is to prefer WebNN acceleration without rewriting the full inference stack.
- If the project uses TypeScript, add or preserve typings for the WebNN surface used by the project.
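The device-intent guidance above can be sketched as a small mapping from a product-level intent to WebNN context options. This is a hypothetical helper, not part of the WebNN API: the `DeviceIntent` names and the `contextOptionsFor` function are illustrative, while the `powerPreference` and `accelerated` option names follow the guidance in this step.

```typescript
// Hypothetical helper: map a product-level intent to WebNN context options.
// The browser treats these as preferences, not guarantees.
type DeviceIntent = "max-performance" | "power-efficient" | "max-reach";

interface ContextOptions {
  powerPreference?: "high-performance" | "low-power";
  accelerated?: boolean;
}

function contextOptionsFor(intent: DeviceIntent): ContextOptions {
  switch (intent) {
    case "max-performance":
      // Throughput first: ask for the fastest available backend.
      return { powerPreference: "high-performance" };
    case "power-efficient":
      // Prefer accelerated but battery-friendly execution.
      return { powerPreference: "low-power" };
    case "max-reach":
      // Widest coverage: prefer CPU inference.
      return { accelerated: false };
  }
}
```

At the integration boundary this would feed context creation, for example `navigator.ml.createContext(contextOptionsFor("max-performance"))`, with the intent chosen once by the product rather than per call site.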
Step 3: Implement a guarded runtime adapter
- Read `assets/webnn-runtime.template.ts` and adapt it to the framework, state model, and file layout in the workspace.
- Centralize support detection around `navigator.ml`, `window.isSecureContext`, and the requested execution context instead of scattering checks through UI components.
- Create an `MLContext` only at the boundary where the app is ready to initialize local inference.
- Pass explicit `accelerated` and `powerPreference` values when the product has a real preference, and omit tuning that the product cannot justify.
- Build the graph through `MLGraphBuilder` when the feature uses direct WebNN operations, or route existing model execution through the app's existing local runtime adapter when that runtime is already responsible for model loading and pre/post-processing.
- Reuse the compiled graph and reusable tensors when input and output shapes stay stable across requests.
- Use `context.writeTensor()`, `context.dispatch()`, and `await context.readTensor()` in that order for direct graph execution.
- Observe `context.lost` and rebuild the context, graph, and tensors if the browser invalidates the execution state.
- Destroy tensors, graphs, and contexts when the feature is disposed or the route no longer needs them.
Step 4: Wire UX and fallback behavior
- Surface distinct states for unsupported browsers, secure-context failures, runtime preparation, ready native execution, and explicit fallback execution.
- Keep a non-WebNN path for unsupported browsers or unsupported devices when the feature must remain available.
- Keep the fallback explicit and product-approved. Do not silently swap in a remote model provider when the feature is supposed to stay local.
- Present device choice as an intent, not a promise that every operator will execute on that device.
- Move long-running model preparation or repeated inference off the main thread when the application already uses a worker-friendly architecture.
- Keep all user data handling consistent with the product's local-processing promises and privacy requirements.
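The distinct states listed above can be modeled as a discriminated union so the UI cannot render an ambiguous in-between state. The state names and labels below are illustrative sketches, not part of the WebNN API or any product copy.

```typescript
// Illustrative UX state model for the five states described above.
type InferenceUiState =
  | { kind: "unsupported-browser" }
  | { kind: "insecure-context" }
  | { kind: "preparing" }                 // model/context warm-up
  | { kind: "ready-native" }              // WebNN path active
  | { kind: "fallback"; reason: string }; // explicit, product-approved fallback

function statusLabel(state: InferenceUiState): string {
  switch (state.kind) {
    case "unsupported-browser": return "On-device AI is not supported in this browser.";
    case "insecure-context": return "On-device AI requires a secure (HTTPS) page.";
    case "preparing": return "Preparing on-device model...";
    case "ready-native": return "Running on-device via WebNN.";
    case "fallback": return `Running without WebNN: ${state.reason}`;
  }
}
```

Carrying a `reason` on the fallback state keeps the fallback explicit and reviewable, rather than a silent swap.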
Step 5: Validate behavior
- Execute `node scripts/find-webnn-targets.mjs .` to confirm that the intended app boundary and WebNN markers still resolve to the edited integration surface.
- Verify secure-context and `navigator.ml` detection before debugging deeper runtime issues.
- For direct WebNN paths, run a smoke test that creates a context, builds a trivial graph, writes inputs, dispatches, and reads outputs.
- Test the intended `accelerated` and `powerPreference` settings and confirm that fallback behavior remains usable when an accelerated context cannot be created.
- Use `context.opSupportLimits()` when operator coverage or tensor data type support influences graph design.
- Confirm the app does not reuse destroyed tensors, graphs, or contexts.
- If the target environment depends on preview Chromium flags or milestone-specific behavior, confirm the required browser state from `references/compatibility.md` before treating runtime failures as application bugs.
- Run the workspace build, typecheck, or tests after editing.
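The smoke test above can be sketched end to end. This is a hedged sketch, not a definitive implementation: the tensor descriptor fields (`dataType`, `shape`, `readable`, `writable`) and the `MLGraphBuilder` constructor follow the current WebNN draft and should be verified against `references/webnn-reference.md`. The function is guarded so it resolves `false` rather than throwing when WebNN is unavailable.

```typescript
// Smoke test sketch: trivial add graph, write -> dispatch -> read.
// Resolves true only when a real WebNN backend computed the expected sums.
async function webnnSmokeTest(): Promise<boolean> {
  const g = globalThis as any;
  const ml = g.navigator?.ml;
  if (!g.isSecureContext || !ml || !g.MLGraphBuilder) return false;
  try {
    const context = await ml.createContext();
    const builder = new g.MLGraphBuilder(context);
    const desc = { dataType: "float32", shape: [2] };
    // Trivial graph: out = a + b
    const a = builder.input("a", desc);
    const b = builder.input("b", desc);
    const graph = await builder.build({ out: builder.add(a, b) });
    const aT = await context.createTensor({ ...desc, writable: true });
    const bT = await context.createTensor({ ...desc, writable: true });
    const outT = await context.createTensor({ ...desc, readable: true });
    context.writeTensor(aT, new Float32Array([1, 2]));
    context.writeTensor(bT, new Float32Array([3, 4]));
    context.dispatch(graph, { a: aT, b: bT }, { out: outT });
    const result = new Float32Array(await context.readTensor(outT));
    return result[0] === 4 && result[1] === 6;
  } catch {
    return false; // a real app should surface the error; this is only a probe
  }
}
```

Running this once at startup of the direct WebNN path gives a cheap go/no-go signal before loading real model weights.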
Error Handling
- If `navigator.ml` is missing, confirm secure-context requirements and browser support from `references/compatibility.md` before changing application code.
- If `createContext()` fails for an accelerated or high-performance request, retry only through the product's approved fallback plan and surface the failure reason.
- If `build()` or `dispatch()` fails, check `references/examples.md` and `references/troubleshooting.md` for operator, shape, and device mismatches before rewriting the feature.
- If `context.lost` resolves, treat the current context, graph, and tensors as invalid and recreate them before the next inference attempt.
- If the product only has a remote inference contract, stop and explain that this skill does not directly apply.
- 如果不存在,修改应用代码前先从
navigator.ml确认安全上下文要求和浏览器支持情况。references/compatibility.md - 如果加速或高性能请求的失败,仅按照产品批准的回退方案重试,并暴露失败原因。
createContext() - 如果或
build()失败,重写功能前先查阅dispatch()和references/examples.md排查算子、形状和设备不匹配问题。references/troubleshooting.md - 如果触发,将当前上下文、图和张量视为无效,下次推理尝试前重新创建。
context.lost - 如果产品只有远程推理协议,暂停操作并说明本技能不直接适用。