Web Audio API Skill

1. Overview

This skill provides Web Audio API expertise for creating audio feedback, voice processing, and sound effects in the JARVIS AI Assistant.
Risk Level: LOW - Audio processing with minimal security surface
Primary Use Cases:
  • HUD audio feedback (beeps, alerts)
  • Voice input processing
  • Spatial audio for 3D HUD elements
  • Real-time audio visualization
  • Text-to-speech integration

2. Core Responsibilities

2.1 Fundamental Principles

  1. TDD First: Write tests before implementation for all audio components
  2. Performance Aware: Optimize for 60fps with minimal audio latency
  3. User Gesture Required: Audio context must be started after user interaction
  4. Resource Cleanup: Close audio contexts and disconnect nodes on unmount
  5. AudioWorklet for Processing: Use AudioWorklet for heavy DSP operations
  6. Accessibility: Provide visual alternatives to audio feedback
  7. Volume Control: Respect system and user volume preferences
  8. Error Handling: Gracefully handle audio permission denials

3. Technology Stack & Versions

3.1 Browser Support

| Browser | AudioContext | AudioWorklet |
|---------|--------------|--------------|
| Chrome  | 35+          | 66+          |
| Firefox | 25+          | 76+          |
| Safari  | 14.1+        | 14.1+        |
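
Given the version spread above, support can be probed before constructing a context. A minimal sketch; `detectAudioSupport` is a hypothetical name, and the global object is injected so the check stays testable outside a browser (in the app you would pass `window`):

```typescript
// Hypothetical feature check; `g` is the global object (pass `window` in the app).
export function detectAudioSupport(g: Record<string, unknown>) {
  // Prefer the standard constructor, fall back to the legacy WebKit prefix.
  const Ctor = (g.AudioContext ?? g.webkitAudioContext) as Function | undefined
  const proto = typeof Ctor === 'function' ? Ctor.prototype : undefined
  return {
    audioContext: typeof Ctor === 'function',
    // audioWorklet surfaces as an accessor on BaseAudioContext.prototype,
    // so `in` finds it through the prototype chain without constructing a context.
    audioWorklet: proto != null && 'audioWorklet' in proto,
  }
}
```

A component can then fall back to visual-only feedback when `audioWorklet` is false rather than failing at `addModule` time.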

3.2 TypeScript Types

typescript
// types/audio.ts
interface AudioFeedbackOptions {
  frequency: number
  duration: number
  type: OscillatorType
  volume: number
}

interface SpatialAudioPosition {
  x: number
  y: number
  z: number
}

4. Implementation Patterns

4.1 Audio Context Management

typescript
// composables/useAudioContext.ts
export function useAudioContext() {
  const audioContext = ref<AudioContext | null>(null)
  const isInitialized = ref(false)

  async function initialize() {
    if (audioContext.value) return
    audioContext.value = new AudioContext()
    if (audioContext.value.state === 'suspended') await audioContext.value.resume()
    isInitialized.value = true
  }

  onUnmounted(() => {
    audioContext.value?.close()
    audioContext.value = null
  })

  return { audioContext: readonly(audioContext), isInitialized: readonly(isInitialized), initialize }
}

4.2 HUD Beep Feedback

typescript
// composables/useHUDSounds.ts
export function useHUDSounds() {
  const { audioContext, initialize } = useAudioContext()

  async function playBeep(options: Partial<AudioFeedbackOptions> = {}) {
    await initialize()
    const ctx = audioContext.value
    if (!ctx) return

    const { frequency = 440, duration = 0.1, type = 'sine', volume = 0.3 } = options
    const safeVolume = Math.max(0, Math.min(1, volume))

    const oscillator = ctx.createOscillator()
    const gainNode = ctx.createGain()
    oscillator.type = type
    oscillator.frequency.value = frequency
    gainNode.gain.value = safeVolume
    gainNode.gain.setValueAtTime(safeVolume, ctx.currentTime) // schedule an event so the ramp has a defined start point
    gainNode.gain.exponentialRampToValueAtTime(0.001, ctx.currentTime + duration)

    oscillator.connect(gainNode).connect(ctx.destination)
    oscillator.start()
    oscillator.stop(ctx.currentTime + duration)
  }

  const sounds = {
    confirm: () => playBeep({ frequency: 880, duration: 0.1, volume: 0.2 }),
    alert: () => playBeep({ frequency: 440, duration: 0.3, type: 'square', volume: 0.4 }),
    error: () => playBeep({ frequency: 220, duration: 0.5, type: 'sawtooth', volume: 0.3 }),
    click: () => playBeep({ frequency: 1000, duration: 0.05, volume: 0.1 })
  }
  return { playBeep, sounds }
}
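
Rapid UI events (a hover beep immediately followed by a click beep) can stack unpleasantly. One hedged addition on top of `useHUDSounds` is a small throttle; `makeSoundThrottle` is a hypothetical helper, with the clock injectable so it can be unit-tested:

```typescript
// Hypothetical throttle: returns a predicate that allows at most one sound
// per `minIntervalMs`. `now` defaults to Date.now and is injectable for tests.
export function makeSoundThrottle(minIntervalMs: number, now: () => number = Date.now) {
  let last = -Infinity
  return function shouldPlay(): boolean {
    const t = now()
    if (t - last < minIntervalMs) return false // too soon after the last sound
    last = t
    return true
  }
}
```

Inside `playBeep` this would guard the oscillator setup with an early `if (!shouldPlay()) return`.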

4.3 Audio Visualization

typescript
// composables/useAudioVisualization.ts
export function useAudioVisualization() {
  const { audioContext, initialize } = useAudioContext()
  let analyser: AnalyserNode | null = null
  let dataArray: Uint8Array | null = null

  async function setupAnalyser(source: AudioNode) {
    await initialize()
    const ctx = audioContext.value
    if (!ctx) return
    analyser = ctx.createAnalyser()
    analyser.fftSize = 256
    dataArray = new Uint8Array(analyser.frequencyBinCount)
    source.connect(analyser)
  }

  function getFrequencyData(): Uint8Array | null {
    if (!analyser || !dataArray) return null
    analyser.getByteFrequencyData(dataArray)
    return dataArray
  }

  return { setupAnalyser, getFrequencyData }
}
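
To drive a HUD element from `getFrequencyData()`, the byte bins (0-255 each) are typically collapsed into a single normalized level once per animation frame. A sketch; `averageLevel` is an assumed name:

```typescript
// Hypothetical reducer: mean of the byte frequency bins, normalized to 0-1.
export function averageLevel(data: Uint8Array): number {
  if (data.length === 0) return 0
  let sum = 0
  for (const v of data) sum += v
  return sum / data.length / 255 // bins are 0-255, so divide by 255 to normalize
}
```

A `requestAnimationFrame` loop can then scale a HUD ring by `1 + averageLevel(getFrequencyData() ?? new Uint8Array())`.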

4.4 Spatial Audio for 3D HUD

typescript
// composables/useSpatialAudio.ts
export function useSpatialAudio() {
  const { audioContext, initialize } = useAudioContext()
  let panner: PannerNode | null = null

  async function createSpatialSource(position: SpatialAudioPosition) {
    await initialize()
    const ctx = audioContext.value
    if (!ctx) return null
    panner = ctx.createPanner()
    panner.panningModel = 'HRTF'
    panner.distanceModel = 'inverse'
    setPosition(position)
    return panner
  }

  function setPosition(pos: SpatialAudioPosition) {
    if (!panner) return
    panner.positionX.value = pos.x
    panner.positionY.value = pos.y
    panner.positionZ.value = pos.z
  }

  return { createSpatialSource, setPosition }
}
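
Panner coordinates are right-handed: x right, y up, and negative z in front of the default listener. A hedged sketch of mapping normalized HUD coordinates (0-1, origin top-left) into that space; the function name and the fixed `depth` default are assumptions:

```typescript
interface SpatialAudioPosition { x: number; y: number; z: number }

// Hypothetical mapping: HUD (0,0)=top-left, (1,1)=bottom-right -> audio space
// centered on the listener, with the sound placed `depth` units in front.
export function hudToAudioPosition(nx: number, ny: number, depth = -1): SpatialAudioPosition {
  return {
    x: nx * 2 - 1,       // 0..1 -> -1..1, left to right
    y: (1 - ny) * 2 - 1, // flip: screen y grows downward, audio y grows upward
    z: depth,            // negative z is in front of the default listener
  }
}
```

The result feeds `setPosition` from the composable above whenever a HUD element moves.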

4.5 Microphone Input

typescript
// composables/useMicrophone.ts
export function useMicrophone() {
  const { audioContext, initialize } = useAudioContext()
  const stream = ref<MediaStream | null>(null)
  const isListening = ref(false)
  const error = ref<string | null>(null)

  async function startListening() {
    try {
      await initialize()
      stream.value = await navigator.mediaDevices.getUserMedia({
        audio: { echoCancellation: true, noiseSuppression: true, autoGainControl: true }
      })
      isListening.value = true
      return stream.value
    } catch (err) {
      error.value = err instanceof Error ? err.message : 'Microphone access denied'
      return null
    }
  }

  function stopListening() {
    stream.value?.getTracks().forEach(track => track.stop())
    stream.value = null
    isListening.value = false
  }

  onUnmounted(() => stopListening())

  return { stream: readonly(stream), isListening: readonly(isListening), error: readonly(error), startListening, stopListening }
}
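
The stream from `useMicrophone` can feed the analyser from section 4.3 via `createMediaStreamSource`. A sketch; the context parameter is typed structurally (an assumption made so the wiring can be exercised with stubs), but the call names are the real Web Audio API:

```typescript
// Structural type standing in for AudioContext, so the helper is testable
// without a browser; pass the real AudioContext in the app.
interface StreamSourceHost<S> {
  createMediaStreamSource(stream: S): { connect(dest: unknown): unknown }
}

export function connectMicToAnalyser<S>(ctx: StreamSourceHost<S>, stream: S, analyser: unknown) {
  const source = ctx.createMediaStreamSource(stream)
  // Mic -> analyser only; deliberately NOT connected to ctx.destination,
  // which would feed the microphone straight back out of the speakers.
  source.connect(analyser)
  return source
}
```

Keeping the graph mic-to-analyser-only is what makes live visualization safe without monitoring feedback.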

5. Implementation Workflow (TDD)

Step 1: Write Failing Test First

typescript
// tests/composables/useHUDSounds.test.ts
import { describe, it, expect, vi, beforeEach } from 'vitest'
import { useHUDSounds } from '~/composables/useHUDSounds'

// Mock AudioContext nodes (connect returns its argument so chained connects work)
const mockOscillator = { connect: vi.fn((node: unknown) => node), start: vi.fn(), stop: vi.fn(), type: 'sine', frequency: { value: 440 } }
const mockGainNode = { connect: vi.fn((node: unknown) => node), gain: { value: 1, setValueAtTime: vi.fn(), exponentialRampToValueAtTime: vi.fn() } }
const mockAudioContext = {
  state: 'running', currentTime: 0, destination: {},
  createOscillator: vi.fn(() => mockOscillator),
  createGain: vi.fn(() => mockGainNode),
  resume: vi.fn(), close: vi.fn()
}
vi.stubGlobal('AudioContext', vi.fn(() => mockAudioContext))

describe('useHUDSounds', () => {
  beforeEach(() => vi.clearAllMocks())

  it('creates oscillator with correct frequency', async () => {
    const { playBeep } = useHUDSounds()
    await playBeep({ frequency: 880 })
    expect(mockOscillator.frequency.value).toBe(880)
  })

  it('clamps volume to valid range', async () => {
    const { playBeep } = useHUDSounds()
    await playBeep({ volume: 2.5 })
    expect(mockGainNode.gain.value).toBeLessThanOrEqual(1)
  })

  it('connects nodes in correct order', async () => {
    const { playBeep } = useHUDSounds()
    await playBeep()
    expect(mockOscillator.connect).toHaveBeenCalledWith(mockGainNode)
    expect(mockGainNode.connect).toHaveBeenCalledWith(mockAudioContext.destination)
  })
})

Step 2: Implement Minimum to Pass

typescript
// composables/useHUDSounds.ts
export function useHUDSounds() {
  // Implementation from section 4.2
  // Only add features that tests require
}

Step 3: Refactor Following Patterns

After tests pass, refactor to:
  • Extract shared audio context logic
  • Add proper TypeScript types
  • Implement cleanup on unmount

Step 4: Run Full Verification

bash
# Run all audio-related tests
npm test -- --grep "audio|sound|HUD"

# Check types
npm run typecheck

# Verify no memory leaks in the browser (DevTools Memory tab)
npm run dev

6. Performance Patterns

6.1 AudioWorklet for Processing

typescript
// ✅ Good: Use AudioWorklet for DSP (runs on audio thread)
class NoiseGateProcessor extends AudioWorkletProcessor {
  process(inputs: Float32Array[][], outputs: Float32Array[][]) {
    for (let ch = 0; ch < inputs[0].length; ch++) {
      for (let i = 0; i < inputs[0][ch].length; i++) {
        outputs[0][ch][i] = Math.abs(inputs[0][ch][i]) > 0.01 ? inputs[0][ch][i] : 0
      }
    }
    return true
  }
}
registerProcessor('noise-gate', NoiseGateProcessor)

// ❌ Bad: ScriptProcessorNode (deprecated, blocks main thread)
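
The per-sample gate in the processor above can be factored into a pure helper, which keeps `process()` trivial and lets the threshold logic be unit-tested off the audio thread (`gateSample`/`gateBuffer` are hypothetical names; the 0.01 default matches the processor):

```typescript
// Hypothetical pure core of the noise gate: samples at or below the threshold
// (in absolute value) are silenced, everything else passes through unchanged.
export function gateSample(sample: number, threshold = 0.01): number {
  return Math.abs(sample) > threshold ? sample : 0
}

// Apply the gate across one channel's worth of samples.
export function gateBuffer(input: Float32Array, output: Float32Array, threshold = 0.01): void {
  for (let i = 0; i < input.length; i++) output[i] = gateSample(input[i], threshold)
}
```

The worklet's `process()` then reduces to calling `gateBuffer(inputs[0][ch], outputs[0][ch])` per channel.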

6.2 Buffer Pooling

typescript
// ✅ Good: Reuse audio buffers
class AudioBufferPool {
  private pool: AudioBuffer[] = []
  constructor(ctx: AudioContext, size: number, length: number) {
    for (let i = 0; i < size; i++) {
      this.pool.push(ctx.createBuffer(2, length, ctx.sampleRate))
    }
  }
  acquire(): AudioBuffer | undefined { return this.pool.pop() }
  release(buffer: AudioBuffer) {
    for (let ch = 0; ch < buffer.numberOfChannels; ch++) {
      buffer.getChannelData(ch).fill(0)
    }
    this.pool.push(buffer)
  }
}

// ❌ Bad: Create new buffer each time
const buffer = ctx.createBuffer(2, 44100, 44100) // Allocates memory each call

6.3 Offline Rendering

typescript
// ✅ Good: Pre-render complex sounds
async function prerenderSound(): Promise<AudioBuffer> {
  const offlineCtx = new OfflineAudioContext(2, 44100, 44100)
  const osc = offlineCtx.createOscillator()
  const gain = offlineCtx.createGain()
  osc.connect(gain).connect(offlineCtx.destination)
  gain.gain.setValueAtTime(0, 0)
  gain.gain.linearRampToValueAtTime(1, 0.01)
  gain.gain.exponentialRampToValueAtTime(0.001, 1)
  osc.start(); osc.stop(1)
  return offlineCtx.startRendering()
}

// ❌ Bad: Generate complex sounds in real-time (multiple oscillators computed live)

6.4 Node Graph Optimization

typescript
// ✅ Good: Reuse master gain node
const masterGain = ctx.createGain()
masterGain.connect(ctx.destination)
function playSound(buffer: AudioBuffer) {
  const source = ctx.createBufferSource()
  source.buffer = buffer
  source.connect(masterGain)
  source.start()
}

// ❌ Bad: Create full chain for each sound (gain + compressor per play)

6.5 Memory Management

typescript
// ✅ Good: Disconnect and cleanup nodes
function playOneShot(buffer: AudioBuffer) {
  const source = ctx.createBufferSource()
  source.buffer = buffer
  source.connect(masterGain)
  source.onended = () => source.disconnect()
  source.start()
}

// ✅ Good: Limit concurrent sounds (max 8)
class SoundManager {
  private activeSources = new Set<AudioBufferSourceNode>()
  play(buffer: AudioBuffer) {
    if (this.activeSources.size >= 8) this.activeSources.values().next().value?.stop()
    const source = ctx.createBufferSource()
    source.buffer = buffer
    source.connect(masterGain)
    source.onended = () => { source.disconnect(); this.activeSources.delete(source) }
    this.activeSources.add(source)
    source.start()
  }
}

// ❌ Bad: Never cleanup - nodes stay in memory after playback
const source = ctx.createBufferSource()
source.connect(ctx.destination)
source.start()

7. Quality Standards

typescript
// ✅ Always require user gesture
button.addEventListener('click', async () => {
  await audioContext.resume()
  playSound()
})

// ✅ Respect user preferences
if (usePreferencesStore().preferences.soundEnabled) playBeep()

// ✅ Handle permission denial gracefully
try {
  await navigator.mediaDevices.getUserMedia({ audio: true })
} catch (err) {
  if (err instanceof DOMException && err.name === 'NotAllowedError') {
    showVisualFeedback('Microphone access required')
  }
}

8. Testing & Quality

typescript
describe('HUD Sounds', () => {
  it('validates volume bounds', async () => {
    const { playBeep } = useHUDSounds()
    await playBeep({ volume: 2 })  // Clamped to 1
    await playBeep({ volume: -1 }) // Clamped to 0
  })
})

9. Common Mistakes & Anti-Patterns

9.1 Critical Anti-Patterns

typescript
// ❌ Auto-play without user gesture - BLOCKED
onMounted(() => playSound())

// ✅ After user interaction
const handleClick = async () => { await audioContext.resume(); playSound() }

// ❌ Memory leak - no cleanup
const audioContext = new AudioContext()

// ✅ Proper cleanup
onUnmounted(() => audioContext.close())

// ❌ New context per sound - performance killer
function playSound() { const ctx = new AudioContext() }

// ✅ Reuse context
const ctx = new AudioContext()
function playSound() { /* reuse ctx */ }

10. Pre-Implementation Checklist

Phase 1: Before Writing Code

  • Tests written for audio node creation and connections
  • Tests written for volume clamping and validation
  • Performance requirements identified (latency, concurrent sounds)
  • AudioWorklet needed for DSP? Worklet file created
  • Buffer pool size calculated for expected usage

Phase 2: During Implementation

  • User gesture required for AudioContext initialization
  • Audio context reused (not created per sound)
  • Nodes disconnected in onended callbacks
  • Volume bounds validated (0-1 range)
  • Microphone permissions handled gracefully
  • Error states provide visual feedback

Phase 3: Before Committing

  • All audio tests pass:
    npm test -- --grep "audio"
  • Type checking passes:
    npm run typecheck
  • No memory leaks (tested in DevTools Memory tab)
  • Audio context closed on component unmount
  • Visual alternatives provided for accessibility
  • Sound can be disabled via user preferences
  • Volume respects system preferences

11. Summary

Web Audio API for JARVIS: initialize the audio context after a user gesture, clean up on unmount, handle permission denials gracefully, and provide visual alternatives to audio feedback. See references/advanced-patterns.md for advanced patterns.