axiom-camera-capture


Camera Capture with AVFoundation


Guides you through implementing camera capture: session setup, photo capture, video recording, responsive capture UX, rotation handling, and session lifecycle management.

When to Use This Skill


Use when you need to:
  • ☑ Build a custom camera UI (not system picker)
  • ☑ Capture photos with quality/speed tradeoffs
  • ☑ Record video with audio
  • ☑ Handle device rotation correctly (RotationCoordinator)
  • ☑ Make capture feel responsive (zero-shutter-lag)
  • ☑ Handle session interruptions (phone calls, multitasking)
  • ☑ Switch between front/back cameras
  • ☑ Configure capture quality and resolution

Example Prompts


"How do I set up a camera preview in SwiftUI?" "My camera freezes when I get a phone call" "The photo preview is rotated wrong on front camera" "How do I make photo capture feel instant?" "Should I use deferred processing?" "My camera takes too long to capture" "How do I switch between front and back cameras?" "How do I record video with audio?"

Red Flags


Signs you're making this harder than it needs to be:
  • ❌ Calling startRunning() on the main thread (blocks UI for seconds)
  • ❌ Using the deprecated videoOrientation instead of RotationCoordinator (iOS 17+)
  • ❌ Not observing session interruptions (app freezes on phone call)
  • ❌ Creating a new AVCaptureSession for each capture (expensive)
  • ❌ Using the .photo preset for video (wrong format)
  • ❌ Ignoring photoQualityPrioritization (slow captures)
  • ❌ Not handling the .notAuthorized permission state
  • ❌ Modifying the session without beginConfiguration()/commitConfiguration()
  • ❌ Using UIImagePickerController for a custom camera UI (limited control)

Mandatory First Steps


Before implementing any camera feature:

1. Choose Your Capture Mode


What do you need?

┌─ Just let user pick a photo?
│  └─ Don't use AVFoundation - use PHPicker or PhotosPicker
│     See: /skill axiom-photo-library
├─ Simple photo/video capture with system UI?
│  └─ UIImagePickerController (but limited customization)
├─ Custom camera UI with photo capture?
│  └─ AVCaptureSession + AVCapturePhotoOutput
│     → Continue with this skill
├─ Custom camera UI with video recording?
│  └─ AVCaptureSession + AVCaptureMovieFileOutput
│     → Continue with this skill
└─ Both photo and video in same session?
   └─ AVCaptureSession + both outputs
      → Continue with this skill

2. Request Camera Permission


swift
import AVFoundation

func requestCameraAccess() async -> Bool {
    let status = AVCaptureDevice.authorizationStatus(for: .video)

    switch status {
    case .authorized:
        return true
    case .notDetermined:
        return await AVCaptureDevice.requestAccess(for: .video)
    case .denied, .restricted:
        // Show settings prompt
        return false
    @unknown default:
        return false
    }
}
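The "Show settings prompt" comment above is where you would route the user to Settings after a denial. A minimal sketch, assuming UIKit is available (the helper name is illustrative; UIApplication.openSettingsURLString is the real API):

```swift
import UIKit

// Illustrative helper: deep-links to this app's page in Settings,
// where the user can re-enable camera access after a .denied state.
func openAppSettings() {
    guard let url = URL(string: UIApplication.openSettingsURLString),
          UIApplication.shared.canOpenURL(url) else { return }
    UIApplication.shared.open(url)
}
```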
Info.plist required:
xml
<key>NSCameraUsageDescription</key>
<string>Take photos and videos</string>
For audio (video recording):
xml
<key>NSMicrophoneUsageDescription</key>
<string>Record audio with video</string>

3. Understand Session Architecture


AVCaptureSession
    ├─ Inputs
    │   ├─ AVCaptureDeviceInput (camera)
    │   └─ AVCaptureDeviceInput (microphone, for video)
    ├─ Outputs
    │   ├─ AVCapturePhotoOutput (photos)
    │   ├─ AVCaptureMovieFileOutput (video files)
    │   └─ AVCaptureVideoDataOutput (raw frames)
    └─ Connections (automatic between compatible input/output)
Key rule: All session configuration happens on a dedicated serial queue, never main thread.
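The movie-file branch of this graph is not covered by a dedicated pattern below, so here is a minimal sketch of wiring a microphone input and AVCaptureMovieFileOutput (the wrapper function names are illustrative; the AVFoundation calls are real, and NSMicrophoneUsageDescription must be in Info.plist):

```swift
import AVFoundation

// Sketch: audio input plus movie output for video recording.
// Call on the same dedicated serial queue as the rest of the session work.
let movieOutput = AVCaptureMovieFileOutput()

func addVideoRecordingSupport(to session: AVCaptureSession) {
    session.beginConfiguration()
    defer { session.commitConfiguration() }

    // Microphone input (requires NSMicrophoneUsageDescription)
    if let mic = AVCaptureDevice.default(for: .audio),
       let micInput = try? AVCaptureDeviceInput(device: mic),
       session.canAddInput(micInput) {
        session.addInput(micInput)
    }

    // Movie file output; a connection to the camera input is made automatically
    if session.canAddOutput(movieOutput) {
        session.addOutput(movieOutput)
    }
}

// The delegate receives the finished file URL in
// fileOutput(_:didFinishRecordingTo:from:error:).
func startRecording(delegate: AVCaptureFileOutputRecordingDelegate) {
    let url = FileManager.default.temporaryDirectory
        .appendingPathComponent(UUID().uuidString)
        .appendingPathExtension("mov")
    movieOutput.startRecording(to: url, recordingDelegate: delegate)
}

func stopRecording() {
    movieOutput.stopRecording()
}
```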

Core Patterns


Pattern 1: Basic Session Setup


Use case: Set up camera preview with photo capture capability.
swift
import AVFoundation

class CameraManager: NSObject, ObservableObject {  // ObservableObject so SwiftUI can own it via @StateObject
    let session = AVCaptureSession()
    let photoOutput = AVCapturePhotoOutput()

    // CRITICAL: Dedicated serial queue for session work
    private let sessionQueue = DispatchQueue(label: "camera.session")

    func setupSession() {
        sessionQueue.async { [self] in
            session.beginConfiguration()
            defer { session.commitConfiguration() }

            // 1. Set session preset
            session.sessionPreset = .photo

            // 2. Add camera input
            guard let camera = AVCaptureDevice.default(.builtInWideAngleCamera,
                                                        for: .video,
                                                        position: .back),
                  let input = try? AVCaptureDeviceInput(device: camera),
                  session.canAddInput(input) else {
                return
            }
            session.addInput(input)

            // 3. Add photo output
            guard session.canAddOutput(photoOutput) else { return }
            session.addOutput(photoOutput)

            // 4. Configure photo output
            // (isHighResolutionCaptureEnabled is deprecated since iOS 16;
            // set maxPhotoDimensions from the device's format instead)
            if let maxDimensions = camera.activeFormat.supportedMaxPhotoDimensions.last {
                photoOutput.maxPhotoDimensions = maxDimensions
            }
            photoOutput.maxPhotoQualityPrioritization = .quality
        }
    }

    func startSession() {
        sessionQueue.async { [self] in
            if !session.isRunning {
                session.startRunning()  // Blocking call - never on main thread!
            }
        }
    }

    func stopSession() {
        sessionQueue.async { [self] in
            if session.isRunning {
                session.stopRunning()
            }
        }
    }
}
Cost: 30 min implementation

Pattern 2: SwiftUI Camera Preview


Use case: Display camera preview in SwiftUI view.
swift
import SwiftUI
import AVFoundation

struct CameraPreview: UIViewRepresentable {
    let session: AVCaptureSession

    func makeUIView(context: Context) -> PreviewView {
        let view = PreviewView()
        view.previewLayer.session = session
        view.previewLayer.videoGravity = .resizeAspectFill
        return view
    }

    func updateUIView(_ uiView: PreviewView, context: Context) {}

    class PreviewView: UIView {
        override class var layerClass: AnyClass { AVCaptureVideoPreviewLayer.self }
        var previewLayer: AVCaptureVideoPreviewLayer { layer as! AVCaptureVideoPreviewLayer }
    }
}

// Usage in SwiftUI
struct CameraView: View {
    @StateObject private var camera = CameraManager()

    var body: some View {
        CameraPreview(session: camera.session)
            .ignoresSafeArea()
            .onAppear { camera.startSession() }
            .onDisappear { camera.stopSession() }
    }
}
Cost: 20 min implementation

Pattern 3: Rotation Handling with RotationCoordinator (iOS 17+)


Use case: Keep preview and captured photos correctly oriented regardless of device rotation.
Why RotationCoordinator: The deprecated videoOrientation API requires manual observation of device orientation. RotationCoordinator automatically tracks gravity and provides rotation angles.
swift
import AVFoundation

class CameraManager {
    private var rotationCoordinator: AVCaptureDevice.RotationCoordinator?
    private var rotationObservation: NSKeyValueObservation?

    func setupRotationCoordinator(device: AVCaptureDevice, previewLayer: AVCaptureVideoPreviewLayer) {
        // Create coordinator with device and preview layer
        rotationCoordinator = AVCaptureDevice.RotationCoordinator(
            device: device,
            previewLayer: previewLayer
        )

        // Observe preview rotation changes
        rotationObservation = rotationCoordinator?.observe(
            \.videoRotationAngleForHorizonLevelPreview,
            options: [.new]
        ) { [weak previewLayer] coordinator, _ in
            // Update preview layer rotation on main thread
            DispatchQueue.main.async {
                previewLayer?.connection?.videoRotationAngle = coordinator.videoRotationAngleForHorizonLevelPreview
            }
        }

        // Set initial rotation
        previewLayer.connection?.videoRotationAngle = rotationCoordinator!.videoRotationAngleForHorizonLevelPreview
    }

    func captureRotationAngle() -> CGFloat {
        // Use this angle when capturing photos
        rotationCoordinator?.videoRotationAngleForHorizonLevelCapture ?? 0
    }
}
When capturing:
swift
func capturePhoto() {
    let settings = AVCapturePhotoSettings()

    // Apply rotation angle from coordinator
    if let connection = photoOutput.connection(with: .video) {
        connection.videoRotationAngle = captureRotationAngle()
    }

    photoOutput.capturePhoto(with: settings, delegate: self)
}
Cost: 45 min implementation, prevents 2+ hours debugging rotation issues

Pattern 4: Responsive Capture Pipeline (iOS 17+)


Use case: Make photo capture feel instant with zero-shutter-lag, overlapping captures, and responsive button states.
iOS 17+ introduces four complementary APIs that work together for maximum responsiveness:

4a. Zero Shutter Lag


Uses a ring buffer of recent frames to "time travel" back to the exact moment you tapped the shutter. Enabled automatically for iOS 17+ apps.
swift
// Check if supported for current format
if photoOutput.isZeroShutterLagSupported {
    // Enabled by default for apps linking iOS 17+
    // Opt out if causing issues:
    // photoOutput.isZeroShutterLagEnabled = false
}
Why it matters: Without ZSL, there's a delay between tap and frame capture. For action shots, the moment is already over.
Requirements: iPhone XS and newer. Does NOT apply to flash captures, manual exposure, bracketed captures, or constituent photo delivery.

4b. Responsive Capture (Overlapping Captures)


Allows a new capture to start while the previous one is still processing:
swift
// Check support first
if photoOutput.isZeroShutterLagSupported {
    photoOutput.isZeroShutterLagEnabled = true  // Required for responsive capture

    if photoOutput.isResponsiveCaptureSupported {
        photoOutput.isResponsiveCaptureEnabled = true
    }
}
Tradeoff: Increases peak memory usage. If your app is memory-constrained, consider leaving disabled.
Requirements: A12 Bionic (iPhone XS) and newer.

4c. Fast Capture Prioritization


Automatically adapts quality when taking multiple photos rapidly (like burst mode):
swift
if photoOutput.isFastCapturePrioritizationSupported {
    photoOutput.isFastCapturePrioritizationEnabled = true
    // When enabled, rapid captures use "balanced" quality instead of "quality"
    // to maintain consistent shot-to-shot time
}
When to enable: User-facing toggle ("Prioritize Faster Shooting" in Camera.app). Off by default because it reduces quality.
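Since this is meant to be a user-facing choice, a sketch of wiring it to a stored preference follows (the @AppStorage key and view name are made up for illustration; the AVCapturePhotoOutput properties are real):

```swift
import SwiftUI
import AVFoundation

// Sketch: a settings toggle mirroring Camera.app's
// "Prioritize Faster Shooting" switch. Storage key is illustrative.
struct CaptureSettingsView: View {
    @AppStorage("prioritizeFasterShooting") private var fasterShooting = false
    let photoOutput: AVCapturePhotoOutput

    var body: some View {
        Toggle("Prioritize Faster Shooting", isOn: $fasterShooting)
            .disabled(!photoOutput.isFastCapturePrioritizationSupported)
            .onChange(of: fasterShooting) { _, newValue in
                photoOutput.isFastCapturePrioritizationEnabled = newValue
            }
    }
}
```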

4d. Readiness Coordinator (Button State Management)


Critical for UX: Provides synchronous updates for shutter button state without async lag.
swift
class CameraManager {
    private var readinessCoordinator: AVCapturePhotoOutputReadinessCoordinator!

    func setupReadinessCoordinator() {
        readinessCoordinator = AVCapturePhotoOutputReadinessCoordinator(photoOutput: photoOutput)
        readinessCoordinator.delegate = self
    }

    func capturePhoto() {
        let settings = AVCapturePhotoSettings()  // reference type; let avoids an unused-var warning
        settings.photoQualityPrioritization = .balanced

        // Tell coordinator to track this capture BEFORE calling capturePhoto
        readinessCoordinator.startTrackingCaptureRequest(using: settings)

        photoOutput.capturePhoto(with: settings, delegate: self)
    }
}

extension CameraManager: AVCapturePhotoOutputReadinessCoordinatorDelegate {
    func readinessCoordinator(_ coordinator: AVCapturePhotoOutputReadinessCoordinator,
                              captureReadinessDidChange captureReadiness: AVCapturePhotoOutput.CaptureReadiness) {
        DispatchQueue.main.async {
            switch captureReadiness {
            case .ready:
                self.shutterButton.isEnabled = true
                self.shutterButton.alpha = 1.0

            case .notReadyMomentarily:
                // Brief delay - disable to prevent double-tap
                self.shutterButton.isEnabled = false

            case .notReadyWaitingForCapture:
                // Flash is firing - dim button
                self.shutterButton.alpha = 0.5

            case .notReadyWaitingForProcessing:
                // Processing previous photo - show spinner
                self.showProcessingIndicator()

            case .sessionNotRunning:
                self.shutterButton.isEnabled = false

            @unknown default:
                break
            }
        }
    }
}
Why use Readiness Coordinator: Without it, you'd need to track capture state manually and users might spam the shutter button during processing.

Quality Prioritization (Baseline)


Still useful even without the new APIs:
swift
func capturePhoto() {
    let settings = AVCapturePhotoSettings()  // reference type; let avoids an unused-var warning

    // Speed vs Quality tradeoff
    // .speed     - Fastest capture, lower quality
    // .balanced  - Good default
    // .quality   - Best quality, may have delay
    settings.photoQualityPrioritization = .speed

    // For specific use cases:
    // - Social sharing: .speed (users expect instant)
    // - Document scanning: .quality (accuracy matters)
    // - General photography: .balanced

    photoOutput.capturePhoto(with: settings, delegate: self)
}
Deferred Processing (iOS 17+):
For maximum responsiveness, the capture returns immediately with a proxy image; full Deep Fusion processing happens in the background:
swift
// Check support and enable deferred processing
if photoOutput.isAutoDeferredPhotoDeliverySupported {
    photoOutput.isAutoDeferredPhotoDeliveryEnabled = true
}
Delegate callbacks with deferred processing:
swift
// Called for BOTH regular photos AND deferred proxies
func photoOutput(_ output: AVCapturePhotoOutput,
                 didFinishProcessingPhoto photo: AVCapturePhoto,
                 error: Error?) {
    guard error == nil else { return }

    // Non-deferred photo - save directly
    if !photo.isRawPhoto, let data = photo.fileDataRepresentation() {
        savePhotoToLibrary(data)
    }
}

// Called ONLY for deferred proxies - save to PhotoKit for later processing
func photoOutput(_ output: AVCapturePhotoOutput,
                 didFinishCapturingDeferredPhotoProxy deferredPhotoProxy: AVCaptureDeferredPhotoProxy,
                 error: Error?) {
    guard error == nil else { return }

    // CRITICAL: Save proxy to library ASAP before app is backgrounded
    // App may be force-quit if memory pressure is high during backgrounding
    guard let proxyData = deferredPhotoProxy.fileDataRepresentation() else { return }

    Task {
        try await PHPhotoLibrary.shared().performChanges {
            let request = PHAssetCreationRequest.forAsset()
            // Use .photoProxy resource type - triggers deferred processing in Photos
            request.addResource(with: .photoProxy, data: proxyData, options: nil)
        }
    }
}
When final processing happens:
  • On-demand when image is requested from PhotoKit
  • Or automatically when device is idle (plugged in, not in use)
Fetching images with deferred processing awareness:
swift
// Request with secondary degraded image for smoother UX
let options = PHImageRequestOptions()
options.allowSecondaryDegradedImage = true  // New in iOS 17

PHImageManager.default().requestImage(
    for: asset,
    targetSize: targetSize,
    contentMode: .aspectFill,
    options: options
) { image, info in
    let isDegraded = info?[PHImageResultIsDegradedKey] as? Bool ?? false

    if isDegraded {
        // First: Low quality (immediate)
        // Second: Medium quality (new - while processing)
        // Third callback will be final quality
        self.showTemporaryImage(image)
    } else {
        // Final quality - processing complete
        self.showFinalImage(image)
    }
}
Requirements: iPhone 11 Pro and newer. Not used for flash captures or formats that don't benefit from extended processing.
Important considerations:
  • Can't apply pixel buffer customizations (filters, metadata changes) to deferred photos
  • Use PhotoKit adjustments after processing for edits
  • Get proxy into library ASAP - limited time when backgrounded
Cost: 1 hour implementation, prevents "camera feels slow" complaints

Pattern 5: Session Interruption Handling


Use case: Handle phone calls, multitasking, system camera usage.
swift
class CameraManager {
    private var interruptionObservers: [NSObjectProtocol] = []

    func setupInterruptionHandling() {
        // Session was interrupted
        let interruptedObserver = NotificationCenter.default.addObserver(
            forName: .AVCaptureSessionWasInterrupted,
            object: session,
            queue: .main
        ) { [weak self] notification in
            guard let reason = notification.userInfo?[AVCaptureSessionInterruptionReasonKey] as? Int,
                  let interruptionReason = AVCaptureSession.InterruptionReason(rawValue: reason) else {
                return
            }

            switch interruptionReason {
            case .videoDeviceNotAvailableInBackground:
                // App went to background - normal, will resume
                self?.showPausedOverlay()

            case .audioDeviceInUseByAnotherClient:
                // Another app using audio
                self?.showInterruptedBanner("Audio in use by another app")

            case .videoDeviceInUseByAnotherClient:
                // Another app using camera
                self?.showInterruptedBanner("Camera in use by another app")

            case .videoDeviceNotAvailableWithMultipleForegroundApps:
                // Split View/Slide Over - camera not available
                self?.showInterruptedBanner("Camera unavailable in Split View")

            case .videoDeviceNotAvailableDueToSystemPressure:
                // Thermal state - reduce quality or stop
                self?.handleThermalPressure()

            @unknown default:
                self?.showInterruptedBanner("Camera interrupted")
            }
        }
        interruptionObservers.append(interruptedObserver)

        // Session interruption ended
        let endedObserver = NotificationCenter.default.addObserver(
            forName: .AVCaptureSessionInterruptionEnded,
            object: session,
            queue: .main
        ) { [weak self] _ in
            self?.hideInterruptedBanner()
            self?.hidePausedOverlay()
            // Session automatically resumes - no need to call startRunning()
        }
        interruptionObservers.append(endedObserver)
    }

    deinit {
        interruptionObservers.forEach { NotificationCenter.default.removeObserver($0) }
    }
}
Cost: 30 min implementation, prevents "camera freezes" bug reports

Pattern 6: Camera Switching (Front/Back)

Use case: Toggle between front and back cameras.
swift
func switchCamera() {
    sessionQueue.async { [self] in
        // Find the current video input (an audio input may also be attached)
        guard let currentInput = session.inputs
            .compactMap({ $0 as? AVCaptureDeviceInput })
            .first(where: { $0.device.hasMediaType(.video) }) else {
            return
        }

        let currentPosition = currentInput.device.position
        let newPosition: AVCaptureDevice.Position = currentPosition == .back ? .front : .back

        guard let newDevice = AVCaptureDevice.default(
            .builtInWideAngleCamera,
            for: .video,
            position: newPosition
        ) else {
            return
        }

        session.beginConfiguration()
        defer { session.commitConfiguration() }

        // Remove old input
        session.removeInput(currentInput)

        // Add new input
        do {
            let newInput = try AVCaptureDeviceInput(device: newDevice)
            if session.canAddInput(newInput) {
                session.addInput(newInput)

                // Update rotation coordinator for new device
                if let previewLayer = previewLayer {
                    setupRotationCoordinator(device: newDevice, previewLayer: previewLayer)
                }
            } else {
                // Fallback: restore old input
                session.addInput(currentInput)
            }
        } catch {
            session.addInput(currentInput)
        }
    }
}
Front camera mirroring: Front camera preview is mirrored by default (matches user expectation). Captured photos are NOT mirrored (correct for sharing). This is intentional.
Cost: 20 min implementation
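If a product genuinely requires the saved selfie to match the mirrored preview, mirror in post-processing after capture. A minimal sketch; the helper name is illustrative, not part of any API:

```swift
import UIKit

// Hypothetical helper: horizontally flip a captured front-camera photo so it
// matches the mirrored preview. UIImage.draw honors the image's orientation
// metadata, so flipping the context mirrors the final rendered pixels.
func mirroredForSelfie(_ image: UIImage) -> UIImage {
    let renderer = UIGraphicsImageRenderer(size: image.size)
    return renderer.image { context in
        context.cgContext.translateBy(x: image.size.width, y: 0)
        context.cgContext.scaleBy(x: -1, y: 1)
        image.draw(at: .zero)
    }
}
```

Note that re-rendering discards EXIF metadata; for most sharing flows that is acceptable, but copy metadata explicitly if it matters.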

Pattern 7: Video Recording

Use case: Record video with audio to file.
swift
class CameraManager: NSObject {
    let movieOutput = AVCaptureMovieFileOutput()
    private var currentRecordingURL: URL?

    func setupVideoRecording() {
        sessionQueue.async { [self] in
            session.beginConfiguration()
            defer { session.commitConfiguration() }

            // Set video preset
            session.sessionPreset = .high  // Or .hd1920x1080, .hd4K3840x2160

            // Add microphone input
            if let microphone = AVCaptureDevice.default(for: .audio),
               let audioInput = try? AVCaptureDeviceInput(device: microphone),
               session.canAddInput(audioInput) {
                session.addInput(audioInput)
            }

            // Add movie output
            if session.canAddOutput(movieOutput) {
                session.addOutput(movieOutput)
            }
        }
    }

    func startRecording() {
        guard !movieOutput.isRecording else { return }

        let outputURL = FileManager.default.temporaryDirectory
            .appendingPathComponent(UUID().uuidString)
            .appendingPathExtension("mov")

        currentRecordingURL = outputURL

        // Apply rotation
        if let connection = movieOutput.connection(with: .video) {
            connection.videoRotationAngle = captureRotationAngle()
        }

        movieOutput.startRecording(to: outputURL, recordingDelegate: self)
    }

    func stopRecording() {
        guard movieOutput.isRecording else { return }
        movieOutput.stopRecording()
    }
}

extension CameraManager: AVCaptureFileOutputRecordingDelegate {
    func fileOutput(_ output: AVCaptureFileOutput,
                    didFinishRecordingTo outputFileURL: URL,
                    from connections: [AVCaptureConnection],
                    error: Error?) {
        if let error = error {
            print("Recording error: \(error)")
            return
        }

        // Video saved to outputFileURL
        saveVideoToPhotoLibrary(outputFileURL)
    }
}
Cost: 45 min implementation
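The saveVideoToPhotoLibrary(_:) call in the delegate is not shown above. A sketch using PhotoKit, assuming NSPhotoLibraryAddUsageDescription is present in Info.plist:

```swift
import Photos

// Sketch of the saveVideoToPhotoLibrary(_:) helper used by the delegate.
// Requests add-only access (iOS 14+) and cleans up the temp file afterwards.
func saveVideoToPhotoLibrary(_ url: URL) {
    PHPhotoLibrary.requestAuthorization(for: .addOnly) { status in
        guard status == .authorized else { return }
        PHPhotoLibrary.shared().performChanges({
            PHAssetChangeRequest.creationRequestForAssetFromVideo(atFileURL: url)
        }) { _, error in
            // The library copies the file, so the temporary recording
            // can be deleted regardless of outcome.
            try? FileManager.default.removeItem(at: url)
            if let error = error {
                print("Save failed: \(error)")
            }
        }
    }
}
```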

Anti-Patterns

Anti-Pattern 1: Session Work on Main Thread

Wrong:
swift
func startCamera() {
    session.startRunning()  // Blocks UI for 1-3 seconds!
}
Right:
swift
func startCamera() {
    sessionQueue.async { [self] in
        session.startRunning()
    }
}
Why it matters: startRunning() is blocking; on the main thread, the UI freezes.

Anti-Pattern 2: Using Deprecated videoOrientation

Wrong (pre-iOS 17):
swift
// Manually tracking orientation
NotificationCenter.default.addObserver(
    forName: UIDevice.orientationDidChangeNotification,
    object: nil,
    queue: .main
) { _ in
    // Manual rotation logic...
}
Right (iOS 17+):
swift
let coordinator = AVCaptureDevice.RotationCoordinator(device: camera, previewLayer: preview)
// Automatically tracks gravity, provides angles
Why it matters: RotationCoordinator handles edge cases (face-up, face-down) that manual tracking misses.
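In practice the coordinator's angles are consumed via key-value observation. A minimal wiring sketch (iOS 17+; names other than the AVFoundation APIs are illustrative):

```swift
import AVFoundation

final class RotationHandler {
    private var coordinator: AVCaptureDevice.RotationCoordinator?
    private var observation: NSKeyValueObservation?

    func setup(camera: AVCaptureDevice, previewLayer: AVCaptureVideoPreviewLayer) {
        let coordinator = AVCaptureDevice.RotationCoordinator(
            device: camera,
            previewLayer: previewLayer
        )
        self.coordinator = coordinator

        // The preview angle changes as the device moves relative to gravity;
        // observe it and forward to the preview layer's connection.
        observation = coordinator.observe(
            \.videoRotationAngleForHorizonLevelPreview,
            options: [.initial, .new]
        ) { [weak previewLayer] coordinator, _ in
            previewLayer?.connection?.videoRotationAngle =
                coordinator.videoRotationAngleForHorizonLevelPreview
        }
    }

    // When capturing, use the capture angle, not the preview angle.
    func captureAngle() -> CGFloat {
        coordinator?.videoRotationAngleForHorizonLevelCapture ?? 0
    }
}
```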

Anti-Pattern 3: Ignoring Session Interruptions

Wrong:
swift
// No interruption handling - camera freezes on phone call
Right:
swift
NotificationCenter.default.addObserver(
    forName: .AVCaptureSessionWasInterrupted,
    object: session,
    queue: .main
) { notification in
    // Show UI feedback
}
Why it matters: Without handling, camera appears frozen when interrupted.

Anti-Pattern 4: Modifying Session Without Configuration Block

Wrong:
swift
session.removeInput(oldInput)
session.addInput(newInput)  // May fail mid-stream
Right:
swift
session.beginConfiguration()
session.removeInput(oldInput)
session.addInput(newInput)
session.commitConfiguration()  // Atomic change
Why it matters: Without configuration block, session may enter invalid state between calls.

Pressure Scenarios

Scenario 1: "Just Make the Camera Work by Friday"

Context: Product wants camera feature shipped. You're considering skipping interruption handling.
Pressure: "It works when I test it, let's ship."
Reality: First user who gets a phone call while using camera will see frozen UI. App Store review may catch this.
Correct action:
  1. Implement interruption handling (30 min)
  2. Test by calling your test device during camera use
  3. Verify UI shows appropriate feedback
Push-back template: "Camera captures work, but the app freezes if a phone call comes in. I need 30 minutes to handle interruptions properly and avoid 1-star reviews."

Scenario 2: "The Camera is Too Slow"

Context: QA reports photo capture feels sluggish. PM wants it "instant like the system camera."
Pressure: "Just make it faster somehow."
Reality: Default settings prioritize quality over speed. System camera uses deferred processing.
Correct action:
  1. Set photoQualityPrioritization = .speed for social/sharing use cases
  2. Consider deferred processing for maximum responsiveness
  3. Show capture animation immediately (before processing completes)
Push-back template: "We're currently optimizing for image quality. I can make capture feel instant by prioritizing speed and showing the preview immediately while processing continues in background. This is what the system Camera app does."
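The speed/quality tradeoff from step 1 is requested per capture. A sketch, assuming an existing AVCapturePhotoOutput and capture delegate:

```swift
import AVFoundation

// Sketch: request speed-first processing for a single capture. The output's
// maxPhotoQualityPrioritization (set during session configuration) caps
// what individual settings may request.
func captureForSharing(photoOutput: AVCapturePhotoOutput,
                       delegate: any AVCapturePhotoCaptureDelegate) {
    let settings = AVCapturePhotoSettings()
    settings.photoQualityPrioritization = .speed  // vs .balanced / .quality
    photoOutput.capturePhoto(with: settings, delegate: delegate)
}
```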

Scenario 3: "Why is the Front Camera Photo Mirrored?"

Context: Designer reports front camera photos look "wrong" - they're not mirrored like the preview.
Pressure: "The preview shows it one way, the photo should match."
Reality: Preview is mirrored (user expectation - like a mirror). Photo is NOT mirrored (correct for sharing - text reads correctly). This is intentional behavior matching system camera.
Correct action:
  1. Explain this is Apple's standard behavior
  2. If business requires mirrored photos (selfie apps), manually mirror in post-processing
  3. Never mirror the preview differently than expected
Push-back template: "This is intentional Apple behavior. The preview is mirrored like a mirror so users can frame themselves, but the captured photo is unmirrored so text reads correctly when shared. We can add optional mirroring in post-processing if our use case requires it."

Checklist

Before shipping camera features:
Session Setup:
  • ☑ All session work on dedicated serial queue
  • ☑ startRunning() never called on main thread
  • ☑ Session preset matches use case (.photo for photos, .high for video)
  • ☑ Configuration changes wrapped in beginConfiguration() / commitConfiguration()
Permissions:
  • ☑ Camera permission requested before session setup
  • ☑ NSCameraUsageDescription in Info.plist
  • ☑ NSMicrophoneUsageDescription in Info.plist if recording audio
  • ☑ Graceful handling of denied permission
Rotation:
  • ☑ RotationCoordinator used (not deprecated videoOrientation)
  • ☑ Preview layer rotation updated via observation
  • ☑ Capture rotation angle applied when taking photos
  • ☑ Tested in all orientations (portrait, landscape, face-up)
Responsiveness:
  • ☑ photoQualityPrioritization set appropriately for use case
  • ☑ Capture button shows immediate feedback
  • ☑ Deferred processing considered for maximum speed
Interruptions:
  • ☑ Session interruption observer registered
  • ☑ UI feedback shown when interrupted
  • ☑ Tested with incoming phone call
  • ☑ Tested in Split View (iPad)
Camera Switching:
  • ☑ Front/back switch updates rotation coordinator
  • ☑ Switch happens on session queue
  • ☑ Fallback if new camera unavailable
Video Recording (if applicable):
  • ☑ Microphone input added
  • ☑ Recording delegate handles completion
  • ☑ File cleanup for temporary recordings
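The permission items above can be handled with a small gate before session setup. A sketch (the callback names are illustrative):

```swift
import AVFoundation

// Sketch: resolve camera authorization before touching the session.
func requestCameraAccess(onGranted: @escaping () -> Void,
                         onDenied: @escaping () -> Void) {
    switch AVCaptureDevice.authorizationStatus(for: .video) {
    case .authorized:
        onGranted()
    case .notDetermined:
        AVCaptureDevice.requestAccess(for: .video) { granted in
            DispatchQueue.main.async { granted ? onGranted() : onDenied() }
        }
    default:  // .denied, .restricted
        onDenied()  // e.g. explain and deep-link to Settings
    }
}
```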

Resources

WWDC: 2021-10247, 2023-10105
Docs: /avfoundation/avcapturesession, /avfoundation/avcapturedevice/rotationcoordinator, /avfoundation/avcapturephotosettings, /avfoundation/avcapturephotooutputreadinesscoordinator
Skills: axiom-camera-capture-ref, axiom-camera-capture-diag, axiom-photo-library