axiom-vision-diag

Vision Framework Diagnostics

Systematic troubleshooting for Vision framework issues: subjects not detected, missing landmarks, low confidence, performance problems, coordinate mismatches, text recognition failures, barcode detection issues, and document scanning problems.

Overview

Core Principle: When Vision doesn't work, the problem is usually:
  1. Environment (lighting, occlusion, edge of frame) - 40%
  2. Confidence threshold (ignoring low confidence data) - 30%
  3. Threading (blocking main thread causes frozen UI) - 15%
  4. Coordinates (mixing lower-left and top-left origins) - 10%
  5. API availability (using iOS 17+ APIs on older devices) - 5%
Always check environment and confidence BEFORE debugging code.

Red Flags

Symptoms that indicate Vision-specific issues:
| Symptom | Likely Cause |
| --- | --- |
| Subject not detected at all | Edge of frame, poor lighting, very small subject |
| Hand landmarks intermittently nil | Hand near edge, parallel to camera, glove/occlusion |
| Body pose skipped frames | Person bent over, upside down, flowing clothing |
| UI freezes during processing | Running Vision on main thread |
| Overlays in wrong position | Coordinate conversion (lower-left vs top-left) |
| Crash on older devices | Using iOS 17+ APIs without `@available` check |
| Person segmentation misses people | >4 people in scene (instance mask limit) |
| Low FPS in camera feed | `maximumHandCount` too high, not dropping frames |
| Text not recognized at all | Blurry image, stylized font, wrong recognition level |
| Text misread (wrong characters) | Language correction disabled, missing custom words |
| Barcode not detected | Wrong symbology, code too small, glare/reflection |
| DataScanner shows blank screen | Camera access denied, device not supported |
| Document edges not detected | Low contrast, non-rectangular, glare |
| Real-time scanning too slow | Processing every frame, region too large |

Mandatory First Steps

Before investigating code, run these diagnostics:

Step 1: Verify Detection with Diagnostic Code

swift
let request = VNGenerateForegroundInstanceMaskRequest()  // Or hand/body pose
let handler = VNImageRequestHandler(cgImage: testImage)

do {
    try handler.perform([request])

    if let results = request.results {
        print("✅ Request succeeded")
        print("Result count: \(results.count)")

        if let observation = results.first as? VNInstanceMaskObservation {
            print("All instances: \(observation.allInstances)")
            print("Instance count: \(observation.allInstances.count)")
        }
    } else {
        print("⚠️ Request succeeded but no results")
    }
} catch {
    print("❌ Request failed: \(error)")
}
Expected output:
  • ✅ Request succeeded, instance count > 0 → Detection working
  • ⚠️ Request succeeded, instance count = 0 → Nothing detected (see Decision Tree)
  • ❌ Request failed → API availability issue

Step 2: Check Confidence Scores

swift
// For hand/body pose
if let observation = request.results?.first as? VNHumanHandPoseObservation {
    let allPoints = try observation.recognizedPoints(.all)

    for (key, point) in allPoints {
        print("\(key): confidence \(point.confidence)")

        if point.confidence < 0.3 {
            print("  ⚠️ LOW CONFIDENCE - unreliable")
        }
    }
}
Expected output:
  • Most landmarks > 0.5 confidence → Good detection
  • Many landmarks < 0.3 → Poor lighting, occlusion, or edge of frame

Step 3: Verify Threading

swift
print("🧵 Thread: \(Thread.current)")

if Thread.isMainThread {
    print("❌ Running on MAIN THREAD - will block UI!")
} else {
    print("✅ Running on background thread")
}
Expected output:
  • ✅ Background thread → Correct
  • ❌ Main thread → Move to `DispatchQueue.global()`

Decision Tree

Vision not working as expected?
├─ No results returned?
│  ├─ Check Step 1 output
│  │  ├─ "Request failed" → See Pattern 1a (API availability)
│  │  ├─ "No results" → See Pattern 1b (nothing detected)
│  │  └─ Results but count = 0 → See Pattern 1c (edge of frame)
├─ Landmarks have nil/low confidence?
│  ├─ Hand pose → See Pattern 2 (hand detection issues)
│  ├─ Body pose → See Pattern 3 (body detection issues)
│  └─ Face detection → See Pattern 4 (face detection issues)
├─ UI freezing/slow?
│  ├─ Check Step 3 (threading)
│  │  ├─ Main thread → See Pattern 5a (move to background)
│  │  └─ Background thread → See Pattern 5b (performance tuning)
├─ Overlays in wrong position?
│  └─ See Pattern 6 (coordinate conversion)
├─ Person segmentation missing people?
│  └─ See Pattern 7 (crowded scenes)
├─ VisionKit not working?
│  └─ See Pattern 8 (VisionKit specific)
├─ Text recognition issues?
│  ├─ No text detected → See Pattern 9a (image quality)
│  ├─ Wrong characters → See Pattern 9b (language/correction)
│  └─ Too slow → See Pattern 9c (recognition level)
├─ Barcode detection issues?
│  ├─ Barcode not detected → See Pattern 10a (symbology/size)
│  └─ Wrong payload → See Pattern 10b (barcode quality)
├─ DataScannerViewController issues?
│  ├─ Blank screen → See Pattern 11a (availability check)
│  └─ Items not detected → See Pattern 11b (data types)
└─ Document scanning issues?
   ├─ Edges not detected → See Pattern 12a (contrast/shape)
   └─ Perspective wrong → See Pattern 12b (corner points)

Diagnostic Patterns

Pattern 1a: Request Failed (API Availability)

Symptom: `try handler.perform([request])` throws an error
Common errors:
"VNGenerateForegroundInstanceMaskRequest is only available on iOS 17.0 or newer"
"VNDetectHumanBodyPose3DRequest is only available on iOS 17.0 or newer"
Root cause: Using iOS 17+ APIs on older deployment target
Fix:
swift
if #available(iOS 17.0, *) {
    let request = VNGenerateForegroundInstanceMaskRequest()
    // ...
} else {
    // Fallback for iOS 14-16
    let request = VNGeneratePersonSegmentationRequest()
    // ...
}
Prevention: Check API availability in `axiom-vision-ref` before implementing
Time to fix: 10 min

Pattern 1b: No Results (Nothing Detected)

Symptom: `request.results == nil` or `results.isEmpty`
Diagnostic:
swift
// 1. Save debug image to Photos
UIImageWriteToSavedPhotosAlbum(debugImage, nil, nil, nil)

// 2. Inspect visually
// - Is subject too small? (< 10% of image)
// - Is subject blurry?
// - Poor contrast with background?
Common causes:
  • Subject too small (resize or crop closer)
  • Subject too blurry (increase lighting, stabilize camera)
  • Low contrast (subject same color as background)
Fix:
swift
// Crop image to focus on region of interest
let croppedImage = cropImage(sourceImage, to: regionOfInterest)
let handler = VNImageRequestHandler(cgImage: croppedImage)
Time to fix: 30 min
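The `cropImage` helper above is left undefined in this guide. A minimal sketch using the real `CGImage.cropping(to:)` API, assuming `regionOfInterest` is a normalized rect in the same origin convention as the pixel data (orientation handling is omitted):

```swift
import CoreGraphics

/// Crop a CGImage to a normalized region of interest.
/// Returns nil if the computed pixel rect falls outside the image.
func cropImage(_ image: CGImage, to normalizedRect: CGRect) -> CGImage? {
    // Scale the normalized rect up to pixel coordinates
    let pixelRect = CGRect(
        x: normalizedRect.origin.x * CGFloat(image.width),
        y: normalizedRect.origin.y * CGFloat(image.height),
        width: normalizedRect.width * CGFloat(image.width),
        height: normalizedRect.height * CGFloat(image.height)
    )
    return image.cropping(to: pixelRect)
}
```

Note that `cropping(to:)` shares the underlying pixel data, so this is cheap enough to do per frame.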

Pattern 1c: Edge of Frame Issues

Symptom: Subject detected intermittently as object moves across frame
Root cause: Partial occlusion when subject touches image edges
Diagnostic:
swift
// Check if subject is near edges
if let observation = results.first as? VNInstanceMaskObservation {
    let mask = try observation.createScaledMask(
        for: observation.allInstances,
        croppedToInstancesContent: true
    )

    let bounds = calculateMaskBounds(mask)

    if bounds.minX < 0.1 || bounds.maxX > 0.9 ||
       bounds.minY < 0.1 || bounds.maxY > 0.9 {
        print("⚠️ Subject too close to edge")
    }
}
Fix:
swift
// Add padding to capture area
let paddedRect = captureRect.insetBy(dx: -20, dy: -20)

// OR guide user with on-screen overlay
overlayView.addSubview(guideBox)  // Visual boundary
Time to fix: 20 min
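`calculateMaskBounds` is likewise a helper this guide assumes. One possible sketch that scans the mask buffer for its nonzero extent, assuming the scaled mask is a single-channel 8-bit pixel buffer (the format Vision typically produces for masks):

```swift
import CoreVideo
import CoreGraphics

/// Normalized bounding box of nonzero pixels in a single-channel
/// 8-bit mask buffer. Returns .zero for an empty mask.
func calculateMaskBounds(_ mask: CVPixelBuffer) -> CGRect {
    CVPixelBufferLockBaseAddress(mask, .readOnly)
    defer { CVPixelBufferUnlockBaseAddress(mask, .readOnly) }

    let width = CVPixelBufferGetWidth(mask)
    let height = CVPixelBufferGetHeight(mask)
    let bytesPerRow = CVPixelBufferGetBytesPerRow(mask)
    guard let base = CVPixelBufferGetBaseAddress(mask) else { return .zero }
    let pixels = base.assumingMemoryBound(to: UInt8.self)

    var minX = width, maxX = -1, minY = height, maxY = -1
    for y in 0..<height {
        for x in 0..<width where pixels[y * bytesPerRow + x] > 0 {
            minX = min(minX, x); maxX = max(maxX, x)
            minY = min(minY, y); maxY = max(maxY, y)
        }
    }
    guard maxX >= 0 else { return .zero }  // No foreground pixels

    return CGRect(
        x: CGFloat(minX) / CGFloat(width),
        y: CGFloat(minY) / CGFloat(height),
        width: CGFloat(maxX - minX + 1) / CGFloat(width),
        height: CGFloat(maxY - minY + 1) / CGFloat(height)
    )
}
```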

Pattern 2: Hand Pose Issues

Symptom: `VNDetectHumanHandPoseRequest` returns nil or low-confidence landmarks
Diagnostic:
swift
if let observation = request.results?.first as? VNHumanHandPoseObservation {
    let thumbTip = try? observation.recognizedPoint(.thumbTip)
    let wrist = try? observation.recognizedPoint(.wrist)

    print("Thumb confidence: \(thumbTip?.confidence ?? 0)")
    print("Wrist confidence: \(wrist?.confidence ?? 0)")

    // Check hand orientation
    if let thumb = thumbTip, let wristPoint = wrist {
        let angle = atan2(
            thumb.location.y - wristPoint.location.y,
            thumb.location.x - wristPoint.location.x
        )
        print("Hand angle: \(angle * 180 / .pi) degrees")

        if abs(angle) > 80 && abs(angle) < 100 {
            print("⚠️ Hand parallel to camera (hard to detect)")
        }
    }
}
Common causes:
| Cause | Confidence Pattern | Fix |
| --- | --- | --- |
| Hand near edge | Tips have low confidence | Adjust framing |
| Hand parallel to camera | All landmarks low | Prompt user to rotate hand |
| Gloves/occlusion | Fingers low, wrist high | Remove gloves or change lighting |
| Feet detected as hands | Unexpected hand detected | Add `chirality` check or ignore |
Fix for parallel hand:
swift
// Detect and warn user
if avgConfidence < 0.4 {
    showWarning("Rotate your hand toward the camera")
}
Time to fix: 45 min
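The `avgConfidence` value in the fix above isn't computed in the snippet. A simple sketch (the function name is ours) that averages over all recognized landmarks:

```swift
import Vision

/// Average confidence across all recognized hand landmarks.
func averageConfidence(of observation: VNHumanHandPoseObservation) throws -> Float {
    let points = try observation.recognizedPoints(.all)
    guard !points.isEmpty else { return 0 }
    let total = points.values.reduce(Float(0)) { $0 + $1.confidence }
    return total / Float(points.count)
}
```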

Pattern 3: Body Pose Issues

Symptom: `VNDetectHumanBodyPoseRequest` skips frames or returns low confidence
Diagnostic:
swift
if let observation = request.results?.first as? VNHumanBodyPoseObservation {
    let nose = try? observation.recognizedPoint(.nose)
    let root = try? observation.recognizedPoint(.root)

    if let nosePoint = nose, let rootPoint = root {
        let bodyAngle = atan2(
            nosePoint.location.y - rootPoint.location.y,
            nosePoint.location.x - rootPoint.location.x
        )

        let angleFromVertical = abs(bodyAngle - .pi / 2)

        if angleFromVertical > .pi / 4 {
            print("⚠️ Person bent over or upside down")
        }
    }
}
Common causes:
| Cause | Solution |
| --- | --- |
| Person bent over | Prompt user to stand upright |
| Upside down (handstand) | Use ARKit instead (better for dynamic poses) |
| Flowing clothing | Increase contrast or use tighter clothing |
| Multiple people overlapping | Use person instance segmentation |
Time to fix: 1 hour

Pattern 4: Face Detection Issues

Symptom: `VNDetectFaceRectanglesRequest` misses faces or returns wrong count
Diagnostic:
swift
if let faces = request.results as? [VNFaceObservation] {
    print("Detected \(faces.count) faces")

    for face in faces {
        print("Face bounds: \(face.boundingBox)")
        print("Confidence: \(face.confidence)")

        if face.boundingBox.width < 0.1 {
            print("⚠️ Face too small")
        }
    }
}
Common causes:
  • Face < 10% of image (crop closer)
  • Profile view (use face landmarks request instead)
  • Poor lighting (increase exposure)
Time to fix: 30 min
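For the profile-view case, a sketch of the suggested fallback using `VNDetectFaceLandmarksRequest` (the printed fields are illustrative, not a fixed recipe):

```swift
import Vision

// Landmarks requests also report head orientation, which helps
// distinguish a true profile view from a detection failure.
let landmarksRequest = VNDetectFaceLandmarksRequest { request, error in
    guard let faces = request.results as? [VNFaceObservation] else { return }
    for face in faces {
        // landmarks is nil when the face was found but points weren't resolved
        if let landmarks = face.landmarks {
            print("Nose region resolved: \(landmarks.nose != nil)")
        }
        print("Roll: \(face.roll ?? 0), yaw: \(face.yaw ?? 0)")
    }
}
```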

Pattern 5a: UI Freezing (Main Thread)

Symptom: App freezes when performing Vision request
Diagnostic: Step 3 above confirms the request is running on the main thread
Fix:
swift
// BEFORE (wrong)
let request = VNGenerateForegroundInstanceMaskRequest()
try handler.perform([request])  // Blocks UI

// AFTER (correct)
DispatchQueue.global(qos: .userInitiated).async {
    let request = VNGenerateForegroundInstanceMaskRequest()
    try? handler.perform([request])

    DispatchQueue.main.async {
        // Update UI
    }
}
Time to fix: 15 min

Pattern 5b: Performance Issues (Background Thread)

Symptom: Already on background thread but still slow / dropping frames
Diagnostic:
swift
let start = CFAbsoluteTimeGetCurrent()

try handler.perform([request])

let elapsed = CFAbsoluteTimeGetCurrent() - start
print("Request took \(elapsed * 1000)ms")

if elapsed > 0.2 {  // 200ms = too slow for real-time
    print("⚠️ Request too slow for real-time processing")
}
Common causes & fixes:
| Cause | Fix | Time Saved |
| --- | --- | --- |
| `maximumHandCount` = 10 | Set to actual need (e.g., 2) | 50-70% |
| Processing every frame | Skip frames (process every 3rd) | 66% |
| Full-res images | Downscale to 1280x720 | 40-60% |
| Multiple requests per frame | Batch or alternate requests | 30-50% |
Fix for real-time camera:
swift
// Skip frames
frameCount += 1
guard frameCount % 3 == 0 else { return }

// OR downscale
let scaledImage = resizeImage(sourceImage, to: CGSize(width: 1280, height: 720))

// OR set lower hand count
request.maximumHandCount = 2  // Instead of default
Time to fix: 1 hour
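`resizeImage` is another helper this guide assumes. A Core Graphics sketch that redraws the source into a smaller bitmap context (vImage or Core Image would also work):

```swift
import CoreGraphics

/// Downscale a CGImage by drawing it into a smaller bitmap context.
func resizeImage(_ image: CGImage, to size: CGSize) -> CGImage? {
    guard let context = CGContext(
        data: nil,
        width: Int(size.width),
        height: Int(size.height),
        bitsPerComponent: 8,
        bytesPerRow: 0,  // Let Core Graphics pick the row stride
        space: CGColorSpaceCreateDeviceRGB(),
        bitmapInfo: CGImageAlphaInfo.premultipliedLast.rawValue
    ) else { return nil }

    context.interpolationQuality = .high
    context.draw(image, in: CGRect(origin: .zero, size: size))
    return context.makeImage()
}
```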

Pattern 6: Coordinate Conversion

Symptom: UI overlays appear in wrong position
Diagnostic:
swift
// Vision point (lower-left origin, normalized)
let visionPoint = recognizedPoint.location
print("Vision point: \(visionPoint)")  // e.g., (0.5, 0.8)

// Convert to UIKit
let uiX = visionPoint.x * imageWidth
let uiY = (1 - visionPoint.y) * imageHeight  // FLIP Y
print("UIKit point: (\(uiX), \(uiY))")

// Verify overlay
overlayView.center = CGPoint(x: uiX, y: uiY)
Common mistakes:
swift
// ❌ WRONG (no Y flip)
let uiPoint = CGPoint(
    x: visionPoint.x * width,
    y: visionPoint.y * height
)

// ❌ WRONG (forgot to scale from normalized)
let uiPoint = CGPoint(
    x: visionPoint.x,
    y: 1 - visionPoint.y
)

// ✅ CORRECT
let uiPoint = CGPoint(
    x: visionPoint.x * width,
    y: (1 - visionPoint.y) * height
)
Time to fix: 20 min
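The conversion is easy to get wrong inline, so it can be wrapped in a small helper (the function name is ours, not a Vision API) so the Y flip can't be forgotten:

```swift
import Foundation

/// Convert a normalized Vision point (lower-left origin) to a point
/// in top-left-origin coordinates for an image of the given size.
func uikitPoint(fromVisionPoint p: CGPoint, imageSize: CGSize) -> CGPoint {
    CGPoint(x: p.x * imageSize.width,
            y: (1 - p.y) * imageSize.height)  // Flip Y and scale
}
```

For example, a Vision point of (0.5, 0.8) in a 100x200 image lands at (50, 40).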

Pattern 7: Crowded Scenes (>4 People)

Symptom: `VNGeneratePersonInstanceMaskRequest` misses people or combines them
Diagnostic:
swift
// Count faces
let faceRequest = VNDetectFaceRectanglesRequest()
try handler.perform([faceRequest])

let faceCount = faceRequest.results?.count ?? 0
print("Detected \(faceCount) faces")

// Person instance segmentation
let personRequest = VNGeneratePersonInstanceMaskRequest()
try handler.perform([personRequest])

let personCount = (personRequest.results?.first as? VNInstanceMaskObservation)?.allInstances.count ?? 0
print("Detected \(personCount) people")

if faceCount > 4 && personCount <= 4 {
    print("⚠️ Crowded scene - some people combined or missing")
}
Fix:
swift
if faceCount > 4 {
    // Fallback: Use single mask for all people
    let singleMaskRequest = VNGeneratePersonSegmentationRequest()
    try handler.perform([singleMaskRequest])

    // OR guide user
    showWarning("Please reduce number of people in frame (max 4)")
}
Time to fix: 30 min

Pattern 8: VisionKit Specific Issues

Symptom: `ImageAnalysisInteraction` not showing the subject-lifting UI
Diagnostic:
swift
// 1. Check interaction types
print("Interaction types: \(interaction.preferredInteractionTypes)")

// 2. Check if analysis is set
print("Analysis: \(interaction.analysis != nil ? "set" : "nil")")

// 3. Check if view supports interaction
if let view = interaction.view {
    print("View: \(view)")
} else {
    print("❌ View not set")
}
Common causes:
| Symptom | Cause | Fix |
| --- | --- | --- |
| No UI appears | `analysis` not set | Call `analyzer.analyze()` and set result |
| UI appears but no subject lifting | Wrong interaction type | Set `.imageSubject` or `.automatic` |
| Crash on interaction | View removed before interaction | Keep view in memory |
Fix:
swift
// Ensure analysis is set
let analyzer = ImageAnalyzer()
let analysis = try await analyzer.analyze(image, configuration: config)

interaction.analysis = analysis  // Required!
interaction.preferredInteractionTypes = .imageSubject
Time to fix: 20 min

Pattern 9a: Text Not Detected (Image Quality)

Symptom: `VNRecognizeTextRequest` returns no results or empty strings
Diagnostic:
swift
let request = VNRecognizeTextRequest()
request.recognitionLevel = .accurate

try handler.perform([request])

if request.results?.isEmpty ?? true {
    print("❌ No text detected")

    // Check image quality
    print("Image size: \(image.size)")
    print("Minimum text height: \(request.minimumTextHeight)")
}

for obs in request.results as? [VNRecognizedTextObservation] ?? [] {
    let top = obs.topCandidates(3)
    for candidate in top {
        print("'\(candidate.string)' confidence: \(candidate.confidence)")
    }
}
Common causes:
| Cause | Symptom | Fix |
| --- | --- | --- |
| Blurry image | No results | Improve lighting, stabilize camera |
| Text too small | No results | Lower `minimumTextHeight` or crop closer |
| Stylized font | Misread or no results | Try `.accurate` recognition level |
| Low contrast | Partial results | Improve lighting, increase image contrast |
| Rotated text | No results with `.fast` | Use `.accurate` (handles rotation) |
Fix for small text:
swift
// Lower minimum text height (default ignores very small text)
request.minimumTextHeight = 0.02  // 2% of image height
Time to fix: 30 min

Pattern 9b: Wrong Characters (Language/Correction)

Symptom: Text is detected but characters are wrong (e.g., "C001" → "COOL")
Diagnostic:
swift
// Check all candidates, not just first
for observation in results {
    let candidates = observation.topCandidates(5)
    for (i, candidate) in candidates.enumerated() {
        print("Candidate \(i): '\(candidate.string)' (\(candidate.confidence))")
    }
}
Common causes:
| Input Type | Problem | Fix |
| --- | --- | --- |
| Serial numbers | Language correction "fixes" them | Disable `usesLanguageCorrection` |
| Technical codes | Misread as words | Add to `customWords` |
| Non-English | Wrong ML model | Set correct `recognitionLanguages` |
| House numbers | Stylized → misread | Check all candidates, not just top |
Fix for codes/serial numbers:
swift
let request = VNRecognizeTextRequest()
request.usesLanguageCorrection = false  // Don't "fix" codes

// Post-process with domain knowledge
func correctSerialNumber(_ text: String) -> String {
    text.replacingOccurrences(of: "O", with: "0")
        .replacingOccurrences(of: "l", with: "1")
        .replacingOccurrences(of: "S", with: "5")
}
Time to fix: 30 min

Pattern 9c: Text Recognition Too Slow

Symptom: Text recognition takes >500ms, real-time camera drops frames
Diagnostic:
swift
let start = CFAbsoluteTimeGetCurrent()
try handler.perform([request])
let elapsed = CFAbsoluteTimeGetCurrent() - start

print("Recognition took \(elapsed * 1000)ms")
print("Recognition level: \(request.recognitionLevel == .fast ? "fast" : "accurate")")
print("Language correction: \(request.usesLanguageCorrection)")
Common causes & fixes:
| Cause | Fix | Speedup |
| --- | --- | --- |
| Using `.accurate` for real-time | Switch to `.fast` | 3-5x |
| Language correction enabled | Disable for codes | 20-30% |
| Full image processing | Use `regionOfInterest` | 2-4x |
| Processing every frame | Skip frames | 50-70% |
Fix for real-time:
swift
request.recognitionLevel = .fast
request.usesLanguageCorrection = false
request.regionOfInterest = CGRect(x: 0.1, y: 0.3, width: 0.8, height: 0.4)

// Skip frames
frameCount += 1
guard frameCount % 3 == 0 else { return }
Time to fix: 30 min
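The `frameCount` skip pattern used in several fixes can be packaged as a tiny gate so the counter isn't scattered across the capture delegate (a sketch; the type name is ours):

```swift
/// Passes every `stride`-th call; use one instance per camera feed.
struct FrameGate {
    private(set) var frameCount = 0
    let stride: Int

    /// Increments the counter and returns true for every stride-th frame.
    mutating func shouldProcess() -> Bool {
        frameCount += 1
        return frameCount % stride == 0
    }
}
```

In a capture callback: `guard gate.shouldProcess() else { return }` replaces the manual counter.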

Pattern 10a: Barcode Not Detected (Symbology/Size)

Symptom: `VNDetectBarcodesRequest` returns no results
Diagnostic:
swift
let request = VNDetectBarcodesRequest()
// Don't specify symbologies to detect all types
try handler.perform([request])

if let results = request.results as? [VNBarcodeObservation] {
    print("Found \(results.count) barcodes")
    for barcode in results {
        print("Type: \(barcode.symbology)")
        print("Payload: \(barcode.payloadStringValue ?? "nil")")
        print("Bounds: \(barcode.boundingBox)")
    }
} else {
    print("❌ No barcodes detected")
}
Common causes:
| Cause | Symptom | Fix |
| --- | --- | --- |
| Wrong symbology | Not detected | Don't filter, or add correct type |
| Barcode too small | Not detected | Move camera closer, crop image |
| Glare/reflection | Not detected | Change angle, improve lighting |
| Damaged barcode | Partial/no detection | Clean barcode, improve image |
| Using revision 1 | Only one code | Use revision 2+ for multiple |
Fix for small barcodes:
swift
// Crop to barcode region for better detection
let croppedHandler = VNImageRequestHandler(
    cgImage: croppedImage,
    options: [:]
)
Time to fix: 20 min
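If you do filter symbologies, restrict them explicitly and opt into the newer request revision; a minimal sketch (the symbology list here is illustrative, not from the original):

```swift
import Vision

let request = VNDetectBarcodesRequest()
// Only list the symbologies you actually expect; leaving this unset scans all types
request.symbologies = [.qr, .ean13, .code128]
// Revision 2 (iOS 15+) can return multiple codes from a single image
if #available(iOS 15.0, *) {
    request.revision = VNDetectBarcodesRequestRevision2
}
```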
症状:`VNDetectBarcodesRequest` 无结果返回
诊断方法:
swift
let request = VNDetectBarcodesRequest()
// Don't specify symbologies to detect all types
try handler.perform([request])

if let results = request.results as? [VNBarcodeObservation] {
    print("Found \(results.count) barcodes")
    for barcode in results {
        print("Type: \(barcode.symbology)")
        print("Payload: \(barcode.payloadStringValue ?? "nil")")
        print("Bounds: \(barcode.boundingBox)")
    }
} else {
    print("❌ No barcodes detected")
}
常见原因:
| 原因 | 症状 | 修复方案 |
|---|---|---|
| 符号体系错误 | 未检测到 | 不做过滤,或添加正确的类型 |
| 条码过小 | 未检测到 | 靠近摄像头、裁剪图像 |
| 反光/眩光 | 未检测到 | 调整角度、改善光线 |
| 条码损坏 | 部分或无检测结果 | 清洁条码、改善图像质量 |
| 使用版本1 | 仅能检测一个条码 | 使用版本2+以支持多个条码 |
小条码的修复方案:
swift
// Crop to barcode region for better detection
let croppedHandler = VNImageRequestHandler(
    cgImage: croppedImage,
    options: [:]
)
修复时间:20分钟

Pattern 10b: Wrong Barcode Payload

模式10b:条形码载荷内容错误

Symptom: Barcode detected but `payloadStringValue` is wrong or nil
Diagnostic:
swift
if let barcode = results.first {
    print("String payload: \(barcode.payloadStringValue ?? "nil")")
    print("Raw payload: \(barcode.payloadData ?? Data())")
    print("Symbology: \(barcode.symbology)")
    print("Confidence: Implicit (always 1.0 for barcodes)")
}
Common causes:
| Cause | Fix |
|---|---|
| Binary barcode (not string) | Use `payloadData` instead |
| Damaged code | Re-scan or clean barcode |
| Wrong symbology assumed | Check actual `symbology` value |
Time to fix: 15 min
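For the binary-payload case, a hedged sketch of decoding `payloadData` (available iOS 17+; `barcode` is the observation from the diagnostic above):

```swift
import Vision

if #available(iOS 17.0, *), let data = barcode.payloadData {
    if let text = String(data: data, encoding: .utf8) {
        print("Decoded UTF-8 payload: \(text)")
    } else {
        // Truly binary payload: inspect the raw bytes instead
        let hex = data.map { String(format: "%02x", $0) }.joined()
        print("Binary payload (hex): \(hex)")
    }
} else {
    // Pre-iOS 17: fall back to the string value
    print(barcode.payloadStringValue ?? "nil")
}
```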
症状:已检测到条形码但 `payloadStringValue` 错误或为空
诊断方法:
swift
if let barcode = results.first {
    print("String payload: \(barcode.payloadStringValue ?? "nil")")
    print("Raw payload: \(barcode.payloadData ?? Data())")
    print("Symbology: \(barcode.symbology)")
    print("Confidence: Implicit (always 1.0 for barcodes)")
}
常见原因:
| 原因 | 修复方案 |
|---|---|
| 二进制条码(非字符串类型) | 使用 `payloadData` 替代 |
| 条码损坏 | 重新扫描或清洁条码 |
| 假设的符号体系错误 | 检查实际的 `symbology` 值 |
修复时间:15分钟

Pattern 11a: DataScanner Blank Screen

模式11a:DataScanner黑屏

Symptom: `DataScannerViewController` shows black/blank when presented
Diagnostic:
swift
// Check support first
print("isSupported: \(DataScannerViewController.isSupported)")
print("isAvailable: \(DataScannerViewController.isAvailable)")

// Check camera permission
let status = AVCaptureDevice.authorizationStatus(for: .video)
print("Camera access: \(status.rawValue)")
Common causes:
| Symptom | Cause | Fix |
|---|---|---|
| `isSupported = false` | Device lacks camera/chip | Check before presenting |
| `isAvailable = false` | Parental controls or access denied | Request camera permission |
| Black screen | Camera in use by another app | Ensure exclusive access |
| Crash on present | Missing `NSCameraUsageDescription` | Add camera usage description to Info.plist |
Fix:
swift
guard DataScannerViewController.isSupported else {
    showError("Scanning not supported on this device")
    return
}

guard DataScannerViewController.isAvailable else {
    // Request camera access
    AVCaptureDevice.requestAccess(for: .video) { granted in
        // Retry after access granted
    }
    return
}
Time to fix: 15 min
症状:`DataScannerViewController` 展示时显示黑屏/空白
诊断方法:
swift
// Check support first
print("isSupported: \(DataScannerViewController.isSupported)")
print("isAvailable: \(DataScannerViewController.isAvailable)")

// Check camera permission
let status = AVCaptureDevice.authorizationStatus(for: .video)
print("Camera access: \(status.rawValue)")
常见原因:
| 症状 | 原因 | 修复方案 |
|---|---|---|
| `isSupported = false` | 设备缺少摄像头/芯片 | 展示前先检查 |
| `isAvailable = false` | 家长控制或权限被拒绝 | 请求摄像头权限 |
| 黑屏 | 摄像头被其他应用占用 | 确保独占访问权限 |
| 展示时崩溃 | 缺少 `NSCameraUsageDescription` | 在Info.plist中添加摄像头使用描述 |
修复方案:
swift
guard DataScannerViewController.isSupported else {
    showError("Scanning not supported on this device")
    return
}

guard DataScannerViewController.isAvailable else {
    // Request camera access
    AVCaptureDevice.requestAccess(for: .video) { granted in
        // Retry after access granted
    }
    return
}
修复时间:15分钟

Pattern 11b: DataScanner Items Not Detected

模式11b:DataScanner未检测到物品

Symptom: DataScanner shows camera but doesn't recognize items
Diagnostic:
swift
// Check recognized data types
print("Data types: \(scanner.recognizedDataTypes)")

// Add delegate to see what's happening
func dataScanner(_ scanner: DataScannerViewController,
                 didAdd items: [RecognizedItem],
                 allItems: [RecognizedItem]) {
    print("Added \(items.count) items, total: \(allItems.count)")
    for item in items {
        switch item {
        case .text(let text): print("Text: \(text.transcript)")
        case .barcode(let barcode): print("Barcode: \(barcode.payloadStringValue ?? "")")
        @unknown default: break
        }
    }
}
Common causes:
| Cause | Fix |
|---|---|
| Wrong data types | Add correct `.barcode(symbologies:)` or `.text()` |
| Text content type filter | Remove filter or use correct type |
| Camera too close/far | Adjust distance |
| Poor lighting | Improve lighting |
Time to fix: 20 min
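A minimal construction sketch with explicit data types (the symbologies and options chosen here are illustrative):

```swift
import VisionKit

// The scanner only recognizes what you list here; an empty or
// mismatched set is the most common reason nothing is detected.
let scanner = DataScannerViewController(
    recognizedDataTypes: [
        .barcode(symbologies: [.qr, .ean13]),
        .text()
    ],
    qualityLevel: .balanced,
    isHighlightingEnabled: true
)
scanner.delegate = self  // conform to DataScannerViewControllerDelegate
try? scanner.startScanning()
```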
症状:DataScanner显示摄像头画面但未识别到物品
诊断方法:
swift
// Check recognized data types
print("Data types: \(scanner.recognizedDataTypes)")

// Add delegate to see what's happening
func dataScanner(_ scanner: DataScannerViewController,
                 didAdd items: [RecognizedItem],
                 allItems: [RecognizedItem]) {
    print("Added \(items.count) items, total: \(allItems.count)")
    for item in items {
        switch item {
        case .text(let text): print("Text: \(text.transcript)")
        case .barcode(let barcode): print("Barcode: \(barcode.payloadStringValue ?? "")")
        @unknown default: break
        }
    }
}
常见原因:
| 原因 | 修复方案 |
|---|---|
| 数据类型错误 | 添加正确的 `.barcode(symbologies:)` 或 `.text()` |
| 文本内容类型过滤 | 移除过滤器或使用正确的类型 |
| 摄像头过近/过远 | 调整距离 |
| 光线不佳 | 改善光线 |
修复时间:20分钟

Pattern 12a: Document Edges Not Detected

模式12a:未检测到文档边缘

Symptom: `VNDetectDocumentSegmentationRequest` returns no results
Diagnostic:
swift
let request = VNDetectDocumentSegmentationRequest()
try handler.perform([request])

if let observation = request.results?.first {
    print("Document found at: \(observation.boundingBox)")
    print("Corners: TL=\(observation.topLeft), TR=\(observation.topRight)")
} else {
    print("❌ No document detected")
}
Common causes:
| Cause | Fix |
|---|---|
| Low contrast | Use contrasting background |
| Non-rectangular | ML expects rectangular documents |
| Glare/reflection | Change lighting angle |
| Document fills frame | Need some background visible |
Fix: Use VNDocumentCameraViewController for guided user experience with live feedback.
Time to fix: 15 min
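Presenting the guided scanner is a few lines; a sketch assuming a UIViewController that conforms to `VNDocumentCameraViewControllerDelegate`:

```swift
import VisionKit

// VNDocumentCameraViewController handles edge detection, live guidance,
// and perspective correction for you.
let camera = VNDocumentCameraViewController()
camera.delegate = self
present(camera, animated: true)
```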
症状:`VNDetectDocumentSegmentationRequest` 无结果返回
诊断方法:
swift
let request = VNDetectDocumentSegmentationRequest()
try handler.perform([request])

if let observation = request.results?.first {
    print("Document found at: \(observation.boundingBox)")
    print("Corners: TL=\(observation.topLeft), TR=\(observation.topRight)")
} else {
    print("❌ No document detected")
}
常见原因:
| 原因 | 修复方案 |
|---|---|
| 对比度低 | 使用对比鲜明的背景 |
| 非矩形文档 | ML模型仅支持矩形文档 |
| 反光/眩光 | 调整光线角度 |
| 文档充满整个画面 | 需要保留部分背景可见 |
修复方案:使用VNDocumentCameraViewController提供带实时反馈的引导式用户体验。
修复时间:15分钟

Pattern 12b: Perspective Correction Wrong

模式12b:透视校正错误

Symptom: Document extracted but distorted
Diagnostic:
swift
// Verify corner order
print("TopLeft: \(observation.topLeft)")
print("TopRight: \(observation.topRight)")
print("BottomLeft: \(observation.bottomLeft)")
print("BottomRight: \(observation.bottomRight)")

// Check if corners are in expected positions
// TopLeft should have larger Y than BottomLeft (Vision uses lower-left origin)
Common causes:
| Cause | Fix |
|---|---|
| Corner order wrong | Vision uses counterclockwise from top-left |
| Coordinate system | Convert normalized to pixel coordinates |
| Filter parameters wrong | Check CIPerspectiveCorrection parameters |
Fix:
swift
// Scale normalized to image coordinates
func scaled(_ point: CGPoint, to size: CGSize) -> CGPoint {
    CGPoint(x: point.x * size.width, y: point.y * size.height)
}
Time to fix: 20 min
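Putting the scaled corners through Core Image; a sketch assuming `observation` from the diagnostic above and a source `ciImage`:

```swift
import CoreImage

// CIPerspectiveCorrection expects pixel-space corner points, so scale
// Vision's normalized corners first with the scaled(_:to:) helper.
let size = ciImage.extent.size
let filter = CIFilter(name: "CIPerspectiveCorrection")!
filter.setValue(ciImage, forKey: kCIInputImageKey)
filter.setValue(CIVector(cgPoint: scaled(observation.topLeft, to: size)), forKey: "inputTopLeft")
filter.setValue(CIVector(cgPoint: scaled(observation.topRight, to: size)), forKey: "inputTopRight")
filter.setValue(CIVector(cgPoint: scaled(observation.bottomLeft, to: size)), forKey: "inputBottomLeft")
filter.setValue(CIVector(cgPoint: scaled(observation.bottomRight, to: size)), forKey: "inputBottomRight")
let corrected = filter.outputImage  // deskewed document, or nil on failure
```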
症状:提取的文档存在变形
诊断方法:
swift
// Verify corner order
print("TopLeft: \(observation.topLeft)")
print("TopRight: \(observation.topRight)")
print("BottomLeft: \(observation.bottomLeft)")
print("BottomRight: \(observation.bottomRight)")

// Check if corners are in expected positions
// TopLeft should have larger Y than BottomLeft (Vision uses lower-left origin)
常见原因:
| 原因 | 修复方案 |
|---|---|
| 角点顺序错误 | Vision使用从左上角开始的逆时针顺序 |
| 坐标系统问题 | 将归一化坐标转换为像素坐标 |
| 滤镜参数错误 | 检查CIPerspectiveCorrection参数 |
修复方案:
swift
// Scale normalized to image coordinates
func scaled(_ point: CGPoint, to size: CGSize) -> CGPoint {
    CGPoint(x: point.x * size.width, y: point.y * size.height)
}
修复时间:20分钟

Production Crisis Scenario

生产环境紧急场景

Situation: App Store review rejected for "app freezes when tapping analyze button"
Triage (5 min):
  1. Confirm Vision running on main thread → Pattern 5a
  2. Verify on older device (iPhone 12) → Freezes
  3. Check profiling: 800ms on main thread
Fix (15 min):
swift
@IBAction func analyzeTapped(_ sender: UIButton) {
    showLoadingIndicator()

    DispatchQueue.global(qos: .userInitiated).async { [weak self] in
        let request = VNGenerateForegroundInstanceMaskRequest()
        // ... perform request

        DispatchQueue.main.async {
            self?.hideLoadingIndicator()
            self?.updateUI(with: results)
        }
    }
}
Communicate to PM: "App Store rejection due to Vision processing on main thread. Fixed by moving to background queue (industry standard). Testing on iPhone 12 confirms fix. Safe to resubmit."
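The same fix can be written with structured concurrency if the project targets iOS 15+; a sketch reusing the names from the example above:

```swift
@IBAction func analyzeTapped(_ sender: UIButton) {
    showLoadingIndicator()

    Task {
        // Heavy Vision work runs off the main actor in a detached task
        let results = await Task.detached(priority: .userInitiated) {
            let request = VNGenerateForegroundInstanceMaskRequest()
            // ... perform request as before
            return request.results
        }.value

        // Task { } inherits the main actor here, so UI updates are safe
        hideLoadingIndicator()
        updateUI(with: results)
    }
}
```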
场景:App Store审核因"点击分析按钮时应用冻结"被拒绝
快速排查(5分钟):
  1. 确认Vision在主线程运行 → 模式5a
  2. 在旧设备(iPhone 12)上验证 → 确实冻结
  3. 性能分析:主线程耗时800ms
修复方案(15分钟):
swift
@IBAction func analyzeTapped(_ sender: UIButton) {
    showLoadingIndicator()

    DispatchQueue.global(qos: .userInitiated).async { [weak self] in
        let request = VNGenerateForegroundInstanceMaskRequest()
        // ... perform request

        DispatchQueue.main.async {
            self?.hideLoadingIndicator()
            self?.updateUI(with: results)
        }
    }
}
向产品经理汇报: "App Store审核被拒原因是Vision处理任务在主线程运行。已通过迁移至后台队列修复(行业标准做法)。在iPhone 12上测试确认修复有效,可重新提交审核。"

Quick Reference Table

快速参考表

| Symptom | Likely Cause | First Check | Pattern | Est. Time |
|---|---|---|---|---|
| No results | Nothing detected | Step 1 output | 1b/1c | 30 min |
| Intermittent detection | Edge of frame | Subject position | 1c | 20 min |
| Hand missing landmarks | Low confidence | Step 2 (confidence) | 2 | 45 min |
| Body pose skipped | Person bent over | Body angle | 3 | 1 hour |
| UI freezes | Main thread | Step 3 (threading) | 5a | 15 min |
| Slow processing | Performance tuning | Request timing | 5b | 1 hour |
| Wrong overlay position | Coordinates | Print points | 6 | 20 min |
| Missing people (>4) | Crowded scene | Face count | 7 | 30 min |
| VisionKit no UI | Analysis not set | Interaction state | 8 | 20 min |
| Text not detected | Image quality | Results count | 9a | 30 min |
| Wrong characters | Language settings | Candidates list | 9b | 30 min |
| Text recognition slow | Recognition level | Timing | 9c | 30 min |
| Barcode not detected | Symbology/size | Results dump | 10a | 20 min |
| Wrong barcode payload | Damaged/binary | Payload data | 10b | 15 min |
| DataScanner blank | Availability | isSupported/isAvailable | 11a | 15 min |
| DataScanner no items | Data types | recognizedDataTypes | 11b | 20 min |
| Document edges missing | Contrast/shape | Results check | 12a | 15 min |
| Perspective wrong | Corner order | Corner positions | 12b | 20 min |
| 症状 | 可能原因 | 首要检查项 | 对应模式 | 预计修复时间 |
|---|---|---|---|---|
| 无结果返回 | 未检测到任何内容 | 步骤1输出 | 1b/1c | 30分钟 |
| 检测结果间歇性出现 | 主体在画面边缘 | 主体位置 | 1c | 20分钟 |
| 手部关键点缺失 | 置信度低 | 步骤2(置信度) | 2 | 45分钟 |
| 人体姿态帧被跳过 | 人物弯腰 | 人体角度 | 3 | 1小时 |
| UI冻结 | 主线程运行 | 步骤3(线程处理) | 5a | 15分钟 |
| 处理过慢 | 性能调优 | 请求耗时 | 5b | 1小时 |
| 覆盖层位置错误 | 坐标问题 | 打印坐标点 | 6 | 20分钟 |
| 遗漏人物(超过4人) | 拥挤场景 | 面部数量 | 7 | 30分钟 |
| VisionKit无UI显示 | Analysis未设置 | 交互状态 | 8 | 20分钟 |
| 未检测到文本 | 图像质量 | 结果数量 | 9a | 30分钟 |
| 字符识别错误 | 语言设置 | 候选结果列表 | 9b | 30分钟 |
| 文本识别过慢 | 识别级别 | 耗时统计 | 9c | 30分钟 |
| 未检测到条形码 | 符号体系/尺寸 | 结果详情 | 10a | 20分钟 |
| 条形码载荷错误 | 损坏/二进制 | 载荷数据 | 10b | 15分钟 |
| DataScanner黑屏 | 兼容性 | isSupported/isAvailable | 11a | 15分钟 |
| DataScanner未检测到物品 | 数据类型 | recognizedDataTypes | 11b | 20分钟 |
| 未检测到文档边缘 | 对比度/形状 | 结果检查 | 12a | 15分钟 |
| 透视效果错误 | 角点顺序 | 角点位置 | 12b | 20分钟 |

Resources

参考资源

WWDC: 2019-234, 2021-10041, 2022-10024, 2022-10025, 2025-272, 2023-10176, 2020-10653
Docs: /vision, /vision/vnrecognizetextrequest, /visionkit
Skills: axiom-vision, axiom-vision-ref
WWDC:2019-234, 2021-10041, 2022-10024, 2022-10025, 2025-272, 2023-10176, 2020-10653
文档:/vision, /vision/vnrecognizetextrequest, /visionkit
技能:axiom-vision, axiom-vision-ref