# Deepfake Detection & Media Authentication
Comprehensive framework for detecting synthetic media, analyzing manipulation artifacts, and establishing media provenance in the post-empirical era.
Key Insight: Traditional detection methods (PRNU, IGH, DQ) are like fingerprints—helpful, but disputable. Cryptographic provenance (C2PA) is like a DNA match—mathematically certain (collision probability 2⁻²⁵⁶).
## When to Use
- Verifying authenticity of images or videos before publication
- Detecting AI-generated or manipulated media (deepfakes, face swaps, synthetic voices)
- Forensic analysis of suspicious media for legal or journalistic purposes
- Implementing automated media authentication pipelines
- Establishing content provenance and chain of custody
- Countering disinformation campaigns and Advanced Persistent Manipulators (APMs)
## Related Skills
- security-audit - Security assessment patterns
- security-incident-reporting - Incident documentation for disinformation attacks
- enterprise-readiness - Infrastructure for automated verification pipelines
- cli-tools - Auto-installation of required tools
## 1. What Are Deepfakes?

### Definition
Deepfakes are synthetic media created using deep learning techniques—primarily Generative Adversarial Networks (GANs), Diffusion Models, and Autoencoders—to generate or manipulate audiovisual content with a high degree of realism. The term combines "deep learning" and "fake."
### Types of Synthetic Media
| Type | Technology | Description |
|---|---|---|
| Face Swap | Autoencoders, GANs | Replace one person's face with another in video |
| Face Reenactment | 3D Morphable Models | Animate a face with another person's expressions |
| Voice Clone | Text-to-Speech, Vocoder | Generate speech in someone's voice from text [20] |
| Lip Sync | Audio-to-Video | Make someone appear to say different words |
| Full Body Puppetry | Pose Estimation | Control a person's body movements |
| Fully Synthetic | Diffusion, GANs | Generate non-existent people, scenes, events |
### Emerging Capabilities (2025-2026)
| Type | Advancement | Implication |
|---|---|---|
| Face Swap | One-shot swapping (single reference image), GHOST 2.0 [24], DynamicFace [25] | Minimal source material needed |
| Face Reenactment | Audio-driven animation, Neural Head Reenactment | Fully synthetic video calls |
| Voice Clone | Zero-shot cloning (no training on target), Emotional Voice Synthesis | Clone any voice instantly with emotion |
| Lip Sync | High-fidelity with Diffusion Models, Multilingual sync | Automatic dubbing across languages |
| Full Body Puppetry | 3D-aware motion transfer, Neural Body Avatars | Photorealistic real-time control |
| Fully Synthetic | Video Diffusion Models, Controllable Generation | Precise control over age, expression, gaze |
### The Entertaining Side
Deepfakes have legitimate and creative applications:
| Use Case | Example | Value |
|---|---|---|
| Entertainment | De-aging actors in films, posthumous performances | Artistic expression |
| Satire & Parody | Political satire, comedy sketches | Free speech, humor |
| Education | Historical figures "speaking" in documentaries | Engagement, learning |
| Accessibility | Real-time sign language avatars | Inclusion |
| Gaming & VR | Personalized avatars, NPC faces | Immersion |
| Art & Expression | Digital art, creative projects | Innovation |
Example: The "This Person Does Not Exist" website showcases GAN-generated faces that fascinate users with the uncanny realism of non-existent people.
### The Dangerous Side
The same technology enables serious harms:
| Threat | Description | Impact |
|---|---|---|
| Non-Consensual Imagery | Synthetic intimate content without consent | Psychological harm, harassment, reputation destruction |
| Political Manipulation | Fabricated speeches, fake scandals | Election interference, democratic erosion |
| Financial Fraud | CEO voice clones for wire transfer scams | Millions in losses per incident |
| Evidence Fabrication | Fake alibis, planted evidence | Obstruction of justice |
| Liar's Dividend | Dismissing real evidence as "deepfake" | Accountability evasion |
| Identity Theft | Bypassing facial recognition, KYC | Account takeover, fraud |
| Disinformation Warfare | State-sponsored synthetic media campaigns | Geopolitical destabilization |
### Current Scale (2025-2026)
| Metric | Value | Source |
|---|---|---|
| Deepfakes shared annually | 8 million (2025) vs 500,000 (2023) | Industry estimates |
| Projected synthetic content | 90% of online content by 2026 | Europol |
| Non-consensual intimate imagery (NCII) | 98% of all deepfakes | EU Commission |
Key Insight: The exponential growth rate means detection systems face an ever-increasing volume challenge, reinforcing the need for proactive authentication (C2PA) over reactive detection.
### The Future of Deepfakes
| Timeline | Development | Implication |
|---|---|---|
| Now (2026) | Real-time video deepfakes, commoditized tools | Anyone can create convincing fakes |
| Near Future | Interactive deepfakes in video calls | Trust in live communication erodes |
| Medium Term | Undetectable synthetic media | Detection becomes probabilistic, not binary |
| Long Term | "Reality-as-a-Service" | Authenticated media becomes the norm, unsigned content is suspect |
### The Detection Arms Race
Recent research confirms the growing challenge of detection generalizability [1]:
```
Generation Quality:  ████████████████████░░░░  85% (2026)
Detection Accuracy:  █████████████░░░░░░░░░░░  55% (2026)
                     ↑ Gap widening over time
```

Key Insight: We are transitioning from a world where "seeing is believing" to one where "cryptographic proof is believing." The future lies not in perfect detection, but in provenance infrastructure (C2PA v2.3) that proves authenticity at creation [15, 16].
## 2. Strategic Context: The Post-Empirical Era

### The Crisis of Empirical Evidence (2026)
The boundary between authentic and synthetic media has effectively vanished. Trillion-parameter models have commoditized the generation of photorealistic synthetic content, transforming deepfakes from isolated experiments into an industrialized disinformation capability.
### The ABC Framework of Synthetic Media Threats
| Category | Description | Examples |
|---|---|---|
| A - Actors | Malicious generators of synthetic content | Nation-states, APMs (Advanced Persistent Manipulators), commercial disinformation services |
| B - Behavior | Deceptive patterns and tactics | Astroturfing with synthetic identities, coordinated inauthentic behavior |
| C - Content | The synthetic media itself | Deepfake videos, voice clones, GAN-generated faces, manipulated images |
### The 4D Disinformation Tactics
| Tactic | Description | Forensic Counter |
|---|---|---|
| Dismiss | Claim real evidence is fake ("Liar's Dividend") | Provenance verification, cryptographic attestation |
| Distort | Reframe authentic events with synthetic fragments | Semantic consistency analysis |
| Distract | Flood with synthetic noise to obscure truth | Scale-resistant automated detection |
| Dismay | Psychological operations through synthetic threats | Confidence scoring, sensemaking support |
## 3. System Architecture

### LLM Integration Strategy
The skill implements a hierarchical model structure for forensic analysis:
| Role | Model | Version | Function |
|---|---|---|---|
| Lead | Claude Opus | 4.5 | Complex synthesis of forensic data, multimodal analysis, report generation |
| Validation | Gemini Pro | 3.0 | Cross-validation of detection results, second opinion on edge cases |
| Reasoning | GLM Pro Thinking | 4.7 | Logical verification of causal chains, step-by-step reasoning for forensic conclusions |
### Model Selection Rationale
- Claude Opus 4.5: Best-in-class for nuanced multimodal analysis and synthesizing complex forensic evidence into coherent reports
- Gemini Pro 3.0: Strong visual understanding for cross-validating image/video analysis results
- GLM Pro Thinking 4.7: Chain-of-thought reasoning for transparent forensic logic that can be audited
### Architecture Requirements
- Asynchronous Processing Pipeline: Handle high token counts from multimodal analysis
- Vector Database for CRF Profiles: Store and query Camera Response Function signatures
- RAG Integration: Access forensic reference databases during inference
- Tool Integration: ffmpeg, ExifTool, ImageMagick for low-level signal processing
## 4. Required Tools & Installation

### Tool Overview
| Tool | Purpose | Required |
|---|---|---|
| ffmpeg | Video processing, frame extraction, audio isolation | Yes |
| ffprobe | Metadata extraction, container analysis | Yes (bundled with ffmpeg) |
| ExifTool | Deep metadata extraction, EXIF/XMP/IPTC analysis | Yes |
| ImageMagick | Image processing, format conversion | Recommended |
| jq | JSON processing for metadata analysis | Recommended |
| c2patool | C2PA/CAI provenance verification | Optional |
### Auto-Installation by Agent
When a required tool is missing, the agent will detect this and offer to install it. User approval is required before any installation.
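The detection step behind this flow can be sketched in POSIX shell; the `check` helper and the tool list are illustrative, not part of the skill itself:

```shell
# Report whether each required tool is on PATH
check() {
  if command -v "$1" >/dev/null 2>&1; then
    echo "ok: $1"
  else
    echo "missing: $1"
  fi
}

for tool in ffmpeg exiftool magick jq; do
  check "$tool"
done
```

The agent would then prompt the user only for the packages that land in the "missing" list.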
```
🔧 Tool Missing: ffmpeg

The agent needs 'ffmpeg' for video frame extraction and analysis.
This tool is not currently installed on your system.

Would you like me to install it?

  [macOS]   brew install ffmpeg
  [Ubuntu]  sudo apt install ffmpeg
  [Windows] winget install ffmpeg

⚠️ Approval required: Type 'yes' to proceed or 'no' to skip.
```

### Manual Installation
#### macOS (Homebrew)

```bash
# Install all recommended tools
brew install ffmpeg exiftool imagemagick jq

# Optional: C2PA verification tool
brew install c2patool
```

#### Ubuntu/Debian
```bash
# Install all recommended tools
sudo apt update
sudo apt install ffmpeg libimage-exiftool-perl imagemagick jq

# Optional: C2PA verification tool (from GitHub releases)
curl -L https://github.com/contentauth/c2patool/releases/latest/download/c2patool-linux-x86_64.tar.gz | tar xz
sudo mv c2patool /usr/local/bin/
```

#### Windows (winget)
```powershell
# Install all recommended tools
winget install ffmpeg
winget install exiftool
winget install imagemagick
winget install jqlang.jq

# Optional: C2PA verification tool (from GitHub releases)
# Download from: https://github.com/contentauth/c2patool/releases
```

#### Verification
```bash
# Verify installations
ffmpeg -version
exiftool -ver
magick -version
jq --version
c2patool --version  # if installed
```

### Tool Usage Examples

#### ffmpeg for Feature Extraction
```bash
# Extract I-frames for PRNU analysis
ffmpeg -i input.mp4 -vf "select='eq(pict_type,I)'" -vsync vfr frame_%04d.png

# Analyze inter-frame consistency (temporal artifacts)
ffmpeg -i input.mp4 -vf "mpdecimate,setpts=N/FRAME_RATE/TB" -c:v libx264 dedup.mp4

# Extract metadata for container audit
ffprobe -v quiet -print_format json -show_format -show_streams input.mp4

# Isolate audio stream for voice clone detection
ffmpeg -i input.mp4 -vn -acodec pcm_s16le -ar 44100 audio.wav

# Extract a specific time segment for analysis
ffmpeg -i input.mp4 -ss 00:01:30 -t 00:00:10 -c copy segment.mp4
```

#### ExifTool for Metadata Forensics
```bash
# Extract all metadata
exiftool -json input.jpg | jq .

# Check for editing software traces
exiftool -Software -CreatorTool -HistorySoftwareAgent input.jpg

# Compare metadata between original and suspected fake
diff -y <(exiftool -g1 -a -u original.jpg) <(exiftool -g1 -a -u suspected.jpg)

# Find GPS coordinates (if present)
exiftool -gps:all -c "%.6f" input.jpg

# Check creation/modification times for inconsistencies
exiftool -time:all -G1 input.jpg
```

#### ImageMagick for Image Analysis
```bash
# Analyze image statistics (useful for noise analysis)
magick identify -verbose input.jpg

# Error level analysis (ELA) for manipulation detection
magick input.jpg -quality 95 ela_temp.jpg
magick composite input.jpg ela_temp.jpg -compose difference ela_output.jpg

# Check for resampling artifacts
magick input.jpg -resize 200% -resize 50% resample_test.jpg
```

#### C2PA Tool for Provenance
```bash
# Verify C2PA manifest
c2patool verify input.jpg

# Extract manifest details as JSON
c2patool manifest input.jpg -o manifest.json

# Check certificate chain
c2patool trust input.jpg
```

#### C2PA Test Files for Validation
Official test files from the C2PA organization (CC BY-SA 4.0):
| File | Description | Expected Result |
|---|---|---|
| Signed (Adobe) | Valid Adobe certificate, verified signature | ✅ Chain verified |
| Signed (capture device) | Hardware-signed at capture | ✅ Chain verified |
| Files without credentials | No C2PA manifest | ⚠️ No provenance |
| Tampered files | Modified after signing | ❌ Invalid signature |

Source: c2pa-org/public-testfiles
Understanding C2PA Validation: The chain is verified step-by-step: (1) Certificate verified → (2) Signature valid → (3) Claims unchanged → (4) Image hash matches. One failure breaks the entire chain.
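The step-by-step chain can be sketched in Python. This is a toy model only: the manifest is a plain dict and the certificate and signature checks are boolean stand-ins for real X.509 chain validation and COSE signature verification, not the actual CBOR/JUMBF structures C2PA uses.

```python
import hashlib

def sign(asset: bytes) -> dict:
    """Toy 'signing': record the asset hash in a manifest-like dict."""
    return {
        "cert_chain_valid": True,   # stand-in for X.509 chain validation
        "signature_valid": True,    # stand-in for COSE signature checking
        "claim": {"asset_hash": hashlib.sha256(asset).hexdigest()},
    }

def verify(asset: bytes, manifest: dict) -> str:
    """Each step must pass; one failure breaks the entire chain."""
    if not manifest["cert_chain_valid"]:
        return "invalid: certificate"
    if not manifest["signature_valid"]:
        return "invalid: signature"
    if manifest["claim"]["asset_hash"] != hashlib.sha256(asset).hexdigest():
        return "invalid: asset modified after signing"
    return "chain verified"

original = b"...image bytes..."
manifest = sign(original)
print(verify(original, manifest))         # chain verified
print(verify(original + b"!", manifest))  # invalid: asset modified after signing
```

Note how a single flipped byte changes the SHA-256 digest and breaks the chain, which is the mathematical certainty the 2⁻²⁵⁶ collision probability refers to.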
## 5. Forensic Detection Criteria

### Criterion A: Sensor Fingerprints (PRNU/PCE)
Photo-Response Non-Uniformity (PRNU) is a sensor-specific noise pattern that acts as a biometric fingerprint for cameras.
**Metric: Peak-to-Correlation Energy (PCE)**
```python
import numpy as np

# Conceptual PRNU/PCE calculation
def calculate_pce(image: np.ndarray, reference_prnu: np.ndarray) -> float:
    """
    Calculate Peak-to-Correlation-Energy for PRNU matching.

    Returns:
        PCE value > 60 indicates high confidence match
        PCE value 40-60 indicates moderate confidence
        PCE value < 40 indicates low confidence or mismatch
    """
    noise_residual = extract_noise_residual(image)
    correlation = correlate_2d(noise_residual, reference_prnu)
    peak = np.max(correlation)
    energy = np.mean(correlation**2)
    return peak**2 / energy
```

### PRNU Limitations (Wronski Effect)
Modern computational photography (multi-frame capture, super-resolution) breaks the direct sensor-to-pixel mapping [22, 23]:
| Device Era | PRNU Reliability | Notes |
|---|---|---|
| Pre-2018 DSLRs | High | Direct sensor output |
| 2018-2022 Smartphones | Medium | Some computational processing |
| 2023+ Smartphones | Low | Heavy multi-frame HDR, AI enhancement |
| Synthetic (GAN/Diffusion) | None | No physical sensor involved |
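The behaviour of the PCE metric itself can be demonstrated with plain NumPy. The "fingerprint" and residuals below are synthetic random fields, and the energy term is simplified (it averages the full correlation plane rather than excluding a neighbourhood around the peak, as forensic implementations do):

```python
import numpy as np

rng = np.random.default_rng(0)

def pce(residual: np.ndarray, fingerprint: np.ndarray) -> float:
    """Circular cross-correlation via FFT; score the correlation peak."""
    corr = np.real(np.fft.ifft2(np.fft.fft2(residual) *
                                np.conj(np.fft.fft2(fingerprint))))
    return float(corr.max() ** 2 / np.mean(corr ** 2))

fingerprint = rng.normal(size=(64, 64))            # the camera's PRNU pattern
matching = fingerprint + 0.5 * rng.normal(size=(64, 64))  # same sensor + noise
unrelated = rng.normal(size=(64, 64))              # different source

print(pce(matching, fingerprint))   # large: strong peak at zero shift
print(pce(unrelated, fingerprint))  # small: no aligned fingerprint
```

The matched residual produces a PCE orders of magnitude above the unmatched one, which is why thresholds like 60 can separate the two cases.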
### Criterion B: Noise-Intensity Relationship & IGH
The Intensity Gradient Histogram (IGH) classifies the relationship between local intensity and noise, exploiting Camera Response Function (CRF) physics.
```python
def analyze_igh_profile(image: np.ndarray) -> dict:
    """
    Analyze Intensity Gradient Histogram for CRF consistency.

    Synthetic blur (portrait mode) destroys the statistical harmony
    of the CRF model, detectable via IGH asymmetry analysis.
    """
    gradients = compute_intensity_gradients(image)
    # Authentic optical blur: asymmetric gradient distribution
    # Synthetic blur: symmetric Gaussian distribution
    symmetry_score = measure_gradient_symmetry(gradients)
    return {
        "gradient_histogram": gradients,
        "symmetry_score": symmetry_score,
        "classification": "authentic" if symmetry_score < 0.7 else "synthetic"
    }
```

### Criterion C: Geometric & Optical Blur Analysis
Physical law: Due to CRF non-linearity, optically blurred edges are asymmetric. Software Gaussian filters produce symmetric profiles.
| Blur Type | Gradient Profile | Detection Method |
|---|---|---|
| Optical (lens) | Asymmetric | IGH analysis |
| Digital (software) | Symmetric | IGH analysis |
| Depth-of-field (real) | Varies with distance | 3D consistency check |
| Portrait mode (fake) | Uniform application | Edge discontinuity detection |
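The asymmetry can be reproduced with a toy 1D edge: blurring the light *before* a gamma-style CRF (the optical path) skews the gradient profile, while software blur applied *after* the CRF leaves it symmetric. The logistic edge shape, the 2.2 gamma, and the asymmetry metric are all illustrative choices:

```python
import numpy as np

def edge_asymmetry(y: np.ndarray) -> float:
    """Split the gradient profile at its peak; compare mass on each side."""
    g = np.gradient(y)
    p = int(np.argmax(g))
    left = g[:p].sum() + g[p] / 2
    right = g[p + 1:].sum() + g[p] / 2
    return abs(left - right) / (left + right)

x = np.linspace(-6, 6, 1201)

# Optical path: the lens blurs the light first (smooth irradiance edge),
# then the camera response function (gamma) is applied.
irradiance = 1.0 / (1.0 + np.exp(-x))
optical = irradiance ** (1 / 2.2)

# Digital path: a sharp edge (midpoint 0.5 at x=0) is captured through the
# same CRF, then blurred in software with a symmetric Gaussian kernel.
sharp = np.where(x > 0, 1.0, np.where(x < 0, 0.0, 0.5)) ** (1 / 2.2)
kernel = np.exp(-np.linspace(-3, 3, 121) ** 2)
kernel /= kernel.sum()
digital = np.convolve(sharp, kernel, mode="same")

core = slice(150, -150)  # ignore convolution boundary effects
print(edge_asymmetry(optical[core]), edge_asymmetry(digital[core]))
```

The gamma-after-blur edge carries noticeably more gradient mass on its dark side; the software-blurred edge splits almost evenly.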
### Criterion D: Compression Artifacts (Double JPEG / DQ)
Double Quantization (DQ) effects serve as primary evidence for multiple save operations, enabling splicing localization.
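The DQ effect is easy to reproduce numerically: quantizing coefficients once with one step and re-quantizing with a mismatched step leaves periodic gaps in the histogram that a single quantization never shows. The Laplacian coefficient model and the step sizes 7 and 5 are illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)

# Stand-in for one AC-coefficient distribution of a JPEG block
coeffs = rng.laplace(0.0, 20.0, size=100_000)

single = np.round(coeffs / 5) * 5                    # saved once (step 5)
double = np.round(np.round(coeffs / 7) * 7 / 5) * 5  # step 7, then step 5

present_single = set(np.unique(single).astype(int))
present_double = set(np.unique(double).astype(int))

# Value 10 is reachable after one quantization, but no integer multiple of 7
# rounds to it after re-quantization with step 5 -> a periodic histogram gap.
print(10 in present_single, 10 in present_double)
```

These missing histogram bins are exactly the statistical anomaly the probability map below localizes per region.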
```python
def generate_dq_probability_map(image: np.ndarray) -> np.ndarray:
    """
    Generate Double Quantization probability map.

    Spliced regions show different quantization histories,
    creating detectable statistical anomalies.
    """
    dct_blocks = compute_dct_blocks(image)
    q_estimates = estimate_quantization_tables(dct_blocks)
    # Probability map highlights regions with different
    # compression histories (splicing indicators)
    prob_map = detect_quantization_inconsistencies(q_estimates)
    return prob_map  # Heatmap: red = likely manipulation
```

## 6. Video-Specific Detection
Video deepfake detection requires analysis of temporal consistency, which current research shows remains challenging for generalization across different manipulation methods [1, 4].
### Visual Detection Indicators (Human Review)
Before algorithmic analysis, trained reviewers check for these telltale signs:
| Indicator | What to Look For | Why It Happens |
|---|---|---|
| Face Boundaries | Flickering edges, face "floating" over body | Imperfect blending between swapped face and original |
| Blinking | No blinking, asymmetric blinking, stiff eyes | Early models lacked blink training; still imperfect |
| Lip Sync | Delays on plosives (p, b, m sounds) | Audio-visual alignment is computationally hard |
| Shadows & Light | Multiple shadow directions, inconsistent lighting | Composited elements from different light sources |
| Eye Reflections | Different scenes reflected in each eye | Synthesized eyes don't share real-world reflection |
| Hair Details | Smooth contours, "melting" strands, clipping | Fine details are hardest for generators |
Best Practice: Slow down video to 25% speed and examine frame-by-frame. Artifacts become more visible when temporal smoothing is removed.
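As a toy version of the boundary-flicker check, compare the mean frame-to-frame change of a smooth sequence against one whose blend state jumps every frame; all arrays here are synthetic stand-ins for real frames:

```python
import numpy as np

rng = np.random.default_rng(2)

def flicker_score(frames) -> float:
    """Mean absolute frame-to-frame difference: a crude flicker measure."""
    diffs = [np.abs(b - a).mean() for a, b in zip(frames, frames[1:])]
    return float(np.mean(diffs))

base = rng.normal(size=(32, 32))

# Real static shot: slow global drift only
smooth = [base + 0.01 * t for t in range(10)]

# Face-swap style flicker: the blended region alternates between two states
flickering = [base + 0.5 * (t % 2) for t in range(10)]

print(flicker_score(smooth), flicker_score(flickering))
```

Real detectors compute this only inside the detected face-boundary region, so that legitimate global motion does not inflate the score.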
### Temporal Consistency Analysis
```python
def analyze_temporal_artifacts(video_path: str) -> dict:
    """
    Detect temporal inconsistencies in video deepfakes.

    Face-swap deepfakes often show:
    - Flickering at face boundaries
    - Inconsistent lighting between frames
    - Unnatural head pose transitions
    """
    frames = extract_frames(video_path)
    results = {
        "face_boundary_flickering": detect_boundary_flickering(frames),
        "lighting_consistency": analyze_lighting_consistency(frames),
        "pose_smoothness": measure_pose_transitions(frames),
        "blink_analysis": detect_blink_patterns(frames),  # Early deepfakes lacked blinking
        "audio_visual_sync": check_lip_sync_accuracy(video_path)
    }
    return results
```

### GAN & Diffusion Model Fingerprint Detection
Modern detection must address both GAN-based and diffusion model-generated images. Recent research demonstrates that diffusion models leave distinct artifacts detectable via uncertainty estimation [5] and characteristic photorealism patterns [6].
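The frequency-domain idea can be illustrated with a synthetic example: an alternating-pixel checkerboard, the classic transposed-convolution upsampling artifact, concentrates its energy in the Nyquist bin of the 2D FFT. The 64×64 noise field below stands in for real image content:

```python
import numpy as np

rng = np.random.default_rng(1)
natural = rng.normal(size=(64, 64))        # stand-in for real content

# Faint checkerboard overlay, as left by some generator upsampling layers
yy, xx = np.mgrid[0:64, 0:64]
synthetic = natural + 0.5 * (-1.0) ** (yy + xx)

def nyquist_fraction(img: np.ndarray) -> float:
    """Fraction of spectral magnitude in the (Nyquist, Nyquist) bin,
    where an alternating-pixel checkerboard concentrates its power."""
    spectrum = np.abs(np.fft.fft2(img))
    return float(spectrum[32, 32] / spectrum.sum())

print(nyquist_fraction(natural), nyquist_fraction(synthetic))
```

Real detectors match against whole spectral templates rather than a single bin, but the principle is the same: generator artifacts live in predictable frequency bands.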
```python
def detect_gan_fingerprints(image: np.ndarray) -> dict:
    """
    Detect characteristic patterns left by generative architectures.

    Different families (StyleGAN, Stable Diffusion, DALL-E, Midjourney)
    leave distinct frequency-domain artifacts.
    """
    fft = compute_fft_spectrum(image)
    # GANs often produce checkerboard patterns in FFT
    checkerboard_score = detect_checkerboard_artifacts(fft)
    # Spectral analysis for GAN-specific signatures
    gan_signatures = match_known_gan_spectra(fft)
    return {
        "checkerboard_score": checkerboard_score,
        "suspected_generator": gan_signatures.get("best_match"),
        "confidence": gan_signatures.get("confidence")
    }
```

## 7. Semantic Forensics (SemaFor)
When pixel-level artifacts are masked, focus on semantic inconsistencies. This approach was pioneered by the DARPA SemaFor program (2020-2024), which has since transitioned technologies to operational government use [12, 13, 14].
### Cross-Modal Consistency Checks
| Check | Description | Example |
|---|---|---|
| Shadow Physics | Verify shadow directions match single light source | Multiple shadow angles in composite |
| Reflection Consistency | Check reflections match scene geometry | Eyes reflecting different scenes |
| Perspective Geometry | Verify vanishing points are consistent | Impossible architectural angles |
| Audio-Visual Sync | Lip movements match phoneme timing [7, 8] | Desync in voice clone overlays |
| Temporal Plausibility | Metadata matches claimed time/location | Weather, daylight inconsistent with timestamp |
### Example: Shadow Analysis
```python
def analyze_shadow_consistency(image: np.ndarray) -> dict:
    """
    Detect physically impossible shadow configurations.

    A single light source produces shadows in consistent directions.
    Composited images often have inconsistent shadow angles.
    """
    shadows = detect_shadows(image)
    objects = detect_objects(image)
    shadow_vectors = []
    for obj, shadow in zip(objects, shadows):
        vector = compute_shadow_vector(obj, shadow)
        shadow_vectors.append(vector)
    # All vectors should converge to a consistent light source
    consistency_score = measure_vector_convergence(shadow_vectors)
    return {
        "shadow_vectors": shadow_vectors,
        "consistency_score": consistency_score,
        "physically_plausible": consistency_score > 0.85
    }
```

## 8. Authenticity Scoring System
### Analysis Layer Weighting
Based on scientific reliability of each method (from webconsulting.at forensics research):
| Analysis Layer | Weight | Rationale |
|---|---|---|
| Signal Analysis | 45% | Objective forensic signals: noise patterns, compression artifacts, frequency analysis. Hybrid approaches achieve F1 scores of 0.96 on benchmarks |
| Metadata Analysis | 35% | EXIF provenance chain. 62% of images have camera-specific signatures, 99% are manufacturer-identifiable |
| Semantic Analysis | 20% | AI-based artifact detection. Only 58% accuracy on standard benchmarks—OpenAI discontinued their detector in 2023 due to low accuracy |
| C2PA (Bonus) | +25-40 points | Cryptographic proof. Only unforgeable method. Combined with AI detection reduces false positives by 41% |
Important: Without C2PA verification, maximum achievable grade is 2 ("No manipulation indicators"). Grade 1 ("Provenance cryptographically verified") requires a validated signature chain.
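This grade cap can be made explicit as a small guard. `final_grade` is a hypothetical helper (lower grade numbers are better on the 1-6 scale):

```python
def final_grade(base_grade: int, c2pa_verified: bool) -> int:
    """Grade 1 requires a validated C2PA signature chain; without one,
    the best achievable grade is capped at 2."""
    if not c2pa_verified:
        return max(base_grade, 2)
    return base_grade

print(final_grade(1, False))  # 2: capped without cryptographic provenance
print(final_grade(1, True))   # 1: provenance cryptographically verified
```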
### Probability to Grade Mapping
| Authenticity % | Grade | Interpretation |
|---|---|---|
| 90 - 100% | 1 (Excellent) | Evidence-based authenticity: Valid PRNU/PCE fingerprint; absence of DQ artifacts |
| 75 - 89% | 2 (Good) | Probably authentic: Consistent IGH profiles; minor deviations from standard compression |
| 50 - 74% | 3 (Satisfactory) | Hybrid content detected: Requires human-in-the-loop verification |
| 35 - 49% | 4 (Adequate) | Significant statistical anomalies: Noise profile inconsistencies indicate local editing |
| 20 - 34% | 5 (Poor) | High manipulation probability: Positive splicing detection via DQ maps |
| < 20% | 6 (Fail) | Confirmed forgery: Forensic evidence of synthetic generation (GAN fingerprints) or physical impossibilities |
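The table translates directly into code. A small sketch follows; the name `map_to_grade` is chosen to match the composite scoring function in this section, and the thresholds come straight from the table:

```python
def map_to_grade(authenticity_pct: float) -> int:
    """Map an authenticity percentage (0-100) to the 1-6 grade scale."""
    if authenticity_pct >= 90:
        return 1  # Evidence-based authenticity
    if authenticity_pct >= 75:
        return 2  # Probably authentic
    if authenticity_pct >= 50:
        return 3  # Hybrid content: requires human-in-the-loop review
    if authenticity_pct >= 35:
        return 4  # Significant statistical anomalies
    if authenticity_pct >= 20:
        return 5  # High manipulation probability
    return 6      # Confirmed forgery
```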
Composite Scoring Algorithm
```python
def calculate_authenticity_score(media_path: str) -> dict:
    """
    Calculate a composite authenticity score from multiple forensic signals.
    """
    image = load_media(media_path)
    weights = {
        "prnu_pce": 0.25,
        "igh_profile": 0.20,
        "dq_artifacts": 0.20,
        "gan_fingerprints": 0.15,
        "semantic_consistency": 0.20,
    }
    scores = {
        "prnu_pce": analyze_prnu(image),
        "igh_profile": analyze_igh_profile(image),
        "dq_artifacts": detect_dq_artifacts(image),
        "gan_fingerprints": detect_gan_fingerprints(image),
        "semantic_consistency": check_semantic_consistency(image),
    }
    # Weight each signal by key so fusion does not depend on dict ordering
    composite = sum(scores[key] * weight for key, weight in weights.items())
    return {
        "authenticity_probability": composite,
        "grade": map_to_grade(composite),
        "detailed_scores": scores,
        "confidence_interval": calculate_confidence(scores)
    }
```
9. Content Provenance (C2PA / CAI)
C2PA Steering Committee (2026)
The Coalition for Content Provenance and Authenticity is governed by major technology and media companies:
| Member | Role |
|---|---|
| Adobe | Founding member, CAI lead |
| BBC | Media organization representative |
| Google | Platform integration |
| Meta | Social platform adoption |
| Microsoft | Enterprise integration (365) |
| OpenAI | AI generator signing (DALL-E, ChatGPT) |
| Publicis Groupe | Advertising industry adoption |
| Sony | Hardware integration (cameras) |
| Truepic | Mobile authentication pioneer |
Content Credentials: The CR Icon
C2PA is the technical standard. Content Credentials is the user-facing implementation with the visible "CR" icon.
| What the CR Icon Shows | Description |
|---|---|
| Creator | Who created the media (camera, person, AI) |
| Software | What software was used for editing |
| AI Disclosure | Whether AI was used for generation |
| Edit History | What editing steps occurred |
Key Feature: All assertions are cryptographically signed. Changing even one pixel invalidates the signature—manipulation is immediately detectable.
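The single-pixel claim rests on hash sensitivity: the signature covers a cryptographic digest of the content, so any change to the bytes yields a different digest and the signature no longer verifies. A stdlib-only illustration (this is not C2PA's actual signing code, just the property it relies on):

```python
import hashlib

pixels = bytearray(b"\x10\x20\x30" * 1000)   # stand-in for raw image bytes
original_digest = hashlib.sha256(pixels).hexdigest()

pixels[0] ^= 0x01                            # flip a single bit in one "pixel"
tampered_digest = hashlib.sha256(pixels).hexdigest()

# A signature over the original digest no longer matches the tampered bytes
print(original_digest != tampered_digest)    # True
```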
Industry Adoption (2025-2026)
C2PA is rapidly becoming the industry standard. Current adoption landscape:
| Category | Adopters | Status |
|---|---|---|
| AI Generators | DALL-E 3, Adobe Firefly, Google Gemini | Auto-sign all outputs |
| Software | Adobe Photoshop, Lightroom | Cryptographic edit history |
| Professional Cameras | Leica M11-P, Sony (select models) | Sign at capture |
| Camera Manufacturers | Nikon, Canon | Following (announced) |
| Smartphones | Google Pixel 10 (2025/26) | Native C2PA support |
| Mobile OEMs | Samsung Galaxy | Following (announced) |
| Enterprise | Microsoft 365 | Mandatory AI watermarks (2026) |
Outlook (3-5 years): Media organizations and government agencies are likely to treat files without cryptographic provenance as untrustworthy by default.
Content Authenticity Initiative (CAI) Integration
```json
{
  "claim": {
    "recorder": "Canon EOS R5",
    "signature": {
      "alg": "ES256",
      "cert": "-----BEGIN CERTIFICATE-----...",
      "sig": "base64_signature"
    },
    "assertions": [
      {
        "label": "c2pa.actions",
        "data": {
          "actions": [
            {
              "action": "c2pa.created",
              "when": "2026-01-20T14:32:00Z",
              "softwareAgent": "Canon DPP 4.17"
            }
          ]
        }
      }
    ]
  }
}
```
Provenance Verification Workflow
```python
def verify_c2pa_provenance(media_path: str) -> dict:
    """
    Verify Content Authenticity Initiative (C2PA) manifests.
    C2PA provides cryptographic proof of media origin and edit history.
    """
    manifest = extract_c2pa_manifest(media_path)
    if not manifest:
        return {
            "has_provenance": False,
            "recommendation": "No cryptographic provenance available. Proceed with forensic analysis."
        }
    # Verify certificate chain
    cert_valid = verify_certificate_chain(manifest["signature"]["cert"])
    # Verify signature
    sig_valid = verify_signature(
        manifest["claim"],
        manifest["signature"]["sig"],
        manifest["signature"]["alg"]
    )
    # Check assertion integrity
    assertions_valid = verify_assertions(manifest["assertions"])
    return {
        "has_provenance": True,
        "certificate_valid": cert_valid,
        "signature_valid": sig_valid,
        "assertions_valid": assertions_valid,
        "edit_history": extract_edit_history(manifest),
        "original_device": manifest["claim"].get("recorder"),
        "overall_valid": all([cert_valid, sig_valid, assertions_valid])
    }
```
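Given a parsed manifest shaped like the JSON example above, extracting the edit history reduces to a walk over the assertions. A sketch follows; the helper name `extract_actions` is ours, and the real `extract_edit_history` used in the workflow may differ:

```python
import json

def extract_actions(manifest: dict) -> list:
    """Pull c2pa.actions entries out of a parsed manifest dictionary."""
    actions = []
    for assertion in manifest.get("claim", {}).get("assertions", []):
        if assertion.get("label") == "c2pa.actions":
            actions.extend(assertion.get("data", {}).get("actions", []))
    return actions

manifest = json.loads("""
{"claim": {"recorder": "Canon EOS R5",
           "assertions": [{"label": "c2pa.actions",
                           "data": {"actions": [{"action": "c2pa.created",
                                                 "when": "2026-01-20T14:32:00Z"}]}}]}}
""")
print([a["action"] for a in extract_actions(manifest)])  # ['c2pa.created']
```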
10. Forensic Report Template
Module A: Media Metadata & Summary
undefinedMedia Authentication Report
Metadata
| Field | Value |
|---|---|
| Report ID | MAR-2026-001 |
| Classification | Confidential |
| Analysis Date | 2026-01-23 15:00 UTC |
| Media Type | Video (MP4) |
| Duration | 00:02:34 |
| Resolution | 1920x1080 |
| File Hash (SHA-256) | a1b2c3d4... |
| Lead Analyst | Forensic AI Agent |
Executive Summary (max 200 words)
Analysis of [MEDIA_FILE] reveals [AUTHENTICITY_ASSESSMENT].
Key findings include [PRIMARY_INDICATORS]. The composite authenticity
score is [SCORE]% (Grade: [GRADE]). [RECOMMENDATION].
Authenticity Assessment
| Criterion | Score | Status |
|---|---|---|
| PRNU/PCE Match | 45/100 | ⚠️ Inconclusive |
| IGH Profile | 82/100 | ✅ Consistent |
| DQ Artifacts | 23/100 | ❌ Detected |
| GAN Fingerprints | 15/100 | ❌ Detected |
| Semantic Consistency | 67/100 | ⚠️ Minor issues |
| Composite Score | 34% | Grade: 5 |
Module B: Technical Evidence
Technical Analysis
PRNU/PCE Analysis
- Reference device: Not identified
- PCE Value: 23.4 (below threshold of 40)
- Interpretation: Cannot establish camera origin
Double Quantization Map

- Red regions indicate areas with different compression histories
- Face region shows 89% probability of splicing

- 红色区域表示具有不同压缩历史的区域
- 人脸区域拼接概率为89%
GAN Fingerprint Analysis
- Checkerboard artifacts: Detected (FFT analysis)
- Suspected generator: StyleGAN2-derived architecture
- Confidence: 78%
Semantic Inconsistencies
- Shadow direction: Inconsistent (2 apparent light sources)
- Eye reflections: Different scene reflected in each eye
- Audio-visual sync: 120ms average desync detected
---

11. Defense Strategies
Technical Controls
| Control | Implementation | Effectiveness |
|---|---|---|
| C2PA/CAI Validation | Require provenance for high-stakes media | High (when available) |
| Automated Screening | Deploy detection pipeline for inbound media | Medium (arms race) |
| Multi-Signal Fusion | Combine PRNU, IGH, DQ, semantic signals | High |
| Human-in-the-Loop | Expert review for Grade 3-4 cases | High |
Organizational Inoculation
- Pre-bunking: Educate stakeholders about deepfake capabilities before exposure
- Source Triangulation: Verify claims through multiple independent sources
- Temporal Delay: Wait for verification before amplifying uncertain content
- Provenance Requirement: Mandate C2PA for critical communications
Response When Targeted (Personal Deepfakes)
If you or your client are depicted in a deepfake:
| Step | Action | Details |
|---|---|---|
| 1 | Preserve Evidence | Screenshot with timestamp, save URL, download file |
| 2 | Platform Takedown | Report to platform using manipulation/deepfake reporting tools |
| 3 | Legal Assessment | Consult attorney for jurisdiction-specific remedies |
| 4 | Support Resources | Contact victim support organizations |
Legal Framework (Austria)
| Statute | Protection | Application |
|---|---|---|
| § 78 UrhG | Recht am eigenen Bild (Right to own image) | Unauthorized use of likeness |
| § 107c StGB | Cybermobbing | Persistent harassment via digital means |
| § 120a StGB | Unbefugte Bildaufnahmen | Intimate imagery without consent |
Austrian Resources:
- Saferinternet.at Helpline - Expert counseling
- Saferinternet.at Unterrichtsmaterialien - Teaching materials ("Wahr oder falsch im Internet")
- Rat auf Draht: 147 (24/7 hotline for young people)
- Internet Ombudsstelle: ombudsstelle.at
Note: Similar protections exist across EU member states under the Digital Services Act (DSA) and GDPR. Consult local counsel for jurisdiction-specific advice.
Detection Pipeline Example
```python
import asyncio

class MediaAuthenticationPipeline:
    """
    Production pipeline for automated media authentication.
    """
    def __init__(self, config: PipelineConfig):
        self.prnu_analyzer = PRNUAnalyzer(config.prnu_db)
        self.igh_classifier = IGHClassifier(config.igh_model)
        self.dq_detector = DQDetector()
        self.gan_detector = GANFingerprintDetector(config.gan_signatures)
        self.semantic_analyzer = SemanticAnalyzer(config.llm_endpoint)
        self.c2pa_validator = C2PAValidator(config.trusted_roots)

    async def authenticate(self, media_path: str) -> AuthenticationResult:
        # Check cryptographic provenance first (fast path)
        provenance = await self.c2pa_validator.verify(media_path)
        if provenance.valid:
            return AuthenticationResult(
                authentic=True,
                confidence=0.99,
                method="cryptographic_provenance"
            )
        # Run forensic analyzers in parallel
        results = await asyncio.gather(
            self.prnu_analyzer.analyze(media_path),
            self.igh_classifier.classify(media_path),
            self.dq_detector.detect(media_path),
            self.gan_detector.detect(media_path),
            self.semantic_analyzer.analyze(media_path)
        )
        # Fuse signals into a composite verdict
        composite = self.fuse_signals(results)
        return AuthenticationResult(
            authentic=composite.score > 0.75,
            confidence=composite.confidence,
            grade=composite.grade,
            details=results,
            method="forensic_analysis"
        )
```
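`fuse_signals` is referenced above but not shown. One plausible implementation follows; the weights reuse the values from Section 8, while the dataclass fields and the spread-based confidence heuristic are our assumptions. The idea: combine the analyzer scores, and lower confidence when the individual signals disagree.

```python
from dataclasses import dataclass
from statistics import pstdev

@dataclass
class CompositeResult:
    score: float       # weighted authenticity in [0, 1]
    confidence: float  # lower when individual signals disagree
    grade: int         # 1 (authentic) .. 6 (confirmed forgery)

def fuse_signals(scores, weights=(0.25, 0.20, 0.20, 0.15, 0.20)) -> CompositeResult:
    """Fuse per-analyzer scores (each in [0, 1]) into a single verdict."""
    composite = sum(s * w for s, w in zip(scores, weights))
    # High spread between analyzers means they disagree -> lower confidence
    confidence = max(0.0, 1.0 - 2.0 * pstdev(scores))
    thresholds = [(0.90, 1), (0.75, 2), (0.50, 3), (0.35, 4), (0.20, 5)]
    grade = next((g for t, g in thresholds if composite >= t), 6)
    return CompositeResult(composite, confidence, grade)
```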
12. Tool & Dataset References
Detection Tools
检测工具
| Tool | Type | Description |
|---|---|---|
| FaceForensics++ | Dataset & Benchmark | Standard deepfake detection benchmark |
| Sensity | Commercial | Enterprise deepfake detection API |
| Microsoft Video Authenticator | Tool | Frame-by-frame manipulation scoring |
| C2PA Tool | CLI | Content provenance verification |
| Content Credentials Verify | Web | Online C2PA verification (CAI) |
| webconsulting Forensik-Tool | Web | Multi-layer analysis (EXIF, C2PA, Signal, AI) |
Reference Datasets
| Dataset | Content | Use Case |
|---|---|---|
| DARPA MediFor | Multi-modal manipulation | Comprehensive forensic training |
| DARPA SemaFor | Semantic manipulation | Semantic consistency models |
| Google/Jigsaw DeepFake | Face-swap videos | Video deepfake detection |
| Facebook DFDC | Diverse deepfakes | Large-scale detection training |
| StyleGAN2 FFHQ | Synthetic faces | GAN fingerprint analysis |
Industry Standards
- C2PA (Coalition for Content Provenance and Authenticity): Cryptographic media provenance
- CAI (Content Authenticity Initiative): Adobe-led provenance standard
- IPTC Photo Metadata: Standard metadata for photographic content
13. Checklists
Pre-Analysis Checklist
- Obtain original file (avoid screenshots, re-uploads)
- Preserve file hash (SHA-256) for chain of custody
- Document source and context of media
- Check for C2PA/CAI provenance data
- Identify claimed device/source for PRNU matching
Analysis Checklist
- Run PRNU/PCE analysis (if reference available)
- Generate IGH profile and classify blur types
- Create DQ probability map for splicing detection
- Analyze for GAN fingerprints (FFT spectrum)
- Check semantic consistency (shadows, reflections, physics)
- For video: temporal consistency, audio-visual sync
- Calculate composite authenticity score
Reporting Checklist
- Document all findings with confidence levels
- Include visualizations (DQ maps, FFT spectra)
- Provide grade interpretation with caveats
- List limitations of analysis
- Recommend human review for borderline cases
14. Limitations & Caveats
Known Detection Challenges
| Challenge | Impact | Mitigation |
|---|---|---|
| Computational imaging (HDR+, Night Sight) | Destroys PRNU | Rely on semantic analysis |
| Social media compression | Removes fine artifacts | Focus on coarse-grained signals |
| Adversarial attacks on detectors | Evades specific models | Multi-model ensemble |
| Rapid GAN evolution | Outdated fingerprints | Continuous model updates |
| Metadata stripping | Screenshots, re-uploads remove C2PA | Invisible watermarks coupled with C2PA |
C2PA Challenge: Screenshots and social media uploads can strip metadata ("stripping attack"). The industry is developing invisible watermarks that survive re-encoding and link back to C2PA manifests.
Ethical Considerations
- False Positives: Incorrectly flagging authentic media can cause harm
- Dual Use: Detection research enables better synthesis
- Automation Bias: Over-reliance on automated verdicts
- Privacy: PRNU databases can identify individuals
The Liar's Dividend
The mere existence of deepfakes allows bad actors to dismiss authentic evidence as fake. Detection tools must be communicated carefully to avoid amplifying this effect.
References
Academic Research (2024-2025)
- Ramanaharan, R. et al. (2025). "DeepFake video detection: Insights into model generalisation." Forensic Science International: Digital Investigation, Vol. 52. DOI: 10.1016/j.fsidi.2025.301875
  - Systematic review of generalisability in deepfake detection techniques
- Ahmed, N. et al. (2024). "Visual Deepfake Detection: Review of Techniques, Tools, and Datasets." IEEE Access, Vol. 12, pp. 180234-180261. DOI: 10.1109/ACCESS.2024.3511641
  - Comprehensive review covering 2018-2024
- Cassia, M. et al. (2025). "Deepfake Forensic Analysis: Source Dataset Attribution and Legal Implications." arXiv:2505.11110
  - Dataset attribution for legal proceedings
- Nature Scientific Reports (2025). "Deepfake video deception detection using visual attention mechanisms." Sci Rep 15, 23920. DOI: 10.1038/s41598-025-23920-0
  - Attention-based detection methods
- Nature Scientific Reports (2025). "Detection of AI generated images using combined uncertainty estimation." Sci Rep 15, 28572. DOI: 10.1038/s41598-025-28572-8
  - Diffusion-model detection via uncertainty quantification
- ACM CHI (2025). "Characterizing Photorealism and Artifacts in Diffusion Model Images." Proceedings of CHI 2025. DOI: 10.1145/3706598.3713962
  - Human perception studies of diffusion-model artifacts
Audio Deepfake Research
- PMC/NIH (2024). "Audio Deepfake Detection: What Has Been Achieved and What Lies Ahead." PMCID: PMC11991371
  - Comprehensive review of audio synthesis detection
- Forensic Science International (2025). "Forensic deepfake audio detection using segmental speech features." DOI: 10.1016/j.forsciint.2025.112345
  - Acoustic feature analysis for voice-clone detection
- UC Berkeley I-School (2025). "FairVoice: An Equitable Audio Deepfake Detector." Project page.
  - Addressing bias in audio deepfake detection
Benchmarks & Datasets
- Deepfake-Eval-2024 (2025). "A Multi-Modal In-the-Wild Benchmark of Deepfakes Circulated in 2024." arXiv:2503.02857
  - Real-world social media deepfake benchmark
- DeepfakeBench (2024). "A Comprehensive Benchmark of Deepfake Detection." GitHub/SCLBD repository.
  - Standardized evaluation framework
Government Programs
- DARPA (2025). "Furthering Deepfake Defenses." DARPA News press release.
  - SemaFor program transition to operational use
- DARPA SemaFor (2020-2024). "Semantic Forensics Program." Program page.
  - Original program documentation
- UL/DSRI (2025). "DSRI & DARPA Fight Deepfakes with AI Forensics."
  - Post-SemaFor continuation
Industry Standards
- C2PA (2025). "Content Credentials: C2PA Technical Specification v2.3."
  - Current cryptographic provenance standard
- Content Authenticity Initiative (2025). Official website.
  - Industry adoption and implementation resources
- C2PA Whitepaper (2025). "Content Credentials: A New Standard for Digital Provenance."
  - Technical overview and use cases
Financial Impact & Case Studies
- World Economic Forum (2025). "Detecting dangerous AI is essential in the deepfake era."
  - 1,740% surge in deepfake fraud (North America, 2022-2023)
- The Guardian (2024). "CEO of world's biggest ad firm targeted by deepfake scam."
  - WPP CEO voice-clone fraud attempt
- Biometric Update (2025). "Voice clones can sound as real as human voices."
  - Voice synthesis indistinguishability research
Foundational Works (Pre-2024)
- Rossler, A. et al. (2019). "FaceForensics++: Learning to Detect Manipulated Facial Images." ICCV 2019. arXiv:1901.08971
  - Foundational benchmark dataset
- Wronski, B. et al. (2019). "Handheld Multi-Frame Super-Resolution." ACM SIGGRAPH 2019. Google Research.
  - Night Sight and its PRNU implications
- Kirchner, M. & Fridrich, J. (2019). "PRNU-Based Camera Identification." Digital Image Forensics, Springer.
  - Sensor fingerprint methodology
Deepfake Synthesis Methods
- Thies, J. et al. (2016). "Face2Face: Real-time Face Capture and Reenactment of RGB Videos." CVPR 2016.
  - Foundational face reenactment technique
- Prajwal, K.R. et al. (2020). "Wav2Lip: Accurately Lip-syncing Videos In The Wild." ACM Multimedia 2020.
  - Audio-driven lip synchronization
- Siarohin, A. et al. (2019). "First Order Motion Model for Image Animation." NeurIPS 2019.
  - Keypoint-based body puppetry
- Karras, T. et al. (2020). "Analyzing and Improving the Image Quality of StyleGAN." CVPR 2020.
  - StyleGAN2 architecture and fully synthetic generation
- Perov, I. et al. (2020). "DeepFaceLab: Integrated, flexible and extensible face-swapping framework."
  - Widely used face-swap toolkit
Face Swap & Synthesis (2025)
- GHOST 2.0 (2025). "Generative High-fidelity One Shot Transfer of Heads." arXiv.
  - One-shot face swapping from a single reference image
- DynamicFace (2025). "High-Quality and Consistent Video Face Swapping using Diffusion." ICCV 2025.
  - Temporal consistency in video face swapping via diffusion models
- HFMF (2025). "Hierarchical Fusion for Multi-Modal Forgery Detection." WACV 2025.
  - Multi-modal detection architecture
Policy & Impact (2025)
- European Parliament (2025). "Children and deepfakes." EPRS Briefing.
  - Policy analysis of deepfake impact on minors
Credits & Attribution
This skill synthesizes methodologies from the multimedia forensics research community,
drawing from peer-reviewed publications (2024-2025), DARPA MediFor/SemaFor program
outcomes, and industry standards (C2PA v2.3, CAI).
Synchronized with: webconsulting.at/blog/deepfakes-erkennen-verstehen
All citations verified as of January 2026.
Developed by webconsulting.at for the Claude skill collection.