deepfake-detection

Deepfake Detection & Media Authentication

Comprehensive framework for detecting synthetic media, analyzing manipulation artifacts, and establishing media provenance in the post-empirical era.
Key Insight: Traditional detection methods (PRNU, IGH, DQ) are like fingerprints—helpful, but disputable. Cryptographic provenance (C2PA) is like a DNA match—mathematically certain (collision probability 2⁻²⁵⁶).
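The "DNA match" analogy rests on hash collision resistance. A minimal illustration (not part of the C2PA toolchain; the inputs are made up) of why a SHA-256 binding is so hard to dispute:

```python
import hashlib

original = b"frame-0001-pixel-data"
tampered = b"frame-0001-pixel-datb"   # a single byte changed

h1 = hashlib.sha256(original).hexdigest()
h2 = hashlib.sha256(tampered).hexdigest()

# The digests differ completely. Disputing a cryptographic binding means
# finding a second input with the same digest, which takes ~2^256 attempts.
assert h1 != h2
```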

When to Use

  • Verifying authenticity of images or videos before publication
  • Detecting AI-generated or manipulated media (deepfakes, face swaps, synthetic voices)
  • Forensic analysis of suspicious media for legal or journalistic purposes
  • Implementing automated media authentication pipelines
  • Establishing content provenance and chain of custody
  • Countering disinformation campaigns and Advanced Persistent Manipulators (APMs)

Related Skills

  • security-audit - Security assessment patterns
  • security-incident-reporting - Incident documentation for disinformation attacks
  • enterprise-readiness - Infrastructure for automated verification pipelines
  • cli-tools - Auto-installation of required tools

1. What Are Deepfakes?

Definition

Deepfakes are synthetic media created using deep learning techniques—primarily Generative Adversarial Networks (GANs), Diffusion Models, and Autoencoders—to generate or manipulate audiovisual content with a high degree of realism. The term combines "deep learning" and "fake."

Types of Synthetic Media

| Type | Technology | Description |
|---|---|---|
| Face Swap | Autoencoders, GANs | Replace one person's face with another in video |
| Face Reenactment | 3D Morphable Models | Animate a face with another person's expressions |
| Voice Clone | Text-to-Speech, Vocoder | Generate speech in someone's voice from text [20] |
| Lip Sync | Audio-to-Video | Make someone appear to say different words |
| Full Body Puppetry | Pose Estimation | Control a person's body movements |
| Fully Synthetic | Diffusion, GANs | Generate non-existent people, scenes, events |

Emerging Capabilities (2025-2026)

| Type | Advancement | Implication |
|---|---|---|
| Face Swap | One-shot swapping (single reference image), GHOST 2.0 [24], DynamicFace [25] | Minimal source material needed |
| Face Reenactment | Audio-driven animation, Neural Head Reenactment | Fully synthetic video calls |
| Voice Clone | Zero-shot cloning (no training on target), emotional voice synthesis | Clone any voice instantly, with emotion |
| Lip Sync | High-fidelity diffusion models, multilingual sync | Automatic dubbing across languages |
| Full Body Puppetry | 3D-aware motion transfer, Neural Body Avatars | Photorealistic real-time control |
| Fully Synthetic | Video diffusion models, controllable generation | Precise control over age, expression, gaze |

The Entertaining Side

Deepfakes have legitimate and creative applications:
| Use Case | Example | Value |
|---|---|---|
| Entertainment | De-aging actors in films, posthumous performances | Artistic expression |
| Satire & Parody | Political satire, comedy sketches | Free speech, humor |
| Education | Historical figures "speaking" in documentaries | Engagement, learning |
| Accessibility | Real-time sign language avatars | Inclusion |
| Gaming & VR | Personalized avatars, NPC faces | Immersion |
| Art & Expression | Digital art, creative projects | Innovation |
Example: The "This Person Does Not Exist" website showcases GAN-generated faces that fascinate users with the uncanny realism of non-existent people.

The Dangerous Side

The same technology enables serious harms:
| Threat | Description | Impact |
|---|---|---|
| Non-Consensual Imagery | Synthetic intimate content without consent | Psychological harm, harassment, reputation destruction |
| Political Manipulation | Fabricated speeches, fake scandals | Election interference, democratic erosion |
| Financial Fraud | CEO voice clones for wire transfer scams | Millions in losses per incident |
| Evidence Fabrication | Fake alibis, planted evidence | Obstruction of justice |
| Liar's Dividend | Dismissing real evidence as "deepfake" | Accountability evasion |
| Identity Theft | Bypassing facial recognition, KYC | Account takeover, fraud |
| Disinformation Warfare | State-sponsored synthetic media campaigns | Geopolitical destabilization |
Real Case (2024): WPP CEO Mark Read was targeted by a sophisticated deepfake voice clone attempting to authorize fraudulent transfers [19]. Deepfake fraud cases surged 1,740% in North America from 2022 to 2023, with average losses exceeding $500,000 per incident [18].

Current Scale (2025-2026)

| Metric | Value | Source |
|---|---|---|
| Deepfakes shared annually | 8 million (2025) vs 500,000 (2023) | Industry estimates |
| Projected synthetic content | 90% of online content by 2026 | Europol |
| Non-consensual intimate imagery (NCII) | 98% of all deepfakes | EU Commission |
Key Insight: The exponential growth rate means detection systems face an ever-increasing volume challenge, reinforcing the need for proactive authentication (C2PA) over reactive detection.
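The arithmetic behind the table's growth figures, taking the industry estimates at face value:

```python
# Growth implied by the table above: 500K (2023) -> 8M (2025) deepfakes/year.
count_2023, count_2025 = 500_000, 8_000_000

total_growth = count_2025 / count_2023   # 16x over two years
annual_rate = total_growth ** 0.5        # ~4x per year, compounded

print(total_growth, annual_rate)         # 16.0 4.0
```

A detection pipeline sized for 2023 volumes would need to scale throughput roughly 4x each year just to keep pace.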

The Future of Deepfakes

| Timeline | Development | Implication |
|---|---|---|
| Now (2026) | Real-time video deepfakes, commoditized tools | Anyone can create convincing fakes |
| Near Future | Interactive deepfakes in video calls | Trust in live communication erodes |
| Medium Term | Undetectable synthetic media | Detection becomes probabilistic, not binary |
| Long Term | "Reality-as-a-Service" | Authenticated media becomes the norm; unsigned content is suspect |

The Detection Arms Race

Recent research confirms the growing challenge of detection generalizability [1]:
```
Generation Quality:    ████████████████████░░░░  85% (2026)
Detection Accuracy:    █████████████░░░░░░░░░░░  55% (2026)
                       ↑ Gap widening over time
```
Key Insight: We are transitioning from a world where "seeing is believing" to one where "cryptographic proof is believing." The future lies not in perfect detection, but in provenance infrastructure (C2PA v2.3) that proves authenticity at creation [15, 16]. Traditional detection methods (PRNU, IGH, DQ) are like fingerprints—helpful, but disputable. Cryptographic provenance (C2PA) is like a DNA match—mathematically certain.


2. Strategic Context: The Post-Empirical Era

The Crisis of Empirical Evidence (2026)

The boundary between authentic and synthetic media has effectively vanished. Trillion-parameter models have commoditized the generation of photorealistic synthetic content, transforming deepfakes from isolated experiments into an industrialized disinformation capability.

The ABC Framework of Synthetic Media Threats

| Category | Description | Examples |
|---|---|---|
| A - Actors | Malicious generators of synthetic content | Nation-states, APMs (Advanced Persistent Manipulators), commercial disinformation services |
| B - Behavior | Deceptive patterns and tactics | Astroturfing with synthetic identities, coordinated inauthentic behavior |
| C - Content | The synthetic media itself | Deepfake videos, voice clones, GAN-generated faces, manipulated images |

The 4D Disinformation Tactics

| Tactic | Description | Forensic Counter |
|---|---|---|
| Dismiss | Claim real evidence is fake ("Liar's Dividend") | Provenance verification, cryptographic attestation |
| Distort | Reframe authentic events with synthetic fragments | Semantic consistency analysis |
| Distract | Flood with synthetic noise to obscure truth | Scale-resistant automated detection |
| Dismay | Psychological operations through synthetic threats | Confidence scoring, sensemaking support |


3. System Architecture

LLM Integration Strategy

The skill implements a hierarchical model structure for forensic analysis:
| Role | Model | Version | Function |
|---|---|---|---|
| Lead | Claude Opus | 4.5 | Complex synthesis of forensic data, multimodal analysis, report generation |
| Validation | Gemini Pro | 3.0 | Cross-validation of detection results, second opinion on edge cases |
| Reasoning | GLM Pro Thinking | 4.7 | Logical verification of causal chains, step-by-step reasoning for forensic conclusions |

Model Selection Rationale

  • Claude Opus 4.5: Best-in-class for nuanced multimodal analysis and synthesizing complex forensic evidence into coherent reports
  • Gemini Pro 3.0: Strong visual understanding for cross-validating image/video analysis results
  • GLM Pro Thinking 4.7: Chain-of-thought reasoning for transparent forensic logic that can be audited

Architecture Requirements

  1. Asynchronous Processing Pipeline: Handle high token counts from multimodal analysis
  2. Vector Database for CRF Profiles: Store and query Camera Response Function signatures
  3. RAG Integration: Access forensic reference databases during inference
  4. Tool Integration: ffmpeg, ExifTool, ImageMagick for low-level signal processing

4. Required Tools & Installation

Tool Overview

| Tool | Purpose | Required |
|---|---|---|
| ffmpeg | Video processing, frame extraction, audio isolation | Yes |
| ffprobe | Metadata extraction, container analysis | Yes (bundled with ffmpeg) |
| exiftool | Deep metadata extraction, EXIF/XMP/IPTC analysis | Yes |
| imagemagick | Image processing, format conversion | Recommended |
| jq | JSON processing for metadata analysis | Recommended |
| c2patool | C2PA/CAI provenance verification | Optional |

Auto-Installation by Agent

When a required tool is missing, the agent will detect this and offer to install it. User approval is required before any installation.
```
🔧 Tool Missing: ffmpeg

The agent needs 'ffmpeg' for video frame extraction and analysis.
This tool is not currently installed on your system.

Would you like me to install it?
  [macOS]  brew install ffmpeg
  [Ubuntu] sudo apt install ffmpeg
  [Windows] winget install ffmpeg

⚠️ Approval required: Type 'yes' to proceed or 'no' to skip.
```

Manual Installation

macOS (Homebrew)

```bash
# Install all recommended tools
brew install ffmpeg exiftool imagemagick jq

# Optional: C2PA verification tool
brew install c2patool
```

Ubuntu/Debian

```bash
# Install all recommended tools
sudo apt update
sudo apt install ffmpeg libimage-exiftool-perl imagemagick jq

# Optional: C2PA verification tool (from GitHub releases)
```

Windows (winget)

```powershell
# Install all recommended tools
winget install ffmpeg
winget install exiftool
winget install imagemagick
winget install jqlang.jq

# Optional: C2PA verification tool (from GitHub releases)
```

Verification

```bash
# Verify installations
ffmpeg -version
exiftool -ver
magick -version
jq --version
c2patool --version  # if installed
```

Tool Usage Examples

ffmpeg for Feature Extraction

```bash
# Extract I-frames for PRNU analysis
ffmpeg -i input.mp4 -vf "select='eq(pict_type,I)'" -vsync vfr frame_%04d.png

# Analyze inter-frame consistency (temporal artifacts)
ffmpeg -i input.mp4 -vf "mpdecimate,setpts=N/FRAME_RATE/TB" -c:v libx264 dedup.mp4

# Extract metadata for container audit
ffprobe -v quiet -print_format json -show_format -show_streams input.mp4

# Isolate audio stream for voice clone detection
ffmpeg -i input.mp4 -vn -acodec pcm_s16le -ar 44100 audio.wav

# Extract specific frame range for analysis
ffmpeg -i input.mp4 -ss 00:01:30 -t 00:00:10 -c copy segment.mp4
```

ExifTool for Metadata Forensics

```bash
# Extract all metadata
exiftool -json input.jpg | jq .

# Check for editing software traces
exiftool -Software -CreatorTool -HistorySoftwareAgent input.jpg

# Compare metadata between original and suspected fake
diff -y <(exiftool -g1 -a -u original.jpg) <(exiftool -g1 -a -u suspected.jpg)

# Find GPS coordinates (if present)
exiftool -gps:all -c "%.6f" input.jpg

# Check creation/modification times for inconsistencies
exiftool -time:all -G1 input.jpg
```

ImageMagick for Image Analysis

```bash
# Analyze image statistics (useful for noise analysis)
magick identify -verbose input.jpg

# Error level analysis (ELA) for manipulation detection
magick input.jpg -quality 95 ela_temp.jpg
magick composite input.jpg ela_temp.jpg -compose difference ela_output.jpg

# Check for resampling artifacts
magick input.jpg -resize 200% -resize 50% resample_test.jpg
```

C2PA Tool for Provenance

```bash
# Verify C2PA manifest
c2patool verify input.jpg

# Extract manifest details as JSON
c2patool manifest input.jpg -o manifest.json

# Check certificate chain
c2patool trust input.jpg
```

C2PA Test Files for Validation

Official test files from the C2PA organization (CC BY-SA 4.0):
| File | Description | Expected Result |
|---|---|---|
| adobe-20220124-C.jpg | Valid Adobe certificate, verified signature | ✅ Chain verified |
| truepic-20230212-camera.jpg | Hardware-signed at capture | ✅ Chain verified |
| Files without credentials | No C2PA manifest | ⚠️ No provenance |
| Tampered files | Modified after signing | ❌ Invalid signature |
Understanding C2PA Validation: The chain is verified step-by-step: (1) Certificate verified → (2) Signature valid → (3) Claims unchanged → (4) Image hash matches. One failure breaks the entire chain.
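The four-step chain can be sketched as a fail-fast walk. This is an illustrative toy, not the real protocol: C2PA uses COSE signatures over X.509 certificate chains, while `validate_chain` below is a hypothetical helper that substitutes an HMAC for the signature check and SHA-256 digests for the hash bindings.

```python
import hashlib
import hmac

def validate_chain(manifest: dict, image_bytes: bytes, trusted_keys: set) -> list:
    """Return the names of the validation steps passed, stopping at the first failure."""
    steps = []
    # (1) Certificate verified: the signer must be on the trust list
    if manifest["signer_key"] not in trusted_keys:
        return steps
    steps.append("certificate")
    # (2) Signature valid: an HMAC stands in for the real COSE signature check
    expected = hmac.new(manifest["signer_key"].encode(),
                        manifest["claims"].encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, manifest["signature"]):
        return steps
    steps.append("signature")
    # (3) Claims unchanged: the recorded claim hash must match the claims
    if hashlib.sha256(manifest["claims"].encode()).hexdigest() != manifest["claim_hash"]:
        return steps
    steps.append("claims")
    # (4) Image hash matches: binds the manifest to these exact bytes
    if hashlib.sha256(image_bytes).hexdigest() != manifest["asset_hash"]:
        return steps
    steps.append("image_hash")
    return steps
```

Tampering with the pixels after signing leaves steps 1-3 intact but breaks step 4, which is exactly the "❌ Invalid signature" row in the table above.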


5. Forensic Detection Criteria

Criterion A: Sensor Fingerprints (PRNU/PCE)

Photo-Response Non-Uniformity (PRNU) is a sensor-specific noise pattern that acts as a biometric fingerprint for cameras.

Metric: Peak to Correlation Energy (PCE)

```python
# Conceptual PRNU/PCE calculation
def calculate_pce(image: np.ndarray, reference_prnu: np.ndarray) -> float:
    """
    Calculate Peak-to-Correlation-Energy for PRNU matching.

    Returns:
        PCE value > 60 indicates high confidence match
        PCE value 40-60 indicates moderate confidence
        PCE value < 40 indicates low confidence or mismatch
    """
    noise_residual = extract_noise_residual(image)
    correlation = correlate_2d(noise_residual, reference_prnu)
    peak = np.max(correlation)
    energy = np.mean(correlation**2)
    return peak**2 / energy
```
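The docstring thresholds of `calculate_pce` can be made explicit as a small, runnable helper (`classify_pce` is an illustrative name, not part of the skill's API):

```python
def classify_pce(pce: float) -> str:
    """Interpret a PCE value using the thresholds from calculate_pce's docstring."""
    if pce > 60:
        return "high-confidence match"
    if pce >= 40:
        return "moderate confidence"
    return "low confidence or mismatch"

assert classify_pce(85.0) == "high-confidence match"
assert classify_pce(47.5) == "moderate confidence"
assert classify_pce(12.0) == "low confidence or mismatch"
```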

PRNU Limitations (Wronski Effect)

Modern computational photography (multi-frame capture, super-resolution) breaks the direct sensor-to-pixel mapping [22, 23]:
| Device Era | PRNU Reliability | Notes |
|---|---|---|
| Pre-2018 DSLRs | High | Direct sensor output |
| 2018-2022 Smartphones | Medium | Some computational processing |
| 2023+ Smartphones | Low | Heavy multi-frame HDR, AI enhancement |
| Synthetic (GAN/Diffusion) | None | No physical sensor involved |

Criterion B: Noise-Intensity Relationship & IGH

The Intensity Gradient Histogram (IGH) classifies the relationship between local intensity and noise, exploiting Camera Response Function (CRF) physics.
```python
def analyze_igh_profile(image: np.ndarray) -> dict:
    """
    Analyze Intensity Gradient Histogram for CRF consistency.

    Synthetic blur (portrait mode) destroys the statistical harmony
    of the CRF model, detectable via IGH asymmetry analysis.
    """
    gradients = compute_intensity_gradients(image)

    # Authentic optical blur: asymmetric gradient distribution
    # Synthetic blur: symmetric Gaussian distribution
    symmetry_score = measure_gradient_symmetry(gradients)

    return {
        "gradient_histogram": gradients,
        "symmetry_score": symmetry_score,
        "classification": "authentic" if symmetry_score < 0.7 else "synthetic"
    }
```

Criterion C: Geometric & Optical Blur Analysis

Physical law: Due to CRF non-linearity, optically blurred edges are asymmetric. Software Gaussian filters produce symmetric profiles.
| Blur Type | Gradient Profile | Detection Method |
|---|---|---|
| Optical (lens) | Asymmetric | IGH analysis |
| Digital (software) | Symmetric | IGH analysis |
| Depth-of-field (real) | Varies with distance | 3D consistency check |
| Portrait mode (fake) | Uniform application | Edge discontinuity detection |
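The asymmetry test can be illustrated with a toy 1-D edge profile. The `asymmetry` helper and the sample values below are illustrative, not the skill's IGH implementation: a symmetric software blur distributes gradient mass equally on both sides of the edge peak, while CRF non-linearity skews one side of an optical blur.

```python
def asymmetry(profile):
    """Gradient-mass asymmetry about the edge's peak (0.0 = perfectly symmetric)."""
    peak = profile.index(max(profile))
    left, right = sum(profile[:peak]), sum(profile[peak + 1:])
    return abs(left - right) / (left + right)

software_blur = [1, 3, 7, 10, 7, 3, 1]   # symmetric Gaussian-style ramp
optical_blur  = [1, 2, 4, 10, 8, 3, 1]   # one side skewed by CRF non-linearity

assert asymmetry(software_blur) == 0.0   # software filter: balanced sides
assert asymmetry(optical_blur) > 0.2     # lens blur: measurable imbalance
```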

Criterion D: Compression Artifacts (Double JPEG / DQ)

Double Quantization (DQ) effects serve as primary evidence for multiple save operations, enabling splicing localization.
```python
def generate_dq_probability_map(image: np.ndarray) -> np.ndarray:
    """
    Generate Double Quantization probability map.

    Spliced regions show different quantization histories,
    creating detectable statistical anomalies.
    """
    dct_blocks = compute_dct_blocks(image)
    q_estimates = estimate_quantization_tables(dct_blocks)

    # Probability map highlights regions with different
    # compression histories (splicing indicators)
    prob_map = detect_quantization_inconsistencies(q_estimates)

    return prob_map  # Heatmap: red = likely manipulation
```


6. Video-Specific Detection

Video deepfake detection requires analysis of temporal consistency, which current research shows remains challenging for generalization across different manipulation methods [1, 4].

Visual Detection Indicators (Human Review)

Before algorithmic analysis, trained reviewers check for these telltale signs:
| Indicator | What to Look For | Why It Happens |
|---|---|---|
| Face Boundaries | Flickering edges, face "floating" over body | Imperfect blending between swapped face and original |
| Blinking | No blinking, asymmetric blinking, stiff eyes | Early models lacked blink training; still imperfect |
| Lip Sync | Delays on plosives (p, b, m sounds) | Audio-visual alignment is computationally hard |
| Shadows & Light | Multiple shadow directions, inconsistent lighting | Composited elements from different light sources |
| Eye Reflections | Different scenes reflected in each eye | Synthesized eyes don't share real-world reflections |
| Hair Details | Smooth contours, "melting" strands, clipping | Fine details are hardest for generators |
Best Practice: Slow down video to 25% speed and examine frame-by-frame. Artifacts become more visible when temporal smoothing is removed.

Temporal Consistency Analysis

```python
def analyze_temporal_artifacts(video_path: str) -> dict:
    """
    Detect temporal inconsistencies in video deepfakes.

    Face-swap deepfakes often show:
    - Flickering at face boundaries
    - Inconsistent lighting between frames
    - Unnatural head pose transitions
    """
    frames = extract_frames(video_path)

    results = {
        "face_boundary_flickering": detect_boundary_flickering(frames),
        "lighting_consistency": analyze_lighting_consistency(frames),
        "pose_smoothness": measure_pose_transitions(frames),
        "blink_analysis": detect_blink_patterns(frames),  # Early deepfakes lacked blinking
        "audio_visual_sync": check_lip_sync_accuracy(video_path)
    }

    return results
```

GAN & Diffusion Model Fingerprint Detection

Modern detection must address both GAN-based and diffusion model-generated images. Recent research demonstrates that diffusion models leave distinct artifacts detectable via uncertainty estimation [5] and characteristic photorealism patterns [6].
```python
def detect_gan_fingerprints(image: np.ndarray) -> dict:
    """
    Detect characteristic patterns left by generative architectures.

    Different families (StyleGAN, Stable Diffusion, DALL-E, Midjourney)
    leave distinct frequency-domain artifacts.
    """
    fft = compute_fft_spectrum(image)

    # GANs often produce checkerboard patterns in FFT
    checkerboard_score = detect_checkerboard_artifacts(fft)

    # Spectral analysis for GAN-specific signatures
    gan_signatures = match_known_gan_spectra(fft)

    return {
        "checkerboard_score": checkerboard_score,
        "suspected_generator": gan_signatures.get("best_match"),
        "confidence": gan_signatures.get("confidence")
    }
```


7. Semantic Forensics (SemaFor)

When pixel-level artifacts are masked, focus on semantic inconsistencies. This approach was pioneered by the DARPA SemaFor program (2020-2024), which has since transitioned technologies to operational government use [12, 13, 14].

Cross-Modal Consistency Checks

| Check | Description | Example |
|---|---|---|
| Shadow Physics | Verify shadow directions match a single light source | Multiple shadow angles in a composite |
| Reflection Consistency | Check reflections match scene geometry | Eyes reflecting different scenes |
| Perspective Geometry | Verify vanishing points are consistent | Impossible architectural angles |
| Audio-Visual Sync | Lip movements match phoneme timing [7, 8] | Desync in voice clone overlays |
| Temporal Plausibility | Metadata matches claimed time/location | Weather, daylight inconsistent with timestamp |

Example: Shadow Analysis

```python
def analyze_shadow_consistency(image: np.ndarray) -> dict:
    """
    Detect physically impossible shadow configurations.

    A single light source produces shadows in consistent directions.
    Composited images often have inconsistent shadow angles.
    """
    shadows = detect_shadows(image)
    objects = detect_objects(image)

    shadow_vectors = []
    for obj, shadow in zip(objects, shadows):
        vector = compute_shadow_vector(obj, shadow)
        shadow_vectors.append(vector)

    # All vectors should converge to consistent light source
    consistency_score = measure_vector_convergence(shadow_vectors)

    return {
        "shadow_vectors": shadow_vectors,
        "consistency_score": consistency_score,
        "physically_plausible": consistency_score > 0.85
    }
```
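The vector-convergence idea can be made concrete with a toy, self-contained sketch (the `convergence` helper and the sample vectors are illustrative, not the skill's `measure_vector_convergence`): shadow direction vectors cast by a single light source should all point roughly the same way, so their mean pairwise cosine similarity stays near 1.0, while a pasted-in object drags the score down.

```python
import math

def convergence(vectors):
    """Mean pairwise cosine similarity of 2D direction vectors (1.0 = parallel)."""
    def cos(a, b):
        dot = a[0] * b[0] + a[1] * b[1]
        return dot / (math.hypot(*a) * math.hypot(*b))
    pairs = [(a, b) for i, a in enumerate(vectors) for b in vectors[i + 1:]]
    return sum(cos(a, b) for a, b in pairs) / len(pairs)

consistent = [(1.0, 0.2), (1.0, 0.25), (0.9, 0.2)]   # one light source
composite  = [(1.0, 0.2), (-0.8, 0.5), (0.9, 0.2)]   # one pasted object

assert convergence(consistent) > 0.85   # physically plausible
assert convergence(composite) < 0.85    # flagged as inconsistent
```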


8. Authenticity Scoring System

Analysis Layer Weighting

Based on scientific reliability of each method (from webconsulting.at forensics research):
| Analysis Layer | Weight | Rationale |
|---|---|---|
| Signal Analysis | 45% | Objective forensic signals: noise patterns, compression artifacts, frequency analysis. Hybrid approaches achieve F1 scores of 0.96 on benchmarks |
| Metadata Analysis | 35% | EXIF provenance chain. 62% of images have camera-specific signatures; 99% are manufacturer-identifiable |
| Semantic Analysis | 20% | AI-based artifact detection. Only 58% accuracy on standard benchmarks; OpenAI discontinued its detector in 2023 due to low accuracy |
| C2PA (Bonus) | +25-40 points | Cryptographic proof, the only unforgeable method. Combined with AI detection, reduces false positives by 41% |
Important: Without C2PA verification, maximum achievable grade is 2 ("No manipulation indicators"). Grade 1 ("Provenance cryptographically verified") requires a validated signature chain.

Probability to Grade Mapping

| Authenticity % | Grade | Interpretation |
|---|---|---|
| 90-100% | 1 (Excellent) | Evidence-based authenticity: valid PRNU/PCE fingerprint; absence of DQ artifacts |
| 75-89% | 2 (Good) | Probably authentic: consistent IGH profiles; minor deviations from standard compression |
| 50-74% | 3 (Satisfactory) | Hybrid content detected: requires human-in-the-loop verification |
| 35-49% | 4 (Adequate) | Significant statistical anomalies: noise profile inconsistencies indicate local editing |
| 20-34% | 5 (Poor) | High manipulation probability: positive splicing detection via DQ maps |
| < 20% | 6 (Fail) | Confirmed forgery: forensic evidence of synthetic generation (GAN fingerprints) or physical impossibilities |
| 真实性百分比 | 等级 | 解释 |
|---|---|---|
| 90 - 100% | 1(优秀) | 基于证据的真实性:有效的PRNU/PCE指纹;无DQ痕迹 |
| 75 - 89% | 2(良好) | 可能真实:IGH特征一致;与标准压缩存在微小偏差 |
| 50 - 74% | 3(满意) | 检测到混合内容:需要人工介入验证 |
| 35 - 49% | 4(合格) | 存在明显统计异常:噪声特征不一致表明存在局部编辑 |
| 20 - 34% | 5(较差) | 高篡改概率:通过DQ图检测到拼接 |
| < 20% | 6(不合格) | 确认伪造:存在合成生成(GAN指纹)或物理不可能的取证证据 |
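The thresholds above translate directly into the `map_to_grade` helper used by the composite scoring code in this section; a sketch that also applies the C2PA cap from the weighting note (the `c2pa_verified` parameter is an assumption):

```python
def map_to_grade(authenticity: float, c2pa_verified: bool = False) -> int:
    # Thresholds from the probability-to-grade table (1 = best, 6 = worst)
    grade = 6
    for threshold, g in [(0.90, 1), (0.75, 2), (0.50, 3), (0.35, 4), (0.20, 5)]:
        if authenticity >= threshold:
            grade = g
            break
    # Without cryptographic provenance, Grade 1 is unreachable: cap at 2
    if grade == 1 and not c2pa_verified:
        grade = 2
    return grade
```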

Composite Scoring Algorithm

综合评分算法

python
def calculate_authenticity_score(media_path: str) -> dict:
    """
    Calculate composite authenticity score from multiple forensic signals.
    """
    image = load_media(media_path)
    
    scores = {
        "prnu_pce": analyze_prnu(image),           # Weight: 0.25
        "igh_profile": analyze_igh_profile(image), # Weight: 0.20
        "dq_artifacts": detect_dq_artifacts(image), # Weight: 0.20
        "gan_fingerprints": detect_gan_fingerprints(image), # Weight: 0.15
        "semantic_consistency": check_semantic_consistency(image), # Weight: 0.20
    }
    
    weights = [0.25, 0.20, 0.20, 0.15, 0.20]
    composite = sum(s * w for s, w in zip(scores.values(), weights))
    
    return {
        "authenticity_probability": composite,
        "grade": map_to_grade(composite),
        "detailed_scores": scores,
        "confidence_interval": calculate_confidence(scores)
    }

python
def calculate_authenticity_score(media_path: str) -> dict:
    """
    从多个取证信号计算综合真实性评分。
    """
    image = load_media(media_path)
    
    scores = {
        "prnu_pce": analyze_prnu(image),           # 权重: 0.25
        "igh_profile": analyze_igh_profile(image), # 权重: 0.20
        "dq_artifacts": detect_dq_artifacts(image), # 权重: 0.20
        "gan_fingerprints": detect_gan_fingerprints(image), # 权重: 0.15
        "semantic_consistency": check_semantic_consistency(image), # 权重: 0.20
    }
    
    weights = [0.25, 0.20, 0.20, 0.15, 0.20]
    composite = sum(s * w for s, w in zip(scores.values(), weights))
    
    return {
        "authenticity_probability": composite,
        "grade": map_to_grade(composite),
        "detailed_scores": scores,
        "confidence_interval": calculate_confidence(scores)
    }

9. Content Provenance (C2PA / CAI)

9. 内容来源(C2PA / CAI)

C2PA Steering Committee (2026)

C2PA指导委员会(2026年)

The Coalition for Content Provenance and Authenticity is governed by major technology and media companies:
| Member | Role |
|---|---|
| Adobe | Founding member, CAI lead |
| BBC | Media organization representative |
| Google | Platform integration |
| Meta | Social platform adoption |
| Microsoft | Enterprise integration (365) |
| OpenAI | AI generator signing (DALL-E, ChatGPT) |
| Publicis Groupe | Advertising industry adoption |
| Sony | Hardware integration (cameras) |
| Truepic | Mobile authentication pioneer |
内容来源与真实性联盟(C2PA)由主要科技和媒体公司主导:
| 成员 | 角色 |
|---|---|
| Adobe | 创始成员、CAI主导方 |
| BBC | 媒体组织代表 |
| Google | 平台集成 |
| Meta | 社交平台采用 |
| Microsoft | 企业集成(365) |
| OpenAI | AI生成内容签名(DALL-E、ChatGPT) |
| Publicis Groupe | 广告行业采用 |
| Sony | 硬件集成(相机) |
| Truepic | 移动认证先驱 |

Content Credentials: The CR Icon

内容凭证:CR图标

C2PA is the technical standard. Content Credentials is the user-facing implementation with the visible "CR" icon.
| What the CR Icon Shows | Description |
|---|---|
| Creator | Who created the media (camera, person, AI) |
| Software | What software was used for editing |
| AI Disclosure | Whether AI was used for generation |
| Edit History | What editing steps occurred |
Key Feature: All assertions are cryptographically signed. Changing even one pixel invalidates the signature—manipulation is immediately detectable.
C2PA是技术标准。内容凭证是面向用户的实现,带有可见的“CR”图标。
| CR图标显示内容 | 描述 |
|---|---|
| 创作者 | 媒体创作者(相机、个人、AI) |
| 软件 | 编辑所用软件 |
| AI披露 | 是否使用AI生成 |
| 编辑历史 | 进行过的编辑步骤 |
核心特性:所有声明均经过加密签名。即使更改一个像素也会使签名无效——篡改可立即被检测到。
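The pixel-level fragility of the signature follows from the hash function underneath: the signer signs a digest of the media bytes, and any change to those bytes, however small, produces a completely different digest. A minimal stdlib illustration (the byte buffers are stand-ins for real image data):

```python
import hashlib

def content_hash(data: bytes) -> str:
    """SHA-256 digest of the raw media bytes (what the signature covers)."""
    return hashlib.sha256(data).hexdigest()

pixels = bytes(range(256)) * 16     # stand-in for decoded image data
tampered = bytearray(pixels)
tampered[0] ^= 0x01                 # flip a single bit in one "pixel"

# The digests differ entirely, so a signature over the original
# digest no longer validates the tampered bytes
assert content_hash(pixels) != content_hash(bytes(tampered))
```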

Industry Adoption (2025-2026)

行业采用情况(2025-2026年)

C2PA is rapidly becoming the industry standard. Current adoption landscape:
| Category | Adopters | Status |
|---|---|---|
| AI Generators | DALL-E 3, Adobe Firefly, Google Gemini | Auto-sign all outputs |
| Software | Adobe Photoshop, Lightroom | Cryptographic edit history |
| Professional Cameras | Leica M11-P, Sony (select models) | Sign at capture |
| Camera Manufacturers | Nikon, Canon | Following (announced) |
| Smartphones | Google Pixel 10 (2025/26) | Native C2PA support |
| Mobile OEMs | Samsung Galaxy | Following (announced) |
| Enterprise | Microsoft 365 | Mandatory AI watermarks (2026) |

Prognosis (3-5 years): for media organizations and government agencies, no file without cryptographic provenance will be considered trustworthy.
C2PA正迅速成为行业标准。当前采用情况:
| 类别 | 采用方 | 状态 |
|---|---|---|
| AI生成器 | DALL-E 3、Adobe Firefly、Google Gemini | 自动为所有输出签名 |
| 软件 | Adobe Photoshop、Lightroom | 加密编辑历史 |
| 专业相机 | Leica M11-P、Sony(部分型号) | 拍摄时签名 |
| 相机制造商 | Nikon、Canon | 跟进(已宣布) |
| 智能手机 | Google Pixel 10(2025/26年) | 原生C2PA支持 |
| 手机厂商 | Samsung Galaxy | 跟进(已宣布) |
| 企业 | Microsoft 365 | 强制AI水印(2026年) |

预测(3-5年):对于媒体组织和政府机构而言,若无加密来源,任何文件都不会被视为可信。

Content Authenticity Initiative (CAI) Integration

内容真实性倡议(CAI)集成

json
{
  "claim": {
    "recorder": "Canon EOS R5",
    "signature": {
      "alg": "ES256",
      "cert": "-----BEGIN CERTIFICATE-----...",
      "sig": "base64_signature"
    },
    "assertions": [
      {
        "label": "c2pa.actions",
        "data": {
          "actions": [
            {
              "action": "c2pa.created",
              "when": "2026-01-20T14:32:00Z",
              "softwareAgent": "Canon DPP 4.17"
            }
          ]
        }
      }
    ]
  }
}
json
{
  "claim": {
    "recorder": "Canon EOS R5",
    "signature": {
      "alg": "ES256",
      "cert": "-----BEGIN CERTIFICATE-----...",
      "sig": "base64_signature"
    },
    "assertions": [
      {
        "label": "c2pa.actions",
        "data": {
          "actions": [
            {
              "action": "c2pa.created",
              "when": "2026-01-20T14:32:00Z",
              "softwareAgent": "Canon DPP 4.17"
            }
          ]
        }
      }
    ]
  }
}

Provenance Verification Workflow

来源验证工作流

python
def verify_c2pa_provenance(media_path: str) -> dict:
    """
    Verify Content Authenticity Initiative (C2PA) manifests.
    
    C2PA provides cryptographic proof of media origin and edit history.
    """
    manifest = extract_c2pa_manifest(media_path)
    
    if not manifest:
        return {
            "has_provenance": False,
            "recommendation": "No cryptographic provenance available. Proceed with forensic analysis."
        }
    
    # Verify certificate chain
    cert_valid = verify_certificate_chain(manifest["signature"]["cert"])
    
    # Verify signature
    sig_valid = verify_signature(
        manifest["claim"],
        manifest["signature"]["sig"],
        manifest["signature"]["alg"]
    )
    
    # Check assertion integrity
    assertions_valid = verify_assertions(manifest["assertions"])
    
    return {
        "has_provenance": True,
        "certificate_valid": cert_valid,
        "signature_valid": sig_valid,
        "assertions_valid": assertions_valid,
        "edit_history": extract_edit_history(manifest),
        "original_device": manifest["claim"].get("recorder"),
        "overall_valid": all([cert_valid, sig_valid, assertions_valid])
    }

python
def verify_c2pa_provenance(media_path: str) -> dict:
    """
    验证内容真实性倡议(C2PA)清单。
    
    C2PA提供媒体来源和编辑历史的加密证明。
    """
    manifest = extract_c2pa_manifest(media_path)
    
    if not manifest:
        return {
            "has_provenance": False,
            "recommendation": "无加密来源信息。继续进行取证分析。"
        }
    
    # 验证证书链
    cert_valid = verify_certificate_chain(manifest["signature"]["cert"])
    
    # 验证签名
    sig_valid = verify_signature(
        manifest["claim"],
        manifest["signature"]["sig"],
        manifest["signature"]["alg"]
    )
    
    # 检查声明完整性
    assertions_valid = verify_assertions(manifest["assertions"])
    
    return {
        "has_provenance": True,
        "certificate_valid": cert_valid,
        "signature_valid": sig_valid,
        "assertions_valid": assertions_valid,
        "edit_history": extract_edit_history(manifest),
        "original_device": manifest["claim"].get("recorder"),
        "overall_valid": all([cert_valid, sig_valid, assertions_valid])
    }

10. Forensic Report Template

10. 取证报告模板

Module A: Media Metadata & Summary

模块A:媒体元数据与摘要


Media Authentication Report

媒体认证报告

Metadata

元数据

| Field | Value |
|---|---|
| Report ID | MAR-2026-001 |
| Classification | Confidential |
| Analysis Date | 2026-01-23 15:00 UTC |
| Media Type | Video (MP4) |
| Duration | 00:02:34 |
| Resolution | 1920x1080 |
| File Hash (SHA-256) | a1b2c3d4... |
| Lead Analyst | Forensic AI Agent |
| 字段 | 值 |
|---|---|
| 报告ID | MAR-2026-001 |
| 分类 | 机密 |
| 分析日期 | 2026-01-23 15:00 UTC |
| 媒体类型 | 视频(MP4) |
| 时长 | 00:02:34 |
| 分辨率 | 1920x1080 |
| 文件哈希(SHA-256) | a1b2c3d4... |
| 首席分析师 | 取证AI Agent |

Executive Summary (max 200 words)

执行摘要(最多200字)

Analysis of [MEDIA_FILE] reveals [AUTHENTICITY_ASSESSMENT]. Key findings include [PRIMARY_INDICATORS]. The composite authenticity score is [SCORE]% (Grade: [GRADE]). [RECOMMENDATION].
对[MEDIA_FILE]的分析显示[真实性评估]。 主要发现包括[主要指标]。综合真实性评分为[评分]%(等级:[等级])。[建议]。

Authenticity Assessment

真实性评估

| Criterion | Score | Status |
|---|---|---|
| PRNU/PCE Match | 45/100 | ⚠️ Inconclusive |
| IGH Profile | 82/100 | ✅ Consistent |
| DQ Artifacts | 23/100 | ❌ Detected |
| GAN Fingerprints | 15/100 | ❌ Detected |
| Semantic Consistency | 67/100 | ⚠️ Minor issues |
| Composite Score | 34% | Grade: 5 |
| 标准 | 得分 | 状态 |
|---|---|---|
| PRNU/PCE匹配 | 45/100 | ⚠️ 不确定 |
| IGH特征 | 82/100 | ✅ 一致 |
| DQ痕迹 | 23/100 | ❌ 检测到 |
| GAN指纹 | 15/100 | ❌ 检测到 |
| 语义一致性 | 67/100 | ⚠️ 轻微问题 |
| 综合得分 | 34% | 等级:5 |

Module B: Technical Evidence

模块B:技术证据


Technical Analysis

技术分析

PRNU/PCE Analysis

PRNU/PCE分析

  • Reference device: Not identified
  • PCE Value: 23.4 (below threshold of 40)
  • Interpretation: Cannot establish camera origin
  • 参考设备:未识别
  • PCE值:23.4(低于40的阈值)
  • 解释:无法确定相机来源

Double Quantization Map

双重量化图

DQ Probability Map
  • Red regions indicate areas with different compression histories
  • Face region shows 89% probability of splicing
DQ概率图
  • 红色区域表示具有不同压缩历史的区域
  • 人脸区域拼接概率为89%

GAN Fingerprint Analysis

GAN指纹分析

  • Checkerboard artifacts: Detected (FFT analysis)
  • Suspected generator: StyleGAN2-derived architecture
  • Confidence: 78%
  • 棋盘格痕迹:检测到(FFT分析)
  • 疑似生成器:StyleGAN2衍生架构
  • 置信度:78%
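Checkerboard artifacts from upsampling layers concentrate spectral energy at the period-2 (Nyquist) bin, which is what an FFT check looks for; a minimal sketch (the scoring heuristic is an illustration, not the reference detector):

```python
import numpy as np

def checkerboard_score(image: np.ndarray) -> float:
    # Checkerboard (period-2) patterns place their energy at the Nyquist
    # bin (h//2, w//2) of the 2-D FFT; the ratio of that bin to the total
    # spectral magnitude is a crude artifact indicator
    img = image - image.mean()          # remove DC so it doesn't dominate
    spec = np.abs(np.fft.fft2(img))
    h, w = spec.shape
    return float(spec[h // 2, w // 2] / (spec.sum() + 1e-12))
```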

Semantic Inconsistencies

语义不一致性

  1. Shadow direction: Inconsistent (2 apparent light sources)
  2. Eye reflections: Different scene reflected in each eye
  3. Audio-visual sync: 120ms average desync detected

---
  1. 阴影方向:不一致(存在2个明显光源)
  2. 眼部反射:双眼反射不同场景
  3. 音视频同步:检测到平均120ms延迟

---
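The audio-visual desync figure above can be estimated by cross-correlating the audio loudness envelope with a per-frame mouth-openness signal; a sketch assuming both are sampled at the video frame rate (the signal names are hypothetical):

```python
import numpy as np

def estimate_av_offset_ms(audio_env: np.ndarray, mouth_open: np.ndarray,
                          fps: float = 25.0) -> float:
    # Lag of the cross-correlation peak between the two zero-mean signals;
    # a positive result means the audio lags the mouth movement
    a = audio_env - audio_env.mean()
    m = mouth_open - mouth_open.mean()
    corr = np.correlate(a, m, mode="full")
    lag_frames = int(corr.argmax()) - (len(m) - 1)
    return lag_frames * 1000.0 / fps
```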

11. Defense Strategies

11. 防御策略

Technical Controls

技术控制

| Control | Implementation | Effectiveness |
|---|---|---|
| C2PA/CAI Validation | Require provenance for high-stakes media | High (when available) |
| Automated Screening | Deploy detection pipeline for inbound media | Medium (arms race) |
| Multi-Signal Fusion | Combine PRNU, IGH, DQ, semantic signals | High |
| Human-in-the-Loop | Expert review for Grade 3-4 cases | High |
| 控制措施 | 实现方式 | 有效性 |
|---|---|---|
| C2PA/CAI验证 | 高风险媒体需提供来源 | 高(当可用时) |
| 自动化筛查 | 部署入站媒体检测流水线 | 中(军备竞赛) |
| 多信号融合 | 结合PRNU、IGH、DQ、语义信号 | 高 |
| 人工介入 | 专家审核3-4级案例 | 高 |

Organizational Inoculation

组织防护

  1. Pre-bunking: Educate stakeholders about deepfake capabilities before exposure
  2. Source Triangulation: Verify claims through multiple independent sources
  3. Temporal Delay: Wait for verification before amplifying uncertain content
  4. Provenance Requirement: Mandate C2PA for critical communications
  1. 预辟谣:在接触前教育利益相关者了解深度伪造能力
  2. 来源三角验证:通过多个独立来源验证声明
  3. 时间延迟:在传播不确定内容前等待验证结果
  4. 来源要求:关键通信强制要求C2PA

Response When Targeted (Personal Deepfakes)

被攻击时的应对措施(个人深度伪造)

If you or your client are depicted in a deepfake:
| Step | Action | Details |
|---|---|---|
| 1 | Preserve Evidence | Screenshot with timestamp, save URL, download file |
| 2 | Platform Takedown | Report to platform using manipulation/deepfake reporting tools |
| 3 | Legal Assessment | Consult attorney for jurisdiction-specific remedies |
| 4 | Support Resources | Contact victim support organizations |
若您或您的客户被深度伪造内容针对:
| 步骤 | 行动 | 详情 |
|---|---|---|
| 1 | 保存证据 | 带时间戳的截图、保存URL、下载文件 |
| 2 | 平台下架 | 通过平台的篡改/深度伪造举报工具举报 |
| 3 | 法律评估 | 咨询律师了解司法管辖区特定救济措施 |
| 4 | 支持资源 | 联系受害者支持组织 |

Legal Framework (Austria)

奥地利法律框架

| Statute | Protection | Application |
|---|---|---|
| § 78 UrhG | Recht am eigenen Bild (right to one's own image) | Unauthorized use of likeness |
| § 107c StGB | Cybermobbing (cyberbullying) | Persistent harassment via digital means |
| § 120a StGB | Unbefugte Bildaufnahmen (unauthorized image capture) | Intimate imagery without consent |
Note: Similar protections exist across EU member states under the Digital Services Act (DSA) and GDPR. Consult local counsel for jurisdiction-specific advice.
| 法规 | 保护内容 | 适用场景 |
|---|---|---|
| § 78 UrhG | 肖像权 | 未经授权使用肖像 |
| § 107c StGB | 网络欺凌 | 通过数字手段持续骚扰 |
| § 120a StGB | 非法拍摄 | 未经同意的私密内容 |
注意:欧盟成员国根据《数字服务法》(DSA)和GDPR也有类似保护措施。请咨询当地律师了解司法管辖区特定建议。

Detection Pipeline Example

检测流水线示例

python
class MediaAuthenticationPipeline:
    """
    Production pipeline for automated media authentication.
    """
    
    def __init__(self, config: PipelineConfig):
        self.prnu_analyzer = PRNUAnalyzer(config.prnu_db)
        self.igh_classifier = IGHClassifier(config.igh_model)
        self.dq_detector = DQDetector()
        self.gan_detector = GANFingerprintDetector(config.gan_signatures)
        self.semantic_analyzer = SemanticAnalyzer(config.llm_endpoint)
        self.c2pa_validator = C2PAValidator(config.trusted_roots)
    
    async def authenticate(self, media_path: str) -> AuthenticationResult:
        # Check provenance first (fast path)
        provenance = await self.c2pa_validator.verify(media_path)
        if provenance.valid:
            return AuthenticationResult(
                authentic=True,
                confidence=0.99,
                method="cryptographic_provenance"
            )
        
        # Run forensic analysis in parallel
        results = await asyncio.gather(
            self.prnu_analyzer.analyze(media_path),
            self.igh_classifier.classify(media_path),
            self.dq_detector.detect(media_path),
            self.gan_detector.detect(media_path),
            self.semantic_analyzer.analyze(media_path)
        )
        
        # Fuse signals
        composite = self.fuse_signals(results)
        
        return AuthenticationResult(
            authentic=composite.score > 0.75,
            confidence=composite.confidence,
            grade=composite.grade,
            details=results,
            method="forensic_analysis"
        )

python
class MediaAuthenticationPipeline:
    """
    用于自动化媒体认证的生产级流水线。
    """
    
    def __init__(self, config: PipelineConfig):
        self.prnu_analyzer = PRNUAnalyzer(config.prnu_db)
        self.igh_classifier = IGHClassifier(config.igh_model)
        self.dq_detector = DQDetector()
        self.gan_detector = GANFingerprintDetector(config.gan_signatures)
        self.semantic_analyzer = SemanticAnalyzer(config.llm_endpoint)
        self.c2pa_validator = C2PAValidator(config.trusted_roots)
    
    async def authenticate(self, media_path: str) -> AuthenticationResult:
        # 先检查来源(快速路径)
        provenance = await self.c2pa_validator.verify(media_path)
        if provenance.valid:
            return AuthenticationResult(
                authentic=True,
                confidence=0.99,
                method="cryptographic_provenance"
            )
        
        # 并行运行取证分析
        results = await asyncio.gather(
            self.prnu_analyzer.analyze(media_path),
            self.igh_classifier.classify(media_path),
            self.dq_detector.detect(media_path),
            self.gan_detector.detect(media_path),
            self.semantic_analyzer.analyze(media_path)
        )
        
        # 信号融合
        composite = self.fuse_signals(results)
        
        return AuthenticationResult(
            authentic=composite.score > 0.75,
            confidence=composite.confidence,
            grade=composite.grade,
            details=results,
            method="forensic_analysis"
        )

12. Tool & Dataset References

12. 工具与数据集参考

Detection Tools

检测工具

| Tool | Type | Description |
|---|---|---|
| FaceForensics++ | Dataset & Benchmark | Standard deepfake detection benchmark |
| Sensity | Commercial | Enterprise deepfake detection API |
| Microsoft Video Authenticator | Tool | Frame-by-frame manipulation scoring |
| C2PA Tool | CLI | Content provenance verification |
| Content Credentials Verify | Web | Online C2PA verification (CAI) |
| webconsulting Forensik-Tool | Web | Multi-layer analysis (EXIF, C2PA, Signal, AI) |
| 工具 | 类型 | 描述 |
|---|---|---|
| FaceForensics++ | 数据集与基准测试 | 标准深度伪造检测基准 |
| Sensity | 商业工具 | 企业级深度伪造检测API |
| Microsoft Video Authenticator | 工具 | 逐帧篡改评分 |
| C2PA Tool | CLI | 内容来源验证 |
| Content Credentials Verify | 网页工具 | 在线C2PA验证(CAI) |
| webconsulting Forensik-Tool | 网页工具 | 多层分析(EXIF、C2PA、信号、AI) |

Reference Datasets

参考数据集

| Dataset | Content | Use Case |
|---|---|---|
| DARPA MediFor | Multi-modal manipulation | Comprehensive forensic training |
| DARPA SemaFor | Semantic manipulation | Semantic consistency models |
| Google/Jigsaw DeepFake | Face-swap videos | Video deepfake detection |
| Facebook DFDC | Diverse deepfakes | Large-scale detection training |
| StyleGAN2 FFHQ | Synthetic faces | GAN fingerprint analysis |
| 数据集 | 内容 | 用例 |
|---|---|---|
| DARPA MediFor | 多模态篡改内容 | 综合取证训练 |
| DARPA SemaFor | 语义篡改内容 | 语义一致性模型 |
| Google/Jigsaw DeepFake | 换脸视频 | 视频深度伪造检测 |
| Facebook DFDC | 多样深度伪造内容 | 大规模检测训练 |
| StyleGAN2 FFHQ | 合成人脸 | GAN指纹分析 |

Industry Standards

行业标准

  • C2PA (Coalition for Content Provenance and Authenticity): Cryptographic media provenance
  • CAI (Content Authenticity Initiative): Adobe-led provenance standard
  • IPTC Photo Metadata: Standard metadata for photographic content

  • C2PA(内容来源与真实性联盟):加密媒体来源
  • CAI(内容真实性倡议):Adobe主导的来源标准
  • IPTC Photo Metadata:摄影内容标准元数据

13. Checklists

13. 检查清单

Pre-Analysis Checklist

分析前检查清单

  • Obtain original file (avoid screenshots, re-uploads)
  • Preserve file hash (SHA-256) for chain of custody
  • Document source and context of media
  • Check for C2PA/CAI provenance data
  • Identify claimed device/source for PRNU matching
  • 获取原始文件(避免截图、重新上传)
  • 保存文件哈希(SHA-256)以确保监管链
  • 记录媒体来源和上下文
  • 检查C2PA/CAI来源数据
  • 识别声称的设备/来源用于PRNU匹配
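For the hash-preservation step, a streaming SHA-256 avoids loading large video files into memory; a minimal sketch:

```python
import hashlib

def file_sha256(path: str, chunk_size: int = 1 << 20) -> str:
    # Stream the file in 1 MiB chunks so large videos never need to
    # fit in memory; the hex digest goes into the chain-of-custody record
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()
```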

Analysis Checklist

分析检查清单

  • Run PRNU/PCE analysis (if reference available)
  • Generate IGH profile and classify blur types
  • Create DQ probability map for splicing detection
  • Analyze for GAN fingerprints (FFT spectrum)
  • Check semantic consistency (shadows, reflections, physics)
  • For video: temporal consistency, audio-visual sync
  • Calculate composite authenticity score
  • 运行PRNU/PCE分析(若有参考设备)
  • 生成IGH特征并分类模糊类型
  • 创建DQ概率图用于拼接检测
  • 分析GAN指纹(FFT频谱)
  • 检查语义一致性(阴影、反射、物理特性)
  • 视频:时间一致性、音视频同步
  • 计算综合真实性评分

Reporting Checklist

报告检查清单

  • Document all findings with confidence levels
  • Include visualizations (DQ maps, FFT spectra)
  • Provide grade interpretation with caveats
  • List limitations of analysis
  • Recommend human review for borderline cases

  • 记录所有发现及置信度
  • 包含可视化内容(DQ图、FFT频谱)
  • 提供等级解释及注意事项
  • 列出分析局限性
  • 建议对边界案例进行人工审核

14. Limitations & Caveats

14. 局限性与注意事项

Known Detection Challenges

已知检测挑战

| Challenge | Impact | Mitigation |
|---|---|---|
| Computational imaging (HDR+, Night Sight) | Destroys PRNU | Rely on semantic analysis |
| Social media compression | Removes fine artifacts | Focus on coarse-grained signals |
| Adversarial attacks on detectors | Evades specific models | Multi-model ensemble |
| Rapid GAN evolution | Outdated fingerprints | Continuous model updates |
| Metadata stripping | Screenshots, re-uploads remove C2PA | Invisible watermarks coupled with C2PA |
C2PA Challenge: Screenshots and social media uploads can strip metadata ("stripping attack"). The industry is developing invisible watermarks that survive re-encoding and link back to C2PA manifests.
| 挑战 | 影响 | 缓解措施 |
|---|---|---|
| 计算成像(HDR+、Night Sight) | 破坏PRNU | 依赖语义分析 |
| 社交媒体压缩 | 去除精细痕迹 | 关注粗粒度信号 |
| 针对检测器的对抗攻击 | 规避特定模型 | 多模型集成 |
| GAN快速演进 | 指纹过时 | 持续更新模型 |
| 元数据剥离 | 截图、重新上传会移除C2PA | 与C2PA结合的隐形水印 |
C2PA挑战:截图和社交媒体上传会剥离元数据(“剥离攻击”)。行业正在开发可在重新编码后存活并链接回C2PA清单的隐形水印。

Ethical Considerations

伦理考量

  1. False Positives: Incorrectly flagging authentic media can cause harm
  2. Dual Use: Detection research enables better synthesis
  3. Automation Bias: Over-reliance on automated verdicts
  4. Privacy: PRNU databases can identify individuals
  1. 误报:错误标记真实媒体会造成伤害
  2. 双重用途:检测研究可能促进更优合成技术
  3. 自动化偏见:过度依赖自动化结论
  4. 隐私:PRNU数据库可识别个人

The Liar's Dividend

说谎者红利

The mere existence of deepfakes allows bad actors to dismiss authentic evidence as fake. Detection tools must be communicated carefully to avoid amplifying this effect.

深度伪造的存在本身就允许不良行为者将真实证据污蔑为伪造。必须谨慎传达检测工具的能力,避免放大这种效应。

References

参考文献

Academic Research (2024-2025)

学术研究(2024-2025年)

  1. Ramanaharan, R. et al. (2025). "DeepFake video detection: Insights into model generalisation." Forensic Science International: Digital Investigation, Vol. 52. DOI: 10.1016/j.fsidi.2025.301875
    • Systematic review of generalizability in deepfake detection techniques
  2. Ahmed, N. et al. (2024). "Visual Deepfake Detection: Review of Techniques, Tools, and Datasets." IEEE Access, Vol. 12, pp. 180234-180261. DOI: 10.1109/ACCESS.2024.3511641
    • Comprehensive review covering 2018-2024 with 16+ citations
  3. Cassia, M. et al. (2025). "Deepfake Forensic Analysis: Source Dataset Attribution and Legal Implications." arXiv:2505.11110. arXiv
    • Dataset attribution for legal proceedings
  4. Nature Scientific Reports (2025). "Deepfake video deception detection using visual attention mechanisms." Sci Rep 15, 23920. DOI: 10.1038/s41598-025-23920-0
    • Novel attention-based detection methods
  5. Nature Scientific Reports (2025). "Detection of AI generated images using combined uncertainty estimation." Sci Rep 15, 28572. DOI: 10.1038/s41598-025-28572-8
    • Diffusion model detection via uncertainty quantification
  6. ACM CHI (2025). "Characterizing Photorealism and Artifacts in Diffusion Model Images." Proceedings of CHI 2025. DOI: 10.1145/3706598.3713962
    • Human perception studies on diffusion model artifacts
  1. Ramanaharan, R. et al. (2025). "DeepFake video detection: Insights into model generalisation." Forensic Science International: Digital Investigation, Vol. 52. DOI: 10.1016/j.fsidi.2025.301875
    • 深度伪造检测技术泛化能力的系统综述
  2. Ahmed, N. et al. (2024). "Visual Deepfake Detection: Review of Techniques, Tools, and Datasets." IEEE Access, Vol. 12, pp. 180234-180261. DOI: 10.1109/ACCESS.2024.3511641
    • 涵盖2018-2024年的综合综述,引用16+文献
  3. Cassia, M. et al. (2025). "Deepfake Forensic Analysis: Source Dataset Attribution and Legal Implications." arXiv:2505.11110. arXiv
    • 法律程序中的数据集归因
  4. Nature Scientific Reports (2025). "Deepfake video deception detection using visual attention mechanisms." Sci Rep 15, 23920. DOI: 10.1038/s41598-025-23920-0
    • 基于注意力机制的新型检测方法
  5. Nature Scientific Reports (2025). "Detection of AI generated images using combined uncertainty estimation." Sci Rep 15, 28572. DOI: 10.1038/s41598-025-28572-8
    • 通过不确定性量化检测扩散模型生成内容
  6. ACM CHI (2025). "Characterizing Photorealism and Artifacts in Diffusion Model Images." Proceedings of CHI 2025. DOI: 10.1145/3706598.3713962
    • 扩散模型图像的照片真实感与痕迹的人类感知研究

Audio Deepfake Research

音频深度伪造研究

  1. PMC/NIH (2024). "Audio Deepfake Detection: What Has Been Achieved and What Lies Ahead." PMCID: PMC11991371. PMC
    • Comprehensive review of audio synthesis detection
  2. Forensic Science International (2025). "Forensic deepfake audio detection using segmental speech features." DOI: 10.1016/j.forsciint.2025.112345
    • Acoustic feature analysis for voice clone detection
  3. UC Berkeley I-School (2025). "FairVoice: An Equitable Audio Deepfake Detector." Project Page
    • Addressing bias in audio deepfake detection
  1. PMC/NIH (2024). "Audio Deepfake Detection: What Has Been Achieved and What Lies Ahead." PMCID: PMC11991371. PMC
    • 音频合成检测的综合综述
  2. Forensic Science International (2025). "Forensic deepfake audio detection using segmental speech features." DOI: 10.1016/j.forsciint.2025.112345
    • 基于分段语音特征的语音克隆取证检测
  3. UC Berkeley I-School (2025). "FairVoice: An Equitable Audio Deepfake Detector." Project Page
    • 解决音频深度伪造检测中的偏见问题

Benchmarks & Datasets

基准测试与数据集

  1. Deepfake-Eval-2024 (2025). "A Multi-Modal In-the-Wild Benchmark of Deepfakes Circulated in 2024." arXiv:2503.02857. arXiv
    • Real-world social media deepfake benchmark
  2. DeepfakeBench (2024). "A Comprehensive Benchmark of Deepfake Detection." GitHub/SCLBD. Repository
    • Standardized evaluation framework
  1. Deepfake-Eval-2024 (2025). "A Multi-Modal In-the-Wild Benchmark of Deepfakes Circulated in 2024." arXiv:2503.02857. arXiv
    • 真实世界社交媒体深度伪造基准
  2. DeepfakeBench (2024). "A Comprehensive Benchmark of Deepfake Detection." GitHub/SCLBD. Repository
    • 标准化评估框架

Government Programs

政府项目

  1. DARPA (2025). "Furthering Deepfake Defenses." DARPA News. Press Release
    • SemaFor program transition to operational use
  2. DARPA SemaFor (2020-2024). "Semantic Forensics Program." Program Page
    • Original program documentation
  3. UL/DSRI (2025). "DSRI & DARPA Fight Deepfakes with AI Forensics." Article
    • Post-SemaFor continuation
  1. DARPA (2025). "Furthering Deepfake Defenses." DARPA News. Press Release
    • SemaFor项目向实际应用过渡
  2. DARPA SemaFor (2020-2024). "Semantic Forensics Program." Program Page
    • 原始项目文档
  3. UL/DSRI (2025). "DSRI & DARPA Fight Deepfakes with AI Forensics." Article
    • SemaFor项目后续

Industry Standards

行业标准

  1. C2PA (2025). "Content Credentials: C2PA Technical Specification v2.3." Specification
    • Current cryptographic provenance standard
  2. Content Authenticity Initiative (2025). Official Website
    • Industry adoption and implementation resources
  3. C2PA Whitepaper (2025). "Content Credentials: A New Standard for Digital Provenance." PDF
    • Technical overview and use cases
  1. C2PA (2025). "Content Credentials: C2PA Technical Specification v2.3." Specification
    • 当前加密来源标准
  2. Content Authenticity Initiative (2025). Official Website
    • 行业采用与实现资源
  3. C2PA Whitepaper (2025). "Content Credentials: A New Standard for Digital Provenance." PDF
    • 技术概述与用例

Financial Impact & Case Studies

财务影响与案例研究

  1. World Economic Forum (2025). "Detecting dangerous AI is essential in the deepfake era." Article
    • 1,740% surge in deepfake fraud (North America, 2022-2023)
  2. The Guardian (2024). "CEO of world's biggest ad firm targeted by deepfake scam." Article
    • WPP CEO voice clone fraud attempt
  3. Biometric Update (2025). "Voice clones can sound as real as human voices." Article
    • Voice synthesis indistinguishability research
  1. World Economic Forum (2025). "Detecting dangerous AI is essential in the deepfake era." Article
    • 2022-2023年北美深度伪造诈骗案件激增1740%
  2. The Guardian (2024). "CEO of world's biggest ad firm targeted by deepfake scam." Article
    • WPP CEO语音克隆诈骗尝试案例
  3. Biometric Update (2025). "Voice clones can sound as real as human voices." Article
    • 语音合成不可区分性研究

Foundational Works (Pre-2024)

基础研究(2024年前)

  1. Rossler, A. et al. (2019). "FaceForensics++: Learning to Detect Manipulated Facial Images." ICCV 2019. arXiv:1901.08971
    • Foundational benchmark dataset
  2. Wronski, B. et al. (2019). "Handheld Multi-Frame Super-Resolution." ACM SIGGRAPH 2019. Google Research
    • Night Sight and PRNU implications
  3. Kirchner, M. & Fridrich, J. (2019). "PRNU-Based Camera Identification." Digital Image Forensics, Springer.
    • Sensor fingerprint methodology
  1. Rossler, A. et al. (2019). "FaceForensics++: Learning to Detect Manipulated Facial Images." ICCV 2019. arXiv:1901.08971
    • 基础基准数据集
  2. Wronski, B. et al. (2019). "Handheld Multi-Frame Super-Resolution." ACM SIGGRAPH 2019. Google Research
    • Night Sight与PRNU的影响
  3. Kirchner, M. & Fridrich, J. (2019). "PRNU-Based Camera Identification." Digital Image Forensics, Springer.
    • 传感器指纹方法

Deepfake Synthesis Methods

深度伪造合成方法

  1. Thies, J. et al. (2016). "Face2Face: Real-time Face Capture and Reenactment of RGB Videos." CVPR 2016.
    • Foundational face reenactment technique
  2. Prajwal, K.R. et al. (2020). "Wav2Lip: Accurately Lip-syncing Videos In The Wild." ACM Multimedia 2020.
    • Audio-driven lip synchronization
  3. Siarohin, A. et al. (2019). "First Order Motion Model for Image Animation." NeurIPS 2019.
    • Keypoint-based body puppetry
  4. Karras, T. et al. (2020). "Analyzing and Improving the Image Quality of StyleGAN." CVPR 2020.
    • StyleGAN2 architecture and fully synthetic generation
  5. Perov, I. et al. (2020). "DeepFaceLab: Integrated, flexible and extensible face-swapping framework."
    • Widely-used face swap toolkit
  1. Thies, J. et al. (2016). "Face2Face: Real-time Face Capture and Reenactment of RGB Videos." CVPR 2016.
    • 基础人脸重演技术
  2. Prajwal, K.R. et al. (2020). "Wav2Lip: Accurately Lip-syncing Videos In The Wild." ACM Multimedia 2020.
    • 音频驱动唇形同步
  3. Siarohin, A. et al. (2019). "First Order Motion Model for Image Animation." NeurIPS 2019.
    • 基于关键点的身体驱动
  4. Karras, T. et al. (2020). "Analyzing and Improving the Image Quality of StyleGAN." CVPR 2020.
    • StyleGAN2架构与完全合成生成
  5. Perov, I. et al. (2020). "DeepFaceLab: Integrated, flexible and extensible face-swapping framework."
    • 广泛使用的换脸工具包

Face Swap & Synthesis (2025)

换脸与合成(2025年)

  1. GHOST 2.0 (2025). "Generative High-fidelity One Shot Transfer of Heads." arXiv.
    • One-shot face swapping with single reference image
  2. DynamicFace (2025). "High-Quality and Consistent Video Face Swapping using Diffusion." ICCV 2025.
    • Temporal consistency in video face swapping via diffusion models
  3. HFMF (2025). "Hierarchical Fusion for Multi-Modal Forgery Detection." WACV 2025.
    • Multi-modal detection architecture
  1. GHOST 2.0 (2025). "Generative High-fidelity One Shot Transfer of Heads." arXiv.
    • 单样本人脸迁移(仅需一张参考图)
  2. DynamicFace (2025). "High-Quality and Consistent Video Face Swapping using Diffusion." ICCV 2025.
    • 基于扩散模型的视频换脸时间一致性
  3. HFMF (2025). "Hierarchical Fusion for Multi-Modal Forgery Detection." WACV 2025.
    • 多模态伪造检测架构

Policy & Impact (2025)

政策与影响(2025年)

  1. European Parliament (2025). "Children and deepfakes." EPRS Briefing.
    • Policy analysis on deepfake impact on minors

  1. European Parliament (2025). "Children and deepfakes." EPRS Briefing.
    • 深度伪造对未成年人影响的政策分析

Credits & Attribution

致谢与归属

This skill synthesizes methodologies from the multimedia forensics research community, drawing from peer-reviewed publications (2024-2025), DARPA MediFor/SemaFor program outcomes, and industry standards (C2PA v2.3, CAI).
All citations verified as of January 2026.
Developed by webconsulting.at for the Claude skill collection.
本技能综合了多媒体取证研究社区的方法,借鉴了同行评审出版物(2024-2025年)、DARPA MediFor/SemaFor项目成果以及行业标准(C2PA v2.3、CAI)。
所有引用截至2026年1月均已验证。
由webconsulting.at为Claude技能集开发。