A Review of Tools and Technologies to Combat Deepfakes

Erokhin, D. (ORCID: https://orcid.org/0000-0002-5191-0579) & Komendantova, N. (ORCID: https://orcid.org/0000-0003-2568-6179) (2026). A Review of Tools and Technologies to Combat Deepfakes. Information, 17, e347. https://doi.org/10.3390/info17040347

information-17-00347.pdf - Published Version. Available under a Creative Commons Attribution license.

Abstract

Deepfakes and adjacent synthetic-media capabilities have become a systemic challenge to information integrity, security, and digital trust. Countermeasures now span passive detection methods that infer manipulation from content traces, active provenance systems that cryptographically bind metadata to media, and watermarking approaches that embed detectable signals into content or generative processes. This review synthesizes tools and technologies to combat deepfakes across modalities (image, video, audio, and selected multimodal settings), drawing primarily on the peer-reviewed literature, standardized benchmarks, and official technical specifications and reports. It analyzes detection methods; provenance and authentication technologies, with emphasis on cryptographic manifests and threat models; watermarking and content provenance, including diffusion-era watermarking and industrial deployments; adversarial robustness and attacker adaptation; datasets and benchmarks; evaluation metrics across tasks; and deployment and scalability constraints. A dedicated section addresses legal, ethical, and policy issues, focusing on emerging transparency obligations and platform governance. The review finds that no single countermeasure suffices in realistic adversarial settings; the strongest practical approach is a layered defense that combines provenance, watermarking, content-based detection, and human oversight. The study concludes with limitations of the current evidence base and prioritized research directions to improve generalization, interoperability, and trustworthy user experiences.

Item Type: Article
Uncontrolled Keywords: deepfakes; synthetic media; multimedia forensics; deepfake detection; provenance; authentication; content credentials; watermarking; adversarial machine learning; evaluation metrics
Research Programs: Advancing Systems Analysis (ASA)
Advancing Systems Analysis (ASA) > Cooperation and Transformative Governance (CAT)
Depositing User: Michaela Rossini
Date Deposited: 03 Apr 2026 09:28
Last Modified: 03 Apr 2026 09:28
URI: https://pure.iiasa.ac.at/21428
