Seongbin Park, Alexander Vilesov, Jinghuai Zhang, Hossein Khalili, Yuan Tian, Achuta Kadambi, and Nader Sehatbakhsh, University of California, Los Angeles
Deepfake detectors relying on heuristics and machine learning are locked in a perpetual struggle against evolving attacks. In contrast, cryptographic solutions provide strong safeguards against deepfakes by creating hardware-binding digital signatures when capturing (real) images. While effective, they falter when attackers misuse signing cameras to recapture digitally generated fake images from a display or other medium. This vulnerability reduces the security assurance back to the effectiveness of deepfake detectors. The main difference, however, is that a successful attack must now deceive two types of detectors simultaneously: deepfake detectors and detectors specialized in identifying image recaptures.
This paper introduces Chimera, an end-to-end attack strategy that crafts cryptographically signed fake images capable of deceiving both deepfake and image recapture detectors. We first show that current adversarial and generative models either fail to deceive both detector types or do not generalize across different capture setups. Chimera addresses this gap by using a hardware-aware adversarial compensator to craft fake images that bypass state-of-the-art detection mechanisms. The key innovation is a GAN-based image generator that accounts for and compensates for the physical transformations introduced during the recapture process. Through rigorous testing with commercial off-the-shelf cameras and displays, Chimera proves effective at fooling both types of detectors with a high success rate while preserving high visual quality relative to the original real image. Chimera demonstrates the vulnerability of deepfake detectors even when they are backed by hardware-based digital signatures. Our successful end-to-end attack on state-of-the-art detectors highlights an urgent need for more robust detection and mitigation strategies.
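To make the idea of a hardware-aware adversarial compensator concrete, the following Python (PyTorch) sketch illustrates one plausible formulation under our own assumptions: a small residual network pre-distorts the fake image, a crude differentiable stand-in simulates the display-camera recapture, and a joint loss pushes both a deepfake detector and a recapture detector toward the "real, direct capture" label while keeping the image close to the original. All module names, the recapture model, and the loss weights are illustrative assumptions, not the authors' implementation.

    # Hypothetical sketch of the attack objective described in the abstract.
    # Assumed components: Compensator, simulated_recapture, attack_loss.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class Compensator(nn.Module):
        """Small residual generator that pre-distorts the fake image."""
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                nn.Conv2d(16, 3, 3, padding=1), nn.Tanh(),
            )

        def forward(self, x):
            # Bounded residual keeps the compensated image visually close
            # to the original fake image.
            return torch.clamp(x + 0.05 * self.net(x), 0.0, 1.0)

    def simulated_recapture(img):
        # Crude differentiable stand-in for the display + camera pipeline:
        # a mild blur (optics/moire) and a gamma shift (screen response).
        kernel = torch.ones(3, 1, 3, 3, device=img.device) / 9.0
        blurred = F.conv2d(img, kernel, padding=1, groups=3)
        return blurred.clamp(1e-6, 1.0) ** 1.1

    def attack_loss(fake, compensator, deepfake_det, recapture_det):
        """Joint evasion objective; detector outputs are assumed to be
        scores where 0 means 'real / direct capture'."""
        compensated = compensator(fake)
        recaptured = simulated_recapture(compensated)
        evade = deepfake_det(recaptured).mean() + recapture_det(recaptured).mean()
        fidelity = F.mse_loss(compensated, fake)
        return evade + 10.0 * fidelity

In such a sketch, the compensator would be optimized (e.g., with Adam) against frozen detector surrogates, and the compensated image would then be displayed and physically recaptured by a signing camera; the recapture model above is only a placeholder for the hardware-aware modeling the paper describes.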