Socio-Technical Solutions for Countering AI-Generated Deepfakes

Roberto Perdisci, Jin Sun, Le Guan, Bart Wojdynski, Justin Conrad, Thomas Kadri, & Sonja West, "Socio-Technical Solutions for Countering AI-Generated Deepfakes," 2023 UGA Presidential Interdisciplinary Seed Grant, $147,668.

Abstract: The term deepfakes refers to realistic AI-generated images, videos, or audio that can be used to create "fake but believable" content and thus mislead humans. While deepfakes have legitimate uses, such as in the entertainment industry, they can also be used maliciously to manipulate people's perceptions of real-world facts, as in cyber-warfare, election manipulation, and misinformation/disinformation campaigns on social media. Although some technical approaches to countering deepfakes have been proposed, they are far from perfect, and new research at the intersection of AI and cybersecurity is needed to build more reliable deepfake detection solutions. Additionally, the mere existence of deepfakes has caused a global erosion of trust in digital media; new solutions that enable humans to regain trust in digital content are therefore urgently needed.

First, the PIs propose to investigate two technical research directions: (1) improving the detection of AI-generated images using novel AI-based computer vision approaches and making the detection system's results explainable; and (2) restoring trust in digital images by developing a Trusted Camera mobile application that enables hardware-assisted image provenance attestation. Next, the PIs will study how consumers of digital content can be aided in deciding which images to trust, by examining how users perceive the results of deepfake detection tools and by developing usable image provenance systems that allow users to verify the origin of an image (e.g., when a photo was taken, its location coordinates, etc.). Finally, the PIs will investigate the public policy and legal aspects of deepfakes and research how these intersect with the technical solutions developed in this project.
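The Trusted Camera direction is described only at a high level in the abstract. As a rough illustration of what hardware-assisted image provenance attestation could look like, the sketch below binds an image hash to its capture metadata and signs the result with a device-held key; a verifier with the device's public key can then confirm that the image and metadata have not been altered. All names, the metadata format, and the software-simulated ECDSA key are hypothetical illustrations, not details of the proposed system, which would keep the key in secure hardware.

# Hypothetical sketch of image provenance attestation: a device key signs the
# image hash plus capture metadata, and anyone holding the device's public key
# can later verify the claim. The key is simulated in software here with the
# `cryptography` library; the proposed Trusted Camera app would use secure hardware.
import hashlib
import json

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

# Simulated device key pair (a real device would keep the private key in a secure element).
device_private_key = ec.generate_private_key(ec.SECP256R1())
device_public_key = device_private_key.public_key()


def attest_image(image_bytes: bytes, timestamp: str, lat: float, lon: float) -> dict:
    """Produce a provenance record binding the image hash to capture metadata."""
    record = {
        "image_sha256": hashlib.sha256(image_bytes).hexdigest(),
        "captured_at": timestamp,
        "location": {"lat": lat, "lon": lon},
    }
    payload = json.dumps(record, sort_keys=True).encode()
    signature = device_private_key.sign(payload, ec.ECDSA(hashes.SHA256()))
    return {"record": record, "signature": signature.hex()}


def verify_attestation(image_bytes: bytes, attestation: dict) -> bool:
    """Check that the signature is valid and the image has not been altered."""
    record = attestation["record"]
    if hashlib.sha256(image_bytes).hexdigest() != record["image_sha256"]:
        return False
    payload = json.dumps(record, sort_keys=True).encode()
    try:
        device_public_key.verify(
            bytes.fromhex(attestation["signature"]), payload, ec.ECDSA(hashes.SHA256())
        )
        return True
    except InvalidSignature:
        return False


# Example: attest a (placeholder) image at capture time, then verify it later.
photo = b"...raw image bytes..."
att = attest_image(photo, "2023-10-01T12:00:00Z", 33.9519, -83.3576)
print(verify_attestation(photo, att))          # True
print(verify_attestation(photo + b"x", att))   # False: image was modified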

Related Research