"Are deepfakes a looming threat to the truth as we know it? How do we distinguish fact from fiction in an age where seeing is no longer believing?"
Those questions are posed by Daniel Serfaty, host of Aptima's MINDWORKS podcast, in a new episode featuring Graphika's Chief Scientist Vladimir Barash, who discusses the ethical, societal, and technological challenges of deepfakes.
"One of the most fascinating but also one of the most frightening aspects of deepfakes is just how easy it is to generate them," Barash told Serfaty. "All you need is basically an idea and a laptop. There are plenty of free models, and even if you want to use one of the off-the-shelf industrial models, a lot of them are very cheap — cents or dollars to use."
On the role of AI in detecting deepfake images, Barash cited a manipulated image of an explosion at the Pentagon that circulated on social media in the summer of 2023. Graphika analyzed the image shortly after it appeared and quickly assessed it to be a deepfake.
"We were able to get that assessment out before the image caused a panic, before it led to some very bad consequences," Barash recalled. "So that shows the stakes are very, very high. Sometimes in critical situations you have minutes, at most hours, to respond before something goes viral and becomes a lot harder to counteract."
To see why organizations around the world look to Graphika for social media intelligence, including on the latest deepfakes circulating online, schedule a custom demo with a member of our team.