Pro-Chinese Actors Promote AI-Generated Video Footage of Fictitious People in Online Influence Operation
In late 2022, Graphika observed limited instances of Spamouflage, a pro-Chinese influence operation (IO), promoting content that included video footage of fictitious people almost certainly created using artificial intelligence techniques.
While a range of IO actors increasingly use AI-generated images or manipulated media in their campaigns, this was the first time we observed a state-aligned operation promoting video footage of AI-generated fictitious people.
The AI-generated footage was almost certainly produced using an “AI video creation platform” operated by a commercial company in the United Kingdom. The company offers its services for customers to create marketing or training videos and says “political […] content is not tolerated or approved.”
Despite featuring lifelike AI-generated avatars, the Spamouflage videos we reviewed were low-quality and spammy. Additionally, none of the identified Spamouflage videos received more than 300 views, reflecting this actor's long-standing challenges in producing convincing political content that generates authentic online engagement.
We believe the use of commercially available AI products will allow IO actors to create increasingly high-quality deceptive content at greater scale and speed. In the weeks since we identified the activity described in this report, we have seen other actors move quickly to adopt these tactics. Most recently, this involved unidentified actors using the same AI tools to create videos targeting online conversations in Burkina Faso.