In the span of six weeks, Graphika monitored influence operations targeting elections in Bangladesh, Colombia, and the Tibetan exile community. In each case, AI was an essential part of the toolkit: generating fabricated videos, composing text in local languages, automating distribution, and coordinating posting by a multitude of inauthentic accounts to simulate organic reach. The wide availability of AI tools, combined with the challenge of flagging automated outputs before they spread, shows how interference content is evolving and becoming increasingly difficult to distinguish from reality.
Key Takeaways
- Election interference operations combine AI-generated content with automated distribution, utilizing coordinated inauthentic accounts to simulate organic reach.
- Narratives are increasingly shifting toward delegitimizing the electoral process and institutions, in addition to candidate attacks.
- Generative AI makes it easier for state-linked actors to produce high-volume, bilingual content tailored to local political contexts.
Deepfakes as Political Weapons
In Bangladesh, ahead of the Feb. 12 general election, Facebook users circulated multiple AI-generated videos: one depicting a retired Pakistani military officer calling on voters to boycott the Jamaat-e-Islami Party, another of two police officers accusing candidates of financial crimes. The individuals portrayed in the videos were constructed to look like credible authority figures making specific, damaging political claims in the final days before a vote. While both were flagged as AI-generated by the Bangladeshi fact-checking agency Rumor Scanner, they still reached viewers, appearing in the context of other election updates without any indication that they were fake.
Facebook users circulated this AI-generated video of retired Pakistani Lt. Col. Muhammad Azmat Ullah Shah urging voters not to vote for Jamaat.
Scaling Influence: The Bilingual Spamouflage Campaign
In February, the Tibetan diaspora held a vote to elect parliamentary leaders to serve in the Central Tibetan Administration (CTA), the exile community's democratic governing body based in India. The administration presents itself as Tibet's legitimate government, though the Communist Party of China does not recognize Tibet's political independence. As the election approached, it fell into the crosshairs of coordinated pro-China actors. Tumblr accounts (which Graphika assessed with high confidence to be part of the Chinese state-linked influence operation Spamouflage) published AI-generated articles simultaneously in English and Tibetan, targeting candidates in the CTA election. On X, 103 coordinated accounts uploaded AI-generated cartoons depicting candidates negatively, all using identical hashtags based on the candidates' names.
AI tools most likely enabled the production of bilingual content at the volume we observed, targeting a niche political context with fluent translation. An operation that would once have struggled to source illustrations and fluent translations faces fewer hurdles with AI, which allows election interference content to be spun up quickly, run at low cost, and executed across platforms simultaneously.
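Coordination at this scale leaves a simple trace: many distinct accounts posting the exact same hashtag set. The sketch below shows one way an analyst might surface that signal in Go; the Post structure, account handles, and hashtags are hypothetical placeholders for illustration, not data recovered from the network or Graphika's production method.

```go
package main

import (
	"fmt"
	"sort"
	"strings"
)

// Post is a simplified, hypothetical record: an account handle and the
// hashtags it used. Real inputs would also carry timestamps and platforms.
type Post struct {
	Account  string
	Hashtags []string
}

// groupByHashtagSet buckets accounts by the exact set of hashtags they
// posted; many distinct accounts sharing one set is a coordination signal.
func groupByHashtagSet(posts []Post) map[string][]string {
	groups := make(map[string][]string)
	for _, p := range posts {
		tags := append([]string(nil), p.Hashtags...) // copy before sorting
		sort.Strings(tags)                           // order-insensitive key
		key := strings.Join(tags, "|")
		groups[key] = append(groups[key], p.Account)
	}
	return groups
}

func main() {
	posts := []Post{
		{"acct_001", []string{"#candidateA", "#ctaelection"}},
		{"acct_002", []string{"#ctaelection", "#candidateA"}},
		{"acct_003", []string{"#candidateB"}},
	}
	for tags, accounts := range groupByHashtagSet(posts) {
		if len(accounts) > 1 {
			fmt.Printf("possible coordination on %s: %v\n", tags, accounts)
		}
	}
}
```

In practice this grouping would be narrowed by time window and combined with other signals, but even this crude version separates the two accounts sharing a hashtag set from the third.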
Spamouflage-linked accounts on X spread this likely AI-generated poster urging participation in the election for the political head of the Central Tibetan Administration and members of the 18th Tibetan Parliament-in-Exile, while accompanying texts framed multiple candidates negatively.
When the Goal Is Noise, Not Persuasion
Ahead of Colombia’s March 8 parliamentary elections, a network of 176 inauthentic accounts on X spread a range of narratives targeting the country’s left-wing president, Gustavo Petro. They flooded the platform with thousands of links from TikTok, Facebook, Instagram, and Threads, posting roughly every minute around the clock. Based on code fragments left in the text of the posts, the accounts appear to have been automated using Google's Go programming language. Rather than directing attacks or support toward one candidate, the same network promoted both conservative anti-Petro content and pro-Petro content on Facebook. Some of the YouTube videos shared by the network did not concern Colombia or were entirely apolitical, adding to the difficulty of deciphering the intent behind these posts.
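We do not know what this network's source code looked like, but the snippet below is a hypothetical sketch of how such fragments can end up visible: Go's fmt package does not fail on a mismatched format string, so a careless bot publishes the error marker verbatim. The template wording, candidate name, and one-minute cadence are all assumptions for illustration.

```go
package main

import (
	"fmt"
	"time"
)

// Hypothetical post template; the wording is a placeholder, not text
// recovered from the network.
const postTemplate = "Say no to %s! Corruption case #%d"

// buildPost deliberately passes one argument too few: instead of failing,
// fmt embeds a marker such as "%!d(MISSING)" in the output, leaving a raw
// code fragment in the published post.
func buildPost(candidate string) string {
	return fmt.Sprintf(postTemplate, candidate)
}

func main() {
	// Posting on a fixed one-minute cadence reproduces the
	// around-the-clock regularity described above.
	ticker := time.NewTicker(time.Minute)
	defer ticker.Stop()
	for range ticker.C {
		post := buildPost("CandidateX")
		fmt.Println(post) // stand-in for a platform API call
	}
}
```

Run as written, each post reads "Say no to CandidateX! Corruption case #%!d(MISSING)", the kind of leaked fragment that lets analysts infer the automation's language.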
The outcome of this activity was increased volume and ambient noise, which inflated the apparent reach of partisan content. While the programming language behind the automation is our only direct evidence of how this reach was achieved, we believe AI's ability to assist with writing automation scripts makes this kind of network flooding more likely than before.
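A cadence this regular is itself a detection signal. As a minimal sketch of how it might be flagged (the synthetic timestamps and the threshold are illustrative, not Graphika's production method):

```go
package main

import (
	"fmt"
	"math"
	"time"
)

// flagRoboticCadence reports whether a series of post timestamps is too
// regular to be plausibly human, i.e. near-constant gaps between posts.
func flagRoboticCadence(posts []time.Time, maxCV float64) bool {
	if len(posts) < 3 {
		return false
	}
	var gaps []float64
	for i := 1; i < len(posts); i++ {
		gaps = append(gaps, posts[i].Sub(posts[i-1]).Seconds())
	}
	var mean float64
	for _, g := range gaps {
		mean += g
	}
	mean /= float64(len(gaps))
	var variance float64
	for _, g := range gaps {
		variance += (g - mean) * (g - mean)
	}
	variance /= float64(len(gaps))
	// Coefficient of variation: near zero means machine-like regularity.
	cv := math.Sqrt(variance) / mean
	return cv < maxCV
}

func main() {
	// Synthetic example: ten posts exactly one minute apart.
	start := time.Now()
	var posts []time.Time
	for i := 0; i < 10; i++ {
		posts = append(posts, start.Add(time.Duration(i)*time.Minute))
	}
	fmt.Println(flagRoboticCadence(posts, 0.1)) // true
}
```

Human posting tends to vary with sleep, work, and attention; accounts that post every sixty seconds for days score a coefficient of variation close to zero and stand out immediately.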
Targeting Institutions, Not Just Candidates
In both the Bangladeshi and Tibetan elections, the operations we analyzed also targeted election commissions. In Bangladesh, fabricated videos accused the commission of pre-stuffing ballot boxes. In Tibet, AI-generated articles and YouTube videos alleged the CTA's Election Commission failed to investigate legitimate complaints. The framing wasn't always focused on a candidate; rather, it cast doubt on the process itself.
What We're Watching Next
The falling cost of AI tools means the tactics covered here will become accessible to more actors across more elections. Graphika is actively monitoring upcoming elections for early signs of the same patterns: coordinated inauthentic behavior, AI-generated content targeting candidates or institutions, and cross-platform amplification of narratives designed to suppress turnout or erode trust in results.
These AI-driven tactics show up alongside a wider set of threats we monitor across every election cycle:
- Threats to public safety surrounding electoral events
- Voter suppression narratives spreading false or misleading information about voting requirements, eligibility, or procedures
- Foreign interference attempts using inauthentic networks to influence domestic election outcomes
- Election-adjacent criminal activity, including financially motivated scams targeting voters
Across these threat categories, AI is the common accelerant — lowering production costs, increasing content volume, and making detection harder.
See It Before It Spreads
Graphika's platform gives governments, platforms, and research organizations the visibility to detect coordinated inauthentic behavior before it shapes an election — tracking narrative threats, inauthentic networks, and election integrity risks across the social web in real time.
Click here to book a demo.
