Election management bodies and democratic watchdogs across multiple regions are raising concerns about the growing use of artificial intelligence in election-related disinformation campaigns, warning that existing safeguards may be insufficient to address emerging risks.
Officials note that AI tools are increasingly being used to generate misleading political content, including deepfake videos, synthetic audio recordings, and mass-produced social media narratives designed to confuse voters or undermine trust in electoral processes. Unlike traditional misinformation, AI-generated content can be produced rapidly, tailored to specific audiences, and disseminated at scale.
Several governments are now reviewing electoral laws to address these challenges. Proposed measures include stricter transparency rules for political advertising, mandatory disclosure of AI-generated content, and enhanced cooperation between election authorities and digital platforms.
Civil society groups have emphasized that any regulatory response must carefully balance election security with freedom of expression, warning that overly broad restrictions could be misused to suppress legitimate political speech. As election calendars fill across the globe, experts agree that safeguarding democratic processes in the age of artificial intelligence will require urgent legal, technical, and institutional reforms.
Sources:
- Reuters
- BBC News
- European Commission / Election watchdog statements
