A framework for Electoral Integrity Institutions (EIIs)
Democracies must act now to protect elections from synthetic content threats. Generative AI, deepfakes, and AI-driven misinformation are becoming more sophisticated, accessible, and seamlessly integrated into digital ecosystems. While their impact on the 2024 elections was not as disruptive as feared, the risks are escalating, and few institutions are equipped to counter them effectively.
This white paper introduces a six-step framework for establishing Electoral Integrity Institutions (EIIs)—specialised bodies designed to safeguard elections from synthetic disinformation. The framework is adaptable to different political, legal, and cultural contexts, offering a structured approach for countries seeking to enhance electoral integrity.
The EII framework: six steps for safeguarding elections
- SET the right foundations – Define the institution’s mandate, structure, and core principles to ensure long-term effectiveness.
- FACILITATE collaboration – Build alliances across government agencies, technology platforms, civil society, and academia to coordinate a strong response.
- SCAN the digital space – Use AI-driven tools and human expertise to proactively monitor and detect misinformation before it spreads.
- ASSESS content effectively – Implement a tiered evaluation system to determine the severity, reach, and credibility of flagged content.
- ACT with power and accountability – Enable swift interventions through democratic oversight, ensuring that responses are proportional and protect free speech.
- LEARN via feedback loops – Continuously refine strategies by analysing institutional performance, engaging global counterparts, and anticipating future threats.

Lessons from global case studies
Real-world examples reinforce the urgency of action and inform best practices. Sweden’s Psychological Defence Agency demonstrates the importance of a clear mandate. Taiwan’s participatory policymaking shows how multi-stakeholder engagement can enhance trust. France’s VIGINUM illustrates the value of in-house expertise and oversight. Meanwhile, the controversies surrounding the Global Disinformation Index highlight the need for impartiality and transparency.
Key takeaways
- Synthetic content threats are growing. AI-generated misinformation will increasingly impact elections.
- EIIs offer a proactive institutional response. Without strong institutions, democracies remain vulnerable.
- A structured, adaptable framework is essential. Countries must tailor their approaches to fit their unique governance contexts.
- Multistakeholder collaboration is critical. No single entity can combat digital disinformation alone.
- Transparency and accountability must be prioritised. Trust in institutions depends on their fairness and effectiveness.
- Countries must act now to build Electoral Integrity Institutions.
- This framework provides a roadmap for protecting democratic processes in an era of AI-driven misinformation.

About the authors
Aleš is a PhD researcher at University College London (UCL). His research focuses on the harms Generative AI causes to democratic processes, and on the application of collective intelligence to mitigate those harms. By employing a blend of Futures, experimental, and computational methods, his research anticipates potential threats and develops and tests appropriate mitigation strategies.
Geoff is a Professor at University College London and one of TIAL’s co-founders. He has had a career spanning senior roles in government, NGOs, foundations, and business. He has been directly involved in setting up many organisations in the public sector and civil society, and has experience overseeing venture capital funds and impact investment.
Playbook: Designing new institutions and renewing existing ones
Why do we need institutional innovation, anyway? The world has long depended on public institutions…