Generative AI as a Force Multiplier for Pro-Democracy Movements - Resistance Daily Brief for 11 March 2025
Information, insight, and inspiration for resisting tyranny in America
Welcome!
Welcome to The Resistance Sentinel, a publication dedicated to documenting and amplifying the movement to defend democracy against authoritarian rule. Today we’re starting something new. Going forward, the news updates we’ve been posting will go out twice a week, on Mondays and Thursdays. On the other days, we plan to take a deeper dive into one topic relevant to those of us fighting the rapid consolidation of tyranny in America. We start today with a look at generative AI as a tool for pro-democracy social movements, including its potential benefits and drawbacks. We’ve benefited greatly from AI in our own little newsletter project over the last couple of weeks and, as a result, decided to take a closer look at how other activists are using this technology. We hope what we’ve put together will inform and inspire you to experiment with AI in your activist work.
Summary
Pro-democracy movements urgently need to leverage generative artificial intelligence (AI), both to counter the growing use of these tools by authoritarian regimes to spread disinformation and to enhance their own capabilities for messaging, knowledge sharing, and organizing. Generative AI can act as a force multiplier for smaller pro-democracy groups that otherwise face resource limitations. While there are serious and valid concerns about bias, the undermining of human deliberative practices, and broader ethical implications, these can be mitigated through careful implementation, human oversight, and the development of ethical guidelines. Based on several recent reports, workshops, and presentations by researchers and activists, we believe the pro-democracy community must experiment with generative AI, recognizing its potential to amplify their impact in the fight for democratic values against technologically sophisticated authoritarian forces worldwide.
The Authoritarian AI Advantage: A Wake-Up Call
Authoritarian regimes are leveraging AI to advance their surveillance, disinformation, and social control objectives. This rapid adoption of AI by autocratic governments contrasts with its slower integration by democracy movements, creating a growing disadvantage for the latter. The monopolization of AI’s capabilities by authoritarian regimes and their supporters is a significant challenge and should serve as a wake-up call for pro-democracy movements.
A recent audit by NewsGuard, the news source credibility rating and research group, found that leading AI chatbots frequently repeated Russian disinformation, with top AI models repeating pro-Kremlin propaganda narratives 33 percent of the time. An earlier NewsGuard audit found that these AI models mimicked fabricated narratives from a network of fake local news sites linked to Russia 32 percent of the time, often citing those disinformation sources as authoritative.
This so-called "Pravda" network achieves these results by deliberately seeding the data used by AI chatbots with false claims and propaganda, flooding search results and web crawlers with pro-Kremlin falsehoods. By publishing a high volume of disinformation across numerous seemingly independent websites and applying well-known search engine optimization (SEO) techniques, the network increases the likelihood that AI models will encounter these false narratives and incorporate them into the responses they give to unsuspecting users.
Multiple recent reports express concern over the slower adoption of AI by democracy movements, warning that this risks widening the gap with adversaries who are leveraging these technologies, potentially increasing existing power imbalances. While many analysts are focusing on the threats AI poses to democracy, the potential uses of AI by pro-democracy forces are too often ignored. Consequently, there is a call for urgent and intentional strategic experimentation with AI by pro-democracy actors. A report from the National Endowment for Democracy states, “For the constellation of pro-democratic donors, journalists, advocacy groups, and grassroots activists seeking to find their footing on this rapidly shifting terrain, the time for intentional thinking about leveraging AI for democracy is now.”
Generative AI as a Democratic Force Multiplier
Generative AI presents a significant opportunity to empower the global democracy movement, made up of many smaller national and subnational groups of activists, by acting as a force multiplier. These tools enable small groups to have an outsized impact by scaling their operations and enhancing their communication strategies. Activists can leverage AI for crucial tasks such as content creation and targeted messaging; learning, training, and planning; and maintaining situational awareness. Generative AI tools make sophisticated tactics in each of these areas more accessible to smaller groups that often lack extensive technical expertise and funding. By using readily available and often free AI tools, small movements can improve their chances of leveling the playing field against more powerful adversaries.
In the area of strategic communication, AI tools can significantly enhance the messaging and counter-messaging efforts of democracy movements by enabling activists to craft and tailor their communications for diverse audiences more effectively. Furthermore, AI can be utilized to detect and counter disinformation by identifying trolls, bots, and deepfakes, allowing movements to defend themselves against harmful narratives.
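To make the messaging point concrete, here is a minimal sketch of what audience-tailored drafting might look like in practice. It assumes a locally hosted open-source model exposed through an OpenAI-compatible endpoint (for example, Ollama’s default server at http://localhost:11434/v1); the model name, audience profiles, and core message are illustrative placeholders, not recommendations from the reports cited here.

```python
# Sketch: drafting audience-tailored variants of one core message with a
# locally hosted open-source model behind an OpenAI-compatible endpoint.
# Every draft is treated as raw material for human review, not final copy.
from openai import OpenAI

# Assumes a local server such as Ollama; the API key is unused by the local
# server but required by the client library.
client = OpenAI(base_url="http://localhost:11434/v1", api_key="not-needed")

CORE_MESSAGE = "Join the march for voting rights this Saturday at noon."
AUDIENCES = ["first-time voters", "retired union members", "small-business owners"]

for audience in AUDIENCES:
    response = client.chat.completions.create(
        model="llama3",  # illustrative; use whichever open-source model you run locally
        messages=[
            {"role": "system",
             "content": "You help a small civic group adapt one core message "
                        "for different audiences without changing its facts."},
            {"role": "user",
             "content": f"Rewrite this message for {audience}: {CORE_MESSAGE}"},
        ],
    )
    draft = response.choices[0].message.content
    print(f"--- Draft for {audience} (needs human review) ---\n{draft}\n")
```

Running the whole workflow on local hardware keeps sensitive organizing plans off third-party servers, which matters for groups worried about surveillance.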
In the area of learning, training, and planning, AI can be used to provide accessible and effective ways to learn theory, history, and best practices. Activists can utilize AI to generate summarized training materials, slide decks, and video scripts, making complex information easier to share and enabling widespread capacity building within movements, even with limited resources or expertise.
In the area of situational awareness, AI tools enable democracy movements to scale their analysis of feedback from their base by efficiently processing and summarizing large amounts of data, ensuring grassroots voices are better included in decision-making. Furthermore, AI can significantly improve the understanding of adversaries by helping to map complex networks of actors and analyze their narrative tactics, allowing movements to anticipate and counteract their strategies more effectively.
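As one illustration of the situational-awareness point above, the sketch below condenses a batch of member feedback into a short digest using a freely available open-source summarization model run locally through the Hugging Face transformers library. The model choice and the sample feedback are assumptions for illustration only.

```python
# Sketch: condensing a large batch of grassroots feedback into a short digest
# so organizers can review it quickly. Runs entirely on local hardware using
# an open-source summarization model.
from transformers import pipeline

# facebook/bart-large-cnn is a common open-source summarization model;
# any comparable model can be substituted.
summarizer = pipeline("summarization", model="facebook/bart-large-cnn")

feedback = [
    "The last rally felt disorganized; we need clearer marshal assignments.",
    "More Spanish-language materials would help us reach our neighborhood.",
    "Carpooling worked well, but the meeting point changed too late.",
    # ... hundreds more messages collected from the base ...
]

# Join the feedback into one document and produce a short digest. For very
# large batches, chunk the text and summarize each chunk separately.
digest = summarizer(" ".join(feedback), max_length=80, min_length=20, do_sample=False)
print(digest[0]["summary_text"])  # organizers still read the raw feedback that matters
```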
Navigating the Challenges of AI for Pro-Democracy Movements
While generative AI offers considerable promise as a force multiplier for democracy movements, integrating it requires careful consideration of several potential downsides. Because pro-democracy movements have adopted AI more slowly than authoritarian regimes and powerful corporations, they face power imbalances that put them at a disadvantage not only in access to the technology but also in exposure to its built-in biases. Furthermore, the concentration of advanced AI technology within large, often unregulated entities raises concerns about misaligned values and the risk of enhanced surveillance against activists. It is therefore imperative for democracy movements to adopt a cautious yet strategic approach, addressing ethical considerations, security vulnerabilities, and the potential for bias and manipulation while ensuring that AI serves as a tool to augment, not replace, human agency and democratic deliberation.
Concerns exist regarding bias in AI tools and the potential for the spread of misinformation, as AI models can be influenced by biased training data, leading to skewed outputs or the amplification of false narratives. The infiltration of AI systems by disinformation networks, such as the pro-Kremlin "Pravda" network, demonstrates how these tools can be exploited to disseminate propaganda and erode trust in information, posing a significant challenge to democracy movements. Models relied upon by such movements could themselves be targeted for deliberate manipulation by adversaries. As security expert Bruce Schneier noted in his 2024 talk at the RSA Conference, AI is a power-enhancing technology; if it is shaped by malign actors, it can concentrate power in their hands. Pro-democracy movements must therefore take great care in their adoption of generative AI tools: they should consider developing fine-tuned models built specifically for their needs, insist on human validation of AI output and careful curation of the data used in fine-tuning, and prioritize open-source models for greater transparency and control.
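The human-validation point bears emphasis, so here is a minimal sketch of what a review gate might look like: no AI-generated draft goes out until a named person has read and explicitly approved it, and rejected drafts are logged so the group can spot recurring problems. The function and workflow are our own illustration, not something prescribed in the sources.

```python
# Sketch: a simple human-in-the-loop gate. AI output is only a draft until a
# human reviewer reads it and explicitly signs off; every decision is logged
# so the group can track recurring issues (bias, factual errors, off-message text).
from datetime import datetime, timezone

def require_human_approval(draft: str, reviewer: str, log: list) -> str | None:
    """Show the draft to a reviewer and return it only if approved."""
    print(f"\n--- AI draft for review by {reviewer} ---\n{draft}\n")
    decision = input("Approve for publication? [y/N] ").strip().lower()
    log.append({
        "time": datetime.now(timezone.utc).isoformat(),
        "reviewer": reviewer,
        "approved": decision == "y",
        "draft": draft,
    })
    return draft if decision == "y" else None

review_log: list = []
approved = require_human_approval(
    "Sample AI-generated call to action ...",  # would come from a model in practice
    reviewer="communications lead",
    log=review_log,
)
if approved is None:
    print("Draft rejected; revise the prompt or write it by hand.")
```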
There is also concern that an over-reliance on AI tools could bypass the essential human processes of deliberation and discussion that are crucial for democratic movements and societal change. Bruce Schneier argues that the acts of arguing and collective decision-making, even if imperfect, hold intrinsic value that AI-driven answers might undermine. Thus, it is important that AI serves to augment human agency and connection rather than circumventing them in the pursuit of efficiency or seemingly optimal outcomes. Scholar-activist Freddy Guevara reminds us: “Activists should approach AI as an assistant, not an authority. AI is a powerful tool, but never forget where the real power lies: the people.”
Integrating AI into democracy movements presents significant ethical and security considerations, including the potential for AI systems to be exploited for surveillance by authoritarian actors and the risk of exacerbating existing inequalities due to unequal access to these technologies. Furthermore, bias embedded in AI models due to censored training data can lead to "authoritarian biases," potentially undermining the integrity of information and analysis. Therefore, establishing ethical guidelines and ensuring robust security measures are crucial to prevent misuse and maintain the moral integrity and safety of activists and their work. There is an urgent need to create and train activists on a Code of Conduct for integrating AI tools into democracy movement work. Activists must be educated on the potential misuses of AI by their authoritarian adversaries along with how to build resilience into movement planning to withstand disruptions stemming from adversary abuses of AI systems. Finally, activist groups must develop strategies to share AI resources and empower resource-constrained groups.
The Path Forward: Experiment & Empower
An intentional and collaborative approach to integrating AI tools into democracy movements is not just beneficial but essential. Given the intense and pressing challenges faced by these movements globally and the increasing technological advantages of their adversaries, failing to strategically adopt and adapt AI could lead to a further widening of the gap in effectiveness and power.
Looking ahead, recent workshops and reports point to several key areas that demand attention to leverage AI effectively for democracy movements. These include:
Establishing a collaborative consortium, akin to Coders Without Borders, to bridge the gap between AI developers and democracy activists, facilitating the creation of tools tailored to democracy movement needs;
Developing systematic training programs to equip activists with the necessary skills to utilize AI responsibly and effectively in their work;
Creating a collaborative research agenda to thoroughly evaluate the impact of AI tools on movement outcomes; and,
Devising and training activists on a Code of Conduct to navigate the ethical and security considerations inherent in AI integration.
Taken together, these recent reports, workshops, and presentations make it clear that the time has come for pro-democracy movements to embrace, at minimum, a spirit of experimentation with AI. Faced with mounting challenges and the sophisticated technological advances of their adversaries, activists, technologists, and researchers must now unite in collaboration. Together, we must explore ways of using AI that will not only level the playing field but also amplify the voices and strengthen the impact of those striving for a more just and democratic world.
Sources
How AI Can Support Democracy Movements: Summary Report of a Research and Practice Workshop
Leveraging AI for Democracy: Civic Innovation on the New Digital Playing Field
Using AI Now to Improve Movements’ Effectiveness: A Basic Introduction Guide for Social Activists
Methodology Note: This publication benefited from the use of two AI tools in its research, analysis, and writing. The Harvard Ash Center and NewsGuard reports were found during manual research; other supporting materials, including the National Endowment for Democracy report and the presentation by Bruce Schneier, were surfaced through searches in Perplexity, as were sources that informed our work but are not cited in this report. All sources were read or watched in their entirety by human authors, who manually took notes and highlights from the source materials. The corpus of original source documents, notes, and highlights was added to a notebook in Google’s free NotebookLM tool. An initial narrative outline prompt was given to the AI tool to help flesh out a more detailed outline drawing on the source documents and manual notes. The human authors then worked with the AI iteratively, point by point through the outline, to produce a written draft, which was heavily edited by the human authors. Our use of AI in producing this publication is, we hope, an example of how AI might be used by other small groups of activists.