New advances in machine learning are reshaping our society and our work at a rapid pace. The past year, since the launch of OpenAI’s ChatGPT, has seen a massive leap in the quality and scale of applications made possible by large language models. Unlike other recent technological advances such as blockchain, these new tools have made the practical and imminent use cases for society as a whole much clearer.
Background image generated with AI
This clarity has resulted in a period of extraordinary experimentation. We are witnessing a surge in funding for AI startups running commercial experiments with the new technology, but we’re also seeing this filter into the civic domain – OpenAI’s recent call to fund ten large-scale experiments in setting up democratic processes, for example. We’re seeing it at a grassroots level too, from Prompt Jams at institutions like Newspeak House to AI hackathons to new government task forces. Every day, thousands of developers, entrepreneurs and technologists in the UK (and beyond) are experimenting.
These experiments are contributing to a burgeoning field of new tools and technologies, often built from the ground up. Many new things are now possible, whether it’s creating digital clones based on your social media data or uploading old diaries to create a chatbot that lets you hold a conversation with your younger self. The boundaries are continually being pushed, producing exciting, albeit unstable, outcomes. Within this unstable context, AI offers both challenges and possibilities for organisers, campaigners and democracy itself. So what are the lessons for the democracy sector?
On a purely practical level, a trove of valuable new tools is emerging that is likely to make our day-to-day workflows much easier. ChatGPT can help anyone avoid the “blank page” problem by creating initial drafts of blog articles, writing emails and generating interesting campaign tactics (an escape room to explain democratic primaries, anyone?). More broadly, new AI tools can now summarise reports and podcasts, create clips, transcribe meeting recordings (the new tool Whisper can transcribe a whole 24 hours’ worth of audio into text for just £8) and arrange smart one-to-ones. New image-generation tools can create more interesting stock images, giving campaigns the ability to communicate abstract concepts such as “transparency in parliament”. The possibilities and potential use cases are still being discovered. Making our workflows easier might not feel transformative, but automating some of these kinds of tasks frees us up to do more of the work that matters, such as building relationships to achieve change and thinking more deeply about our interventions. This also has a democratising effect: for many smaller campaigns or charities that struggle with resources, these tools level the playing field, giving them more capacity for the things that matter. And it’s not just about freeing up time and resources; the potential is there to do these tasks in a much broader and more expansive way.
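The £8 figure above is a back-of-envelope estimate. As a rough sketch, assuming a hosted transcription rate of around $0.006 per minute of audio (in the right ballpark for Whisper’s hosted API at the time of writing) and an illustrative dollar-to-pound rate of 0.8:

```python
# Back-of-envelope cost estimate for transcribing 24 hours of audio.
# Both the per-minute rate and the exchange rate are illustrative assumptions.
PRICE_PER_MINUTE_USD = 0.006
USD_TO_GBP = 0.8

minutes = 24 * 60  # 24 hours of audio, in minutes
cost_usd = minutes * PRICE_PER_MINUTE_USD
cost_gbp = cost_usd * USD_TO_GBP

print(f"${cost_usd:.2f} ≈ £{cost_gbp:.2f}")  # prints: $8.64 ≈ £6.91
```

Depending on the exchange rate you assume, this lands in the £7–£9 range the article quotes.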
AI is also incredibly useful for letting people query huge corpora of shared knowledge, which makes it particularly good for knowledge management within an ecosystem. Imagine being able to ask an AI assistant for all the shared knowledge the sector has on democratic engagement, for a list of experts who work in the area, or for every instance in which a particular idea has been discussed. This kind of tool opens up new opportunities for building and querying shared knowledge, greatly aiding collaborative efforts. In ecosystems such as the democracy sector, this ability to organise and curate knowledge in order to cooperate better and avoid duplication seems vital.
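To make the idea concrete, here is a deliberately minimal sketch of querying a shared knowledge base. Real systems use language models and semantic embeddings rather than keyword overlap, and the document titles and contents below are hypothetical, but the shape of the interaction – ask a question, get back the most relevant sources – is the same:

```python
# Toy sketch of querying a sector-wide knowledge base.
# Real tools would use embeddings + an LLM; this uses simple keyword overlap.

def score(query: str, doc: str) -> int:
    """Count how many query words appear in the document (case-insensitive)."""
    doc_words = set(doc.lower().split())
    return sum(1 for word in query.lower().split() if word in doc_words)

def search(query: str, docs: dict[str, str], top_n: int = 2) -> list[str]:
    """Return titles of the top_n most relevant documents with any match."""
    ranked = sorted(docs, key=lambda title: score(query, docs[title]), reverse=True)
    return [title for title in ranked[:top_n] if score(query, docs[title]) > 0]

# Hypothetical shared documents from across the sector.
knowledge_base = {
    "Engagement report": "lessons on democratic engagement from local campaigns",
    "Tooling survey": "a survey of digital tools for transcription and outreach",
    "Experts list": "experts working on democratic engagement and deliberation",
}

print(search("democratic engagement", knowledge_base))
```

An AI assistant would then summarise the retrieved documents in plain language rather than just listing titles.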
But the benefits extend beyond operational efficiency. AI tools, particularly language models such as ChatGPT, built from the collective data of the internet, provide an amazing tool for perspective-taking. With this kind of model you can genuinely step into someone else’s shoes, exploring the limits of consensus and how different kinds of people might react to different arguments, even simulating conversations between disagreeing parties. In an era of culture war, when our politics sometimes struggles to appreciate each other’s viewpoints, technologies like ChatGPT perhaps offer an interesting tool for generating more empathy, understanding and dialogue (even if only in simulated form!)
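In practice, perspective-taking like this comes down to prompting. A minimal sketch of how such a prompt might be constructed – the wording and the example viewpoint are illustrative, not a tested recipe, and you would pass the result to a chat model as its instructions:

```python
# Sketch of building a perspective-taking prompt for a chat model.
# Viewpoint and topic below are hypothetical examples.

def perspective_prompt(viewpoint: str, topic: str) -> str:
    """Build instructions asking a model to argue from a given viewpoint."""
    return (
        f"You are role-playing a person who holds this viewpoint: {viewpoint}. "
        f"Explain, in their voice, how they would react to arguments about {topic}, "
        "including which points they would find persuasive and which they would reject."
    )

prompt = perspective_prompt(
    viewpoint="sceptical of electoral reform",
    topic="introducing proportional representation",
)
print(prompt)
```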
Yet if we think beyond the simulated, this technology also has the ability to hold multiple conversations at once, analyse those conversations, gradually identify the consensus position and represent it back into the conversation. Many technologists are already working on ways to do this, including the team behind vTaiwan. Such an approach offers massive potential to enhance our democracy. Imagine a deliberative direct democracy where every citizen has a voice, every opinion counts and everyone can contribute simultaneously. Imagine a government run on that premise… where democracy means so much more than just a single vote every five years.
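The core of such consensus-finding can be sketched very simply. In the spirit of the deliberation tools used in vTaiwan, suppose each participant agrees or disagrees with a set of statements; the statements and votes below are hypothetical, and real systems do far more sophisticated clustering, but the basic idea is surfacing what most people agree on:

```python
# Toy sketch of surfacing consensus across many participants' votes.
# Each statement maps to a list of votes: True = agree, False = disagree.

def consensus(votes: dict[str, list[bool]], threshold: float = 0.7) -> list[str]:
    """Return statements whose agreement rate meets the threshold."""
    results = []
    for statement, ballots in votes.items():
        agreement_rate = sum(ballots) / len(ballots)
        if agreement_rate >= threshold:
            results.append(statement)
    return results

# Hypothetical votes from four participants.
votes = {
    "Voting should be easier to access": [True, True, True, False],
    "Campaign spending needs more transparency": [True, True, True, True],
    "Lower the voting age": [True, False, False, False],
}

print(consensus(votes))
```

A language model layered on top could then rephrase the consensus statements and feed them back into the live conversation.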
Yet there are significant risks, limits and ethical questions about this technology. How will it affect jobs? Will the economic gains be fairly distributed? How does it affect copyright? Is this technology safe? Should it be open? Who owns the underlying models? Why and how does it reproduce bias and stereotypes? Who makes decisions about how this technology operates? These questions are not just ethical but also questions of governance at a fundamental level. They are questions of our democracy.
And even at a more micro level, every institution and organisation is having to answer these questions of governance right now. From The Guardian to the UK Government, new policies around this technology are being developed day by day, from deciding how transparent an organisation will be about its use of AI to deciding how much to “keep a human in the loop”. These governance questions are huge, and the expertise, ideas and thinking of organisations working on democracy, accountability, transparency and other key areas will be much needed.
And yet the challenge goes deeper. So much of how this technology evolves will be shaped by the people who use it and the technologists who build things that rely on it; its power lies in the realisation of its application. Eventually, this technology will be built into all our software and systems. These tools will become deeply integrated into Gmail, Google Drive, Microsoft Outlook and Word, but also into Amazon, Uber and other big tech firms, your search engine and even your web browser itself. These corporations are on course to shape the future of this technology according to their own notions of what might be possible. Leaving AI’s development solely to such entities, without the broader influence of civil society and those working on democracy, is risky, not just for the democracy sector but for the very future of democracy itself.
Curious to learn more? Watch the recording of Hannah’s ‘How to: AI & Democracy’ workshop here.
By Hannah O’Rourke (first draft by ChatGPT 3.5)
Hannah O’Rourke is the co-founder of Campaign Lab and the former Director of Labour Together. She has been working at the intersection of campaigning and technology since 2018 and is also the co-author of Reorganise.