
AI is changing many aspects of our lives, so it's reasonable to expect that it will impact democracy, too. The question is how. Two experts in technology and politics join us to discuss how we can harness AI's power to strengthen democracy. Yes, there will be deepfakes and automated misinformation, but there can also be greater opportunities for the government to serve people and for all of us to have a greater say in our systems of self-governance.
In their book Rewiring Democracy: How AI Will Transform Our Politics, Government, and Citizenship, Bruce Schneier and Nathan E. Sanders describe how AI could change political communication, the legislative process, bureaucracy, the judiciary, and more. It's a more hopeful argument than you might expect. They discuss how AI’s broad capabilities can augment democratic processes and help citizens build consensus, express their voice, and shake up long-standing power structures. As they say in the interview, AI is just a tool; how we use it is up to us.
Schneier is a security technologist and the New York Times bestselling author of 14 books, including A Hacker’s Mind. He is a lecturer at the Harvard Kennedy School, a board member of the Electronic Frontier Foundation, and Chief of Security Architecture at Inrupt, Inc.
Sanders is a data scientist focused on making policymaking more participatory. He has served in fellowships at the Massachusetts legislature and the Berkman Klein Center at Harvard University.
Related Episodes
The Problem(s) with Platforms (Cory Doctorow)
Building Better Bureaucracy (Jennifer Pahlka)
Laboratories of Restricting Democracy (Virginia Eubanks)
This article is sourced from the Democracy Works podcast. Listen or subscribe below.
Where to subscribe: Apple Podcasts | Spotify | RSS
Scroll below for transcripts of this episode.
Chris Beem
From the McCourtney Institute for Democracy at Penn State University, I'm Chris Beem.
Cyanne Loyle
I'm Cyanne Loyle.
Jenna Spinelle
I'm Jenna Spinelle, and welcome to Democracy Works. This week, we are talking about AI and democracy, and our guests are Bruce Schneier and Nathan Sanders. They're both experts in technology and data science, and more importantly, the ways that those two things interact with the government and our political structures. They are the authors of the book Rewiring Democracy: How AI Will Transform Our Politics, Government, and Citizenship, and we get into all of that in the interview. But, you know, I think this is an extension of how AI, just like every other technology that came before it, has impacted democracy in some way. So it's naive to think that it won't, and I'm glad that we have Bruce and Nathan with us to help us break that down on a more granular level.
Chris Beem
I think that's right, Jenna. First of all, they talk about democracy as being an information system, i.e., we take inputs from people and we convert them into policy, and into our choices about politicians. And understood that way, you can look back at just about every technological improvement or change as having an impact on democracy, right? I mean, when all we had was horses, you didn't go very far. When the train came in, suddenly it was possible to see and campaign in far more of the world. Radio, telegraph, television, computers, social media: every one of these things has had dramatic impacts on our politics and on the way this information system works.
Cyanne Loyle
I mean, Chris, I want to amplify that a little bit, because in many ways, there isn't anything new here, right? If we think about our concerns about AI, propaganda has been around since the beginning of politics. There's been distrust and misinformation since the beginning of campaigns, and so the things that we're concerned about aren't necessarily new. They're just going to be bigger, right? Bigger, faster, and more difficult to counter for those reasons. But I like your point that AI, like many of these other technologies, is value neutral. It doesn't necessarily have to be evil, and I think there's been a lot of discussion that has vilified or stigmatized the use of this particular technology. But it's not the technology itself that's the problem; it's how it's used. And in particular, I appreciate Bruce and Nathan's point that it's how it's regulated, right? How we think about structuring and controlling it, or at least putting guardrails or parameters around it. Before we get too much further in, I also want to give the pitch, and our speakers will get into this a little bit more, in terms of what AI actually is. Certainly on college campuses nowadays, when we're talking about AI and students, we often are thinking about chatbots, things like ChatGPT and the way in which that's changing the educational system. But just a reminder that AI is all sorts of other stuff that we use all the time. This is Google Maps. This is Grammarly. This is even a Google search, right? The search process in which we're looking up new information is mediated by an algorithm. I've been worrying about that myself for a while, and Jenna, I know we had a speaker on last year who talked through some of the concerns about the ways in which these search algorithms were changing the information that we saw and received. So AI is all of these things, and we can be a little bit more specific and critical when we're thinking about the things we're worried about, because I'm not sure we are worried about, you know, map applications, but maybe we should be, in terms of the types of information that is being presented to us even through those media.
Chris Beem
I call my wife a Google Maps fundamentalist. If Google Maps said, please drive left into Lake Michigan, she would do it, because she has no sense of direction, and whatever the algorithm says is far more likely to be correct than anything she does. And I'm always like, no, no, where's it going? But I genuinely can't imagine anybody who gets in the car to go somewhere new who doesn't have Google Maps or Apple Maps or Waze or whatever, and it's an incredibly useful tool. When you're in a big city and you can get around a big traffic jam, you're very glad to have it. In terms of the negatives that people are thinking about with AI now, many of these are in the future, but you do have people who are worried about data centers: how much land they take up, how much water they use. There doesn't seem to be an enormous amount of environmental concern, not to mention what's happened to the price of electricity. And so these are already public, political problems, issues, questions, and how we move forward on them is not at all clear. But we know where the power is, right? And so that makes it pretty clear where this is going to end up.
Jenna Spinelle
The lesson I think we've learned, and we talk about this a little bit in the interview, is that we've seen, especially from social media companies, that they're not coming to save us as far as making sure that these tools and technologies have the best interests of the users, and democracy more broadly, in mind. And I don't even think that the AI companies, at least from what I've seen, are even pretending to do that. Unlike, you know, when Facebook first came out, it was heralded as something that could strengthen democracy, and then we all know where that ended up. But I think that pretense has been left behind, and it's really, as Bruce and Nathan say, up to us to figure out how we're going to harness this tool and all of its variations and everything that comes with it, rather than looking to the companies that make it to decide that for us.
Cyanne Loyle
Jenna, that's my pitch for a stronger regulatory regime, right? We've seen other parts of the world be much more successful in constraining the negative impacts of social media. The European Union is light years ahead of the US in terms of privacy laws, the ability to delete data, to maintain a more anonymous internet presence, and things like that. And so it's both that the companies will not self-constrain and that, at least in the United States, the government is not interested in constraining them, or is not interested in establishing the regulatory regime that can harness the positive powers of AI for democracy and social good. And I really do believe, personally, that there are many, many good things that AI could be used for. We get into it a little bit in the interview, thinking about different models for the distribution of benefits and things like that that could be amplified, sped up, and ultimately made more effective and cost-effective and efficient, and just normatively better, through some of these technologies, if it is done correctly, right? If it is monitored and regulated and constrained in some way to allow AI to feed into the information component of democracy.
Jenna Spinelle
I also just want to flag that we've had several other conversations on the show over the years that I think pair nicely with this one. Cyanne, you mentioned we had Cory Doctorow on, and then several years ago we had Virginia Eubanks, who wrote a book called Automating Inequality about the way that benefits are distributed. So I'll link those in the show notes; they'll pair very nicely with this conversation. But let's go now to the interview with Bruce Schneier and Nathan Sanders.
Jenna Spinelle
Bruce Schneier and Nathan Sanders, welcome to Democracy Works. Thanks for joining us today. Thanks for having us. So I really enjoyed reading your book, Rewiring Democracy: How AI Will Transform Our Politics, Government, and Citizenship. AI and democracy is one of those things that kind of gets thrown around a lot, but it's a big question, and I think both of you do a great job exploring a lot of different ways that AI already has and will continue to impact our democracy. We're going to get into some of those today, but I want to start with a question about each of your backgrounds, just to help listeners get to know each of you a little bit more. So, Bruce, you have a background in cybersecurity, among many, many other things, but you write that you view democracy as an information system. I wonder if you could tell us a little bit more about what you mean by that.
Bruce Schneier
All right, so I come to this through cybersecurity, through looking at systems, computer systems, and how they work, and how they fail, and how they can be made to fail, how they're hacked. And throughout my career, I've been looking at that more generally. The book before this that I wrote was called A Hacker's Mind, which takes this way of thinking to social, political, and economic systems. What does it mean to hack the law, to hack the tax code, to hack an economic system? Which naturally brings me to thinking about democracy and AI and all the ways it is being used and changed by this technology. And to me, from this perspective, democracy is an information system for problem solving. It is a way to convert individual preferences into a group outcome, how we decide. So fundamentally, this is about information: the information individuals have to establish their preferences, the ways they make their preferences known into the system, and then the ways the system takes those preferences, aggregates them, and figures out what to do. And we can argue about whether it's fair or not. Ideally it's fair; in reality, there are things like gerrymandering and money in politics and all sorts of ways it's perturbed. But that's fundamentally what the system does, and thinking about it that way is actually useful for thinking about both security, like how it can be manipulated, and AI, how artificial thinking agents will change all the parts of that process.
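To make the information-system framing concrete, here is a minimal sketch, not from the book, of the pipeline Schneier describes: individual preferences go in, an aggregation rule processes them, and a group outcome comes out. Plurality counting is used only because it is the simplest possible rule; the names and data are purely illustrative.

```python
from collections import Counter

def aggregate_preferences(ballots: list[str]) -> str:
    """Convert individual preferences (inputs) into a group
    outcome (output) using the simplest aggregation rule:
    plurality, where the option named most often wins.
    Real systems use richer rules (ranked choice, approval),
    but the input -> aggregate -> outcome shape is the same.
    """
    tally = Counter(ballots)
    winner, _count = tally.most_common(1)[0]
    return winner

# Each ballot is one citizen's expressed preference.
ballots = ["parks", "transit", "parks", "schools", "parks"]
print(aggregate_preferences(ballots))  # -> parks
```

Everything Schneier lists, gerrymandering, money in politics, manipulation, can then be read as a perturbation of one of those three stages: the inputs, the aggregation rule, or the reported outcome.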
Jenna Spinelle
Yeah, and that's, I think, a fundamentally more hopeful perspective than our listeners might have been expecting. I think we'll get to some more of that, about why both of you are ultimately more hopeful than not about AI and democracy. But Nathan, your background is in data science, and you and I were chatting before we started recording: you've done a lot of work building applications for public comment and oversight and deliberation. Talk about how you see AI fitting into that world and those kinds of projects.
Nathan Sanders
Yeah, you're right. My background is in data science, and really in the physical sciences. I started my career in astrophysics, and I've done a lot of work in applications of machine learning and statistical techniques to environmental science and public health and other domains of natural and life sciences. But really the passion that's emerged for me throughout my career is trying to make democracy work better, trying to make policymaking, in particular, more participatory, helping vulnerable communities to have a greater say in the outcomes of law that affect their communities. And throughout my work in that area, I've found technology to be a really, really important tool. It's often a tool that's valuable in helping to scale those information systems that Bruce was talking about, and the ability of more people to participate in them and have their voices meaningfully heard. So in recent years, of course, among those technological tools that are useful in that domain, AI has emerged as more and more central, and some of the projects that I've been involved in have found useful applications of AI to augment those information systems: to help people understand and have more information and context about the policymaking debate, and to be able to express themselves and have their voices considered and synthesized as part of that information processing that happens in democracy. And of course, we've also seen many examples of ways where that's gone wrong and been used in negative ways to manipulate the system, and I'm sure we'll talk more about many of those risks as well.
Jenna Spinelle
Yeah. And thinking about democracy: I just had a conversation earlier this week with somebody who studies comparative politics, so we talked a lot about democracies versus authoritarian countries. And I wonder if you could speak broadly to what the calculation is, maybe what some of the advantages are, or how thinking about AI differs in a democracy versus an authoritarian country, or somewhere that is at least more on the spectrum leaning toward authoritarianism versus democracy.
Bruce Schneier
So I've done some of this work, and I think in a lot of interesting ways, democracy leverages conflict as a problem-solving tool, the same way that capitalism does: individuals pursuing their self-interest aggregates into what we believe is the best interest for the group. And it's a system that works better, in theory, if individuals know what's going on, have access to good information about what's going on, and are able to express their preferences into the group. Authoritarianism, the other way around, actually works better if people don't know what's going on. Giving people accurate information is harmful for an autocracy, because it lets people know where the system's failing, who's benefiting, who's not, who disagrees, where the protests are. So an authoritarian regime actually wants to suppress accurate information about society, about the government, about the economy, about all the things that are happening. So the same technology that in a democracy could be used as a system to move information around becomes a system to suppress information, and that, to me, is the main difference.
Nathan Sanders
I just want to add to that by bringing in the language of power to this conversation, which is part of what we analyze in our book: how AI is a power-amplifying technology. It can turn speech or thought into action in a way that's really unique and more scalable than previous technologies. And so we analyze AI as a technology that will amplify the power of everyone that's using it, for whichever purpose they're trying to use it for. And that means, in a democratic context, to build on the examples that Bruce gave: if you want to use it to enhance the efficiency of, say, a benefits administration process, it can do that. If you want to use it to assert a policy program that is dictated by a single authoritarian leader, it can do that as well. A really interesting example of this is in the policymaking context. We see examples around the world, including experiments in the US Congress, in the House, experiments in Japan, and many other jurisdictions, of legislatures that are trying to use AI to synthesize public input in a way that can be a guide for writing law, or even to automatically generate laws or amendments based on a corpus of public input, which is really interesting. There are lots of ways that can be done poorly or exhibit bias or go wrong, but the idea of using AI as a technology that synthesizes public input and public demand to amplify the power of constituents is one way democracies can use the technology.
Jenna Spinelle
Yes. It was maybe a year or two ago that I talked with Jennifer Pahlka on the show. You may know her and her work about, you know, bringing government technology out of the dark ages. It seems, on the one hand, we're pretty far apart: some governments use technology that's older than I am, from the early days of computing, and to suddenly go from that to AI is a leap. But it also seems like there's a lot of potential, too, particularly when budgets are tight and open positions aren't being filled and all of these things. I guess the question I really want to get at is, how open or how able are local governments and state governments to even consider using AI, even though it would benefit them? Are they in a place where they even can, given how outdated some of their technology is?
Nathan Sanders
I think it's absolutely true that there is an enormous degree of interest and, to use the word from your question, Jenna, openness to applying AI in government. We see so many case studies and examples of that from around the world. In the US in particular, the federal government did an inventory of the number of use cases and experiments with AI happening across federal agencies during the Biden administration. They actually published a roster of those experiments, and they updated it over time. At the end of the Biden administration, in late 2024, the number of disclosed AI use cases was north of 2,000, just an enormous amount. Some of them were very small; actually, applying Grammarly to email in one of the agencies was one real example. Some of them were very large and pervasive, like using AI to administer benefits decisions across insurance programs at CMS and other agencies, which are really, really impactful and carry greater risks and maybe greater potential benefits as well. So I think the openness is clearly, definitively there. I think also, as your question suggests, it takes time, and as Jen Pahlka has written about so eloquently, it can be very difficult to implement these ideas well. So even though there are so many applications of AI that have already started, really seeing the impacts, both positive and negative, of those applications will take years still to unfold.
Jenna Spinelle
We are heading, of course, into the midterms here in 2026, so it's going to be on a lot of people's minds. And if I can kind of summarize what you wrote here, my takeaway was that there's going to be even more political communication than we already get as voters. Especially here, I'm in Pennsylvania, we already get a lot of ads and messages in every media format. And it sounds like with AI, there's the potential, and perhaps already the reality, that you can generate more things more quickly, and so the theme is just going to be more as we head into not just this election in the midterms, but beyond.
Nathan Sanders
I think you're absolutely right. AI will be an increasingly large issue in our elections going forward, both as an issue of political salience that is discussed among candidates and the subject of policy debate, and as part of the medium of how our candidates' campaigns are conducted. And we're going to see that accelerating in 2026. Recently, I've thought about this and written about this a lot in terms of the upcoming midterms, by trying to look at examples around the world. In the US, I think we haven't yet seen a candidate who has used AI as part of campaigning in a way that is so novel that it feels like a new form of campaigning, in the way that, for example, Obama's 2008 election campaign felt so novel in its use of social media. But there are examples that look and feel more like that elsewhere in the world. The one that we've written about most recently is the case of a Japanese candidate who just got elected to the national legislature there, the Diet. His name is Takahiro Anno, and he's founded a new party in Japan called Team Mirai, and he has really innovated a new way of campaigning with AI that uses it for some of the applications that we spoke about earlier: to engage constituents, to include their views in the development of his own policy platform as a candidate and as a party leader, in a way that couldn't be possible with traditional technologies. He's building applications, for example, that will interview constituents and allow them to ask questions about his platform and get relevant answers back, but then also ask the constituent what they think about that, what they would change about it. And his system actually translates those interviews, those chatbot conversations, into literal red lines against his policy platform, for him as a human, as a candidate, as a party leader, to decide: do I want to make the change that this constituent is suggesting? One of the reasons I think it's so interesting is that it gives a very visible feedback loop to participants, so that they can see how their political engagement, which may feel ephemeral if you're having a conversation with a chatbot, translates to an actual change in the candidate's or the party's platform, in a way that I think traditional media and traditional technologies just wouldn't have made possible. Now, he's one person who has just started a brand-new party, and we'll see how much those techniques take off in Japan. But examples like that are what we're looking for in the United States in the midterm elections, to see which candidates are going to try to leverage the technology in ways that really feel new and engage constituents and voters in new ways.
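The mechanics of that feedback loop can be sketched in a few lines. What follows is a hedged illustration, not Anno's actual system: the `propose_revision` step stands in for a language model that would read the interview transcript and apply the constituent's suggested change, and it is hard-coded here; the rest shows how a proposed change becomes a visible red line (a unified diff) queued for human review.

```python
import difflib

def propose_revision(platform_text: str, chat_transcript: str) -> str:
    """Hypothetical stand-in for the AI step: a real system would
    use a language model to read the interview transcript and
    return the platform text with the constituent's suggested
    change applied. Hard-coded here purely for illustration."""
    return platform_text.replace(
        "expand bus service on weekdays",
        "expand bus service seven days a week",
    )

def redline(platform_text: str, revised_text: str) -> str:
    """Render the proposed change as a unified diff (the 'literal
    red lines') so the candidate can review it before adopting."""
    diff = difflib.unified_diff(
        platform_text.splitlines(),
        revised_text.splitlines(),
        fromfile="platform (current)",
        tofile="platform (constituent proposal)",
        lineterm="",
    )
    return "\n".join(diff)

platform = "We will expand bus service on weekdays.\nWe will fund school lunches."
revised = propose_revision(platform, chat_transcript="(interview text here)")
print(redline(platform, revised))  # a human accepts or rejects the red line
```

The design point is the last step: the AI drafts the red line, but the human candidate decides whether to adopt it.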
Jenna Spinelle
Yeah, so this gets to the question of trust, which permeates a lot of those areas that you write about in the book, whether it's writing laws or political communication or bureaucracy making decisions. All of it seems to rely on a foundation that people are going to trust what the AI tool is producing, the actions it's taking, the decisions it's making. And in the popular culture, a lot of the discussion on AI is that it hallucinates, that it puts out things that aren't true, and it's sort of pushing toward skepticism, if not distrust. So I wonder if you could talk about what's required to build that trust in AI and its tools, and how you think we're doing in terms of getting to that trust and building it.
Bruce Schneier
It depends on the application. It is very odd to talk about AIs saying things that are not true. Let me talk about politics, because humans say things that are not true all the time, and that's really important. The question is, compared to what? The other question is, who has to trust it? If I'm a candidate and I'm using AI to write campaign messaging, the only person who has to trust it is me, right? It is my voice. And compared to what? I'm probably going to have some campaign aide write the messaging. If the AI does better than that, great. If it does worse than that, I should probably use the human. These AI systems make mistakes a lot less than they did a couple of years ago. We are getting better at that. They are better in constrained environments, and there's a whole lot of ways to make them more trustworthy. But if it's being used by a person, a candidate or a legislator to write draft legislation, an attorney to write a brief, a citizen to write a letter to the newspaper or to the legislator, they have to trust it. And what I want is for that human to review what the AI does and sign off on it, just like they would if a human assistant did it. Then there are applications where they're being used more broadly. Let's say an AI is being used to help make benefits decisions: should we approve or deny this Social Security disability benefit? Again, the question is going to be, compared to what? Humans make mistakes when they do that; they make them all the time. And we can put the AI in parallel, look at the results, look at what the human does, and decide: is it better or worse? In our book, we actually lay out a staged way of doing this. For these benefits decisions, the problem we have in the US is that humans take too long; there is an enormous delay in making these decisions. So let's say we use an AI, but it's only allowed to say yes. If it makes a mistake, the only mistake it could make is that it'll give someone a benefit they don't deserve. It cannot make the mistake of denying the benefit to a deserving person, because we're not letting it say no. And we even said it just makes the easy yeses. So here it clears out the backlog, and the humans can deal with the harder cases. And then we can decide, going forward, if the AI seems to be better, that we might let it work on some of the harder cases; we might let it say no. So we can imagine these different ways of deploying it, but it's always going to be compared to a what. So think of the AI as a power-enhancing technology: depending on what you want to do, it will help you do that better, more effectively, faster, more efficiently. If you want to have better democracy in general, AI will help. If you want to have less democracy, AI will help. If you want people to get the medical reimbursements and benefits they deserve, it will help you do that. If you want to minimize the number of people who are getting a benefit, it will help with that as well. But the interesting thing is, it can do it quickly.
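The staged deployment Schneier describes reduces to a one-sided decision rule. Here is a minimal, hypothetical sketch of it; the threshold, field names, and scores are invented for illustration, and a real system would validate both the model and the cutoff before trusting either.

```python
from dataclasses import dataclass

APPROVE_THRESHOLD = 0.95  # illustrative; a real cutoff would be validated

@dataclass
class Claim:
    claim_id: str
    approval_score: float  # hypothetical model's estimated probability of approval

def triage(claim: Claim) -> str:
    """Yes-only policy: the AI may approve, never deny.

    Easy, high-confidence approvals clear the backlog automatically;
    every other case, including all potential denials, goes to a
    human reviewer.
    """
    if claim.approval_score >= APPROVE_THRESHOLD:
        return "approved"
    return "human_review"  # the AI is not permitted to say no

queue = [Claim("A-101", 0.99), Claim("A-102", 0.62), Claim("A-103", 0.97)]
for c in queue:
    print(c.claim_id, "->", triage(c))
# A-101 -> approved, A-102 -> human_review, A-103 -> approved
```

Because the rule has no deny branch, the worst error the AI can make is an undeserved approval, while every potential denial stays with a human, which is exactly the asymmetry Schneier is describing.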
Jenna Spinelle
Well, and on that point, I want to come back to the Trump administration. They had tried, through the Big Beautiful Bill, to put a restriction on states being able to adopt AI policy, and that ultimately did not make it into the final version of the bill, if I understand it correctly. But I guess the bigger question, bringing federalism into this, is: what are the ramifications of California having different AI policies from Nebraska or Florida or Massachusetts or any other state? What are the pros and cons, or how are you thinking about those differences in policy across states?
Nathan Sanders
Great question. And just to make sure listeners know, that proposed federal moratorium on state AI legislation did come back last month, in December, when the Trump administration issued an executive order trying to put in place that same policy proposal, the moratorium, but by different, non-statutory means. Now, I think there are a lot of good and open questions about whether the Trump administration can enforce any of the mechanisms in that executive order that are trying to prevent states from regulating AI. A lot of people don't think there's much teeth to it. But in any case, it continues to be the enacted policy of the federal government, by virtue of executive order, not to allow states to regulate AI. And from my perspective, that's really a terrible thing, for at least two reasons. One is that it's very clear that Congress is not taking action to regulate the AI industry and applications of AI. I would love it if it would; I would love it if we had strong federal standards that govern and shape the AI ecosystem. But it's evidently simply not happening right now, and so states are really the only game in town for regulation. And secondly, I think it is necessary and appropriate and valuable to have innovation in regulation at the state level. States in the US, in our federal system, have of course always been those laboratories of democracy. This is a new technology. People who call out that we don't necessarily yet know all the most appropriate ways to pull those regulatory levers to shape and constrain this industry have a good point, and therefore I think it's valuable to do experimentation in parallel across states on AI regulation, in order to identify those best policy mechanisms that Congress can and should ultimately adopt over time.
Bruce Schneier
And I'm okay with there being different regulations. We have that with social media: the companies have to deal with regulations in many different countries, and they do. They're big companies; they can do this. And I don't have a lot of sympathy for the whining that the rules are hard. Yeah, rules are hard. That's the price of being in business.
Jenna Spinelle
Bruce, I'm glad you mentioned social media companies there. I've been thinking a lot about them and that era while I was reading your book. There was a period of time when social media companies maybe gave lip service to caring about democracy, or tried to make decisions about how their platforms operated that were democratically oriented, but it seems like we've moved away from that over time. So I guess my question for you is: do the OpenAIs and the other big AI companies out there even give lip service to democracy at this point? How are they thinking about it? And is there a parallel to be drawn with how some of this has played out over the past 15 years with social media companies?
Nathan Sanders
I think lip service is probably the right expression to use here, because there are really interesting projects happening at some of the big AI development companies, on the one hand, to try and enforce rules on their own, in a voluntary way, for example about political applications of AI, and, on the other, to try and develop AI systems in a way that integrates public input, to try and align those systems to public values and preferences. I think some of that research is really interesting, and I admire many of the researchers who are doing it, some of whom Bruce and I have met. However, I think lip service is an appropriate term because that's entirely voluntary, and the lesson that we should learn from social media is that it is not necessarily in these companies' interests to continue or meaningfully enact the outcomes of that research. We shouldn't trust them to do that. We need regulation to enforce those ethical expectations on companies, in the way that we rely on regulation to do so in so many other sensitive industries, like pharmaceuticals. And by the way, we should have alternatives to the corporate development of AI that are based on fundamentally different incentives, incentives that steer the development of AI toward the public benefit rather than purely toward private profit.
Jenna Spinelle
Yeah, you write about public AI in the book. I wonder if you could talk more about what that looks like.
Bruce Schneier
So think about this not as an alternative to corporate AI, but as something else in the mix. What we'd like is for there to be many different models, and we're moving toward that: the cost of creating these core models is dropping dramatically. The Chinese, with their DeepSeek model, taught us that you don't need the biggest compute or the newest chips, that you can use older technologies. The Swiss taught us that you can do this ethically, that you don't have to illegally take copyrighted material or use poorly paid third-world labor; you can do this more ethically, more democratically. And as the price continues to drop, I expect to see more of these models. They're within reach of large universities. Switzerland is not a big country, and so many other countries can do this. Singapore is working on a model that is optimized for Southeast Asian languages, because that training data isn't represented as much as they want in these large Western models. So I think the ecosystem is changing, and with that, the power of these big tech companies is naturally going to fade, and I think it's going to be a really interesting dynamic to watch.
Jenna Spinelle
Yeah, and speaking of what you're watching, as we come to the last few minutes of our time together: you write toward the end of the book that there are four things that we need, four pillars, to have an AI that works for democracy: reform, resistance, responsible use, and renovation. The book goes into detail about all of those things, but, as you said earlier, Bruce, AI is just a tool; it comes down to how humans use it and what we want to put behind it. So I guess my question is, what are both of you watching for to determine whether and how the various parts of our democracy are up to the task of utilizing this tool, and all of its applications, in a way that will ultimately serve democracy in the end?
Bruce Schneier
So AI is a tool, but AI also has affordances. By making something easier or cheaper or faster, it can change what it is. And we see that with propaganda: propaganda is not new, disinformation is not new, fake news is not new. AI makes it easier for random people to produce convincing fake audio and video and images. We just saw that with the invasion of Venezuela and the fake images that, within hours, came out and propagated across the internet. So AI will change things because of that nature. What we look at largely is how democracy is doing: what do the people want? We're living in, I guess, a decade where democracy is on the wane in many parts of the planet, and AI is going to exacerbate that, and that's not going to be good. But AI also has the potential of supercharging pro-democracy movements and organizations and countries, helping them do that better. So I think we're watching democracy and how it uses AI as much as, if not more than, the technology itself.
Nathan Sanders
I agree with that, and I appreciate, Jenna, that you brought in the four Rs framework that we have in the final chapters of our book. It's really the last of those Rs, renovating democracy, that I'm watching. This is really just building on what Bruce said, but there's so much that is not specific to technology, and certainly not specific to AI, that will dictate how democracy changes and how AI ends up being used in the democratic context. There's so much that we already know we need to fix about our democracy, particularly here in the US, and whether or not those changes happen dictates if we end up using this technology in ways that are pro-democratic and pro-social or not. Earlier in our conversation, we came up with, I think, a great example of that, which is gerrymandering. We've had such a problematic dynamic around gerrymandering tit-for-tat in the United States over the past several months. If elected officials are choosing their own voters, it doesn't matter nearly as much what they have to learn from those constituents, no matter how many great tools we may have to help make that information system, the sharing of constituent preferences on legislation, better and more scalable. It doesn't matter if the underlying electoral system that assigns those constituents and their elected representatives is fundamentally broken.
Jenna Spinelle
We could have a whole other conversation about democracy reform. Maybe we will someday, but I think we're going to leave it there for now. There's so much in this book that we didn't even get to, about legislating and the judiciary and lots of other things as well. I hope listeners will pick up a copy of the book to learn more about all of the ideas and the examples that both of you put forward. Nathan Sanders and Bruce Schneier, thank you so much for your work and for joining us today to talk about it.
Nathan Sanders
Thank you so much.
Cyanne Loyle
Well, thanks, Jenna, for that really fantastic interview. I am so impressed with what I've learned from Bruce and Nathan, but also with the really comparative way in which they're thinking about this topic, for democracy around the world, not just in the United States. The concept that really jumped out to me, and it's kind of where I want to start, is this idea of AI as a power magnifier or a power amplifier. The idea, going back to our point that technology has always shifted democracies, is that what's different about this technology is how quickly and how efficiently it's able to amplify power. So when we think about this value-neutral concept, whether AI could be good or bad, it depends on who is in charge, right? Who is holding the power behind the technology? And Jenna, I think this gets back to your point in the opening, which is that we haven't seen corporations restrain themselves. I can't think of too many examples of corporations ever restraining themselves with any type of technology; it's just not what we should expect from them. And so it really does put a lot of pressure on government and on citizens to think about this technology and restrain it accordingly. And for me, that was kind of good, because I saw this hopeful message, right? All these really positive and powerful things that AI can do when we set it up correctly.
Chris Beem
Well, you know, I suppose that's possible, but I just don't have a lot, looking out the window, to hang my hat on there, right? Jenna mentioned Cory Doctorow, and his concept is enshittification. He says that Google has made their technology worse deliberately in order to keep people on their site searching longer and therefore being exposed to more ads. These are the folks to whom we are entrusting the future of AI. So you're right: there's no good reason to expect that it's going to be any different with AI. They have an enormous amount of power. They have the resources that are necessary to create the infrastructure for AI, and very few other people do. I just read something that it's like 1% of the annual GDP of the United States is AI, and that's now; it's only going to grow, right? So I don't think that's going to happen. And I also don't, frankly, think that the current administration, either Congress or the White House, has the prudence, wisdom, forethought, courage, you name it. I don't see any of that in DC right now, to stand up to those forces, and that is what is required. It simply takes somebody to say: you know what, your tools, your power, operate within constraints that the people set. And if you do that, well, we don't have rivers that catch on fire anymore, and the reason for that is public policy. If you want some kind of similar controls with respect to AI, which is clearly in a position to do really grave and terrible things to our society and our world, then I think you absolutely need governmental control. But I'm not confident we have it right now. And, you know, politics is an extremely self-interested profession, and there are always going to be candidates, potential politicians, who are looking for an advantage. And there's no doubt that AI creates that possibility.
Cyanne Loyle
Yeah, and as populations get bigger and constituencies get bigger, the ability to effectively discern and then disseminate the will of the masses, in a way that was easier when we had a smaller electorate, could be a really useful feature.
Chris Beem
I mean, I have little doubt that's going to happen, because human beings are inherently creative, and when their self-interest is involved, they're extremely powerful in terms of figuring out how to use this tool. So I'm confident that that's going to appear. And, as Bruce and Nathan say, it's already started, right? It's already out there. So we'll see it in 2026, but certainly by 2028 you're going to see this. That doesn't mean that the negative and scary effects are going to be mitigated. It does mean that there is an opportunity for these positive, pro-democracy effects to happen as well.
Cyanne Loyle
And I think, you know, it's not a silver lining, but it's a place of last resort, right? The final line of defense is individuals paying attention and, even in the absence of regulation, confirming and checking, being responsible for the information that you consume. But it goes back to this idea that there are still a ton of humans involved in these institutions, and we have a lot of mechanisms, tried over time, for ensuring justice and truth and the correct outputs. We're even seeing this in some of the political movements in the United States right now: when the government is attempting to disseminate misinformation, the burden is on individuals to determine what is the correct video that they're looking at, what about it makes sense. There's always, you know, some guy with six fingers or whatever that tells you it's an AI-generated image. But there are other things, too. We can triangulate what people are telling us with what respected journalists are giving us in terms of news sources, and in that way we can combat some of the negatives of AI and leave time to search for the positives.
Chris Beem
Well, the political theorist in me is saying: so what you're saying is, this is a democracy, and every person in a democracy has a little sliver of sovereignty, and if you're going to take that seriously, that means living up to it, and that requires responsibility. AI can make things faster and more efficient, but it cannot take that away. You're not going to find a machine or a tool that's going to allow you to abdicate your responsibilities. And boy, is that a grumpy-old-man thing to say, but I think it's true.
Cyanne Loyle
No, I think it's the exact opposite, right? I mean, I love that sentiment, that no matter what new technology comes down the line, an individual voter has to be an individual voter and make their own decisions with the information that they receive. But I'm not a political theorist, so I'm going to take an IR realist take on this, which is to cycle us back to power and just remember that the outcomes of these technologies are determined by who holds the most power. And in a late-stage capitalist society, with weakened democratic institutions, we should be concerned about who has power and who's going to ultimately regulate these technologies for us, or not regulate them, going forward. So, as with many things, this is a time when we should be concerned about AI's ability to undermine democracy, particularly democracy in the US, because of the lack of regulations, and we should not expect any of the major power brokers around us to do much about that. Going back to your point, Chris, there's no incentive, no reason to expect it to happen. And so the onus is on us. We must stay strong, and we must continue to be critical consumers of all of the information we receive, and not make a left into Lake Michigan.
Chris Beem
All right, I have nothing to add to that. That's really good. And, you know, it's yet another way in which it's not pretty out there, but that doesn't get you off the hook. It doesn't get any of us off the hook. All right, well, anyway, that's it for Democracy Works. I'm Chris Beem.
Cyanne Loyle
I'm Cyanne Loyle. Thanks for listening.