Some political scientists and democracy scholars think that it might. The thinking goes something like this: inequality will rise as jobs continue to be automated, which will cause distrust in the government and create fertile ground for authoritarianism.
Jay Yonamine is uniquely qualified to weigh in on this issue. He is a data scientist at Google and has a Ph.D. in political science. He has an interesting perspective on the relationship between automation and democracy, and the role that algorithms and platforms play in the spread of misinformation online.
In some ways, this conversation makes the counterargument to our conversation with Penn State’s Matt Jordan about the relationship between social media and democracy. The conversation with Matt is worth revisiting for two perspectives on some of the most complicated questions facing democracy today.
The Fourth Age by Byron Reese – a look at the relationship between technology, humanity, and democratic values
- What do you see as the relationship between AI and democracy?
- Should Google and other platforms regulate the content that users see?
- Do you feel that you have control over the content you see on Google and other sites?
- Are you concerned about AI’s impacts on democracy?
[3:40] How do you define AI?
AI has to be something that's not just a human brain relying on itself. Most of the time, when folks think about AI, what they mean is computers, which is to say a computer is doing the thinking or doing the analysis as opposed to a human brain. How I think of intelligence is the ability to make nontrivial, falsifiable, accurate predictions. I think most folks would agree that the physical motion of a robot by itself is not necessarily artificial intelligence; the AI aspect of a robot is the computer engine that interprets the world and makes predictions.
[6:25] What is the relationship between AI and democracy?
A few things have happened simultaneously that might not be as causal as we might believe. There's definitely been a rise in populist politicians in the United States and abroad, and a move toward more heavy-handed political ideologies. And of course there's also been fairly rapid growth in the prevalence of AI and machine learning in our day-to-day lives. It's not clear that those two are connected, but you can see why people draw the connection. Primarily it revolves around news, around platforms, and around the increased ease of sharing information, including disinformation.
[8:26] Does one influence the other?
What's interesting to me as a political scientist and someone who has studied the history of political institutions and political dynamics is that for almost all of history, increased access to information, and increased ability to create and disseminate it, has almost always driven an increase in what you might call liberal democratic values: free speech, democracy, things that have generally been held up as good. And it's almost always been some autocratic force that has fought against the spread of information, going back to the printing press.
What's interesting now is that we're seeing, for the first time, the possibility of that shifting. The ease of accessing, creating, and disseminating information might actually now be contributing to the spread of antidemocratic values.
[10:03] Is AI’s impact on democracy being discussed at tech companies?
The degree of regulation is definitely a hot issue. It's an immensely complicated one with no easy answers. There are folks arguing for increased regulation to decrease the spread of misinformation, create a better-informed populace, and push back against some of the antidemocratic trends we've been seeing.
But the counter to that is that you don't want centralized control over what can be shared and by whom, and there's definitely merit to that argument as well. It's an immensely complicated challenge. If you put a team of experts in a room and gave them a handful of pieces of content, I suspect they would have a hard time even reaching consensus. Then consider the scale that a lot of companies operate at: tens of thousands, hundreds of thousands, millions of pieces of content a day, a week, a month.
[13:24] How are companies balancing these big issues with their day-to-day work?
What a lot of companies are trying to do is hire or create teams and departments whose full-time job is to think about these types of ethical issues, and then create scenarios where those voices have sufficient authority or discretion to actually impact product roadmaps. Companies are big, complex organisms, and it's hard to introduce that type of thinking in a really productive way. It's not like there's a blueprint where you can say, "Oh, well, this is how company A did this in '98," and no one has written the book on best practices for introducing ethics and normative guidelines into an AI-based product.
[18:31] How should candidates be talking about these issues in 2020?
It's very easy to be optimistic about the societal benefit of technological adoption. Take the self-driving car story: it's feasible to imagine a world where, 50 years from now, there's one-hundredth of the car fatalities there are today. That, I think, is a pretty easy, legitimate story to tell about the benefits of innovation. The counterargument is that when someone comes up with a new device, it displaces a meaningful number of jobs, and what do you do with those people? To stay with self-driving cars, we could see a very quick reduction in the number of truck drivers needed in the coming years, and trucking is a major industry in a lot of places.
The optimist would say that new jobs will be created, such as working on the self-driving cars and trucks themselves, or doing additional road maintenance as road quality becomes increasingly important. But it remains to be seen whether those jobs will actually materialize.