
AI and Democracy: (A fork in the road…?)

  • Writer: Pablo Aguirre Solana
  • Apr 30, 2023
  • 10 min read

Dedicated to Alejandro Moreno and Francisco Parra, who have taught me a thing or two about Democracy



We live in an age of myriad theories, books, articles, podcasts, and opinions about the consequences of AI for societal change. One of particular interest to me is how AI can affect and change democracy.

 

Up to now, as far as my knowledge goes (which is not vast), the arena in which these theories, books, articles, and opinions thrive is twofold: Star Trek-optimistic-futuristic and Faustian-Orwellian-pessimistic. The reason for this is quite simple, I guess: the pace of AI development outstrips the pace of societal change, in this case of Democracy. Thus, this corpus of theories and opinions carries a high degree of conjectural and speculative assumption, which can cover or hide the real, empirically verifiable effects of AI on Democracy.

 

Also, the motivation behind these theories and intellectual frameworks is to explain a crucial fact: the potentiality and ubiquity of AI across many domains of human life, carrying existential threats for some and naïve realities for others, all grounded in the inexorable uncertainty and sense of possibility that the meteoric technological change of AI implies.

 

I am convinced that Democracy, like any other aspect of human life, could be affected by the ramifications and implementations of AI, but not in isolation, nor as simple instrumentation, because democracy is foremost an interrelated societal process and cannot be easily operationalized and tuned with algorithms. Democracy can indeed erode, transform, and disappear, but for this to happen many other things need to occur, which may or may not relate to AI development and instrumentation, as we shall see ahead.

 

Thus, the aim of this article is, first, to draw a quick survey of the main ideas of three famous intellectuals from the Faustian-Orwellian-pessimistic arena, and second, to contest some of these ideas and offer alternative interpretations of how AI and its development can, or cannot, affect democracy.


Three Horsemen of the Apocalypse

How and why can AI affect democracy? How can its ramifications, relations, instrumentation, and outputs challenge, transform, or break democratic culture and institutions? Three famous intellectuals, Byung-Chul Han, Miguel Benasayag, and Shoshana Zuboff, addressed these questions, and others regarding the future of AI in society, in their respective books,[1] which are worthwhile to analyze and think about.

 

There are three main ideas shared across these authors that can serve as a starting point to open a conversation about the impact AI can have on democracy: the loss of individual autonomy, how algorithmic governmentality substitutes human governmentality, and surveillance capitalism as a new instrumentarian power.

 

 

I. The loss of individual autonomy  

 

For democracy to exist, a society of free and autonomous individuals must organize itself; this is its necessary and sufficient condition. Individual autonomy thus serves as the cornerstone for thinking about and operationalizing democracy, for it grants the capacity of moral choice to the individual and to the group in which the individual is embedded.[2]

 

With AI, the fear and anxiety these three authors present to us is how AI's dominance over particular areas of human life can represent a way of “de-existing,” in which the body is still present but is subject to an immaterial regime,[3] a way of being captive in a digital cave,[4] or, as Zuboff puts it, of parsing and packaging inner life for the sake of surveillance revenues.[5] This can consequently undermine the human capacity for self-government and moral choice, because certain aspects of human choice and decision can be subjected and delegated to an algorithmic process.


Their understanding of AI as a ubiquitous, all-embracing, and totalistic phenomenon leads them to argue the possibility of falling into what I like to call a Faustian tradeoff, in which we sell our souls to the AI-devil for absolute knowledge. Freedom is forfeited to knowledge,[6] not knowledge in terms of wisdom, but knowledge in terms of certainty about what is happening anywhere (through a hyper-connected IoT world). Hence, this tradeoff would imply that the certainty AI carries is preferable to the uncertainty and ambiguity that human agency and moral choice carry, somehow losing the autonomy that is necessary for collective action and for democracy to exist.


II. How algorithmic governmentality substitutes human governmentality

 

The scenario these authors suggest is one in which, as Benasayag states, “the life of the individual and societies is guided and structured by machines”:[7] one that cannot agree with or adapt to democratic principles because it responds to a rationale of information optimization.

 

AI would replace governments and political bodies with the mere analysis of data, massively delegating an entire set of institutions and a culture of deliberation, scrutiny, and participation to an algorithmic process.

 

This would be possible because, as the three authors argue, there are factual means of behavioral modification that can reduce human experience to measurable, observable behavioral units,[8] which can be more efficient for any decision-making process, thus determining a new social contract. And also because, thanks to Big Data and data-warehouse capabilities, corporations, governments, and other organizations can, as at no other time in history, keep track of individual behaviors and records of many dimensions of people's private lives.

 

Governmentality, thus, would rely more on modeled societal behaviors and their predictability by AI than on the messy and accidental human interactions involved in any collective decision process.

 


 

III. Surveillance capitalism as a new instrumentarian power


From the moment the FAANG tech giants captured, monitored, and stored human activities through data, from shopping behavior to social group dynamics, these companies became the sole proprietors of what Zuboff calls “behavioral surplus,” the raw material on which surveillance capitalism works. This behavioral surplus is nothing more than the accumulated knowledge from analyzing and data-mining what people shop, post, and do across the vast universe of sites and apps these corporations hold, ranging from location services to face recognition.

 

The implication of this is the origin of a new instrumentarian power that asserts dominance over society.[9] This domination is exerted through an unprecedented concentration of knowledge, knowledge that is privately owned by the tech giants and that can be used as a means of behavioral modification.

 

As a result, this concentration and misuse of behavioral surplus can hinder democracies around the world in two ways: through violations of privacy and constitutional rights in pursuit of algorithmic efficiency and frictionless consumer experiences, and through government or military agencies surveilling, monitoring, tracking, and profiling ordinary citizens for recruitment, political mobilization, narrative propagation, spying, and social ordering.


A Multi-Lane Road

 

Imagine “Having a mathematical, predictive science of society that includes both individual differences and the relationships between individuals has the potential to dramatically change the way government officials, industry managers, and citizens think and act.”[10] Also, imagine the possibility of designing an algorithm that can predict a “maximum acceptable risk threshold for a defendant to be denied bail or not”[11] to improve jury and courts decisions from social and racial bias.

 

I assume some of you might think these arguments seem a bit far-fetched, Star Trek-optimistic-futuristic, but this is the reverse side of the coin, showing how AI can shape and influence our society in a less catastrophic interpretation than the previous one.

 

I specifically selected these two examples from the works of Pentland and Kahneman et al. to exhibit how, instead of seeing AI as a threat to individual freedom and governmentality, we can see AI as a tool for aiding and perfecting, rather than shaping and controlling, the way we make public and private decisions.

 

The key to understanding these two opposed worldviews of AI (pessimistic vs. optimistic), and therefore the consequences each can have for democracy, is the theoretical standpoint from which the two perspectives depart.

 

Byung-Chul Han, Miguel Benasayag, and Shoshana Zuboff assume AI is an all-embracing, overarching, totalistic phenomenon that exerts power and control, whereas it is particularistic, task-driven, and goal-oriented by nature. AI helps to map, automate, and predict inputs such as image, voice, and text across a variety of applications and outputs, but it does not exist in a vacuum or by itself.

 

Contrary to popular belief, AI encompasses several branches of computer science and other fields such as mathematics and statistics, which serve different purposes and are not connected to a single source that can be controlled or manipulated all at once. For example, robotics, which is a branch of AI, does not relate to translation, which is part of NLP.


Thus, by taking a particularistic, task- and goal-driven perspective of AI, we can open a conversation about which implementations, applications, and operationalizations of AI can take hold of democracy.

 

Another theoretical standpoint crucial to understanding how these authors interpret AI in relation to democracy is their “either/or” rationale: the substitution, alienation, and delegation of social functions and social bodies through AI and because of AI. In practice and under empirical observation, however, there are many applications and experiments, such as those conducted by Kahneman et al., that show extensively how AI is not a substitute for social functions and social bodies, nor a totalitarian apparatus for behavioral modification, but just a tool and a complementary medium to refine, aid, and support decision making, for example in the fields of justice, medicine, psychiatry, agriculture, and fashion, among others.

 


It is more about the nature of power than the nature of AI

So far, I have tried to show how some theories thrust AI into a bucket of omniscient totality and certainty, which can be debated and contested, as I have suggested above. However, this does not have to make us uncritical of or oblivious to the anxieties and concerns these theories raise regarding democracy. So, in attempting to understand those anxieties and concerns, I have thought of some ways in which AI can hamper democracy and also help it, but from a different standpoint: instead of seeing AI as an instrumentarian power, as Zuboff suggests, I see it as an instrument of power, one that can yield positive or negative results depending on its use, not on its nature as the authors reviewed above suggest. This is what I think:


AI can curb democracy if the state transforms itself into a digital authoritarian system in which, with the aid of AI, it can surveil, monitor, and behaviorally censor its citizens, its minorities, and its political opponents as a means of control, as China and Russia are currently doing.[12]


AI can curb democracy if special interests and lobbying by FAANG and Big Tech companies impede antitrust regulation and legislation that protects consumer privacy rights. There is mounting evidence, for example, of how Google has lost millions of dollars in privacy settlements.[13]


AI can curb democracy if the companies, public entities, and individuals that develop and innovate with AI technologies do not subscribe to governmental ethics guidelines, such as the European Union guidelines that require of any business or entity involved with AI the following: human agency and oversight, robustness and safety, privacy and data governance, transparency, diversity, non-discrimination and fairness, societal and environmental well-being, and accountability.[14]


AI can curb democracy if it becomes an instrument of demagogues and populists, framing and disseminating toxic narratives through different media channels and through fake news. We have already seen what the Cambridge Analytica data scandal can cause in terms of political advertising. As computational capacities grow larger, and as algorithms grow more precise and sophisticated, tools can be developed to create new instances of political narratives that serve to deny truth and foster political polarization.


AI can curb democracy if it embarks on a project of total certainty, one that could integrate multiple sources of data into a single source able to predict human behavior. At this point, this seems far away, because computationally and technically it is not yet possible. But if some central-planning nostalgia kicks in in the mind of an autocrat or populist, this would certainly be a goal to pursue. Dictators, autocrats, and populists resist and defy uncertainty, which is a fundamental part of the representative democratic process.


On the other hand: AI can foster democracy if it serves as another check and balance on arbitrary and uncontested power.

 

AI can foster democracy if it increases the capacity of societies and individuals to dialogue and engage in conversations that entail different interpretations of the world and of reality, promoting cognitive pluralism through legitimate differences in public opinion instead of promoting echo chambers.

 

AI can foster democracy if it creates instances or products that detect fake news, that do not contribute to toxic and polarizing narratives in the digital space, and that can broaden freedom of speech.

 

AI can foster democracy by aiding research to understand, model, and optimize human behavior, such as the experiments of Kahneman et al. to promote better decisions in courthouses, public schools, medical institutions, etc., not to control behavior, but to improve public decision making and deliberation.

 

AI can foster democracy by serving as a tool for local governments, NGOs, watchdogs, civil society organizations, and press agencies to organize information and decision making, and to contest and refuse arbitrary power across social life: bullying, sexual harassment, racial and gender discrimination, data harvesting, and economic inequality, among others.

 

AI can foster democracy through education for communities disproportionately affected by natural disasters, poverty, war, and dictatorship. Through education, communities can be enfranchised and thus gain access to political representation and participation, which also serves as a primary dike from which to question and contest power.

 

As John Keane remarkably tells us: “Democracy shows us that no man or woman is perfect enough to rule unaccountably over their fellows, or the fragile lands and seas in which they dwell.”[15]

 

I would add: “no man, woman, or technology is perfect enough…” Thus, AI can unleash unaccountable power and also constrain it. All will depend on how we choose to write history.


7/Jul/2024
[1] Infocracy: Digitalization and the Crisis of Democracy. Byung-Chul Han. Polity Press, 2022. The Tyranny of Algorithms. Miguel Benasayag. Europa Editions, 2021. The Age of Surveillance Capitalism. Shoshana Zuboff. PublicAffairs, 2020.

[2] Liberalism and Its Discontents. Francis Fukuyama. FSG, 2022. pp. 47-48.

[3] The Tyranny of Algorithms. Miguel Benasayag. Europa Editions, 2021. p. 49.

[4] Infocracy: Digitalization and the Crisis of Democracy. Byung-Chul Han. Polity Press, 2022. p. 58.

[5] The Age of Surveillance Capitalism. Shoshana Zuboff. PublicAffairs, 2020. p. 97.

[6] The Age of Surveillance Capitalism. Shoshana Zuboff. PublicAffairs, 2020. p. 379.

[7] The Tyranny of Algorithms. Miguel Benasayag. Europa Editions, 2021. p. 51.

[8] The Age of Surveillance Capitalism. Shoshana Zuboff. PublicAffairs, 2020. pp. 306-307.

[9] The Age of Surveillance Capitalism. Shoshana Zuboff. PublicAffairs, 2020 (definition).

[10] Social Physics: How Social Networks Can Make Us Smarter. Alex Pentland. Penguin, 2014. p. 191.

[11] Noise: A Flaw in Human Judgment. Kahneman, Sibony, Sunstein. Little, Brown Spark, 2021. pp. 130-131.

[12] How Artificial Intelligence Will Reshape the Global Order. Nicholas Wright. Foreign Affairs, 2018.

[15] The Shortest History of Democracy: 4,000 Years of Self-Government - A Retelling for Our Times. John Keane. The Experiment, 2022.

 
 
 
