#1 Planetary Futures

Escape from computational fundamentalism

Planetary Intelligence

Humanity seems to have lost its head, or more precisely, its head is no longer functioning with its body.

Félix Guattari

In the Encyclopedia of Computer Science from 1993, under the entry on computing, we read that “the basic question posed by informatics is what can be (efficiently) automated”1. It is difficult to decide whether this describes an actual goal of computer science or merely reflects a language driven by economics and founded on the primacy of efficiency over any other benefit that humans could derive from economic life and from the technological applications that propel it. It is even harder to overlook the historical intertwining of informatics with economic (neo)liberalism – particularly its defining notion of the free market. It is worth recalling that prominent theoreticians of economic liberalism, from Friedrich Hayek to Herbert Simon, imagined market systems as data processing long before the emergence of personal computers and the Internet2. Subsequently, they contributed significantly to naturalising this image with instruments drawn from the cognitive sciences3 and, in the case of Hayek, from a selectively interpreted theory of evolution4.

In the current context of the rampant commercialisation of AI systems, recognising this techno-liberal compound of computer science and economics is vital. The nature of this compound is not only political but also epistemological. If we want AI systems to amplify our cognitive abilities and “the social capital of communication,” the realm of politics necessarily needs to become a frontline of the fight against the digital hegemons whose business interests openly hinder such an amplification. This combat must be conducted not only in the sphere of legal regulations. It also needs to translate into a struggle to change the economic and technological rules governing the design of algorithms. The basis for this change – one that never loses sight of any of its vital aspects and remains immune to the language of techno-propaganda – can be found only in the sphere of epistemology, in a dialogue engaging the technical sciences, the social sciences, and the humanities. Multisectoral political projects may emerge and succeed if accompanied by a new “epistemological rupture” of an empirical and intuitive nature. According to Gaston Bachelard, such a rupture should liberate scientific thought – in this case, political thought as well – from false ideas (epistemological obstacles) that block a more detailed comprehension of phenomena and prevent a contextualised understanding of AI systems within the broader social ecosystem.

I call the nature of this techno-liberal compound of computer science and economics – currently turning increasingly techno-rightist – computational fundamentalism. Following Yanis Varoufakis’ analysis, I suggest that computational fundamentalism has absorbed free-market fundamentalism by separating new forms of capital, accumulated under the guise of a primitive accumulation of data, from the capitalist order of production and consumption5. If any “post-capitalist” formation emerges from it, it is something worse rather than better. This text aims to outline a possible escape route – provided that it is not too late.

 

Once more about neoliberalism, but completely differently

By computational fundamentalism, I mean reductionist and individualist depictions of intelligence, culturally correlated with American individualism and its conception of the subject as a participant in the market. On such a view, what defines intelligence is the capacity for data processing, while the basic assumption is that the mechanisms of thought – given sufficient data and advanced mathematical algorithms – can be reproduced in intelligent computational machines, or can even give rise to a machine network of intelligence equal to or surpassing human intelligence.

Even though this conception of human intelligence has little to do with either social or animal intelligence, its computerised simulation may deliver the same computational effects as the work of the latter, especially when those effects are measured by an economistic logic of efficiency, completely disconnected from social life and biological organisation, or by a scientistic one, which tends to form a strong bond with the former. At the same time, it significantly influences the collective imagination and the direction of the discussion concerning AI: it forces upon us a vision of humans replaced by more intelligent machines, as well as the thought that such replacement is the inevitable result of a current socio-economic and technological transformation6 which we are unable to stop and can only adjust to, provided that we prepare accordingly, since as a species – according to the basic law of neoliberalism – we are defined by our remarkable capacity for adapting to new conditions.

In saying “neoliberalism”, I do not refer to policies promoting deregulation, privatisation, the minimisation of political interference in the economy, and close control of public spending – although such “neoliberalism” naturally exists and has been deeply internalised by significant parts of the political and decision-making classes in Poland and Europe. I mean the intellectual current which emerged in the early twentieth century, primarily in the United States, in distinct opposition to the nineteenth-century “social Darwinism” of Herbert Spencer and his idea of “the survival of the fittest” – an idea that has nothing to do with Darwin’s theory of evolution, which is based not on the survival of only the strongest specimens but on the ability to develop qualities sufficient to function in the conditions of a given environment. As the analyses of Barbara Stiegler show, the language of this neoliberalism derives noticeably from the language of biology and from the evolutionist currents in philosophy represented by William James, Henri Bergson, and John Dewey. Neoliberal thinkers such as Walter Lippmann, speaking about “human nature”, tried to extrapolate the theory of evolution onto the moral and socio-political spheres or, in the case of Hayek, onto the workings of capitalism. Adaptation could thus become a “new political imperative”7.

Computational fundamentalism merges with such neoliberalism whenever the necessity for broadly understood institutions to adjust to technological change is invoked, or whenever the “evolution of technology” is discussed as if technology could evolve and self-organise just like living systems – a move which ultimately takes such discourse further away from the life sciences and closer to techno-animism. Nonetheless, just as neoliberalism, despite the diversity of neoliberal thought8, was naturalised thanks to a small but influential coalition of economists and politicians, so the current model of AI systems development – completely laissez-faire and, in this sense, no longer neoliberal but postliberal9 – is being naturalised in a similar way, the only difference being that a narrow group of IT and governance specialists has joined the coalition.

Thus, to think and speak about a different model of AI systems development, one must first disassemble the system of neoliberal convictions that makes up computational fundamentalism, for it creates deep layers of meaning in the default language in which AI is habitually thought and spoken about by its builders, enthusiasts, and merchants. In decoding this code, it does not suffice to criticise the language of neoliberalism in its everyday meaning, because this code is different: it has mutated along with AI, just like capitalism itself. That is why the critique of AI creation functions here as a branch of epistemology.

 

Old cybernetics or new informatics?

The current wave of AI reflects and strengthens the networks of power converging at the crossroads of technology, capital, and governance, as Kate Crawford shows in her analyses10. At the same time, the very idea of AI, from the moment it morphed into an ambitious research programme during the 1956 Dartmouth workshop, treated as the symbolic beginning of the discipline11, evolved in the environment of a Cold War economics detached from the material world. In this economics, as Evgeny Morozov recalls, the emphasis was on abstract models, with little attention to what they meant for social reality or how lasting that meaning would be, and with an obsessive concern for optimisation and equilibrium. Popularity was awarded to theoretical constructs such as mathematical game theory, which removed the discipline from society and its institutions.

Morozov suggests that another AI is possible, even as he lucidly warns us to retain a critical approach to this technology, regardless of how its further history develops. He recalls the cybernetic pioneers of this discipline: Stafford Beer, who worked out the visionary Cybersyn project, implemented in Chile during the presidency of Salvador Allende, and Warren Brodey, who claimed that intelligence is an emergent phenomenon, i.e. one which emerges in interpersonal relations and in relations between humans and their environment. In this crucial respect, Beer and Brodey markedly opposed the individualistic and purely computational view of intelligence represented by the informatician John McCarthy, who coined the term “artificial intelligence”, or the psychologist Frank Rosenblatt, the creator of the Perceptron, a simple model of a neural network. Beer and Brodey sought inspiration for building machines in ancient philosophy and the life sciences rather than in mathematical economics. Brodey’s motivation was not to create “intelligent machines” equal to or surpassing human intelligence, but rather machines strengthening the interactions between humans, machines, and their shared environment.

Returning to these unknown, lesser-known, or forgotten cybernetic ideas and their potential socio-political consequences is extraordinarily invigorating in the modern techno- and eco-logical context, mainly because it allows us to better comprehend where we are, how we arrived here, and where we could draw perspectives for the future from. In this respect, particular attention should be paid to the philosophical project of a “cybernetics for the 21st century”, initiated by Yuk Hui, which shows how cybernetic thought developed in various countries, including Poland12.

However, in order to speak and think about an AI liberated from computational fundamentalism, I wish to suggest an alternative to revisiting the circulation of cybernetic ideas. A different AI requires a different informatics – a discipline whose beginnings, much as in the case of the cognitive sciences13, lay in dissociating itself from cybernetics and its overly philosophical tendencies. The possibility of a “new informatics” has been opened up by work in socio-informatics14. Its proponents, informaticians rather than philosophers, stress that applied informatics needs to construct a brand new epistemological paradigm and review its methods. Until now, they argue, this paradigm has been positivist, and informatics has been treated as a formal science that generates knowledge and retains its importance regardless of context. The quality criteria for judging its output were formal proofs, algorithmic efficiency, and structural elegance.

Such criteria prove vastly insufficient when the social meaning of informatics applications is treated as a priority, which seems necessary in the context of AI systems, whose efficiency is determined by data rather than algorithms. From this point of view, the main goal of informatics – now increasingly a techno-social science rather than a merely technological one – is to resolve the social issues apparent in a given context and, thus, to find a solution while comprehending that context. That is why such a change of perspective on the process of designing machines should not be treated as speculation: it has pragmatic importance and results in postulates for shifting design practice.

Those who support informatics as a techno-social science demand that informatic artefacts be contextualised with regard to social practices as soon as they enter the conceptualisation stage. This postulate stems from the awareness that their quality depends on how they influence these practices. Two issues follow. Firstly, applied informatics needs a solid theory of social practice, which significantly exceeds the scope of the formal sciences. Secondly, for the work of machines to harmonise with these practices, applied informatics needs to turn towards the science of design – a field by definition theoretically indeterminate and lacking the certainty one is used to associating with every formal science. It is thus a significant change: on the one hand, a change of method; on the other, an epistemological one.

 

Why does informatics need philosophy?

Philosophy may participate in this change because it shares with design the activity of producing meaning, and because this change needs to be conceptualised as a change in the unsustainable economic model of growth. The applications of informatics are prominent and increasingly toxic, exploiting our mental resources and serving as tools for wielding automated “psycho-power”15. Grasping the philosophical dimension of this issue is also necessary for better comprehending the ambiguous – irreducibly pharmacological, as Bernard Stiegler would say – impact of technology on our cognitive capabilities and on our ability to reflect on the algorithmicised digital space and, thus, not vanish in it.

In this fundamental, political, and philosophical respect, Stiegler’s proposition regarding informatics sounds much more radical than proposals in the spirit of socio-informatics. Stiegler calls not only for a revision of the methods of applied informatics but for a thorough re-examination of the theoretical foundations of informatics as a fundamental science – though his voice seems above all a voice of despair directed at philosophy, which, passing as progressive and avant-garde, failed to grasp the challenges connected with the process of, first, the informatisation and, later, the algorithmisation and automation of society: “Theoretical computer science has been abandoned […] by European philosophy, “French theory,” the heirs of Marxist thought and psychoanalysis, with the exception of Félix Guattari. That this specific dimension of our time has been abandoned to the ideologues of neoliberalism, who hide behind their computational pseudosciences based on a confusion of science and quantification (and which are cognitivist in this sense)”16.

But if this is so, what can one realistically do about this abandonment? How can we convince informaticians that they have been abandoned by what is called “European philosophy” and that their profession has thereby been harmed? Here we stumble upon a barrier of radically different ways of acting, cognitive habits, and practices of producing knowledge so high that, to many, it would seem impossible to overcome, or would even make them doubt the very sense of trying. After all, if informaticians as scientists find their fulfilment in designing computational machines rather than formulating theories, while philosophers, on the contrary, create theories and usually do not use the formal language needed to develop such machines, where can we seek a possible language of communication?

These are difficult but also precise questions. If we do not pose them today, tomorrow we will face – as individuals, local communities, and countries – voluntary servitude and unpaid labour on the platforms of techno-feudal lords. To escape such a future – probable, but not inevitable – we require not only will and political determination but also the building of epistemological foundations for a new culture of science and technology, one that allows us to redesign existing AI systems, subordinate new ones to social needs, and implement them only when they are needed for a given organisation, society, or issue.

To ask further: how would the process of conceptualising, designing, and implementing AI systems look if it were guided by goals other than increasing efficiency and optimisation, which, in practice, mean nothing but the intensified exploitation of human labour and resources? How would we imagine the interaction between humans and these systems if those imaginings came not from a technological posthumanism that fetishises the “autonomy” of AI systems and veils their politics with fables about a collaborative society of humans and bots, but from the diversity of European and non-European socio-humanist thought, from the political to the economic? How would we understand innovation if our comprehension of it stemmed from cultures of innovation different from the business and marketing culture of “disruptive innovations”17, thus creating a cultural counterbalance to disruption?

 

The Living

A direction towards such a culture of science and technology was suggested by Norbert Wiener at the beginning of the adventure known as AI. Noticing how technological progress devalues human minds, he wrote that “the answer […] is to have a society based on human values other than buying or selling”, and that to arrive there a struggle needs to take place “on the plane of ideas”18. Engaging in such a combat, one needs to reach the very foundations on which rest the idea of technological progress, the modern understanding of “technology”, and the living which it transforms – not only in the socio-economic sense but primarily in the biological one.

In his excellent essay Antidote au culte de la performance, the biologist Olivier Hamant writes: “Life did not serve as a model for our societies. On the contrary, the way our societies function has distorted our view of life. In particular, this was done by the mode of industrial optimisation imposed on life during the nineteenth century. […] The idea of efficient life can be partially true in reductionist thought, when particular biological organisms are extracted from their context. Yet it is completely false when analysing living systems in their entirety. […] Considering life as an optimised system speaks volumes about our obsession with efficiency but nothing about life itself”19.

At the opposing epistemic pole lies the mechanistic view of life held by the entrepreneur Mustafa Suleyman, one of the founders of DeepMind and the head of Microsoft AI. Arguing that AI is “not just a tool or platform but a transformative meta-technology, the technology behind technology and everything else”, Suleyman treats life as such a technology – one now being transformed by the coupling of AI with bioengineering: “Life, the universe’s most ancient technology, is at least 3.7 billion years old. Across these eons life evolved in a glacial, self-governing, and unguided process. Then, in just the past few decades, the tiniest sliver of evolutionary time, one of life’s products, humans, changed everything. Biology’s mysteries began to unravel, and biology itself became an engineering tool. The story of life had been rewritten in an instant […] Alongside AI, this is the most important transformation of our lifetimes. […] At the center of this wave sits the realization that DNA is information, a biologically evolved encoding and storage system. Over recent decades we have come to understand enough about this information transmission system that we can now intervene to alter its encoding and direct its course. As a result, food, medicine, materials, manufacturing processes, and consumer goods will all be transformed and reimagined. So will humans themselves”20.

Our further history with AI depends on which view of life will inspire its creation: a heterogeneous and decentralised one, which promotes techno-diversity and strives to connect low-tech with high-tech in order to facilitate expansive forms of life, or an openly technocratic and reductionist one, which subordinates life to technology. This political and epistemological choice will decide the fate of the “new alliance with the machine” described by Félix Guattari in a text published in “Le Monde diplomatique” a few weeks before his sudden death in 199221. Will it support, as he wished, a conversion of the foundations of social practices, or their vanishing in the world of necro-technology?

 

Translation from Polish by Mateusz Myszka

 


 

Michał Krzykawski is a university professor at the Faculty of Humanities of the University of Silesia in Katowice, where he heads the Centre for Critical Technology Studies. He has published extensively in the fields of contemporary French philosophy, the philosophy of technology, and social theory. Recently published works include Bifurcate: “There Is No Alternative” (edited by Bernard Stiegler with the Internation Collective, Paris 2020, London 2021) and (in Polish) The Economy and Entropy: Overcoming the Polycrisis (co-edited with Jerzy Hausner, Warszawa 2023). He is a member of the Council of the National Programme for the Development of Humanities and a co-founder of the Pracownia Współtwórcza Foundation, wspoltworcza.org.

 

1.
P.J. Denning, Computer Science, Academic [in:] A. Ralston, E.D. Reilly (eds.), Encyclopedia of Computer Science, Third Edition, Van Nostrand Reinhold, New York 1993, pp. 319-322.
2.
See: P. Mirowski, E. Nik-Khah, The Knowledge We Have Lost in Information: The History of Information in Modern Economics, Oxford University Press, New York 2017.
3.
M. Pasquinelli, How to make a class: Hayek’s neoliberalism and the origins of connectionism, “Qui Parle” 2021, 30(1): pp. 159–184. DOI: 10.1215/10418385-8955836.
4.
M. Krzykawski, What Is a Neganthropic Institution?, “Theory, Culture & Society” 2022, 39(7-8), pp. 99-115. https://doi.org/10.1177/02632764221141604
5.
The separation of capital from capitalism is one of the key factors leading Yanis Varoufakis to claim that capitalism is morphing into techno-feudalism, which, as the Greek economist writes in the conclusions of his book, is “even more unpredictable and destructive” than capitalism (Y. Varoufakis, Technofeudalism: What Killed Capitalism, Bodley Head, London 2023, p. 241).
6.
www.monde-diplomatique.fr
7.
B. Stiegler, Il faut s’adapter. Sur un nouvel impératif politique, Gallimard, Paris 2019.
8.
See: P. Mirowski, D. Plehwe (eds.), The Road from Mont Pèlerin: The Making of the Neoliberal Thought Collective, Harvard University Press, Cambridge, MA 2009.
9.
This notion is used by Gaël Giraud, an economist and Jesuit theologian, who shows that the utopian political project called “neoliberalism” ceased to have anything to do with the liberalism of the Enlightenment, from Rousseau to Kant. According to Giraud, the alternative in the face of this utopia’s demise lies in a model of economic development based on the commons (see: G. Giraud, Vers une économie politique des communs, “Cités” 2018, 76, pp. 81–94). Giraud also developed this view of political economy in his monumental “political theology of the Anthropocene” (see: G. Giraud, Composer un monde en commun. Une théologie politique de l’Anthropocène, Seuil, Paris 2022).
10.
See: K. Crawford, Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence, Yale University Press, New Haven 2021.
11.
formy.xyz
12.
See: M. Krzykawski, Cybernetics, Communism, and Romanticism: Cybernetic Thinking in the Polish People’s Republic and in the Pre-Cybernetic Era [in:] Y. Hui (ed.), Cybernetics for the 21st Century, Hanart Press, Hong Kong 2024, pp. 153-170.
13.
See: J.-P. Dupuy, Aux origines des sciences cognitives, La Découverte, Paris 1994.
14.
V. Wulf et al., Socio-Informatics: A Practice-Based Perspective on the Design and Use of IT Artifacts, Oxford University Press, Oxford, UK 2018.
15.
See: B. Stiegler, Prendre soin. De la jeunesse et des générations, Flammarion, Paris 2008, pp. 223-256.
16.
https://2022.biennalewarszawa.pl/en/nooroznorodnosc-technoroznorodnosc/ Stiegler means “cognitivism” here. He seems to connect it to the computational or connectionist theory of mind, according to which the workings of the mind can be functionally explained by analogy with a computational machine (a computer). Such “cognitivism” is not identical to current cognitive science, which proposes many theories of mind, the computational one being treated by many cognitive scientists as outdated.
17.
2022.biennalewarszawa.pl
18.
N. Wiener, Cybernetics, or Control and Communication in the Animal and the Machine, The MIT Press, Cambridge, MA 1985, p. 28.
19.
O. Hamant, Antidote au culte de la performance. La robustesse du vivant, Gallimard, Paris 2023, pp. 18-19.
20.
M. Suleyman, M. Bhaskar, The Coming Wave: Technology, Power, and the Twenty-First Century’s Greatest Dilemma, Crown, New York 2023, p. 93.
21.
https://theorie.monde-diplomatique.fr
