Artificial intelligence: the latest innovation of the capitalist apparatus of domination

The 20 steps of NINA

N.I.N.A., an acronym for “Neither Intelligent Nor Artificial” (the title of Kate Crawford’s book), is a group founded in Milan in January 2024 with the aim of investigating the impact of artificial intelligence on certain thematic areas: the world of work, environmental sustainability, old and new forms of discrimination and inequality, and the circulation of information. And, of course, what new political demands can be advanced in this context.

The group has a diverse composition: scholars of digital cultures and media, people from academia, and workers from the immaterial-labor sectors, figures personally affected by this acceleration of automation.

The 20 steps of N.I.N.A. are meant as a summary of a path of approaching and understanding AI: a path that aims to be critical without being catastrophist, informed, attentive without being naively optimistic, and open to the knowledge produced in the encounter between different people and positions. The goal is to understand the transformations under way in order to feed the public debate on these issues (the last meeting of the first cycle of N.I.N.A. was on June 18; on the website linked at the end you can find recordings, scheduled events and all sorts of information about the group’s activities in Milan).


Part 1 – perimeter of the discussion

  1. Artificial intelligence is not an innovation that emerged from nowhere; if anything, it is an acceleration of a process already under way, the latest chapter in the history of automation that begins with the Jacquard loom and Babbage’s Analytical Engine (1837). The term AI itself dates from 1956, and the expressions machine learning, deep learning and neural networks are slightly later. These are not mere “tools” or prostheses but worlds that mediate our relationships. They are epistemologies that become infrastructure.
  2. Artificial intelligence is not intelligent. What we are witnessing is a sleight of hand: a complicated statistical model in which a piece of software appears to answer our questions, when in fact it does nothing of the sort. From Alan Turing onward it has been clear that the machine cannot “understand” in the sense in which we understand understanding. There is no discernment and no judgment, only complicated statistical and probabilistic calculations and algorithms.
  3. Among the top ten companies by market capitalization globally, seven are from the digital sector, and an eighth is Tesla (which makes cars and computers). Digital is, today, the main driver of profits and capital accumulation, just as in the last century the main drivers were manufacturing, automotive and fossil-fuel companies, supplanted at the turn of the century by distribution (logistics) companies.
  4. What is happening is that a handful of companies (six American and three Chinese) hold a substantial oligopoly over these technologies, a situation some call techno-feudalism. These companies can set the rules of these accumulation processes while telling us that technological development, as they want it, is inevitable. The dramatic thing is that all of this unfolds in a context where the social actors who should perform important functions, such as journalists or policy makers, are extremely unprepared for these kinds of considerations.
  5. Generative artificial intelligence differs substantially from other innovations. The wheel, the steam engine and robotics made our lives easier by delegating to technology (and then to the machine) the tasks we thought most strenuous, so that we could devote ourselves to qualitatively “higher” intellectual tasks. Historically, on the social scale, it was the “lower” jobs that were removed. Generative AI, on the other hand, strikes in the middle: it cuts out the intangible, cognitive jobs that require a fair level of specialization, and this is unprecedented in history.
  6. With the advent of platform capitalism, what in the 1990s was “creative” work is now put to value without passing through a labor organization: the organization of labor has been replaced by the organization of the platform. This shift belies Keynes’s argument in Economic Possibilities for Our Grandchildren that progress would make us work less: today we discuss the end of wage labor, yet we live in the age of endless work, where any act of our lives mediated by technology is an act that produces value, and is almost always unpaid. Artificial intelligence contributes to a process of homogenization in a context where any brain activity is subject to value extraction.
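Step 2’s claim that behind the apparent “answers” there is only statistical, probabilistic calculation can be made concrete with a deliberately tiny sketch. Everything here is invented for illustration (the toy corpus and the function names are not from any real system): a bigram model that “answers” by emitting whichever word most often followed the prompt word in its training text, with no understanding involved.

```python
from collections import Counter, defaultdict

# A toy bigram "language model": it only counts which word follows which,
# then emits the most frequent successor. No understanding is involved,
# just frequency -- a miniature of the statistical sleight of hand.
corpus = ("the machine appears to answer but the machine only counts "
          "the machine only predicts").split()

# successors["the"] ends up as Counter({"machine": 3}), and so on.
successors = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    successors[current][following] += 1

def predict_next(word):
    """Return the statistically most likely next word seen in the corpus."""
    return successors[word].most_common(1)[0][0]

print(predict_next("the"))      # -> machine (followed "the" 3 times)
print(predict_next("machine"))  # -> only (2 times, vs. 1 for "appears")
```

Scaled up by many orders of magnitude, with probabilities over contexts instead of single words, this counting is recognizably the same operation: prediction, not comprehension.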

Part 2 – political considerations

  1. Technology is not neutral. Since the days of Ned Ludd, it has served to increase productivity and lower wages. Technology obviously has positive effects; indeed, the criticism is not of the tool itself but of its purpose. As Keynes said, there would be the possibility of working fifteen hours a week, but over the last thirty years the basic feature of Fordism and Taylorism, namely pegging productivity gains to wage increases in order to ensure consumption, has stopped working. Artificial intelligence and employment certainly have short-term correlations, but the big question is how to link technological innovation to rising incomes.
  2. New technologies, second-generation algorithms and generative AI create a hybridization between the machinic and the human element: a human becoming of the machine and a machinic becoming of the human, a kind of neo-Taylorization. In discussing the employment effects of these technologies, we often forget that they must be trained, that is, fed data, for every task they perform. We can provocatively say that there is no such thing as unemployment; the real dichotomy is between those who work and receive an income and those who work and do not.
  3. As Kate Crawford argues in her essay Neither Intelligent nor Artificial, today the digital sphere, and artificial intelligence specifically, is a tool in the service of economic power (profit-making, surveillance devices over workers, financialization) and of political power (policing dissent, repression, closing borders, waging war).
  4. Artificial intelligence does not exist apart from the world; rather, it depends entirely on a very large set of political and social structures. There is no artificial intelligence without Big Tech. Artificial intelligence is not an objective, universal or neutral computational technique. Because of the capital required to train AI at scale and the ways it is optimized, AI systems are ultimately designed to serve dominant interests.
  5. AI systems are built according to the logics of capital, policing and militarization, and this combination further exacerbates existing power asymmetries. When applied to social contexts, they can reproduce and structurally amplify existing inequalities, because they are designed to discriminate, amplify hierarchies and encode rigid classifications. Artificial intelligence, then, is an idea, an infrastructure, an industry; it is highly organized capital, a form of exercising power and a way of seeing things. So we must confront AI as a political, economic, cultural and scientific force.
  6. AI needs a great deal of power, many data centers and a great deal of water, pushing to the extreme the material consequences of the digital system we already know. It is impossible to separate the social and environmental pressure this “extractivism” puts on workers and communities from the pressure it puts on ecosystems. Artificial intelligence must be seen as an extractive industry. One cannot talk about AI without talking about big data: gigantic datasets filled with conversations, selfies, images of children, text and pictures, all used to improve functions such as facial recognition, language prediction and object detection. In this sense, artificial intelligence is a register of power.
  7. We often speak of “twin transitions”, meaning that the ecological and digital transitions reinforce each other. In many ways this is true: the ecological transition will probably not happen without a simultaneous digital one. Moving from an energy system based on fossil fuels to one based on renewables means a radical change of approach. It means moving from a centralized system that distributes energy produced by large power plants, brought online or kept running according to demand, to a new system based on local, intermittent energy production; and from a system where producer and consumer are separate entities to a hybrid one where consumers can also produce part of the energy they consume (by installing panels and wind turbines), contributing it to the grid. Managing all this, while also ensuring grid balance, will not be possible without the development of the digital sector and of machine-learning systems applied to infrastructure management.

Part 3 – demands

  1. The wells are poisoned. Artificial intelligence is trained on data that is far from objective and unbiased; it contains biases and, working at scale, systematizes them, thus amplifying human errors globally. The AI algorithm itself is not biased, but it inherits biases that were already at work in the algorithms of social networks and beyond (think of the Hummingbird algorithm and the Google filter bubble) and that were deliberately left uncorrected. Artificial intelligence is poisoned by the discrimination contained in the data that feeds it. Data poisoning, a term that belongs to cybersecurity, well describes the current reality of discriminatory outputs that are the direct children of the original discriminations with which AI was fed. The politics of classification is a fundamental practice in artificial intelligence. Classification practices shape the way artificial intelligence is recognized and produced, from university laboratories to the technology industry.
  2. We need to try to unhinge this centralized system of power without throwing away the benefits we might gain from accelerating technology. We need to demand open access and transparency. Demand that these machines be designed to be inspectable: we need to know what code they are written in and how they were programmed. Demand that the datasets be available and inspectable: these machines are trained on human knowledge, and to human knowledge they must belong. Demand that the algorithms, and all the paths the machine follows to make decisions, be decoded: we need to know what is inside them in order to avoid the biases they can generate. Open the black box, and with it the industrial secrets it contains, and let the enormous industrial value be redistributed: we are entitled to our dividend as shareholders of an egalitarian world citizenship.
  3. We need to demand that machines be put at the service of the community. We need a much more radical approach than the very human, very capitalist, very extractive, but very rear-guard battle over “protecting my ideas.” The lawsuits filed against these machines, for example by the New York Times and other actors from the intellectual and cultural world, are mainly aimed at obtaining remuneration for the materials used to train them. This replicates the mechanisms of extractive capitalism without any real attempt to attack the underlying distortions of the system. We need to demand transparency about when artificial intelligence systems are used, and that in a number of areas decisions not be automatic but mediated by human beings.
  4. Artificial intelligence was born with a function that is primarily colonial and extractive. Without correctives in terms of protecting rights and reversing the biases of predictive systems, we will end up with a mechanism of de-responsibilization that will multiply discriminatory forms. The AI industry has traditionally understood the problem of bias as a bug to be fixed rather than as a feature inherent in classification itself. The focus on making training sets “fairer” by eliminating exclusionary and discriminatory (racist, sexist, ableist, etc.) terms sidesteps the power dynamics of classification and precludes a deeper evaluation of its underlying logic.
  5. The AI Act, triumphantly heralded by the European Union as great news, is itself likely to run into problems: it tries to regulate something that is evolving rapidly and is highly changeable. We worry because social scoring is already used in China, but the groundwork for something similar is being laid in Europe as well. The European Union is trying to regulate not the technology (which moves too fast) but its uses, defining prohibited, high-risk and low-risk uses. This is a shrewd approach to the problem, but it has huge limitations. The first is that it applies only within the European Union, whereas these machines are designed and usable all over the world; according to many, global governance would be needed to decide which rules to adopt. The second is that, by applying only within our borders, these regulations are already generating first-class and second-class citizens.
  6. A feminist approach to data is necessary to highlight inequalities and power asymmetries in representations, including in outputs, because a whole range of discrimination follows from these data too. We discuss equity because we feel there is a gap to bridge. We live in a highly unequal, inequitable, unjust world, and we cannot expect the data that feeds artificial intelligence to be a model of moral virtue. As Audre Lorde said, “the master’s tools will never dismantle the master’s house.” The importance of data must be related to, and balanced with, other forms of knowledge that cannot be quantified, such as emotional and relational skills: a deeply human pattern of value that cannot be computed.
  7. There are sustainable collective politics distinct from value extraction; there are commons worth maintaining, worlds beyond the market, and ways of living beyond discrimination and brutal optimization practices. If AI and algorithms are part of life, then an ethics that applies to AI and algorithms, and that can think differently about the value of the living and the nonliving, is essential.
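The “poisoned wells” point in step 1 above, that the algorithm itself is not biased but faithfully inherits the discrimination contained in its data, can be sketched in a few lines. Everything here is invented for illustration: toy records of past hiring decisions skewed against group “B”, and a minimal “model” that simply learns each group’s historical hire rate.

```python
from collections import defaultdict

# Toy historical hiring records: (group, hired). The skew against group "B"
# is baked into the data, not into the learning rule itself.
records = ([("A", 1)] * 80 + [("A", 0)] * 20 +
           [("B", 1)] * 30 + [("B", 0)] * 70)

def train(records):
    """'Learn' the hire rate per group, a stand-in for any statistical model."""
    counts = defaultdict(lambda: [0, 0])  # group -> [hired, total]
    for group, hired in records:
        counts[group][0] += hired
        counts[group][1] += 1
    return {g: hired / total for g, (hired, total) in counts.items()}

model = train(records)

# The learning rule treats both groups identically, yet the model reproduces
# the historical discrimination: A is predicted hired 80% of the time, B 30%.
print(model)  # {'A': 0.8, 'B': 0.3}
```

Deployed at scale, such a model does not merely record past discrimination; it turns it into the decision rule for every future case, which is exactly the systematization the step describes.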

More information here: https://www.nina.watch/