Los Alamos to Neuralink

Humanity must resist its ceaseless, apocalyptic reconstitution

‘There is no way to get there without a breakthrough.’

So sayeth Sam Altman to an audience in Davos, Switzerland, as part of the spectacle of the 2024 World Economic Forum. The ‘there’ he is referring to is the Silicon Valley dream of a world where infinite computing power brings about heaven on earth. But there is a fundamental problem that needs to be overcome: the finite planet. As the CEO of OpenAI and overseer of ChatGPT, Altman noted that artificial intelligence systems consume vastly more power than anyone ever expected, and he frets that new limits to AI roll-outs will soon arise from a lack of available energy. According to Altman, the world must embrace nuclear fission while investing much more in nuclear fusion: the exponential expansion of AI simply demands it. Answering the call of the infinite, he put US$375 million of his own immense wealth into Helion Energy, a private American fusion company of which he is the largest venture-capital investor. He even leveraged OpenAI’s partnership with Microsoft, the latter pouring money into Helion as part of an entirely speculative deal to buy electricity, despite the fact that Helion has never produced any.

Such breakthrough buzz is prominent in news headlines, where technological developments are often framed as game-changing ‘Oppenheimer moments’. Two prominent examples include the commencement of human trials for the Neuralink brain implant—a skull-inserted interface intended to connect the living flesh of a brain and a networked computing-machine—and announcements of nuclear fusion experiments achieving ‘net energy gain’ and therefore promising to deliver infinite clean power. That these two matters are never far from news headlines even when there are no significant developments tells us something about the role of high technology in our society. One doesn’t have to look too deeply at these examples to see layers of fraud, self-serving hype and narrow technocratic visions. The coverage of the ‘promise’ of infinite electrical power from fusion and of unlimited connectivity via an in-skull brain-computer is good fantasy copy if you want to deflect readers from the grimness of the violent, stagnant mess we are in.

Neuralink Corporation was founded by Elon Musk in 2016 and has followed a path of hype and myth-making familiar from his other commercial ventures. According to its mission statement, Neuralink will create ‘a generalized brain interface to restore autonomy to those with unmet medical needs today and unlock human potential tomorrow’. Building on this, Musk regularly promises messiah-like powers to cure paralysis, blindness, depression, insomnia and so forth. Yet in the eight years of Neuralink’s existence, the company has presented no evidence that it has even begun to work on any of these very different areas. Give Musk a microphone and a spotlight and such cure-the-sick rhetoric rapidly gives way to fantasies of creating a ‘general population device’ which will allow for AI symbiosis ‘such that the future of the world is controlled by the combined will of the people of the earth’.

Neuralink Corp builds on research going back to the late 1960s, when the first successful experiment on a monkey with machinery inserted into its brain enabled it to retrieve food pellets via biofeedback. Continuing the quest for the ‘smartbrain’, Neuralink has received approval to conduct human experiments, and recruitment has already begun. This follows a series of violent and disturbing experiments on monkeys that resulted in paralysis, brain swelling, frenzied seizures, fungal infections and monkeys left shaking and weeping while holding hands with their cell-mate monkeys for days on end. In all cases, termination was the final result. Despite this grim record, a listing from mid-2023 shows Neuralink has raised US$280 million from outside investors, who are presumably taken in by Musk’s infinite hype and are banking on the creation of a device that’s close to marketability. The technical obstacles to creating even a minimally functional brain–computer interface, let alone one that is marketable—to say nothing of the ethical or the good—are immense, and are likely insurmountable in any materially grounded foreseeable future.

Material Study | Antique Hydrometers [from Daughters of Uranium]
Mary Kavanagh, 2020

Likewise, the nuclear fusion celebrants take a momentary flux of energy achieved by the technology—way below the initial energy input—and use that to leap into fantasies of techno-science producing unlimited clean energy. In their limited imaginations, this techno-fix will apparently solve the entire problem of humanity’s catastrophic relationship with the planet’s ecology by allowing business as usual to go on, just without the need to use fossil fuels for electricity generation. As Darrin Durant has shown, however, almost ‘every word written about “net energy gain” from a fusion reaction is a species of manufactured ignorance’, noting that the ‘only thing limitless and free about fusion power is the hype it generates’. Such matters have a long pedigree, with fusion entering the world stage via an impressive act of fraud: Proyecto Huemul. In 1951, Argentina’s President General Perón announced that his scientists had produced a ‘net positive result’ in their thermonuclear experiments, thereby leapfrogging both the United States and the Soviet Union and promising to supply infinite, unlimited energy that would be ‘sold in half-litre bottles, like milk’. That this promise quickly came crashing down has not prevented variation-on-a-theme reprises across the decades. At the end of the Cold War there was a buzz of excitement over ‘cold fusion’, with two scientists claiming to have built an apparatus that could open a different pathway to achieving fusion reactions, again raising the prospect of infinite energy. That this too was quickly debunked as fraudulent hype hasn’t prevented it from regularly returning, such as in the Google-funded attempt to replicate the experiment in 2019. Just a month ago, Nature proclaimed that a ‘new era’ in fusion research had been reached, yet a closer look reveals that the commercialisation of science incentivises both hype and the suppression of any appreciation of risk. This produces gee-whiz headlines that may excite a certain percentage of people, but for many others such headlines are more likely to delegitimise experts, fuel conspiracies and undermine our ability to have meaningful conversations about the monumental problems facing society today.

Hype-hungry commercial media reports frame fusion and the ‘smartbrain’ as being in early, exciting stages of development. In fact they are in a perpetual early stage: the achievements of these two sectors today are not qualitatively different from what was achieved fifty years ago. There have been various improvements when judged by narrow metrics, but no radical shifts of any basic kind, making these fields distinctly unlike the world-historical shift of the original Oppenheimer moment, recently pulled back into public consciousness by the commercial and critical success of the film. Oppenheimer has already received many film awards, with more to come, and has raked in almost a billion US dollars, making it the second-highest-grossing film with an R rating. It plainly resonates in today’s world, in which the atomic bomb’s progeny—like fusion and implanted computers—continue to unsettle the human condition. Yet the film, as Richard King has noted, ultimately fails to interpret the deeper meanings of what happened in New Mexico in 1945. Even three generations on, these meanings, and the social consequences of atomic power, are poorly understood. For it was in that first atomic blast that something genuinely new came about—a true ‘breakthrough’—something radically different from all that preceded it, something worth looking at closely to try and grasp the strange and terrible historical moment we find ourselves in today.

View of Trinity test site from North Oscura Peak, White Sands Missile Range, New Mexico | [from Trinity Equivalents]
Mary Kavanagh, 2019

The First Bomb

At a very specific moment, a great rip began to tear through the social and ecological fabric that makes up the world. This rip opened at precisely 21 seconds past 5:29 a.m.—local time in New Mexico, USA—on 16 July 1945. At this moment, the first atomic bomb—graced with the holy name ‘Trinity’—was detonated in an ancient volcanic basin turned desert. Plutonium atoms were torn apart in a nuclear reaction, releasing an immense amount of energy in the form of heat, light, sound and radiation, shaking the earth, melting the sand of the desert into green radioactive glass and sending a huge mushroom cloud twelve kilometres into the sky. The blast, wrote William Laurence—the only journalist present at the Trinity test—was ‘the first fire ever made on Earth that did not have its origin in the Sun’. This rigorously calculated incident is a crucial moment in world history: at this point, techno-scientific forces enabled people to reorganise nature by tearing atoms apart to release immense energy, hence transforming the most basic ways in which people relate to the natural world, and thus contributing to a fundamental reorganisation of the social practices that constitute how we live our lives in common. Indeed, it clearly marks an epochal transformation—a fundamental break from 500 years of capitalist modernity and the suffusion of a new level of abstraction into all of social life.i

Something of the epochal significance of the moment was detected by those who built and dropped the Bomb. Oppenheimer himself, of course, famously paraphrased the Bhagavad Gita, ‘Now, I am become Death, the destroyer of worlds’, casting himself in the role of Krishna and pointing to his now God-like power. US Secretary of War Henry Stimson, who was to justify the dropping of the A-bomb on Hiroshima, stated in 1947 ‘that atomic energy could not be considered simply in terms of military weapons but must also be considered in terms of a new relationship of man to the universe’.ii But where these statements point to rupture, they do not define its social character. As John Hinkson has written, ‘The nuclear age heralds much more than the possibility of nuclear destruction. It introduces a profound social transformation in which we no longer take nature for granted; while always in process to some degree, it is now being radically transformed by a capitalism intertwined with intellectual practices’,iii where intellectual practices refers to the practices of scientists and others in which abstraction takes on lived, material form.

Across the first half of the twentieth century, the military-industrial complex arose simultaneously in the United States and Germany, with a non-profit version of it emerging in the Soviet Union immediately after the Second World War. Building on Einstein’s papers of 1905, and their elaboration, German scientists discovered in 1938 that fission of uranium atoms was possible, and they continued to lead the world in atomic theories and experiments. American military scientists assumed that the Nazis were pursuing the creation of an atomic bomb, and used this as the prime justification for their own quest. Curiously, though, in 1942 Hitler had decided against devoting resources to building an atomic bomb, largely for the practical reason that it would not be possible to build one during the three years of war he believed stood between him and his imagined world-empire. By coincidence, in the same month Hitler cancelled the Nazi atomic program, the Manhattan Project began.

During their research, some of the scientists who stood at the apex of the Manhattan Project calculated that a forced nuclear chain reaction might spread into the nitrogen that makes up 78 per cent of the Earth’s atmosphere, effectively transforming it into a planetary atomic bomb—something that would lead to the instantaneous and total annihilation of the entire web of earthly life. The prospect of ‘atmospheric ignition’ caused disagreement among the scientists: Edward Teller was concerned about it; Hans Bethe was convinced it was impossible; the other ten fell in between. Enrico Fermi quite reasonably couldn’t rule out the possibility that they’d missed something in their calculations as to how this totally unprecedented event might unfold. Nevertheless, the American team pressed on. Before the Americans took up the quest, Werner Heisenberg, the principal scientist of the Nazi nuclear weapon project, had also become aware of the possibility of atmospheric ignition. This was apparently a factor in Hitler’s abandonment of the project in mid-1942, as Albert Speer, the Nazi Minister of Armaments and War Production, revealed in his memoir Inside the Third Reich:

Hitler was plainly not delighted with the possibility that the earth under his rule might be transformed into a glowing star. Occasionally, however, he joked that the scientists in their unworldly urge to lay bare all the secrets under heaven might some day set the globe on fire. But undoubtedly a good deal of time would pass before that came about, Hitler said; he would certainly not live to see it.iv

This final prediction came to pass: the Trinity test took place just seventy-seven days after the Red Army conquered Berlin and Hitler committed suicide. As the scientists prepared for the test, Fermi offered bets as to whether they would spark the atmosphere, settling the odds of burning the planet at 10:1. As Caesar said as he pushed across the Rubicon, ‘alea iacta est’: the die is cast. It was hugely significant that despite not being able to rule out the possibility of instant and total planetary apocalypse, they pressed on with the experiment, a reckless gamble that confirmed the heedlessness towards scientific discovery and its application that would come to mark our era.

A similar gamble was repeated just twenty-three days after the Trinity test, this time in the form of a colossal human experiment. Two cities were chosen to test two different types of weapon: Hiroshima was struck with an atomic bomb featuring a gun-type uranium core, which again took the atmospheric-ignition gamble, and three days later Nagasaki with an implosion-type weapon with a plutonium core such as had been tested at Trinity, this time on human test subjects. The results were clear and the experiment deemed successful according to its own standards: the bombs were unbelievably powerful, city-killers, with around 210,000 deaths resulting from the two tests. Post facto justifications about the need to force the collapsing Japanese Empire to surrender, and various calculations of the number of projected troop deaths necessary to conquer the archipelago, were a cynical and self-serving distraction from this entirely novel form of calculated violence and the trajectory it would mark for all life in the nuclear age.

Indeed, the Trinity blast, along with the following 2055 nuclear explosions and several reactor meltdowns, released radiation that will be detectable in the earth’s crust millions of years into the future and is thus a solid marker that geologists can use to pin down the start of the ‘Anthropocene’. After the mushroom clouds above Japan dissipated, the triumphant superpower was metaphorically poisoned by its own weapons. This is another cruel paradox of the era: that the pure and applied rationality needed to theorise and fabricate a machine powerful enough to split atoms was simultaneously utterly irrational. Not only by gambling with instant apocalypse, but by bringing forth these most terrible weapons, the Manhattan Project set the stage for nuclear proliferation, the arms race and the spectre of extermination. This point was made in the final and most powerful moments of the Oppenheimer biopic. In the return to a scene that echoes across the film, the protagonist says to Einstein, ‘When I came to you with those calculations, we thought we might start a chain reaction that could destroy the entire world’. Einstein asks, ‘What of it?’ Oppenheimer replies, ‘I believe we did’. From its American origins, the chain reaction jumped to the Soviet Union (1949), the United Kingdom (1952), France (1960), China (1964), India (1974), Israel (1960–79), Pakistan (1998) and North Korea (2006). To this list can be added a further twenty-three countries that have nuclear power plants in operation or under construction, for the links between nuclear weapons and power generation are ‘both deep and inextricable’.v Even in Israel’s present war in Gaza, Israel’s possession of nuclear weapons casts a distinctive pall over the whole conflict: it is central to understanding that state’s annihilatory methods and sense of total impunity in pursuing ‘complete victory’ at any cost, at any amount of suffering, any amount of global rejection. As Wolfgang Streeck notes, having nuclear weapons and full-spectrum means of delivering them—submarines, bombers, intercontinental ballistic missiles, plus a secret doctrine of when and how they may be used—is fundamental to Israel’s sense of invulnerability.

The use of these weapons has always been governed by the logic of exterminism. Daniel Ellsberg, having worked for the Pentagon and the deeply entangled RAND Corporation as a strategic analyst of nuclear command and control from 1958 to 1969, noted the ‘institutionalized madness’ that surrounded the Bomb. Official estimates put the total death toll of a US first strike against the Soviet Union, China and the Warsaw Pact satellite nations at around six hundred million: ‘A hundred Holocausts’, Ellsberg writes. ‘That expected outcome exposed a dizzying irrationality, madness, insanity, at the heart and soul of our nuclear planning and apparatus’. As soon as the project of control intensified to the point where it could reach into the atom, it went out of control, casting the very real threat of death on a titanic scale across the planet.

Plans like the one Ellsberg revealed still exist today, and as the world population has doubled since he left the Pentagon’s service, suffice to say the expected number of deaths would be much higher: maybe 200 Holocausts, maybe more. As the 1955 Russell–Einstein manifesto put it, full-scale nuclear war means ‘universal death—sudden only for a minority, but for the majority a slow torture of disease and disintegration’. At present, the United States and Russia each have around 6000 deployable nuclear weapons, which gives each of them the ability to detonate one weapon in every city on the planet with a population of over 100,000 people and still have more bombs left in their arsenals. As the Bulletin of the Atomic Scientists reminds us, the threat of nuclear war is now the highest it has ever been.vi

The Expulsion (in green): Radiological Analysis and Defence (RAD) group, DRDC
Mary Kavanagh, 2020

The Second Bomb

Not long before his death in 1955, Einstein issued a dire warning: that we were living under a grave threat, not only the prospect of the atomic bomb and the spectre of extermination in nuclear war, but also a second weapon, one he considered just as dangerous for humanity and the planet—‘the Information Bomb’.vii

The first and second bombs in fact have tangled origin stories, both reaching a new level of synthesis in the military-industrial complex of the Second World War. The atomic explosions were only possible because nascent computing-machines were used to crunch the vast numerical tables of the Manhattan Project. The Harvard Mark I computing-machine, made by IBM, was necessary to calculate the vastly complex mathematics needed to determine whether implosion was a viable pathway to create the atomic bomb. Computing-machines and atomic weapons were born together, in the womb of war, conjoined twins of the techno-scientific project.

On 10 December 1945, the Electronic Numerical Integrator and Computer (ENIAC) was activated. It was the first fully programmable, electronic, general-purpose digital computer. The first program to run on the world’s first digital computer was a mathematical test to determine the practicality of creating thermonuclear weapons—instruments of ruin even more terrible than the fission blasts that had recently annihilated Hiroshima and Nagasaki. Thereafter, ENIAC turned to calculating artillery firing tables: a trained human would take about twenty hours to work through one of these notoriously difficult calculations, while ENIAC could crunch the numbers in thirty seconds, thus doing the intellectual labour of 2400 human calculators.
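
As a quick check, the 2400 figure follows directly from the quoted times:

\[
\frac{20\ \text{hours} \times 3600\ \text{seconds/hour}}{30\ \text{seconds}} = \frac{72\,000}{30} = 2400
\]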

Since the dawn of capitalism (c. 1430–1640), the word ‘computer’ had referred to a person whose job was to determine things by mathematical means, yet at the dawn of the cybernetic era it came to refer solely to the machine, the earlier skilled human profession now obsolete. Thus in the word ‘computer’ itself we can see a history of how human intellectual practices have been automated, outsourced and encoded into machines. In the postwar period, the power to automate was, in David Noble’s insightful synthesis:

promoted by an army of technical enthusiasts, peddled by the vendors of war-born gadgetry, subsidized by the military in the name of performance, command, and national security, legitimized as technical necessity, and celebrated as progress. Industry managers soon were caught up in the enthusiasm themselves, which was associated with fashion, prestige, and patriotism as well as profitable contracts. And here too it was coupled both to the traditional belief that superior efficiency resulted from work simplification, the substitution of capital for labor, and the concentration of management control over production, and to the post war preoccupation with controlling labor as an end in itself, in order to safeguard and extend management ‘rights’.viii

Forty years after Noble’s analysis, the trajectory he mapped out has continued, powered by a one-dimensional exponential acceleration and facilitated by converging social and ecological deterioration. In 1989 the World Wide Web was famously created at CERN, a nuclear research facility, again showing the intercourse between the first and second Bombs. Via the web, the networking of computing-machines—a cybernetic project devised by the Pentagon in the late 1960s as part of its nuclear war-fighting strategy—could expand rapidly. Fundamentally enabled by the techno-sciences, which are never far removed from the war industries, and backed by the power of speculative financial capital, the cybernetic sector aimed to super-charge consumerism via disembodied communication and hyper-individualism. To those ends, it has been a tremendous success, with perhaps no better poster child than the ‘smartphone’, a crucial pathway for the cybernetic colonisation of everyday life. Today, the information bomb is literally taking years off our lives: Australians currently spend an average of 5.5 hours a day looking at their smartphones. Extend this over an average lifetime and that will be seventeen years spent jumping between corporate apps, interrupted conversations, fraying social connections, fragmented thoughts and blooming alienation.
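
That seventeen-year figure checks out if we assume roughly 74 years of daily use (this span is an assumption for illustration, not part of the reported statistic):

\[
\frac{5.5\ \text{hours/day}}{24\ \text{hours/day}} \times 74\ \text{years} \approx 0.23 \times 74 \approx 17\ \text{years}
\]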

While grasping some of the enormity of the technology-induced problems facing humanity, many left-wing commentators fail to pursue a multidimensional analysis. For example, Evgeny Morozov reduces all explanations of the high-tech sector to it being ‘just a bastard child of a much grander ideology’: neoliberalism. Likewise, Jeff Sparrow blames technological problems on the profit motive, claiming that it stifles innovation while ‘tech itself plays a relatively minor role’. Implicit in these arguments is the common leftist attitude that technology and its productive forces can be separated from their uses and social contexts. Yet as these new technologies continue to pile atop each other and to extend further into previously untapped realms, such views become untenable, even in their own terms, as they oversimplify how the tech sector actually functions.

The word ‘information’ comes from the Latin īnfōrmō, which meant to form or shape, to sketch an idea, or to instruct. This old concept was given a radically different understanding with the rise of ‘information theory’ as pioneered by the military-industrial complex scientist Claude Shannon during and immediately after the Second World War. He was intent on banishing any trace of meaning from the concept of information, as such a banishment would be necessary for the coming information technologies. In 1948 he wrote: ‘Frequently the messages have meaning; that is they refer to or are correlated according to some system with certain physical or conceptual entities. These semantic aspects of communication are irrelevant to the engineering problem’.ix In a later paper called ‘The Redundancy of English’, he proposed that a message could represent ‘a random sequence of digits, or it might be information for a guided missile or a television signal’: it fundamentally did not matter. Meaning was unnecessary at this level of abstraction.x
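
Shannon’s indifference to meaning can be made concrete in a few lines of code. What follows is an illustrative sketch, not anything drawn from Shannon’s papers: it computes the entropy of a message from symbol frequencies alone, so a meaningful phrase and a meaningless anagram of it register as carrying exactly the same quantity of ‘information’.

```python
# Illustrative sketch: Shannon entropy depends only on symbol frequencies,
# so meaning never enters the calculation at any point.
from collections import Counter
from math import log2

def entropy_bits_per_symbol(message: str) -> float:
    """Shannon entropy: H = -sum of p * log2(p) over the message's symbols."""
    counts = Counter(message)
    total = len(message)
    return -sum((n / total) * log2(n / total) for n in counts.values())

meaningful = "the bomb"
gibberish = "bob them"  # an anagram: same letters, no meaning

# Both lines print 2.75 bits per symbol: by this measure, the two
# messages are informationally identical.
print(entropy_bits_per_symbol(meaningful))
print(entropy_bits_per_symbol(gibberish))
```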

This is important for its stark contrast with how everyday life is lived within culture—this most complex and multifaceted word, which refers to the practices for creating, relating and sharing meaning in and about the world in social milieux. Even at the time when such information-instrumentalist theories were being propagated in the mid-twentieth century, a critical reading was being mobilised against them by the New Left. We live our lives through the ‘deep social reality of communication’, to use Raymond Williams’s phrasing.xi This stands in stark contrast to the radical abstraction necessary to state that meaning is irrelevant to an information engineering problem. As Williams also noted, a ‘definition of language is always, implicitly or explicitly, a definition of human beings in the world’.xii Hence we might wonder what implicit meanings are carried by information theory, a ‘language’ that is foundational to the way all computing-machines operate and that now in large part defines our world.

This highly abstract break from meaning has particular resonance in the present, which is obvious to anyone who has closely read the output of text-generative AI. These cybernetic programs, like ChatGPT most famously, draw from vast databases of source words produced by all manner of people in a multitude of contexts and map them at a higher level of abstraction. Following Shannon’s thinking, the maps they use specifically exclude the meanings of the words they connect. Rather, they statistically model the relationships between words as data-points, thus enabling the algorithmic fabrication of sentences and paragraphs via the connection of points across the abstract terrain of a neurally networked database. Upon reading the word-shaped output, people then attempt to reimpose meaning upon the words—meaning that does not exist on the level at which the machinery operates. As such, generative AI is particularly adept at mimicking forms of writing that have low levels of meaning, such as corporate mission statements and formulaic student essays.
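
A deliberately toy sketch can make this concrete. The following is not how ChatGPT is implemented (real systems use vast neural networks over tokens rather than simple word-adjacency counts), but it illustrates the underlying principle: the machine relates words as data-points and produces word-shaped output with no model of meaning anywhere in the process.

```python
# Toy illustration: 'writing' by sampling word-adjacency statistics alone.
import random
from collections import defaultdict

corpus = ("the bomb was built in secret and the bomb was tested "
          "in the desert and the desert was turned to glass").split()

# Record which words follow which: pure adjacency, no semantics.
follows = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    follows[current_word].append(next_word)

def generate(start: str, length: int = 12) -> str:
    """Chain together statistically plausible next words from a start word."""
    words = [start]
    for _ in range(length):
        options = follows.get(words[-1])
        if not options:
            break  # dead end: no recorded successor
        words.append(random.choice(options))
    return " ".join(words)

# Prints something like 'the bomb was tested in the desert was turned to
# glass': locally plausible, globally meaningless.
print(generate("the"))
```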

Lead Bricks [from Daughters of Uranium]
Mary Kavanagh, 2020

The Rip

In tackling these explosive developments in our present world, in which many modes of life are relentlessly being reshaped by technological systems, it is worth thinking about the material reality of those abstracting processes. Unlike the conventional definition of abstraction, which sees it as an ideational or conceptual process, we may see it as a lived relation with the world that is shaped by patterns of social practice. It involves drawing away from the embodied and the sensate, and from particularistic relations and meanings in the world. Such processes have long historical roots, going back well before the emergence of the written record, itself a technology and an abstraction. Indeed, abstraction is a constitutive feature of human experience, and hence it is pointless in itself to say that it is good or bad. However, important politico-ethical questions are ever-present, most especially because we are usually unaware of this deep and changing base of human interaction and how it variously shapes existence. How the dominant forms of abstraction have reconstituted the social over history, and in particular how the dominant form today works to unground valued ways of being-in-the-world, becomes a crucial question.

To understand how the powers of abstraction have reached the point they have today requires looking closely at their development—at the material conditions that have enabled their epoch-defining expansion. While nuclear power and computing-machines are part of a long history of abstraction, they do not merely represent a difference in degree from prior technologies; rather they embody a radical discontinuity, or qualitative transformation, in that history. They cannot exist at all except through technological transformations that depend on the intensely abstracted theoretical work of intellectuals and technicians trained in universities and research laboratories. These ways of grasping the world did not emerge neatly from a history of labour or craft; rather, after emerging across capitalist modernity, they were directly taken charge of in the mid-twentieth century by capital and the militaristic state, whereupon they were very rapidly and aggressively turned against any remaining elements of labour and craft.xiii As epochal as the transformations are, however, they do not automatically erase or replace prior and more concrete social formations; rather, they are better understood as overlaying them, with various tensions and contradictions between the layers of social practice.

Importantly, these transformations are not confined to the natural world, but rather work in conjunction with the emergence of a new social world. As Geoff Sharp put it, ‘Within it, our relation to the natural world is one aspect only of the changing way in which we carry on our lives in common’.xiv Returning to Oppenheimer and Shannon—and avoiding the biopic’s focus on these two men and their brilliant analytic minds and flawed characters—it is more productive to consider the social contexts in which they came to these immensely abstract conceptions. The complex theoretical abstraction necessary for conceiving the atomic and information bombs, and the scientific technologies that would make them practical realities, could not have emerged without a prior and ongoing material abstraction of social relations. Of course science and technology have long histories, but it is worth considering the particular transformations that laid the foundations of the epochal break that marks our period. Consider the transformations that occurred during the century before the Trinity blast. The material foundations for globally networked space were laid with the creation of railway companies, shipping lines and canals between oceans, facilitated by increasingly accurate maps for navigation and clocks for synchronisation. Increasingly powerful war industries produced new weapons and means of standardisation and regimentation, while finance produced credit institutions and sophisticated insurance. Intense urbanisation was bound up with electricity generation, as well as news media companies and mass industrial culture, while the countryside was remade as the fabrication of nitrogen-based fertiliser and new machinery further industrialised agriculture. Likewise, telegraph lines laid around the planet allowed for instantaneous communication across impossibly vast distances and telephones disembodied the voice, while they and photography, radio and cinema—all the ‘apparatus that mediates’, in Walter Benjamin’s expression—combined to transform vision, perception and, ultimately, what it means to be human.xv

The nature of the transformation emerging from these ‘newly won powers over space and time’, as Freud put it, was profoundly ambiguous.xvi These technologies enabled the world to be woven together on a more abstract level, allowing those who entered into it to view their object worlds and given social relations differently. The historical process whereby grounded societies are suddenly drawn into one typified by fleeting, global and multiple relations and frameworks, open-ended in their ‘arrow of development’, is the context in which a scientific atomisation becomes thinkable. The kinds of societies and subjectivities that were fostered in such a world were increasingly drawn away from local, seasonal, embodied relations. It is only through such social-material conditions that it is possible to consider atoms or information in the ways that Oppenheimer and Shannon did. It took great intellectual powers, surely, but it could only have been thought and brought into being as a material abstraction—whether in the form of an atomic blast or a digital computing-machine—by people immersed in a culture constituted by deeper social-material abstractions.

Such a social and material understanding is crucial to grasping the ongoing chain reaction of, and fallout from, the atomic and information bombs today. In the absence of such deeper critique, much of what passes as radical commentary on technology today will continue to side with Silicon Valley on everything except pay and conditions, or fold everything into a bloated critique of ‘neoliberalism’—or, shallowest of all, mount a ‘pros and cons’ argument. In all such accounts, what is fundamentally distinct about our epoch—the process of intensifying material abstraction—remains unexamined, and thus is unproblematically accepted in a regrettable default agreement with the likes of Elon Musk or extremist transhumanists such as Google’s Larry Page, who denounces Musk as ‘speciesist’ for favouring humans over digital ‘life-forms’.

Indeed, many on the radical Left today agree that infinite technological advances should smash up more grounded social relations, and that this will somehow lead to a socialist future, thus putting the cart of post-capitalism far before the horse of anti-capitalism, as Boris Frankel quipped.xvii The writings of Nick Srnicek, for example, are shot through with open admiration for Silicon Valley firms and a celebration of hyper-individualism in which a disembodied network replaces community, taking the entrepreneurial self to new heights where basic categories are opened up to manipulation by the techno-market. Such an analysis can only conclude weakly with a call to ‘collectivise the platforms’, imagining that it might somehow be possible to simply flick a switch and convert the globe-spanning cybernetic war machine from capitalist mode to socialist mode.xviii Such claims are only possible in the absence of a critique of science and of a critical philosophy of technology; in the neglect of any ecological critique or analysis of limits to growth; in ignoring the lack of organised militant opposition to big tech and the destructive lure of consumerism; and in a failure to properly integrate a long-term historical analysis with the unprecedented nature of the present conjuncture, to say nothing of an incomprehension of the crucial question of abstraction as a socio-material process. Given these limitations, such writers can only exhibit a stunted ‘alternative lite’ imagination in which the best that can be aimed for is to staple a socialist face over the inhuman apparatus of cybernetic capitalism.xix In this context, the likes of Musk will continue to hype their techno-utopias without facing the opposition of a deeper critique, let alone organised resistance derived from other ways of being.

To reiterate, nothing in this essay suggests that abstraction is bad as such, for it is constitutive of the kind of creatures we are. Moreover, the powers of intellectual abstraction necessarily underpin the whole tradition of critical theory and its interpretive, normative analyses, aimed as they are at gaining a higher-level understanding of social life in order to change it. Any reader this far into this particular essay has been forced to grapple with such abstractions, all of which are intended to help deepen critique of the present. Such a goal requires both struggles against domination and the ability to lead cooperative lives at many levels of abstraction, in theory and in practice.

Archive Samples [from Daughters of Uranium]. Residues collected at field sites related to atomic history, labelled glass jars.
Mary Kavanagh, 2020

Three generations have passed since Einstein warned of those two supremely abstract weapons, the atomic bomb and the information bomb. This warning was not heeded, and the ensuing chain reactions have seen their proliferation in, and colonisation of, many aspects of life, now drawn into profoundly more abstract ways of knowing and living. The legacy of these bombs continues, through cutting-edge technological developments like computer chips rammed into monkeys’ brains and fusion’s fraught attempts to ‘harness the power of the stars’, and in their dark shadows of intense inequality, ecological destruction, rampant alienation and chronic radiation syndrome. The ensemble of intellectual practices facilitating expansion and extraction, centralisation and concentration, acceleration and accumulation, today underpins cybernetic capitalism. Today’s technologies of practical abstraction facilitate a more thorough domination of nature and of people than anything that has come before, now also transforming other aspects of the human condition such as the intimate and intuitive, the embodied and empathetic, and the sensible and sensitive. This drive to dominate is central to, in Lewis Mumford’s words, capitalism’s ‘quest of power by means of abstractions’.xx Put together, the domination of the most abstract ways of being-in-the-world is degrading the possibility of our living fair and meaningful lives in common and within nature. This most basic ground of the human condition is under siege from the fallout of these most abstract of bombs.

i John Hinkson, ‘Beyond Imagination? Responding to Nuclear War’, Arena, 60, 1982, pp 45–71.

ii Henry Lewis Stimson, ‘The Decision to Use the Atomic Bomb’, Harper’s Magazine, February 1947, pp 102–107.

iii John Hinkson, ‘New Worlds and the Nuclear Age’, Arena Magazine 158, 2019, pp 34–38.

iv Albert Speer, Inside the Third Reich: Memoirs, Richard and Clara Winston (trans), New York: Macmillan, 1970, p. 227.

v Tilman Ruff, ‘Nuclear Promises’, Arena Magazine, 162, 2019, pp 19–22.

vi The Bulletin’s 2024 update is very weak, drastically downplaying the very real threat posed by Israel’s exterminatory war in Gaza and by Israeli/US strikes in the broader Middle East. The Bulletin ignores Israel’s nuclear weapons and its high-level threats to use them. It mentions Israel only twice, while non-nuclear Iran gets seven mentions and naughty Russia no fewer than eighteen.

vii Paul Virilio, The Information Bomb, London: Verso, 2005, p. 135.

viii David F. Noble, Forces of Production: A Social History of Industrial Automation, New York: Alfred A. Knopf, 1984, p. 57. See also Kevin Robins and Frank Webster, ‘Cybernetic Capitalism: Information, Technology, Everyday Life’, in The Political Economy of Information, Vincent Mosco and Janet Wasko (eds), Madison: The University of Wisconsin Press, 1988, pp 44–75.

ix Claude Shannon, ‘A Mathematical Theory of Communication’, The Bell System Technical Journal, 27, 1948, pp 379–423. NB. At the time of writing, this article has been cited almost 150,000 times.

x Claude Shannon, ‘The Redundancy of English’, in Cybernetics: The Macy Conferences, 1946–1953, Claus Pias and Joseph Vogl (eds), Berlin: Diaphanes, 2016, p. 248.

xi Raymond Williams, Communication, Harmondsworth: Penguin, 1962, p. 113.

xii Raymond Williams, Marxism and Literature, Oxford: Oxford University Press, 1977, p. 21.

xiii Noble, Forces of Production.

xiv Geoff Sharp, ‘From Here to Eternity (Part I)’, Arena Magazine, 88, 2007, n.p.

xv Walter Benjamin, ‘The Work of Art in the Age of Mechanical Reproduction’, in One-Way Street and Other Writings, J. A. Underwood (trans), London: Penguin, 2009, p. 241.

xvi Sigmund Freud, Civilization and its Discontents, James Strachey (trans), New York: Norton, 1962.

xvii Boris Frankel, Fictions of Sustainability: The Politics of Growth and Post-Capitalist Futures, Melbourne: Greenmeadows, 2018, pp 162–6.

xviii Nick Srnicek, Platform Capitalism, Cambridge: Polity Press, 2017.

xix An example of this can be found in Aaron Bastani’s Fully Automated Luxury Communism: A Manifesto, London: Verso, 2019.

xx Lewis Mumford, Technics and Civilization, Oakland: Harbinger Books, 1963, p. 24.

Image Credits

Mary Kavanagh is a visual artist and Professor in the Department of Art at the University of Lethbridge, Canada. She is a Tier I Board of Governors Research Chair, awarded for her work examining the material evidence of war and conflict. For nearly thirty years, Kavanagh’s artwork has been exhibited across Canada and internationally. Her decade-long investigation into the complex and veiled history of nuclear armament has resulted in multiple bodies of work including an immersive travelling exhibition with publication, Daughters of Uranium, which encompasses drawing, sculpture, photography, moving image installation, and archival materials. With sustained projects in Canada and the US, she has been the recipient of peer-reviewed grants from the Canada Council for the Arts, the Alberta Foundation for the Arts, and the Social Sciences and Humanities Research Council of Canada. She is a Fellow of the Royal Society of Canada, Academy of Arts and Humanities.

About the author

Timothy Erik Ström

Timothy Erik Ström is the editor of Arena Online. He is the author of the forthcoming book Cybernetic Capitalism (Verso), and his collected writings can be found at his website: The Sorcerer’s Apparatus.
