Ronald Reagan and the Apocalyptic Imagination

I: The Year of Fear

Nuclear war is unthinkable but not impossible, and therefore we must think about it.
Bernard Brodie

On July 7, 1983, Herman Kahn — the nuclear physicist and military strategist whose 1960 bestseller On Thermonuclear War had terrified America — collapsed and died in his home after suffering a stroke at the age of 61. He had, at the time, been working on an update of his book Thinking About the Unthinkable in an attempt to address the strategic realities of nuclear war in the 1980s. At that precise moment, there was a lot to address. During his first presidential term, Ronald Reagan had authorised a 40% real-terms increase in defence spending; the 1982 bill was $242 billion, almost twice the 1976 budget. Prestige projects like the MX missile and B-1 bomber had been approved, while the army, navy and air force were awarded new tanks, supersonic fighter jets and aircraft carriers. On March 8, 1983, Reagan delivered a speech to the National Association of Evangelicals in Orlando, Florida, that described the Soviet Union as “the focus of evil in the modern world” and talked about “the aggressive impulses of an evil empire.” Two weeks later, from behind his desk in the Oval Office, he announced funding for the Strategic Defense Initiative, which was immediately derided as an impossible fantasy, an attempt to militarise outer space and an explicit threat to the 1972 ABM Treaty. Meanwhile, in the background, a new generation of Tomahawk cruise and Pershing II missiles was destined for England, Italy and West Germany, a deployment that provoked Soviet leaders already threatened by American technology and Reagan’s rhetoric.

The early 1980s proved to be a short-lived moment of opportunity and renaissance for Kahn — the most fertile and auspicious since his 1960s Cold War peak. He agreed with Reagan’s description of the Soviet regime and approved of his military spending; after years in the policy wilderness, his ideas finally seemed to be influencing government nuclear doctrine again. In 1982, Reagan’s Defence Secretary Caspar Weinberger made it clear that America was rearming so that it would have the ability to “prevail and be able to force the Soviet Union to seek earliest termination of hostilities in terms favourable to the United States…even under the conditions of a prolonged war” (1). This was war-fighting, not deterrence. The Reagan administration had declared its anti-détente position and turned the page on Nixon, Kissinger, Ford and Carter by reviving the concepts of flexible response and limited nuclear war. It was, on all major points, a policy built on ideas first developed by defence intellectuals like Kahn, Bernard Brodie and William Kaufmann at the RAND Corporation in the 1950s and 60s. For those not versed in the esoteric subtleties of nuclear strategy the language of RAND sounded reckless and aggressive; coming from the administration of a militant anti-communist who spoke in moral categories of good and evil, it scared people. For Kahn, however, this was a positive development — for the security of the free world and for his own career.

If Reagan was depicted by his enemies as “a shoot-from-the-hip cowboy aching to pull out my nuclear six-shooter and bring on doomsday” (as he put it in his memoir, An American Life) (2), then Kahn also had to live with his own caricature, having been crudely satirised by Stanley Kubrick in Dr. Strangelove. At one point in Thinking About the Unthinkable in the 1980s, Kahn bristled at this history, barely suppressing the cumulative frustration of decades of public abuse and misrepresentation. “Military planners and nuclear strategists ought not to be discredited out of hand,” he wrote, probably about himself, “many of them understand better than most the ‘immorality’ of nuclear war — including not preparing for it” (3). In many ways, he’d asked for it: described by the New York Times as “a kind of thermonuclear Zero Mostel” (4), Kahn revelled in his own infamy, provoking audiences with stark imagery and macabre wit, forcing them to face up to the practical realities of nuclear war in the most outrageous terms he could devise.

Even so, there were limits to what he could take: when Scientific American condemned On Thermonuclear War as “evil and tenebrous…a moral tract on mass murder,” he was horrified and attempted to submit a riposte to the magazine’s editor, Dennis Flanagan. “I do not think that there is much point in thinking about the unthinkable,” Flanagan piously intoned in response, “I should prefer to devote my thoughts to how nuclear war can be prevented” (5). Kahn considered this a definitive statement of wilful ignorance and was still quoting Flanagan’s letter in outraged disbelief in the final year of his life. At the time, he simply turned his riposte into Thinking About the Unthinkable instead and scored another bestseller in the process. Reviewing the book, Norman Podhoretz got much closer to understanding the true nature of Kahn’s “rich imagination of disaster” and the reason people responded to it with such hostility: “[he] does seem to take a visible delight in thinking about the unthinkable; in reading him you can feel the pleasure and excitement he experiences at his own intellectual daring in having crossed over a line beyond which no one else has had the courage to look with such brutal clarity” (6).

From the 1960s to the 1980s, Kahn was consistently drawn back to his main point, his primal fear: that deterrence might fail. Its success was based, he believed, on the constraint of fear and the quality of subjective judgement, but its vulnerability was this same reliance on human perceptions – and misperceptions. A military posture that prioritised deterrence at the expense of other strategies was inherently dangerous: not only could it fail but there might also be “an imbalance in the reliance on deterrence,” which, in 1983, Kahn thought there was: “based on available evidence, it seems clear that the Soviet Union does not accept the West’s apocalyptic view of nuclear war, nor do they support deterrence-only policies. Soviet military writings depict nuclear war as a survivable experience, and back up their reliance on deterrence with war-fighting capabilities, that is, the ability to fight a nuclear war and defeat the enemy” (7). According to Kahn, this was precisely what the U.S. had failed to do in the years of containment and détente, a period of complacency compounded by naive attitudes to arms reduction that left the Soviets with a perceived strategic superiority over the U.S. The SS-20s sitting poised in the Urals were only the most obvious example of this. 

Perceptions and misperceptions could be both dangerous and advantageous during the Cold War. In 1983, the Kremlin felt unnerved by America’s advance in computer technology, with the Chief of the Soviet General Staff Marshal Nikolai Ogarkov admitting that “in the U.S., small children — even before they begin school — play with computers. Here we don’t even have computers in every office of the Defence Ministry” (8). Reagan was mocked at home for ‘Star Wars’ with its cartoonish satellites and laser beams, but it alarmed the Soviets precisely because they did not know whether this really was science fiction. What if it wasn’t? They would have no way to compete. Reagan never claimed anything more immediate for SDI than research, but the announcement was tactically smart because it played on Soviet fears of a “technology gap” that would cancel out their mighty military machine. Kahn loved the idea and praised Reagan’s speech as “optimistic, long-term, and responsive to political and moral imperatives” (9) but flat-out rejected the claim that he had started a new arms race. This was no arms race, Kahn claimed, it was an arms competition which the Soviet Union had been playing on its own for twenty years. “The present controversy over the expense and morality of ‘rearming’ the United States is a result of two decades of a very lax U.S. defence effort; the controversy could largely have been avoided if a consistent pace and pattern of defence spending had been maintained all along,” he wrote (10). What Kahn advocated and what Reagan committed to was, in effect, a form of peacetime military mobilisation. From the Korean War to the Cuban Missile Crisis, perceptions and misperceptions had provoked military conflict and inspired technological innovation. For Kahn the psychological element of this was as important in the 1980s as it had been during the early Cold War: “the purpose of a ‘modern’ mobilization is as much to influence the enemy’s perceptions and calculations as it is to shift the actual prewar balance of military forces in one’s favor” (11).

This combination of new weapons of unknown destructive power and human subjectivity haunted the defence intellectuals who, like Kahn, set themselves the task of thinking about these things. The fact that nobody had ever used hydrogen bombs in conflict was inescapable. Everybody, including RAND systems analysts and Pentagon generals, could only imagine what would happen during and after a thermonuclear war. Everybody, or almost everybody, was appalled and terrified by the prospect, including Kahn himself. And yet, as he repeatedly pointed out, somebody had to think about what would happen and what could be saved, however horrible that might be. This was a task for the imagination: to think about nuclear war, you had to imagine it. The perception of enemy intentions and capability in the thermonuclear age was rooted, then, in the imaginations of superpower leaders and war planners. Avoiding doomsday also, it turned out, depended on human perception, fed by fear, shaped by the imagination. Kahn did not live to see how the most dangerous year since 1962 ended, but 1983 was punctuated by individual decisions, made at moments of extreme stress, that almost caused — but ultimately prevented — nuclear Armageddon.

Context was all. American rearmament and rhetoric put a geriatric Soviet elite on edge: Andropov actually believed that the U.S. was making plans for an unprovoked nuclear strike, a misperception unknown to Reagan and the CIA. SDI exacerbated Kremlin paranoia by exposing their technological obsolescence and economic stagnation, while PSYOPS by U.S. planes and warships on their eastern borders further unsettled the Soviet leadership. Andropov’s resulting “shoot to kill” policy aimed at suspect aircraft was put into practice in September when the Korean passenger jet KAL-007, having strayed over the Kamchatka Peninsula, was shot down near Sakhalin Island by an Su-15, killing everybody on board. Reagan responded with fury: “this was an act of barbarism,” he thundered, “born of a society which wantonly disregards individual rights and the value of human life and seeks constantly to expand and dominate other nations.” The Soviets quickly countered, asserting that “the intrusion of the South Korean airlines plane into Soviet airspace was a deliberate, thoroughly planned intelligence operation.” This was, in fact, on both sides, a catastrophe born of misperception, paranoia and fear: a navigational accident interpreted as a deliberate provocation; a decision made under pressure without sufficient intelligence presented as a calculated, cold-blooded massacre.

The final act of 1983 was a terrible farce that almost turned into the ultimate tragedy. The story of the NATO war game Able Archer 83 was, again, a story of fear and paranoia, this time prompted by mirages, feints and illusions. As luck would have it, the annual NATO exercise scheduled for November 1983 was designed to be bigger and more realistic than ever before, involving 300,000 people in Western Europe, Norway and Turkey, with the final part intended to test procedures for launching nuclear missiles in a losing war with the Warsaw Pact. The KGB routinely monitored these events and everything about this one — from the scale to the details to the timing — put them on high alert. When Andropov concluded in the early Reagan years that “there is now the possibility of a nuclear first strike” he responded with a global KGB alert that created its own dangerous feedback loop, as Oleg Gordievsky later revealed: 

Because the political leadership was expecting to hear that the West was becoming more aggressive, more threatening, better armed, the KGB was obedient and reported: yes, the West was arming…it may be something sinister. And the reports of military intelligence were worse, because, being more primitive and orientated toward military reporting, they exaggerated the Western military threat even more than the KGB did…residencies were, in effect, required to report alarming information even if they themselves were sceptical of it. The Centre was duly alarmed by what they reported and demanded more. (12)

Soviet leaders watched Able Archer 83 unfold while trapped in an escalating cycle of suspicion and fear that inflamed their feverish imaginations. The Soviet Union had its own first strike plan, which started under cover of a fake military exercise; Soviet leaders therefore assumed that Western war plans did the same and that, perhaps, this was it. So, unknown to NATO participants, the Soviets put all of their nuclear forces on high alert, with bombers wheeling onto their runways in East Germany and Poland. When NATO game planners switched codes in the middle of the exercise – thereby plunging observing Soviet intelligence officers into the dark – the Kremlin had to decide whether or not to launch a preemptive nuclear strike. Nobody really knows why they didn’t, but the most likely reason was fear. Their paranoid imaginations led them to believe something that was not true: that Reagan was preparing to launch a war. But their imaginations also saved them, and us, from starting that war themselves: fear of the consequences of unleashing their own weapons made them cautious, and they waited. Able Archer 83 ended without incident.

1983 was the final demonstration that the nuclear arsenals of both powers had become, to a degree that is no longer really the case, so vast, so destructive, so close to their targets and so poised on a hair trigger that any wrong move could unleash terminal destruction. Nuclear strategy à la RAND was partly an attempt to find a way out of this trap. In fact, the early years of nuclear strategy in America had been a story of the apocalyptic imagination: imagining what nuclear Armageddon would be like, both to plan for it and to avoid it. This was a story in which Herman Kahn took the lead.

II: After Us, Silence

A catastrophe can be pretty catastrophic without being total.
Herman Kahn

On October 30, 1961, the Soviet Union dropped a 50 megaton hydrogen bomb on Severny Island in the Arctic Circle, triggering the largest man-made explosion ever witnessed. The detonation sent a seismic shock wave around the globe three times and produced a fireball visible from Alaska and Greenland. The bomb itself was a scaled-down version of the original 100 megaton design, but even so the crew that dropped it was given only a 50% chance of surviving the blast. The exercise had a scientific and political objective: to show the world that the Soviet Union could build a hydrogen bomb of any size it wanted. The Tsar Bomba, Khrushchev gloated, would “hang over the head of the imperialists like a sword of Damocles.” In the years that followed, the Soviets would build an enormous force of missile silos, mobile launch pads, nuclear submarines and bomber planes ready to launch nuclear strikes that could eliminate NATO forces and destroy Europe and America within hours. The problem was, NATO could retaliate in kind. Everybody who worked on nukes knew that a general war in which both superpowers fired off their arsenals in one go would end Western civilisation and leave the rest of the world virtually uninhabitable for anybody who survived. In recognition of this, the Soviet Strategic Rocket Forces adopted the motto: “after us, silence.”

Back in 1946, while he was still digesting the implications of Hiroshima and Nagasaki, military strategist Bernard Brodie wrote: “everything about the atomic bomb is overshadowed by the twin facts that it exists and that its destructive power is fantastically great” (13). The first generation of nuclear strategists, led by Brodie, quickly came to the conclusion that the atom bomb was “the ultimate weapon” and that the only rational military policy left in the nuclear age was deterrence. Nevertheless, Brodie also knew that somebody had to think about how to use the bomb because of the simple fact that it existed. The Air Force had it and they would not let it go unused in the event of war. For General Curtis LeMay of the Strategic Air Command, the only way to do this was to fully embrace its destructive power, so his first nuclear war plan proposed dropping the entire U.S. stockpile of atom bombs on 70 Soviet cities in 30 days. SAC strategy reached its destructive apotheosis in the Single Integrated Operational Plan, which was designed to ensure that the U.S. could effectively respond to any Soviet provocation with, in the words of John Foster Dulles, massive retaliation. In its infamous 1962 vintage this meant dropping 3,423 hydrogen bombs on military, urban and industrial targets across the entire Sino-Soviet bloc, instantly killing an estimated 285 million Chinese and Russian citizens. There was no option to exclude China from annihilation if it was not involved in the war because that would ruin the plan, which had been precisely sequenced to ensure that all the bombs were dropped in the right place at the right time, while allowing U.S. pilots to escape their own payload. Eastern Europe had to be bombed so that air defence radars and missile sites could be eliminated, allowing U.S. strategic bombers to continue on to the Russian heartland: millions of Poles and East Germans would be sacrificed in order to facilitate an air corridor. If Western Europe escaped Soviet retaliation, which was unlikely, it would probably still be destroyed by radioactive fallout and ecological collapse.

The scale of the plan had been partly driven by money and inter-agency rivalry: the Air Force always overestimated the number of bombs needed to minimise any margin of error and “ensure” that targets were destroyed, thereby allowing them to buy more hardware and explosives than the navy. This meant that many targets would be hit over and over again with multiple multi-megaton hydrogen bombs. The plan, as it stood, was likely to produce so much radioactive fallout that, as one horrified naval officer who witnessed the briefing noted, “our weapons can be a hazard to ourselves as well as our enemy” (14). SIOP-62 was a fully worked out and clearly articulated road map to Armageddon that was presented by the U.S. Air Force elite as a serious plan to counter any form of Soviet provocation, including conventional incursions into Western Europe. Walt Rostow described it as “orgiastic, Wagnerian,” while Eisenhower, after sitting through the grisly flip-charts, noted that “every single nation, including the United States, that entered into this war as a free nation would come out of it as a dictatorship. That would be the price of survival.” When General Tommy Power briefed Secretary of Defence Robert McNamara on the plan, he decided, in his own style, to present one of its many little quirks in an arch aside: “Well, Mr Secretary, I hope you don’t have any friends in Albania, because we’re just going to have to wipe it out” (McNamara was not amused). The SAC philosophy was distilled to its purest essence on the day that Power infamously barked at RAND counterforce advocate William Kaufmann, “Look, at the end of the war, if there are two Americans and one Russian, we win!” “Well, you’d better make sure they’re a man and a woman,” Kaufmann replied.

The atom bomb was a country killer but the hydrogen bomb was a world destroyer. The very first thermonuclear weapon had close RAND connections. Most of the physics department had contacts at the Los Alamos weapons lab and therefore knew something about its top secret research into fusion technology; Kahn had even helped solve some of the maths behind it. Meanwhile, in late December 1951, while the “superbomb” was still in development, RAND put together a four-man research group led by Brodie to analyse its potential warfare capabilities and implications. The team started by methodically drawing circles on maps and calculating different types of damage and casualty rates relative to the size of the hypothetical bomb. As the work progressed, it became gruelling, and then horrifying. When Jim Lipp, head of the RAND missile division, started looking at the best-case scenario for the battlefield use of hydrogen bombs, he nearly threw up: two million was the lowest death toll he could come up with. Other scenarios concluded with tens of millions of casualties, even when avoiding cities. After a few weeks of this kind of work, Lipp retired from the project. “What’s happening at RAND?” asked the wife of Charlie Hitch, the economist of the group, “Charlie comes home, he barely says hello, he’s uncivil, and after dinner he just locks himself up in his study. Something terrible is going on there” (15). The work stretched the imaginations of the RAND team to the limits of human horror in a way that the Cold War generations would become accustomed to, living through the Cuban Missile Crisis and watching films like On the Beach, Threads and The Day After. The RAND analysts were the first people to plot the course of thermonuclear apocalypse in numbers and to think about and therefore imagine what this kind of war would mean and look like. It took a psychological toll, as it would do for everybody who worried about how the Cold War would eventually end. As Brodie noted, the A-bomb, for all of its horrors, still needed a degree of accuracy to destroy its targets, but the H-bomb could miss by miles and still destroy everything it needed to. “We no longer need to argue whether the conduct of war is an art or a science — it is neither,” Brodie wrote, “the art or science comes in only finding out, if you’re interested, what not to hit” (16).

Herman Kahn was the megastar of thermonuclear apocalypse and made the work of this first RAND research group his raison d’être. As a physicist, he had little time for social scientists or what they did (“I read the New York Times,” he said, “what the hell should I read Nathan Leites for?”) but he was also easily bored and intellectually omnivorous, stalking the halls of RAND’s Santa Monica offices to find out what was going on in the other departments. In the years before he was a famous, bestselling author, Kahn took RAND on the road, delivering a series of lectures on how to fight and survive a nuclear war to packed halls and theatres across America. His gripping presentations with their grim subject matter were, surprisingly, wildly popular. The objective was to change strategic reality: to replace the rigid and suicidal policy of massive retaliation with the RAND menu of flexible response, controlled escalation, counterforce and civil defence. He once told the SAC generals, “Gentlemen, you don’t have a war plan, you have a war orgasm”, and despite his reputation, he was not an enthusiast for mass destruction, although his critics accused him of increasing the chances of nuclear war happening by talking about it in rational terms. The clinical reasoning and apparent nonchalance with which he threw around horrifying statistics was, in fact, distinct from the barbarism of Curtis LeMay and Tommy Powers. Facing facts, as he saw it, was a way to reduce reliance on massive retaliation and preserve as much as possible for postwar reconstruction. For Kahn, just assuming there would be nothing left to reconstruct after a nuclear exchange and therefore not bothering to think about it at all was not only a dereliction of duty, it was immoral. 

Despite pioneering the idea, Bernard Brodie had eventually concluded that trying to plan for “controlled escalation” was futile because even if a nuclear war could be controlled, millions of people would die anyway. But, for Kahn, this did not end the argument. “It was difficult for people to distinguish in the early 1950s between 2 million deaths and 100 million deaths,” he wrote, in direct reference to Brodie, “today, after a decade pondering these problems, we can make such distinctions perhaps all too clearly” (17). This was not simply a case of brutalisation, but a function of post-war planning, where saving as many people as possible would be essential in order to rebuild a society. In Kahn’s system, civil defence was not only a practical and moral imperative, it also made the nuclear arsenal a more credible threat: if the Soviets thought that America had made plans to survive a retaliatory strike, then America’s first strike capability suddenly looked a lot more menacing. It is easy to see why Kahn’s logic drove people crazy but also why his thought processes proved so compelling. As Podhoretz recognised, he was gleefully crossing into territory everybody else stepped back from; his provincial audiences may have felt they were doing their civic duty by attending his lectures, but the experience was very similar to going to see a horror movie. Kahn, with his sense of theatre, gory set pieces and Gothic imagination, satisfied a compulsion to look horror in the face, to feel and overcome the sensation of fear: his talks were like a Grand Guignol for the nuclear apocalypse.

The content of these lectures would eventually be published in the form of On Thermonuclear War, a 650-page tome that, incredibly, sold 30,000 first edition copies. As Kahn wrote, clearly speaking about himself again, “It takes an act of iron will or an unpleasant degree of detachment or callousness to go about the task of distinguishing among possible degrees of awfulness” (18) — a task he nevertheless took on with relish. On Thermonuclear War was the book that looked beyond deterrence towards Kahn’s two central themes: war and survival. In Lecture 1 — on ‘The Nature and Feasibility of Thermonuclear War’ — Kahn asked, “Will the survivors envy the dead?” and decided that, on balance, they probably wouldn’t. After considering the effects and longevity of nuclear radiation and the protective qualities of fallout shelters, he briskly concluded that “even though the amount of human tragedy would be greatly increased in the postwar world, the increase would not preclude normal and happy lives for the majority of survivors and their descendants” (19). For Kahn, the most important thing any government could do to prepare for this world was to hand out radiation meters. This would, he claimed, help to maintain the morale and risk-taking capacity of war survivors, thereby allowing governments to coordinate and implement postwar reconstruction more effectively, or at all. “Assume…a man gets sick from a cause other than radiation,” Kahn mused, “Not believing this, his morale begins to drop. You look at his meter and say, ‘You have received only ten roentgens, why are you vomiting? Pull yourself together and get to work’” (20). Kahn had composed a guide to the realities of a postwar world, or so he claimed — in fact, the entire book was a sustained act of imagination, a relentless and clinical epic of invention and speculative thinking. Kahn’s performance, in front of an audience or on the page, was not really as factual or realistic as he liked to pretend, and seemed to many to be hopelessly, even callously, optimistic. He was simply riffing on catastrophe, “making up scenarios” which, after all, “is not really very hard if one puts one’s imagination to work” (21). The resulting work was, in its own exhausting way, a classic of apocalypse literature: a Book of Revelation illustrated with graphs and statistical tables.

Ronald Reagan would later dismiss “the macabre jargon” of those who believed that a nuclear war could be “won”, yet this kind of thinking would find its place in his own administration. In 1981, Thomas K. Jones, the Deputy Under Secretary of Defense for Research and Engineering, Strategic and Theatre Nuclear Forces, took inspiration from Russian civil defence measures, telling the L.A. Times that millions would survive a nuclear war if they simply went out to the countryside, dug a hole and covered it with dirt. “The dirt really is the thing that protects you from the blast as well as the radiation, if there’s radiation. It protects you from the heat. You know dirt is just great stuff…Turns out with the Russian approach, if there are enough shovels to go around, everybody’s going to make it…” (22). During the first Reagan administration, the pressure to move away from détente and containment came from those who maintained that the Soviet regime had an advanced civil defence system which gave them the confidence to fight a nuclear war — the mirror image of Kahn’s case for shelters and radiation meters in 1960. This was the argument made by Reagan’s Russia expert Richard Pipes, who claimed that the Soviet Union had already built a civil defence infrastructure capable of protecting their “essential cadres” and possessed the psychological ability to sustain massive casualties. “In other words”, he speculated, “all of the USSR’s multimillion cities could be destroyed without traces or survivors, and, provided that its essential cadres had been saved, it would emerge less hurt in terms of casualties than it was in 1945…clearly a country that since 1914 has lost, as a result of two world wars, a civil war, famine, and various ‘purges,’ perhaps up to sixty million citizens, must define ‘unacceptable damage’ differently from the United States” (23).

Reagan’s nuclear hawks, like T.K. Jones and Pipes, stalked the fringes of his administration, with their spiritual home in Weinberger’s defence department. They used the language and the imagery that Kahn had canonised in his masterpiece of the apocalyptic imagination, On Thermonuclear War. Like Kahn, they took a creative delight in thinking about the unthinkable. Kahn was not unaware of this and found the atmosphere conducive, reemerging from the world of futurology to engage with Armageddon once again. During the last year of his life he was busy organising Breakfast Group meetings in Washington that put military and civilian strategists in touch with Defence Department and White House policy planners. To the bitter end he was trying his best to shape a U.S. nuclear posture that terrified the whole world. He did not live to see all of his work undone by the President he most admired, but perhaps this would have pleased him after all. “For the past twenty years I have been concerned with how best to reduce this potential for nuclear war,” he wrote in 1983 (24). One year later, this is what Reagan would begin to do.

III: Macabre Jargon

Imagination, not mendacity, was the key to Dutch’s mind. He believed both true and untrue things if they suited his moral purpose — and because he believed in belief.
Edmund Morris, Dutch: A Memoir of Ronald Reagan

Lately I’ve been wondering about some older prophecies – those having to do with Armageddon. Things that are new today sound an awful lot like what was predicted would take place just prior to ‘A’ day. Don’t quote me.
Ronald Reagan

In November 1983, ABC broadcast The Day After, a drama that portrayed the destruction of Lawrence, Kansas, following a full-scale nuclear exchange between the two superpowers. The movie was trailed by frenzied publicity partly generated by ABC’s own marketing department but also by genuine public concern. The New York City School Board sent out a memo instructing parents not to let their children watch the film: “this is not just one more horror film,” it warned, “adults can confidently tell youngsters that ghosts and vampires do not exist. But the threat of nuclear war is real” (25). In the hours following broadcast, the White House switchboard was flooded with telephone calls from concerned citizens asking what the president was doing to stop the events shown in the film from actually happening. What they did not know was that Reagan had also watched the film at a private screening the month before and had been deeply disturbed by it. “It is powerfully done,” he wrote in his diary, “It is very effective and left me greatly depressed” (26). Reagan’s most exhaustive and unorthodox biographer, Edmund Morris, would later note that this was “the first and only admission I have been able to find in his papers that he was ‘greatly depressed’” (27).

The Day After is often dismissed as an inferior precursor to the BBC’s own nuclear war drama Threads, which traumatised the British public one year later. The two films shared an almost identical premise and a basic narrative technique: in each case, viewers spent time getting accustomed to the faces, hopes and dreams of a group of extended families before their lives were abruptly ruined by the arrival of nuclear warheads. But in other ways, the films were completely different. Threads now has the greater critical reputation because of its brutal neorealist approach to the apocalypse: the Ken Loach-style drama of Jimmy and his family gives way to a pitiless portrayal of environmental collapse and social degeneration, a world in which compassion, empathy and love have all been removed from the human experience. The bleakness of Mick Jackson’s vision of postwar Yorkshire was, in effect, a ferocious application of Carl Sagan’s then-fresh theory of nuclear winter, and lingered in the memory even longer than the imagery of Sheffield’s thermonuclear incineration. In contrast, The Day After used the techniques of a mainstream American soap opera in order to set up a comforting portrait of provincial America padded out with familiar screen faces like Jason Robards and Steve Guttenberg (one year after this apocalypse Guttenberg would be chasing Kim Cattrall around the Police Academy campus). The movie didn’t exactly flinch from the gory details of nuclear war – medical facilities collapsed as staff died and casualties mounted, central characters succumbed to radiation or were picked off by feral scavengers – but did suffer from too little Sagan: after the war, blue skies opened up again over the radioactive ruins of Kansas. Nevertheless, the soft focus melodrama served its purpose just as effectively as the bleak realism of Threads did. It was, in its own way, just as unsettling to watch and to think about. Because, while Sheffield was reduced to mass panic by a four-minute warning, The Day After was able to show something different and equally chilling: crowds watching passively as the ICBMs arced out of their silos, rising over the hospitals and parks of Kansas and into the immaculate American sky, followed by an eerie, even beautiful, moment of pause — partly shock, partly awe — before those same crowds remembered that in thirty minutes’ time Soviet missiles would pay them back. Although it had a completely different aesthetic sensibility, this sequence was as powerful and haunting as anything in Threads.

It was also better timed. By the time Threads was broadcast in September 1984, it was slightly behind the curve: the crisis of the previous year was history and tensions had begun to ease. The Day After, on the other hand, was shown in the middle of some of the most dangerous moments of 1983 and therefore the entire Cold War; it was, in fact, a key component in that crucial year and contributed to its safe conclusion. Reagan, in his diary entry after the screening, continued: “My own reaction: we have to do all we can…to see that there is never a nuclear war.” Only weeks later, the President sat through his first SIOP briefing, an experience he’d managed to avoid until that point, and was further chastened and horrified by what he was told: with a few tweaks and extra attack options, this was essentially the same “horror strategy” that all previous presidents had abhorred but failed to revoke. The briefing reaffirmed what Reagan felt watching The Day After: “In several ways, the sequence of events described in the briefings parallel those in the ABC movie. Simply put, it was a scenario for a sequence of events that could lead to the end of civilization as we know it” (28). The film provided a dramatic framework that helped him to understand the reality of what he was being told in the SIOP briefing: it made the experience even more immediate and vivid for him. As Beth Fischer argued in The Reagan Reversal, The Day After was more important in Reagan’s case than it might have been otherwise because of his own way of understanding the world. “The film’s format and style were perfect for impressing upon the president the reality and horrors of nuclear war,” she wrote, “It spoke to his fear about a nuclear Armageddon; it was narrative in style; and, like most of Reagan’s own stories, it focused on the lives of ordinary Americans. The movie also presented the concept of nuclear annihilation in visual images that would stay with Reagan far longer than jargon-laden statistics” (29).

Both Threads and The Day After, like their forerunners The War Game and On the Beach, were attempts to imagine the nuclear apocalypse and to dramatise its reality. In different ways and for all their visual flaws (and maybe because of them), the horror on offer was the ultimate one: the spectacle of everyday life being stripped of all illusions and comforts as humanity destroyed itself. It is not superficial to say that for Reagan – the former Hollywood actor, film fan and compulsive storyteller – watching this vision of nuclear apocalypse unfold on screen affected his subsequent policy decisions and actions. Reagan was an imaginative thinker rather than an academic analyst and understood the global and historical issues he faced in narrative terms. He was once mocked for asking Gorbachev during one of their fireside chats whether or not the Soviet Union would side with the United States in the event of an alien invasion; Gorbachev, rather taken aback, said yes, it probably would. The question was unorthodox but in the context of an arms control summit it wasn’t really stupid at all. Reagan was actually asking: what are the limits of ideology in the context of our common humanity? In the year of KAL-007, Able Archer 83, The Day After and his SIOP briefing, Reagan had been given a powerful reminder that nuclear weapons menaced all mankind. 

So, on January 16, 1984, in the brittle aftermath of the year of fear, the President delivered one of the most important speeches of his career. Andropov was still reeling from the events of the previous autumn and Reagan, having finally learnt of Kremlin paranoia, knew that any wrong move could still trigger a precipitous nuclear response. He decided, then, to make public the true scope of his ambition, reiterating as clearly as he could that “my dream is to see the day when nuclear weapons will be banished from the face of the earth.” On his own side, nobody else was really very keen on this. His aides and advisers tended to ignore this kind of talk, until the 1986 Reykjavik arms summit shocked them all with the depth of his convictions and his determination to act on them. Kenneth Adelman had to exorcise this horrible memory in his memoir: “Reagan is a man of instinct and vision, not logic,” he wrote, but “to propose that we usher in a nuclear-free world would be to abdicate our postwar responsibilities. We could kiss NATO goodbye and leave Western Europe far less secure” (30). Thatcher was equally dismayed, telling Richard Perle, “It is inconceivable that the Soviets would turn over their last nuclear weapon. They would cheat. I would cheat” (31). In a letter she told Reagan that “while nuclear weapons themselves might in theory be abolished, the knowledge of how to make them never will be. But the risk lies above all in undermining public support for our agreed strategy of deterrence and flexible response” (32). The problem was, as Adelman attested, Reagan would politely listen to such objections and then just carry on talking about abolishing nuclear weapons anyway. In his memoir, Reagan sounded tired, even exasperated, by his nuclear strategists: “Some of my advisers, including a number at the Pentagon…said a nuclear-free world was unattainable and it would be dangerous for us even if it were possible; some even claimed nuclear war was ‘inevitable’ and we had to prepare for this reality. They tossed around macabre jargon about ‘throw weights’ and ‘kill ratios’ as if they were talking about baseball scores” (33).

John Patrick Diggins described Reagan as “a romantic when it came to the arms race, a leader who believed that…a world without nuclear weapons could be possible because it could be imagined” (34). But Reagan’s imagination was not simply romantic, it was also religious, shaped by the evangelical influence of his youth. Like Kahn and the RAND strategists, Reagan had a moral perspective on nuclear weapons, but this disposed him towards abolition rather than control. Thatcher tried in vain to explain that peace in Europe – and the security of her country – had been maintained by NATO’s nuclear threat. This made no difference to Reagan: as far as he was concerned, the “balance of terror” was an immoral proposition that had to be changed. As he wrote in his memoir:

Advocates of the MAD [Mutual Assured Destruction] policy believed it had served a purpose: The balance of terror it created, they said, had prevented nuclear war for decades. But as far as I was concerned, the MAD policy was madness. For the first time in history, man had the power to destroy mankind itself. A war between the superpowers would incinerate much of the world and leave what was left of it uninhabitable forever. There had to be some way to remove this threat of annihilation and give the world a greater chance of survival. (35)

He was never reconciled to this reality: it offended his moral conscience. It also haunted his imagination – in particular his apocalyptic imagination. At the age of 11, influenced by his mother, Reagan had been baptised into the church of the Disciples of Christ and adopted their tradition of Biblical literalism. He believed in the reality of Armageddon and, when he was President, occasionally unnerved his advisers by noting signs of its imminence. In a Wall Street Journal interview in 1985, he mused aloud, “I don’t know whether you know, but a great many theologians over a number of years…have been struck by the fact that in recent years, as in no other time in history, most of the prophecies have been coming together” (36). Reagan’s National Security Adviser Robert McFarlane would later explain to Beth Fischer that “the President had fairly strong views about the parable of Armageddon…he believed that a nuclear exchange would be the fulfillment of that prophecy and that the world would end through a nuclear catastrophe” (37).

If Reagan believed that nuclear war was destined to be the fulfilment of scripture, he was also determined to prevent it. The theological implications of this did not concern him, because he was capable of holding two conflicting ideas in his head at the same time with total — and equal — commitment and belief. He found it easy to live with contradictions and did not feel the need to explain them away. Rearmament was required to give America the leverage it needed to negotiate disarmament. SDI was necessary to thwart the Biblical prophecy of Armageddon. In On Thermonuclear War, Herman Kahn had noted that “history has a habit of being richer and more ingenious than the limited imaginations of most scholars or laymen” (38), and the history of the Reagan era turned out, in the end, to be both richer and more ingenious than anybody had expected it to be in 1983. This was largely to do with the imagination of Ronald Reagan, which did not concern itself with limitations, as such. It was an imagination that was surprisingly subtle and universally confounding. (Edmund Morris, the writer given the most access to Reagan during his lifetime, was almost driven to madness trying to explain his subject – Dutch, the resulting biography, was an act of creative despair as much as anything else.) It was an imagination partly defined by a pure hatred of nuclear weapons — a hatred that eclipsed all of the subtleties of nuclear strategy and Soviet policy formulated by Herman Kahn or Caspar Weinberger and aggravated allies and advisers like Thatcher and Adelman. Haunted by Armageddon, it was Reagan, more than Gorbachev or anybody else, who finally saved the world from thermonuclear suicide.

  1. Quoted in Fred Kaplan, The Wizards of Armageddon (Simon and Schuster, 1983), p.387
  2. Ronald Reagan, An American Life (Simon and Schuster, 1990), p.554
  3. Herman Kahn, Thinking About the Unthinkable in the 1980s (Simon and Schuster, 1984), p.52
  4. Arthur Herzog, ‘Report on a Think Factory’, New York Times Magazine, November 10, 1963
  5. Quoted in Kaplan, p.228
  6. Norman Podhoretz, ‘Herman Kahn and the Unthinkable’ in Doings and Undoings: The Fifties and After in American Writing (Noonday Press, 1964), p.315
  7. Kahn, Thinking About the Unthinkable in the 1980s, p.87
  8. Leslie Gelb, ‘Who Won the Cold War?’, New York Times, August 20, 1992
  9. Kahn, Thinking About the Unthinkable in the 1980s, p.51
  10. Ibid., p.35
  11. Ibid., p.156
  12. Quoted in Beth A. Fischer, The Reagan Reversal: Foreign Policy and the End of the Cold War (University of Missouri Press, 1997), pp.127-8
  13. Quoted in Kaplan, p.32
  14. Ibid., p.268
  15. Ibid., p.78
  16. Ibid., p.79
  17. Herman Kahn, On Thermonuclear War (Routledge, 2017), p.169
  18. Ibid., p.19
  19. Ibid., p.21
  20. Ibid., p.86
  21. Ibid., p.137
  22. Quoted in Robert Scheer, With Enough Shovels: Reagan, Bush and Nuclear War (Secker & Warburg, 1982), pp.23-4
  23. Richard Pipes, ‘Why the Soviet Union Thinks It Could Fight and Win a Nuclear War’, Commentary, July 1977
  24. Kahn, Thinking About the Unthinkable in the 1980s, p.221
  25. Quoted in Fischer, p.116
  26. Reagan, p.585
  27. Edmund Morris, Dutch: A Memoir of Ronald Reagan (HarperCollins, 1999), p.498
  28. Reagan, p.585
  29. Fischer, pp.118-9
  30. Kenneth L. Adelman, The Great Universal Embrace: Arms Summitry — A Skeptic’s Account (Simon and Schuster, 1989), pp.66-8
  31. Charles Moore, Margaret Thatcher – The Authorized Biography, Volume Two: Everything She Wants (Allen Lane, 2015), p.588
  32. Ibid., p.589
  33. Reagan, p.550
  34. John Patrick Diggins, Ronald Reagan: Fate, Freedom and the Making of History (W.W. Norton & Company, 2007), p.226
  35. Reagan, p.258
  36. Quoted in Fischer, p.107
  37. Ibid.
  38. Kahn, On Thermonuclear War, p.137

Time Wars: Hollywood and the Conquest of Memory

Something I learned watching Ghostbusters: Afterlife: Egon Spengler is a big fan of the Ghostbusters. He may even be a fan of Ghostbusters, judging by the wooden signs scattered around the gates of his house and daubed with excerpts from Revelation 6:12, a verse that features in Jason Reitman’s film because of a private conversation Ray Stantz had with Winston Zeddemore in 1984. “Ray,” asked Winston, as Ecto-1 rolled over Brooklyn Bridge and Gozer the Gozerian plotted the end of the world at 55 Central Park West, “has it ever occurred to you that the reason we’ve been so busy lately is because the dead have been rising from the grave?” Just in case we all somehow miss the link, an elderly Stantz is shown toiling away in Ray’s Occult Books (first glimpsed in 1989, in Ghostbusters II) with the same Biblical reference tattooed on his arm: the conversation clearly cut deep with the Ghostbusters, Jason Reitman and the rest of us.

References to Ivan Reitman’s original masterpiece come so thick and fast in Afterlife that they effectively hijack the plot, the sound, the visual effects and the atmosphere of his son’s film. In fact, what exists of an original plot is so thin that it hardly registers a flicker of independent thought, while relocating the action from New York to Oklahoma does nothing to lessen its dependence on the earlier work. Some reviewers have described this as homage paid by the young director to his father, but, in fact, the real emotional resonance of the piece is to be found in the death of Harold Ramis, and therefore Egon Spengler. Before the final credits roll a dedication to Harold is inscribed across a starry night and the poignancy of this hits home because Afterlife is also about the passage of time. What has happened to Egon, Ray, Peter and Winston since we last saw them together in 1989 is also, to a large degree, about what has happened to us — those of us, that is, who first saw Ghostbusters as children in the cinema or on TV at Christmas, who wore the original glow-in-the-dark sweaters or played the ZX Spectrum games. For a film so transparently searching for a young audience, its most profound theme turns out to be age: what it means to lose friends and the people we love as we grow, mature, mess up, get old and die. It is a film that fixes, both sentimentally and cynically, on the related obsessions that currently haunt Hollywood: time, memory and nostalgia.

If Afterlife is not the culmination of this lucrative cycle then it is an extreme example of the trend. It is a film that shamelessly exploits generational demographics without successfully identifying a target audience. This is, in a way, one of the inevitable results of Hollywood’s merciless plundering of the rich resources of cultural nostalgia and human memory. Successive generations raised on comics, toys, cartoons and summer blockbusters are now having their own history revived, recycled and rebooted as movies that should, basically, be made for their children. A film like Ghostbusters: Afterlife attempts to span generations and capture overlapping audiences, but because the attempt is so transparent it inevitably fails, leaving the film to drift around in limbo, judged largely by its box office returns and franchise potential. In fact, the film is also partly about franchise demographics and the resulting confusion is therefore hardwired into its DNA, leading to an aesthetic and commercial incoherence that is not without some accidental merit. Because Afterlife, without really intending to, reflects a fundamental insecurity about age, memory and the past at a time when everything is archived and accessible and the cultural boundaries between generations have largely collapsed. It is the product of a moment when viral nostalgia has fully compromised popular culture and is so pervasive that generations of consumers are feeling nostalgic about things they never experienced in the first place. In this film and others like the Star Wars sequels, Blade Runner 2049 and the Wonder Woman movies, the past is almost more important than – and certainly superior to – whatever present they try to represent.

This process began in the 1990s but the most audacious and aggressive colonisation of cultural history has taken place in the last decade. In 2012, Disney made a large purchase on our memories when it acquired Lucasfilm for $4.05 billion. This brought them a lot of stuff: the film production business, animation studios, toys, videos and George Lucas’ video and audio effects portfolio — including Industrial Light & Magic, one of the key innovators in modern movie history. Most important of all, it brought Star Wars, the biggest film franchise of them all and one of the dominant mythic narratives of our time. This was the real prize, which was obvious because the company wasted no time at all announcing the production of Episode VII: The Force Awakens. Over the next few years, Disney unleashed their marketing strategy with the obsessive planning of a Napoleonic campaign, successfully generating enough anticipation and intrigue to deliver the fastest film to gross $1 billion in blockbuster history. The campaign itself was a ruthless exercise in emotional manipulation and psychological exploitation played out on a global scale. The second trailer provided a tantalising recall of familiar settings, sounds and characters, calculated to trigger mémoire involontaire: John Williams’ Empire Strikes Back theme washed over fresh desert vistas that recalled Tatooine as audiences gazed upon the outline of a wrecked Star Destroyer, ditched sadly in sand dunes but still holding its beautiful triangular frame together like a gigantic dead grey moth. The trailers used musical and visual cues to stir deep subliminal connections to the first trilogy and to the communal experience of generations raised on Star Wars, while carefully avoiding any references to Lucas’ own prequels which nobody at this point wanted to remember at all. In many ways, the trailers and the film itself did all they could to erase the spectre of The Phantom Menace and evoke the memory of A New Hope: nostalgia, rather than future shock FX, was Disney’s secret weapon.

In fact, this appeal to the past went beyond a mere resurrection of the old crew: a lot of the marketing highlighted the fact that J.J. Abrams used model spaceships and monster costumes instead of building the whole thing on blue screens and computers. For this reason The Force Awakens looked and felt more like the original trilogy and provided an aesthetic refutation of the Lucas prequels. Lucas was stubbornly proud of these prequels: they had functioned as large-scale R&D projects for digital effects, with ILM pushing itself to the limit in an effort to scope out the audio-visual future of cinema. In his head, they held up, but this impressive ambition produced a stillborn epic, a sterile CGI canvas that looked horrible and was despised for the liberties it took with the universe that Star Wars fans had grown up with and adopted as their own. The Force Awakens, by contrast, achieved a visual clarity by explicitly mimicking the earlier Star Wars films, but using the new technology to give it a high-gloss sheen. This allowed audiences to fall back into the same sensory and imaginative world that they had left at the end of Return of the Jedi, while satisfying the expectations of the modern multiplex masses. In terms of financial and artistic returns, the wager went, you could get the best of both worlds. Star Wars fans were thrilled by this return to first principles: fidelity to the original work was seen as a strength rather than cowardice, which is what it actually was.

The next entry in the new Disney canon took this process one step further. By telling the story that occurred immediately before A New Hope the producers of Rogue One were left with the interesting technical dilemma of representing characters that already existed in earlier films played by actors who were now either dead or much older. The easiest option would have been to ignore them altogether and skirt around the periphery, but Rogue One took the more ambitious and problematic route of resurrecting a dead Peter Cushing and a young Carrie Fisher through the medium of motion capture CGI. The ethical dimensions of this decision were complex, or at least they should have been: in fact, the prospect of directly segueing the final act of Rogue One into the opening scene of A New Hope was thrilling both for the film-makers and franchise fans. The technological and emotional spectacle of this was so overwhelming that nobody really seemed to be thinking about the purpose of doing it, other than the most obvious and venal one of making money. But what was being channeled, or exploited, was deep and lucrative: memory and nostalgia. It was possible to enjoy Rogue One without any knowledge of the Star Wars saga but most of its emotional resonance came from allusions to the first trilogy. It explored an old universe that had already been built by Lucas and his original crews: anything new was an addition to, or an improvisation on, an already existing imaginative space. In a way, that was not a problem for the film at all: that was, in fact, the pleasure of the text. The memories of the audience were tightly woven into the very fabric of the work, there to be admired, touched, teased out, unthreaded. It is what gave the film its dramatic power, while also pushing reverence for the original source material to a new and logical conclusion. What happened next, therefore, was unexpected. 

Massive commercial and critical success led to hubris at Disney. Any sense of disappointment at the complete lack of imagination and courage shown in The Force Awakens, or at the timid and basically pointless narrative tinkering pursued in Rogue One, was effectively marginalised by the pleasure and gratitude felt by a delighted global Star Wars audience. To achieve this, Lucas had to be purged from the process, to an extent that clearly wounded him, judging by his prickly dismissal of Abrams’ film. Disney did not have to care about this, however, because the box office receipts and the reviews proved that they had been correct in their strategy, which basically amounted to LUCAS IS DEAD, LONG LIVE LUCAS. Reverence also meant regicide. This had its mirror in a condition peculiar to the hardcore Star Wars fanbase: Star Wars fans hate Star Wars. As Andrey Summers wrote in his 2007 essay ‘The Complex and Terrifying Reality of Star Wars Fandom’, “To be a Star Wars fan, one must possess the ability to see a million different failures and downfalls, and then somehow assemble them into a greater picture of perfection. Every true Star Wars fan is a Luke Skywalker, looking at his twisted, evil father, and somehow seeing good. We hate everything about Star Wars. But the idea of Star Wars…the idea we love.” Never has the Death of the Author been so acute: Lucas held responsible for everything the fans hate about the universe he created, which is their one true love. But The Last Jedi provoked this neurosis more than any other film in the franchise, even the largely loathed prequels.

This was because Disney did not duly respect the property. The extent of the studio’s hubris fully revealed itself in Rian Johnson’s film and left the first dent in their immaculate franchise. Disney appeared to have launched a sequel to one of the most famous sagas of all time without having planned it out in advance. There is some debate about how much Lucas knew what he was doing at the origin, but he always claimed to have had the whole thing worked out before he started shooting. He requested the rights to make sequels before A New Hope was even released simply because he wanted to tell his story. Now Disney was improvising, but doing it with billions of dollars of studio money and at the expense of the hopes and dreams of generations of fans. For the “true fans”, the Star Wars universe belonged to them, not to Disney or even to Lucas: it was their possession and, arguably, their devotion had created and shaped more content than the original filmmakers could ever have imagined or controlled. The wrath that Lucas felt from this contingent when he went back with his new CGI toys to “improve” the old films should have been a warning to the new franchise owners to tread carefully. Disney, however, assumed that they had the ability to complete the Skywalker narrative to the satisfaction of the casual multiplex consumer and the hardcore convention nerd alike, even though they had no idea how they were going to do it. They had the money, and the talent it could buy, to work this all out as they went along. Abrams’ abject recycling of the plot of A New Hope didn’t seem to worry them because the response had been ecstatic and it did enough to set up a new story cycle. So, with a sense of total ease and confidence, they took a risk.

The result of that risk was provocative to the point of sadism: it was as if Johnson had taken the job specifically to bury the Star Wars myth forever. In fact, that is exactly what he did: the most symbolic scene in the whole film found Yoda goading Luke Skywalker into destroying the Jedi Library, chuckling while the ancient Jedi texts burned: “page turners, they are not.” The totems most cherished by fans ensconced in the extended Star Wars universe were being dismissed as insignificant and literally incinerated. Johnson did not even respect the “legacy” of the preceding film: Abrams’ carefully prepared master villain Snoke was unceremoniously sliced in half moments before Rey’s lineage was revealed to be completely insignificant — a bold move, seeing as Abrams had set this up as the central riddle the trilogy existed to solve. For a film advancing an epic built on dynasty, succession and inheritance, The Last Jedi undermined all of these themes: Skywalker calling for the termination of the Jedi Order; Yoda blowing up its inheritance with a bolt of lightning; the Skywalker lineage itself being exposed as irrelevant to this new narrative, a narrative that had not even decided its own destiny yet. The Last Jedi was a disaster for whoever had the task of following it because Johnson had smashed Abrams’ story into little pieces and left it scattered all over the universe. Lucas, hilariously, really liked this one. The critics and the review aggregators all seemed to agree, but among the Star Wars loyalists it caused pandemonium: their universe had been toyed with and their passion disrespected. Johnson had taken liberties with narratives and characters that, in their eyes, he had no right to touch. He had damaged the story that anchored their imaginations. Their response was to dismiss the film completely or to try to rationalise scenes such as Yoda’s destruction of the Jedi texts as part of a master plan, something other than what it seemed or what it was: a rather direct comment on the void at the heart of their elaborate fantasy world. The question the film asked, very effectively judging by this particular response, was: who does this universe belong to? Who is it for, and why? This led to more questions, such as: who owns the past? Should you treat people’s memories with respect or irreverence? What are the implications for art that trades on nostalgia? What can you do with it?

Abrams also toyed with these questions in The Force Awakens when he decided to kill off Han Solo, a plot point closely guarded before the film’s release in order to preserve the emotional impact of his grand ritual slaying. The scene condensed all the themes of dynasty, inheritance, time and nostalgia into one metatextual spasm of violence. In fact, Harrison Ford was far from dead: over the years he would prove to be the true icon of nostalgia exploitation cinema, reviving not just Solo but also Indiana Jones and Rick Deckard. Blade Runner 2049 was itself a cathedral dedicated to Hollywood’s new commercial and aesthetic fixation on memory and time. It wasn’t a remake or a reboot, but it wasn’t exactly a sequel either. What differentiated it from Blade Runner was what also differentiated The Force Awakens from A New Hope: the original started with a story, ideas and characters, whereas the new film began with history, a template, an atmosphere, a visual language. Blade Runner 2049 inhabited a world that had been created in 1982 and added new dimensions to it: it was a film about the future that was fatally structured by its own past. There was not much to redeem this cynical exercise in retconning other than its rich thematic exploration of memory: the memories of the principal characters but also the memory of the original Blade Runner. Elements of the old film were directly inserted into the new one: a snatch of dialogue between Deckard and Rachel was sampled as a minor plot device, while the same special effects company that resurrected Harold Ramis for the finale of Ghostbusters: Afterlife also resurrected the original Rachel as a replicant simulation. The fact that Deckard quickly saw through this by noticing the (wrong) colour of her eyes felt like a wry comment on the virtual recreation of characters from the past that had already given fans of Rogue One their uncanny thrill.

So Blade Runner 2049 was an elegy for a lost world in which humans had real memories of trees, physical relationships, childbirth, Elvis Presley and films like the original Blade Runner. For Denis Villeneuve, the question of replicant blade runner K’s identity hinged on the reality of his one childhood memory: if it was real then it would prove he was born rather than manufactured. The key to this puzzle turned out to be the memory designer Dr. Ana Stelline, who understood the quality of memory to be the recollection of feeling rather than the mere accumulation of detail. In a world of artifice and simulation, memory remained the root of human experience: “if you have authentic memories,” Ana tells K, “then you have real human responses.” This is why one of the catastrophes responsible for unraveling society in Blade Runner 2049 was revealed to be a global data wipe that erased all digital records of the past, thereby undermining individual and collective links to history. This left memory tethered to that most fragile repository: the human mind. At one time fire was the major physical threat to memory, able to consume photograph albums, love letters and sentimental objects quickly and completely. Now the greatest threat to these markers of human identity would be the erasure of Instagram images, Facebook accounts, Google drives — all the mass digital data banks of our lives. In Blade Runner 2049 this had already happened, and the effect was to remove the past as a tangible reality. For K, who could not even trust the one memory he thought he had, the past was available only as fragile images and holograms. In this world, emerging from collapse, everything was scarce: water, wood, whiskey, love, even history. Which made all of these things even more valuable.

Nostalgia was at the heart of Blade Runner 2049, and not just our nostalgia for the Ridley Scott original. Cultural nostalgia was programmed into Villeneuve’s future more directly than Scott’s, partly because nostalgia was not a part of the cultural zeitgeist of 1982 in the way it is now. By the time we get to 2049 (in 2017), K’s holographic girlfriend Joi serves dinner dressed like an American suburban housewife from the 1960s while playing him Frank Sinatra songs that transform the desolation of his high rise apartment into a warm and colourful domestic space, a vintage mirage. K would later find Deckard in the deserted ruins of Las Vegas, holed up in an abandoned hotel, living on single malt whiskey and selecting Sinatra records from an old jukebox. In the concert hall of the casino, holograms of Marilyn Monroe and Elvis Presley struck iconic poses on stage, connecting us not just to the past, but to an entire world that no longer exists (“I like this song,” comments Deckard, as The King croons ‘Can’t Help Falling in Love’). The simulations were both triumphant and tragic: Elvis in full pomp, an apparition from the glory days of old Vegas, until the holographic transmission jammed, destroying the illusion and highlighting the emptiness of the present. The choice of Vegas for this sequence was perfect: the ‘non-city’ with no history, built out of kitsch monuments to the past in the middle of a desert, eventually recolonised by sand and radioactivity but still redolent of the cultural history it made for itself, which was almost all Deckard could hold on to. It is as if Villeneuve and cinematographer Roger Deakins had consulted Robert Venturi and Denise Scott Brown’s Learning from Las Vegas in order to wring all the symbolic resonance out of the city. Where Vegas had once been a possible future, it now became the symbol of a superior past, the memory of which kept Deckard somehow connected to what existed of life before the Great Blackout and environmental collapse. The radioactive ruins of Las Vegas were now a monument to nostalgia, memory and the past, just like Blade Runner 2049.

Probably the film’s greatest triumph was its cinematography: Deakins utilised CGI beautifully to update the original film’s own update of Metropolis. From the simulation of Rachel to the sandblasted wastelands of Vegas to the lurid and shimmering holograms that tower over K in the future Los Angeles, the technology had never been better used to enhance the human imagination. Deakins’ work practically erased memories of the sterile landscapes of the Star Wars prequels while also vindicating the pioneering work Lucas had demanded from ILM. In fact, the interesting thing is that as soon as it really became possible to realise the full potential of CGI — in other words, with motion capture and virtual cinematography — it was immediately used to resurrect the past and exploit nostalgia. After all, the giant leap forward happened so that Steven Spielberg could make the dinosaurs in Jurassic Park look as realistic as possible. The first thing Ridley Scott did with this new toolkit was to recreate Ancient Rome in Gladiator. The first thing George Lucas did was retouch his old work and revive his galaxy from a long time ago. The studios themselves suddenly saw the possibility of endless blockbuster profits in the lethal combination of CGI innovation and back catalogue material: Independence Day and Twister revived alien invasion and disaster movies in the same summer season, while Godzilla was waiting to return from the deep. Warner Brothers even tried to atone for the original sin of Jaws – the fake plastic shark – with sleek CGI Great Whites which, somehow, looked more risible than the dud Spielberg had to contend with in 1974 (it’s maybe worth noting that the MacGuffin in Deep Blue Sea was the attempt to find a cure for Alzheimer’s, that great enemy of human memory). Truly, anything was possible.

But, in the end, the future belonged to the synthesis of old comics and new CGI. If the blockbuster superhero revival began with Tim Burton’s Batman in 1989, it didn’t define modern cinema until Marvel Studios laid the foundations of the Marvel Cinematic Universe in 2005. The MCU was less about looking back or even exploiting nostalgic capital. Instead, it tried to define the present in order to capture the future, and it succeeded. In these films, contemporary fears and desires were inscribed across a multidimensional canvas made possible by the CGI innovations of the 1990s: a new mythopoetics for the global multiplex. Meanwhile, the Warner Bros triumvirate of Batman, Superman and Wonder Woman had been revived and restyled and rebooted so many times – not least in DC’s own comics – that their myths remained alive, even if they were on life support. In particular, nobody could leave Batman alone: from Michael Keaton through to Ben Affleck, Hollywood had accumulated an entire family of veteran Batmans, some more at peace with their legacy than others. On the face of it, Diana Prince offered less promising material, yet, against all odds, Wonder Woman (2017) proved to be the one genuine masterpiece of this cycle: a beautiful film about the human capacity for good and evil that was also concerned with loss, memory, memorialisation and the undertow of time.

At the heart of Patty Jenkins’ movie was Diana’s doomed love affair with allied pilot and spy Steve Trevor, making it a story framed by mementos and memories. It began with a photograph returned to Diana by Bruce Wayne: a portrait of her with Steve and their comrades Sameer, Charlie and Chief Napi taken on the Western Front in Belgium in 1918. This was a picture that first appeared in Batman v Superman: Dawn of Justice (2016), capturing a moment in time that went unexplained until Jenkins painstakingly re-staged it and so provided the key to the heart of Wonder Woman itself. The poignancy of old nineteenth and early twentieth century sepia photographs partly resides in their loss of context and meaning: it takes an act of imagination or creative empathy to engage with these old images in any way at all, yet they still have the power to move us. But these were not unknown historical figures for Diana: they were people she fought alongside and loved. The photograph therefore represented something that was real for her and that she remembered vividly a century on. It was not an object of antique interest, but her most precious possession. Wonder Woman, then, was a film steeped in history and fascinated by the emotional complexity of time. As she settled into the world, Diana would eventually find employment in cultural anthropology and archaeology at the Louvre and the Smithsonian Institution. On a daily basis, she catalogued historic artifacts recalling legends and myths that were, for her, not just stories but the context of her childhood, the story of her birth.

Wonder Woman also explored our cultural memory and collective nostalgia by setting the film in the last year of the First World War. The film’s pivotal scene – in which Diana sheds her cloak before leading a charge across No Man’s Land in her Wonder Woman apparel – risked being ridiculous, even grotesque, but turned out to be a truly powerful mythic set piece. Enhanced by filtered technicolor and CGI, this reconstruction of trench warfare gave historical landscapes familiar from archive newsreel an immediacy only matched by the digital alchemy of Peter Jackson’s They Shall Not Grow Old the following year. While Jackson’s film infused old footage with new life and depth, Diana’s story brought her 1918 portrait back to life for us: the moment her crew assembled in front of the camera was a startling and moving point of recognition in the film, an emotional apex. The tragedy of the trenches provided the platform, and for Diana the catalyst, for human salvation: in this scene Wonder Woman gave us Liberty Leading the People for the age of industrial slaughter. Traversing No Man’s Land, taking all the fire from German machine guns with her Amazonian shield, Diana transformed into Delacroix’s Marianne in order to defeat Ares, the God of War, and redeem humanity. The point of the scene was not historical realism: its power was symbolic. Diana’s dynamic act of sacrifice inspired others to follow her over the top, yet her success, even her life, was finally assured by their aid and marksmanship. It represented, then, the triumph of compassion and hope over division and despair.

Diana’s understanding of human nature would develop from idealism to horror, before finally settling on the truth: cruelty is balanced by care, hate by love. Her route to wisdom was love, but love that was cruelly cut short, thereby educating her in the basic human conditions of loss, grief and pain. This legacy of past tragedy haunted Wonder Woman 1984, the 2020 sequel that moved Diana to Washington D.C. during the year that Ghostbusters was released in cinemas and five years after the original Wonder Woman TV series was cancelled (Lynda Carter made a post-credits cameo here, just as Sigourney Weaver did at the end of Ghostbusters: Afterlife). For Jenkins, the period was clearly a gift: WW84 presents a cartoon-coloured, neon-lit simulacrum of mid-80s America, filled with pink leg-warmers, plastic visors, Miami Vice jackets, shiny corporate power suits, aerobics classes, malls, arcades, a Pontiac Firebird and a Top Gun Tornado. Almost no cliché was left untouched: a private party lit with pink spot-lights pulsated to Frankie Goes to Hollywood; in the White House, a Ronald Reagan clone was hungry for more nuclear missiles. On the glossy surface, then, it was another product of the 80s nostalgia industry, like Absolute 80s or Vaporwave. This was the revenge of a decade largely despised while it was happening: not only those who lived through it but also those born years later could not help gazing backwards, entertained and seduced by its excesses and aesthetics. In this way, WW84 was simply tapping into a deep reservoir of nostalgic desire.

The chaotic plot and dubious body ethics of WW84 drew critical censure, but Jenkins was entirely justified in comparing it to the time travel and body swap movies of the 1980s: it had the same manic energy and played the same sly games as Back to the Future and Big. World War One was too far away and too grave for Wonder Woman to be anything other than a memorial to its subject, but the proximity and vapidity of 1984 gave WW84 free rein to play with the cultural fetishisation of our living memories. For Steve Trevor, brought back from the dead and dropped into the body of a stranger, 1984 was all future shock, and his amazement at the period escalators, passenger jets and male grooming clashed with our supposed nostalgic reverie for the same. The sense of play was fun — especially when it got itself in trouble, just like its ancestors – but, in the end, WW84 was actually a fable masquerading as a comic book epic, a morality play laced with Cold War ICBMs. As the camera panned across Diana’s Washington apartment, it became clear that she was living in the past, trapped by its contractions and contradictions. Again, old photographs told her story — this time, of her life after Wonder Woman, filled with friends from the first movie who had, by now, aged and died while Diana remained as she was, only lonelier. She also lived in the shadow of Steve, unable to exorcise his ghost until WW84 granted her one true wish and returned him to life. But this time-spanning, body-swapping reunion was not only impossible, it was a lie: a cheating of the natural order that was, in effect, an existential crime, unraveling the moral fabric of the world. Because, from its opening scene, when Diana takes a short cut in order to win a ceremonial race on Themyscira, the central theme of WW84 was cheating — that is, the immorality of taking short cuts in order to obtain immediate gratification or glory, of acting on the human desire to evade fate and rewrite destiny. The final lesson for Diana is that you cannot cheat time.

This is also a lesson for Hollywood. The problem is not nostalgia, which, after all, belongs to human experience. It is a legitimate subject. Hollywood has spawned mutant strains of cultural nostalgia, but only because people respond to this with both their hearts and their hard cash. It is the reality of our lives: we are surrounded by instantly accessible cultural products that shaped our imaginations and mediated our emotions in the first place. For children of the 1980s, the experience of our first Spielberg movie was as real and authentic as our first romantic crush, and the memories are as vivid. The fact that our children look back with nostalgia on the imagery and sounds and products of our childhoods is not necessarily new either: we gazed back on the 1960s, say, in a similar way. Ghostbusters: Afterlife, the Star Wars franchise, Blade Runner 2049, Wonder Woman and WW84 are all films determined by the past: their own, their characters’ and ours. The exploitation of cultural memory and the trade in nostalgia is not just a cynical commercial tactic, it is tightly woven into the thematic and aesthetic fabric of these films. It is a dominant theme in our cultural universe. When there is no redeemable future, no utopia, no hope — when, in fact, both the present and the future look and feel like Blade Runner 2049 — the power of the past can be overwhelming. Hollywood pretends that it can give the past back to us, at least in part, so that we can say hello one more time to an Egon Spengler or a Han Solo, if only to say goodbye. Beginning in the 1990s and boosted by CGI, the studios discovered that they could plunder their own past and everybody would come to watch, even those who did not experience these things the first time around. It was, after all of that, a cynical commercial tactic, but one that presented a possible trap, because the cultural resources of the past are finite and our memories die with us. So, like Diana Prince in WW84, one day, maybe soon, Hollywood will learn that you cannot cheat time.


On the Legacy of George L. Mosse

Past history is always contemporary. 
George L. Mosse, The Nationalization of the Masses

During their brief uprising in Berlin in January 1919, the leaders of the Spartacus League occupied Mossehaus, a building that contained the offices and printing presses of the publisher Rudolf Mosse. In response to this trespass, the ageing Jewish patriarch sent his son-in-law Hans Lachmann to negotiate with the Spartacist leaders and reclaim his property. As George Mosse recalled in his memoir Confronting History, Rosa Luxemburg wanted to prevent the publication of Mosse’s main newspaper, the Berliner Tageblatt, so Lachmann kept her talking until the early hours of the morning, by which time the paper had been printed and delivered across Berlin. “For the rest of his life,” Mosse wrote of his father, “this committed liberal and capitalist was fond of saying that Rosa Luxemburg was the most intelligent woman he had ever met” (1). Days later the Communist revolt was crushed and Luxemburg was murdered by Freikorps troops, a casualty of the counterrevolution that would contribute so many men to the ranks of the SA and SS. Although they did not know it yet, the Mosses would also become victims of the Nazis, targeted by the regime as “ready-made symbols of the so-called Jewish Press” (2).

In his history of Zionism, Walter Laqueur drew a detailed portrait of the post-emancipation Jewish bourgeoisie in Germany, quoting the politician and economist Ludwig Bamberger who “stressed that the symbiosis, the identification of the Jews with the Germans, had been closer than with any other people. They had been thoroughly Germanised well beyond Germany’s border; through the medium of language they had accepted German culture, and through the culture, the German national spirit. He and his friends thought there was obviously some affinity in the national character which attracted Jews so strongly to Germany and the German spirit” (3). This was the milieu in which George Mosse grew up, as an indulged and privileged child of the liberal Jewish elite of Berlin. The Mosses were superbly well-connected employers and philanthropists who felt secure in their national identity and were therefore complacent about the Nazi threat when it first emerged. As Mosse recalled, “like many liberals of his generation…my father could bring himself only with difficulty to take the Nazis seriously. He used to say that Hitler did not belong in the front part of the newspaper, but in the Ulk, the comic supplement” (4). The Mosses considered themselves to be Germans “without giving it another thought” (5) and were unprepared to confront a regime that would define them as enemies of the nation. “To be sure” Mosse wrote, “such families as ours shared some of the self-criticism of Jews and had a lively consciousness of antisemitism, but in the last resort, German, Jew and family were interchangeable concepts” (6). For the whole family, this faith in Germany, liberalism and the tradition of the Enlightenment — which had, after all, secured their emancipation and provided them with the opportunity to succeed so spectacularly — also underpinned their anti-Zionism, a position that George would only abandon later in life.

Everything changed in 1933 when the Nazi agent Wilhelm Ohst held Lachmann at gunpoint and forced him to sign over the Mosse firm as part of the mass expropriation of newspapers and publishing houses by the Nazis. An SA thug, racial occultist and petty criminal backed by the power of an antisemitic regime, Ohst was no Luxemburg, as Mosse later noted: “this indeed was a different sort of occupation from that of the Spartacists with whom [my father] had dealt so easily after the First World War” (7). With no prospect of negotiation, the family was forced to abandon their business and property and escape into exile in France. They would never return to live in Germany and the experience of dispossession and exile would define the rest of their lives. In the case of George, it would determine his future course as a historian of fascism, nationalism and racism: “I could not simply walk away from the failure of socialists and liberals to understand National Socialism,” he wrote at the end of his life, “this failure in which, as we have seen, my own family’s publishing empire had been involved, was constantly before me” (8). It turned out that the Mosses were fortunate to have been targeted so early: by the Second World War, the family was in America, safe from Nazi extermination. Nevertheless, in an essay to mark his retirement, Mosse would conclude that “all my books in one way or another have dealt with the Jewish catastrophe of my time which I always regarded as no accident, structural fault or continuity of bureaucratic habit, but seemingly built into our society and attitudes towards life. Nothing in European history is a stranger to the Holocaust…” (9)

For Mosse, like Benedetto Croce, history is always contemporary, and this was the root of his contention, by no means universally accepted, that the Holocaust was not a historical aberration but an event with deep roots in the European experience. (The past was always present in his own life, too.) In fact, Croce was a key influence on Mosse: in 1976 he would tell the young historian Michael Ledeen that the Italian had “influenced me, above all, through his concept of the totality of history, something I believe very much — that outside history there is no reality” (10). His belief — in common with Croce — that the individual is central to historical understanding helped Mosse to maintain a careful balance between ethical commitment and impartial analysis in his own work. He would later acknowledge that his identity as a Jewish German exile and a closeted homosexual had largely determined his central historical themes, although he was never a didactic political activist in his writing. “I have always been instinctively suspicious of historians who have held an overriding belief, including a faith in a traditional religion,” Mosse wrote in his memoir, “now I realise that this attitude, while still desirable as an ideal, was unfair…The temper of the times made a neutral stance impossible and perhaps even undesirable to maintain” (11). He would never explicitly link his research to a contemporary cause, but his work was always defined by personal experience and by his ideological commitments to antifascism, liberalism and, later and in a qualified fashion, Zionism. He would engage with the ideas of the Frankfurt School, quote Foucault in his studies of sexuality and scope out territory later occupied by Gender Studies, but he would never join an intellectual movement or adopt the language or frameworks of passing theoretical trends. This was a form of freedom: it is what gave his writing its scope and clarity, but it also determined his own historical methodology. “I have always believed that empathy is the chief quality a historian needs to cultivate,” he wrote in Confronting History (12), and by this he did not mean that he wanted to identify with his subjects, but to understand the context and content of their ideas, aspirations and perceptions.

Mosse, therefore, proceeded on a different basis to most of his contemporaries, attempting to see fascism in its own image rather than applying external categories to it. This meant that he was never fashionable. His independence — or singularity — eventually led to relative obscurity outside of the niche of comparative Fascist Studies, and even here his work remains highly contested and to some degree subliminal. In Italy, where he was genuinely and widely appreciated, his fame was clouded by the controversies surrounding his friend Renzo De Felice, whose own work and ideas had been bitterly attacked by Marxist and liberal historians following the publication of his Intervista sul fascismo with Michael Ledeen in 1975, an event with important theoretical implications for Mosse himself. As Ledeen recalled afterwards, “for months, it was virtually impossible to read a newspaper, watch an evening of television, or listen to a few hours of radio without running into a supercharged attack on De Felice, not only for the presumed ‘errors’ of his historical analysis but also for ‘corrupting Italian youth.’ More than one critic suggested that he be forbidden to teach at Italian universities” (13). Mosse, for his part, defended De Felice, albeit not in print, and the two historians found inspiration and support in each other’s ideas and work. Most importantly, they quickly acquired the same enemies and for largely the same reasons: by taking the ideas and aesthetics of fascism seriously they found themselves open to accusations of apologetics or revisionism by critics who sought to make political gains from the scandal or behaved defensively, in reaction to a perceived attack on their own theoretical assumptions. At the heart of this was a deliberate and political misunderstanding of ‘historical empathy’, which Mosse himself would define as “putting contemporary prejudice aside while looking at the past without fear or favour” (14) and which was emphatically not the equivalent of sympathy, but a method and a quality that enabled him to break new ground in his own interpretations of fascism, nationalism, racism and sexuality.

In practice, then, ‘historical empathy’ allowed Mosse to focus on the myths, symbols and perceptions of fascist movements, their supporters and mass participants. Crucially, for Mosse, this did not simply mean the study of high culture, but also popular culture and populist theories, ranging from pulp novels to occult movements. This doesn’t seem very controversial now, but Mosse broke etiquette in two ways: by analysing cultural materials that had been treated as unworthy of consideration by serious historians and by treating fascism and Nazism as coherent ideologies in their own right. The introduction to his 1999 essay collection The Fascist Revolution is the most concise statement Mosse made about his approach and its relation to contemporary interpretations of fascism. “Fascism considered as a cultural movement means seeing fascism as it saw itself and as its followers saw it, to attempt to understand the movement on its own terms,” he wrote, “only then, when we have grasped fascism from the inside out, can we truly judge its appeal and its power” (15). The objective of this was not to rehabilitate fascism, but to understand it better: both its historical reality and its existing danger. Some of the ideas put forward by Mosse (and De Felice) threatened the very basis of existing interpretations, for example their contention that the Italian Fascist and Nazi regimes ruled by consensus rather than simply through terror or propaganda, and their definition of fascism as a middle class revolution of the right rather than a reactionary defence of monopoly capitalism. In Germany, “consensus” was a sensitive topic and, as a consequence, Mosse was largely ignored by German historians; in Italy, the Marxist and liberal cultural and academic elite of the First Republic claimed jealous possession of the revolutionary tradition and saw any minimisation of the overlap between Fascism and Nazism as, essentially, a fascist act, hence the vilification of De Felice and, by extension, Mosse.

So, this line of enquiry led to a break with prevailing modes of analysis that caused discomfort for different people in different ways. Mosse’s breakthrough work of 1964, The Crisis of German Ideology, put forward a new, and for German historians difficult, proposition: that Nazism was made possible by a German völkisch movement that had “penetrated deeply into the national fabric” and “showed a depth of feeling and a dynamic that was not equaled elsewhere” (16). Mosse was later criticised for adhering to the Sonderweg paradigm, but his achievement was in many ways more subtle and far-reaching than this. For the first time, he analysed the cultural traditions of Germanism and the institutionalisation of völkisch thought in the German Youth Movements, schools, universities, nationalist organisations and political parties of the right. (‘Leadership, Bund and Eros’, a chapter devoted to the links between masculinity, homo-eroticism and völkisch ideals, was a revolutionary piece of research and analysis in itself.) The point wasn’t that Nazism was an inevitable culmination of German history or an innate expression of German character (the ‘From Luther to Hitler’ school), but rather that the success of völkisch ideas in a newly unified and rapidly industrialising Germany created the conditions in which Nazism could succeed by consensus. As Mosse emphasised in his preface to the 1997 edition, the “human perceptions, hopes and longing for the good life” that contributed to this consensus could not be seen in isolation from economic, social and political forces (17). But the historical reality could also not be avoided: “as a result of the lost war, Germany became the nation in which the völkisch dream was realised [and] the alliance between racism and völkisch nationalism triumphed” (18). In fact, racism was key to the development of German nationalism and, according to Mosse, the central innovation of Nazism was to turn the German Revolution into an anti-Jewish revolution (“the dehumanisation of the Jew is perhaps one of the most significant developments in the evolution of the völkisch ideology”, 19). Hitler’s defining achievement was to organise a diffuse völkisch movement into a new religion of state by combining the irrational mysticism of the Germanic Faith with ruthless and pragmatic political tactics. “Mein Kampf is devoted 50 percent to theory and 50 percent to organisation,” Mosse told Ledeen in a 1976 interview, “and that’s about right as far as Hitler is concerned because he believed that theory was all important — the myth was central — but it would be no good unless it could be translated into action” (20). The Nazi myth needed a tradition to activate and The Crisis of German Ideology was the first attempt to analyse and catalogue the origins, character and structure of that tradition. “You cannot have any successful myth without historical preparation,” Mosse explained, “I tried, then, to show the roots of this myth” (21).

The scope and range of Mosse’s work at this time were unique — linking texts and ideologies to movements, institutions and, finally, mass murder — but not everybody thought it was warranted. Fritz Stern had covered similar ground a couple of years earlier, but most of Mosse’s contemporaries considered fascism to be empty of theoretical content and therefore deemed its ideological roots to be irrelevant. This was largely because fascism did not conform to their own definitions, as Mosse was willing to point out: “all fascisms rejected classical political theory, that is why Anglo-Saxon scholars have such a difficult time discussing it. They’re always looking for logical, consistent political theories. But fascism regarded itself, always, wherever it was, as an attitude of mind, an attitude towards life” (22). Unlike Communism, with its foundation in the canonical works of Marxism, fascism was not a textual doctrine, but a visual revolution; it did not proceed by rational explication of theory, but by the mobilisation of masses of people through myth, symbols and ritual. This crucial insight provided the foundation for Mosse’s 1975 masterpiece The Nationalization of the Masses, in which he described the contribution that national monuments, sacred spaces, architecture, public festivals, theatre, cultural organisations and popular taste all made to the Germanic myth. Here Mosse touched on new themes that would contribute to a general theoretical framework for all of his work: the idea of politics as a civic religion; the continuity between fascist and bourgeois aesthetics; the links between the French Revolution, the Enlightenment, nationalism and fascism; and the concrete objectification of myth in rituals, art and monuments. Mosse termed this the New Politics, defined as a secular political style that developed from the French Revolution to the Second World War “through the use of national myths and symbols and the development of a liturgy which would enable the people themselves to participate in such worship” (23). In Germany and Italy — the youngest of the major European nations — the end point of this was fascism, a political style that stood outside the rational, logical systems of traditional political theory: “the fascists themselves described their political thought as an “attitude” rather than a system; it was, in fact, a theology which provided the framework for national worship. As such, its rites and liturgies were central, an integral part of a political theory which was not dependent on the appeal of the written word” (24). This would later be described as Mosse’s “visual turn”: a new and innovative analysis of fascism that incorporated aesthetics and anthropology, simultaneously widening the scope and altering the terms of the debate.

So The Nationalization of the Masses — a relatively short work of 216 pages — established a new foundation for the cultural history of nationalism and fascism. The book had its greatest impact in Italy, where it intersected with the ideas of Renzo De Felice and set the course for Emilio Gentile’s dazzling trajectory. In an interview published in Corriere della sera in 1985, Mosse speculated that his popular reception in Italy was related to “the widespread disposition in your country to think visually: a predisposition which is very important for understanding my writings, the encounter between symbols and myths” (25). The relationship between Mosse and De Felice was particularly productive — even their disagreements led to clarity and refinements. Their focus on the differences between Nazism and Fascism had a different outcome in each case: for De Felice, it rendered any general definition of fascism futile, while Mosse would attempt a generic theory in his 1979 essay ‘Toward a General Theory of Fascism’. The distinction De Felice drew between the radical, activist ‘fascist-movement’ of the squadristi and the reactionary, establishment ‘fascist-regime’ of Mussolini found an echo in Mosse’s own analysis of Nazism as an “anti-bourgeois bourgeois revolution” (26) and the provocative link he made between Hitler’s apocalyptic racism and bourgeois tastes. Both Mosse and De Felice saw the fundamental difference between the Nazi and Fascist movements residing in their separate views of human nature. For the Nazis, human identity was fixed and immutable, rooting their racism in ancient German history; for the Italian Fascists, human identity was open-ended and dynamic, a matter of ‘spirit’ rather than blood, and racism therefore had less traction. These fine but crucial distinctions, shared with slight nuances by both historians, directly challenged the received wisdom of Marxist and liberal historiography in Italy, and they were bitterly contested. The “storm over De Felice” exploded precisely because the bestselling Intervista sul fascismo presented these ideas with a clarity and concision not found in his own books, thereby setting a clear challenge to the intellectual foundations of the First Republic. In fact, this clarity was important: it did not minimise the crimes of the Fascist regime, but helped to define its true nature.

For the study of modern European politics, tracing the origins of racism as an ideology was essential to understanding its ultimate triumph in the Third Reich, as Mosse knew. In the German case, racism was the central component of Nazi ideology and the direct motivation for unprecedented crimes. In 1978, Mosse finally reckoned with the origins of Nazi mass murder and produced one of the angriest books he ever wrote: his history of European racism, Toward the Final Solution. The anger, as such, did not communicate itself in the style or the tone or even the method, all of which were as cool and expansive as ever, but in the details of his analysis and conclusions. Mosse traced the roots of modern European racism to two traditions: Romanticism (also the root of nationalism) and the Enlightenment. Racism combined science with aesthetics: scientific classification and Greek ideals of beauty found their ultimate application in eugenics, physiognomy and the Aryan stereotype, all Enlightenment legacies. In Germany this combined with völkisch nationalism and led a nation down the fatal path to racial mysticism, race war and Lebensraum. Racism, as a political ideology, social aesthetic and national movement, required an antithesis and an enemy to give it dynamism: in Germany, Austria and Eastern Europe this could only be the Jew. The Jew in this taxonomy was therefore given a distinct look that stood in direct contrast to the Aryan male, itself a derivation of Classical aesthetic models revived during the Enlightenment: the nineteenth and twentieth centuries were, for Mosse, visual ages, a key element in his analysis. Finally, the Jewish stereotype came to embody everything that threatened or corrupted the national community, such as urbanism, financial capitalism, bourgeois cosmopolitanism and decadence. It was this middle class development and promotion of stereotypes that underpinned Mosse’s provocative description of the Nazi as the ideal bourgeois and also inspired his depiction of the nineteenth century “struggle to control sex” (27) through stereotypes and manners in 1985’s Nationalism and Sexuality — an equally angry book that was in many ways a companion piece to Toward the Final Solution. For Mosse, racism and respectability stemmed from the same impulse to classify, and thereby exclude, as well as the same fatal combination of science and aesthetics. The story of racism was not “the history of an aberration of European thought” but “an integral part of the human experience” (28), while “respectability provided society with an essential cohesion that was as important in the perceptions of men and women as any economic or political interests. What began as bourgeois morality in the eighteenth century, in the end became everyone’s morality” (29). This was the paradox of the Enlightenment that Mosse identified: what had given birth to the liberal tradition and led to Jewish emancipation had also spawned the twin evils of racism and respectability.

These were key concerns for Mosse, linked as they were to his own past and personal experience. However, in his late memoir, he qualified one crucial aspect of this attack on the European bourgeoisie: “the repression involved in the maintenance of respectability seemed to strike reviewers, and indeed I might have overstressed this aspect of nationalism and respectability by failing to suppress sufficiently my anger over the fact that the strictures of respectability had made my own life so much more difficult” (30). Furthermore, in his preface to the republication of The Crisis of German Ideology, Mosse expressed second thoughts about the origins of National Socialism that had implications for all of his work on nationalism, racism and fascism: “If I were to write this book today…the First World War, which prepared the breakthrough of völkisch thought, would be given greater space. Not only because the myth of the war experience proved susceptible to völkisch ideas, but because, as a result of the lost war and its consequences, Germany became the nation in which the völkisch dream was to be realised. This could not have been foreseen before the war” (31). Mosse redressed this in Fallen Soldiers, his 1990 study of the memory of war that finally gave the First World War the “space” it had not been accorded in his earlier work. Influenced by cultural historians such as Marc Ferro and Paul Fussell, Mosse came to see the First World War as the great catalytic moment of the twentieth century, the event that gave nationalism and racism the dynamism and momentum that led directly to fascism and the Holocaust. Fallen Soldiers drew together all the social, cultural and military trends that had developed from the French Revolution to 1914 in order to identify the defining features of the Great War and its aftermath: the myth of the war experience, the cult of the fallen, the worship of the nation, the brutalisation of politics, the mechanisation of all aspects of life and the dehumanisation of the enemy. For Mosse, “the encounter with mass death” during the war “took on a new dimension, the political consequences of which vitally affected the politics of the interwar years” (32). All of the nations involved felt these effects to different degrees, but none more so than Germany, a country both traumatised and brutalised by the war experience and with a völkisch ideology ready to be activated and exploited by the parties of the right, with the Nazis at the vanguard. The First World War was the decisive point at which all the key elements Mosse had traced in his work — from the stereotypes of race and gender to the ideologies of nationalism and the Volk — were fatally radicalised, with antisemitism, racial mysticism and the cult of violence moving from the margins to the centre of European politics. In the final analysis, it was the Great War that led to Nazi culture and Holocaust morality, the dark heart of European modernity.

The last major works of Mosse — Fallen Soldiers and The Image of Man (1996) — developed all of the key themes and subjects from across his career, providing a rich coda to his long study of the politics of nation, race and gender from the eighteenth to the twentieth centuries. Both books represented a final reckoning with his times and with his own personal experience; they were also his most beautiful and underrated works. After his death in 1999, tributes were paid to his influence and achievement in the form of obituaries, essays and conferences, and to this day his academic legacy is kept alive through programmes, funds, conferences and prizes all bearing his name and often paid for by the Mosse estate, stolen by the Nazis in 1933 and returned by the newly unified German state in 1990. Most significantly, in 2020 the University of Wisconsin Press finally began to publish Mosse’s collected works with new critical introductions, and as a result The Crisis of German Ideology, Nationalism and Sexuality, Toward the Final Solution and The Fascist Revolution are already back in print. I assumed this would be big news, but I have not seen any acknowledgment of these new editions in the British or American press — not in the Times Literary Supplement, the London Review of Books, the New York Review of Books or even Commentary, a magazine that once published him. This seems to be par for the course: today, the innovations and influence of Mosse go largely unacknowledged, even in research areas he helped to establish. Nevertheless, Roger Griffin continues “the Mossean legacy” in his own work and Emilio Gentile still plots a course set by his original encounter with Mosse’s books; their respective studies Modernism and Fascism (2007) and The Sacralization of Politics in Fascist Italy (1993) are key works in the Mosse lineage. Furthermore, Mosse’s subliminal influence is pervasive, as Griffin suggested in 2008 when he noted “the appearance of a series of books by a younger generation of historians who find it a matter of ‘common sense’ to engage with fascism not as a reactionary, backwards-looking force, but as a revolutionary, futural one deeply bound up with the early twentieth century revolt against existing modernity that took on myriad aesthetic, social and political forms” (33). The uncontroversial acceptance by contemporary historians of ideas first pursued by Mosse (along with De Felice) in the face of fierce ideological hostility is, in the end, the sum of his legacy. It is a large one, and yet his name is invisible. The same holds true for the wider terrain of cultural history (Linda Colley’s Britons seems to me to be a very Mossean book, but he appears to have no acknowledged impact on her method or thinking) as well as the various schools and departments of Gender Studies, Queer Theory and LGBT history, fields in which he was a genuine pioneer. The reasons are likely to be political or stylistic, but the neglect is a mistake: Mosse still has plenty to say about the world we live in now, and he always has done. “This is still the age of mass politics,” Mosse wrote in an untitled speech on the subject of indoctrination,

and the same longings which were operative in 1945 are still with us surely — the political process is still a drama transmitted to us by the media…the old traditions seem to have broken down, but now we have them again under a different form, mediating between us and the world, between us and our hopes in escaping from the crisis of our time, which is the crisis of mass politics and mass democracy. Indoctrination is in reality this mediation and it would not work if it did not represent a principle of hope. Rather than condemn it we must understand its function: then perhaps we can begin to escape its all pervasive present. (34)

I think we should be reading Mosse carefully, and learning from him. 

  1. George L. Mosse, Confronting History: A Memoir (University of Wisconsin Press, 2000), p.40
  2. Mosse, Confronting History, p.66
  3. Walter Laqueur, A History of Zionism (Schocken Books, 2003), pp.30-1
  4. Mosse, Confronting History, p.41
  5. Ibid., p.43
  6. Ibid., p.26
  7. Ibid., pp.67-8
  8. Ibid., p.184
  9. Quoted in Steven E. Aschheim’s introduction to George L. Mosse, The Crisis of German Ideology: Intellectual Origins of the Third Reich (University of Wisconsin Press, 2021), p.xxv
  10. George L. Mosse and Michael A. Ledeen, Nazism: A Historical and Comparative Analysis of National Socialism (Basil Blackwell, 1978), p.29
  11. Mosse, Confronting History, p.6
  12. Ibid., p.5
  13. Michael A. Ledeen, Freedom Betrayed (AEI Press, 1996), p.25
  14. Mosse, Confronting History, p.5
  15. George L. Mosse, The Fascist Revolution: Toward a General Theory of Fascism (Howard Fertig, 1999), p.x
  16. Mosse, The Crisis of German Ideology, p.10
  17. Ibid., p.x
  18. Ibid., pp.x-xi
  19. Ibid., p.141 
  20. Mosse and Ledeen, Nazism, p.59 
  21. Ibid., p.32
  22. Ibid., p.108
  23. George L. Mosse, The Nationalization of the Masses (Cornell University Press, 1991), pp.2-3
  24. Mosse, The Nationalization of the Masses, p.9
  25. Quoted in Giorgio Caravale, ‘“A Mutual Admiration Society”: The Intellectual Friendships and the Origins of George Mosse’s Connection to Italy’ in George L. Mosse’s Italy: Interpretation, Reception and Cultural Heritage, ed. Lorenzo Benadusi and Giorgio Caravale (Palgrave Macmillan, 2014), p.65
  26. George L. Mosse, Masses and Man: Nationalist and Fascist Perceptions of Reality (Wayne State University Press, 1987), p.6
  27. George L. Mosse, Nationalism and Sexuality: Respectability and Abnormal Sexuality in Modern Europe (Howard Fertig, 1997), p.9
  28. George L. Mosse, Toward the Final Solution: A History of European Racism (Howard Fertig, 1997), pp.xxviii-xxix
  29. Mosse, Nationalism and Sexuality, p.191
  30. Mosse, Confronting History, p.180
  31. Mosse, The Crisis of German Ideology, p.x
  32. George L. Mosse, Fallen Soldiers: Reshaping the Memory of the World Wars (Oxford University Press, 1990), p.3
  33. ‘The Fascination of Fascism: A Concluding Interview with Roger Griffin’ in A Fascist Century: Essays by Roger Griffin, ed. Matthew Feldman (Palgrave Macmillan, 2008), p.214
  34. Quoted in Karel Plessini, The Perils of Normalcy: George L. Mosse and the Remaking of Cultural History (University of Wisconsin Press, 2014), p.207


Slaughter Commissions: Crime Films and the Italian Crisis

Mario Imperoli’s 1976 film Like Rabid Dogs opens with a brutal armed robbery that takes place in the bowels of Rome’s Stadio Olimpico in the middle of a football match. The footage used to frame this scene is taken from a real game: the ’76 play-off between Lazio and Sampdoria, a relegation fixture that ended in a 1-1 draw. As the film historian Fabio Melelli noted in a recent interview about the movie, this was the first Italian football match to be patrolled by police with German shepherd dogs, largely due to the fear of hostile pitch invasions by rival fans. On the field that day was Lazio midfielder Luciano Re Cecconi, who would be shot dead one year later by the owner of a jewellery store in Rome, a victim of “the same violence we see in the movie,” as Melelli puts it. Re Cecconi’s blonde mop is briefly captured on film, as are the police dogs circling the perimeter of the stadium, and with these details on screen the immediacy of the moment becomes palpable. Reality and fiction overlap as Imperoli disrupts the boundaries that normally separate them: you can feel a sense of febrile tension and arbitrary violence burning off the screen as the football match and robbery progress. To add to this overlap, Imperoli’s story of three middle class students who get their kicks from robbery, rape and murder echoes a case that had shocked Italy the year before the film was made; the lurid nature of those crimes is reflected in the gratuitous set pieces and sleazy atmosphere of the film itself. Like Rabid Dogs is the product of an imploding society; it depicts it, represents it and is part of it.

Due to budgetary limitations and aesthetic decisions, the Italian poliziotteschi movies of the 1970s are closer to period reality than most genre films made before or since. The recent Arrow box set Years of Lead — which includes a pristine restoration of Like Rabid Dogs — makes this explicit by directly linking five disparate crime movies to the crisis of the First Republic that opened with the Hot Autumn of 1969 and climaxed with the murder of Aldo Moro. This linkage is completely appropriate and it is also smart marketing. Not only are these films intensely parochial, they are also raw expressions of national dislocation, anxiety and violence; this is their strength and the reason for their enduring appeal. In an essay commissioned by Arrow, Troy Howarth roots the poliziotteschi in the 1960s crime films of Carlo Lizzani, a former assistant to Roberto Rossellini, and even if the links between the poliziotteschi and neorealism are not always obvious or even defining, they do share certain practical techniques and aesthetic sensibilities that enhance the overlap between fiction and reality. Imperoli’s appropriation of real football footage is one example of this, and the habit of filming car chases without permission in the middle of live traffic in large Italian cities is another. Like Rabid Dogs is filled with spectacular and completely illegal chases, partly because Imperoli was able to hire one of the top specialist stunt drivers working in Italy at the time, Sergio Mioni. In the same year, Ruggero Deodato also shot an infamous chase sequence in the middle of the Roman rush hour for his own poliziottesco Live Like a Cop, Die Like a Man. To do this he had to film quickly and recklessly before the traffic cops could mobilise, but the result was visceral and unhinged, crackling with real danger and adrenaline. Like Lizzani, Deodato cut his teeth working as an assistant for Rossellini, and he would later combine the tricks of neorealism with the excesses of the mondo documentary to explosive effect in Cannibal Holocaust. The blur between reality and fiction that Deodato perfected in these films and Imperoli exploited in Like Rabid Dogs was key to the success and impact of the poliziotteschi, and as a result you can still learn a lot about the death spiral of the First Republic by watching them.

So, for example, Vittorio Salerno’s 1975 Savage Three takes some basic cues from A Clockwork Orange, as Imperoli does in Like Rabid Dogs, but both films strip away the literary pretensions and replace them with parochial concerns and pathologies. The location of Salerno’s movie is key: set in Turin, the city symbolises Italy’s industrial salvation and decline, from the Economic Miracle of the 1950s to the crisis of the 1970s. Turin always had this role, and all the major exhibitions held to commemorate the centenary of Italian unification in 1961 had taken place in the city of Fiat, an icon of the nation’s consumer and export boom. For the same reason, Fiat factories saw the most important strikes of 1969 and Fiat’s middle managers became prime targets for terrorist kneecappings and assassinations throughout the 1970s. So the city also provided a symbol for Italian disunity — or, as Emilio Gentile described it, the fragility of the myth of the nation. Turin had been the first capital of Italy and unification had been led by the Piedmontese: their King became the King of Italy and their army imposed the laws and institutions of Piedmont on the rest of the country. The south, in particular, rebelled against ‘Piedmontization’ from the beginning and this historic resentment was exacerbated by mass immigration to the industrial north during the postwar boom: southern immigrants felt out of place and openly despised there, human cattle filling the factories of a northern oligarchy. Turin — the historic capital of Italian unification and home of the largest company in Italy — was packed tight with highly combustible human material, its imperious and fading streets and piazzas providing a primary platform for slaughter.

In Savage Three, Turin is the stage for an outburst of nihilistic violence perpetrated by a gang of three friends who work in technical jobs at a computer processing plant on the outskirts of the city; Ovidio Mainardi (Joe Dallesandro) is their unofficial leader and a slowly unravelling psychopath who instigates an escalating cycle of destruction. Pepe (Guido de Carle) is from the south and lives with his extended family in a cramped apartment; like the protagonists in Visconti’s Rocco and his Brothers he is dangerously alienated and adrift in this new urban environment, and like the Sicilians and Calabresi who joined the assembly lines of Fiat’s Mirafiori plant he is treated as a foreign imposter, mocked even by his friends. In fact, in Salerno’s movie, Turin is awash with anti-southern prejudice and racism: the brutal crimes the gang commit are immediately blamed on southern immigrants, even though only Pepe is a southerner and he is the least savage of the three. Southerners are considered an inferior and ethnically polluted race prone to irrational violence, an attitude hardwired into the city by one hundred years of codified racism. When the gang murder a pimp and a prostitute they leave the bodies hanging from the scaffolding surrounding the Monumento al Conte Verde; the corpses are discovered by Inspector Santagà (Enrico Maria Salerno), a southerner demoted for violent conduct who ironically describes the gruesome tableau to his northern colleague as “a defamation and offence to the sacred legacy of the nation.” Santagà is alert to the racism of his colleagues and their desperation to ascribe ideological motivations to the offences: the crime wave is, for them, a result of an influx of violent aliens and the political disintegration of the nation. Santagà is shrewder than this and quickly sees the crimes for what they are: not the result of genetics or beliefs, but blank expressions of boredom, disaffection and alienation, common reactions to modern urban conditions taken to an extreme point.

In Savage Three, Turin looks like a city that has been defeated: the football stadium, the streets and the buildings all visibly broken, exhausted, unclean, crumbling. This is not the Turin celebrated for its elegance and ceremony, but a 1970s urban sprawl which Salerno depicts as a human trap: citizens crowded into constricted living and working spaces that create the conditions for tension, intolerance and murder. And this is not just the city: everything looks depressed and drained, in line with the common visual tone of the poliziotteschi. In these films, the sky is invariably blank and grey, a featureless ceiling of drab cloud that creates a simultaneously flat and oppressive atmosphere; the saturated colours of the gialli and the seductive monochrome of neorealism are replaced by an ugly, washed-out tonal range. The poliziotteschi city is a bleak, malignant place, where all human relations are negative and most human interactions are violent. Turin has unique divisions caused by southern immigration and its racist reaction, but the general malaise is shared across the country. Fernando di Leo’s Milan is not defined by beautiful arcades or the Duomo di Milano, but by rotting ghettos that run alongside the canals, grey rain-swept piazza tiles and warehouses where corpses wait a long time to be discovered. Umberto Lenzi’s Rome is not the Rome of romantic weekends and ancient ruins, but a dirty and congested labyrinth of squalid flats, dank alleys, damp staircases, bleak wasteland and scrappy tenement rooftops, settings for various acts of torture, rape and murder. Enzo G. Castellari’s Genoa is perhaps the most desperate of them all: a terrifying, feral, decadent city festering on the edge of the Ligurian sea, rotten with vice and corruption and collapsing in on itself like the face of a syphilitic. The fate of Italy’s cities in the 1970s is encapsulated by some of the most famous crime titles: Fernando di Leo’s Milano Calibro 9 (1972), Marino Girolami’s Violent Rome (1975), Lenzi’s Gang War in Milan (1973), Rome Armed to the Teeth (1976) and Violent Naples (1976). The cities are the focal point of Italy’s crisis: the piazzas and train stations are bombed and the banks are robbed, while criminal gangs and rogue policemen rule the streets and citizens are terrorised by random acts of slaughter. Perhaps the definitive Italian crime title is the one that Sergio Sollima gave his 1970 hit man melodrama Città violenta, even though that film is set in New Orleans. The cities are violent places that condition their inhabitants to commit more violence: an endless, animalistic cycle that Salerno depicts so well in Savage Three.

Lucio Fulci once declared that “violence is Italian art” and the art of the poliziotteschi is a very Italian aestheticization of violence. Even the ugliness has style. This is why these films always tread a fine line between exploitation and art, nihilism and responsibility. The films of Imperoli and Salerno are perfect examples of this: they can be hard to watch, but they are impossible to dismiss. The violence is extreme, but it makes its point precisely because of that. The question of motive is always present — but it remains a question without a satisfactory answer, or any answer at all, and this is what makes these films products of the Italian crisis. In Savage Three, Santagà is the only one who realises that there is no reason behind the brutal acts instigated by Mainardi, that they are simply a result of the unnatural and damaging conditions of contemporary Turin. In Massimo Dallamano’s Colt 38 Special Squad (1976) the motivation for the crime is key, but also ambiguous. At first the operation planned by the Marseillaise (Ivan Rassimov) seems like a simple case of common theft, albeit conducted on a grand scale, but as Rachael Nisbet shrewdly points out in her essay accompanying the Years of Lead set, there is something more sinister at work:

While the motivations of the common criminal are clear, the Marseillaise is a more complex individual; a man driven by a certain sense of madness and anarchic glee…his perverse sadism and disregard for life suggest that his actions are motivated by something far darker than the driving factors of vengeance and retribution…

Mainardi and the Marseillaise are similar in their psychopathic nihilism, but the Marseillaise’s plan is more gratuitously grandiose and cruel than the small and terminal spasm of violence unleashed by Mainardi. He wants to inflict a grievous wound on Turin and live to see it; the motivation of wealth does not adequately explain this instinct. This is more than simply a plot point, or an absence of one: it taps into the atmosphere of uncertainty that defined the atrocities of this period, and was never adequately resolved.

Italian political culture is so convoluted and secretive that when Henry Kissinger claimed that he did not understand Italian politics in the 1970s nobody thought he was joking. In 1988, the Italian state set up the Commissione Stragi (‘The Slaughter Commission’) tasked with uncovering the truth behind the bombings that occurred between 1969 and 1988, which had been variously attributed to the Red Brigades, anarchists, neofascists and the CIA; predictably, it failed to remove the opacity of the period or provide any real psychological closure. The scenes of carnage that follow the bombings of Turin railway station and market in Colt 38 Special Squad stand out for their graphic brutality and emotional poignancy. Dallamano shot the film only seven years after the Piazza Fontana bomb and these scenes pierce his own movie like unmediated expressions of national post-traumatic stress: they are not shot for thrills, but carry a lot of psychological weight and evoke still raw memories. What can’t be answered in the film, and what could never be effectively answered in real life, is the reason for all of this destruction and cruelty: the Slaughter Commission could not solve it and Dallamano refuses to provide a neat conclusion in the character of the Marseillaise. The ambiguity of the violence and the veil that fell over the most extreme events that ripped through Italy in the 1970s are a central part of the poliziotteschi world view: if nobody can fully account for this violence, then perhaps it is simply endemic to the fragile and conflicted Italian nation, something that can never be solved.

But Colt 38 Special Squad also fulfils another function common to these films: it provides a psychological outlet, a form of mass catharsis. The volatility and opacity of Italian society was increased by the growth of organised crime, black markets and corruption in a free-falling economy. The Italian crime films give voice to the pervasive feeling that the Italian authorities had fallen behind the criminal gangs, if they were not already hopelessly compromised by them. Fear and helplessness combined with anger; the films responded with bleak cynicism and tales of rough justice. Quite often the stories revolve around detectives or special units within the police force going rogue, using unorthodox and even illegal methods in order to catch or destroy criminal antagonists. The Special Squad in Dallamano’s film is put together to crush the crime wave that has tipped Turin into near anarchy; they are given licence to use Colt 38 revolvers and pursue their targets with freedom that stops short of murder, or is supposed to. Only by going beyond the bounds of their authority do they finally stop the Marseillaise. Deodato’s Live Like a Cop, Die Like a Man depicts a two-man ‘special unit’ who act outside the law in order to enforce the law, in this case with a cavalier brutality that outdoes some of the criminals, gleefully crossing numerous moral and ethical lines. Stelvio Massi’s 1977 Highway Racer (another film in the Arrow box) pays homage to Armando Spatafora, a member of Rome’s Squadra Mobile whose 1960s exploits were legendary; the exceptional car chases choreographed by Remy Julienne for Massi played on and added to this legend, even recreating the mythical day Spatafora chased a crook down the Spanish Steps of the Trinità dei Monti. In the film the young Squadra Mobile driver Marco Palma (Maurizio Merli) soups up a boxy Alfa Romeo and demands a Ferrari from his boss, the Spatafora doppelganger Tagliaferri (Giancarlo Sbragia), because “I don’t want to lose before I begin”: retaking the advantage is key, and requires special measures. Merli excelled at playing rogue cops like Inspector Tanzi in Umberto Lenzi’s frantic and stylish films Rome Armed to the Teeth and The Cynic, the Rat and the Fist (1977), characters who pushed their job description beyond law enforcement into the realms of vigilantism. These were cops who had not been given any special licence, but had simply gone off the rails, driven to rage and despair by their inability to deliver justice. By the 1970s, many Italians had lost all faith in the police as an effective force: they were seen as corrupt and incompetent, outspent and outgunned by gangs, syndicates and even lone psychopaths. The poliziotteschi, with their special squads and vigilantes, gave Italians an alternative reality in which this imbalance was finally redressed and Italian society redeemed, however morally questionable that redemption really was.

But these films also provided a troubling critique, quite often without any release or redemption. Italy in the 1970s was paralysed by corruption, and this was more insidious than terrorism or crime because it undermined all sense of security, all trust in state and local authority and, ultimately, any trust in the reality of appearances. Vittorio Salerno’s 1973 film No, the Case is Happily Resolved is the one entry in the Arrow set that focuses on this total breakdown in trust, with a narrative so pure in its inexorable and dire logic that it almost presents a parable for the Italian crisis. The corruption in this case is psychological and sexual: at the opening of the film Professor Eduardo Ranieri (Riccardo Cucciolla), a quiet and respectable maths and physics teacher, brutally beats a Roman prostitute to death in a reed bed outside the city. Fabio Santamaria (Enzo Cerusico), a young working-class man who witnesses the murder, eventually finds himself framed, arrested and jailed for the crime after a relentless series of mistakes and misjudgments that recall the fate of Richard Blaney in Hitchcock’s Frenzy (1972). Santamaria makes the fatal decision not to report the crime and to try to cover up his presence at the scene, largely because he distrusts and even fears the police. This instinct proves correct: the police believe Ranieri because of his position and his appearance, while jailing Santamaria after only a cursory investigation. Murderous and possibly perverse impulses are veiled by the professor’s exterior presentation, and this is compounded by an official corruption that renders the police ineffective and even dangerous to defenceless citizens. All the authority figures in this film either conceal their true identity or cannot be trusted to do their jobs safely: all certainty is therefore undermined and Italian society is shown to be based on deception and illusion, a place where reality is inverted and nobody is secure.

The key to the success and durability of the Italian crime films of the 1970s is that they did not just reflect their own society, but made an active contribution to the atmosphere and dynamics of the Italian crisis. The poliziotteschi cycle played out in the shadow of real and relentless atrocities committed by political and criminal groups and state actors; the films did not always or necessarily comment on any of this, but political violence and social degeneration were part of their visual and thematic fabric. The Italian peninsula was tense, fragile and paranoid, its social and moral fabric torn by the rise of cities, consumerism and the cultural influence of America; as Emilio Gentile noted, this occurred in the ruins of the myth of the nation, at a time when “Italians were experiencing a mass anthropological revolution in their attitudes and behaviours” and the state was being contested by the existential opposition of Catholicism and Communism. In this context, popular culture had a crucial role to play in shaping narratives, ideas and emotions on a mass scale: Italian artists, steeped in visual culture and speaking the language of Mussolini and Gramsci, seemed to instinctively grasp this. This is not to assign responsibility for any specific events to these films, but to accord them their due social and cultural power, even at the low level of the regional cinema circuit. It so happened that the Italian crime films of the 1970s had a more profound and immediate relationship with their world than any of the more fêted artistic products of that time. The importance of Arrow’s Years of Lead set is that it recognises this and therefore values the films at their true worth.


‘Everyone Middle East is here’: The British in Iraq

The real difficulty here is that we don’t know exactly what we intend to do in this country. Can you persuade people to take your side when you are not sure in the end whether you’ll be there to take theirs? No wonder they hesitate; and it would take a good deal of potent persuasion to make them think that your side and theirs are compatible. 
Gertrude Bell, Basra, 1916

Britain is a high-maintenance ally.
I. Lewis ‘Scooter’ Libby, Chief of Staff to the Vice President of the United States, 2003

I: The Cairo Conference

When T. E. Lawrence met the Iraqi delegation of Gertrude Bell, Sir Percy Cox, Jafar Pasha al-Askari and Sasun Effendi Eskail at Cairo train station on March 11th, 1921, he might have expected a frosty reception. Bell, in particular, had been furious with Lawrence for his harsh criticism of the post-war British occupation of Mesopotamia under Arnold T. Wilson. “Things have been far worse than we have been told,” Lawrence wrote in The Sunday Times, “our administration more bloody and inefficient than the public knows” (1). “Tosh…pure nonsense,” Bell had responded defensively, but by the time they met again in Cairo this disagreement had faded into insignificance. They had important work to do. “Dear boy,” Bell warmly greeted her old Orientalist comrade. “Gertie,” Lawrence replied, scanning the colonial delegates stepping off the train, “everyone Middle East is here” (2). “We arrived yesterday,” wrote Bell in her diary, “T. E. Lawrence and others met us at the station — I was glad to see him. We retired at once to my bedroom at the Semiramis and had an hour’s talk…” (3). Here they discussed their joint strategy, in preparation for the conference called by the Colonial Secretary Winston Churchill. Both had the same objective: to convince Churchill to adopt the principle of Arab self-determination in Mesopotamia and appoint Faisal — the third son of Sharif Hussein of Mecca — King of Iraq.

Even when they argued, Lawrence and Bell were in accord. Bell sympathised with Lawrence’s criticisms of Wilson because she basically agreed with them. Wilson was a colonialist who believed that Mesopotamia should be absorbed into the British Empire under the direct authority of the British crown. He was not alone: his staff in Baghdad agreed with him and, following his example, froze Bell out of all the social and professional engagements that they could, dismissing her as a female dilettante and pro-Arab adventurer. The fate of Iraq under the British had been caught between the parochial ambitions of two British institutions: the government in India wanted to annex the Iraqi state and keep control of Arabia, while the British administration in Egypt aimed to incorporate the former Turkish vilayets of Mesopotamia into an Arab Kingdom that Britain would influence but not directly rule. The Cairo Arabists were willing to cede more autonomy to local clients than the Indian government, who reacted with fury to this challenge: “I devoutly hope that this proposed Arab State will fall to pieces, if it is ever created,” Viceroy Hardinge wrote in a letter to the Foreign Office, “It simply means misgovernment, chaos and corruption, since there never can be and never has been consistency among the Arab tribes” (4). This rage was partly due to a sense in India that their power and influence over the tribes had been lost, particularly after the propaganda triumph of the Arab Revolt led by Faisal and orchestrated by Cairo. Bell had been highly influential in the conception and execution of this campaign — as Lawrence acknowledged in a radio interview in 1927 — her geographical, cultural, linguistic and political knowledge of the territory and its tribes providing unrivalled intelligence for British officials.

Both Lawrence and Bell were British imperialists who believed in Arab self-determination under the tutelage of British administrators such as Cox, Bell’s main ally in Baghdad. They blamed Wilson for stoking anti-British sentiment that culminated in a nationalist revolt among the Arab and Kurdish tribes in 1920, and considered his brutal response — bombing villages and Shia shrines, machine-gunning insurgents and burning down their homes — to be counterproductive, or “bloody and inefficient” as Lawrence wrote in The Sunday Times. Wilson’s preference was for colonial subjugation: he did not see any prospect for successful local rule and he did not want it anyway. Wilson was a complex character: after leaving the Middle East and entering parliament he became a vocal supporter of Hitler and Mussolini, but then died in a plane crash near Dunkirk in 1940 after enlisting with the R.A.F. Volunteer Reserve (he was later the model for Sir George Corbett in Powell and Pressburger’s One of Our Aircraft is Missing).

Meanwhile, Lawrence and Bell both worked towards Arab self-rule, but against British withdrawal from Iraq. Although they had outmanoeuvred Wilson and convinced the Indian government of their case, the prospect of abandoning the nascent Iraqi state remained the gravest threat hanging over the Cairo conference. Churchill summoned the key Middle East administrators and Orientalists to help resolve a number of outstanding and interrelated political issues in the British mandates, but he also wanted to save money. The British public, in the midst of post-war depression and strikes, had protested against the cost of pursuing British ambitions in the Middle East, and a lead article in the Times had asked a question that would haunt later occupiers: “how much longer are valuable lives to be sacrificed in the vain endeavour to impose upon the Arab population an elaborate and expensive administration which they never asked for and do not want?” (5) Churchill was himself inclined to pull out of Iraq, leaving only a controlling presence in the strategically critical vilayet of Basra — “I’m going to save you millions,” he crowed to the press. Bell, Lawrence and Cox were aghast at the possibility of abandoning their Arab allies, which they saw as a recipe for territorial disintegration rather than independence.

For Bell the conference was a success because it avoided this outcome, although she didn’t know that the key decisions had already been made in London. As Bell later wrote, somewhat surprised, “Sir Percy and I, coming out with a definite programme, found when we came to open our packet that it coincided exactly” with Churchill’s proposals (6). The Iraqi borders that she had drawn up in Baghdad were confirmed, incorporating the vilayet of Mosul, home of the Kurdish tribes. Churchill had been sceptical of its place in the new Iraqi state and favoured Kurdish independence, but Bell and Cox insisted on its inclusion as a way to counter the imbalance between the urban Sunni minority and the rural Shia majority, and he had finally agreed. Furthermore, the principle of self-determination under her own preferred ruler, Faisal, was affirmed, while his brother Abdullah was chosen to rule Transjordan as a temporary governor. Despite her own modest disavowals, Bell returned to Baghdad as a state builder and a kingmaker. For the rest of her short life, her influence over the creation of modern Iraq was unrivalled and her identity intimately woven into the elite cultural and administrative structures of the new state. Faisal considered her to be an honorary Iraqi. She loved the country and felt responsible for its fate, although by the end of her life she was troubled by the course of its development and the behaviour of the King, her old friend and confidant.

Following the end of their mandate in 1932, the British handed control to Iraq’s parliamentary government, while protecting their own political and business interests in the country they had just created. The result was a cultural and economic boom in the cities of Basra and Baghdad, but also a resentment of the British that permeated all sections of the Iraqi population, even those identified closely with them, like the monarchy, the governments of Nuri al-Said and the Jews of Baghdad. Despite his desire for independence and the end of de facto occupation, al-Said believed that it was in the best interests of Iraq to work with the British rather than to expel them, but this position was compromised by the economic and military priorities of the British themselves, as the terms of the Anglo-Iraqi Treaty of 1930 made clear to all Iraqis. This fed into the atmosphere that led to the 1958 military coup which effectively terminated British colonialism in Iraq. The revolutionary officers and their Baghdad mobs butchered the royal family in their palace, while anybody identified with the ancien régime faced a choice of incarceration or slaughter.

Tamara Chalabi’s family memoir Late for Tea at the Deer Palace paints a vivid picture of the terror and chaos of that day, when families associated with the monarchy and the government fled in desperation to safe houses and, eventually, out of the country. “Whether it was as a direct result of the Suez debacle which Britain had suffered a few years earlier, or a calculated plan to sell out their friends,” Chalabi writes, “it was unclear what lay behind the apparent indifference of the British. There had been a subtle shift of attitude amongst the British in Baghdad, who were clearly aware that propaganda against the monarchy had reached immense proportions within Arab circles” (7). To the horror of aristocratic families like the Chalabis (who fled to London), the British did nothing to stop or reverse the coup, even when their Embassy residence was burnt down and a British staff member was shot dead. They had made their own calculations and abandoned their Iraqi allies with ruthless expediency.

After 1958, British engagement with Iraq was more fractured and distant. During the years of the Ba’ath regime the British tried to intervene in different ways, each ultimately unsuccessful and simply reinforcing the broken link with the Iraqi state that they had created. This essay is about those interventions. It is a tangled tale of exile and opposition, political strategy and foreign policy, invasion and occupation; a story haunted by Britain’s imperial past and the events set in motion by Churchill, Lawrence, Bell, Cox and Wilson.

II: Someone must keep track of these things

On March 18th 2003, the day that Parliament voted to approve the invasion of Iraq, Ann Clwyd wrote a combative op-ed for the Times that made graphic claims about the methods of torture and execution being used in Saddam’s prisons (8). This included details of screaming prisoners being fed into huge plastic shredders, a claim that was dismissed as simplistic propaganda by opponents of the war but which caught the attention of the U.S. Deputy Secretary of Defence, Paul Wolfowitz. There followed a unique tête-à-tête in Washington between the Labour member for the Cynon Valley and the neoconservative Pentagon civilian that typified the complexities and controversies of the 2003 intervention.

It turned out that they had more in common than she had expected. “Surprisingly, we hit it off,” Clwyd recalled, “it emerged that he had a long-standing association with human rights issues, dating back to his time as ambassador to Indonesia” (9). She was not the only person that Wolfowitz surprised in this way, but Clwyd also had a tendency to surprise people. Her support for intervention in Iraq did not go down well with former friends, allies and colleagues. “At Swansea, while attending a Welsh Labour conference, I was openly jeered and called ‘Bradwr’, ‘Traitor’. Somehow it sounded so much worse in Welsh” (10). Clwyd’s pragmatic approach to politics won her diverse allies, from Wolfowitz on Iraq to Jeremy Corbyn on East Timor. “In general, when weighing up whether or not to lend my support to causes and campaigns, I have learnt to make decisions based on what I believe is the right thing to do rather than the personalities involved” (11), she wrote in her 2017 memoir Rebel With a Cause. The cause of human rights in Iraq eventually left her isolated within her own party.

During their meeting Wolfowitz asked Clwyd who she thought would be the best candidate for the role of interim Iraqi president. She had acquired many contacts and friends over decades working with the Iraqi opposition, particularly within Ahmad Chalabi’s Iraqi National Congress (INC), Jalal Talabani’s Patriotic Union of Kurdistan (PUK) and Masoud Barzani’s Kurdish Democratic Party (KDP), but her choice was unequivocal: 

I spoke strongly in support of Jalal Talabani, since I felt he had the ability to draw the factions together, in other words he would provide the glue. At that point Donald Rumsfeld came into the room, and Wolfowitz said, “Tell him what you just told me.” We discussed why I was against some of the names he had mentioned and why Talabani would be such a strong candidate, and I was delighted when Talabani did indeed land the job as interim President of Iraq. (12)

Clwyd had credibility with the Iraqis and her opinions were taken seriously in London and Washington. During the years of Saddam’s genocidal Anfal campaign she had publicised the plight of the Kurds in parliament and the media, while the British government tried to consolidate their own covert trading relationship with the Ba’ath regime. Following the Gulf War she travelled into the Zagros mountain range to meet Kurdish refugees, taking personal and political risks in order to witness and record their suffering. She was instrumental in lobbying for no-fly zones and could take some credit for the designation of Iraqi Kurdistan as a safe haven. During the 1996 Kurdish civil war, she facilitated a delicate prisoner exchange between the PUK and KDP, negotiating an agreement between Talabani and Barzani that foreshadowed their eventual political settlement.

In her memoir Clwyd is vague about the details of the intra-Kurdish conflict but it was a key moment, not simply for the political course of Iraqi Kurdistan but for the Iraqi opposition as a whole. The civil war had its origins in the INC-Kurdish military offensive of March 1995, an operation that drove a wedge between the Kurdish parties after the KDP pulled out at the final moment, while the PUK and INC pushed on to crushing defeat at the hands of the Republican Guards. In the factional scramble that followed the KDP sought protection from Saddam, a desperate and catastrophic move that invited the reinvasion of the northern safe haven by Iraqi troops in August 1996. Saddam used this opportunity to wipe out the INC: Republican Guard divisions occupied Erbil, executed all the INC personnel they could find and seized their computers. The CIA airlifted the remaining INC members to Guam and evacuated Iraq, leaving the Kurds to their fate. Clwyd was present in northern Iraq when the INC was crushed in Erbil, taking desperate calls from Ahmad Chalabi, who implored her to “tell the world” (13) what was happening, which she did (or tried to). “It was a tragedy for us,” Chalabi later reflected, “it was a devastating blow” (14).

This outcome seemed to confirm Churchill’s early misgivings about Bell’s plan to incorporate the Kurds into the Iraqi state: he had wanted to establish an independent Kurdish home in the north “to protect the Kurds from some future bully in Iraq” but had been overruled by the Foreign Office (15). However, by the 1990s, the Kurds carried the hopes of a democratic and unified post-Saddam Iraq on their shoulders, an outcome the PUK was far more committed to than the KDP. For this reason, the Kurdish collapse into open war and Saddam’s cunning exploitation of the division between the Kurdish parties were a bitter blow for their supporters, and Clwyd did not try to hide her own despair. “I pulled no punches in my conversations, even using emotional blackmail to remind them of my long-standing commitment to the Kurdish cause,” she writes, “I was blunt in telling them of the pain they were now causing their own people” (16). It was a low point for the exiles and their allies. The northern safe haven had been penetrated by Saddam’s army while British and American warplanes watched and did nothing. More importantly, the linchpin of the opposition and the supplier of its most effective existing fighting units, the Kurdish Peshmerga, had disintegrated as a united force. But for Chalabi — always the most resourceful and ruthless of the Iraqi exiles — this was only the beginning of a new, and more successful, phase in the war against Saddam.

To understand the origins of Clwyd’s role in all of this, it is crucial to grasp Britain’s status as a haven for Iraqi exiles. After 1958 a number of surviving members of the constitutional government fled to London, among them Chalabi’s father Hadi, who had been head of the Iraqi senate before the coup. They would later be joined by émigrés from successive purges until, by the end of the 1980s, London hosted a large and prestigious community of Iraqi refugees. This was partly explained by the close links that remained between the British state and the old Iraqi elites. These ancien régime survivors retained their anglophone sensibilities and a powerful nostalgia for pre-coup Baghdad, having benefited from the prosperous and liberal society of the Hashemites and their British sponsors; for some later additions, there were MI6 links. Then, during the years of the Ba’ath, the genteel elegance of their lives in Mayfair, Bloomsbury and Surrey had been overshadowed by the threat of Saddam’s assassins: famously, Chalabi’s rival Iyad Allawi of the Iraqi National Accord had been attacked by an axe-wielding intruder in his Kingston-Upon-Thames home and left for dead by the side of his swimming pool. The escalating brutality and depravity of the Ba’ath regime, as well as Saddam’s willingness to murder his enemies on foreign soil, focussed minds among the opposition groups, although they remained too fractious and scattered to pose any real threat to the dictator.

London remained a key base for Chalabi, who kept his Knightsbridge office as an operational hub for the INC until the eve of the 2003 invasion. The Chalabis were, in many ways, the epitome of the elite Iraqi exile families: wealthy, cultured, well-connected and determined to return to Iraq to reclaim their stolen inheritance and identity. While he shuttled between Washington and Tehran for the INC, Chalabi kept his family in an opulent Mayfair apartment overlooking Green Park, replete with silk Persian rugs, European and Middle Eastern paintings and a resident maid. When he relocated to northern Iraq in the 1990s, he recreated this refined environment in his house in Salahuddin, decorating it with local sculptures and paintings, hardwood and walnut furniture carved in the style of Frank Lloyd Wright, a state-of-the-art stereo system and a library of imported cookery books from which he taught his Kurdish cooks French cuisine. One of his favourite relaxations in Kurdistan was to rewatch videos of his favourite TV programme: the 1981 Granada production of Brideshead Revisited, a work with great personal and atmospheric resonance for him. “Fighting Saddam does not mean you have to eat bad food or live in shabby surroundings,” he insisted (17).

Clwyd had worked closely with Chalabi as chair of the Campaign Against Repression and for Democratic Rights in Iraq (CARDRI) and its successor INDICT, in the effort to archive evidence of Saddam’s crimes against humanity. In her memoir, Clwyd recalls the time Chalabi visited her in Cardiff on INDICT business and requested a visit to the National Museum of Wales to see the Davies Sisters’ celebrated collection of Impressionist paintings: “as we walked past the renowned Brangwyn Panels, Ahmad pointed at them and said casually: ‘I bid for those at auction’” (18). For Chalabi, European culture was simply part of the human inheritance that he absorbed without compromising his Iraqi or Shia identity. For his allies, he was the living embodiment of the possibility of a new Iraq, even a new Middle East: proof that a viable alternative to the totalitarianism of the Ba’ath regime or the medieval recidivism of Shia and Sunni fundamentalism actually existed. But for Clwyd’s comrades on the Labour left this was suspicious company, a feeling only confirmed by her subsequent meeting with Wolfowitz and Rumsfeld. Even before the Iraq war, INDICT had received negative coverage or been treated with passive indifference, a fact that dismayed Clwyd, who wrote: “we knew people in Iraq were being tortured and killed in large numbers and could not understand why others did not feel the same sense of outrage” (19). This feeling of anger and futility in the face of ongoing barbarity culminated in her stark declaration during the February 2003 parliamentary debate on UNSCR 1441: “I believe in regime change. I say that without hesitation, and I will support the government tonight because I think that they are doing a brave thing” (20). On the subject of Iraq, Clwyd had become Blair’s most committed ally.

There was a postscript for the Cynon Valley MP that capped all of the work she had done with the Iraqi opposition since the 1970s. In May 2003, a grateful Blair appointed her Special Envoy on Human Rights to Iraq and in this role she witnessed the excavation of the mass graves of Shia victims of Saddam in Al-Hillah. In later years evidence collected by INDICT was finally put to use in the Baghdad trials of Saddam and his captive inner circle, while the Free Prisoners Association, founded by widows of prisoners who had died in Saddam’s jails, would vindicate the work of CARDRI and INDICT and engage Clwyd long after the overthrow of the regime. In her 2017 memoir she summed up this work with a statement that she could have easily made in 1988: “I still keep an eye on the plight of the widows…even though there are other enormous geopolitical challenges and crises, someone must keep track of these things” (21). 

III: A leader of nations or nothing 

“Ann Clwyd was absolutely terrific and we needed to hear more of her” wrote Alastair Campbell in his diary following the February vote, which otherwise delivered a backbench rebuke to the Prime Minister (22). Iraq proved to be the definitive test of a foreign policy doctrine Blair had originally formulated against a background of amoral Conservative realpolitik that had reached a low point during the Bosnian war. The Prime Minister presented his ideas in full at the Economic Club of Chicago in April 1999 with a speech titled ‘The Doctrine of the International Community’ — a text partly written by KCL Professor of War Studies Lawrence Freedman, who would later be appointed to the Iraq Inquiry by Sir John Chilcot. The speech was delivered in the shadow of Kosovo, but the implications were very different for Iraq. Blair, in his statement to parliament on 18th March 2003, tried his best to separate them: “I have never put the justification for action as regime change. We have to act within the terms set out in Resolution 1441 — that is our legal base” (23). Nevertheless, the moral argument for regime change did come within the scope of the Chicago speech and was effectively covered for Blair by Clwyd. This is why she became important to him, particularly once the WMD argument had disintegrated.

To trace the arc of this journey in British politics it is useful to return to the Conservative Party’s policy towards Iraq in the late 1980s. This policy had its roots in the Import, Export and Customs Powers (Defence) Act of 1939, an emergency war measure that gave governments the power to control import and export licences, and proved far too useful to be repealed. In 1984, midway through the Iran-Iraq War, Foreign Secretary Geoffrey Howe had set out guidelines that prohibited exports that would “significantly enhance the capability of either side to prolong or exacerbate the conflict” (24), but by the end of the war he had secretly revised his own guidelines in order to establish “a more liberal policy” towards exports to Iraq’s lucrative weapons market. Parliament was not informed (as it did not need to be under the existing 1939 act) because, as Howe recognised, “it could look very cynical if, soon after expressing outrage about the treatment of the Kurds, we adopt a more flexible approach to arms sales” (25). The “treatment of the Kurds” referred to by the British Foreign Secretary was the chemical weapons attack on Halabja, which had been dismissed to Clwyd’s face by FCO Minister William Waldegrave, who told her, “‘There is no proof.’” (“I told him bluntly to get it,” she writes, 26). Clwyd recalls with disgust seeing photographs of FCO Minister David Mellor shaking hands with members of the Iraqi regime at the Baghdad trade fair months after Halabja. Waldegrave succinctly clarified the cynicism of government policy in the aftermath of the execution of Observer journalist Farzad Bazoft in Abu Ghraib prison: “the priority of Iraq in our policy should be very high: in commercial terms, comparable to South Africa…a few more Bazofts or another bout of internal repression would make this more difficult” (27).

From this point on the government relaxed restrictions on exports to Iraq that were clearly, if not openly, destined for Saddam’s weapons programmes. Trade Minister Alan Clark was accused of advising companies on how to tailor export licences to satisfy official guidelines and approving contracts on a “nod and wink” basis. When Customs took Matrix Churchill to court after intercepting a consignment of dual use machine tools destined for Iraq, the government used all the legal methods at its disposal to disguise complicity in the case. The trial finally collapsed when Clark admitted to being “economical…with the actualité” concerning his advice to Matrix Churchill and other implicated firms: “there was nothing misleading or dishonest to make a formal introductory comment that the Iraqis would be using current orders for general engineering purposes. All I didn’t say was, ‘and for making munitions…’” (28). Clwyd, one of the few opposition MPs who doggedly pursued the case in parliament, noted that Clark “summed up the arrogance of ministers when he dared to suggest that Labour MPs were a bit dim not to understand that, from 1987 onwards, they were being misled by the government and that we should have known Britain was selling arms to Iraq. We did suspect it, we did try to expose it, but he and his colleagues lied to cover it up” (29).

Blair would later be accused of many things, but arming Iraq was not among them. He was, it is fair to say, one of the more vigilant disarmers of Saddam — or, at least, he would have been, if Saddam had actually had any WMDs left to destroy. He was worrying about the dictator’s weapons procurement as early as 1997, telling Liberal Democrat leader Paddy Ashdown,

I have seen some of the stuff on this. It really is pretty scary. He is very close to some appalling weapons of mass destruction. I don’t understand why the French and others don’t understand this. We cannot let him get away with it. (30)

Operation Desert Fox was launched in response to Saddam’s expulsion of UNSCOM inspectors in 1998 and set a number of precedents for Blair. In his own memoir he dismissed it as “a limited success…the general feeling was that Saddam had got away with it again” (31), but the operation was significant for different reasons. It was Blair’s first military intervention and demonstrated that he was prepared to confront Saddam directly. It also provided, in miniature, a model for the run up to the 2003 war. The reason for bombing Iraq was WMD rather than human rights, but Blair did not even try to disguise his contempt for the dictator and his regime. Ignoring the advice of Foreign Secretary Robin Cook, he did not seek further UN authorisation, acting instead on separate legal advice that Resolution 687 provided sufficient cover for action. To prepare public opinion for the bombing campaign Blair and Campbell produced a briefing note for MPs titled Iraq’s Weapons of Mass Destruction, a tactic they would revive to disastrous effect in 2003. Chirac, already pursuing French commercial interests and seeking the end of sanctions, was furiously opposed, as he would be later on. And, in his speeches to parliament, Blair was uncompromising, refining the rhetoric he would later recycle and expand:

Our quarrel is not with the Iraqi people…our quarrel is with [Saddam] alone and the evil regime he represents. There is no realistic alternative to military force. We are taking military action with real regret, but also with real determination. We have exhausted all other avenues. We act because we must. (32)

The American alliance was a priority for Blair in 1998 and 2003, and it would likely have weighed just as heavily on any Tory administration in power at the time. Nevertheless, Blair’s policy towards Iraq directly repudiated the cynicism of the Thatcher and Major administrations. Characters that appeared in the Scott Report into Arms to Iraq — like Howe, Douglas Hurd and Malcolm Rifkind — also played a role in the sacrifice of Bosnia to geopolitical calculation: again, the practical result was British complicity in the cover-up of genocide. Blair’s Chicago speech was an attempt to define an ethic of intervention that would bury this school of realism forever. It had been morally repugnant but also strategically damaging: Major’s Balkans policy had led to a fracture with Clinton, who detested Britain’s pro-Serb stance. Blair would eventually irritate Clinton for the opposite reason: the Chicago speech was delivered at the precise moment Blair was pushing for military intervention against Milosevic and Clinton was vacillating. From this perspective it looked like a very public form of moral blackmail. In fact, Blair’s speech was shaped by the immediate political priorities of Kosovo, but its scope was far broader. In some ways it suffered from being too broad — while superficially detailed, the doctrine sketched out was largely untested and full of holes, qualifications and contradictions. But for Blair personally, it was a key political moment.

Blair’s first significant foreign policy speech was in some ways as ambitious as Chicago: in Manchester in April 1997 he had called for the eastwards expansion of the EU, reform of the UN, and a larger British role in NATO, proclaiming, “we are a leader of nations or nothing” (33). Two years and two military campaigns later, he was now attempting to define a new foreign policy ethic and era, not just for Britain but for the entire world. “We have to establish a new framework,” he declared: 

No longer is our existence as states under threat. Now our actions are guided by a more subtle blend of mutual self interest and moral purpose in defending the values we cherish. In the end values and interests merge. If we can establish and spread the values of liberty, the rule of law, human rights and an open society then that is in our national interests too. The spread of our values makes us safer. (34) 

Blair began with the premise that the age of power politics had been superseded by globalisation: the Chicago speech was, in part, a funeral oration for the Westphalian system of state sovereignty and non-interference. Underpinning this was the assumption that Western liberal democratic values are universal values that transcend religion, culture, tradition and ethnicity. In this sense, Blair’s speech was not only against protectionism and isolationism, but also against relativism. This is the point at which Blair’s version of humanitarian intervention overlapped with the moral interventionism of the American neoconservatives. Paul Wolfowitz made the same argument in a 2005 interview, as Iraq was unravelling:

The contradiction is to say that allowing people to choose their government freely is to impose our ideas on them. There was a wonderful moment at a conference here in Washington where someone said it’s arrogant of us to impose our values on the Arab world, and an Arab got up and said it’s arrogant of you to say these are your values because they are universal values. (35)

In many ways the Iraq War was an application of the principles outlined in the Chicago speech, and they did not survive the experience intact. Conservative critics of Blair’s doctrine proved to be correct in one crucial respect: the abstract principles of human rights and democracy could not account for or contain religious, national and ethnic rivalries. In 2003, specifically because of Iraq, the international community was so fractured that it barely existed, as Russia, Germany and France positioned themselves in hostile opposition to Britain and America. Later on, the disintegration of Iraq would expose the limits of both American neoconservatism and liberal interventionism by opening a new era of raw power politics stripped of all progressive and democratic illusions.

IV: The theatre of power

In the introduction to his 1964 account of his travels among the Marsh Arabs of Southern Iraq, Wilfred Thesiger made a gloomy prediction: “soon the Marshes will probably be drained; when this happens, a way of life that has lasted for thousands of years will disappear” (36). Thesiger was not an Orientalist in the style of Bell and Lawrence, but an English explorer in the Victorian mould. In The Marsh Arabs he complained that during his travels in Iraqi Kurdistan he kept coming up against national borders he could not traverse. Travelling through Iraqi towns and cities he was bored and dissatisfied with the urban Arabs who, he felt, were “ashamed of their background and anxious to forget it. A suburbia covering the length and breadth of Iraq was the Utopia of which they dreamed” (37). Like Bell, Thesiger sought escape from “drab modernity” and found it in the Arabian desert and, between 1951 and 1958, among the Arabs of the Iraqi marshes. This small, self-enclosed tribal world in which he tried to forget his own background was, indeed, ancient and fragile: a pastoral existence defined by flood water and reed beds, wildfowl and pig hunting and the milking of buffalo. For the tribes who welcomed him, Thesiger retained the residual authority of the English and was used as a doctor and surgeon despite his lack of formal medical training. “In Iraq, at that time,” he noted,

the British still had a considerable legacy of good will, the result of our close association with the country between the two world wars when Englishmen worked there as administrative officers and advisers. Many of the older inhabitants continued to feel respect and affection for individuals. Tribesmen were, on the whole, too courteous to embarrass a guest, but I was sometimes bitterly attacked by townsmen or Government officials over British policy — Palestine or Suez, for example. On such occasions, the mention of an Englishman they had known could turn bitterness to friendly reminiscence. (38)

During the First World War the shelter provided by the marshes allowed the tribes to attack and loot both British and Turkish forces with relative impunity. During the mandate they were largely left alone and their link to Iraqi nationalism and to the ruling administrations of Baghdad remained tenuous. Despised by the urban Iraqis and desert Arabs alike, the Marsh Arabs maintained their own society and used their natural environment for protection as well as insurrection. “The Marshes themselves,” wrote Thesiger, “with their baffling maze of reed beds where men could only move by boat, must have afforded a refuge to remnants of defeated people, and been a centre for lawlessness and rebellion, from earliest times” (39). 

In the end, Thesiger’s prediction did not come true until the 1990s, when Saddam drained the marshes in response to the 1991 Shia revolt that had been encouraged by the Bush administration. Saddam had a pressing strategic aim here: following defeat in the uprising, the Supreme Council for the Islamic Revolution in Iraq and its paramilitary unit, the Badr Corps, used the thick cover of the reeds to regroup and organise the next phase of their campaign against the regime. “Accordingly,” wrote SCIRI representative Hamid al-Bayati in his 2011 memoir From Dictatorship to Democracy, “Saddam began to target the Marshes with planes, tanks and artillery” (40) and commenced work on a vast project to divide the major wetlands and divert the River Euphrates with enormous dams, barriers and dykes. This was a double assault of heavy weaponry and civil engineering designed to destroy the environment of the marshes and the rebellious tribes that populated them. As the SCIRI spokesman based in London and a board member of INDICT, al-Bayati had the desperate task of publicising this carnage in Europe and found allies in Foreign Office mandarin Julian Walker, Conservative MP Emma Nicholson and Ann Clwyd. But this was not a popular cause in 1995 and the plight of the Marsh Arabs gained no traction. Saddam succeeded in destroying a unique part of Iraq’s natural landscape and cultural heritage in a combined programme of genocide and ecocide that attracted no significant international response.

This was the environment in which Rory Stewart arrived in August 2003 as a provincial administrator for the Coalition Provisional Authority (CPA) in Amara and Nasiriyah. Stewart was one of five British civilians appointed by the Foreign Office to support coalition troops and American officials in the immediate post-Saddam period. But he was no zealot for regime change, writing in his 2006 memoir Occupational Hazards:

Ten years in the Islamic world and in other places that had recently emerged from conflict had left me very suspicious of theories produced in seminars in Western capitals and of foreigners in a hurry. The best kind of international development seemed to be done by people who directly absorbed themselves in rural culture and politics, focused on traditional structures, and understood that change would often be slow. I believed that politicians often misled others and themselves when they started wars and that there were dubious reasons for our invasions of Bosnia, Kosovo, and Afghanistan. (41)

Once in post, Stewart quickly learned that Iraqi civil society had been destroyed by the lethal combination of Saddam and UN sanctions. The post-Ba’ath vacuum was filled by what had always existed underneath the political superstructure: tribal allegiance, foreign agents and religious movements. The liberal middle-class professionals that the CPA assumed, or hoped, would emerge to lead reconstruction had either voluntarily vanished from the scene or been intimidated into silence. This was a constituency without power in the new Iraq: there was no mass liberal party, no liberal movement and, more importantly, no liberal militia. What actually emerged from the wreckage of tyranny, sanctions and war was not liberal or secular or democratic. In Maysan, Stewart had to contend with a local tribal warlord, Iranian-backed Shia militias and a branch of the Sadrist movement, a triangle of enmity that sidelined all other constituencies and kept the province on the brink of civil war. And this wasn’t unique to Maysan: similar struggles and rivalries were emerging all over Iraq.

As the messy transitional administration under Jay Garner moved into full CPA occupation under Paul Bremer, all of these local rivalries and conflicts intensified and accelerated, feeding a national insurgency. By the autumn of 2003 the CPA was no longer a resource to be exploited but an occupying power to be resisted. This was compounded by the strategic chaos of the occupation. CPA civilians and military personnel tripped over each other constantly, unable to clarify or disentangle their respective duties or areas of responsibility. The CPA in Baghdad devised ambitious nation building schemes that included “programmes on human rights, the free market, feminism, federalism, and constitutional reform” when, as Stewart put it, “people in Maysan talked about almost none of these things. They talked about security” (42). After a year of occupation, “Coalition governors were inventing different political structures in different provinces because Baghdad had not yet defined the legal powers or budgets for local officials” (43). Perhaps most importantly, in the end, “we could not define the conditions under which we were prepared to kill Iraqis or have our own soldiers killed” (44), which sounds harsh, but proved fundamental. Once the governor of Amara realised that the British would not give him the level of protection they gave themselves, Stewart writes, “almost any hope of cooperation was lost” (45).

The point, after all, was power. The financial and military power of the CPA was defeated by the complexity of Iraqi society and by the confusion of an occupying mission that could produce both gender quotas on local elected bodies and the abuse of prisoners in Abu Ghraib. Stewart was often forced to project an authority that he did not feel he truly possessed:

In an ad-hoc organisation in a war zone, most power is the theatre of power. It is not enough to do things, you must be seen to do things. I needed to promise change to give Iraqis some belief in us and in the future. I needed to claim authority and bluff people into falling in step. (46)

The problem was that the Iraqis saw right through this and treated Stewart’s “theatre of power” precisely as that: theatre. The occupiers were seen as a source of money to be tapped and, potentially, a way to neutralise or eliminate rivals and enemies. But the real source of authority lay in local constituencies and external backers, a shadow power structure that the CPA could do nothing to influence or defeat. In the south, SCIRI, the Badr brigades and their affiliates and proxies extended Iranian influence into the province, using elections to take control of the regional security apparatus and infiltrate the police forces, as they would do across Iraq. Their main Shia rival was the Sadrist movement, whose local proxies stoked anti-Coalition insurrections and whose constituency grew exponentially during the occupation. As Stewart details, the Sadrists were a new and expanding element in Iraqi and Shia society: a populist movement that appealed to the young and the poor in rural backwaters and city slums and found its apotheosis in the fiery sermons of Friday prayers. These were two components of the coming civil war that was, in 2003, merely a threat on the lips of petitioners who did not get what they wanted from CPA officials. As Stewart records, he and his colleagues in the CPA and British army were helpless to contain or control any of this. Even had it been possible, they lacked the real power to do so, because the occupation could never clarify its own purpose or limits. Their authority was an empty performance that was soon exposed.

There is one coda to this story of the marshes. In the aftermath of the war, international aid agencies poured money into projects to reflood the Marshlands. As Stewart travelled around the province he noted the effects of Saddam’s ecological destruction: the near-extinction of the buffalo, the depleted fish stocks and bird flocks, ruined huts and broken islands. In Maysan, the Beni Hashim tribe that Saddam had dispossessed and dispersed into the desert and the Shia ghettos and slums of Basra and Baghdad was finally able to return to the Marshes, although many didn’t. Slowly the wetlands recovered and some of the old communities resettled. But the recovery was only partial: a mere fraction of the population returned, along with only a sample of the biodiversity that had existed before Saddam’s original campaign of destruction.

V: Strategic failure

In 2006, after completing a tour with the International Security Assistance Force in Afghanistan, Emma Sky spent some time with Stewart in a compound in Kabul, working at his Turquoise Mountain Foundation. “I first met Rory in Iraq, after he had walked across Afghanistan following the fall of the Taliban,” Sky recalled in her 2015 memoir The Unravelling:

He was so scarred by his experience of working with the Coalition Provisional Authority that he had come to believe that grandiose nation-building schemes could never succeed in societies such as Iraq and Afghanistan. He was now focused on helping to restore historic parts of Kabul and to resuscitate Afghan traditions of arts and crafts. (47)

Stewart was familiar with the culture and the divisions of the British army: his father was a decorated soldier and senior SIS officer, and Stewart himself had joined the Black Watch during his gap year. Yet his memoir reveals tensions between the CPA civilians and the military in Maysan and Nasiriyah: relationships with the colonels and majors he dealt with often seemed strained, even hostile. In contrast, Sky — the liberal, anti-war NGO veteran and British Council project manager who worked in the Occupied Territories before 2001 — would eventually form a close bond with the two most powerful generals in the U.S. army, Petraeus and Odierno.

Unlike Stewart, Sky had unambiguously opposed the invasion of Iraq before she volunteered to join the CPA. She was eventually sent to Kirkuk in the role of Governorate Coordinator and worked so effectively with Colonel Bill Mayville of the 173rd Airborne Brigade that General Ray Odierno would later ask her to return to Iraq to be his political adviser. Sky was fully aware of her unique, even strange, position: the self-declared “peacenik” who had volunteered to be a human shield during the Gulf War was now the closest political confidant of a general notorious for his aggressive combat methods in the Sunni Triangle. Odierno had featured heavily in Tom Ricks’ 2006 bestseller Fiasco: The American Military Adventure in Iraq, which, Sky commented, “was particularly harsh…accusing him of causing the insurgency through his harsh tactics of breaking down doors and mass arrests. The allegations had hurt. It was an exaggerated portrayal which had tarnished his reputation” (48). Sky, on the other hand, remained loyal to the generals she worked for and was impressed by their soldiers. She would later contrast the performance of the American army in Iraq with the British, candidly telling Foreign Secretary David Miliband:

in the mid ranks of the U.S. military the criticisms that Brits had made about the Americans in the early years had rankled. The British military had been very arrogant, believing they knew the ultimate truth about counterinsurgency from Malaya and Northern Ireland. Every situation, however, was different. And the American military had proved themselves faster learners than the Brits. (49)

As Sky knew, reports being sent from Basra to London were politically censored, creating a false sense of optimism in Whitehall. This optimism was abruptly dispelled by the global publication of a photograph of British soldiers, consumed by flames, jumping from their Warrior armoured vehicle following a stand-off with Sadrists at the Jamiat police station in September 2005. Blair was shocked, but he should not have been. A month before Jamiat, the American journalist Steven Vincent had been kidnapped and executed in Basra after uncovering links between the Iraqi police, Shia militias, Iranian agents and oil smugglers, a network that the British had both enabled and ignored. “The fact that the British are in effect strengthening the hand of Shiite organizations is not lost on Basra’s residents,” Vincent wrote in his July 31 New York Times op-ed, the final article he would live to see published:

Fearing to appear like colonial occupiers, they avoid any hint of ideological indoctrination: in my time with them, not once did I see an instructor explain such basics of democracy as the politically neutral role of the police in a civil society. Nor did I see anyone question the alarming number of religious posters on the walls of Basran police stations. When I asked British troops if the security sector reform strategy included measures to encourage cadets to identify with the national government rather than their neighborhood mosque, I received polite shrugs: not our job, mate. (50) 

The fear of appearing to be colonial occupiers was deep-rooted and historical, but history could also make this look like dissembling: in 1917 Sir Mark Sykes had drafted the proclamation telling the Iraqis that the British came as “liberators” rather than occupiers. The return to Basra was full of historical echoes that were consciously downplayed by British diplomats and the army without, however, being lost on the Iraqis. To pick just one example: from 2006 to 2007 the British ambassador to Iraq was Dominic Asquith, whose great-grandfather had sent the first British troops to Basra in 1914. Once a month Sky and Odierno would dine in his embassy villa and Sky enjoyed his company:

He was tall and handsome, with the most wonderful manners and dry wit…I loved those evenings. They were our monthly treat. We would fly in from some trip to the battlefield, dirty and tired. But in the ambassador’s residence we relaxed on comfortable chairs, surrounded by paintings and photos and various artefacts of culture and life away from war. I always started with gin and tonic before proceeding to red wine…we were served a three-course meal, seated around a table that had once belonged to Gertrude Bell. Never had food tasted so good. We ate off china with metal cutlery, a welcome change from plastic and Styrofoam. (51)

Meanwhile, outside of Asquith’s diplomatic sanctuary with its fading traces of past imperial elegance, the British mission in the south of Iraq was falling apart. American irritation at British arrogance eventually turned to open contempt as the south spiralled into chaos. This feeling was shared by Iraqi Prime Minister Nouri al-Maliki, who confronted the southern Sadrists himself with the hastily launched Charge of the Knights offensive, an operation that simply exposed how unprepared the British were for offensive combat and how dependent the Iraqis were on U.S. military support. The illogicality and profound failure of British strategy were summarised by a soldier in Basra Palace, in a lament that could have been scripted by Joseph Heller: “It’s going round in circles. People are getting killed for us to resupply ourselves, and if we weren’t resupplying ourselves, people wouldn’t be getting killed” (52). The British mission was finally terminated in 2009, two years after the retreat from Basra Palace to the airport under a heavy barrage of Sadrist rockets. Blair redeployed a demoralised and under-equipped army to fight an equally disastrous campaign in Afghanistan, but to the Americans it looked like the British were running away from Iraq. The Americans were correct. Both the British military and politicians, including Blair, felt defeated and exhausted by Basra, and carried this palpable sense of humiliation and fatigue into Helmand, where the British mission had to be rescued once again by the Americans. As Michael R. Gordon and General Bernard E. Trainor wrote in their chronicle of post-war Iraq:

The British military had begun the war as the reputed masters of counterinsurgency…by the end of the Basra fight that reputation was in tatters. The British had ceded control of one of Iraq’s major cities and lost the trust of the Iraqi prime minister. (53)

For Sky the disintegration of Basra province was on the periphery of her work with Odierno in Anbar and Baghdad. In fact, Sky’s memoir reveals her distance and alienation from British institutions inside and outside Iraq: she remarks that her colleagues at the British Council took no interest in her work in Iraq and that both Blair and Miliband had no idea who she was or why she was working for the U.S. army, while her comments on the comparative cultures of the U.S. and British military are much more complimentary to her American employers. Her final judgement of the British in Basra is succinct: “it had not been the most glorious chapter in British history” (54). Nevertheless, she was no American lackey and maintained her position that the Iraq War was a strategic failure of historical distinction. By the time she returned to Kurdistan in 2012 the outcome was clear: the Iraq War had been won by Iran. The key moment turned out to be the 2010 election, which Ayad Allawi’s party won by a narrow margin but which was effectively handed to Maliki by Obama and Biden, who, like the British before them, simply wanted to get America out of Iraq as quickly as possible. This proved to be the final victory for Iranian Quds Force Commander General Suleimani, who could now fully penetrate Iraqi political space without any resistance. By the time that Sistani issued a fatwa calling on all Shia to join the Iraqi security forces to fight ISIS, Suleimani was in effective command of those forces and able to personally direct the offensive from inside Iraq.

The truth is that neither the British nor the Obama administration were prepared to challenge the Iranians in Iraq, despite the clear Iranian effort to kill coalition soldiers and ensure that the Iraqi state remained within the Shia rather than the Arab orbit. As early as May 2004, Foreign Secretary Jack Straw intervened to stop a CIA initiative to attack Iranian agents and training camps in Iraq because the FCO did not want to jeopardise negotiations over Iran’s nuclear weapons programme, a pattern repeated on a grander scale by the Obama administration. The extent of the Iranian triumph was so great that Sky faced accusations of a secret deal between Iran and America; in Kirkuk a local Sunni sheikh told her, “They handed Iraq to Iran on a silver platter” (55). But there was no deal. In Kirkuk in 2003-4 Sky had caught a fleeting glimpse of “a city of four ethnicities who speak each other’s languages and love each other’s cultures” (56) — a rich patchwork of Kurds, Arabs, Assyrians and Turkmen who might, one day, coexist in a secure country called Iraq. But at the same time, to Sky’s frustration, the Kurds relentlessly pursued Kirkuk as their own, seeking to reverse the Arabisation of the city that had been engineered by Saddam and to secure its oil as the foundation of a future independent Kurdistan. Pluralism collapsed into ethnic and sectarian rivalry as group identities were politicised and fatally embedded in an electoral system that the Americans designed. This gave Iran — one of the great sectarian actors in the region — the perfect opportunity to divide and rule Iraq, an opportunity it duly seized.

VI: British ghosts

We were surrounded by half-forgotten history.
Rory Stewart, Occupational Hazards

Gertrude Bell spent her last days in Baghdad — the last days of her life — collecting and cataloguing historical artefacts for her Archaeological Museum. Following the coronation of King Faisal and the dissolution of the British administration, Bell found herself isolated and increasingly adrift in the new Iraqi capital. She had helped to shape and found the modern Iraqi state, but the course of its development had not quite gone the way she wanted and she no longer had any influence over events. Her friends were leaving the city and, back in England, the family business had been fatally damaged by the coal strike. On June 16, 1926, King Faisal opened the first room of the museum and Bell wrote in her diary:

Except for the museum, I am not enjoying life at all. One has the sharp sense of being near the end of things with no certainty as to what, if anything, one will do next. It is also very dull, but for the work. I don’t know what to do with myself of an afternoon…It is a very lonely business living here right now. (57)

Bell’s focus on the museum consumed her: she spent all of her time and energy fixing up the building she had been given and arranging her antiquities. When her father implored her to return home, she refused:

I do understand that things are looking very discouraging and I am dreadfully sorry and unhappy about you. But I don’t see for the moment what I can do. You see I have undertaken this very grave responsibility of the museum…(58)

In reality, the responsibility was self-imposed: the museum was the result of her own archaeological discoveries and her commitment to preserve the rich history of Mesopotamia for the new citizens of Iraq, a gift to the country that she truly loved. It was also a way to stave off her own sense of despair, grief and depression, until the point at which it no longer could. Bell died on July 11, 1926, after taking an overdose of sedatives. Her death was noted across the British Empire and also among the tribes of Iraq, who flocked to Baghdad and lined the streets as her cortège travelled to the Anglican cemetery, her coffin decorated with the Union Jack and the new flag of Iraq that she had helped to design.

Nearly eighty years later the descendant of Bell’s museum was looted by organised gangs who stole artefacts from the heart of her collection. The U.S. army failed to secure the building and it was left to returning museum staff to fend off the looters for four days. This was a significant and symbolic event: a presage of the anarchy and corruption to come. Rumsfeld’s infamous response to the looting exposed the vacuum at the heart of post-war planning in a manner that chilled opponents and supporters of the war alike: “Freedom’s untidy,” he said, “stuff happens.” With this event, Bell’s archaeological legacy evaporated; her geographic legacy also seemed to be finished in 2014, when ISIS erased the borders she had drawn in the 1920s. In the early years of the monarchy, Bell had been friendly with Iraq’s Education Minister Abdul Hussein Chalabi and she often visited and enjoyed the gardens of Chalabi’s residence, The Deer Palace. In 2006, Abdul Hussein’s great-granddaughter (and Ahmad’s daughter) Tamara tracked down Bell’s rarely visited, half-forgotten grave in “a barren spot” of the Anglican cemetery; having developed a fascination with the old family friend, and looking at the withered flowers and dusty ground of the memorial, she decided to restore and maintain it. For Tamara, Bell’s grave stood for something more than one life: it represented the lost world of her family and an entire generation of exiles, “buried underground and in memories, almost as if it never existed” (59). But for anybody British who knew or cared about it, Bell’s grave was simply a relic from an expired empire.

The ghosts of British lives haunted the land: the traces of their lives and deaths strewn across Iraq, in graves, plaques, buildings and some living memories. In Maysan in 2003, Stewart met a man who remembered a day of duck shooting with Thesiger in 1953; he also noted the lingering memory of Colonel Leachman, the Amara-based British political officer whose murder in Fallujah remained the great symbol of the 1920 uprising. Amara cemetery, which Stewart visited, was filled with the bones of British and Indian soldiers who had been killed during the siege of Kut in 1915-16. During the 2006 civil war, when other reporters were too afraid to leave the Green Zone, Dexter Filkins visited the British war cemetery in Baghdad; he found it derelict, with tombstones “toppled and crumbling” and grass growing up to his chest. “When an American died in this war,” he wrote,

he was flown home in a black bag, zippered at the top; the British, killed long ago, were buried across the realm from here to Trincomalee. There hadn’t been any refrigeration then, and the ships had been too slow. The British were buried where they fell. (60)

Symbols of lost power and responsibility surrounded British personnel in Iraq in 2003, but the imperial legacy was almost completely disavowed. Power and responsibility now belonged to America, a nation still changing the course of world history; the British played only minor roles as participants and spectators in an American and Iraqi narrative. Stewart and Sky would often hear praise for their imperial predecessors, but it was only ever offered as a way to criticise their American successors, and therefore served to highlight the historic transfer of power. In the era of the CPA, the original British occupiers were lauded for their diplomacy and their knowledge of the tribes (which itself owed much to Bell’s 1920 masterpiece Review of the Civil Administration of Mesopotamia), while the current British occupiers tended to be treated as conduits for American money and influence. In contrast to the approach of earlier British imperialists, Bremer and his advisers had no desire to perpetuate the ancient tribal divisions of Mesopotamia and wanted to construct a modern, secular, democratic nation out of the wreckage of dictatorship. The tragedy was that their method of achieving this — minority representation — had the effect of hard-wiring sectarianism into the new political institutions that they created. As Sky wrote later: “the focus on subnational identities was at the expense of building an inclusive Iraqi identity. No one at the time seemed to have any foreboding of the disaster this would bring upon the country” (61).

For many reasons, the relationship between the Iraqis and the British had been complicated. As Elie Kedourie noted — in a book written in the shadow of brutal Ba’ath reprisals after the first Gulf War — it was the British who were ultimately responsible for the descent of Iraq into factional bloodshed and military dictatorship, precisely because of the sectarian political settlement they had designed and upheld:

The newly-invented polity was…fragmented from the very beginning. The original fault lines have become, if anything, more pronounced with every change of regime. The Kurdish and Shi’a uprisings in the aftermath of the Iraqi defeat at the hands of the U.S. in 1991 eloquently show the abiding disaffection of the majority of the population towards rule from Baghdad. (62) 

Kedourie argued that an Iraqi state headed by a foreign king and governed by urban Sunni officers and officials “could not, parliament or no parliament, be governed constitutionally or within the rule of law” (63). The state established by Churchill, Cox and Bell therefore contained the seeds of its own destruction, and the descent of Iraqi nationalism into genocide and totalitarianism also had its roots in British policy. Because Iraq was never fully annexed by the empire, British power was wielded in back rooms and through proxies, which simply created a different kind of resentment, one that festered and then exploded. Inspired by fascism and Nazism, Iraqi nationalism was, from birth, authoritarian, militaristic, antisemitic and anti-British. In her memoir, Tamara Chalabi describes the darkness and brutality that consumed Baghdad during the Second World War, inspired by Hitler, who briefly courted the Iraqi nationalists. This set the scene for the execution of the royal family in 1958, the destruction of constitutional government and the slow suffocation of the Jewish community of Baghdad. In each case, British power and anybody associated with it was the target of violence and revolution.

The interesting thing about the British engagement with Iraq during the Ba’ath era was how ahistorical it was. Modern Iraq was a British creation but all sense of ownership or responsibility had long vanished. Ann Clwyd’s campaigns on behalf of the Kurds and the Iraqi exiles were conducted in the context of international human rights, but without any real sense of the link between British imperial history and the tragic progress of Iraqi nationalism. Conservative trade policy with Iraq in the late 1980s was not even an attempt to revive old historical ties: it was simply a very modern, very grubby tale of cynicism and avarice. The Gulf War, Desert Fox and the Iraq War all occurred within a context defined by America, and any British policy in Iraq was therefore almost identical to that of the U.S. Blair understood this to be the existential basis of British foreign policy and acted accordingly. The British occupation of Basra turned into a transactional commitment that, ultimately, proved too costly. The echoes of Britain’s past were nothing more than a liability during this mission: a source of tension, a provocation, a reproach. But they also taunted the British, cruelly exposing decline and fall relative to new global powers and forces, so they were best ignored. The British story in Iraq ended, finally, with Blair’s own epitaph, delivered in the form of the Chilcot Report.

As David Fromkin wrote in his account of the dissolution of the Ottoman Empire, “the shadows that accompanied the British rulers wherever they went in the Middle East were in fact their own” (64). This remained as true in 2003 as it had been in 1916.

  1. Quoted in Janet Wallach, Desert Queen: The Extraordinary Life of Gertrude Bell: Adventurer, Adviser to Kings, Ally of Lawrence of Arabia (Phoenix Giant, 1999), p.296
  2. Quoted in Wallach, p.295
  3. Gertrude Bell, A Woman in Arabia: The Writings of the Queen of the Desert, ed. Georgina Howell (Penguin, 2015), p.165
  4. Quoted in Wallach, p.154
  5. Quoted in David Fromkin, A Peace to End All Peace: the Fall of the Ottoman Empire and the Creation of the Modern Middle East (Holt, 1989), p.453
  6. Quoted in Fromkin, p.503
  7. Tamara Chalabi, Late for Tea at the Deer Palace: The Lost Dreams of My Iraqi Family (HarperPress, 2011), p.274
  8. Ann Clwyd, ‘See men shredded, then say you don’t back war’, The Times, March 18, 2003
  9. Ann Clwyd, Rebel With a Cause (Biteback, 2017), p.274
  10. Ibid., p.264
  11. Ibid., p.255
  12. Ibid., p.274
  13. Ibid., p.241
  14. Quoted in Richard Bonin, Arrows of the Night: Ahmad Chalabi and the Selling of the Iraq War (Anchor Books, 2011), p.114
  15. Andrew Roberts, Churchill: Walking With Destiny (Allen Lane, 2018), p.283
  16. Clwyd, p.248
  17. Quoted in Bonin, p.81
  18. Clwyd, p.255
  19. Ibid., p.261
  20. See Hansard: https://publications.parliament.uk/pa/cm200203/cmhansrd/vo030226/debtext/30226-22.htm
  21. Clwyd, p.279
  22. Alastair Campbell, The Alastair Campbell Diaries, Volume 4: The Burden of Power — Countdown to Iraq (Arrow, 2013), p.469
  23. Quoted in Tony Blair, A Journey (Arrow, 2011), p.439
  24. Quoted in Richard Norton-Taylor, Mark Lloyd and Stephen Cook, Knee Deep in Dishonour: the Scott Report and its Aftermath (Gollancz, 1996), p.48
  25. Quoted in Norton-Taylor et al., p.19
  26. Clwyd, p.227
  27. Quoted in Norton-Taylor et al., p.13
  28. Ibid., pp.140-1
  29. Clwyd, pp.175-6
  30. Quoted in Anthony Seldon, Blair (The Free Press, 2005), p.387
  31. Blair, p.222
  32. See BBC News, Texts and Transcripts, December 17, 1998: http://news.bbc.co.uk/1/hi/events/crisis_in_the_gulf/texts_and_transcripts/236932.stm
  33. Steve Boggan, ‘Election 97: Patriotic Blair sets out global vision’, Independent, April 21, 1997: https://www.independent.co.uk/news/election-97-patriotic-blair-sets-out-global-vision-1268592.html
  34. Tony Blair, ‘The Doctrine of the International Community’ in The Neocon Reader, ed. Irwin Stelzer (Grove Atlantic, 2004), p.112
  35. Radek Sikorski, ‘Interview with Paul Wolfowitz’, Prospect, December 2004
  36. Wilfred Thesiger, The Marsh Arabs (Penguin, 1964), p.14
  37. Ibid., p.59
  38. Ibid., p.61
  39. Ibid., p.99
  40. Hamid al-Bayati, From Dictatorship to Democracy: An Insider’s Account of the Iraqi Opposition to Saddam (University of Pennsylvania Press, 2011), p.30
  41. Rory Stewart, Occupational Hazards (Picador, 2007), p.8
  42. Ibid., p.82
  43. Ibid., p.276
  44. Ibid., p.296
  45. Ibid., p.297
  46. Ibid., p.35
  47. Emma Sky, The Unravelling: High Hopes and Missed Opportunities in Iraq (Atlantic, 2015), p.136
  48. Ibid., p.148
  49. Ibid., p.281
  50. Steven Vincent, ‘Switched Off in Basra’, New York Times, July 31, 2005: https://www.nytimes.com/2005/07/31/opinion/switched-off-in-basra.html
  51. Sky, p.167
  52. Michael R. Gordon and General Bernard E. Trainor, The Endgame: the Inside Story of the Struggle for Iraq, From George W. Bush to Barack Obama (Vintage, 2013), p.466
  53. Gordon and Trainor, p.481
  54. Sky, p.281
  55. Ibid., p.352
  56. Ibid., p.70
  57. Bell, p.248
  58. Ibid., p.249
  59. Chalabi, p.185
  60. Dexter Filkins, The Forever War (Vintage, 2009), pp.332-3
  61. Sky, p.50
  62. Elie Kedourie, Democracy and Arab Political Culture (Washington Institute for Near East Policy, 1992), p.31
  63. Kedourie, p.32
  64. Fromkin, p.468

 


The Writing Racket: Balzac’s ‘Lost Illusions’

During his lifetime, Balzac’s debts were as famous as his novels and more widely discussed than his love affairs. Money was the great unifying subject of La Comédie humaine and the legal and moral dimensions of debt were an important subcategory that he treated with great care and attention. In his novel Les Souffrances de l’inventeur — the third book of Illusions perdues — the provincial printer David Séchard is financially ruined by three promissory notes forged by his brother-in-law Lucien de Rubempré, a poet and journalist driven to desperation by the carnivorous world of literary Paris. The book details various aspects of debt collection: the extortionate fees added by creditors and their lawyers to increase the sum total owed; the character and methods of their ruthless bailiffs; the lengths taken and tricks required by debtors to avoid losing their possessions and even their liberty. This was a subject, like most others, that Balzac knew a lot about, in this case through personal experience. Balzac owed money to practically everybody he knew: friends, family members, bankers and other writers; his greatest moneylenders included Baron James de Rothschild, his Polish lover Eveline Hanska and his own mother (whom he also accused of ruining his life). For Balzac, borrowing and owing money was an expression of love and friendship, but to his friends and lovers it was a consistent source of irritation and despair. In 1838, Théophile Gautier noticed a black-bound book on Balzac’s shelf titled Comptes mélancoliques and asked what it was. “You can keep it,” Balzac replied, “It’s an unpublished work, but it has its value” (1): the book was a record of unpaid bills and summonses. Returning from one of his periodic trips to Italy, where he was consistently and gratifyingly fêted by the local aristocracy, he visited his financial advisers in Paris to review the situation: each one told him that his best course of action was to flee the city immediately, which he did. Balzac’s debts multiplied throughout his life, pushing him to artistic immortality and an early grave: the ferocious creative energy that produced La Comédie humaine and destroyed his health was generated by the desperate need to pay off his creditors as much as by any of his other motivations (although he rarely paid them anyway). Whether consciously or not, owing large sums of money to lots of people gave Balzac focus by providing an immediate material need for his prodigious, coffee-fueled writing bouts.

As Balzac shows us in Illusions perdues, debt was the common lot of the Parisian writer without independent means during the venal and materialistic years of the second Bourbon Restoration and the July Monarchy. For the parasitic hacks of the petits journaux, popular commercial novelists and playwrights like Eugène Sue and contemporary masters like Victor Hugo, spending money was the only way to establish a career as any kind of writer in Paris. As the publisher Dauriat explains bluntly to Lucien when negotiating the purchase of his first book of poems Les Marguerites, “fame costs twelve thousand francs in reviews and three thousand francs in dinners” (2). This was partly reality and partly philosophy, one that Balzac practised fully: he borrowed money he did not have to spend money that he had not yet earned, in order to increase his prospects of artistic and commercial success through expensive social climbing, thereby earning the money he needed to pay back his creditors. As Eugène de Rastignac tells Raphaël de Valentin in La Peau de chagrin, “When a man spends his time squandering his fortune, he’s very often on to a good thing: he is investing his capital in friends, pleasure, protectors and acquaintances” (3). The problem was that Balzac’s extravagant instincts always outran his earning capacity and his speculations invariably led to losses and ruin, often through bad luck and unfortunate timing rather than any inherent flaw in his schemes. (It is striking both that his fictional speculators such as Baron de Nucingen invariably made better financial decisions than Balzac ever did in his own life, and how often his real-life ventures later led to success for others.)

Balzac always tried to make a distinction between purely commercial hack work written to pay the bills, and serious artistic production — a distinction marked in his own life by the early medieval romances and Gothic horror stories published under the pseudonyms Lord R’Hoone and Horace de Saint Aubin and the novels published under his own name, starting with Les Chouans in 1829. The two worlds of commerce and art are segregated morally and aesthetically in Illusions perdues by the dramatic contrast drawn between the cynical journalistic milieu of Étienne Lousteau and the ascetic circle of Daniel D’Arthez and the ‘Cenacle’. But for Balzac this distinction was never so clear in practice: all of his work, including his greatest novels, was driven by commercial considerations and pressing financial needs. When Balzac described himself as “a natural accountant” to Eve Hanska, she responded with justifiable scorn, having just seen a large chunk of her own money squandered on his railway investments and on the purchase and lavish decoration of a house on the Champs-Élysées that she had neither asked for nor wanted. But in one way he was correct: he could calibrate exactly the material return on writing words and paid extremely close attention to his own earning potential. The journalists and publishers of Un Grand homme de province à Paris — the second volume of Illusions perdues — quantify books and magazine columns, lines and words, in precise remunerative and temporal detail, in exactly the same way that Balzac did in his own life. When Lucien first enters the offices of a Paris newspaper he watches an unnamed writer register a complaint to the editor about his pay: “Look here, Giroudeau…my count is eleven columns, and at five francs each that makes fifty-five francs. I’ve only had forty, therefore you still owe me fifteen francs…” (4). Later, the literary shark Finot offers Lousteau a position on a liberal newspaper he has just gained control of in the following terms: “You’ll get paid for all the articles at a rate of five francs a column: in this way you can reap a bonus of fifteen francs a day by only paying three francs a column and by saving on unpaid articles” (5). Lousteau also schools Lucien in the critics’ art of selling complimentary theatre seats and review copies of books for extra spending money, a practice close to the heart of all freelance critics. Because of the precarious and highly competitive position of the professional writer, each word is measured in money, energy and time, and life is a desperate scramble for new sources of income, however small.

In Balzac’s own life as well as the corrupt world of Lousteau, a mercenary attitude to creation was necessary for success if not survival — but where Balzac resembled D’Arthez is that he was also willing to work for it. As early as 1819, before he had written a novel and when he was planning his abortive verse drama Cromwell, Balzac could already reason: “a tragedy is normally supposed to contain 2000 lines. That means having between 8 and 10,000 thoughts, without counting all the other thoughts needed for the ideas, the plan, the characters, the customs of the time…” (6). A list of projects scribbled on a note from 1822 is headed, “ORDER OF THE DAY. Make 3000 francs or it’s Dishonour, Destitution and Co.” (7). Despite his veneration of D’Arthez, Balzac himself could be stung by the misplaced purism of part of the Parisian literary elite:

He felt it to be hypocrisy when a writer like Astolphe de Custine, whose inherited fortune permitted him to disdain the one he might have gained by his pen, spoke in praise of asceticism and the pride of Rousseau as opposed to the writers who commercialised their talents. Balzac replied that Rousseau’s Confessions contain a lengthy account of the negotiations which finally brought him an income of 600 francs, and that Racine, like Moliere and Boileau, had not been above accepting royal patronage. A writer in 1837, if he wished to earn a living, was obliged to take account of popular taste and the bookseller’s convenience. (8)

The world of writers is divided by the high ideals of art and the low reality of making a living, and this divide spawns hypocrisy, duplicity, treachery and corruption. Un Grand homme de province à Paris is an unsparing exposé of “the writing racket”: “how publishers pull their strings and how literary reputations are concocted…the play of wheels within wheels in Parisian life, the machinery behind it all” (9). In Balzac’s world money determines human agency, and the composition of the first volume of Illusions perdues — titled Les Deux poètes and published in 1837 — was a good example of the material demands of the book trade that Balzac describes. In 1836, the publisher Madame Béchet filed an injunction that required him to deliver the final two volumes he owed her within twenty-four days, or face a fine of fifty francs for each day’s delay. In response, Balzac left Paris and wrote Les Deux poètes — a rich portrait of the social mores, class relations, provincial bigotry and romantic dreams of the inhabitants of Angoulême, all drawn in precise historic and geographical detail — within twenty days. As Balzac’s biographer André Maurois wrote, “he had never written anything better. Misfortune lent an edge to his gifts…” (10).

If the first volume of Illusions perdues, completed in such circumstances, was fueled by “bitter melancholy”, then the second volume seemed to be driven by deeper furies: the romantic illusions of poetry and the literary life, shared by Lucien and his mentor and lover Madame de Bargeton in the provincial isolation of Angoulême, are systematically dismantled by the hard, materialistic reality of Paris. But this ruthless destruction of illusions is experienced not just by Lucien, but also by any reader who shares his idealised assumptions: literature is reduced to a product and writing is exposed as a trade that, on its wilder fringes, overlaps with the criminal underworld (as Lousteau demonstrates, to Lucien’s horror, blackmail is an important weapon in a journalist’s arsenal). Balzac takes care to describe the different types of publishers active in Paris in the 1820s but they are all united by their lack of scruples and hostility to any work that does not sell: poetry, in particular, is treated with mockery and disdain because, just like now, hardly anybody buys it. Lucien’s illusions about the material, rather than aesthetic, worth of his debut volume Les Marguerites do not last beyond his first exposure to the publishers of the Galeries de Bois, who also operate as booksellers and wholesalers: “‘I shouldn’t have gone there,’ he told himself; but he was nonetheless struck by the brutally materialistic aspect that literature could assume” (11). (It is worth noting here that Lucien’s sonnets were actually written by and purchased from Charles Lassailly and Théophile Gautier for unattributed use in the novel.) Publishers are “speculators in literature” and, like newspaper editors, exploit their considerable positions of power over writers. The worth of literature is defined by the volumes sold, and by this calculation poetry has no value at all.

The reality of power in the literary marketplace and its relation to both money and social prestige is the key to Balzac’s savage portrayal of the trade of journalism, to which Lucien succumbs and which totally corrupts him. Balzac’s enmity for the press is ferocious (“an inferno, a bottomless pit of iniquity, falsehood and treachery,” 12) and so it is worth remembering that all of the corrupt and cynical practices and attitudes he portrays are ones that he had himself used and held as a newspaper critic. The fact that by the time he came to write Un grand homme de province à Paris he had also been the victim of a coordinated critical assault partly explains the fury of his exposé. The book was a brave and reckless attack on the press, for which Balzac suffered for the rest of his life. However, like Milton, whom he read carefully, Balzac could not contain the allure of evil. In Paris Lucien is caught between two mentors: D’Arthez, with his “system of resigned poverty” and dedication to art, and Lousteau, with his “militant doctrine” of money, intrigue, immediate gratification and temporary fame. The austere scribe in his dank garret is no competition for Lousteau’s dark energy and dangerous wit, and Lucien is easy prey. The journalist’s Satanic soliloquy in the Luxembourg Gardens, directly inspired by Lucien’s guileless recital of his sonnets, is the triumphant, bleak, black heart of the novel, a tour de force of applied cynicism and ferocious materialism. “I soon discovered the hard facts of the writer’s trade, the difficulty of getting into print and the brutal reality of poverty,” Lousteau begins, “underneath your beautiful dream-world is the turmoil of men, passions and needs” (13). Literature is a world of “systematic warfare” and “ignoble conflicts”, where success and failure are decided “behind the scenes”; for Lousteau, as for Lucien, the effort is impossible to sustain without the material benefits of journalism, a system of conditional alliance, commercial opportunism, bribery, plagiarism and targeted polemic. Lousteau, like Lucien, has followed the flight from the provinces to the capital in search of literary fame, only to “fall into the pit of misery, the mire of journalism, the morass of the book-trade.” In the face of such disappointment, journalism becomes a method of survival and a weapon to be used: “at this trade,” Lousteau explains, “as a hired assassin of ideas, industrial, literary and dramatic reputations — I make fifty francs a week.” Fame costs money, but also depends on the courage to take chances and the ability to create luck:

outside the literary world…there’s not a single person who knows what a fearful Odyssey one has to pass through in order to acquire what one must call, according to the diversity of talents, popularity, vogue, reputation, renown, celebrity, public favour, the successive rungs of the ladder leading to fame…the attainment of this brilliant zenith depends on so many and so rapidly varying chances that no example has occurred of any two men reaching it by the same route. 

The force of this exposition came from Balzac’s own bitter struggle for success, and its truth can be confirmed by any writer working today who is prepared to be honest about their own success or failure.

What Balzac registers in the characters of Lousteau and the other editors and critics he creates in Illusions perdues (Finot, Blondet, Lucien himself) is the simple power of the press to make or destroy reputations, whether among the aristocrats of the Faubourg Saint-Germain or the playwrights and actresses of the Parisian stages. This power is both seductive and absolute so long as you do not become its target, a fate from which few are safe. Lousteau illustrates this power and its attractions to Lucien, as the poet prepares to write his first theatre review and seal the stage reputation of his future lover, Coralie:

Be hard and witty for a month or two, you’ll be swamped with invitations and actresses’ parties; you’ll be courted by their lovers…At five o’clock this evening, at the Luxembourg, you didn’t know what to do with yourself: now you’re about to become one of the hundred privileged persons who foist their opinions on the French public. In three days’ time, if we succeed, by printing thirty bon mots at the rate of three per diem you can make a man curse the day he was born; you can draw a regular income — in sensual pleasure — from all the actresses in your theatre; you can make a good play fall flat and send the whole of Paris flocking to a bad one. (14)

For Lucien, like Balzac and like any writer, the temptations and satisfactions are multiple and overlapping: they include desire, revenge, influence, intrigue and fame, with money and art somewhere on the list too. Lucien is quickly seduced by the well-dressed, hard-drinking, fast-spending, sharp-witted journalists of the theatre boxes, admiring and then emulating “the damascene armour of their vices and the glittering helmet of their cold analysis…the atrocious power they wielded” (15).

The practice of journalism in Restoration Paris is unavoidably and enjoyably corrupting: writing, for Lousteau, Finot, Blondet and Lucien, serves political ends that exist apart from principles. Opinions are formulated in line with immediate circumstances, as Finot declares when launching his liberal magazine: “although my opinions are undergoing a necessary transformation so that I can take over the editorship of the Review of whose destinies you are aware, my convictions remain the same…Circumstances vary, principles don’t change” (16). In this atmosphere, art is also closely linked, and even subordinated, to politics: Balzac takes care to elucidate the connections between the literary movements and political parties of the Restoration era. During Finot’s magazine launch, Lousteau even drags Lucien’s sonnets into the conflict, exploiting their aesthetic properties as a political weapon: 

We want to be useful to our new comrade. Lucien has two books to publish: a collection of sonnets and a novel. The power of the paragraph must make him a great poet within three months! We’ll use his Marguerites in order to decry Odes, Ballads and Meditations, in fact all Romantic poetry. (17)

In the 1820s — the time period of Illusions perdues — French literary romanticism was influenced by Walter Scott and medievalism, and aligned with the Catholicism and monarchism of the Bourbon Restoration. Louis XVIII was a patron of poetry and helped establish the career of Victor Hugo by giving him a pension of 1,000 francs. For the arts, the atmosphere of Restoration France was a relief after the austere and censored years of the Empire. In this context, then, liberalism and republicanism stood in opposition to romanticism, a situation that would reverse in the years preceding the revolution of 1830 and the July Monarchy. After Lucien has recited his first sonnet in the Luxembourg Gardens, Lousteau’s immediate response is: “Are you a classicist or a romantic?” The significance of this question has to be explained to Lucien:

My friend you are coming into the thick of a fierce battle and must make a prompt decision. Literature is primarily divided into several zones; but our great men are divided into two camps. The royalists are romantics, the liberals are classicists. Divergence in literary opinion is added to divergence in political opinion: hence war between fading and budding reputations, a war in which no weapons are barred: ink spilt in torrents, cutting epigrams, stinging calumnies, unrestrained abuse…If you stand aloof you will stand alone. Which side will you take? (18)

Lucien’s answer is the correct one and reveals his future development before he has taken even a first step in that direction: “Which is the stronger party?” Political and aesthetic territory is clearly demarcated and determines the decisions made about literary success and failure. Book reviews do not describe the relative worth of the work: they are used as the starting point for a political polemic, and aesthetic merit is therefore decided by political utility. There is, of course, no guarantee that the reviewer even shares the political opinions that he is espousing in order to make or break a reputation. This, then, is the art of criticism that Lucien, under Lousteau’s tutelage, masters.

If art is subordinated to politics then the literary world is also a world of political alliance and rivalry. As Lousteau tells Lucien early on, success requires influential friends and powerful allies: networking is key. “Today, in order to succeed, one needs to be in with such people. It’s all a matter of chance, you see. The most dangerous thing is to churn out wit all alone in the corner” (19). Lucien’s career is destroyed when he switches literary and political allegiance too abruptly: he does not prepare the ground in order to retain the subterranean loyalty of his former allies and so finds himself isolated. Balzac draws a sharp distinction between the conditional, parasitic alliances of journalists and the true bond of friendship and mutual esteem that exists between artists, idealised in his group portrait of the Cenacle. The world of journalism is defined by competition and riven by jealousy, and Balzac, a keen observer of human types, has a lot to say about the special quality of envy that exists between writers. When Lucien recites one of his better sonnets, he is chilled by Lousteau’s “inscrutable calm”:

Had he had more experience of literary life, he would have known that, with writers, silence and curtness in such circumstances betoken the jealousy aroused by a fine work, just as their admiration denotes the pleasure they feel on listening to a mediocre work which confirms them in their self-esteem. (20)

Lousteau enlightens Lucien about the behaviour that results from this professional rivalry: “believe me, the author in fashion is harder and more insolent towards newcomers than the most brutal publisher: the one shows you out, the other tramples on you” (21). Vanity is also key to the psychology of writers: “no words, no description can depict the fury of writers when their amour-propre is wounded, nor the energy they can tap when they feel the prick of the poisoned darts of mockery” (22). In 1825, Balzac’s friend Jean Thomassy had written to warn him that “the whole-time man of letters is always tainted with envy; whereas those with other resources are only light-heartedly envious, having achieved other things” (23). When Balzac received this letter he had already reached the point of exhaustion and disgust with the machinations and intrigues of literary Paris. He knew the savage resentment that talent and success bred in this milieu, having experienced an intense critical backlash caused by the jealousy of his contemporaries. His friends tried very hard to destroy him. Lucien, who matches good looks with a flair for elegant polemic, makes an immediate impact with his first theatre review, to a degree that sets up his downfall, as Balzac hints early on:

Looking at Lousteau, he thought: ‘There’s a friend!’ without suspecting that Lousteau already feared him as a dangerous rival. Lucien had made the mistake of expending too much wit: a dull article would have served him admirably. Blondet counterbalanced the envy that was gnawing at Lousteau by telling Finot that one had to surrender to such forceful talent. This verdict determined Lousteau’s conduct: he resolved to remain friendly with Lucien and come to an understanding with Finot in order to exploit so dangerous a newcomer by keeping him in a state of need. (24)

Thus begins Lucien’s sensual corruption and descent into debt, partly orchestrated by his friends and rivals in order to neutralise him and destroy him if necessary (it proves necessary). 

This network of alliance and rivalry underpins the rise and fall of reputations. Critical reviews are not to be taken at face value: in Balzac’s Paris they do not necessarily describe or evaluate a work of art, but serve a function in a social and professional matrix. Balzac knew this because he was part of it: he had written many rave reviews of his own books under pseudonyms in journals that were friendly to him. He had also seen his best novels consistently attacked by his rivals despite, and usually because of, his success and fame across Europe. According to Balzac, the literary world of the 1820s was a world of fabricated reviews, paid applause, reputational sabotage and professional treachery. In this environment, it was necessary for a writer to trade in duality and duplicity in order to succeed. Artistic principles and professional integrity are among the illusions that Lucien loses. The work of journalism is the creation of illusion: opinions do not reflect real convictions but temporary tactical positions that remain invisible to readers. In Restoration Paris, journalists and writers were closely tied to the theatre and the underworld of prostitution, and in Balzac’s novel theatre and prostitution also function as metaphors for writers and writing. Lucien, unable to resist the pleasures of the theatre, its actresses and artists, as well as the aristocrats who form the theatre-going elite, glimpses the unreality of this world on his first visit backstage with Lousteau:

The magic of the scenery, the spectacle of pretty women filling the boxes, the blazing lights, the resplendent enchantment of back-cloths and new costumes gave place to coldness, desolation, darkness, emptiness. Everything looked hideous. Lucien’s surprise was indescribable. (25) 

It is in the end no more alluring than the garret, but more likely to lead to defeat and decadence and destitution. The young, hungry writer, dazzled by his own power, fueled by vanity and the need to pay for his place in the dance, is unable to see the reality or the danger of his position.

Reading Illusions perdues now, it is easy to see it simply as a cautionary tale of a writer trying to make it in the big city, or to defuse it in the way that contemporary critics did, as “a satirical view of the Parisian book trade” — or, perhaps worst of all, to enervate it using the jargon of critical theory as a “reflexive novel” (26). It is, of course, all of these things, but the energy and relevance of the book resides in its violently offensive qualities and Balzac’s willingness to expose the vanities and delusions of his own world. This is fissile material because the rivalries, motivations and conditions of writing and publishing that Balzac describes still exist, with updated variations and innovations. Balzac’s social networking among the literary and aristocratic elites of Paris, the control he attempted to maintain over his own image, the money he spent on champagne, tailored clothes and elaborate canes “of gold or rhinoceros-horn gleaming with precious stones” (27) foreshadow the marketing demands made of writers today: the need to publicise yourself on social media, to develop and protect your brand on behalf of publishers and agents, in order to secure future work. Writers become prisoners of their own profile, a situation intensified and limited by the very social media platforms they depend upon for success. In the same way that Lucien discovers, too late, that he is not really free to write what he thinks, writers today find themselves captured by the social and rhetorical limits and expectations of their niche audience. They must be careful what they write or say on all occasions and across all platforms to avoid losing their constituency of readers and allies and, indeed, prospects of work. At the same time they also need to provide a consistent stream of topical commentary, however unnecessary or perfunctory, in order to maintain visibility. Ideas and opinions, in this context, do not really exist for their own sake, but to advertise and protect the position of a writer within a social and professional network, just as in Restoration Paris. Finally, payment remains precarious, as writers are exploited and corrupted like Lucien was in 1821, while being forced to find income streams anywhere they can, like Balzac throughout his own career. Over this whole pitiless landscape hover the grey clouds of publishing schedules, author portfolios and corporate KPIs.

So Balzac was not sitting in splendid isolation composing a philosophical construction: Un Grand homme de province à Paris was not La Peau de chagrin; it was a declaration of total war on the literary elite of Paris and the nascent world of commercial publishing, gestating in the 1820s and still present now. Balzac took the material reality of professional writing apart piece by piece — from the production of paper to the economics of printing to the critical stratagems that make or break careers, and everything in between — laying it all out for his readers to look upon and, presumably, recoil from. This was not a literary experiment, or a joke: it had serious implications for him, both personally and professionally. Aside from the fact that he didn’t really write very good plays, Balzac did not stand a chance on the Parisian stage after Illusions perdues: the critics not only demolished him in their theatre reviews, but, in the manner outlined in his book, they paid for hostile audiences. “They want to scalp me,” he commented after The Princess of Modena opened to a carefully arranged empty house, “and I want to drink out of their skulls” (28).

The novel, then, both portrayed and embodied the mixed motivations of writers and writing: in the case of Balzac and his cast of hacks and geniuses in Illusions perdues this included the pursuit of money and fame, basic creative and erotic needs, the desire to dominate society and catalogue all its aspects, and the hunger for revenge. Here, vengeance overwhelmed material self-interest as he laid waste to the reputations of an entire class of Parisian writers. In this way, Illusions perdues displays the same reckless, spiteful, self-destructive aggression later seen in Norman Podhoretz’s 1967 memoir Making It. Like Balzac, Podhoretz had the bad taste to reveal the dirty secrets of his contemporaries: in this case, the fact that American writers in the 1960s, despite their bohemian pretensions, pursued wealth and prestige as ruthlessly as the despised business classes, but were just more hypocritical about it. For his intellectual honesty and Brooklyn brashness, Podhoretz was ostracised by the New York literary elites, and the reviews of Making It effectively ended his career as a fêted liberal intellectual, just as Balzac faced the injured pride and petulant rage of the Parisian critics for the rest of his life. But the attraction of Balzac’s and Podhoretz’s ugly books was that they ruthlessly stripped away hypocritical illusions by exposing the material relations and personal ambitions that underpin the world of writers and writing. To this end they functioned as valuable correctives to the colossal self-regard of the literary world.

  1. Quoted in André Maurois, Prometheus: The Life of Balzac, trans. Norman Denny (Pelican, 1971), p.404
  2. Honoré de Balzac, Lost Illusions, trans. Herbert J. Hunt (Penguin, 1971), p.275
  3. Honoré de Balzac, The Wild Ass’s Skin, trans. Herbert J. Hunt (Penguin, 1977), pp.118-19
  4. Lost Illusions, pp.231-2
  5. Ibid., p.287
  6. Quoted in Graham Robb, Balzac (Picador, 1994), p.58
  7. Quoted in Robb, p.90
  8. Maurois, p.409
  9. Lost Illusions, p.409
  10. Maurois, p.336
  11. Lost Illusions, p.202
  12. Ibid., p.229
  13. Ibid., p.245
  14. Ibid., p.290
  15. Ibid., pp.317-18
  16. Ibid., p.346
  17. Ibid., p.348
  18. Ibid., p.240
  19. Ibid., p.277
  20. Ibid., p.242
  21. Ibid., p.250
  22. Ibid., p.440
  23. Quoted in Maurois, p.128
  24. Lost Illusions, pp.311-12
  25. Ibid., p.300
  26. See Sotirios Paraschas, ‘Illusions perdues: Writers, Artists and the Reflexive Novel’ in The Cambridge Companion to Balzac (Cambridge University Press, 2017)
  27. Maurois, p.305
  28. Quoted in Maurois, p.490

On Marvin Gaye’s ‘Midnight Love’

In February 1981 Marvin Gaye moved to the seaside town of Ostend in Belgium as a guest of the concert promoter Freddy Cousaert. This was a rescue mission for Cousaert, as well as an unmissable opportunity: a fan of Gaye, he had found him holed up in the Britannia Hotel in London, depressed, physically ill, freebasing cocaine, surrounded by prostitutes and drug dealers. Running from one acrimonious divorce, a second collapsing marriage, record company conflict and enormous IRS debts, Gaye had traced a global route from L.A. to Hawaii to London and finally the cold shore of Belgium. Here, immersed in North Sea air and coastal tranquility, he got comparatively healthy: he liked the old world of Europe, so far from his family and Motown, the federal authorities and tax collectors, and in this unlikely haven he took up physical exercise again, cycling along the seafront and running along the beaches. He also started to play and produce new music, at European gigs arranged by Cousaert and at recording sessions with Odell Brown and Gordon Banks at Katy Recording Studios in Ohain. What eventually came out of this was a hit record for CBS that was trashy and banal but also struggled to hide the scars of deeper conflict and unhealthy impulses.

Midnight Love is in a lot of ways a bad record and is considered a sad (if lucrative) coda to the Motown run. But it is also a fascinating album that documents the closing stages of emotional and physical dissolution even as it aims for uncomplicated mainstream appeal. All of this conflict and contradiction is distilled on ‘Sexual Healing’ — the global, immortal smash hit that barely conceals the wreckage beneath its shining, fragile surface. Forty years later the disconnect between this song and the reality of its creation is complete, which was maybe the intention: the overall effect is like a disguise or a cover-up, smothering anguish between the sheets. The song is now so symbolic and ubiquitous that it is difficult to remember a world in which it didn’t exist to talk about sex in the most obvious way possible: it created a cliche. But the poison and anguish underlying it were only partially obscured by the exquisite exterior: listen carefully to the details and the context of its composition and you get something different altogether.

There is some debate about the origin of the lyric. In April 1982, Rolling Stone sent David Ritz to Ostend to interview Gaye and, as Ritz later wrote in his biography Divided Soul, “the song was born out of our conversation concerning pornography. Gaye’s apartment was filled with sadomasochistic magazines and books by George Pichard, a European cartoonist in whose drawings women were sexually brutalised. I suggested that Marvin needed sexual healing, a concept which broke his creative block.” Ritz claimed a lot for this conversation and sued for royalties, eventually receiving credit after settling out of court with the Gaye family, but a lot of other people who were there (including all the musicians involved) disputed the extent and details of his contribution. Whatever the case, the story makes as much sense as anything else because ‘Sexual Healing’ does not truly drag Gaye out of the dirt even though it superficially aspires to.

One of the defining things about Gaye as a singer, writer and producer is his preoccupation with sex. This side of his work is sometimes considered to be a decline from the socially committed peak of What’s Going On, but the personal drama of his 1970s opened up vast psychological and sexual terrain for Gaye to explore and resulted in his greatest records. Let’s Get It On (1973) began this period with an uncomplicated carnal statement, but this would be overwhelmed three years later by the symphonic masterpiece I Want You. On this album — recorded in 1976 in Gaye’s newly built recording studio that was partly modeled on Hugh Hefner’s Playboy pad — the ego was fractured through layers of multi-tracked vocals and overdubbed strings, horns, ARP synthesizers and percussion assembled by Leon Ware and Gaye. It was a lush, expansive sound that existed on the edge of desire: dissolving agency in erotic reverie, seeking spiritual transcendence, yet racked with self-loathing and insecurity. On the other side stood Here, My Dear, the 1978 album that formed part of his divorce settlement with Anna Gordy. This financial arrangement provided Gaye with unexpected artistic inspiration, driven by a combination of resentment and regret: he composed an elongated address to Gordy that was part paean and part assault and would leave her threatening to sue for invasion of privacy. It was a massive monument to spite and petulance, but also an exceptionally beautiful album about the painful knot of love, sex and money. Gaye relished the idea of Gordy listening to the likes of ‘Anna’s Song’, ‘You Can Leave, But It’s Going to Cost You’ and ‘When Did You Stop Loving Me, When Did I Stop Loving You’ but didn’t particularly want to be in the room when she did. Gordy later said, “I think he did it deliberately for the joy of seeing how hurt I could become.” Gaye replied, “All’s fair in love and war.”

Covering the same sort of territory as Let’s Get It On, I Want You and Here, My Dear, Midnight Love had a lot to live up to lyrically and sonically, and in the circumstances and on those terms it never stood a chance. But, unless you are a rock critic, that doesn’t matter at all: the album did different things, some successfully and some with real aesthetic aftershocks and thematic shadows in the unfolding decade. The creative core of the project was Gaye and the multi-instrumentalist and producer Gordon Banks, who recalled that “it was basically him and I in the studio. Columbia Records gave him some new toys to play with. They gave him two drum machines, a synthesizer called a Roland TR-808 and a Jupiter 8. Marvin didn’t know too much about technology so it was my job to figure out how to get the stuff working. He kind of liked the sounds that came from it and he went from there.” The album sounds just like this, too: a weird mix of casual lo-fi synthesizer experiments and complex guitar and vocal overdubs, with additional expenses such as the horn sections added by Harvey Fuqua back in L.A. The sound that Gaye and Banks got out of the Jupiter and the 808 had its own place in the world of New York Boogie, post-disco, Rick James and Prince; CBS executive producer Larkin Arnold also recalled that Gaye had been listening to a lot of reggae and Kraftwerk since his extended stay in London. Unlike his ARP-inflected 1970s work, Midnight Love was almost entirely synthetic and would help to ignite the era of electronic soul: Mtume’s 1983 smash ‘Juicy Fruit’, for example, was a direct descendant of ‘Sexual Healing’ (and another Arnold executive production). 

There is no doubt that ‘Sexual Healing’ is the outstanding product of this phase — nothing else Gaye did in what remained of his 1980s comes close. Strip away the years of radio rotation and descent into parody, strip away the lyrics, and you are left with beautifully layered, reggae-inflected synth-pop: there is a detailed poignancy to the track, with an undertow of pure melancholy, a sound texture that creates switches of tone and shards of melody, an effect verging on Pointillism. Add the lyrics back onto this and it becomes something else again: something much weirder than the lingerie and flowers clichés we now respond to or recoil from. The song is born out of sickness and despair, that much is clear (“I think I’m capsizing, the waves are rising and rising”), and the immediate remedy is to pick up the phone for “sexual healing” from a girlfriend — or even, the song suggests, a prostitute (“if you don’t know the thing you’re dealing…”). I’m “sick,” Gaye declares (down the phone), and you are “my medicine”: “open up and let me in,” he implores, “I can’t wait for you to operate.” He extends the medical metaphor until it sounds just off, unpleasant: the sexuality is diseased but can be cured by sex as medication or surgery, which reduces to his default treatments of the time: drugs, pornography, prostitutes.

This kind of clinical and damaged approach to sensuality and seduction is the undertone of the entire album. ‘Til Tomorrow’ is a good example of this, a ballad made out of analogue synths and a saxophone worn like a rented tuxedo. Gaye somehow makes this combination sound as flat and queasy as possible, thereby capturing a certain congealed romantic ideal: rose petals scattered over black satin sheets; red lace underwear over taut, toned skin; champagne in crystal flutes and lines of coke chopped out on glass table tops in rooms decorated with pastel stucco and Art Deco panels. It sounds sleazy, ill, creepy, like something out of The New York Ripper. It’s a conceptualization of sex that matches the most obvious ideals and routines of the time, but exposes their corrupt instincts and damaged impulses. ‘Midnight Lady’ is a rushed ‘Super Freak’ rerun that simply exists to evoke the crepuscular underworld of nightclubbing: “They tell me something’s going on in the backroom,” Marvin sings, “did you save a line for the ladies?” The multilayered vocal on ‘Rockin’ After Midnight’ tries to revive the carnal reveries of I Want You but sounds flat and empty over the fizzing electro-funk track, terribly brittle in comparison to its luxurious predecessor. Midnight Love was conceived as a cynical party record, a stab at commercial rather than spiritual resurrection, but the psychological ill-health and physical fatigue could not be concealed: it is evident in the sound and the edges of the words, not only on the skeletal synth experiments but also the exquisitely detailed hit single. In one way, it worked: CBS got their hit and Gaye returned in triumph to America, but the record also triggered the final unraveling of his entire life.

After Midnight Love things got even worse: hard to look at and to listen to, but still fascinating. Forced back on the type of tour he hated, Gaye resorted to tawdry stripteases at the end of his show: backing dancers ripped silk gowns off his back to reveal the former prince of Motown standing defeated in black underpants. He had been obsessed with two apocalyptic scenarios since the conception of his final Motown album In Our Lifetime?: nuclear war and the rise of Teddy Pendergrass. The former, at least, was not exactly far-fetched around the time of Able Archer 83 and KAL-007, but it led to drug-fueled Messianic delusions: his mission, he informed a stupefied music journalist who interviewed him in London, was to “tell the world and the people about the upcoming holocaust and to find all those of higher consciousness who can be saved.” In the meantime, he remained surrounded by drug dealers, prostitutes, guns and thugs, arranged in a ring of steel to protect him from the assassination attempts he was convinced were coming. He was living in this febrile, feverish atmosphere, still addicted to cocaine and porn, and by 1983 the relative health and stability of Ostend was a quickly receding memory. Locked in his sister’s apartment with Banks and a 4-track recorder, he continued further down the path set by Midnight Love but with added doses of paranoia, despair and disillusionment. ‘Masochistic Beauty’ was more low-voltage synthetic funk over which Gaye took the role of a sexual dominator, rapping in a mannered and slightly fey English accent, thereby uniting two things he loved: the British aristocracy and sadomasochistic pornography. ‘Sanctified Lady’ — originally titled ‘Sanctified Pussy’ — was Gaye’s final, obscene attempt to combine the carnal with the spiritual: the song, inevitably, communicated fatigue, sickness and contempt for the world, a vocoder-led provocation. He told Ritz: “It says: boyfriend here, girlfriend there. Herpes germs everywhere. Some girls do, some girls don’t. Some girls will, some girls won’t. I want a sanctified pussy.”

Hanging over Midnight Love and ‘Sexual Healing’ is the future, then, and this is where the poignancy and excess become unbearable: not just in terms of what happened next to Gaye, but what happened to everyone. In 1991, in ‘Junk Bonds and Corporate Raiders’, Camille Paglia wrote: “Everyone of my generation who preached free love is responsible for AIDS. The Sixties revolution in America collapsed because of its own excesses. It followed and fulfilled its own inner historical pattern, a fall from Romanticism into Decadence.” What Paglia directly addressed, and what was often missing in Gaye’s own life, was responsibility: for ideas and actions, their implications and outcomes. The ideals he sought in love and sex were purity, sweetness, monogamy and spiritual transcendence, but what he often courted or collapsed into was promiscuity, decadence, physical dissolution, emotional torment. From the healthy shag pile of Let’s Get It On to the sensual deluge of I Want You to the corrupt seduction of Midnight Love, Gaye’s aesthetic of sexual freedom — complex and conflicted in his case but essentially acting on and exploiting the libertarian impulse of the 1960s — would be damaged irreparably by HIV and AIDS. When Gaye released Midnight Love in 1982, the epidemic was only beginning to spread in San Francisco, New York and L.A., but its shadow falls back over the album and makes the sexual statements on it even bleaker. Through a retrospective lens and in this context, the medical metaphor of ‘Sexual Healing’ suddenly sounds darkly ironic and gruesomely articulate.


The Queen of Condé Nast

 

[Image: Anna Wintour]

I: Condé Nasties

On the surface, Anna Wintour took public humiliation well. It was easier when The Devil Wears Prada was just a book, even if it was a New York Times bestseller and progenitor of an entire literary micro-genre, the “Boss Betrayal” roman à clef. Lauren Weisberger’s 2003 debut was easy to dismiss as an extended employment grievance aired in public after the fact. It was bitter, entitled, devoid of wit and, crucially, of insight into the character and the industry it took aim at. However painful for Wintour personally or professionally, the novel was, in all the important ways, a failure. The real threat came from Hollywood: specifically 20th Century Fox, which quickly bought the rights to adapt the story and scooped up Meryl Streep for the role of Runway editor Miranda Priestly. To Wintour’s horror, Weisberger would join Peter Benchley and Mario Puzo as one of a select group of novelists whose most famous work would be made into far better blockbuster movies. The feared Vogue “editrix” was doomed to be parodied by a respected and loved Hollywood actress and she had no option but to put a brave face on it. Later she told Barbara Walters, “I thought the film was really entertaining. Anything that makes fashion entertaining and glamorous and interesting is wonderful for our industry. So, I was 100 percent behind it” (1). At the time, in order to make the point as sharply as possible — like an ice pick aimed directly at Weisberger’s skull — she attended an advance screening of the film wearing Prada.

Nobody doubted Wintour’s power. She warned every designer and fashion label beholden to Vogue to avoid collaborating with the film or face the consequences (in the event only Valentino dared to show his face on screen). While publicising the movie, Streep made sure that everybody knew that she had modeled her performance on somebody else entirely — somebody male, for good measure. Weisberger never publicly admitted that Miranda Priestly was based on Wintour and, in fact, gave the real-life editor a cameo role towards the end of the novel (albeit one that made snide reference to the time Wintour was spotted crying at a fashion show after the death of her father). All of this dissembling was transparent, but not futile: it was necessary to avoid a lawsuit. The portrait of Priestly very closely resembled Wintour, except for the working-class East End Jewish background, which was the novelist’s main fictional embellishment (“from Jewish peasant to secular socialite” was her story, for whatever reason, 2). Priestly’s Runway office, put together for the film by Jess Gonchor’s attentive and mischievous production designers, looked almost identical to its real-life model at Condé Nast HQ. As soon as the film came out, Wintour had her own office redesigned and refurbished.

This was a public humiliation for the Vogue editor: a high-profile trashing of the industry she represented, her management style and her personal relationships. The vindictive nature of Weisberger’s creation resided in its intimacy: the physical similarities between Priestly and Wintour (“willowy…perfect posture…casual but supremely neat,” 3) served to reinforce a more damaging psychological portrait:

How many girls had no idea that the object of their worship was a lonely, deeply unhappy, and oftentimes cruel woman who didn’t deserve the briefest moment of their innocent affection and attention? (4)

According to Vogue’s creative director Grace Coddington, the book did hurt her boss, despite Wintour’s public steel and bravado. It was “the bane of Anna’s life” and never quite left her personally or professionally:

Even ex-President Sarkozy mentioned it semi-jokingly in his speech at the official Elysee Palace ceremony in Paris before awarding her the Legion d’Honneur in 2011. But it’s not a joke. (5)

Plenty of people thought she deserved this, and it did not exactly backfire for Weisberger, who burned all of her fashion bridges with the act of writing but did not get sued, got paid for the movie rights and launched her own writing career off the back of the scandal. But if her aim had been to diminish or even destroy Wintour’s reputation then her failure was, in the end, total. Weisberger started a process that created a mass media icon out of a powerful magazine editor. The fact that you could reduce Wintour’s profile to a cartoon helmet bob and Chanel sunglasses, or a scenery-chewing Meryl Streep caricature, didn’t undermine her reputation or standing at all: in the event, it elevated it.

Wintour, as a true Machiavellian, was perfectly able to retake control of the narrative, manipulating the post-Devil fall-out to serve her own ends. In 2007, she invited a film crew into the Vogue offices to make a documentary about her. Again, this could have, and in one way actually did, backfire: chilly, imperious, poised and distant, Wintour proved less interesting for the filmmakers than Coddington, who provided them with a perfect, flame-haired, Romantic foil, stomping around the corridors complaining about her pictures being axed by the boss. R.J. Cutler’s The September Issue, released in 2009, was enjoyable, then, for this voyeuristic portrayal of a fascinating professional relationship, although it didn’t really have anything more interesting to say about Wintour herself or the fashion industry than The Devil Wears Prada did. But as a record of the creation of one of the most important magazine issues in the world, the film excelled. In the thick of this process, Wintour was supreme: decisive, focused, in control of all elements, balancing the competing interests of business and art. Coddington had the grace to concede this in her memoir: “while I am often approached in the street as a kind of heroine of the film about Vogue, to my mind the point of it was to show the creative push and pull of the way Anna and I work together” (6). Or, as Tina Brown wrote of her one-time rival within the Condé Nast empire:

Wintour’s computation — which was shrewd — was that after living through (and pretending to love) Meryl Streep’s queen bitch parody of her in The Devil Wears Prada, she had nothing to lose. She would embrace her inner vampire. Let the public see in Cutler’s movie how she daily massacres the muse of Grace Coddington, Vogue’s inspirational creative director, and sails around looking aloof in the back of her sleek black limo. At least they would also see how hard she works…(7)

In fact, watching beyond the vampire revealed more than this: despite the arguments, tears and corridor-stomping, the final September issue of 2007 was a visual triumph, and, in the end, dominated by the editorial work of Coddington. So the point was: Wintour’s control of the final product, in line with her strict directives and decision-making process, was the result of her ability to encourage, to bully, to inspire and to discipline the creative teams that she had put together in the first place. Like the other great Condé Nast editors of her time — Diana Vreeland, Beatrix Miller, Tina Brown, Liz Tilberis and Franca Sozzani — Wintour was dominant and sublime in the role, a rare combination of curator, director, producer and businesswoman. At the apex of two deadly and competitive industries — fashion and magazines — Wintour not only survived, she thrived. The spare white spines of all her Vogue editions collected together on the shelves of her Long Island home amounted to a formidable body of work, for which she was primarily responsible. 

So, maybe that didn’t make her a very nice person to work for.

II: Use Your Elbows

I’m the Condé Nast hit man. I love coming in and changing magazines. 
Anna Wintour (8)

If the old world of magazines was a state of war, then New York was the main battleground. Magazine warfare was intimate and intense because of the similarities between the combatants and the limited territory which they contested. The quintessential New York conflict was fought between the New York Intellectuals of Partisan Review, Commentary, Dissent and the New York Review of Books, “a feisty, battling community” (9) who would eviscerate each other in their magazines and at each other’s Upper West Side cocktail parties during their 1950s and 60s peak. The political and cultural distinctions being contested were so subtle and arcane that they could look absurd to the uninitiated (“I heard that Commentary and Dissent had merged and formed Dysentery,” quipped Alvy Singer in Annie Hall), but this only made the fighting more fierce. In the world of fashion magazines and related glossies, the battles were equally intense and parochial: Vogue at war with Elle and Harper’s Bazaar; Condé Nast magazines at war with Hearst magazines; Condé Nast magazines at war with each other. Fashion editors fought vicious battles over clothes, models, and photographers. No prisoners were taken. During the Wintour-Tilberis hostilities of the early 1990s, for example, the photographer Patrick Demarchelier signed his Vogue death warrant by joining Tilberis to revive Harper’s Bazaar. (Coddington had to tread very carefully in this battle as Anna’s key fashion editor and Liz’s best friend.)

As cultural artifacts, magazines — particularly fashion magazines — are often ignored, dismissed or simply forgotten, but that is a mistake. Unlike a book or a painting, magazines are not designed for posterity: their life is in the immediate world, responsive to rapidly shifting trends, alive to the intricacies and intrigues of the moment. They reflect the world as it is — or thinks it should be, or dreams it will be — at the time of their production and consumption. Their impact has a limited life-span and is driven by the competing demands of culture and commerce. For this very reason, magazines age well: old editions reveal details that are otherwise lost or written out of history. In particular, prestigious fashion titles like Vogue and Harper’s Bazaar now provide an unparalleled visual history of the twentieth century, bordering on the worlds of fine art and architecture and directly presenting the clothes, interiors, locations, photographic styles and beauty ideals of each year of each decade to the month. In an interview with the Guardian in 2006, Wintour proclaimed, “if you look at any great fashion photograph out of context, it will tell you as much about what is going on in the world as a headline in the New York Times” (10). She then lamely illustrated this by adding, “the clothes this season are very militant and urban, and have a sense of going into battle,” to the amusement of the interviewer. But she did have a point: Vogue presents the aspirations, desires and visual ideals of affluent global societies, and whether anyone likes it or not this is as relevant as the conflicts and poverty of failed states.

The fashion magazines prosecuted a war to define the moment: as Tina Brown might have put it in her Vanity Fair pomp, to divine and to decide “what’s hot.” Capture the month and you can define an era. The skill of the editor is to arbitrate this, to make the decisions about what will define both the present and the future. The rivalries are built on deadlines and the time limit of trends: the pressure of the fashion timetable simply compounds this. This is the environment in which Anna Wintour thrived. Her artistic and commercial instincts — and her intense personal focus on leading American Vogue — drove her through Harpers & Queen, Harper’s Bazaar, the defunct Bob Guccione title Viva and New York magazine, and finally into the arms of Condé Nast. This took twenty years, but the pace of work was ferocious and the ambition fierce. Each time she caused mayhem in the magazine offices where she landed and which she then abandoned, carving out and defending her turf by attacking the established traditions and expectations (in other words, her colleagues). Her success, however, has been driven by her basic ability to drag all of her magazines into the contemporary world, capturing the zeitgeist if not quite leading the vanguard (her commercial instinct always remained too strong for this). Unlike Tina Brown or even Liz Tilberis, Wintour is notorious for her inability to translate ideas into words and has never written anything for her magazines; her editorial talent is almost entirely visual and conceptual, linking together clothes, photography, graphics, layout and (later on) celebrity culture.

For Si Newhouse and Alexander Liberman, the owner and the creative director of Condé Nast respectively, Wintour’s primary asset was her own unique style of creative destruction. At British Vogue, House & Garden and finally American Vogue, they used her to revitalise aging titles, relying on her ruthless personnel management and design instincts to finish off their strategic dirty work. When Wintour arrived back in London to clean out and refresh British Vogue, the Times reported that Liberman had “instructed her to ‘use her elbows.’ A minister without portfolio, she sized up the situation and rather quickly became the jewel in the Condé Nast crown” (11). In the process, existing staff were sacked or quit and Wintour clashed with her two most important fashion editors: Coddington left to work for Calvin Klein on Seventh Avenue, while Tilberis bit her lip and submitted to Wintour’s directives (under Beatrix Miller both had enjoyed considerable artistic freedom). At American Vogue, working as creative director under Grace Mirabella, Wintour had tried to bring some of the experimentation and edgy energy that she had pioneered at Harper’s, Viva and New York to Mirabella’s rather tired, timid title; returning to British Vogue, she came to bring the bright, clean, commercial aesthetic and working habits of American Vogue to the formerly arty, ethereal, European fantasy world created by Miller, Coddington and Tilberis. (Private Eye reported that Wintour had telephoned managing editor Georgina Boosey to ask if “she knew of a gym that opened at 6 A.M. A little shaken, Boosey said no. ‘Well where do you all go?’ demanded La Wintour incredulously,” 12)

Meanwhile, looking at their stagnating American Vogue and spooked by the arrival of the young and experimental French Elle on U.S. shores (its sales quickly eclipsed Harper’s Bazaar), Newhouse and Liberman had already decided to dethrone Mirabella to make way for Wintour. This was the infamous July Fourth Massacre, which also took out society editor Margaret Case, a Vogue veteran who had worked for the magazine since 1926; devastated by the manner in which she was axed, Case committed suicide by jumping from her apartment window. (Condé Nast would become notorious for brutal, badly-handled dismissals.) In her new role — the one she had always dreamed of — Wintour was as ruthless as she needed to be, as was expected of her. Her first cover swept away the Mirabella era, replacing the formal, high-gloss parade of full-face profiles with a loose, Elle-like snap of Israeli model Michaela Bercu in faded jeans and a Christian Lacroix jacket, with tousled hair and a big grin (see above). Within the carefully calibrated visual lexicon of fashion magazines this was a grand statement of intent. “I wanted the covers to show gorgeous real girls looking the way they looked out on the street rather than the plastic kind of retouched look that had been the Vogue face for such a long time,” Wintour declared, thereby consigning Mirabella to oblivion (13). Gradually, and to the amazement and consternation of Mirabella, Coddington and Tilberis, Wintour began re-importing an experimental European sensibility into the mainstream commercial arena of American Vogue. This is where she excelled: she adapted to the demands of the moment and transformed the magazines that she took over (House & Garden, renamed HG under her reign, was her one notable failure), bringing them forward into the modern world and making money for the men who paid her. She was a well-rewarded assassin.

But this was not all about money. 

III: The Appearance of Things 

My mind is full of pictures, not words.
Liz Tilberis (14)

Before fashion, I love images.
Franca Sozzani (15)

In his 1972 book Painting and Experience in Fifteenth-Century Italy Michael Baxandall reprints a contract drawn up by the Prior of the Spedale degli Innocenti at Florence for a painting of the Adoration of the Magi, completed in 1488 by the Florentine artist Domenico Ghirlandaio. The document details precise specifications for the work, including the exact monetary worth of the ultramarine paint to be used. As Baxandall explains, ultramarine was the most valuable colour after gold and silver, but cheaper blues were often substituted for it, hence the anxiety felt by patrons and clients who wanted to display their wealth and power in public:

To avoid being let down about blues, clients specified ultramarine; more prudent clients stipulated a particular grade — ultramarine at one or two or four florins an ounce. The painters and their public were alert to all this and the exotic and dangerous character of ultramarine was a means of accent that we, for whom dark blue is probably no more striking than scarlet or vermilion, are liable to miss. (16)

This was a matter of prestige and utility. Painters participated in a commercial enterprise, with the content and materials chosen in advance by the purchasing party. The paintings that resulted are among the most famous and valuable works of art in world history: nobody doubts their worth. Their commercial origin and social function add to their power, rather than detract from it. As Baxandall wrote: “paintings are among other things fossils of economic life” (17).

Fashion photography retains this productive overlap between commerce and aesthetics; in some ways, it is the best contemporary example of the condition. For this reason, and because of a dismissive cultural attitude to clothes more generally, fashion photography and haute couture have only intermittently been examined as art. Grace Coddington, one of the greatest fashion editors and stylists of her era, had no time for such pretensions in her world, writing:

I certainly don’t think fashion photography is art, because if it is art, it’s probably not doing its job. Obviously, there is photography that sets out to be art, but that’s another story altogether. In fashion photography, rule number one is to make the picture beautiful and lyrical or provocative and intellectual — but you still have to see the dress. (18)

Unlike Italian painters of the fifteenth century, who produced bespoke paintings to order, Coddington was loyal to a late Romantic ideal of art cut off from any commercial role or public utility, based solely on the primacy and purity of the vision of the artist. But there has always been a fine line, if any, between the output of the visual artist and the commercial image maker, and from this perspective Guy Bourdin, Helmut Newton, Norman Parkinson, Cecil Beaton, Horst P. Horst, Richard Avedon, Irving Penn and Steven Meisel should be considered among the most important visual artists of the twentieth century. In different ways, their work seeped into and shaped the public imagination. Whether delivered by Louise Dahl-Wolfe for a Harper’s Bazaar cover in 1942 or by Guy Bourdin for a Charles Jourdan shoe advert in French Vogue in 1975, commercial magazines became the prime delivery channel for innovative visual aesthetics and new ideas of physical beauty. Fluxus had nothing on Vogue.

Fashion magazines are about visual communication in the service of a commercial industry. Their basic purpose, as Coddington noted, is to sell frocks. But on this basis a rich world of imagery was created with its own language and tradition, from the Helen Dryden illustrations decorating the earliest Vogue editions to the saturated panels of Mario Testino or the sexually subversive sittings of Ellen Von Unwerth. It was the magazine editors and artistic directors who created the space for this visual tradition to develop and thrive, often vicariously fulfilling their own artistic ambitions in the process. The key figure in this role for Condé Nast was Wintour’s own mentor Alexander Liberman, the White Russian émigré appointed to Vogue by Nast himself in 1941 and promoted to editorial director of all Condé Nast publications by Si Newhouse in 1962. Described by his friend and biographer Barbara Rose as “a Dostoevskian character…a larger-than-life gambler, adventurer and mystic” (19), Liberman was also a practising painter and sculptor throughout his Condé Nast career, creating enormous geometric steel sculptures and Hard Edge minimalist canvases that never quite received the critical recognition he yearned for. In 1962 he poached Diana Vreeland from Harper’s Bazaar, later saying, “she was the first editor to say to me, ‘You know this is entertainment’…In many ways she acted as a brilliant theatrical producer. She visualised Vogue as theatre” (20). The combination of Vreeland’s dramatic flair, Liberman’s highbrow pretensions and the photographic innovations of Irving Penn and Richard Avedon propelled American Vogue into its most visually creative and influential period, only terminated when the magazine began to hemorrhage money in the 1970s.

By this point, Liberman had grown more cynical about the idea of fashion magazines becoming enduring cultural products in their own right: “It’s a business after all. I’m not interested in the fashion magazine as masterpiece. We are not making magazines to be preciously saved. I like the discardable quality of life” (21). At the same time, Vreeland’s work at Harper’s and Vogue presented a model to be emulated by other ambitious and experimental editors, including Liz Tilberis and Franca Sozzani. Unlike Wintour — who despite her English upbringing was temperamentally and culturally an American editor, deploying her talents to the service of the commercial imperative — Tilberis and Sozzani stayed loyal to their European sensibilities, curating their magazines like galleries, elevating fashion to the realm of dream and fantasy. Taking over British Vogue after Wintour returned to America, Tilberis pointedly rebuilt her editorial teams on the principle of collaboration rather than dictation, trusting the instincts of her photographers and editors. Her first key editorial decision was to rescue a David Bailey portrait of Christy Turlington wearing a white Calvin Klein shirt loosely askew — a picture Wintour hated so much she had literally thrown it in the bin. This became Tilberis’ debut Vogue cover. In fact, the quality of British Vogue covers improved dramatically under her reign, as she let the dramatic or simply gorgeous images speak for themselves:

Sometimes, if a picture was strong enough, we’d ignore all commercial wisdom and use it with a single coverline, unwilling to intrude on the beauty of the image with hard-selling text. Our record low was two words: Helena Christensen in a long white dress leading a white horse through the desert, against a backdrop of bluest skies and the words “International Collections.” That would be unthinkable now. (22)

I also recall Sante D’Orazio’s stately cover portrait of Tatjana Patitz for the March 1989 edition, similarly unadorned by text, although my personal favorite Tilberis-era cover remains Patrick Demarchelier’s sumptuous, pouting portrait of Christy Turlington for the April 1988 edition (see above). Tilberis set out to revive the subtle experimentation nestled inside Bea Miller’s Vogue, pushing the boundaries of beauty and fashion:

I wanted to make my own statement with more cutting-edge clothes and smarter fashion writing — real explication about how things were changing and what the reader should do about it…I wanted a return to glamour. Anna liked the normal, and I liked the cutting edge. I felt what was lacking was the element of fantasy — the fancier, flashier, sometimes trashier, more extravagant, and even eccentric clothes. (23)

In a reversal of Liberman snatching Vreeland in 1962, the publishers at Hearst had paid close attention to the post-Wintour direction of Vogue and offered Tilberis the unprecedented opportunity to relaunch Harper’s Bazaar as editor-in-chief in 1991. Wintour considered this to be a major threat to her own status and to American Vogue, especially because Tilberis was offering something she refused to: a large measure of artistic freedom for her editors and photographers. The challenge attracted Patrick Demarchelier sufficiently to sign on exclusively with Tilberis, and the supermodels followed him. Tilberis’ approach carried over from her time working with Coddington under Miller and her own tenure as Vogue editor, but was also directly inspired by the revolutionary work of Carmel Snow, Vreeland and Alexey Brodovitch at Harper’s between 1932 and 1962:

Harper’s Bazaar was a laboratory, constantly reexamining and reinventing the magazine medium, using photographers like Hiro and Man Ray and launching the careers of Richard Avedon and Irving Penn, of whom Brodovitch would demand, “Astonish me.” (24)

Vreeland, Tilberis, Coddington, Sozzani and Wintour too, in her own way, all understood the importance of their photographers and jealously protected their relationships (and contracts) with them. Wintour had built her own career on discovering, employing and cultivating the loyalty of the best photographers available. As the fashion editor of Viva, for example — a magazine she was so ashamed to work for that she erased it from her own résumé — Wintour published the work of Demarchelier, Newton, Deborah Turbeville, André Carrara and Shig Ikeda (25). She fully understood the power of images and, therefore, the power of the image-makers; she pushed them to produce their best work for her, which, of course, burnished her own reputation and advanced her own career.

Fashion magazines are a complex collaboration of clothes designers, retail advertisers, graphic artists, writers and critics, stylists and copy editors, but the key to their ultimate success is the production of the right image. In this dynamic, the editor as artist, or curator, is central to the history of design and fashion imagery. Vreeland and Tilberis have their place in this pantheon, but the greatest of them all was Franca Sozzani, editor of Italian Vogue from 1988 until her death in 2016. The key to Sozzani’s success was her purely artistic ambition. She knew that her Vogue would be limited to the readership of the Italian peninsula unless it could excel in a different way: that is, through images. In a visual culture like Italy’s, this made perfect sense anyway. While Wintour began her tenure at Vogue searching for the perfect synthesis of art and commerce, ultimately breaking into the broader culture of celebrity and politics, Sozzani’s aim was different: she wanted to create the most visually beautiful and artistically experimental magazine of them all. She established an unbreakable bond with Steven Meisel and gave him every single cover of Italian Vogue until she died. She allowed photographers such as Bruce Weber and Peter Lindbergh licence to experiment, sometimes at the expense of the clothes. Within her corner of the Condé Nast empire she took the avant garde European aesthetic to its logical conclusion. She created something perfect and self-enclosed. Her magazines were works of art. You couldn’t always see the dress, but that didn’t really matter. 

IV: The Fashion Conspiracy

In the spring of 1975 Guy Bourdin produced an advert for Charles Jourdan that pushed the conflict between the editorial and commercial demands of fashion to the point of mutual destruction. His image depicted a sidewalk murder scene with the female victim’s silhouette outlined in chalk surrounded by pools of fresh blood. The shoes being advertised — a pair of pink wedges — were left scattered on the pavement, the glamorous detritus of homicide. Apart from all of the symbolism packed into the image itself — the links between voyeurism, consumerism, compulsion and violence — the advert presented in extreme form the tension between producing art and flogging clothes. In the work of Bourdin, this reached a rich and conceptual horizon later “critiqued” in Irvin Kershner’s 1978 thriller Eyes of Laura Mars, in which a model declares: “we’ll use murder to sell deodorant, so you’ll just get bored with murder, right?” (Kershner hired Helmut Newton to shoot parody Bourdin pictures as props for the film.) In his introduction to the 2001 Bourdin monograph Exhibit A, Michel Guerrin wrote:

In Charles Jourdan, he finds his Lorenzo de Medici, and sticks with the label that is the leader in sexy, top-of-the-line shoes. Jourdan’s fame allows Bourdin to stray from strict promotion to produce revolutionary ad campaigns. Up to this point, no brand name has been so closely identified with a photographer — even though the photographer in this case is apparently “abusing” the product…In his conviction that the image is stronger than the motive behind it, Bourdin forms a visual ethic that allows him to remove the nuts and bolts that hold together frivolity, glamour, ornament. (26)

Bourdin’s relationship with French Vogue editor Francine Crescent was the 1970s equivalent of the bond between Sozzani and Meisel: “[by] the time Crescent was named editor-in-chief in 1967, every single issue of Vogue featured an average of twenty pages of Bourdin’s pictures” (27). The freedom that Crescent and Jourdan gave Bourdin allowed him to push fashion imagery to the point of implosion: not only could you not see the dress, but the dress was often being, in the words of Guerrin, abused. Like the Sozzani era of Italian Vogue, in Bourdin’s work the editorial eclipsed the commercial.

However, as Diana Vreeland discovered, this kind of freedom was rare and finite. In the 1970s, as sales of American Vogue sank and the cultural moment moved beyond the theatrical escapism of the Vreeland era, Newhouse and Liberman wielded the axe. Grace Mirabella supplanted Vreeland as a more pliant vessel for the Newhouse Concept, a doctrine that prescribed the production of magazines in line with market research, circulation figures and the demands of advertisers. This became the model for all Condé Nast publications, leading to an ambiguous overlap between advertising and editorial that culminated in the “outsert” — expensively produced advertising supplements that used elite photographers, designers and models to mimic editorial content and thereby disguise commercial intent. This was a development that undermined the integrity of Vogue under Wintour and was mirrored at Vanity Fair under Tina Brown, who faced similar ethical questions over subservience to Hollywood moguls in the eternal quest for A-list access. The Newhouse Concept was patented at American Vogue under Mirabella, who dutifully created a magazine that catered to mainstream interests and tastes: a commercial product designed to sell merchandise and advertising space. This was the period of Liberman’s newly acquired cynicism: the Vreeland years were no longer seen as an artistic triumph, but as a commercial failure, a wrong turn. Wintour’s place in this strategic drama was unique: she came to rest at American Vogue in a space that existed between Grace Mirabella and Franca Sozzani.

One of the key scenes in The September Issue shows Wintour hosting the Vogue Retailers Breakfast at the Paris Ritz, an annual forum that she established and which allows representatives of the major American retail outlets to meet the Vogue editorial team. In the scene, Burton Tansky, CEO of Neiman Marcus Stores, speaks on behalf of all the retailers present when he asks Wintour to request that designers make smaller collections, to allow faster delivery times and speed up the entire supply chain. Wintour agrees. This is the scene that partially exposes the machinery of the fashion industry — the supply chains beneath the surface of Grace Coddington’s sittings — and Wintour is completely at ease and in control of all the details. The brilliant Cerulean Sweater monologue in The Devil Wears Prada (actually written by Aline Brosh McKenna, not Weisberger) also touches upon the commercial networks that underpin fashion fantasy, providing the film’s transcendent Gordon Gekko ‘Greed is Good’ moment as Priestly takes apart the pretensions of her assistant, Andrea:

This “stuff”? Oh, okay. I see. You think this has nothing to do with you.

You go to your closet and you select, oh I don’t know, that lumpy blue sweater, for instance, because you’re trying to tell the world that you take yourself too seriously to care about what you put on your back. But what you don’t know is that that sweater is not just blue, it’s not turquoise, it’s not lapis, it’s actually cerulean.

You’re also blithely unaware of the fact that in 2002, Oscar de la Renta did a collection of cerulean gowns. And then I think it was Yves St Laurent, wasn’t it, who showed cerulean military jackets? And then cerulean quickly showed up in the collections of eight different designers. Then it filtered down through the department stores and then trickled on down into some tragic “casual corner” where you, no doubt, fished it out of some clearance bin. However, that blue represents millions of dollars and countless jobs and so it’s sort of comical how you think that you’ve made a choice that exempts you from the fashion industry when, in fact, you’re wearing the sweater that was selected for you by the people in this room. From a pile of “stuff.”

This put a glossy spin on a process that Nicholas Coleridge first detailed in his 1988 book The Fashion Conspiracy and which has since accelerated to the point of environmental and social meltdown. Coleridge’s book is now out of print and virtually forgotten, but it is one of the best books on fashion ever written: in luxurious if slightly snooty prose, Coleridge meticulously opened up every corner of the fashion industry as it existed in 1988, from sweatshops in Seoul and Brick Lane to Seventh Avenue design titans, the fashion victims of Manhattan society, the influx of avant garde Japanese fashion and the ongoing supremacy of Paris Couture. Coleridge wrote:

[T]he fashion conspiracy is not simply a conspiracy of expensive clothes being marked up around the world, it is a conspiracy of taste and compromise: the prerogative of the international fashion editors in determining how the world dresses, and how their objectivity can be undermined, the despotic vanity of the designers and ruthlessness of the store buyers in distributing their immense ‘open to buy’ budget. (28)

(Coleridge would go on to become the Editorial Director of Condé Nast’s UK division in 1989: Tilberis hated him, although she never mentions whether The Fashion Conspiracy influenced her opinion in any way.) If the word ‘conspiracy’ seemed to be a stretch for the world that Coleridge described in 1988, then he was writing on the cusp of developments that would fully befit the description. The rise of post-NAFTA outsourcing, fast fashion and internet retail would all follow in the 1990s, breaking up production and supply chains across continents and destroying or transforming established names and norms across fashion. Mobile technology and social media further accelerated the process, speeding up the transmission of new looks and offering fresh opportunities for theft and piracy. The trickle-down process that Miranda Priestly outlines does exist, but her description elides the real costs: offshoring production to China and Mexico and the destruction of jobs following NAFTA; horrific conditions inside sweatshops in L.A., Honduras, Vietnam and Bangladesh; the pollution of cotton production and the industrial scale waste of the ‘instant fashion’ system invented by Zara and copied across the globe. This is the underbelly of the industry detailed by Dana Thomas in Fashionopolis, which is like a late and at times apocalyptic update of The Fashion Conspiracy. Who knows how many borders the unfortunate cerulean sweater crossed, or in what conditions it was made? 

But for Wintour, like Miranda Priestly, this is a positive story: “the more people who can have fashion, the better” (29). Wintour presides over this world of production and consumption with imperial impunity, at the apex of haute couture and the apparel supply chains. Equally aware of the cultural impact of a powerful image and the economic reality of retail logistics, Wintour is in this sense the perfect leader for Vogue and by extension the entire fashion industry. Apart from her personal Machiavellian qualities, Wintour’s rise and longevity at the top can be explained by her ability to understand and accommodate both the editorial and the commercial without necessarily sacrificing one for the other. A creature of industry, society, aesthetics and magazines, she also retains a vast blind spot, whether by intention, necessity or temperament. The more people who can have fashion, the better: fine. But more fashion produced on the old supply chains, through the traditional methods, to the seasonal rhythms of the established industry means, in reality, depletion of resources, pollution and death. The old fashion industry combined with current levels of supply and consumption is, simply put, a death trap. In this context, Bourdin’s 1975 Charles Jourdan murder scene acquires a new and existential dimension.

V: Machiavelli in Manhattan 

It should be observed here that men should either be caressed or crushed.
Machiavelli, The Prince (30)

When I say that Anna Wintour is a Machiavellian I don’t really mean it in the way that the term is generally understood, as shorthand for deviousness or cynicism, or the facilitation of tyranny: I am talking about the ruthless use of power for virtuous ends and the art of leadership in a world where man is more inclined to do evil than good. Wintour’s own stated goals have always been consistent: to democratise fashion and increase the prestige of the industry while maintaining the position of Vogue and Condé Nast at the top of the magazine pile. Since she has been a major factor in the achievement of all of these goals during the course of her career, it only seems fair to take her motivations at face value. Of course, during that career, and in pursuit of these goals, she has acquired enormous power within the Condé Nast empire and over the whole of the fashion industry. She is sometimes called the most powerful woman in America. To reach this peak she has essentially practiced a form of Machiavellian statecraft. Bodies lie everywhere, and her methods have not always been attractive or even successful. If the result of your personnel management is as damaging as The Devil Wears Prada then something has, after all, gone awry.

Yet even a crisis like this can be turned into an advantage by a cunning and pragmatic leader in the Machiavellian mould and, as we have seen, this is precisely what Wintour did. Her Machiavellian response was The September Issue, a risky move that relied on converting a piece of luck (Cutler’s offer) into a successful counter-tactic dedicated to an ultimate strategic aim: consolidating her position at Condé Nast. The momentum of Wintour’s career has been maintained by her ability to both crush and caress competitors and collaborators and by her unfailing ability to capitalise on good fortune. This can be seen in the course of her working relationships with Tilberis, Coddington, Newhouse and Liberman and in the way that she took every opportunity offered at Harpers & Queen, Viva, New York magazine and Condé Nast to move towards her ultimate goal of American Vogue. Of her two major rivals, neither remains in the field: Tina Brown imploded, flouncing out of Condé Nast to found the ill-fated Talk magazine with Harvey Weinstein and never really recovering her touch; Liz Tilberis died of ovarian cancer in 1999, without seeing her revolution at Harper’s through to its conclusion. Wintour used her wiles with Newhouse to influence the selection of Graydon Carter — by this point an unthreatening Wintour ally — to helm Vanity Fair as Brown moved on to the safe pastures of The New Yorker. This piece of internal turf war was the culmination of the larger Condé Nast war that had raged between the “power booths” at the Royalton restaurant — the fabled ‘Condé Nast cafe’ (31) — throughout the 1980s and 90s.

The ultimate goal for both Brown and Wintour, beyond the magazines they ran, was Alexander Liberman’s job. Where Brown lost patience and issued ultimatums, Wintour kept her ego in check, used her influence discreetly and with the correct people, neutralising threats by ensuring allies got promotions. When she did, finally, start to lose patience, she expertly leveraged a rumour that Barack Obama was going to make her his ambassador in London. This ploy terrified the bosses: in 2013 Wintour was offered the bespoke position of Artistic Director of Condé Nast, a role combining the oversight and responsibilities once held by Newhouse and Liberman. This was her ultimate victory in the magazine wars: as Brown conceded at the end of her memoir, “Oh, and yes: Anna Wintour reigns forever as queen of Condé Nast” (32). But having reached the summit, Wintour now faces a world in which Condé Nast has been ruthlessly cut down to size, as magazines have lost lustre, influence and sales to digital platforms and social media. The Condé Nast website no longer lists its diminished roster of titles as magazines but as “brands” — intellectuals no longer write essays for The New Yorker magazine, they contribute content to The New Yorker brand (incidentally, the only Condé Nast title now making a profit). As Reeves Wiedeman noted in a 2019 New York magazine profile, Wintour is still seen as the one indispensable person at Condé Nast (“someone whose eventual departure…will spell the company’s doom”) despite presiding over a steep and consistent commercial decline. As always, on the surface, she puts on a brave face and provides a positive spin, like the distinctively busy, beaming, leaping models of her 1980s Vogue magazine editorials: “How joyous to think about the future and what’s new and what’s next” (33). But, for once, her timing is off: Wintour reached the top of the Condé Nast empire at the precise moment it began to fall apart.

  1. Wintour interviewed on Barbara Walters Presents: The 10 Most Fascinating People of 2006: https://www.youtube.com/watch?v=EEkmKyBzDOE
  2. Lauren Weisberger, The Devil Wears Prada (HarperCollins, 2003), p.40
  3. Ibid., p.21
  4. Ibid., p.266
  5. Grace Coddington, Grace: A Memoir (Chatto & Windus, 2012), p.259
  6. Coddington, p.243
  7. Tina Brown, ‘How Anna Turned It ‘Round’, The Daily Beast, September 11, 2009: https://www.thedailybeast.com/how-anna-turned-it-round
  8. Wintour quoted in Thomas Maier, All That Glitters: Anna Wintour, Tina Brown, and the Rivalry Inside America’s Richest Media Empire (Skyhorse Publishing, 2019), p.84
  9. Alexander Bloom, Prodigal Sons — The New York Intellectuals & their World (Oxford University Press, 1986), pp. 4-5
  10. Emma Brockes, ‘What Lies Beneath’, The Guardian Weekend, May 27, 2006
  11. Quoted in Jerry Oppenheimer, Front Row: Anna Wintour (St Martin’s Press, 2005), p.231
  12. Ibid., pp. 238-9
  13. Ibid., p.288
  14. Liz Tilberis, No Time To Die (Orion, 1998), p.172
  15. ‘Franca Sozzani, Editor in Chief of Italian Vogue, Dies at 66’, Vogue obituary: https://www.vogue.com/article/franca-sozzani-obituary
  16. Michael Baxandall, Painting and Experience in Fifteenth-Century Italy (Oxford University Press, 1974), p.11
  17. Baxandall, p.2
  18. Coddington, p.260
  19. Rose quoted in Maier, p.26
  20. Liberman quoted in Maier, p.32
  21. Ibid., p.46
  22. Tilberis, p.154
  23. Ibid., pp. 147-152
  24. Ibid., p.169
  25. Oppenheimer, p.130
  26. Michel Guerrin, ‘An Image by Guy Bourdin is Never Serene’ in Guy Bourdin, Exhibit A (Bullfinch Press, 2001)
  27. Alison M. Gingeras, Guy Bourdin (Phaidon, 2006)
  28. Nicholas Coleridge, The Fashion Conspiracy — A Remarkable Journey through the Empires of Fashion (Heinemann, 1988), p.4
  29. Wintour quoted in Dana Thomas, Fashionopolis — The Price of Fast Fashion and the Future of Clothes (Apollo, 2019), p.35
  30. Niccolò Machiavelli, The Prince, trans. Russell Price (Cambridge University Press, 1988), p.9 
  31. Maier, p.115
  32. Brown quoted in Maier, p.234
  33. Reeves Wiedeman, ‘What’s Left of Condé Nast’, New York Magazine, October 28, 2019: https://nymag.com/intelligencer/2019/10/conde-nast-anna-wintour-roger-lynch.html

 


American Carnage: Sarah Palin’s Revolution

 


I: Family 

Sarah Palin was like a comet zooming across the American sky: out of Alaska, a blazing vision of the republic’s future. In the thick of this current political era, it is worth re-watching the first national speech that she ever made, flanking John McCain as his Vice Presidential candidate in 2008. The cliche goes that when America sneezes, the world catches a cold, and so it was that the central characters of that electoral drama (Hillary Clinton, Barack Obama and Palin herself) attracted the attention of a global audience that considered this to be their own contest, too. At the time, it seemed obvious to quite a lot of people that Sarah Palin’s opinions on birth control and the origin of the world had a direct link to their futures, wherever on earth they actually happened to be. But despite this immediate global response, nobody, not even her direct political opponents or McCain himself, quite understood the scale of her national impact in that election. Once it was over, and Obama had triumphed, most observers thought Palin was finished on the national stage, sufficiently humiliated and destined to see out the rest of her Governorship safely back in her office in Anchorage. They were wrong.

That first speech took place on August 29 in Dayton, Ohio. It was the moment McCain revealed the big secret that he had been keeping for less than a week, such was the instinctive and impetuous nature of the decision to pick the “little known” (1) Governor of Alaska to be his running mate. The point of the pick was that Palin was not just any governor: she was chosen by McCain’s key advisers to revitalise a stagnating campaign and to energise unenthusiastic party supporters and activists. That was her function — reductive in its objective, as Palin soon realised. In fact, McCain’s team miscalculated, because Palin did not turn out to be the pliant, professional political drone they expected to mould and exploit. She was both unwilling and unable to meet their expectations or play their game. They found themselves managing a political candidate with an emotional and ideological connection to the activists they wanted to mobilise, but also a stubborn tendency to go rogue. The tie that she had with grassroots Republicans was, it turned out, both deep and dangerous for the GOP establishment, and would eventually ruin them. For the moment, the first and only clue to this future was the undeniable and overwhelming enthusiasm of the crowd for her, which disrupted McCain’s introduction. Watching the video, it is quite hard to read his facial expression when this happens: it may be easy, in retrospect, to notice that he doesn’t look entirely happy, but for anybody who remembers it, and I do, this was a big shot of adrenaline for his team. It just wasn’t, in the end, meant for them.

The key thing that McCain did not grasp about Palin is that she really would come to Washington as an outsider. This wasn’t a pose, or a studied, tactical position: it was the only way she knew how to operate. It would be a central theme for the rest of her career and would eventually be a threat to McCain, who was, despite his brand reputation as a maverick, a consummate Washington insider like Joe Biden and Hillary Clinton. The nightmare they would all eventually face in the form of Trump had its genesis in the contemporary political character that Palin patented. This first national speech was embryonic in many ways, but also remarkably self-assured, briskly recalling the major accomplishments of her career to date and anticipating some of her future themes. She was confident, almost serene, and at ease with an audience that she instinctively knew was on her side (more than it was for McCain, as perhaps she realised). She was fearless in identifying, on a national stage, the enemies she had fought in her home state: the special interest lobbies, Big Oil and “the good ol’ boys network” that had included Alaska’s most powerful Republican leaders. Her worldview at this point was still parochial, formed by and focused on her family, her town and battles she fought against the local political and business elites. But she was already starting to connect the details of her regional experience to the country at large: Washington D.C. would be her new Juneau. She was mocked, early and then relentlessly, for her ‘folksy’ mannerisms and her parochial worldview, but even in this very first speech she was light years ahead of McCain and the Republican hierarchy as it existed then. She was already tapping into populist rhetoric and, therefore, signalling to a Republican grassroots movement that had been leveraged before but never effectively represented by anybody in the way that Palin would. She was their future.

For Palin, like her heroine Thatcher, the foundation of society was not the state, but the family. So the first thing she did, in the first big national speech of her career, was to introduce every single member of her family to the world. On top of this, there were the larger ‘families’ that she would court. First came those Republican activists, who immediately and instinctively took her as their champion, their Hockey Mom and Mama Grizzly, a role which she relished, to the increasing fury of the McCain campaign staff. Second was the ultimate family: the people, that quasi-mythical mass of ordinary Americans, who were being screwed over and oppressed by the political and business and media elites that she would, increasingly, take aim at, to the fury (again) of the McCain campaign staff. In fact, Palin’s political character made perfect sense for anybody familiar with Alexis de Tocqueville’s Democracy in America. The key themes that Palin hit upon, straight away, without prompting, and relentlessly, were the key themes identified by Tocqueville in 1831 as the foundations of the republic: family, religion, free association, limits on federal government and the sovereignty of the people. This was Palin’s platform and not one part of it was studied, artificial or an affectation. She was a natural politician not because she could make deals in corridors, cultivate networks, or leak to the media, but because she could translate her own values into a political package and communicate it to a core constituency. McCain could never do the last part. No one could do it in the way that Palin would in 2008, and from this moment she became a transformative figure: a national representative without office who unleashed and focused the full power and potential of grassroots Republican activists. 

Three central stages marked this journey, which terminated with Trump: the Alaska Governorship, the 2008 presidential election campaign and the emergence of the Tea Party movement. Each stage helps to explain what Palin did and why she did it.

II: Corruption

Policy, not politics.
Sarah Palin, Going Rogue (2)

There was a twist in this first speech: a record that Palin refused to obscure. As Governor of Alaska she had been a success, and her success had been based on pragmatism, reform and bipartisan collaboration. She was proud of what she had achieved, and how she had achieved it: it was, at that point, her whole political story, so she was happy to present it. But, to achieve what she did, she had to take on the Republican establishment of Alaska and the three oil companies that had effectively controlled the Alaskan legislature since 1981: BP, ExxonMobil and ConocoPhillips. She also had to work with her ideological opponents.

To understand what happened here it is useful to go back to her own account of this time, as narrated in her autobiography Going Rogue. Palin was a child of the Last Frontier, growing up with the mindset of a colonial pioneer: her parents had moved from Idaho in 1964 to build a new life in the northern wilderness, settling in the ‘Gateway to the Klondike’, Skagway. Alaska is a huge territory, covering one fifth of the American landmass, and its social and economic viability rests on vast and only partly tapped energy resources. 1968 was a defining year for the 49th State, but this had nothing to do with student protests or the counterculture: it was the year that oil deposits were discovered at Prudhoe Bay on the North Slope. This transformed the economic prospects of Alaska, but also its political reality, which became increasingly dependent on Big Oil. This reached a climax in 1981, when the oil companies mobilised their allies in the Statehouse to revoke the existing Corporate Income Tax. From that point on they successfully lobbied and bribed both Republicans and Democrats to ensure that taxes remained low and regulation remained light. The remote nature of Juneau, which can only be accessed by boat or plane, encouraged corruption by creating a febrile, insular, secretive, sleazy political culture that Palin would later compare to Washington D.C. The symbolic scandal of this era unfolded in 2007, when an FBI sting operation uncovered key Republican legislators accepting bribes from the CEO of VECO Corporation (a major oil field services company) in his own personal suite at the infamous Baranof Hotel. The Corrupt Bastards Club, as these Republicans laughingly called themselves, would be found guilty of conspiracy, extortion, bribery and fraud. Some went to prison, no longer laughing.

This was corruption and conspiracy that went to the very top. In her role as chairman of the Alaska Oil and Gas Conservation Commission in 2003–4, Palin filed an ethics complaint against Randy Ruedrich, a fellow commission member and GOP chair for Alaska. Ruedrich was closely allied to State Governor Frank Murkowski, who was also closely connected to Big Oil. Murkowski had alienated Alaskans by arranging for his daughter to replace him in the Senate and buying an expensive private jet with state funds which, it turned out, couldn’t even operate properly on Alaskan terrain. (“That darn jet,” Palin fumed in Going Rogue, “After I was elected, I listed the thing on eBay and an agent finally sold it,” 3) But the Governor’s big error proved to be a deal he cut exclusively with the oil companies to build a new gas pipeline, which he announced in October 2005. This was a deal that was negotiated in secret and mired in corruption; it gave almost everything to the oil companies and nothing to Alaskans. The unravelling of this appalling fix was the making of Palin, who annihilated Murkowski in the Republican primary for Governor, running on a platform of ethical reform, fiscal conservatism and resource development. In office, she put the gas line out for private tender and replaced Murkowski’s oil tax with Alaska’s Clear and Equitable Share (ACES), a scheme which reaped rich financial rewards for the state. In 2008, when the Lower 48 ran massive budget deficits and were forced to make huge spending cuts, Alaska recorded a $12 billion surplus as a direct result of Palin’s scheme. She had directly challenged her own party leaders, a challenge which necessitated a temporary alliance with Democrats and Independents and seemed almost certain to terminate her political ascent within the GOP. In fact, she had created a narrative that would eventually propel her onto the national stage and capture the political imagination of Republican activists across the country. She had proven to be a pragmatic, bipartisan, reforming politician, and this was attractive to McCain and his team. But she had also proven willing, and very able, to take on the political and party establishment, and to do it, crucially, on behalf of the people of Alaska.

In Going Rogue, Palin cites the Tenth Amendment and Thomas Jefferson in defence of her own belief in the primacy of local and state government against federal administration, a principle that would endear her to the emergent Tea Party movement in 2009. All of this fed into the image she began to create of an independent-minded maverick, an outsider opposed to “infernal” political machines and “politics-as-usual” (4). Michael Ledeen was the most perceptive commentator of all when he wrote, “for the first time in memory, we have a major candidate who comes from the frontier […] a world that’s almost totally unknown to the pundits, which is why the commentary has been so unhelpful” (5). The idea — the ideal — of the Last Frontier is the key to Palin’s career. Her political imagination was formed in its small, independent frontier towns and the northern wilderness, far from the reach of national politics, yet also partly owned by and dependent on the federal government (a source of resentment in itself). Some of the best passages in Going Rogue are her descriptions of life in Alaska: the practicalities of living in remote territory; the vivid landscapes of volcanoes, mountain ranges, forests, glaciers and lakes; the short summers and the Northern Lights; the abundance of caribou, moose, beluga and killer whales, bears, ospreys and eagles. Underlying all of this are the Alaskans themselves, united by their state constitution:

Our state constitution stipulates that the citizens actually own our natural resources. Oil companies would partner with Alaskans to develop our resources, and the corporations would make decisions based on the best interests of their shareholders, and that was fine. But in fulfillment of my oath, I would make decisions based on the best interests of our shareholders, the people of Alaska. (6)

This, alongside her Tocquevillian understanding of American character, explained her success as Governor of Alaska and her subsequent bond with grassroots Republican activists across America. For Palin, it all built from the bottom: individuals, the family, private associations and local government are the practical and moral foundation of the republic. At the top, the Constitution remained the framework and safeguard: in 2020, she still felt able to call herself “a hardcore Constitutional conservative” (7), despite the distractions of Trump. In between, the federal and state governments posed the gravest threat to individual liberty and prosperity: in Going Rogue she called Juneau a “swamp”, and the 2008 election schooled her in the Washington party machines. Her personal political philosophy — a mélange of evangelical Christianity, Jeffersonian republicanism, Jacksonian democracy and free-market Reaganism, all mixed together in Skagway, Wasilla and Anchorage — would later help to define the Tea Party and place her at the vanguard of a grassroots revolution against the party elites.

III: Elites 

Opposition makes humanitarians forget the liberal virtues they claim to uphold.
Christopher Lasch, The Revolt of the Elites (8)

On August 19, 2020, Palin appeared on Tucker Carlson Tonight and was asked by the smirking host to comment on the endorsement of Joe Biden by former McCain campaign strategist Steve Schmidt. Palin did not hold back. She wasn’t surprised by this development at all, she said: Schmidt was “a piece of work” who, along with Nicolle Wallace, the other senior McCain adviser, had “jumped ship early” and sabotaged their own campaign in 2008. They were “wolves in sheep’s clothing,” she continued, “and for those of us who are victims of what they are capable of, it’s kind of a vindication. They were not on our side to begin with” (9).

The irony is that Steve Schmidt was instrumental in convincing McCain to pick Palin in the first place. Clearly, he quickly came to regret this decision, which was made without serious vetting and put his professional reputation at risk. But Palin’s accusations have always been specific: both Schmidt and Wallace set up interviews for which she was left deliberately unprepared, in order to undermine her credibility and, ultimately, the chances of the McCain-Palin ticket winning. The Katie Couric interview, in particular, was a searing experience for her, a national humiliation, the stuff of nightmares, that she was led into by Wallace for reasons that appeared to be incidental to the campaign strategy itself. Then, to pile on the pain, Schmidt and Wallace’s version of events, and their opinions of her, were presented as the factual record in John Heilemann and Mark Halperin’s anonymously sourced, gossipy bestseller Race of a Lifetime. Palin hated the book, although she was not the only one: the Clintons, the McCains, Biden and John Edwards also had plenty to complain about. Her portrayal in it was only slightly more nuanced than Tina Fey’s caricature of her as a dumb hick on Saturday Night Live. The language used by Heilemann and Halperin all served to build a picture of a psychologically damaged liability: Palin was “a big-time control freak”, “maniacal”; she “shouted”, “screamed”, “fumed”; her eyes were “glassy and dead”; she seemed to be “suffering from postpartum depression or thwarted maternal need”; she was “mentally unstable” and “irrational”, “a hick on a high wire” (10). This was information fed by participants and other interested parties, amplified by the creative licence of professional writers; either way, the character assassination was brutal and bordered on misogyny. The McCain campaign had collapsed for a number of reasons, which included an inability to prepare Palin properly or allow her to play to her strengths, but McCain’s advisers were very quick to frame a narrative in which Palin was assigned all blame for the loss, deflecting from their own failures. This narrative was fed directly to media contacts and capped by the portrait in Race of a Lifetime, although even Heilemann and Halperin could not go all the way with it. “The truth was,” they wrote:

the McCain people did fail Palin. They had, as promised, made her one of the most famous people in the world overnight. But they allowed her no time to plant her feet to absorb such a seismic shift. They were unprepared when they picked her, which made her look even more unready than she was. They banked on the force of her magnetism to compensate for their disarray. They amassed polling points and dollars off of her fiery charisma, and then left her to burn up in the inferno of public opinion. (11)

From Palin’s perspective, she was the victim of the Washington D.C. elite: the party machines, the political culture of advisers and strategists, linked to their allies in the media. In Going Rogue, Schmidt is accorded some respect as a ruthless political operator, fulfilling his brief the only way he knew how, but it is Wallace whose betrayal is presented as more personal and wounding. Palin, writing in 2009, said she had been set up by Wallace, who was friends with Couric and was doing her a professional favour by landing a scoop. But “the scoop” was to stitch Palin up: in a rolling ambush, the interviews were framed, conducted and (crucially) edited to hurt her and to boost Couric. “The sin of omission,” wrote Palin afterwards, “was glaring” (12). The Couric interviews, in combination with Fey’s SNL caricature, trashed the reputation of a Governor who had left Alaska with the only state surplus in the country during a global economic crash. Heilemann and Halperin wrote, revealingly:  

there had never been anything quite like the Fey-Couric double act: two uptown New York ladies working independently but in tandem, one engaged in eviscerating satire, the other in even handed journalism. The composite portrait they drew of Palin was viral and omnipresent. The sparkle of celebrity made it irresistible, and devastating. (13)

Palin’s recollection of this would harden, and by the time she appeared on Tucker Carlson Tonight she viewed the actions of Wallace as deliberate sabotage, orchestrated in a professional pact with Couric. The underlying dynamic, however, was implicit, and toxic, as Palin clearly recognised:

It wasn’t that I didn’t want to – or as some have ludicrously suggested, couldn’t – answer her question; it was that her condescension irritated me. It was as though she had suddenly stumbled on a primitive newcomer from an undiscovered tribe. (14)

In a way, from Couric’s perspective, she had. The attitude of the media and political class of Washington D.C. to the Republican activists whom Palin represented, and more widely to the populations of the South, the Midwest and Alaska, was one of condescension: they did, actually, consider these people to be primitive and tribal. Palin’s supporters inside the GOP could see this very clearly, and very early: already, by the time of the Republican Convention in Minnesota, they were turning their anger on the press boxes precisely because of the way the press had treated Palin. This was the divide that would eventually save Palin and fuel the Tea Party and the populist takeover of the GOP.

This wasn’t simply instinctive or nativist, but linked back to the practical political foundations of the republic as well as the original division between the East Coast urban elites and the frontier pioneers of the nineteenth century. For Palin this divide persisted in contemporary form, between “small town America” and Washington D.C. It wasn’t just political: it was philosophical, psychological, even moral. The soul of the country, and its legacy of freedom, could be found in local government and private associations, as Tocqueville had discovered in 1831. “In the township, as well as everywhere else,” he wrote, “the people are the source of power; but nowhere do they exercise their power more immediately. In America the people form a master who must be obeyed to the utmost limits of possibility” (15). Tocqueville recognised that the local autonomy of the townships was crucial to upholding the principle of the sovereignty of the people: “without power and independence a town may contain good subjects, but it can have no active citizens” (16). While recognising the necessity of centralised government, Tocqueville saw centralised administration as a danger to the “active citizen” and, therefore, American democracy: “I am of the opinion that a centralized administration is fit only to enervate the nations in which it exists, by incessantly diminishing their local spirit” (17). But this was not simply a dry administrative formula, for if that were all it was, it would not survive: “patriotism and religion,” Tocqueville argued, “are the only two motives in the world that can long urge all the people towards the same end” (18). This was Palin’s world, too. She built her political ideal from the bottom up: active citizens, to towns, to cities, to states, to the nation. For Palin, the city politics of Wasilla was a “swamp” (19), but still superior because more local, and therefore closer to the people, than the corrupt sink of Juneau, itself a colonial replication of Washington D.C. “I believe that national leaders have a responsibility to respect the Tenth Amendment and keep their hands off the states,” Palin wrote in Going Rogue. “It’s that old Jeffersonian view that the affairs of the citizens are best left in their own hands” (20). And, as Tocqueville noted in 1831, and Palin confirmed in 2009, that American instinct, represented in its administrative structures, rests on a foundation of religion and patriotism.

For Schmidt, Wallace, Fey and Couric this was simply a worldview held by provincial subjects who only really existed to be mobilised, patronised, ridiculed or erased. Palin was a resource to be exploited by Schmidt and Wallace for immediate political gain in the 2008 election, a way to rally the support of a large group of people they basically despised. For Fey and Couric, representing their entertainment and media caste, the values that Palin espoused made her the primary target: it was both an easy job and an urgent task to neutralise her political potency by humiliating and trashing her. The problem was, a lot of people noticed that by doing this to Palin, Fey and Couric were, by extension, doing it to them too. It was this feeling, allied to the destruction of local communities and jobs, that would eventually contribute to the election of Trump. It turned out that the most pertinent and perceptive text of the Trump era had been written back in 1995: the analysis contained in Christopher Lasch’s The Revolt of the Elites had not been dulled by the passage of time, but intensified. The new elites that Lasch identified were, by 2016, exerting even greater influence and control than ever before, amplified by the progress of digital technologies and the increasing mobility of work and capital. The cosmopolitan, transnational, secular, liberal value system of this elite had achieved saturation across the educational, media, corporate and federal government sectors, their key power centres. In the 2008 election Schmidt, Wallace, Fey and Couric were its representatives and agents. Palin — as the most visible and potent representative of ‘small town America’, of local community, self-reliance, religion, patriotism, and conservative social principles — became the focus and subject of a wider culture war. From out of the wreckage of this conflict, in which the 2008 election can be seen as a battle she lost, Palin emerged at the head of a world-changing counter-revolution: a new revolt against the elites.


IV: Revolution

The elitists who denounce this movement, they just don’t want to hear the message.
Sarah Palin, ‘Speech to the Tea Party Convention’ (21)

Palin delivered the keynote address to the first Tea Party Convention in Nashville, Tennessee, on February 6, 2010. This was a speech she considered important enough to reprint in full as the afterword to the paperback edition of Going Rogue. Written down as it was delivered, the text is disordered, random, rambling. It lacks the structure and precision of the speech she made at the 2008 Republican Convention in Minnesota, a sophisticated combination of partisan attack and statement of principle that thrilled the Republican grassroots. From the perspective of the GOP establishment this remained the high point of her professional political career, but she was probably correct to judge that the Nashville address was more consequential for the conservative movement as a whole. This was the speech that connected her own story to the Tea Party, and in making this connection she summarised the movement’s key themes and defined it on the national stage at an early point in its development. Palin described the Tea Party as “a ground-up call to action that is forcing both parties to change the way they’re doing business” (22), as she had done in Alaska. Her own priorities were all there: the Constitution, the Tenth Amendment, “a pro-market agenda”, “lower taxes, smaller government, transparency, energy independence and strong national security” (23). The heroes were Washington, Lincoln and Reagan (Palin held her own candle for Calvin Coolidge), but the enemies were even more clearly identified: the “politicos”, “Beltway professionals”, the federal government, “big government and big business”, the mainstream media and the “elites”. Palin understood exactly what was happening, and was not afraid to call it out and lead it: “America is ready for another revolution,” she told the Tea Party, “and you are part of this” (24).

Palin grasped the fact that the Tea Party was a heterogeneous social movement and not an organised political party, and so she had no illusions about personally leading it. Nevertheless, however dispersed and chaotic the movement was, it shared underlying political principles rooted in the founding texts of the republic. For this reason, the best book on the phenomenon was not a social history, but a slim volume of legal theory written by a teacher of constitutional law. The value of Elizabeth Price Foley’s The Tea Party: Three Principles is that it avoided partisan rhetoric and analysed the Tea Party on its own terms. As Foley wrote (in 2012, when the movement was still in existence), “there’s no Tea party, but there is a Tea Party movement” (25). What made the movement cohere was not a central leader or an organising committee, but three central principles: limited government, a defence of U.S. sovereignty, and constitutional originalism. The movement that emerged was not solely aligned to the Republican party at this early stage, although the GOP assiduously courted its votes in order to win back control of the House of Representatives in the 2010 midterm elections, a decision that decisively altered the power dynamic between the party establishment and grassroots activists. What Palin noticed, when she embarked on her Tea Party Express tour in 2009, was an intense focus on the founding texts of the republic:

the thing that gets the most enthusiastic response — the words that get people on their feet and cheering — is when I talk about America’s founding ideas and documents. Just one mention of the Constitution and audiences go wild with appreciation for our Charter of Liberty. (26)

The speech that Palin delivered in Nashville, to a Tea Party audience waving Gadsden flags, was a defining moment for Palin and the movement itself. Like 1765, it was a conservative revolution: in fact, the Tea Partiers considered themselves to be conserving the American revolution. Palin’s love letter to the Tea Party, almost its manifesto and an essential companion piece to Foley’s text, was her second memoir America by Heart. At its core was the Tenth Amendment, that key limitation to federal power: “nothing could be more simple and straightforward,” Palin wrote:

There, in a single sentence, is the entire spirit of the U.S. Constitution: The federal government’s powers are limited to those listed in the Constitution. Everything else belongs to the states and the people. We give you the power; you don’t give us the power. We are sovereign. (27)

The Tea Party, as Foley detailed, considered Obamacare a violation of the Tenth Amendment. For Palin, in Alaska, the encroachment of federal taxes and spending programmes threatened the Tenth Amendment. The Tenth Amendment was the expression and guarantor of her entire political philosophy, uniting Jefferson, Jackson, Tocqueville, Reagan, the small town representatives of America and the Tea Party. “We serve notice,” she declared, “that we will resist Washington, D.C. adopting us” (28). James Madison had anticipated this divide in The Federalist No. 45, written in 1788 and titled ‘The Alleged Danger From the Powers of the Union to the State Governments Considered’:

The powers delegated by the proposed Constitution to the Federal Government, are few and defined. Those which are to remain in the State Governments are numerous and indefinite… [t]he powers reserved to the several States will extend to all the objects, which, in the ordinary course of human affairs, concern the lives, liberties and properties of the people[.] (29)

Foley summarised the importance of Madison’s text for the modern Tea Party position:

It’s critically important that Americans get this distinction: the federal government doesn’t have the power to do anything it wants to do. It’s a government of limited and enumerated powers only…(30)

This was the key to Palin’s own revolt against those seeking to extend the federal administration, a revolt embodied by the Tea Party with Palin as its most prominent and effective spokeswoman. In fact, this was a revival: the passions that produced the Tea Party had always been part of the conservative coalition, drawn upon and patronised by Republican officials during the patrician Bush and insurgent Gingrich eras, yet also at odds with the party establishment and its neoconservative fellow travellers. It is important to remember that when the Tea Party revolted, it revolted against Bush’s 2008 Emergency Economic Stabilization Act as well as Obamacare and the ARRA. The original Tea Party protests convened around these specific acts, with a focus on economic and constitutional questions, and avoided divisive social issues, although these would eventually become more prominent as the movement folded back into the wider Republican right.

The route to power, then, was complex and corrupting. The Tea Party rallies had desperately wanted Palin to run for president in 2012, and it remains historically significant that she chose not to. The actual 2012 electoral ticket was an ambivalent result for the movement: Mitt Romney was the ultimate establishment candidate, and he ran and lost as one; Paul Ryan, Romney’s running mate, was a Tea Party favourite, but generated none of the electricity that Palin had in the same position. The second Obama win dissipated the collective energy and cohesion of the mocked and sidelined ‘Teabaggers’, and yet their influence on GOP selections continued to grow. Nikki Haley, Rick Perry, Rick Santorum, Ted Cruz, Marco Rubio, Carly Fiorina and Michele Bachmann were all Tea Party candidates and would go on to achieve national prominence (and in Haley’s case, international stature). Establishment Republicans, the RINOs, found themselves beaten by candidates backed by grassroots activists, a groundswell that spread out from the original Tea Party to absorb every conservative movement issue, from gun rights, to same-sex marriage, to immigration, to God. Eventually, somehow, all of this energy and resentment would overwhelm the strict principles that Foley had identified at the origin of the Tea Party, and the movement would find its ultimate avatar in a property tycoon from New York with zero interest in the Charters of Liberty. This choice was not simply extreme, it was illogical in all but one aspect: as a revolt against the elites. Once again, Sarah Palin led the way.

V: Power 

The truth is, the American Dream is dead. 
Donald J. Trump (31)

Palin was the first major Republican to publicly endorse Trump, on the eve of the Iowa Caucus, which Trump lost to Ted Cruz. At this point, Trump was thrilled to have Palin on his side, and stood behind her on the stage, grinning and gesticulating while she delivered her endorsement speech. For years, Palin had been swinging Republican primaries with her endorsement, boosting Nikki Haley from last place to victory in South Carolina or sinking GOP chances when she backed Christine O’Donnell in Delaware (a candidate who was forced to open her first TV campaign advert with the declaration: “I am not a witch…”). Even before the arrival of Trump, Tea Party candidates had caused mayhem for the traditional GOP establishment. Trump accelerated and deepened this fratricide: in 2016 Palin dumped former Tea Party darling Paul Ryan in the primary race to endorse Paul Nehlen, an antisemitic, white supremacist no-hoper, simply because Ryan had refused to support Trump. Trump annihilated the establishment candidate, Jeb Bush, while trashing the legacy of the Bush dynasty for good measure: the grassroots loved it, a significant indication of how far they had moved against their own party managers. But it got even worse: when Ted Cruz and Marco Rubio, both former Tea Party candidates, made final desperate attacks on the new Republican primary front-runner, they were humiliated (as Rick Perry had been earlier). By early May both had left the race. Newt Gingrich bowed to the inevitable and, following Palin’s early example, endorsed a Trump candidacy. From this internecine carnage Trump emerged unscathed, surrounded by casualties from all sections of the GOP, most of whom would, eventually, crawl back to his side. More importantly, the Tea Party had remained loyal to him all along, despite the opposition of most of their old flames (Perry, Santorum, Fiorina, Ryan, Rubio, Cruz).

Palin’s endorsement speech was not notably different in content from her Nashville address: the themes and catchphrases were part of the old repertoire (“If you value your freedom, think on it!”), albeit mixed in with new Trump slogans (“Make America Great Again”). The style, however, had declined — or matured, depending on your perspective. She apparently had a script, but it seemed to be only vaguely related to what actually came out of her mouth, as she rode the emotion of the crowd and digressed into random attacks on the reporters present. The speech contained little policy or politics in the traditional sense: it was a pure outburst of emotion and resentment that occasionally turned lyrical in its bitterness and suppressed fury. The similarity in rhetorical style between Palin and Trump was clear, but it was instinctive, not imitative. It stemmed from the need that they both shared for direct contact with their supporters, the only thing that kept them politically alive in a world ruled by their enemies. The endorsement had been fixed by Trump’s political director (a former Palin aide), but the match made sense from an emotional point of view: both felt victimised by the media and political elites and considered themselves outsiders in Washington D.C. The event that seemed to give Trump the grisly determination to get to the presidency at whatever cost, including the incitement of racism and violence, had been his public humiliation by Obama at the 2011 White House Correspondents’ Dinner. Obama remained his primary fixation until deep into his own presidency. Palin and Trump shared an experience of humiliation at the hands of these people, which fuelled their thirst for revenge.

For Palin the actual policies and beliefs that Trump held were not really the point, and this was the same for the Tea Party and other Republicans who lumped all of their special interests under the simple slogans that Trump was peddling: Make America Great Again, America First, Build the Wall, Drain the Swamp. Palin’s political philosophy was fundamentally optimistic: she consistently demonstrated a faith in humanity that was founded on her religion and the American republic as the best guarantor of individual liberty. Trump, on the other hand, had a cynical view of America and of humanity more generally. “Man is the most vicious of all animals, and life is a series of battles ending in victory or defeat,” Trump had said after the premature death of his brother in 1981, adding: “You just can’t let people make a sucker out of you” (32). He had a higher regard for The Art of the Deal than he did for The Holy Bible. His inaugural address, delivered on January 20, 2017, was a masterpiece of dystopian expressionism, in which he depicted a nation scarred by poverty, crime, gangs, drugs, failing schools, corrupt politics, damaged children and “rusted-out factories scattered like tombstones across the landscape” (33). The speech made repeated appeals to ‘the people’ in order to align Trump with the legacies of Jefferson and Jackson, and struck an aspirational note that verged on science fiction (“we stand at the birth of a new millennium, ready to unlock the mysteries of space, to free the earth from the miseries of disease…”). But it was the dark, authoritarian tone that truly defined the address: an inchoate combination of protectionism, isolationism and nativism, guided by the brutal will of a new president who did not seem to understand the limitations of his role in the constitutional arrangement of the United States, and didn’t care about it anyway. “This American carnage stops right here and stops right now,” he thundered, in the most vivid phrase of the speech. Once he had finished speaking, George W. Bush turned to Hillary Clinton and said, “well, that was some weird shit” (34).

This was quite a long way from America by Heart. You could see aspects of it foreshadowed in the book, but Palin’s patriotic populism seemed very bland and, well, conservative by comparison. In truth, Palin was not a revolutionary: her heroes and her ideas were recognisably part of mainstream American culture and politics. Her disdain for the urban elites and machine politics had antecedents throughout American history, and figured in the rhetoric of Jefferson, Jackson and Reagan. The revolution she headed after 2008 was a revolt against the elites and a revival of democratic populism within the conservative movement. However, emotions overtook ideas. The traditional Republican party was felled by the passions and discontents of its grassroots supporters, who found their true representative in Trump: a wrecking ball to smash the globalised world order. For a number of reasons, politically prosaic as well as instinctive and emotional, Palin joined a cynical and corrupt political platform that promised nothing but power and revenge on the political advisers and media hacks who had tried to destroy her. In the end, it gave her nothing tangible but the satisfaction of watching the establishment reel in anguish and confusion. Despite the speculation, she was offered no major post in the Trump administration. Her marriage collapsed as she pursued her vendetta against the ‘mainstream media’; she described 2019 as ‘a lull’, although she remained engaged in her defamation suit against the New York Times. On social media her war against the elites raged on, but she seemed adrift, cut out of the narrative, cut off from her own history: an empty caricature of herself. She had foreseen a political revolution, she had articulated and defined its early stages, and she had played a significant part in promoting its true leader, Donald Trump. But what Trump represented, finally, was an authoritarian personality cult that fundamentally perverted the kind of revolution Palin had once stood for.

  1. ‘McCain Chooses Palin as Running Mate’, New York Times, August 29, 2008
  2. Sarah Palin, Going Rogue (Harper, 2010), p.156
  3. Ibid., p.147
  4. Ibid., p.3
  5. Michael Ledeen, ‘The Frontierswoman’, National Review, September 3, 2008
  6. Going Rogue, p.126
  7. Palin on Tucker Carlson Tonight, Fox News, August 19, 2020
  8. Christopher Lasch, The Revolt of the Elites and the Betrayal of Democracy (Norton, 1995), p.28
  9. Palin on Tucker Carlson Tonight, Fox News, August 19, 2020
  10. John Heilemann and Mark Halperin, Race of a Lifetime (Penguin, 2010), all quotes taken from Chapter 22, ‘Seconds in Command’
  11. Heilemann and Halperin, p.415
  12. Going Rogue, p.272
  13. Heilemann and Halperin, p.410
  14. Going Rogue, p.276
  15. Alexis de Tocqueville, Democracy in America, trans. Henry Reeve (Everyman’s Library, 1994), p.62
  16. Ibid., p.67
  17. Ibid., p.87
  18. Ibid., p.93
  19. Going Rogue, p.64
  20. Ibid., p.85
  21. Ibid., p.425
  22. Ibid., p.416
  23. Ibid., p.424
  24. Ibid., p.414
  25. Elizabeth Foley, The Tea Party: Three Principles (Cambridge University Press, 2012), p.xiii
  26. Sarah Palin, America by Heart (HarperCollins, 2010), p.xvii
  27. Ibid., p.72
  28. Ibid., p.80
  29. The Federalist, ed. Terence Ball (Cambridge University Press, 2003), p.227
  30. Foley, p.38
  31. Michael Kranish and Marc Fisher, Trump Revealed — The Definitive Biography of the 45th President (Simon & Schuster, 2016), p.9
  32. Kranish and Fisher, p.94
  33. Donald J. Trump, ‘Inaugural Address’, January 20, 2017: https://www.whitehouse.gov/briefings-statements/the-inaugural-address/
  34. Hillary Clinton on The Howard Stern Show, December 4, 2019

‘The Town Blazing Scarlet’: Swansea’s Blitz


They take you up on Townhill at night to see the furnaces in the pit of the town blazing scarlet, and the parallel crossing lines of lamps…If it is always a city of dreadful day, it is for the moment at that distance a city of wondrous night.
Edward Thomas, ‘Swansea Village’

I: The Bombing

In 1914, when The English Review published Edward Thomas’ essay ‘Swansea Village’, the town was at the peak of its industrial wealth and prestige, a position that would be temporarily damaged by the First World War (a war Thomas would not survive: he was killed in action at Arras on Easter Monday, 1917). Thomas’ portrait was debated by the Swansea Council Library Committee, with angry objections raised to descriptions of the town as “a dirty witch” and “compared to Cardiff…a slattern” (1, although in response to Thomas’ judgement that “many of its dark-haired and pale skinned women are beautiful,” Mr Moy Evans objected, “he is all wrong about the women”). In fact, as at least one person present noticed, Thomas had successfully conveyed the paradoxical qualities of the town. It was left to Mr Crocker of the Committee to explain, with bracing common sense: “I beg your pardon, he says the town is witchingly attractive…nobody ever said that Dickens ruined London when he painted Bill Sykes.” As Thomas recognised, and Dylan Thomas would later immortalise, Swansea is a town of insoluble contradictions, inspiring mixed emotions that amount to something more than mere hometown ambivalence. Part of this has always been due to despair at the scars of industry and urban disrepair, a despair equally matched by delight in the natural beauty of the bay. Although the comparison was never as exact as later claimed, in 1826 Walter Savage Landor did write: “The Gulf of Salerno, I hear, is much finer than Naples; but give me Swansea for scenery and climate. I prefer good apples to bad peaches” (2). There also remains, dating from its failure to become a famous Georgian seaside resort, a sense of unfulfilled potential, despite a rich industrial history and city status. It is probably impossible to write well about Swansea without running the risk of offending somebody from there, although most will recognise this conflict of feeling.

This conflict was intensified by the destruction of the town during the Second World War. Swansea was one of many major Blitz targets in Britain, and while the bombing did not equal the tonnage dropped on cities like London, Liverpool or Birmingham, the concentration of the attack was as ferocious. Swansea and Gower had been subject to random, individual bombing since June 1940, with the first significant air raids taking place in September and the following January, but it was the devastating three-night raid of 19–21 February 1941 that was subsequently remembered as the Swansea Blitz. In concentration and effect, this attack resembled Coventry: Swansea’s commercial centre was almost completely destroyed, causing a profound historical, physical and psychological rupture for the town.

This was the result of a change of strategy by the Luftwaffe. In early 1941, Luftflotte 3 began to target the west coast seaports that served Allied shipping routes in the Atlantic, switching to night raids following heavy losses during previous daytime bombing campaigns. As at Coventry, whose fate was sealed by unusually bright November moonlight, the devastation of the Swansea Blitz was the result of atmospheric luck: the nights of the raids were cloudless, with moonlight bouncing back off a fresh layer of February snow, creating conditions of exceptional visibility. The first night set the pattern: pathfinders lit up the town with parachute flares and incendiary bombs, illuminating key targets for the main bomber force that saturated the town with thousands of incendiaries and high explosives. By the third night, huge fires consumed the centre. The water mains had been severed by the previous bombing; hoses stretched from the North and South Docks in a desperate attempt to stop the fires, but were continuously destroyed by explosions (3). Neath, Port Talbot and Llanelli dispatched rescue parties, which struggled to get in due to bomb-cratered roads and the debris of collapsed buildings. The glow in the night sky caused by the fires could be seen from the far end of the Gower Peninsula.

After three nights of bombing, seven thousand people had been made homeless and the commercial district almost entirely razed. Social cohesion held and the town was not abandoned, but the Evening Post quoted an eyewitness who succinctly articulated the emotional impact of the raids, stating simply: “Swansea is dead”. The scale of the destruction was an existential event: the industrial infrastructure had survived, but the ruthless immolation of the old town defined the postwar development and identity of the city.

II: The Rest of the World

What made this identity, and what happened to it? The key to these questions lies in the relationship between the town and everything outside of it: both the international contacts that made Swansea a key industrial port and its relation to Welsh nationalism and the construction of a ‘Welsh’ identity that took form in the twentieth century.

Swansea was established by Norse raiders who first settled at the mouth of the River Tawe during the tenth century and gave the town its name. Following the Norman Conquest, Henry I transferred the commote of Gwyr to his trusted vassal Henry de Newburgh, the Earl of Warwick, who, recognising the natural advantages of the Tawe, made Swansea his headquarters and built a castle. Exercising the rights of a Marcher Lord, Warwick founded a borough originally populated by non-Welsh settlers who established a successful agricultural market centre and port community of merchants, fishermen, mariners and boatbuilders. For Swansea, modern history began with the Acts of Union, which proved a mixed blessing: the town was absorbed into the new county of Glamorgan, losing its caput status to Cardiff but gaining a place in an administrative system facing east towards English markets. From this point, because of the Union settlement and its commercial and political benefits, Swansea thrived. During the Tudor period, the town was able to exploit government policies to capitalise on existing trading contacts and geographical advantages. As a result of the rapid population growth that stimulated the economies of all existing Welsh towns after 1540, Swansea’s identity as market destination, international port and centre for crafts and services was enhanced. This had been achieved through immigration, settlement and external investment; through integration into the wider Tudor economy; and through exploitation of the international trading links that the Tawe harbour could accommodate. It is worth noting that the Acts of Union effectively gave the Welsh equal legal status with the English, thereby encouraging local migration from the Welsh-speaking rural communities into English-speaking towns. Like other South Wales towns, therefore, the administration and culture of the Swansea elite remained English, but Welsh-speaking settlers started to become part of the linguistic and social mix.

Swansea, therefore, was more than ready for the Industrial Revolution. As Glanmor Williams wrote: “[t]he foundations of the future industrial greatness of Swansea and its environs were already being laid in Tudor and Stuart times and the way being prepared” (4) for the era of coal and metal. In fact, the town was an early exporter of coal: shallow outcrops had been mined in the Swansea valley and North Gower from the Elizabethan era, finding ready markets in Somerset, Devon, Cornwall, the South Coast of England and France. By the early eighteenth century Swansea was a busy coal port with hungry export custom in France and Ireland and a population swollen by the large influx of workers from the surrounding countryside and Ireland. The next stage in the town’s evolution was once again driven by outsiders: English entrepreneurs with the necessary wealth and expertise established a thriving metal-smelting industry, importing copper and zinc ores from Bristol, Cornwall and London and strengthening links between Swansea and the West Country. By the end of the nineteenth century, the growth of tinplate manufacturing fed a hungry American market, opening long-lasting trade links and migration routes to the new continent.

The construction of a great system of docks accelerated the political and cultural development of the town, linking it into a global network of import and export trading partners. Copper, zinc and iron ore imports landed from South America, Cuba, Australia, and later South Africa, Algeria, Chile, Venezuela, California, Italy, Spain, Norway and Sweden. The American tinplate market dwindled following the 1890 McKinley Tariff, but new customers were found in South America, China, Japan and Europe. Coal and patent fuel exports reached France, Italy, Germany, Spain, Sweden, Brazil, Algeria and America. As industry grew and wealth accumulated, the middle classes moved away from the social and ecological furnace by the Tawe: as Victorian suburbs spread west, to the uplands and higher, the townscape became more refined, with the building of Georgian-style villas like Belgrave Gardens and elegant, tree-lined terraces like Walter Road. In the gardens and parks, such as Cwmdonkin, the air was clear, the view back over the town decorated with the red glare of the furnace and the winking rhythm of Mumbles Lighthouse. These years before the First World War, the precise moment captured in Edward Thomas’ essay, represented a peak in Swansea’s industrial history. This was emphatically not a provincial story: in 1913, with a record six million tons of global exports, Swansea was a town with much more than parochial interests. 

In fact, the First World War led to a slump in Swansea’s productivity: the copper and tinplate works had ossified before the war and lost out to external competitors after it; hostilities meant the temporary loss of important markets in Germany, Austria and Belgium; while conscription gutted the mines of their workforce, thereby limiting exports from the South Wales coalfield. However, Swansea was revitalised during the interwar period by two strokes of fortune: the decision of the Anglo-Persian Oil Company to open a refinery at Llandarcy, and the sale of the entire port to the Great Western Railway Company, which linked the town into the world’s largest dock system. By the Second World War, Swansea was importing crude oil from the Persian Gulf, with 10,000-ton tankers discharging cargoes from Abadan, Haifa and Tripoli (5), while exporting refined petroleum, coal, iron and steel products to Europe, America, Canada and Argentina. This was the town Dylan Thomas grew up in, watching “the dock-bound ships or the ships steaming away into wonder and India, magic and China, countries bright with oranges and loud with lions” (6), a place to be reckoned with and closely connected to the rest of the world. The port expanded and modernised, and was there to be converted to wartime purposes when required: after 1939, the docks took war production orders, handled weapons and the transit of troop reinforcements. The sheer scale of Swansea’s productivity and prestige made it a target for the Luftwaffe in 1941, but the irony is that, through strategic confusion, and apart from one early, isolated attack on the King’s Dock in July 1940, the Germans left the port facilities intact while destroying the rest of the town. With the centre in flames and the water mains severed, it was the hose pipes stretching out from the docks that came closest to saving Swansea’s old heart.

III: ‘The Broken Sea’

The burning of an age.
Vernon Watkins,
‘The Broken Sea’

Vernon Watkins composed a powerful epitaph for this erased town in his long poem ‘The Broken Sea’. This may not be obvious at first, because the poem is a very broad canvas, clotted with Gnostic imagery, in which Swansea plays one part. A cycle of twenty shorter poems, formally separate yet thematically entwined, it is dedicated and partially addressed to Watkins’ Godchild, “Danielle Dufau-Labeyrie, born in Paris, May 1940”: this birth, and its date, is the foundation for a sprawling meditation on destruction and creation, darkness and light, their cycles and interdependence, obscurely rendered through esoteric Christian symbolism and neo-Romantic imagery. It is, in its way, potent, poignant, even epic, despite numerous flaws and limitations, and places the Swansea Blitz within a wider international and even cosmic context: “the burning of an age.”

The poem opens with an invocation of “the visions of Blake,” the first in a series of reference points that carry their own specific, personal associations (a habit that Watkins indulged throughout his work, dismissed by William Wootten as “paying minor homage to major talents,” 7): Dante, the Books of Job and Kings (“Elijah was fed by ravens”), Hans Christian Andersen, Hölderlin and Kierkegaard. This referential texture provides clues to the coordinates of a rarefied and singular cosmology that seeks, in the grand sweep of the poem, to make sense of the birth of a child within the context of the invasion of France, the bombing of Paris and the destruction of Swansea. The network of symbols used by Blake is crucial to this poem and its imaginative root in dualism and antithesis, the opposition of darkness and light with all of their spiritual associations and implications. The second poem of the cycle is set in Paris after the opening of the Nazi offensive in May 1940, with the city making preparations for the bombing raid that would finally come in June. Plunged into darkness (“‘Put out the lamps! Put out the lamps!’”) but for “a long shaft of moonlight”, the City of Light is shut down in anticipation of night attack: people walk in darkness, “moving in ghostly ritual”, “the shroud descending” over Notre-Dame and the Sorbonne, while “singular lives” are “found in the deeper dark…the restive weariness and writhing cramps/of sleepers underground.” Light is a threat in this atmosphere, and people live in shadows or hide in darkness; Paris, crepuscular, cowering, scared of the future (like the newborn child, “bursting with terrors to be”), presages the blackouts and Underground station shelters of the London Blitz and the destruction of Swansea: “Perishing in a moment, in a night,/A wave running over the Earth”. These dark shadows both contrast with and are entwined in the imagery of light that shrouds the newborn child in her cradle, portrayed by Watkins as a vivid transfiguration:

A cradle in darkness, white.
It must be heavenly. Light
Must stream from it, that white sheet
A pavilion of wonder…

But also a violent conflagration:

...meteors; thunderbolts hurled
From clouds; coil upon coil of spiral flame.
(8)

The child becomes a symbol of the future, of hope and light, but in the shadow of a portent: a future of fire, violence and fear that Paris will face within a month and Swansea within a year. The date that Watkins names (“I remember the tenth of May”) also contains this duality: it is the date that Hitler unleashed his blitzkrieg on the West and, though Watkins does not allude to this, the day that Churchill became Prime Minister.

The ninth poem of the cycle is Watkins’ epitaph for Swansea. It is also the section that gives way to rage and despair, listing landmarks that have been obliterated, leaving memories suspended in a void. It is the most vivid part of the poem, and the part that was anthologised by Kenneth Allott in his 1950 Penguin Book of Contemporary Verse. This is possibly because the emotions that the destruction of Swansea stirred in Watkins are more elemental: crude, raw, unprocessed; the most effective lines are not rarefied with theological symbolism or literary allusions, but seared onto the page with a furious clarity. “You were born when memory was shattered,” Watkins writes to his Godchild, in the most bitter line of the entire cycle: the ninth poem (“My lamp that was lit every night has burnt a hole in the shade”) is an act of memorial for this lost town, a cry of anguish against shattered memories, the violent erasure of a past now recalled with precision:

The burnt-out clock of St. Mary’s has come to a stop,
And the hand still points to the figure that beckons the house-stoned dead.

Child Shades of my ignorant darkness, I mourn that moment alive
Near the glow-lamped Eumenides’ house, overlooking the ships in flight,
Where Pearl White focused our childhood, near the foot of Cwmdonkin Drive,
To a figment of crime stampeding in the posters’ wind-blown blight.

I regret the broken Past, its prompt and punctilious cares,
All the villainies of the fire-and-brimstone-visited town.
I miss the painter of limbo at the top of the fragrant stairs,
The extravagant hero of night, his iconoclastic frown.

Through the criminal thumb-prints of soot, in the swaddling-bands of a shroud,
I pace the familiar street, and the wall repeats my pace,
Alone in the blown-up city, lost in a bird-voiced crowd,
Murdered where shattering breakers at your pillow’s head leave lace.
(9)

Watkins continues to address his Godchild in her cradle: “Listen,” he says, “below me crashes the bay.” The imagery of the sea is as central to this poem’s symbolic economy as light and darkness. Swansea, burnt out in the arc of the bay, is soothed by the rhythms of the water: everybody who grows up in Swansea understands, innately and unconsciously, the importance of the sea to the phenomenology of the town. In Watkins’ poem it is an all-consuming symbol of destruction and regeneration, and it permeates every part of the piece in a way that is as repetitious and as varied as the tides. The sea is “a wave running over the Earth”, a hurling “sea-mass” “of pitiless history”, “the engulfed, Gargantuan tide” and “the magnificent, quiet, sinister, terrible sea”: an elemental force that is unpredictable, threatening and destructive. But it is also “that eternal Genesis”, “regenerate”, a “resurrection-blast” that “gave back a sigh”: an eternal source of renewal, comfort and vitality.

For Watkins, destruction contains renewal. Memories are catalogued in (for him) explicit detail: the clock of St Mary’s Church, stopped dead at the time of the bombing (the roof had collapsed following fire damage); the Uplands Cinema (“Eumenides’ House”) in which he watched his childhood heroine Pearl White; his image of the artist Alfred Janes, who lived above a College Street florist (“the painter of limbo at the top of the fragrant stairs”); and Cwmdonkin Drive, the childhood home of “the extravagant hero of night”, Dylan Thomas (10). Not every one of these memory traces was destroyed by the Luftwaffe, but like the eyewitness who declared “Swansea is dead”, Watkins articulates a broader truth: the old, historic town with its delicate fabric of memory and association, its layers of time and experience, had been irrevocably eradicated by the fires that raged unchecked after three nights of bombing.

The wave runs over the earth, and also regenerates. Watkins returned to Swansea to recover from a nervous breakdown triggered while working in Cardiff: the town, in a very literal sense, became a place of refuge for him. For most of his life he lived a regular, repetitious, comfortable (and comforting) existence as a cashier in the St Helen’s Road branch of Lloyds Bank, commuting every day from the clifftop village of Pennard (my mother would see him on the Number 64 bus when she got on at Bishopston in the 1960s). Watkins’ imagination was captured by this wider idea of Swansea: an expansive area incorporating the limestone and gorse cliffs of the Gower peninsula, with its gorgeous suite of sandy beaches and expensive and secluded houses (his childhood home was one of these: Redcliffe, a Victorian house at the head of Caswell Bay that was demolished in the 1960s and replaced with a concrete apartment block). This was a town and a peninsula defined by the sea, and its regenerative effect was the dominant reality: both for his personal health, and for the post-war rebuilding of a modern port. 

His 1962 ‘Ode to Swansea’ is a tribute to this vision: a “bright town” (11) that emerges from the ruins of war:

Leaning Ark of the world, dense-windowed, perched
High on the slope of morning,
Taking fire from the kindling East:

This is the Swansea of light and water: a view down over the bay from the Victorian suburbs of the Uplands and the coiling streets and sprawling estates of Sketty and Townhill, over the 

…shell-brittle sands where children play,
Shielded from hammering dockyards
Launching strange, equatorial ships.

The renewal of industry in the docks, through the oil refinery that attracted the vast tankers my grandfather would have known in some detail (he was working as a ship broker for Burgess & Co in 1962), provides, for Watkins, an image of the vitality of the town and its participation in the global pursuit of wealth and power. The poem is also infused with the perspective of a man of the Gower: gulls, pigeons, starlings, shags and cormorants all populate the picture, alongside the Mumbles Lighthouse, limestone rocks, Lundy and the fishing nets dropped off the coast. For Vernon Watkins, unlike Dylan Thomas, Swansea became a “loitering marvel” that contained multitudes: a lovely window onto a wide and teeming world, rather than a narrow, small, provincial place. “Prouder cities rise through the haze of time,” he wrote, “Yet, unenvious, all men have found is here.”

IV: ‘Return Journey’

In 1947 Dylan Thomas was commissioned to write a radio feature for the Home Service series ‘Return Journey’. At this time he was tempted to move to America, encouraged by his publisher James Laughlin, who promised to look for a suitable house near New York, and very actively discouraged by his friend Edith Sitwell, who tried to convince him to consider Switzerland instead, or Italy. This was an unlikely move: Thomas once told Lawrence Durrell, “the highest hymns of the sun are written in the dark…If I went to the sun I’d just sit in the sun” (12). In February 1947, he went instead to a very cold and dark Swansea, yet to be rebuilt and with the clock of St Mary’s still stopped dead at the moment of attack. This was the brutal winter of huge snow drifts, power cuts and shortages; broadcasting was limited, exposed cattle froze to death and the presiding Labour administration was fatally damaged (its majority collapsed in 1950, before defeat by the Conservatives in 1951). February, in particular, was the coldest month on record; the ruins of the town froze under layers of snow in conditions that recalled the nights of the Blitz six years before. Thomas was not exactly yearning for home at this time, but he had been around Swansea in 1941, and the impact of the destruction on him was profound. Bert Trick met Dylan and Caitlin in the town centre the morning after the final raid and later wrote: “The air was acrid with smoke, and the hoses of the firemen snaked among the blackened entrails of what had once been Swansea market. As we parted, Dylan said with tears in his eyes, “Our Swansea is dead”” (13). Even more than the anonymous Evening Post eyewitness, the emphasis was highly personal: “Our” Swansea had been destroyed, a site of shared experience that Thomas assumed to be universal but which was certainly not collective. The Nazis had destroyed his memories, the imaginative and emotional contours of his town; it was almost as if Hitler had aimed his bombs directly at Dylan Thomas. This was not the burning of an age: it was the razing of his youth. But as the ‘Return Journey’ script revealed, there was value in this perspective.

The recollections of ‘Return Journey’ go wider and deeper than the Blitz, although this event gives the script focus and also overshadows everything in it. The narrative perspective is suspended in time and space, a voice from the unconscious interrogating apparitions from Swansea past and present, drifting in and out of “the snow and ruin” (14). A barmaid, some Evening Post reporters, a Minister, a School Master and a Park Keeper, among others, collectively piece together impressions of Young Thomas chasing girls, drinking too much, filing useless newspaper reports, mooching around the promenade and sand dunes, climbing trees and cutting off branches in Cwmdonkin Park. As in Under Milk Wood, the medium allows a form that is fluid and overlapping, not bound by any conventional narrative devices or visual or spatial limitations. The texture of memory is key: shifting perspectives and dialogic cacophony; chaotic and vivid visual traces, triggers and clues. This drift of voices is filtered through Thomas’ dense poetic mannerisms: a barrage of alliteration and assonance; adjectives and verbs piled on top of each other or promiscuously modified (“he could smudge, hedge, smirk, wriggle, wince, whimper, blarney, badger, blush, deceive, be devious, stammer, improvise…” etc., 15). Memory is heightened with a poetic lustre that conveys the distortions and poignancy of nostalgia.

This is punctuated by another technique that has a crucial and specific purpose: the careful recording of the names of vanished streets and shops and people dead and alive (“Mrs Ferguson, who kept the sweet-shop in the Uplands where he used to raid wine gums, heard her name in the programme, and wrote to say, ‘Fancy remembering the gob-stoppers’”, 16). Cecil Price later asked Thomas how he remembered all the lost shops that he lists: “‘It was quite easy,’ he answered. ‘I wrote to the Borough Estate Agent and he supplied me with the names’” (17). Like the scattered memories that Watkins crystallises momentarily in ‘The Broken Sea’, this recording is a deliberate act of memorial, a motivation made explicit, and tragic, by a roll call of dead school contemporaries (“The names of the dead in the living heart and head remain forever. Of all the dead whom did he know?”, 18). The whole script is permeated with death, but the act of writing and the reading of names is an attempt to cheat it: to immortalise a lost world, at least partially. The inevitable failure of this attempt, for Vernon Watkins and Dylan Thomas, provides the emotional core of their acts of memorial.

Thomas had a more complicated relationship with this dead town than Watkins, for whom Swansea became a revitalising landscape and a refuge. For Thomas the town was a source of material and inspiration, the springboard for his flight out to the rest of the world. His recollection of the Kardomah Cafe gang through a litany of conversation topics and shared obsessions and references makes this point: “Einstein and Epstein, Stravinsky and Greta Garbo…Communism, symbolism, Bradman, Braque, the Watch Committee, free love, free beer, murder, Michelangelo, ping-pong, ambition, Sibelius, and girls” (19). The excitement of discovery and ambition that contact with the world ignites goes beyond the narrow confines of the locality in which you grow up: for Thomas, Swansea is not necessarily enriched by this external world, but he is enriched by it despite the dampening provincialism of the town. The plan, like so many before and after, is to get out: “Dan Jones was going to compose the most prodigious symphony, Fred Janes paint the most miraculously meticulous picture, Charlie Fisher catch the poshest trout, Vernon Watkins and Young Thomas write the most broiling poems, how they would ring the bells of London and paint it like a tart.” The strain of nostalgia evident in ‘Return Journey’ is partly the affection of the escapee, looking back on what has been left behind, with all small-town frustrations exorcised. But it is also a reaction to the violent eradication of the landscape of his youth, which robs him of an immediate physical environment on which to hang his reminiscence. So much of such worth is easily lost: “The Kardomah Cafe was razed to the snow, the voices of the coffee-drinkers – poets, painters, and musicians in their beginnings – lost in the willy nilly flying of the years and the flakes.” For Dylan Thomas, in ‘Return Journey’, the Blitz was simply a more definitive, and cruel, expression of time.

V: The Anglo-Welsh

They went outside and stood where a sign used to say Taxi and now said Taxi/Tacsi for the benefit of Welsh people who had never seen a letter X before.
Kingsley Amis,
The Old Devils

To have high esteem for a language you don’t actually use, while holding the one you do use in low esteem, is to be in a parlous mental and moral condition.
Conor Cruise O’Brien,
Ancestral Voices

The Kardomah gang eulogised by Thomas and Watkins represented a creative sensibility that was part of an outward-looking, modern, anglicised urban and industrial culture clustered around the ports and coalfields of South Wales, with related pockets in the medieval Anglo-Norman enclaves of South Pembrokeshire and Gower. The productivity and the money that flowed from and into these areas were the source of a cultural and commercial dynamism that was noticed, momentarily, by the rest of the world. This was a crucial and even central component in the political and economic development of the principality and, therefore, in any notion of Welsh identity and ‘nationhood’. It was a Welsh identity located in Swansea, Barry, Cardiff and Newport, with their commercial and mercantile character and their international links and contacts. It was also the Welsh identity of the Valley communities, English-speaking and with their own internationalist links and aspirations fostered by the trade unions, the Labour party, and Communist, Trotskyist and Syndicalist organisations: all traditions with no use or respect for Welsh nationalism. As John Davies pointed out in his history of Wales, the most influential exponent of this world view, progressive from its own perspective, was Aneurin Bevan:

As he was convinced of the virtues of central planning, Bevan saw no necessity for that strategy to have a Welsh dimension. In his speech on 17 October 1944, he mocked the notion that Wales had problems unique to itself…Peter Stead has argued that, in the 1940s, it was Bevan specifically who frustrated any significant advance in the recognition of Wales; furthermore, he maintains that in subsequent decades every scheme for devolution would have to wrestle with Bevan’s notion of political priorities. (20)

The star poet of Wales and international exponent of Anglo-Welsh literature, woven well into the fabric of the contemporary Welsh heritage industry, was also hostile to Welsh nationalism and, even, the very confines of a Welsh identity. As Paul Ferris notes, for Dylan Thomas,

Wales was incidental. Thomas had no intention of being regarded as a provincial poet, and there was no substitute for living in London. It was only later that the question of Thomas as a specifically ‘Welsh’ poet arose; and when it did arise, he decried it…In 1952, in a letter to Stephen Spender, thanking him for his review of Collected Poems, he wrote, ‘Oh, & I forgot. I’m not influenced by Welsh bardic poetry. I can’t read Welsh’. (21)

Ferris qualifies this stark declaration by arguing for the Celtic character and the technical influence of Welsh verse on Thomas’ strictly monolingual output. What cannot be disputed is that, in common with other Anglo-Welsh writers, Thomas derived the bulk of his subject matter and inspiration from Welsh communities and characters and landscapes. This was a tendency dismissed savagely by Kingsley Amis, who met the poet in Swansea and considered him to be

a pernicious figure, one who has helped to get Wales and Welsh poetry a bad name and generally done lasting harm to both. The general picture he draws of the place and the people, in Under Milk Wood and elsewhere, is false, sentimentalising, melodramatising, sensationalising, ingratiating. (22)

The case against this tendency in Anglo-Welsh literature was given more measured and biting articulation in fictional form in The Old Devils, out of the mouth of Charlie: “Write about your own people by all means, don’t be soft on them, turn them into figures of fun if you must, but don’t patronize them, don’t sell them short and above all don’t lay them out on display like quaint objects in a souvenir shop” (23). From the scathing portraits of Caradoc Evans to the socialist ballads of Idris Davies and the pastoral mysticism of Vernon Watkins, this rootedness in the subject of Wales was both a weakness and a strength: but, more than anything, it helped to articulate the culture and aspirations of English-speaking Welsh communities, the most powerful, productive and progressive in the principality. Swansea was a central connection in this Anglo-Welsh artistic network, with its own creative cluster around Thomas, Watkins, Dan Jones and Alfred Janes, with flourishing satellites like the Swansea School of Art which produced the Surrealist painter Ceri Richards. 

The Welsh nationalism that eventually found political form in Plaid Cymru defined itself in conscious opposition to this tradition. From its inception, the focus of Welsh nationalism was language: this came to be the basic element of Welsh identity, and its legal and cultural resurrection the foundation of independent nationhood. The significance of this fact lies in the nature of the English-speaking regions of Wales, as described above. Welsh nationalism was at root a vision of Wales centred on the Welsh-speaking, rural communities, idealised as uncontaminated by the external and foreign influence of the modern industrial world. It was, basically, nativism, with a reactionary right-wing tendency at its core. This is difficult to recall now, as Plaid Cymru eventually followed the liberal path of D.J. Davies, but the unadulterated, authentic heart of Welsh nationalism was, at this stage, personified by Saunders Lewis. This vision of Welsh identity was regressive and insular, based on an artificial aesthetic of ruralism, provincialism and a cult of the past, augmented by Welsh folktales and Celtic legends, Bardic traditions and peasant superstitions. Lewis was a Roman Catholic and brought to Plaid Cymru the influence of French Catholic conservatism, Charles Maurras and Action française (his ally Ambrose Bebb wrote, “It is a Mussolini that Wales needs!”, 24). This Plaid faction propounded an anti-democratic, extra-parliamentary platform: a programme of direct action that led to the arson attack on the RAF bombing school at Penrhos in 1936, and, by the Second World War, a call for abstention from the ‘Imperial War’ against Hitler. Lewis was incarcerated in Wormwood Scrubs for this adventure and became a martyr for the Welsh cause, electrifying activist sentiment within the more militant Welsh-language communities (although many couldn’t have cared less). Welsh nationalism never took this radical course but did reframe the idea of Welsh nationality along the lines of language rights and a rural aesthetic that found expression in, for example, the Welsh Language Act of 1993 and the professional regeneration of the Eisteddfod. This would find its true home in the ‘liberal’ nationalism of a primarily middle-class milieu attached to Plaid Cymru, the Church, the University of Wales and the provincial BBC. For these people, language became a political weapon in a culture war.

This had an indirect effect on Swansea, which, unlike Cardiff, had no significant Welsh-language enclaves, and maintained a residual, resolutely anglicised culture which had been the basis for its civic and cultural identity during the years of industry. In reaction to the new cultural and political definition, or ideal, of ‘Welshness’, Swansea’s sense of self-identity crumbled alongside its vanishing industries in the 1980s and 90s. It had no clear way to orientate itself in this new, post-industrial Wales of the Assembly and the Language Act. Its major industrial history, like that of the Valleys, was reduced to perfunctory heritage trails and unenthusiastic school trips, or dismissed as a legacy of ecological disaster in the case of the Tawe and the Swansea Valley. Swansea’s greatest cultural achievement — the outward-facing, expansive, experimental, anglicised coterie of the 1930s-50s — was eclipsed by the capture of Dylan Thomas for the Welsh cause, despite his own antipathy to nationalist sentiment and the conflict with Welsh identity that both animated and distorted his work. Swansea itself was reduced by this new idea of the nation: both the complexities of the “two-tongued” city, characterised by the anglicisation of its rural, Welsh-speaking settlers, and the Anglo-Welsh culture of its elite and middle classes, which found literary expression in the work of Dylan Thomas and Vernon Watkins, were downplayed and even erased by this political project. This was a large part of the reason why, despite the apparent health of the Dylan Thomas industry, Swansea struggled to capitalise on or communicate the full significance of its own past: that past was now debased currency in Wales. Swansea, like Newport and Barry, was short-changed by Welsh nationalism: nationalism was not interested in Swansea, or in its interests.

VI: The City

Swansea’s postwar history is a story of temporary renewal followed by steep decline. Following the Blitz, the local authority was faced with the task of rebuilding the entire central commercial area, a project which did not begin until 1947, in the months after the writing of ‘Return Journey’. In some ways, reconstruction never ended. The town centre was rebuilt in the clean concrete style of the New Town: the Kingsway became the wide central thoroughfare, a grey wind tunnel capped by a monolithic, Brutalist Odeon cinema. The town was, in a way, rebuilt in a manner befitting a modern city: a status finally bestowed by the Queen in 1969, by which time Swansea was rapidly losing all the elements that had given it national and even international prominence to begin with. Slowly the traditional industries were dismantled or relocated: the Llandarcy refinery was reduced to a specialist lubricating oil producer in the late 1980s, fully closing in 1998; the Landore steelworks was converted into a small engineering factory in 1980; the Velindre tinplate works closed in 1989; finally, the Baglan petrochemical plant began to disappear from the eastern horizon in 1994 and was gone by 2004. I grew up in the city from 1982, watching oil tankers lumbering impressively across the bay; by the time I left for university in 1996, they had, almost imperceptibly, disappeared from the channel completely. For the majority of people in Swansea, by this time, the change hardly registered at all.

So, the city redefined itself, reverting to an earlier ambition to create a glamorous seaside resort. “From the 1780s to the 1830s,” wrote J.R. Alban,

Swansea had great pretensions to becoming a seaside resort of some standing. ‘The Brighton of Wales’ or a ‘Welsh Weymouth’ were some of the comparisons made by contemporaries. During this period, the town was provided with bathing houses, bathing machines, public gardens, circulating libraries, public assembly rooms, theatres and a newspaper; in fact, all the accoutrements needed to make it a genteel and fashionable place of resort. (25)

Swansea did not become the Brighton of Wales. But, in 1982, the South Dock was transformed into a yacht marina, with hotels, cafes, bars and restaurants added throughout the decade. This was one of the first such post-industrial redevelopment projects in Britain and was, initially, a success, chiming with a new (definitely not parochial or Welsh at all) consumer and enterprise culture, ruthlessly mocked by Kingsley Amis in The Old Devils. Swansea, in this new world of services, leisure and tourism, stripped of its industrial identity, was more closely tied to the Gower than ever: one part became an adjunct of the other. The middle-class commuter worked in Swansea but lived in Langland or Bishopston; tourists stayed at Oxwich or Llangennith but bought supplies or spent rainy days in the city. There was a new enterprise zone with a tropical hothouse, a multiplex cinema and Toys-R-Us. The memory of Dylan Thomas was absorbed, mobilised, sold: the Swansea Year of Literature in 1995 was the apotheosis of this process. This redefinition of the city and its identity still stopped short at the border of Welshness and the politics of Welsh identity. Welshness had not been an issue, or even at stake, in the city I grew up in, and Swansea struggled to adapt to the new criteria of nationality. Like everywhere else at this time, the primary cultural influence was American: Dallas, Dynasty and Miami Vice could tell you more about the mores and imagination of the city of Swansea than the Mabinogion. Even as this self-image and those aspirations — dreams of Marbella or Miami played out on the set of the Maritime Quarter — dissipated in the ruin of recession, the imagination of everybody in my school was still shaped by Beverly Hills 90210 and Baywatch rather than the medieval bards. This was the reality of the time and the location: there was no longing, or need, for a fabricated rural tradition. Swansea remained a modern city: open to outside influence, looking forward. As I wanted to show when I started to write this essay, if the city actually, fully, embraced this tradition it would find plenty to be proud of and even to feel confident about. It would, in fact, find something deeper and wider than the Dylan trail or the constricting contours of Welsh identity: it would find the history of a town that let in, and went out to, the whole world.

  1. See Andrew Green’s blog post ‘Edward Thomas in Swansea’ for details of this meeting, with quotes taken from the Cambrian Post: https://gwallter.com/books/edward-thomas-in-swansea.htm
  2. Quoted in James A. Davies, ‘‘Under a Rainbow’: Literary History’ in Swansea – An Illustrated History, ed. Glanmor Williams (Christopher Davies, 1990), p. 221
  3. Nigel Arthur, Swansea at War – A Pictorial Account 1939-45 (Archive Publications, 1988), p. 27 and pp. 34-5. This book has extensive details on all bombing raids on Swansea.
  4. Quoted in Glanmor Williams, ‘Before the Industrial Revolution’ in Swansea – An Illustrated History, p. 16
  5. David Boorman, ‘The Port and its Worldwide Trade’ in Swansea – An Illustrated History, p. 77
  6. Dylan Thomas, ‘Reminiscences of Childhood (Second Version)’ in Selected Writings (Heinemann, 1970), p. 3
  7. William Wootten, ‘In the Graveyard of Verse’, London Review of Books, August 9th 2001
  8. Vernon Watkins, ‘The Broken Sea’ in The Collected Poems of Vernon Watkins (Golgonooza Press, 1986), pp. 80-1
  9. Watkins, p. 86
  10. James A. Davies, ‘‘Under a Rainbow’: Literary History’, p. 237
  11. Watkins, ‘Ode to Swansea’, p. 285
  12. Quoted in Paul Ferris, Dylan Thomas (Penguin, 1978), p. 223
  13. Ferris, p. 184
  14. Dylan Thomas, ‘Return Journey’ in Dylan Thomas – The Broadcasts, ed. Ralph Maud (J.M. Dent & Sons, 1991), p. 183
  15. Thomas, ‘Return Journey’, p. 185
  16. Ferris, pp. 223-4
  17. Quoted in editorial note to ‘Return Journey’ by Ralph Maud, p. 178
  18. Thomas, ‘Return Journey’, p. 186
  19. Ibid., p. 183
  20. John Davies, A History of Wales (Penguin, 2007), pp. 592-602
  21. Ferris, p. 115
  22. Kingsley Amis, Memoirs (Penguin, 1992), pp. 132-3
  23. Kingsley Amis, The Old Devils (Penguin, 1987), p. 28
  24. Kenneth O. Morgan, Rebirth of a Nation: Wales 1880-1980 (Oxford University Press, 1982), p. 256
  25. J.R. Alban, ‘Local Government, Administration and Politics, 1700 to the 1830s’ in Swansea – An Illustrated History, p. 110