Unconditional Surrender: The Domestic Politics of Victory in the Pacific

 

 

As we observe the 75th anniversary of the surrender of Japan, we should remember that the terms on which that surrender took place remain among the most contested issues of the war. Unconditional surrender was destined to be controversial because it was Franklin Roosevelt’s policy. A quintessentially New Deal program, its goal was the creation of economically broad-based democracies in societies predicated on conquest and the subjugation of other people.  Conservatives, who had long battled the New Deal, saw little reason to extend it to Japan. Following FDR’s death in April and Germany’s surrender in May 1945, they pushed for modification of unconditional surrender.  

Undersecretary of State and former ambassador to Japan Joseph Grew, Secretary of War Henry Stimson, former president Herbert Hoover, and Admiral William D. Leahy, military advisor to both Roosevelt and his successor Harry Truman, argued that Japan had been a cooperative U.S. partner during the 1920s and could become one again. Leahy, Stimson, and Hoover questioned the need for a full-scale occupation of Japan and predicted that American efforts to reform Japanese society would create chaos and turmoil and make the country ungovernable. They argued that once the militarists who had hijacked the government were eliminated, the prewar leaders, dubbed “moderates” or “liberals,” would steer Japan back onto a civilized path. 

The fate of the emperor was another point of disagreement. New Dealers saw the monarchy as a bastion of reaction that enabled Japan’s business and military leaders to oppress the country’s workers. Conservatives viewed the emperor as a figurehead, but one who could helpfully stabilize Japanese society in defeat. In short, New Dealers believed the sources of Japanese militarism had to be torn out by the roots. Conservatives thought some careful pruning would do the trick.  

Drawing on their lengthy professional service in the upper reaches of government, Stimson, Grew, and Leahy believed they were better able to define the national interest than the politicians who were beholden to the whims of public opinion. The proletariat, the term Leahy used for the public, should not make policy. Stimson bitterly complained that advocates of unconditional surrender derived all their knowledge of Japan from Gilbert and Sullivan’s Mikado.  Hoover referred to advocates of unconditional surrender as a vengeful minority. Eventually, he would blame unconditional surrender on communist sympathizers who prolonged the war so the Russians could get in on the kill. 

Grew, Stimson, Hoover, and Leahy thought a carefully crafted demarche might convince the Japanese to surrender. The place to start was with a public modification of American war aims. The most important “clarification,” the term preferred by conservatives, was one that assured the Japanese they could keep the emperor on the throne. That “clarification” just might tip the balance of power in Tokyo in favor of the moderates. But time was running out. The Russians were set to join the war in August. The American invasion of Japan was slated for November. Conservative advocates of “clarification” hoped to prevent the twin calamities of a Soviet occupation of Northeast Asia and a costly American assault on Japan.

FDR’s successor refused to cooperate. Despite steady pressure from Grew, Stimson, and Hoover, Harry Truman stuck to unconditional surrender. Following a meeting with Truman in late May, Hoover encouraged Republican senators to take up the cause. Senator Homer Capehart (R-IN) asked why “we must destroy Japan’s form of government and then spend years in occupation and teaching a different form of government.” Kenneth Wherry (R-NE) and minority leader Wallace White (R-ME) likewise called for clarification of unconditional surrender and questioned the need to occupy Japan. They were supported by the conservative press. Time magazine publisher and Republican internationalist Henry Luce personally lobbied senators to support a statement “clarifying” unconditional surrender. Raymond Moley, a Roosevelt ally turned foe, wrote in the Wall Street Journal that the new president should tell the Japanese he had no intention of “interfering with the religious and social system which centers in the Emperor,” except to ensure that it did not promote aggression.

Truman was unmoved. Reporting on conditions within the administration, a Hoover confidant wrote that “Liberals and New Dealers” wanted to execute the emperor. More knowledgeable officials were warning of a protracted war if the U.S. insisted on destroying “Japan’s religious and political systems.” Truman refused to offer the emperor any guarantee. On July 26, the Allies warned the Japanese they faced prompt and utter destruction if they continued the war. The Japanese would not budge. 

Contrary to what Hoover and the others claimed, Hirohito did not contemplate surrender. He sought a peace that would leave the monarchy unmolested and Japan’s political structure unchanged. Rather than approach the Americans, Hirohito tried to buy Soviet good offices by offering Joseph Stalin slices of Japan’s empire. 

The end came swiftly in a series of world-shaking events. On August 6, the first atomic bomb was exploded over Hiroshima. The Soviets declared war on Japan on August 8. The next day, a second atomic device instantly killed 39,000 inhabitants of Nagasaki. The Japanese government finally offered to accept the terms of the Potsdam Declaration provided they did not impinge on the prerogatives of the emperor. Stimson and Leahy urged Truman to accept. Instead, the president authorized a reply that made the authority of the emperor and the Japanese government subject to the Supreme Commander for the Allied Powers.

It was a crucial distinction. Acceptance of the Japanese offer would have left the emperor’s considerable prerogatives intact and thwarted from the outset American efforts to reform Japanese society. Truman told Democratic senators Mike Mansfield and Warren Magnuson that he thought Hirohito was as guilty as Hitler and Mussolini. He was, however, willing to let the emperor remain on the throne, but only if he served American war aims.

Following the war, conservatives argued that the same arrangement could have been made if Truman had been willing to ignore the public’s demand for vengeance and abandon FDR’s unconditional surrender policy. That was not so. Throughout 1945, Hirohito was unwilling to accept any limits on his traditional authority or any change to Japan’s political structure. 

Hirohito became the figurehead that Grew and the others said he was, but only after he agreed to the unconditional surrender of Japan’s armed forces and the occupation of the homeland. Everything that followed, the disarmament of Japan, the reform of its economic, political, and social institutions, and the adoption of a new constitution, in other words, a New Deal for Japan, was preceded by Truman’s insistence on unconditional surrender. 

The debate over unconditional surrender extended the ideological battleground of the New Deal into the international realm. Understanding that enables us to see how difficult it is to separate partisanship from foreign policy and reminds us that Americans did not abandon politics when they mobilized to fight the “Good War.”

The 1976 Election: Why We Can't Predict Vice Presidential Selections in Advance

 

 

There’s a reason why vice presidential picks are impossible to predict: Even the presidential candidate who makes the decision rarely knows the choice in advance, because the final selection usually depends on an eleventh-hour turn of events that no one can fully anticipate.  This was the case in the 1976 election, a rare moment when not two, but three, major-party candidates selected running mates.  And in each of the three cases, the choices were eleventh-hour selections that pundits did not expect.

 

The vice presidential selection process held more importance than usual in 1976, because when the primaries ended in early June, neither the Republican incumbent Gerald Ford nor the Democratic frontrunner Jimmy Carter had enough delegates to secure his party’s nomination.  Though Ford held a slight lead over his challenger, California governor Ronald Reagan, a grueling neck-and-neck primary race had left neither the conservative insurgent Reagan nor the centrist Ford with enough delegates to claim the nomination outright.  Carter had a much more formidable lead over his primary opponents than Ford did, but a divided Democratic Party – with delegates split between multiple liberal candidates who still refused to concede after the last primary – meant that it was still theoretically possible for the party liberals to deprive the centrist Carter of the nomination if they could agree on a single alternative candidate.  Fortunately for Carter, they did not, but the divisions in their parties meant that for both Carter and Ford, picking a running mate was about more than personal preference or general election considerations; it was also about uniting a fractured party in order to win over some wavering convention delegates and appease disgruntled party activists who had supported another candidate in the primaries.

 

To the surprise of almost everyone, Reagan was the first to announce his choice of running mate.  Traditionally, candidates had waited until they were assured of their party’s nomination to pick a potential vice president, but Reagan’s campaign manager, John Sears, thought that the California conservative’s best chance of winning the nomination was to shake up the process by selecting Ford delegate Richard Schweiker, a moderately liberal Republican senator from Pennsylvania who, Sears hoped, would bring the rest of the Pennsylvania Republican delegation into the Reagan camp and maybe peel off a few Ford delegates from the New York delegation as well – which would be enough to give the nomination to Reagan. 

 

The gambit failed badly.  The other Pennsylvania delegates refused to back Reagan.  And Reagan lost support from conservative southern delegates who felt betrayed that their candidate had picked a northern moderate liberal in apparent opposition to his conservative principles.  

 

For the Ford campaign, Reagan’s selection of Schweiker was a lesson in what not to do.  Ford resolved to spend the next few weeks carefully vetting each potential vice presidential candidate and selecting someone who would unify the party while also remaining fully compatible with the ideology of his own centrist Republican campaign.  Yet ultimately, the selection process ended up becoming far less organized or predictable than Ford anticipated.

 

Ford already had a vice president, of course: Nelson Rockefeller, the former governor of New York who for more than a decade had been the unofficial leader of the liberal wing of the Republican Party.  When Ford had assumed the presidency after Richard Nixon’s resignation in August 1974, he had selected Rockefeller because of his nearly unparalleled executive experience as governor of one of the nation’s largest states.  But to Ford’s dismay, the reaction from the conservative wing of the party was so vociferous that it fueled Reagan’s primary challenge and threatened to divide the GOP.  Faced with the possibility that it might be impossible to win his party’s nomination with Rockefeller on the ticket, Ford reluctantly notified Rockefeller in the fall of 1975 that he would not be the president’s running mate again the next year.  While Ford had never said who exactly would replace him, many both inside and outside of his campaign assumed that it would be someone more conservative.

 

But this was not Ford’s desire.  Even after doing weeks of interviews, public opinion polling, and vetting of potential candidates, Ford arrived at his party convention without having selected a running mate.  All but one of the finalists on his list were either party centrists or moderate liberals – not conservatives in the Reagan mold.  Those closest to Ford believed he was leaning toward one of the most liberal on the list – William Ruckelshaus, a former head of the Environmental Protection Agency (EPA) and deputy attorney general who had earned a reputation for honesty when he resigned rather than carry out Nixon’s orders during the Watergate scandal.  Years later, Ford said that he had actually wanted to select Anne Armstrong, an ambassador, but some of the president’s advisors objected, noting that public opinion polling indicated that placing a woman on the ticket would result in a net loss of votes.  

 

Among delegates, there was strong support for Reagan as a vice presidential candidate, a move that would have created a Republican unity ticket that would presumably help Ford in the conservative South, a region that would otherwise likely go to Carter.  But Reagan insisted that he would never accept the number-two spot, and Ford, who was angry with Reagan for challenging an incumbent president in his own party, was not eager to offer it to him.  Instead, Ford met with Reagan at the convention and asked him which candidate he would accept.  Reagan mentioned Bob Dole, a moderately conservative senator from Kansas who had not been on Ford’s list of finalists.  Ford was not immediately inclined to accept Reagan’s suggestion, but after mulling it over throughout the night and talking with his advisors, he decided that Dole would be the best candidate.  He could mollify party conservatives, solidify Ford’s support in the farm states (a normally Republican region that was threatening to break for Carter), and perhaps even help the campaign make inroads in the South.  On the morning of the final day of the convention, Ford announced that Dole was his choice.  Dole barely had time to write an acceptance speech.

 

Compared to Ford’s eleventh-hour selection of Dole, the Carter campaign’s vice presidential selection process was supposed to be much more orderly.  The campaign determined early on that Carter needed to select a northern senator to balance a ticket headed by a southern governor without Washington experience, and there was a general assumption that Carter’s running mate would probably be more liberal than he was (since nearly all northern Democratic senators were to the left of the fiscally conservative, socially moderate Carter).  Some of Carter’s advisors encouraged him to pick a Catholic, since he was initially not polling well among northern Catholics.  Carter ignored some of this advice.  He allowed his campaign aides to draft a list of potential candidates and oversee the vetting process, but he insisted that his highest priority would be to find someone who could be a potential governing partner – which meant, in his view, that the person had to be temperamentally compatible and, above all, share his values.  

 

Carter did agree to interview one senator who seemed to fulfill all of the criteria set by his advisors: Senator Edmund Muskie of Maine, a Catholic with strong working-class roots who had been Hubert Humphrey’s running mate in 1968.  But the interview with Muskie did not go well.  Carter was disturbed by Muskie’s temper, and he crossed the Maine senator off his list.

 

At the start of the Democratic convention, some journalists assumed that the frontrunner for the number-two position was John Glenn of Ohio, a former astronaut who was serving his first term as senator.  But Glenn’s bland keynote address at the convention sank his prospects.  By contrast, Representative Barbara Jordan of Texas gave such a rousing address highlighting issues of race and social justice that some African American delegates insisted that she be selected for the vice presidency – a move that would have made her not only the first woman, but also the first Black American, selected for a major-party ticket.  But Carter, in the end, settled for someone much more traditional.

 

Carter’s advisors had not initially considered Senator Walter Mondale of Minnesota the best candidate for the position.  After tentatively exploring a presidential bid a year earlier, Mondale had ended his campaign almost as soon as it began, saying that he did not want to spend the next year “sleeping in Holiday Inns.”  Some in the party questioned his stamina.  And within the Carter campaign, pollsters noted that Mondale would likely be a net negative for the ticket by costing the campaign votes among moderates who distrusted his liberalism. 

 

But Carter, despite his ideological differences with Mondale, appreciated the Minnesota senator’s impeccable reputation for honesty.  As a Baptist deacon and Sunday school teacher with a strong faith, Carter also felt comfortable with Mondale’s background as a Methodist minister’s son.  His interview with Mondale sold him on the candidate, and he selected him in spite of the warnings from his pollsters – just in time for the end of the convention.

 

How well did these eleventh-hour vice presidential choices turn out?  Both Mondale and Dole proved themselves to be loyal, hard-working campaigners who avoided scandal and consistently championed the ideals of their party.  Mondale’s lack of ideological compatibility with Carter’s conservative-leaning centrism resulted in tensions once Carter was elected president, but this was not evident on the campaign trail.  Dole’s acerbic wit, meanwhile, alienated some voters and made some Republicans regret his selection.  In the end, Dole’s place on the ticket did not win over enough southern conservatives to allow Ford to make significant inroads in the South.  But he did help keep the farm states in the Ford column.  

 

Mondale had much less pull in the northern states than Carter’s campaign had hoped.  If Muskie had been on the ticket, Carter would likely not have lost Maine and the rest of northern New England (as he did with Mondale), and if Glenn had been selected, Ohio might not have been such a nail-biter for the Carter campaign.  (Carter carried Ohio in the end, but only by the slimmest of margins.)  But if Mondale’s presence on the ticket did not give Carter as much of a boost in the North as he had anticipated, neither did it hurt him in the South as much as some of his campaign aides had feared; Carter carried every southern state except for Virginia, even with Mondale as his running mate.  And, despite not being selected, both Muskie and Glenn continued to play important roles in national politics – Muskie as Carter’s Secretary of State and Glenn as a senator and, eventually, presidential candidate.

 

So, perhaps, in the end, the 1976 election shows that even if eleventh-hour vice presidential selections hinge on factors that are impossible to predict in advance, the types of candidates who emerge from these selections are usually reliable – assuming that a well-conducted vetting process occurs before the final selection.  The press may not have been able to predict that Dole would be Ford’s running mate or that Mondale would be Carter’s, but both candidates generally fit the profile for the type of candidate that a vetting process might have produced.  So, although no one can know in advance the name of Joe Biden’s running mate, whoever he selects is likely to look in retrospect like an obvious choice, regardless of who she may be. 

Hiroshima (1953, Hideo Sekigawa)

 

Hideo Sekigawa's 1953 feature film Hiroshima is difficult to view in its entirety, partly because the Communist Sekigawa's political views were out of favor in the context of the postwar relationship between Japan and the United States. This clip was brought to HNN's attention by Erik Loomis at the Lawyers, Guns & Money blog. Loomis writes: 

This is an amazing film, one of the best political films I have ever seen. It is extremely angry. And it directly blames the United States for launching a horror on the world for which there is no excuse. And yet it is almost unknown. It was briefly streaming on a service at one point–maybe Filmstruck back before it closed and I just happened on it and watched with amazement. But it wasn’t on there long. It’s not available on DVD in the US though it is in Britain. But because everything is stupid, America has its own system and foreign DVDs won’t work on our players. On IMDB, this film has a whopping…176 ratings, one of which is mine. It was basically blacklisted after its release, which was outside the Japanese studio system, because of its politics at a time when Japan was just moving to get out from under U.S. occupation. It really deserves a wider showing. 

For further reading on the politics of postwar Japanese film and Sekigawa, see Kazu Watanabe's essay for the Criterion Collection's website: 

Hiroshima points its finger in multiple directions, but more than assigning blame to any one cause or agent of war, the film offered a way for those affected by the bomb to show Japan—and the rest of the world—what they suffered. According to the film’s promotional material, up to 90,000 Hiroshima residents, some of whom were hibakusha, and local labor union members were used as extras in the film’s epic scenes of mass destruction. City officials and local businesses lent full support to the production, and the Hiroshima-born lead actress, Yumeji Tsukioka, appealed to the vice president of Shochiku, with whom she was under contract, to let her act in the film for free. According to researchers Mick Broderick and Junko Hatori, early reports about Hiroshima focused on its collaborative nature and repeatedly contrasted it to Shindo’s film. “The extras’ participation was the most enthusiastic ever seen in a Japanese film,” reported the Mainichi Shimbun, one of Japan’s major newspapers. Hiroshima was expected to “bring about a commotion” to overseas markets, unlike Shindo’s movie, which the newspaper derided as a “sight-seeing film.”

Despite growing public anticipation, Hiroshima’s reception was mixed. Like Children of Hiroshima, the film had been due to premiere on an anniversary of the bombing. But after Shochiku allegedly insisted that the content was too “anti-American” and “cruel” and demanded that several scenes be cut, the release was stalled, and all five major studios reportedly ended up refusing to distribute it. The JTU decided to self-distribute rather than cater to studio demands, and while many praised the film for its realism, the Ministry of Education, Science, Sports and Culture considered it too “anti-American” to show to schoolchildren. Hiroshima was finally distributed in the U.S. two years later in an edited version, giving many American audiences their first opportunity to see images of the effects of the bomb. And in 1959, the film gained even more visibility in the West when Alain Resnais used selected scenes for his masterpiece Hiroshima mon amour, which starred Hiroshima’s lead actor, Eiji Okada.

The Apocalypse Factory: Steve Olson Discusses the Path of Plutonium From Hanford Nuclear Reservation to Nagasaki

Robin Lindley is a Seattle-based writer and attorney. He is features editor for the History News Network (hnn.us), and his work also has appeared in Writer’s Chronicle, Crosscut, Documentary, NW Lawyer, ABA Journal, Real Change, Huffington Post, Bill Moyers.com, Salon.com, and more. He has a special interest in the history of human rights, conflict, medicine, and art. He can be reached by email: robinlindley@gmail.com.

 

 

 

In the late 1930s, German scientists were conducting experiments to create atomic bombs. In response, and fearing a German bomb, scientists and engineers in the United States in 1939 launched what became the Manhattan Project, an effort to develop the world’s first nuclear weapons and beat the Germans to the punch.

In early 1941, American nuclear chemist Glenn Seaborg discovered the radioactive element that he named plutonium, which has atomic number 94. In the months after he isolated the element, he and others saw plutonium’s potential as a fuel for atomic weapons. American efforts to develop a nuclear weapon redoubled after the Japanese surprise attack on Pearl Harbor, Hawaii, on December 7, 1941, brought the US into the war.

As part of the Manhattan project, the US rapidly built a huge facility for plutonium production at Hanford in south central Washington State, an arid, desolate area on the banks of the Columbia River.

During the war, the Hanford nuclear facility attracted tens of thousands of workers from scientists and engineers to skilled workers and laborers. Except for a few project leaders, workers did not know the goal of their intense work at the plant until an atomic bomb fueled with Hanford-produced plutonium incinerated most of the Japanese city of Nagasaki on August 9, 1945. That “Fat Man” bomb killed at least 80,000 people—mostly civilians—and injured many others. Three days earlier, on August 6, Hiroshima had been destroyed by a uranium-fueled bomb.

After the war, scientists determined that plutonium was a more efficient fuel for nuclear weapons than uranium. Hanford became the hub for the production of plutonium, the fuel for all of the nuclear weapons produced during the Cold War. 

After the raucous early work camp was shut down in 1944, the operators of Hanford lived in what historian Kate Brown calls the government-created and highly-subsidized “Plutopia” of Richland, Washington, where highly-paid workers and their families were provided first-rate education, health care, and other amenities. In this arrangement, workers produced the extremely dangerous plutonium and the government kept their work secret.

When the Cold War waned in the 1980s, plutonium production stopped at Hanford. The mission then shifted to environmental cleanup and restoration. Today, the facility continues to make news, especially on health concerns and the progress of the massive clean-up.

In his lively and lucid new book Apocalypse Factory: Plutonium and the Making of the Atomic Age (W.W. Norton), acclaimed author Steve Olson blends history and science to tell the story of plutonium and of the massive production facility at Hanford. He details how the nuclear facility was created and how it shaped the story of the region and the nation. And he persuasively argues that Hanford is the most important site of the nuclear age.

Mr. Olson’s new book is based on extensive research and travels to Hanford and other US sites, as well as to Nagasaki—a trip that contributed to his vivid and moving description of the bombing 75 years ago and its horrific aftermath. 

Mr. Olson chronicles this nuclear era history through human stories from survivors of the bombing to the great nuclear scientists and military leaders as well as the humble laborers and citizens of the Hanford area. A native of eastern Washington, he presents a unique perspective on the immense Hanford facility that altered world history. 

Mr. Olson is an award-winning, Seattle-based science writer. His other books include Eruption: The Untold Story of Mount St. Helens (Winner of the Washington State Book Award);  Mapping Human History: Genes, Race, and Our Common Origins (a finalist for the National Book Award and recipient of the Science-in-Society Award from the National Association of Science Writers); Count Down: Six Kids Vie for Glory at the World’s Toughest Math Competition (named a best science book of 2004 by Discover magazine); and, with co-author Greg Graffin, Anarchy Evolution. His articles have appeared in The Atlantic Monthly, Science, Smithsonian, The Washington Post, Scientific American, and many other periodicals. Mr. Olson also has served as a consultant writer for the National Academy of Sciences and National Research Council; the White House Office of Science and Technology Policy; the President’s Council of Advisors on Science and Technology; the National Institutes of Health; and many other organizations. 

Mr. Olson generously responded to a series of questions in a conversation by email.

 

Robin Lindley: Congratulations Steve on your new book on plutonium and Hanford. What inspired your new book? Did it grow out of your childhood in Eastern Washington or your past research on the many topics that you’ve explored in your writing?

Steve Olson: Several things had to come together for me to begin working on this book, but I doubt I would have started it if I hadn’t grown up in Othello, Washington, just 15 miles away from the nearest reactor at Hanford. We couldn’t see Hanford from Othello, because it was on the other side of a ridgeline from us. But we knew it was there, behind barbed wire fences and heavily guarded.

Also, I’ve always been interested in science, even when I was a kid, so I grew up wondering what went on at Hanford. And then, in 1983, when I was living in Washington, DC, a magazine editor sent me to Hanford to write a story about nuclear power, and I decided in the middle of that trip that I wanted to write a book about the place someday.

Robin Lindley: How did you and your family and friends think about Hanford when you were in grade school and high school?

Steve Olson: I grew up in Othello in the 1960s and early 1970s, and Hanford at that time was still an extremely secretive place. People had known since the end of World War II that it made plutonium for nuclear weapons, but they didn’t know much more than that. My grandfather was an occasional steamfitter at Hanford. But workers at the plant had to agree not to tell even their family members what they did.

Robin Lindley: Your book is wide-ranging, from the discovery of plutonium to the story of Hanford and the wartime use of plutonium to nuclear waste cleanup efforts today. How did your book evolve from your initial conception?

Steve Olson: Not long after I decided to someday write a book about Hanford, Richard Rhodes published his incredible book The Making of the Atomic Bomb. That book had a big influence on my thinking about what became The Apocalypse Factory.

I always wanted to tell the whole story of the nuclear age, from the discovery of fission and plutonium to the present day. But to make the book manageable, I knew that I had to tell the story from a particular perspective, so I chose to tell it largely through the lens of the people associated with Hanford’s construction, operation, and decommissioning.

Robin Lindley: What was your research process for the book? Did you do archival work and interview the major figures you discuss? Did you find any surprises?

Steve Olson: I did almost every kind of research I can imagine doing -- interviews, archival research, multiple days spent in Hanford and the surrounding area, trips to places like Oak Ridge and Nagasaki, and huge amounts of reading (I didn’t anticipate how much reading I would have to do). 

I was surprised by how much new material I found, even on a topic that many others have written about. Some of it is trivial, like the fact that Fermi built his first reactor in a racquets court rather than a squash court. But some is much more important. I make the claim, for instance, that the Manhattan Project would not have happened if Glenn Seaborg hadn’t discovered plutonium a few months before Pearl Harbor, which is a claim that hasn’t been made before. That’s the advantage of telling the story from the perspective of Hanford. Things that seem puzzling about previous historical accounts suddenly become clear.

Robin Lindley: Your writing on the experiments with uranium and the discovery of plutonium is vivid and engaging. You cover this history of radiation and nuclear physics from the time of the Curies to the work with atomic weaponry. The speed of development of atomic weapons was breathtaking.

Steve Olson: Maybe the most exciting thing I discovered in writing the book is how many scientific developments had to occur in relatively quick succession to make atomic bombs possible. The scientific story of plutonium’s discovery is amazingly compelling -- and also idiosyncratic. 

If you ran history again, it would almost certainly not work out the way it did. I’m glad you liked the scientific descriptions, because that section of the book used to be about twice as long. But my editor at W. W. Norton, Alane Mason, argued that readers would not be eager to plow through that much science before Hanford even appeared on the scene, and I struggled mightily to cut that material down.

Robin Lindley: Nuclear chemist Glenn Seaborg is credited with discovering plutonium and is a major figure in your book. What are a few things you’d like readers to know about him and his discovery?

Steve Olson: Many readers, I think, will feel a special affinity with Glenn Seaborg -- I certainly did. He was from a small town, was fascinated with science as a boy, worked his way into world-leading scientific institutions through perseverance and good judgment, and suddenly found himself in a position to change the course of world history. The discovery of plutonium in 1941 was not at all preordained. If Seaborg hadn’t had the knowledge and experiences that he did, plutonium might not have been discovered for several more years, and the Manhattan Project might not have happened. Counterfactuals are impossible to construct reliably, of course. But a historical account of its discovery makes clear how improbable this particular course of events was.

Robin Lindley: When did US nuclear scientists first realize that a world-altering weapon of war could be made from uranium and later plutonium? 

Steve Olson: The full awareness grew on them gradually, even if the path ahead seems clear in retrospect. Not long after the discovery of fission around Christmastime of 1938, scientists realized that an atomic bomb should be possible if enough of a rare isotope of uranium could be separated from uranium ore -- a process that seemed so daunting that the physicist Niels Bohr once said “it would take the entire efforts of a country to build a bomb.” 

But the realization that plutonium also could be used to make an atomic bomb took place slowly after Seaborg and his graduate student assistant first isolated the element on February 24, 1941. A committee at the National Academy of Sciences -- where I’ve worked as a consultant writer for the past 40 years -- considered the issue in three reports issued over the last half of 1941, and you can see the committee’s position change as the prospects for a plutonium bomb grew brighter.

Robin Lindley: A big part of your story is of how Hanford was chosen as the site for producing plutonium and how the nuclear facility there shaped history. What were the major considerations in choosing Hanford for this huge facility?

Steve Olson: I start the book with the selection of the Hanford site. In December 1942, a colonel from the U.S. Army Corps of Engineers named Fritz Matthias was sent to the western United States to look for a place to build the world’s first large-scale nuclear reactors, which is what you need to produce enough plutonium for atomic bombs. He took a list with him of the necessary site characteristics: water and electricity for cooling the reactors, a rail line to transport equipment and chemicals, and enough isolation to limit casualties if one of the reactors blew up. As soon as he flew over the Horse Heaven Hills in south-central Washington State and saw the arid and sparsely populated plain that lies within a broad bend of the Columbia River just southwest of Othello, with powerlines from the Grand Coulee Dam running through the site and a spur line from the Milwaukee Road, he knew he’d found what he was looking for.

Robin Lindley: How did Hanford fit into the overall development of nuclear weapons in the Manhattan Project?

Steve Olson: Hanford produced the plutonium for the first nuclear explosion in human history -- the Trinity test that was carried out in New Mexico on July 16, 1945. The bomb dropped on Hiroshima used uranium produced at the Oak Ridge facility in Tennessee, but that was a technological one-off -- almost no bombs of that design were ever built again.

The Nagasaki bomb, and future bombs then in the pipeline, were designed to use plutonium from Hanford. Along with a second facility built later in South Carolina, Hanford produced the plutonium that is used as a trigger in all the current nuclear weapons in the U.S. arsenal. That’s why I call my book The Apocalypse Factory. If the plutonium from Hanford is ever used in a large-scale nuclear war, human civilization will probably end.

Robin Lindley: How did the US government deal with residents of the area including Native Americans in taking possession of the land for the Hanford facility?

Steve Olson: Callously, at best. The 1,500 or so residents of the area that would become Hanford all received a letter saying that the government was taking over their property and that they had a few weeks to a few months to move elsewhere. They were horrified, though many later said that they also felt a patriotic obligation to comply. But then the government tried to pay them much less for their land than it was worth, which set off new rounds of acrimony. 

Meanwhile, the land in that area had been used for millennia by various groups of Native Americans, including the Wanapum, who were the group closest to Hanford. Though they retained some rights to visit the land during World War II, they subsequently lost those rights. The Wanapum were treated as badly as Native Americans were all over the West.

Robin Lindley: Historian Kate Brown called Richland the biggest welfare program in US history. How did Hanford evolve from a rowdy work camp of mostly single men to a community of middle-class families in Richland? 

Steve Olson: I would characterize Richland, at least in the early days, as a kind of military installation or base, though one dressed up in the garb of a small American town. When people moved to Hanford, they went to work as employees of the large companies that had contracted with the government to build and operate Hanford. Military officials oversaw the companies and the town’s residents, and those officials felt that they had to provide a semblance of normalcy for people to do their jobs well and compliantly. The town depended on the government for its survival, but the handouts were indirect and targeted.

Robin Lindley: Apart from some high-ranking officials, the workers at Hanford didn’t know that their work would result in a plutonium bomb that would eventually incinerate much of Nagasaki, Japan. How was secrecy maintained on this massive project that employed tens of thousands of workers?

Steve Olson: The security was astonishing. I know, since my grandfather and some of my high school friends worked there. While building the reactors, crafts workers would know what they were doing but not what anyone else was doing. Workers climbing a ladder would have to show clearances to prove that they belonged higher up on a ladder rather than lower down. Billboards, water towers, and fliers were plastered with the phrase “Silence Means Security.” If people talked too much, informants among the workers would alert their superiors, and the talkers would be reprimanded or terminated. Even after most of the security restrictions came down at Hanford, the old sentiments prevailed.

Robin Lindley: On August 6, 1945, the US dropped its first atomic bomb, a uranium device, on Hiroshima. On August 9, Hanford’s plutonium bomb fell on Nagasaki. However, Nagasaki wasn’t the original target. What did you learn about the change in plans and the phrase, “Kokura’s luck”?

Steve Olson: Nagasaki was the backup target on the second bombing mission, and it was added to the target list in an amazingly capricious way. On July 24, the generals in charge of the atomic bombings in Washington, DC, received a message to add Nagasaki to the bombing list from air forces chief Henry Arnold, who was with President Truman at the Potsdam Conference. Nagasaki was not an obvious target, and no one knows who at the conference insisted that it be included in the list, but the generals in DC complied. 

Then, the day of the mission -- which had all kinds of things go wrong -- the primary target, Kokura Arsenal, was covered by clouds and smoke by the time the B-29 containing Fat Man arrived at the city. The Bockscar, which was piloted by 25-year-old Charles Sweeney, made three runs on Kokura, but the crew were never able to see the target and drop the bomb. Thus, the phrase that is still associated with the city: the luck of Kokura.

Robin Lindley: Your description of the Nagasaki bombing through the eyes of a Japanese doctor and other witnesses is vivid and heartbreaking. What was the scope of the destruction in Nagasaki and the casualties?

Steve Olson: I did my best to describe the devastation and human carnage, but there’s no replacement for going to Nagasaki or Hiroshima and reconstructing in your mind what an atomic bomb can do to a city and its people. The Urakami Valley of Nagasaki is several miles wide and eight or ten miles long, yet almost everything in the central part of the valley was destroyed. 

My own sentiment is that any national leader who has the authority to drop atomic bombs should be required to go either to Nagasaki or to Hiroshima and witness the scale of destruction that the bombs caused. And those two bombs were very small by today’s standard!

Robin Lindley: What were the medical consequences of the atomic bomb in Nagasaki? 

Steve Olson: Casualties at both Nagasaki and Hiroshima are surprisingly hard to estimate. But deaths caused by the Nagasaki bombing could have exceeded 100,000. Tens of thousands of people were killed by the initial blast and fire. Tens of thousands more died in the succeeding days, weeks, and months from the radiation generated by the bomb. And tens of thousands more died prematurely in later years from cancers and other diseases caused by their radiation exposure. And this is just a small example of what would happen if nuclear weapons are ever used in warfare again.

Robin Lindley: You mentioned that you traveled to Nagasaki as part of your research. How did it feel to visit there?

Steve Olson: I spent a week in Nagasaki reconstructing the minutes, hours, weeks, and months after the bombing through the eyes of a surgeon at the Nagasaki Medical College Hospital named Raisuke Shirabe. I traced his steps in the hills above the hospital as he fled from the burning city. I had parts of his diary translated into English. I met with his daughter and granddaughter and talked with them about Dr. Shirabe, who subsequently spent decades studying the effects of the bomb on the city’s residents. It was the most emotionally affecting research I’ve ever done for a book.

Robin Lindley: How do you see President Harry Truman’s role in deciding to use atomic bombs? Wasn’t there disagreement on whether to use a second bomb?

Steve Olson: Leslie Groves, the leader of the Manhattan Project, who is one of the central characters of my book, once said that Truman was like “a boy on a toboggan” when it came to making decisions about the use of atomic bombs on Japan. Truman generally distanced himself from the decision making. He never made a formal decision to use the bombs. The course was set by Groves, and Groves wanted to use two or more bombs to end the war quickly. He had the additional motivation of wanting to demonstrate that both of the approaches he had backed -- uranium from Oak Ridge, and plutonium from Hanford -- worked so that he would not have to answer to congressional committees for wasting government funds.

Robin Lindley: Manufacture of nuclear weapons picked up during the first couple of decades after the Second World War as the Cold War with the Soviet Union heightened. The nuclear weapons built after the war were fueled by plutonium so Hanford became a busy production facility. Why was plutonium preferable for bombs as opposed to uranium—as used in the Hiroshima bomb?

Steve Olson: Plutonium produces significantly more energy, pound for pound, than uranium. In modern nuclear weapons, a small pit of plutonium is detonated to create temperatures high enough so that isotopes of hydrogen in other parts of the bomb begin to fuse together, which releases much more energy than the original plutonium bomb. Essentially, every nuclear weapon in the U.S. and Russian arsenals is built around a small version of the Nagasaki bomb.

Robin Lindley: What was Hanford used for once production of nuclear weapons slowed? Didn’t plutonium production end there in the 1980s?

Steve Olson: By the 1970s, both the United States and Soviet Union had more plutonium than they would ever need. Each country had built more than 30,000 nuclear weapons, representing more than a million times the destructive power of the bombs dropped on Hiroshima and Nagasaki. 

With the end of the Cold War and the dissolution of the Soviet Union, the insane size of these arsenals began to drop, and the excess plutonium was set aside either to use in future weapons or to be disposed of. Since then, most activities at Hanford have been directed toward cleaning up the horrendous environmental contamination caused by decades of plutonium production.

Robin Lindley: What have you learned about environmental damage caused by the Hanford nuclear plant?

Steve Olson: Hanford is the most radiologically contaminated place in the western hemisphere, matched only by the comparable site in the former Soviet Union. One hundred and seventy-seven tanks, most the size of a large auditorium, contain millions of gallons of highly radioactive and toxic chemicals generated in the process of producing plutonium. If you held a glass of that material at arm’s length, you’d be dead in a couple of minutes. 

The Department of Energy has made lots of progress in cleaning up Hanford, but it has just begun to deal with the tank waste. Current plans are to immobilize the waste in glass logs and deposit them in a long-term radioactive waste repository. But the technology has been difficult to develop, and the United States has not yet created a repository for the high-level nuclear wastes it has generated.

Robin Lindley: Is the US now making nuclear weapons?

Steve Olson: The United States and Russia are no longer adding to the size of their arsenals. But they are modernizing and miniaturizing their nuclear weapons, which could have the effect of making them easier to use, and if the United States refuses to extend the New START treaty, which expires next February, nations are likely to start building more nuclear weapons. 

As I say in the book, we are going in the wrong direction. Every action we take should be directed toward constraining and ultimately eliminating these moral abominations.

Robin Lindley: To me, the issues of the Hanford cleanup are very complex and seem overwhelming, especially when scientists talk about the extremely toxic substances at the site and the 24,000-year half-life of plutonium. What is the status of the cleanup now and what needs to be done?

Steve Olson: As I said, the Department of Energy has made important progress. Most of the sites right along the Columbia River have been cleaned up, though they are still largely off limits to visitors. Most of the contaminated equipment and soil have been transferred to a plateau in the center of the site, which is also where the tanks of radioactive waste are located. But completing the cleanup, which the federal government is obligated to do, will take many more decades and will cost hundreds of billions more dollars.

Robin Lindley: Hanford and the Tri-Cities have benefited from huge federal government expenditures for more than 70 years, yet the populace seems to be largely conservative and anti-government. How do you see the politics in the region?

Steve Olson: It’s a great contradiction, as are so many aspects of our political life these days. The adjoining cities of Richland, Kennewick, and Pasco, known as the Tri-Cities, were largely the creation of the federal government, and they remain heavily dependent on federal largess -- more than $2 billion per year flows to the area for the ongoing cleanup. But the area is generally conservative, even if many individuals and groups in the region are not. 

Many people think of the area as rural, even though the regional population is now more than 300,000. The population tends to skew white, older, and blue collar, since Hanford was for decades a production facility. 

I grew up surrounded by the conservative politics of the area, and they puzzled me even as a kid. I could see no obvious reason why people so distrusted government. I’m still puzzled. I’d like to write about it someday to try to understand it better.

Robin Lindley: You have a gift for bringing complex issues to life. Who are other writers you admire or see as influences? Are there rules you follow in your lucid writing about technical issues for a general audience?

Steve Olson: I dedicated this book to my wife and also to the memory of John Hersey, from whom I took a nonfiction writing course in college in the spring of 1978. All my books have been heavily influenced by what he taught me. He wanted us to pay attention to the structure of our writing, often by adopting a model that we would visualize in putting a story together. He taught us to pay attention to individual words -- we read poetry in his class to see how words fit together and acquire meaning from their context. 

John Hersey was also my personal connection to World War II.  He had been there in 1946 doing research to write his book Hiroshima.  Now I was in Japan 72 years later doing historical research in the other city destroyed by an atomic bomb.

It’s very kind of you to say that I have a gift for writing, but I see whatever success I’ve achieved as solely the result of practice and determination.

Robin Lindley: You deal with world shaking events in your book, and we are still faced with the threat of nuclear annihilation and an intractable nuclear waste mess, and now a novel virus that is devastating much of the nation. As an acclaimed writer and astute observer, where do you find hope?

Steve Olson: I remain a hopeful person, even though writing a book about nuclear weapons can beat the hope right out of you. But humans have not used nuclear weapons in warfare, as of August 9, for three-quarters of a century. Until recently, the United States and Russia were making steady reductions in their arsenals. 

Many people who have been or could in the future be in positions of authority recognize the need to eliminate nuclear weapons from the earth. And the ongoing cleanup of Hanford gives me hope. As I wrote in the book, “Hanford’s cleanup, if done persistently and well, could provide an object lesson in making the Earth whole again.”

Robin Lindley: Is there anything you’d like to add about your new book or your writing for readers?

Steve Olson: Learning what happened at Hanford is, I think, the best possible way to learn about the nuclear age and what can be done to abolish nuclear weapons. I tried to write this book so that readers would end it with a sense of both understanding and purpose.

Robin Lindley: Thank you Steve for your generosity and thoughtful comments. And congratulations on your sweeping new book on Hanford, plutonium, and much more. It’s an honor to connect with you.

Let's Return Historical Attention to the Crowd

Astor Place Riot, New York, 1849. Image: New York Public Library.

 

 

History and historians are certainly close to the front lines of a host of issues in the crises that currently beset us, in the United States and globally. How we handle and convey the past has emerged as a central issue in the protests that surround us, opening all sort of complex issues to new debate.

One angle that has not received due attention involves what history tells us about crowd behavior, and how this in turn might provide useful perspectives on current tensions.

Crowd history was one of the staples of the “new” social history that emerged in the US and the UK in the 1950s and 1960s, ultimately gaining further interest during the turbulence of the latter decade. Historical findings were carefully incorporated in the late 60s report on “Violence in America.” Since then, however, the subject has languished a bit, displaced by more fashionable concerns about culture and gender; but it may be time to renew our interest, among other things asking how more recent developments – both in contemporary history and in historical study – can amplify earlier findings.

One of the central themes of the earlier studies involved a careful attack on the common conservative argument (originating with Gustave Le Bon) that crowds – “mobs” – are simply violently irrational and that, as a result, their demands do not warrant serious consideration. On the contrary, pioneering social historians like George Rudé showed that crowds develop internal leadership, that their actions are quite selective based on group goals, and that random violence (and particularly, personal violence) is not a significant factor.

These findings deserve reemphasis, and appropriate additional documentation, today. Crowds have frequently played a very constructive historical role. Their normally controlled behaviors have been abundantly illustrated in recent weeks, even though the press pays primary attention to occasional instances of property damage. Historian Dolores Janiewski (Victoria University Wellington) has noted how even the New York Times falls into a classic trap of using the term “mob,” rather than more neutral descriptors, with surprising frequency.  Historical summaries, including of course the recollections of the civil rights protests that have been appropriately recalled as part of the homage to John Lewis, can help rescue crowds from editorial distortion.

Historical work can also remind us of the police variable. Here too, there are some classic components. Police accounts almost always seek to minimize the size of crowds and attempt to call attention to the role of real or imagined “outside agitators” in an effort to disparage a crowd’s responsible goals. But outright use of force by police is itself a variable affecting crowds’ behavior, and renewed historical work can remind us of the effects when authorities have chosen different available options. The same applies to the common conservative strategy of trying to use and distort crowd behaviors to instill fear to justify more repression. 

Renewed historical attention to crowds must of course take into account a number of developments not relevant to the situations (largely late 18th to late 19th centuries) on which initial generalizations were based. Changes in the arsenals available to the “forces of order” have clearly introduced new elements into crowd behavior (and so, to a more limited degree, have mechanisms available to crowds themselves, including most recently items like leaf blowers used to disperse tear gas). The rapidity of social media communications would clearly need attention, though in the old days rumors could also spread surprisingly quickly. The advent of formal thinking about nonviolence needs appropriate note, as well as a wider range of examples of crowd behavior in regions beyond the West. How much all of this would alter the existing basic profile is not clear, but a number of updates clearly warrant examination.

Many historians would also argue that a more explicit emotional component needs to be added to relevant crowd history. Arguably, the earlier generalizations went too far in downplaying the emotional elements of crowd behavior. (Emotion and rational action are not, after all, necessarily in conflict.) Emotion – most obviously anger, sometimes fear or hope – is an essential element in binding a crowd together, motivating it to take risks and step out of normal routine. 

Here too, words matter. Terms like “rage” (another part of the NYT’s recent vocabulary) risk reinforcing the idea of the mindless mob, compared to the more neutral “anger” or an older word I would like to see revived, “indignation” (anger itself is sometimes currently viewed as a “negative” emotion, but the history of crowds reminds us that it need not be so).  There is much to do to integrate emotion into an appropriately balanced assessment of crowd motivation and behavior. 

But emotion also contributes to certain aspects of crowd behavior that deserve further analysis, again in light of recent developments. Most importantly, emotion may promote an interest in immediate, tangible targets, a clear feature of the crowds we see around us today, with their intriguing fascination with statues and other physical symbols of injustice. The question of the durability of crowd emotion is another vital topic. Can the emotions that bring people into the streets be sustained over a long period of time, and how is this best achieved?

Finally, the relationship between crowd action and other protest mechanisms – petitioning, boycotting, and crucial to the present day, voting – also warrants attention. Earlier findings on crowd history – and the work of Charles Tilly was central here – emphasized a distinction between “reactive” protest, against a real or imagined deterioration in relevant conditions, where direct action might be particularly effective; and proactive goals, the demand for new rights, where crowd pressures might need to be supplemented for some chance of positive results. The reactive-proactive balance offers another standard that can potentially be applied to the role of crowds in renewed struggles for racial justice, and the need to combine crowd action with other political tactics.    

As we face new and legitimate demands on history and historians to help navigate the current moment, the opportunity to revive and widen the study of crowds and protest deserves a clear place on the agenda, particularly given the chance to combine earlier findings with the rich recent developments in the history of emotion. Here, it seems to me, may be one way not only to provide perspective on the present, but also to encourage constructive results from the passions that are again motivating people to protest and put themselves at risk in the name of justice. At the least, the subject invites renewed professional debate.

History, Memory and Reconciliation with the (Whole) Past

 

 

 

When I was a boy I was an avid stamp collector. Couldn’t wait to find stamps to fill the gaps in my album. I especially loved the commemorative stamps, though I did wonder at the time why the United States of America issued stamps of such people as Robert E. Lee, Stonewall Jackson, Jefferson Davis, and other Confederates. They were traitors; they wanted to destroy the Union. Of course, when I went on to get my degree in history I learned a nuanced view of the complicated efforts to reconcile the division between North and South in the decades after the Civil War. The federal government felt it best, in order to reunite the Union more quickly, not to concentrate on racial equality or the protection of black rights, but to let the South have its commemorations and memorials for the Confederate dead, for their “Lost Cause.” Even Republican politicians who had been ardent abolitionists were losing interest in the status of the “freedmen” after Reconstruction and simply wanted to unify the nation and move on to the issues of industrialization, the tariff, labor vs. capital, and immigration. And so by the time these statues were erected the Civil War was fading into myth, segregation and voter restrictions had been firmly established in the South, and any African American who defied their second-class citizenship knew s/he could be facing the ever present lynch mob. The underlying message of these monuments was the affirmation of white supremacy.

 

In my lectures I remind students that there are many things to be proud of in American history, but there are other things, other shadows, skeletons lurking in our national closet. Yes, we should celebrate our better angels and the great accomplishments of this republic, but we need to remember and reflect on the dark side of America’s past: the Pequot War, the Trail of Tears, slavery, Wounded Knee, the Jim Crow laws, lynching, xenophobia, mass incarceration, systemic racism… all of it. In order to understand our history, in order to understand where we as a nation have been and where we are going, even in order to understand ourselves as individuals, it is imperative to examine all aspects of our past. The good, the bad, and the ugly. One argument against removing the Confederate statues is that it would be denying one aspect of our past and would seem like covering up the evils of slavery. Still, I do believe they should be removed. Not to cover up what happened, but to be replaced with monuments that tell the full story.

 

But then, some people say: “Well, what about Washington, what about Jefferson? They owned slaves. Are we going to remove their monuments? It’s a slippery slope.” That is a specious argument. We have to look at this rationally. And there is a massive difference between George Washington and Robert E. Lee. There is a massive difference between Thomas Jefferson and Jefferson Davis. 

 

Washington and Jefferson and many others of our founding fathers, despite the fact that they owned slaves, were dedicated to building this nation. Dedicated to creating the United States. And they did so. Even if imperfectly. They were driving forces of the creation of a republic governed by the people. We should be proud that the United States did reckon with, and overcome, the original sin that lay at the founding of this nation and did eventually abolish slavery. The Confederate leadership, generals, and soldiers were doing the opposite—they were trying to destroy the United States. 

 

If Robert E. Lee had been a little more successful at Gettysburg, if a few battles had turned out differently, if Lincoln and the North had lost confidence and the will to preserve the Union, the South would have won its independence as a separate nation. There would have been two countries instead of one. And that would have established the precedent of legitimizing secession. Other states, and regions, could very well have separated too over the course of time: New England, the Mountain States, and (of course) California. How strong would five or six separate republics have been in facing the crises of the twentieth century? Of course, this is idle speculation. But one thing is true. Robert E. Lee was more of an existential threat to the United States than Osama bin Laden ever was. Bin Laden could injure us, could damage us, but he never could have brought down the United States.

 

So, when we reflect on the monuments, we need to ask: does the statue memorialize a person or event that was a force for creating a more perfect union, or a force that sought to demolish the United States? We must always accept and acknowledge ALL of our past, but the choices we make about which historical figures to honor, and how we represent them, tell us who we are.

 

75 Years Later, Purple Hearts Made for an Invasion of Japan are Still Being Awarded

 

 

The decoration, which goes to troops wounded in battle and the families of those killed in action, had been only one of countless thousands of supplies produced for the planned 1945 invasion of Japan, which military leaders believed could last into 1947.

Fortunately, the invasion never took place. All the other implements of that war --- tanks and LSTs, bullets and K-rations --- have long since been sold, scrapped, or used up, but these medals struck for their great-grandfathers’ generation are still being pinned on the chests of young soldiers.

In all, approximately 1,531,000 Purple Hearts were produced for the war effort, with production reaching its peak as the Armed Services geared up for the invasion of Japan. Despite wastage, pilfering, and items that were simply lost, the reserve of decorations was approximately 495,000 after the war.

By 1976, roughly 370,000 Purple Hearts had been earned by servicemen and women who fought in America’s Asian wars, as well as trouble spots in the Middle East and Europe. This total also included a significant number issued to World War II and even World War I veterans whose paperwork had finally caught up with them or who filed for replacement of missing awards. It was at this point that the government agency responsible for storage and distribution of military medals, the Defense Supply Center in Philadelphia (DSCP), found that their decades-old stock of Purple Hearts had dwindled to the point that it had to be replenished. 

The organization ordered a small number of medals in 1976 to bolster the “shelf worn” portions of the earlier production still retained by the Armed Services at scattered locations around the globe. It was then that an untouched warehouse load of the medals was rediscovered after falling off the books for decades. The DSCP suddenly found itself in possession of nearly 125,000 Purple Hearts to add to their continually diminishing stock.


Increasing terrorist activity in the late 1970s and ’80s resulted in mounting casualties among service personnel, and a decision was made to inspect and refurbish the newly found medals. Fully 4,576 of the 124,588 Purple Hearts stored in the Pennsylvania warehouse were deemed too costly to bring up to standards and were labeled “unsalvageable.” The remaining decorations were refurbished and repackaged between 1985 and 1991. Remarkably, more than 120,000, including many of the refurbished medals, were in the hands of the DSCP’s “customers,” the Armed Services, by the beginning of the 1990-1991 Gulf War, stocked at military supply depots and also kept with major combat units and field hospital units so they could --- and would --- be awarded without delay. Demand for the item was high. By the end of 1999, great numbers of the refurbished medals had been shipped out, and the Defense Supply Center entered into contracts for the first large-scale production of Purple Hearts since World War II as 9,000 new awards were ordered for the simplest of bureaucratic reasons. So many medals had been transferred to the Armed Services that the DSCP had to replenish its own inventory.

Emerging wars in Iraq and Afghanistan in the wake of the 9/11 attacks of 2001 necessitated that much of the remaining stock made for the invasion of Japan be “pushed forward,” with the result that the current government organization responsible for managing the inventory, the Defense Logistics Agency, decided to order the production of 21,000 new Purple Hearts in 2008. In all, US military losses since the 9/11 terrorist attacks have totaled some 7,000 killed and 53,000 wounded, according to the latest Defense Department figures. Veterans are also regularly being awarded the Purple Heart retroactively, according to the Military Order of the Purple Heart, yet many tens of thousands of the World War II production are still available. As for the most recently minted medals, they, like those manufactured in 2000, are functionally identical to the refurbished 1945 medals, which had their ribbons and clasps replaced before being placed in the sleek plastic presentation cases that replaced the original World War II-era “coffin boxes.”

All medals, old or new, are considered part of the same stock for inventory purposes, and some portion of the new production is undoubtedly mixed in with the refurbished oldsters ready for immediate distribution by the Armed Services --- not that anyone other than a specialist in the decoration would be able to tell them apart.

An important thing to consider on this 75th anniversary of the war’s end is that when Harry S. Truman became president following Roosevelt’s death in April 1945, Americans from Walhalla, Texas, to Washington, DC, believed America to be in the middle of the war.

Nazi Germany had been defeated at a terrible cost, and now the final battles with Imperial Japan loomed. The initial invasion phase of Operation Downfall would be launched before Christmas 1945, and all wondered who would survive to sail home beneath “the Golden Gate in ’48” after more years of brutal combat.

By July 1945, the US Army and Army Air Force had already suffered more than 945,000 all-causes casualties. As early as January that year the New York Times printed the dire warning of General George C. Marshall and Admiral Ernest J. King that “The Army must provide 600,000 replacements for overseas theaters before June 30, and, together with the Navy, will require a total of 900,000 inductions by June 30.” To meet these needs, the Army had organized a 100,000-men-per-month “replacement stream” for the coming one-front war against Japan, and the Army's figures, of course, did not include Navy and Marine casualties. Meanwhile, some War Department estimates indicated that the number of Japanese dead could reach between five and 10 million, with the possibility of 1.7 million to as many as four million Americans killed, wounded, and missing due to combat, disease, and accidents if the worst-case scenarios based on the recent Iwo Jima and Okinawa battles came true. Across the Pacific, the ultimate casualty figure being circulated within imperial circles in Tokyo was 20 million --- a fifth of Japan’s population.

Among the fantastic quantity of war supplies to support the young Americans grimly facing the coming Armageddon were the Purple Hearts awarded into the twenty-first century despite their production being cut short by Japan’s sudden and unexpected surrender.

Many of the World War II veterans, still alive fifty years after the war, were keenly interested in the fact that a huge quantity of medals had been discovered in a government warehouse and readied for future use.  Chief among them were those who had worked with the Smithsonian Institution on the 50th Anniversary display of the Enola Gay, the B-29 bomber that dropped the atom bomb on Hiroshima.

Controversy had erupted over the Smithsonian’s presentation at the National Air and Space Museum, when veterans protested that the multimedia display and exhibit script was crafted in a way that portrayed the Japanese as victims, and not instigators, of the war.

The veterans were heavily criticized in some academic circles for their insistence that the dropping of the atom bomb had ended the war quickly and ultimately saved countless thousands of American --- and Japanese --- lives during an invasion.

Upon learning of the rediscovered medals and new production after the Smithsonian fiasco, Jim Pattillo, president of the 20th Air Force Association, stated that “detailed information on the kind of casualties expected would have been a big help in demonstrating to modern Americans that those were very different times.”

Medical and training information in “arcanely worded military documents can be confusing,” said Pattillo, “but everyone understands a half-million Purple Hearts.”

But perhaps the most poignant appreciation came from a Vietnam vet --- with a grandson and nephew in the Army today --- who learned for the first time that he had received a medal minted for the fathers of him and his buddies.

“I will never look at my Purple Heart the same way again,” he said.

With perhaps as many as 60,000 of the World War II production still spread throughout the system, it’s possible that some unknown number will still be available another 75 years from now.  Let’s hope that all are.

Honor a Hiroshima Survivor's Legacy: Ban Nuclear Testing and Move to Disarmament

Five Strictly Professional Reasons Why Historians Dislike Donald Trump

The fable of the blind men and the elephant is one that speaks to historians' sense of humility in the face of complexity and to the need to consider other points of view. 

It is unclear if Donald Trump grasps this lesson.

 

It is no secret that most historians dislike President Trump. In December of 2019, more than 2,000 of us signed a petition calling for his impeachment. In June, historian Sean Wilentz wrote that Trump “is without question the worst president in American history.” Most other historians rank him at least among the worst.

 

Many of the reasons for this judgment we hold in common with other citizens--his narcissism and other unappealing character traits, his handling of the coronavirus pandemic, etc., etc., etc. But we also dislike him because he violates so many of the values important to us as historians. Here I’ll identify and explain our appreciation and his violation of just five of these values: 1) respect for truth; 2) respect for science, reason, and facts; 3) the valuing of history; 4) a realization of the complexity of life and history; and 5) an appreciation of empathy.

In a recent HNN op-ed I quoted from Jill Lepore’s These Truths: A History of the United States (2019): “The work of the historian” includes being “the teller of truth.” And I added my own conviction that “Tell the truth” should be as central to historians as “First, do no harm” is to doctors and nurses. My article also quoted Donald Trump and His Assault on Truth: The President's Falsehoods, Misleading Claims and Flat-Out Lies (2020): “Donald Trump, the most mendacious president in U.S. history . . . . [is] not known for one big lie—just a constant stream of exaggerated, invented, boastful, purposely outrageous, spiteful, inconsistent, dubious and false claims.”

Trump’s lack of respect for truth is related to his lack of respect for science, reason, and facts. The historian H. Stuart Hughes entitled one of his books History as Art and as Science, and he was correct. It is both. We historians share with natural and social scientists a commitment to the scientific method, which means trying to view evidence in as unbiased a fashion as possible and drawing conclusions that approximate truth as closely as we are capable. In his The Modern Mind: An Intellectual History of the 20th Century (2001), Peter Watson praised science for having “no real agenda” and for being open, tolerant, objective, and realizing its results were cumulative, one discovery building upon others. Such an approach also suggests humility by recognizing the limits of the truths we discover.

 

We historians are like the characters in the Buddhist tale of the elephant and the blind men. Each blind man feels only one part of the elephant--e.g., a tusk, a trunk, or a tail. In Ethics and the Quest for Wisdom Robert Kane relates this story to make the point “that the whole or final truth is not something finite creatures can possess entirely. What they can do is partake of or participate in that truth from limited points of view." What Kane says about truth is also true of history--each historian presents only a limited point of view. 

 

Five years ago on this site I suggested that the quote “history is the error we are forever correcting” could more accurately be rendered “history is the inadequate portrayal of the past we are forever correcting.” Further, I indicated that “this helps explain why there will always be revisionist interpretations.” Valuing the scientific method, we do not see our historical works as the final words on any subject, but merely honest, impermanent efforts to portray some slice of history.    

 

But Trump has no respect for science. Before becoming president, he tweeted that “global warming is an expensive hoax.” And in the present coronavirus pandemic he has often ignored the best advice of scientists, for example in regard to wearing masks, easing government restrictions, and the hyping of the drug hydroxychloroquine. Thanks largely to his influence, many of his followers ignore the best scientific advice regarding social distancing and mask-wearing. 

 

Given his lack of respect for science, his failure to value history (our third professional reason for faulting him) should come as no surprise. Early in Trump’s presidency David Blight, author of the acclaimed Frederick Douglass: Prophet of Freedom (2018), wrote “Trump and History: Ignorance and Denial.” The article summed up nicely why a distinguished historian found Trump’s “5th grade understanding of history or worse” so “deeply disturbing.” “Perhaps,” Blight speculated, Trump’s “grasp of American history rather reflects his essential personality, which seems to be some combination of utter self-absorption, a lack of empathy, and a need to believe in or rely upon hyper individualism.” 

 

In the three years that have passed since Blight’s article, several books like A Very Stable Genius by Philip Rucker and Carol Leonnig, and The Room Where It Happened, by John Bolton, have added new examples of Trump’s ignorance of history--e.g. not knowing that India and China shared a border or that the United Kingdom was a nuclear power (as it had been for over a half century). But Blight’s article retains its relevance for why presidential historical ignorance is so appalling to historians.

 

A fourth Trump failure from a historian’s perspective is his failure to acknowledge the complexity of life and history. On a recent PBS Newshour discussing the tearing down of statues, some African Americans stated that the Washington, D.C. Emancipation Memorial of Lincoln standing above a kneeling newly freed slave should be torn down because it depicts a black man kneeling in a subservient position. But historian Lonnie Bunch, founding director of the National Museum of African American History and Culture, said it would perhaps be better to add “another statue next to it of Frederick Douglass, for example, creating, in a sense, more history.” 

 

In general, Bunch said he wanted to see “a reasoned process that allows us to discuss, that allows us to bring history before we make decisions of pulling things down.” By adding the Douglass statue, Bunch thinks the revised memorial would provide more complexity and nuance, which is what he believes history should do. 

 

Newshour interviewer Jeffrey Brown commented, “That may be a lot to ask in an America so greatly divided, seemingly not in a mood for complexity and nuance, now fighting over its past and future one statue at a time.”

 

A lot to ask? Yes, indeed. We historians often feel compelled to reject simplistic causal explanations by saying, “Well, the matter is more complex than that.” For Bunch is right--history teaches us that the reasons things occur are usually complex and nuanced--but people often prefer, especially in our polarized political arena rife with conspiracy theories, more simplistic accounts.

 

In his The Historian’s Craft, Marc Bloch wrote what we historians know well--“the fetish of a single cause” is often “insidious.” In Historians’ Fallacies, David Hackett Fischer dealt with this insidious error under the category of “the reductive fallacy,” which “reduces complexity to simplicity.” 

Such reduction, such over-simplification--and even that often untrue--is typical of Trump’s style. New York Times columnist Thomas Edsall recently cited several studies that indicate “Donald Trump has the most basic, most simplistically constructed, least diverse vocabulary of any president in the last 90 years.” Edsall places Trump among “authoritarians [who] are averse to diversity and complexity.” 

The fifth (and final) Trump failure to be highlighted here is something Blight mentioned above, Trump’s “lack of empathy.” Over four years ago my “Historians Need to Write and Teach with Empathy” appeared on HNN. In it I supported the article’s main point with quotes from various historians such as John Lewis Gaddis, and also furnished a link to a piece by then-editor Rick Shenkman that explained empathy. About a year ago I contrasted President Obama’s emphasis on empathy with Trump’s egoism, which completely submerged “any signs of empathy.”

The five reasons provided above are not meant to be exhaustive. No doubt other historians can think of additional ones for our collective antipathy toward the man Wilentz labels “the worst president in American history.” I look forward to reading them.  

Reflections from "That Further Shore": A Constitutional Lawyer's Immigrant Family History

John D. Feerick (Photo: Fordham University School of Law)

 

 

The book in its initial stages (18 years ago) was to be a story of my parents, both of whom emigrated to America from Ireland in their late teens. The idea of writing such a book, rather than one on the Constitution, as with my earlier book efforts, developed at the end of the decade after their deaths in the late 1980s. I felt the pain of their loss very deeply, and began in 1989 to make trips in the summer to Ireland to learn more about them as children, their siblings, and their ancestry. I visited cemeteries to check out the inscriptions on tombstones for information on my ancestors. I walked around the farms on which my parents were born to learn something about farming and the bog land. I visited the locations of the elementary schools they attended, because that was the limit of the formal part of their education. I visited the churches they attended in Ireland as children to light candles in their memory and to meet the pastors of the churches in the hope of finding more information about them and their families. I discovered facts that were new to me and even to my Irish relatives. I spoke to neighbors and friends of their families in Ireland about what life was like for my parents as children. I learned about the music of their time, the kind of games they played, and the work they performed on their family farms. I studied the Irish railway system, even reading books about the system, so that I could visualize the journey that took them from their homes in County Mayo to County Cork for the ship each traveled on to America. I relived that train ride in my mind, with the facts I had, and speculated where they may have stayed overnight once arriving in Cork before the ship left for America, not unlike the Titanic, whose last stop was Cork before its Atlantic journey. I stood on the dock in County Cork where my parents likely stood in 1928 and 1929, imagining what it must have been like for them, leaving alone with a few pounds and possessions, without anyone present to see them off. Little did they realize they would never again see their parents and some of their siblings. 

On arriving in America, they found a place to stay for a brief while in a rooming house in Manhattan, then located employment and eventually met at a house party or dance, dated, married, and began to raise a family. I was their first born, and for the next 23 years I came to know them in their small apartments in the Bronx, first in buildings without elevators and then at 305 East 161st Street, which is not far from Yankee Stadium. We lived in what seemed like a little village, with everything available to us, despite the presence and noise of trolley cars, milk trucks, and all kinds of other vehicles. We played our games on the streets, with cars moving east and west, near our apartment house. We attended a Catholic school around the corner from us, which was adjacent to our church, and a public school and public library. Along the route from our house to the school were many small stores and shops that served all our needs. In our two-bedroom apartment, love was ever present, though we did not recognize it at the time. We learned about the world from a World Book encyclopedia set my parents purchased from a salesman at the door. We said our prayers at night, and ate and did our homework around the kitchen table, with clothes to be dried hanging from the ceiling. We most reluctantly from time to time accepted parental discipline for our mischief. We enjoyed our holiday celebrations and time away from school, such as Christmas and Easter, and loved hearing the music that flowed when our parents had friends over for a celebration of a birthday or holiday. We watched Mom sing and dance at these events while Pop played the accordion.

We took to heart in time their constant refrain on the importance of studying, not missing school, going to confession to repent for our sins, and buckling down and making something of ourselves, without any proscriptions from them as to what to do with our lives. I was a slow starter, unlike my siblings, with three of us in the fold by 1941 as Pop went off to war-related service, and with two other siblings who came along after the war. Mom was on duty 24 hours a day, taking us to parks in carriages as young children and, when we were older, encouraging us to stay outside and play until dinner time lest she go crazy, crowded as we were in a very small apartment. She cooked and painted and knitted clothes for us to wear. She did the wash in the kitchen sink and hung clothes for drying on lines both inside and outside our apartment. We had no TV until 1951, but we loved listening to the radio. While a telephone at some point was installed, we rarely used it. Pop worked at all kinds of trades - on milk trucks, in the subways, on the waterfront, and on military installations during the war years, and then as an operator of trolleys and buses until he retired. He accepted the decline of Mom, giving her a total commitment until cancer took his life. And how does all of this tie to a book with so much of my own history? Simply stated: my story would not have been possible without my parents, their love, devotion, values, example, and their commitments to the five of us as children, the families they left behind, their friends, and to the country they adopted.

It was a most happy childhood I had, which I began to realize as I put their story down in writing, and I saw that my story and the stories of their other children were integral to their lives in America. Work by us in part-time jobs was encouraged by them, and we took advantage of that in order to have a few dollars to spend and later help pay for school costs. When I attended college, I became more conscious of them and their love, dreaming that maybe someday I could afford to buy them a color TV set, help them purchase tickets to return to Ireland to see their families again, and maybe in time build for them a house in Lake Carmel, where they had a small building lot but no money to build a house. These became my life's goals as I left college to attend law school in 1958. A religious vocation and a supermarket management position were, during my college years, my principal career goals, but these faded as I approached graduation and saw life as a lawyer allowing me to become engaged in the political system. My plan for the future was otherwise left open to what came along.

Little did I foresee that one day I would have an opportunity to join what became one of the most successful law firms in the world and to serve as dean of a law school that became nationally and internationally recognized, and to be afforded an opportunity, because of my writings, to help in the crafting of two constitutional amendments, one now in the US Constitution, and the other, for the abolition of the electoral college, which made it successfully through the House of Representatives in 1969. So many other opportunities unexpectedly came along in areas of conflict resolution, mediating and arbitrating disputes, chairing state investigative commissions, serving in leadership positions in the organized bar and on corporate boards, engaging in volunteer causes in New York and Washington, and peace-making activities in Ghana and Northern Ireland.

But none of the above would have been possible without my parents. When they left us, there was hardly any notice of them other than having lived and died. I therefore decided to tell their story, and in the process my own story, as a kind of verification of the quality of their lives in America, having arrived from a foreign land as their adult years were about to begin.  

 

 

Tumultuous Transitions in the American Presidency

Ronald L. Feinman is the author of Assassinations, Threats, and the American Presidency: From Andrew Jackson to Barack Obama (Rowman & Littlefield Publishers, 2015). A paperback edition is now available.

 

 

Three months from now America will once again experience the tumult and stress of a presidential transition, if one believes the polls that show former Vice President Joe Biden thwarting President Trump’s attempt to win a second term in the White House. Trump has already made clear his intention to fight tooth and nail, no holds barred, to win a second term, including legal maneuvers with no limits, plus threats simply not to concede. This could create a constitutional crisis that would surpass any previous presidential transition. America has certainly had a lot of experience with difficult changes of government over the last two centuries. An examination of such tumult and stress is instructive.

When John Adams lost to Thomas Jefferson in 1800 the two men, once friends, and later to be such again, had condemned each other during the campaign in every imaginable manner.  The stress became ever greater when Adams had the opportunity to name John Marshall Chief Justice with only a few weeks left in his term. Jefferson argued that a “lame duck” President should not be able to transform the Court after being defeated.  Marshall, who ironically was a cousin of Jefferson, would go on to serve as the most significant Chief Justice in American history, and also its longest serving Chief Justice. He is often considered to have had the greatest influence on the Court’s entire history through the doctrine of judicial review.  

But this period of transition, which also saw Adams not attend his successor’s inauguration on March 4, 1801, was further complicated by the reality that vice presidential nominee Aaron Burr had tied Jefferson in electoral votes, opening up the possibility of Burr being elected by the House of Representatives. This necessitated a multi-ballot battle in 1801, until Jefferson was selected with the backing of the losing Federalists. Alexander Hamilton lobbied for the election of his ideological rival Jefferson over Burr, whom Hamilton considered a dangerous man with no ethics or principles. This would, of course, ultimately lead to Burr killing Hamilton in a duel in 1804, marking Burr as one of the prime villains in early American history.

John Quincy Adams was elected the 6th president over Andrew Jackson when the 1824 election was decided by the House of Representatives early in 1825, the second and last time that the House was saddled with the need to choose the winner. This led to accusations by Jackson of a stolen election.  Jackson had ended up first in popular and electoral votes, in the first test of popular vote strength in a presidential election, but in a four person race, the election went to the House of Representatives since Jackson had not won the majority of the electoral vote.  It led to a four year campaign by Jackson accusing John Quincy Adams and Henry Clay of a “corrupt bargain” when Clay backed Adams and then was given the highly prized position of Secretary of State.  

So when 1828 came around, the campaign between President John Quincy Adams and Jackson was especially bitter and nasty, including personal attacks on Jackson’s wife as a bigamist, which arguably led to her death during the transition period after Jackson won the election handily. Adams left Washington without attending the inauguration of his successor, and pledged to come back to fight Jackson, whom he considered a dangerous man. To the horror of Adams and many other Jackson critics, the new president’s supporters were encouraged to celebrate his inauguration on March 4, 1829, and they proceeded to engage in a drunken brawl, breaking windows and china, and damaging furniture in the White House. Within two years, Adams indeed did come back as a Congressman from Boston, and fought Jackson on many issues. He remains the only former president in American history to have been elected to Congress by popular vote.

In 1860, James Buchanan, totally repudiated and not choosing to run again, had to deal with the danger of the oncoming Civil War, as Southern states began to secede from the Union. He refused to take any federal action against Southern states, which were seizing US military forts. The president-elect, Abraham Lincoln, was unable to convince Buchanan to uphold federal law and the Constitution, a reality that would condemn Buchanan forever in American history. There were regular and constant death threats against Lincoln during the transition, most notably the “Baltimore Plot,” which was seen as a real danger and forced Lincoln to travel through the city in the dark of night without notice, on his way to Washington. The stress level and tumult were very high. Within six weeks of the inauguration, with Lincoln determined to protect US military forts but not start a war, South Carolina chose to attack Fort Sumter in Charleston Harbor on the morning of April 12, 1861, leading to the undeclared Civil War.

After Ulysses S. Grant was elected president in 1868, months after the failed impeachment trial of President Andrew Johnson, the outgoing president was hostile toward Grant, who had backed away from supporting him in the impeachment crisis. Johnson refused to go to the inauguration of his successor, staying in the White House until the ceremony was completed.

In 1876, in the closest electoral vote election in American history, a controversy over who had won the electoral votes of three southern states (South Carolina, Louisiana, and Florida) dragged on for nearly four months until two days before the inauguration, with fear of a renewed civil war. But a behind the scenes deal known as the Compromise of 1877 arranged for popular vote loser Rutherford B. Hayes to win the precise majority of electoral votes (185-184) needed to be inaugurated over popular vote winner Samuel Tilden.  There had been some consideration of allowing President Grant to stay in office if the crisis had not been settled by Inauguration Day.

The Compromise of 1877 undermined the reputation of Congress and the Supreme Court, since members of both houses and of the Court sat on the Electoral Commission that struck the political deal. Its long-range implication was the Republican Party abandoning African Americans to the Democratic Party and its southern adherents, creating “Jim Crow” for nearly a century, until the modern civil rights movement of the 1950s and 1960s finally led to laws against segregation.

When Herbert Hoover lost reelection to his onetime friend, Franklin D. Roosevelt in 1932, at the worst moments of the Great Depression, the two men could not agree on actions to be taken during the four months until the inauguration in March 1933. And FDR came close to being assassinated in Miami, Florida on February 15, 1933. The mayor of Chicago, Anton Cermak, sitting next to him, was murdered by the perpetrator, Giuseppe Zangara.  When Inauguration Day arrived two and a half weeks later, Herbert Hoover sat glumly in the automobile taking him and the President elect to the Inaugural stand, and refused to talk with FDR.  The bitterness was lasting. Hoover denounced the New Deal regularly, and the two men never had any contact again.

President Harry Truman did not think highly of his successor Dwight D. Eisenhower in 1952, and the transition into January 1953 was not particularly warm.  And yet, they had once collaborated in the early years after World War II. Truman had thought of Eisenhower as a possible successor in 1948, when Truman suggested that he would step down again to the Vice Presidency with Eisenhower leading the ticket, but Eisenhower rejected the offer.  Truman became a major critic of Eisenhower during his Presidency, and only at the funeral of John F. Kennedy in 1963 did the two men reconcile.

Gerald Ford did not think positively about his successor Jimmy Carter after the hard-fought battle between them in 1976, and Ford, while cordial in the transition period, was a sharp critic of Carter during his Presidency.  But then, the two men and their wives became fast friends, and they agreed that when one passed away first, the survivor would give the eulogy at his funeral. Carter did precisely that at Ford’s funeral in December 2006.

The same scenario existed between George H. W. Bush and Bill Clinton after Clinton defeated Bush in 1992. Bush held hard feelings and offered strong criticism.  But after Clinton left the presidency, he and Bush became good friends, Bush referred to Clinton as the son “from another mother,” and they collaborated on Hurricane Katrina relief in 2005.

Clinton and his wife Hillary Rodham Clinton were strong critics of George W. Bush during the 2000 presidential campaign, when Clinton’s Vice President, Al Gore, won the popular vote over Bush, but uncertainty over Florida’s victor led to a 36-day standoff with legal action by both political parties. When the Supreme Court intervened, however, Al Gore was statesmanlike. Notably, in his required role as outgoing Vice President, he had to open up 51 envelopes from the states and the District of Columbia during a January joint session of Congress and count the electoral vote. Gore announced his own defeat by 271-266, despite his popular vote lead of 540,000 votes.  

During that transition period, however, a major shouting match occurred between Clinton and Gore in the Oval Office. The issues were Gore’s decision not to utilize Clinton in the campaign, due to the impeachment trial of Clinton over his sex scandal, and Gore’s choice of Clinton critic Senator Joe Lieberman as his running mate. Clinton and Gore were never again as close and engaged as they had been during their two terms in the White House, but the Clintons over time became friendly with the entire Bush family, despite the political battles.

The George W. Bush-Barack Obama transition was far less controversial, due to the developing crisis of the Great Recession, and the Obamas would become friends of the Bush Family. The two Bush Presidents avoided open criticism of Obama, although the Republican Party certainly had no lack of confrontation and challenge to the 44th President during the eight years of his presidency.

Most recently, Obama tried to cooperate with Donald Trump, who had unleashed constant attacks on Obama during 2015 and 2016. But except for one meeting a couple of days after the election in 2016, the Trump transition team showed little interest in cooperating with Obama, and Trump has continued to condemn everything Obama represents, working to destroy the Obama legacy in a vicious, uncaring, and totally undiplomatic manner.

It is now clear that the gloves are off, symbolically, and Donald Trump will have no limits on tactics to attempt to ensure a second term, and will be vicious in every way possible toward former Vice President Joe Biden, linking him to Obama constantly. There is no desire to accommodate or avoid total confrontation in the transition period, so one can expect a very tumultuous, stressful 78 days from November 3, 2020, to January 20, 2021. We must be prepared for a greater potential constitutional crisis than we have ever witnessed in all of American history.

Federal Agents, “Insurrection,” and the Long, Bloody History of U.S. Counterinsurgency

 

 

 

Testifying before Congress, Attorney General Bill Barr characterized Movement for Black Lives protests as a threatening “insurrection.” How have youth-led movements, accompanied by organized columns of mothers, veterans, and teachers, become public enemy number one? The terrifying spectacle of federal agents apprehending civilians who are exercising their constitutionally protected rights of free speech and assembly is the logical outcome of a long history of U.S. counterinsurgency policies, abroad and at home.

 

The appearance on U.S. streets of personnel from the Department of Homeland Security – an agency created in the wake of the 9/11 attacks to “safeguard the American people” from terrorism – is tantamount to a declaration of counterinsurgency warfare on the Black Lives Matter movement and its numerous supporters, including those deemed “violent anarchists,” “socialists,” and/or “antifa.” Charged with maintaining “safe borders,” the DHS has long extended its reach far from the geographic border, terrorizing the foreign-born. With the recent deployment, the DHS continues its assault on all civilians, particularly people of color and those suspected of subversive beliefs.

 

Operating in the name of law and order, counterinsurgency policies function by creating and then targeting particular enemies of the state. Such “enemies” are deemed to be dangerous because of their beliefs, their identities, and/or the “violent” practices invariably ascribed to them. Political demonstrations that result in the destruction of property are assailed as unlawful riots, thereby justifying the use of force against primarily peaceful protesters. Subsequently, counterinsurgency practices inaugurate their own murderous regimes of pacification.  

 

Declarations of counterinsurgency warfare on a targeted “public enemy” reverberate with a long history in North America. In the 18th and 19th centuries, indigenous nations defended their lands against incursions by Euroamerican settlers backed by a federal government seeking control of resources in the trans-Mississippi west. Lakota historian Nick Estes describes counterinsurgency as:

 

“asymmetric warfare that includes collective punishment; the taking of children; the forcing of communities to choose between their lives or surrendering their kin or ceding their lands; the use of native scouts and auxiliaries that are in the service of colonial governments; the use of reserves as spaces of containment; the imprisonment, assassination, defamation, or removal of leadership; the targeting of socio-economic institutions as the basis for autonomy; and the need to “civilize” in order to pacify or “the winning of hearts and minds.”

 

As federal engagement in active wars with Indian nations tapered off during the late 19th century, U.S. counterinsurgency policies migrated abroad with the flag of empire: to the Philippines, the Caribbean, and Central and South America. Over the course of the twentieth century, the United States intervened forty-one times to create regime change in Latin America alone. Such interventions involved counterinsurgency operations: coups, civil wars, declarations of emergency powers, and the resulting repression, displacement, and murder. As sociologist Stuart Schrader explains, counterinsurgency techniques pioneered abroad returned to the streets of U.S. cities in the form of technology and policing practices.

 

Counterinsurgency warfare destroys communities, forcing thousands of people from their homes.  But refugees of the murderous wars waged under the flag of counterinsurgency policy rarely find safe harbor in the United States. 

 

In 2018 and 2019, caravans of refugees, migrants, and asylum seekers from the Americas, Africa, and Europe traversed thousands of miles on foot to approach the U.S.-Mexico border, only to be reviled as dangerous “criminal aliens.” Many of them fled the terror and corruption of regimes installed by U.S. counterinsurgency policy. Arriving at the border, caravan members confront a “zero tolerance” counterinsurgency policy resulting in the taking of children through unlawful and inhumane family separation policies and a “Migrant Protection Protocol” that violates international refugee policy by forcing asylum-seekers to remain in Mexico. During the global pandemic, people imprisoned in detention facilities or forced into precarious and temporary quarters are at an increased risk of infection and illness.

 

Portrayals of Black Lives Matter protesters as violent “thugs” assailing collective security and “law and order” resound with these characterizations of caravans of displaced persons, many from indigenous nations, at the border. In both cases, groups of people seeking justice are depicted as criminals bent on destroying the social order for individual gain, whether through looting or drug-smuggling. 

 

Both the caravans and the Movement for Black Lives respond to the depredations of precisely the same “law and order” regimes that greet them with repression and derision.  Since the police killing of Michael Brown in Ferguson, Missouri in 2014, the Movement for Black Lives has organized against the violence of militarized policing in Black communities, demanding the defunding and abolition of policing.  Multinational caravans of people seeking safe harbor from the ravages of U.S.-backed dirty wars and austerity regimes in Latin America, the Caribbean, and Africa insist on their right to safe harbor against a regime of militarized borders. 

 

Throughout its bloody history, counterinsurgency policy has undermined democratic regimes and created civil strife, endangering and displacing civilians in the name of “law and order.”  Now, on the streets of U.S. cities, federal agents join militarized police in waging war on Americans who are exercising their lawful rights of freedom of speech and assembly. There is no doubt that the results endanger us all.

The 1976 Election: Why We Can't Predict Vice Presidential Selections in Advance

 

 

There’s a reason why vice presidential picks are impossible to predict: Even the presidential candidate who makes the decision rarely knows the choice in advance, because the final selection usually depends on an eleventh-hour turn of events that no one can fully anticipate.  This was the case in the 1976 election, a rare moment when not two, but three, major-party candidates selected running mates.  And in each of the three cases, the choices were eleventh-hour selections that pundits did not expect.

 

The vice presidential selection process held more importance than usual in 1976, because when the primaries ended in early June, neither the Republican incumbent Gerald Ford nor the Democratic frontrunner Jimmy Carter had enough delegates to secure his party’s nomination.  Though Ford held a slight lead over his challenger, former California governor Ronald Reagan, a grueling neck-and-neck primary race had left neither the conservative insurgent Reagan nor the centrist Ford with enough delegates to claim the nomination outright.  Carter had a much more formidable lead over his primary opponents than Ford did, but a divided Democratic Party – with delegates split between multiple liberal candidates who still refused to concede after the last primary – meant that it was still theoretically possible for the party liberals to deprive the centrist Carter of the nomination if they could agree on a single alternative candidate.  Fortunately for Carter, they did not, but the divisions in their parties meant that for both Carter and Ford, picking a running mate was about more than personal preference or general election considerations; it was also about uniting a fractured party in order to win over some wavering convention delegates and appease disgruntled party activists who had supported another candidate in the primaries.

 

To the surprise of almost everyone, Reagan was the first to announce his choice of running mate.  Traditionally, candidates had waited until they were assured of their party’s nomination to pick a potential vice president, but Reagan’s campaign manager, John Sears, thought that the California conservative’s best chance of winning the nomination was to shake up the process by selecting Ford delegate Richard Schweiker, a moderately liberal Republican senator from Pennsylvania who, Sears hoped, would bring the rest of the Pennsylvania Republican delegation into the Reagan camp and maybe peel off a few Ford delegates from the New York delegation as well – which would be enough to give the nomination to Reagan. 

 

The gambit failed badly.  The other Pennsylvania delegates refused to back Reagan.  And Reagan lost support from conservative southern delegates who felt betrayed that their candidate had picked a northern moderate liberal in apparent opposition to his conservative principles.  

 

For the Ford campaign, Reagan’s selection of Schweiker was a lesson in what not to do.  Ford resolved to spend the next few weeks carefully vetting each potential vice presidential candidate and selecting someone who would unify the party while also remaining fully compatible with the ideology of his own centrist Republican campaign.  Yet ultimately, the selection process ended up becoming far less organized or predictable than Ford anticipated.

 

Ford already had a vice president, of course: Nelson Rockefeller, the former governor of New York who for more than a decade had been the unofficial leader of the liberal wing of the Republican Party.  When Ford had assumed the presidency after Richard Nixon’s resignation in August 1974, he had selected Rockefeller because of his nearly unparalleled executive experience as governor of one of the nation’s largest states.  But to Ford’s dismay, the reaction from the conservative wing of the party was so vociferous that it fueled Reagan’s primary challenge and threatened to divide the GOP.  Faced with the possibility that it might be impossible to win his party’s nomination with Rockefeller on the ticket, Ford reluctantly notified Rockefeller in the fall of 1975 that he would not be the president’s running mate again the next year.  While Ford had never said who exactly would replace him, many both inside and outside of his campaign assumed that it would be someone more conservative.

 

But this was not Ford’s desire.  Even after doing weeks of interviews, public opinion polling, and vetting of potential candidates, Ford arrived at his party convention without having selected a running mate.  All but one of the finalists on his list were either party centrists or moderate liberals – not conservatives in the Reagan mold.  Those closest to Ford believed he was leaning toward one of the most liberal on the list – William Ruckelshaus, a former head of the Environmental Protection Agency (EPA) and deputy attorney general who had earned a reputation for honesty when he resigned rather than carry out Nixon’s orders during the Watergate scandal.  Years later, Ford said that he had actually wanted to select Anne Armstrong, an ambassador whose selection was opposed by some of the president’s advisors, who noted that public opinion polling indicated that placing a woman on the ticket would result in a net loss of votes.  

 

Among delegates, there was strong support for Reagan as a vice presidential candidate, a move that would have created a Republican unity ticket that would presumably help Ford in the conservative South, a region that would otherwise likely go to Carter.  But Reagan insisted that he would never accept the number-two spot, and Ford, who was angry with Reagan for challenging an incumbent president in his own party, was not eager to offer it to him.  Instead, Ford met with Reagan at the convention and asked him which candidate he would accept.  Reagan mentioned Bob Dole, a moderately conservative senator from Kansas who had not been on Ford’s list of finalists.  Ford was not immediately inclined to accept Reagan’s suggestion, but after mulling it over throughout the night and talking with his advisors, he decided that Dole would be the best candidate.  He could mollify party conservatives, solidify Ford’s support in the farm states (a normally Republican region that was threatening to break for Carter), and perhaps even help the campaign make inroads in the South.  On the morning of the final day of the convention, Ford announced that Dole was his choice.  Dole barely had time to write an acceptance speech.

 

Compared to Ford’s eleventh-hour selection of Dole, the Carter campaign’s vice presidential selection process was supposed to be much more orderly.  The campaign determined early on that Carter needed to select a northern senator to balance a ticket headed by a southern governor without Washington experience, and there was a general assumption that Carter’s running mate would probably be more liberal than he was (since nearly all northern Democratic senators were to the left of the fiscally conservative, socially moderate Carter).  Some of Carter’s advisors encouraged him to pick a Catholic, since he was initially not polling well among northern Catholics.  Carter ignored some of this advice.  He allowed his campaign aides to draft a list of potential candidates and oversee the vetting process, but he insisted that his highest priority would be to find someone who could be a potential governing partner – which meant, in his view, that the person had to be temperamentally compatible and, above all, share his values.  

 

Carter did agree to interview one senator who seemed to fulfill all of the criteria set by his advisors: Senator Edmund Muskie of Maine, a Catholic with strong working-class roots who had been Hubert Humphrey’s running mate in 1968.  But the interview with Muskie did not go well.  Carter was disturbed by Muskie’s temper, and he crossed the Maine senator off his list.

 

At the start of the Democratic convention, some journalists assumed that the frontrunner for the number-two position was John Glenn of Ohio, a former astronaut who was serving his first term as senator.  But Glenn’s bland keynote address at the convention sank his prospects.  By contrast, Representative Barbara Jordan of Texas gave such a rousing address highlighting issues of race and social justice that some African American delegates insisted that she be selected for the vice presidency – a move that would have made her not only the first woman, but also the first Black American, selected for a major-party ticket.  But Carter, in the end, settled for someone much more traditional.

 

Carter’s advisors had not initially considered Senator Walter Mondale of Minnesota the best candidate for the position.  After tentatively exploring a presidential bid a year earlier, Mondale had ended his campaign almost as soon as it began, saying that he did not want to spend the next year “sleeping in Holiday Inns.”  Some in the party questioned his stamina.  And within the Carter campaign, pollsters noted that Mondale would likely be a net negative for the ticket by costing the campaign votes among moderates who distrusted his liberalism. 

 

But Carter, despite his ideological differences with Mondale, appreciated the Minnesota senator’s impeccable reputation for honesty.  As a Baptist deacon and Sunday school teacher with a strong faith, Carter also felt comfortable with Mondale’s background as a Methodist minister’s son.  His interview with Mondale sold him on the candidate, and he selected him in spite of the warnings from his pollsters – just in time for the end of the convention.

 

How well did these eleventh-hour vice presidential choices turn out?  Both Mondale and Dole proved themselves to be loyal, hard-working campaigners who avoided scandal and consistently championed the ideals of their party.  Mondale’s lack of ideological compatibility with Carter’s conservative-leaning centrism resulted in tensions once Carter was elected president, but this was not evident on the campaign trail.  Dole’s acerbic wit, meanwhile, alienated some voters and made some Republicans regret his selection.  In the end, Dole’s place on the ticket did not win over enough southern conservatives to allow Ford to make significant inroads in the South, but he did help keep the farm states in the Ford column.  

 

Mondale had much less pull in the northern states than Carter’s campaign had hoped.  If Muskie had been on the ticket, Carter would likely not have lost Maine and the rest of northern New England (as he did with Mondale), and if Glenn had been selected, Ohio might not have been such a nail-biter for the Carter campaign.  (Carter carried Ohio in the end, but only by the slimmest of margins.)  But if Mondale’s presence on the ticket did not give Carter as much of a boost in the North as he had anticipated, neither did it hurt him in the South as much as some of his campaign aides had feared; Carter carried every southern state except for Virginia, even with Mondale as his running mate.  And, despite not being selected, both Muskie and Glenn continued to play important roles in national politics – Muskie as Carter’s Secretary of State and Glenn as a senator and, eventually, presidential candidate.

 

So, perhaps, in the end, the 1976 election shows that even if eleventh-hour vice presidential selections hinge on factors that are impossible to predict in advance, the types of candidates who emerge from these selections are usually reliable – assuming that a well-conducted vetting process occurs before the final selection.  The press may not have been able to predict that Dole would be Ford’s running mate or that Mondale would be Carter’s, but both candidates generally fit the profile for the type of candidate that a vetting process might have produced.  And although no one can know in advance the name of Joe Biden’s running mate, whoever he does select for this position is likely to look in retrospect like an obvious choice, regardless of who she may be. 

Sun, 09 Aug 2020 16:16:31 +0000 https://historynewsnetwork.org/article/176761 https://historynewsnetwork.org/article/176761 0
The Mississippi Flag and the Shadow of Lynching

 

 

The Mississippi flag, which has now seen its inglorious end, first flew its Confederate design in 1894, a busy time in the south for mythmaking about white supremacy.  

 

The year 1894 was also near the peak of one of the most evil and grotesque practices in American history: lynching. During a 14-year stretch from 1886 to 1900, more than 2,500 people, mostly Black men, were tortured and killed, mainly by southern white individuals and mobs that faced almost no consequences. The tortures were horrific; mobs burned some men alive, others castrated them.  One woman in Texas was “boxed up in a barrel with nails driven through the sides and rolled down a hill until she was dead,” wrote Ida B. Wells, the leading anti-lynching crusader of the time. 

 

As lynching rose, the white south was busy creating monuments and flags that asserted white supremacy and white innocence.  The myth of Black culpability in the crimes against them was so pervasive that nearly everyone, including Frederick Douglass, believed it, according to the autobiography of Ida B. Wells.  If the great Douglass believed the myths, you can rest assured that they were almost universally shared. Even Ida B. Wells believed them, until a mob lynched three Black grocery store owners in 1892 near her home in Memphis. 

 

Thomas Moss, whom Wells knew well—she was his daughter’s godmother—was one of the three Black owners of the People’s Grocery Company.  Competition and tension grew between that store and a white-owned one.  The People’s Grocery workers were increasingly harassed by their white neighbors and scuffles broke out, culminating in a mob of whites surrounding the store and shooting at it.  

 

The three Black men and a few others were arrested. Egged on by the white-owned media, a mob stormed the jail and lynched the three shopkeepers. Moss, understanding that Memphis was no longer safe for his people, made one last request to his killers: “Tell my people to go West,” he said. “There is no justice for them here.” The lynching and these prophetic words led many Black Memphians to leave the city in 1892.

 

The lies told about Black men were repeated over and over across the south and dutifully reprinted in the northern press.  Black men were lynched, the myth went, because they were raping white women. “The crime for which negroes have frequently been lynched and occasionally been put to death with frightful tortures,” wrote The New York Times in an 1894 editorial, “is a crime to which negroes are particularly prone.”

 

Wells, through her investigations, discovered a very different reality that can be summarized by two key findings.  First, rape was not usually the stated cause, and when it was, it was often not charged until after the lynching had occurred. Second, when an actual relationship between a Black man and a white woman existed, it was generally a consensual one. 

 

“Nobody in this section believes the old thread-bare lie that Negro men assault white women,” wrote Wells in an unsigned editorial in her newspaper, The Free Speech and Headlight.  “If Southern white men are not careful they will over-reach themselves and a conclusion will be reached which will be very damaging to the moral reputation of their women.” 

 

“The black wretch who had written that foul lie should be tied to a stake,” wrote The Memphis Commercial Appeal, “a pair of tailor’s shears used on him and he should then be burned at a stake.” A mob destroyed the newspaper’s presses and Wells fled the city.

 

As the Mississippi flag was going up in 1894, Ida B. Wells was working as a one-woman force to change the narrative. She had spent years traveling around the south investigating lynching, and had narrowly escaped death herself. In 1894, she was wrapping up her speaking tours. “I found myself physically and financially bankrupt,” she wrote.   

 

Wells had concluded, backed up by evidence, that Black lawlessness was a myth.  Instead, Wells wrote, the real reason behind lynching was white terrorism due to economic competition, just like in the case of Thomas Moss and his grocery store.  The sacking of Tulsa’s Black Wall Street years later in 1921 can be seen as economics-driven terrorism too. 

 

At this point in the story, it would be satisfying to hear that Ida B. Wells convinced the nation and changed the narrative.  But this would be only half true. The Black press told the story and the truth about lynching spread widely among people of color. 

 

But the white press doubled down on the lies. In 1894, the year that the Mississippi myth-making flag was going up, The New York Times called Wells “a slanderous and nasty-minded mulattress, who does not scruple to represent the victims of black brutes in the South as willing victims.”  Politicians, newspapers, and historians painted the picture of a heroic south needing to rein in lawless Blacks, a racist view echoed by the film Birth of a Nation in 1915, and persisting even in our own time, when Trayvon Martin, Eric Garner, and other victims of violence are demonized.  

 

But in 2020 the Mississippi flag is coming down and so are the myths. Nikole Hannah-Jones won a Pulitzer Prize for her central essay in the 1619 Project, which seeks to change the national narrative on race.  Ida B. Wells herself won a posthumous Pulitzer Prize this year, more than a century overdue, but better late than never.  Lynching helped to raise the odious flag in 1894.  But in 2020, hundreds of thousands of marchers protesting the lynching of George Floyd brought the flag down.  Maybe, just maybe, this will be remembered as an era of change. 

Sun, 09 Aug 2020 16:16:31 +0000 https://historynewsnetwork.org/article/176621 https://historynewsnetwork.org/article/176621 0
Did the Atomic Bomb End the Pacific War? – Part I

Many historians and most lay people still believe the atomic destruction of Hiroshima and Nagasaki ended the Pacific War.

 

They claim with varying intensity that the Japanese regime surrendered unconditionally in response to the nuclear attack; that the bomb saved a million or more American servicemen; that Hiroshima and Nagasaki were chosen chiefly for their value as military targets; and that the use of the weapon was, according to a post-war propaganda campaign aimed at soothing American consciences, ‘our least abhorrent choice’.

 

The trouble is, not one of these claims is true.

 

That such denial of the facts has been allowed to persist for 75 years, that so many people believe this ‘revisionist’ line - revisionist because it was concocted after the war as a post-facto justification for the bomb – demonstrates the power of a government-sponsored rewrite of history over the minds of academics, journalists, citizens and presidents.

 

The uranium bomb dropped on Hiroshima, code-named ‘Little Boy’, exploded over the city center, above the main hospital, wiping out dozens of schools and killing 75,000 people, including tens of thousands of school children.

 

‘Fat Man’, the plutonium bomb used on Nagasaki, incinerated the largest Catholic community in Japan, obliterating the country’s biggest cathedral along with a residential district packed with schools and hospitals. It missed its original target, the city center.

 

Zealous apologists for the bomb will have started picking holes: Hiroshima held troops? Yes, a few enfeebled battalions. Hiroshima had military factories? Most were on the outskirts of town, well clear of the bomb.

 

Nagasaki hosted a torpedo factory and shipyards? Yes. The factory was deep underground and untouched by the weapon; the bomb missed the shipyards, which were not functioning in any case.

 

Of the five intact cities set aside for nuclear destruction by the Target Committee, a secret group of US military and scientific personnel, only Kokura contained a large weapons arsenal. In any event, bad weather diverted the second atomic run from Kokura to Nagasaki.

 

And yet, it mattered little to the Target Committee if the targeted city held civilians or soldiers, arms-makers or sushi restaurants, kimono-clad women or children. The ideal city should, according to the committee’s Minutes, “possess sentimental value to the Japanese so its destruction would ‘adversely affect’ the will of the people to continue the war; … and be mostly intact, to demonstrate the awesome destructive power of an atomic bomb.”

 

Kyoto met those criteria but was grudgingly struck off the list for aesthetic reasons: War Secretary Henry Stimson had visited the beautiful heart of Japanese culture with his wife in 1926, and insisted on preserving it. Tokyo was rubble, so there was no point in “making the rubble bounce,” to appropriate Winston Churchill’s famous remark about the nuclear arms race.

 

In other words, the target should show off the awesome power of the bomb not only to the six leaders who ruled Japan from a bunker under the ruins of the Imperial Palace in Tokyo but also – and just as importantly - to Joseph Stalin, whose massed forces were being deployed to the border of Japanese-occupied Manchuria. Stalin himself was aching to be “in at the kill,” to seize a communist foothold in Asia. The old Bolshevik fancied Hokkaido.

 

In this light, the use of the atomic weapon must be seen as a continuation and a start: the nuclear continuation of the conventional terror bombing of Japanese civilians, and the start of a new “cold war” waged by a superpower equipped with a weapon that would, as James Byrnes said on May 28, 1945, a few weeks before he was appointed US Secretary of State, “make Russia more manageable” in Asia.

 

*

 

Let us revisit the scene of the world back then; let us try, briefly, to unravel the confluence of events that led to the use of the weapons.

 

By the start of 1945, Japan had lost the racial war it had started in the Pacific. Allied – chiefly US – military power had utterly defeated it. In fact, the Japanese had lost the war as early as the Battle of Midway, fought between June 4 and 7, 1942, when US forces destroyed the core of Japan’s carrier fleet, “the most stunning and decisive blow in the history of naval warfare,” as historian John Keegan described it, rendering Japan incapable of mounting another major offensive.

 

By July 1945 Japan possessed about 3,000 fighter planes and 1,500 bombers, but few functioning airfields. They lacked sufficient ammunition for their remaining artillery, machine guns and rifles. They had no effective navy (their most lethal sea weapon being some 2400 “suicide boats”: little high-speed craft used to ram the bellies of enemy ships). Untrained kamikaze pilots still dared to take off in planes made partly of wood, so dire were supplies of steel. Indeed, Japan was desperately short of all commodities, chiefly fuel, food and steel. The people were compelled to hand over any household steel items to be melted down for ammunition. Most civilians were malnourished or slowly starving.

 

No doubt Japan could still draw on a large pool of men: some 65 divisions had returned home earlier in the year, and every one of the 350,000 troops (about 900,000, if you include support units, teenage soldiers and troops with little training) assigned to the defense of Kyushu was determined to honor the fierce exhortation of the Bushido military code: “To die!”

 

Yet Japanese “spirit” on the ground meant little without any effective air defense: the Pacific War was won in the air, and by mid-1945, American aircraft carriers and warships ringed the Japanese archipelago in an impenetrable blockade, and US aircraft were in complete control of the skies over Japan.

 

By then, 67 major Japanese cities (including Tokyo) lay in ruins, the result of General Curtis LeMay’s terror firebombing campaign, in which millions of incendiary (proto-napalm) canisters created huge firestorms that tore through Japan’s paper-and-wood homes like a bushfire in hell – killing at least 100,000 civilians in Tokyo in a single night, on March 9-10, making it the deadliest bombing raid in history.

 

LeMay’s goal was the same as Allied terror-bombing of Germany: to break civilian morale. It failed: the Japanese and German people hardened in response to terror bombing - as had the British, of course, during the Blitz, offering empirical evidence of civilian mental toughness the Allies failed to heed.

 

Yet, if the destruction of most of Japan’s cities was not enough to make them surrender, throughout July 1945 Admiral William Halsey’s Third Fleet was busy finishing off what LeMay’s napalm sorties had failed to destroy or even target: Japan’s remaining infrastructure, such as airfields, 12 giant coal transports, and the naval base at Kure.

 

Something else sustained the Japanese that defies easy explanation to westerners: the ‘divine’ presence in their midst, in the form of Emperor Hirohito, the ‘Sacred Crane’ who, the people believed, was descended from the Sun Goddess Amaterasu and fortified their extraordinary psychological resilience. The western, Christian equivalent would be the return of the Messiah during total war.

 

*

 

Nowhere was the deference to Hirohito so palpable, so forceful, so weighed down by the dreadful burden of history, as in the concrete bunker under the ashes of Tokyo, where Japan’s War Council of six old Samurai rulers refused to utter three magic words: “We surrender - unconditionally.” Somehow those words had to be extracted from their mouths like an especially stubborn tooth.

 

Three hardliners – War Minister Korechika Anami, Army Chief of Staff General Yoshijiro Umezu and Navy Chief of Staff Suemu Toyoda – dominated the Six, and pressed every Japanese to fight to the death - commit, in effect, national “seppuku,” or ritual suicide - to defend the Emperor and the homeland.

 

Three moderates – Prime Minister Kantaro Suzuki, Foreign Minister Shigenori Togo, and Navy Minister Admiral Mitsumasa Yonai - wavered and vacillated, by turns secretly pursuing peace and openly supporting war. 

 

Through 1945, as their country crumbled before their eyes, the Big Six continued to press for a conditional peace – unacceptable to the Allies - that would at least deliver Japan’s chief condition: the preservation of the life of Hirohito and the Imperial dynasty.

 

For the Japanese regime, Hirohito’s life was non-negotiable, a condition of surrender that Japan experts in Washington – notably Joseph Grew, US ambassador to Japan from 1932 to 1941 – well understood and warned the Truman administration about. To this, Tokyo would stick to the bitter end: no Japanese leader could, or would, bear responsibility for serving up the Emperor to the Americans to be tried and hanged as a war criminal.

 

In Hirohito’s name, then, the Japanese regime would refuse to surrender, and nothing, not even the annihilation of the Japanese people, would deflect these grim old men from saving their divine monarch, a minimum condition for peace.

 

That day of reckoning was fast approaching. The Japanese regime was expecting and preparing for a US land invasion. The hardliners, Anami, Umezu and Toyoda, welcomed this prospect: every Japanese must prepare to martyr themselves in defense of the homeland. There was method in this madness: from the depths of their delusion the Japanese hawks believed high American casualties would compel the US to sue for a negotiated, conditional peace.

 

*

 

Meanwhile, in Washington, President Harry Truman was determined to avoid a land invasion, despite the advanced planning for “Operation Downfall,” the two-pronged attack on the Japanese homeland, at Kyushu and Tokyo Bay.

 

The appalling casualties of Okinawa (April 1-June 22), the bloodiest battle in the Pacific, in which an estimated 12,520 Americans were killed in action and up to 55,000 wounded, preyed on Truman’s mind.

 

With such terrible premonitions, Truman called a critical meeting of the Joint Chiefs of Staff on June 18, 1945, to discuss the invasion plan – a month before the atomic bomb was scheduled to be tested in the New Mexico desert.

 

The Joint Chiefs were asked to estimate likely American losses – dead, missing and wounded – in a land invasion. General George Marshall calculated that during the first 30 days, casualties “should not exceed the price we have paid for Luzon,” where 31,000 were killed, wounded or missing (compared with 42,000 American casualties within a month of the Normandy landings).

 

Several caveats qualified this low “body count”: the invasion of Kyushu and Tokyo Bay would take even longer than the allocated 90 days, and the figures did not include naval losses, which had been extremely heavy at Okinawa. Nor did the meeting reckon on the unknown menace of Japanese civilians, all of whom were expected to fight to the death armed with bamboo spears and knives or whatever weapons they could find.

 

The Joint Chiefs agreed on the politically palatable figure of 31,000 battle casualties in the first month, implying about 10,000 killed in action. Other estimates placed the figure far higher: Admiral Chester Nimitz reckoned on 49,000 dead and wounded in the first 30 days; Admiral William Leahy predicted a 35% casualty rate, implying 268,000 killed and wounded in total. Major General Charles Willoughby, General Douglas MacArthur’s Intelligence Chief, and no stranger to hyperbole, warned of between 210,000 and 280,000 battle casualties in the first push into Kyushu. At the extreme end, some feared half a million dead and wounded.

 

That the estimated casualties of a land invasion ranged from tens of thousands to half a million should have sounded alarm bells: nobody really knew. In any case, Marshall insisted “it was wrong to give any estimate in number” (after the war he privately offered Truman “as much as a million” as the likely casualty number).

 

To put these figures in context: the American combat force slated to invade Japan numbered 766,700. So it was an obvious fiction – or, if the Joint Chiefs actually believed it, a dismal reflection of their faith in the quality of the American soldier – to claim after the war that Japan’s ailing divisions (only half of which were sufficiently supplied with ammunition) and “home guard” – mostly civilians carrying knives and bamboo spears – would have wiped out the entire US invasion force.

 

In short, in June 1945 nobody seriously believed casualties of an invasion would be a million or several million. So the claims promoted after the war and ad nauseam to this day that the atomic bomb avoided a land invasion and “saved up to a million American troops” were grotesque fictions, used as post-facto justifications for the weapon in the face of mounting ethical objections to its use.

 

The crucial question, however, is what impact these shocking figures had on Truman’s mind. Winding up the 18 June meeting, the president asked the Joint Chiefs: so the invasion would be “another Okinawa closer to Japan?” They nodded. And the Kyushu landing – was it “the best solution under the circumstances?” the President wondered. “It was,” the Chiefs replied.

 

Truman was unpersuaded, and after deep consultation, energized by the prospect of Russia joining the Pacific war, he decided in early July to shelve – i.e. postpone, if not actually cancel – the invasion plan, two weeks before the atomic “gadget” was due to be tested in New Mexico.

 

Why risk thousands of American lives attacking a defeated nation? Why grant the old Samurai their dying wish, to martyr themselves and their people? Why not involve the Russians or use the US blockade to force Japan to surrender? Those questions fairly reflected Truman’s thinking at the time, and reflect the fact that he was determined to avoid a land invasion.

 

In this light, it was never a question for Truman of either the bomb or an invasion: the bomb hadn’t been tested. It was a question of: why invade Japan at all?

 

*

 

Fast forward to “Trinity,” the atomic bomb test conducted on July 16th in the Jornada del Muerto desert, 35 miles south of Socorro, New Mexico. Its success fulfilled the wildest dreams of the Manhattan Project, the secret organization charged with building the weapon.

 

The first man-made nuclear explosion detonated at 5:29 that morning. Radiation waves fled the bomb casing at the speed of light. Billions of neutrons liberated billions more in conditions that “briefly resembled the state of the universe moments after its first primordial explosion,” wrote one scientist. A bell-shaped fireball rose from the earth, whose “warm brilliant yellow light” enveloped physicist Ernest Lawrence as he stepped from his car. It was “as brilliant as the sun ... boiling and swirling into the heavens” – about a kilometer and a half in diameter at its base, turning from orange to purple as it gained height.

 

The nuclear dawn was visible in Santa Fe, 400 kilometers away. A partially blind woman later claimed to have seen the light. The blast was variously compared to “Doomsday” and “a vision from the Book of Revelation,” inspiring the scientific leader of the Manhattan Project, Robert Oppenheimer, to summon a line from the mystical Hindu text, the Bhagavad Gita – “Now I am become Death, the destroyer of worlds” – after which he strutted around like a cowboy who had just acquired the fastest weapon in the west.

 

The successful test certainly gave Truman the biggest weapon in the West - and a great boost to his confidence before the coming Potsdam conference, convened in late July to carve up the post-war world and to send an ultimatum to Japan to surrender.

 

The resulting Potsdam Declaration (or Proclamation), signed on July 26th by the United States, Britain and China, ordered Japan to surrender unconditionally or face “prompt and utter destruction.”

 

The nature of that destruction, by an atomic weapon, was not revealed to the Japanese or, ominously, the Russians. Stalin knew of the weapon’s development through his spies in Los Alamos, and drew his own conclusions.

 

But something else set Stalin’s rage boiling: Russia had not been invited to sign the Potsdam ultimatum to Japan. He had been pointedly ignored.

 

*

 

On 27th July, Japan’s Big Six read the Potsdam ultimatum. The three “moderates,” Suzuki, Togo and Yonai, noted with relief that the Soviet Union was not a signatory.

 

Why had Russia been excluded? Russia was then a US ally and Stalin “a disgusting murderer temporarily on our side,” as George Orwell had described the Soviet dictator. Why not use Russia’s name to help end the war, as Truman had earlier that month intended?

 

For one thing, Truman was now armed with a nuclear weapon, and the president understandably felt Russia’s help might no longer be needed to force Japan to surrender.

 

For another, James Byrnes, the US secretary of state and master political manipulator, had persuaded Truman to strike Russia’s name from the ultimatum. Byrnes himself had put a line through the Soviet Union on one draft, signed the amendment “JB” and added the word: DESTROY. The clerk responsible failed to heed Byrnes’ wishes: I found a copy of this remarkable document in a box in the Truman Presidential Library in 2009.

 

By persuading Truman to remove Russia as a joint-signatory on the ultimatum, Byrnes effectively prolonged the war because, at a single stroke – surely the deadliest pen stroke in history – he deleted one of the greatest incentives for Japan’s surrender (avoiding a communist invasion) and reassured the Japanese leaders that Stalin remained neutral.

 

Byrnes thus handed Tokyo’s hardliners a powerful justification to continue the war effort. The US Secretary of State’s motives were threefold: to buy time for the bomb to complete its journey across the Pacific; to deny Stalin a claim on the spoils of victory; and to give America a crack at using the bomb and emerging as sole Pacific victor. In short, nuclear power was now guiding US strategy, not combat troops on the ground or Russia’s support.

 

In the event, Byrnes’ delaying tactics worked: the Big Six dared to hope that Russia remained neutral – as agreed under the Soviet-Japanese Neutrality Pact. And so Tokyo’s fantasies were allowed to persist: they would continue to press Moscow to mediate a conditional peace with America, which Stalin had no intention of offering, even as he accelerated the mass deployment of his forces to the border with Japanese-occupied territory.

 

*

 

There was an olive branch in the Potsdam ultimatum, which the moderates seized on. One clause appeared to offer the Japanese people, of their “freely expressed will,” the chance to choose their post-war government. That implied the retention of the Imperial system, or at least the emperor as figurehead. 

 

Yet it was wide open to interpretation, and the three hardliners (Anami, Toyoda and Umezu) drew the darkest interpretation of another clause, which insisted that “the authority and influence of those who have deceived and misled the people of Japan into embarking on world conquest must be eliminated for all time.”

 

In their eyes, this meant the Emperor, and his probable execution as a war criminal – tantamount to the destruction of the soul of Nippon.

 

The Potsdam ultimatum must therefore be firmly rejected, they concluded. To surrender the national godhead would condemn them forever as the most reviled figures in Japanese history. The hawks prevailed: none of the Big Six were willing to sign a paper they interpreted as the Emperor’s death warrant.

 

And so, on 28 July Prime Minister Suzuki was persuaded to officially “mokusatsu,” or “kill [the Potsdam ultimatum] with silence” – a Japanese negotiating tactic of treating an offensive proposal with silent contempt.

 

Prime Minister Suzuki obliged at once: “The government does not think that [the Potsdam statement] has serious value,” he told the Japanese press. “We will do our utmost to fight the war to the bitter end.”

 

Like monks cloistered with their myths, the Big Six resolved to fight on, locked in the fantasy of Soviet-sponsored peace negotiations from which Japan would emerge with “honor” intact, oblivious to the fact that, in the eyes of the world, the Japanese regime had nothing left to negotiate - and much less honor.

 

*

 

On the morning of 6th August, as a bomber called the Enola Gay flew towards Hiroshima with an atom bomb in its belly, the Japanese leaders were still waiting, and hoping, for a Soviet reply to their peace feelers.

 

 

Part II of this essay will appear next week on HNN. 

Sun, 09 Aug 2020 16:16:31 +0000 https://historynewsnetwork.org/article/176631 https://historynewsnetwork.org/article/176631 0
Learning from Lincoln: Meeting Crisis with Action

 

 

In the spring of 1861, the United States was on the verge of becoming a failed state. All that was needed was Lincoln’s recognition of the existence of the Confederacy. In writing a history of how a strategically placed minority of slaveholders maneuvered eleven states out of the Union and created the short-lived Confederate States of America, I gave little thought to the possibility that my America might soon find itself on the brink of a comparable crisis of governance and national purpose. 

Yet, here we are. The nation is staggering from the shattering effects of the Coronavirus pandemic and its response is among the worst in the world. As mass protests on behalf of racial justice rock the nation’s cities, the president and federal law enforcement agencies respond with a show of brutal force that frightfully resembles the strong-arm tactics of Nazi Brownshirts in the 1930s. As unemployment soars to depression-era levels, the toxic partisanship in Congress stymies any consistent policy of national aid and relief. In retrospect, the combination of malevolence and paralysis that has characterized our national leadership should come as no surprise. Beginning in the late 1970s and accelerating after 9/11, the means of effective governance have been systematically undermined by tax cuts that further enrich the few at the expense of the many, deregulatory policies that give free rein to corporations regardless of the consequences to the environment or the public good, and the privatization of healthcare, prisons, education, infrastructure, pensions, low-income housing, and social services. Only a free market economy directed by individuals pursuing their own self-interest unfettered by government interference can be trusted, we’ve been told, with producing the greatest good for the greatest number. The result has been an economy hobbled for decades by deindustrialization, dead-end jobs in the midst of endemic underemployment, rising levels of linked racial and economic inequality, and smoldering alienation and resentment by those who once knew or dared to hope for better.

Lincoln met the crisis of secession by demonstrating that the United States indeed had a government whose claims to national sovereignty received broad public support. Working hand in hand with Congress and with support from pro-war Democrats, Lincoln oversaw a restructuring of the federal government to meet the demands of the war. In a burst of path-breaking legislation unmatched until the New Deal, Congress created the first national currency, fashioned a new banking system around nationally chartered banks, imposed the first national taxes on individual incomes and businesses, provided for land grant universities to advance agricultural education, and made good on homestead legislation. Federal expenditures exceeded all the costs of running the government since its inception in 1789 down to the outbreak of the Civil War. Business interests certainly received their share of federal subsidies and then some, but the reform program of the Republicans was broad and generous enough to garner the support of the party’s core constituencies. More significantly, the Republicans’ campaign for emancipation, however belated, set the stage for a commitment to racial equality written into the Constitution that would have been unthinkable before the Civil War.

As Lincoln recognized, the Union’s cause was the cause of Western liberalism. Carried forward by a new middle class and workers seeking basic democratic rights of suffrage and political participation, liberalizing currents swept Europe into the revolutions of 1848 against the hierarchical old order of the landed aristocracy. Though suppressed, the revolutionaries held fast to their liberal agenda. This was the international context of rising liberalism the Confederacy sought to reverse with its bid for a reactionary slaveholding republic ruled by a landed elite. Its defeat in the Civil War was a setback for conservative regimes in Europe intent on further repression at home and the imposition of new imperial regimes in the Americas, such as Emperor Napoleon III’s designs on Mexico and Spain’s on Santo Domingo.

The challenge facing democratic governments today is even more daunting than that confronted by Lincoln. To date, tightly regulated, centralized regimes in the East led by China, Singapore, and Vietnam have been far more successful in dealing with the Coronavirus crisis and in providing social programs insulating their populations from the worst effects of the crisis. More open nations in the East like Japan, Taiwan, and South Korea have also coped relatively well with the assistance of mandatory controls on individual behavior and programs of public education and health services.  Lagging far behind are the flagship nations of Western liberalism and individualism, the United States and the United Kingdom.  In the increasingly contentious battle for global leadership between China and the United States, China’s model, for all its repressive features, is pulling ahead as a governing system for other nations to follow. 

At risk is the very survival of democratic governance and free market capitalism as the exemplar of how progress and human rights are to be achieved, a legacy already weakened by the rise of authoritarian populism. Much more is needed than pouring money into the struggle to maintain or regain military, technological, and economic superiority and to throw a lifeline to the most disadvantaged. Such measures will be at best stop gap palliatives without a fundamental redefinition of just what constitutes national security and a reckoning with the poisonous legacies of slavery, systemic racism, and grotesque levels of wealth inequality. The United States is at a crossroads. One path leads to closed borders, bludgeoning protesters, and repeating the same policies of the past that have worsened the problems of today. The other leads to major cultural and economic shifts brought about by a fundamental re-examination of who we are as a people and what legitimate demands for social and economic justice we have a right as citizens to make on our government. The path chosen will determine whether contemporary America resumes its role as a beacon of hope and progress to the rest of the world or joins the Confederate slaveholders of the past among history’s losers. 

Sun, 09 Aug 2020 16:16:31 +0000 https://historynewsnetwork.org/article/176681 https://historynewsnetwork.org/article/176681 0
Free Speech and Civic Virtue between "Fake News" and "Wokeness"

Alexander Meiklejohn, Time, 1928

 

 

Harper’s Magazine recently published an open letter defending public inquiry and debate against “severe retribution [for] perceived transgressions of speech and thought.” The letter refers to “an intolerant climate that has set in on all sides,” and while it mentions Donald Trump and authoritarian regimes its real target seems to be an increasingly illiberal segment of the American Left.

 

Critics have cried foul. Some reactions have exhibited exactly the tendencies that the letter describes, but others thoughtfully critique power imbalances between the celebrity signers and members of various marginalized groups, or question the motives or consistency of those signers who (in their eyes) have not supported free speech in the past. Some even claim the mantle of liberalism for themselves, arguing that public pressure to suppress unpopular opinions is itself a form of collective speech or association, the very “marketplace of ideas” that many liberals celebrate.

 

Unfortunately, none of these arguments reaches past adversarial notions of democracy. They all characterize free speech as a matter of conflicting rights-claims and competing factions. Indeed, some critics of the Harper’s letter seem eager to reduce all public debate to a form of power politics. Trans activist Julia Serano merely punctuates the tendency when she writes that calls for free speech represent a “misconception that we, as a society, are all in the midst of some grand rational debate, and that marginalized people simply need to properly plea our case for acceptance, and once we do, reason-minded people everywhere will eventually come around. This notion is utterly ludicrous.” As long as political polarization precludes rational consensus, she argues, we are left to “[make] personal choices and pronouncements regarding what we are willing (or unwilling) to tolerate, in an attempt to slightly nudge the world in our preferred direction.” Notably, she makes no mention of how we might discern the validity of those preferences or how we might arbitrate between them in cases of conflict.

 

To paraphrase the philosopher Alexander Meiklejohn, one could say that critics of the Harper’s letter take the “bad man” as their unit of analysis. By their lights, all participants in public debate are prejudiced, particular, and self-interested, “idiots” in the classical sense of the word. This is what allows many of the critics to assert a moral equivalence between free speech and the suppression of ideas. Free speech advocates are hypocritical or ignore some extenuating context, they claim, while those stifling disagreeable or offensive views are merely rectifying past injustices or paying their opponents back in kind, operating practically in a flawed public sphere.

 

It is telling, however, that the letter’s critics focus on speakers and what they deserve to say far more than the listening public and what we deserve to hear. Indeed, their arguments seem to deny the very existence of a public (at least in a unitary sense), and that is their fundamental shortcoming.

 

In Free Speech and Its Relation to Self-Government (1948), Meiklejohn challenges us to approach public discourse from the perspective of the “good man”: that is to say, the virtuous citizen. For Meiklejohn, if political debate is nothing but an exercise of “self-preference and force,” oriented toward no “ends outside ourselves,” we have already lost sight of its purpose (pp. 78-79). One cannot appreciate the freedom of speech, he writes, unless one sees it as an act of collective deliberation, carried out by “a man who, in his political activities, is not merely fighting for what…he can get, but is eagerly and generously serving the common welfare” (p. 77). Free speech is not only about discovering truth, or encouraging ethical individualism, or protecting minority opinions—liberals’ usual lines of defense—it is ultimately about binding our fate to others’ by “sharing” the truth with our fellow citizens (p. 89).

 

Sharing truth requires mutual respect and a jealous defense of intellectual freedom, so that “no idea, no opinion, no doubt, no belief, no counter belief, no relevant information” is withheld from the electorate. For their part, voters must judge these arguments individually, through introspection, virtue, and meditation on the common good. 

 

The “marketplace of ideas” is dangerous because it relieves citizens of exactly these duties. As Meiklejohn writes:

 

As separate thinkers, we have no obligation to test our thinking, to make sure that it is worthy of a citizen who is one of the ‘rulers of the nation.’ That testing is to be done, we believe, not by us, but by ‘the competition of the market.’ Each one of us, therefore, feels free to think as he pleases, to believe whatever will serve his own private interests. We think, not as members of the body politic, of ‘We, the People of the United States,’ but as farmers, as trade-union workers, as employers, as investors.…Our aim is to ‘make a case,’ to win a fight, to make our plea plausible, to keep the pressure on (pp. 86-87).

 

Of course, this is precisely the sort of self-interested posturing that many on the Left resent in their opponents, but which they now propose to embrace as their own, casually accepting the notion that their fellow citizens are incapable of exercising public reason or considering alternative viewpoints with honesty, bravery, humility, and compassion. 

 

As in many points of our history, the United States today has its share of hucksters, conspiracy theorists, and hate-mongers. It would be a mistake, however, to conceive of either our democratic practice or ideals as if these voices were central or unanswerable. In practice, curtailing public speech is likely to worsen polarization and further empower dominant cultural interests. As an ideal (or a lack thereof), it undermines the intelligibility and mutual respect that form the very basis of citizenship.

 

The philosopher Agnes Callard points out that political polarization has induced Americans to abandon “truth-directed methods of persuasion”—such as argumentation and evidence—for a form of non-rational “messaging,” in which “every speech act is classified as friend or foe… and in which very little faith exists as to the rational faculties of those being spoken to.” “In such a context,” she writes, “even the cry for ‘free speech’ invites a nonliteral interpretation, as being nothing but the most efficient way for its advocates to acquire or consolidate power.” Segments of the Right have pushed this sort of political messaging to its cynical extremes—taking Donald Trump’s statements “seriously but not literally” or taking antagonistic positions simply to “own the libs.” Yet none of that justifies the countervailing impulse to cloak messaging in sincerity of conviction.

 

While the language of the Harper’s letter has been criticized as vague, the responses to it could not be clearer. Critics have endorsed an adversarial politics based on the “marketplace of ideas” and selective elements of liberalism. Signers have put forward robust liberal principles that align with republican virtues. When they warn that restriction of debate “makes everyone less capable of democratic participation,” they affirm civic equality and public duties. When they refuse “any false choice between justice and freedom,” they underscore that freedom is hardly “intellectual license” (p. 87) but a shared commitment to realizing our society’s ideals. Rather than assuming the supremacy of our own opinions or aspersing the motives of those with whom we disagree, our duty as Americans is to think with, learn from, and correct each other. If we are to decide what we are willing or unwilling to tolerate, we must do so with individual integrity and collective concern.

Sun, 09 Aug 2020 16:16:31 +0000 https://historynewsnetwork.org/article/176633 https://historynewsnetwork.org/article/176633 0
Better Than Silence: The Need for Memorials to the Manhattan Project

 

 

 

Seventy-five years ago, the United States detonated the first atomic bomb in Alamogordo, New Mexico. Given the enormity of that event, and the ensuing swift conclusion of World War II, it would be reasonable to expect a substantial physical manifestation of the place’s importance. A park, perhaps, or a museum. 

 

Instead the site of the Trinity Test is closed. Yes, it was designated a National Historic Landmark in 1975. But people can only visit on two days a year -- one in spring and one in fall. Visitors may drive to the site to see the nearly barren landscape and the distant hills that, before dawn on July 16, 1945, were illuminated brighter than one hundred suns. A black stone marker stands about twelve feet tall with its dull brass plaque, surrounded by security fences and a vast expanse of sky. That is all. 

 

When the bomb detonated, atop its 100-foot steel tower, it heated the sandy soil below into glass. The tower itself was vaporized. Should you find a piece of the glass, keeping it is against the law. No artifacts, please.

 

If Alamogordo’s minimalized memorial indicates America’s uncertain relationship with the bomb’s history, then the city two hundred and fifty miles away where the weapon was built reveals an even deeper ambivalence. In a time when monuments are under nationwide debate, Los Alamos may set the standard for what not to do. 

Aug. 11 will be the three-year anniversary of the “Unite the Right” rally in Charlottesville, VA. Its purported aim was to prevent removal of a statue of Confederate Gen. Robert E. Lee. The real intent surfaced when the rally turned violent, with many people injured and one woman killed. 

 

Since then the debate over monuments has intensified. Cities as different as Lexington, KY, Baltimore and New Orleans removed statues of Confederate leaders. Protestors weighed in too, for example tearing down the statue of Confederate president Jefferson Davis in Richmond. According to the Southern Poverty Law Center, 114 Confederate statues have come down since Charlottesville. 

 

In other words, America is having a heated argument about how to represent its past. From Christopher Columbus forward, some public historic symbols have become controversial. The bomb is no exception.  

 

Granted, this nation is fully capable of memorializing painful or complex times. A visitor to Pearl Harbor National Memorial can stand at the window, and see the USS Arizona sitting right there on the bottom. The most casual tourist at Gettysburg can easily find a pamphlet detailing how many boys died in Pickett’s Charge – on both sides. Even above the beaches of Normandy, there is no flinching from the brutal reality of what liberating France required. The cost of valor is tallied in tombstones. 

 

Los Alamos is not the same. I went there to do research for my book Universe of Two. All those years after the Trinity test, I had not expected a stone wall. 

The book is a novel, loosely based on the life of Harvard-trained mathematician Charles B. Fisk. He worked on the Manhattan Project, first within the Metallurgy Department at the University of Chicago, and then in Los Alamos. After the war he received a full scholarship from Stanford University to get a PhD in physics. 

 

Apparently the program’s emphasis on developing new bombs diminished his enthusiasm. Fisk dropped out after less than a semester, taking a part-time job with a company that repaired church organs. When he died in 1983, Fisk was considered one of the greatest cathedral organ builders ever. The company he founded continues to make premium instruments for colleges, universities and churches around the world. 

 

I learned none of this from the two paid historians on staff at the Los Alamos National Laboratory. All they would do is confirm that Fisk had worked there, on the detonator team. 

 

When I reached Los Alamos, I found a similar lack of interest in disclosure. Such fundamental tools as maps – of the work sites, of the barracks’ locations – were not available. I tried with the Historical Society, which directed me to the National Parks office, which was closed in the middle of the day – for several days. 

 

I pressed on. A geologist whose job was to remediate radioactive areas loaned me maps he’d used to organize the various extraction sites. It was helpful for understanding the scale of the Project’s operation, but couldn’t tell me where Fisk might have lived. Ultimately I relied on the hand drawn maps in self-published memoirs by the wives of Manhattan Project scientists – hardly an exacting record.

 

To be fair, Los Alamos is home to the Bradbury Science Museum, a clean place with enthusiastic displays. But it lacks even one mote of dust about the consciences of those who built the bomb, who questioned its moral fitness. Hundreds of project scientists repeatedly petitioned the Department of War, the State Department and the president, arguing that the bomb should be demonstrated, not used on civilians. You won’t find that information in the exhibits. If you search the museum’s digital document archive under the word petition, there are zero matches.

 

Los Alamos’ ambivalence about its history is perhaps most physically expressed at Fuller Lodge. In the 1930s, when the area was home to a rough-riding boys’ school, the lodge’s large wooden edifice, with a stone patio facing Ashley Pond, served as the campus center. After the U.S. government acquired the school and surrounding lands, Fuller Lodge continued to be the hub of activity: dances, speeches, concerts, dinners, plays. 

 

But for those who notice subtleties, the history represented there is selective. Fuller Lodge bears a brass plaque declaring it to be on the National Historic Registry. Inside, Native American rugs hang on the walls. Local potters’ work sits in glass cabinets. 

 

But the photos on the walls are revelatory in what they exclude: Here are schoolboys playing basketball. Here’s a graduating class. If there is a picture of Manhattan Project director Robert Oppenheimer, or any of the scientists who labored to build the bomb, it hangs somewhere out of sight. 

 

Is this place special because it was a school? Or because thousands of scientists, engineers and technicians made incredible leaps in knowledge and technology there, harnessing the power of the atom as a tool of war that would change international relations forever? According to the photos, the school’s history wins. 

 

For as long as mankind has warred, new technologies have been put swiftly to use: the trebuchet, gunpowder, the airplane. But each of these tools, in general, was aimed to defeat enemy combatants -- with the intention of sparing innocent civilians. Atomic bombs make no such distinction. They kill everyone within reach, the criminal and the child, the murderer and the monk. Creating the bomb was both a milestone achievement, and a profound expansion of the limits of warfare. 

 

That is not to say that memorials about complex matters cannot exist. The Vietnam Memorial in Washington, DC is a stunning example, simultaneously questioning the war and honoring those who made the ultimate sacrifice. Likewise the 168 empty chairs in the plaza beside the Oklahoma City bombing memorial both symbolize the lives that were lost, and bear mute witness to the crime committed there. The endlessly falling water in the Ground Zero monument in New York City represents the countless tears shed by friends and families and a nation for people whose only error was in going to work on Sept. 11, 2001.

 

What might a proper memorial for the creation of the atomic bomb look like? If Japan can build the Hiroshima Peace Memorial, the artists and architectural geniuses of this nation can find a suitable answer. Both triumph and tragedy deserve a permanent place in the public sphere. Almost anything is better than silence. 

Sun, 09 Aug 2020 16:16:31 +0000 https://historynewsnetwork.org/article/176679 https://historynewsnetwork.org/article/176679 0
Conventional Culture in the Third Reich

Wilhelm Furtwängler conducts the Berlin Philharmonic in a Berlin Factory, March 1942. 

© Deutsches Historisches Museum, Berlin.

 

 

Thinking of culture in the Third Reich conjures up images of mass rituals, swastika flags, and grandiose buildings. Makers of television documentaries and designers of book covers (admittedly including that of my own new synthesis) tend to look for visual material that is instantly recognizable as Nazi. However unconsciously, this reflects the ambition of the Third Reich’s leaders to bolster their rule through a clear cultural profile – an ambition that was only partially fulfilled. No one would doubt that public architecture by Albert Speer or the Nuremberg Party Rallies, enhanced by Speer’s light installations and prominently filmed by Leni Riefenstahl, mattered a great deal. But in other realms, a distinctive cultural profile proved far more elusive. 

 

The careers of extreme-right composers, playwrights, and film directors often stalled owing to their cantankerous personalities, limited popular appeal, or works that were deemed too shocking for a wider public (such as antisemitic dramas featuring rape scenes). Others fell short of Adolf Hitler’s standards, which were as high as they were vague. In January 1936, his faithful propaganda minister Joseph Goebbels noted impatiently: “We don’t have the people, the experts, the Nazi artists. But they must emerge in time.” Behind the scenes, Hitler was unhappy with the heroically proportioned bodies, monumental landscapes, and idealized peasant scenes on display at the 1937 “Great German Art” exhibition in Munich. His opening speech consequently dwelled on nineteenth-century Romanticism and the nefarious influence of Jewish art dealers rather than elaborating on what “true new German art” was supposed to entail.

 

Should the Third Reich’s efforts at transforming German culture thus be regarded as a failure? This would be to distort the picture, for much of what was performed, printed, or exhibited after 1933 was not, and did not aim to be, specifically Nazi. As early as the 1920s, Hitler and his followers had posed equally as bold innovators and as staunch defenders of a tradition that was supposedly under threat from cosmopolitan Jews and left-wing modernists. Such overlaps between extremist and conservative beliefs increased their support among those sections of the German middle class that upheld nineteenth-century cultural tastes. During the Third Reich, this ensured a sense of continuity for a public that appreciated conservative interpretations of Beethoven’s symphonies, Schiller’s plays, and Wagner’s operas. In turn, many a theater actor, orchestra musician, or opera singer benefitted from the generous flow of direct subsidies and the activities of the leisure organization Strength through Joy, which arranged tens of thousands of special performances.

 

Popular culture during the Third Reich had a similarly conventional outlook. Most of the costume dramas and screwball comedies shown in cinemas evinced few, if any, traces of Nazi ideology. Germans consumed them as harmless entertainment, much like they did with low-brow novels and the light music that predominated on the radio channels. While they favored domestic offerings, before World War II they did not have to feel cut off from international developments. Walt Disney’s Mickey Mouse series and Margaret Mitchell’s novel Gone with the Wind were widely popular. The Third Reich’s own movie stars included the Hungarian Marika Rökk and the Swede Zarah Leander alongside domestic idols such as the cheerful comedian Heinz Rühmann and the ruggedly masculine Hans Albers.

 

If so much of culture in the Third Reich was conventional rather than identifiable as Nazi, then where does its political significance lie? Obfuscation is an important part of the answer. When seeing a trivial comedy or listening to a nineteenth-century symphony, few appear to have thought about those cultural practitioners who were defined as Jewish and were consequently eliminated from movie casts and symphonic orchestras. Audiences were well aware that the fringes of culture had changed, in favor of pseudo-Germanic plays and paintings and to the detriment of the ambiguity that had been at the heart of Weimar’s most fascinating art, music, and literature. But the ready availability of conventional fare made it easier not to care, in Germany as well as abroad: When the antifascist and modernist Kurt Weill performed a composition in Paris based on texts by Bertolt Brecht, the audience reaction was negative, in stark contrast to the enthusiastic welcome which the French capital gave to Wilhelm Furtwängler, the conductor of the Berlin Philharmonic and one of the Third Reich’s cultural figureheads.

 

Beyond obfuscation, conventional culture in the Third Reich stood out for the ways in which it was marshalled politically. During World War II, when Germany occupied much of Europe, it promoted its own film industry by excluding Hollywood imports and those prewar movies in which Jews had had any involvement. While the occupiers paid respect to the culture of France, they despised that of Poland, citing even the most mediocre theatrical or musical performance as evidence of German superiority. The Nazi grandees were special not for their appreciation of Western European art from the Middle Ages to the nineteenth century but for their habit of looting widely and unashamedly, thereby treating museums and private collections in the occupied countries as personal hunting grounds.

 

All this backfired, inasmuch as the Allies became increasingly disinclined to distinguish between ‘German’ and ‘Nazi’ culture. Nowhere did this become more apparent than in the British and American bombing campaigns that were inflicted on major city centers with their time-honored churches and town halls. This allowed the Nazi leaders to declare themselves the defenders of German culture against a lethal threat from the outside. After the Third Reich’s demise, conventionality once again ensured continuity. Now that the war was over, American, British, and Soviet occupiers gave ample room to an established version of German culture, out of long-standing respect and in an effort to win over a defeated people. To Germans, attending a conservatively interpreted Beethoven symphony or seeing an entertaining movie seemed apolitical. At the same time, it allowed them to preserve a sense of national identity in a situation of national disempowerment. The fact that specifically Nazi elements were marginal to the post-1945 cultural landscape made it all the easier to dissociate oneself from the Third Reich – a necessary step, but also a self-exculpatory one. 

The Battle of The Atlantic has Lessons for Fighting COVID-19

Dixie Arrow after being torpedoed off Cape Hatteras, March 1942

 

 

Tom Hanks's “Greyhound,” the historically accurate film about a World War Two convoy under attack from German U-boats, depicts the harrowing early years of the war when merchant shipping faced near-constant threat of attack. The attacks took place in the mid-North Atlantic where, until later years of the war, air coverage could not protect the ships. But in the very first months of the war, the U-boats converged on shipping within sight of the United States’ coastline. The bungled response to the U-boats lurking off the Eastern seaboard offers some surprising lessons for responding to the current pandemic.

 

With Germany’s December 11, 1941 declaration of war on the United States, Hitler’s navy launched what was known as Operation Paukenschlag, or Drumbeat, the cross-Atlantic attack on shipping plying the Eastern seaboard lanes. Five long-range U-boats departed in the third week of December from their pens in occupied France. They came within just miles of the Eastern seaboard early in January 1942.

 

What they saw shocked and delighted them. It was the start of what the U-boat sailors called the “Second Happy Time,” after an earlier period of great success against Allied shipping.

Lighthouses flickered, headlights beamed, signs glowed, and household and business windows illuminated the night. Arriving off New York City, “I found a coast that was brightly lit,” one U-boat captain recalled. “At Coney Island, there was a huge Ferris wheel and roundabouts —could see it all. Ships were sailing with navigation lights. All the light ships, Sandy Hook and the Ambrose lights, were shining brightly. To me this was incomprehensible.”

 

Other Germans arriving along the East Coast shared his astonishment. They had traveled from blacked out Europe. But surfacing off American cities and resort towns from Portland, Maine, to Miami, observed another U-boat officer, “we could distinguish equally the big hotels and the cheap dives, and read the flickering neon signs…. Before this sea of light… we were passing the silhouettes of ships recognizable in every detail and sharp as the outlines in a sales catalogue…. All we had to do was press the button.”

 

It was a turkey shoot. Those first five U-boats bagged 23 ships within days. The German navy couldn’t afford to dispatch more than a dozen U-boats at a time to the happy hunting ground that spring. But with so many targets silhouetted against the lights glaring out from 1,500 miles of coastline, they wasted no torpedoes. Three ships on average went to the bottom every day that spring.

 

From Canadian to Caribbean waters, those scant few U-boats on patrol at any one time sank a total of more than 360 merchant ships and tankers--about 2,250,000 gross tons--in the first half of 1942. An estimated 5,000 lives, mostly merchant seamen, were lost.

 

Americans on shore and passengers on flights gaped at the horrifying spectacle of ships exploding and sinking. Vacationers watched tankers burn to the waterline and found debris and bodies washed up on shore. With oil and food supplies suddenly lost in vast quantities, rationing began.

 

But those same tourists and beach goers shared some of the blame for the carnage and economic toll. The resort towns and other coastal sites where they flocked refused to dim the hotel, restaurant, shop, boardwalk and carnival lights. They were beacons that drew people—customers—from far and wide.

 

And they nearly cost the nation World War Two.

 

Historian Samuel Eliot Morison wrote in his official postwar naval history, 

 

“One of the most reprehensible failures on our part was the neglect of the local communities to dim their waterfront lights, or of military authorities to require them to do so, until three months after the submarine offensive started. When this obvious defense measure was first proposed, squawks went up all the way from Atlantic City to southern Florida that the ‘tourist season would be ruined.’ Miami and its luxurious suburbs threw up six miles of neon-light glow, against which the southbound shipping that hugged the reefs to avoid the Gulf Stream was silhouetted. Ships were sunk and seamen drowned in order that the citizenry might enjoy business and pleasure as usual.”

 

The U.S. government also shared in the blame. Fearing a demoralized public in the early months of what was sure to be a long and costly war, federal authorities blocked reports about the catastrophic shipping losses and the Navy’s inability to stop the U-boat onslaught. People didn't realize that they could help stop the carnage.

 

Sound familiar? Instead of refusing to turn out the lights, Americans today are refusing in large numbers to mask and social distance. Many share the disbelief of those earlier Americans, who in the early months of the war would not accept that their actions harmed the country and put their fellow citizens at grave risk. Businesses now, like then, worry about losing customers. Unwilling to undermine the economy, some state and federal officials have blocked public access to infection data and denied the gravity of the public health risk.

 

As with the global war in early 1942, we are just in the first months of the coronavirus pandemic. But in the late spring of 1942, American authorities finally wised up to the need to come to grips with the U-boat onslaught.

 

Public campaigns urging voluntary dimming of the lights weren’t working. Finally, on April 18, U.S. military and federal officials responsible for protecting the Eastern seaboard ordered a shoreline blackout and a dim-out of coastal cities.

 

With reluctance from some quarters, headlights and windows were masked, hotels darkened their signs, lighthouses dimmed their beacons, and businesses shut off the lights at dusk. Penalties were enforced against those who refused to turn their lights off.

 

The benefits came quickly. The U-boats were deprived of their silhouetted sitting ducks. Coastal shipping losses dropped immediately, back to 23 that month. The “curve” of sinkings flattened. Convoys with naval escorts such as those depicted in “Greyhound” further reduced losses until, in July, there would be only three sinkings, and then none inside U.S. coastal waters for the rest of the year.

 

As officials and individuals consider their responsibility for halting the coronavirus outbreak, we would do well to recall this earlier failure to act to defeat a common enemy. We should learn from the past and act promptly and decisively to enforce masking before the coronavirus sinks the nation.

30 Years Later: Saddam Hussein's Fateful Decision to Invade Kuwait

Oil Well Set Fire during Iraqi Retreat, Kuwait 1991

 

In the summer of 1990, Iraq was on the verge of bankruptcy and already in arrears on payments to several international creditors. It responded to its predicament by invading its southern neighbor, Kuwait. Thus started a long international crisis. It ended only after the onset of a US-led military campaign in mid-January 1991 to liberate Kuwait. 

The 1991 Gulf War brought an ignominious defeat to Iraq, which diminished its regional status and weakened the regime of Saddam Hussein, the Iraqi President. Contemporaries could only surmise that this chain of events was the act of an irrational dictator. Indeed, Saddam was often compared to Hitler. Yet once oil is introduced into the picture, it turns out there was logic, after all, to Saddam's madness.   

Saddam's fate was intertwined with that of oil from the very beginning. The Iraqi Baath party, which Saddam led, came to power in 1968, just as oil prices started climbing. In the preceding decade, industrialized countries had come to rely on oil to meet their energy needs. World demand was surging while global capacity remained the same. The first to spot an opportunity was the Libyan junta, which demanded in 1969 to receive 55 percent of the profits on its oil (up to that point profits were divided evenly between the hosting government and the corporation which pumped the oil out of the ground). In 1972 Iraq nationalized its oil industry and gained full control over its revenue. A year later, the Organization of Petroleum Exporting Countries (OPEC) reacted to another round of Israeli-Arab fighting by cutting production quotas and raising prices. The revolution in Iran in 1979 temporarily removed Iran's oil from the market, causing panic in Western countries. Lines stretched for hours at gas stations across the developed world.

As a result, during the 1970s the price for a barrel of oil shot up from $3 to $30. The impact on Iraq was dramatic. State revenue from oil rose from $487 million in 1968 to $12.2 billion in 1979. All this came at an opportune time for Saddam. He needed the money to overcome several thorny issues. There was an on-off Kurdish rebellion in the north where major oil fields were located. Iraq's other oil fields lay in the south, which was dominated by Shiites, the country's majority population, who were also suspected of scheming against the government. Sunnis such as Saddam resided in the center of the country, yet he did not even represent them. Most of the senior officials and officers in the Saddam administration hailed, like their president, from the same tribe residing in the environs of the city of Tikrit. How could one man control such a divided country?

Saddam used the enormous resources at his disposal to either bribe or intimidate the Iraqi population. To those who were willing to support the regime, the state would offer full employment with decent pay by expanding the state bureaucracy and investing in a large military-industrial complex, petrochemical industry, and huge infrastructure projects. If that was not enough, the regime also provided low-cost loans for housing and higher education scholarships. The Kurds in the north, finally subdued in 1975 after a military operation, were offered social welfare handouts. Those who chose to resist the regime would face a vast secret police and a formidable army.

Trouble began in 1980 when oil prices started to fall. After a decade of high oil prices, industrialized countries shifted to other sources of energy such as coal, gas, and nuclear power. In 1986 alone the price plummeted from about $28 to $12 a barrel. OPEC was unable to stop this trend. It tried to dictate production quotas but had no mechanism to punish cheaters. Countries, worried about losing market share, kept selling their oil no matter how low the price fell.

For the time being, Iraq was shielded from the effects of the downturn. Throughout the years 1980 to 1988, Iraq received lavish subsidies from Saudi Arabia and Kuwait. This was the way in which these countries supported Iraq in its war against fundamentalist Iran whose attempts to export its revolution frightened governments across the region. However, when the Iran-Iraq war ended, Iraq had to face the music on its own.

By 1990 Saddam's patience was running thin. Another year of low oil prices could force him to cut the budget. He and his ministers had no doubt that should he do so, his regime would collapse. Saddam was nothing without the benefits he offered to his people.

In Saddam's view, the main culprit was Kuwait. Indeed, Kuwait was a consistent quota-cheater, selling every year about 1.5 million barrels of oil more than OPEC wanted it to. Unlike Iraq, which was wholly dependent on oil as it was its sole export product, Kuwait was also a processor and marketer of the black gold. It owned three refineries in Europe and 6500 service stations across the continent under the logo "Q8." Low oil prices helped attract customers to buy the products of its petrochemical industry and fuel at its gas stations.

Throughout the first six months of 1990, various Iraqi officials in a series of conferences and summit meetings tried to huff and puff Kuwait into submission. But the oil sheikhdom seemed defiant. By June, Saddam had ordered the Republican Guard to prepare for an invasion and two months later Iraqi tanks rolled into Kuwait City. Had Saddam succeeded in annexing Kuwait, as he had intended, Iraq would have turned into an oil superpower, equal in its weight to Saudi Arabia. Then Saddam could have tried to whip OPEC into shape and dictate prices.

It was clear from the outset that this was a desperate gamble that put Iraq on a collision course with Washington. But Saddam believed he had no other choice. As one senior Iraqi minister summed it up in January 1991: "if death is definitely coming to this people and this revolution, let it come while we are standing." 

Who’s Our Roy Cohn?

 

 

Was Roy Cohn evil?  There seems to be a consensus that he was.   

That is at least what we learn from two recent films on New York City’s most notorious lawyer: Matt Tyrnauer’s 2019 documentary, “Where’s My Roy Cohn?,” and Ivy Meeropol’s “Bully. Coward. Victim.  The Story of Roy Cohn,” just released last month on HBO.

 

A fixture of the New York political and social scene until his death from AIDS in 1986, Cohn made himself indispensable to some of the biggest movers and shakers of the late 20th century, including Joseph McCarthy, Ronald Reagan, members of the Genovese crime family, and Donald Trump.  A friend to the rich and famous, he was the bête noire of liberals and the left.  

 

As an assistant prosecutor in the espionage case of Julius and Ethel Rosenberg, Cohn illicitly lobbied trial judge Irving Kaufman to impose the death sentence, carried out by electrocution in June 1953.  Later (as we learn in Meeropol’s film), Cohn admitted to fellow lawyer Alan Dershowitz that he had helped frame Ethel Rosenberg by coaching false testimony from her brother David Greenglass, a miscarriage of justice later confirmed by Greenglass in a 60 Minutes interview.  About using the courts to murder an innocent mother of two young children, Cohn had no more remorse than he had about not paying his income taxes.  “We framed guilty people,” he reportedly told Dershowitz.  To protégé Roger Stone he said, “…if I could have pulled the switch, I would have.” 

 

Cohn used the notoriety he achieved helping to execute the Rosenbergs and convict other accused Communists to land the coveted position of chief counsel to Wisconsin Senator Joseph McCarthy in his notorious Congressional crusade against alleged communist infiltration of national government and the military.  Cohn’s constant presence at McCarthy’s side in the televised Army-McCarthy hearings in 1954 earned him international fame, while at the same time wrecking his political ambitions once McCarthy’s manic overreach brought censure from the public and the Senate. 

 

His history as a McCarthyite clung to Cohn like a cheap suit, but as he retreated home to New York City he made that suit a fashion statement, amping up his anticommunist rants and selling his legal services to anyone who needed a ruthless courtroom advocate with no compunction about twisting the truth.  His clients included mobsters, crooked politicians and Gotham’s worst powerbrokers, the city’s real estate titans, among them the young Trump, who at the time was turning his father’s outer-borough business into an international empire.  Cohn cherished his friendship with Trump, whom he defended in the 1970s against a federal suit for racial discrimination in housing.  Trump repaid that loyalty by pretending he didn’t know Cohn when the latter’s AIDS diagnosis hit the press.  Until, of course, he needed a truly ruthless advocate during the Russiagate crisis, when Trump’s long-dead former personal fixer once again became “my Roy Cohn.”

 

The banality of “Cohn’s evil”

 

Many interviewees in both documentaries testify to Cohn’s “evil,” his social pathology, his lack of scruples, conscience or remorse, and his nearly innate criminality.  An unattributed voiceover sets the tone for Tyrnauer’s film from its opening scenes:  “Roy Cohn’s contempt for people, his contempt for the law was so evident on his face that if you were in his presence you knew you were in the presence of evil.”  According to former prosecutor and author James Zirin, on whom Tyrnauer relies heavily for background and color, “He was like a caged animal.  If you opened the door to the cage, he would come out and get you.”  

 

Meeropol’s film sets almost the same tone, though differently framed by her experience as the granddaughter of the Rosenbergs.  Interviewing very few of the same informants, Meeropol seems to come to the same conclusion.  John Klotz (a lawyer who investigated Cohn in the 1970s) declared, “Roy Cohn was one of the most evil presences in our society during most of my adult life.”  As Cohn’s cousin, journalist David Lloyd Marcus (who also appears in “Where’s My Roy Cohn?”), succinctly put it, Cohn was “the personification of evil.”

 

In cataloguing Cohn’s misdeeds, both films are precise and exhaustive.  They leave little doubt that this behavior resembles textbook examples of a personality disorder or social pathology.  A lawyer from Cohn’s firm could have been providing a profile to Psychology Today when he tells Tyrnauer that Cohn “knew no boundaries” and that “if you were on the right side of him, you were OK.  If you were on the wrong side of him, it was terrible.” According to Zirin, Cohn was “a personality in disarray.  A personality in anarchy, which had no rules, had no scruples.  It had no boundaries.”    

 

But as Cohn himself might declare, “what’s it to us?”  Why should we care about this psychologically crippled character? How was Cohn’s corruption and unscrupulousness so different from anyone else’s?  Isn’t the world Cohn inhabited, of socialites and their lawyers, of mobsters and real estate speculators, full of sociopaths and disordered personalities like Cohn’s?  It’s only if we think of Cohn as especially emblematic of the age that we spend so much time learning about his life.  

 

Tyrnauer only hints at a broader perspective that might have been useful in understanding Cohn as a historical figure.   “Roy was an evil produced by certain parts of the American culture,” writer Anne Roiphe (another Cohn relative) tells us.  But what parts?  “When you look at Cohn’s life,” the voice over opening Tyrnauer’s film intones, “you are shining a light on demagoguery, hypocrisy, and the darkest parts of the American psyche.” Tyrnauer cuts to scenes of Cohn with a young Donald Trump, Trump speaking and, at the end of the montage, the scene of a violent attack on a black protester at a recent Trump rally. 

 

If it’s not one thing, it’s your mother

 

Unfortunately, we don’t learn much from such juxtapositions.  Did Cohn create Trump?  Was he Trump avant la lettre, as New York Magazine columnist Frank Rich recently proposed?  Or did they have some origin in common?  As they strain to explain Cohn’s significance, both films get hung up on the notion that Cohn was “evil.”  Tyrnauer’s suffers the most, and “Where’s My Roy Cohn?” takes off from that assumption on a psychoanalytic tangent that is all too familiar: It was all about his mother. 

 

David and Gary Marcus, Cohn’s cousins on his mother’s side, recount the story of a Passover hosted by Dora Marcus Cohn at the family apartment, when a maid died preparing the Seder dinner.  Dora, Roy’s mother, hid the body and kept the incident a secret, until one of the children asked the traditional question, “why is this night different from every other night?”  Dora blurted out “Because there’s a dead maid in the kitchen.”  

 

Cousin Gary thought Dora was more troubled by the fact that the maid’s death had interrupted the Seder, “not that a life had been lost.” For David, this incident revealed the origins of Roy’s pathology: “That’s totally Roy’s spirit.  His lack of ethics, his lack of empathy.  That came from Dora.  For Roy, life was transactional.  It was all about connections and accruing power.”  

 

In Tyrnauer’s extensive picture of her, Dora, the homely rich Jewish daughter who at best could achieve a desperately arranged marriage, doted on her only son, holding him so close that he could not break free until her death.  Dora here seems blamed both for his homosexuality and his need to hide it.  And thus for Cohn’s “transactionalism” of corrupt exchange and manipulation, in which people were simultaneously shields against public scrutiny (as was lifelong friend, Barbara Walters, who reluctantly bearded for him) and instruments of power. 

 

But, wait.  What about the father?  We learn some things about Al Cohn, but not really enough.  Father Al bought his judgeship with money from his wife’s family in exchange for marrying Dora.  He then served the New York political machine for the rest of his long career on the bench, and put his son in contact with some of the biggest hitters of Tammany Hall, including Bronx political boss Edward J. Flynn, a slick Roosevelt loyalist who nonetheless helped Tammany run the city like a cashbox through the middle of the twentieth century.  Later, Cohn the younger maintained that close relationship, serving Tammany stalwarts like party boss Carmine DeSapio, Brooklyn’s Meade Esposito, and Bronx party chair Stanley Friedman, who also joined Cohn’s law firm.  Cohn, like his father, was a creature of the machine. In a city run so “transactionally,” why do we even need to be talking about Dora?  

 

The father’s Tammany connections also help explain the son’s ardent anticommunism. The Bronx political machine, led not only by Flynn but also by its representatives in Albany, stood at the forefront of New York anticommunism during the 1930s, the “red decade.”  Bronx Democrats were responsible for some of the most repressive anticommunist legislation on the eve of World War II, long before the Cold War or any inkling of a Soviet threat to national security.  In March 1940, it was a Bronx Democrat, state Senator John Dunnigan, who launched the city’s Rapp-Coudert investigation of Communists in the public schools and municipal colleges, leading to the firing of several dozen and serving as a prelude to McCarthyism a decade later. 

 

Why did Tammany hate communists?  Throughout the 1930s, Communists in the teachers’ union and elsewhere effectively challenged Tammany control of city schools and agencies, a wrench in its patronage system rivalled only by reformers in Mayor Fiorello LaGuardia’s City Hall.  Clearly, not all Democrats were “liberal” like FDR, or like Roy Cohn claimed he once was, before he decided to support Republican Cold Warriors such as Reagan.    

 

Meeropol does a bit better in historically situating Cohn.  In contrast to Tyrnauer’s facile psychoanalysis, she traces Cohn’s evil back to his pivotal role in the Rosenberg case, though the history stops there.  Her film is not as slickly cut or scored as Tyrnauer’s, but Meeropol is more honest and insightful, as she continues the project of self-exploration through family history begun in her 2004 documentary on her grandparents’ case, “Heir to an Execution.”  This project, which includes extensive interviews with her father Michael, the elder of the two Rosenberg/Meeropol children (her uncle Robert is notably absent from this latest film), is valuable as history in its own right. 

 

Meeropol also provides a more nuanced picture of Cohn’s closeted homosexuality, which is at once public and repressed, as well as weirdly honest, dishonest and corrupting, all at the same time.  In this she follows the lead of Tony Kushner, whose play Angels in America figures prominently in her reconstruction of Cohn’s disturbed and disturbing life.  Like Kushner, Meeropol sees something convoluted and paradoxical about Cohn, even as he represents the worst of American culture.  “To call him ‘evil’ -- it’s true,” journalist Peter Manso tells her at one point.  “But it doesn’t explain a hundred other things about Roy Cohn.”   

 

That’s a good point.  But it would be nice to learn a few more of those hundred things.  More needs to be said about Cohn’s resemblance to and affinity for Trump, the historical roots of that “strain of evil,” as Rich puts it, in New York’s social register and political “favor bank.”  It’s not enough to justify our interest in Cohn merely by connecting him to Trump, as Tyrnauer does.  We need to know who and what enabled each of them to exist.  Many of those enablers are the same people.  Their story is worthy of yet another film.

From Historical Injustice to Contemporary Police Brutality, and Costs of Monuments to the Unworthy

Capt. Silas Soule

 

 

 

 

On June 22, demonstrators who attempted to topple a statue of Andrew Jackson located in Lafayette Square near the White House were forcibly removed by police in riot gear. Jackson rose to political prominence on the laurels of his exploits as an Army officer during the War of 1812 and the Battle of New Orleans, which he used as an effective platform for a political career culminating in a successful run in 1828 as a Democratic candidate for president.  In addition to the military campaigns against the British, Jackson played a prominent role in wars waged against Native nations of the Southeast, while becoming infamous as president for carrying out the policy of Indian Removal, which resulted in the series of atrocities perpetrated against the so-called Five Civilized Tribes known as the Trail of Tears, in which at least 15,000 died.  The day after the attempt to fell Jackson’s statue, President Trump announced he was working on an executive order to “reinforce” current laws criminalizing the defacement of monuments honoring military leaders. On July 12, the president followed up with a threat of a 10-year prison sentence for anyone damaging federal statues or monuments.

Jackson is only one of many deeply flawed historical figures whose legacies have come under scrutiny and criticism by Americans demanding social change and political reform in the wake of the police killings of Breonna Taylor, Elijah McClain, and George Floyd. Many are rightly questioning why we continue to honor such leaders as Jackson, as well as figures related to Spanish colonial conquest and genocide such as Christopher Columbus and Junípero Serra, or Robert E. Lee and other military officers and politicians who swore their loyalty to the Confederacy and the cause of slavery during the Civil War period.  For many, the display of such monuments shows a gross disregard for the complexity of our shared historical past, while continuing to exclude and silence the experiences and memories of marginalized peoples. Within this impassioned social environment, it didn’t take long for demonstrators to turn their attentions to such monuments, defacing and even destroying those celebrating historical figures with violent histories, especially in their treatment of non-European peoples.

 

These include one of the numerous statues honoring Junípero Serra, founder of nine Spanish missions in what is now California. On June 19, citizens brought down his statue in San Francisco’s Golden Gate Park. A Catholic bishop condemned the action, while California officials in other cities such as San Luis Obispo proactively removed other Serra monuments to prevent similar actions. 

 

The famed cowboy Kit Carson is another legendary figure glorified in the mythology of the Old West. In reality, Carson was a scout and soldier who took part in several massacres of Native peoples in California, Oregon, and Washington throughout the mid-19th century, and was also a leading participant in the war with the Diné (Navajo) during the period of the Long Walk. In late June, following the bringing-down of a statue of Columbus in Denver’s Civic Center Park, city officials took action to remove a nearby statue of Carson.

 

As a national reckoning propels a movement to remove what many see as tributes to racism and oppression, perhaps we might also imagine what could replace such deeply fraught monuments. We needn’t ignore all of our history in the process: Two Civil War-era heroes who rebelled and refused to join a brutal attack against Native peoples represent the moral courage we would do well to honor.

 

In fact, we should’ve been memorializing these soldiers all along. The ethical stand they took illustrates America’s highest principles – a 156-year-old life lesson that puts to shame many figures who have been celebrated in American history as heroes. It is important to highlight the fact that among all ethnic minorities, Native Americans continue to suffer the highest rate of death at the hands of police.

 

It was November 1864 when Army Capt. Silas Soule and Lt. Joseph Cramer were at Fort Lyons, Colorado, near a peaceful encampment of Arapaho and Cheyenne people who had settled at Sand Creek. The military had promised protection to the Native people for the coming winter amid growing hostilities from settlers in the Colorado Territory.

 

A rush of immigrants had been drawn by the discovery of gold at Pikes Peak in 1858. For many, inflamed by anti-Indian sentiment and eager for undisturbed access to land, resources, and gold, warfare was the preferred option. 

 

Col. John Chivington, commander of the 3rd Colorado Cavalry, was only too willing to oblige. After failing to find a group of Cheyenne dog soldiers who were engaging settlers and troops alike, he decided to take his revenge at Sand Creek. He revealed his intentions to attack the peaceful encampment the night before the assault.  Soule and Cramer protested, to no avail. Soule, later seeking to expose the crime of the massacre and to win justice for the victims, wrote to the former commander at Fort Lyons, Major Edward Wynkoop. In this letter, which includes a graphic description of the massacre, Soule says he was “indignant” over the plan.

 

Like the demonstrators who’ve urged change over the past month, he took action. On the night before the attack, Soule confronted officers readying for the assault, declaring “that any man who would take part in the murders, knowing the circumstances as we did, was a low-lived, cowardly son of a bitch.” 

 

Cramer spoke up to Chivington himself, saying, “I thought it murder to jump them friendly Indians.” Chivington replied: “Damn any man or men who are in sympathy with them.”  As the vicious assault unfolded the next morning, and under threats of death, Soule refused to participate and ordered soldiers under his direct command to stand down. Cramer followed suit. Around 200 people, many of them women and children, died in the massacre directed by Chivington. No one knows the exact figure. Yet the losses would have been much higher had Soule and Cramer not resisted.

 

It’s a reality not lost on the descendants of Arapaho and Cheyenne who were present that fateful day. For their bravery and willingness to stand up to evil, Soule and Cramer are still honored by the descendants of the survivors of Sand Creek. Soule was also instrumental in exposing the brutality of the massacre and gave harrowing testimony before a military commission that investigated Chivington’s actions. His descriptions shocked the nation.

 

 

“Captain Silas S. Soule, A Man with a Good Heart, July 26, 1838-April 23, 1865,” George Levi, ledger art (2014). Courtesy of George Levi.

 

 

 

 

He would be murdered on the streets of Denver three months later.  Aside from their honored place in the memories and stories of Cheyenne and Arapaho people, Soule and Cramer have been largely overlooked by historians. Now more than ever, theirs is precisely the kind of example Americans could learn the most from. 

 

None, perhaps, could benefit more from Soule’s and Cramer’s acts of humanity and moral courage than members of our nation’s police forces, who continue to kill unarmed people at alarming rates while others all too often stand passively by as it happens.

 

We might wonder how incidents like that which led to Floyd’s killing could have turned out differently if examples like Silas Soule were venerated in place of figures such as Christopher Columbus, Andrew Jackson, Kit Carson, and Robert E. Lee.  Where are the monuments to Soule and Cramer?  Where, too, are the monuments for Cheyenne and Arapaho leaders such as Black Kettle, a leader of the Southern Cheyenne who was wounded as he greeted the soldiers with a white flag of truce at the start of the attack? To the Arapaho Chief Little Raven, who survived the massacre and dedicated his life to peace? Or to another Cheyenne chief, White Antelope, who ran toward soldiers “holding up his hands and saying, ‘Stop! Stop!’”? Then, as the firing intensified, he sang his death song as he was killed: “Nothing lives long, except the Earth and the mountains.”  Black Kettle would survive the Sand Creek massacre but be killed four years later at the Washita River massacre of 1868, carried out by General George Armstrong Custer.

 

To its credit, the Colorado legislature approved in 2017 a monument to all the people murdered at Sand Creek. But it’s been mired in red tape and disputes over the location of its placement ever since.

 

Where, in our public consciousness, are stories and events that speak to these brighter understandings of humanity, history, and the value of life? At a time when Washington is choosing a new mascot for its NFL football team (something its owner had recently vowed he would never do), this could be the moment to forge a new path into the future. Let examples of peace, kindness, and moral courage guide us.

What's in an Un-Naming? Berkeley's Kroeber Hall

Alfred Kroeber with Ishi, a member of the Yahi tribe who survived the California genocide, 1911.

 

 

 

I welcome the news that the Berkeley campus has joined the un-naming movement. It provides us with an opportunity to learn about histories we’ve forgotten and to make the honoring of spaces and places into a democratic process rather than a done deal decided by elites in back rooms. 

 

John Boalt, the 19th century anti-Chinese crusader, is already banished from Berkeley’s law school walls. The University is likely to follow the example of a local elementary school and remove the name of John LeConte, an unreconstructed Southern racist, from the building that houses the physics department. 

 

Next on the list is anthropologist Alfred Kroeber (1876-1960), after whom Kroeber Hall is named. He didn’t campaign to restrict immigration to the United States on the basis of race, and he wasn’t a white supremacist. But he was the key academic in a department and museum that rose to fame – literally and scientifically – on the bodies of the Native dead.  

 

Kroeber’s reputation in anthropology rests upon his prodigious scholarship, his success in building Berkeley’s department of anthropology into a nationally ranked program, and his documentation of the cultural experiences of California Indians prior to Spanish colonialism and American genocide. Kroeber supported Native land claims, for which the Council of California Indians acknowledged the role he had played in the struggle “for long delayed justice.”

 

To most California Tribes and Native activists, especially those in the Bay Area, however, Kroeber’s legacy is more bitter than sweet. 

 

Kroeber failed in his responsibility to speak out publicly about the genocide that followed the Gold Rush. “What happened to the California Indians following 1849 – their disruption, losses, sufferings, and adjustments – fall into the purview of the historian,” he wrote in 1954, “rather than the anthropologist whose prime concern is the purely aboriginal, the uncontaminatedly native.” The transformation of everyday life after contact was traumatic, Kroeber conceded, but, he added, “it is not gone into here.” It wasn’t that he didn’t know. He just didn’t go into it.

 

One consequence of this moral cowardice was that until the 1960s a crudely racist imagery about California Indians dominated public discourse in the state, making it easier to frame their near extermination in the imagery of natural history, subject to inevitable processes of erosion and decline, rather than as the result of a planned human intervention. Many people hold Kroeber accountable because he had resources and authority to influence public opinion. Of course, one person, even Kroeber, did not wield such power, but he became the personification of meticulous amnesia. Unlike his widow Theodora Kroeber, who spoke out against the genocide, and his colleague Robert Heizer, who at the end of his career issued a mea culpa for his role in treating California’s Tribal peoples as “non-persons,” Kroeber kept his silence.

 

As a core faculty member of Berkeley’s department of anthropology (1901-1946) and as director of the anthropology museum (1925-1946), Kroeber oversaw the University’s collection of more than ten thousand Native human remains that it plundered from Native graveyards, and tens of thousands of Native artifacts that were excavated from graves or bought cheaply from the desperate survivors of genocide. The University backed up Kroeber’s collecting frenzy and in 1948 proudly showed off to Life magazine its “bone collection [that] has filled two museums and overflows into the Campanile.”

 

I’ve recently read hundreds of Berkeley’s archaeological reports. Not once have I come across an account that treats the excavated as other than specimens for research. No prayers are spoken, no rituals practiced, no indication that the living and the dead share a common humanity. 

 

Kroeber failed to document how Native peoples survived against all odds and lived to fight another day. Activists looking for inspirational accounts of struggle and resistance find little solace in Kroeber’s work, which has a tendency to be nostalgic for the good old days rather than forward-looking. 

 

Kroeber was not particularly interested in the cultures of local Tribes, reporting that they had made an “unfavorable impression” on early voyagers as “dark, dirty, squalid, and apathetic.” Moreover, he concluded in 1925 that the Bay Area Indians were “extinct as far as all practical purposes are concerned.”

 

Walking around the Berkeley campus, it is easy to get the impression that the Ohlone are extinct. A plaque at the university’s entrance acknowledges that a Spanish expeditionary force set up camp here in 1772. There are no plaques to mark the settlements of people who lived in this area a few thousand years earlier. The football stadium commemorates faculty, staff, and students who died during World War I. There is no memorial to the thousands of Ohlone who lived and died in this region. A graceful archway celebrates the life of Phoebe Hearst whose philanthropy funded the excavation of Native graves. There is no comparable recognition of the thousands of people who were dug up from their graves in the name of science and “salvage anthropology.” 

 

Today, the descendants of the Verona Band of Mission Indians and other Ohlone people in the Bay Area are asserting their right to federal tribal sovereignty and to reclaim their ancestral lands, cultural artifacts, and the remains of their dead that are among the nine thousand still held by the University.  

 

We should take advantage of this un-naming opportunity to honor the people who made Kroeber’s professional success possible and who are un-remembered in the university’s landscape.

Constitutional Textualism, Slavery and Undocumented Immigrants

 

 

 

 

Posting on History News Network, Elliott Young, professor of History at Lewis & Clark College, examined the recent Supreme Court decision in Department of Homeland Security v. Thuraissigiam (2020). Young described the decision as a “fundamental threat to equal protection of the law for all undocumented immigrants” that defied long established legal principles. I strongly support Young’s arguments and, in this article, I wish to extend them. Equally distressing is that it was a seven-to-two majority decision with Ruth Bader Ginsburg and Stephen Breyer joining the rightwing court bloc. Sonia Sotomayor and Elena Kagan posted a powerful joint dissent. 

 

The 1996 Illegal Immigration Reform and Immigrant Responsibility Act “placed restrictions on the ability of asylum seekers to obtain review under the federal habeas statute.” In this case, Vijayakumar Thuraissigiam, an undocumented immigrant from Sri Lanka applying for refugee status because as a Tamil he faced beatings, torture, and death, claimed that since he had already entered the territory of the United States, he was entitled to due process. Thuraissigiam was represented by the American Civil Liberties Union (ACLU). The Court upheld the constitutionality of the 1996 law and ruled that he was not.

 

 The majority decision for the rightwing bloc was written by Samuel Alito. Alito argued “Respondent’s Suspension Clause argument fails because it would extend the writ of habeas corpus far beyond its scope ‘when the Constitution was drafted and ratified’” and that the “respondent’s use of the writ would have been unrecognizable at that time.” Not once did Alito reference the 14th Amendment to the United States Constitution. Breyer and Ginsburg, in a concurring opinion written by Breyer, stated that they supported the court majority “in this particular case,” but not the broader assertions made by Alito.

 

In a dissent endorsed by Kagan, Sotomayor wrote that “The majority declares that the Executive Branch’s denial of asylum claims in expedited removal proceedings shall be functionally unreviewable through the writ of habeas corpus, no matter whether the denial is arbitrary or irrational or contrary to governing law. That determination flouts over a century of this Court’s practice.” She argued “Taken to its extreme, a rule conditioning due process rights on lawful entry would permit Congress to constitutionally eliminate all procedural protections for any noncitizen the Government deems unlawfully admitted and summarily deport them no matter how many decades they have lived here, how settled and integrated they are in their communities, or how many members of their family are U. S. citizens or residents.” If Sotomayor is correct, and I believe she is, the Thuraissigiam decision puts all DACA (Deferred Action for Childhood Arrivals) recipients at immediate risk.

 

I’m not a big fan of the national Common Core Standards and their high-stakes standardized reading tests, but as a historian and social studies teacher, I like the idea that they promote close reading of text. Former Associate Supreme Court Justice Antonin Scalia, the paragon of judicial conservatism and the patron saint of the Supreme Court’s dominant bloc, justified his rightwing jurisprudence by claiming to be a textualist. According to Scalia, “If you are a textualist, you don't care about the intent, and I don't care if the framers of the Constitution had some secret meaning in mind when they adopted its words. I take the words as they were promulgated to the people of the United States, and what is the fairly understood meaning of those words.” 

 

But, as Shakespeare reminded us in Hamlet’s famous “To be, or not to be” soliloquy, “There’s the rub.” There is always “the rub.” The problem, with both Common Core and Constitutional textualism is that words have different meanings at different times and to different people and sometimes words are chosen, not to convey meaning, but to obscure it. Understanding “words” requires historical context.

 

The word slavery did not appear in the United States Constitution until slavery was banned in 1865 by the Thirteenth Amendment because the Constitution, as originally written, represented a series of compromises and contradictions that the authors left to be decided in the future. It was a decision that three score and fourteen years later led to the American Civil War.

 

The humanity of Africans was generally denied at the time the Constitution was written; they were chattel, property. But in Article I, Section II of the Constitution, which established the three-fifth plan for representation in Congress, enslaved Africans are referred to as “other Persons.” And in Article IV, Section II, the Constitution mandates that “No Person held to Service or Labour in one State, under the Laws thereof, escaping into another, shall, in Consequence of any Law or Regulation therein, be discharged from such Service or Labour, but shall be delivered up on Claim of the Party to whom such Service or Labour may be due.” 

 

I read text pretty well. As persons, enslaved Africans should have been included in the people of the United States who wrote the Constitution “in Order to form a more perfect Union, establish Justice, insure domestic Tranquility, provide for the common defense, promote the general Welfare, and secure the Blessings of Liberty to ourselves and our Posterity, do ordain and establish this Constitution for the United States of America.” 

 

But of course, they weren’t. Just reading the Constitutional text, without context, does not help us understand what Scalia called “the fairly understood meaning of those words.”

 

Unfortunately for the nation, political bias blinded Scalia while he was on the Supreme Court and blinds the rightwing cabal that dominates the Court today so badly that they just don’t read with any level of understanding and ignore historical documents. Because of this, one of the most pressing issues in the 2020 Presidential election is the appointment of future Supreme Court Justices who can read text with understanding, especially the 14th Amendment to the United States Constitution, and are willing to search for supporting historical evidence.

The History of the Boycott Shows a Real Cancel Culture

Charles Boycott caricatured in Vanity Fair, 1881.

 

 

 

J.K. Rowling, Margaret Atwood, Salman Rushdie, and Noam Chomsky are among dozens of writers, artists and academics who signed a July 7 letter in Harper’s Magazine that warned of growing “censoriousness” in our culture. They described this as “an intolerance of opposing views, a vogue for public shaming and ostracism, and the tendency to dissolve complex policy issues in a blinding moral certainty.” 

 

The writers didn’t use the term “cancel culture,” which Wikipedia describes as “a form of public shaming in which internet users are harassed, mocked, and bullied.” But that’s what they are talking about.

 

While cancel culture deploys modern technology, it is hardly a new tactic. It most famously dates to 1880 in the west of Ireland, when English land agent Charles Boycott's last name became a verb for the practice.

 

Agrarian activists targeted the County Mayo estate that Boycott managed in the early stages of the Irish Land War as tenants agitated for more influence over their rents and lease terms. Seasonal workers were pressured to withhold their labor from Boycott at harvest time, and nearby shopkeepers were menaced to avoid doing business with him.

 

The boycott was born. 

 

Irish parliamentarian Charles Stewart Parnell recommended the tactic weeks earlier during a speech at Ennis, County Clare, about 80 miles to the south of Mayo. 

 

“When a man takes a farm from which another has been evicted, you must shun him on the roadside when you meet him – you must shun him in the streets of the town – you must shun him in the shop – you must shun him on the fair green and in the market place, and even in the place of worship, by leaving him alone, by putting him in moral Coventry, by isolating him from the rest of the country, as if he were the leper of old – you must show him your detestation of the crime he committed.” 

 

(Parnell faced his own “canceling” in 1890 when his longstanding extramarital affair with Katherine O’Shea was revealed and created a public scandal.) 

 

Agrarian activist Michael Davitt used the image of a leper to describe those who did not support the Irish Land League’s fight against the landlord system. Any such person was “a cowardly, slimy renegade, a man who should be looked upon as a social leper, contact with whom should be considered a stigma and a reproach,” Davitt said.

 

No matter how righteous the cause of landlord-tenant reform, the tactics were taken to brutal extremes, including murder. In 1888, boycotted farmer James Fitzmaurice was shot at point-blank range in front of his young adult daughter, Nora, as they steered a horse cart to an agricultural fair in County Kerry.

 

Later, at a special commission exploring the land unrest in Ireland, his daughter testified that after the attack five separate travelers passed on the road because they recognized her as belonging to a boycotted family. Only one stopped, coldly noted that her father was “not dead yet,” then proceeded without helping.

 

Two men were convicted of the murder and hanged. More typically, intimidated or obstinate witnesses refused to testify against the perpetrators of murders and assaults against their boycotted neighbors.

 

Threatening notices or placards -- the 19th century print equivalent of social media posts -- appeared in town squares and at rural crossroads, naming names and often including crude drawings of coffins or pistols. 

 

Social ostracism was hardly new to Ireland, though. Even before the land war, some Irish considered it better to starve to death than to consort with the other side. 

 

In his 1852 book Fortnight in Ireland, English aristocrat Sir Francis Head described the reaction to attempted religious conversions tied to accepting food in the wake of the Great Famine. “Any Roman Catholic who listens to a Protestant clergyman, or to a Scripture reader, is denounced as a marked man, and people are forbidden to have any dealings with him in trade or business, to sell him food or buy it from him,” he wrote.

 

In his seminal work Social Origins of the Irish Land War, historian Samuel Clark said of boycotting: “The practice was obviously not invented by Irish farmers in 1880. For centuries, in all parts of the world, it had been employed by active combinations [social groups] for a variety of purposes.”

 

Yet Clark continued: “What was novel … was … the spread and development of this type of collective action on a scale so enormous that the coining of a new term was necessary. Boycotting was becoming the most awesome feature of the [Irish land] agitation.”

 

Cyberbullying is unpleasant, to be sure. But it is hardly the same as being beaten or murdered. Authors, academics, musicians, and others bothered by their work being “cancelled” might consider the original boycott for some needed perspective. Or perhaps they should leave the rough and tumble of the marketplace of ideas.

Yes, Even George Washington Can Be Redeemed

 

 

 

George Washington was a slaveholder. 

For some Americans, this is reason enough to exclude our first president from the national pantheon. 

According to one poll, 18 percent of respondents believe he should be removed from Mount Rushmore. Others expressed themselves by defacing or toppling Washington statues.

Are these critics right? 

On the surface, it might seem so. American slavery was inexpressibly gruesome. Accounts from the time reveal the horrors of enslaved African-Americans being separated from their families, violently beaten, routinely raped by their owners, subjected to monotonous, backbreaking labor, and forced to live in filthy dwellings with no hope for improvement.

This was reality for millions of American blacks.

Washington benefited from slavery his entire life. He bought and sold slaves and sought to reacquire runaways. These facts are undeniable.

Does this make Washington, as a New York Times columnist states, a “monster”?

This critique fails to account for the specifics of Washington’s personal journey. Within the tragic reality of his owning slaves lies a unique and unexpected story. 

Like his fellow southerners, Washington was born into a society that accepted slavery. It is true he expressed no qualms about the institution until the American Revolution, but once he did, an extraordinary transformation began. 

The earliest change perhaps can be detected in Washington’s correspondence with Phillis Wheatley, an African-American poet who had composed verses dedicated to him. Washington wrote to her in 1776 praising her “great poetical Talents” and expressing his desire for a meeting. The request broke strict etiquette between slaveholders and black people.

Their correspondence highlights something Washington understood about African-Americans that was lost upon his contemporaries: their abilities and humanity. Compare Washington’s reference to Wheatley’s “genius” with Jefferson’s harsh assessment that her poems “are beneath the dignity of criticism.”

Many of Washington’s closest associates during the war, such as Alexander Hamilton and Lafayette, opposed slavery. These individuals inclined Washington against the institution. Perhaps the greatest influence, however, was the many black people who served courageously during the war. 

After the Revolution, Washington began to speak of slavery in moral terms. He pondered ways to provide slaves with “a destiny different from that in which they were born.” He hoped such actions, if consummated, would please “the justice of the Creator.” 

Washington freed his slaves at his death—but this raises two questions: first, why didn’t he do so in his lifetime, and second, why didn’t he speak against slavery publicly?

First, we must note that Washington detested breaking up slave families, making it a policy not to do so. He realized, however, that freeing his slaves might make family breakups inevitable. Most of the slaves at his estate, Mount Vernon, belonged to his wife Martha’s family, the Custises, which meant he couldn’t legally free them. At Mount Vernon, Custis and Washington family slaves often intermarried. The Custis heirs regularly sold slaves, breaking up their families. Washington knew that if he liberated his slaves, some in the slave families would be free while the others would remain enslaved in Custis hands, vulnerable to being sold (which eventually happened). 

Mary V. Thompson's excellent book The Only Unavoidable Subject of Regret recounts that, as president, Washington developed elaborate plans to emancipate his slaves. Secret letters to family friend David Stuart reveal Washington trying to convince the Custis heirs to join him in manumitting their slaves together, preserving the families, and hiring them out to tenant farmers. Unfortunately, talks with potential tenants fell through. Washington continued to agonize over a situation where emancipation meant separating black family members. 

Second, we must note that while many founders were antislavery, several, such as South Carolina’s John Rutledge, sought to protect the institution and threatened disunion to do so. This left antislavery founders in a difficult situation. They believed the nation could win independence, initiate a risky experiment in self-government, and survive in a dangerous world (threatened by predatory British, Spanish, and, later, French empires) only by uniting the strength of every state into one union. This necessitated compromises with slave states during the founding, most notably in the Constitution. 

Pulitzer Prize-winning historian Joseph Ellis believes these concessions were necessary, writing “one could have a nation with slavery, or one could not have a nation.” African-American leader Frederick Douglass saw the utility of the union the founders crafted, compromises included, arguing that, if the states separated, northern antislavery forces could less effectively influence southern slavery. 

Washington believed slavery was so divisive that it threatened the nation’s existence, potentially ending any hope of liberty for all Americans. He had good reason to believe this—during his presidency, an antislavery petition signed by Benjamin Franklin provoked much southern outrage.

Washington couldn’t find a satisfactory solution to slavery in life, but he sought to do so upon death. In his will, he ordered that his slaves be freed, the young be taught to read and write and to learn certain trades, and the orphaned and elderly slaves be provided for permanently. He forbade selling any slave “under any pretense whatsoever.”

These were revolutionary acts—educating slaves threatened the entire system. It revealed Washington’s belief that black people could succeed if given the chance. Again, compare this to Thomas Jefferson who once said they were “inferior to whites in endowments both of body and mind.” Jefferson and other Americans, including Abraham Lincoln, believed the two races couldn’t coexist and that the answer was to recolonize African-Americans abroad. Washington never supported these ideas and his will reveals he envisioned black people thriving alongside whites in America.  

George Washington’s achievements are well known—winning independence, presiding over the Constitutional Convention, and serving as the first president. While we cannot ignore his participation in slavery, we shouldn’t discount his remarkable transformation into someone who wished for its abolition and took steps personally to make things right, becoming the only major founder to free his slaves.

We can acknowledge Washington’s monumental victories for liberty while recognizing his personal struggle with slavery. In this time of national angst, Washington’s story helps us understand how the same country that once held humans in bondage can also be the world’s greatest beacon of freedom.

What Does it Mean to be Progressive in 2020?

After suffering mightily from conservative disdain, both for us and for any political principle except sticking it to us, Progressives now sense vindication. On the points that Progressives have advocated over the past few decades, events, meaning reality, have shown us to be right. Everyone but Republicans and fossil fuel companies has gone beyond talking about global warming to planning their responses. The majority of Americans like Obamacare, and the Kaiser Family Foundation found that 56% favor Medicare for All. Racism and sexism are recognized more than ever as deeply embedded flaws in our society, which require systemic change to eliminate. Policing must be made safer for Black lives and for other lives, because of racism and sexism, as well as a culture of impunity that insulates police from the people they should serve.

 

It took a cartoon version of conservative ideas to wake up the 20% to 30% of Americans in the middle to the speciousness of Republican political ideology. Progressive causes are becoming American causes.

 

I worry now that the greatest danger to the political success of progressivism is self-destruction. As soon as Biden pulled ahead in the primaries, David Siders and Holly Otterbein wrote for Politico about a “Never Biden” movement among Bernie Sanders’ supporters. Disappointed revolutionaries are seeking to break off a chunk of progressive support and ensure the victory of Trump and forces of the right. Their motives are as fuzzy as their thinking.

 

Here’s what I mean. Ted Rall says “Progressives Should Boycott the Democratic Party”. David Swanson tells us “Why You Should Never Vote for Joe Biden”. Victoria Freire says “Joe Biden doesn’t deserve our vote”.

 

Joe Biden is far from the ideal candidate for Progressives. Biden has personified the corporate wing of the Democratic Party for decades. He has a long history of moderate, even conservative positions as a centrist Democrat, which these articles detail as one of their major arguments. On the burning issues of the day, it is easy to find Biden statements and votes which anger Progressives: opposition to Medicare for All, endorsement of President Obama’s anti-immigrant policies, silencing of Anita Hill during Clarence Thomas’ confirmation hearings, support for military aggression in the Middle East.

 

Next to these legitimate criticisms, however, anti-Biden voices sink to less honest arguments against him. The least honest is the claim that he is mentally unfit. Rall says, “He is clearly suffering from dementia” and is “senile”, citing as evidence only a poll that shows that many Republicans think he is not fully there. Jeremy Scahill says, “Biden’s cognitive health and mental acuity is, to say the least, questionable”. The senility argument is a Trump talking point, and is just as dishonest when employed by leftists.

 

As someone who has talked in front of audiences all my life, I can confidently say that Biden shows no signs of dementia. His critics ignore how difficult it is to talk publicly, especially in front of cameras, even for those who have done it a thousand times. I constantly hear college graduates, even college professors, fumble for words, interrupt their sentences, insert “like” and “you know” everywhere, and make those flubs for which Biden is criticized, often using videos from another century.

 

Somewhat less dishonest, but just as misleading, is the dredging up of every past Biden statement that puts him squarely in the moderate Democratic camp as proof about his policy ideas today. Biden’s centrism has moved leftwards during his career, just as the Democratic electorate has shifted. He is no Bernie Sanders and has not endorsed Medicare for All. But he openly advocates a version of the Green New Deal, a much more radical environmental policy than that of any presidential candidate before this year. He has argued against defunding the police, a purely negative idea which ought not be a progressive litmus test until it has been much more thoroughly discussed. But his current approach to the twin scourges of sexism and racism is far from his previous stands and squarely in the middle of progressive politics.

 

Anti-Biden leftists ignore the policies that Biden and the Democratic Party are promoting now. Waleed Shahid, of the leftist Justice Democrats, said that Biden’s proposals represent “the most progressive platform of any Democratic nominee in the modern history of the party”.

 

I believe that Progressives, especially now in the face of Republican anti-democratic politics, should always emphasize the necessity of listening to the voters. But a central part of the anti-Biden clamor is the delegitimization of the will of Democratic voters.

 

Krystal Ball, a former MSNBC host, told millions of viewers of “The Young Turks” as early as March, “if they always can say, 'Look, you've got to vote for us no matter what, you've got no other choice,' then they're always going to treat us like this.” Victoria Freire argues this way: “Start by asking why the DNC would choose such a weak candidate for Democrats to consolidate behind. The answer? Corporatist democratic leaders would rather have a fascist in the White House over a democratic socialist.”

 

A different form of condescension comes from David Swanson, who asserts that those who would pick Biden over Trump are “lesser-evil voters” who become evil-doers themselves: “People, with very few exceptions it seems, cannot do lesser-evil voting on a single day without having it take over their identity and influence their behavior.” He cites his own made-up facts: “the nearly universal practice of those who advocate less-evil voting of becoming cheerleaders for evil for periods of four years”.

 

A conspiratorial view of American politics is not limited to the right. Many disgruntled Bernie supporters in 2016 attributed his loss to the secret machinations of some Democratic elite. In this telling, Democratic voters were duped then and are being duped now by people nearly as bad as, or maybe worse than, the far right.

 

American political campaigns are certainly tarnished by deliberate deception, and Trump’s campaign thus far brings the worst form of public lying to the presidential campaign. Voter manipulation is a feature of American politics. But the assertion that a corporate Democratic cabal, a wealthy corporate war-mongering racist and sexist elite, has successfully manipulated Democratic voters to vote for “their” safe candidate is insulting to us voters. That much is obvious.

 

Less obvious are its racial assumptions. The “Black vote”, the convenient political label for how millions of Black Americans make their political choices, was a central media talking point during the primaries. The collective choices of those voters gave moderate Joe the victory over more progressive Bernie. Were they all duped? Did they throw away their votes out of ignorance or malice?

 

The political conspiracy theories of the right assume that Democratic voters actively support evil. The conspiracy theories of the “Never Biden” element of the left assume that we are just dumb.

 

I was frustrated by Bernie’s defeats in 2016 and 2020, and wished that certain Democratic politicians and media personalities had not nudged those elections toward the center. But there is no evidence that the nudging created Hillary’s victory over Bernie by 12% or Joe’s victory this year by 22%.

 

To assume that Black voters, or Democratic voters in general, have made poor choices, that they don’t understand what they should want and how to get there in today’s political climate, is not progressive. That kind of thinking has led left movements toward dictatorship. Letting Trump win by convincing Americans on the left to vote against Biden will be good for nobody, least of all anyone who supports positions further to the left.

 

Steve Hochstadt

Springbrook WI

July 28, 2020

Prepare for Massive Turnover on the Supreme Court in the Next Four Years

Ronald L. Feinman is the author of Assassinations, Threats, and the American Presidency: From Andrew Jackson to Barack Obama (Rowman & Littlefield Publishers, 2015). A paperback edition is now available.

 

 

The Supreme Court of the United States has tremendous power and impact on all Americans. The future membership of the Court will likely be determined in the next term, and it could be a massive change.

The three youngest Justices, Elena Kagan (appointed by Barack Obama in 2010), Neil Gorsuch (appointed by Donald Trump in 2017), and Brett Kavanaugh (appointed by Trump in 2018), are 60, 53, and 55, respectively; they seem to be in good health and are likely to be on the Court for a long time.

Much attention is, of course, paid to the oldest member, Ruth Bader Ginsburg (age 87), who has served on the Court for 27 years since being appointed by Bill Clinton and has had five bouts with cancer (recovering from all of them to date and continuing to work). Democrats have prayed for Ginsburg to stay healthy enough to remain on the Court in the hope that Joe Biden becomes President in 2021. It is imagined that she will retire next year if Biden is President, but stay on, if she is able to, should Trump be reelected.

But then, there is also Stephen Breyer (age 82), appointed by Bill Clinton, who has been on the Court for 26 years. While he is in good health, it seems likely that he will leave in the next presidential term. If both Ginsburg and Breyer leave the Court with President Biden in office, it would preserve a four-Justice liberal bloc that has occasionally drawn an ally from the more conservative side; but if Trump replaces them, the Court would become much more right wing, with a 7-2 conservative majority.

But this is not the end of the issue of the future Court as, realistically, there might be up to four other Justices departing by 2024. This would include Clarence Thomas (age 72), appointed by George H. W. Bush, and Samuel Alito (age 70), appointed by George W. Bush, with Thomas on the Court for 29 years and Alito having served 14 years. There have been rumors that either or both of them might leave the Court now, so that Donald Trump can replace them, but as the summer moves on toward a regular October opening, that seems not to be happening. The point is that if either or both left the Court, Trump could replace them with younger, more ideological conservatives, while if Joe Biden were able to replace them, the Court would move substantially to the left.

But then, we also have Sonia Sotomayor (age 66), on the Court for 11 years after appointment by Barack Obama. It has been publicly reported that she has problems with diabetes, which might, in theory, cause her to resign from the Court in the next term.  Sotomayor has been a Type 1 diabetic since age 7, and  had a paramedic team come to her home in January 2018 to deal with an incident of low blood sugar.  If Trump were able in the next Presidential term to replace her, the conservative majority could be as strong as 8-1 by 2024.

And then, finally, we have Chief Justice John Roberts (age 65), who has led the Court for 15 years since appointment by George W. Bush. Roberts is as much of a “swing vote” as there is among the conservative Justices, surprising many with some of his decisions and utterances regarding Donald Trump. The problem is that Roberts has had health issues involving seizures, in 1993 and again in 2007, as well as a fall in 2020. In 2007, after two years as Chief Justice, Roberts collapsed while fishing alone on a pier at his summer home in Maine, fortunately not falling into the water and drowning. In June 2020, he fell and hit his forehead on the sidewalk, receiving sutures and an overnight hospital stay. In that case, a seizure was ruled out as the cause of the fall, but the possibility that Roberts might leave the Court has become a subject of speculation.

So while the future of these six Supreme Court Justices is for the moment just speculation, the odds are good that two or more might leave the Court, and potentially as many as six, which would give either Joe Biden or Donald Trump the ability to transform the ideology of the Court’s majority until mid-century.

So the Presidential Election of 2020 is not just about who might be in the Oval Office, or which party might control the US Senate, but also a potential revision of the Supreme Court’s role in American jurisprudence, and its impact on 330 million Americans.

Life during Wartime 516

The Roundup Top Ten for July 31, 2020

A Brief History of Dangerous Others

by Richard Kreitner and Rick Perlstein

Wielding the outside agitator trope has always, at bottom, been a way of putting dissidents in their place. The allegation is not even necessarily meant to be believed. It is simply a cover story, intended to shield from responsibility not only the authorities implicated in crimes or abuses of power, but also society as a whole. 

 

Africa's Medieval Golden Age

by François-Xavier Fauvelle

During the Middle Ages, while Europe fought, traded, explored and evolved, Africa was a continent in darkness, 'without history' – or so the traditional western narrative runs. In fact, as François-Xavier Fauvelle reveals, it was a shining period in which great African cultures flourished.

 

 

The Border Patrol’s Brute Power in Portland is the Norm at the Border

by Karl Jacoby

What’s happening in Oregon reflects the long history of unprecedented police powers granted to federal border agents over what has become a far more expansive border zone than most Americans realize. 

 

 

Tom Cotton Wants To Save American History. But He Gets It All Wrong.

by Malinda Maynor Lowery

Senator Cotton’s remarks and his proposal to revise history obscure the violence, death and displacement that slavery caused in both Black and Indigenous communities.

 

 

Congresswomen Of Color Have Always Fought Back Against Sexism

by Dana Frank

When he called Alexandria Ocasio-Cortez “crazy” and “out of her mind” because he didn’t like her politics, Ted Yoho was harking back to Edgar Berman’s narrative that a political woman who dares to speak up is constitutionally insane.

 

 

The Death of Hannah Fizer

by Adam Rothman and Barbara J. Fields

Those seeking genuine democracy must fight like hell to convince white Americans that what is good for black people is also good for them: Reining in murderous police, investing in schools rather than prisons, and providing universal healthcare.

 

 

Why "White" Should be Capitalized, Too

by Nell Irvin Painter

Capitalizing "White" makes clear that whiteness is not simply the default American status, but a racial identity that has formed in relation to others. 

 

 

How Trump Politicized Schools Reopening, Regardless of Safety

by Diane Ravitch

Amid this uncertainty and anxiety, President Trump has decided that the reopening of schools is essential to his prospects for reelection.

 

 

Colonialism Made the Modern World. Let’s Remake It.

by Adom Getachew

What is “decolonization?” What the word means and what it requires have been contested for a century.

 

 

On Sex with Demons

by Eleanor Janega

"The idea of having sex with demons or the devil... has a long and proud history. A concern about sleep sex demons traces at least as far back as Mesopotamian myth where we see the hero Gilgamesh’s father recorded on the Sumerian King List as Lilu, a demon who targets sleeping women, in 2400 BC."

 

Let Us Now Remove Famous Men

The Ku Klux Klan protests the Charlottesville City Council's decision to remove a monument to Robert E. Lee, 2017.

 

 

I passed the statue of Robert E. Lee in Charlottesville, Virginia literally hundreds of times, often admiring the handsome appearance of a general who was proud in defeat, leaving the battlefield with honor intact. The bronze Lee sits ramrod straight in the saddle of his warhorse Traveller, hat clutched in his right hand, atop a sturdy gray stone pillar. In all seasons, whether sprinkled with snow or glowing in a fall sunset, it seldom occurred to me—whose ancestors wore blue and gray—that this statue was a symbol of white supremacy. That’s not because the message was hidden. It was because I was unaware of my own white privilege, which permitted me to view it in terms other than as a potent symbol of white over Black.

 

But white supremacy is exactly the message that the Lee statue embodied. It was the reason it was built in the 1920s. It wasn’t for the general himself, who led tens of thousands of armed rebels against U.S. forces, wounding and killing American soldiers and re-enslaving Black refugees from bondage. Lee had died over five decades earlier, and he didn’t need another statue. By the early twentieth century, Confederate statuary was a growth industry, with Lee at the center. Charlottesville city boosters commissioned it among several such memorials, and it was unveiled in 1924 to the applause of the Sons of Confederate Veterans and other adherents of the Lost Cause. The president of the University of Virginia dedicated it in the presence of members of several Confederate organizations.

 

Few Black Virginians voted that year, either for Calvin Coolidge or his segregationist Democratic opponent, who won Virginia’s electoral votes. The commonwealth made sure of it, passing the 1924 Racial Integrity Act to harden the state’s color line. The University of Virginia would not admit Black students for another quarter century. State schools, hospitals, and cemeteries were segregated. African American southerners were fleeing to cities like Newark and Philadelphia, where at least there was a hope of upward mobility. But Lee’s likeness gave the violence of Jim Crow a veneer of respectability and a nostalgic atmosphere.

 

The Lee statue was a quiet sequel to an adjacent statue of Thomas “Stonewall” Jackson, dedicated in 1921 in Charlottesville to the applause of 5,000 pro-Confederate supporters, many uniformed, in sight of a massive Southern Cross, the Confederate battle flag. The Jackson monument was also a bronze equestrian statue depicting the general steeling himself for battle. He was killed in 1863 and never saw the defeat of the Confederacy for which he gave the last full measure of his devotion.

 

Had the Confederacy won, 4 million Black Americans would have remained enslaved.

 

Adding injury to insult, the memorial to Jackson and the Lost Cause was built on the grave of a Black neighborhood, McKee Row, which the city seized through eminent domain and demolished, making room for a symbol of white supremacism that was unambiguous to Black residents. The Lee statue stood near Vinegar Hill, a historically Black neighborhood, which the city demolished as part of urban renewal. Vinegar Hill fell so that white city residents could enjoy a downtown mall, pushing its Black residents to the margins, while relying on them to clean the buildings, tend the children, and cook and serve the food that made the living easier for many white residents.

 

The statues kept that racial order front and center, and it is worth remembering that statues are not history. They are historical interpretations reflecting the values, assumptions, and interpretations of their times. History books of the 1920s generally argued that slavery benefitted Black people and Reconstruction was a Yankee plot to punish the white South, fastening African American rule on a prostrate people who were gracious in defeat. It was a rallying cry against Black equality.

 

The Lee statue attracted neo-Confederates, neo-Nazis, white nationalist militias, and other hate groups that converged to defend white supremacy in August 2017. It was the same statue that Heather Heyer lost her life over on the city’s downtown mall.

 

Should such statues come down? 

 

New Orleans mayor Mitch Landrieu perhaps said it best in a 2017 speech arguing for the removal of that city’s Confederate monuments. Landrieu recounted the road he’d traveled to the decision and his talk with an African American father, whose young daughter became the lens through which he framed the argument. “Can you look into the eyes of this young girl and convince her that Robert E. Lee is there to encourage her? Do you think that she feels inspired and hopeful by that story? Do these monuments help her see her future with limitless potential?”

Landrieu didn’t have to fill in the blanks. “When you look into this child’s eyes is the moment when the searing truth comes into focus,” he concluded. The statues came down.

 

The wave of iconoclasm in the United States in 2020 seems to look past the nuance of each statue, viewing any stone or bronze figure with a history of racism as a fair target. Statues of Christopher Columbus, Junípero Serra, and Juan de Oñate came down based on a history of enslaving, torturing, killing, and expropriating land and sacred spaces of indigenous Americans. Philip Schuyler—who the heck was Philip Schuyler (Alexander Hamilton’s father-in-law and one of New York’s biggest enslavers)? Civil disobedience is jarring. Disorderly property destruction can be downright frightening.

 

After a mob hauled down a bronze equestrian statue of King George III in New York City in July, 1776, General George Washington wrote that those who cut off the king’s head and melted the metal for bullets acted with “Zeal in the public cause; yet it has so much the appearance of riot and want of order, in the Army, that [Washington] disapproves the manner, and directs that in future these things shall be avoided by the Soldiery, and left to be executed by proper authority.”

 

Washington condemned it as the wrong execution of the right idea. And in an irony of history, George Washington himself has now, like King George, become a target for protest and removal. The Washington who led American forces to victory against the British in defense of “all men are created equal” was also the owner of over 100 enslaved people of African descent, the same leader who signed the 1793 Fugitive Slave Act authorizing deputized agents to cross state lines and kidnap Black people who had no right to defend themselves in court. The same Washington who pursued his own fugitive bondswoman, Ona Judge, who spent decades evading the Washingtons’ property claims to her body. 

 

But it’s worth noting that over 90 percent of the recent removals were directed by mayors, governors, and other elected officials and assemblies, responding to citizens’ calls for them, and often replacing them with other memorials.

 

And the tone of the protests or the way in which some statues are defaced brings out another aspect of white privilege. That is, some initially sympathetic observers are uncomfortable with who gets to oversee and control the process. This is not to excuse wanton destruction. The Atlantic’s Adam Serwer tweeted that those who sought to bring down a statue of Ulysses S. Grant “probably just want to break things.” There is that, but where on the social balance sheet do we register the insult of a century and a half of white supremacy set up in America’s town squares and green spaces?

 

The protests over the murders of Ahmaud Arbery, George Floyd, Breonna Taylor, and so many others are infused with a damning critique of structural racism—which is not so much a set of persistent attitudes as institutional practices. A century and a half after Lee’s defeat on the battlefield, the typical Black family owns 1/10th the wealth of the typical white family, and that wealth gap is widening into a chasm. Black workers’ earnings are diminishing compared to whites fifty years after Civil Rights, and taken as a whole, African Americans have returned to the same income gap versus whites as in 1950, when Harry S. Truman was president, before Brown v. Board of Education, and before the University of Virginia admitted a single Black student. Was that incidental or an intentional part of Robert E. Lee’s legacy, if not Washington’s?  The Covid-19 crisis has put Black and Latinx workers on the front lines as essential workers, delivering healthcare and meals, yet most are underpaid, have little job security, and risk bringing the virus home to children and seniors. Black people account for nearly a third of coronavirus cases and 40 percent of the deaths. How is that sacrifice to be memorialized?

 

Should the statues remain up, doing the quiet work of reinforcing white supremacy while we get to work dismantling the interlocking components of structural racism? Or are the statues part of a 400-year history of violence against African-descended people that needs urgent attention and rectification? In what direction do the statues and monuments point us?

What's in a Name?: Decolonizing Sports Mascots

Protesters against Washington NFL Team name, Minneapolis, 2014. Photo Fibonacci Blue, CC BY 2.0

 

 

 

Growing up in Swarthmore (PA) during the mid-1970s, I played on my high school football team, the Swarthmore Tomahawks. The image of a tomahawk on our helmets was supposed to inspire fear in our opponents, I guess, though as a 150-lb. Quaker kid with a nickname of “stone hands” I doubt I did. We were one of thousands of schools and colleges to use Native American names and mascots in the 1970s; the list included “Indians,” “Warriors,” “Braves,” “Redmen,” “Fighting Sioux,” “Squaws,” and “Savages,” almost all of them denoting violence, almost all of them justified in the name of “honoring” Native people. All these teams expropriated Native symbols for entertainment purposes that effectively covered up a violent history of settler colonialism; the use of Tomahawks was pretty bad, especially in Pennsylvania, where 20 Christian Indians were massacred by the so-called Paxton Boys in 1763.

 

As a teenager I didn’t think about the issue of mascots, in part because I never learned about Native people in high school, with the likely exception of a section on Thanksgiving and a shout-out to Pocahontas and Sitting Bull; Native Americans never made it to the 20th century in my high school books, which is still the case today in many textbooks and classrooms. I came to understand how problematic the use of tomahawks and other Native symbols and names in American sports was while writing a chapter on Native American mascots for a 2003 book called Native American Issues. By then, many high schools, colleges, and universities had dropped their Native names and mascots in response to protests by Native groups, but hundreds remained, including at the professional level. In light of the Black Lives Matter movement and a national reckoning with systemic racism, some of these teams are rethinking their use of Native symbols. It’s 2020. It’s about time. 

 

The Redskins name is being axed, finally, in favor of Red Tails or Red Wolves, largely because of pressure at last applied by the NFL and major team sponsors such as FedEx, Nike, and PepsiCo; although I applaud these companies for speaking out, let’s not forget that for years FedEx shipped Nike’s Redskins jerseys and shirts without batting an eye, enabling the team to sustain its use of a name offensive to Native people, who had campaigned against the name since the 1970s, and to any American who objects to racist stereotypes. The use of “Redskins” depoliticized Native people, dehumanized them into a stereotype, and, according to numerous studies, debilitated their self-image. Washington, D.C. is the center of American political power, the place where diplomats from around the globe visit to conduct business and the place where federal legislation, court decisions, and presidential orders are generated. What was the impact of the Redskins name, imagery, and associated fan performances, for decades, on politicians’, judges’, and other officials’ perceptions of Native people and their place in American society as they debated public policy and legal cases? 

  

The Atlanta Braves baseball team in particular demonstrates the dishonest and damaging use of Indian imagery to cover up Native people’s traumatic history. The team has announced that it will retain the name “Braves” but will review the use of the “tomahawk chop.” In employing the tomahawk on uniforms and merchandise and allowing fans to perform their ritualistic “tomahawk chop,” the team, and Major League Baseball, perpetuate a lie about one of the most sordid periods of American history: the forced march in the late 1830s of the Cherokee from their homeland in Georgia along what they called The Trail Where They Cried to Indian Territory in present-day Oklahoma, which led to roughly 4,000 Cherokee dying in federal stockades and on the trail. It wasn’t the Cherokee who wielded tomahawks. Rather, they tried to wield legal arguments tied to treaty rights to retain their sovereignty, arguments accepted by the U.S. Supreme Court in 1832 but ignored by President Andrew Jackson and a host of local and state officials eyeing the Cherokee’s gold resources and rich cotton-growing lands.

 

Changing the names, mascots, and symbols at all levels of sports is a starting point. But decolonizing sports history requires a deeper analysis of how false historical narratives that “blamed the victim” became embedded in the public venues of everyday life and shaped generations of Americans’ perceptions of Native people. Native people have served in their country’s armed forces for over a century in the hope that it will honor the promises written in hundreds of treaties, promises that represent the legal and moral legacy of American colonialism.

 

Editor's Note: As this article was prepared for publishing, it has been reported that owner Dan Snyder will name his team "The Washington Football Team" for the 2020 NFL season. Whether this is a retaliatory measure against sports merchandise companies who will be forced to try to sell a boring product after pressuring Snyder to change names is anyone's guess.  

Lincoln, Cass, and Daniel Chester French: Homely Politicians Divided by Politics, United through Art

Lewis Cass by Daniel Chester French, United States Capitol Statuary Hall. 

 

 

At first glance, the immortal Abraham Lincoln and the largely forgotten Lewis Cass had almost nothing in common save for the fact that they were probably the two homeliest men in 19th-century American politics. Not that their forbidding looks discouraged artists from portraying them, or dissuaded admirers from commissioning visual tributes. Now, the Illinoisan and the Michigander are each at the center of separate efforts in Washington to remove iconic statuary dedicated in their honor.

As to their obvious differences, Lincoln was a southern-born, anti-slavery Whig-turned-Republican, and Cass, a New England-born, pro-slavery Democrat. Lincoln was self-educated; Cass studied at the tony Phillips Exeter Academy (where Lincoln later sent his eldest son). Cass commanded regiments during the War of 1812; Lincoln’s sole military experience came as a militia volunteer in the Black Hawk War in Illinois, where, he admitted, he battled only “mesquitoes” [sic]. Cass went on to serve as territorial governor of his adopted Michigan, and later as one of its U. S. Senators. Lincoln turned down his one and only chance to serve as a territorial governor—in remote Oregon—and later lost two Senate bids in Illinois. Cass served as an American diplomat in France; Lincoln never set foot overseas. And Cass helped introduce “Popular Sovereignty,” the controversial doctrine giving white settlers the power to welcome or ban slavery from America’s western territories; Lincoln not only opposed the scheme, its 1854 passage “aroused” him back into politics after a five-year hiatus. Of course, Cass failed in his only run for the presidency, while Lincoln won twice.

No two Northerners could have been more different—politically. But there was no disguising the fact that neither man was easy on the eyes. That burden they shared.

Seeing Lincoln’s likeness for the first time, the Charleston Mercury branded him “a lank-sided Yankee of the unloveliest and of the dirtiest complexion…a horrid looking wretch.”  Encountering him in the flesh, a British journalist recalled that Lincoln’s skin was so “indented” it looked as if it had been “scarred by vitriol.” Lincoln had no choice but to make light of his appearance. Accused in 1858 of being two-faced, he famously replied: “If I had another face, do you think I would wear this one?” Once he claimed that a hideous-looking woman had aimed a shotgun at him, declaring: “I vowed that if I ever beheld a person uglier than myself I would shoot him.” Lincoln claimed that he replied, “Madam, if I am uglier than you, fire away!” 

As for Cass, who came of age politically a generation earlier, his initial biographers politely evaded the subject of his personal appearance—though he possessed a bloodhound face dotted with warts and moles, framed by a meandering reddish-brown wig atop a once-husky frame tending to fat. A tactful 1891 writer conceded only that “where a man of less significant appearance would escape attention…the physical poise and stateliness of Cass would arrest the attention of the heedless.” One mid-century newspaper proved blunter, suggesting of Cass that “it is hard to tell whether he swallowed his meals or his meals him.” 

In their time, not surprisingly, both Cass and Lincoln emerged as irresistible targets for caricature. But it is their other, little-remembered artistic connection that comes as a shock. In a significant irony of American art, Lincoln and Cass were each portrayed heroically in marble by the same great sculptor: Daniel Chester French. The colossal, enthroned 1922 statue in the Lincoln Memorial is not only French’s masterpiece, but arguably the most iconic statue in America. Reproduced on coin and currency and featured in movies like Mr. Smith Goes to Washington, it is perhaps most cherished as the backdrop for history-altering events like the “I Have a Dream Speech” that culminated the 1963 March on Washington.

French’s full-figure Cass likeness, carved 30 years earlier for Statuary Hall, has never competed for similar attention or affection, but it did win praise when it was first unveiled in 1889. But in the last few days, some Michigan lawmakers have spearheaded a drive to remove and replace it. “His history is not reflective of our values here,” declared State Senate Democratic leader Jim Ananich on July 10. “I hope that people look at this as a real opportunity to recognize some important people.” (Michigan’s other contribution to Statuary Hall is a figure of Gerald R. Ford). The Cass controversy has erupted amidst a clamor to reappraise Lost Cause monuments throughout the South, and just a few weeks after protestors demanded the de-installation of the long-standing Freedman’s Memorial near Capitol Hill. That 1876 Thomas Ball sculptural group shows the 16th president symbolically lifting a half-naked, enslaved African American from his knees. A copy has already been scheduled for removal in Boston.

In a sense, the Michigan initiative to change its Statuary Hall complement is anything but unique. A few years ago, California swapped out a statue of its long-forgotten statehood advocate Thomas Starr King for one depicting Ronald Reagan. In the wake of the new scrutiny of Lost Cause monuments, Florida recently announced it would replace its longstanding statue of Confederate General Edmund Kirby Smith with one of the African-American educator and civil rights activist Mary McLeod Bethune. The fact that Michigan, too, is beginning to reckon with its conflicted past—Cass (1782-1866) was a slaveholder who also supported Native American removal—demonstrates how thoroughly we have begun conducting a nationwide reappraisal of American memory and memorials.

There is yet one more irony attached to the proposed departure of the Lewis Cass statue: the early history of its distinctive location. Statuary Hall, a favorite of modern visitors to the U.S. Capitol, served originally as the hall of the House of Representatives. In this very chamber, Abraham Lincoln served from 1847 to 1849 in his one and only term as a Congressman from Illinois.

Although freshman Lincoln only occasionally addressed full sessions of the House there, he did rise on July 27, 1848, a few weeks after the Democratic Party nominated the same Lewis Cass to run for president. Whig Lincoln stood firmly behind Zachary Taylor that year. Now he decided it was time not only to offer support for the old general, but a scathing rebuke of Cass.

Lincoln had been known earlier in his career for his caustic wit (one of his sarcastic newspaper articles had once provoked its intended victim to challenge him to a duel). But he had learned over the years to temper the attack-dog side of his oratorical skill set. Cass managed to re-arouse Lincoln’s dormant instincts for venom. Apparently the fact that the Democratic candidate had been re-branded by supporters as a military hero—like Taylor—inspired the 39-year-old Congressman back into the scathing style of his political apprenticeship.

Hastening to point out that he claimed no military glory of his own, Lincoln quickly went after Cass’s claims otherwise. Yes, Lincoln said in his stem-winder, Cass had indeed served decades earlier in the War of 1812, but his principal contribution had been enriching himself. “Gen: Cass is a General of splendidly successful charges,” Lincoln taunted, “—charges to be sure, not upon the public enemy, but upon the public Treasury.”

“I have introduced Gen: Cass’ accounts here chiefly to show the wonderful physical capacities of the man,” Lincoln ridiculed the stout candidate.  “They show that he not only did the labor of several men at the same time; but that he often did it at several places, many of hundreds of miles apart, at the same time… . By all means, make him President, gentlemen. He will feed you bounteously—if—if there is any left after he shall have helped himself.”  

Lincoln was just getting started, and one can only imagine his colleagues rolling in the aisles as his onslaught gained steam. No, Lincoln charged, Cass never manfully broke his sword rather than surrender it to the British, as his campaign now boasted. “Perhaps it would be a fair historical compromise to say, if he did not break it, he didn’t do any thing else with it.”  

As for Cass’s reported military triumphs north of the border, Lincoln declared tartly: “He invaded Canada without resistance, and he outvaded it without pursuit.” Attaching a military record to Cass, he concluded with a vivid frontier allusion, was “like so many mischievous boys tying a dog to a bladder of beans.”

Emboldened by the reception his assault received on the House floor, Lincoln took his campaign on the road, stumping for Taylor in Pennsylvania, Maryland, Delaware, and Massachusetts. In the end, Taylor won the presidential election handily, but Lincoln apparently lacked Cass’s aptitude for exacting political payback. Ambitious for appointment to the federal Land Office (since his party did not re-nominate him for the House), he ended up disappointed. Offered the Oregon posting as a consolation, he turned it down (in part because his wife refused to move west), returned to his law practice, and for a time disappeared into political hibernation. Only when Stephen A. Douglas re-introduced Popular Sovereignty did Lincoln stage a comeback.  

Winning the presidency sixteen years later, Lincoln arrived back in Washington in February 1861, and almost immediately headed to the White House to pay a courtesy call on outgoing president James Buchanan. There he met with Buchanan’s Cabinet. Alas, history did not allow for a face-to-face meeting between the two ugliest men in politics. The elderly Cass had served nearly four years as Buchanan’s Secretary of State, but had resigned back in December. In the ultimate irony, Cass had quit in protest over Buchanan’s reluctance to resupply federal forts in the South at the onset of secession—showing the kind of pro-Union resolve that might have prevented the crisis that Buchanan left Lincoln to handle. Say this much for Cass: he remained loyal to the Union.

That final burst of patriotism may not be enough to save his Statuary Hall effigy. Daniel Chester French’s almost hideously lifelike Cass statue may depart as soon as a replacement can be identified and sculpted. But the Cass masterpiece should definitely be preserved elsewhere—anywhere—for the simple reason that works by the greatest American monumental sculptor deserve to be appreciated on their own terms: as art. At its dedication, critics lauded the French marble as both “an excellent model of the great statesman” and “a statue full of character and expression as well.” 

No one has yet mentioned it, but the uncompromisingly realistic Cass statue also testifies usefully to an era in which matinee-idol good looks were not required for political success. In the age before the glare of television and instantaneous photography were relentlessly aimed at our leaders, politicians could succeed even if they looked like Lewis Cass. Or Abraham Lincoln.

There is no evidence as to whether Lewis Cass ever believed his repellent appearance held him back from the White House, or disqualified him from romanticized portraiture. As for Lincoln, regardless of whether or not his statue survives in Lincoln Park in Washington, at least he maintained a sense of humor about his portrayals, while fully understanding how they burnished his image.  And that is why he posed for a succession of sculptors long before Daniel Chester French ever undertook to create the Lincoln Memorial. Shown the very first sculpture made of him from life, he was heard to observe with self-deprecating frankness: “There is the animal himself!”

One of the Chicago 7 Reflects on Dissident Politics Then and Now

 

 

The defiantly unmasked, gun-toting crowds demanding an end to social distancing from state legislatures and governors in the early spring were loud and obnoxious. A little later, the chants one of my daughters joined in shouting out in New York City against the police murder of George Floyd were loud and necessary. The eruptive, massive and extraordinarily wide-spread demonstrations in cities and towns all across the country against systemic racism were louder still. And it’s possible that the convergence this year of the pandemic with a death toll approaching 150,000, the related economic breakdown, and the raging anger and social unrest on the streets might change America. Because it’s happened before, and not so long ago.

The years 1968, 1969, and 1970 were especially loud and did change America. In the midst of a horrendous war that killed and maimed Americans and Vietnamese seemingly without end, people were shouting and marching and sometimes fighting the police in the streets; there were burnings and looting, and music and long hair and all sorts of drugs seemed to be everywhere and threaten everything. People of color, uppity women, gays, lesbians, university students, the poor: all sorts of people said and did and demanded all sorts of things. And the government was trying its best to stop it all, to demonize and punish people - political and cultural dissidents - who claimed they only wanted to change the country into something better. 

Those times accelerated a splitting of the country into different tribes. Perhaps the cultural split of the sixties was not as severe as the one between early white colonists and Native Americans that ultimately led to genocidal slaughter, or the divisions between the North and South that culminated in over 600,000 Civil War dead. But it was bad enough, and many of the same issues from the sixties feed today’s polarization – race, sexual identity and gender roles, economic inequality, war, a longing for a different and better country. If you want a quick and simple reminder of the sixties’ continuing impact, you can just look at the bumper stickers and emblems on people’s cars and realize you can easily figure out which side of that long-ago political and cultural split they are on, and what tribe they belong to now.

Another, if less direct way of noticing how the sixties are still helping shape today’s political struggles, is to note the birth years of the president, members of the Supreme Court, and the senior leadership of both parties in the House and Senate. More than 40 percent of them were born between 1939 and 1955, which would have put them between the ages of fifteen and thirty in those especially loud years of 1968, 1969, and 1970. And I think that would have made them particularly vulnerable to the cascade of exceptional images and events and noise of those years. Things like that are hard for people to forget and I don’t think it’s unreasonable to suspect that their experiences then have had some impact on the tone and direction of the political decisions they are making now. Everyone used to be certain the perspectives of the generations that experienced the Depression were shaped by those experiences. Why would 1968, 1969 and 1970 be any different? 

Today, once again many people on the left side of a polarized America feel compelled to become more directly involved in political work in opposition to the dominant economic and cultural arrangements of our country. Some of the initial prompts—just like in the sixties—come from being assaulted by vivid media imagery and stories. Then it was newspaper and TV coverage of the black students sitting at segregated lunch counters acting bravely against racism and injustice, or blacks and whites together riding buses into dangerous southern cities. Now there are digital visions of neo-Nazis marching on the streets, chanting against immigrants, blacks, and Jews, video of police violence and murder, and pictures of crying, angry parents mourning the death of their child killed by a police officer for no understandable reason. 

Often far more quickly than I and many of my friends learned so long ago, people have understood that engaging in politics is what is required to stop some of the ugliness and suffering. In part, that’s because seeing that “politics” directly impacts peoples’ lives is easier now. Most media coverage of political news in the fifties and early sixties was almost benign, and certainly never cruel or savage. But today’s often ferocious Republican and Democratic tribal splitting can make it impossible not to see “politics” as an all too visible, divisive force in America. And because of all that noise, and because the images people see of hurt and pain and cruelty offends their sense of what America should be, many people feel compelled to choose a side and act.

The “Resistance” in today’s political tribal divisions most directly and broadly reflects the values, hopes, and commitments that helped define the dissident politics of the sixties: a belief in the essential worth of other people, and that personally engaging in political activity is required to meet your obligations to others and to make America a fairer, more just, and better place. The newer, broad dissident movement even echoes some of the internal conflicts and overlapping strategies of those older times, with arguments about the legitimacy, priority, and importance of local issue organizing, electoral politics, peaceful mass protest, and righteous violence.

When the viral plague finally begins to truly fade away, restaurants and city street life as it all used to be will take a while to come back, if ever. But the brutal tribal politics of “us” against “them” looks almost certain to continue. And the renewed assault by leftist political dissidents on a long-established dominant racist and sexist culture and inequitable wealth distribution will need to be strengthened. I believe and hope that will happen. But for it to work, there will need to be a better balancing between the particularistic aspirations and hopes and goals of all the different groups of dissident activists, far better than we ever achieved in the sixties.  

In my old world, the unbridged differences between otherwise allies – people of different colors, cultures, genders, classes, sexual identities, and competing claims on priorities for action, all swirling underneath crushing waves of unrelenting government repression, first destroyed individuals among us, and finally, us. Far better than we ever did, people now are, as they must, working together, acting and moving forward together, while constructing ways to live daily lives and still be fully engaged politically. Otherwise the conflicts between tribes we see now could strengthen into something even more horrible and threatening to our country’s core values of tolerance, freedom and democracy.   

Mankato’s Hanging Monument Excluded Indigenous Perspectives when it was Erected and when it was Removed

 

 

In the wake of the murder of George Floyd and the protests in Minneapolis, Minnesota, members of the American Indian Movement (AIM) toppled a controversial monument to Christopher Columbus in St. Paul, Minnesota, signaling a new way to engage with inaccurate representations of the American past. While many believe that monuments serve as a form of public history and that their removal "erases history," the removal of statues demonstrates how communities of color and their allies no longer tolerate narratives of American exceptionalism that suppress questions of racism, slavery, and conquest.

 

In fact, Minnesota's history presents similar debates over the state's founding and the methods by which white settlement grew to dominate the state's landscape. The U.S.-Dakota War, one of Minnesota's most defining events, connects debates over the memory of the Civil War with debates over colonialism. Because public histories of these two divergent events often suppress racism or enact the erasure of Indigenous peoples, they reveal the power cemented in monuments: not only to celebrate a figure or event, but also to dominate the historical narrative that future generations may learn from.

 

By 1912, many white citizens of Mankato, Minnesota, were eager to commemorate the largest mass execution in U.S. history. Fifty years earlier, thirty-eight Dakota men had been hanged from a large scaffold in downtown Mankato for their participation in the short-lived U.S.-Dakota War. President Abraham Lincoln ultimately lowered the number of the condemned from 303 to thirty-eight, both to appease white settlers demanding justice and to avoid sparking a continuation of hostilities between Indigenous communities and white settlers in Minnesota and throughout the Northern Great Plains. The “Hanging Monument” represented an “end” to the Dakota War through the recognition of federal power in controlling not only Indigenous peoples but also the histories of their culture and their experiences in resisting America’s settler-colonial ambitions.

 

The Hanging Monument, an 8,500-pound memorial made of local granite, commemorated the mass execution near the original site. The austere face of the memorial read: "Here Were Hanged 38 Sioux Indians: Dec. 26th, 1862." Judge Lorin Cray, one of the monument's supporters and financial benefactors, insisted to one newspaper that the memorial told the story of a historical event in Mankato and did not seek to demean the Indigenous peoples whom it supposedly represented. Yet the celebration of white law that ended the bloody U.S.-Dakota War, as well as the lack of context given on the monument’s face, enacted a continuous silencing and erasure of Dakota people from that history. The monument thus reassured the white citizens of Mankato that they were safe again, while at the same time promoting the reach of local and federal power and law. For many years, the Hanging Monument was an object for outside visitors to see, a celebration that their town had served as the platform that ended the Dakota War. The city would forever be known as the place that hosted the hanging of the Dakota 38.

 

Between the 1950s and 1970s, many activists demanded the removal of the monument. In some cases, the memorial had red paint poured over its face or was doused with gasoline in the hope that it would burn. Yet, despite the City of Mankato moving the monument away from the hanging site and further out of the public's eye, the Hanging Monument remained deeply ingrained as part of the town. By 1971, Mankato State College (now Minnesota State University, Mankato) was hosting an annual Social Studies Conference. That year, AIM leaders spoke to a full audience of educators, activists, and Bureau of Indian Affairs officials about the plight of the American Indian. One of them, Edward Benton, acknowledged that the Hanging Monument continued to cause emotional pain not only to the Dakota community, but to all Native Americans. He demanded that the inscription be changed to "Here Were Hung 38 Freedom Fighters," warning that otherwise he could not promise the monument would still be standing after the conference concluded. Months after the meeting, the monument was removed and hidden from the public's eye to rid the town of its inconvenient past. Yet the removal was not only a result of the activism sparked by AIM and their allies.

 

As the American Revolution Bicentennial celebration commenced, many white Mankatoans pushed for the city to be recognized by the American Revolution Bicentennial Commission as an "all-American town." They needed the dark shadow of the mass hanging to disappear forever, hoping to replace it with more celebrated history or figures from Mankato's past. One Mankato citizen urged the city to recognize Maud Hart Lovelace and forget the negativity that had long brewed over the mass hanging of the Dakota 38.

 

The Bicentennial, then, represented, in an Indigenous inflection, what historian Edmund Morgan has called the "American Paradox": a celebration of America's founding that at the same time ignored the horrors that many enslaved peoples faced under bondage. In Mankato, that paradox celebrated America's great founding and the campaign of manifest destiny that both displaced Dakotas and all Indigenous peoples and erased them from the historical narrative. When many Mankatoans urged hiding the monument, they enacted a second erasure of the historical narrative to conceal the town's participation in larger settler-colonial ambitions. After Indigenous peoples had been removed and replaced by a settler society, concealing a monument to that removal denied their narratives, stories, and experiences an entry point into the public debate.

 

The Hanging Monument disappeared in the 1990s, when city officials moved it from a city parking lot to an undisclosed location. Publicly, no one knows where the monument presently resides. Its story, though, offers a clear avenue for better understanding the present contestation over historical monuments. This monument connects the Civil War era with America's broader ambitions in colonizing Indigenous lands. And, both while it stood and after its presence became inconvenient, the Hanging Monument shows how memorials control historical narratives and elevate particular interpretations of the past.

Sun, 09 Aug 2020 16:16:31 +0000 https://historynewsnetwork.org/article/176566 https://historynewsnetwork.org/article/176566 0
What the Faithless Electors Decision Says about SCOTUS and Originalism  

 

 

The outcome of Chiafalo v. Washington (a unanimous decision that states may compel presidential electors to cast their votes for the candidate to whom they are pledged -- ed.) was a foregone conclusion. In this troubling time, SCOTUS was not about to upend our system of selecting a president. To achieve the desired result, however, the justices were forced to turn the clear intent of the Framers on its head. This does not mean they made the wrong call; it simply shows us, point blank, that originalism is no more than a pragmatic tool, to be used or ignored at will.

 

The opinion of the Court, delivered by Justice Kagan, states correctly that the system of presidential electors “emerged from an eleventh-hour compromise.” But neither the Court, nor the concurring opinion penned by Justice Thomas, tells us why compromise was required. While Kagan and Thomas quarreled over the Framers’ use of the term “manner,” neither discussed the competing views or warring factions—nor why the Framers, in the end, opted for this convoluted, untested resolution to the heated debate over presidential selection. Any originalist would eagerly explore such promising territory; no justice did, not even the self-proclaimed originalists.

 

Here’s the backstory:

 

The Virginia Plan, the Convention’s opening foray, called for “a national Executive ... to be chosen by the national Legislature.” That made sense. Under the Articles of Confederation, all the nation’s business was conducted by committees of Congress or boards it appointed, but that had proved highly inefficient. In the new plan, Congress would select a distinct executive to implement its laws. (Not until mid-summer did the Framers dub this executive “President.”)

 

A few delegates, however, preferred greater separation between the legislative and executive branches of government; they didn’t want the executive to be totally beholden to Congress. But when James Wilson proposed that the executive be selected by the people, several of his colleagues balked. “It would be as unnatural to refer the choice of a proper chief Magistrate to the people,” George Mason pronounced, “as it would, to refer a trial of colours to a blind man.” Most agreed with Elbridge Gerry, who held that “the evils we experience flow from the excess of democracy,” and with Roger Sherman, who proclaimed, “The people immediately should have as little to do as may be about the government. They want information and are constantly liable to be misled.” Rebuffed, Wilson then suggested that the people choose special electors, and this elite crew would select the “Executive Magistracy.” This option also fared poorly. On June 2, by a vote of eight states to two, the Convention affirmed that the national Executive would be chosen by the national Legislature.

 

Twice more popular election was proposed and turned down. In mid-July, Gouverneur Morris convinced the Convention to opt for special electors, but four days later five states that had favored electors reversed their votes; Congress would choose the Executive, as originally suggested. That is how matters stood until the waning days of August. Then, by a devious maneuver, Gouverneur Morris managed to refer the matter to a committee charged with taking up unsettled issues—even though the manner of selection had been discussed several times and settled. There, in committee, the system we now call the “Electoral College” was written into the Constitution. 

 

The Committee reported out on September 4, less than two weeks before the Convention would adjourn. Morris, a member, presented “the reasons of the Committee and his own.” “Immediate choice by the people” was not acceptable, while “appointment by the Legislature” would lead to “intrigue and faction.” The committee’s ingenious elector system, on the other hand, depoliticized the process. “As the Electors would vote at the same time throughout the U. S. and at so great a distance from each other, the great evil of cabal was avoided,” he explained. Under such conditions, it would be “impossible” for any cabal to “corrupt” the electors. 

 

Hamilton, in Federalist 68, sold this notion to the public: “Nothing was more to be desired than that every practicable obstacle should be opposed to cabal, intrigue, and corruption. ... The convention have guarded against all danger of this sort with the most provident and judicious attention.” Voting separately and independently, “under circumstances favorable to deliberation,” electors would “enter upon the task free from any sinister bias.” Further, to guard against political interference, the Constitution stated that “no Senator or Representative, or Person holding an Office of Trust or Profit under the United States, shall be appointed an Elector.” This argument addressed the concerns of those who had opposed congressional selection.

 

Those who had opposed popular election of the Executive were also pacified. Madison, for example, believed the people should elect their representatives to the lower house of Congress, but selection of senators, judges, and the president should be “refined” by “successive filtrations,” keeping the people at some remove from their government. The elector system did exactly that: people choose their state legislatures, these bodies determine how to choose electors, and the electors choose the president—a most thorough “filtration.” During the debates over ratification, Anti-Federalists complained about this. A New York writer calling himself Cato wrote, “It is a maxim in republics, that the representative of the people should be of their immediate choice; but by the manner in which the president is chosen he arrives to this office at the fourth or fifth hand.” Republicus, from Kentucky, commented wryly, “An extraordinary refinement this, on the plain simple business of election; and of which the grand convention have certainly the honour of being the first inventors.” 

 

Both arguments presented by the Framers were based on the premise that electors, chosen for their greater wisdom and free to act independently, were best positioned to choose the president. It didn’t work out that way. Political parties quickly gamed the system, leading to the fiascos of 1796 and 1800. The Twelfth Amendment addressed one flaw in the scheme by requiring separate votes for the president and vice president, but it did not change the fundamental structure. To this day, we remain saddled with a system devised to shield selection of the president from politics and the people. 

 

That was then, and this is now—an unpleasant truth for originalists. A Court composed of faithful originalists would have decided Chiafalo v. Washington unanimously in favor of the rebel electors, who insisted on maintaining their Constitutionally-guaranteed independence. Fortunately, there are no true originalists on the Court. Yet that does raise troubling questions:

 

Did the Court’s professed originalists consciously ignore the historical context for presidential electors, too embarrassed by the Framers’ distrust of democracy and their inability to foresee the system’s basic flaws? Or did they not understand the textual record, which makes that context clear? Neither alternative is acceptable. Any standard of jurisprudence must be applied evenly and knowledgeably, or it is no standard at all. 

 

Sun, 09 Aug 2020 16:16:31 +0000 https://historynewsnetwork.org/article/176562 https://historynewsnetwork.org/article/176562 0
Two Contagions, One Opportunity to Reboot our Approach

Stearman biplane dropping borate retardant on the 1956 Inaja fire.  After a brief lag, successful air tankers were one of the byproducts of FireStop.

Photo Courtesy U.S. Forest Service

 

 

 

The American West is now experiencing two contagions. The power of both resides in their capacity to propagate. One is ancient, the other recent. For the old one, fire, there are lots of treatments available but no vaccine possible. For the new one, COVID-19, treatments are still inchoate; no vaccine yet exists, though producing one is possible.

Unhappily, the two contagions are meeting on the fireline.  Most fires are caught early; only 2-3% escape containment, but these are becoming larger, and that means a massive buildup in response.  The basic pattern crystallized during the Great Fires of 1910 when some 9,500 men worked firelines and most of the standing army in the Pacific Northwest was called out to assist.  What’s curious is that we are dispatching similar numbers of people today.

Why?  The usual expectation is for machinery and knowledge to replace labor.  In wildland fire, however, equipment and know-how just get added to the amalgam, such that today’s fire camps are logistical marvels (or nightmares) and ideal breeding grounds for a pandemic.  This year the established strategy – massing counterforces against big fires – requires rethinking.  

One approach, following the April 3 letter from the chief of the U.S. Forest Service, is to double down on initial attack and prevent big fires from breaking out. It treats fire as an ecological riot that must be quickly suppressed. The evidence is clear that this strategy fails over time, and even considered as a one-year exception (actually the second one-year exception in a decade), it will fail to contain all fires.

Another approach sees the crisis as an opportunity to work remote fires with more knowledge, sharper tools, and fewer people.  The managed wildfire has become a treatment of choice in much of the West removed from towns.  The summer’s crisis is a chance to experiment with novel tactics that do not rely on massed fire crews.  Better, seize the moment to propose a major reconsideration in how people and tools interact.  

History holds precedents for resolving many of the big issues confronting landscape fire today.  A problem with powerlines starting lethal fires?  Consider the case of railroads in the 19th and early 20th century that kindled wildfires with abandon, and presently hardly ever do.  Communities burning?  America’s cities and rural settlements burned much like their surrounding countrysides until a century ago; now it takes earthquakes, wars, or riots to kindle most urban conflagrations (what has delayed dealing with contemporary exurbs is that they got misdefined as wildlands with houses, a novel problem, instead of urban enclaves with peculiar landscaping, a familiar one.)  Too many personnel on firelines and in camps, so that managing the people requires as much attention as dealing with the fire?  Consider Operation FireStop.

Camp Pendleton, California, site for methodical experiments in firefighting technology.

Photo courtesy U.S. Forest Service

 

In 1954 the U.S. Forest Service and the California Department of Forestry met at Camp Pendleton to conduct a year-long suite of trials aimed at converting the machines and organized science spurred by World War II and the Korean War into materiel and methods suitable for fire control. Operation FireStop experimented with aircraft, retardants, radios, trucks, and tracked vehicles, now war surplus to which the Forest Service had priority access. FireStop helped announce a cold war on fire.

Out of FireStop came air tankers, borate retardant, helicopters for laying hose, helijumping, and models for adapting vehicles – from jeeps to halftracks – for fire pumps and plows. The newly available hardware catalyzed equipment development centers. The organization of fire science helped push toward fire research labs. Some of these activities might have happened anyway. But FireStop quickened and focused the trends.

We should reboot FireStop, this time with contemporary purposes and gear, to align the revolution in digital technology with the reformation in policy that crystallized 40 years ago. This time we don’t need to militarize fire management: we need to modernize it in ways that reduce the need for mass call-outs and logistical carnivals, and that allow us to use today’s cornucopia of technology to do fire more nimbly, more precisely, and with less cost and environmental damage.

Call it Operation Reburn.  Schedule it for two years, one fire season for widespread testing, a second for refining the best outcomes.  It isn’t just new hardware that matters: it’s how that technology interacts with people and tactics.  Presently, for example, drones are being used for reconnaissance and burnouts, but a fully integrated system could transform emergency backfires into something like prescribed fires done under urgent conditions.  The burnouts could proceed at night, do less damage, and demand fewer people.  New tools can encourage new ways of imagining the task to which they are applied.

The two contagions make an odd coupling.  Wearing masks to protect against aerosols is akin to hardening houses against ember storms.  Social distancing resembles defensible space.  Herd immunity looks a lot like boosting the proportion of good fires to help contain the bad ones.   

It’s common to liken a plague to a wildfire. But it’s equally plausible to model fire spread as an epidemic, a contagion of combustion. Outbursts of megafires resemble emerging diseases because they are typically the outcome of broken biotas – a ruinous interaction between people and nature that unhinges the old checks and balances. They kindle from the friction of how we inhabit our land. For now we have to live with COVID-19. We’ll have to live with fire forever.

An American way of fire came together during the Great Fires of 1910.  Half of America’s fire history since then has sought to remove fire as fully as possible.  The other half has sought to restore the good fires the early era deleted.  Operation FireStop adapted military hardware and a military mindset to support fire’s suppression.  Operation Reburn could enlist more appropriate technology to enhance the policies wildland fire needs today.  Not least, it could scrap the unhelpful war metaphor for more apt analogues drawn from the natural world that sustains fire – and ourselves.

Sun, 09 Aug 2020 16:16:31 +0000 https://historynewsnetwork.org/article/176572 https://historynewsnetwork.org/article/176572 0
"You Sold Me to Your Mother-in-Law...": An Ongoing Quest to Reconnect a Family Yes, black lives matter, even a century and a half later. Let us remember them with dignity and post proofs that they are not forgotten, as Edward E. Baptist advised in Stony the Road they Trod: Forced Migration of African Americans in the Slave South, 1790-1865 (2002):

 

The idea that lost loved ones were out there somewhere continued to haunt and to inspire black families for years to come. . . 

. . . historians, who can do their work only because those who have long gone have left them messages and words buried in documents they study, have an ethical obligation to give voice to the dead.

 

Sparked by those convictions almost 50 years ago, Jan Hillegas and I collaborated with George P. Rawick, a comrade and friend from my teens, to gather, edit, and publish narratives of former slaves in ten supplementary volumes to Rawick’s encyclopedic compilation The American Slave: A Composite Autobiography.

 

Members of the Federal Writers' Project had conducted the interviews in the 1930s, but that Depression-era program ended abruptly when the United States was thrust into World War II. Unfinished transcripts were boxed and stored, many of them unlabeled, without a record of their locations. Surviving files had been neglected for more than 30 years when we mounted a search for them in archives and institutional depositories throughout the Southern states.

 

That was the largest project undertaken by our Deep South People’s History Project, and probably the most indelible. The essential value of the Mississippi section, which comprises five volumes, is that it increased the number of published first-hand accounts by survivors of slavery in that state from a couple dozen to about 450.*

 

Those were gratifying achievements, performed without institutional or government patronage. I have continued to write about history ever since, mostly as a contributor to philatelic publications. But an important letter that I bought from a stamp dealer about a decade ago has resisted my attempts to flesh out, follow, and narrate the story of its author. Perhaps History News Network’s community of scholars can suggest a fruitful approach.

 

One hundred fifty years ago, David Jackson of Montgomery, Alabama, sent this letter to a former Confederate army officer at Richmond, Virginia — a man who had inherited him, possessed him, and sold him before the Civil War:

 

 

 

Montgomery, Ala June 14 1870

            Capt John F C Potts

            

Dear Sir

 

Yours of March 29th is at hand and contents noted. To refresh your memory I will give you a little of my history. I was raised by your father, Thos. L. Potts, at Sussex Court House, at the division of your father’s estate I was drawn by you. You sold me to your mother-in-law Mrs. Graves, and at her death, I among others was sold at auction — a speculator bought me and brought me to this place and sold me to Col. C. T. Pollard Pres. M.&W.P.R.R. My wife, whose name was Mary, belonged to Mrs. Graves. We had four children whose names were Martha Ann, Alice, Henrietta & Mariah. My mother’s name was Nanny and belonged to your father. What I have written will probably bring me to your recollection, although it has been 22 years since I left. If you know anything about my wife or children please write me in regard to them. I had three brothers named Henry, Cyrus & Wash. If you know anything about them let me know. By answering this you will greatly oblige. Accept my kind regards and believe me 

                                                                        

Yours truly

                                                                                                David Jackson

 

Sold at auction to a speculator in 1848, transported from Virginia to Alabama, and sold by the speculator to a Montgomery & West Point Railroad tycoon, David Jackson had been forcibly separated from members of his family with no word of their subsequent fates for more than two decades, yet he had not given up hope of finding them. 

 

Can today’s historians shed light on his quest?

 

Census records show that Jackson was born about 1825. He was about age 23 when the estate of his Virginia owner separated him from his wife and children. He was about age 45 when he wrote to John F. C. Potts hoping for word about them. 

 

During the Civil War, Potts had commanded Company D of the 25th Battalion, Virginia Infantry (the Richmond City Battalion) until a few weeks before Appomattox. After the war he was a banker and an underwriter. He was 58 years old when he died June 21, 1876, at Richmond.

 

That is all I have. David Jackson, members of his family, and his descendants deserve better. If readers can help bring more to light, please write to me via the HNN editor.

 

* Our publisher, Greenwood Press, has hidden my 1976 interpretive introduction to the Mississippi narratives, without so much as a byline, in The American Slave: A Composite Autobiography, Supplement, Series I, Mississippi Narratives, Part 1, pages lxix-cx.

Sun, 09 Aug 2020 16:16:31 +0000 https://historynewsnetwork.org/article/176497 https://historynewsnetwork.org/article/176497 0
Will the Crisis Year of 2020 Turn Out Like 1968?

 

 

 

The year 1968 was one of the darkest in the nation’s history. With the public deeply divided over the toll of the Vietnam War (35,000 combat deaths by year’s end), the country was plunged into mourning after the murders of Dr. Martin Luther King and Robert F. Kennedy.  When violent protests erupted in dozens of cities after Dr. King’s death, the Republican presidential candidate, Richard Nixon, vowed to use “law and order” to restore “traditional American” values.

Nixon’s appeal to a conservative white electorate worked. After winning a narrow victory over Hubert Humphrey (George Wallace got 13% of the vote), Nixon began a rollback of many of the Kennedy-Johnson civil rights and anti-poverty programs. 

In 2020, the nation is once again lost in fear and mourning, this time over 140,000 deaths from an epidemic and the shocking death of George Floyd. As widespread protests (generally peaceful) continued in many cities over racial injustice, the Republican president has drawn from Richard Nixon’s strategy. Vowing to impose “law and order,” he has attacked protestors as criminals and called for a return to “traditional” values.

Will Trump be able to repeat Nixon’s success? 

The short answer is no. 

Here are four reasons why the nation’s political and social climate in 2020 is much different from that of 1968.

1. Demographics. Both the overall population and the electorate have changed dramatically since 1968. Fifty years ago, the nation was 85% white and the governing class almost entirely white. In 1968, Congress had only six Black members (five congressmen, one senator), none from the South.

In 2020, the Congressional Black Caucus has 55 members. Many of the nation’s largest cities including Chicago, Detroit, Baltimore and San Francisco have Black mayors. 

Today the “Non-Hispanic White” group comprises barely 60% of the country's population. People of color (nonwhites and Hispanics) now comprise a majority of those under age 16. The nation is, in effect, “browning” from the bottom up.

2. Education. The electorate is much better educated today. In 1968, only 14% of men and 7% of women had four+ years of college. In 2020, the numbers have increased to 35% of men and 36% of women. 

Numerous studies have found a strong correlation between higher education and liberal political views. One can debate the causes (e.g., liberal professors), but the outcome is clear: better-educated voters support issues such as affirmative action, abortion rights, gun control, and increased funding for social programs.

3. Awareness of racial injustice. In 1968, whites had a very limited understanding of how Black people lived and the pervasive discrimination they endured. The nation’s schools, housing and workplaces were largely segregated. Black people, who were still referred to as “Negroes” in many newspapers, were invisible in popular white culture. For example, the first network TV show to feature a Black family, Sanford and Son (about a junk dealer in Watts), did not appear until 1972. 

Ignorance breeds intolerance. In 1966, when Dr. King led a protest march through an all-white suburb in western Chicago, he was met with a hail of rocks and bottles.

I was a senior at an all-white high school in California in 1968. In our U.S. History class, the civil rights movement was ignored. The institution of slavery was dismissed in a few paragraphs in our textbook; it was simply “abolished” by Lincoln at the end of the Civil War. No mention was made of Jefferson and Washington owning slaves or the slaveholding states’ role in shaping the Constitution.  The next year, when I began college and attended classes with Black students for the first time, I experienced culture shock. They wanted to talk about discrimination in jobs, housing and education. I literally did not understand what they were talking about. 

4. The Vietnam War. Today, some 45 years after the war ended, it is difficult to comprehend how completely the conflict dominated public discourse. In 1968, we had 540,000 troops fighting in Vietnam and any man over the age of 18 and not in college was likely to be drafted. Today, the entire (all-volunteer) U.S. Army strength is less than 480,000. 

During each week of 1968, some 250 American soldiers were killed. Images of besieged Army bases and wounded G.I.s filled the network news every night. 

The Vietnam War was the number one issue in the 1968 election. Nixon’s position was one big lie. He promised an “honorable” end to the war, but refused to say how he would achieve it. It took Kissinger and Nixon five more years to negotiate a peace, which was basically a surrender. The cost: 15,000 more dead Americans. 

Today we are still engaged in several foreign wars, but the Pentagon has learned how to maintain a low profile by restricting information and limiting access to the news media. Foreign wars are just not an issue this election.

An All-Star’s Optimism

In a recent editorial in the Los Angeles Times, Kareem Abdul-Jabbar (who was in his third year at UCLA in 1968) wrote: 

“The moral universe doesn’t bend toward justice unless pressure is applied. In my seventh decade of hope, I am once again optimistic that we may be able to collectively apply that pressure, not just to fulfill the revolutionary promises of the U.S. Constitution, but because we want to live and thrive.”

Trump’s attempt to use Nixon’s outdated playbook will fail. Our nation is younger, more diverse and better educated now. 

We know better. 

Sun, 09 Aug 2020 16:16:31 +0000 https://historynewsnetwork.org/article/176573 https://historynewsnetwork.org/article/176573 0
Who Opened the Door to Trumpism? David Frum's "Trumpocalypse" Reviewed

 

 

 

David Frum is among the best-known conservative Never Trumpers. His columns in the Atlantic and appearances on television and podcasts are filled with insightful and cutting criticism of Donald Trump’s policies, personality, and character (or lack thereof). In his new book, Trumpocalypse: Restoring American Democracy, he offers more of the same, oftentimes presented in witty, pithy prose. He asserts that Trump, despite all of his obvious flaws, was able to gain control of the Republican Party and win the presidency by walking through an “unlocked door” that was in many ways left open by American conservatives:

 

I came of age inside the conservative movement of the twentieth century.  In the twenty-first, that movement has delivered much more harm than good, from the Iraq War to the financial crisis to the Trump presidency.  

 

As a former speechwriter for George W. Bush with considerable conservative bona fides, Frum is in a unique position to give a full accounting of just how conservatism fell short before the 2016 election, and to offer a compelling path forward for conservatives eager to reclaim their movement. Unfortunately, he does neither.

 

Frum is at his best when he is attacking Trump’s personal corruption, highlighting his failures as a businessman before becoming president, and his use of the office for his own financial and political gain.  He characterizes Trump as a low-grade grifter as opposed to someone trying to score, dare I say, “bigly”: 

 

Net-net, how much can Trump have pocketed from Vice President Pence’s two-day stay at Trump’s Irish golf course? How much from the Secret Service renting golf carts at Trump golf courses? How much from Air Force officers being billeted at the Turnberry in Scotland? How valuable were Ivanka Trump’s Chinese trademarks, really? 

 

Never one to miss twisting the knife, he then scornfully notes that Trump has probably dishonestly extracted roughly $4 million “less… than Michelle Obama earned from her book and speaking fees.” 

 

Frum also catalogues and assails Trump’s long history of trafficking in racist discourse, from his promotion of birtherism to his insinuation that Barack Obama was an ISIS supporter to his inexplicable equivocation after the 2017 neo-Nazi march in Charlottesville.  In a curious twist that reflects Frum’s frustration over Trump’s weathering of the Russian scandal, he devotes more attention to criticizing Robert Mueller’s failure to dig deeper into Trump’s ties to Russia than he does to the scandal itself.  

 

Trumpocalypse also takes aim at the administration’s foreign policy. Frum shares some cringe-inducing quotes revealing Trump’s long-held affinity for authoritarian strongmen. For example, in a 1990 Playboy interview Trump expressed admiration for Deng Xiaoping’s brutal suppression of the pro-democracy movement:

 

When the students poured into Tiananmen Square, the Chinese government almost blew it.  Then they were vicious, they were horrible, but they put it down with strength.  That shows you the power of strength.  Our country is right now perceived as weak…as being spit on by the rest of the world.  

 

Frum explores the corrosive impact of Trump’s strange and twisted “bromances” with brutal dictators like Kim Jong Un (“We fell in love”), Vladimir Putin (“I think in terms of leadership he’s getting an A”), and Mohammed Bin Salman (“He’s a strong person.  He has very good control…I mean that in a positive way”).  Such praise has emboldened wannabe dictators from the Philippines to Hungary to Brazil, and dismayed the democratic allies of the United States.  

 

Through his long analysis of Trump’s follies, Frum never develops his contention that twenty-first-century conservatism helped open the door for Trump. He notes the Iraq War as a factor, but as the man who wrote the “Axis of Evil” speech that became the justification for the war, he could have provided a much more detailed accounting of how it helped Trump’s America First slogan resonate with so many people. He does not. He also asserts that the 2008 financial collapse contributed to Trump’s economic populism, but he does not explain how. How did conservative economic policies contribute to the financial meltdown? Was it a failure of oversight? Not cutting enough regulations? Too much deregulation? Without a full accounting, his political mea culpa is hollow and fails to offer guidance on how to avoid mistakes in the future.

 

Drawing a distinction between twentieth- and twenty-first-century American conservatism is also disingenuous. Frum is rightly horrified by the racially charged tactics that Trump routinely employs, but without looking deeper into the history of the conservative movement, he does a disservice to his readers. For instance, he does not address National Review’s opposition to the desegregation efforts of the Civil Rights movement, Nixon’s Southern Strategy, or George H.W. Bush’s notorious Willie Horton ad, and how each of them greatly damaged the Republican Party in the eyes of African Americans. Frum also could have delved into the fiscal recklessness of the GOP, which only seems to care about budget deficits when a Democrat is in the White House. Ronald Reagan went beyond Lyndon Johnson’s promise that the country could have “guns and butter” by essentially saying that the country could have guns, butter, and low taxes. This brand of free-lunch conservatism led to a near-tripling of the national debt from 1981 to 1989.

 

In the second half of the book, Frum offers a list of reforms to help the country restore its democratic structures and ensure their survival, including mandating that presidential candidates release their tax returns prior to elections; eliminating the filibuster to prevent legislative minorities from impeding Senate proceedings; granting statehood to the District of Columbia, an area more populous than Vermont and Wyoming; passing a new Voting Rights Act to ensure that all Americans have access to polling stations without lengthy wait times or cumbersome procedures; and creating non-partisan commissions to create electoral districts that are not simply designed to perpetuate the majority party’s hold on seats.  He also makes sensible policy recommendations on major issues like health care, immigration, and climate change.  This was the most disappointing part of the book, not because of the proposals themselves, but because of his failure to ground them in a conservative historical and philosophical framework.  

 

In the introduction, Frum dedicates the book to “…those of you who share my background in conservative and Republican politics.  We have both a special duty—and a special perspective.  We owe more; and we also, I believe are positioned to do more.”  Yet he does little to make his recommended reforms palatable—let alone desirable—to conservatives outside the Never Trump camp.  Consider the environment.  Many of today’s self-proclaimed conservatives recoil from measures to protect the environment, rejecting such initiatives as attempts by “the Left” (which seems to include everyone on the political spectrum between Joseph Stalin and David Brooks) to seize control over more of the American economy and society.  In fact, environmentalism and conservatism share a great deal philosophically.  One of the cornerstones of conservative thought comes from Edmund Burke’s Reflections on the Revolution in France, first published in 1790.  In the book, Burke—an Irish Whig who served in the British House of Commons from 1766 to 1794—proposed a social contract in sharp contrast to Rousseau’s, which was predicated on the notion of the “general will” of the people.  For Burke, “Society is a contract between the generations:  a partnership between those who are living, those who have lived before us, and those who have yet to be born.”  Placed in this context, climate change is a vital issue that we need to address, lest we break the covenant and bequeath a world less hospitable to life than the one we inherited from our forebears. As Barry Goldwater starkly put it in The Conscience of a Majority (1970), “It is our job to prevent that lush orb known as Earth…from turning into a bleak and barren, dirty brown planet.”  Reminding Republicans of their party’s numerous contributions to environmental protection—such as Goldwater’s support for the Clean Air Act, Richard Nixon’s establishment of the Environmental Protection Agency, and Ronald Reagan’s signing of the Montreal Protocol, which placed limits on ozone-depleting chemicals—could help Frum convince them that supporting future environmental measures is not an abandonment of conservative principles.  

 

Trumpocalypse will delight many liberals who revel in watching the internal divisions in the GOP.  It will also supply them with considerable ammunition that they can use in debates with their Trump-supporting friends and family members.  The book provides some sharp digs (describing the One America Network as “Fox-on-meth” was my favorite) and is clearly and concisely argued.  I have my doubts, however, that the book will do much to win over many of Frum’s fellow conservatives who make up the book’s intended audience.  Sadly, due to the carefully constructed information bubbles that Americans have created for themselves, it’s possible that very few self-proclaimed conservatives will even know the book was published.  

Sun, 09 Aug 2020 16:16:31 +0000 https://historynewsnetwork.org/article/176564 https://historynewsnetwork.org/article/176564 0
Weighing the Evidence when a President is Accused of Antisemitism

 

 

Mary Trump, the niece of President Donald Trump, says she has heard him utter “antisemitic slurs” in private. Michael D. Cohen, the president’s former attorney, reportedly will assert in his forthcoming book that Trump has made “antisemitic remarks against prominent Jewish people.” The president, however, denies doing so. How are we to weigh the veracity of the allegations?

Historians demand a high level of evidence when an accusation of bigotry is made against anybody, all the more so when the accusation is made against the leader of the Free World. In weighing the evidence that has so far been produced concerning Trump, one must consider the standards that historians have applied to the other three presidents who have been accused of antisemitism—Richard Nixon, Harry Truman, and Franklin Roosevelt.

The first question to ask is whether there is documentation that corroborates the claim, such as a tape recording or a diary.

We know that President Nixon made antisemitic remarks because he taped his Oval Office conversations. Until those tapes were made public, the accusation gained no traction. Both the New York Times and CBS-TV reported in May 1974—three months before his resignation—that Nixon had referred to some of his critics as "Jew boys," and had complained about "those Jews" in the U.S. Attorney's Office who were causing him difficulties. So long as the public could not see the evidence, Nixon and his defenders could deny it. 

In the years to follow, the tapes came out, confirming the earlier reports and revealing many more antisemitic slurs, including Nixon’s use of the word “kike.” Hearing such language in the president’s own voice made it impossible to deny his antisemitism.

We know of President Harry Truman’s antisemitism primarily from his diary. Discovered by accident in the Truman presidential library in Missouri in 2003, the previously unknown diary included acerbic comments about Jews that Truman wrote after Treasury Secretary Henry Morgenthau, Jr. telephoned him concerning the British decision to prevent the refugee ship Exodus from reaching Palestine.

The president wrote: "He'd no business, whatever to call me. The Jews have no sense of proportion nor do they have any judgement on world affairs….The Jews, I find are very, very selfish. They care not how many Estonians, Latvians, Finns, Poles, Yugoslavs or Greeks get murdered or mistreated as D[isplaced] P[ersons] as long as the Jews get special treatment. Yet when they have power, physical, financial or political neither Hitler nor Stalin has anything on them for cruelty or mistreatment to the under dog." 

Until the diary surfaced, few historians acknowledged Truman’s antisemitism. His 1918 letter referring to New York City as a "kike" town was chalked up to his immaturity. His 1935 letter referring to a poker player who "screamed like a Jewish merchant” was dismissed as an isolated incident. Truman’s 1946 remark about Jewish lobbyists, "Well, you can't satisfy these people….The Jews aren't going to write the history of the United States or my history” was excused as a momentary outburst in response to tension with Jewish lobbyists. But seeing explicit anti-Jewish language in President Truman’s own handwriting, in the diary, made it impossible to deny his antisemitism any longer.

Franklin D. Roosevelt did not keep a diary or tape-record his Oval Office conversations. What we know about his private sentiments concerning Jews derives from other types of documentation, including diaries kept by his cabinet members and transcripts of official conversations by note-takers who were not FDR’s political enemies.

Captain John McCrea, the president’s Naval Aide, was the note-taker at the 1943 Casablanca conference. He reported that FDR said the number of Jews allowed to enter various professions in Allied-liberated North Africa “should be definitely limited,” in order to avoid a repetition of the “understandable complaints which the Germans bore towards the Jews in Germany, namely, that while they represented a small part of the population, over fifty percent of the lawyers, doctors, school teachers, college professors, etc, in Germany, were Jews."

Harvard professor Samuel H. Cross, one of the foremost experts on Russian and other Slavic languages, was the translator and note-taker at the 1942 White House meeting between President Roosevelt, adviser Harry Hopkins, and Soviet Foreign Minister Vyacheslav Molotov. According to Cross’s record, Hopkins complained that the American Communist Party contained many “largely disgruntled, frustrated, ineffectual, and vociferous people--including a comparatively high proportion of distinctly unsympathetic Jews.” FDR replied that he himself was “far from anti-Semitic, as everyone knew, but there was a good deal in this point of view.” Molotov, Roosevelt, and Hopkins then apparently agreed that “there were Communists and Communists,” which they compared to what they called “the distinction between ‘Jews’ and ‘Kikes’,” all of which was “something that created inevitable difficulties.” 

In assessing President Roosevelt’s private views, historians naturally assign much greater weight to diaries and private memoranda that were authored by the president’s friends or political allies than to accusations made by enemies who had an axe to grind or a rival agenda to pursue.

FDR’s allies had plenty to say on this subject. Secretary of the Treasury Henry Morgenthau, Jr. wrote privately that President Roosevelt boasted about his role in imposing a quota on the admission of Jewish students to Harvard. Vice President Henry Wallace wrote in his diary that FDR spoke (in 1943) of the need to “spread the Jews thin” and not allow more than “four or five Jewish families” to settle in some regions, so they would fully assimilate. U.S. Senator Burton Wheeler, whom Roosevelt considered for vice president for his third term, wrote in a private memo that FDR boasted (in 1939) of having “no Jewish blood” in his veins. Rabbi Stephen S. Wise, an ardent supporter of the president, privately noted that Roosevelt told him (in 1938) that Polish Jews were to blame for antisemitism because they dominated the Polish economy.

Specificity is important. That’s why a particularly disturbing remark attributed to President Roosevelt has been widely ignored by historians, even though it came from a reliable and friendly source. Samuel Rosenman, FDR’s closest Jewish confidant and chief speechwriter, told a Jewish leader in October 1943 that, in response to a rally by rabbis outside the White House, the president “used language that morning while breakfasting which would have pleased Hitler himself.” But Rosenman never revealed precisely what it was that he heard Roosevelt say.

The lack of specifics—so far—in the allegations by Ms. Trump and Mr. Cohen will be cited by the president’s defenders as reason to doubt their veracity. Others will point to the accusers’ personal conflicts with the president as evidence to question their motives in raising the issue of antisemitism.

Certainly, accounts by embittered relatives need to be scrutinized with extra care. By contrast, a friendly relative presumably has no motive to smear the president. Curtis Roosevelt, a grandson of the president (and not known to be unfriendly toward his late grandfather) told FDR biographer Geoffrey Ward that he “recalled hearing the President tell mildly anti-Semitic stories in the White House…The protagonists were always Lower East Side Jews with heavy accents…" 

Opportunity is also important. Mary Trump’s innumerable interactions with Donald Trump, over the course of decades, certainly gave her ample opportunity to hear him express his private opinions. Likewise Michael Cohen, who served as Trump’s attorney and confidant from 2016 to 2018. The fact that not one, but two highly placed, unconnected individuals are making similar accusations adds credibility to their charge.

There is also the matter of Trump’s track record on the subject. How should historians judge remarks invoking a stereotype that seems to be complimentary? In a 1991 book, Trump was reported to have said, “The only kind of people I want counting my money are short guys that wear yarmulkes every day.” In 2015, he told a group of Jewish supporters, “I’m a negotiator like you folks.”

Perhaps not everyone would take offense at those kinds of comments. The danger of brushing off such remarks, however, is that it could be a short leap from perceiving Jews as “good with money” and “good at negotiating” to suspecting that rich Jews are using their wily negotiating skills to manipulate economies or governments. Still, the key word is “could.” Until he says it, he hasn’t said it.

When it comes to writing history, patience yields dividends. With the passage of time, more evidence emerges or existing evidence is discredited. Further down the road, archival collections open and previously classified documents shed new light on the topic. 

Pundits, of course, are not always inclined to wait patiently for incontrovertible evidence to accumulate. Their job is to express their opinions on the issues of the day, based on what they know on any given day. They may be comfortable making a case based on unnamed sources, perceived dog whistles, or other bits and pieces.

Assessing the accusations by Mary Trump and Michael Cohen of presidential antisemitism ultimately may depend on which standard of evidence is applied. Newspaper columnists whose goal is to influence public opinion sometimes proceed even with evidence that some might consider doubtful. A court of law, in a criminal case, requires an allegation to be proven beyond a reasonable doubt. A careful historian, weighing a charge this serious, will insist on evidence sufficient to prove the accusation beyond a shadow of a doubt.

Sun, 09 Aug 2020 16:16:31 +0000 https://historynewsnetwork.org/article/176571 https://historynewsnetwork.org/article/176571 0
Can Martin Luther King’s Spiritual Vision Kindle a New Progressivism?

Poor People's Campaign March, Lafayette Square, Washington D.C. 1968

 

 

 

Conservative columnist Ross Douthat’s recent “The Religious Roots of a New Progressive Era” indicates that a new progressive age is possible. Although he does not say much about the Progressive Era of 1890 to 1914, he does mention the religious Social Gospel movement of that period. And historian Jill Lepore has noted, “much that was vital in Progressivism grew out of Protestantism, and especially out of a movement known as the Social Gospel, adopted by almost all theological liberals and by a large number of theological conservatives, too.” Like then, so now, Douthat sees a “palpable spiritual dimension” to much of the “social justice activism, before and especially after the George Floyd killing.”

 

The columnist’s article does not mention Martin Luther King (MLK), but the social justice movement he led in the 1950s and 1960s demonstrated the greatest spiritually based progressivism between the end of the Progressive Era and today. And rekindling and updating King’s ideas offers us the best hope of creating a new progressive era in the 2020s.

 

Most of the 1890-1914 progressives did not attempt to overthrow or replace capitalism, but to constrain and supplement it in order to ensure that it served the public good. Their efforts reduced corruption in city governments, limited trusts and monopolies, expanded public services, and passed laws improving sanitation, education, housing, and workers’ rights and conditions, especially for women and children. Progressive efforts also helped pass pure food and drug laws and create the National Park Service.

 

After three consecutive post-World-War-I Republican presidents from 1920 to 1932, Franklin Roosevelt renewed the progressive spirit in the 1930s, but it was the Baptist minister MLK, leading the Southern Christian Leadership Conference (SCLC) beginning in the 1950s, who restored religious fervor to progressive causes.

 

It was his religious vision that propelled King, that and the injustice suffered by black people and others. This was true from December 1955 at his Dexter Avenue Baptist Church, when he helped start the Montgomery Bus Boycott in Alabama because of Rosa Parks’ arrest for defying segregated bus seating, until April 1968, when a sniper’s bullet ended his life on a motel balcony in Memphis.

 

In mid-1955 King had received his doctorate in systematic theology from Boston University. But even before then, while a student at Pennsylvania’s Crozer Theological Seminary, he had been strongly influenced by Gandhi’s ideas. As historian Stephen Oates writes in his biography of King, he considered Gandhi “probably the first person in history to lift the love ethic of Jesus above mere interaction between individuals to a powerful effective social force on a large scale.”

In 1957 MLK delivered a powerful sermon on “Loving Your Enemies.” In it he said, “Yes, it is love that will save our world and our civilization, love even for enemies.” He also spoke of the need “to organize mass non-violent resistance based on the principle of love.” Moreover, he analyzed and amplified in great detail various meanings of love, especially agape, (“understanding, creative, redemptive goodwill for all”). In February 1968, just two months before his assassination, he told a congregation at Ebenezer Baptist Church in Atlanta that he wanted to be remembered as someone who “tried to love and serve humanity.”

If we look at many of King’s speeches and activities their religious underpinnings jump out at us. As a 2013 article in the Jesuit magazine America pointed out, “His famous speech, ‘I Have A Dream’ [1963], was actually a sermon rooted in the words of the prophet . . . Isaiah who too had a dream of a world made new with God’s loving justice.” That same article quotes King as saying, “In the quiet recesses of my heart, I am fundamentally a clergyman, a Baptist preacher.”

The first progressive idea of MLK’s for today is that of racial justice. More than a half-century after his death, it is not just the continuing protests following the police killing of George Floyd that force us to confront this continuing injustice. It is also the Black-Lives-Matter movement, the disproportionate number of black and Hispanic deaths from COVID-19, the economic inequality facing these minorities, and their higher unemployment and incarceration rates. Moreover, the political polarization and racism being stoked by President Trump is another indication that we are still far from the promised land that King dreamt of in his “I Have a Dream” speech--“that day when all of God’s children, black men and white men, Jews and Gentiles, Protestants and Catholics, will be able to join hands.”

The second progressive idea of King’s is his stress on peace and non-violence both at home and abroad. This emphasis owed much to the Gandhian influence on him and to the belief, as he expressed it in a 1957 sermon, that mass non-violent resistance tactics were to be “based on the principle of love.” Although reflecting mainly his religious principles, his non-violent approach also had--and has--political implications. In February 1968, he warned that rioting could lead to a “right-wing takeover,” and indicated that riots just helped segregationist presidential candidate George Wallace. Today, in the face of some lawlessness following the killing of George Floyd, though not as much as in King’s day, some observers sound a similar warning--any lawless activities would aid Donald Trump.

Abroad, the main target of King's protests was the Vietnam War. In April 1967, in a speech at New York's Riverside Church, he displayed the kind of empathy that deeply religious people should show when he said the following about the Vietnamese people: "So they go, primarily women and children and the aged. They watch as we poison their water, as we kill a million acres of their crops. They must weep as the bulldozers roar through their areas preparing to destroy the precious trees. They wander into the hospitals with at least twenty casualties from American firepower for one Vietcong-inflicted injury. So far we may have killed a million of them, mostly children." And he warned that "a nation that continues year after year to spend more money on military defense than on programs of social uplift is approaching spiritual death."

Today progressives like Bernie Sanders are arguing that “we need to cut our [swollen Trumpian] military budget by 10 percent and invest that money in human needs.” 

In his Riverside Church speech, MLK also expressed a third important progressive idea--that our economic system needs major reform. Like the first Progressives, he believed our economy should serve the public good, not special interests. “We must,” he said, “rapidly begin the shift from a thing-oriented society to a person-oriented society. When machines and computers, profit motives and property rights, are considered more important than people, the giant triplets of racism, extreme materialism, and militarism are incapable of being conquered.”

In the year remaining of his short life, King worked most diligently on the Poor People’s Campaign, which aimed at pressuring the government and society to reform our economy, especially by reducing economic inequality. In his King biography, Oates writes that the “campaign was King’s ‘last, greatest dream’ because it sought ultimately to make capitalism reform itself, presumably with the power of redemptive love to win over economic oppressors, too, and heal antagonisms America must recognize.” Oates also mentions that MLK’s aims reflected the influence of theologian Walter “Rauschenbusch’s Social Gospel,” which, as Lepore noted, greatly influenced progressives of the 1890-1914 era.

 

King's three important progressive ideas--racial justice, non-violence, and economic reform--are all as relevant today as they were in his own day. But a fourth progressive cause, addressing climate change, was not perceived as an important problem in the 1960s. Yet from all of MLK's activities it is clear that if he were living today he would be in the forefront of those insisting upon actions to confront human-caused climate change.

 

Today, more than a half-century after King's death and in the midst of a terrible pandemic, the chances of enacting much of his progressive agenda seem better than ever. Just as progressive New Deal reforms arose as a response to the Great Depression, so too today new progressive actions can emerge from our pandemic and national disgust with Trumpism. And these actions, like King's, will be strengthened if they proceed from a strong spiritual base.

 

In 2016, presidential candidate Bernie Sanders gave a talk in Rome entitled “The Urgency of a Moral Economy: Reflections on the 25th Anniversary of Centesimus Annus.” The anniversary he referred to was that of the release of a Pope John Paul II encyclical, and Sanders spoke before a conference of The Pontifical Academy of Social Sciences. He noted that the Catholic Church’s “social teachings, stretching back to the first modern encyclical about the industrial economy, Rerum Novarum in 1891, to Centesimus Annus, to Pope Francis’s inspiring [environmental] encyclical . . . have grappled with the challenges of the market economy. There are few places in modern thought that rival the depth and insight of the Church’s moral teachings on the market economy.”

Although Sanders was not successful in either his 2016 or 2020 run for the presidency, his progressive ideas, often based on spiritual values, have continued to animate the Democratic Party. In early July, a Biden-Sanders task force released proposals indicating Sanders’ continuing influence on the person now favored to become our next president. 

 

Although Biden is not considered as progressive as Sanders, his background also suggests that he could further advance some of MLK's progressive ideas. Biden's popularity with black voters was a major reason for his securing the Democratic nomination, and he has said that his "two political heroes were MLK and Bobby Kennedy," both assassinated when he was a senior in college.

 

As with King and Sanders, Biden's views are strongly influenced by spiritual values. In a 2015 interview, he praised Pope Francis, saying "he's the embodiment of Catholic social doctrine that I was raised with. The idea that everyone's entitled to dignity, that the poor should be given special preference, that you have an obligation to reach out and be inclusive." On Francis' encyclical on the environment and climate change, which many conservatives criticized, Biden said in the same interview, "The way I read it—and I read it—it was an invitation, almost a demand, that a dialogue begin internationally to deal with what is the single most consequential problem and issue facing humanity right now."

After Franklin Roosevelt had been elected president in 1932, but before he took office in March 1933, Arthur Schlesinger, Jr. tells us, a couple of "old Wilsonians . . . became so fearful of Roosevelt's apparent conservatism" that they urged an FDR adviser to persuade "the President-elect to be more progressive." Perhaps if Biden is elected, he will follow FDR's example--not only succeeding an increasingly unpopular Republican president, but also pleasantly surprising progressives who thought him too conservative.

He has already surprised Bernie Sanders, who has been pleased with the results of the Biden-Sanders task force. In early July, on MSNBC, Sanders stated, "I think the compromise that they came up with, if implemented, will make Biden the most progressive president since FDR."

https://historynewsnetwork.org/article/176561
Don't Tear Down the Wrong Monuments; Don't Attack Every Holiday

The United States defeated the Confederacy's Army of Northern Virginia at Gettysburg on July 3, 1863. That same evening, General John Pemberton agreed to surrender the Confederate army holding Vicksburg to Ulysses Grant. The next day, when the news of both Union victories began to spread through the nation, was surely the most memorable Independence Day in American history after the first one, four score and seven years earlier. Pemberton's men stacked their arms and went home. Lee's men withdrew from Gettysburg and made their way south.

 

Historians have argued ever since over which victory was more important.

 

Both battlefields are now under the care of the National Park Service. This past Fourth of July, few people visited either, but when travel becomes easier, if you're within range, I suggest you visit whichever park is closer. Both parks are beautiful, especially in mid-summer. However, do not ask the question several NPS rangers submitted to me as their nomination for the dumbest query ever received from a visitor: "How come they fought so many Civil War battles in parks?" Instead, if you're at Vicksburg, suggest to the ranger that Gettysburg was the more important victory; if at Gettysburg, suggest Vicksburg. Probably you and your fellow tourists will be informed as well as entertained by the response. 

 

Perhaps the Mississippi victory was more telling, for several reasons. Vicksburg had been called "the Gibraltar of the Confederacy." After Richmond, the Confederate capital, it was surely the most strategic single place in the South, because it simultaneously blocked United States shipping down the Mississippi River and provided the Confederacy with its only secure link to the trans-Mississippi West. Vicksburg's capture led to the capitulation of the last Confederate stronghold on the Mississippi, Port Hudson, Louisiana, 130 miles south, five days later. This reopened the Mississippi River, an important benefit to farmers in its vast watershed, stretching from central Pennsylvania to northwestern Montana. Abraham Lincoln announced the victory with the famous phrase, "The Father of Waters again goes unvexed to the sea." In the wake of the victory, thousands of African Americans made their way to Vicksburg to be free, get legally married, help out the Union cause, make a buck, do the laundry and gather the firewood, and enlist in the United States Army. No longer was slavery secure in Mississippi, Arkansas, or Louisiana. Many whites from these states and west Tennessee also now joined the Union cause. 

 

But perhaps the Pennsylvania victory was more important. It taught the Army of the Potomac that Robert E. Lee and his forces were vincible. Freeman Cleaves, biographer of General George Gordon Meade, victor at Gettysburg, quotes a former Union corps commander, "I did not believe the enemy could be whipped." Lee's losses forced his army to a defensive posture for the rest of the war. The impact of the victory on Northern morale was profound. And of course it led to the immortal words of the Gettysburg Address. 

 

If you go to Vicksburg on the Fourth, be sure to visit the Illinois monument, a small marble pantheon that somehow stays cool even on the hottest July day. In Gettysburg, don't fail to take in the South Carolina monument. It claims, "Abiding faith in the sacredness of states rights provided their creed here" — a statement true about 1965, when it went up, but false about 1863. After all, in 1860, South Carolinians were perfectly clear about why they were seceding, and "states rights" had nothing to do with it. South Carolina was against states’ rights. South Carolina found no fault with the federal government when it said why it seceded, on Christmas Eve, 1860. On the contrary, its leaders found fault with Northern states and the rights they were trying to assert. These amounted to, according to South Carolina, “an increasing hostility on the part of the non-slaveholding States to the institution of slavery.” At both parks, come to your own conclusion about how the National Park Service is meeting its 1999 Congressional mandate "to recognize and include ... the unique role that the institution of slavery played in causing the Civil War."

 

The twin victories have also influenced how Americans have celebrated the Fourth of July since 1863. Living in Mississippi a century later taught me about the muted racial politics of the Fourth of July. African Americans celebrated this holiday with big family barbecues, speeches, and public gatherings in segregated black parks. Even white supremacists could hardly deny blacks the occasion to hold forth in segregated settings, since African Americans were only showing their patriotism, not holding some kind of fearsome “Black Power” rally. Both sides knew these gatherings had an edge, however. Black speakers did not fail to identify the Union victories with the anti-slavery cause and the still-unfinished removal of the vestiges of slavery from American life. This coded identification of the Fourth with freedom was the sweeter because in the 1960s, die-hard white Mississippians did not want to celebrate the Fourth at all, because they were still mourning the surrender at Vicksburg. We in the BLM movement can take a cue from the past. We can be patriotic on July 4 without being nationalistic. As Frederick Douglass put it, by my memory, “I call him a true patriot who rebukes his country for its sins, and does not excuse them.” And true patriots can also take pleasure from their country’s victories against a proslavery insurrection.

 

Muted racial politics also underlie the continuing changes on the landscape at both locations. In 1998 Gettysburg finally dedicated a new statue of James Longstreet, Lee's second in command. For more than a century, neo-Confederates had vilified Longstreet as responsible for the defeat. He did try to talk Lee out of the attack, deeming the U.S. position too strong, and his forces did take a long time getting into place.

 

James Longstreet had to wait to appear on the Gettysburg landscape until the United States became less racist.

Hopefully BLM protesters are informed enough to know not to tag or topple this Confederate monument.

 

But the criticisms of Longstreet really stemmed from his actions after the Civil War. During Reconstruction he agreed that African Americans should have full civil rights and commanded black troops against an attempted white supremacist overthrow of the interracial Republican government of Louisiana. Ironically, ideological currents set into motion by the Civil Rights movement help explain why Gettysburg can now honor Longstreet. No longer do we consider it wrong to be in favor of equal rights for all, as we did during the Nadir. 

 

When I lived in Mississippi in the 1960s and '70s, bad history plagued how Grant’s campaign was remembered on the landscape. For example, a state historical marker stood a few miles south of Vicksburg at Rocky Springs:

 

Union Army Passes Rocky Springs

Upon the occupation of Willow Springs on May 3, 1863, Union Gen. J. A. McClernand sent patrols up the Jackson road.

These groups rode through Rocky Springs, where they encountered no resistance beyond the icy stares of the people who gathered at the side of the road to watch.

 

Actually, the area was then and remains today overwhelmingly black. "The people," mostly African Americans, supplied the patrols with food, showed them the best roads to Jackson, and told them exactly where the Confederates were. Indeed, support from the African American infrastructure made Grant's Vicksburg campaign possible. 

 

In about 1998, Mississippi took down this counterfactual marker. Or maybe a vigilante stole it — no one claims to know. Either way, the landscape benefits from its removal. Six years later, with funding from the state and from Vicksburg, a monument to the roles African Americans played in support of Grant's campaign went up at Vicksburg. It shows a wounded U.S.C.T. (United States Colored Troops) soldier being helped to safety by another member of the U.S.C.T. and by a black civilian.

 

Now, if we can just fix that pesky South Carolina monument... 

 

https://historynewsnetwork.org/blog/154378
Life during Wartime 515

https://historynewsnetwork.org/blog/154380
Roundup Top Ten for July 24, 2020

Trump Has Brought America’s Dirty Wars Home

by Stuart Schrader

The history of the Office of Public Safety, created to support counterinsurgency around the globe during the Cold War, demonstrates that Trump’s ardor for authoritarian force has long-standing, homegrown roots.

 

Reimagining America’s Memorial Landscape

by David W. Blight

As we are witnessing, the problem of the 21st century in this country is some agonizingly enduring combination of legacies bleeding forward from conquest, slavery and color lines. Freedom in its infinite meanings remains humanity’s most universal aspiration. How America reimagines its memorial landscape may matter to the whole world.

 

 

Historic Levels, but Not the Good Kind

by Heather Cox Richardson

Warren G. Harding created an atmosphere in which the point of government was not to help ordinary Americans, but to see how much leaders could get out of it.

 

 

How To Interpret Historical Analogies

by Moshik Temkin

Historical analogies, done in good faith, can make crucial points about the present and help to clarify where we stand on moral and political issues. The problem begins when we begin to substitute historical analogies for historical analysis – or, even more problematically, when we come to believe that history ‘repeats itself’.

 

 

John Lewis’ Fight for Equality Was Never Limited to Just the United States

by Keisha N. Blain

By linking national concerns to global ones, John Lewis compelled others to see that the problems of racism and white supremacy were not contained within U.S. borders.

 

 

Trump’s Push To Skew The Census Builds On A Long History Of Politicizing The Count

by Paul Schor

The Trump administration’s effort not to count undocumented immigrants is nothing less than an effort to redistribute political power, one that calls to mind a particularly fierce battle over the 1920 census that highlights the role of these broader fights.

 

 

J.F.K.’s “Profiles in Courage” Has a Racism Problem. What Should We Do About It?

by Nicholas Lemann

The Senators chosen by John F. Kennedy as "Profiles in Courage" would not fare well if their actions were evaluated today. 

 

 

History Shows That We Can Solve The Child-Care Crisis — If We Want To

by Lisa Levenstein

Today, in nearly two-thirds of households with children, the parents are employed. In 3 out of 5 states, the cost of day care for one infant is more than tuition and fees at four-year public universities.

 

 

The Strange Defeat of the United States

by Robert Zaretsky

Eighty years later, Bloch’s investigation casts useful light for those historians who, gripped by the white heat of their own moment, may seek to understand the once unthinkable defeat of the United States in its “war” against the new coronavirus.

 

 

Tearing Down Black America

by Brent Cebul

Ensuring that Black Lives Matter doesn't just require police reform. The history of urban renewal shows that governments have worked to dismantle and destabilize Black communities in the name of progress.

 

https://historynewsnetwork.org/article/176553