A people without the knowledge of their past history, origin and culture is like a
tree without roots
Welcome to History
History You Should Know
*ON ITS HUNDREDTH BIRTHDAY IN 1959, EDWARD TELLER WARNED THE OIL INDUSTRY ABOUT GLOBAL WARMING (ARTICLE BELOW)
*WHITE NEWSPAPER JOURNALISTS EXPLOITED RACIAL DIVISIONS TO HELP BUILD THE GOP’S SOUTHERN FIREWALL IN THE 1960S (ARTICLE BELOW)
*THE TWO MEN WHO ALMOST DERAILED NEW ENGLAND’S FIRST COLONIES (ARTICLE BELOW)
*UNITED DAUGHTERS OF THE CONFEDERACY LED A CAMPAIGN TO BUILD MONUMENTS AND WHITEWASH HISTORY (ARTICLE BELOW)
*HOW THE US GOVERNMENT CREATED AND CODDLED THE GUN INDUSTRY (ARTICLE BELOW)
*A HISTORIAN DESTROYS RACISTS’ FAVORITE MYTHS ABOUT THE VIKINGS (ARTICLE BELOW)
*THE MYTH OF THE KINDLY GENERAL LEE: THE LEGEND OF THE CONFEDERATE LEADER’S HEROISM AND DECENCY IS BASED IN THE FICTION OF A PERSON WHO NEVER EXISTED (EXCERPT BELOW)
*DONALD TRUMP, JEWS AND THE MYTH OF RACE: HOW JEWS GRADUALLY BECAME “WHITE,” AND HOW THAT CHANGED AMERICA (ARTICLE BELOW)
*THE REAL IRISH-AMERICAN STORY NOT TAUGHT IN SCHOOLS (ARTICLE BELOW)
*ROE V. WADE: ABORTION WAS OUTLAWED SO DOCTORS COULD MAKE MONEY (ARTICLE BELOW)
*CHOMSKY ON AMERICA'S UGLY HISTORY: FDR WAS FASCIST-FRIENDLY BEFORE WWII (ARTICLE BELOW)
*HISTORICAL SNAPSHOT OF CAPITALISM AND IMPERIALISM (ARTICLE BELOW)
*WHY ALLEN DULLES KILLED THE KENNEDYS (ARTICLE BELOW)
*THE REAL (DESPICABLE) THOMAS JEFFERSON (ARTICLE BELOW)
*THE SECRET HISTORY OF HITLER’S ‘BLACK HOLOCAUST’ (ARTICLE BELOW)
*THE THIRTEENTH AMENDMENT AND SLAVERY IN A GLOBAL ECONOMY (ARTICLE BELOW)
On its hundredth birthday in 1959, Edward Teller warned the oil industry about global warming
Somebody cut the cake – new documents reveal that American oil writ large was warned of global warming at its 100th birthday party.
Benjamin Franta, The Guardian, Mon 1 Jan 2018 06.00 EST
...The nuclear weapons physicist Edward Teller had, by 1959, become ostracized by the scientific community for betraying his colleague J. Robert Oppenheimer, but he retained the embrace of industry and government. Teller’s task that November fourth was to address the crowd on “energy patterns of the future,” and his words carried an unexpected warning:
Ladies and gentlemen, I am to talk to you about energy in the future. I will start by telling you why I believe that the energy resources of the past must be supplemented. First of all, these energy resources will run short as we use more and more of the fossil fuels. But I would [...] like to mention another reason why we probably have to look for additional fuel supplies. And this, strangely, is the question of contaminating the atmosphere. [....] Whenever you burn conventional fuel, you create carbon dioxide. [....] The carbon dioxide is invisible, it is transparent, you can’t smell it, it is not dangerous to health, so why should one worry about it?
Carbon dioxide has a strange property. It transmits visible light but it absorbs the infrared radiation which is emitted from the earth. Its presence in the atmosphere causes a greenhouse effect [....] It has been calculated that a temperature rise corresponding to a 10 per cent increase in carbon dioxide will be sufficient to melt the icecap and submerge New York. All the coastal cities would be covered, and since a considerable percentage of the human race lives in coastal regions, I think that this chemical contamination is more serious than most people tend to believe.
How, precisely, Mr. Dunlop and the rest of the audience reacted is unknown, but it’s hard to imagine this being welcome news. After his talk, Teller was asked to “summarize briefly the danger from increased carbon dioxide content in the atmosphere in this century.” The physicist, as if considering a numerical estimation problem, responded:
At present the carbon dioxide in the atmosphere has risen by 2 per cent over normal. By 1970, it will be perhaps 4 per cent, by 1980, 8 per cent, by 1990, 16 per cent [about 360 parts per million, by Teller’s accounting], if we keep on with our exponential rise in the use of purely conventional fuels. By that time, there will be a serious additional impediment for the radiation leaving the earth. Our planet will get a little warmer. It is hard to say whether it will be 2 degrees Fahrenheit or only one or 5.
But when the temperature does rise by a few degrees over the whole globe, there is a possibility that the icecaps will start melting and the level of the oceans will begin to rise. Well, I don’t know whether they will cover the Empire State Building or not, but anyone can calculate it by looking at the map and noting that the icecaps over Greenland and over Antarctica are perhaps five thousand feet thick.
And so, at its hundredth birthday party, American oil was warned of its civilization-destroying potential.
Talk about a buzzkill.
How did the petroleum industry respond? Eight years later, on a cold, clear day in March, Robert Dunlop walked the halls of the U.S. Congress. The 1967 oil embargo was weeks away, and the Senate was investigating the potential of electric vehicles. Dunlop, testifying now as the Chairman of the Board of the American Petroleum Institute, posed the question, “tomorrow’s car: electric or gasoline powered?” His preferred answer was the latter:
We in the petroleum industry are convinced that by the time a practical electric car can be mass-produced and marketed, it will not enjoy any meaningful advantage from an air pollution standpoint. Emissions from internal-combustion engines will have long since been controlled.
Dunlop went on to describe progress in controlling carbon monoxide, nitrogen oxide, and hydrocarbon emissions from automobiles. Absent from his list? The pollutant he had been warned of years before: carbon dioxide.
We might surmise that the odorless gas simply passed under Robert Dunlop’s nose unnoticed. But less than a year later, the American Petroleum Institute quietly received a report on air pollution it had commissioned from the Stanford Research Institute, and its warning on carbon dioxide was direct:
Significant temperature changes are almost certain to occur by the year 2000, and these could bring about climatic changes. [...] there seems to be no doubt that the potential damage to our environment could be severe. [...] pollutants which we generally ignore because they have little local effect, CO2 and submicron particles, may be the cause of serious world-wide environmental changes.
Thus, by 1968, American oil held in its hands yet another notice of its products’ world-altering side effects, one affirming that global warming was not just cause for research and concern, but a reality needing corrective action: “Past and present studies of CO2 are detailed,” the Stanford Research Institute advised. “What is lacking, however, is [...] work toward systems in which CO2 emissions would be brought under control.”
This early history illuminates the American petroleum industry’s long-running awareness of the planetary warming caused by its products. Teller’s warning, revealed in documentation I found while searching archives, is another brick in a growing wall of evidence.
In the closing days of those optimistic 1950s, Robert Dunlop may have been one of the first oilmen to be warned of the tragedy now looming before us. By the time he departed this world in 1995, the American Petroleum Institute he once led was denying the climate science it had been informed of decades before, attacking the Intergovernmental Panel on Climate Change, and fighting climate policies wherever they arose.
This is a history of choices made, paths not taken, and the fall from grace of one of the greatest enterprises – oil, the “prime mover” – ever to tread the earth. Whether it’s also a history of redemption, however partial, remains to be seen.
American oil’s awareness of global warming – and its conspiracy of silence, deceit, and obstruction – goes further than any one company. It extends beyond (though includes) ExxonMobil. The industry is implicated to its core by the history of its largest representative, the American Petroleum Institute.
It is now too late to stop a great deal of change to our planet’s climate and its global payload of disease, destruction, and death. But we can fight to halt climate change as quickly as possible, and we can uncover the history of how we got here. There are lessons to be learned, and there is justice to be served.
White newspaper journalists exploited racial divisions to help build the GOP’s southern firewall in the 1960s
The Conversation, 27 Nov 2017 at 10:59 ET
From Raw Story: Conservatives who dislike Donald Trump like to blame the president and his Breitbart cheering section for the racial demagoguery they see in today’s Republican Party.
For example, New York Times columnist David Brooks lamented the GOP’s transformation over the past decade from a party that had always been decent on racial issues to one that now embraced “white identity politics.”
I respect Brooks and read him regularly, but on this issue he and his ideological allies have a blind spot. They ignore overwhelming evidence showing the central role racial politics played in the Republican Party’s rise to power after the civil rights movement.
In 1962, Republican William D. Workman Jr. launched a long-shot bid for a U.S. Senate seat in South Carolina. For more than eight decades, the Democratic Party had been the only party that mattered in state politics. To most white voters, it represented the overthrow of Reconstruction and the restoration of white political rule.
Yet Workman nearly defeated a two-term Democratic incumbent. It was a turning point that signaled the GOP’s reemergence as a competitive force in the region.
The nation’s top political reporter, James Reston of The New York Times, traveled to South Carolina to examine this new GOP in the Deep South. He called Workman a “journalistic Goldwater Republican.” It might seem like an odd description, but it fit the candidate perfectly.
In the late 1950s, Workman and his boss, Charleston News and Courier editor Thomas R. Waring Jr., were staunch segregationists who had found a political ally in William F. Buckley Jr., conservative editor of a new journal, National Review.

Before he joined the Senate race, Workman had been the state’s best-known political reporter. He had also been working secretly with GOP allies to build the party in South Carolina and rally support for Arizona Sen. Barry Goldwater, leader of the GOP’s rising conservative wing.
As political scientist Joseph E. Lowndes notes, National Review was the first conservative journal to try to link the southern opposition to enforced integration with the small-government argument that was central to economic conservatism.
In 1957, Buckley delivered the magazine’s most forthright overture to southern segregationists. In an editorial on black voting rights, Buckley called whites “the advanced race” in the South and said whites, therefore, should be allowed to “take such measures as necessary to prevail, politically and culturally, in areas in which it does not predominate numerically.” In Buckley’s view, “the claims of civilization supersede those of universal suffrage.”

In the News and Courier, Waring described the editorial as “brave words,” but Buckley’s argument created a firestorm within the conservative movement. His brother-in-law, L. Brent Bozell, condemned the editorial in the pages of Buckley’s own magazine. Bozell said Buckley’s unconstitutional appeal to white supremacy threatened to do “grave hurt to the conservative movement.”
Workman and the ‘great white switch’
Buckley and Goldwater began avoiding such overtly racist appeals, but it took longer for their southern allies to temper their rhetoric and master the use of racially coded language. In his 1960 book, “The Case for the South,” Workman wrote that African-Americans remained “a white man’s burden” – a “violent” and “indolent” people who needed guidance from their white superiors.
Two years later, Workman rallied local and national Republicans to his banner in the senate race. At the time, the tiny GOP in South Carolina was run by conservative businessmen who had migrated to the Palmetto State from the North. They embraced Goldwater’s call for lower taxes, weaker unions and smaller government, but the Republicans lacked credibility with white voters who cared mostly about segregation and white political rule.
Workman’s 1962 campaign changed that. He united the state’s racial and economic conservatives, a political marriage that would fuel the party’s dramatic growth in South Carolina and the nation over the next two decades.
As historian Dan T. Carter contends, “even though the streams of racial and economic conservatism have sometimes flowed in separate channels, they ultimately joined in the political coalition that reshaped American politics” in the years after the civil rights movement.
Buoyed by the surprising strength of Workman’s campaign, South Carolina’s junior senator, Strom Thurmond, abandoned the Democratic Party in 1964 and joined the Republicans. Sixteen years earlier, Thurmond had left the Democrats briefly to run for president as a “Dixiecrat” on the States’ Rights ticket. His surprise announcement in 1964 signaled the start of what political scientists call the “great white switch.”

Workman was thrilled by the letters he received from northern conservatives who embraced his campaign. One of those was William Loeb, editor of New Hampshire’s Manchester Union-Leader, who told Workman the GOP should become “the white man’s party.” Loeb said his proposal would “leave Democrats with the Negro vote,” but give the Republicans the white vote and “white people, thank God, are still in the majority.”
As more African-American voters joined the Democratic Party, southern whites moved to the GOP. William Loeb was getting his wish.
Reagan and the Neshoba County Fair
By the late 1970s, Ronald Reagan had united former segregationists with economic and social conservatives to create a political movement that would dominate American politics. Reagan built this coalition in part through the use of coded rhetoric tying race to such issues as crime, welfare and government spending.
In 1980, he launched his fall presidential campaign at the Neshoba County Fair near Philadelphia, Mississippi. Sixteen years earlier, three civil rights activists had been murdered in Neshoba County and buried in an earthen dam. Reagan used his visit to declare his support for states’ rights – a phrase indelibly linked to Thurmond’s Dixiecrat campaign of 1948.
In a 2007 column, Brooks angrily disputed any notion that Reagan’s Neshoba County trip was a dog-whistle appeal to white racial resentment in the post-civil rights era. He called such claims a “slur” and a “calumny.” Reagan’s campaign was notoriously disorganized, Brooks argued. The candidate had planned to launch his general election campaign discussing inner-city problems with the Urban League, not preaching states’ rights in Neshoba County. Maybe he’s right. Perhaps it was just a scheduling mishap. But on the question of race and politics, I’m more inclined to believe Lee Atwater, the late political strategist from South Carolina.
Atwater served as White House political director under Reagan and chief strategist for President George H. W. Bush’s 1988 campaign. In a 1981 interview with political scientist Alexander Lamis, Atwater explained the evolution of coded racial language in political campaigns in the South.
“You start out in 1954 by saying, ‘nigger, nigger, nigger.’ By 1968 you can’t say ‘nigger’ – that hurts you, backfires,” Atwater said. “So you say stuff like forced busing, states’ rights, and all that stuff, and you’re getting so abstract. Now, you’re talking about cutting taxes, and all these things you’re talking about are totally economic things and a byproduct of them is, blacks get hurt worse than whites…”
This is not to say that all Republicans are racist, or that economic, social and cultural issues played no role in the rise of the conservative GOP. But it is clear that racial resentment mattered to voters – a lot – and the Republican Party found ways to stoke that animus for political gain.
Donald Trump’s racial appeals may be more transparent. But in him, the legacy of conservative journalists like William Workman lives on.
The two men who almost derailed New England’s first colonies
Behind the scheme that tried to undermine the Pilgrims
PETER C. MANCALL, THE CONVERSATION 11.23.2017•2:58 AM
From Salon: The story is simple enough. In 1620 a group of English Protestant dissenters known as Pilgrims arrived in what’s now Massachusetts to establish a settlement they called New Plymouth. The first winter was brutal, but by the following year they’d learned how to survive the unforgiving environment. When the harvest season of 1621 arrived, the Pilgrims gathered together with local Wampanoag Indians for a three-day feast, during which they may have eaten turkey.
The peace wouldn’t last for long, and much of America’s early Colonial history centers on the eventual conflicts between the colonists and the Native Americans. But the traditional version ignores the real danger that emerged from two Englishmen — Thomas Morton and Ferdinando Gorges — who sought to undermine the legal basis for Puritan settlements throughout New England.
Over 200 years later, when President Abraham Lincoln declared the first federal day of Thanksgiving in the midst of the Civil War, it was a good moment for Americans to recall a time when disparate peoples could reach across the cultural divide. He was either unaware of — or conveniently ignored — the English schemers who tried to chase those Pilgrims and Puritans away.
The Puritans followed the Pilgrims, founding the Massachusetts Bay colony in 1630. There, John Winthrop, who became the governor, wrote that the English wanted to create a “city upon a hill.” The line came from Matthew 5:14, an early example of how these English travelers viewed their actions through a biblical lens.
The growing numbers of English migrants strained the local resources of the Algonquian-speaking peoples. These locals, collectively known as Ninnimissinuok, had already suffered a terrible epidemic in the late 1610s, possibly caused by leptospirosis, a bacterial disease whose severe form is known as Weil syndrome, that might have reduced their population by 90 percent.
English hostility against Natives has taken a central place in historians’ version of the origins of New England. But though it is a powerful and tragic narrative, indigenous Americans did not pose the greatest hazard to the survival of the colonists.
A new threat emerges
Just when the Pilgrims were trying to establish New Plymouth, an English war veteran named Ferdinando Gorges claimed that he and a group of investors possessed the only legitimate patent to create a colony in the region.
Gorges had gained notoriety after battling the Spanish in the Netherlands and commanding the defense of the port city of Plymouth, on the southwest coast of England. Afterwards, Gorges was in search of a new opportunity. It arrived in 1605 when the English sea captain George Waymouth returned to England after a voyage that had taken him to the coast of modern Maine and back. Along with news about the coastline and its resources, Waymouth brought back five captive Eastern Abenakis, members of the indigenous nation that claimed territory between the Penobscot and Saco rivers in Maine. Waymouth left three of them with Gorges. Soon they learned English and told Gorges about their homeland, sparking Gorges’ interest in North America.
Gorges, with a group of investors, financially backed an expedition to the coast of Maine in 1607, though the colony they hoped to launch there never succeeded.
These financiers believed that they possessed a claim to all territory stretching from 40 to 48 degrees north latitude — a region that stretches from modern-day Philadelphia to St. John’s, Newfoundland — a point they emphasized in their charter. Gorges remained among the venture’s directors.
As luck would have it, Gorges soon met Thomas Morton, a man with legal training and a troubled past who had briefly visited Plymouth Plantation soon after the first English arrived. Morton would join forces with Gorges in his attempt to undermine the legal basis for the earliest English colonies in New England.
Morton and the Pilgrims despised one another. By 1626 he had established a trading post at a place called Merrymount, on the site of modern day Quincy, Massachusetts. There, he entertained local Ninnimissinuok, offering them alcohol and guns. He also imported an English folk custom by erecting an 80-foot pole for them to dance around.
The Pilgrims, viewing Morton as a threat because of his close relations with the locals and the fact that he had armed them, exiled him to England in 1628.
To the disappointment of the Pilgrims, Morton faced no legal action back in England. Instead, he returned to New England in 1629, settling in Massachusetts just as Winthrop and his allies were trying to launch their new colony. Soon enough, Morton angered the rulers of this Puritan settlement, claiming that the way they organized their affairs flew in the face of the idea that they should follow all English laws. The Puritans, looking for an excuse to send him away, claimed that he had abused local natives (a charge that was likely baseless). Nonetheless, they burned Morton’s house to the ground and shipped him back to England.
After a short stint in jail, Morton was free again, and it was around this time that he began to conspire with Gorges. During the mid-1630s Gorges pushed English authorities to recognize his claim to New England. His argument pivoted on testimony provided by Morton, who claimed that the Puritans had violated proper religious and governing practices. Morton would soon write that the Puritans refused to use the Book of Common Prayer, a standard text employed by the Church of England, and that the Puritans closed their eyes when they prayed “because they thinke themselves so perfect in the highe way to heaven that they can find it blindfould.”
In a letter he wrote to a confidant, Morton claimed that at a hearing in London, the Massachusetts patent “was declared, for manifest abuses there discovered, to be void.” In 1637, such evidence convinced King Charles I to make Gorges the royal governor of Massachusetts.
But the king never followed through. Nor did the English bring the leaders of the colony to London for a trial. The Puritans maintained their charter, but Morton and Gorges refused to back down.
A quick compromise
In 1637, Morton published a book titled “New English Canaan.” In it, he accused the English of abusing and murdering Native Americans and also of violating widely accepted Protestant religious practices. (Today there are around 20 known copies of the original.)
With good reason, the Puritans feared Gorges and Morton. To make peace, they relented and in 1639 Gorges received the patent to modern-day Maine, which had been part of the original grant to the Massachusetts Bay Company. By then, Gorges’ agents had already begun to establish a plantation in Maine. That settlement ended the legal challenge to the existing New England colonies, which then prospered, free of English interference, for decades.
But Morton wasn’t quite done. He returned to Massachusetts, possibly as an agent for Gorges or perhaps because he hoped that the situation had improved. When he arrived, local authorities, having seen his book, exiled him again. He retreated north, to Gorges’ planned colony. Winthrop wrote that he lived there “poor and despised.”
By 1644 Morton was dead, along with the scariest threat the Pilgrims and Puritans had faced.
United Daughters of the Confederacy Led A Campaign to Build Monuments and Whitewash History By David Love - November 1, 2017
The United Daughters of the Confederacy was responsible for the creation of the Stone Mountain Confederate Monument, known as the South’s version of Mount Rushmore (Photo: Public Domain Pictures)
From Atlanta Black Star: More than 150 years since the end of the Civil War, the United States has been unable to come to terms with a war that claimed 750,000 lives and was fought over the cause of white supremacy and the right of white men to continue to own Black people as chattel. Today, many white Americans have a positive attitude toward the slaveholding South, the Confederate cause and its symbolism.
This mindset was on display this week when White House Chief of Staff John Kelly claimed a “lack of ability to compromise led to the Civil War” and called the removal of Confederate monuments a “dangerous” scrubbing of history. Speaking with Fox News’ Laura Ingraham on her new show, “The Ingraham Angle,” Kelly also called Confederate General Robert E. Lee — who abused his slaves and became a soldier for white supremacy — “an honorable man” who “gave up his country to fight for his state.”
Kelly’s remarks come as GenForward released a poll on millennial racial attitudes. Among its findings, the poll found that while majorities of Black (83 percent), Latinx (65 percent) and Asian-American (71 percent) millennials believe the Confederate flag is a symbol of racism, a majority of whites ages 18 to 34 (55 percent) believe the flag is a symbol of Southern pride, and 62 percent oppose removing Confederate statues and symbols.
Other surveys have shown majorities believing the war was fought over a vague notion of states’ rights, with substantial minorities attributing the war to taxes or tariffs as opposed to slavery, the true answer. Over the years, Confederate sympathizers have distorted and rewritten the history of the Civil War to make the Confederates appear heroic and their cause glorious and noble. This project has been years in the making, through the efforts of groups such as United Daughters of the Confederacy (UDC), which lobbied for the construction of Confederate monuments and the banning of textbooks that were hostile to the role of the Southern secessionists in the Civil War.
Examining the original documents of the Southern states that seceded from the Union, it is clear that slavery was their motivation, and white supremacy their foundation. For example, the Constitution of the Confederate States, which mentions the word slave or its variations ten times, stated:
In all such territory the institution of negro slavery, as it now exists in the Confederate States, shall be recognized and protected by Congress and by the Territorial government; and the inhabitants of the several Confederate States and Territories shall have the right to take to such Territory any slaves lawfully held by them in any of the States or Territories of the Confederate States.
Among us the poor white laborer is respected as an equal. His family is treated with kindness, consideration and respect. He does not belong to the menial class. The negro is in no sense of the term his equal. He feels and knows this. He belongs to the only true aristocracy, the race of white men.
For the last ten years we have had numerous and serious causes of complaint against our non-slave-holding confederate States with reference to the subject of African slavery. They have endeavored to weaken our security, to disturb our domestic peace and tranquility, and persistently refused to comply with their express constitutional obligations to us in reference to that property, and by the use of their power in the Federal Government have striven to deprive us of an equal enjoyment of the common Territories of the Republic.
Our position is thoroughly identified with the institution of slavery — the greatest material interest of the world. Its labor supplies the product which constitutes by far the largest and most important portions of commerce of the earth. These products are peculiar to the climate verging on the tropical regions, and by an imperious law of nature, none but the black race can bear exposure to the tropical sun. These products have become necessities of the world, and a blow at slavery is a blow at commerce and civilization.
Texas spoke of “maintaining and protecting the institution known as negro slavery — the servitude of the African to the white race within her limits — a relation that had existed from the first settlement of her wilderness by the white race, and which her people intended should exist in all future time.”
UDC and other neo-Confederate organizations subscribe to the Cult of the Lost Cause, a concept promoted by white Southerners in the late 19th century that rebranded the Confederacy as “noble and praiseworthy,” denied that slavery was a central cause of the Civil War and toned down the brutality of slavery itself, even depicting the enslaved as happy. According to the Southern Poverty Law Center, this was a “whitewashing of history” to salvage white Southerners’ honor from defeat, and it dominated Southern cultural history in the early 20th century. The Cult of the Lost Cause was also the core ideology of the Ku Klux Klan, which cast Black emancipation as a threat to democracy and to white women. The ideology has had an impact on Republican Party politics, as well as on white nationalists and radical extremists.
In 1894, the United Daughters of the Confederacy was founded in Tennessee and became the foremost Southern women’s organization. UDC now has chapters in 33 states and the District of Columbia, and it was responsible for erecting most of the Confederate monuments throughout the South. As of 2016, these 718 monuments included 551 built before 1950, 45 erected during the Civil Rights movement, and 32 built since 2000. For example, UDC was responsible for the Confederate Monument at Arlington National Cemetery and for Stone Mountain, Georgia, the South’s own Mount Rushmore, whose carving was supervised by Ku Klux Klan members.
UDC instilled Confederate values in white youth to uphold white supremacy and states’ rights, lobbied state legislatures to provide military pensions to Confederate veterans, and helped form state-run homes for Confederate soldiers and widows. As Vox reported, UDC formed textbook committees and made school boards ban books that were “unjust to the South” and negatively portrayed the Confederacy. In addition, its auxiliary group, Children of the Confederacy, purportedly engages young people in “Southern” history. Open to descendants of men and women who served the Confederacy, according to its website, the group states its purposes include “to honor and perpetuate the memory and deeds of high principles of the men and women of the Confederacy,” “observe properly all Confederate Memorial Days,” and “serve society through civic affairs and to perpetuate National patriotism as our ancestors once defended their beliefs.”

UDC is open only to women related to Confederate veterans of the “War Between the States.” The Southern Poverty Law Center noted that UDC has affiliated with white supremacists and racist groups such as the Council of Conservative Citizens and the League of the South, while its publications have minimized the middle passage and argued that slave ship crews suffered the most.
“The UDC helped rewrite both the history of the Civil War and the history of Reconstruction. For the neo-Confederate groups the Civil War and Reconstruction was one long struggle in two phases,” Edward H. Sebesta — editor of “The Confederate and Neo-Confederate Reader,” “Neo-Confederacy: A Critical Introduction,” and author of “Pernicious: The Neo-Confederate Campaign against Social Justice in America” — told Atlanta Black Star. “The problem is that when discussing the United Confederate Veterans, the Sons of Confederate Veterans and the United Daughters of the Confederacy, people don’t realize the extent that neo-Confederates had in rewriting Reconstruction. Yes, there was the Dunning School at Columbia University (a group of scholars that objected to the role of the federal government in Reconstruction), but the UDC provided a major component to push the Dunning interpretation to the general public,” Sebesta added.
UDC policed textbooks and made it clear to textbook publishers that if they did not change their Civil War content, they would not have a market in the South, Sebesta noted. However, the influence of the group regarding textbooks waned by the time of the Civil Rights movement. “By the 1950s, a general [desire] by the public to support white supremacy favored a Lost Cause account of the Civil War and Reconstruction, and it becomes self-sustaining because it teaches or supports a cluster of white attitudes and beliefs in individuals who then insist that for the next generation the same be taught to them,” he said.
“We teach children white nationalism in our textbooks, and it should not be surprising that white nationalism is manifested in our society.”
How the US government created and coddled the gun industry
How much of a role does America play in the gun industry?
BRIAN DELAY, THE CONVERSATION 10.14.2017•6:59 AM
From Salon: ....But what I’ve learned from a decade of studying the history of the arms trade has convinced me that the American public has more power over the gun business than most people realize.
The U.S. arms industry’s close alliance with the government is as old as the country itself, beginning with the American Revolution.
Forced to rely on foreign weapons during the war, President George Washington wanted to ensure that the new republic had its own arms industry. Inspired by European practice, he and his successors built public arsenals for the production of firearms in Springfield and Harpers Ferry. They also began doling out lucrative arms contracts to private manufacturers such as Simeon North, the first official U.S. pistol maker, and Eli Whitney, inventor of the cotton gin.
The government provided crucial startup funds, steady contracts, tariffs against foreign manufactures, robust patent laws, and patterns, tools and know-how from federal arsenals.
The War of 1812, perpetual conflicts with Native Americans and the U.S.-Mexican War all fed the industry’s growth. By the early 1850s, the United States was emerging as a world-class arms producer. Now-iconic American companies like those started by Eliphalet Remington and Samuel Colt began to acquire international reputations.
Even the mighty gun-making center of Great Britain started emulating the American system of interchangeable parts and mechanized production.
Profit in war and peace
The Civil War supercharged America’s burgeoning gun industry.
The Union poured huge sums of money into arms procurement, which manufacturers then invested in new capacity and infrastructure. By 1865, for example, Remington had made nearly US$3 million producing firearms for the Union. The Confederacy, with its weak industrial base, had to import the vast majority of its weapons.
The war’s end meant a collapse in demand and bankruptcy for several gun makers. Those that prospered afterward, such as Colt, Remington and Winchester, did so by securing contracts from foreign governments and hitching their domestic marketing to the brutal romance of the American West.
While peace deprived gun makers of government money for a time, it delivered a windfall to well capitalized dealers. That’s because within five years of Robert E. Lee’s surrender at Appomattox, the War Department had decommissioned most of its guns and auctioned off some 1,340,000 to private arms dealers, such as Schuyler, Hartley and Graham. The Western Hemisphere’s largest private arms dealer at the time, the company scooped up warehouses full of cut-rate army muskets and rifles and made fortunes reselling them at home and abroad.
More wars, more guns
By the late 19th century, America’s increasingly aggressive role in the world ensured steady business for the country’s gun makers.
Consider Sig Sauer, the New Hampshire arms producer that made the MCX rifle used in the Orlando Pulse nightclub massacre. In addition to arming nearly a third of the country’s law enforcement, it recently won the coveted contract for the Army’s new standard pistol, ultimately worth $350 million to $580 million.
Colt might best illustrate the importance of public money for prominent civilian arms manufacturers. Maker of scores of iconic guns for the civilian market, including the AR-15 carbine used in the 1996 massacre that prompted Australia to enact its famously sweeping gun restrictions, Colt has also relied heavily on government contracts since the 19th century. The Vietnam War initiated a long era of making M16s for the military, and the company continued to land contracts as American war-making shifted from southeast Asia to the Middle East. But Colt’s reliance on government was so great that it filed for bankruptcy in 2015, in part because it had lost the military contract for the M4 rifle two years earlier.
Competition for contracts spurred manufacturers to make lethal innovations, such as handguns with magazines that hold 12 or 15 rounds rather than seven. Absent regulation, these innovations show up in gun enthusiast periodicals, sporting goods stores and emergency rooms.
Given their historic dependence on U.S. taxpayers, one might think that small arms makers would have been compelled to make meaningful concessions in such moments. But that seldom happens, thanks in large part to the National Rifle Association, a complicated yet invaluable industry partner.
The NRA, founded in 1871 as an organization focused on hunting and marksmanship, rallied its members to defeat the most important component of the 1934 National Firearms Act: a tax meant to make it far more difficult to purchase handguns. Again in 1968, the NRA ensured Lyndon Johnson’s Gun Control Act wouldn’t include licensing and registration requirements.
Most recently, the gun lobby has succeeded by promoting an ingenious illusion. It has framed government as the enemy of the gun business rather than its indispensable historic patron, convincing millions of American consumers that the state may at any moment stop them from buying guns or even try to confiscate them. Hence the jump in the shares of gun makers following last week’s slaughter in Las Vegas. Investors know they have little to fear from new regulation and expect sales to rise anyway.
Yet almost never does this political activity seem to jeopardize access to lucrative government contracts. Americans interested in reform might reflect on that fact. They might start asking their representatives where they get their guns. It isn’t just the military and scores of federal agencies. States, counties and local governments buy plenty of guns, too.
For example, Smith & Wesson is well into a five-year contract to supply handguns to the Los Angeles Police Department, the second-largest in the country. In 2016 the company contributed $500,000 (more than any other company) to a get-out-the-vote operation designed to defeat candidates who favor tougher gun laws.
Do taxpayers in L.A. – or the rest of the country – realize they are indirectly subsidizing the gun lobby’s campaign against regulation?
A historian destroys racists’ favorite myths about the Vikings
The Conversation 29 SEP 2017 AT 11:40 ET
From Raw Story: The word “Viking” entered the Modern English language in 1807, at a time of growing nationalism and empire building. In the decades that followed, enduring stereotypes about Vikings developed, such as wearing horned helmets and belonging to a society where only men wielded high status.
During the 19th century, Vikings were praised as prototypes and ancestor figures for European colonists. The idea took root of a Germanic master race, fed by crude scientific theories and nurtured by Nazi ideology in the 1930s. These theories have long been debunked, although the notion of the ethnic purity of the Vikings still seems to have popular appeal – and it is embraced by white supremacists.
In contemporary culture, the word Viking is generally synonymous with Scandinavians from the ninth to the 11th centuries. We often hear terms such as “Viking blood”, “Viking DNA” and “Viking ancestors” – but the medieval term meant something quite different to modern usage. Instead it defined an activity: “Going a-Viking”. Akin to the modern word pirate, Vikings were defined by their mobility and this did not include the bulk of the Scandinavian population who stayed at home.
The mobility of Vikings led to a fusion of cultures within their ranks, and their trade routes would extend from Canada to Afghanistan. A striking feature of the early Vikings’ success was their ability to embrace and adapt from a wide range of cultures, whether that be the Christian Irish in the west or the Muslims of the Abbasid Caliphate in the east.

While the modern word Viking came to light in an era of nationalism, the ninth century, when Viking raids ranged beyond the boundaries of modern Europe, was different. The modern nation states of Denmark, Norway and Sweden were still undergoing formation. Local and familial identity were more prized than national allegiances. The terms contemporaries used to describe Vikings (“wicing”, “rus”, “magi”, “gennti”, “pagani”, “pirati”) tend to be non-ethnic. When a term akin to Danes, “danar”, is first used in English, it appears as a political label describing a mix of peoples under Viking control.
Blending of cultures
Developments in archaeology in recent decades have highlighted how people and goods could move over wider distances in the early Middle Ages than we have tended to think. In the eighth century, (before the main period of Viking raiding began), the Baltic was a place where Scandinavians, Frisians, Slavs and Arabic merchants were in frequent contact. It is too simplistic to think of early Viking raids, too, as hit-and-run affairs with ships coming directly from Scandinavia and immediately rushing home again.
Recent archaeological and textual work indicates that Vikings stopped off at numerous places during campaigns (this might be to rest, restock, gather tribute and ransoms, repair equipment and gather intelligence). This allowed more sustained interaction with different peoples. Alliances between Vikings and local peoples are recorded from the 830s and 840s in Britain and Ireland. By the 850s, mixed groups of Gaelic (Gaedhil) and foreign culture (Gaill) were plaguing the Irish countryside. Written accounts survive from Britain and Ireland condemning or seeking to prevent people from joining the Vikings. And they show Viking war bands were not ethnically exclusive. As with later pirate groups (for example the early modern pirates of the Caribbean), Viking crews would frequently lose members and pick up new recruits as they travelled, combining dissident elements from different backgrounds and cultures.
The cultural and ethnic diversity of the Viking Age is highlighted by finds in furnished graves and silver hoards from the ninth and tenth centuries. In Britain and Ireland only a small percentage of goods handled by Vikings are Scandinavian in origin or style.
The evidence points to population mobility and acculturation over large distances as a result of Viking Age trade networks. The Galloway hoard, discovered in south-west Scotland in 2014, includes components from Scandinavia, Britain, Ireland, Continental Europe and Turkey. Cultural eclecticism is a feature of Viking finds. An analysis of skeletons at sites linked to Vikings using the latest scientific techniques points to a mix of Scandinavian and non-Scandinavian peoples without clear ethnic distinctions in rank or gender.
The Viking Age was a key period in state formation processes in Northern Europe, and certainly by the 11th and 12th centuries there was a growing interest in defining national identities and developing appropriate origin myths to explain them. This led to a retrospective development in areas settled by Vikings to celebrate their links to Scandinavia and downplay non-Scandinavian elements.
The fact that these myths, when committed to writing, were not accurate accounts is suggested by self-contradictory stories and folklore motifs. For example, medieval legends concerning the foundation of Dublin (Ireland) suggest either a Danish or Norwegian origin to the town (a lot of ink has been spilt over this matter over the years) – and there is a story of three brothers bringing three ships which bears comparison with other origin legends. Ironically, it was the growth of nation states in Europe which would eventually herald the end of the Viking Age.
In the early Viking Age, modern notions of nationalism and ethnicity would have been unrecognisable. Viking culture was eclectic, but there were common features across large areas, including use of Old Norse speech, similar shipping and military technologies, domestic architecture and fashions that combined Scandinavian and non-Scandinavian inspirations.
It can be argued that these markers of identity were more about status and affiliation to long-range trading networks than ethnic symbols. A lot of social display and identity is non-ethnic in character. One might compare this to contemporary international business culture which has adopted English language, the latest computing technologies, common layouts for boardrooms and the donning of Western suits. This is a culture expressed in nearly any country of the world but independently of ethnic identity.
Similarly, Vikings in the 9th and 10th centuries may be better defined more by what they did than by their place of origin or DNA. By dropping the simplistic equation of Scandinavian with Viking, we may better understand what the early Viking Age was about and how Vikings reshaped the foundations of medieval Europe by adapting to different cultures, rather than trying to segregate them.
The Myth of the Kindly General Lee
The legend of the Confederate leader’s heroism and decency is based in the fiction of a person who never existed.
Adam Serwer - The Atlantic
...The myth of Lee goes something like this: He was a brilliant strategist and devoted Christian man who abhorred slavery and labored tirelessly after the war to bring the country back together.
There is little truth in this. Lee was a devout Christian, and historians regard him as an accomplished tactician. But despite his ability to win individual battles, his decision to fight a conventional war against the more densely populated and industrialized North is considered by many historians to have been a fatal strategic error.
But even if one conceded Lee’s military prowess, he would still be responsible for the deaths of hundreds of thousands of Americans in defense of the South’s authority to own millions of human beings as property because they are black. Lee’s elevation is a key part of a 150-year-old propaganda campaign designed to erase slavery as the cause of the war and whitewash the Confederate cause as a noble one. That ideology is known as the Lost Cause, and as historian David Blight writes, it provided a “foundation on which Southerners built the Jim Crow system.”
There are unwitting victims of this campaign—those who lack the knowledge to separate history from sentiment. Then there are those whose reverence for Lee relies on replacing the actual Lee with a mythical figure who never truly existed.
In the Richmond Times Dispatch, R. David Cox wrote that “For white supremacist protesters to invoke his name violates Lee’s most fundamental convictions.” In the conservative publication Townhall, Jack Kerwick concluded that Lee was “among the finest human beings that has ever walked the Earth.” John Daniel Davidson, in an essay for The Federalist, opposed the removal of the Lee statue in part on the grounds that Lee “arguably did more than anyone to unite the country after the war and bind up its wounds.” Praise for Lee of this sort has flowed forth from past historians and presidents alike. This is too divorced from Lee’s actual life to even be classed as fan fiction; it is simply historical illiteracy.
White supremacy does not “violate” Lee’s “most fundamental convictions.” White supremacy was one of Lee’s most fundamental convictions.
Lee was a slaveowner—his own views on slavery were explicated in an 1856 letter that is often misquoted to give the impression that Lee was some kind of an abolitionist. In the letter, he describes slavery as “a moral & political evil,” but goes on to explain that:
I think it however a greater evil to the white man than to the black race, & while my feelings are strongly enlisted in behalf of the latter, my sympathies are more strong for the former. The blacks are immeasurably better off here than in Africa, morally, socially & physically. The painful discipline they are undergoing, is necessary for their instruction as a race, & I hope will prepare & lead them to better things. How long their subjugation may be necessary is known & ordered by a wise Merciful Providence. Their emancipation will sooner result from the mild & melting influence of Christianity, than the storms & tempests of fiery Controversy.
The argument here is that slavery is bad for white people, good for black people, and most importantly, it is better than abolitionism; emancipation must wait for divine intervention. That black people might not want to be slaves does not enter into the equation; their opinion on the subject of their own bondage is not even an afterthought to Lee.
Lee’s cruelty as a slavemaster was not confined to physical punishment. In Reading the Man, the historian Elizabeth Brown Pryor’s portrait of Lee through his writings, Pryor writes that “Lee ruptured the Washington and Custis tradition of respecting slave families,” by hiring them off to other plantations, and that “by 1860 he had broken up every family but one on the estate, some of whom had been together since Mount Vernon days.” The separation of slave families was one of the most unfathomably devastating aspects of slavery, and Pryor wrote that Lee’s slaves regarded him as “the worst man I ever see.”
The trauma of rupturing families lasted lifetimes for the enslaved—it was, as my colleague Ta-Nehisi Coates described it, “a kind of murder.” After the war, thousands of the emancipated searched desperately for kin lost to the market for human flesh, fruitlessly for most. In Reconstruction, the historian Eric Foner quotes a Freedmen’s Bureau agent who notes of the emancipated, “in their eyes, the work of emancipation was incomplete until the families which had been dispersed by slavery were reunited.”
Lee’s heavy hand on the Arlington plantation, Pryor writes, nearly led to a slave revolt, in part because the enslaved had been expected to be freed upon their previous master’s death, and Lee had engaged in a dubious legal interpretation of his will in order to keep them as his property, one that lasted until a Virginia court forced him to free them.
When two of his slaves escaped and were recaptured, Lee either beat them himself or ordered the overseer to "lay it on well." Wesley Norris, one of the slaves who was whipped, recalled that “not satisfied with simply lacerating our naked flesh, Gen. Lee then ordered the overseer to thoroughly wash our backs with brine, which was done.”
Every state that seceded mentioned slavery as the cause in their declarations of secession. Lee’s beloved Virginia was no different, accusing the federal government of “perverting” its powers “not only to the injury of the people of Virginia, but to the oppression of the Southern Slaveholding States.” Lee’s decision to fight for the South can only be described as a choice to fight for the continued existence of human bondage in America—even though for the Union, it was not at first a war for emancipation.
During his invasion of Pennsylvania, Lee’s Army of Northern Virginia enslaved free blacks and brought them back to the South as property. Pryor writes that “evidence links virtually every infantry and cavalry unit in Lee’s army” with the abduction of free black Americans, “with the activity under the supervision of senior officers.”
Soldiers under Lee’s command at the Battle of the Crater in 1864 massacred black Union soldiers who tried to surrender. Then, in a spectacle hatched by Lee’s senior corps commander A.P. Hill, the Confederates paraded the Union survivors through the streets of Petersburg to the slurs and jeers of the southern crowd. Lee never discouraged such behavior. As the historian Richard Slotkin wrote in No Quarter: The Battle of the Crater, “his silence was permissive.”
The presence of black soldiers on the field of battle shattered every myth the South’s slave empire was built on: the happy docility of slaves, their intellectual inferiority, their cowardice, their inability to compete with whites. As Pryor writes, “fighting against brave and competent African Americans challenged every underlying tenet of southern society.” The Confederate response to this challenge was to visit every possible atrocity and cruelty upon black soldiers whenever possible, from enslavement to execution.
As the historian James McPherson recounts in Battle Cry of Freedom, in October of that same year, Lee proposed an exchange of prisoners with the Union general Ulysses S. Grant. “Grant agreed, on condition that blacks be exchanged ‘the same as white soldiers.’” Lee’s response was that “negroes belonging to our citizens are not considered subjects of exchange and were not included in my proposition.” Because slavery was the cause for which Lee fought, he could hardly be expected to easily concede, even at the cost of the freedom of his own men, that blacks could be treated as soldiers and not things. Grant refused the offer, telling Lee that “Government is bound to secure to all persons received into her armies the rights due to soldiers.” Despite its desperate need for soldiers, the Confederacy did not relent from this position until a few months before Lee’s surrender.
After the war, Lee did counsel defeated southerners against rising up against the North. Lee might have become a rebel once more, and urged the South to resume fighting—as many of his former comrades wanted him to. But even in this task Grant, in 1866, regarded his former rival as falling short, saying that Lee was “setting an example of forced acquiescence so grudging and pernicious in its effects as to be hardly realized.”
Nor did Lee’s defeat lead to an embrace of racial egalitarianism. The war was not about slavery, Lee insisted later, but if it was about slavery, it was only out of Christian devotion that white southerners fought to keep blacks enslaved. Lee told a New York Herald reporter, in the midst of arguing in favor of somehow removing blacks from the South (“disposed of,” in his words), “that unless some humane course is adopted, based on wisdom and Christian principles you do a gross wrong and injustice to the whole negro race in setting them free. And it is only this consideration that has led the wisdom, intelligence and Christianity of the South to support and defend the institution up to this time.”
Lee had beaten or ordered his own slaves to be beaten for the crime of wanting to be free, he fought for the preservation of slavery, his army kidnapped free blacks at gunpoint and made them unfree—but all of this, he insisted, had occurred only because of the great Christian love the South held for blacks. Here we truly understand Frederick Douglass’s admonition that "between the Christianity of this land and the Christianity of Christ, I recognize the widest possible difference."
Privately, according to the correspondence collected by his own family, Lee counseled others to hire white labor instead of the freedmen, observing “that wherever you find the negro, everything is going down around him, and wherever you find a white man, you see everything around him improving.”
In another letter, Lee wrote “You will never prosper with blacks, and it is abhorrent to a reflecting mind to be supporting and cherishing those who are plotting and working for your injury, and all of whose sympathies and associations are antagonistic to yours. I wish them no evil in the world—on the contrary, will do them every good in my power, and know that they are misled by those to whom they have given their confidence; but our material, social, and political interests are naturally with the whites.”
Publicly, Lee argued against the enfranchisement of blacks, and raged against Republican efforts to enforce racial equality on the South. Lee told Congress that blacks lacked the intellectual capacity of whites and “could not vote intelligently,” and that granting them suffrage would “excite unfriendly feelings between the two races.” Lee explained that “the negroes have neither the intelligence nor the other qualifications which are necessary to make them safe depositories of political power.” To the extent that Lee believed in reconciliation, it was between white people, and only on the precondition that black people would be denied political power and therefore the ability to shape their own fate.
Lee is not remembered as an educator, but his life as president of Washington College (later Washington and Lee) is tainted as well. According to Pryor, students at Washington formed their own chapter of the KKK, and were known by the local Freedmen’s Bureau to attempt to abduct and rape black schoolgirls from the nearby black schools.
There were at least two attempted lynchings by Washington students during Lee’s tenure, and Pryor writes that “the number of accusations against Washington College boys indicates that he either punished the racial harassment more laxly than other misdemeanors, or turned a blind eye to it,” adding that he “did not exercise the near imperial control he had at the school, as he did for more trivial matters, such as when the boys threatened to take unofficial Christmas holidays.” In short, Lee was as indifferent to crimes of violence toward blacks carried out by his students as he was when they were carried out by his soldiers.
Lee died in 1870, as Democrats and ex-Confederates were commencing a wave of terrorist violence that would ultimately reimpose their domination over the Southern states. The Ku Klux Klan was founded in 1866; there is no evidence Lee ever spoke up against it. On the contrary, he darkly intimated in his interview with the Herald that the South might be moved to violence again if peace did not proceed on its terms. That was prescient.

To describe this man as an American hero requires ignoring the immense suffering for which he was personally responsible, both on and off the battlefield. It requires ignoring his participation in the industry of human bondage, his betrayal of his country in defense of that institution, the battlefields scattered with the lifeless bodies of men who followed his orders and those they killed, his hostility toward the rights of the freedmen and his indifference to his own students waging a campaign of terror against the newly emancipated. It requires reducing the sum of human virtue to a sense of decorum and the ability to convey gravitas in a gray uniform.

The white supremacists who have protested on Lee’s behalf are not betraying his legacy. In fact, they have every reason to admire him. Lee, whose devotion to white supremacy outshone his loyalty to his country, is the embodiment of everything they stand for. Tribe and race over country is the core of white nationalism, and racists can embrace Lee in good conscience.
The question is why anyone else would.
Donald Trump, Jews and the myth of race: How Jews gradually became “white,” and how that changed America
Until the 1940s, Jews in America were considered a separate race. Their journey to whiteness has important lessons JONATHAN ZIMMERMAN - Salon APRIL 9, 2017 10:00AM
Earlier this year, President Trump denounced the desecration of Jewish cemeteries. But he has said almost nothing about the murder of a black New Yorker last month by a racist military veteran.
All 100 members of the U.S. Senate signed a letter condemning bomb threats against synagogues and Jewish community centers, which were eventually traced to a teenager in Israel. We haven’t heard a similar expression of concern over the four American mosques that were burned by vandals in the first two months of 2017.
Why the double standard? Some observers have suggested that Trump is more attuned to anti-Semitism than to other forms of bigotry because his daughter and son-in-law are Jews. Others point to Trump’s immigration orders and to his earlier threats to bar Muslims from the country, which have made Islamophobia seem less reprehensible — and, possibly, more permissible — than prejudice against Jews.
But I’ve got a simpler explanation: Jews are white.
For most of American history, that wasn’t the case. Jews were a separate race, as blacks and Asians are today. But over time Jews became white, which made it harder for other whites to hate them.
That’s the great elephant in the room, when it comes to the tortured subject of race in America. The word “race” conjures biology, a set of inheritable — and immutable — physical characteristics. But it’s actually a cultural and social category, not a biological one, which is why it changes over time.
Until the 1940s, Jews were recorded as a distinct racial group by American immigration authorities. But many non-Jewish whites also believed that Jews shared traits with the most stigmatized race of all: African-Americans.
In his 1910 book, “The Jew and the Negro,” North Carolina minister Arthur T. Abernathy argued that the ancient Jews had mixed with neighboring African peoples. Jews’ skin color lightened when they moved to other climates, Abernathy wrote, but they still bore the same eye shapes and fingernails (yes, fingernails!) as blacks.
Both groups were disposed to sexual indulgence, Abernathy warned, a recurring theme in American racism. “Every student of sociology knows that the black man’s lust after the white woman is not much fiercer than the lust of the licentious Jew for the gentile,” Georgia politician Tom Watson wrote in 1915.
In the North, meanwhile, white racial anxieties about Jewish college students led to quotas that limited their numbers. “Scurvy kikes are not wanted,” read a poster hung on one campus in 1923. “Make New York University a White Man’s College.”
For their own part, Jews couldn’t agree if they were a race or not. Some of them warned that a separate racial classification would inevitably link them to blacks, diminishing Jews in the eyes of other Americans. But others held tight to the racial idea, which could help make the case for a Jewish homeland in Palestine. Yet with the rise of Nazism in Europe, which was premised on Jewish racial inferiority, Jews in America stopped identifying as a race. “Scientifically and correctly speaking, there are three great races in the world: the black, yellow, and white,” a booklet distributed by the Anti-Defamation League declared in 1939. “Within the white race all the sub-races have long since been mixed and we Jews are part of the general admixture.”
After World War II, most white Americans quietly welcomed Jews into their “admixture.” Newspapers and social scientists referred to Jews as members of a religion, ethnicity or culture. But they were not called a “race,” a word that conjured up Adolf Hitler and his effort to eliminate them.
During these same years, the struggle against Nazism overseas helped jump-start the campaign for African-American civil rights at home. But blacks’ status as a separate race remained, as James Baldwin noted in a biting 1967 essay about African-Americans and Jews.
Blacks were tired of Jews telling them that the Jewish experience of prejudice was “as great as the American Negro’s suffering,” Baldwin wrote. The claim ignored racial realities, Baldwin wrote, and it fueled anti-Semitism in black communities. “The most ironical thing,” Baldwin wrote, “is that the Negro is really condemning the Jew for having become an American white man.”
That’s not an option for African-Americans or for other nonwhite racial groups in America. (It's worth noting that the racial status of Hispanics is ambiguous: Some identify as white, some as black and many as neither.) These categories are cultural creations, too, not biological ones. When Sonia Sotomayor joined the Supreme Court in 2009, news headlines hailed her as the first Hispanic justice on the court. Yet when she was growing up in the Bronx in the 1950s and 1960s she was Puerto Rican rather than “Hispanic,” a term that did not enter our racial lexicon until the 1970s.
Let’s be clear: Race exists. But it exists in our minds, not in our bodies. In our homes and schools, we need to teach the next generation of Americans to resist the false doctrine of inherent racial difference. Then, and only then, will they truly recognize everyone as part of a single race: the human one.
To support the famine relief effort, British tax policy required landlords to pay the local taxes of their poorest tenant farmers, leading many landlords to forcibly evict struggling farmers and destroy their cottages in order to save money. (Sketch: The Irish Famine: Interior of a Peasant’s Hut)
“Wear green on St. Patrick’s Day or get pinched.” That pretty much sums up the Irish-American “curriculum” that I learned when I was in school. Yes, I recall a nod to the so-called Potato Famine, but it was mentioned only in passing.
Sadly, today’s high school textbooks continue to largely ignore the famine, despite the fact that it was responsible for unimaginable suffering and the deaths of more than a million Irish peasants, and that it triggered the greatest wave of Irish immigration in U.S. history. Nor do textbooks make any attempt to help students link famines past and present. Yet there is no shortage of material that can bring these dramatic events to life in the classroom. In my own high school social studies classes, I begin with Sinead O’Connor’s haunting rendition of “Skibbereen,” which includes the verse:
… Oh it’s well I do remember, that bleak December day,
The landlord and the sheriff came, to drive us all away.
They set my roof on fire, with their cursed English spleen,
And that’s another reason why I left old Skibbereen.
By contrast, Holt McDougal’s U.S. history textbook, The Americans, devotes a flat two sentences to “The Great Potato Famine.” Prentice Hall’s America: Pathways to the Present fails to offer a single quote from the time. The text calls the famine a “horrible disaster,” as if it were a natural calamity like an earthquake. And in an awful single paragraph, Houghton Mifflin’s The Enduring Vision: A History of the American People blames the “ravages of famine” simply on “a blight,” and the only contemporaneous quote comes, inappropriately, from a landlord, who describes the surviving tenants as “famished and ghastly skeletons.” Uniformly, social studies textbooks fail to allow the Irish to speak for themselves, to narrate their own horror.
These timid slivers of knowledge not only deprive students of rich lessons in Irish-American history, they exemplify much of what is wrong with today’s curricular reliance on corporate-produced textbooks.
First, does anyone really think that students will remember anything from the books’ dull and lifeless paragraphs? Today’s textbooks contain no stories of actual people. We meet no one, learn nothing of anyone’s life, encounter no injustice, no resistance. This is a curriculum bound for boredom. As someone who spent almost 30 years teaching high school social studies, I can testify that students will be unlikely to seek to learn more about events so emptied of drama, emotion, and humanity.
Nor do these texts raise any critical questions for students to consider. For example, it’s important for students to learn that the crop failure in Ireland affected only the potato—during the worst famine years, other food production was robust. Michael Pollan notes in The Botany of Desire, “Ireland’s was surely the biggest experiment in monoculture ever attempted and surely the most convincing proof of its folly.” But if only this one variety of potato, the Lumper, failed, and other crops thrived, why did people starve?
Thomas Gallagher points out in Paddy’s Lament that during the first winter of famine, 1846-47, as perhaps 400,000 Irish peasants starved, landlords exported 17 million pounds sterling worth of grain, cattle, pigs, flour, eggs, and poultry—food that could have prevented those deaths. Throughout the famine, as Gallagher notes, there was an abundance of food produced in Ireland, yet the landlords exported it to markets abroad.
The school curriculum could and should ask students to reflect on the contradiction of starvation amidst plenty, on the ethics of food exports amidst famine. And it should ask why these patterns persist into our own time.
More than a century and a half after the “Great Famine,” we live with similar, perhaps even more glaring contradictions. Raj Patel opens his book, Stuffed and Starved: Markets, Power and the Hidden Battle for the World’s Food System: “Today, when we produce more food than ever before, more than one in ten people on Earth are hungry. The hunger of 800 million happens at the same time as another historical first: that they are outnumbered by the one billion people on this planet who are overweight.”
Patel’s book sets out to account for “the rot at the core of the modern food system.” This is a curricular journey that our students should also be on — reflecting on patterns of poverty, power, and inequality that stretch from 19th century Ireland to 21st century Africa, India, Appalachia, and Oakland; that explore what happens when food and land are regarded purely as commodities in a global system of profit.
But today’s corporate textbook-producers are no more interested in feeding student curiosity about this inequality than were British landlords interested in feeding Irish peasants. Take Pearson, the global publishing giant. At its website, the corporation announces (redundantly) that “we measure our progress against three key measures: earnings, cash and return on invested capital.” The Pearson empire had 2011 worldwide sales of more than $9 billion—that’s nine thousand million dollars, as I might tell my students. Multinationals like Pearson have no interest in promoting critical thinking about an economic system whose profit-first premises they embrace with gusto.
As mentioned, there is no absence of teaching materials on the Irish famine that can touch head and heart. In a role play, “Hunger on Trial,” that I wrote and taught to my own students in Portland, Oregon—included at the Zinn Education Project website—students investigate who or what was responsible for the famine. The British landlords, who demanded rent from the starving poor and exported other food crops? The British government, which allowed these food exports and offered scant aid to Irish peasants? The Anglican Church, which failed to denounce selfish landlords or to act on behalf of the poor? A system of distribution, which sacrificed Irish peasants to the logic of colonialism and the capitalist market?
These are rich and troubling ethical questions. They are exactly the kind of issues that fire students to life and allow them to see that history is not simply a chronology of dead facts stretching through time.
So go ahead: Have a Guinness, wear a bit of green, and put on the Chieftains. But let’s honor the Irish with our curiosity. Let’s make sure that our schools show some respect, by studying the social forces that starved and uprooted over a million Irish—and that are starving and uprooting people today.
The Real Irish-American Story Not Taught in Schools
by Bill Bigelow - Common Dreams:
Published on Friday, March 17, 2017
Roe v. Wade: abortion was outlawed so doctors could make money
The Supreme Court decriminalized abortion by handing down its decision in the case of Roe v. Wade. Despite opponents’ characterization of the decision, it was not the first time that abortion had been a legal procedure in the United States. In fact, for most of the country’s first 100 years, abortion as we know it today was not only not a criminal offense, it was also not considered immoral.
In the 1700s and early 1800s, the word “abortion” referred only to the termination of a pregnancy after “quickening,” the time when the fetus first began to make noticeable movements. The induced ending of a pregnancy before this point did not even have a name–but not because it was uncommon. Women in the 1700s often took drugs to end their unwanted pregnancies.
In 1827, though, Illinois passed a law that made the use of abortion drugs punishable by up to three years’ imprisonment. Although other states followed the Illinois example, advertising for “Female Monthly Pills,” as they were known, was still common through the middle of the 19th century.
Abortion itself only became a serious criminal offense in the period between 1860 and 1880. And the criminalization of abortion did not result from moral outrage. The roots of the new law came from the newly established physicians’ trade organization, the American Medical Association. Doctors decided that abortion practitioners were unwanted competition and went about eliminating that competition. The Catholic Church, which had long accepted terminating pregnancies before quickening, joined the doctors in condemning the practice.
By the turn of the century, all states had laws against abortion, but for the most part they were rarely enforced, and women with money had no problem terminating pregnancies if they wished. It wasn’t until the late 1930s that abortion laws were enforced. Subsequent crackdowns led to a reform movement that succeeded in lifting abortion restrictions in California and New York even before the Supreme Court decision in Roe v. Wade.
The fight over whether to criminalize abortion has grown increasingly fierce in recent years, but opinion polls suggest that most Americans prefer that women be able to have abortions in the early stages of pregnancy, free of any government interference.
Chomsky on America's Ugly History: FDR Was Fascist-Friendly Before WWII
By Zain Raza, Noam Chomsky / Noam Chomsky's Official Site Alternet
Noam Chomsky: Well, it was a mixed story. Roosevelt himself had a mixed attitude. For example, he was pretty supportive of Mussolini's fascism, in fact described Mussolini as "that admirable Italian gentleman." He later concluded that Mussolini had been misled by his association with Hitler and had been led kind of down the wrong path. But the American business community, the power systems in the United States were highly supportive of Mussolini.
In fact, even parts of the labor bureaucracy were. Fortune Magazine for example, the major business journal I think in 1932, had an issue with the headline, I’m quoting it: "The wops are unwopping themselves." The "wop" is a kind of a derogatory term for Italians and the "wops are finally unwopping themselves," under Mussolini they're becoming part of the civilized world. There was criticism of the Italian invasion of Ethiopia, a lot of criticism. But basically pretty supportive attitude toward Mussolini's fascism. When Germany, when Hitler took over, the attitude was more mixed.
There was concern about a potential threat, but nevertheless the general approach of the U.S., and the British even more so, was fairly supportive. So for example in 1937, the State Department described Hitler as a kind of moderate who was holding off the dangerous forces of the left, meaning the Bolsheviks, the labor movement and so on, and of the right, namely extremist Nazism. Hitler was kind of in the middle and therefore we should kind of support him. This is a pretty familiar stance, incidentally, as in many other cases.
George Kennan, later famous as one of the architects of post-war policy, was actually the American consul in Berlin up until Pearl Harbor. He was sending back reports, which are public, that were qualified. He said we shouldn’t be too harsh in condemning the Nazis, since a lot of what they are doing is kind of understandable and we could get along with them and so on, and this is one strain, and a major one. But there was also plenty of criticism and condemnation. So the general attitudes were fairly mixed.
At the Munich Conference in late 1938, Roosevelt sent his most trusted adviser, Sumner Welles, to Munich, and Welles came back with a pretty positive report saying we can really work with Hitler, this conference opens the possibility of a period of peace and justice for Europe and we should work out ways to interact and deal with him. That was late 1938! And so it was quite a mixed story.
Historical Snapshot of Capitalism and Imperialism
Imperialism 101 Against Empire by Michael Parenti
A central imperative of capitalism is expansion. Investors will not put their money into business ventures unless they can extract more than they invest. Increased earnings come only with a growth in the enterprise. The capitalist ceaselessly searches for ways of making more money in order to make still more money. One must always invest to realize profits, gathering as much strength as possible in the face of competing forces and unpredictable markets...
...North American and European corporations have acquired control of more than three-fourths of the known mineral resources of Asia, Africa, and Latin America. But the pursuit of natural resources is not the only reason for capitalist overseas expansion. There is the additional need to cut production costs and maximize profits by investing in countries with cheaper labor markets. U.S. corporate foreign investment grew 84 percent from 1985 to 1990, the most dramatic increase being in cheap-labor countries like South Korea, Taiwan, Spain, and Singapore.
Because of low wages, low taxes, nonexistent work benefits, weak labor unions, and nonexistent occupational and environmental protections, U.S. corporate profit rates in the Third World are 50 percent greater than in developed countries. Citibank, one of the largest U.S. firms, earns about 75 percent of its profits from overseas operations. While profit margins at home sometimes have had a sluggish growth, earnings abroad have continued to rise dramatically, fostering the development of what has become known as the multinational or transnational corporation. Today some four hundred transnational companies control about 80 percent of the capital assets of the global free market and are extending their grasp into the ex-communist countries of Eastern Europe...
... Myths of Underdevelopment
The impoverished lands of Asia, Africa, and Latin America are known to us as the "Third World," to distinguish them from the "First World" of industrialized Europe and North America and the now largely defunct "Second World" of communist states. Third World poverty, called "underdevelopment," is treated by most Western observers as an original historic condition. We are asked to believe that it always existed, that poor countries are poor because their lands have always been infertile or their people unproductive. In fact, the lands of Asia, Africa, and Latin America have long produced great treasures of foods, minerals and other natural resources. That is why the Europeans went through all the trouble to steal and plunder them. One does not go to poor places for self-enrichment. The Third World is rich. Only its people are poor—and it is because of the pillage they have endured.
The process of expropriating the natural resources of the Third World began centuries ago and continues to this day. First, the colonizers extracted gold, silver, furs, silks, and spices, then flax, hemp, timber, molasses, sugar, rum, rubber, tobacco, calico, cocoa, coffee, cotton, copper, coal, palm oil, tin, iron, ivory, ebony, and later on, oil, zinc, manganese, mercury, platinum, cobalt, bauxite, aluminum, and uranium. Not to be overlooked is that most hellish of all expropriations: the abduction of millions of human beings into slave labor.
Through the centuries of colonization, many self-serving imperialist theories have been spun. I was taught in school that people in tropical lands are slothful and do not work as hard as we denizens of the temperate zone. In fact, the inhabitants of warm climates have performed remarkably productive feats, building magnificent civilizations well before Europe emerged from the Dark Ages. And today they often work long, hard hours for meager sums. Yet the early stereotype of the "lazy native" is still with us. In every capitalist society, the poor—both domestic and overseas—regularly are blamed for their own condition.[...]
Read more at http://www.michaelparenti.org/Imperialism101.html
Why Allen Dulles Killed the Kennedys
by David Swanson - Freepress:
...JFK and the Unspeakable depicts Kennedy as getting in the way of the violence that Allen Dulles and gang wished to engage in abroad. He wouldn't fight Cuba or the Soviet Union or Vietnam or East Germany or independence movements in Africa. He wanted disarmament and peace. He was talking cooperatively with Khrushchev, as Eisenhower had tried prior to the U-2 shootdown sabotage. The CIA was overthrowing governments in Iran, Guatemala, the Congo, Vietnam, and around the world. Kennedy was getting in the way.
The Devil's Chessboard depicts Kennedy, in addition, as himself being the sort of leader the CIA was in the habit of overthrowing in those foreign capitals. Kennedy had made enemies of bankers and industrialists. He was working to shrink oil profits by closing tax loopholes, including the "oil depletion allowance." He was permitting the political left in Italy to participate in power, outraging the extreme right in Italy, the U.S., and the CIA. He aggressively went after steel corporations and prevented their price hikes. This was the sort of behavior that could get you overthrown if you lived in one of those countries with a U.S. embassy in it.
Yes, Kennedy wanted to eliminate or drastically weaken and rename the CIA. Yes he threw Dulles and some of his gang out the door. Yes he refused to launch World War III over Cuba or Berlin or anything else. Yes he had the generals and warmongers against him, but he also had Wall Street against him.
Of course "politicians who ever try to get out of line" are now, as then, but more effectively now, handled first by the media. If the media can stop them or some other maneuver can stop them (character assassination, blackmail, distraction, removal from power) then violence isn't required.
The fact that Kennedy resembled a coup target, not just a protector of other targets, would be bad news for someone like Senator Bernie Sanders if he ever got past the media, the "super delegates," and the sell-out organizations to seriously threaten to take the White House. A candidate who accepts the war machine to a great extent and resembles Kennedy not at all on questions of peace, but who takes on Wall Street with the passion it deserves, could place himself as much in the cross-hairs of the deep state as a Jeremy Corbyn who takes on both capital and killing.
Accounts of the escapades of Allen Dulles, and the dozen or more partners in crime whose names crop up beside his decade after decade, illustrate the power of a permanent plutocracy, but also the power of particular individuals to shape it. What if Allen Dulles and Winston Churchill and others like them hadn't worked to start the Cold War even before World War II was over? What if Dulles hadn't collaborated with Nazis and the U.S. military hadn't recruited and imported so many of them into its ranks? What if Dulles hadn't worked to hide information about the holocaust while it was underway? What if Dulles hadn't betrayed Roosevelt and Russia to make a separate U.S. peace with Germany in Italy? What if Dulles hadn't begun sabotaging democracy in Europe immediately and empowering former Nazis in Germany? What if Dulles hadn't turned the CIA into a secret lawless army and death squad? What if Dulles hadn't worked to end Iran's democracy, or Guatemala's? What if Dulles' CIA hadn't developed torture, rendition, human experimentation, and murder as routine policies? What if Eisenhower had been permitted to talk with Khrushchev? What if Dulles hadn't tried to overthrow the President of France? What if Dulles had been "checked" or "balanced" ever so slightly by the media or Congress or the courts along the way?
These are tougher questions than "What if there had been no Lee Harvey Oswald?" The answer to that is, "There would have been another guy very similar to serve the same purpose, just as there had been in the earlier attempt on JFK in Chicago." But "What if there had been no Allen Dulles?" looms large enough to suggest the possible answer that we would all be better off, less militarized, less secretive, less xenophobic. And that suggests that the deep state is not uniform and not unstoppable. Talbot's powerful history is a contribution to the effort to stop it.
I hope Talbot speaks about his book in Virginia, after which he might stop saying that Williamsburg and the CIA's "farm" are in "Northern Virginia." Hasn't Northern Virginia got enough to be ashamed of without that?
The Real (Despicable) Thomas Jefferson
Michael Coard - philly tribune
During the second week of April this year, there were campaign exhibitions here in Philly for two presidential candidates. During the second week of April 2014, there was a controversial exhibition here in Philly for one presidential criminal.
Two years ago, the National Constitution Center began its six-month presentation about Thomas Jefferson entitled “Slavery at Jefferson’s Monticello.” And to my surprise, the organizers didn’t engage in the customary American practice of sweeping presidential complicity with slavery under the rug. In fact, they included the word “Slavery” in the title and addressed “the stories of six slave families who ‘lived’ and ‘worked’ at Jefferson’s plantation.” Nice, huh? Well, yes. But only kinda/sorta. By that, I mean they didn’t really “live.” Instead, they actually “suffered and survived.” And they didn’t really “work.” Instead, they actually “slaved and toiled.” But let’s not quibble over semantics. Instead, let’s go to the heart of the matter by enlightening you about who, and what, Thomas Jefferson truly was. Here are ten things you didn’t know about him:
1. He was a lifetime slaveholder. Thomas was the son of Peter Jefferson, a Virginia landowning slaveholder who died in 1757, leaving the fourteen-year-old with a massive estate. Ten years later, he formally inherited 52 Black human beings. When he authored the Declaration of Independence in 1776, he held 175 Black men, women, and children in bondage. By 1822, he had increased that number to 267.
2. He was a hypocrite. While writing “We hold these truths to be self-evident, that all men are created equal, that they are endowed by their Creator with certain unalienable Rights, that among these are Life, Liberty and the pursuit of Happiness…,” he enslaved hundreds of human beings. See item one above.
3. He was a rapist. As U.S. Envoy and Minister to France, Jefferson began living there periodically from 1784-1789. He took with him his oldest daughter Martha and a few of those whom he enslaved, including James Hemings. In 1787, he requested that his daughter Polly join him. This meant that Polly’s enslaved chambermaid, 14-year-old seamstress Sally Hemings (James’ younger sister), was to accompany her. Both Sally and James were among the six mulatto offspring of Jefferson’s father-in-law, John Wayles, and his enslaved “domestic servant” Betty Hemings. Sally and James were half-siblings of Thomas’ late wife, Martha Wayles Skelton Jefferson. Thomas, after repeatedly raping Sally while in Paris, impregnated her. Her first child died after she returned to America. But she had six more of Thomas’ children at Monticello.
4. He was an incestuous pedophile. See item three above.
5. He was a “Back To Africa” proponent (but not really). This would have been a good thing if his purpose was the Afrocentric goal of reuniting Blacks with their roots. But it was a bad thing because, as Peter S. Onuf, Thomas Jefferson Memorial Foundation Professor Emeritus, notes, it was a scheme by Jefferson to conceal his “shadow family.”
6. He was a legislative racist. As pointed out by Joyce Oldham Appleby, Professor Emerita of History at UCLA and former President of the Organization of American Historians and the American Historical Association, as well as by Arthur M. Schlesinger Jr., former Professor of History at Harvard University and Professor Emeritus at CUNY Graduate Center, Jefferson opposed the practice of slaveholders freeing their enslaved because he claimed it would encourage rebellion. And, as noted by John E. Ferling, Professor Emeritus of History at University of West Georgia, after Jefferson was elected to the Virginia House of Burgesses in 1769, he attempted to introduce laws that essentially would have banned free Blacks from entering or exiting that state and would have banished children whose fathers were of African origin. He also tried to expel white women who had children by Black men. After being elected Governor in 1779, he signed a bill to encourage enlistment in the Revolutionary War by compensating white men with, among other things, “a healthy sound Negro.”
7. He was an international racist. As Secretary of State in 1795, he gave $40,000 and one thousand firearms to colonial French slaveholders in Haiti in an attempt to defeat Toussaint L’Ouverture’s slave rebellion. As President, he supported French plans to resume power, lent France $300,000 “for relief of whites on the island,” and in 1804 refused to recognize Haiti as a sovereign republic after its military victory. Two years later, he imposed a trade embargo.
8. He was a blatantly ignorant racist. In his 1785 book entitled “Notes on the State of Virginia,” he wrote about “the preference of the ‘oran-outan’ (i.e., an ape-like creature) for the Black women over those of … (its) own species.” He also wrote that Blacks have “a very strong and disagreeable odor” and that they “are inferior to the whites ...”
9. He was a liar. His friend from the American Revolution, Polish nobleman Tadeusz Kosciuszko, came to America in 1798 to receive back pay for his military service. He then wrote a will directing Jefferson to use all of Kosciuszko’s money and land in the U.S. to “free and educate slaves.” Jefferson agreed to do so. After Kosciuszko died in 1817, Jefferson refused to free or educate any of them.
10. Black labor built Monticello. Beginning in 1768, Jefferson forced many in his enslaved population, including skilled Black carpenters, to do the tortuous work of building his palatial plantation, known as Monticello.
The words from David Walker’s Appeal, written in 1829, and the words of Christopher James Perry Sr., founder of the Tribune in 1884, are the inspiration for my “Freedom’s Journal” columns. In order to honor that pivotal nationalist abolitionist and that pioneering newspaper giant, as well as to inspire today’s Tribune readers, each column ends with Walker and Perry’s combined quote, along with my inserted voice, as follows: I ask all Blacks “to procure a copy of this… (weekly column) for it is designed… particularly for them” so they can “make progress… against (racist) injustice.”
The secret history of Hitler’s ‘Black Holocaust’
The fact that we officially commemorate the Holocaust on January 27, the date of the liberation of Auschwitz, means that remembrance of Nazi crimes focuses on the systematic mass murder of Europe’s Jews.
The other victims of Nazi racism, including Europe’s Sinti and Roma, are now routinely named in commemoration, but not all survivors have had equal opportunities to have their stories heard.
One group of victims who have yet to be publicly memorialized is black Germans.
All those voices need to be heard, not only for the sake of the survivors, but because we need to see how varied the expressions of Nazi racism were if we are to understand the lessons of the Holocaust for today.
When Hitler came to power in 1933, there were understood to have been some thousands of black people living in Germany—they were never counted and estimates vary widely. At the heart of an emerging black community was a group of men from Germany’s own African colonies (which were lost under the peace treaty that ended World War I) and their German wives.
They were networked across Germany and abroad by ties of family and association and some were active in Communist and anti-racist organizations. Among the first acts of the Nazi regime was the suppression of black political activism. There were also 600 to 800 children fathered by French colonial soldiers—many, though not all, African—when the French army occupied the Rhineland as part of the peace settlement after 1919. French troops were withdrawn in 1930 and the Rhineland was demilitarized until Hitler stationed German units there in 1936.
Denial of Rights and Work
The 1935 Nuremberg Laws stripped Jews of their German citizenship and prohibited them from marrying or having sexual relations with “people of German blood.”
A subsequent ruling confirmed that black people (like “gypsies”) were to be regarded as being “of alien blood” and subject to the Nuremberg principles. Very few people of African descent had German citizenship, even if they were born in Germany, and their exclusion became irreversible when they were given passports that designated them as “stateless Negroes.”
In 1941, black children were officially excluded from public schools, but most of them had suffered racial abuse in their classrooms much earlier. Some were forced out of school and none were permitted to go on to university or professional training. Published interviews and memoirs by both men and women, unpublished testimony and postwar compensation claims testify to these and other shared experiences.
Employment prospects that were already poor before 1933 got worse afterward. Unable to find regular work, some were drafted for forced labor as “foreign workers” during World War II. Films and stage shows making propaganda for the return of Germany’s African colonies became one of the few sources of income, especially after black people were banned from other kinds of public performance in 1939.
When SS leader Heinrich Himmler undertook a survey of all black people in Germany and occupied Europe in 1942, he was probably contemplating a round-up of some kind. But there was no mass internment.
Research in camp records and survivor testimony has so far come up with around 20 black Germans who spent time in concentration camps and prisons—and at least one who was a euthanasia victim. The one case we have of a black person being sent to a concentration camp explicitly for being a Mischling (mulatto)—Gert Schramm, interned in Buchenwald aged 15—comes from 1944.
Instead, the process that ended with incarceration usually began with a charge of deviant or antisocial behavior. Being black made people visible to the police, and it became a reason not to release them once they were in custody.
In this respect, we can see black people as victims not of a peculiarly Nazi racism, but of an intensified version of the kinds of everyday racism that persist today.
Sterilization: An Assault on Families
It was the Nazi fear of “racial pollution” that led to the most common trauma suffered by black Germans: the breakup of families. “Mixed” couples were harassed into separating. When others applied for marriage licenses, or when a woman was known to be pregnant or had a baby, the black partner became a target for involuntary sterilization.
In a secret action in 1937, some 400 of the Rhineland children were forcibly sterilized. Other black Germans went into hiding or fled the country to escape sterilization, while news of friends and relatives who had not escaped intensified the fear that dominated people’s lives.
The black German community was new in 1933; in most families the first generation born in Germany was just coming of age. In that respect it was similar to the communities in France and Britain that were forming around families founded by men from the colonies.
Nazi persecution broke those families and the ties of community. One legacy of that was a long silence about the human face of Germany’s colonial history: the possibility that black and white Germans could share a social and cultural space.
That silence helps to explain Germans’ mixed responses to today’s refugee crisis. The welcome offered by the German chancellor, Angela Merkel, and many ordinary Germans has given voice to the liberal humanitarianism that was always present in German society and was reinforced by the lessons of the Holocaust.
The reaction against refugees reveals the other side of the coin: Germans who fear immigration are not alone in Europe. But their anxieties draw on a vision that has remained very powerful in German society since 1945: the idea that however deserving they are, people who are not white cannot be German.
The Thirteenth Amendment and Slavery in a Global Economy
Tobias Barrington Wolff
From Race, Racism, and Law:
The Thirteenth Amendment to the U.S. Constitution provides that "Neither slavery nor involuntary servitude, except as a punishment for crime whereof the party shall have been duly convicted, shall exist within the United States, or any place subject to their jurisdiction." Its language is capacious and its mandate broad. The prohibition embodied in the Amendment, not limited to the form of chattel slavery peculiar to pre-Civil War America, forbids almost all forms of compelled labor within the physical bounds of the United States and its possessions. But what of slave labor outside U.S. territory? When U.S. citizens participate in slave practices in foreign jurisdictions, does the Thirteenth Amendment impose any interdiction? Can a U.S. citizen own a slave, so long as he does not bring the enslaved person to American shores?
These questions are emerging as matters of great importance, for the participation of U.S. citizens in foreign slave practices is on the rise. With increasing frequency, U.S.-based multinational corporations are carrying on their operations in some countries through the deliberate exploitation of involuntary or slave labor. The globalization of industry has carried with it a globalization of labor exploitation, occurring outside the ordinary jurisdiction of U.S. enforcement authorities. In countries such as Burma, Mauritania, Pakistan, and Ivory Coast, outright practices of slave labor have arisen, in varying forms and with varying levels of corporate involvement. American participation in such exploitation is often carried out indirectly--through intermediaries, with plausible deniability. And yet the abuse of foreign laborers under conditions of slavery is assuming an increasingly important position in the economics of some U.S. industries-- notable among them the resource extraction and manufacturing industries, where the use of cheap, expendable, involuntary labor has markedly increased profitability.
This development in the foreign labor practices of U.S. entities heralds a new era of challenge and transformation for the Thirteenth Amendment and its prohibition on the existence of slavery or involuntary servitude. It has become necessary to reexamine the range of activities in American industry, and American participation in global industry, that the Amendment reaches. The inquiry is long overdue. Despite the importance of the principle that the Thirteenth Amendment embodies, its doctrinal landscape is severely underdeveloped and has not yet been meaningfully translated into the present industrial context.
The Amendment has faced such challenges before. One of the first came around the turn of the twentieth century, in response to the attempts of post-Civil War landowners and industrialists to reinstate the practical realities of slavery in a more legally palatable form through the practice of peonage. No longer able to exploit slave labor as a formal institution, some employers pressed the law into service in the decades following emancipation, enacting statutes that purported to aim at such evils as debt default and fraud but had the effect of tying disempowered workers to forced terms of labor under threat of prosecution and imprisonment. The Supreme Court rose to this challenge, elevating substance over form and striking down these peonage schemes. In doing so, it carried forward into a new industrial context the Thirteenth Amendment's dual promise to emancipate the slave laborer within American industry and to emancipate American industry from slave labor.
The present era of globalization has brought with it the next logical step in this progression: the pressing into involuntary service of foreign laborers by U.S.-based multinational entities. Corporations based in the United States can now export the slave-dependent elements of their business operations to foreign lands and then retrieve the fruits of those operations for domestic use and profit. With that step, we are once again seeing the reintroduction of slave labor into American industry. It has thus become necessary once again to translate the command of the Thirteenth Amendment for a new industrial context.
I choose the language of translation advisedly. As Professor Guyora Binder has observed, anyone seeking to articulate a coherent approach to modern interpretations of the Thirteenth Amendment must address difficult questions of history. The enactment of the Reconstruction Amendments undermined the precepts on which earlier approaches to constitutional interpretation had rested, throwing into question the proper interpretive approach to the Amendments themselves. "It was the Reconstruction Amendments' command to abolish one of American culture's defining customs," Professor Binder has observed, "that rendered them peculiarly uninterpretable."
In the case of the Fourteenth Amendment, this interpretive dilemma has already played out on the constitutional stage. The road from Plessy v. Ferguson to Brown v. Board of Education marked a journey between two distinct visions of the relationship between tradition and constitutional analysis. In Plessy, the Court explicitly rested its rejection of the equal protection challenge to legally enforced segregation upon "the established usages, customs and traditions of the people." Under that tradition, the Court explained, a separation of the races in public facilities could be defended as "reasonable, and . . . enacted in good faith for the promotion of the public good." Custom and usage were a sufficient response to a constitutional challenge under the dispensation to which the Plessy majority subscribed. One of the revolutionary changes wrought by Brown was a deliberate rejection of this interpretive method. "In approaching this problem [of segregation]," the Court wrote in Brown,
we cannot turn the clock back to 1868 when the Amendment was adopted, or even to 1896 when Plessy v. Ferguson was written. We must consider public education in the light of its full development and its present place in American life throughout the Nation. Only in this way can it be determined if segregation in public schools deprives these plaintiffs of the equal protection of the laws.
Thus, in concluding that legally enforced segregation in educational facilities is "inherently unequal," the Brown Court dramatically rejected custom and tradition, holding that the Fourteenth Amendment embodied substantive principles that do not automatically defer to established social norms. A similar observation may be made about the Fifteenth Amendment, which has occupied an interpretive landscape that has recapitulated that of the Fourteenth in most relevant respects.
In the case of the Thirteenth Amendment, the interpretive problem has at once been more straightforward and more opaque. There has never been any question that the Amendment, in eradicating slavery and elevating emancipation to the status of a constitutional imperative, embodied a substantive rejection of one of America's most pervasive customs and traditions. In that respect, the Thirteenth Amendment directly implemented a reshaping of the constitutional landscape that would only take hold in the other Reconstruction Amendments after the passage of ninety more years. But in a broader sense, the Thirteenth Amendment has yet to travel the road marked out by Plessy and Brown. Consider Robertson v. Baldwin, one of the early post-Reconstruction Thirteenth Amendment decisions, which the Court handed down in the Term following Plessy. In Robertson, a merchant seaman challenged a federal statute that authorized the imprisonment and forcible return of sailors who wished to leave the employ of their vessels. In rejecting this Thirteenth Amendment claim, the majority embraced the same interpretive method that it had recently deployed in its Fourteenth Amendment analysis in Plessy. Despite the radical rejection of tradition around the subject of slavery and labor that was inherent in the Thirteenth Amendment itself, the Robertson Court relied uncritically upon pre-Civil War common law authorities to carve out a substantive exception to the scope of the Amendment's command, concluding that "the amendment was not intended to introduce any novel doctrine with respect to certain descriptions of service which have always been treated as exceptional." Indeed, Robertson includes a vigorous dissent by Justice Harlan, who staked out the same interpretive ground that he had occupied in his Plessy dissent a year earlier, rejecting the use to which the Robertson majority put custom and tradition as inappropriate following the enactment of the Thirteenth Amendment.
To articulate a Thirteenth Amendment jurisprudence that is both internally coherent and in step with the interpretive method now firmly established for the Fourteenth and Fifteenth Amendments, one must avoid a myopic hindsight that views the Amendment as accomplishing nothing more than the constitutionalization of emancipation. This is especially so in seeking out sources to identify the core values that the Amendment embodies. Binder poses the problem in the following terms: "When the Constitution condemns society, where can we turn for aid in construing it? What aspects of American society authorize the Thirteenth Amendment and what aspects are amended by it? What was the essential feature of the slavery that the Thirteenth Amendment commands us to disestablish?" The Court made an initial gesture toward answering these interpretive questions early in the twentieth century when it employed the Thirteenth Amendment to strike down the peonage and "antifraud" statutes mentioned above. But since then, despite the interpretive revolution in the other Reconstruction Amendments heralded by Brown and its progeny, the Court has developed no approach to two basic questions of interpretation: "What was the essential feature of the slavery that the Thirteenth Amendment commands us to disestablish"; and "[W]here can we turn for aid in construing it?"
This Article examines the most pressing contemporary application of these questions: the increasingly important role played by multinational corporate entities in forced labor practices around the globe. In doing so, it offers an approach to addressing the broader implications of the Thirteenth Amendment's interpretive challenge. My principal contention is that the Thirteenth Amendment forbids the deliberate incorporation of slave labor into American industry. More precisely, I contend that the knowing use of slave labor by U.S.-based entities in their foreign operations constitutes the presence of "slavery" within the United States, as that term is used in the Thirteenth Amendment, and hence that this practice renders such U.S. entities subject to the prohibitory authority of American courts through a private civil action. The term "slavery" as it is used in the Amendment entails more than the physical presence of enslaved individuals. Slavery is a multilayered practice. It creates a distinctive form of interpersonal relationship. It depends upon the existence of interrelated, supporting institutions for its sustainability. And it arises not by happenstance, but in response to the urging of industries that benefit from its distinctive features and intentionally create a market for it. Those who drafted the Thirteenth Amendment understood all three of these aspects of slavery--the interpersonal, the institutional, and the industrial--to be vital elements of the practice that they sought to eradicate with the Amendment's enactment. In this Article, I hope to begin the process of translating that understanding into the language of the global economy and, in the process, to lay the foundation for a more modern and salient Thirteenth Amendment jurisprudence.[...]