welcome to history
history you should know
*U.S. GUILTY OF MORE ELECTION MEDDLING THAN RUSSIA, HAS DONE TO OTHER COUNTRIES FOR OVER A CENTURY (ARTICLE BELOW)
*COARD: AMERICA'S 12 SLAVEHOLDING PRESIDENTS (ARTICLE BELOW)
*A HISTORIAN EXPLAINS WHY ROBERT E. LEE WASN’T A HERO — HE WAS A TRAITOR (ARTICLE BELOW)
*ANDREW JOHNSON’S FAILED PRESIDENCY ECHOES IN TRUMP’S WHITE HOUSE (ARTICLE BELOW)
*ON ITS HUNDREDTH BIRTHDAY IN 1959, EDWARD TELLER WARNED THE OIL INDUSTRY ABOUT GLOBAL WARMING (ARTICLE BELOW)
*WHITE NEWSPAPER JOURNALISTS EXPLOITED RACIAL DIVISIONS TO HELP BUILD THE GOP’S SOUTHERN FIREWALL IN THE 1960S (ARTICLE BELOW)
*THE TWO MEN WHO ALMOST DERAILED NEW ENGLAND’S FIRST COLONIES (ARTICLE BELOW)
*UNITED DAUGHTERS OF THE CONFEDERACY LED A CAMPAIGN TO BUILD MONUMENTS AND WHITEWASH HISTORY (ARTICLE BELOW)
*HOW THE US GOVERNMENT CREATED AND CODDLED THE GUN INDUSTRY (ARTICLE BELOW)
*A HISTORIAN DESTROYS RACISTS’ FAVORITE MYTHS ABOUT THE VIKINGS (ARTICLE BELOW)
*DONALD TRUMP, JEWS AND THE MYTH OF RACE: HOW JEWS GRADUALLY BECAME “WHITE,” AND HOW THAT CHANGED AMERICA (ARTICLE BELOW)
*THE REAL IRISH-AMERICAN STORY NOT TAUGHT IN SCHOOLS (ARTICLE BELOW)
*ROE V. WADE: ABORTION WAS OUTLAWED SO DOCTORS COULD MAKE MONEY (ARTICLE BELOW)
*CHOMSKY ON AMERICA'S UGLY HISTORY: FDR WAS FASCIST-FRIENDLY BEFORE WWII (ARTICLE BELOW)
*HISTORICAL SNAPSHOT OF CAPITALISM AND IMPERIALISM (ARTICLE BELOW)
*WHY ALLEN DULLES KILLED THE KENNEDYS (ARTICLE BELOW)
*THE REAL (DESPICABLE) THOMAS JEFFERSON (ARTICLE BELOW)
*THE SECRET HISTORY OF HITLER’S ‘BLACK HOLOCAUST’ (ARTICLE BELOW)
*THE THIRTEENTH AMENDMENT AND SLAVERY IN A GLOBAL ECONOMY (ARTICLE BELOW)
U.S. Guilty of More Election Meddling Than Russia, Has Done To Other Countries For Over A Century
By David Love, Atlanta Black Star
March 18, 2018
With news of Russian meddling in the 2016 U.S. presidential election seizing the spotlight each day, America is now faced with the prospect of being victimized by the same practices it has promoted throughout the world. While Russia is receiving attention for its interference in the internal politics of the United States, Britain and other European nations, Uncle Sam has a long history of disrupting foreign governments, engaging in regime change, deposing elected leaders and even assassinating them.
Comparing the United States and Russia and their respective histories of overt and covert election influence in other countries, the former wins decisively. Carnegie Mellon University researcher Dov H. Levin has created a data set in which he found between 1946 and 2000, the U.S. interfered in foreign elections 81 times, while the Soviet Union and later Russia meddled on 36 occasions. “I’m not in any way justifying what the Russians did in 2016,” Levin told The New York Times. “It was completely wrong of Vladimir Putin to intervene in this way. That said, the methods they used in this election were the digital version of methods used both by the United States and Russia for decades: breaking into party headquarters, recruiting secretaries, placing informants in a party, giving information or disinformation to newspapers.”
Africa and the Caribbean provide ample evidence of a history of U.S. meddling in the elections of other nations. For example, Patrice Lumumba, the first democratically elected prime minister of the Democratic Republic of the Congo, was overthrown and assassinated in 1961 by the Belgians, who were reportedly aided and abetted by the CIA. The U.S. also had its own plan, never implemented, to assassinate Lumumba by lacing his toothpaste with poison, among other schemes to remove him from power.
In 1966, the CIA was involved in the overthrow of Ghanaian President Kwame Nkrumah by way of a military coup. According to CIA intelligence officer John Stockwell in the book “In Search of Enemies,” the Accra office of the CIA had a “generous budget” and was encouraged by headquarters to maintain close contact with the coup plotters. According to a declassified U.S. government document, “The coup in Ghana is another example of a fortuitous windfall. Nkrumah was doing more to undermine our interests than any other black African. In reaction to his strongly pro-Communist leanings, the new military regime is almost pathetically pro-Western.” CIA participation in the coup was reportedly undertaken without approval from an interagency group that monitors clandestine CIA operations.
The U.S. Marines were on hand in 1912 to assist the Cuban government in destroying El Partido Independiente de Color (PIC), or the Independent Party of Color, which was formed by descendants of slaves and became the first 20th-century Black political party in the Western Hemisphere outside of Haiti. PIC — which believed in racial pride and equal rights for Black people — engaged in protest after the Cuban government banned the race-based party from participating in elections. In putting down the PIC, the United States invoked the Platt Amendment, which allowed American intervention in Cuban affairs. The military action from U.S. and Cuban forces resulted in the massacre of 6,000 Black people.
The U.S. occupied the Dominican Republic twice — from 1916 until 1924, controlling the government and who became president, and again in 1965, opposing elected president Juan Bosch, supporting a military coup and installing Joaquin Balaguer. President Reagan took advantage of the assassination of Prime Minister Maurice Bishop of Grenada and orchestrated a long-planned invasion of the Caribbean nation in 1983, justifying the invasion on the grounds that the regime was anti-American and supported by Cuba.
America is known for a high degree of intervention and coup sponsorship in Haiti, with military occupations, support for brutal dictators, and the ousting of President Jean-Bertrand Aristide under both Presidents George H.W. and George W. Bush. In 2009, during the Obama administration, Honduran President Manuel Zelaya was overthrown in a military coup and forced to fly to a U.S. military base at gunpoint and in his pajamas in an act of American-endorsed regime change. Although there was no evidence the Obama administration was involved in the coup, it did nothing to stop it. The United States called for new elections rather than declaring a coup had taken place, and contributed to the subsequent deterioration and violence in Honduras. Elsewhere in Latin America, the U.S. was involved in the 1954 overthrow of Jacobo Arbenz Guzman, the democratically elected president of Guatemala, to protect the profits of the United Fruit Company.
In 1953, the CIA, with help from Great Britain, engineered a coup in Iran, overthrowing the democratically elected Prime Minister Mohammed Mossadegh and installing a puppet regime under the Shah. The coup came after Mossadegh nationalized the British Anglo-Iranian Oil Company, later known as BP. In 1965, the U.S. Embassy supported the rise to power of Indonesia’s brutal dictator General Suharto, and enabled his massacre of over half a million Indonesians. Ten years later, the U.S. helped Suharto with political and military support in his invasion of East Timor, which had declared independence from Portugal. The Indonesian occupation killed more than 200,000 Timorese, one third of the population.
With the 2016 presidential election, America now has experienced having another country meddle in its internal affairs — something which the U.S. has perpetrated against other nations for more than a century.
Coard: America's 12 Slaveholding Presidents
By Michael Coard, The Philadelphia Tribune
March 10, 2018
In my Freedom’s Journal columns on February 24 and March 3 here in The Philadelphia Tribune, I exposed the lies about President George Washington’s supposed wooden teeth and Thomas Jefferson’s supposed innocently romantic love affair with Sally Hemings.
Washington’s teeth were actually yanked from the mouths of our enslaved ancestors and Jefferson actually raped Sally repeatedly while she was just a child.
In response to both columns, white racists went certifiably crazy (I mean crazier) and denied and yelled and screamed and hollered and insulted. They also trolled on social media. Unfortunately for them, they’re gonna need a straitjacket after reading this.
This week’s topic is about the twelve United States presidents who enslaved Black men, women, boys, and girls. And before you crazy racists start talking nonsense about those so-called “great” patriots simply being “men of their times,” you need to know that the anti-slavery movement amongst good white folks began in the 1730s and spread throughout the Thirteen Colonies as a result of the abolitionist activities during the First Great Awakening, which was early America’s Christian revival movement. Furthermore, the anti-slavery gospel of the Second Great Awakening was all over the nation from around 1790 through the 1850s.
America is and always has been a Christian country, right? Therefore, if the Christian revivalists could rise above that slaveholding time, why couldn’t those twelve presidents who led this Christian country?
Beyond the religious abolitionist movement, the secular abolitionist movement was in full effect in the 1830s, thanks to the likes of the great newspaper publisher William Lloyd Garrison. Presidents knew how to read, right?
By the way, John Adams, the second president (from 1797-1801) and his son John Quincy Adams, the sixth president (from 1825-1829), never enslaved anybody. And they certainly were men of their times. Maybe they knew slavery was, is, and forever will be evil and inhumane.
Here are the evil and inhumane 12 slaveholding presidents listed from bad to worse to worst:
12. Martin Van Buren, the eighth president, enslaved 1 but not during his presidency. By the way, that 1 escaped.
11. Ulysses S. Grant, the eighteenth president, enslaved 5 but not during his presidency. In office from 1869-1877, he was the last slaveholding president.
10. Andrew Johnson, the seventeenth president, enslaved 8 but not during his presidency. However, when he was Military Governor of Tennessee, he persuaded President Abraham Lincoln to remove that state from those subject to “Honest Abe’s” Emancipation Proclamation.
9. William Henry Harrison, the ninth president, enslaved 11 but not during his presidency. However, as Governor of the Indiana Territory, he petitioned Congress to make slavery legal there. Fortunately, he was unsuccessful.
8. James K. Polk, the eleventh president, enslaved 25 and held many of them during his presidency. He also stole much of Mexico from the Mexicans during the 1846-1848 war in which those Brown people were robbed of California and almost all of today’s Southwest.
7. John Tyler, the tenth president, enslaved 70 and held many of them during his presidency. He was a states’ rights bigot and a jingoist flag-waver who robbed Mexico of Texas in 1845.
6. James Monroe, the fifth president, enslaved 75 and held many of them during his presidency. He hated Blacks so much that he wanted them sent back to Africa. That’s why he supported the racist American Colonization Society, robbed West Africans of a large piece of coastal land in 1821, and created a colony that later became Liberia. Liberia’s capital, Monrovia, is named after that racist thug.
5. James Madison, the fourth president, enslaved approximately 100-125 and did so during his presidency. He’s the very same guy who proposed the Constitution’s Three-Fifths Clause.
4. Zachary Taylor, the twelfth president, enslaved approximately 150 and held many of them during his presidency. During his run for president in 1849, he campaigned on and bragged about his wholesale slaughter of Brown people when he was a Major General in the Mexican-American War. And white folks in America elected him.
3. Andrew Jackson, the seventh president, enslaved 150-200 and held many of them during his presidency. By the way, Jackson, nicknamed “Indian Killer,” whom fake President Donald Trump describes as his all-time favorite, wasn’t just a brutal slaveholder. He was also a genocidal monster who was responsible for the slaughter of approximately 30,000-50,000 Red men, women, and children. Moreover, he signed the horrific Indian Removal Act of 1830 that robbed the indigenous people of 25 million acres of fertile land and doomed them and their descendants to reservation ghettos.
2. Thomas Jefferson, the third president, enslaved 267 and held many of them during his presidency. For more info about this child rapist, read my March 3 column.
1. George Washington, the first president, enslaved 316 and held many of them during his presidency. For more info about the man whose teeth were “yanked from the heads of his slaves,” read my February 24 column.
A historian explains why Robert E. Lee wasn’t a hero — he was a traitor
November 20, 2019
By History News Network, Commentary, via Raw Story
There’s a fabled moment from the Battle of Fredericksburg, a gruesome Civil War battle that extinguished several thousand lives, when the commander of a rebel army looked down upon the carnage and said, “It is well that war is so terrible, or we should grow too fond of it.” That commander, of course, was Robert Lee.
The moment is the stuff of legend. It captures Lee’s humility (he won the battle), compassion, and thoughtfulness. It casts Lee as a reluctant leader who had no choice but to serve his people, and who might have had second thoughts about doing so given the conflict’s tremendous amount of violence and bloodshed. The quote, however, is misleading. Lee was no hero. He was neither noble nor wise. Lee was a traitor who killed United States soldiers, fought for human enslavement, vastly increased the bloodshed of the Civil War, and made embarrassing tactical mistakes.
1) Lee was a traitor
Robert Lee was the nation’s most notable traitor since Benedict Arnold. Like Arnold, Robert Lee had an exceptional record of military service before his downfall. Lee was a hero of the Mexican-American War and played a crucial role in its final, decisive campaign to take Mexico City. But when he was called on to serve again—this time against violent rebels who were occupying and attacking federal forts—Lee failed to honor his oath to defend the Constitution. He resigned from the United States Army and quickly accepted a commission in a rebel army based in Virginia. Lee could have chosen to abstain from the conflict—it was reasonable to have qualms about leading United States soldiers against American citizens—but he did not abstain. He turned against his nation and took up arms against it. How could Lee, a lifelong soldier of the United States, so quickly betray it?
2) Lee fought for slavery
Robert Lee understood as well as any other contemporary the issue that ignited the secession crisis. Wealthy white plantation owners in the South had spent the better part of a century slowly taking over the United States government. With each new political victory, they expanded human enslavement further and further until the oligarchs of the Cotton South were the wealthiest single group of people on the planet. It was a kind of power and wealth they were willing to kill and die to protect.
According to the Northwest Ordinance of 1787, new lands and territories in the West were supposed to be free while large-scale human enslavement remained in the South. In 1820, however, Southerners amended that rule by dividing new lands between a free North and slave South. In the 1830s, Southerners used their inflated representation in Congress to pass the Indian Removal Act, an obvious and ultimately successful effort to take fertile Indian land and transform it into productive slave plantations. The Compromise of 1850 forced Northern states to enforce fugitive slave laws, a blatant assault on the rights of Northern states to legislate against human enslavement. In 1854, Southerners moved the goal posts again and decided that residents in new states and territories could decide the slave question for themselves. Violent clashes between pro- and anti-slavery forces soon followed in Kansas.
The South’s plans to expand slavery reached a crescendo in 1857 with the Dred Scott Decision. In the decision, the Supreme Court ruled that since the Constitution protected property and enslaved humans were considered property, territories could not make laws against slavery.
The details are less important than the overall trend: in the seventy years after the Constitution was written, a small group of Southern oligarchs took over the government and transformed the United States into a pro-slavery nation. As one young politician put it, “We shall lie pleasantly dreaming that the people of Missouri are on the verge of making their State free; and we shall awake to the reality, instead, that the Supreme Court has made Illinois a slave State.”
The ensuing fury over the expansion of slave power in the federal government prompted a historic backlash. Previously divided Americans rallied behind a new political party and the young, brilliant politician quoted above. Abraham Lincoln presented a clear message: should he be elected, the federal government would no longer legislate in favor of enslavement, and would work to stop its expansion into the West.
Lincoln’s election in 1860 was not simply a single political loss for slaveholding Southerners. It represented a collapse of their minority political dominance of the federal government, without which they could not maintain and expand slavery to the full extent of their desires. Foiled by democracy, Southern oligarchs disavowed it and declared independence from the United States.
Their rebel organization—the “Confederate States of America,” a cheap imitation of the United States government stripped of its language of equality, freedom, and justice—did not care much for states’ rights. States in the Confederacy forfeited both the right to secede from it and the right to limit or eliminate slavery. What really motivated the new CSA was not only obvious, but repeatedly declared. In their articles of secession, which explained their motivations for violent insurrection, rebel leaders in the South cited slavery. Georgia cited slavery. Mississippi cited slavery. South Carolina cited the “increasing hostility… to the institution of slavery.” Texas cited slavery. Virginia cited the “oppression of… Southern slaveholding.” Alexander Stephens, the second in command of the rebel cabal, declared in his Cornerstone Speech that they had launched the entire enterprise because the Founding Fathers had made a mistake in declaring that all people are made equal. “Our new government is founded upon exactly the opposite idea,” he said. People of African descent were supposed to be enslaved.
Despite making a few cryptic comments about how he refused to fight his fellow Virginians, Lee would have understood exactly what the war was about and how it served wealthy white men like him. Lee was a slave-holding aristocrat with ties to George Washington. He was the face of Southern gentry, a kind of pseudo royalty in a land that had theoretically extinguished it. The triumph of the South would have meant the triumph not only of Lee, but everything he represented: that tiny, self-defined perfect portion at the top of a violently unequal pyramid.
The moment is the stuff of legend. It captures Lee’s humility (he won the battle), compassion, and thoughtfulness. It casts Lee as a reluctant leader who had no choice but to serve his people, and who might have had second thoughts about doing so given the conflict’s tremendous violence and bloodshed. The quote, however, is misleading. Lee was no hero. He was neither noble nor wise. Lee was a traitor who killed United States soldiers, fought for human enslavement, vastly increased the bloodshed of the Civil War, and made embarrassing tactical mistakes.
1) Lee was a traitor
Robert Lee was the nation’s most notable traitor since Benedict Arnold. Like Arnold, Robert Lee had an exceptional record of military service before his downfall. Lee was a hero of the Mexican-American War and played a crucial role in its final, decisive campaign to take Mexico City. But when he was called on to serve again—this time against violent rebels who were occupying and attacking federal forts—Lee failed to honor his oath to defend the Constitution. He resigned from the United States Army and quickly accepted a commission in a rebel army based in Virginia. Lee could have chosen to abstain from the conflict—it was reasonable to have qualms about leading United States soldiers against American citizens—but he did not abstain. He turned against his nation and took up arms against it. How could Lee, a lifelong soldier of the United States, so quickly betray it?
2) Lee fought for slavery
Robert Lee understood as well as any other contemporary the issue that ignited the secession crisis. Wealthy white plantation owners in the South had spent the better part of a century slowly taking over the United States government. With each new political victory, they expanded human enslavement further and further until the oligarchs of the Cotton South were the wealthiest single group of people on the planet. It was a kind of power and wealth they were willing to kill and die to protect.
According to the Northwest Ordinance of 1787, new lands and territories in the West were supposed to be free while large-scale human enslavement remained in the South. In 1820, however, Southerners amended that rule by dividing new lands between a free North and slave South. In the 1830s, Southerners used their inflated representation in Congress to pass the Indian Removal Act, an obvious and ultimately successful effort to take fertile Indian land and transform it into productive slave plantations. The Compromise of 1850 forced Northern states to enforce fugitive slave laws, a blatant assault on the rights of Northern states to legislate against human enslavement. In 1854, Southerners moved the goal posts again and decided that residents in new states and territories could decide the slave question for themselves. Violent clashes between pro- and anti-slavery forces soon followed in Kansas.
The South’s plans to expand slavery reached a crescendo in 1857 with the Dred Scott Decision. In the decision, the Supreme Court ruled that since the Constitution protected property and enslaved humans were considered property, territories could not make laws against slavery.
The details are less important than the overall trend: in the seventy years after the Constitution was written, a small group of Southern oligarchs took over the government and transformed the United States into a pro-slavery nation. As one young politician put it, “We shall lie down pleasantly dreaming that the people of Missouri are on the verge of making their State free; and we shall awake to the reality, instead, that the Supreme Court has made Illinois a slave State.”
The ensuing fury over the expansion of slave power in the federal government prompted a historic backlash. Previously divided Americans rallied behind a new political party and the young, brilliant politician quoted above. Abraham Lincoln presented a clear message: should he be elected, the federal government would no longer legislate in favor of enslavement, and would work to stop its expansion into the West.
Lincoln’s election in 1860 was not simply a single political loss for slaveholding Southerners. It represented a collapse of their minority political dominance of the federal government, without which they could not maintain and expand slavery to the full extent of their desires. Foiled by democracy, Southern oligarchs disavowed it and declared independence from the United States.
Their rebel organization—the “Confederate States of America,” a cheap imitation of the United States government stripped of its language of equality, freedom, and justice—did not care much for states’ rights. States in the Confederacy forfeited both the right to secede from it and the right to limit or eliminate slavery. What really motivated the new CSA was not only obvious, but repeatedly declared. In their articles of secession, which explained their motivations for violent insurrection, rebel leaders in the South cited slavery. Georgia cited slavery. Mississippi cited slavery. South Carolina cited the “increasing hostility… to the institution of slavery.” Texas cited slavery. Virginia cited the “oppression of… Southern slaveholding.” Alexander Stephens, the second in command of the rebel cabal, declared in his Cornerstone Speech that they had launched the entire enterprise because the Founding Fathers had made a mistake in declaring that all people are made equal. “Our new government is founded upon exactly the opposite idea,” he said. People of African descent were supposed to be enslaved.
Despite making a few cryptic comments about how he refused to fight his fellow Virginians, Lee would have understood exactly what the war was about and how it served wealthy white men like him. Lee was a slave-holding aristocrat with ties to George Washington. He was the face of Southern gentry, a kind of pseudo royalty in a land that had theoretically extinguished it. The triumph of the South would have meant the triumph not only of Lee, but everything he represented: that tiny, self-defined perfect portion at the top of a violently unequal pyramid.
Yet even if Lee disavowed slavery and fought only for some vague notion of states’ rights, would that have made a difference? War is a political tool that serves a political purpose. If the purpose of the rebellion was to create a powerful, endless slave empire (it was), then do the opinions of its soldiers and commanders really matter? Each victory of Lee’s, each rebel bullet that felled a United States soldier, advanced the political cause of the CSA. Had Lee somehow defeated the United States Army, marched to the capital, killed the President, and won independence for the South, the result would have been the preservation of slavery in North America. There would have been no Thirteenth Amendment. Lincoln would not have overseen the emancipation of four million people, the largest single emancipation event in human history. Lee’s successes were the successes of the Slave South, personal feelings be damned.
If you need more evidence of Lee’s personal feelings on enslavement, however, note that when his rebel forces marched into Pennsylvania, they kidnapped black people and sold them into bondage. Contemporaries referred to these kidnappings as “slave hunts.”
3) Lee was not a military genius
Despite a mythology around Lee being the Napoleon of America, Lee blundered his way to a surrender. To be fair to Lee, his early victories were impressive. Lee earned command of the largest rebel army in 1862 and quickly put his experience to work. His interventions at the end of the Peninsula Campaign and his aggressive flanking movements at the Battle of Second Manassas ensured that the United States Army could not achieve a quick victory over rebel forces. At Fredericksburg, Lee also demonstrated a keen understanding of how to establish a strong defensive position, and foiled another US offensive. Lee’s shining moment came later at Chancellorsville, when he again maneuvered his smaller but more mobile force to flank and rout the US Army. Yet Lee’s broader strategy was deeply flawed, and ended with his most infamous blunder.
Lee should have recognized that the objective of his army was not to defeat the larger United States forces that he faced. Rather, he needed to simply prevent those armies from taking Richmond, the city that housed the rebel government, until the United States government lost support for the war and sued for peace. New military technology that greatly favored defenders would have bolstered this strategy. But Lee opted for a different strategy, taking his army and striking northward into areas that the United States government still controlled.
It’s tempting to think that Lee’s strategy was sound and could have delivered a decisive blow, but it’s far more likely that he was starting to believe that his men truly were superior and that his army was essentially unstoppable, as many supporters in the South were openly speculating. Even the Battle of Antietam, an aggressive invasion that ended in a terrible rebel loss, did not dissuade Lee from this thinking. After Chancellorsville, Lee marched his army into Pennsylvania, where he ran into the United States Army at the town of Gettysburg. After a few days of fighting ended in a stalemate, Lee decided against withdrawing as he had done at Antietam. Instead, he doubled down on his aggressive strategy and ordered a direct assault over open terrain straight into the heart of the US Army’s lines.
The result—several thousand casualties—was devastating. It was a crushing blow and a terrible military decision from which Lee and his men never fully recovered. The loss also bolstered support for the war effort and Lincoln in the North, almost guaranteeing that the United States would not stop short of a total victory.
4) Lee, not Grant, was responsible for the staggering losses of the Civil War
The Civil War dragged on even after Lee’s horrific loss at Gettysburg. Even after it was clear that the rebels were in trouble, with white women in the South rioting for bread, conscripted men deserting, and thousands of enslaved people self-emancipating, Lee and his men dug in and continued to fight. Only after going back on the defensive—that is, digging in on hills and building massive networks of trenches and fortifications—did Lee start to achieve lopsided results again. Civil War enthusiasts often point to the resulting carnage as evidence that Ulysses S. Grant, the new General of the entire United States Army, did not care about the terrible losses and should be criticized for how he threw wave after wave of men at entrenched rebel positions. In reality, however, the situation was completely of Lee’s making.
As Grant doggedly pursued Lee’s forces, he did his best to flush Lee into an open field for a decisive battle, like at Antietam or Gettysburg. Lee refused to accept, however, knowing that a crushing loss likely awaited him. Lee also could have abandoned the area around the rebel capital and allowed the United States to achieve a moral and political victory. Both of these options would have drastically reduced the loss of life on both sides and ended the war earlier. Lee chose neither option. Rather, he maneuvered his forces in such a way that they always had a secure, defensive position, daring Grant to sacrifice more men. When Grant did this and overran the rebel positions, Lee pulled back and repeated the process. The result was the most gruesome period of the war. It was not uncommon for dead bodies to be stacked upon each other after waves of attacks and counterattacks clashed at the same position. At the Wilderness, the forest caught fire, trapping wounded men from both sides in the inferno. Their comrades listened helplessly to the screams as the men in the forest burned alive.
To his credit, when the war was truly lost—the rebel capital sacked (burned by retreating rebel soldiers), the infrastructure of the South in ruins, and Lee’s army chased one hundred miles into the west—Lee chose not to engage in guerrilla warfare and surrendered, though the decision was likely based on image more than a concern for human life. He showed up to Grant’s camp, after all, dressed in a new uniform and riding a white horse. So ended the military career of Robert Lee, a man responsible for the deaths of more United States soldiers than any single commander in history.
So why, after all of this, do some Americans still celebrate Lee? Well, many white Southerners refused to accept the outcome of the Civil War. After years of terrorism, local political coups, wholesale massacres, and lynchings, white Southerners were able to retake power in the South. While they erected monuments to war criminals like Nathan Bedford Forrest to send a clear message to would-be civil rights activists, white Southerners also needed someone who represented the “greatness” of the Old South, someone of whom they could be proud. They turned to Robert Lee.
But Lee was not great. In fact, he represented the very worst of the Old South, a man willing to betray his republic and slaughter his countrymen to preserve a violent, unfree society that elevated him and just a handful of others like him. He was the gentle face of a brutal system. And for all his acclaim, Lee was not a military genius. He was a flawed aristocrat who fell in love with the mythology of his own invincibility.
After the war, Robert Lee lived out the remainder of his days. He was neither arrested nor hanged. But it is up to us how we remember him. Memory is often the trial that evil men never received. Perhaps we should take a page from the United States Army of the Civil War, which needed to decide what to do with the slave plantation it seized from the Lee family. Ultimately, the Army decided to use Lee’s land as a cemetery, transforming the land from a site of human enslavement to a final resting place for United States soldiers who died to make men free. You can visit that cemetery today. After all, who hasn’t heard of Arlington Cemetery?
Andrew Johnson’s failed presidency echoes in Trump’s White House
The Conversation - raw story
19 FEB 2018 AT 07:43 ET
The two have much in common. Like Trump, Johnson followed an unconventional path to the presidency.
A Tennessean and lifelong Democrat, Johnson was a defender of slavery and white supremacy but also an uncompromising Unionist. He was the only U.S. senator from the South who opposed his state’s move to secede in 1861. That made him an unconventional yet attractive choice as Abraham Lincoln’s running mate in 1864 when Republicans took on the mantle of the “Union Party” to win support from Democrats alienated by their own party’s anti-war stance.
Six weeks after Johnson became vice president, an assassin’s bullet killed Lincoln and catapulted Johnson to the presidency.
Republican reception mixed for Johnson
Like Trump, Johnson entered the White House with a mixture of skepticism and support among Republicans who controlled Congress. He was a Southerner and a Democrat, which concerned them. But his courageous Unionism and blunt criticism of rebels – “Treason must be made infamous and traitors punished,” he proclaimed – impressed congressional leaders, who believed that he would take a firm approach to the South.
Through bold assertion of executive authority, Johnson quickly reconstructed state and local governments in the South. Because African-Americans were excluded from the process, these regimes were controlled by Southern whites, most of whom had been loyal Confederates. Predictably, they adopted laws designed to keep African-Americans in a servile position.
Southern officials also stood by and even abetted whites who unleashed a wave of violence against former slaves. For example, in July 1866, New Orleans police participated in a massacre that left 37 African-Americans and white Unionists dead and more than 100 wounded.
Unlike today’s Republicans, most of whom have become more loyal to Trump, Reconstruction-era Republicans pushed back against Johnson. They viewed emancipation as a crowning achievement of Union victory and were determined to ensure that former slaves enjoyed the fruits of freedom. Although they hoped to avoid conflict with the president, in early 1866, they adopted measures designed to establish color-blind citizenship and protect former slaves from injustice.
No fan of negotiation
Like Trump, Johnson was inclined to attack rather than negotiate. A states’ rights Democrat and proponent of white supremacy, Johnson rebuffed Republicans’ efforts at compromise. He responded to Republican civil rights legislation with scathing veto messages. In September 1866, he toured the North, leveling personal attacks against congressional leaders and seeking to rally voters against them in the midterm elections.
Growing up poor and illiterate, Johnson had developed a deep hostility for African-Americans, believing that they looked down on people like him. His private conversations were laced with racist invective. After meeting with a delegation led by the black leader, Frederick Douglass, for example, he exclaimed to his secretary, “I know that damned Douglass; he’s just like any n—-r, and he would just as likely cut a white man’s throat as not.”
In speeches on the campaign trail and in Washington, Johnson cast his opposition to Republican civil rights policy in language that today appears clearly racist. He sought to appeal to voters — North as well as South – who felt threatened by African-American gains.
In vetoing the Civil Rights Act of 1866, Johnson argued that African-Americans, who had “just emerged from slavery,” lacked “the requisite qualifications to entitle them to the privileges and immunities of citizens of the United States.” Indeed, he asserted, the law discriminated “against large numbers of intelligent, worthy, and patriotic foreigners [who had to reside in the U.S. for five years to qualify for citizenship] in favor of the negro.”
Made excuses for Southern racism
Johnson excused racist violence in the South. Ignoring facts that had been widely reported in the press, Johnson held Republicans in Congress responsible for encouraging black political activism during the New Orleans massacre, the July 1866 riot in which a white mob – aided by police – set upon African-American marchers and their white Unionist sympathizers.
“Every drop of blood that was shed is upon their skirts,” Johnson charged, “and they are responsible for it.” Whites who encouraged blacks to demand the right to vote, not the white mobs and police who attacked them, bore responsibility, Johnson implied.
Like Trump, Johnson fancied himself a populist. He saw himself as a principled defender of the people against Washington insiders bent on destroying the republic. By refusing to seat senators and representatives from the reconstructed states of the former Confederacy, he charged in 1866, congressional leaders were riding roughshod over the Constitution.
“There are individuals in the Government,” Johnson told an audience in Washington, D.C., “who want to destroy our institutions.”
Later, when confronted by catcalls from Republican partisans, Johnson fired back.
“Congress is trying to break up the Government,” he said, casting congressional leaders as enemies of the Union in the mold of secessionist heroes Jefferson Davis and Robert E. Lee.
Combining egotism, victimhood and paranoia in a manner similar to Trump, Johnson portrayed himself as the nation’s much maligned savior.
“I am your instrument,” he told one audience. “I stand for the country, I stand for the Constitution.”
Johnson saw his opponents as enemies bent not just on impugning the legitimacy of his presidency but whose “intention … [is] to incite assassination.” In a melodramatic and revealing pledge, he proclaimed, “If my blood is to be shed because I vindicate the Union … then let it be shed.”
Perhaps unsurprisingly, Johnson’s presidency ended badly.
Reviled by African-Americans
While lauded by white Southerners, he was reviled by African-Americans and most Northerners for disgracing the office of the presidency. Thomas Nast, the popular political cartoonist, lampooned him, the press chastised him and private citizens expressed their disgust.
Commenting on Johnson’s electioneering tour of the North, Mary Todd Lincoln said acidly that his conduct “would humiliate any other than himself.”
Congressional Republicans overrode Johnson’s vetoes and tied his hands on matters of policy. In 1868, the House of Representatives voted to impeach him but the Senate fell one vote shy of the two-thirds majority necessary to remove him from office.
As his term came to an end in 1869, his successor, Ulysses Grant, refused to ride to the inauguration in the same carriage as the disgraced Johnson, who then declined to attend the ceremony. Instead, he remained at the White House, leaving it for the last time to go to his Tennessee home — and perhaps a more receptive audience of like-minded Southerners — after the inauguration was over.
Presidents succeed by appealing to shared values, uniting rather than dividing, making strategic use of the respect Americans have for the office, and avoiding the gutter. Andrew Johnson failed on all counts, destroying his presidency and bringing himself into contempt.
The Washington Post’s Senior Editor Marc Fisher writes that Donald Trump learned as a real estate developer and entertainer “that what humiliates, damages, even destroys others can actually strengthen his image and therefore his bottom line.”
Will those lessons serve Trump well as president? Or will they condemn him to Johnson’s ignominious fate?
Donald Nieman, Executive Vice President for Academic Affairs and Provost, Binghamton University, State University of New York
A Tennessean and lifelong Democrat, Johnson was a defender of slavery and white supremacy but also an uncompromising Unionist. He was the only U.S. senator from the South who opposed his state’s move to secede in 1861. That made him an unconventional yet attractive choice as Abraham Lincoln’s running mate in 1864 when Republicans took on the mantle of the “Union Party” to win support from Democrats alienated by their own party’s anti-war stance.
Six weeks after becoming vice president, an assassin’s bullet killed Lincoln and catapulted Johnson to the presidency.
Republican reception mixed for JohnsonLike Trump, Johnson entered the White House with a mixture of skepticism and support among Republicans who controlled Congress. He was a Southerner and a Democrat, which concerned them. But his courageous Unionism and blunt criticism of rebels – “Treason must be made infamous and traitors punished,” he proclaimed – impressed congressional leaders, who believed that he would take a firm approach to the South.
Through bold assertion of executive authority, Johnson quickly reconstructed state and local governments in the South. Because African-Americans were excluded from the process, these regimes were controlled by Southern whites, most of whom had been loyal Confederates. Predictably, they adopted laws designed to keep African-Americans in a servile position.
Southern officials also stood by and even abetted whites who unleashed a wave of violence against former slaves. For example, in July 1866, New Orleans police participated in a massacre that left 37 African-Americans and white Unionists dead and more than 100 wounded.
Unlike today’s Republicans, most of whom have become more loyal to Trump, Reconstruction-era Republicans pushed back against Johnson. They viewed emancipation as a crowning achievement of Union victory and were determined to ensure that former slaves enjoyed the fruits of freedom. Although they hoped to avoid conflict with the president, in early 1866, they adopted measures designed to establish color-blind citizenship and protect former slaves from injustice.
No fan of negotiationLike Trump, Johnson’s instinct was to attack rather than negotiate. A states’ rights Democrat and proponent of white supremacy, Johnson rebuffed Republicans’ efforts at compromise. He responded to Republican civil rights legislation with scathing veto messages. In September 1866, he toured the North, leveling personal attacks against congressional leaders and seeking to rally voters against them in the midterm elections.
Growing up poor and illiterate, Johnson had developed a deep hostility for African-Americans, believing that they looked down on people like him. His private conversations were laced with racist invective. After meeting with a delegation led by the black leader, Frederick Douglass, for example, he exclaimed to his secretary, “I know that damned Douglass; he’s just like any n—-r, and he would just as likely cut a white man’s throat as not.”
In speeches on the campaign trail and in Washington, Johnson cast his opposition to Republican civil rights policy in language that today appears clearly racist. He sought to appeal to voters — North as well as South – who felt threatened by African-American gains.
In vetoing the Civil Rights Act of 1866, Johnson argued that African-Americans, who had “just emerged from slavery,” lacked “the requisite qualifications to entitle them to the privileges and immunities of citizens of the United States.” Indeed, he asserted, the law discriminated “against large numbers of intelligent, worthy, and patriotic foreigners [who had to reside in the U.S. for five years to qualify for citizenship] in favor of the negro.”
Made excuses for Southern racismJohnson excused racist violence in the South. Ignoring facts that had been widely reported in the press, Johnson held Republicans in Congress responsible for encouraging black political activism during the New Orleans massacre. That was the July 1866 riot where a white mob – aided by police – set upon African-American marchers and their white Unionist sympathizers, leaving 37 African-Americans and white Unionists dead and more than 100 wounded.
“Every drop of blood that was shed is upon their skirts,” Johnson charged, “and they are responsible for it.” Whites who encouraged blacks to demand the right to vote, not the white mobs and police who attacked them, bore responsibility, Johnson implied.
Like Trump, Johnson fancied himself a populist. He saw himself as a principled defender of the people against Washington insiders bent on destroying the republic. By refusing to seat senators and representatives from the reconstructed states of the former Confederacy, he charged in 1866, congressional leaders were riding roughshod over the Constitution.
“There are individuals in the Government,” Johnson told an audience in Washington, D.C., “who want to destroy our institutions.”
Later, when confronted by catcalls from Republican partisans, Johnson fired back.
“Congress is trying to break up the Government,” he said, casting congressional leaders as enemies of the Union in the mold of secessionist heroes Jefferson Davis and Robert E. Lee.
Combining egotism, victimhood and paranoia in a manner similar to Trump, Johnson portrayed himself as the nation’s much maligned savior.
“I am your instrument,” he told one audience. “I stand for the country, I stand for the Constitution.”
Johnson saw his opponents as enemies bent not just on impugning the legitimacy of his presidency but whose “intention … [is] to incite assassination.” In a melodramatic and revealing pledge, he proclaimed, “If my blood is to be shed because I vindicate the Union … then let it be shed.”
Perhaps unsurprisingly, Johnson’s presidency ended badly.
Reviled by African-Americans
While lauded by white Southerners, he was reviled by African-Americans and most Northerners for disgracing the office of the presidency. Thomas Nast, the popular political cartoonist, lampooned him, the press chastised him and private citizens expressed their disgust.
Commenting on Johnson’s electioneering tour of the North, Mary Todd Lincoln said acidly that his conduct “would humiliate any other than himself.”
Congressional Republicans overrode Johnson’s vetoes and tied his hands on matters of policy. In 1868, the House of Representatives voted to impeach him but the Senate fell one vote shy of the two-thirds majority necessary to remove him from office.
As his term came to an end in 1869, his successor, Ulysses Grant, refused to ride to the inauguration in the same carriage as the disgraced Johnson, who then declined to attend the ceremony. Instead, he remained at the White House, leaving it for the last time to go to his Tennessee home — and perhaps a more receptive audience of like-minded Southerners — after the inauguration was over.
Presidents succeed by appealing to shared values, uniting rather than dividing, making strategic use of the respect Americans have for the office, and avoiding the gutter. Andrew Johnson failed on all counts, destroying his presidency and bringing himself into contempt.
The Washington Post’s Senior Editor Marc Fisher writes that Donald Trump learned as a real estate developer and entertainer “that what humiliates, damages, even destroys others can actually strengthen his image and therefore his bottom line.”
Will those lessons serve Trump well as president? Or will they condemn him to Johnson’s ignominious fate?
Donald Nieman, Executive Vice President for Academic Affairs and Provost, Binghamton University, State University of New York
they knew!!!
On its hundredth birthday in 1959, Edward Teller warned the oil industry about global warming
Somebody cut the cake – new documents reveal that American oil writ large was warned of global warming at its 100th birthday party.
Benjamin Franta
the guardian
Mon 1 Jan ‘18 06.00 EST
...The nuclear weapons physicist Edward Teller had, by 1959, become ostracized by the scientific community for betraying his colleague J. Robert Oppenheimer, but he retained the embrace of industry and government. Teller’s task that November fourth was to address the crowd on “energy patterns of the future,” and his words carried an unexpected warning:
Ladies and gentlemen, I am to talk to you about energy in the future. I will start by telling you why I believe that the energy resources of the past must be supplemented. First of all, these energy resources will run short as we use more and more of the fossil fuels. But I would [...] like to mention another reason why we probably have to look for additional fuel supplies. And this, strangely, is the question of contaminating the atmosphere. [....] Whenever you burn conventional fuel, you create carbon dioxide. [....] The carbon dioxide is invisible, it is transparent, you can’t smell it, it is not dangerous to health, so why should one worry about it?
Carbon dioxide has a strange property. It transmits visible light but it absorbs the infrared radiation which is emitted from the earth. Its presence in the atmosphere causes a greenhouse effect [....] It has been calculated that a temperature rise corresponding to a 10 per cent increase in carbon dioxide will be sufficient to melt the icecap and submerge New York. All the coastal cities would be covered, and since a considerable percentage of the human race lives in coastal regions, I think that this chemical contamination is more serious than most people tend to believe.
How, precisely, Mr. Dunlop and the rest of the audience reacted is unknown, but it’s hard to imagine this being welcome news. After his talk, Teller was asked to “summarize briefly the danger from increased carbon dioxide content in the atmosphere in this century.” The physicist, as if considering a numerical estimation problem, responded:
At present the carbon dioxide in the atmosphere has risen by 2 per cent over normal. By 1970, it will be perhaps 4 per cent, by 1980, 8 per cent, by 1990, 16 per cent [about 360 parts per million, by Teller’s accounting], if we keep on with our exponential rise in the use of purely conventional fuels. By that time, there will be a serious additional impediment for the radiation leaving the earth. Our planet will get a little warmer. It is hard to say whether it will be 2 degrees Fahrenheit or only one or 5.
But when the temperature does rise by a few degrees over the whole globe, there is a possibility that the icecaps will start melting and the level of the oceans will begin to rise. Well, I don’t know whether they will cover the Empire State Building or not, but anyone can calculate it by looking at the map and noting that the icecaps over Greenland and over Antarctica are perhaps five thousand feet thick.
And so, at its hundredth birthday party, American oil was warned of its civilization-destroying potential.
Talk about a buzzkill.
How did the petroleum industry respond? Eight years later, on a cold, clear day in March, Robert Dunlop walked the halls of the U.S. Congress. The 1967 oil embargo was weeks away, and the Senate was investigating the potential of electric vehicles. Dunlop, testifying now as the Chairman of the Board of the American Petroleum Institute, posed the question, “tomorrow’s car: electric or gasoline powered?” His preferred answer was the latter:
We in the petroleum industry are convinced that by the time a practical electric car can be mass-produced and marketed, it will not enjoy any meaningful advantage from an air pollution standpoint. Emissions from internal-combustion engines will have long since been controlled.
Dunlop went on to describe progress in controlling carbon monoxide, nitrogen oxides, and hydrocarbon emissions from automobiles. Absent from his list? The pollutant he had been warned of years before: carbon dioxide.
We might surmise that the odorless gas simply passed under Robert Dunlop’s nose unnoticed. But less than a year later, the American Petroleum Institute quietly received a report on air pollution it had commissioned from the Stanford Research Institute, and its warning on carbon dioxide was direct:
Significant temperature changes are almost certain to occur by the year 2000, and these could bring about climatic changes. [...] there seems to be no doubt that the potential damage to our environment could be severe. [...] pollutants which we generally ignore because they have little local effect, CO2 and submicron particles, may be the cause of serious world-wide environmental changes.
Thus, by 1968, American oil held in its hands yet another notice of its products’ world-altering side effects, one affirming that global warming was not just cause for research and concern, but a reality needing corrective action: “Past and present studies of CO2 are detailed,” the Stanford Research Institute advised. “What is lacking, however, is [...] work toward systems in which CO2 emissions would be brought under control.”
This early history illuminates the American petroleum industry’s long-running awareness of the planetary warming caused by its products. Teller’s warning, revealed in documentation I found while searching archives, is another brick in a growing wall of evidence.
In the closing days of those optimistic 1950s, Robert Dunlop may have been one of the first oilmen to be warned of the tragedy now looming before us. By the time he departed this world in 1995, the American Petroleum Institute he once led was denying the climate science it had been informed of decades before, attacking the Intergovernmental Panel on Climate Change, and fighting climate policies wherever they arose.
This is a history of choices made, paths not taken, and the fall from grace of one of the greatest enterprises – oil, the “prime mover” – ever to tread the earth. Whether it’s also a history of redemption, however partial, remains to be seen.
American oil’s awareness of global warming – and its conspiracy of silence, deceit, and obstruction – goes further than any one company. It extends beyond (though includes) ExxonMobil. The industry is implicated to its core by the history of its largest representative, the American Petroleum Institute.
It is now too late to stop a great deal of change to our planet’s climate and its global payload of disease, destruction, and death. But we can fight to halt climate change as quickly as possible, and we can uncover the history of how we got here. There are lessons to be learned, and there is justice to be served.
White newspaper journalists exploited racial divisions to help build the GOP’s southern firewall in the 1960s
The Conversation
27 NOV 2017 AT 10:59 ET
From Raw Story: Conservatives who dislike Donald Trump like to blame the president and his Breitbart cheering section for the racial demagoguery they see in today’s Republican Party.
For example, New York Times columnist David Brooks lamented the GOP’s transformation over the past decade from a party that had always been decent on racial issues to one that now embraced “white identity politics.”
I respect Brooks and read him regularly, but on this issue he and his ideological allies have a blind spot. They ignore overwhelming evidence showing the central role racial politics played in the Republican Party’s rise to power after the civil rights movement.
In my book “Newspaper Wars: Civil Rights and White Resistance in South Carolina,” I write about the white journalists who helped revive the GOP in one Deep South state. Their story shows how prominent voices of the conservative movement have long harnessed racial resentment to fuel the party’s political ascendancy.
A journalistic mouthpiece for segregation
In 1962, Republican William D. Workman Jr. launched a long-shot bid for a U.S. Senate seat in South Carolina. For more than eight decades, the Democratic Party had been the only party that mattered in state politics. To most white voters, it represented the overthrow of Reconstruction and the restoration of white political rule.
Yet Workman nearly defeated a two-term Democratic incumbent. It was a turning point that signaled the GOP’s reemergence as a competitive force in the region.
The nation’s top political reporter, James Reston of The New York Times, traveled to South Carolina to examine this new GOP in the Deep South. He called Workman a “journalistic Goldwater Republican.” It might seem like an odd description, but it fit the candidate perfectly.
In the late 1950s, Workman and his boss, Charleston News and Courier editor Thomas R. Waring Jr., were staunch segregationists who had found a political ally in William F. Buckley Jr., conservative editor of a new journal, National Review.
Before he joined the Senate race, Workman had been the state’s best-known political reporter. He had also been working secretly with GOP allies to build the party in South Carolina and rally support for Arizona Sen. Barry Goldwater, leader of the GOP’s rising conservative wing.
As political scientist Joseph E. Lowndes notes, National Review was the first conservative journal to try to link the southern opposition to enforced integration with the small-government argument that was central to economic conservatism.
In 1957, Buckley delivered the magazine’s most forthright overture to southern segregationists. In an editorial on black voting rights, Buckley called whites “the advanced race” in the South and said whites, therefore, should be allowed to “take such measures as necessary to prevail, politically and culturally, in areas in which it does not predominate numerically.” In Buckley’s view, “the claims of civilization supersede those of universal suffrage.”
In the News and Courier, Waring described the editorial as “brave words,” but Buckley’s argument created a firestorm within the conservative movement. His brother-in-law, L. Brent Bozell, condemned the editorial in the pages of Buckley’s own magazine. Bozell said Buckley’s unconstitutional appeal to white supremacy threatened to do “grave hurt to the conservative movement.”
Workman and the ‘great white switch’
Buckley and Goldwater began avoiding such overtly racist appeals, but it took longer for their southern allies to temper their rhetoric and master the use of racially coded language. In his 1960 book, “The Case for the South,” Workman wrote that African-Americans remained “a white man’s burden” – a “violent” and “indolent” people who needed guidance from their white superiors.
Two years later, Workman rallied local and national Republicans to his banner in the senate race. At the time, the tiny GOP in South Carolina was run by conservative businessmen who had migrated to the Palmetto State from the North. They embraced Goldwater’s call for lower taxes, weaker unions and smaller government, but the Republicans lacked credibility with white voters who cared mostly about segregation and white political rule.
Workman’s 1962 campaign changed that. He united the state’s racial and economic conservatives, a political marriage that would fuel the party’s dramatic growth in South Carolina and the nation over the next two decades.
As historian Dan T. Carter contends, “even though the streams of racial and economic conservatism have sometimes flowed in separate channels, they ultimately joined in the political coalition that reshaped American politics” in the years after the civil rights movement.
Buoyed by the surprising strength of Workman’s campaign, South Carolina’s junior senator, Strom Thurmond, abandoned the Democratic Party in 1964 and joined the Republicans. Sixteen years earlier, Thurmond had left the Democrats briefly to run for president as a “Dixiecrat” on the States’ Rights ticket. His surprise announcement in 1964 signaled the start of what political scientists call the “great white switch.”
Workman was thrilled by the letters he received from northern conservatives who embraced his campaign. One of those was William Loeb, editor of New Hampshire’s Manchester Union-Leader, who told Workman the GOP should become “the white man’s party.” Loeb said his proposal would “leave Democrats with the Negro vote,” but give the Republicans the white vote and “white people, thank God, are still in the majority.”
As more African-American voters joined the Democratic Party, southern whites moved to the GOP. William Loeb was getting his wish.
Reagan and the Neshoba County Fair
By the late 1970s, Ronald Reagan had united former segregationists with economic and social conservatives to create a political movement that would dominate American politics. Reagan built this coalition in part through the use of coded rhetoric tying race to such issues as crime, welfare and government spending.
In 1980, he launched his fall presidential campaign at the Neshoba County Fair near Philadelphia, Mississippi. Sixteen years earlier, three civil rights activists had been murdered in Neshoba County and buried in an earthen dam. Reagan used his visit to declare his support for states’ rights – a phrase indelibly linked to Thurmond’s Dixiecrat campaign of 1948.
In a 2007 column, Brooks angrily disputes any notion that Reagan’s Neshoba County trip was a dog-whistle appeal to white racial resentment in the post-civil rights era. He calls such claims a “slur” and a “calumny.” Reagan’s campaign was notoriously disorganized, Brooks argues. The candidate had planned to launch his general election campaign discussing inner-city problems with the Urban League, not preaching states’ rights in Neshoba County.
Maybe he’s right. Perhaps it was just a scheduling mishap. But on the question of race and politics, I’m more inclined to believe Lee Atwater, the late political strategist from South Carolina.
Atwater served as White House political director under Reagan and chief strategist for President George H. W. Bush’s 1988 campaign. In a 1981 interview with political scientist Alexander Lamis, Atwater explained the evolution of coded racial language in political campaigns in the South.
“You start out in 1954 by saying, ‘nigger, nigger, nigger.’ By 1968 you can’t say ‘nigger’ – that hurts you, backfires,” Atwater said. “So you say stuff like forced busing, states’ rights, and all that stuff, and you’re getting so abstract. Now, you’re talking about cutting taxes, and all these things you’re talking about are totally economic things and a byproduct of them is, blacks get hurt worse than whites…”
This is not to say that all Republicans are racist, or that economic, social and cultural issues played no role in the rise of the conservative GOP. But it is clear that racial resentment mattered to voters – a lot – and the Republican Party found ways to stoke that animus for political gain.
Donald Trump’s racial appeals may be more transparent. But in him, the legacy of conservative journalists like William Workman lives on.
The two men who almost derailed New England’s first colonies
Behind the scheme that tried to undermine the Pilgrims
PETER C. MANCALL, THE CONVERSATION
11.23.2017•2:58 AM
From Salon: The story is simple enough. In 1620 a group of English Protestant dissenters known as Pilgrims arrived in what’s now Massachusetts to establish a settlement they called New Plymouth. The first winter was brutal, but by the following year they’d learned how to survive the unforgiving environment. When the harvest season of 1621 arrived, the Pilgrims gathered together with local Wampanoag Indians for a three-day feast, during which they may have eaten turkey.
Over time this feast, described as “the first Thanksgiving,” became part of the nation’s founding narrative, though it was one among many days when colonists and their descendants offered thanks to God.
The peace wouldn’t last for long, and much of America’s early Colonial history centers on the eventual conflicts between the colonists and the Native Americans. But the traditional version ignores the real danger that emerged from two Englishmen — Thomas Morton and Ferdinando Gorges — who sought to undermine the legal basis for Puritan settlements throughout New England.
Over 200 years later, when President Abraham Lincoln declared the first federal day of Thanksgiving in the midst of the Civil War, it was a good moment for Americans to recall a time when disparate peoples could reach across the cultural divide. He was either unaware of — or conveniently ignored — the English schemers who tried to chase those Pilgrims and Puritans away.
Tensions mount
The Puritans followed the Pilgrims, founding the Massachusetts Bay colony in 1630. There, John Winthrop, who became the governor, wrote that the English wanted to create a “city upon a hill.” The line came from Matthew 5:14, an early example of how these English travelers viewed their actions through a biblical lens.
The growing numbers of English migrants strained the local resources of the Algonquian-speaking peoples. These locals, collectively known as Ninnimissinuok, had already suffered a terrible epidemic in the late 1610s — possibly caused by leptospirosis, a bacterial disease whose severe form is known as Weil syndrome — that might have reduced their population by 90 percent.
Worse still, in 1636 the Puritans and Pilgrims went to war against the Pequots, whose homeland was in southern Connecticut. By the end of 1637, perhaps 700 to 900 natives had died in the violence, and another 900 or so had been sold into slavery. The English marked their victory with “a day of thanksgiving kept in all the churches for the victory obtained against the Pequods, and for other mercies.”
English hostility against Natives has taken a central place in historians’ version of the origins of New England. But though it is a powerful and tragic narrative, indigenous Americans did not pose the greatest hazard to the survival of the colonists.
A new threat emerges
Just when the Pilgrims were trying to establish New Plymouth, an English war veteran named Ferdinando Gorges claimed that he and a group of investors possessed the only legitimate patent to create a colony in the region.
Gorges had gained notoriety after battling the Spanish in the Netherlands and commanding the defense of the port city of Plymouth, on the southwest coast of England. Afterwards, Gorges was in search of a new opportunity. It arrived in 1605 when the English sea captain George Waymouth returned to England after a voyage that had taken him to the coast of modern Maine and back. Along with news about the coastline and its resources, Waymouth brought back five captive Eastern Abenakis, members of the indigenous nation that claimed territory between the Penobscot and Saco rivers in Maine. Waymouth left three of them with Gorges. Soon they learned English and told Gorges about their homeland, sparking Gorges’ interest in North America.
Gorges, with a group of investors, financially backed an expedition to the coast of Maine in 1607, though the colony they hoped to launch there never succeeded.
These financiers believed that they possessed a claim to all territory stretching from 40 to 48 degrees north latitude — a region that stretches from modern-day Philadelphia to St. John’s, Newfoundland — a point they emphasized in their charter. Gorges remained among its directors.
Kindred spirits
As luck would have it, Gorges soon met Thomas Morton, a man with legal training and a troubled past who had briefly visited Plymouth Plantation soon after the first English arrived. Morton would join forces with Gorges in his attempt to undermine the legal basis for the earliest English colonies in New England.
Morton and the Pilgrims despised one another. By 1626 he had established a trading post at a place called Merrymount, on the site of modern day Quincy, Massachusetts. There, he entertained local Ninnimissinuok, offering them alcohol and guns. He also imported an English folk custom by erecting an 80-foot pole for them to dance around.
The Pilgrims, viewing Morton as a threat because of his close relations with the locals and the fact that he had armed them, exiled him to England in 1628.
To the disappointment of the Pilgrims, Morton faced no legal action back in England. Instead, he returned to New England in 1629, settling in Massachusetts just as Winthrop and his allies were trying to launch their new colony. Soon enough, Morton angered the rulers of this Puritan settlement, claiming that the way they organized their affairs flew in the face of the idea that they should follow all English laws. The Puritans, looking for an excuse to send him away, claimed that he had abused local natives (a charge that was likely baseless). Nonetheless, they burned Morton’s house to the ground and shipped him back to England.
After a short stint in jail, Morton was free again, and it was around this time that he began to conspire with Gorges.
During the mid-1630s Gorges pushed English authorities to recognize his claim to New England. His argument pivoted on testimony provided by Morton, who claimed that the Puritans had violated proper religious and governing practices. Morton would soon write that the Puritans refused to use the Book of Common Prayer, a standard text employed by the Church of England, and that the Puritans closed their eyes when they prayed “because they thinke themselves so perfect in the highe way to heaven that they can find it blindfould.”
In a letter he wrote to a confidant, Morton claimed that at a hearing in London, the Massachusetts patent “was declared, for manifest abuses there discovered, to be void.” In 1637, such evidence convinced King Charles I to make Gorges the royal governor of Massachusetts.
But the king never followed through. Nor did the English bring the leaders of the colony to London for a trial. The Puritans maintained their charter, but Morton and Gorges refused to back down.
A quick compromise
In 1637, Morton published a book titled “New English Canaan.” In it, he accused the English of abusing and murdering Native Americans and also of violating widely accepted Protestant religious practices. (Today there are around 20 known copies of the original.)
With good reason, the Puritans feared Gorges and Morton. To make peace, they relented and in 1639 Gorges received the patent to modern-day Maine, which had been part of the original grant to the Massachusetts Bay Company. By then, Gorges’ agents had already begun to establish a plantation in Maine. That settlement ended the legal challenge to the existing New England colonies, which then prospered, free of English interference, for decades.
But Morton wasn’t quite done. He returned to Massachusetts, possibly as an agent for Gorges or perhaps because he hoped that the situation might have improved. When he arrived, local authorities, having seen his book, exiled him again. He retreated north, to Gorges’ planned colony. Winthrop wrote that he lived there “poor and despised.”
By 1644 Morton was dead, along with the scariest threat the Pilgrims and Puritans had faced.
United Daughters of the Confederacy Led A Campaign to Build Monuments and Whitewash History
By David Love - November 1, 2017
From Atlanta Black Star: More than 150 years since the end of the Civil War, the United States has been unable to come to terms with a war that claimed 750,000 lives and was fought over the cause of white supremacy and the right of white men to continue to own Black people as chattel. Today, many white Americans have a positive attitude toward the slaveholding South, the Confederate cause and its symbolism.
This mindset was on display this week when White House Chief of Staff John Kelly claimed a “lack of ability to compromise led to the Civil War” and called the removal of Confederate monuments a “dangerous” scrubbing of history. Speaking with Fox News’ Laura Ingraham on her new show, “The Ingraham Angle,” Kelly also called Confederate General Robert E. Lee — who abused his slaves and became a soldier for white supremacy — “an honorable man” who “gave up his country to fight for his state.” Further, Kelly defended the Confederate monuments, arguing it is “dangerous” to scrub them away.
Kelly’s remarks come as GenForward released a poll on millennial racial attitudes. Among its findings, the poll found that while majorities of Black (83 percent), Latinx (65 percent) and Asian-American (71 percent) millennials believe the Confederate flag is a symbol of racism, a majority of whites ages 18 to 34 (55 percent) believe the flag is a symbol of Southern pride, and 62 percent oppose removing Confederate statues and symbols.
Other surveys have shown majorities believing the war was fought over a vague notion of states’ rights, with substantial minorities attributing the war to taxes or tariffs as opposed to slavery, the true answer. Over the years, Confederate sympathizers have distorted and rewritten the history of the Civil War to make the Confederates appear heroic and their cause glorious and noble. This project has been years in the making, through the efforts of groups such as United Daughters of the Confederacy (UDC), which lobbied for the construction of Confederate monuments and the banning of textbooks that were hostile to the role of the Southern secessionists in the Civil War.
Examining the original documents of the Southern states who seceded from the Union, it is clear that slavery was their motivation, and white supremacy their foundation. For example, the Constitution of the Confederate States, which mentions the word slave or its variations ten times, stated:
In all such territory the institution of negro slavery, as it now exists in the Confederate States, shall be recognized and protected by Congress and by the Territorial government; and the inhabitants of the several Confederate States and Territories shall have the right to take to such Territory any slaves lawfully held by them in any of the States or Territories of the Confederate States.
Immediately before secession, Georgia Governor Joseph E. Brown said:
Among us the poor white laborer is respected as an equal. His family is treated with kindness, consideration and respect. He does not belong to the menial class. The negro is in no sense of the term his equal. He feels and knows this. He belongs to the only true aristocracy, the race of white men.
The Declaration of Causes of Seceding States identified slavery as the primary motivation. The state of Georgia declared:
For the last ten years we have had numerous and serious causes of complaint against our non-slave-holding confederate States with reference to the subject of African slavery. They have endeavored to weaken our security, to disturb our domestic peace and tranquility, and persistently refused to comply with their express constitutional obligations to us in reference to that property, and by the use of their power in the Federal Government have striven to deprive us of an equal enjoyment of the common Territories of the Republic.
In its declaration, the state of Mississippi stated:
Our position is thoroughly identified with the institution of slavery — the greatest material interest of the world. Its labor supplies the product which constitutes by far the largest and most important portions of commerce of the earth. These products are peculiar to the climate verging on the tropical regions, and by an imperious law of nature, none but the black race can bear exposure to the tropical sun. These products have become necessities of the world, and a blow at slavery is a blow at commerce and civilization.
Texas spoke of “maintaining and protecting the institution known as negro slavery — the servitude of the African to the white race within her limits — a relation that had existed from the first settlement of her wilderness by the white race, and which her people intended should exist in all future time.”
UDC and other neo-Confederate organizations subscribe to the Cult of the Lost Cause, a concept promoted by white Southerners in the late 19th century that rebranded the Confederacy as “noble and praiseworthy,” denied that slavery was a central cause of the Civil War and toned down the brutality of slavery itself, even depicting the enslaved as happy. According to the Southern Poverty Law Center, this was a “whitewashing of history” to salvage white Southerners’ honor from defeat, and it dominated Southern cultural history in the early 20th century. The Lost Cause, together with the idea that Black emancipation was a threat to democracy and to white women, formed the core ideology of the Ku Klux Klan. The ideology has had an impact on Republican Party politics, as well as on white nationalists and radical extremists.
In 1894, the United Daughters of the Confederacy was founded in Tennessee and became the foremost Southern women’s organization. UDC now has chapters in 33 states and the District of Columbia, and was responsible for erecting most of the Confederate monuments throughout the South. As of 2016, these 718 monuments included 551 built before 1950, 45 erected during the Civil Rights movement, and 32 built since 2000. For example, UDC was responsible for the Confederate Monument at Arlington National Cemetery, and Stone Mountain, Georgia, the South’s own Mount Rushmore, whose work was supervised by Ku Klux Klan members.
UDC instilled Confederate values in white youth to uphold white supremacy and states’ rights, lobbied state legislatures to provide military pensions to Confederate veterans, and helped form state-run homes for Confederate soldiers and widows. As Vox reported, UDC formed textbook committees and made school boards ban books that were “unjust to the South” and negatively portrayed the Confederacy. In addition, its auxiliary group, Children of the Confederacy, purportedly engages young people in “Southern” history. Open to descendants of men and women who served the Confederacy, the group states on its website that its purposes include “to honor and perpetuate the memory and deeds of high principles of the men and women of the Confederacy,” “observe properly all Confederate Memorial Days,” and “serve society through civic affairs and to perpetuate National patriotism as our ancestors once defended their beliefs.”
UDC is open only to women related to Confederate veterans of the “War Between the States.” The Southern Poverty Law Center noted that UDC has affiliated with white supremacists and racist groups such as the Council of Conservative Citizens and the League of the South, while its publications have minimized the middle passage and argued that slave ship crews suffered the most.
“The UDC helped rewrite both the history of the Civil War and the history of Reconstruction. For the neo-Confederate groups the Civil War and Reconstruction was one long struggle in two phases,” Edward H. Sebesta — editor of “The Confederate and Neo-Confederate Reader,” “Neo-Confederacy: A Critical Introduction,” and author of “Pernicious: The Neo-Confederate Campaign against Social Justice in America” — told Atlanta Black Star. “The problem is that when discussing the United Confederate Veterans, the Sons of Confederate Veterans and the United Daughters of the Confederacy, people don’t realize the extent that neo-Confederates had in rewriting Reconstruction. Yes, there was the Dunning School at Columbia University (a group of scholars that objected to the role of the federal government in Reconstruction), but the UDC provided a major component to push the Dunning interpretation to the general public,” Sebesta added.
UDC policed textbooks and made it clear to textbook publishers that if they did not change their Civil War content, they would not have a market in the South, Sebesta noted. However, the influence of the group regarding textbooks waned by the time of the Civil Rights movement. “By the 1950s, a general [desire] by the public to support white supremacy favored a Lost Cause account of the Civil War and Reconstruction, and it becomes self-sustaining because it teaches or supports a cluster of white attitudes and beliefs in individuals who then insist that for the next generation the same be taught to them,” he said.
“We teach children white nationalism, in our textbooks and it should not be surprising that white nationalism is manifested in our society.”
How the US government created and coddled the gun industry
How much of a role does America play in the gun industry?
BRIAN DELAY, THE CONVERSATION
10.14.2017•6:59 AM
From Salon: ....But what I’ve learned from a decade of studying the history of the arms trade has convinced me that the American public has more power over the gun business than most people realize.
Washington’s patronage
The U.S. arms industry’s close alliance with the government is as old as the country itself, beginning with the American Revolution.
Forced to rely on foreign weapons during the war, President George Washington wanted to ensure that the new republic had its own arms industry. Inspired by European practice, he and his successors built public arsenals for the production of firearms in Springfield and Harper’s Ferry. They also began doling out lucrative arms contracts to private manufacturers such as Simeon North, the first official U.S. pistol maker, and Eli Whitney, inventor of the cotton gin.
The government provided crucial startup funds, steady contracts, tariffs against foreign manufactures, robust patent laws, and patterns, tools and know-how from federal arsenals.
The War of 1812, perpetual conflicts with Native Americans and the U.S.-Mexican War all fed the industry’s growth. By the early 1850s, the United States was emerging as a world-class arms producer. Now-iconic American companies like those started by Eliphalet Remington and Samuel Colt began to acquire international reputations.
Even the mighty gun-making center of Great Britain started emulating the American system of interchangeable parts and mechanized production.
Profit in war and peace
The Civil War supercharged America’s burgeoning gun industry.
The Union poured huge sums of money into arms procurement, which manufacturers then invested in new capacity and infrastructure. By 1865, for example, Remington had made nearly US$3 million producing firearms for the Union. The Confederacy, with its weak industrial base, had to import the vast majority of its weapons.
The war’s end meant a collapse in demand and bankruptcy for several gun makers. Those that prospered afterward, such as Colt, Remington and Winchester, did so by securing contracts from foreign governments and hitching their domestic marketing to the brutal romance of the American West.
While peace deprived gun makers of government money for a time, it delivered a windfall to well capitalized dealers. That’s because within five years of Robert E. Lee’s surrender at Appomattox, the War Department had decommissioned most of its guns and auctioned off some 1,340,000 to private arms dealers, such as Schuyler, Hartley and Graham. The Western Hemisphere’s largest private arms dealer at the time, the company scooped up warehouses full of cut-rate army muskets and rifles and made fortunes reselling them at home and abroad.
More wars, more guns
By the late 19th century, America’s increasingly aggressive role in the world ensured steady business for the country’s gun makers.
The Spanish American War brought a new wave of contracts, as did both World Wars, Korea, Vietnam, Afghanistan, Iraq and the dozens of smaller conflicts that the U.S. waged around the globe in the 20th and early 21st century. As the U.S. built up the world’s most powerful military and established bases across the globe, the size of the contracts soared.
Consider Sig Sauer, the New Hampshire arms producer that made the MCX rifle used in the Orlando Pulse nightclub massacre. In addition to arming nearly a third of the country’s law enforcement, it recently won the coveted contract for the Army’s new standard pistol, ultimately worth $350 million to $580 million.
Colt might best illustrate the importance of public money for prominent civilian arms manufacturers. Maker of scores of iconic guns for the civilian market, including the AR-15 carbine used in the 1996 massacre that prompted Australia to enact its famously sweeping gun restrictions, Colt has also relied heavily on government contracts since the 19th century. The Vietnam War initiated a long era of making M16s for the military, and the company continued to land contracts as American war-making shifted from southeast Asia to the Middle East. But Colt’s reliance on government was so great that it filed for bankruptcy in 2015, in part because it had lost the military contract for the M4 rifle two years earlier.
Overall, gun makers relied on government contracts for about 40 percent of their revenues in 2012.
Competition for contracts spurred manufacturers to make lethal innovations, such as handguns with magazines that hold 12 or 15 rounds rather than seven. Absent regulation, these innovations show up in gun enthusiast periodicals, sporting goods stores and emergency rooms.
NRA helped industry avoid regulation
So how has the industry managed to avoid more significant regulation, especially given the public anger and calls for legislation that follow horrific massacres like the one in Las Vegas?
Given their historic dependence on U.S. taxpayers, one might think that small arms makers would have been compelled to make meaningful concessions in such moments. But that seldom happens, thanks in large part to the National Rifle Association, a complicated yet invaluable industry partner.
Prior to the 1930s, meaningful firearms regulations came from state and local governments. There was little significant federal regulation until 1934, when Congress – spurred by the bloody “Tommy gun era” – debated the National Firearms Act.
The NRA, founded in 1871 as an organization focused on hunting and marksmanship, rallied its members to defeat the most important component of that bill: a tax meant to make it far more difficult to purchase handguns. Again in 1968, the NRA ensured Lyndon Johnson’s Gun Control Act wouldn’t include licensing and registration requirements.
In 1989, it helped delay and water down the Brady Act, which mandated background checks for arms purchased from federally licensed dealers. In 1996 the NRA engineered a virtual ban on federal funding for research into gun violence. In 2000, the group led a successful boycott of a gun maker that cooperated with the Clinton administration on gun safety measures. And it scored another big victory in 2005, by limiting the industry’s liability to gun-related lawsuits.
Most recently, the gun lobby has succeeded by promoting an ingenious illusion. It has framed government as the enemy of the gun business rather than its indispensable historic patron, convincing millions of American consumers that the state may at any moment stop them from buying guns or even try to confiscate them.
Hence the jump in the shares of gun makers following last week’s slaughter in Las Vegas. Investors know they have little to fear from new regulation and expect sales to rise anyway.
A question worth asking
So with the help of the NRA’s magic, major arms manufacturers have for decades thwarted regulations that majorities of Americans support.
Yet almost never does this political activity seem to jeopardize access to lucrative government contracts.
Americans interested in reform might reflect on that fact. They might start asking their representatives where they get their guns. It isn’t just the military and scores of federal agencies. States, counties and local governments buy plenty of guns, too.
For example, Smith & Wesson is well into a five-year contract to supply handguns to the Los Angeles Police Department, the second-largest in the country. In 2016 the company contributed $500,000 (more than any other company) to a get-out-the-vote operation designed to defeat candidates who favor tougher gun laws.
Do taxpayers in L.A. – or the rest of the country – realize they are indirectly subsidizing the gun lobby’s campaign against regulation?
A historian destroys racists’ favorite myths about the Vikings
The Conversation
29 SEP 2017 AT 11:40 ET
From Raw Story: The word “Viking” entered the Modern English language in 1807, at a time of growing nationalism and empire building. In the decades that followed, enduring stereotypes about Vikings developed, such as wearing horned helmets and belonging to a society where only men wielded high status.
During the 19th century, Vikings were praised as prototypes and ancestor figures for European colonists. The idea took root of a Germanic master race, fed by crude scientific theories and nurtured by Nazi ideology in the 1930s. These theories have long been debunked, although the notion of the ethnic purity of the Vikings still seems to have popular appeal – and it is embraced by white supremacists.
In contemporary culture, the word Viking is generally synonymous with Scandinavians from the ninth to the 11th centuries. We often hear terms such as “Viking blood”, “Viking DNA” and “Viking ancestors” – but the medieval term meant something quite different to modern usage. Instead it defined an activity: “Going a-Viking”. Akin to the modern word pirate, Vikings were defined by their mobility and this did not include the bulk of the Scandinavian population who stayed at home.
The mobility of Vikings led to a fusion of cultures within their ranks, and their trade routes would extend from Canada to Afghanistan. A striking feature of the early Vikings’ success was their ability to embrace and adapt from a wide range of cultures, whether that be the Christian Irish in the west or the Muslims of the Abbasid Caliphate in the east.

While the modern word Viking came to light in an era of nationalism, the ninth century – when Viking raids ranged beyond the boundaries of modern Europe – was different. The modern nation states of Denmark, Norway and Sweden were still undergoing formation. Local and familial identity were more prized than national allegiances. The terms used by contemporaries to describe Vikings – “wicing”, “rus”, “magi”, “gennti”, “pagani”, “pirati” – tend to be non-ethnic. When a term akin to Danes, “danar”, is first used in English, it appears as a political label describing a mix of peoples under Viking control.
Blending of cultures
Developments in archaeology in recent decades have highlighted how people and goods could move over wider distances in the early Middle Ages than we have tended to think. In the eighth century (before the main period of Viking raiding began), the Baltic was a place where Scandinavians, Frisians, Slavs and Arabic merchants were in frequent contact. It is too simplistic to think of early Viking raids as hit-and-run affairs with ships coming directly from Scandinavia and immediately rushing home again.
Recent archaeological and textual work indicates that Vikings stopped off at numerous places during campaigns (this might be to rest, restock, gather tribute and ransoms, repair equipment and gather intelligence). This allowed more sustained interaction with different peoples. Alliances between Vikings and local peoples are recorded from the 830s and 840s in Britain and Ireland. By the 850s, mixed groups of Gaelic (Gaedhil) and foreign culture (Gaill) were plaguing the Irish countryside.
Written accounts survive from Britain and Ireland condemning or seeking to prevent people from joining the Vikings. And they show Viking war bands were not ethnically exclusive. As with later pirate groups (for example the early modern pirates of the Caribbean), Viking crews would frequently lose members and pick up new recruits as they travelled, combining dissident elements from different backgrounds and cultures.
The cultural and ethnic diversity of the Viking Age is highlighted by finds in furnished graves and silver hoards from the ninth and tenth centuries. In Britain and Ireland only a small percentage of goods handled by Vikings are Scandinavian in origin or style.
The evidence points to population mobility and acculturation over large distances as a result of Viking Age trade networks. The Galloway hoard, discovered in south-west Scotland in 2014, includes components from Scandinavia, Britain, Ireland, Continental Europe and Turkey. Cultural eclecticism is a feature of Viking finds. An analysis of skeletons at sites linked to Vikings, using the latest scientific techniques, points to a mix of Scandinavian and non-Scandinavian peoples without clear ethnic distinctions in rank or gender.
The Viking Age was a key period in state formation processes in Northern Europe, and certainly by the 11th and 12th centuries there was a growing interest in defining national identities and developing appropriate origin myths to explain them. This led to a retrospective development in areas settled by Vikings to celebrate their links to Scandinavia and downplay non-Scandinavian elements.
The fact that these myths, when committed to writing, were not accurate accounts is suggested by self-contradictory stories and folklore motifs. For example, medieval legends concerning the foundation of Dublin (Ireland) suggest either a Danish or Norwegian origin to the town (a lot of ink has been spilt over this matter over the years) – and there is a story of three brothers bringing three ships which bears comparison with other origin legends. Ironically, it was the growth of nation states in Europe which would eventually herald the end of the Viking Age.
Unrecognisable nationalism
In the early Viking Age, modern notions of nationalism and ethnicity would have been unrecognisable. Viking culture was eclectic, but there were common features across large areas, including use of Old Norse speech, similar shipping and military technologies, domestic architecture and fashions that combined Scandinavian and non-Scandinavian inspirations.
It can be argued that these markers of identity were more about status and affiliation to long-range trading networks than ethnic symbols. A lot of social display and identity is non-ethnic in character. One might compare this to contemporary international business culture which has adopted English language, the latest computing technologies, common layouts for boardrooms and the donning of Western suits. This is a culture expressed in nearly any country of the world but independently of ethnic identity.
Similarly, Vikings in the 9th and 10th centuries may be better defined more by what they did than by their place of origin or DNA. By dropping the simplistic equation of Scandinavian with Viking, we may better understand what the early Viking Age was about and how Vikings reshaped the foundations of medieval Europe by adapting to different cultures, rather than trying to segregate them.
Donald Trump, Jews and the myth of race: How Jews gradually became “white,” and how that changed America
Until the 1940s, Jews in America were considered a separate race. Their journey to whiteness has important lessons
JONATHAN ZIMMERMAN - Salon
APRIL 9, 2017 10:00AM
Earlier this year, President Trump denounced the desecration of Jewish cemeteries. But he has said almost nothing about the murder of a black New Yorker last month by a racist military veteran.
All 100 members of the U.S. Senate signed a letter condemning bomb threats against synagogues and Jewish community centers, which were eventually traced to a teenager in Israel. We haven’t heard a similar expression of concern over the four American mosques that were burned by vandals in the first two months of 2017.
Why the double standard? Some observers have suggested that Trump is more attuned to anti-Semitism than to other forms of bigotry because his daughter and son-in-law are Jews. Others point to Trump’s immigration orders and to his earlier threats to bar Muslims from the country, which have made Islamophobia seem less reprehensible — and, possibly, more permissible — than prejudice against Jews.
But I’ve got a simpler explanation: Jews are white.
For most of American history, that wasn’t the case. Jews were considered a separate race, much as blacks and Asians are regarded today. But over time Jews became white, which made it harder for other whites to hate them.
That’s the great elephant in the room, when it comes to the tortured subject of race in America. The word “race” conjures biology, a set of inheritable — and immutable — physical characteristics. But it’s actually a cultural and social category, not a biological one, which is why it changes over time.
Until the 1940s, Jews were recorded as a distinct racial group by American immigration authorities. But many non-Jewish whites also believed that Jews shared traits with the most stigmatized race of all: African-Americans.
In his 1910 book, “The Jew and the Negro,” North Carolina minister Arthur T. Abernathy argued that the ancient Jews had mixed with neighboring African peoples. Jews’ skin color lightened when they moved to other climates, Abernathy wrote, but they still bore the same eye shapes and fingernails (yes, fingernails!) as blacks.
Both groups were disposed to sexual indulgence, Abernathy warned, a recurring theme in American racism. “Every student of sociology knows that the black man’s lust after the white woman is not much fiercer than the lust of the licentious Jew for the gentile,” Georgia politician Tom Watson wrote in 1915.
In the North, meanwhile, white racial anxieties about Jewish college students led to quotas that limited their numbers. “Scurvy kikes are not wanted,” read a poster hung on one campus in 1923. “Make New York University a White Man’s College.”
For their own part, Jews couldn’t agree if they were a race or not. Some of them warned that a separate racial classification would inevitably link them to blacks, diminishing Jews in the eyes of other Americans. But others held tight to the racial idea, which could help make the case for a Jewish homeland in Palestine.
Yet with the rise of Nazism in Europe, which was premised on Jewish racial inferiority, Jews in America stopped identifying as a race. “Scientifically and correctly speaking, there are three great races in the world: the black, yellow, and white,” a booklet distributed by the Anti-Defamation League declared in 1939. “Within the white race all the sub-races have long since been mixed and we Jews are part of the general admixture.”
After World War II, most white Americans quietly welcomed Jews into their “admixture.” Newspapers and social scientists referred to Jews as members of a religion, ethnicity or culture. But they were not called a “race,” a word that conjured up Adolf Hitler and his effort to eliminate them.
During these same years, the struggle against Nazism overseas helped jump-start the campaign for African-American civil rights at home. But blacks’ status as a separate race remained, as James Baldwin noted in a biting 1967 essay about African-Americans and Jews.
Blacks were tired of Jews telling them that the Jewish experience of prejudice was “as great as the American Negro’s suffering,” Baldwin wrote. The claim ignored racial realities, Baldwin wrote, and it fueled anti-Semitism in black communities. “The most ironical thing,” Baldwin wrote, “is that the Negro is really condemning the Jew for having become an American white man.”
That’s not an option for African-Americans or for other nonwhite racial groups in America. (It's worth noting that the racial status of Hispanics is ambiguous: Some identify as white, some as black and many as neither.) These categories are cultural creations, too, not biological ones. When Sonia Sotomayor joined the Supreme Court in 2009, news headlines hailed her as the first Hispanic justice on the court. Yet when she was growing up in the Bronx in the 1950s and 1960s she was Puerto Rican rather than “Hispanic,” a term that did not enter our racial lexicon until the 1970s.
Let’s be clear: Race exists. But it exists in our minds, not in our bodies. In our homes and schools, we need to teach the next generation of Americans to resist the false doctrine of inherent racial difference. Then, and only then, will they truly recognize everyone as part of a single race: the human one.
“Wear green on St. Patrick’s Day or get pinched.” That pretty much sums up the Irish-American “curriculum” that I learned when I was in school. Yes, I recall a nod to the so-called Potato Famine, but it was mentioned only in passing.
Sadly, today’s high school textbooks continue to largely ignore the famine, despite the fact that it was responsible for unimaginable suffering and the deaths of more than a million Irish peasants, and that it triggered the greatest wave of Irish immigration in U.S. history. Nor do textbooks make any attempt to help students link famines past and present.
Yet there is no shortage of material that can bring these dramatic events to life in the classroom. In my own high school social studies classes, I begin with Sinead O’Connor’s haunting rendition of “Skibbereen,” which includes the verse:
… Oh it’s well I do remember, that bleak December day,
The landlord and the sheriff came, to drive us all away
They set my roof on fire, with their cursed English spleen
And that’s another reason why I left old Skibbereen.
By contrast, Holt McDougal’s U.S. history textbook, The Americans, devotes a flat two sentences to “The Great Potato Famine.” Prentice Hall’s America: Pathways to the Present fails to offer a single quote from the time. The text calls the famine a “horrible disaster,” as if it were a natural calamity like an earthquake. And in an awful single paragraph, Houghton Mifflin’s The Enduring Vision: A History of the American People blames the “ravages of famine” simply on “a blight,” and the only contemporaneous quote comes, inappropriately, from a landlord, who describes the surviving tenants as “famished and ghastly skeletons.” Uniformly, social studies textbooks fail to allow the Irish to speak for themselves, to narrate their own horror.
These timid slivers of knowledge not only deprive students of rich lessons in Irish-American history, they exemplify much of what is wrong with today’s curricular reliance on corporate-produced textbooks.
First, does anyone really think that students will remember anything from the books’ dull and lifeless paragraphs? Today’s textbooks contain no stories of actual people. We meet no one, learn nothing of anyone’s life, encounter no injustice, no resistance. This is a curriculum bound for boredom. As someone who spent almost 30 years teaching high school social studies, I can testify that students will be unlikely to seek to learn more about events so emptied of drama, emotion, and humanity.
Nor do these texts raise any critical questions for students to consider. For example, it’s important for students to learn that the crop failure in Ireland affected only the potato—during the worst famine years, other food production was robust. Michael Pollan notes in The Botany of Desire, “Ireland’s was surely the biggest experiment in monoculture ever attempted and surely the most convincing proof of its folly.” But if only this one variety of potato, the Lumper, failed, and other crops thrived, why did people starve?
Thomas Gallagher points out in Paddy’s Lament that during the first winter of famine, 1846-47, as perhaps 400,000 Irish peasants starved, landlords exported 17 million pounds sterling worth of grain, cattle, pigs, flour, eggs, and poultry—food that could have prevented those deaths. Throughout the famine, as Gallagher notes, there was an abundance of food produced in Ireland, yet the landlords exported it to markets abroad.
The school curriculum could and should ask students to reflect on the contradiction of starvation amidst plenty, on the ethics of food exports amidst famine. And it should ask why these patterns persist into our own time.
More than a century and a half after the “Great Famine,” we live with similar, perhaps even more glaring contradictions. Raj Patel opens his book, Stuffed and Starved: Markets, Power and the Hidden Battle for the World’s Food System: “Today, when we produce more food than ever before, more than one in ten people on Earth are hungry. The hunger of 800 million happens at the same time as another historical first: that they are outnumbered by the one billion people on this planet who are overweight.”
Patel’s book sets out to account for “the rot at the core of the modern food system.” This is a curricular journey that our students should also be on — reflecting on patterns of poverty, power, and inequality that stretch from 19th century Ireland to 21st century Africa, India, Appalachia, and Oakland; that explore what happens when food and land are regarded purely as commodities in a global system of profit.
But today’s corporate textbook-producers are no more interested in feeding student curiosity about this inequality than were British landlords interested in feeding Irish peasants. Take Pearson, the global publishing giant. At its website, the corporation announces (redundantly) that “we measure our progress against three key measures: earnings, cash and return on invested capital.” The Pearson empire had 2011 worldwide sales of more than $9 billion—that’s nine thousand million dollars, as I might tell my students. Multinationals like Pearson have no interest in promoting critical thinking about an economic system whose profit-first premises they embrace with gusto.
As mentioned, there is no absence of teaching materials on the Irish famine that can touch head and heart. In a role play, “Hunger on Trial,” that I wrote and taught to my own students in Portland, Oregon—included at the Zinn Education Project website— students investigate who or what was responsible for the famine. The British landlords, who demanded rent from the starving poor and exported other food crops? The British government, which allowed these food exports and offered scant aid to Irish peasants? The Anglican Church, which failed to denounce selfish landlords or to act on behalf of the poor? A system of distribution, which sacrificed Irish peasants to the logic of colonialism and the capitalist market?
These are rich and troubling ethical questions. They are exactly the kind of issues that fire students to life and allow them to see that history is not simply a chronology of dead facts stretching through time.
So go ahead: Have a Guinness, wear a bit of green, and put on the Chieftains. But let’s honor the Irish with our curiosity. Let’s make sure that our schools show some respect, by studying the social forces that starved and uprooted over a million Irish—and that are starving and uprooting people today.
The Real Irish-American Story Not Taught in Schools
by Bill Bigelow - Common Dreams
Published on Friday, March 17, 2017
Roe v. Wade
Abortion was outlawed so doctors could make money
History.com:
The Supreme Court decriminalizes abortion by handing down its decision in the case of Roe v. Wade. Despite opponents’ characterization of the decision, it was not the first time that abortion became a legal procedure in the United States. In fact, for most of the country’s first 100 years, abortion as we know it today was not only not a criminal offense, it was also not considered immoral.
In the 1700s and early 1800s, the word “abortion” referred only to the termination of a pregnancy after “quickening,” the time when the fetus first began to make noticeable movements. The induced ending of a pregnancy before this point did not even have a name–but not because it was uncommon. Women in the 1700s often took drugs to end their unwanted pregnancies.
In 1827, though, Illinois passed a law that made the use of abortion drugs punishable by up to three years’ imprisonment. Although other states followed the Illinois example, advertising for “Female Monthly Pills,” as they were known, was still common through the middle of the 19th century.
Abortion itself only became a serious criminal offense in the period between 1860 and 1880. And the criminalization of abortion did not result from moral outrage. The roots of the new law came from the newly established physicians’ trade organization, the American Medical Association. Doctors decided that abortion practitioners were unwanted competition and went about eliminating that competition. The Catholic Church, which had long accepted terminating pregnancies before quickening, joined the doctors in condemning the practice.
By the turn of the century, all states had laws against abortion, but for the most part they were rarely enforced and women with money had no problem terminating pregnancies if they wished. It wasn’t until the late 1930s that abortion laws were enforced. Subsequent crackdowns led to a reform movement that succeeded in lifting abortion restrictions in California and New York even before the Supreme Court decision in Roe v. Wade.
The fight over whether to criminalize abortion has grown increasingly fierce in recent years, but opinion polls suggest that most Americans prefer that women be able to have abortions in the early stages of pregnancy, free of any government interference.
Chomsky on America's Ugly History: FDR Was Fascist-Friendly Before WWII
By Zain Raza, Noam Chomsky / Noam Chomsky's Official Site
Alternet
Noam Chomsky: Well, it was a mixed story. Roosevelt himself had a mixed attitude. For example, he was pretty supportive of Mussolini's fascism, in fact described Mussolini as "that admirable Italian gentleman." He later concluded that Mussolini had been misled by his association with Hitler and had been led kind of down the wrong path. But the American business community, the power systems in the United States were highly supportive of Mussolini.
In fact, even parts of the labor bureaucracy were. Fortune Magazine for example, the major business journal I think in 1932, had an issue with the headline, I’m quoting it: "The wops are unwopping themselves." The "wop" is a kind of a derogatory term for Italians and the "wops are finally unwopping themselves," under Mussolini they're becoming part of the civilized world. There was criticism of the Italian invasion of Ethiopia, a lot of criticism. But basically pretty supportive attitude toward Mussolini's fascism. When Germany, when Hitler took over, the attitude was more mixed.
There was concern about a potential threat, but nevertheless the general approach of the U.S., and the British even more so, was fairly supportive. So for example in 1937, the State Department described Hitler as a moderate who was holding off the dangerous forces of the left, meaning the Bolsheviks, the labor movement and so on, and of the right, namely the extremist Nazis. Hitler was kind of in the middle and therefore we should kind of support him. This is a pretty familiar stance, incidentally, like in many other cases.
George Kennan, later famous as one of the architects of post-war policy, was actually the American consul in Berlin up until Pearl Harbor. He was sending back reports, which are public, which were qualified. He said we shouldn't be too harsh in condemning the Nazis since a lot of what they are doing is kind of understandable and we could get along with them and so on, and this is one strain and a major one. But there was also plenty of criticism and condemnation. But the general attitudes were fairly mixed.
At the Munich Conference in late 1938, Roosevelt sent his most trusted adviser, Sumner Welles, to Munich, and Welles came back with a pretty positive report saying we can really work with Hitler, this conference opens the possibility of a period of peace and justice for Europe and we should work out ways to interact and deal with him. That was late 1938! And so it was quite a mixed story.
Historical Snapshot of Capitalism and Imperialism
Imperialism 101
Against Empire by Michael Parenti
A central imperative of capitalism is expansion. Investors will not put their money into business ventures unless they can extract more than they invest. Increased earnings come only with a growth in the enterprise. The capitalist ceaselessly searches for ways of making more money in order to make still more money. One must always invest to realize profits, gathering as much strength as possible in the face of competing forces and unpredictable markets...
...North American and European corporations have acquired control of more than three-fourths of the known mineral resources of Asia, Africa, and Latin America. But the pursuit of natural resources is not the only reason for capitalist overseas expansion. There is the additional need to cut production costs and maximize profits by investing in countries with cheaper labor markets. U.S. corporate foreign investment grew 84 percent from 1985 to 1990, the most dramatic increase being in cheap-labor countries like South Korea, Taiwan, Spain, and Singapore.
Because of low wages, low taxes, nonexistent work benefits, weak labor unions, and nonexistent occupational and environmental protections, U.S. corporate profit rates in the Third World are 50 percent greater than in developed countries. Citibank, one of the largest U.S. firms, earns about 75 percent of its profits from overseas operations. While profit margins at home sometimes have had a sluggish growth, earnings abroad have continued to rise dramatically, fostering the development of what has become known as the multinational or transnational corporation. Today some four hundred transnational companies control about 80 percent of the capital assets of the global free market and are extending their grasp into the ex-communist countries of Eastern Europe...
...
Myths of Underdevelopment
The impoverished lands of Asia, Africa, and Latin America are known to us as the "Third World," to distinguish them from the "First World" of industrialized Europe and North America and the now largely defunct "Second World" of communist states. Third World poverty, called "underdevelopment," is treated by most Western observers as an original historic condition. We are asked to believe that it always existed, that poor countries are poor because their lands have always been infertile or their people unproductive.
In fact, the lands of Asia, Africa, and Latin America have long produced great treasures of foods, minerals and other natural resources. That is why the Europeans went through all the trouble to steal and plunder them. One does not go to poor places for self-enrichment. The Third World is rich. Only its people are poor—and it is because of the pillage they have endured.
The process of expropriating the natural resources of the Third World began centuries ago and continues to this day. First, the colonizers extracted gold, silver, furs, silks, and spices, then flax, hemp, timber, molasses, sugar, rum, rubber, tobacco, calico, cocoa, coffee, cotton, copper, coal, palm oil, tin, iron, ivory, ebony, and later on, oil, zinc, manganese, mercury, platinum, cobalt, bauxite, aluminum, and uranium. Not to be overlooked is that most hellish of all expropriations: the abduction of millions of human beings into slave labor.
Through the centuries of colonization, many self-serving imperialist theories have been spun. I was taught in school that people in tropical lands are slothful and do not work as hard as we denizens of the temperate zone. In fact, the inhabitants of warm climates have performed remarkably productive feats, building magnificent civilizations well before Europe emerged from the Dark Ages. And today they often work long, hard hours for meager sums. Yet the early stereotype of the "lazy native" is still with us. In every capitalist society, the poor—both domestic and overseas—regularly are blamed for their own condition.[...]
Read more at http://www.michaelparenti.org/Imperialism101.html
Why Allen Dulles Killed the Kennedys
by David Swanson - Freepress
...JFK and the Unspeakable depicts Kennedy as getting in the way of the violence that Allen Dulles and gang wished to engage in abroad. He wouldn't fight Cuba or the Soviet Union or Vietnam or East Germany or independence movements in Africa. He wanted disarmament and peace. He was talking cooperatively with Khrushchev, as Eisenhower had tried prior to the U2-shootdown sabotage. The CIA was overthrowing governments in Iran, Guatemala, the Congo, Vietnam, and around the world. Kennedy was getting in the way.
The Devil's Chessboard depicts Kennedy, in addition, as himself being the sort of leader the CIA was in the habit of overthrowing in those foreign capitals. Kennedy had made enemies of bankers and industrialists. He was working to shrink oil profits by closing tax loopholes, including the "oil depletion allowance." He was permitting the political left in Italy to participate in power, outraging the extreme right in Italy, the U.S., and the CIA. He aggressively went after steel corporations and prevented their price hikes. This was the sort of behavior that could get you overthrown if you lived in one of those countries with a U.S. embassy in it.
Yes, Kennedy wanted to eliminate or drastically weaken and rename the CIA. Yes he threw Dulles and some of his gang out the door. Yes he refused to launch World War III over Cuba or Berlin or anything else. Yes he had the generals and warmongers against him, but he also had Wall Street against him.
Of course "politicians who ever try to get out of line" are now, as then, but more effectively now, handled first by the media. If the media can stop them or some other maneuver can stop them (character assassination, blackmail, distraction, removal from power) then violence isn't required.
The fact that Kennedy resembled a coup target, not just a protector of other targets, would be bad news for someone like Senator Bernie Sanders if he ever got past the media, the "super delegates," and the sell-out organizations to seriously threaten to take the White House. A candidate who accepts the war machine to a great extent and resembles Kennedy not at all on questions of peace, but who takes on Wall Street with the passion it deserves, could place himself as much in the cross-hairs of the deep state as a Jeremy Corbyn who takes on both capital and killing.
Accounts of the escapades of Allen Dulles, and the dozen or more partners in crime whose names crop up beside his decade after decade, illustrate the power of a permanent plutocracy, but also the power of particular individuals to shape it. What if Allen Dulles and Winston Churchill and others like them hadn't worked to start the Cold War even before World War II was over? What if Dulles hadn't collaborated with Nazis and the U.S. military hadn't recruited and imported so many of them into its ranks? What if Dulles hadn't worked to hide information about the holocaust while it was underway? What if Dulles hadn't betrayed Roosevelt and Russia to make a separate U.S. peace with Germany in Italy? What if Dulles hadn't begun sabotaging democracy in Europe immediately and empowering former Nazis in Germany? What if Dulles hadn't turned the CIA into a secret lawless army and death squad? What if Dulles hadn't worked to end Iran's democracy, or Guatemala's? What if Dulles' CIA hadn't developed torture, rendition, human experimentation, and murder as routine policies? What if Eisenhower had been permitted to talk with Khrushchev? What if Dulles hadn't tried to overthrow the President of France? What if Dulles had been "checked" or "balanced" ever so slightly by the media or Congress or the courts along the way?
These are tougher questions than "What if there had been no Lee Harvey Oswald?" The answer to that is, "There would have been another guy very similar to serve the same purpose, just as there had been in the earlier attempt on JFK in Chicago." But "What if there had been no Allen Dulles?" looms large enough to suggest the possible answer that we would all be better off, less militarized, less secretive, less xenophobic. And that suggests that the deep state is not uniform and not unstoppable. Talbot's powerful history is a contribution to the effort to stop it.
I hope Talbot speaks about his book in Virginia, after which he might stop saying that Williamsburg and the CIA's "farm" are in "Northern Virginia." Hasn't Northern Virginia got enough to be ashamed of without that?
The Real (Despicable) Thomas Jefferson
Michael Coard - philly tribune
During the second week of April this year, there were campaign exhibitions here in Philly for two presidential candidates. During the second week of April 2014, there was a controversial exhibition here in Philly for one presidential criminal.
Two years ago, the National Constitution Center began its six-month presentation about Thomas Jefferson entitled “Slavery at Jefferson’s Monticello.” And to my surprise, the organizers didn’t engage in the customary American practice of sweeping presidential complicity with slavery under the rug. In fact, they included the word “Slavery” in the title and addressed “the stories of six slave families who ‘lived’ and ‘worked’ at Jefferson’s plantation.” Nice, huh? Well, yes. But only kinda/sorta. By that, I mean they didn’t really “live.” Instead, they actually “suffered and survived.” And they didn’t really “work.” Instead, they actually “slaved and toiled.” But let’s not quibble over semantics. Instead, let’s go to the heart of the matter by enlightening you about who, and what, Thomas Jefferson truly was. Here are ten things you didn’t know about him:
1. He was a lifetime slaveholder. Thomas was the son of Peter Jefferson, a Virginia landowning slaveholder who died in 1757, leaving the eleven-year-old with a massive estate. Ten years later, he formally inherited 52 Black human beings. When he authored the Declaration of Independence in 1776, he held 175 Black men, women, and children in bondage. By 1822, he had increased that number to 267.
2. He was a hypocrite. While writing “We hold these truths to be self-evident, that all men are created equal, that they are endowed by their Creator with certain unalienable Rights, that among these are Life, Liberty and the pursuit of Happiness…,” he enslaved hundreds of human beings. See item one above.
3. He was a rapist. As U.S. Envoy and Minister to France, Jefferson began living there periodically from 1784 to 1789. He took with him his oldest daughter Martha and a few of those whom he enslaved, including James Hemings. In 1787, he requested that his daughter Polly join him. This meant that Polly’s enslaved chambermaid, 14-year-old seamstress Sally Hemings (James’ younger sister), was to accompany her. Both Sally and James were among the six mulatto offspring of Jefferson’s father-in-law, John Wayles, and his enslaved “domestic servant” Betty Hemings. Sally and James were half-siblings of Thomas’ late wife, Martha Wayles Skelton Jefferson. Thomas, after repeatedly raping Sally while in Paris, impregnated her. Her first child died after she returned to America. But she had six more of Thomas’ children at Monticello.
4. He was an incestuous pedophile. See item three above.
5. He was a “Back To Africa” proponent (but not really). This would have been a good thing if his purpose was the Afrocentric goal of reuniting Blacks with their roots. But it was a bad thing because, as Peter S. Onuf, Thomas Jefferson Memorial Foundation Professor Emeritus, notes, it was a scheme by Jefferson to conceal his “shadow family.”
6. He was a legislative racist. As pointed out by Joyce Oldham Appleby, Professor Emerita of History at UCLA and former President of the Organization of American Historians and the American Historical Association, as well as by Arthur M. Schlesinger Jr., former Professor of History at Harvard University and Professor Emeritus at CUNY Graduate Center, Jefferson opposed the practice of slaveholders freeing their enslaved because he claimed it would encourage rebellion. And, as noted by John E. Ferling, Professor Emeritus of History at University of West Georgia, after Jefferson was elected to the Virginia House of Burgesses in 1769, he attempted to introduce laws that essentially would have banned free Blacks from entering or exiting that state and would have banished children whose fathers were of African origin. He also tried to expel white women who had children by Black men. After being elected Governor in 1779, he signed a bill to encourage enlistment in the Revolutionary War by compensating white men by giving them, among other things, “a healthy sound Negro.”
7. He was an international racist. As Secretary of State in 1795, he gave $40,000 and one thousand firearms to colonial French slaveholders in Haiti in an attempt to defeat Toussaint L’Ouverture’s slave rebellion. As President, he supported French plans to resume power, lent France $300,000 “for relief of whites on the island,” and in 1804 refused to recognize Haiti as a sovereign republic after its military victory. Two years later, he imposed a trade embargo.
8. He was a blatantly ignorant racist. In his 1785 book entitled “Notes on the State of Virginia,” he wrote about “the preference of the ‘oran-outan’ (i.e., an ape-like creature) for the Black women over those of … (its) own species.” He also wrote that Blacks have “a very strong and disagreeable odor” and that they “are inferior to the whites ...”
9. He was a liar. His friend from the American Revolution, Polish nobleman Tadeusz Kosciuszko, came to America in 1798 to receive back pay for his military service. He then wrote a will directing Jefferson to use all of Kosciuszko’s money and land in the U.S. to “free and educate slaves.” Jefferson agreed to do so. After Kosciuszko died in 1817, Jefferson refused to free or educate any of them.
10. Black labor built Monticello. Beginning in 1768, Jefferson forced many in his enslaved population, including skilled Black carpenters, to do the torturous work of building his palatial plantation, known as Monticello.
The words from David Walker’s Appeal, written in 1829, and the words of Christopher James Perry Sr., founder of the Tribune in 1884, are the inspiration for my “Freedom’s Journal” columns. In order to honor that pivotal nationalist abolitionist and that pioneering newspaper giant, as well as to inspire today’s Tribune readers, each column ends with Walker and Perry’s combined quote, along with my inserted voice, as follows: I ask all Blacks “to procure a copy of this… (weekly column) for it is designed… particularly for them” so they can “make progress… against (racist) injustice.”
The secret history of Hitler’s ‘Black Holocaust’
The fact that we officially commemorate the Holocaust on January 27, the date of the liberation of Auschwitz, means that remembrance of Nazi crimes focuses on the systematic mass murder of Europe’s Jews.
The other victims of Nazi racism, including Europe’s Sinti and Roma, are now routinely named in commemoration, but not all survivors have had equal opportunities to have their stories heard.
One group of victims who have yet to be publicly memorialized is black Germans.
All those voices need to be heard, not only for the sake of the survivors, but because we need to see how varied the expressions of Nazi racism were if we are to understand the lessons of the Holocaust for today.
When Hitler came to power in 1933, there were understood to have been some thousands of black people living in Germany—they were never counted and estimates vary widely. At the heart of an emerging black community was a group of men from Germany’s own African colonies (which were lost under the peace treaty that ended World War I) and their German wives.
They were networked across Germany and abroad by ties of family and association and some were active in Communist and anti-racist organizations. Among the first acts of the Nazi regime was the suppression of black political activism. There were also 600 to 800 children fathered by French colonial soldiers—many, though not all, African—when the French army occupied the Rhineland as part of the peace settlement after 1919. French troops were withdrawn in 1930 and the Rhineland was demilitarized until Hitler stationed German units there in 1936.
Denial of Rights and Work
The 1935 Nuremberg Laws stripped Jews of their German citizenship and prohibited them from marrying or having sexual relations with “people of German blood.”
A subsequent ruling confirmed that black people (like “gypsies”) were to be regarded as being “of alien blood” and subject to the Nuremberg principles. Very few people of African descent had German citizenship, even if they were born in Germany, but this became irreversible when they were given passports that designated them as “stateless Negroes.”
In 1941, black children were officially excluded from public schools, but most of them had suffered racial abuse in their classrooms much earlier. Some were forced out of school and none were permitted to go on to university or professional training. Published interviews and memoirs by both men and women, unpublished testimony and postwar compensation claims testify to these and other shared experiences.
Employment prospects that were already poor before 1933 got worse afterward. Unable to find regular work, some were drafted for forced labor as “foreign workers” during World War II. Films and stage shows making propaganda for the return of Germany’s African colonies became one of the few sources of income, especially after black people were banned from other kinds of public performance in 1939.
When SS leader Heinrich Himmler undertook a survey of all black people in Germany and occupied Europe in 1942, he was probably contemplating a round-up of some kind. But there was no mass internment.
Research in camp records and survivor testimony has so far come up with around 20 black Germans who spent time in concentration camps and prisons—and at least one who was a euthanasia victim. The one case we have of a black person being sent to a concentration camp explicitly for being a Mischling (mulatto)—Gert Schramm, interned in Buchenwald aged 15—comes from 1944.
Instead, the process that ended with incarceration usually began with a charge of deviant or antisocial behavior. Being black made people visible to the police, and it became a reason not to release them once they were in custody.
In this respect, we can see black people as victims not of a peculiarly Nazi racism, but of an intensified version of the kinds of everyday racism that persist today.
Sterilization: An Assault on Families
It was the Nazi fear of “racial pollution” that led to the most common trauma suffered by black Germans: the breakup of families. “Mixed” couples were harassed into separating. When others applied for marriage licenses, or when a woman was known to be pregnant or had a baby, the black partner became a target for involuntary sterilization.
In a secret action in 1937, some 400 of the Rhineland children were forcibly sterilized. Other black Germans went into hiding or fled the country to escape sterilization, while news of friends and relatives who had not escaped intensified the fear that dominated people’s lives.
The black German community was new in 1933; in most families the first generation born in Germany was just coming of age. In that respect it was similar to the communities in France and Britain that were forming around families founded by men from the colonies.
Nazi persecution broke those families and the ties of community. One legacy of that was a long silence about the human face of Germany’s colonial history: the possibility that black and white Germans could share a social and cultural space.
That silence helps to explain Germans’ mixed responses to today’s refugee crisis. The welcome offered by the German chancellor, Angela Merkel, and many ordinary Germans has given voice to the liberal humanitarianism that was always present in German society and was reinforced by the lessons of the Holocaust.
The reaction against refugees reveals the other side of the coin: Germans who fear immigration are not alone in Europe. But their anxieties draw on a vision that has remained very powerful in German society since 1945: the idea that however deserving they are, people who are not white cannot be German.
Research in camp records and survivor testimony has so far come up with around 20 black Germans who spent time in concentration camps and prisons—and at least one who was a euthanasia victim. The one case we have of a black person being sent to a concentration camp explicitly for being a Mischling (mulatto)—Gert Schramm, interned in Buchenwald aged 15—comes from 1944.
Instead, the process that ended with incarceration usually began with a charge of deviant or antisocial behavior. Being black made people visible to the police, and it became a reason not to release them once they were in custody.
In this respect, we can see black people as victims not of a peculiarly Nazi racism, but of an intensified version of the kinds of everyday racism that persist today.
Sterilization: An Assault on Families
It was the Nazi fear of “racial pollution” that led to the most common trauma suffered by black Germans: the breakup of families. “Mixed” couples were harassed into separating. When others applied for marriage licenses, or when a woman was known to be pregnant or had a baby, the black partner became a target for involuntary sterilization.
In a secret action in 1937, some 400 of the Rhineland children were forcibly sterilized. Other black Germans went into hiding or fled the country to escape sterilization, while news of friends and relatives who had not escaped intensified the fear that dominated people’s lives.
The black German community was new in 1933; in most families the first generation born in Germany was just coming of age. In that respect it was similar to the communities in France and Britain that were forming around families founded by men from the colonies.
Nazi persecution broke those families and the ties of community. One legacy of that was a long silence about the human face of Germany’s colonial history: the possibility that black and white Germans could share a social and cultural space.
That silence helps to explain Germans’ mixed responses to today’s refugee crisis. The welcome offered by the German chancellor, Angela Merkel, and many ordinary Germans has given voice to the liberal humanitarianism that was always present in German society and was reinforced by the lessons of the Holocaust.
The reaction against refugees reveals the other side of the coin: Germans who fear immigration are not alone in Europe. But their anxieties draw on a vision that has remained very powerful in German society since 1945: the idea that however deserving they are, people who are not white cannot be German.
The Thirteenth Amendment and Slavery in a Global Economy
Tobias Barrington Wolff
From Race, Racism, and Law:
The Thirteenth Amendment to the U.S. Constitution provides that "Neither slavery nor involuntary servitude, except as a punishment for crime whereof the party shall have been duly convicted, shall exist within the United States, or any place subject to their jurisdiction." Its language is capacious and its mandate broad. The prohibition embodied in the Amendment, not limited to the form of chattel slavery peculiar to pre-Civil War America, forbids almost all forms of compelled labor within the physical bounds of the United States and its possessions. But what of slave labor outside U.S. territory? When U.S. citizens participate in slave practices in foreign jurisdictions, does the Thirteenth Amendment impose any interdiction? Can a U.S. citizen own a slave, so long as he does not bring the enslaved person to American shores?
These questions are emerging as matters of great importance, for the participation of U.S. citizens in foreign slave practices is on the rise. With increasing frequency, U.S.-based multinational corporations are carrying on their operations in some countries through the deliberate exploitation of involuntary or slave labor. The globalization of industry has carried with it a globalization of labor exploitation, occurring outside the ordinary jurisdiction of U.S. enforcement authorities. In countries such as Burma, Mauritania, Pakistan, and Ivory Coast, outright practices of slave labor have arisen, in varying forms and with varying levels of corporate involvement. American participation in such exploitation is often carried out indirectly--through intermediaries, with plausible deniability. And yet the abuse of foreign laborers under conditions of slavery is assuming an increasingly important position in the economics of some U.S. industries--notable among them the resource extraction and manufacturing industries, where the use of cheap, expendable, involuntary labor has markedly increased profitability.
This development in the foreign labor practices of U.S. entities heralds a new era of challenge and transformation for the Thirteenth Amendment and its prohibition on the existence of slavery or involuntary servitude. It has become necessary to reexamine the range of activities in American industry, and American participation in global industry, that the Amendment reaches. The inquiry is long overdue. Despite the importance of the principle that the Thirteenth Amendment embodies, its doctrinal landscape is severely underdeveloped and has not yet been meaningfully translated into the present industrial context.
The Amendment has faced such challenges before. One of the first came around the turn of the twentieth century, in response to the attempts of post-Civil War landowners and industrialists to reinstate the practical realities of slavery in a more legally palatable form through the practice of peonage. No longer able to exploit slave labor as a formal institution, some employers pressed the law into service in the decades following emancipation, enacting statutes that purported to aim at such evils as debt default and fraud but had the effect of tying disempowered workers to forced terms of labor under threat of prosecution and imprisonment. The Supreme Court rose to this challenge, elevating substance over form and striking down these peonage schemes. In doing so, it carried forward into a new industrial context the Thirteenth Amendment's dual promise to emancipate the slave laborer within American industry and to emancipate American industry from slave labor.
The present era of globalization has brought with it the next logical step in this progression: the pressing into involuntary service of foreign laborers by U.S.-based multinational entities. Corporations based in the United States can now export the slave dependent elements of their business operations to foreign lands and then retrieve the fruits of those operations for domestic use and profit. With that step, we are once again seeing the reintroduction of slave labor into American industry. It has thus become necessary once again to translate the command of the Thirteenth Amendment for a new industrial context.
I choose the language of translation advisedly. As Professor Guyora Binder has observed, anyone seeking to articulate a coherent approach to modern interpretations of the Thirteenth Amendment must address difficult questions of history. The enactment of the Reconstruction Amendments undermined the precepts on which earlier approaches to constitutional interpretation had rested, throwing into question the proper interpretive approach to the Amendments themselves. "It was the Reconstruction Amendments' command to abolish one of American culture's defining customs," Professor Binder has observed, "that rendered them peculiarly uninterpretable."
In the case of the Fourteenth Amendment, this interpretive dilemma has already played out on the constitutional stage. The road from Plessy v. Ferguson to Brown v. Board of Education marked a journey between two distinct visions of the relationship between tradition and constitutional analysis. In Plessy, the Court explicitly rested its rejection of the equal protection challenge to legally enforced segregation upon "the established usages, customs and traditions of the people." Under that tradition, the Court explained, a separation of the races in public facilities could be defended as "reasonable, and . . . enacted in good faith for the promotion of the public good." Custom and usage were a sufficient response to a constitutional challenge under the dispensation to which the Plessy majority subscribed. One of the revolutionary changes wrought by Brown was a deliberate rejection of this interpretive method. "In approaching this problem [of segregation]," the Court wrote in Brown,
we cannot turn the clock back to 1868 when the Amendment was adopted, or even to 1896 when Plessy v. Ferguson was written. We must consider public education in the light of its full development and its present place in American life throughout the Nation. Only in this way can it be determined if segregation in public schools deprives these plaintiffs of the equal protection of the laws.
Thus, in concluding that legally enforced segregation in educational facilities is "inherently unequal," the Brown Court dramatically rejected custom and tradition, holding that the Fourteenth Amendment embodied substantive principles that do not automatically defer to established social norms. A similar observation may be made about the Fifteenth Amendment, which has occupied an interpretive landscape that has recapitulated that of the Fourteenth in most relevant respects.
In the case of the Thirteenth Amendment, the interpretive problem has at once been more straightforward and more opaque. There has never been any question that the Amendment, in eradicating slavery and elevating emancipation to the status of a constitutional imperative, embodied a substantive rejection of one of America's most pervasive customs and traditions. In that respect, the Thirteenth Amendment directly implemented a reshaping of the constitutional landscape that would only take hold in the other Reconstruction Amendments after the passage of ninety more years. But in a broader sense, the Thirteenth Amendment has yet to travel the road marked out by Plessy and Brown. Consider Robertson v. Baldwin, one of the early post-Reconstruction Thirteenth Amendment decisions, which the Court handed down in the Term following Plessy. In Robertson, a merchant seaman challenged a federal statute that authorized the imprisonment and forcible return of sailors who wished to leave the employ of their vessels. In rejecting this Thirteenth Amendment claim, the majority embraced the same interpretive method that it had recently deployed in its Fourteenth Amendment analysis in Plessy. Despite the radical rejection of tradition around the subject of slavery and labor that was inherent in the Thirteenth Amendment itself, the Robertson Court relied uncritically upon pre-Civil War common law authorities to carve out a substantive exception to the scope of the Amendment's command, concluding that "the amendment was not intended to introduce any novel doctrine with respect to certain descriptions of service which have always been treated as exceptional." Indeed, Robertson includes a vigorous dissent by Justice Harlan, who staked out the same interpretive ground that he had occupied in his Plessy dissent a year earlier, rejecting the use to which the Robertson majority put custom and tradition as inappropriate following the enactment of the Thirteenth Amendment.
To articulate a Thirteenth Amendment jurisprudence that is both internally coherent and in step with the interpretive method now firmly established for the Fourteenth and Fifteenth Amendments, one must avoid a myopic hindsight that views the Amendment as accomplishing nothing more than the constitutionalization of emancipation. This is especially so in seeking out sources to identify the core values that the Amendment embodies. Binder poses the problem in the following terms: "When the Constitution condemns society, where can we turn for aid in construing it? What aspects of American society authorize the Thirteenth Amendment and what aspects are amended by it? What was the essential feature of the slavery that the Thirteenth Amendment commands us to disestablish?" The Court made an initial gesture toward answering these interpretive questions early in the twentieth century when it employed the Thirteenth Amendment to strike down the peonage and "antifraud" statutes mentioned above. But since then, despite the interpretive revolution in the other Reconstruction Amendments heralded by Brown and its progeny, the Court has developed no approach to two basic questions of interpretation: "What was the essential feature of the slavery that the Thirteenth Amendment commands us to disestablish"; and "[W]here can we turn for aid in construing it?"
This Article examines the most pressing contemporary application of these questions: the increasingly important role played by multinational corporate entities in forced labor practices around the globe. In doing so, it offers an approach to addressing the broader implications of the Thirteenth Amendment's interpretive challenge. My principal contention is that the Thirteenth Amendment forbids the deliberate incorporation of slave labor into American industry. More precisely, I contend that the knowing use of slave labor by U.S.-based entities in their foreign operations constitutes the presence of "slavery" within the United States, as that term is used in the Thirteenth Amendment, and hence that this practice renders such U.S. entities subject to the prohibitory authority of American courts through a private civil action. The term "slavery" as it is used in the Amendment entails more than the physical presence of enslaved individuals. Slavery is a multilayered practice. It creates a distinctive form of interpersonal relationship. It depends upon the existence of interrelated, supporting institutions for its sustainability. And it arises not by happenstance, but in response to the urging of industries that benefit from its distinctive features and intentionally create a market for it. Those who drafted the Thirteenth Amendment understood all three of these aspects of slavery--the interpersonal, the institutional, and the industrial--to be vital elements of the practice that they sought to eradicate with the Amendment's enactment. In this Article, I hope to begin the process of translating that understanding into the language of the global economy and, in the process, to lay the foundation for a more modern and salient Thirteenth Amendment jurisprudence.[...]