How We’ve Failed the Promise of Making “Genocide” a Crime
https://www.motherjones.com/politics/2024/06/israel-palestine-gaza-genocide-war-crimes-icj-south-africa-raphael-lemkin/
Mon, 03 Jun 2024

Israel declared its independence in 1948. That same year, the United Nations adopted the convention that defined genocide as a crime. The tension between these two “never agains” was there from the start.

The word “genocide” was coined in 1944 by Raphael Lemkin, a Jewish lawyer from a Polish family, who combined the Greek word for a people (genos) with the Latin suffix for killing (-cide). At its most basic, genocide meant systematically destroying another group. Lemkin laid it out as a two-phase, often colonial process in his 1944 book, Axis Rule in Occupied Europe: First, the oppressor erases the “national pattern” of the victim. Then, it imposes its own. Genocide stretched from antiquity (Carthage) to modern times (Ireland).

“The term does not necessarily signify mass killings although it may mean that,” Lemkin explained in a 1945 article. “More often it refers to a coordinated plan aimed at destruction of the essential foundations”—cultural institutions, physical structures, the economy—“of the life of national groups.” The “machine gun” was merely a “last resort.”

Lemkin was a lawyer, not a sociologist. By birthing the term “genocide,” he was not trying to taxonomize the horrors of war. Instead, Lemkin—who lost 49 family members in the Holocaust—hoped that by naming the crime, he could help stop it. Nazi terror could not simply be Germany’s “internal problem.” With genocide, Lemkin hoped to give legal and moral weight to international intervention. He hoped to bring into being an offense that could be policed and, in turn, stopped in a new and supposedly civilized world.

Today, as Israel stands accused by South Africa of genocide before the International Court of Justice for the methods used in its war on Gaza, it is worth recalling Lemkin’s arguments. The question of Israel’s actions has been a narrow one: Has the killing met the criteria for genocide under current international law? But Lemkin’s broader conception of the term—though it has been chipped away at by courts and has faded from public memory—has been less discussed.

The sad reality is that Israel’s actions likely met Lemkin’s original definition long before the war on Gaza. Starting in 1947, roughly 700,000 Palestinians fled or were expelled by Israel and barred from returning. After the 1967 war, Israel began occupying the remainder of what was once Palestine. It has since settled hundreds of thousands of people on that land, while subjecting Palestinians to what international human rights groups increasingly consider to be a system of apartheid. The goal of its settlement policy has been clear: to replace one cultural fabric with another.

Israel is not the only nation whose actions fit Lemkin’s conception of genocide. The same could be said of the formation of the United States and the mass slaughter of Native Americans. (In fact, Lemkin listed it as a textbook case.) What is different now is a more obvious hypocrisy after decades of international governance designed to create a supposedly new, rules-based order.

“Genocide” represented Lemkin’s desire to move toward this internationally policed peace. Douglas Irvin-Erickson, a George Mason professor who wrote an intellectual biography of Lemkin, said Lemkin aspired to a form of world citizenship that reflected his “stunningly broad indictment of oppressive state powers.”

It’s no surprise then that when making it a crime after World War II, nations were careful to protect themselves. The Soviet Union removed political groups from those that could be victims of genocide, to secure a free hand for its purges of dissidents. The United States kept Lemkin’s ideas about cultural genocide out, lest its Jim Crow laws put it in violation. The convention was a “lynching bill in disguise,” in the words of a Louisiana segregationist. (Congress only ratified the genocide treaty in 1988, after Ronald Reagan caused a backlash by laying a wreath at a German military cemetery that included the graves of 49 members of the SS.)

“Outlawing genocide becomes this marker of a civilized society,” Irvin-Erickson explained. “But at the same time, [Lemkin] is watching the delegates who are negotiating the Genocide Convention, and they’re literally writing their own genocides out of the law.” His greatest allies came from what is now called the Global South—the nations that had been, or reasonably feared becoming, victims.

Since the Genocide Convention’s adoption, international courts have arrived at a narrow reading of the already narrow interpretation of Lemkin’s concept, says Leila Sadat, the James Carr Professor of International Criminal Law at the Washington University in St. Louis School of Law. The emphasis of the law is on determining whether a country or individual has killed massive numbers of a group of people, and whether they did so with a provable intent to destroy that group. This poses a problem for prosecutors, since most perpetrators of genocide are not as transparent as Adolf Hitler.

This winnowing of what counts as genocide would have deeply frustrated Lemkin. As Irvin-Erickson has written, genocide was “not a spontaneous occurrence” for Lemkin but a “process that begins long before and continues long after the physical killing of the victims.” Melanie O’Brien, a visiting professor at the University of Minnesota’s Center for Holocaust and Genocide Studies, emphasized that the dehumanization that leads to one group killing another en masse is not a prelude to genocide but a part of it.

But because the definition has been narrowed, Rwanda has been, according to international judges, the most clear-cut case of genocide since the Holocaust. For the horrors that occurred in Bosnia, a seemingly textbook genocide of Bosnian Muslims at the hands of Slobodan Milošević and fellow Serbs, only the 1995 slaughter of 8,000 men and boys at Srebrenica cleared the bar as an act of genocide in international trials of Serbian war criminals. (Milošević died in prison before any conviction could be secured.)

In the eyes of Sadat, one of the world’s leading experts on international criminal law, the Genocide Convention has become more of “a monument to the Holocaust” than a tool that can be effectively deployed in court. Still, she says South Africa may prevail in clearing the “very high bar” needed to prove a charge of genocide in court due to the scale of death in Gaza, the extreme rhetoric of top Israeli officials, and Israel’s decision to restrict food and other humanitarian assistance. “A question I have is: Is this a Srebrenica moment?” she says. “And I fear that we may well be looking at exactly that.”

In May, Aryeh Neier, a co-founder of Human Rights Watch, concluded that Israel’s actions in Gaza have crossed the line. Neier wrote in the New York Review of Books that he initially refrained from using the term but that the obstruction of aid into Gaza had convinced him: “Israel is engaged in genocide.”

The situation is so grave that, even under the current constrained definition, the judges of the International Court of Justice may eventually agree. In the face of Hamas’ brutal attack and its officials’ repeated calls to annihilate Israel, the Netanyahu government has turned to physical destruction—a technique that Lemkin considered to be the “last and most effective phase of genocide”—at a scale that is unprecedented in its history. More than 36,000 Palestinians have been killed in Gaza, most of them civilians. The deaths of thousands more Palestinians trapped under the rubble are believed to still be uncounted. A famine has begun. 

Karim Khan, the chief prosecutor of the International Criminal Court, is now seeking warrants for the arrest of Prime Minister Benjamin Netanyahu, Defense Minister Yoav Gallant, and Hamas’ leaders for war crimes and crimes against humanity. But Israel remains undeterred. In late May, the International Court of Justice ordered Israel to halt its assault on Rafah. Days later, the Israeli military used American bombs in a strike on Rafah that killed dozens of civilians. Lemkin saw something like this ineffectiveness of international governance coming.

“Better laws are made by people with greater hearts,” Lemkin wrote about how his original definition of genocide had been undermined in its codification. “They want non­enforceable laws with many loopholes in them, so that they can manage life like currency in a bank.”

“Abortionist”: The Label That Turns Healthcare Workers Into Criminals
https://www.motherjones.com/politics/2024/04/abortionist-the-label-that-turns-healthcare-workers-into-criminals/
Fri, 12 Apr 2024

In 2007, after Paul Ross Evans pleaded guilty to leaving a bomb outside of a women’s health clinic in Austin, he assured the judge: He never meant for anyone to get hurt. “Except,” he clarified, “for the abortionists.”

For almost two centuries, the moniker “abortionist” has branded those who help terminate pregnancies as illegitimate, dangerous, and, in turn, allowable targets of violence. Before Roe v. Wade, the label turned midwives and doctors into criminals to be cracked down on by the state. After the 1973 decision, right-wing movements continued to deploy the term to imply only back-alley doctors performed abortions.

In 2022, the sobriquet showed up once more in the halls of power: “Abortionist” was used four times in the Dobbs v. Jackson Women’s Health Organization decision, channeling a fraught history.

Until the late 1800s, abortion and reproductive health were primarily handled by women—midwives, many of whom were Black, Indigenous, or immigrants. As medicine professionalized, male doctors viewed this skilled group as a threat to their business. Birth, they argued, ought to take place in a hospital. “The midwife is a relic of barbarism,” Dr. Joseph DeLee, a prominent 20th century obstetrician, proclaimed, “a drag on the progress of the science and art of obstetrics.”

The restructuring of gynecological medicine went hand in hand with a budding movement to criminalize abortion. In 1860, governors of every state received a letter from the president of a young organization, the American Medical Association. Ghostwritten by Horatio Storer, a Harvard-educated surgeon, the letter was part of an AMA campaign touting a new idea: Abortion should be illegal because life begins at conception—not, as previous laws considered, at “quickening,” when fetal movements are first detected. Under this logic, as Storer made it his mission to convince the masses, practically all abortions should be a crime.

A key part of the propaganda was to insist most abortions could not be health care. Following an 1864 AMA meeting in New York, Storer took up the argument in a famous essay. If a “practitioner of any standing in the profession has been known, or believed, to be guilty of producing abortion, except absolutely to save a woman’s life,” he explained, the physician “immediately and universally [would be] cast from fellowship, in all cases losing the respect of his associates.” The message was clear: We are doctors and they are abortionists. (The AMA awarded Storer its Gold Medal for the article.)

His argument prevailed. During the late 1800s, statehouses wrote new legislation banning abortion, police stations filled with local practitioners offering the procedure, and newsrooms took up the cause, relegating reproductive care to the crime blotter. (The first known usage of “abortionist” comes courtesy of an 1843 article in the New York Herald noting the arrest of Madame Costello—known for her euphemistic newspaper ads for women, promising to help “those who wish to be treated for obstruction of their monthly periods.”) By the early 1900s, abortion was illegal in every state.

More than a century later, we’re once again in an era of reproductive criminalization, and “abortionist” has reemerged in prominent places. In an April 2023 ruling that stayed the FDA’s approval of the abortion pill mifepristone, US District Judge Matthew Kacsmaryk mentions the term 11 times. Last year, Fox News called doctors “late-term abortionist[s].” And Southern Baptist Convention President Bart Barber wrote, following Dobbs, that “the abortionist is the murderer.”

A moniker that supposedly condemns violence is being used once more to condone it. In this landscape, anyone who—to borrow the language of Texas’ restrictive law—“aids or abets” abortion can be seen as an abortionist.

Grace McGarry was at work in the Texas clinic the day Evans planted the bomb there. She was not performing surgeries; she was a patient advocate. Yet, the explosives could have killed her, too. “Oh, he means me,” she later realized, after Evans told the court he had wanted to hurt the abortionists. “He doesn’t just mean my doctors. He means me.”

“The Algorithm” Does Not Exist
https://www.motherjones.com/politics/2024/01/the-algorithm-social-media-facebook-technology-the-apparatus/
Mon, 29 Jan 2024

In 2009, when Facebook changed its newsfeed significantly for the first time, there wasn’t much uproar over “the algorithm.” Now we’re all talking about it—whatever “it” is. The algorithm and its ramifications have been the focus of congressional hearings and scholarly debates. In an article on the collapse of Twitter, writer Willy Staley noted “vague concerns about ‘the algorithm,’ the exotic mathematical force accused of steering hypnotized users into right-wing extremism, or imprisoning people in a cocoon of smug liberalism, or somehow both.”

But “the algorithm” does not exist. And widespread use of the phrase implies a false hope that we can fully understand our dizzying information system. If it were only the algorithm on YouTube radicalizing us, or the algorithm on Facebook weaponizing misinformation, then we would know how to fix these things. We would just need regulators to pressure Mark Zuckerberg into fiddling with some code, and things would go back to normal.

The truth is more unsettling: We are living with technology moving at an inhuman speed, operating at scales simultaneously smaller than we can detect and larger than anyone can comprehend.

Algorithms are about as old as basic mathematics. But ever since Alan Turing’s cryptography breakthroughs helped defeat the Nazis, most of us have associated the word with computing machines. Algorithms are, almost tautologically, what computers do: execute a series of discrete steps to transform an input into an output.
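
In that plain, textbook sense, an algorithm is small enough to write out whole. As a minimal, purely illustrative sketch in Python (not anything a social platform actually runs), here is Euclid’s method for the greatest common divisor: a finite series of discrete steps that transforms an input into an output.

    def gcd(a: int, b: int) -> int:
        # Euclid's algorithm: repeat one discrete step until finished,
        # transforming an input (two integers) into an output
        # (their greatest common divisor).
        while b != 0:
            a, b = b, a % b  # each pass of the loop is one discrete step
        return a

    print(gcd(48, 36))  # prints 12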

Our first mass exposure to a social media algorithm came in 2011, when Facebook altered its newsfeed again to more clearly favor popular posts over simple updates from “friends.” Digital media sites like (the aptly named) BuzzFeed quickly abused this feature to successfully break the internet with often meaningless content designed to go viral. We live in the shattered aftermath.

Contemporary invocations of “the algorithm” technically include both the current monstrosities and Computer Science 101 commands, but the term fails to fully describe our brave new world. The media theorist Vilém Flusser proposed we use “apparatus” instead, arguing that the emergence of new media has caused a mutation in the way humans relate to each other—and to their environment. From the fullness of our physical being we are reduced to mere “operators,” experiencing life primarily through the apparatus, which “programs” both the producers and consumers of media.

This may sound complex, but you have felt it. Think of a concert: After watching a band perform on Instagram, you attend a live show where you pull out your phone and record—for Instagram—the band’s encore, which became famous from Instagram. Even if you are not interacting with the algorithm, you are within the apparatus. The world has been cropped and edited.

There is a whimsy to Flusser’s conception—more fantasy than sci-fi. We have stepped through the looking glass. Rather than using the internet to communicate with people, we communicate with the machine. But it doesn’t listen. It responds—and trains us to respond back.

Consider TikTok. Users attempt alchemy—dancing, editing, talking to the camera—trying to figure out how to transmute their posts into viral gold. In each case, machine learning algorithms (plural) decompose every detail of how we watch and how we post, creating databases across thousands of dimensions. None of this data points to anything except for other data.

We seem to be grasping at some of this intuitively. TikTokers talk about training “my algorithm.” But that still offers an unearned personification to the machines. As we accept the content presented to us, we react as if it were the product of humans rather than human accounts; we accept our role as operators engaged in what Flusser calls “unconscious functioning.”

The fact that we see agency—often cruel but at least human—in the algorithm reveals our fundamental need for interpersonal connection. It lets us imagine someone like Zuck in control, instead of adrift in “the apparatus” like the rest of us. We wish Big Brother were watching. But we might just be alone with our phones.

The Mirage of “Middle Class”
https://www.motherjones.com/politics/2023/10/middle-class-david-roediger-democratic-messaging-clinton-obama-cold-war-liberalism/
Thu, 05 Oct 2023

In June, Democratic advisers began circulating the usual warning: The party needed to change its message. As the Washington Post explained, focus group testing had shown that the slogan of “‘economic fairness’ was a loser.” Instead, Democrats should talk about “growing the middle class.” Soon, leaders like Rep. Hakeem Jeffries (D-N.Y.) and Sen. Elizabeth Warren (D-Mass.) followed the path of former President Barack Obama, who had made growing the middle class a key part of his platform. Democrats were again aiming their economic appeal at the group between the poor and the rich: the vague middle.

We are used to hearing politicians press to “grow,” “build,” or “expand” the middle class. The idea has a cross-partisan appeal, implying class politics without insisting on thornier demands: the need for redistribution (tax the rich) or hierarchy (trickle-down economics). But the phrase’s vexing dominance is a relatively recent trend.

The term “middle class” was rarely used in the nation’s first 140 years. In the 19th century, it referred overwhelmingly to the self-employed: farmers, artisans, and merchants. By the 20th century, it was largely composed of salaried workers.

It was only after the rise of industrial unionism in the 1930s that use of the term “middle class” began to skyrocket. C. Wright Mills—the American sociologist most closely associated with the label—described the “new” middle class in 1951 as having to act on “somebody else’s” priorities.

This coincided with the mistaken idea of the United States as a “middle-class nation.” It beat back a radical leftist politics, rebranding the bourgeoisie as a positive force to fit the antisocialist imperatives of the Cold War. But throughout the term’s evolution, it was never quite clear who was being described.

Mills’ work largely rejected the idea of a singular middle class; he preferred “middle classes.” He worried that in the celebratory atmosphere after World War II, scholars too often assumed almost everyone in America was happily middle class. Today, the term has a similar blurriness. It encompasses either 42 percent of the US population or more than 90 percent, depending on how surveys are structured (and depending on what people—who do not want to call themselves poor or rich—say). As Fortune notes, the middle class includes anyone from the “part-time bartender” to the “suburban power couple” earning 20 times more. People who employ or manage the labor of others land in the same class as those being bossed.

The wild imprecision in defining “middle class” is partially to blame for the term’s worst political mobilizations. Its popularity has elbowed out discussions of poverty; with both parties beholden to the rich while appealing to a middle that can include 96 percent of the electorate, attempts to rectify inequality are easily dismissed as divisive.

It was during the 1992 presidential election, when Bill Clinton prioritized what his camp called “middle-class dreams,” that explicit appeals to this vague group of voters became common.

Like today, Clinton’s pitch was driven by focus groups. His pollsters zeroed in on people in segregated Detroit suburbs, hoping to bring back Democratic voters who had become Republicans under Reagan. The all-white groups studied did mention class grievances—many in the focus groups were unionized workers—but the settings invited crabbed racial complaints more than “middle-class dreams.” For Clinton, “middle class” implied white middle class. And the promise to listen to this group was hardly helpful to everyone in the middle: It led to liberals urging curbs on welfare, policies that propelled incarceration, and legislation for an “effective death penalty.”

Still, in our age of austerity, the term “middle class” offers one of America’s only whiffs of actual class politics. We do need to listen to those who identify as middle class—not in their expressions of racism, but in their disquiet about the contradictions of their everyday lives. You can sense possibility in the shaky way white-collar workers define capitalist miseries—unpaid overtime, crushing consumer debt, abusive managers—as woes shared by all workers. The imprecise, often ideologically driven idea of a middle class might be a lie. But it still has a role in how a politics uniting working people can emerge.

David Roediger is the Foundation Professor of American Studies at the University of Kansas and the author of The Sinking Middle Class: A Political History.

Anarcho-Tyranny: How the New Right Explains Itself
https://www.motherjones.com/politics/2023/07/anarcho-tyranny-penny-neely-sam-francis-new-right/
Thu, 06 Jul 2023

The same day Manhattan District Attorney Alvin Bragg charged Daniel Penny with manslaughter for choking to death Jordan Neely—a homeless man who’d been acting in a way Penny perceived as threatening—Rep. Matt Gaetz (R-Fla.) recorded a podcast. Elites, he told listeners, were making the common man fearful of defending himself while allowing criminals to roam free. The congressman had a name for this inversion of justice: “anarcho-tyranny.”

For a certain kind of right-wing shitposter with intellectual pretensions—Tim Pool, Mike Cernovich, the New York Young Republican Club—invoking the specter of anarcho-tyranny was obligatory after Penny’s arrest. The portmanteau signaled that you’d done the reading (or at least listened to a podcast by someone who had) and recognized the real oppressed: an ill-defined middle class caught between anarchy from below and tyranny from above. The term lent an academic gloss to a gut reaction.

Anarcho-tyranny has become a favorite expression for proponents of a new populism on the right. Tucker Carlson has used it to explain everything from the prosecution of Kyle Rittenhouse to Democrats’ nonchalant reaction to classified documents at the Penn Biden Center. A few weeks before Fox News fired him, Carlson mused, “It does seem like anarcho-tyranny is one of these ideas, you know, that some political philosopher thought up a long time ago.”

Well, no. The expression belongs to Sam Francis—the late paleoconservative columnist whose disgust with establishment Republicans, managerial elites (of any party), and the idea of a majority-minority America has led him to be widely cited by intellectuals of the MAGA movement. Francis’ idea was not that anarcho-tyranny was an age-old problem, but rather that it was the product of an “entirely new form of government” that was “unique in human history.”

In a 1994 article, Francis proclaimed that a first-of-its-kind “Hegelian synthesis” had been created by the upper class as the 20th century closed. This perversion forced the aggrieved masses to live under both anarchy and tyranny at once. In North Carolina, he explained, a “law-abiding citizen” was arrested as part of a media stunt to improve seatbelt enforcement even as violent criminals were being let out of state prisons on parole. (Francis did not mention that the prison releases resulted from a state law enacted in 1987 to prevent extreme overcrowding caused by record arrest numbers.) Here, he suggested, was a government imposing on us while allowing them to do as they pleased.

And the identities of us and them were clear: Francis wrote from a home office that was once Robert E. Lee’s childhood bedroom. He was pushed out of the right-wing Washington Times in 1995 for, essentially, being too bigoted. He obsessed over any restrictions that could touch a middle-class white person: speed limits, the tax code, gun laws, and even child porn statutes. But he often ignored other oppressions enforced by the state: slavery, Jim Crow, and generations of deliberate economic dispossession.

Francis’ anarcho-tyranny is the lens through which the blurry rage of MAGA comes into focus. When Sen. J.D. Vance (R-Ohio) complains about a “lunatic” in New York getting away with harassing a white family as Donald Trump is hounded over a hush-money case, it can seem like an age-old tale of liberal hypocrisy. But as Carlson made clear earlier this year, the New Right sees progressive elites as engaged in something far more sinister. “What they’re describing is a caste system,” Carlson explained, “where they can do what they want, and you are subject to the minutiae of their legal code. It’s called anarcho-tyranny.”

Francis did have a solution. Order could be restored, he argued, not by police but via a return to something else: frontier vigilantism. He even had a modern example of this justified resistance: Bernhard Goetz, the man who shot four Black teenagers on the subway in 1984 after one of them reportedly said, “Give me 5 dollars.” Francis hailed the subway vigilante for “picking off” the “hoodlums.” (All of the kids survived, though one was paralyzed.) “I wanted to kill those guys,” Goetz confessed. “If I had more bullets, I would have shot them all again and again.”

What if a “State of Emergency” Isn’t Enough?
https://www.motherjones.com/environment/2023/06/state-of-emergency-keyword-mother-tongue/
Thu, 08 Jun 2023

In early 2023, California Gov. Gavin Newsom declared a “state of emergency” after multiple “atmospheric river systems” slammed the state. The storms flooded highways, caused mudslides, and toppled trees. Newsom’s declaration expedited the response. Police evacuated some senior citizens from parts of the East Bay. Counties and cities distributed sandbags. Crews erected walls to prevent flooding.

But while this disruption to everyday life certainly looked like an emergency, such catastrophes are now commonplace, especially in California. Annual wildfire season is peaking earlier and ending later. Winter storms packed the Sierra Nevada with three times the usual snow. Somehow, everything is a crisis but also the crises never stop; in fact, the consistency of calamity exacerbates each disaster.

As we face a future warped by climate change, it’s worth asking: What does it mean if we’re in a constant “state of emergency”?

In some ways, we already know. Presidents have declared more than 70 national emergencies over the years; 41 remain active—the oldest being Jimmy Carter’s 1979 freezing of Iranian assets in the United States. A “state of emergency” is a perpetual hum in the background, albeit one that suspends normal procedures. It has an authoritarian potential. In 1950, President Harry S. Truman declared one of the first national states of emergency to fight “the increasing menace” of communism. North Korea had invaded South Korea, and Truman needed to boost military readiness, but without the pesky requirement of a congressional declaration of war. “I just had to act as commander in chief,” he said, “and I did.” (Truman’s national emergency remained in effect after the conflict technically ended.)

Lawmakers who want expanded powers find backing in philosopher John Locke, who argued that crises require governments to circumvent the shortcomings of existing laws. A “state of emergency” means we consent to a bit of dictatorial power in the name of protecting order.

On the left, it has often been noted that emergency powers can be used for ill. In 2007, Naomi Klein famously argued in The Shock Doctrine that crises allow capitalists to entrench policies without proper scrutiny. Historian Mike Davis, in his 1995 essay “The Case for Letting Malibu Burn,” showed that these crises reify class and social inequality—he outlines the egregious disparities of Los Angeles’ fire response, during which rich enclaves were given resources but “scandalously little attention” was “paid to the man-made and remediable fire crisis of the inner city.”

As Davis made clear, how we respond to an emergency, and what we define as one, can show what our leaders deem urgently in need of protection—and what they don’t. One only has to look back at the rain in California. Officials recognized for decades that the Pajaro River levee in Monterey County was flawed, but they never made the repairs. By the time the levee was an “emergency,” it was too late: the Pajaro flooded surrounding towns and fields, forcing the evacuation of about 2,000 people.

A “state of emergency” can serve as a stopgap to help clean up. But it limits political action to reaction. When Gov. Newsom expanded his declaration to secure more federal aid, he invoked the language of “rebuild and recover”—coding the declaration with a hope that veers into naivete. Congress still has not updated the Federal Emergency Management Agency standards with current data, mapped certain high-risk zones, or accounted for new stormwater flooding risks. Can we prepare? New Orleans suffered catastrophic damage after Hurricane Katrina in 2005, as did Houston from 2017’s Harvey, and Lake Charles, Louisiana, from severe storms in 2020 and 2021. All of these supposedly “once in a lifetime” weather events caused oversize damage because of what we don’t call an emergency: housing shortages, racism, flood insurance failure, debt, infrastructure decay, poverty.

One cliche line about the pandemic rings true for all politics of disaster: It deepened the problems we already knew existed. To fix those will require more than a never-ending emergency.

How Nature Metaphors Shade Technology Companies From Scrutiny
https://www.motherjones.com/media/2023/05/nature-metaphors-the-cloud-keyword-mother-tongue/
Mon, 08 May 2023

In 2006, at an industry conference, then-Google CEO Eric Schmidt introduced a now ubiquitous term: “the cloud.” Here was a grand technological shift, Schmidt explained, that would let information exist simultaneously nowhere and everywhere. Naming it “the cloud” made the change sound almost natural. Your information is not in a massive bank of servers in Nevada; it is, as he put it, “in a ‘cloud’ somewhere.” Data as a nimbus floating above.

The cloud is just one of many linguistic elisions between the artificial and natural worlds. These appropriations run the gamut: Firefox, OpenSea, OnStar, Airbnb, Apple (Yosemite, Monterey, Big Sur), internet surfing, neural networks, mouses, viruses. Sue Thomas—a writer and scholar of digital culture—argues that bringing nature into the lexicon lets technologists position their domain as “a real and integrated extension to human experience.” This framing brings a sense of comfort to complex innovations, but it also encourages us not to think too deeply. As researchers have noted, the phrase “data mining” does not exactly clarify the privacy concerns at play when Meta sorts through your personal information.

The historian Paul N. Edwards sees this naturalization—technology becoming as ordinary “as trees, daylight, and dirt”—as a defining characteristic of modern life, a process whereby a company’s tools self-camouflage. Media scholar Lisa Parks calls this “infrastructural concealment,” and for many modern apparatuses, environmental neologisms prove critical.

If the cloud is really as natural as the fluffy wisps that drift across the sky on a pleasant afternoon, then it follows that its presence is similarly harmless—never mind the environmental burden all this data storage imposes. “The cloud” is out of reach, floating indifferently above the realm of human affairs.

But of course this isn’t true. These technologies are not ephemeral abstractions, nor are they elemental processes out of our control. Unlike clouds or tornadoes or wildfires or earthquakes, they are products owned by companies that society can fine, restrict, and regulate—a fact most corporations would prefer governments forget. But no matter how closely companies try to align their wares with the elements, they remain within human jurisdictions. We are not helpless bystanders staring at the sky.

Remembering that power seems important now. The past decades have witnessed profound shifts in how we inhabit the world. Natural and created environments are both evolving quickly. Technophiles continue to roll out designs that weave themselves into every aspect of our reality—the Internet of Things, the metaverse, AR goggles, haptic wearables—even as scientists warn of the “Anthropocene” and urge us to witness the destructive impact of human industry.

As social theorist John Durham Peters has argued, we’ve always understood our relationship to the environment through our technologies (and vice versa). Before Google Maps, star maps guided us; before algorithmic pathways, charts of wind and currents shaped the flow of trade; before the addictive scroll of TikTok, the hypnotic flicker of fire captured our gaze.

Metaphors like “the cloud” draw upon these ancient roots, interweaving slick marketing coinages with primordial gravitas. In his speech, Schmidt was highlighting the commercial potential of this “new cloud model where people are living…more and more online”—a captive audience for advertisers.

Yet the age-old entanglement between our tools and the environment reveals that there’s a way to think about this relationship beyond the narrow lens of profit. Employing “the cloud” shows we can expand beyond what seems humanly possible; we can normalize a new world. As the doyenne of eco-philosophy Donna Haraway reminds us, if a vast societal reconfiguration is required amid climatic upheaval, it matters “what thoughts think thoughts.” The only question that remains is how we will deploy our nature metaphors: To sell iPhones? Or to survive the ongoing sixth mass extinction?

The Word That Makes Brutal Budgets Sound “Truly Evil”
https://www.motherjones.com/politics/2023/01/austerity-mother-tongue-keyword-truly-evil/
Tue, 31 Jan 2023

The cuts have come even for the word itself. “Austerity” is a cornerstone of conservatism—a catch-all term for limiting government spending to promote capitalist growth—and yet the expression seems to have disappeared from deficit hawks’ speeches. “I call it balancing the budget,” said German Chancellor Angela Merkel in 2013. “Everyone else is using this term ‘austerity.’ That makes it sound like something truly evil.”

Yes. That aura of “evil” is perhaps why writers, especially on the left, return to it again and again. As economists Clara Mattei and Sam Salour argue, “austerity” is a helpful word to describe what the failed policies of neoliberalism have done since the 1970s across the world: subdued the working class by driving up unemployment, keeping wages stagnant, and reducing welfare spending. The word connects everyday issues to structural decay caused by a fiscal playbook. Austerity is, as economic journalist Doug Henwood has argued, “causing America to rot.”

Historically, the word gave market cruelties a whiff of moralism. Drawing on the ancient Greek austeros (“what makes the tongue dry”), it offered a religious ring to financial asceticism. It was how one could argue that cutting social spending was a path to self-reliance. But over the last decade, there has been a shift. Too many people have lived under the actual policies. Austerity has moved from being a rationale for slashing government budgets to a damning critique of the consequences. In the reversal, you can see what many think of living under this anti–social spending project: It sucks.

The moral project of austerity has its beginnings in John Locke’s Second Treatise. Written in 17th-century England, the treatise appeared at a time when, as economist Mark Blyth notes, “public debt [was] the debt of kings” and rulers invoked God as a justification for appropriating wealth as they pleased. Locke found this arbitrary. He argued, in his now-famous philosophical framework, that governments should primarily safeguard private property. To undermine the monarchy’s influence, Locke preached limited government. Freedom, he explained, through austerity.

But this Lockean card is regularly overplayed. The modern rhetoric of austerity politics has firmer roots in the Confederate resistance to Reconstruction. As W.E.B. Du Bois documented in Black Reconstruction, white planter elites pushed back on new multiracial governments made up of Black people and poor white people by proclaiming them irresponsible, corrupt, and plagued by extravagant spending. For much of American political history, a repudiation of the underclass was inherent in the push for small government. Austerity offered up economic hardship as a disciplinary tool.

This cruelty to the poor can be easily seen through Andrew Mellon, Herbert Hoover’s Treasury secretary, who said in the face of the Great Depression that the downturn would rid the system of “rottenness” and lead people to “live a more moral life.” Or Mitt Romney, who in 2012 said 47 percent of Americans were “takers” not “makers,” and told an audience of the wealthy that his “job is not to worry about those people.” Here, freedom is weaponized as a way to abandon the poor.

In the years following the Great Recession of 2008, austerity took divergent routes in the United States. On the left, Occupy Wall Street argued that the real corruption was our inability to “tax the rich” (remember the chants?) in order to fund a safety net for the poor. On the right, the tea party depicted social spending as theft. The left won some followers, but the right won power in Congress. In 2012, a series of spending cuts—predominantly in health, education, and social services—were implemented, and a new “age of austerity” was born.

Yet after living under austerity policies for more than a decade, polls show Americans are tired of the unequal discipline. They want the government to ensure their basic needs: food, housing, public education, and clean air and water. The movement for a $15 minimum wage has gained momentum among major companies, many cities, and some states. The right’s promise of a moral high ground gained from thrift seems not to be as appealing anymore. A decade of spending cuts has made “austerity” sound a lot less like salvation and more like hell.

“Suicide by Cop”: How Police Present Killings as Unavoidable
https://www.motherjones.com/politics/2022/12/suicide-by-cop-how-police-present-killings-as-unavoidable/
Thu, 15 Dec 2022

American police speak in a peculiar lexicon. When an officer guns somebody down, it is an “officer-involved shooting.” When a department spokesperson speaks of “the use of force,” police violence is reduced to basic physics. “Fatality” dulls a plainer reality: that somebody has been killed.

A less-discussed term carries with it the same obfuscation of meaning: “suicide by cop.” The phrase refers to a person who provokes police violence as a means of killing themselves. But too often it has been loosely applied to excuse police killings as unavoidable.

The history is murky, but “suicide by cop” is thought to have been coined by a cop-turned-psychologist named Karl Harris in the early 1980s. When he left policing, Harris supposedly took up work at a suicide hotline in Los Angeles. “I saw all the different ways people attempted suicide,” he told the New York Times more than a decade later, “and it occurred to me that maybe some people were actually forcing cops to shoot them because they wanted to die.” Harris, who would go on to earn a PhD, conducted an informal study on suicides by cop; it was never published, but the concept caught on among scholars.

At the time, new theories of criminality and social deviance were changing the foundations of policing. One theory soon to become orthodox saw derelict neighborhoods—signified by “broken windows”—as sites of criminality demanding control. Simultaneously, mental illness was increasingly being treated by jails and prisons, rather than by other institutions. “Suicide by cop” joined a bundle of new concepts that recast the perceived social crises of the day as manifest in a group of violent people who would only understand violence in return; the rise of a militant police force could be explained, in this logic, as self-protection.

By the end of the ’90s, the phrase secured its place in the minds of police as absolution. “These people want to die,” a SWAT veteran said at the time. “How are you about to change their minds?” Academics followed along. In their studies, they often took for granted that people were trying to be killed. They focused on locating the measurable traces of this new suicidality, rather than on probing the sureness or explanatory power of the concept.

Their findings were generally questionable. In one 1998 paper, published by the FBI, a police killing was deemed a suicide by cop because the victim was holding up an unloaded rifle and, just before he was shot, had “a strange look” on his face. (The paper remains well cited today.) Another influential study that year, led by a Harvard Medical School professor, classified victims of police violence as suicidal if, when confronted by officers, they exhibited suicidal characteristics like raising a gun or, in one case, holding an undisclosed “blunt object.”

For all the cases of suicide by cop that rely on surmise, there are some that don’t. Buried in a couple of journal articles are mentions of victims’ notes, in which people announce their intention to die. But such admissions are rare.

According to a 2016 review of the literature, “tentative evidence” suggests that somewhere between one-tenth and one-half of all police shootings are suicides by cop. But why would 100 to 500 people choose to die at the hands of the police each year? Some law enforcement experts say that victims have a death wish they cannot fulfill alone. Others think victims could possess ulterior motives. A Yale pathologist, who doubled as deputy chief medical examiner of the Bronx, co-authored a study that proposed that some victims could be angling for their families to receive a life-insurance payout.

The rush to label deaths as suicides has consequences. A 2021 study found that more than half of police killings in the United States are mislabeled by medical officials. Suicide by cop adds to this problem. It falsely “implies that a huge number of killings are justified,” says Alex Vitale, a sociologist at Brooklyn College.

In the police’s common language, suicide by cop satisfies some basic element of police self-understanding. If a person is armed, cops shoot. If a person acts strange, cops shoot. These are the bulk of deaths researchers describe as “suicides by cop,” as if police killings were no more avoidable than natural law. The police, in their strange tongue, tend to say more about themselves than about anything else.

Why Does Every Tech Company Want to “Democratize” Something?
https://www.motherjones.com/media/2022/10/democratize-tech-mission-conscious-capitalism/
Mon, 24 Oct 2022

If you were cooking up a pitch for a tech company, you could do worse than “Our mission is to democratize X.” Many have used it in the past.

Glossier, a cosmetics company, explains it is “giving voice through beauty” in order to “democratize an industry that has forever been top-down.” Robinhood, an app that gamified trading, says its “mission is to democratize finance for all.” CoachHub, a corporate coaching company, asserts: “Our Mission: Democratize coaching.” It goes on like this on About pages. Airtable wants to “democratize software creation”; Bolt is going to “democratize commerce”; PayPal is working to “democratize financial services.” Elizabeth Holmes, infamously, set out to “democratize healthcare,” according to media fanfare.

Democratize has two usual definitions: One is to bring democracy or democratic principles to a place. The other, and the usage that has infiltrated Silicon Valley, is to “make (something) accessible to a wide range of people.” For the past couple decades, tech companies and startups have used the latter version liberally, generally to mean they are hoping to make a product or service available at a low cost.

The word’s usage reflects optimism for a new approach to business dealings. In the 1990s, companies began centering their customers, often at the expense of other values. Over the next decade, Jeff Bezos’ signature “customer obsession” became mainstream. Companies sought to give users a voice, to make them feel like they were doing something with their purchasing power. With tech’s version of “democratization,” a concept rooted in politics and the public sphere was squeezed into a new container: that of the individual consumer.

This conscious capitalism became popular as customers were empowered to vote with their wallets, participating in a globalism that promised to end history. In books, usage of “democratize” spiked in 1918 (World War I ended and countries were becoming democracies), and again in 1947 (soon after World War II was over). But the highest peak for the word’s usage came in 2006, around the time Twitter was born, Google bought YouTube, Facebook was in its infancy, and the United States was claiming it was spreading democracy in the Middle East.

By the early 2010s, tech giants including Google were stirring together the language of business and civic engagement. According to Astra Taylor’s The People’s Platform, “In the 2012 ‘open issue’ of Google’s online magazine Think Quarterly, phrases like ‘open access to information’ and ‘open for business’ appear side by side purposely blurring participation and profit seeking.”

But, as Taylor warned, “Despite enthusiastic commentators and their hosannas to democratization, inequality is not exclusive to closed systems. Networks reflect and exacerbate imbalances of power as much as they improve them.” Putting something online, or making it cheaper, does not make it just.

Democracy has a positive social valence. An affiliation with the idea, no matter how oblique, is flattering. It suggests that a good or service—whether it’s a device that runs tests on a few drops of blood, or a one-click payment processor—is for the people. But despite lofty mission statements, companies have in the end hewed closely to traditional pathways for their purpose: making a profit.

Kendra Albert, a clinical instructor at Harvard Law School’s Cyberlaw Clinic, has studied “legal talismans”—terms like “free speech” that tech companies use to give legitimacy to decisions (say a failure to ban a user) that do not involve only legal processes. Democratization is a bit different, Albert says, since democracy doesn’t have a settled legal definition: “The lack of specific meanings for democratization is a plus not a minus in the sense that it basically allows companies to make it mean whatever they want, while still invoking this theme” of civic participation.

“Democratize” offers a synecdoche for an optimism that tech’s social goals and financial imperatives are aligned. And it helps that it looks great on a Squarespace landing page. (Squarespace wants to “democratize good design,” by the way.)
