When I read this short piece by Rosa Brooks in Foreign Policy, it felt shockingly familiar. I could have written it myself, although obviously I couldn't have, because I lack her skills as a columnist. What makes it even weirder is that she is very much part of the US establishment, a beast I tend to be very critical of. That in itself is reason for optimism: who better to have on the inside as a counterweight to the Goldman Sachs alumni? It's also weird because she is not only establishment, she works on security issues, war, and the military. Anyway, she said it better than I ever will, so I am going to re-blog it lock, stock, and barrel. Before you read Apocalypse Soon. The end is nigh! And I'm not ready., have a look at these amazing images of our sun (more info here). They seem a fitting accompaniment to Ms. Brooks's column, for reasons I cannot explain.
I was sitting in a meeting full of serious people last week, listening to a presentation on technology and the future of work by a very smart guy from a very fancy university. The smart guy was talking about Moore’s Law and the many things we used to think computers could never do better than humans — such as driving cars, interpreting mammograms, and writing columns. He urged us to consider our impending superfluity, since much of what most Americans do at work these days will soon be done better by iPhones, robots, toasters, and electric toothbrushes.
He didn’t actually say that about the toasters or toothbrushes. Still, the serious people around me were nodding sagely and muttering about Structural Changes to the Economy, and the Importance of Finding Solutions, and Perhaps these Changes will Eliminate Soul-Sapping Labor and Reinforce Human Dignity, and so forth.
But I was way past that point. I was thinking about the Singularity, and whether law professors will become obsolete before I reach retirement age (Magic Eight Ball says: “Signs point to yes.”), and the prospect that we will all soon be enslaved by intelligent toasters.
I shared some of these thoughts with my colleagues. Shouldn’t we, I suggested, stop kidding ourselves about “finding solutions” to the challenges posed by technologies that evolve faster than our brains? Shouldn’t we instead recognize that, historically speaking, humans really suck at managing rapid technological and social change (cf. the Thirty Years’ War, World Wars I and II, and so on), and recognize that developments that reinforce human dignity are often preceded by really crappy periods in which millions suffer and die? Shouldn’t we just accept that the technological and economic changes to come will likely cause massive and painful dislocation, perhaps similar in scale to the above-mentioned catastrophes? Shouldn’t we abandon quixotic projects geared toward “finding solutions” and instead focus on simple risk mitigation — on trying to find ways to keep things from becoming as catastrophic as they may potentially become?
This intervention was greeted with polite silence — the kind that suggests you think your crazy colleague is off his meds — and after a moment, discussion resumed and we were back to the Search for Solutions.
I admit that my intervention was a bit of a downer. The truth is, I’m 99 percent convinced of the coming apocalypse (minus the Seven Seals, the Rapture, and all that). When there’s a blackout during an electrical storm, I always suspect the lights are never coming on again. A few years ago, when a blackout was accompanied by my inability to get a signal on my cell phone, I began to seriously suspect a terrorist attack, and I spent at least five minutes planning for the inevitable chaos and barbarism that would result. I realized I should probably try to clear a defensive perimeter around the house before the looting and cannibalism began. Fortunately, the lights went back on at that point, so I didn’t have to take things any further.
But the embarrassing truth is this: It’s mostly sheer slothfulness that keeps me from being a survivalist. If I weren’t so lazy (and poor), I’d stockpile canned goods, batteries, gasoline, weapons, and ammunition. I’d buy up a nice defensible island, or mountain, or old missile silo and set up low-tech booby traps all around my fortress. I’d teach my children to hunt, fish, shoot, and make fires by rubbing two sticks together. You get the idea.
I don’t actually do any of this, of course. For one thing, it’s way too much work. For another thing, most doomsday preppers seem to be fundamentalist religious cranks, and I don’t feel like allying myself with anyone who’s going to be quoting Revelation throughout the apocalypse. As a result, I have a couple of extra flashlights, like everyone else, and a few rusting packs of D batteries in a drawer somewhere, but that’s it.
Mostly, I deal with pending apocalypse by crossing my fingers and hoping catastrophe can wait another hundred years. (I’d like my kids to make it through the next century. My hypothetical grandkids will have to fend for themselves.) But rationally, I think that if we make it through the next century without serious national or global catastrophe, it will mainly be the result of sheer dumb luck.
You’re skeptical? Join the crowd. Most of my friends — all perfectly sane people! — think I’m nuts.
Their perspective is the usual one: The good old human race has hung in there for millennia, so it will most likely keep muddling on. Besides, we’ve had dangerous technologies such as nuclear weapons for decades now, and we haven’t blown the world up yet!
To this, I say: Looking on the bright side is a fine thing in a kindergarten teacher, but it’s unbecoming in those of us who purportedly deal in the grown-up world. The fact that “we’re not dead yet” is neither here nor there. The fact that you’ve ridden your motorcycle through the rain without a helmet many times before and you’re still alive doesn’t make you any less stupid. As Jared Diamond pointed out recently in a New York Times op-ed, even trees that have been standing for many years can fall down overnight.
One of the many cognitive failings of human beings is that we tend to think tomorrow will be a lot like today. As a day-to-day heuristic, this is actually pretty sensible; if you predict that tomorrow’s weather will likely be quite similar to today’s weather, you’ll be right most of the time. Except, of course, when you’re wrong. In the 1930s and ’40s, Europe’s Jews assumed that each day would be much like the previous day, and they were right, by and large — but a whole series of days that are each only marginally different from the previous day can bring you, with surprising speed, to some terrible places.
Setting cognitive errors aside, we do not, as a nation or as a species, have much basis for assuming that things will keep on getting better. For that matter, we have little basis for assuming that things that are crummy now will get fixed, or even stay only as crummy as they are now (as opposed to getting a whole lot crummier). To keep things in perspective, the cataclysm of World War II was only 70 years ago. World War I was only a century ago. Why would anyone imagine that such catastrophes — still alive in the memories of older Americans — can’t happen again? Do we really think the human species has evolved somehow in the last few decades?
Steven Pinker thinks so: In his 2011 book, The Better Angels of Our Nature, he argues that human violence is in decline, at least if viewed over the last few centuries. Whether he’s “right” or “wrong,” however, his argument is, for present purposes, largely irrelevant. Even if humans are somewhat less nasty to one another than they used to be, the complexity of our world has increased exponentially, and our ability to inadvertently mess the world up has similarly increased.
Take your pick of anthropogenic apocalypse scenarios. You don’t like enslavement to intelligent toasters? Fine. There’s always nuclear annihilation, still a distinct possibility. Or deadly epidemics spread by bio-engineered germs (or naturally occurring germs whose transmission is aided by air travel and so on), or a meltdown of the global financial system that will make 2008 look like a boom year, or climate change that submerges coastal cities, or cyberattacks that cause catastrophic infrastructure failure. (Richard Posner, who will certainly be the only law professor to survive the apocalypse, offers lurid details of these scenarios and many more in his 2005 book, Catastrophe.)
Ah, you’re still scoffing. “Ha,” you say, “People have been predicting catastrophes for decades — remember Silent Spring? Acid Rain? Overpopulation? SARS? Swine flu? Betcha all those doomsday prophets feel silly now!”
I bet they do feel silly. I feel silly whenever I contemplate buying more than a few extra flashlight batteries. But once again, feeling silly doesn’t mean you’re wrong to worry. Black swans may yet appear, and low-probability/high-consequence events may yet happen.
But don’t take my word for it. Consider this recent report from Chatham House, which is not known for apocalyptic hysteria: “Current contingency planning often assumes the return of the status quo ante after a crisis. But this approach may be inadequate in a world of complex economic and social risks, especially when combined with slow-motion crises like climate change and water scarcity. Slow-motion crises such as these build over many years, but are likely to result in a higher frequency and greater severity of shocks…. We have always had risks to face. Two things seem to have changed today: the frequency of catastrophes seems to be increasing; and our population remains relatively unaccustomed to the magnitude and probability of the risks we are currently facing.” Adjusting for the dryness of British think-tank reports, this is a hysterical cry for help.
Even if we think catastrophic events are extremely unlikely to occur, it makes sense to start thinking about how to mitigate risks. In the words of philosopher Huw Price, co-founder of the new Centre for the Study of Existential Risk at the University of Cambridge, shouldn’t we all be trying to “shift some probability from the bad side to the good”?
Yup. I’m ready to pledge my support for the project of mitigating existential risks.
And in the meantime, I might even buy some more flashlight batteries.