
Apocalypse Soon?

June 18, 2010

In the world of political public relations, one of the oldest stratagems in the playbook for trumping up the potency of your issue is to paint a picture of the world on the verge of a precipice. By crafting a narrative that weights one outcome with inconceivably and incomputably large consequences, people start to think in terms of end-game strategies and forget to discount by the actual probability that attends the high-magnitude event. Think of how many times you hear, “Failure is not an option,” even when the costs of avoiding such “failure” are worse than the consequences. At the same time, substantial probabilities of omnipresent risks (e.g., a gigantic oil spill in the Gulf of Mexico) can be put entirely out of mind:

[A]s Cass Sunstein argues in his book Worst-Case Scenarios, humans seem to have an inherently difficult time preparing for low-probability catastrophes—we tend to vacillate between total panic and utter neglect, with little middle ground.

As a result, it’s hard to convince people to pay the upfront costs of averting potential catastrophes, especially when the catastrophes seem remote and uncertain. Back in 2003, the Interior Department agreed with BP and other oil companies that installing a $500,000 acoustic shutoff switch on every offshore rig would be unreasonably expensive (even though such a switch would likely have prevented all that oil from spewing out). Of course, now that BP is staring at billions of dollars in clean-up costs and the prospect of bankruptcy, that $500,000 switch looks like a bargain, but back then, the incentives for short-term cost-cutting were persuasive.
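Just to make the lopsidedness concrete, here is a crude back-of-the-envelope sketch. Only the $500,000 switch price comes from the excerpt above; the blowout probability, cleanup bill, and rig lifetime are numbers I am making up purely for illustration:

```python
# Toy expected-cost comparison for the acoustic shutoff switch decision.
# All figures are illustrative assumptions, not actual BP/Interior data,
# except the $500,000 switch cost quoted in the excerpt above.

switch_cost = 500_000            # up-front cost of the acoustic shutoff switch
blowout_probability = 0.001      # assumed annual chance of an uncontrolled blowout
cleanup_cost = 20_000_000_000    # assumed cost of a catastrophic spill
years_of_operation = 20          # assumed productive life of the rig

# Expected loss if you skip the switch and simply accept the blowout risk.
expected_loss_without_switch = blowout_probability * cleanup_cost * years_of_operation

# Expected loss if the switch works as intended (assume it prevents the spill entirely).
expected_loss_with_switch = switch_cost

print(f"Expected loss without switch: ${expected_loss_without_switch:,.0f}")
print(f"Expected loss with switch:    ${expected_loss_with_switch:,.0f}")
# Even at a one-in-a-thousand annual risk, the expected loss dwarfs the
# $500,000 price tag -- the short-term cost-cutting only looks rational if
# you ignore (or cannot bring yourself to imagine) the remote catastrophe.
```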

As I alluded to above, the difference appears to lie in the sensationalism of the consequences; ironically, we tend to worry most about the freakiest events. The classic example is driving a car, which is statistically far more dangerous than, say, flying through volcanic ash. Perhaps it is human nature to fear making a noteworthy or attention-grabbing mistake, while becoming just another statistic is an acceptable way for us to fail, if need be?

To take just one of many examples, many Americans avoided planes after 9/11 and travelled [sic, British] by road instead. As a result, a team of researchers from Cornell University estimated there were at least 1,200 more deaths on America’s roads than there would have been.

When it comes to discounting infinity and end-game strategies, people usually stop utilizing rationality as a tool; instead, religious instincts kick in and people defer to absolute imperatives without much consideration for the consequences of the other set of possibilities. See, e.g., Pascal’s Wager. On the flip side, the same problems of misperception inevitably arise when people try to use rationality to make calculations based on data that lie outside mortal experience and thus by definition are not subject to measurement. Such topics lying outside our experiential grasp range from how we treat the afterlife in our decisions to how we deal with externalities that will be borne by later generations. Any calculus here seems necessarily and systematically flawed by an inability to properly discount. Peter Singer, one of the finest living philosophers and no stranger to uncomfortable questions, poses this thought experiment: What if this were the last generation to live?

Is a world with people in it better than one without? Put aside what we do to other species — that’s a different issue. Let’s assume that the choice is between a world like ours and one with no sentient beings in it at all. And assume, too — here we have to get fictitious, as philosophers often do — that if we choose to bring about the world with no sentient beings at all, everyone will agree to do that. No one’s rights will be violated — at least, not the rights of any existing people. Can non-existent people have a right to come into existence?

What Singer is really asking is what moral obligations we have to future generations. I had thought about that question in similar terms when I first read Ayn Rand and other existentialist thinkers, and it was brought up again in light of Ward Elliott’s seminar on (over-)population. I thought, what claim do future generations hold on the present? What rights do they have to demand that we act in a way that curtails the highest standard of living attainable at present? If the Earth’s natural resources can be analogized to butter, why should we keep scraping that butter across an ever-expanding piece of toast to everyone’s eventual diminution? The Greeks understood that nothing is eternal, at least nothing directly experienced by man if you want to allow for Plato’s Forms, so why not go out in a blaze of glory?

These questions about existentialism and generational shift produce some nasty resultant tension because we can’t deal with these problems using rationality.  Take, for example, the so-called “Prince Charles problem”: in the exact same way that Charles has spent a lifetime as king-in-waiting behind his now-octogenarian mother, the mathematical structure of the baby boom and ever-lengthening life expectancies creates a bottleneck of older workers within employment hierarchies that prevents younger employees from advancing in their careers until their own golden years.  And so on down the line.  But let’s see you try forcing retirement on a baby boomer while not giving them enough social security to cover their preferred level of personal excesses.
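For the arithmetic-minded, the bottleneck reduces to something embarrassingly simple. The ages below are illustrative guesses, not census data:

```python
# A back-of-the-envelope sketch of the "Prince Charles problem." In a rigid
# hierarchy the heir's age at succession is just the incumbent's exit age
# minus the age gap between them, and the same subtraction cascades down
# every rung of an employment ladder. All inputs are illustrative guesses.

def age_at_succession(age_gap_years: int, incumbent_exit_age: int) -> int:
    """Age at which the person waiting takes over, assuming they inherit the
    slot the moment the incumbent above them finally steps aside."""
    return incumbent_exit_age - age_gap_years

# Royal flavor: an heir roughly 22 years younger than an incumbent who stays
# on into her mid-90s takes the job in his early seventies.
print(age_at_succession(age_gap_years=22, incumbent_exit_age=95))   # -> 73

# Workplace analogue: a 30-year-old hired behind a boss 25 years older who now
# retires at 70 instead of 65 waits until 45 instead of 40 for that one step
# up -- and every cohort below inherits the same five-year delay.
print(age_at_succession(age_gap_years=25, incumbent_exit_age=70))   # -> 45
```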

Or, let’s say you try to come up with some answers to Singer’s thought experiment when considering ideal population sizes or how much to value externalities and spillover costs that will be borne by future generations.  Rationality and consequentialism don’t work so well in the face of existentialism if you aren’t the one to bear the costs.  And if you consider the possibility that we owe some moral obligations to as-yet unborn people, what are the moral differences, if any, between intentionally and arbitrarily reducing the size of the population through population controls and killing live people if the net benefit to the rest of society is serious enough?  Or if society produces some life-saving medical cure, doesn’t that increase the average burden on the rest of the population later down the line?  How do we compare these things?  Poorly.
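One way to see why the comparison goes so poorly is to run a standard exponential discount over a cost that lands on people a century from now. The damage figure and the rates below are arbitrary, which is exactly the problem:

```python
# How much is a cost borne by a future generation "worth" today? Exponential
# discounting gives wildly different answers depending on an essentially
# arbitrary choice of rate. The $1B damage figure is an assumed illustration.

def present_value(future_cost: float, annual_rate: float, years: int) -> float:
    """Standard exponential discounting: PV = cost / (1 + r) ** years."""
    return future_cost / (1.0 + annual_rate) ** years

future_damage = 1_000_000_000   # assumed harm landing on people 100 years out
for rate in (0.00, 0.01, 0.03, 0.07):
    pv = present_value(future_damage, rate, years=100)
    print(f"discount rate {rate:>4.0%}: present value ${pv:,.0f}")

# At 0% the billion stays a billion; at 7% it shrinks to roughly $1.2 million.
# The entire moral question gets smuggled into the choice of a single number.
```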

I guess the other pithy and cynical answer is that we compare the direness of the narratives that spin doctors and vested interests generate. For example, the distressing proposal for an “Internet Kill Switch” is likely to spook enough people that they ignore the actual provisions in the bill, for better or for worse. By creating the narrative in your preferred image, people will react to that definition, and a self-fulfilling prophecy effect can take over. But be careful about making your dystopia seem too cool, because if you give people the wrong ideas, they will emulate it just as readily. See, e.g., the technology of Minority Report, much of which has become reality about 40 years ahead of schedule due to the Star Trek Effect.

Maybe the better political/social strategy is simply to hedge; Scott Adams suggests investing in companies you hate:

If you buy stock in a despicable company, it means some of the previous owners of that company sold it to you. If the stock then rises more than the market average, you successfully screwed the previous owners of the hated company. That’s exactly like justice, only better because you made a profit. Then you can sell your stocks for a gain and donate all of your earnings to good causes, such as education for your own kids.

Instead of investing in companies you hate, as I have suggested, perhaps you could invest in companies you love. I once hired professional money managers at Wells Fargo to do essentially that for me. As part of their service they promised to listen to the dopey-happy hallucinations of professional liars (CEOs) and be gullible on my behalf. The pros at Wells Fargo bought for my portfolio Enron, WorldCom, and a number of other much-loved companies that soon went out of business. For that, I hate Wells Fargo. But I sure wish I had bought stock in Wells Fargo at the time I hated them the most, because Wells Fargo itself performed great. See how this works?

Precise quantification may not be so necessary if we go into our decision-making process aware of the orders of magnitude and direction of all the circumstances and second-order effects that attend any decision. But that’s asking a lot of a large, unsophisticated population. And it’s asking even more of a sophisticated group of elected officials trying to craft their own narratives.

