What I Was Wrong About This Year
Posted December 24, 2017 8:33 p.m. EST
The Israeli intelligence service asked the great psychologist Daniel Kahneman for help in the 1970s, and Kahneman came back with a suggestion: Get rid of the classic intelligence report. It allows leaders to justify any conclusion they want, Kahneman said. In its place, he suggested giving the leaders estimated probabilities of events.
The intelligence service did so, and an early report concluded that one scenario would increase the chance of full-scale war with Syria by 10 percent. Seeing the number, a top official was relieved. “Ten percent increase?” he said. “That is a small difference.”
Kahneman was horrified (as Michael Lewis recounts in his book “The Undoing Project”). A 10 percent increase in the chance of catastrophic war was serious. Yet the official decided that 10 percent wasn’t so different from zero.
Looking back years later, Kahneman said: “No one ever made a decision because of a number. They need a story.”
His change of heart is a good way to introduce my ritual self-criticism. There is a burgeoning tradition in which we columnists devote a year-end column to the errors of our ways. The journalist Dave Weigel calls it “pundit accountability.”
I’ll start with some back story: Like the pre-1970s Israeli army, the news business of old didn’t have much use for probabilities, outside of the weather report. These days, though, probabilities pop up all over.
At 10 p.m. on Alabama’s recent election night, The New York Times said that Doug Jones had roughly a 70 percent chance of winning, based on counted votes. (That scoreboard drew 13 million views.) Likewise, the financial media reports recession odds, and sports websites publish real-time win probabilities.
I’m a probability advocate. In previous jobs, I have helped create election scoreboards. Probabilities are more meaningful than safe “anything can happen” platitudes, vague “it’s likely” analyses or artificially confident guarantees.
But I’ve come to realize that I was wrong about a major aspect of probabilities.
They are inherently hard to grasp. That’s especially true for an individual event, like a war or election. People understand that if they roll a die 100 times, they will get some 1’s. But when they see a probability for one event, they tend to think: Is this going to happen or not?
They then effectively round to 0 or to 100 percent. That’s what the Israeli official did. It’s also what many Americans did when they heard Hillary Clinton had a 72 percent or 85 percent chance of winning. It’s what football fans did in the Super Bowl when the Atlanta Falcons had a 99 percent chance of victory.
And when the unlikely happens, people scream: The probabilities were wrong!
Usually, they were not wrong. The screamers were wrong.
I used to believe that the best response was explanation and context. After all, people understand that many outcomes with long odds do happen. “Just because it’s rare,” says the medical expert H. Gilbert Welch, “doesn’t mean it doesn’t happen.” You draw an ace (8 percent). A random baby girl grows up to be at least 5’9” (6 percent). New York has a white Christmas (11 percent). On my computer, I keep a long list of these unlikely events.
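As an aside from the column itself, the ace figure is easy to check and to simulate. The short Python sketch below is purely illustrative (nothing from the original piece): it computes the exact odds of drawing an ace, 4 out of 52, and then shows that events given roughly an 8 or 10 percent chance really do occur at about those rates when the "one-off" event is repeated many times.

```python
import random

# Exact odds of drawing an ace from a standard 52-card deck: 4 aces out of 52 cards.
exact = 4 / 52
print(f"Exact chance of an ace: {exact:.1%}")  # about 7.7 percent, i.e. roughly 8 percent

# Simulate many one-off draws: each draw is a single "unlikely event."
trials = 100_000
deck = ["ace"] * 4 + ["other"] * 48
aces = sum(random.choice(deck) == "ace" for _ in range(trials))
print(f"Observed ace rate over {trials:,} draws: {aces / trials:.1%}")

# The same arithmetic holds for any low-probability forecast: an event given a
# 10 percent chance, repeated often enough, happens about 10 percent of the time.
hits = sum(random.random() < 0.10 for _ in range(trials))
print(f"A '10 percent' event occurred in {hits / trials:.1%} of trials")
```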
But I now think explanation is doomed to fail. For an individual event, people can’t resist saying that a probability was “right” if it was above 50 percent and “wrong” if it was below 50 percent. When this happens, those of us who believe in probabilities can’t just shake our heads and mutter about white Christmases. We have to communicate more effectively.
I think part of the answer lies with Kahneman’s insight: Human beings need a story.
It’s not enough to say an event has a 10 percent probability. People need a story that forces them to visualize the unlikely event, so that they don’t round 10 percent down to zero.
Imagine that a forecast giving Candidate X a 10 percent chance included a prominent link, “How X wins.” It would explain how the polling could be off and include a winning map for X. It would all but shout: This really may happen.
Welch, a Dartmouth professor, pointed me to an online pictograph about breast-cancer risk. It shows 1,000 stick figures: 973 are gray (no breast cancer within 10 years), 22 are yellow (develop it and survive) and 5 are red (die from it). You can see the most likely outcome without ignoring the others.
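For readers who want to see how such a graphic comes together, here is a rough, hypothetical Python sketch (it is not Welch's actual pictograph) that lays out 1,000 text characters in a grid with the same 973 / 22 / 5 split, so the rare outcomes stay visible alongside the common one.

```python
import random

# A rough text stand-in for the 1,000-person pictograph:
# '.' = no breast cancer, 's' = develops it and survives, 'x' = dies within 10 years.
# The 973 / 22 / 5 split matches the counts cited above; the layout itself is illustrative.
counts = {".": 973, "s": 22, "x": 5}

figures = [symbol for symbol, n in counts.items() for _ in range(n)]
random.shuffle(figures)  # scatter the rare cases through the grid

# Print the 1,000 figures as a 25 x 40 grid, one character per person.
for row in range(25):
    print("".join(figures[row * 40:(row + 1) * 40]))
```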
Yes, I understand that ideas like this won’t eliminate confusion. But even modest progress would be worthwhile.
The rise of big data means that probabilities are becoming a larger part of life. And our misunderstandings have real costs. Obama administration officials, to take one example, might have treated Russian interference more seriously if they hadn’t rounded Donald Trump’s victory odds down to almost zero. Alas, unlike a dice roll, the election is not an event we get to try again.