The title character of Kenny Rogers’ 1978 hit song, “The Gambler,” warned another unnamed, down-on-his-luck card-player, “If you’re gonna play the game, boy,/ You’ve gotta learn to play it right.”
His famous and final wisdom, immediately characterized by his companion as “an ace that I could keep,” included:
You’ve got to know when to hold ’em
Know when to fold ’em
Know when to walk away
Know when to run. . . .
However, beyond enumerating these crucial skills, the man who “made a life/ Out of readin’ people’s faces/ [And] knowin’ what the cards were/ By the way they held their eyes” offered no details on how to master them.
Similarly, Nobel Prize winner Daniel Kahneman, summarizing for a popular audience his decades of pioneering research into cognitive traps in the humbling—and mind-expanding—Thinking, Fast and Slow (2011), concluded:
“How can we improve judgments and decisions, both our own and those of the institutions that we serve and that serve us? The short answer is that little can be achieved without a considerable investment of effort. . . . Except for some effects that I attribute mostly to age, my intuitive thinking is just as prone to overconfidence, extreme predictions, and the planning fallacy as it was before I made a study of these issues. I have improved only in my ability to recognize situations in which errors are likely. . . .
“Organizations are better than individuals when it comes to avoiding errors, because they naturally think more slowly and have the power to impose orderly procedures. Organizations can institute and enforce the application of useful checklists, as well as more elaborate exercises.”
Ten years later, some procedures, checklists, and exercises are featured in Noise: A Flaw in Human Judgment, co-authored by Kahneman, Olivier Sibony (You’re About to Make a Terrible Mistake! (2020)), and Cass R. Sunstein (with Richard Thaler, Nudge: Improving Decisions about Health, Wealth, and Happiness (2008)).
However, a number of readers might find the book a faster-paced, jazzier version of proven crowd-pleasing material—or the same apparent ace, dealt one more time.
Kahneman, Sibony, and Sunstein sidestep some of today’s most socially sensitive issues about fairness and accuracy in decision-making by declaring that bias—a predictable deviation from a norm (whether conscious or subconscious, and whether for reasons innocuous or otherwise)—“is a compelling figure, but noise is the background to which we [usually] pay no attention.”
Information theory defines “noise” as unintentional and undesirable additions, distortions, or other errors in the transmission of an original “signal.”
For these authors, though, “noise” refers to an array of “undesirable variability in judgments of the same problem”—for example, in divergent sentences handed down for the same offense by judges of different dispositions towards leniency (“level noise”); for the same offense, by the same judge in different moods (“occasion noise”); and by “patterned differences between judges in the influence of offense/offender characteristics” (“pattern noise”). In light of this list (at least the last of whose elements might not always be easy to distinguish from “bias”), it is hardly surprising that the authors insist that, “Wherever there is judgment, there is noise, and more of it than you think.”
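When the book later combines these components, it does so in the additive, variance-style fashion standard in statistics. A sketch of that decomposition, in my notation rather than a quotation from the book (whose later chapters place occasion noise inside pattern noise):

\[
(\text{System Noise})^{2} = (\text{Level Noise})^{2} + (\text{Pattern Noise})^{2},
\]
\[
(\text{Pattern Noise})^{2} = (\text{Stable Pattern Noise})^{2} + (\text{Occasion Noise})^{2}.
\]

The components are treated as independent sources of variance, so their squares, not the quantities themselves, add up to the total.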
They turn to “the technical psychological literature” for a definition of “judgment”: “measurement in which the instrument is a human mind.” Just as the varieties of noise can overlap, there is no clear line between “predictive judgments” (such as, “How well will Candidate V perform if we hire her for Job W?”) and “evaluative judgments” (“How well did Y perform for us in Position Z over the last year?”).
Compounding this confusion, the authors acknowledge that “Matters of judgment, including professional judgments, occupy a space between questions of fact or computation on the one hand and matters of taste or opinion on the other. They are defined by the expectation of bounded disagreement.”
And yet, in the book’s depiction of a hypothetical boardroom meeting conducted to minimize noise, a CEO admonishes an outspoken director: “[W]e are all reasonable people and we disagree, so this must be a subject on which reasonable people can disagree.”
Given the authors’ own vagueness about the acceptable limits of disagreement, which “is itself a judgment call and depends on the difficulty of the problem,” this CEO’s slippery circularity might strike careful readers as disingenuous at best (and perhaps prompt them to wish that Kahneman, Sibony, and Sunstein had devoted much more discussion to scholarly studies of “bullshit receptivity”).
In fact, by the authors’ own definition, considerations of “noise” do not apply to directors’ (or anyone else’s) “singular decisions” (as in their boardroom example, approving or rejecting a potential acquisition), which are not “recurrent judgments that interchangeable employees routinely make in large organizations.” It is unclear how noise and “bounded disagreement” would apply to any decision or “judgment” that, in contrast to handing down a discretionary sentence or otherwise selecting one option from several (or many) available, consists simply of the approval or disapproval of a proposed transaction.
However, the authors maintain that the same techniques used, in the context of repeated decisions, for reducing bias and noise—which themselves “play the same role in the calculation of overall error”—would enhance the process for making “singular decisions.”
They then introduce a few diagrams, some simple formulas, and some references to Carl Friedrich Gauss’s “method of least squares” for measuring “error,” which they define to encompass both bias and noise.
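The core identity, in standard statistical notation (mine, not the book’s): for $n$ judgments $y_i$ of a true value $y^{*}$, with mean judgment $\bar{y}$,

\[
\underbrace{\frac{1}{n}\sum_{i=1}^{n}\left(y_i - y^{*}\right)^{2}}_{\text{overall error (MSE)}}
= \underbrace{\left(\bar{y} - y^{*}\right)^{2}}_{\text{bias}^{2}}
+ \underbrace{\frac{1}{n}\sum_{i=1}^{n}\left(y_i - \bar{y}\right)^{2}}_{\text{noise}^{2}}.
\]

The squaring is what makes bias and noise “play the same role”: a given amount of scatter around the truth costs exactly as much, in mean squared error, as an equally large systematic offset from it.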
Although Gauss’s mathematical technique, developed in 1795, is “the intellectual foundation of this book,” the authors admit that it relies on “detailed arguments. . . far beyond the scope of this book,” and “is not immediately obvious.” In fact, “The idea seems arbitrary, even bizarre.”
At this point, these three experts in exposing the fallibilities of human intuition state that the formula “builds on an intuition that you almost certainly share. . . [N]o other formula would be compatible with your intuition. . . .”
Even so, the formula “does not apply to evaluative judgments, . . . because the concept of error, which depends on the existence of a true value, is far more difficult to apply.”
What practical guidance does Noise ultimately offer to directors? Instead of intuition-based processes for decision-making, the authors recommend statistical approaches, although warning that, as a form of the “System 2” approaches discussed in Kahneman’s previous work, “statistical thinking is effortful”; it also “demands specialized training.”
Other elements resurfacing from the authors’ works include warnings against “informational cascades” (in which group decisions are affected by the order and authority with which members contribute opinions and information) and “group polarization” (“The basic idea is that when people speak with one another, they often end up at a more extreme point in line with their original inclinations”). On a “Bias Observation Checklist” provided in an appendix, now-familiar concepts like anchoring, loss aversion, and present bias appear.
To reduce noise, key “decision hygiene” principles include “sequencing [exposure to] information [including other decision-makers’ opinions] to limit the formation of premature intuitions,” and assembling teams whose members “are selected for being both good at what they do and complementary to one another.”
The content of two brief chapters could serve as the basis of a useful book of its own: “In predictive judgments, human experts are easily outperformed by simple formulas—models of reality, models of a judge, or even randomly generated models,” or crude models that weight all relevant variables equally. Even “frugal models” that “look like ridiculously simplified, back-of-the-envelope calculations” can under certain circumstances “produce surprisingly good predictions.” The authors acknowledge the potential of “algorithmic bias,” particularly with regard to race and gender, but claim that algorithms “can be more transparent than human beings are” in this regard.
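To make the flavor of such models concrete, here is a minimal sketch of an equal-weight (“improper”) model in Python (hypothetical data and cue names, not code from the book or its sources):

```python
# Minimal sketch of the kind of equal-weight model the authors describe
# (hypothetical data and cue names, not code from the book).
import numpy as np

rng = np.random.default_rng(seed=42)

# Hypothetical ratings for 200 job candidates on three relevant cues,
# e.g., a structured-interview score, a work sample, a cognitive test.
cues = rng.normal(loc=50.0, scale=10.0, size=(200, 3))

# Standardize each cue to mean 0 and standard deviation 1, so that no
# cue dominates merely because of its measurement scale.
z = (cues - cues.mean(axis=0)) / cues.std(axis=0)

# Equal weights: the prediction is simply the average of the standardized
# cues. Nothing is fitted, so nothing varies with the mood of a judge.
prediction = z.mean(axis=1)

# Rank candidates from strongest to weakest predicted performance.
ranking = np.argsort(-prediction)
print(ranking[:10])  # indices of the ten most promising candidates
```

The design point is the one the authors stress: because such a model applies the same rule to every case, it is perfectly noise-free, and that consistency alone often outweighs whatever subtlety a human judge adds.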
Another chapter presents “a stylized example that is a composite of several real cases,” to illustrate the effect of a Mediating Assessments Protocol (or MAP) on a board considering a potential acquisition (although the situation is described as a “one-off, singular decision,” which the authors have already defined as being less susceptible to noise-minimization considerations).
If the process emphasizes independently submitted assessments at the beginning, “[a] board member would need to come up with strong reasons to be against the deal while staring at a list of mediating assessments that mostly supported it.”
Although this approach might complicate the efforts of a director to dominate, “game,” or stymie the deliberations (particularly, the authors suggest, as compared to situations involving the board’s application of a formula), it certainly would not preclude such concerns, especially if that director had read Noise.
Similarly, I proposed some years ago, in connection with my own summary list of decision-making traps, that a “cognitive curriculum” of the popular literature in this area would not only improve the board’s function but would also “help to immunize executives against aggressive attempts by their competitors, creditors, and customers to exploit the dozens of vulnerabilities identified by these works.”
Rather than (as the authors recommend in another appendix) “conducting a noise audit,” or appointing a “decision observer, someone who watches [the] group and uses a checklist to diagnose whether any biases may be pushing the group away from the best possible judgment,” boards might be better served simply by: (1) providing each director with a copy of Thinking, Fast and Slow; (2) devoting a few minutes at each of their succeeding meetings to discussing (or having an expert lead a discussion of) a portion of it; and (3) developing their own customized procedures, checklists, and exercises.
As Kenny Rogers sang in 1978,
Every gambler knows that the secret to surviving
Is knowing what to throw away
And knowing what to keep. . . .
Or, as information theorists might say, knowing how to distinguish the “signal” from the “Noise.”