According to an ancient scripture, the Buddha once refused to answer a monk’s metaphysical questions, and instead compared him to a man who, having been wounded—possibly fatally—by a poisoned arrow, refused to have it extracted unless and until he could be told the particulars of the archer, the bow, and the arrow.
Certainly, many students and practitioners of Buddhism would agree on the practical primacy, particularly in situations of personal peril, of the Buddha’s foundational Four Noble Truths, the first of which has been loosely translated as, “Life is suffering”; and the others of which briefly identify the cause of, and paths to relief from, that suffering.
Although each of those four abbreviated statements can be usefully explored in great depth, more esoteric philosophical doctrines would probably serve little immediate purpose, and might never be verifiable through personal experience.
Yet under some circumstances, boards might well consider relying for “actionable” advice on systems, models, practices, and/or experts whose results, conclusions, and recommendations have not been—and maybe cannot be—explained, in any detail, to or by the directors themselves.
Section 141(e) of the Delaware General Corporation Law provides that directors will, in performing their duties, “be fully protected” from personal liability if they rely “in good faith” upon corporate records or upon “information, opinions, reports or statements presented to the corporation” by any of its officers, employees, board committees, or by “any other person as to matters the member reasonably believes are within such other person’s professional or expert competence and who has been selected with reasonable care by or on behalf of the corporation.”
For instance, in rejecting breach of fiduciary duty claims against the directors of The Walt Disney Company, who had granted incoming president Michael Ovitz a compensation package that entitled him to a severance amount of more than $130 million after fourteen months in office, the Delaware Supreme Court noted that the board, and its compensation committee, had been entitled to rely on the expertise of compensation consultant Graef Crystal. Brehm v. Eisner, 746 A.2d 244, 261 (Del. 2000).
In such circumstances, for a shareholder’s derivative lawsuit to overcome the business judgment rule’s presumption that the directors had decided loyally, carefully, and in good faith—and for her complaint to survive the company’s motion to dismiss it—she “must allege particularized facts (not conclusions) that, if proved, would show, for example, that: (a) the directors did not in fact rely on the expert; (b) their reliance was not in good faith; (c) they did not reasonably believe that the expert’s advice was within the expert’s professional competence; (d) the expert was not selected with reasonable care by or on behalf of the corporation; (e) the subject matter. . . that was material and reasonably available was so obvious that the board’s failure to consider it was grossly negligent regardless of the expert’s advice or lack of advice; or (f) that the decision of the Board was so unconscionable as to constitute waste or fraud.” Id. at 262.
The appropriateness and good faith of the board’s reliance was not affected by “[w]hat Crystal now believes in hindsight that he and the board should have done” in approving the contract, id. at 261; or, by the fact that the compensation committee “chose not to follow Crystal’s recommendations to the letter. The role of experts under § 141(e) is to assist the board’s decisionmaking—not supplant it.” In re The Walt Disney Company Derivative Litigation, 907 A.2d 693, 770 n.550 (Del. Ch. 2005).
However complex a compensation arrangement might become, it certainly involves mathematics simpler and less opaque than those employed by quantitative traders, or “quants,” to value, buy, and sell stocks, bonds, and commodities.
In his account, The Quants, Scott Patterson explains that these traders “couldn’t care less about a company’s ‘fundamentals,’ amorphous qualities such as the morale of its employees or the cut of its chief executive’s jib. . . . [They] devot[ed] themselves instead to predicting whether a company’s stock would move up or down based on a dizzying array of numerical variables such as how cheap it was relative to the rest of the market, how quickly the stock had risen or declined, or a combination of the two—and much more.”
Gregory Zuckerman’s profile of James Simons, The Man Who Solved the Market, and his pioneering quant firm, Renaissance Technologies, similarly observed that in his early research, “Simons and his colleagues used mathematics to determine the set of states best fitting the observed pricing data: their model then made its bets accordingly. The why’s didn’t matter, [they] seemed to suggest, just the strategies to take advantage of the inferred states.”
In fact, “Simons and his colleagues generally avoid predicting pure stock moves. It’s not clear any expert or system can reliably predict individual stocks, at least over the long term, or even the direction of financial markets. What Renaissance does is try to anticipate stock moves relative to other stocks, to an index, to a factor model, and to an industry.”
More concerned with correlations than corroborations, some quants “let the data point them to the anomalies signaling opportunity. They. . . didn’t think it made sense to worry about why these phenomena existed. All that mattered was that they happened frequently enough to include in their updated trading system, and that they could be tested to ensure they weren’t statistical flukes.”
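The fluke-testing mindset described above can be illustrated with a toy example. The sketch below is purely hypothetical—the `fluke_check` function, its sign-shuffling scheme, and the sample return series are illustrative inventions, not anything Renaissance or any quant firm actually ran. It asks a simple question of a backtested signal: if the signs of its daily returns were random, how often would pure chance look at least this good?

```python
import random

def fluke_check(signal_returns, n_shuffles=10_000, seed=0):
    """Crude permutation test for a trading signal.

    Randomly flips the sign of each return and measures how often the
    shuffled series' mean matches or beats the observed mean.  A high
    fraction suggests the 'anomaly' could be a statistical fluke.
    """
    rng = random.Random(seed)
    n = len(signal_returns)
    observed = sum(signal_returns) / n
    beats = 0
    for _ in range(n_shuffles):
        shuffled = sum(r * rng.choice((-1, 1)) for r in signal_returns) / n
        if shuffled >= observed:
            beats += 1
    return beats / n_shuffles  # approximate one-sided p-value

# A small but persistent edge should look very unlikely to be chance:
steady = [0.002, 0.003, 0.001, 0.004, 0.002, 0.003, 0.001, 0.002] * 5
p_value = fluke_check(steady)
```

A p-value near zero suggests the pattern is unlikely to be noise; a real trading operation would layer far more safeguards (out-of-sample testing, transaction costs, multiple-comparison corrections) on top of anything this simple.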
At one point, Simons observed, “I don’t know why planets orbit the sun. . . That doesn’t mean I can’t predict them.” (Possibly for a related reason, one of the quant firms that Simons dealt with had been named Kepler Financial Management.)
Yet Simons complained, of an early trading program, “I can’t get comfortable with what this is telling me. . . I don’t understand why [the program is saying to buy and not sell]. . . . It’s a black box!” Indeed, “By 1997, more than half of the trading signals Simons’s team was discovering were nonintuitive, or those they couldn’t fully understand. . . . They only steered clear of the most preposterous ideas. . . Over time, they frequently discovered reasonable explanations. . . .”
Renaissance’s programs could automatically direct more funds to strategies that had been successful. However, when things went wrong, “because so many of the system’s trading signals had developed on their own through a form of machine learning, it was hard to pinpoint the exact cause of the problems or when they might ebb; the machines seemed out of control.”
As Clifford Asness, another leading quant, noted, “When you’re following a model that makes thousands of decisions, judgmentally overriding . . . any one or a handful of decisions is highly unlikely to matter, and overriding many is impossible. And to the extent it matters, we quants worry very much that we’ll undo what our models are trying to do.”
When the stock market dropped dramatically in August 2007, “Some rank-and-file senior scientists [at Renaissance] were upset—not so much by the [firm’s] losses, but because Simons had interfered with the trading system and reduced positions. Some took the decision as a personal affront, a sign of ideological weakness and a lack of conviction in their labor. . . ‘You believe in the system, or you don’t,’ [one] scientist said, with some disgust.”
Asness also insisted that quants do not “use ‘black boxes.’ . . . The box is about as translucent as it comes. . . . [Q]uants may not know everything they own, but once they try, they can tell you precisely why they own it! I think black box is a Luddite slur that is rarely accurate or fair.”
Ultimately, though, however opaque and self-directed a trading system might be, its justification and validation lie in its consistently attaining “alpha,” or, investment returns measurably better than those of the market generally. An industry focused on the numerical bottom line will likely grant little leeway to such spinning as that gamely offered, after second-quarter (Q2) 2020’s disappointing financial results, by comedian Alexis Gay’s fictional manager: “The numbers tell one story—I’d like to tell you another!”
On the other side of the decision-making spectrum from “hard [as in, both difficult and definite] mathematics,” a formerly obscure group of government agents purportedly produced practical results, probably otherwise unattainable, through an initiative whose principles, physics, and metaphysics would prove even more mind-boggling than those of the quants.
For decades, the U.S. intelligence and defense communities employed and deployed (mentally, if not physically) “remote viewers,” who exercised, in the words of an original “psychic spy,” Joseph McMoneagle, “a human ability to produce information about a targeted object, person, place, or event, while being completely shielded from the target by space, time, and other forms of shielding [under] a very specific scientific protocol. . . developed at Stanford Research Institute in the early 1970s [which] has become more rigorous and specific since then.”
Just as quant (and physics Ph.D.) Emanuel Derman concluded that, “The more I look at the conflict between markets and theories, the more that limitations of models in the financial and human world become apparent to me,” McMoneagle, who received the Army’s Legion of Merit award, introduced his account by admitting that it was “not about dissecting the secrets of how remote viewing works. To date, 100 years of research has failed to crack the code, so I feel that probably isn’t going to happen very soon.”
His unit, which grew out of CIA-sponsored research begun in the early 1970s (apparently largely because of concerns that the Soviet Union was already engaged in such efforts), worked under various names, including Project STARGATE. At different times, it also operated under the auspices of the U.S. Army Intelligence and Security Command (INSCOM) and of the Defense Intelligence Agency (DIA). In 1995, the CIA publicly acknowledged its existence, and formally disbanded it; in 2017, the Agency declassified, and made publicly available, a number of documents related to remote viewing efforts.
If the quants, in order to test and hone their theories, went to great (figurative and literal) lengths and expense to collect, compile, and computerize historical data of stock and commodity trading prices, remote viewers generated their own data, from minimal cues, usually while lying down or sitting in a quiet room.
If quants recognized that much of their value lay in defining a client’s question and then designing and refining the process for answering it (according to one, “Usually the hardest part. . . is framing the problem in the first place”), remote viewers were provided with only a string of numerical “coordinates” (not necessarily corresponding to geographical longitude and latitude), or even less information (perhaps a photo sealed in an envelope).
A key part of their protocol was that the viewer, as well as the “monitor” who prompted and helped record the viewer’s answers, were not to be “front-loaded” with any information about the target. Viewers, who worked in separate rooms, were generally discouraged from discussing their impressions with each other. As some crises became breaking news, they might be ordered not to listen to their car radios while driving to work.
In some circumstances, their reported activities might have made even these constraints academic. For instance, McMoneagle claims to have, for demonstration purposes (for the skeptical head of a government agency; for a psychology professor; and for a television program) identified the target even before his questioners had selected it. The publicly-available literature contains recurring references to and reflections on the degree to which remote viewers might be able reliably to “see the future.”
Yet by 1978, “Some of our reports were being passed around areas of the Pentagon and were being viewed with great interest. Our accuracy against many of these targets was even more astounding since only the people in the Pentagon who identified the targets. . . knew what was actually located in those positions. Some of the targets were even deliberately skewed to see what would happen.”
For instance, McMoneagle claims to have described in detail, in 1978, a prototype of the Army’s Abrams XM-1 tank (which, to further test his ability, had been moved into an aircraft hangar that itself was surrounded by airplanes); and, in 1981, the giant Typhoon submarine being constructed by the Soviet Union in a secret Baltic facility.
Among their other missions, STARGATE viewers were requested to locate: hostages in the Iran hostage crisis (1979-1980) (McMoneagle claims that they also perceived preparations for, and the fatal helicopter collision that ended, Operation Eagle Claw’s failed rescue effort); Brigadier General James L. Dozier, kidnapped in Verona by Marxist terrorists (1981); and the location of a downed Soviet aircraft, believed to be carrying nuclear weapons, in the Congo (1995).
According to McMoneagle, the individual viewers’ “material would be summed up in a report and passed back to the office requesting support. Since they were the only ones who knew or suspected what was going on [at the target site], it would then be compared to other information they possessed and deemed either supportive or non-supportive. In any event, it would be used to generate newly formed leads for more traditional methods of collection, but it was never used as material that stood alone. . . .”
Participants’ published accounts of the program indicate that the protocols for remote viewing were being developed and (often incrementally) refined and calibrated as their work progressed—and that a uniform process was generally emphasized, although viewers conducted their own individual experiments with variations. McMoneagle wrote that, after a certain point, “I didn’t even need a monitor. . . I had spent the better part of my career as a remote viewer teaching myself to do remote viewing under any circumstances.”
This literature suggests that practicing and perfecting remote viewing, like developing and operating mathematical models and software programs for trading, might be considered both a science and an art. Remote viewers might be likely to agree with Derman’s declaration that, “The truth is that models are rarely an unambiguous source of profits. What counts as much or more is the trading system and the discipline it imposes, the operational errors it disallows and the intuition that traders gain from being able to experiment with a model.” Three of quant Thomas C. Wilson’s five “Lessons Learned” specifically involve intuition (“Build your intuition before building your model”; “Trust your intuition”; and “Challenge your intuition”).
Like quants, remote viewers devoted much energy to developing not only a system, but methods of training others to apply that system. Derman reported that, “Whenever I have a new problem to work on—in physics or options theory—the first major struggle is to gain some intuition about how to proceed; the second struggle is to transform this intuition into something more formulaic, a set of rules anyone can follow, rules that no longer require the original insight itself. In this way, one person’s breakthrough becomes everybody’s possession.” He also recognized that “What traders need is standardized systems that contain the models, systems that force them to use the models in disciplined ways.”
Similarly, remote viewer Lyn Buchanan recalled that some agencies originally wanted his unit to “develop a standardized teaching method. . . . that could be taught to anyone. . . in five minutes, so he could tell his commander what was over the hill and where to point the guns.”
However, although quants, individually and collectively, ultimately overturned their image, among traders and some others, as being ineffectual intellectuals, remote viewers and their unit (funded on a year-to-year basis) never overcame sponsors’ sensitivity to the “giggle factor.”
Buchanan noted, “The very nature of a unit of ‘psychic spies’ was an anathema to the military. It was also suspected to be an anathema to the American public. The politicians who funded the project were always fearful that they might be found out and have to explain their actions to their constituents.”
McMoneagle, also, noticed, among “a lot of people who owed their positions, promotions, and livelihood to politics, . . . plain old-fashioned fear—that if someone caught them supporting something they themselves would naturally ridicule, then by association they would be ridiculed as well. Simply put, they didn’t have the stomach or the courage for it.”
(Concerns about fiduciaries who embraced less than fully terrestrial perspectives surfaced in Silicon Valley three years after the public announcement of Project STARGATE’s termination. The CEO and co-founder of one firm stepped down shortly after publicly espousing UFO-related theories, and discussing a mystical experience of his own; he ultimately rejoined the reorganized company as its chairman. These developments had arguably been foreshadowed by “The Candidate,” the November 8, 1994 episode of the television show Frasier, in which a mayoral candidate whom the psychiatrist title character had been preparing to publicly endorse, privately disclosed to him, at the last minute, a UFO experience.)
(Accounts of the quants’ careers often contrast their acceptance and appreciation, by and of, the academic and the applied-finance atmospheres that they traversed (or, sometimes, straddled). A comfortingly prosaic feature of the otherwise extraordinary, and often-unsettling, memoirs of remote viewers is the constant influence of inter- and intra-office and -agency politics. To Buchanan, “Political and financial problems and all the other aspects of the modern workplace were just as much a part of our daily lives as any other worker in any other office.” This theme is strikingly present in another participant’s book, which presents what might be the single most detailed history of, as well as speculation on the principles behind, the unit’s operations, while concluding that “the basic principle is still mysterious.”)
So where does this leave directors who are considering applying “black box” methods?
First, although boards are certainly not legally required to follow (or to confine themselves to) the practices of other boards, they would want to be able to establish both the legitimacy of the field of inquiry and their reasonableness in selecting the quant(s), remote viewer(s), or other system-builders in question, as qualified in their fields.
It would certainly be useful, in both contexts, to document, to the degree possible, the reliance of other boards (preferably but not necessarily in the same industry), and/or the military and intelligence communities, on practices and practitioners of this type, and any reports, or even rumors, of their successes or failures.
Second, to the degree that the technique or process can be demystified and/or demonstrated, particularly through documentation, the board should inform itself as well and as reasonably as the circumstances allow, including (in the case of quant programs) when and how their automated operations might be overridden.
Third, if time permits, boards might initiate a limited, trial-basis involvement with the method, and evaluate the results and their implications, before committing more resources, and risk, to the program.
Fourth, boards should be able to show that they provided, or otherwise did what they could to enable, access by the expert to, as much relevant, and correct, data as possible. (For remote viewers, this concern would seem to be “out of place.”)
Fifth, boards should document their efforts to compare the results of a “black box” method with those of more traditional methods that they had employed, and be able to explain how they reconciled any conflicts between the two, and/or how they found the newer method’s results corroborated by or consistent with those of more familiar methods.
Sixth, boards should record the degree to which conventional methods have been unable to produce “actionable” information, and the degree to which the company can be considered to be in “crisis mode”— factors that have been cited in support of the official use of remote viewers in the nation’s interest (and, sometimes, of local police departments’ use of “psychics”).
Finally, it might help to keep in mind a (quite possibly apocryphal) story about Nobel Prize-winning physicist and Manhattan Project participant Niels Bohr (1885-1962).
Supposedly, a visitor to Bohr’s office was startled to see a horseshoe hanging on a wall.
When asked whether he, a world-class champion of rationality and logic, actually thought that this practice would bring him good luck, Bohr (entirely unlike the Renaissance quant who had declared, “You believe in the [trading] system, or you don’t”), reportedly answered, “Personally, I don’t really believe that—but I understand that it works whether you believe in it or not.”
[This blog post is dedicated to the memory of my friend and law school classmate Steve Price—the very first person to speak in our very first class, but only after everyone else wouldn’t—and who was, then and afterwards, never at a loss for words both fitting and (often) funny.
[Steve was not only a mensch, but also a shadchan (matchmaker), both of people and of ideas. He was, seemingly effortlessly, the most-connected and most-networked person I knew, in the best and highest senses of both of those terms. He truly and selflessly took delight in making introductions and catalyzing connections.
[Although Steve may well have been the smartest person in many rooms we were in, he never went out of his way to prove it. He often fostered conversations, but never had to be their subject or their center.
[Once classes return to campus, Steve, whenever I see law students laughing with each other, I will remember you. Just something I’ve been thinking about.
[Rest in peace, my friend.]