
The Myth of Expert Advice - Part 5

May 10, 2012
Mike Dever

"Experts" (people who make it their job to understand and forecast markets or events) are usually no better at their jobs than dart-throwing monkeys. In this series of articles, we examine why these "experts" can be so wrong.

“Monkeys, Rats, and Bugs”

Part 1, Part 2, Part 3, and Part 4 of our series on The Myth of Expert Advice showed that “experts”, people who make it their job to understand and forecast markets or events, are generally unreliable, typically woefully inaccurate, and often conflicted.

In this final installment of our series on The Myth of Expert Advice, we will examine why these “experts” can be so wrong.

Monkeys

As you may recall from Part 1, Philip Tetlock, a research psychologist at Stanford University, conducted a truly telling study and found that the experts could have been beaten by dart-throwing monkeys![i] The experts’ predictions were worse than if they had randomly selected the outcome.

And it gets worse. Tetlock posed similar questions to people who were not experts, and the experts scored no better than this non-expert group did. Their massive level of knowledge relative to the non-experts did nothing to improve their predictive capabilities. Tetlock wasn’t the first to discover this. In one earlier study from the 1960s, researchers asked college counselors to predict the grades high school students would achieve as college freshmen. The counselors were provided with test scores, grades and the results of personality tests, and they were also permitted to interview the students. Their predictions were compared to those derived from a formula based solely on test scores and grades. The formula beat the counselors.

An even earlier study, from the 1950s, involved the results of tests used to diagnose brain damage in patients. This data was presented to a group of clinical psychologists and their secretaries. The result of the study revealed that the psychologists’ diagnoses were no better than the secretaries’.[ii]

The fact is that people over-think and quickly form biases based on limited information. Once that bias is formed, substantial effort is employed in supporting it, regardless of whether it is right or wrong. People really hate to be wrong.

Rats!

No one on earth would ever assume that a rat could outsmart a human in any given situation. But in the basic understanding of probability, without a doubt a mandatory skill for achieving any measure of success in money management, rats seem to be able to outperform people. Here’s an example that Tetlock witnessed at Yale University 30 years before he published the results from his pundit study.

In this particular Yale study, a rat was placed in a T-shaped maze. The researchers placed food in the left arm of the “T” 60% of the time and in the right arm 40% of the time. Students were asked to predict on which side of the “T” the food would appear each time; the rat, of course, was left to find the food on its own. The students weren’t told that there would be a bias to one side. But it was the rat that eventually figured out that the food was more likely to appear on the left side than the right and, as a result, almost always went to the left first, scoring roughly 60%. In contrast, the students scored only 52%![iii] In trying to outsmart the placement of the food, the students seemed to be looking for patterns that clearly didn’t exist and, as a result, were outsmarted by the rat.
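The students’ 52% is no accident. If you “probability match” (guess left 60% of the time and right 40% of the time), your expected hit rate against a random sequence is 0.6 × 0.6 + 0.4 × 0.4 = 52%, while simply choosing the more likely side every time yields 60%. A quick simulation (a sketch of the setup, not code from the original study) illustrates the gap:

```python
import random

random.seed(1)

TRIALS = 100_000
P_LEFT = 0.6  # food is placed in the left arm 60% of the time

# Random food placement for each trial
food = ["L" if random.random() < P_LEFT else "R" for _ in range(TRIALS)]

# Rat's strategy ("maximizing"): always choose the more likely side
rat_hits = sum(side == "L" for side in food)

# Students' strategy ("probability matching"): guess left 60% of the
# time and right 40%, chasing a pattern that isn't there
student_hits = sum(
    ("L" if random.random() < P_LEFT else "R") == side for side in food
)

print(f"Rat (always left):         {rat_hits / TRIALS:.1%}")      # about 60%
print(f"Students (matching guess): {student_hits / TRIALS:.1%}")  # about 52%
```

Matching the base rate feels smarter, but against an independent random sequence no pattern-seeking strategy can beat picking the higher-probability side every time.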

This is a common human behavior. We try to outsmart the system (or the market), looking for patterns that don’t exist, desperate to “beat the system.” When that desire is combined with our need to be “right” and our easily established biases, we can become dumber than a rodent. Experts – pundits and advisors – surprisingly enough, are (in most cases) human too.

Bugs

This leads us to a third significant discovery made by Tetlock: the more often an expert appears on TV or in other media, the worse his or her batting average. Think about that.


The experts who are most often touted, and who reach the most disciples, turn out to be wrong most often.

This is not surprising. Experts exist to provide the media with a steady flow of content and, more importantly, entertainment. Their purpose is not to provide you with useful, profit-making information. Furthermore, if an expert has staked his reputation on a prediction announced to millions of fans repeatedly on television, across the Internet, over the radio, and in print, it will be very difficult for him to change his mind – even if the evidence overwhelmingly indicates he is wrong. At that point he is locked in to his bias.

If there were ever a reason to avoid expert opinions, this is it.

People have an enormous capacity for ignoring the facts and believing what they have already made up their minds to believe. That capacity also extends to their choices of how to manage their money.

Everyone has a bias, formed through a combination of research, learned reasoning and intuition. That bias is reinforced when you seek out experts, as you welcome new information that supports your bias and dismiss information that conflicts with your view.

A major problem with following expert advice is that it compounds an individual’s bias.

People form their biases and then welcome only the views of those experts who hold the same biases. And those experts in turn dismiss new information that doesn’t fit with what they already believe. As a result, people quickly reach a tipping point where their minds are set and they are locked in to their view.

Today, for example, the prices of Gold (NYSEArca: GLD) and Apple (NASDAQ: AAPL) are commanding the attention of the general public. In fact, the predominant gold ETF (NYSEArca: GLD) recently surpassed the S&P 500 ETF (NYSEArca: SPY) to become the largest ETF by market cap! The performances of GLD and AAPL are truly reflective of our current environment, so you likely have a friend or relative who is either a Gold Bug or a lover of i-Gadgets (we’ll call them Apple Bugs!). These Bugs devour any information that supports their view of rising prices. The fact that Gold and Apple have risen in price for much of the past 10 years only strengthens their belief in the truth behind this information and the experts who are presenting it to them. But that doesn’t make the information correct. And it certainly doesn’t mean that the result of that information will be higher prices going forward. It does, however, provide our Bugs comfort in pursuing their goal of owning a significant amount of Gold or Apple.

When people get locked in to a view and consume only information that supports it, they are no longer engaged in the pursuit of profits, but in the pursuit of entertainment. All of the studies described above indicate that our Bugs’ fascination with Gold or Apple (and their acceptance of expert opinion that supports their views) will likely lead to a loss of money.

Following “expert” advice becomes a drug. It’s hard to stop. The best advice is not to start.

This article is excerpted from Myth #13 of “Jackass Investing: Don’t do it. Profit from it.” by Michael Dever.

About the Authors:

Michael Dever is the CEO and Director of Research for Brandywine Asset Management, an investment firm he founded in 1982. He is also the author of “Jackass Investing: Don’t do it. Profit from it.”, which is the Amazon Kindle #1 best-seller in the mutual fund and futures categories. John Uebler is a Research Associate for Brandywine Asset Management. Please visit www.brandywine.com and www.jackassinvesting.com


[i] Philip Tetlock, Expert Political Judgment: How Good Is It? How Can We Know? (Princeton: Princeton University Press, 2005).

[ii] L.R. Goldberg, “The effectiveness of clinicians’ judgments: The diagnosis of organic brain damage from the Bender-Gestalt test,” Journal of Consulting Psychology, Vol. 23 (1959): 25-33.

[iii] Tetlock, Expert Political Judgment: How Good Is It? How Can We Know?, 40.

 
