A staggering caseload of misleading research

Most published research is false. That’s just a fact. If you don’t believe me, research it.

I know that is a disconcerting thing to say in a space read primarily by real estate professionals who produce, or base their investment decisions on, stacks of research every year. But sometimes we have to look a situation squarely in the eye.

If you need help researching my thesis and proving the accuracy of its assertion of inaccuracy (did you follow that?), I suggest Dr. John Ioannidis, an epidemiologist who has been waging war on sloppy research for years. He has helped develop a discipline called meta-research, which is the act of researching research.

You can find Dr. Ioannidis at Stanford University, where he has institutionalized his battle against errant research projects with the launch of the Meta-Research Innovation Center. METRICS is connecting people in fields such as medicine, statistics and epidemiology to delve more deeply into this problem.

This effort to tear the mask off the fecklessness of most research dates back to 2005, when Ioannidis wrote a paper titled “Why Most Published Research Findings Are False.” It was in that paper that he pointed an accusatory hypodermic at the “over-interpretation of statistical significance” in studies with sample sizes too small to be valid, which leads to erroneous conclusions.
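The core of that argument can be expressed with simple arithmetic. Here is a rough sketch of the positive predictive value of a “significant” finding; the specific numbers (prior, power, alpha) are assumed purely for illustration, not drawn from Ioannidis’s paper:

```python
# Hypothetical illustration: how often is a "statistically significant"
# finding actually true? Numbers below are assumed for demonstration.

def ppv(prior, power, alpha):
    """Positive predictive value of a significant result.

    prior: fraction of tested hypotheses that are actually true
    power: probability a study detects a true effect (1 - beta)
    alpha: significance threshold (false-positive rate)
    """
    true_positives = prior * power
    false_positives = (1 - prior) * alpha
    return true_positives / (true_positives + false_positives)

# A field where 1 in 10 tested hypotheses is true, studies are
# underpowered (20 percent power), and alpha is the usual 0.05:
print(round(ppv(prior=0.10, power=0.20, alpha=0.05), 2))  # ~0.31
```

Under those assumed conditions, fewer than a third of the “significant” findings reflect real effects; the rest are false positives.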

Besides leading us astray, bad research is a titanic waste of money. The Lancet recently reported that in 2010 about $200 billion was squandered on medical research that was either flawed in design, redundant, never published or poorly reported. That fizzled away 85 percent of the world’s total spending that year on medical research.

Ioannidis and the others running the METRICS lab monitor research projects and, presumably, publicly humiliate those who produce spurious work. But it is the kind of humiliation that serves the greater public good, shaming researchers into doing better work.

Be on guard.

Then again, we are talking primarily about medical and scientific research in METRICS’ case. But there is much to learn from flawed research of any stripe. Too-small sample sizes, lenient standards and other blemishes that afflict research are not strictly the faux pas of scientific inquiry.

There is also “publication bias” to contend with, which is said to be a pet peeve of Dr. Ioannidis. Researchers are more inclined to submit, and editors to accept, positive results rather than negative or inconclusive results. Compounding the problem is the tendency (especially in the Internet age) for research results to be republished dozens, hundreds or thousands of times. Only later are some of these research projects debunked and exposed as fallacious.
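Publication bias can be simulated in a few lines. This is a hypothetical sketch — the sample sizes, thresholds and study counts are invented — in which many small studies test an effect that does not exist, and only the “significant” ones get published:

```python
import random
import statistics

# Hypothetical simulation of publication bias: many labs test the same
# nonexistent effect (true mean = 0) with small samples, but only
# "significant" results get written up.

random.seed(42)

def study(n=10):
    """Run one small study of a nonexistent effect.

    Returns (observed effect, whether it cleared significance).
    """
    sample = [random.gauss(0.0, 1.0) for _ in range(n)]
    mean = statistics.mean(sample)
    se = statistics.stdev(sample) / n ** 0.5
    t = mean / se
    return mean, abs(t) > 2.26  # two-sided t critical value, df = 9

results = [study() for _ in range(2000)]
published = [effect for effect, significant in results if significant]

print(f"published {len(published)} of {len(results)} studies")
print(f"mean absolute 'effect' in published work: "
      f"{statistics.mean(abs(e) for e in published):.2f}")
```

Roughly five percent of these null studies clear the significance bar by chance, and their average reported effect looks substantial even though the true effect is zero — which is exactly what a literature filtered for positive results would show.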

I must counter-charge, however, that the notion of conducting meta-research creates the impression of an endless echo. If we are so bad at research, how good can we be at researching research?

What is our sample size? Are the standards we observe too lenient or stringent? Do we have a publication bias toward reporting egregious instances of deceptive research, while giving curt treatment to the exemplary?

Put a researcher on it.



Mike Consol is editor of The Institutional Real Estate Letter – Americas.