Saturday, November 27, 2010

How to Set the Bullshit Filter When the Bullshit is Thick

A while back I wrote a short piece in the New York Times Magazine about a researcher named John Ioannidis who had found that over half of all new research findings later prove false:

Many of us consider science the most reliable, accountable way of explaining how the world works. We trust it. Should we? John Ioannidis, an epidemiologist, recently concluded that most articles published by biomedical journals are flat-out wrong. The sources of error, he found, are numerous: the small size of many studies, for instance, often leads to mistakes, as does the fact that emerging disciplines, which lately abound, may employ standards and methods that are still evolving. Finally, there is bias, which Ioannidis says he believes to be ubiquitous. Bias can take the form of a broadly held but dubious assumption, a partisan position in a longstanding debate (e.g., whether depression is mostly biological or environmental) or (especially slippery) a belief in a hypothesis that can blind a scientist to evidence contradicting it. These factors, Ioannidis argues, weigh especially heavily these days and together make it less than likely that any given published finding is true.

Now I’m delighted (and chagrined, too, I admit, that I didn’t do the damn story) to see that David H. Freedman, author of Wrong: Why Experts Keep Failing Us — and How to Know When Not to Trust Them — has profiled Ioannidis at length in the current Atlantic.

He’s what’s known as a meta-researcher, and he’s become one of the world’s foremost experts on the credibility of medical research. He and his team have shown, again and again, and in many different ways, that much of what biomedical researchers conclude in published studies—conclusions that doctors keep in mind when they prescribe antibiotics or blood-pressure medication, or when they advise us to consume more fiber or less meat, or when they recommend surgery for heart disease or back pain—is misleading, exaggerated, and often flat-out wrong. He charges that as much as 90 percent of the published medical information that doctors rely on is flawed. His work has been widely accepted by the medical community; it has been published in the field’s top journals, where it is heavily cited; and he is a big draw at conferences. Given this exposure, and the fact that his work broadly targets everyone else’s work in medicine, as well as everything that physicians do and all the health advice we get, Ioannidis may be one of the most influential scientists alive. Yet for all his influence, he worries that the field of medical research is so pervasively flawed, and so riddled with conflicts of interest, that it might be chronically resistant to change—or even to publicly admitting that there’s a problem.

This is an important story, for it — or rather, Ioannidis’s work — calls into question how much we can trust the evidence base that people are calling on to support evidence-based practice. According to Ioannidis, there’s scarcely a body of medical research that’s not badly undermined by multiple factors that will create either bias or error. And these errors persist, he says, because people and institutions are invested in them.

“Even when the evidence shows that a particular research idea is wrong, if you have thousands of scientists who have invested their careers in it, they’ll continue to publish papers on it,” he says. “It’s like an epidemic, in the sense that they’re infected with these wrong ideas, and they’re spreading it to other researchers through journals.”

This presents some really difficult problems for doctors, patients — and science and medical journalists. Ioannidis is not saying all studies are wrong; just a good healthy half or so of them, often more. In a culture that — for good reason — wants testable knowledge to draw on, what are we to draw on if the better part of the tests (the papers and findings, that is) are false? You can throw up your hands. You could, alternatively, figure that this wrong-much-of-the-time dynamic still leaves us ahead overall — advanced beyond what we were before, perhaps, but still not as far as we’d like.

The latter response makes some sense. But it’s made more problematic by the high stakes involved when we’re talking about high-impact (and expensive) treatments like surgery or heavy-duty pharmaceuticals. A stunning review a couple of years ago, for instance, found that the second-generation antipsychotics developed in the 1980s, hailed then as more effective and with fewer side effects than the prior generation, actually worked no better and caused (different) side effects just as bad — even though they cost about 10 times as much.

Enormous expense and, I suspect, not a little harm. The hype and false confidence around those drugs — the conviction that they improved on the drugs available before — probably led many doctors to prescribe them (and patients to take them) when they might have taken a pass on prescribing the earlier generation. As with the generation of antidepressants popularized at around the same time, these ‘newer, better’ drugs gave fresh impetus to pharmacological responses to mental health issues just as the profession and the culture were growing cynical about existing meds. They resuscitated belief in psychopharmacology. But that new life was based on false data. The consequence was not trivial; it created a couple of decades — and counting — of heavy reliance on psychopharmaceuticals whose benefits were oversold and whose drawbacks were downplayed.

There’s error and there’s error. It’s one thing to be wrong about low-impact treatments: to be wrong, for instance, about how much a low-impact drug like aspirin or glucosamine helps modest knee pain in athletes, or how much benefit you get from walking versus running, or whether coffee makes you smarter or just makes you feel smarter. The stakes run much higher when the treatments cost a lot in money or health. Yet little in our regulatory, medical, or journalistic cultures or practices acknowledges that.

Ioannidis hints at a way to compensate for this. He notes that the big expensive false reports tend to be generated and propagated by big moneyed interests. Ideally, skepticism should be applied accordingly. It’s not even that this science is more likely to be wrong (though that may be). It’s that the consequences may be more expensive. Here, as elsewhere, the smell of money should sharpen your bullshit filter.

Update/addendum, 14 Oct 2010, 2:01 PM EDT:

For yet more perspective on this, I recommend reading not only the Atlantic article cited above, but two others: Ioannidis’s big-splash 2005 paper in PLoS Medicine (quite readable), “Why Most Published Research Findings Are False,” and a follow-up by some others, “Most Research Findings Are False — But Replication Helps.” If you’re feeling hopeless from the above, as several people have expressed below and on Twitter, these may help.

It also helps to keep in mind the corollaries, or risk factors, that Ioannidis sets out in that 2005 paper. They’re useful in adjusting your BS filter and in identifying the sorts of disciplines, fields, and findings that deserve more skepticism.

Those corollaries:

Corollary 1: The smaller the studies conducted in a scientific field, the less likely the research findings are to be true.

Corollary 2: The smaller the effect sizes in a scientific field, the less likely the research findings are to be true.

Corollary 3: The greater the number and the lesser the selection of tested relationships in a scientific field, the less likely the research findings are to be true.

Corollary 4: The greater the flexibility in designs, definitions, outcomes, and analytical modes in a scientific field, the less likely the research findings are to be true.

Corollary 5: The greater the financial and other interests and prejudices in a scientific field, the less likely the research findings are to be true.

Corollary 6: The hotter a scientific field (with more scientific teams involved), the less likely the research findings are to be true.

He elaborates on these fruitfully in the paper.
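Those corollaries all fall out of a simple piece of arithmetic in the paper: treat a research field like a diagnostic test and ask how likely a claimed positive finding is to be true, given the pre-study odds that the hypothesis was right, the study’s power, and a bias term. Here’s a minimal sketch of that calculation in Python — my own rough rendering of the paper’s positive-predictive-value formula, with made-up scenario numbers rather than figures from the paper:

    # A rough sketch (mine, not Ioannidis's code) of the positive-predictive-
    # value arithmetic in the 2005 paper.
    #   R     : pre-study odds that a tested relationship is true
    #   alpha : Type I error rate (typically 0.05)
    #   beta  : Type II error rate (power = 1 - beta)
    #   u     : bias -- the share of would-be "non-findings" that get
    #           reported as findings anyway
    def ppv(R, alpha=0.05, beta=0.2, u=0.0):
        """Probability that a claimed positive finding is actually true."""
        true_positives = (1 - beta) * R + u * beta * R
        all_positives = R + alpha - beta * R + u - u * alpha + u * beta * R
        return true_positives / all_positives

    # Illustrative scenarios (numbers are mine, not the paper's):
    # a decently powered study in a field where roughly 1 in 11 tested
    # hypotheses is true, with modest bias...
    print(round(ppv(R=0.1, beta=0.2, u=0.1), 2))   # ~0.36

    # ...versus a small, underpowered study with heavier bias
    print(round(ppv(R=0.1, beta=0.8, u=0.3), 2))   # ~0.12

Plug in less power, longer pre-study odds, or heavier bias, and the share of claimed findings that are actually true drops fast — which is, in essence, what the corollaries above are saying.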

Finally, J.R. Minkel alerts me to a post at Seth’s blog that looks like a good addition. (I lack time to read it thoroughly at the moment b/c I have to finish up an assignment. Trying to, you know, get it right, against the odds.)

If in doubt, it’s always safe and sensible to apply to any novel finding the old maxim that the great oceanographer Henry Bryant Bigelow reminded his brother of when his brother reported seeing a donkey sail by during a hurricane in Cuba: “Interesting if true.”

_____

David Dobbs writes articles and books, shares Google Reader items, occasionally visits Facebook, and spends too much time on Twitter. He’s currently in London working on a book about the epigenetic underpinnings of temperament.
