When I was a kid growing up in Los Angeles, there was a local TV show my dad enjoyed watching called “Fight Back! with David Horowitz.” Horowitz, a TV reporter and consumer advocate, put the claims manufacturers made about their products to the test. Whether it was Samsonite luggage withstanding abuse from a gorilla or Bounty really being the “quicker picker upper,” the claim went on his show and ended up either endorsed or debunked. It was Consumer Reports come to life, if you will: pitting products against one another to see which was worth putting down some hard-earned dollars for.
Now, over 30 years later, we in medicine are just getting around to doing the exact same thing Horowitz was doing with retail back in the 1970s: comparing the claims made by drug and device makers about their products.
Being the sophisticated academics we believe we are, we’ve given this process a name: Comparative Effectiveness Research (CER). But come on, aren’t we really just asking David Horowitz to come Back to the Future and host Fight Back for doctors and patients?
One has to wonder why this common-sense approach to figuring out how to spend our health care dollars has taken so long to become part of our efforts to fix health care. Frankly, I have no idea, except to point the finger at the usual suspects, like drug companies and their lobbyists on Capitol Hill, who don’t want to invest money in R&D if their product is going to be more expensive, more dangerous, or less effective than an existing one. Or, more recently, at the cliché argument that somehow learning what works best amounts to rationing care (as if spending trillions on excessive, inefficient, and uncertain care while leaving 48 million people uninsured isn’t).
Still, you’d think that holy triad of characters (doctors, patients, and insurers, including Medicare) would have wanted to know just this kind of information eons ago.
It’s been in pondering what seems like our complete lack of practicality that I’ve read more about CER and what it’s going to take to make it work. Among the challenges:
- Where to spend our money: Remember, Obama is funding CER with stimulus funds, a one-time shot in the arm for the project (though hopefully one that will continue into the future). Still, even $1.1 billion seems limited relative to the vast array of drugs and priorities. To that end, the Institute of Medicine released a report that is basically its Hot 100 list of topics we ought to invest in. More on that in an earlier post by Josh Seidman.
- We spent the money and got some answers. Now what? In a great commentary in JAMA about the potential and pitfalls of CER, Dr. Robert Brooks asks this fundamental question not because its answer isn’t obvious (implement the findings of the research) but because history tells us that doing so is much harder than it sounds. “The history of science shows that it takes a long time for new knowledge to be incorporated into day-to-day practice. So a second requirement for work funded under the stimulus package should be that successful innovations are implemented immediately. Thus, a successful application under the comparative effectiveness initiative must include constituents, such as health care organizations, hospitals, physicians, or organized community groups, that would agree to adopt the new therapy immediately if it were shown to be as safe as the old therapy but substantially less expensive.”
Brooks has a point, and if you don’t believe him, just look at how poorly current best practices are implemented by doctors across the country.
Despite those concerns, it seems that CER is here to stay in a big way. My feeling is that it’s better to have it and hold doctors, drug makers, and device makers accountable for their health care decisions than to keep practicing in the black box we work in today.