Last year brought some bad news for the manufacturers of so-called “antidepressant” drugs. Doctor Irving Kirsch’s book, The Emperor’s New Drugs, forcefully called attention to the fact that the drug companies’ own data – the data they didn’t want you to see – show these drugs are no better than a placebo for treating major depression. Around the same time, a meta-analysis published in JAMA called attention to the fact that the drug companies (who get to measure the safety and effectiveness of their own nostrums) routinely stack the deck to minimize the placebo effect in clinical trials.
Now comes the backlash to the backlash. And who better qualified to lead it than Doctor Peter Kramer, author of the 1993 bestseller Listening to Prozac?
Kramer’s love letter to Eli Lilly’s blockbuster product read like a sales pitch from a street-corner drug dealer: I’ve got this great stuff, man! Everybody’s tryin’ it! Why aren’t you tryin’ it? You gotta try it!
Doctor Eliot S. Valenstein, author of Blaming the Brain, had an interesting take on Kramer’s opus:
“The psychiatrist Peter Kramer’s book Listening to Prozac has conveyed the idea that marvelous things happen to people when their serotonin levels are raised. The book is written in a seductive style, because Kramer appears to be neutral and objective. So, for example, while Kramer does not question any of the claims for what Prozac is believed to do, he raises the ethical question about whether it is right for a psychiatrist to change the personality of people who are not really mentally ill. However, most readers who are exposed to the descriptions of people who have had their lives turned around by Prozac – people claimed to have been made happy, productive, and successful for the first time in their lives – are likely to request the pill and leave Peter Kramer to worry about the ethical questions.
“Because Kramer’s book is written in an anecdotal style, there is no way for a critical reader to assess whether the cases described are typical of the results generally produced by Prozac. Anecdotal reports in medicine often turn out to be unreliable and misleading. Several psychiatrists who have had considerable clinical experience with Prozac have pointed out that the wonderful changes described in Kramer’s book are rare, but he manages to convey the impression that they are commonplace and typical of what happens to most people who take Prozac.”
Now, in an op-ed piece in the New York Times, Kramer is coming to the defense of the drugs he has been shilling for most of his professional life. Most of his essay consists of attempts to debunk the findings of Kirsch and his colleagues.
A bit of background is in order here. Before the FDA approves a new drug, the drug companies are required to submit results of two clinical trials which show the drug to be significantly better than a placebo. That sounds pretty good, but there are two problems. One is that the benefits conferred by a drug may be statistically significant without being clinically significant. A simple example will make this clear. Suppose you had overwhelming evidence that taking a certain drug every day would make you live longer, with no untoward side effects. Would you take it? You would? Now suppose you found the drug would make you live all of ten seconds longer. Would you still take it? I didn’t think so.
Now it’s easy to understand how researchers can measure “hard” outcomes such as death, blindness, renal failure, etc. But how do you measure the effectiveness of a treatment for depression? Well, the most commonly used metric is the Hamilton Rating Scale for Depression (HRSD), a questionnaire that probes such matters as whether the patient has trouble sleeping, has ever attempted suicide, etc. The higher the score, the more depressed the patient is judged to be. The maximum possible score is 51 points. Both the American Psychiatric Association in the US and the National Institute for Health and Clinical Excellence (NICE) in the UK define an HRSD score of 8-13 as “mild” depression and a score of 23 or above as “very severe” depression. In addition, NICE defines a “clinically meaningful” outcome as a decrease of 3 points on the HRSD.
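To make the scoring concrete, here is a minimal sketch of the HRSD bands and the 3-point criterion described above. Only the numbers stated in the text (the 51-point maximum, the 8-13 “mild” band, the 23+ “very severe” band, and NICE’s 3-point threshold) are taken from the source; the function names and the labels for the in-between bands are illustrative assumptions.

```python
def hrsd_band(score: int) -> str:
    """Rough severity band for a Hamilton Rating Scale score (0-51)."""
    if not 0 <= score <= 51:
        raise ValueError("HRSD scores range from 0 to 51")
    if score <= 7:
        return "no depression"        # label assumed; below the 'mild' band
    if score <= 13:
        return "mild"                 # 8-13 per APA / NICE
    if score <= 22:
        return "moderate to severe"   # intermediate labels vary by guideline
    return "very severe"              # 23+ per APA / NICE

def clinically_meaningful(before: int, after: int) -> bool:
    """NICE's criterion: a decrease of at least 3 points."""
    return (before - after) >= 3

print(hrsd_band(25))                  # very severe
print(clinically_meaningful(25, 23))  # False: a 2-point drop falls short
```

Note what this means in practice: a patient can drop two points, land in exactly the same severity band, and still count toward a “statistically significant” average drug effect.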
The other problem is that while the FDA requires two clinical trials which show significant benefits, there may have been any number of other trials which showed no benefits. It’s like you and me betting on the result of a coin toss, with this catch: I get to toss the coin as many times as I want until it comes up the way I want it to, and then all the previous tosses don’t count.
This is a huge problem. Previously, Dr. Hans Melander and his colleagues reviewed all 42 clinical trials of SSRIs submitted to the Swedish Drug Authority as a basis for marketing approval. 21 of these studies showed a significant drug effect, and 21 did not. Of the 21 studies that did show significant effects, 19 were published in the medical literature – some more than once (!). Of the 21 studies that did not show a significant effect, only six saw the light of day.
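The coin-toss problem is easy to demonstrate. The toy simulation below (my own illustration, not part of Melander’s analysis) runs many trials of a drug that, by construction, has no real effect: at the conventional 5% significance level, roughly one trial in twenty will come up “significant” by chance alone, and selective publication then shows readers only those.

```python
import random

random.seed(1)  # fixed seed so the run is reproducible

def run_trial(alpha: float = 0.05) -> bool:
    """One trial of a drug with NO true effect. By the definition of the
    significance level, it reads as 'significant' with probability alpha."""
    return random.random() < alpha

trials = [run_trial() for _ in range(1000)]
significant = sum(trials)

# Selective publication: report every positive trial, bury the rest.
published = significant
print(f"{significant} of 1000 trials of a worthless drug were 'significant'")
print(f"a journal reader sees {published} positive studies and 0 negative ones")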
Kirsch used a Freedom of Information Act request to obtain data on all clinical trials submitted to the US Food and Drug Administration for the licensing of four commonly prescribed antidepressants: fluoxetine (trade name Prozac), venlafaxine (trade name Effexor), nefazodone (trade name Serzone), and paroxetine (trade name Paxil). Tens of millions of prescriptions are written for these drugs every year.
Kirsch and his colleagues analyzed the data and found that while there was a statistically significant drug effect, it was tiny – a reduction of less than two points on the HRSD. Is this reduction meaningful for the patient? Kirsch points out that a two-point reduction in the HRSD score can be achieved by no longer waking during the night, or no longer waking early in the morning, or being less fidgety during the interview, or by eating better. The patient may still be plagued by the same feelings of guilt, worthlessness, thoughts of suicide, etc.
N.B.: all but one of the studies analyzed by Kirsch et al. looked at patients labeled as having “very severe” depression, and omission of that one study did not change the results significantly.
Kramer’s opening salvo is to ask (rhetorically I assume):
“Could this be true? Could drugs that are ingested by one in 10 Americans each year, drugs that have changed the way that mental illness is treated, really be a hoax, a mistake or a concept gone wrong?”
That’s an interesting argument. Millions of Americans take these drugs, so they must be effective. By that logic, acupuncture, chiropractic, and homeopathy must also be effective remedies, since millions of Americans swear by those interventions as well. Hell, by that logic, horoscopes must be an effective means of divining the future, since millions of Americans believe in astrology. I shall return to this point later.
He goes on to assert: “Antidepressants work – ordinarily well, on a par with other medications doctors prescribe.” This assertion doesn’t seem very reassuring, given that the drug companies’ dirty little secret – well, one of their many dirty little secrets – is that many of their new “medicines” do nothing for most of the people who take them. “Yes, certain researchers have questioned their efficacy in particular areas — sometimes, I believe, on the basis of shaky data.” As if the burden of proof lies on anyone who questions the value of these drugs.
Kramer goes on to attack the reliability of the studies which formed the basis of Kirsch’s meta-analysis – an odd argument, given that these studies formed the basis for FDA approval for these drugs. If the studies cannot be believed, then the FDA should never have approved the use of these drugs in the first place.
With billions of dollars at stake, you know the drug companies are going to do everything they can to stack the deck in favor of their products, and yet their own data show these drugs are not significantly better than a placebo for treating major depression. Does anyone want to try to argue that these drugs are more effective than the drug company studies found?
“Often subjects who don’t really have depression are included — and (no surprise) weeks down the road they are not depressed. People may exaggerate their symptoms to get free care or incentive payments offered in trials. Other, perfectly honest subjects participate when they are at their worst and then spontaneously return to their usual, lower, level of depression.”
But all the subjects were diagnosed with depression before the study began. So he’s telling us that doctors cannot reliably distinguish between those who “really have depression” and those who do not. I’d say to whatever extent that’s true, that’s an argument for abandoning depression as a diagnostic category, not for ignoring negative clinical findings.
Kramer then turns his attention to a meta-analysis published by Jay C. Fournier and his colleagues in the 6 January 2010 issue of JAMA. The meta-analysis was concerned with the common practice of including a “placebo washout period” in many clinical trials. For the first two weeks of the study, all subjects are given a placebo, and those who improve are removed from the trial. Kramer calls this an attempt to “mute the placebo effect.” I call it cheating.
Fournier et al. confined their analysis to trials for which the authors provided the requisite original data, which comprised adult outpatients, which compared medication to placebo for at least six weeks, and which did not include a placebo washout period. That didn’t leave much. After combing through 2,164 studies, the authors found a whopping total of six which met their requirements.
And what did they conclude? They found antidepressants conferred no advantage over placebos for patients with mild to moderate depression. For patients with severe depression, only about one out of every four patients treated with antidepressants did significantly better (as defined by a decrease of at least three points on the HRSD) than those treated with a placebo. That’s a pretty meager result.
Kramer blasted the study and its “intentionally maximized placebo effect,” a position I find bizarre. The purpose of a clinical trial is, or rather ought to be, to determine whether a given drug produces substantial clinical benefits which are broadly applicable to the population that is likely to be taking a given drug – not to make the company’s product look good at all costs.
Kramer winds up his essay with some self-serving remarks about how cautious he is in prescribing psychoactive drugs for his patients. Even if he’s telling the truth, I’d say the subtle, nuanced approach he claims to favor bears no relation whatsoever to the hurried 15-minute “meds check” which increasingly is becoming the norm. All too often, the docs who prescribe these drugs know less about these patients than you could learn sitting next to them on a long airplane flight.
In closing, I’d like to return to Kramer’s initial argument, in which he averred that one out of ten Americans are prescribed antidepressants, and doctors cannot possibly be wrong about so many cases. I would turn that argument around: the doctors cannot possibly be right about so many cases. If such an entity as “clinical depression” exists (which I doubt), it cannot possibly be as common as the medical profession claims. Our Paleolithic ancestors trekked hundreds of miles in search of game, ran down woolly mammoths, and battled giant cave bears – not to mention each other. They didn’t lie down and say “I’m too depressed to go on” – and if any of their contemporaries did, their genes got weeded out of the gene pool. After the Neolithic Revolution, our ancestors were, for the most part, peasant farmers, doing back-breaking labor to survive – until two generations ago in my case, and I think I’m fairly typical in that regard. We were meant to be strong and healthy.