It amazes me how many research scientists seem to have flunked statistics. Or ought to have. Me, I majored in the liberal arts. But at Reed, even those who took Science for Poets would be required to rewrite some of the scientific papers I have read on the subject of antidepressants.
So the vocabulary terms for the week are observer bias and confounding variables. No worries -- lots of pictures.
Clinical Experience in Defense of Prozac
Let's say you are a doctor treating 100 patients with severe depression. You give them all antidepressants. It seems irresponsible not to, doesn't it? Thirty of them get better. Fifteen do not make a follow-up appointment. You switch the fifty-five who are still trying to another antidepressant. Another fifteen get better. And another fifteen do not make a follow-up appointment.
Over the course of a year, you get up to fifty whose depression is in remission and ten who are still struggling. You don't know what happened with the forty who are no longer seeing you. They couldn't afford treatment; they didn't like your face; they couldn't find parking; they got worse on your medication. You have no idea. But you have fifty patients who think you saved their lives. You feel pretty good about yourself, don't you?
Now the FDA comes along and tells you that you have to tell your patients that antidepressants may cause them to feel suicidal.
Following the FDA requirement of the black box warning, lots of news stories appeared with quotes like this one: "I've seen the SSRIs help people immensely," said Helen Stavros, a clinical social worker... "I've never seen anyone become suicidal as a result of being on antidepressants." That is clinical experience talking.
Another word for clinical experience is anecdote. They call it anecdote when patients are reporting and clinical experience when clinicians are reporting. But it is the same story.
Commentary on Clinical Experience
Individual clinicians, in particular, are hindered by observer bias. They do know, theoretically, that of those fifty people in remission, forty-five might have gotten better if they had been taking tic tacs (the placebo effect). They do know, theoretically, that a depressive episode is self-limiting and usually goes away with no intervention whatsoever in six to nine months. They do know, theoretically, that one cannot draw conclusions about those who drop out of the experiment. And yet, they will ascribe an 80% success rate to their antidepressants in this hypothetical, because they prescribed them, expecting their patients to get better, and because the patients they still see did.
How many of their patients stopped seeing them because they got worse on the treatment? They have no idea. They do not observe them.
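The arithmetic of this hypothetical can be made explicit. Here is a minimal Python sketch -- every number in it is invented, taken from the scenario above plus the 90% placebo figure cited below:

```python
# Toy model of observer bias, using the invented numbers from the
# hypothetical above. Of 100 patients: 50 end the year in remission,
# 10 are still struggling, 40 dropped out and are never observed again.

patients = 100
remitted = 50
still_struggling = 10
dropped_out = patients - remitted - still_struggling  # 40, outcomes unknown

# What the clinician sees: success among patients still in the practice.
observed = remitted + still_struggling    # 60
apparent_success = remitted / observed    # ~0.83 -- the "80% success rate"

# What the research (cited below) suggests: ~90% of that response is placebo.
placebo_share = 0.90
drug_attributable = remitted * (1 - placebo_share)  # ~5 of the 50 remissions

print(f"apparent success rate: {apparent_success:.0%}")
print(f"remissions attributable to the drug itself: ~{drug_attributable:.0f} of {remitted}")
```

The 40 dropouts never enter the clinician's denominator at all -- which is the whole problem.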
But it's not just clinical social workers like Stavros who get blindsided by observer bias. A friend of mine heads the mood disorder section of the psychiatry department of a reputable research facility. He tells me that he asks his colleagues why they get so agitated about non-compliance, when research shows that 90% of the efficacy rate for antidepressants is due to the placebo effect. They respond with a blank stare.
Anecdotes have their place in research. Their place is to suggest areas of investigation. Is Stavros' six years of experience universal? Is it even typical? The FDA started looking at the suicide issue because of testimony by parents whose children had a very different experience of antidepressants -- anecdotes, very powerful anecdotes, that led the FDA to examine the experience of 77,000 patients in clinical studies, including the ones that the drug companies did not publish.
Among the many ways used to establish cause and effect, anecdote is the weakest. Well-constructed clinical studies have the highest level of reliability. But are 77,000 subjects enough?
Epidemiological Defense of Prozac

After the FDA mandated the black box warning, population-based (epidemiological) studies that crunched even bigger numbers were marshaled in defense of antidepressants. One type of study would chart suicide rates for ten or twenty years before and ten years after the introduction of Prozac and other SSRIs. In nation after nation, suicide rates rose in the 1980s until the introduction of Prozac, and then fell in the 1990s as the sales of these new antidepressants rose.

Dr. Julio Licinio, co-author of the much reported Depression, Antidepressants and Suicidality: A Critical Appraisal, said, "I don't see how these drugs could be causing suicide if these rates [of suicide] are actually going down."
Commentary on Epidemiological Studies
Defenders of antidepressants don't publish these charts anymore. The charts do show that suicide rates fell steadily for a decade after the introduction of SSRIs. But that trend ended at the start of the 21st century. Suicide rates bottomed out in 2000, and started climbing again in 2001, before the black box warning, while antidepressant sales were still skyrocketing. Suicide rates have climbed steadily since 2001, especially in people of working age. Meanwhile, more people are taking antidepressants than ever.
You see, if you chart suicide rates from the 1920s to the present day, and then chart business conditions and the unemployment rate over the same period, you establish a 90-year correlation between suicide rates and the state of the economy.
Population-based studies run afoul of confounding variables. If two things happen at the same time (antidepressant sales go up; suicide rate goes down), that does not mean that one caused the other. [For science geeks, correlation ≠ causation.] There might be a third issue responsible for the data.
The unemployment rate is the confounding variable here, or at least one of them. It turns out that SSRIs came along just as the boom years began. Their apparent success rode the business cycle up, and now rides it down.
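The confounding pattern is easy to demonstrate with made-up numbers. In the toy series below -- invented for illustration, not real data -- suicide rates and SSRI sales never influence each other; both simply track unemployment. They still correlate almost perfectly:

```python
# Toy illustration of a confounding variable: two series that never
# influence each other can correlate strongly if both track a third.
# All numbers below are invented for illustration.

years        = list(range(1990, 2000))
unemployment = [7.5, 7.0, 6.5, 6.0, 5.5, 5.2, 4.9, 4.6, 4.3, 4.0]  # boom years

# Suicide rate tracks the economy; SSRI sales grow as unemployment falls.
suicide_rate = [u * 1.8 + 0.5 for u in unemployment]   # driven by the economy
ssri_sales   = [(8.0 - u) * 10 for u in unemployment]  # driven by the same boom

def pearson(xs, ys):
    """Pearson correlation coefficient, computed from scratch."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Sales up, suicides down: a near-perfect negative correlation,
# with zero causation between the two series.
print(pearson(ssri_sales, suicide_rate))
```

In this noise-free toy the correlation comes out at exactly -1; real data would be messier, but the logic of the trap is the same.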
Observational Cohort Study
An interesting twist on population-based studies is one from Denmark that bored in on the specific population that had purchased antidepressants in 1995-99. Those who refilled their prescription had a lower suicide rate than those who filled it only once. And the more refills, the lower the suicide rate.
The authors published this study under the title Do Antidepressants Prevent Suicide? They conclude, "Continued antidepressant treatment... is found to be associated with a reduced risk of suicide." They aren't going to trip on that correlation ≠ causation thing. So they leave it to the reader to answer the question in the title.
Well, don't answer too fast. There was no effort to determine why those who quit did so. Admittedly, with almost half a million patients, that would take a Herculean effort. But here your confounding variable might be adverse events. If the reason people failed to refill their prescriptions is that something bad happened when they took the medication, then we are comparing two different groups, those who can tolerate antidepressants and those who, for some reason, cannot. These two groups might naturally have different suicide rates.
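A toy sketch makes the selection story concrete. Every number below is invented; the only assumption built in is that the drug has zero effect on suicide, and that people who have adverse events stop refilling and carry a higher baseline risk:

```python
# Toy model of the Danish refill comparison. All numbers are invented.
# Assume the drug itself has ZERO effect on suicide risk. The only thing
# that varies is tolerability: people with adverse events stop refilling,
# and that group carries a higher baseline suicide risk.

# refills -> (group size, baseline annual suicide rate) -- hypothetical
groups = {
    1: (100_000, 0.0020),  # stopped after one fill: most adverse events
    2: (150_000, 0.0012),
    4: (150_000, 0.0008),
    8: (100_000, 0.0005),  # long-term tolerators: lowest baseline risk
}

rates = []
for refills, (n, rate) in sorted(groups.items()):
    rates.append(rate)
    print(f"{refills} refill(s): {n:>7,} people, suicide rate {rate:.2%}")

# The more refills, the lower the suicide rate -- a clean "dose-response"
# gradient produced entirely by who stays on the drug, not by the drug.
assert all(a > b for a, b in zip(rates, rates[1:]))
```

The gradient looks like protection, but in this toy world it is nothing but the sorting of patients into tolerators and non-tolerators.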
So there is your vocabulary lesson for the week, observer bias and confounding variables. Put them together and you get a lot of research driven by wishful thinking.
Coming up, we look more closely at that Danish confounding variable, adverse events.
photo of diploma by author
flair from Facebook.com
clip art from Microsoft Office