Are Americans Becoming More Depressed?
> The madness of depression is, generally speaking, the antithesis of violence. It is a storm indeed, but a storm of murk. Soon evident are the slowed-down responses, near paralysis, psychic energy throttled back close to zero. Ultimately, the body is affected and feels sapped, drained.

*William Styron*
Are Americans becoming more depressed? To be more specific, has the prevalence of major depressive disorder increased over the past few decades? The answer you typically hear is yes.
There are a number of related and interesting questions here:
- What predictions can we make about the growth of mental health expenditure?
- Are Americans over-diagnosed with mental health disorders?
- Do mental health disorders still carry too much stigma?
- Are humans able to happily adapt to modern life? Or were we better off as hunter-gatherers on the savannah?1
Looking at depression trends is relevant to each. But before getting too distracted, here's a simple question to start with: did the prevalence of major depressive disorder in America rise or fall from the 90s to early 2000s?
One simple way to answer questions of this form is to look at a snapshot of survey responses and compare generations. If you did that and you read the following:
> Members of Generation Z, born between the mid-1990s and the early 2000s, had an overall loneliness score of 48.3. Millennials, just a little bit older, scored 45.3. By comparison, baby boomers scored 42.4. The Greatest Generation, people ages 72 and above, had a score of 38.6 on the loneliness scale.2
Then you'd conclude that Americans are becoming lonelier. This would be a mistake. Someone's loneliness may change throughout their life; being a teenager may just be a lonely affair. What one wants is to measure a population across time, not at a snapshot. Ideally, you'd give a representative sample of the population the same survey repeatedly over the years. Unfortunately, that kind of data is rather elusive.
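To see why a single snapshot misleads, here's a toy simulation. It assumes a hypothetical model (my invention, not from any survey) in which loneliness depends only on a person's current age and nothing about the population changes over time:

```python
import random

random.seed(0)

# Hypothetical model: loneliness depends only on current age -- young
# people score higher -- and there is NO change across birth cohorts
# or calendar time. The numbers are invented for illustration.
def loneliness(age):
    return 50 - 0.15 * age + random.gauss(0, 2)

# A single cross-sectional snapshot taken "today":
snapshot = {
    "Gen Z (age ~20)": sum(loneliness(20) for _ in range(1000)) / 1000,
    "Millennials (age ~35)": sum(loneliness(35) for _ in range(1000)) / 1000,
    "Boomers (age ~60)": sum(loneliness(60) for _ in range(1000)) / 1000,
}
for group, score in snapshot.items():
    print(f"{group}: {score:.1f}")

# Younger generations score higher in the snapshot -- yet by
# construction the population isn't changing at all over time.
# The gradient is a pure age effect, not a trend.
```

A snapshot can't distinguish "each generation is lonelier than the last" from "people of every generation are lonelier when young."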
The first thing I found was the frequently cited study, The Epidemiology of Major Depressive Disorder. This study provides a history of the National Comorbidity Survey (NCS) conducted in 1990-1992 and its follow-up a decade later. The prevalence estimate from the first NCS was high: 8.6% of respondents said that they had experienced depression over the past year. However, there are two reasons to think that this estimate is off: first, the survey excluded people over 54, who have lower rates of depression, and, second, it didn't include a clinical significance criterion. The reason is that it used DSM-III, and clinical significance wasn't emphasized until DSM-IV came along. What does that mean? Basically, DSM-IV said that in order for someone to qualify as having major depressive disorder, they need to have at least 5 of the following 9 features, including at least one of the first two:
- depressed mood
- anhedonia
- change in appetite or weight
- sleep problems
- psychomotor problems
- fatigue or loss of energy
- excessive self-reproach or guilt
- impaired decision-making
- thoughts of death
Yes, that's 227 different configurations. At any rate, the first NCS survey didn't use a clinical significance criterion. The National Comorbidity Survey Replication, NCS-R, did, and found that 6.6% of adults suffered from depression over the last 12 months. So, depression fell from the 90s to the 2000s?
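The configuration count is a small combinatorics exercise: count subsets of 5 or more symptoms out of 9, then subtract the subsets that skip both core symptoms (depressed mood and anhedonia):

```python
from math import comb

# DSM-IV major depression: at least 5 of the 9 symptoms, and at least
# one of them must be depressed mood or anhedonia.

total = sum(comb(9, k) for k in range(5, 10))        # any >=5 of the 9
without_core = sum(comb(7, k) for k in range(5, 8))  # >=5 drawn only from the 7 non-core symptoms
qualifying = total - without_core

print(total, without_core, qualifying)  # 256 29 227
```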
Changes in the prevalence of major depression and comorbid substance use disorders in the United States between 1991–1992 and 2001–2002 throws out the NCS and NCS-R results:
> [B]ecause the two surveys used different diagnostic criteria (DSM-III-R and DSM-IV, respectively) and because of extensive differences in how the surveys assessed major depression, the two studies' prevalence rates are not comparable, and thus change over time cannot be reliably assessed.
One can estimate the impact of adding a clinical significance criterion to the NCS, which is what William E. Narrow and colleagues did in Revised Prevalence Estimates of Mental Disorders in the United States. However, the response Lowered Estimates - but of what? is pretty damning of the attempt. Roughly, the respondents argue that Narrow and colleagues aren't measuring depression or other mental disorders at all:
> [T]he authors offer no conceptual argument that the addition of their CS represents a valid redefinition of disorder. They note that a CS appears in the DSM-IV criteria sets. However, the DSM-IV's criterion requires only significant distress or impairment. The authors' more demanding CS is arbitrary as a requirement for disorder. First, service contact is conceptually unrelated to disorder status; people commonly seek treatment for non-disorders (the DSM's V codes), and many disorders go untreated. This is why psychiatric epidemiologists turned to community studies of true prevalence of disorder measured independently of service use. Second, symptoms interfering "a lot" with life is an inappropriate criterion for disorder because many true disorders, including mild or moderate ones, may not interfere with life a lot.
Seems persuasive.
Changes in the prevalence of major depression and comorbid substance use disorders in the United States between 1991–1992 and 2001–2002 aims to get around this by relying on two surveys on alcohol use that both include the DSM-IV clinical significance criterion. Another advantage these surveys have over the NCS and NCS-R is that they sample from populations with the same age range. Great. What did they find?
> The prevalence of past-year major depressive episode in the total samples increased significantly from 3.33% in 1991–1992 to 7.06% in 2001–2002.
So, unlike the NCS and NCS-R comparison, we see a significant increase. This increase can't be entirely explained by a rise in substance use disorders, since the rate rose among respondents without substance use disorders as well: "the prevalence of major depressive episode increased from 2.76% in 1991–1992 to 6.23% in 2001–2002."
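For a sense of why a jump from 3.33% to 7.06% registers as significant, here's a back-of-the-envelope two-proportion z-test. The sample sizes are illustrative assumptions (both surveys interviewed tens of thousands of adults); the paper's actual analysis used survey-weighted methods, which this does not reproduce:

```python
from math import sqrt, erf

# Illustrative two-proportion z-test; sample sizes are assumed,
# not taken from the papers.
n1, p1 = 42000, 0.0333   # 1991-1992 survey (assumed n)
n2, p2 = 43000, 0.0706   # 2001-2002 survey (assumed n)

pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
z = (p2 - p1) / se
# Two-sided p-value from the normal tail.
p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

print(f"z = {z:.1f}, p = {p_value:.3g}")
# With samples this large, even far smaller gaps would be significant.
```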
The main concern with this analysis is that it uses two different surveys, and just those two surveys. The surveys use the same significance criterion for depression but differ in other ways. The most significant difference is that in the first survey all respondents were asked whether they had depressive symptoms, while in the second they were only asked if they passed an initial screening round. There's an obvious worry that this contaminated the results. To address it, the authors note that both surveys assessed lifetime depression in the same way and found an increase there too (9.86% in the NLAES compared with 13.23% in the NESARC). It's unclear to me why that removes the worry. Another puzzling development: they don't find a difference between the two surveys in the rates of low mood and anhedonia reported in the screening questions. That's surprising, since you'd expect an increase there if depression increased. They ask, "Did the U.S. population become more willing in general between 1991–1992 and 2001–2002 to report psychiatric symptoms?" Maybe.
Challenging the myth of an "epidemic" of common mental disorders: trends in the global prevalence of anxiety and depression between 1990 and 2010 looked at a massive number of studies for the Global Burden of Disease project, excluded tens of thousands, and landed on keeping 116 studies on depression. A relatively complex Bayesian meta-regression model is constructed to fill the gaps. As the title suggests, they don't find an increase.
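The GBD model (DisMod-MR) is far more elaborate, but the basic idea of pooling prevalence estimates across studies of different sizes can be sketched with simple inverse-variance weighting on the log-odds scale. The study data below are invented purely for illustration:

```python
from math import sqrt, log, exp

# Minimal inverse-variance pooling sketch -- NOT the GBD method,
# just the core idea of weighting studies by precision.
studies = [  # (prevalence, sample size) -- hypothetical
    (0.045, 5000),
    (0.052, 1200),
    (0.048, 9000),
]

num = den = 0.0
for p, n in studies:
    logit = log(p / (1 - p))
    var = 1 / (n * p * (1 - p))   # large-sample variance of the logit
    num += logit / var            # precision-weighted sum
    den += 1 / var                # total precision

pooled = 1 / (1 + exp(-num / den))
print(f"pooled prevalence: {pooled:.3%}")
```

Larger studies get more weight, so the pooled estimate lands closest to the biggest study; the real model additionally borrows strength across countries, ages, and years to fill in the gaps the paper describes.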
My sense is that this is some of the best work in the field. They throw out studies that should be thrown out (studies that only use lifetime prevalence or don't measure clinical significance), and they've carefully constructed a model through numerous iterations.
With additional work from the Global Burden of Disease project, we find that depression rates moved from 4.68% in 1990 to 4.84% in 2007.3 Note that this is prevalence at a point in time, not 12-month prevalence, unlike the previous studies. Regardless, it's evidence that the answer to the question that began this inquiry is largely: no, Americans have not been becoming more depressed. At least over the relevant timeframe.
But, as we've seen, this work is messy, so I wouldn't be that surprised if I change my mind.