One of the most frustrating things I face, as the creator of an evidence-based, suicide-focused treatment that has been proven to work with the largest population of suicidal people (i.e., those with suicidal thoughts), is the rising suicide rate that results from suicide risk not being properly treated. Even with a handful of suicide-focused treatments (such as CAMS) backed by randomized controlled trials (RCTs) providing convincing data, mental health providers still seem to ignore proven suicide-focused treatments and fall back on “traditional” treatments, such as an over-reliance on medication and brief non-suicide-focused inpatient hospitalizations, that have limited to no empirical evidence supporting their use for suicide risk. In fact, there are correlational data showing that certain medications, and the post-discharge period following inpatient stays, are significantly associated with increases in suicide risk.
Given that effective treatments are available, why would a mental health professional choose not to use them? After all, we are talking about potential life-or-death scenarios when it comes to seeing suicidal patients. One of the reasons I have seen over the years is the unfortunate spread of misinformation about the effectiveness of our suicide-focused treatments.
For example, I was sitting in the outer office of a senior Pentagon official waiting for a meeting, along with some Department of Defense (DoD) colleagues who work in military suicide prevention. We were chatting about evidence-based interventions for suicide and one colleague said, “…well, based on your Army trial, CAMS doesn’t work so are you still going to talk about it in our meeting?”
This was not the first time that the CAMS Framework has been summarily dismissed because of a perceived lack of empirical support. In such cases, I know that this colleague and similar “naysayers” have not actually read through the thirty years of rigorous clinical research that provides overwhelming empirical support for CAMS and the use of the SSF. The data on the effectiveness of CAMS appear in eight published correlational/open trials and five published RCTs of CAMS (the highest level of scientific rigor, which speaks to the causal impact of an intervention). If this colleague had actually read the extant CAMS/SSF research, he would have known from these rigorous clinical trials and their replications that:
- CAMS significantly reduces suicidal ideation, as well as overall symptom distress, in 6-8 sessions at 12-month follow-up compared to treatment as usual (TAU) care.
- CAMS significantly decreases hopelessness while it increases hope.
- CAMS also significantly decreases depression.
- CAMS also significantly reduces visits to Emergency Departments in subsamples of suicidal patients relative to TAU care.
Additionally, CAMS has been proven cost-effective, patients prefer it to usual care, and clinicians prefer CAMS training in comparison to other trainings.
As for the summarily dismissed Army RCT? CAMS dropped suicidal ideation like a rock in the first months of care and sustained that reduction 12 months later. True, treatment as usual “caught up” at 6 months in reducing ideation, but CAMS achieved this important outcome months earlier than TAU. Moreover, the effect sizes (a measure of treatment impact) across all outcome variables were robust for CAMS, though effect sizes for TAU were strong as well.
So, what exactly is the “problem” with the Army study, and why has this colleague dismissed it so quickly? In this particular RCT, we had a particularly good control group of competent clinicians who interacted well with their patients. The large between-group effects we were anticipating therefore did not materialize in this trial, because everyone in the study got better. But there is more than one way to interpret an RCT.
Indeed, based on this Army RCT, we published a paper of “moderator analyses” from this study in which 6 of 8 significant findings favored CAMS over TAU. The most robust moderator findings? CAMS increased resiliency and decreased overall symptom distress for one subsample of Soldiers, while another subsample that received CAMS had significantly fewer emergency department visits than TAU patients. In another secondary analysis, led by Dr. Ron Kessler at Harvard, a “machine learning” methodology was applied to all the data from the Army RCT; it impressively “predicted,” through computer-generated algorithms, that 78% of Soldiers in the sample would have benefited more from receiving CAMS to effectively decrease suicidal ideation, whereas 22% would have benefited more from TAU care.
So, while some naysayers point to this study as a failure, my team found a great deal of valuable data to further help reduce suicide risk through effective clinical care. And an unpublished master’s thesis project using data from this study showed that CAMS was significantly more cost-effective than TAU care.
In another example of bias and misinformation, a journal reviewer caustically commented that CAMS did not decrease suicidal behaviors in an RCT of suicidal college students and “only” reduced suicidal ideation. And this is a bad thing?
Upon reflection, this is a very peculiar critique. Based on data from the federal government, we know that 10,600,000 American adults struggle with serious suicidal ideation. Add kids and teens to this number and we are talking about some 13 million citizens who are “only” struggling with suicidal ideation! (For more information on the suicide prevention field’s tendency to trivialize suicidal ideation, see the editorial “Reflections on Suicidal Ideation” in the journal Crisis.)
This is an odd bias of many researchers, clinicians, and even policymakers who overly emphasize suicidal behaviors over ideation, when the ideation population is 7.6 times greater than the suicide attempter population and 225 times greater than the population who die by suicide. For my part, I will never lament that CAMS reliably reduces suicidal ideation.
It is unfortunate that some misinformed colleagues selectively read the treatment research literature to serve a particular bias. Such biases can stand in the way of progress; within their harsh critiques at meetings and in their journal reviews lies an implicit default position of supporting and promoting traditional interventions that people with “lived experience” (those who have previously been suicidal) do not like and that the clinical trial research literature in suicide does not support.
For my part, I will continue to stand by and defend the facts, in the form of replicated clinical trial data. And I will continue to question the default position of those who presume that existing practices must be better when there is little to no evidence for that assumption, because suicidal lives are in the balance.