March 2025 // Psychological Therapy Research

The Truth About Therapy (In)Effectiveness: What They Don’t Tell You

Proving a therapy works is important, but not all research is as reliable as it seems. Ioannidis (2005) argued that many published findings are misleading, and Chalmers and Glasziou (2009) estimated that as much as 85% of investment in biomedical research is avoidably wasted through poorly chosen questions, weak design, and incomplete reporting. Despite efforts to improve standards, therapy research remains prone to these problems, making it essential to evaluate claims of effectiveness critically. But how can we distinguish genuinely effective treatments from those that merely seem to work?

1. Researcher Bias and the Power of Belief

A strong allegiance to a therapy can unintentionally influence study results. When researchers or therapists strongly believe in a treatment, they may overemphasise its benefits in communication, influencing patients’ responses.

Example: A therapist convinced their method is the best may encourage positive feedback, even when results are mixed.

2. The Placebo Effect in Therapy

Expectation can lead to perceived improvement, regardless of a therapy’s actual impact. If patients believe they are receiving an effective treatment, they may report better outcomes.

Example: Describing a therapy as “groundbreaking” can create an expectation of success, influencing patients’ self-reported progress.

3. Weaknesses in Randomised Trials

Randomised controlled trials (RCTs) have become the gold standard for proving that therapies for mental health problems are effective, and have even been regarded as ‘objective scientific methodology’. However, RCTs have flaws that can distort results:

  • Unblinded Assessments: If researchers know which patients received treatment, they may (unintentionally) rate improvements more favourably.
  • Ignoring Dropouts: Excluding patients who quit therapy may make results seem more positive than they truly are.

Example: If many patients drop out due to lack of progress but aren’t counted, the therapy appears more effective than it is.
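A toy simulation makes the dropout problem concrete. All numbers below are hypothetical: we assume patients who are getting worse are far more likely to quit, then compare a per-protocol analysis (dropouts excluded) with an intention-to-treat analysis (dropouts kept in, scored as no improvement):

```python
import random

random.seed(42)

# Hypothetical trial: 100 patients, modest true average improvement.
N = 100
improvements = [random.gauss(1.0, 3.0) for _ in range(N)]

# Assumption: patients who are getting worse are very likely to drop out.
dropped = [imp < -1.0 and random.random() < 0.8 for imp in improvements]

# Per-protocol analysis: dropouts are simply excluded.
completers = [imp for imp, d in zip(improvements, dropped) if not d]
per_protocol = sum(completers) / len(completers)

# Intention-to-treat analysis: dropouts stay in, scored as "no improvement".
itt = [0.0 if d else imp for imp, d in zip(improvements, dropped)]
intention_to_treat = sum(itt) / N

print(f"Per-protocol mean improvement:       {per_protocol:.2f}")
print(f"Intention-to-treat mean improvement: {intention_to_treat:.2f}")
```

Because the excluded patients are disproportionately the ones who fared worst, the per-protocol figure comes out higher; this is why reporting guidelines such as CONSORT (Schulz et al., 2010) emphasise analysing all randomised patients.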

4. The Choice of Control Groups

A therapy’s effectiveness can depend on what it is compared to.

  • Waiting List Controls: Patients who receive no intervention naturally fare worse, making the tested therapy look more effective.
  • Avoiding Established Comparisons: Not comparing a therapy to proven treatments keeps its effectiveness uncertain.

Example: A therapy may seem impressive when compared to doing nothing, but less so against an existing treatment.
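A small sketch shows how much the choice of comparator matters. The arm means below are invented for illustration: a little spontaneous improvement on the waiting list, a larger improvement under an established treatment, and the new therapy somewhere in between:

```python
import random

random.seed(0)

def arm(mean_change, n=50):
    """Simulated symptom improvement for one trial arm (hypothetical numbers)."""
    return [random.gauss(mean_change, 2.0) for _ in range(n)]

# Assumed true average improvements per arm (illustrative only).
waiting_list = arm(0.5)
established = arm(3.0)
new_therapy = arm(2.0)

def mean(xs):
    return sum(xs) / len(xs)

vs_waiting = mean(new_therapy) - mean(waiting_list)
vs_established = mean(new_therapy) - mean(established)

print(f"Advantage over waiting list:        {vs_waiting:+.2f}")
print(f"Advantage over established therapy: {vs_established:+.2f}")
```

The same therapy looks clearly beneficial against a waiting list and unimpressive against an active comparator, even though its true effect never changed.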

5. Selective Outcome Reporting

Studies track multiple success measures, but researchers may highlight only the most favourable results while ignoring others.

  • Choosing the Best Data: If a therapy shows minor benefits in some areas but none in others, the strongest results may be emphasised.
  • Suppressing Negative Findings: Unsuccessful results may be omitted or labelled “secondary.”

Example: A therapy that improves sleep but not anxiety may be marketed based only on its effect on sleep.
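The statistical mechanism behind cherry-picking outcomes can be sketched in a few lines. Assume a therapy with zero true effect measured on ten outcomes, so every observed "effect" is pure sampling noise; reporting only the best-looking outcome still produces an apparently positive result:

```python
import random

random.seed(7)

# Hypothetical trial of an ineffective therapy measured on 10 outcomes:
# every observed "effect" is pure sampling noise around zero.
observed_effects = [random.gauss(0.0, 0.3) for _ in range(10)]

average_effect = sum(observed_effects) / len(observed_effects)
reported_effect = max(observed_effects)  # only the best outcome is highlighted

print(f"Average effect across all outcomes:     {average_effect:+.2f}")
print(f"Best single outcome (the one reported): {reported_effect:+.2f}")
```

The maximum of several noisy measurements is biased upwards by construction, which is why pre-registering a single primary outcome matters.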

6. Publication Bias

Negative or inconclusive studies often go unpublished, leading to a distorted picture of a therapy’s effectiveness.

Example: If ten studies are conducted but only the three positive ones get published, the therapy may seem more effective than it is.
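The ten-studies scenario above can be simulated directly. Assuming a weak true effect and ten small studies (all numbers hypothetical), publishing only the three most positive results inflates the apparent effect:

```python
import random

random.seed(1)

def study_mean(true_effect=0.2, n=30):
    """Mean observed improvement in one small study (hypothetical numbers)."""
    return sum(random.gauss(true_effect, 1.0) for _ in range(n)) / n

results = [study_mean() for _ in range(10)]

# Suppose only the three most positive studies make it into print.
published = sorted(results)[-3:]

all_mean = sum(results) / len(results)
pub_mean = sum(published) / len(published)

print(f"Mean effect across all 10 studies:    {all_mean:.2f}")
print(f"Mean effect across published studies: {pub_mean:.2f}")
```

A reader who only ever sees the published studies has no way to tell that the average result was far less impressive, which is the rationale for trial registries and meta-analytic checks for missing studies.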

Why This Matters

Overstating a therapy’s effectiveness can mislead patients, waste resources, and hinder progress in mental health treatment. Scrutinising research claims and demanding transparency ensures therapies are evaluated fairly.


References:

Chalmers, I., & Glasziou, P. (2009). Avoidable waste in the production and reporting of research evidence. The Lancet, 374(9683), 86-89.

Cuijpers, P., & Cristea, I. A. (2016). How to prove that your therapy is effective, even when it is not: A guideline. Epidemiology and Psychiatric Sciences, 25(5), 428-435.

Ioannidis, J. P. A. (2005). Why most published research findings are false. PLoS Medicine, 2(8), e124.

Schulz, K. F., Altman, D. G., & Moher, D. (2010). CONSORT 2010 statement: Updated guidelines for reporting parallel group randomised trials. BMJ, 340, c332.