Concerned About Unconventional Mental Health Interventions?
Alternative Psychotherapies: Evaluating Unconventional Mental Health Treatments

Saturday, March 19, 2011

Who Gave David Brooks a Psychology Book? A Threnody

Stand up, the person who encouraged the columnist David Brooks to quote bits of psychological research! I have a rod in pickle for you, or at least I wish I did. Somebody deserves a few of the juiciest for whatever they did that led to www.brooks.blogs.nytimes.com and its repeated visits to the cherry orchard.

Brooks’ approach is to choose a study, report its outcome briefly, and draw a conclusion or two from it. In his column today, he notes magisterially that “one study is never dispositive”-- but that’s not much help when he’s just given highly simplified outcomes for a series of unrelated single studies.

Psychologists as a group may be pleased that Brooks is hanging around the journal corner-- I heard this referred to several times at the recent Eastern Psychological Association conference. As for me, I’m concerned that Brooks’ remarks suggest to the public that reading the abstract of a journal article puts one in a position to state an important conclusion. According to today’s column, his new blog is intended to celebrate “odd and brilliant studies from researchers around the world”. This would be fine if the emphasis were on the oddness (“isn’t this weird? Wonder what anyone else has found about it?”), but to combine the odd and the brilliant seems to isolate these findings from the rest of the extensive psychological research literature.

But these are peer-reviewed journals he’s quoting, right? If so, what can be wrong? I’m not saying that anything is wrong with the studies themselves (although the peer review process is certainly no guarantee that everything in a journal is unimpeachable). What I am saying is that when you’re dealing with the complexities of research on human beings, you need more than one study to create reasonable confidence in the results. In guidelines about evidence-based practice, it’s suggested that there must be an independent replication of a study-- by researchers other than the original authors-- before we can conclude that there is clear research support for a practice. The same idea can well be applied to other types of psychological research.
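
(A back-of-the-envelope way to see why independent replication matters, sketched in Python under the conventional assumption of a 5% false-positive rate: a truly ineffective treatment will still come up “significant” in about one single study in twenty, but the chance that two independent studies both mislead us this way is far smaller.)

    # Why independent replication builds confidence: with a conventional
    # 5% false-positive rate, a treatment that actually does nothing
    # still "works" in about 1 of every 20 single studies.
    alpha = 0.05

    p_one_study_misleads = alpha           # 5%
    p_two_studies_mislead = alpha * alpha  # 0.25%, assuming the studies are independent

    print(f"Chance a single study misleads us: {p_one_study_misleads:.1%}")
    print(f"Chance two independent studies both mislead us: {p_two_studies_mislead:.2%}")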

Because of concerns about chance outcomes in single studies, it’s common nowadays to carry out procedures that pull together the results of multiple studies. Systematic research syntheses examine the literature on a topic and consider both the quality of the studies and their outcomes. In clinical areas, world-wide projects use trained volunteers to examine published research on treatments and report the evidence about their effectiveness. The Cochrane Collaboration does this in medicine and the Campbell Collaboration in psychology and education. In addition to this approach, it’s possible to use statistical meta-analyses to combine the data from a number of studies and draw a conclusion about the results of this large data set.
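
(For readers who like to see the machinery, here is a minimal sketch in Python of the inverse-variance pooling behind a simple fixed-effect meta-analysis. The effect sizes and standard errors are invented for illustration and do not come from any real studies.)

    import math

    # Invented effect sizes (standardized mean differences) and standard
    # errors for five hypothetical studies of the same treatment.
    effects  = [0.42, 0.15, 0.58, 0.05, 0.30]
    std_errs = [0.20, 0.15, 0.25, 0.10, 0.18]

    # Fixed-effect meta-analysis: weight each study by the inverse of its
    # variance, so that more precise studies count for more.
    weights   = [1.0 / se**2 for se in std_errs]
    pooled    = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    pooled_se = math.sqrt(1.0 / sum(weights))

    # An approximate 95% confidence interval for the pooled estimate.
    low, high = pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se
    print(f"Pooled effect: {pooled:.2f} (95% CI {low:.2f} to {high:.2f})")

The point of the weighting is that no single study, however striking, gets to speak for the whole literature.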

The very existence of systematic research syntheses and of meta-analyses tells us that cherry-picking single studies is not regarded as a suitable way to learn from psychological (or other) research.

There’s more to this issue, though. Whether a given study gives us any meaningful information is to a considerable extent a matter of the study’s design. Clinical topics give us some clear examples of the complexity of this issue. For instance, in a study of the effectiveness of a psychotherapy, we need to have a comparison group who do not receive the treatment in question. (If we just looked at people before and after treatment, we would have no idea whether any changes were caused by the treatment or by multiple other factors.) But what should be the nature of the comparison group? Are they people who get no treatment at all? Are they people who are put on a waiting list and will receive the treatment later on? Are they people who get the current “standard” treatment for their problem? Do you set up different comparison groups of each of these types?

The outcome of psychotherapy research can depend on which of these designs is chosen. For example, there would probably be a bigger difference between a treatment group and a comparison group who received no treatment at all than between a treatment group and a comparison group who received the current standard treatment. The same issue arises in other areas of research, and that’s the point: even when no treatments are at stake, the way a comparison is built into a study can make a great deal of difference to the outcome. That means that anyone interpreting a research report needs to consider the extent to which the design makes a conclusion possible. So far, very little of that has appeared in Brooks’ blog, although I have to give him credit for knowing that correlational studies can’t tell us about causation.
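
(To make that concrete, here is a toy simulation in Python. All the numbers are made up: suppose the new therapy yields about 10 points of improvement on some measure, the current standard treatment about 6, and untreated people about 2, since people often improve somewhat on their own. The very same therapy then looks roughly twice as effective against a no-treatment group as against a standard-treatment group.)

    import random

    random.seed(1)  # fixed seed so the illustration is reproducible

    # Hypothetical improvement scores, invented for illustration only.
    def scores(true_mean, n=200, sd=8.0):
        return [random.gauss(true_mean, sd) for _ in range(n)]

    new_therapy  = scores(10.0)  # assume the new therapy adds ~10 points
    standard_tx  = scores(6.0)   # the current standard treatment adds ~6
    no_treatment = scores(2.0)   # untreated people improve ~2 on their own

    def avg(xs):
        return sum(xs) / len(xs)

    # The apparent "effect" of the same therapy depends on the comparison group:
    print("vs. no treatment:      ", round(avg(new_therapy) - avg(no_treatment), 1))
    print("vs. standard treatment:", round(avg(new_therapy) - avg(standard_tx), 1))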

I recognize that most readers will prefer the simple David Brooks version of psychological research to my version with its statistics and research design. That’s exactly what worries me. Brooks’ approach may please some psychologists by calling attention to published research, but I believe its oversimplification can deceive readers about specific topics and about the field of psychology itself.

2 comments:

  1. Hi! Love the blog. Will be listening for your interview on Parenting within Reason.

    "but I believe its oversimplification can deceive readers about specific topics and about the field of psychology itself."

    Considering it's David Brooks, you might wish to consider whether the above was his intention.

  2. OMG. That is an awful thought--he does it on purpose!

    Thanks for the kind words.
