Concerned About Unconventional Mental Health Interventions?
Alternative Psychotherapies: Evaluating Unconventional Mental Health Treatments

Saturday, June 4, 2011

Does Individualized Treatment Mean Systematic Research is Impossible?

In a recent e-mail discussion, someone mentioned to me her belief that it was not really relevant that some psychotherapies for children are not supported by clear research evidence and thus can’t properly be called “evidence-based”. She reasoned that because psychological treatments have to be tailored to individual cases, it is impossible to control how the treatment is done, and real experimental research of the kind done in physical medicine cannot be carried out. Therefore, she argued, we should not demand that psychotherapies be supported by evidence from randomized trials.

That correspondent was talking about a treatment called Dyadic Developmental Psychotherapy, which has not earned the “evidence-based” label, but she was certainly not the only person to make that argument, nor is DDP the only treatment that has been discussed in this way. Several weeks ago, when I was present at a workshop on Sensory Integration treatment, I took the opportunity to ask the presenter privately where she stood on the idea that Sensory Integration lacks the support of research evidence. She cheerfully agreed that it had very little support, and followed up with the argument that individualization of treatments means it will never be possible to evaluate SI and provide clear research evidence supporting its effectiveness-- so we should choose to use SI or similar methods simply on the basis of therapists’ judgments about what seems to “work”.

These are not irrational arguments, but I think they are wrong. Obviously, research on any treatment of human beings is going to involve many more complicating factors than, say, studying which kind of fertilizer gives the best crop for a particular kind of bean. With more factors, it becomes increasingly hard to define and interpret outcomes. Nevertheless, by following the rules of research design, it’s possible to work out ways of deciding whether a treatment is more effective than another choice (whether that’s a different treatment, no treatment at all, being on a waiting list for the desired treatment, or whatever it may be). Applying those rules gives a better assessment of the treatment than can be given by the judgment of a therapist who is very much involved in the therapy, or the judgment of parents who may have their own reasons for enthusiasm or hostility toward a particular psychotherapy for their children, or, for that matter, the judgment of the treated children themselves.

Here’s the thing: in most situations where either a child or an adult is going to receive psychological treatment, somebody is going to make a decision about what the treatment will be. That “somebody” may be a parent or a teacher or a judge or a psychologist or a social worker. Without research evidence, none of those people have anything but their own experience or other people’s opinions to go on, and even very experienced therapists can hardly have treated more than a few hundred cases of any given kind of problem in their professional lifetimes. Their opinions may be better ways of making choices than flipping a coin, but they can’t be nearly as good decision guides as systematic research, and I’ll tell you why.

1. Individual teachers or therapists can only have a limited number of past observations to go on, but systematic research approaches can bring together information about hundreds of similar cases. The great advantage of having all those cases is that the individual differences between children and their situations are likely to average out and “disappear” arithmetically from measurements of the children’s conditions. By contrast, when you work with small numbers of children, it’s much more likely that individual differences will affect the results, and even that the children you are looking at are by accident a different sort of people from the rest of the population. (This is like the old sock drawer problem: if you have a drawer full of socks, half of them white and half black, and you pull out only three, it could easily happen that all three are the same color. But when you pull out a dozen socks, chances are that you will get something closer to half black and half white, and with two dozen that’s even more likely. Similarly, if half of the children will get “better” with a treatment, and the other half will get “worse”, it could easily happen that a small number of children might by accident all be of the “getting better” type.) A small simulation after this list illustrates both this point and the next.

2. Individual therapists or parents may not be in a good position to compare children in treatment to children with similar problems who are not being treated. Without that kind of comparison, it’s impossible to know whether changes in the treated children are caused by the treatment, or whether they change just because they are getting older, or whether their problems are of a temporary kind to begin with. The rules of systematic research direct us to begin a study by establishing a comparison group of people who are similar in age, sex, socioeconomic status, etc., etc. to the people who will be treated, but who will receive a different treatment, or possibly no treatment at all. Simply comparing the children’s conditions before and after treatment is just not good enough, because it gives us no idea whatever why any improvement has occurred, and improvement can take place for a thousand reasons-- even the parents’ changed behavior because they have confidence in a therapist.

3. Individual therapists, parents, or teachers, however careful they may try to be, can find it impossible to be objective about the effects of a treatment. This is not an insult, or even a criticism, but just a statement of the fact that all of us human beings easily think or remember things in the way that makes most sense to us or that suggests we’re right about something. Well-designed research does not let this kind of subjectivity get in the way, because it makes sure that children are evaluated by people who do not know what kind of treatment they are getting, and that the data collected are also analyzed “blind” by people who don’t know the children.

4. Individuals may be very much tempted to make decisions about treatments on the basis of other individuals’ experiences-- testimonials, for example. But those individual experiences are very likely to be seriously biased in ways that systematic research is not. What testimonial ever said, “Oh, it was okay, I guess”, or for that matter, “It was ghastly-- what a waste of time and money”? Decisions based on individual experience are likely to pay close attention to positive statements and ignore neutral or negative experiences (the second sketch after this list shows how strong that selection effect can be). By contrast, the job of systematic research is to try hard to find negative information that will lead to the rejection of a treatment unless it is far more than balanced by positive findings.
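
To make points 1 and 2 concrete, here is a minimal simulation sketch in Python. Everything in it is invented for illustration (the effect size, the sample sizes, and the Gaussian spread are all assumptions, not data from any real study): it models a treatment with no real effect given to children who tend to improve somewhat anyway as they get older.

```python
import random

random.seed(42)

# Hypothetical model (all numbers invented): the treatment does NOTHING,
# but a child's score tends to rise about 10 points in a year anyway,
# with a lot of individual variation. Positive change = improvement.
def change_after_one_year():
    return random.gauss(10, 20)

# Point 1 (the sock drawer): with only 5 treated children, chance alone
# quite often makes the whole group look like uniform "responders".
all_improved = 0
trials = 10_000
for _ in range(trials):
    small_group = [change_after_one_year() for _ in range(5)]
    if all(change > 0 for change in small_group):
        all_improved += 1
print(f"5-child groups where every single child improved: {all_improved / trials:.0%}")

# Point 2 (no comparison group): the treated children improve on average,
# which looks like success -- until we see that untreated children
# improve just as much. The pre/post change by itself proves nothing.
treated = [change_after_one_year() for _ in range(500)]
untreated = [change_after_one_year() for _ in range(500)]
print(f"mean one-year change, treated children:   {sum(treated) / 500:+.1f}")
print(f"mean one-year change, untreated children: {sum(untreated) / 500:+.1f}")
```

Run as written, the untreated comparison group is what gives the game away: both groups show roughly the same average improvement, so crediting the treatment for the treated group’s gains would be a mistake.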
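
The testimonial problem in point 4 can be sketched the same way, again with invented numbers: suppose outcomes scatter around zero (the treatment does nothing on average), but only families whose children happened to do very well write testimonials.

```python
import random

random.seed(7)

# Hypothetical model (numbers invented): outcomes average zero, but only
# the most delighted families -- say, those seeing a gain above 25 --
# ever write a testimonial.
outcomes = [random.gauss(0, 20) for _ in range(1_000)]
testimonials = [o for o in outcomes if o > 25]

def mean(xs):
    return sum(xs) / len(xs)

print(f"mean outcome, all 1,000 clients:   {mean(outcomes):+.1f}")
print(f"mean outcome, testimonial writers: {mean(testimonials):+.1f} "
      f"(from just {len(testimonials)} clients)")
```

The handful of testimonials average far above zero even though the full group averaged nothing at all, which is exactly the bias described above.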


All these are reasons why choices about treatment are better based on systematic research than on individual experiences. All choices are likely to be based on some information rather than made at random, so it only makes sense to choose treatments based on systematic research evidence when that is possible (and, by the way, it usually is possible, as there are well-supported psychotherapies for parents and children).

One other thought, however. I don’t mean to say that individual experiences are worthless, and in fact there is one situation where they may be enormously valuable in the assessment of a treatment: the case of a therapy that is potentially harmful under some circumstances. We can assume that even potentially harmful therapies don’t hurt every client, because it would rapidly become obvious if they did. However, if they harmed only one person in a thousand, it would be important to know this. Chance factors might mean that no individual in a systematic study would be hurt, so we would be likely to find out about problems only from family members or from a therapist who observed an adverse event. Let’s hope that if such discoveries are made, the observers will find some way to make their findings publicly known.
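
A little hypothetical arithmetic shows how easily a one-in-a-thousand harm slips past a systematic study. If each client has a 0.001 chance of being harmed, a study of n clients observes no adverse event at all with probability 0.999^n:

```python
# Hypothetical arithmetic: assumed harm rate of 1 in 1,000 clients.
p_harm = 0.001
for n in (50, 200, 1_000):
    p_no_harm_seen = (1 - p_harm) ** n
    print(f"study of {n:>5} clients: chance of seeing no harm at all = {p_no_harm_seen:.0%}")
```

Even a 200-client trial, larger than many psychotherapy studies, would miss such a harm about four times out of five, which is why individual reports of adverse events matter.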

1 comment:

  1. This is a very common misconception about Evidence-Based Practice: that it precludes individualization, when actually the definition, which originated in the field of medicine with Sackett and his colleagues, and the steps themselves include individualization. The process involves formulating a question relevant to your individual client, searching the literature to answer that question, evaluating the evidence found, and then selecting an intervention for your individual client, taking into account the client's values and using your clinical expertise. Once the intervention is implemented, individualized evaluation via reliable and valid assessment tools continues, and adjustments are made as needed.

    People who maintain that their favorite therapy is not scientifically testable are, in essence, admitting it is a pseudoscience. Scientists consider the line of demarcation between science and pseudoscience to be whether a claim is testable/falsifiable. If a therapy is not testable, that puts it outside the realm of science and on a par with any other non-scientific or pseudoscientific practice. That would be a very sad statement if that is how they want to classify mental health practices. Most therapies are actually testable, but proponents either don't want to test them or explain away disconfirmatory results.
