Sunday, July 13, 2014
The Randolph Attachment Disorder Questionnaire: When a Psychological Test Is Not a Test
On several occasions over the last couple of years, I’ve referred on this blog to the case of a mother whose children’s fates seem to be under the control of one Forrest Lien, LCSW, of the Institute for Attachment and Child Development in Colorado (for instance, http://childmyths.blogspot.com/2013/01/kafka-again-more-on-capture-of-child.html).
As recently pointed out by my colleague Linda Rosa, it turns out that Lien was working under a “stipulation” by his licensing board for some years, beginning in 2009, following an unstated problem in his work. This stipulation, as noted at
required him to be supervised and mentored by another social worker for two years-- a requirement that must have been rather galling for someone who had years of experience and who was accustomed to supervising others. In addition to the supervision, Lien was required to take 30 hours per year of approved continuing education courses dealing with assessment and diagnosis, ethics, dual relationships, clinical supervision, and recordkeeping.
All of these continuing education requirements give us some idea about the problems that got Lien into trouble in the first place, but the one that needs the most discussion is the requirement for study of assessment and diagnosis techniques. This topic is of interest because of Lien’s past advocacy of tests and diagnoses without acceptable evidence bases. His IACD website, www.instituteforattachment.org (under previous guises), had in the past promoted inventive but unsubstantiated diagnostic approaches, then had stopped mentioning them.
But the IACD site is once again making claims that are unsupported and selling a test that has never been demonstrated to be valid or reliable: the Randolph Attachment Disorder Questionnaire (RADQ, not to be confused with another test with the same acronym developed some years ago by Helen Minnis). The RADQ was developed and marketed by Elizabeth Randolph, a practitioner who had lost her license in California and moved to Colorado.
What do I mean when I say that a test has not been shown to be either reliable or valid? Some readers will know this, and others may remember it vaguely from Intro to Psych, so I think I’d better clarify.
Psychological tests have two jobs to do. One is to identify problems quickly and easily, without having to wait years to see how development goes, or having to commit months of time to tedious observations. The other is to help decide whether a treatment has had the effect it was intended to have, and thus whether it should be used again.
In order to do these jobs, a test must be reliable-- it must give the same results each time it is given to a person, unless something like a treatment has acted to change the underlying condition. In addition, the test must be valid-- it must test what it is supposed to test.
To find out whether a test is reliable, several methods can be used. Sometimes the test is given two or more times to the same group of people, and the results are examined mathematically to see whether they are sufficiently similar. Sometimes the test results for a group are “split” so that the results from one half of the test questions are compared with the results from the other half, and the comparison is examined as before. Reliability is very important when a test is used to determine whether a treatment is effective; if the test does not ordinarily give the same results each time, it can hardly be used to detect whether the treatment changed patients’ characteristics.
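For readers who like to see the arithmetic, the split-half method can be sketched in a few lines of Python. The answer data here are entirely invented for illustration-- they have nothing to do with the RADQ or any real instrument-- and the Spearman-Brown step is the standard correction for estimating full-length reliability from a half-length correlation:

```python
# Split-half reliability sketch (hypothetical data, not from any real test).
# Each row: one respondent's answers on a 10-item yes/no questionnaire.
responses = [
    [1, 0, 1, 1, 0, 1, 1, 0, 1, 1],
    [0, 0, 1, 0, 0, 0, 1, 0, 0, 0],
    [1, 1, 1, 1, 1, 1, 1, 1, 0, 1],
    [0, 1, 0, 0, 1, 0, 0, 1, 0, 0],
    [1, 0, 1, 1, 0, 1, 0, 0, 1, 1],
    [0, 0, 0, 1, 0, 0, 1, 0, 0, 0],
]

def pearson_r(xs, ys):
    """Pearson correlation between two equal-length lists of scores."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# "Split" each respondent's answers: odd-numbered items vs. even-numbered.
odd_totals = [sum(row[0::2]) for row in responses]
even_totals = [sum(row[1::2]) for row in responses]

r_half = pearson_r(odd_totals, even_totals)
# Spearman-Brown correction: estimates the reliability of the full-length
# test from the correlation between its two halves.
reliability = 2 * r_half / (1 + r_half)
print(f"half-test r = {r_half:.2f}, estimated reliability = {reliability:.2f}")
```

If the two halves of a test do not even agree with each other about the same people on the same day, there is little hope that the whole test will agree with itself across occasions.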
However, validity is even more basic to the usefulness of a test. If a test does not give information about the issues it purports to examine, it adds confusion instead of clarity to assessment. But how do we know whether the test is valid? We have to find out whether the test results give us information about people that is similar to the information we could get in some other, more difficult, more time-consuming way. For example, we might give a test to a group of children, then wait until they grew up and see whether the test had done a good job of predicting their adult characteristics. Or, we could give the test and compare it with the results from some other well-validated but expensive or cumbersome test we wanted to replace. Or, if there was agreement among clinicians about how to diagnose a problem, we could give the test, and compare its results to the diagnoses given by clinicians who did not know anything about the test or its results and who were not involved in the test administration.
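That third approach, comparing test classifications with diagnoses made by clinicians blind to the test, can also be sketched with made-up numbers. Agreement is usually reported with a chance-corrected statistic such as Cohen’s kappa, because two raters can agree fairly often just by chance:

```python
# Criterion-validity sketch (hypothetical data): compare classifications
# from a test against diagnoses by clinicians who never saw the test.
test_positive = [1, 1, 0, 0, 1, 0, 1, 0, 0, 1]    # test says "disorder present"
blind_diagnosis = [1, 0, 0, 0, 1, 0, 1, 0, 1, 1]  # independent clinician judgment

n = len(test_positive)
agree = sum(t == d for t, d in zip(test_positive, blind_diagnosis))
p_observed = agree / n

# Cohen's kappa: how much better than chance-level agreement did we do?
p_test = sum(test_positive) / n
p_diag = sum(blind_diagnosis) / n
p_chance = p_test * p_diag + (1 - p_test) * (1 - p_diag)
kappa = (p_observed - p_chance) / (1 - p_chance)
print(f"raw agreement = {p_observed:.2f}, kappa = {kappa:.2f}")
```

The crucial point, to which we will return, is that the two columns of numbers must come from sources that cannot influence each other; otherwise the agreement statistic is meaningless.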
Let’s look at the RADQ in terms of validity. First, Elizabeth Randolph herself says in her test manual that this is not a test of Reactive Attachment Disorder, but of some other “unofficial” disorder she calls Attachment Disorder. Confusingly, although many people shorten Reactive Attachment Disorder to RAD, the R in RADQ is for Randolph, not Reactive. What is Attachment Disorder, then? According to the IACD website, it is a term that covers a number of dissimilar problems, each with different symptoms as described in DSM or ICD: Reactive Attachment Disorder, Oppositional Defiant Disorder, Post-Traumatic Stress Disorder, childhood trauma effects, and Pervasive Developmental Disorders (now usually called Autism Spectrum Disorder). Whatever AD is, it can begin before birth when baby and mother do not bond prenatally (whatever this means) or can occur later. Its symptoms, or perhaps I should say the symptoms of all the diagnoses mentioned earlier in this paragraph, are said to include the usual list used by attachment therapists, including cruelty, provocative behavior, lying, and superficial charm, none of which are actual symptoms of most of the diagnoses named as making up AD.
Evidently, there are problems here. The RADQ is presented as a way to diagnose a number of dissimilar difficulties of behavior and mood. Yet it consists of a fairly small number of questions. Are the answers used to detect a pattern of behavior that is characteristic of each of the diagnoses? No, the total score is supposed to measure the notional Attachment Disorder that somehow shares the sometimes-contradictory characteristics of a whole set of different diagnoses. There is no clear definition of the problem to be assessed, so it is impossible to work out whether the RADQ is a valid assessment of … something.
There’s more to deal with here. Randolph says she can diagnose AD; one of her methods is to see whether the child is able to crawl backward on command. [I’m just reporting the news, you understand.] So, she can identify AD, and she will validate her test by seeing whether she and the test come to the same decision about each child. To do this properly, of course, she needed to have someone else administer the test, and then to examine the mathematical relationship between the test results and her own assessments of a group of children. But, no. That was not what happened. Not only did Randolph administer the test herself, she did so by discussion with each child’s familiar female caregiver. The RADQ is not a test of what children think or do, it is a test of what their mothers or guardians say they think or do. Randolph, who had already worked with and knew the children, administered the test by talking over each question with the caregivers until she and the caregiver came to a conclusion about the correct answer to the question. In other words, Randolph, who had already come to a conclusion about the child, guided the caregiver to a set of answers on the RADQ-- perhaps unwittingly, but it is impossible to think that there was no influence brought to bear.
Not surprisingly, there was a high correlation between Randolph’s diagnosis and what the RADQ responses said, so Randolph reported that the test was a valid one. But, of course, it was not, and would not have been even if AD existed as a disorder in any meaningful sense. To validate a psychological test, the test administration and the validating criterion (in this case, Randolph’s diagnosis, whatever it meant) must be independent of each other to begin with. If they influence each other, one will predict the other without actually being a valid way to assess a problem.
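A tiny simulation makes the point vivid. The numbers below are invented and are not a model of the actual RADQ data; the “test” answers here carry no real information at all, yet once the administrator’s prior diagnosis is allowed to nudge the answers, the test appears to agree with the diagnosis beautifully:

```python
import random

random.seed(0)

def administer(prior_diagnosis, influence):
    """Return a 10-item total score. `influence` is the chance that each
    answer is steered toward the administrator's prior diagnosis rather
    than left to pure chance. The items measure nothing real."""
    score = 0
    for _ in range(10):
        if random.random() < influence:
            score += prior_diagnosis       # steered answer echoes the prior
        else:
            score += random.randint(0, 1)  # otherwise, coin-flip noise
    return score

# The administrator's prior diagnosis for each of 200 hypothetical children.
priors = [random.randint(0, 1) for _ in range(200)]

for influence in (0.0, 0.7):
    scores = [administer(p, influence) for p in priors]
    mean_pos = sum(s for s, p in zip(scores, priors) if p) / sum(priors)
    mean_neg = sum(s for s, p in zip(scores, priors) if not p) / (len(priors) - sum(priors))
    print(f"influence={influence}: mean score if 'diagnosed'={mean_pos:.1f}, "
          f"if not={mean_neg:.1f}")
```

With no influence, the diagnosed and undiagnosed groups score about the same, as worthless items should. With substantial influence, the two groups separate sharply, and a naive analyst would call the instrument well validated. That, in miniature, is the flaw in the RADQ validation procedure.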
What does all this mean about Forrest Lien and the IACD? If they are selling the RADQ, as they advertise on the website, they are committing what appears to be a fraudulent act. If they have paid the slightest attention to the professional literature over the last ten years, they must be aware of the criticisms that have been leveled against the RADQ, and are selling it anyway. If they have not paid any such attention, they have failed to keep up with professional development and are not meeting the practice standards required for continuing licensure.
The stipulation under which Lien practiced for some years did not clearly state the reasons for the disciplinary action, but it did require him to do further study about assessment and diagnosis. Could it have been the use of the RADQ that was the original problem, and that was why the test disappeared from the website for a while? If so, what will happen now that the RADQ is back? What ought to happen is pretty clear to me.