Behavior observations tell us a lot about children’s development, abilities, and moods, but to be accurate they require a great deal of work from highly trained people. Understanding a child’s past experiences would also help us assess that child’s present needs and capacities, but many, even most, experiences in the family are private, and we can identify them only if they have consequences that can be observed later on. It’s partly because of these difficulties that there have been a number of suggestions about using hormonal or neurotransmitter testing to try to assess children’s backgrounds and the help they need.
Sometimes these suggestions appear to ignore the fact that a biological measure is only part of the complex picture that represents the child in the family. For example, one recent case study was used to show that termination of parental rights could be decided on the basis of a child’s neurotransmitter levels following a visit with the non-custodial mother (Purvis, K.B., McKenzie, L.B., Kellermann, G., & Cross, D.R. [2010]. An attachment based approach to child custody evaluation: A case study. Journal of Child Custody, 7, 45-60). That study was very much lacking in the kind of evidence that would be needed to conclude that such measurement could be used appropriately.
Of the various biological measures that have received attention as possible indicators of a child’s condition and needs, the most prominent is cortisol, the “stress hormone” that is produced in response to a wide range of experiences, including neglect and abuse. For example, an adoptive mother commenting on http://phtherapies.wordpress.com/2010/09/23/monica-pignotti-responds-to-adoptive-parents-comments explains her child’s distress and difficult behavior by referring to high levels of cortisol.
Increases in cortisol levels following brief, intense stress help the individual to withstand stress effects. Some cortisol production is valuable-- it would be a problem to be without cortisol to help withstand the occasional shocks of normal life. But long periods of stress, with chronic cortisol activation, have a number of negative effects, including cognitive and emotional problems. And some people who have been stressed over long periods begin to show lower levels of cortisol production as a consequence. Clearly, there is no simple one-to-one connection between current cortisol level and history of stress, or between cortisol level and present ability to handle stress.
A recent article in Child Development summarizes some of the complexities of the connections between cortisol levels and experiences of abuse or neglect (Cicchetti, D., Rogosch, F.A., Howe, M.L., & Toth, S.L. [2010]. The effects of maltreatment and neuroendocrine regulation on memory performance. Child Development, 81, 1504-1519). These authors point out that there seem to be no overall differences in cortisol levels between all types of maltreated children and non-maltreated children. However, like so many biological characteristics, cortisol levels vary with respect to time of day, and the pattern of that variation may be different for children who have been extensively maltreated (both sexual and physical abuse, as well as neglect and emotional abuse) and those who have been physically abused only. Some maltreated children produce high levels of cortisol, some low levels, and some show a normal pattern in spite of their experiences with abuse or neglect. As Cicchetti and his colleagues conclude, “… hypercortisolism [high levels of cortisol] is more likely to occur as a result of acute stress and a shorter time since the onset of stress or maltreatment, whereas hypocortisolism [low levels of cortisol] is likely the result of a long-term process of chronic stress and a later time since the onset of stress or maltreatment” (p. 1506). Once again, the cortisol measure is not related in a simple way to whether a child has or has not been maltreated.
Why can’t we just use children’s cortisol levels to decide what kinds of stresses they may have experienced, and at what points in their development? Should we use hypercortisolism to convict a child’s present caregiver of abusive treatment, or hypocortisolism to convict a former caregiver? Or should we tell the adoptive mother cited above that if her child has high cortisol levels (which of course we don’t know), this must be due to the stress of his recent experiences? No, of course not. For one thing, as Cicchetti and his co-authors point out, there is a good deal of variation in children’s cortisol patterns, and no overall difference between maltreated and non-maltreated children. In addition, we need to recall that these are correlational studies, in which there is an association between two measures, but no certain way to decide which caused the other. It is conceivable-- though perhaps only remotely-- that children with certain daily patterns of cortisol production are more likely to be extensively abused than others, and even that biologically-related caregivers who share certain patterns are more likely to be abusive. Information about stress hormones and neurotransmitters allows us to explore connections between biological factors, abusive treatment, and child outcomes, but at this point such information is only a compass, not a detailed map.
Tuesday, September 28, 2010
Sunday, September 26, 2010
A Good Law: School Assignment for Foster and Homeless Children
“There ought to be a law”-- we’ve all said that at times. Unfortunately, not every law has the outcomes we’d like to see, and it can be even harder to alter a law than to create a new one.
Most states have problematic but well-intentioned laws about children’s attendance at public schools. Children are generally expected to attend the school to which their residential district assigns them-- sometimes a school near the child’s home, but sometimes not. Because school expenses are paid by the school district of the child’s residence, there is concern about having payment go from one district to another, except in the unusual case where one district can offer needed services and another cannot. If a school district discovers that a child’s family has moved to another area, the child will often not be allowed to continue to attend the familiar school, even for a few remaining weeks of the school year. A few, but by no means all, school districts will allow parents to pay substantial fees to keep their child in a public school even though the family lives out of the district. Even when arrangements can be made for the child to attend school out of district, however, the distance between the home and the desired school may be prohibitive unless parents can manage transportation.
The result of these laws about school assignment is that children whose living situation changes must very often change schools as well as homes. Two groups who have been strongly affected by this are children who are moved from their familiar (but neglectful or abusive) home to the care of a foster (or “resource”) family living in some other area, and children whose families have become homeless and are residing in a shelter or similar environment. Because familiar schools and established relationships with teachers and with other children are important stabilizing factors in school-age children’s lives, the requirement that they change schools removes yet another element of stability in situations where unfamiliarity of people and/or places is already creating stress. Such changes create problems both for children’s emotional lives and for their educational process. Entering a new school always means a loss of friends, and may mean conflicts as a newcomer has to make his or her way into the pecking order; these changes, when combined with the emotional impact of foster placement or homelessness, may interfere with a child’s self-control and result in fighting, sulking, or withdrawal that generates even more negative consequences in the school environment. In addition, because schools do not have identical educational practices or daily lesson plans, children changing schools may miss instruction, and busy teachers may not be aware of developing educational problems.
Today, though, we have reason to be delighted that there is a new law in New Jersey that should have a very beneficial effect on foster children and homeless children (see http://www.njleg.state.nj.us/2010/Bills/A2500/2137_R3.PDF). This law protects children placed in foster care from having to change schools when they move to the district where the foster parents live, even if it is different from the district of the parental home. It also protects children living in homeless shelters or group homes from having to change schools, allowing them to continue at their familiar schools. Of special importance, this law establishes payment for transportation of children to their accustomed schools, and prevents municipalities from making laws that would discriminate, in access to schools and recreational facilities, between children living with their families in the district and children in other situations. The New Jersey legislation thus removes some emotional and educational obstacles that have further interfered with the positive development of children whose living situations were disrupted by events that themselves had negative impacts.
What about other children whose lives are disrupted if they have to change schools at awkward points? A group who are obviously affected are children of divorce. Children whose parents divorce frequently find themselves moving to a new house or apartment because parents who are dividing their resources can no longer maintain the living standards that were possible with combined incomes. The familiar home, neighborhood, and friends are replaced by unfamiliarity, and often by unfamiliar surroundings that are less comfortable or safe than the original home. The move may be at the beginning of a long school vacation, which would give the children time to become acclimated and even make some new friends before starting at a new school-- but it may just as well be in the middle of a school term, with nothing to soften the “school shock” as they plunge into an entirely new situation. And, of course, many school systems nowadays have abandoned the old-fashioned long summer vacation in favor of short terms with a few weeks’ break between them. In these cases, children may lose much that could offer comfort and buffer the impact of divorce on their mood and learning.
Let’s hope that the legislation for foster and homeless children may be the beginning of more child- and family-friendly school policies for all families undergoing significant changes.
Friday, September 24, 2010
Violent Video Games: Do We Know That They Cause Violent Behaviors?
California has banned the sale of violent video games to minors, and concerned persons have argued that playing violent games causes children and adolescents to become more aggressive—an undesirable outcome as far as most of society is concerned. In a petition before the Supreme Court of the United States, two related groups, the Entertainment Merchants Association and the Entertainment Software Association, have asked for the ban to be removed. They argue that although it would be undesirable for young people to become more aggressive, in fact the evidence does not suggest that violent video games would cause this outcome.
The cynical-- and rather natural-- response to the Entertainment Merchants is that their financial interests in the sale of violent video games are outweighing whatever concerns they might have about aggressive behavior and its impact on society. However, eighty-two people (of whom I am one) have signed an “amici curiae” (friends of the court) brief arguing against the ban. This brief can be read at http://www.scribd.com/doc/37676518/EFF-PFF-Supreme-Court-Amicus-Brief-in-SCHWARTZENEGGER-v-EMA-Video-Games-Case. The signatories are all people who have contributed in some way to the discussion of the possible influence of violent video games on youth aggression, and they have presented information, based on understanding of systematic research, that supports the argument of the Entertainment Merchants. (It is, after all, possible that the Entertainment Merchants can be both motivated by financial interests and right in their assessment of the effects of violent video games.)
The amici brief counters the research cited by the California state senator Leland Yee as condemning the ill effects of violent video games. Without saying so directly, the brief suggests that Senator Yee (who has a Ph.D. in child psychology) and others have been cherry-picking among the available research, and have concocted an unpalatable pie with the results. There are hundreds of published studies on violent media and their effects, going back to the ‘50s and the great comic book investigations. As it would be very unwieldy to cite all these, it is not surprising that Californians in favor of the ban have chosen among them.
Given that some choice among studies is necessary, though, how do we choose? The choices should not be a matter of searching for investigations that have the “right” outcome and support whatever belief someone had to begin with. An important theme in scientific investigation is that researchers search hard for evidence that will counter their hypothesis; the rule is to try to prove that the hypothesis is NOT true, rather than to look only for information that confirms the prediction. This means that in selecting from an overabundance of material, if we were to pay attention to conclusions at all (which we really shouldn’t), we would be looking for those that argued against our expectation.
The choices made among studies should be based on the quality of the investigation and the acceptability of the conclusions drawn. Because it is really quite difficult to do good studies of a complex subject like the connections between violent video games and aggressive or violent youth behavior, these choices must be made carefully. It can’t be assumed that any study published in a peer-reviewed journal has met all the highest standards for data collection and interpretation.
One real problem is that most studies of the effects of video games are in fact correlational studies. This means that the researchers have sought young people who already had habits with respect to video games, have asked them about their preferences and their game-playing habits, and have in some way assessed either their actual violent behavior or some related factor like angry moods. Taking these two kinds of information-- video game use, and violence or anger-- the researchers carried out a statistical analysis to show how much the two factors were related to each other. If they also created a graph to show the relationship, the way the graph looked would indicate how much of a relationship there was. Points (each representing a tested individual) would be scattered at random all over the graph if there was little connection between video game use and aggressiveness. If there was a strong relationship between the two-- for instance, that people who played a lot of violent video games were quite aggressive, and those who played few were not-- the graph would form a tight, straight line. The scattered graph would be said to show a low correlation, and the straight line graph a high correlation.
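For readers who like to see the arithmetic, here is a small Python sketch of the kind of analysis described above. The numbers are entirely invented for illustration-- they are not data from any actual study-- and the calculation is the standard Pearson correlation coefficient, which runs from -1 to +1.

```python
def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # Covariance-like sum of products of deviations from the means
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    # Square roots of the sums of squared deviations
    sx = sum((x - mean_x) ** 2 for x in xs) ** 0.5
    sy = sum((y - mean_y) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical participants: (hours of violent-game play per week,
# aggression score). A tight, nearly straight-line pattern gives a
# correlation near +1.
hours_tight = [1, 2, 3, 4, 5, 6, 7, 8]
aggression_tight = [10, 12, 15, 16, 20, 21, 24, 26]

# The same hours paired with scores that show no real pattern gives a
# correlation near 0.
hours_scatter = [1, 2, 3, 4, 5, 6, 7, 8]
aggression_scatter = [18, 9, 25, 11, 22, 10, 14, 23]

print(pearson_r(hours_tight, aggression_tight))      # close to 1
print(pearson_r(hours_scatter, aggression_scatter))  # close to 0
```

Either data set can be produced by either causal story (games causing aggression, or aggressive people choosing games), which is exactly why the correlation by itself settles nothing.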
It’s important and interesting to know whether two measures are correlated. But when we have that information-- well, that’s all we have. Even the highest correlation does not tell us that one factor is the cause of the other. It doesn’t even tell us which one might have been the cause and which one the effect. In the case of correlational research on video games, there are two equally possible interpretations about cause and effect. One is that violent video games cause violent behavior, as Senator Yee and many others have argued. The other is that people who are prone to violent behavior enjoy violent video games and choose to play them often, while continuing their usual level of aggressiveness.
In recent years, researchers have used brain imaging in the form of the fMRI to try to get something better than a correlational approach to the study of violence. Perhaps the time will come when this kind of work can be more conclusive, but at present there are many questions about the results of fMRI studies and their relationship to real-world behavior. Like correlational studies, imaging studies cannot be assumed to give us solid information about causal connections between violent video games and violent behavior.
As the amici brief has argued, we presently have little choice among studies that use good methodology for investigating causes of violent behavior, because such methodology is so poorly developed. As a result, it is presently impossible to conclude that violent video games cause youth violence. If the only argument in favor of banning the games is that scientific work supports the decision, there is no reason for such a ban, because there is no adequate scientific work.
That said, I acknowledge that I dislike all video games myself, and think there are powerful esthetic arguments against them! But my likes and dislikes, or those of others, should not be confused with systematic research evidence.
Thursday, September 16, 2010
If It's Not RAD, What Is It?
What psychiatric diagnosis describes cases where children are aloof, irritable, rebellious, and given to rages? Hands up, everyone who says Reactive Attachment Disorder, or just plain “RAD”-- no, sorry, that’s not the right answer, no matter what you read on commercial Internet sites.
Reactive Attachment Disorder is defined by the most recent Diagnostic and Statistical Manual of the American Psychiatric Association (DSM-IV-TR) as involving markedly disturbed and developmentally inappropriate social relatedness in most contexts, beginning before age 5. Most children under the age of 5 are shy with strangers, although they are friendlier to unfamiliar children than to unfamiliar adults. They approach familiar people when frightened or hurt, and show distress when separated from them, although they tolerate this better and better as they get older. Disturbed social relatedness can involve a problem with any of these normal behaviors.
Children diagnosed with Reactive Attachment Disorder may be inhibited in their social behavior, shy and anxious with everyone-- or they may be disinhibited, overly friendly, and as inclined to approach strangers as familiar people. In either case, to be diagnosed with this disorder, a child must also have had problematic early care experiences, and must not be mentally retarded or have other cognitive or emotional problems like autism.
Although Reactive Attachment Disorder used to be defined differently in an earlier DSM (it was defined as a feeding problem of infancy), it has never been defined as involving irritability or “rages”. Although some Internet sites of groups who claim expertise on this topic provide “RAD checklists” including love of blood and gore, aggressiveness, and a “darkness behind the eyes” as symptoms of RAD, others of the same school of thought suggest that these symptoms do not indicate RAD, but another type of attachment disorder that they say DSM has not yet included.
If not RAD, what might be an appropriate diagnosis for irritable children? The pediatrician Daniel Dickstein has recently discussed this in an article entitled “DSM-5 and pediatric irritability: Acknowledging the elephant in the room”, in the Brown University Child and Adolescent Behavior Letter, October, 2010. Dickstein commented on the possible directions that will be taken on this by DSM-5, the new edition, forthcoming in 2013 and presently in development. He noted that in DSM-IV-TR, irritability is “everywhere and nowhere simultaneously”, without being treated as a distinct or well-described symptom. But this situation means that while irritable children do not get the RAD diagnosis, it’s not necessarily clear what they should get.
Dickstein gave several examples of children for whom irritability was an important part of the picture. In the case of one 9-year-old, Dickstein said, “his parents report that he becomes very irritable and angry, hitting them and throwing objects, not always in reaction to being asked to do something… they describe one week where he had several multihour ‘rages’ at home, had trouble falling asleep, could not focus on anything, was talking very loudly, and acted as though his parents had no authority over him” (pp. 6-7), but there were also days when he did well. (And it’s remarkable how many of these behaviors show up on the checklists that claim to evaluate children for attachment disorders.)
One possible diagnosis for this child would be bipolar disorder (BD), or a suggested related category, “severe mood dysregulation” (SMD), in which the child reacts intensely to negative emotional stimuli, as well as showing excitement (hyperarousal). Another possibility, proposed but not decided on for DSM-5, is called Temper Dysregulation Disorder with Dysphoria (TDDD). As the name says, one focus of this diagnosis is on developmentally inappropriate temper tantrums-- a 2-year-old, for whom the occasional tantrum is developmentally expectable, would not be likely to receive this diagnosis. Included among symptoms of TDDD would be an angry or irritable mood, in which the child “loses temper, is touchy/easily annoyed by others, and is angry/resentful”.
As Dickstein points out, committee members working on DSM-V have not yet decided how the symptom of irritability will be handled in the 2012 edition. However, it’s clearly not a part of Reactive Attachment Disorder, and concerned parents should be wary of web sites or therapists that suggest a RAD diagnosis for angry children.
Reactive Attachment Disorder is defined by the most recent Diagnostic and Statistical Manual of the American Psychiatric Association (DSM-IV-TR) as involving markedly disturbed and developmentally inappropriate social relatedness in most contexts, beginning before age 5. Most children under the age of 5 are shy with strangers, although they are friendlier to unfamiliar children than to unfamiliar adults. They approach familiar people when frightened or hurt, and show distress when separated from them, although they tolerate this better and better as they get older. Disturbed social relatedness can involve a problem with any of these normal behaviors.
Children diagnosed with Reactive Attachment Disorder may be inhibited in their social behavior, shy and anxious with everyone-- or they may be disinhibited, overly friendly, and as inclined to approach strangers as familiar people. In either case, to be diagnosed with this disorder, a child must also have had problematic early care experiences, and must not be mentally retarded or have other cognitive or emotional problems like autism.
Although Reactive Attachment Disorder used to be defined differently in an earlier DSM (it was defined as a feeding problem of infancy), it has never been defined as involving irritability or “rages”. Although some Internet sites of groups who claim expertise on this topic provide “RAD checklists” including love of blood and gore, aggressiveness, and a “darkness behind the eyes” as symptoms of RAD, others of the same school of thought suggest that these symptoms do not indicate RAD, but another type of attachment disorder that they say DSM has not yet included.
If not RAD, what might be an appropriate diagnosis for irritable children? The pediatrician Daniel Dickstein has recently discussed this in an article entitled “DSM-5 and pediatric irritability: Acknowledging the elephant in the room”, in the Brown University Child and Adolescent Behavior Letter, October, 2010. Dickstein commented on the possible directions that will be taken on this by DSM-5, the new edition, forthcoming in 2013 and presently in development. He noted that in DSM-IV-TR, irritability is “everywhere and nowhere simultaneously”, without being treated as a distinct or well-described symptom. But this situation means that while irritable children do not get the RAD diagnosis, it’s not necessarily clear what they should get.
Dickstein gave several examples of children for whom irritability was an important part of the picture. In the case of one 9-year-old, Dickstein said, “his parents report that he becomes very irritable and angry, hitting them and throwing objects, not always in reaction to being asked to do something… they describe one week where he had several multihour ‘rages’ at home, had trouble falling asleep, could not focus on anything, was talking very loudly, and acted as though his parents had no authority over him” (pp. 6-7), but there were also days when he did well. (And it’s remarkable how many of these behaviors show up on the checklists that claim to evaluate children for attachment disorders.)
One possible diagnosis for this child would be bipolar disorder (BD), or a suggested related category, “severe mood dysregulation” (SMD), in which the child reacts intensely to negative emotional stimuli, as well as showing excitement (hyperarousal). Another possibility, proposed but not decided on for DSM-5, is called Temper Dysregulation Disorder with Dysphoria (TDDD). As the name suggests, one focus of this diagnosis is on developmentally inappropriate temper tantrums-- a 2-year-old, for whom the occasional tantrum is developmentally expectable, would not be likely to receive this diagnosis. Included among symptoms of TDDD would be an angry or irritable mood, in which the child “loses temper, is touchy/easily annoyed by others, and is angry/resentful”.
As Dickstein points out, committee members working on DSM-5 have not yet decided how the symptom of irritability will be handled in the forthcoming edition. However, it’s clearly not a part of Reactive Attachment Disorder, and concerned parents should be wary of web sites or therapists that suggest a RAD diagnosis for angry children.
Is It a Child's Job to Make You a Parent? A Couple of Takes on Parent-Child Relations
In my reading for this week, there have been two different but complementary approaches to the relationship between parents and children. Each of these views focuses on what we as modern parents may implicitly demand of our children-- what their existence and good or poor development does for our sense of ourselves and of life in general.
Lisa Belkin, writing in the New York Times Sunday magazine, talks about a new view of human motivation suggested by the psychologist Douglas Kenrick and his colleagues. This idea is based on the old Maslow “pyramid” of needs (which frankly I had hoped not to see or hear of again, but as Pippi Longstocking says, One can’t be having fun all the time). As readers will recall, that “pyramid” suggested that humans do not feel needs all at the same time, but must have more primitive survival needs satisfied before they are able to want social benefits, and to have everything else they need in place before seeking self-actualization, which they may achieve through the arts, philosophy, and so on. But Kenrick has suggested that this cannot really be the case, because there should be some evolutionary advantage to any innate motive, and there is arguably no direct benefit to artists or other Maslowian “self-actualizers”. Instead, Kenrick and colleagues propose that real “self-actualization” must have to do with characteristics that attract a mate and contribute to rearing children who can also attract mates, etc.
The Kenrick approach would seem to have more to do with Erik Erikson’s psychosocial stages of development than with “self-actualization” in Maslow’s sense. It also seems to ignore the fact that evolution can select for useless or irrelevant characteristics, if they are linked with useful ones and if they have no immediate harmful effects. But, be all that as it may, it’s Lisa Belkin’s take on the elevation of parenting to a high-order need that I want to talk about. Belkin’s concern is that parental behavior has become something of a goal in itself, as well as a proof that an adult is an excellent person in his or her own eyes and the eyes of others. Because this seems to be the case, Belkin argues, parents are delaying their children’s independence and hovering over them unnecessarily in order to prolong the parents’ involvement with the honored parenting role. Parents, Belkin says, have forgotten the primary parenting goal: making ourselves unnecessary. And, I might add, we may have come to concentrate on what children mean about us, not on what we need to do for our children.
Of course, the problem with Belkin’s or my arguments is that they focus primarily on parental attitudes and behavior, and give little attention to changes in the world such as fewer serious childhood illnesses, but more cars on the road, or fewer children per family as well as fewer grandparents living nearby and making themselves available. Maybe it will be useful to look at a more specific parenting situation as it is described by another author.
The anthropologist Rachael Stryker, in her book “The Road to Evergreen”, examines adoption, particularly adoption from Russia, and looks at adoptive parents’ motives. She mentions the obvious motives for this type of adoption: the belief that adoptive parents are helping children out of a miserable situation; the desire, not always frankly stated, to have children who are ethnically similar to the parents; and, in some cases, the ill-judged wish to skip the sticky, demanding baby period and to start with older children who (they think) won’t be so much trouble. But Stryker goes on from these obvious or stated motives and suggests a much more powerful motivation: that adopted children have a critical role as “emotional assets” who are expected to establish a family life and permit the adults to experience roles which they too may think of as the top of the pyramid. If the adopted children do not accomplish this, they are failing in their responsibilities as emotional assets, and the parents interviewed by Stryker were dissatisfied or even concerned that the children were mentally ill. The parents in Stryker’s study sought a disturbing and ill-supported form of treatment, Attachment Therapy, for their adopted children; both parents and therapists couched the children’s behavior in terms of “wanting to be part of the family”. When children did not comply, they were placed in out-of-home care, a solution referred to as “loving at a distance”, allowing parents to continue to use the child as an emotional asset and family-maker even when he or she was not present.
Some of the parents interviewed by Stryker noted that they did not have their children do any chores, expressing shock over the tasks they had done in their Russian institutions. Many of them also stressed a “consumer” aspect of childhood, providing toys, clothes, and in some cases almost immediate trips to Disney World as ways of engaging with the child. The fact that the children were often frightened or uncomfortable with these “pleasures” was seen as a failure on the child’s part, not as lack of parental empathy for children in a shockingly new environment.
The matters discussed by Belkin and by Stryker raise a question for me: Are children the new wives? I’m not talking about the wife of the distant past, the one who labored in her vineyard, and her children rose up and called her blessed (unlike any children I’ve ever met). And I’m not talking about the full-time-employed wife of today, because she does not do the “wife job” I’m referring to. I mean the well-off Victorian wife, or the Chinese wife of the foot-binding era--- women who consumed and did not produce, who were decorative and pleasant and obedient to their husbands, and whose basic role was to show the world how affluent were the husbands who did not require wifely labor. That was a lot to ask of wives, but of course it didn’t really matter to society what happened to their development as a result of that treatment. If we’re asking children to do this “wife job”, it is a lot to ask of them-- and as Belkin points out, their development, which is important, may well be negatively affected.
Monday, September 13, 2010
Dr. Oz, Faux Science, and Mistaken Conclusions About Self-Hypnosis
Can a study appear to be “scientific”, but not meet the requirements for drawing science-based conclusions? Unfortunately, there are certain words and phrases that push our “science” buttons and make us think that work is scientific when it is actually “faux science” that, like a faux jewel, resembles the real thing only in the most superficial way. Some of the words that make us too readily accept claims as science-based are randomization, control group, and statistical analysis. These button-pushing words can make it difficult for us to think critically about research reports or to force ourselves to do the close reading and questioning needed for a real assessment of a study. It’s easy under these circumstances to slip and fall on the snake oil of pseudoscience.
In 1995, Dr. Mehmet Oz began the activities of his Complementary Care Center at Columbia-Presbyterian Medical Center in New York with the publication of a study on hypnosis:
Ashton, R.C., Whitworth, G.C., Seldomridge, J.A., Shapiro, P.A., Michler, R.E., Smith, C.R., Rose, E.A., Fisher, S., & Oz, M.C. (1995). The effects of self-hypnosis on quality of life following coronary artery bypass surgery: Preliminary results of a prospective, randomized trial. Journal of Alternative and Complementary Medicine, 1(3), 285-290. doi: 10.1089/acm.1995.1.285.
An important piece of information about that study was omitted and was later added in the form of a letter to the editor by Oz four years later:
Oz, M.C. (1999). Self-hypnosis and coronary bypass surgery. Journal of Alternative and Complementary Medicine, 5(5), 397.
In their 1995 article, Mehmet Oz and his colleagues claimed to show scientific evidence that surgery patients who had been instructed about self-hypnosis had significantly better experiences than those who had not. They hypothesized that patients taught self-hypnosis would have a better quality of life after surgery, as measured by a mood self-assessment. If true, this would have been a most important finding, demonstrating a simple and inexpensive way to improve patient outcomes. The study sounded good and was full of those button-pushing words like randomization. But let’s wipe away the snake oil and examine that study for plausibility, logic, and proper use of statistical analysis.
One of Oz’s hypotheses is not, in fact, implausible. This is the idea that people who are taught techniques of self-hypnosis, and who practice it repeatedly before surgery, will later be in better moods than an untreated control group or than those who fail to practice. Such an outcome is plausible for a couple of reasons. One is simply that people who have positive social contacts and attention from others usually are better pleased than those who do not, whatever the specific content of the social interaction; this is a possible explanation for advantages over an untreated control group. As for a better outcome for those who elect to practice than those who do not, the most parsimonious view is that people who are cheered up by the practice will continue it, and those who find it unpleasant or neutral will not.
It is a good deal less plausible that blood pressure, bleeding, and infection due to coronary artery bypass surgery will be influenced by self-hypnosis, as Oz’s group suggested (but did not test). Plausible or implausible, however, the hypothesized associations must be demonstrated empirically before it is legitimate to claim that the medical use of self-hypnosis is effective either for improvement of the patient’s emotional experiences or for physical outcomes.
Oz and his colleagues presented research results which they argued supported the hypothesized effect of self-hypnosis on mood following surgery. They used all those good words like randomization, control group, and statistical analysis. However, their research design and report would have received devastating criticism in an undergraduate research methods course. The single good statistical decision in this work is all that would have saved this project from a failing grade. The Oz paper is an egregious example of faux science, and anyone who recognizes its many problems will decline to accept its conclusions.
Here is a list of problems in the design and implementation of the 1995 study:
Intervention fidelity. To be true science-- to follow the rules that allow us to draw a reliable conclusion about a treatment-- a study must show intervention fidelity, or guarantees that each participant has experienced the treatment exactly as planned. Without this guarantee, we can’t be sure whether we are comparing apples to oranges, bananas, or mangos.
Oz’s published report stated that subjects were randomly assigned to a treatment group and a nontreatment control group. Both groups, a total of 22 patients, were assessed before randomization on the Hypnotic Induction Profile (HIP; Spiegel, 1974; note that this author’s name is misspelled in citations and references by the Oz group, and that in fact there has been considerable controversy about this test). The randomization method assigned equal proportions of people judged to be highly hypnotizable to each group. However, the randomization of treatment was limited to presence or absence of instruction about self-hypnosis on the night before surgery. Patients in the treatment group were asked to repeat the self-hypnosis activity hourly on that evening, and again after their surgery, but not all of them did so, whereas some control group patients were interested in the activities used in the initial HIP testing and later reported that they had practiced those on their own. In other words, there was no assurance that patients in the treatment group had all had similar experiences, or that those in the control group had shared a different set of experiences. Comparison of the two groups was a look at two somewhat different fruit salads.
Confounding variables. One important technique in experimental science involves studying the effects of one factor at a time; when two factors usually operate together, their effects are confused or “confounded”, and it’s impossible to know how each would operate independently, or which causes a particular outcome. Randomization in assignment of participants to groups is a step toward making variables independent, but factors may still be confounded when treatment and control groups have experiences that differ in more than one way. In the case of this study, the treatment and control groups were different both in exposure to specific instruction and in attention and social contact from the researchers. Outcome differences, if any, can thus not be attributed to one of these factors rather than the other. Well-known effects of social attention, such as the “Hawthorne effect” in which any kind of attention improves performance, cannot be excluded by this design.
Statistical issues. You can only lie with statistics to those who don’t understand statistics, or who don’t take the time to work through the (admittedly quite boring) statistical presentation. There are a number of mistaken conclusions and claims in the Oz group’s report. (I apologize for the tedium of this section, but the only way to see what’s wrong is just to walk through the material systematically.)
The first concern raised by the Oz group’s statistical analysis has to do with the initial claim of 22 patients, 13 in the treatment group and 9 in the control group. In a 1999 letter to the editor, Oz acknowledged that in fact only 9 patients, 5 of them in the treatment group, had completed all aspects of the data collection and made data analysis possible. Why it took four years to make this rather important correction is not clear.
Oz and his colleagues deserve full credit for their choice of a nonparametric test (Mann-Whitney) under these circumstances of a small N and ordinal data; this decision is what would save their paper from a failing grade in an undergraduate course. However, the statistical test results, as shown in a table on p. 288 of the 1995 paper, cannot be taken to support the claim that one scale of the Profile of Mood States (POMS) showed a meaningful change and that other scales showed trends in that direction.
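To see why the Mann-Whitney test was the right call for a tiny sample of ordinal mood scores, here is a minimal sketch in Python. The change scores are invented for illustration-- they are not taken from the Oz paper:

```python
from scipy.stats import mannwhitneyu

# Hypothetical mood-scale change scores for two very small groups
# (N = 5 treated, N = 4 controls), standing in for the kind of data
# the Oz paper describes. These numbers are made up for illustration.
treatment_changes = [-4, -2, -1, 0, 1]
control_changes = [2, 3, 4, 5]

# Mann-Whitney compares ranks rather than means, so it assumes neither
# normality nor interval-level measurement -- which is exactly why it
# suits a small sample of ordinal scores.
u_stat, p_value = mannwhitneyu(treatment_changes, control_changes,
                               alternative='two-sided')
print(u_stat, p_value)
```

With samples this small and no ties, scipy computes an exact p value; a t-test on the same numbers would be resting on distributional assumptions that nine ordinal scores cannot support.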
The choice of a .05 probability level as a cut-off point means that 1 in 20 tests of data would be expected to reach the level of statistical significance by chance alone. Multiple tests of data sets make it increasingly difficult to know how to interpret a result whose probability is less than .05. In addition to this problem, however, examination of the table shows other problems, especially with respect to the claim that trends existed.
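The arithmetic behind that multiple-comparisons worry is simple enough to check on the back of an envelope (six tests because the POMS has six scales):

```python
# With six scales each tested at alpha = .05, the chance that at least
# one comparison comes out "significant" purely by chance is well above 5%.
alpha = 0.05
n_tests = 6
familywise_error = 1 - (1 - alpha) ** n_tests
print(round(familywise_error, 3))  # 0.265 -- about one chance in four

# A simple Bonferroni correction would demand p < alpha / n_tests
# before calling any single comparison significant.
bonferroni_threshold = alpha / n_tests
print(round(bonferroni_threshold, 4))  # 0.0083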
Of the 6 scales on the POMS, there was one, the Tension scale, on which control subjects showed an increase and treatment subjects a decrease after surgery; the probability of the difference was 0.0317, a value that indicates a significant difference (with a probability of occurrence by chance less than 5 times in 100) but that needs to be interpreted in terms of the comments in the preceding paragraph. The Depression scale showed an increase for control subjects and a smaller increase for treatment subjects, with a probability for this difference of 0.5556; this probability value, despite its small difference from 50%, was incorrectly described as a trend in the predicted direction. The Anger scale showed an increase for control subjects and a slightly smaller increase for treatment subjects, the difference between them having the large associated probability of 0.7302, again incorrectly described as a trend in the predicted direction. On the Vigor scale, both groups showed a decrease, greater in the case of the treatment group, but with a very high associated probability for the difference between groups. The Fatigue scale showed an increase for both groups, with a greater increase for the control group; the associated probability was 0.4127, very close to 50%, but once again described as a trend in the predicted direction. Finally, the Confusion scale showed a greater increase for the treatment group than for the control group, again at a probability much higher than .05.
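The pattern in that table can be checked mechanically. Using the paper's own p values as quoted above (the Vigor and Confusion scales are described only as having very high probabilities, so they are omitted here), and the conventional .05 significance and .10 trend criteria:

```python
# Between-group p values for four POMS scales, as reported in the
# 1995 paper's table and quoted in the text above.
p_values = {
    "Tension": 0.0317,
    "Depression": 0.5556,
    "Anger": 0.7302,
    "Fatigue": 0.4127,
}

significant = [scale for scale, p in p_values.items() if p < 0.05]
trends = [scale for scale, p in p_values.items() if 0.05 <= p < 0.10]
print(significant)  # ['Tension']
print(trends)       # []
```

Only the Tension scale clears the .05 criterion, and none of the scales described as showing "trends" comes anywhere near the .10 level.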
To sum up, there was a single significant difference when changes in the 6 POMS scores were compared for the control and treatment groups; differences described in the paper as indicating “trends” were nowhere near the .10 probability level often used as a criterion for indication of a trend. The Oz group’s conclusion that “Self-hypnosis relaxation techniques can have positive effects after coronary artery bypass surgery” (1995, p. 289) is a specious conclusion with vague support from the research evidence.
Transparency and failure to report. One requirement for scientific work is transparency-- a full, straightforward, accurate description of the way a study has been done, sufficiently detailed to permit other researchers to replicate the original study and compare the two sets of data. One important piece missing from the Oz group’s report is a description of how patients were originally chosen and approached to be in the study. It’s not clear whether all patients who met the criteria were approached, nor how many (if any) who were approached responded with refusal. Neither is it clear how informed consent information presented the possible outcomes of participation, or how the Hypnotic Induction Profile was presented; both of these events have possible influences on outcomes.
The self-hypnosis instruction included suggestions that the patient concentrate on physical issues like blood pressure and infection. We might expect that the Oz group’s investigation would include comparisons of these outcomes for the treatment and control groups, especially because hospital records would be available. However, no such comparisons are reported in the paper.
The Oz group’s research is faux science, it’s plain. They have done what they should not have done, and they have left undone that which they ought to have done. The biggest problem of all would seem to be their premature commitment to their expected outcome, and the resulting nonchalance about evidence and analysis. In real science, the goal is to find all possible evidence that will reject a hypothesis. In pseudoscience, the search is for affirming evidence. That Oz’s group sought confirmation and ignored disconfirmation is certainly shown in their claims of “trends” in patient moods. Above all, this shows in their statement about patients who did not comply with the self-hypnosis methods: “When an opportunity to help oneself is presented and not taken advantage of, one must question the individual’s desire to regain their health and happiness again” (1995, p. 289). What a shame that all that snake oil was wasted on people who enjoy being sick and miserable!
In 1995, Dr. Mehmet Oz began the activities of his Complementary Care Center at Columbia-Presbyterian Medical Center in New York with the publication of a study on hypnosis:
Ashton, R.C., Whitworth, G.C., Seldomridge, J.A., Shapiro, P.A., Michler, R.E., Smith, C.R., Rose, E.A., Fisher, S., & Oz, M.C. (1995). The effects of self-hypnosis on quality of life following coronary artery bypass surgery: Preliminary results of a prospective, randomized trial. Journal of Alternative and Complementary Medicine, 1(3), 285-290. doi: 10: 1089/acm.1995.1.285.
An important piece of information about that study was omitted and was later added in the form of a letter to the editor by Oz four years later:
Oz, M.C. (1999). Self-hypnosis and coronary bypass surgery. Journal of Alternative and Complemenary Medicine, 5(5), 397.
In their 1999 article, Mehmet Oz and his colleagues claimed to show scientific evidence that surgery patients who had been instructed about self-hypnosis had significantly better experiences than those who had not. They hypothesized that patients taught self-hypnosis would have a better quality of life after surgery, as measured by a mood self-assessment. If true, this would have been a most important finding, demonstrating a simple and inexpensive way to improve patient outcomes. The study sounded good and was full of those button-pushing words like randomization. But let’s wipe away the snake oil and examine that study for plausibility, logic, and proper use of statistical analysis.
One of Oz’s hypotheses is not, in fact, implausible. This is the idea that people who are taught techniques of self-hypnosis, and who practice it repeatedly before surgery, will later be in better moods than an untreated control group or than those who fail to practice. Such an outcome is plausible for a couple of reasons. One is simply that people who have positive social contacts and attention from others usually are better pleased than who do not, whatever the specific content of the social interaction; this is a possible explanation for advantages over an untreated control group. As for a better outcome for those who elect to practice than those who do not, the most parsimonious view is that people who are cheered up by the practice will continue it, and those who find it unpleasant or neutral will not.
It is a good deal less plausible that blood pressure, bleeding, and infection due to coronary artery bypass surgery will be influenced by self-hypnosis, as Oz’s group suggested (but did not test). Plausible or implausible, however, the hypothesized associations must be demonstrated empirically before it is legitimate to claim that the medical use of self-hypnosis is effective either for improvement of the patient’s emotional experiences or for physical outcomes.
Oz and his colleagues presented research results which they argued supported the hypothesized effect of self-hypnosis on mood following surgery. They used all those good words like randomization, control group, and statistical analysis. However, their research design and report would have received devastating criticism in an undergraduate research methods course. The single good statistical decision in this work is all that would have saved this project from a failing grade. The Oz paper is an egregious example of faux science, and anyone who recognizes its many problems will decline to accept its conclusions.
Here is a list of problems in the design and implementation of the 1995 study:
Intervention fidelity. To be true science-- to follow the rules that allow us to draw a reliable conclusion about a treatment-- a study must show intervention fidelity, or guarantees that each participant has experienced the treatment exactly as planned. Without this guarantee, we can’t be sure whether we are comparing apples to oranges, bananas, or mangos.
Oz’s published report stated that subjects were randomly assigned to a treatment group and a nontreatment control group. Both groups, a total of 22 patients, were assessed before randomization on the Hypnotic Induction Profile (HIP; Spiegel, 1974; note that this author’s name is misspelled in citations and references by the Oz group, and that in fact there has been considerable controversy about this test). The randomization method assigned equal proportions of people judged to be highly hypnotizable to each group. However, the randomization of treatment was limited to presence or absence of instruction about self-hypnosis on the night before surgery. Patients in the treatment group were asked to repeat the self-hypnosis activity hourly on that evening, and again after their surgery, but not all of them did so, whereas some control group patients were interested in the activities used in the initial HIP testing and later reported that they had practiced those on their own. In other words, there was no assurance that patients in the treatment group had all had similar experiences, and that those in the control group had shared a different set of experiences. Comparison of the two groups was a look at two somewhat different fruit salads.
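For readers who want to see what stratified randomization of this kind accomplishes (and what it does not), here is a minimal sketch. The patient labels, stratum sizes, and assignment details are my own inventions for illustration, not the Oz group’s actual procedure; the point is only that this balances hypnotizability across groups while guaranteeing nothing about what happens afterward.

```python
import random

# A sketch of stratified randomization of the kind the report describes:
# patients are first split by HIP hypnotizability, then each stratum is
# shuffled and dealt alternately to the two groups, so "highly hypnotizable"
# people land in equal proportions in each. All labels are invented.

def stratified_assign(patients, stratum_of, seed=0):
    """Return (treatment, control) lists balanced within each stratum."""
    rng = random.Random(seed)
    strata = {}
    for p in patients:
        strata.setdefault(stratum_of(p), []).append(p)
    treatment, control = [], []
    for members in strata.values():
        rng.shuffle(members)
        treatment += members[0::2]   # alternate halves of the shuffled stratum
        control += members[1::2]
    return treatment, control

# 22 hypothetical patients, 8 judged highly hypnotizable
patients = [("pt%02d" % i, "high" if i < 8 else "low") for i in range(22)]
t, c = stratified_assign(patients, stratum_of=lambda p: p[1])
print(len(t), len(c))  # 11 11
```

Note that nothing in this procedure constrains what the two groups actually *do* after assignment, which is exactly the fidelity problem described above.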
Confounding variables. One important technique in experimental science involves studying the effects of one factor at a time; when two factors usually operate together, their effects are confused or “confounded”, and it’s impossible to know how each would operate independently, or which causes a particular outcome. Randomization in assignment of participants to groups is a step toward making variables independent, but factors may still be confounded when treatment and control groups have experiences that differ in more than one way. In the case of this study, the treatment and control groups were different both in exposure to specific instruction and in attention and social contact from the researchers. Outcome differences, if any, can thus not be attributed to one of these factors rather than the other. Well-known effects of social attention, such as the “Hawthorne effect” in which any kind of attention improves performance, cannot be excluded by this design.
Statistical issues. You can only lie with statistics to those who don’t understand statistics, or who don’t take the time to work through the (admittedly quite boring) statistical presentation. There are a number of mistaken conclusions and claims in the Oz group’s report. (I apologize for the tedium of this section, but the only way to see what’s wrong is just to walk through the material systematically.)
The first concern raised by the Oz group’s statistical analysis has to do with the initial claim of 22 patients, 13 in the treatment group and 9 in the control group. In a 1999 letter to the editor, Oz acknowledged that in fact only 9 patients, 5 of them in the treatment group, had completed all aspects of the data collection and made data analysis possible. Why it took four years to make this rather important correction is not clear.
Oz and his colleagues deserve full credit for their choice of a nonparametric test (Mann-Whitney) under these circumstances of a small N and ordinal data; this decision is what would save their paper from a failing grade in an undergraduate course. However, the analysis of the statistical test results, as shown in a table on p. 288 of the 1995 paper, cannot be taken to support the claim that scores on one scale of the Profile of Mood States (POMS) changed meaningfully and that others showed trends in that direction.
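For the curious, the Mann-Whitney statistic itself is simple enough to compute by hand. The sketch below uses made-up mood-change scores, NOT the study’s data, to show why the test needs only ranks (so it suits ordinal data) and tolerates a tiny N.

```python
# A minimal sketch of the Mann-Whitney U statistic the Oz group chose,
# computed on invented mood-change scores. U counts, across all between-group
# pairs, how often a value in one group exceeds a value in the other -- which
# is why only the rank order of the data matters.

def mann_whitney_u(x, y):
    """Return (U_x, U_y): pairwise win counts for each group (ties count 0.5)."""
    u_x = sum(1.0 if a > b else 0.5 if a == b else 0.0 for a in x for b in y)
    return u_x, len(x) * len(y) - u_x

treatment = [-4, -2, -1, 0, -3]   # hypothetical post-surgery mood changes
control = [1, 3, 0, 2]            # hypothetical
print(mann_whitney_u(treatment, control))  # -> (0.5, 19.5)
```

An extreme U (near 0 or near the number of pairs) suggests the groups differ; the associated probability is then read from tables or computed exactly, which is what makes the test workable with only a handful of subjects.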
The choice of a .05 probability level as a cut-off point means that 1 in 20 tests of data would be expected to reach the level of statistical significance by chance alone. Multiple tests of data sets make it increasingly difficult to know how to interpret a result whose probability is less than .05. In addition to this problem, however, examination of the table shows other problems, especially with respect to the claim that trends existed.
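A few lines of arithmetic make the multiple-comparisons point concrete. The independence assumption is a simplification (the six POMS scales surely correlate), so treat this as an upper-bound illustration rather than a claim about the Oz data themselves.

```python
# Illustrative arithmetic for the multiple-comparisons problem: with k
# independent tests at alpha = .05, the chance that at least one reaches
# "significance" by luck alone is 1 - (1 - alpha)**k.

def familywise_error_rate(alpha: float, k: int) -> float:
    """Probability of at least one chance 'significant' result in k independent tests."""
    return 1 - (1 - alpha) ** k

print(round(familywise_error_rate(0.05, 6), 3))   # six POMS scales -> 0.265
print(round(0.05 / 6, 4))                         # Bonferroni-adjusted cutoff -> 0.0083
```

In other words, running six tests gives roughly a one-in-four chance of a spurious “significant” result somewhere, which is why a single p = .0317 among six comparisons deserves caution.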
Of the 6 scales on the POMS, there was one, the Tension scale, on which control subjects showed an increase and treatment subjects a decrease after surgery; the probability of the difference was 0.0317, a value that indicates a significant difference (with a probability of occurrence by chance less than 5 times in 100) but that needs to be interpreted in light of the multiple-comparisons point made in the preceding paragraph. The Depression scale showed an increase for control subjects and a smaller increase for treatment subjects, with a probability for this difference of 0.5556; this probability value, despite its small difference from 50%, was incorrectly described as a trend in the predicted direction. The Anger scale showed an increase for control subjects and a slightly smaller increase for treatment subjects, the difference between them having the large associated probability of 0.7302, again incorrectly described as a trend in the predicted direction. On the Vigor scale, both groups showed a decrease, greater in the case of the treatment group, but with a very high associated probability for the difference between groups. The Fatigue scale showed an increase for both groups, with a greater increase for the control group; the associated probability was 0.4127, very close to 50%, but once again described as a trend in the predicted direction. Finally, the Confusion scale showed a greater increase for the treatment group than for the control group, again at a probability much higher than .05.
To sum up, there was a single significant difference when changes in the 6 POMS scores were compared for the control and treatment groups; differences described in the paper as indicating “trends” were nowhere near the .10 probability level often used as a criterion for indication of a trend. The Oz group’s conclusion that “Self-hypnosis relaxation techniques can have positive effects after coronary artery bypass surgery” (1995, p. 289) is a specious conclusion with scant support from the research evidence.
Transparency and failure to report. One requirement for scientific work is transparency-- a full, straightforward, accurate description of the way a study has been done, sufficiently detailed to permit other researchers to replicate the original study and compare the two sets of data. One important piece missing from the Oz group’s report is a description of how patients were originally chosen and approached to be in the study. It’s not clear whether all patients who met the criteria were approached, nor how many (if any) who were approached responded with refusal. Neither is it clear how informed consent information presented the possible outcomes of participation, or how the Hypnotic Induction Profile was presented; both of these events have possible influences on outcomes.
The self-hypnosis instruction included suggestions that the patient concentrate on physical issues like blood pressure and infection. We might expect that the Oz group’s investigation would include comparisons of these outcomes for the treatment and control groups, especially because hospital records would be available. However, no such comparisons are reported in the paper.
The Oz group’s research is plainly faux science. They have done what they should not have done, and they have left undone that which they ought to have done. The biggest problem of all would seem to be their premature commitment to their expected outcome, and the resulting nonchalance about evidence and analysis. In real science, the goal is to find all possible evidence that would reject a hypothesis. In pseudoscience, the search is for affirming evidence. That Oz’s group sought confirmation and ignored disconfirmation is certainly shown in their claims of “trends” in patient moods. Above all, this shows in their statement about patients who did not comply with the self-hypnosis methods: “When an opportunity to help oneself is presented and not taken advantage of, one must question the individual’s desire to regain their health and happiness again” (1995, p. 289). What a shame that all that snake oil was wasted on people who enjoy being sick and miserable!
Thursday, September 2, 2010
When a Scientist Doesn't Think Like a Scientist: Review of Susan Barry's "Fixing My Gaze"
Susan Barry’s recent book, Fixing my gaze (2009), is an engaging narrative of life with a
small but intrusive disability – strabismus, or “crossed eyes”, with its common effect, poor depth
perception. Barry recounts the events of her infancy, when her strabismus developed; the
surgeries that cosmetically corrected the condition without entirely dealing with problems of
vision; her awkward school days; difficulty with driving and other tasks requiring judgment of
depth; and so on until a behavioral treatment, she says, transformed her life. She describes
her early and recent experiences vividly, even poetically, and intersperses scientific discussions
with anecdotes that serve as the “spoonful of sugar that helps the medicine go down”. But she
seems to forget her training in neuroscience when she should be thinking critically about the
effectiveness of treatments. Regrettably, this rather charming and informative book serves as an
extended advertisement for “developmental optometry” or “orthoptics”, a program that claims to
correct some problems of vision by eye exercises that increase control of convergence and
divergence (co-ordinated eye movements that are needed for ideal visual ability). Barry notes
that she excludes from consideration self-help orthoptic methods like the Bates method, which
were critiqued by Worrall, Nevyas, and Barrett in 2009. However, the method she advocates, the
Brock method, appears to have as little plausibility and as weak an evidence basis as the others
do. In addition, Barry speaks of her improved visual functioning as due to “rewiring” of the
brain, and, like many others who resort to this inapt “wiring” metaphor, suggests that high levels
of juvenile brain plasticity persist throughout life. It’s possible that they do, but Barry’s examples
may be more parsimoniously explained by reference to the less dramatic brain changes we call
“learning”.
Barry’s experience of strabismus and its consequences was not unusual, but her book is
unique in providing an understanding of the subjective experience of a strabismic. She notes her
early problems with reading and later problems with looking into the distance and with driving
confidently-- these in spite of excellent acuity of vision in each eye tested separately.
Nevertheless, she did drive, used a stereomicroscope, played tennis, and did not have any sense
of missing the experience of depth. Indeed, because she could use monocular depth cues
(information that can come from one eye rather than needing both), she did have some ability to
judge depth. As far as is known, though, she could not experience the very clear and accurate
sense of distance that comes from using retinal disparity, or a comparison of images as they
occur simultaneously at the two eyes (the reason for this will be discussed a little later). In
middle age, she began to experience shifts or “jiggling” of vision to such an extent that she
consulted a number of specialists. Following treatment by a developmental optometrist, who
prescribed “orthoptic” or “vision therapy” eye exercises, she reports that she began to have vivid
experiences of depth and improved her confidence and skill in driving and other tasks needing
distance judgments. Barry attributes her improved visual skills to a type of vision therapy, a
treatment category that has been defined as “a proposed optometric treatment for developing
efficient visual skills and processing… as a treatment for accommodative disorders, amblyopia,
binocular disorders (strabismic and nonstrabismic), learning disabilities, and ocular motility
disorders” (CIGNA Medical Coverage Policy, 2008, p. 1).
Although there is much that is valuable in Barry’s description of her subjective visual
experience, it is notable that little in the way of objective measures can be retrieved from her
early life. She provides in endnotes some objective measures made at the beginning and at the
end of her treatment, a period of 7 years. These indicate improvements in co-ordinated control of
eye movements. However, at no time does there seem to have been measurement of actual
judgment of depth, using the simple Howard-Dohlman apparatus so familiar to past generations
of psychology students. This device allows an observer to look through a small window into an
illuminated box in which there are two vertical rods. A system of strings allows the observer to
pull each rod backward or forward until the two appear to be side-by-side, the same distance
away. Normally, people do this poorly when using one eye at a time, and very well when they
use both eyes, and the difference between the monocular and binocular conditions shows how
well the eyes are used in coordination and how well retinal disparity is taken into account.
To examine the claims Barry makes on behalf of developmental optometrists, we need to
consider the plausibility of the claimed “rewiring” mechanism, the possibility of alternative
mechanisms, and the empirical evidence that the treatment Barry received was an effective
treatment for strabismic problems.
Is It Plausible That Orthoptic Treatment Could Cause “Rewiring”?
Barry suggests that she had been unable to use retinal disparity to judge distance, that this
was impossible because of the absence of binocular cells in the visual cortex, and that orthoptic
exercises allowed her to develop binocular cells receiving information from both eyes at once.
To discuss these issues, we need first to consider the question of plasticity. This term refers to
the extent to which development is guided by environmental stimulation. If a characteristic is
pretty well determined by heredity no matter what stimulation occurs (e.g., eye color), plasticity
is low. If it were possible to “rewire” all sorts of brain structures, as Barry implies, very high
plasticity would be present. But some aspects of development show high plasticity only during a
certain period of life (experience-expectant plasticity), while others can be guided by the
environment throughout life (experience-dependent plasticity). Quick language learning in early
life is an example of experience-expectant plasticity, and our slow, plodding acquisition of
vocabulary words later on is an example of experience-dependent plasticity.
The use of retinal disparity is usually considered to be a matter of experience-expectant
plasticity. As Barry points out, babies in the first months do not use their eyes together, but
switch attention from one to the other. By about 6 months, they are moving the eyes together.
This causes the image of an object they are looking at to fall on matching areas (corresponding
points) on the left and on the right retina. Messages are sent to the visual cortex simultaneously
from the two stimulated areas, and they cause activity in a single neuron which “lights up” only
when it gets a message from both eyes. This binocular neuron’s activity indicates that the two
eyes are looking at the same object in the same place, rather than two different objects, one seen
by the right and one by the left eye.
Repeating this experience many times makes the binocular neuron more responsive and
strengthens the connection between retinal areas and their associated neurons. But covering one
of the baby’s eyes for as little as a week (perhaps because of an injury), or a big difference in
the clearness of the images at the two eyes, can prevent the development of the connection with
the binocular neuron, and can even reduce the number of neurons from a single eye to the brain
until that eye is functionally blind. Similarly, if the baby is cross-eyed, and if the eyes often do
not co-ordinate, binocular neurons would not be likely to make normal connections. Studies of
cats and monkeys, by Hubel and Wiesel (1965), showed that animals deprived of co-ordinated
visual experiences in early life would develop abnormal use of their eyes and poor depth
perception. Another relevant point is that human babies go through at about 8 months a period of
exuberant synaptogenesis-- they create many synaptic connections between neurons, and indeed
have more synapses than they will ever have again in their lives. But some months later they
begin a process called “pruning” in which unused synapses are destroyed, and neurons
themselves disappear through programmed cell death (Blakemore, 1989).
All these points challenge the plausibility of a continuing plasticity of the visual cortex
that would allow for major changes during later adulthood. However, it would not be wise to
assume that we can completely reject the existence of such plasticity. The animal work by Hubel
and Wiesel is the only experimental evidence we have to depend on, and there are two problems
about generalizing from it. One is that Hubel and Wiesel had to kill the animals in order to
examine most of the brain features that interested them, and this was done fairly soon, so there
was no opportunity to see whether functions would be recovered or whether neural connections
would shift back to what they had been. Generalizing from non-humans to humans does not
necessarily give accurate answers about humans, either; even very similar species may differ in
important respects. A critical period, or time when experience had to occur in order to have an
effect, may have been present for cats, even for monkeys, but may not be the case for human
beings. If human beings are different in this way, it might be plausible that later experience could
affect the use of retinal disparity, although the other points made earlier suggest it is not. The
available information about human beings comes from uncontrolled studies showing the loss of
acuity in an eye following strabismus or patching of an eye in infancy. Rare reports of humans
who have had life-long cataracts removed in adulthood indicate that these people have not
recovered normal vision in spite of therapeutic and educational efforts (Gregory, 1997), but it is
hard to know whether visual problems were caused by the strabismus or cataract experience, or by additional factors that caused the original problem.
Is It Plausible That Other Mechanisms Could Be Affected by Eye Exercises?
Barry stresses the idea that the eye exercises she did might have altered the visual cortex
and other brain areas, perhaps through changes in long-term potentiation and consequent
improvement in the responsiveness of binocular neurons. However, there are other aspects of
vision that relate to retinal disparity; Barry mentions these, but returns to brain “rewiring” as her
preferred explanation. Given the arguments against persistence of early plasticity of the brain,
though, it is important to keep in mind that there are other plausible mechanisms by which
improved control over eye movements could cause better depth perception. These include the use
of the horopter for comparison of distances, constancy mechanisms, and visual adaptation.
Using the horopter.
How do people with a history of normal vision use retinal disparity?
The stimulation of a binocular neuron is not the only factor to be considered when we think
about judgment of depth using both eyes.
Older children or adults who have a normal visual history, and who have useful binocular
neurons, employ retinal disparity as their best way of judging distances. They can fixate (“fix”
the gaze) with both eyes on a single object, so the images at right and left eye fall on
corresponding points and activate the appropriate binocular neuron. This provides a single
“fused” image rather than separate images for each eye (double vision), and the fusion is
accompanied by a sense of depth. When this happens, however, the fixated object is not
necessarily the only one for which a single image is seen. Any object which is at any point on an
imaginary surface, all of whose points are at equal distances from the eyes, also has its images at
right and left eye fall on corresponding points, and is seen as a single object with fused images.
This imaginary surface, the horopter, shifts its position as the individual fixates objects at
different distances. Wherever the horopter may be, though, it determines which objects are seen
singly and which ones have double images. All objects that are off the horopter are experienced
as double images, but those that are very close to the horopter overlap so much that it is almost
as if they were fused. The farther the object from the horopter, the less the images overlap-- no
matter whether the object is on the same side of the horopter as the observer (near her) or on the
opposite side (far away). The overlapping or less-overlapping nature of the double images
provides information about depth.
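For readers who like the geometry spelled out, the standard small-angle approximation relates the disparity of those double images to distance off the horopter: roughly I·d/D², with I the interocular separation, D the fixation distance, and d the offset. The distances below are illustrative assumptions, not measurements of any observer.

```python
import math

# A geometric sketch of why image overlap falls off with distance from the
# horopter. By the standard small-angle approximation, the binocular disparity
# (in radians) of a point offset_m nearer or farther than the fixation
# distance is roughly I * offset / D**2. All numbers are illustrative.

def disparity_deg(interocular_m, fixation_m, offset_m):
    """Approximate binocular disparity, in degrees, for a point offset_m
    off the fixation distance fixation_m."""
    return math.degrees(interocular_m * offset_m / fixation_m ** 2)

I = 0.065  # a typical adult interocular separation, about 6.5 cm
for offset in (0.01, 0.1, 0.5):   # metres off the horopter, fixating at 1 m
    print(f"{offset} m off the horopter -> {disparity_deg(I, 1.0, offset):.3f} deg")
```

The farther an object lies from the horopter, the larger the disparity and the less its two images overlap, which is the depth information described above.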
Why don’t we consciously experience all these double images? We don’t pay attention to
them. We only pay attention to the object we are “looking at”. But we are able to pay attention
to double images by voluntary efforts. For example, when you are driving, you pay attention to
the road, or perhaps to your speedometer or other dashboard instrument. However, your hands on
the steering wheel are probably within your visual field. If you attend to them while still looking
ahead, you may notice that they look large; glance down at a hand, and it seems of normal size
again. Why? When the hand is not at the horopter, you see an overlapping doubled image--
larger than the single image by the amount that does not overlap. Fixating the hand gives you a
single image (assuming that you’re not strabismic) whose size is determined by the distance of
the hand from the eye, not by an extra image.
A person with a history of strabismus-- especially varying amounts of strabismus
produced by repeated surgical interventions-- may have learned that double images have no
reliable relation to the horopter, and learned to exclude them from consideration as depth
judgments are made. But it is not implausible that practice of eye movements, and increased
attentiveness to double images, could enable an adult to develop skill at comparing distances
with information about double or single images. Barry herself refers to a broadening of the part
of the visual field she actually pays attention to, an event that would help the observer pay
attention to images in the periphery.
Constancy and context.
Barry gives only slight attention to an essential aspect of
perception: the capacity for constancy. Constancy is the powerful tendency to experience objects
in the world as remaining the same, in spite of the continual changes in the ways they stimulate
our sense organs. Constancy is most obvious in visual perception. We see a square object as
retaining the same shape and size even though as we move relative to it its image becomes larger
or smaller and the image shape alters through a range of trapezoids. (Indeed, there are probably
few circumstances in which a square object creates a square image on the retina.)
Although the neural foundation of constancy is not well understood, it is clear that this
ability involves context. Shapes, sizes, and other aspects are judged in the context of a complex
surrounding visual field. Look at the square object through a tube that excludes the rest of the
field, and you see it as trapezoidal, not square. Similarly, a person who walks away from you
does not seem to shrink, but look through the tube and you will see that the image is much
smaller than before. Even when there is no movement of the object or the observer, constancy is
needed to overcome the effects of involuntary eye, head, and body movements, which make
images move across the retina.
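The arithmetic behind size constancy can be sketched in a few lines. The object size and distances here are, of course, illustrative; the point is the invariant that the visual system seems to exploit.

```python
import math

# A sketch of the size-distance arithmetic behind constancy. The retinal
# image of an object shrinks roughly in proportion to 1/distance, yet we see
# the object as one constant size -- which works only if the visual system
# scales image size by (its estimate of) distance. Numbers are illustrative.

def visual_angle_deg(object_size_m, distance_m):
    """Visual angle subtended by an object of the given size at the given distance."""
    return math.degrees(2 * math.atan(object_size_m / (2 * distance_m)))

person = 1.8  # a person 1.8 m tall
for d in (2, 4, 8):
    angle = visual_angle_deg(person, d)
    # the image angle shrinks as the person walks away, but the product of
    # angle and distance stays roughly constant -- the invariant constancy uses
    print(f"{d} m: image = {angle:.1f} deg, angle x distance = {angle * d:.1f}")
```

Looking through a tube removes the context that supplies the distance estimate, which is why constancy then fails and the raw image size is what we see.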
Barry reports that her visual experience involved jiggling or shifting of the field, that she
had trouble recognizing where she was when driving, and that hawks went too fast for her to
count them on a bird-watching expedition. While other visual abilities are needed for keeping
the field still and so on, constancy also plays a role in these visual tasks. Constancy requires
some attention to the visual field surrounding the object being fixated. It is plausible that practice
and increased control over eye movements would enable an observer to improve constancy and
minimize some of the disturbing apparent movements or other problems with the fixated object.
Barry calls attention to sudden changes in three-dimensional vision, as well as in other aspects of visual experience, and attributes these to a “sudden and global change in brain state, a change in the activity of whole populations of neurons” (Barry, 2009, p. 236). Yet constancy
mechanisms are known for producing “flip-flops” of perceptual change in apparent distance,
size, brightness, and so on. Decades ago, the perception psychologist Adhemar Gelb
demonstrated abrupt changes in the perceived nature of a visual stimulus. He placed a piece of
coal so that it was illuminated by a spotlight that did not light any other part of the room.
Observers described the brightly-lit coal as appearing white. Gelb then introduced into the
spotlight beam a white piece of paper, holding it so that the observer saw coal and paper
simultaneously. Instantaneously, the coal “turned” black and the paper was seen as white.
Although one assumes that changes in neural activity underlie this effect of changed context, it
remains questionable whether there are “populations” involved, and if so, what the size of those
populations may be.
Adaptation.
Visual adaptation-- learning by experience to interpret visual stimulation in
different ways-- is a capacity well-documented through experimental work, but also personally
familiar to wearers of corrective lenses. A new lens prescription for eyeglasses usually takes
several days before vision seems completely normal, and the experience can be accompanied by
“swinging of the scene” as movements of images seen through the lens and at the periphery are
compared. Adaptation may be related to cues such as the feeling of the eyeglass frame pressing
on the nose. (After cataract surgery, I wear glasses when driving, to correct the myopia of my
unoperated eye. Initially I saw double when I looked into the side mirror. Now my experience is
normal as long as I sit in the car-- but if I get out while still wearing the glasses, I lose my
balance.)
Experimentally, adaptation can take place in a few hours for subjects wearing prism
lenses that shift all images to one side or make all images tilt by 10 or 15 degrees. Not only do
these prism-wearers report that their visual world recovers its normal appearance, but they show this in objectively measurable ways. Asked to position a dim light straight in front of them, or to
set an illuminated rod to the vertical position, they initially respond in ways that compensate for
the prism displacement, but after a few hours of activity while wearing the prisms, they make
more accurate placements. Taking off the prism lenses, they show temporary after-effects in
which they set straight-ahead or vertical as if objects appear to them to be displaced in the
opposite way from the original prism displacement.
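The time course just described-- shrinking error with the prisms on, a reversed after-effect when they come off-- is the classic signature of simple error-driven learning. The toy model below is my own sketch, with arbitrary parameters; it reproduces the shape of the effect and makes no claim about the actual neural mechanism.

```python
# A toy error-correction model of the prism-adaptation time course: pointing
# error shrinks while the prisms are worn, and removing them produces a
# temporary after-effect in the opposite direction. The learning rate and the
# prism shift are arbitrary assumptions, not measured values.

def simulate(shift_deg=10.0, rate=0.2, trials_on=30, trials_off=10):
    correction = 0.0          # internal remapping learned so far
    errors = []
    for t in range(trials_on + trials_off):
        prism = shift_deg if t < trials_on else 0.0   # prisms on, then off
        error = prism - correction                    # visible pointing error
        correction += rate * error                    # error-driven update
        errors.append(error)
    return errors

e = simulate()
print(f"first trial with prisms:   {e[0]:+.1f} deg")
print(f"last trial with prisms:    {e[29]:+.2f} deg")   # nearly adapted
print(f"first trial after removal: {e[30]:+.2f} deg")   # reversed after-effect
```

Because a model this simple captures both the rapid adaptation and the after-effect, no appeal to structural “rewiring” is needed to explain the prism findings.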
Because visual adaptation occurs quite quickly, and because it can easily be reversed, it
seems to be a matter of ordinary learning, rather than the brain “rewiring” suggested by Barry.
Adaptation may be a plausible alternative explanation for Barry’s much-increased ability to use
information from retinal disparity. Such an explanation might be supported by her anecdotal
reports of people whose new abilities diminished when they stopped doing the exercises, but
recovered when they began again.
Is There Non-Anecdotal Evidence for the Effect of Eye Exercises?
Whether or not there are plausible ways in which eye exercises that improve control of
co-ordinated eye movements might help depth perception and other visual abilities, the most
important way to evaluate “orthoptics” is through systematic tests of perceptual changes
following treatments by developmental optometrists. Barry’s book emphasizes individual
experiences to the almost complete exclusion of the kind of systematic investigation we normally
call “science”. The book’s index does not include the words “research”, “evidence”, or
“experiment”.
CIGNA HealthCare has declined to cover vision therapy treatments on the grounds that
they are considered “experimental, investigational, or unproven for the management of visual
disorders and learning disabilities” (CIGNA Medical Coverage Policy, 2008, p. 1). The company
states that “insufficient evidence exists in the published, peer-reviewed literature to conclude that vision therapy is effective for the treatment of any of the strabismic disorders except preoperative prism adaptation for acquired esotropia” (CIGNA, 2008, p. 3).
In its policy, Aetna provides coverage for some uses of vision therapy (Aetna Clinical Policy Bulletins, 2009) but considers it experimental and investigational for anomalous retinal correspondence, one of Susan Barry’s problems.
Most serious research on aspects of vision therapy has concentrated on its role in
treatment of amblyopia (lazy eye). A Cochrane review (Shotton & Elliott, 2008) reported three
randomized controlled trial studies on this subject, but said they showed no clear evidence of an
effect of near visual activity of the kind used in vision therapy protocols. One study which used
retrospective comparisons to case records reported that eye exercises did not appear to reduce
symptoms in patients with esophoria, a mild version of the “in-turned” eye position Susan Barry
had to cope with (Aziz, Cleary, Stewart, & Weir, 2006). A randomized controlled study using
several placebo and other treatments reported significant improvement using specific vision
therapy methods, but had excluded from the study strabismics and patients with a history of
strabismus surgery-- the category into which Susan Barry would fall (Convergence Insufficiency
Treatment Trial (CITT) Study Group, 2008). None of these high-quality investigations appear to have used the Brock protocol which Barry advocates.
In Conclusion
The idea that many aspects of vision can be corrected by training goes back a long way.
Erasmus Darwin, grandfather of Charles, described in the Transactions of the Royal Society a 5-
year-old who used one eye only, turning his head so that images fell on the blind spot of the
other eye. Darwin proposed that the child wear a large false nose that would force use of the
problem eye by occluding the view of the good eye (King-Hele, 1999). As I noted earlier, there
are a number of plausible mechanisms for improvement of visual skills under this type of
regimen. However, Darwin found that the visual ability of his patient got worse in spite of his
efforts, and modern vision therapists have failed to present evidence that their more complicated
methods are any more effective.
One cannot argue with Barry’s subjective experiences and her sense that improved eye
co-ordination opened a new world of visual excitement as well as improved visually-related
skills. Whether others can benefit equally from vision therapy remains questionable, however,
and it is regrettable that Barry’s book, while stressing scientific facts, nevertheless fails to model
scientific thought for its readers. As has occurred before (Linus Pauling and Niko Tinbergen are
unfortunate examples), expertise in one area of science does not seem to guarantee critical
thinking on other topics. However, the general public, and even many more sophisticated
readers, are likely to accept questionable conclusions like Barry’s on the basis of her training in
biology combined with her personal experience. Indeed, Barry’s book was selected as a Library
Journal Best Sci-Tech Book of 2009. Certainly, it is in many ways a “good” book, vividly
written, with a suitable balance between subjective and objective material, and with an
informative discussion of neuroscience issues. But Barry has not shown her readers how to take
an investigative sip at the developmental optometry Koolaid. Instead, she drains the pitcher and offers packets for others to mix up.
References
Aetna Clinical Policy Bulletins: Vision Therapy. (2009). Retrieved on Aug. 31, 2009 from http://www.aetna.com/cpb/medical/data/400-499/0489.html.
Aziz, S., Cleary,M., Stewart, H.K., & Weir, C.R. (2006). Are orthoptic exercises an effective treatment for convergence and fusional deficiencies? Strabismus, 14(4), 183-189.
Barry, S. (2009). Fixing my gaze. New York: Basic.
Blakemore, C. (1989). Principles of development in the nervous system. In C. von Euler, H.Forssberg, & H. Lagerkrantz (Eds.), Neurology of early infant behavior (pp. 7-18). New York: Stockton Press.
CIGNA Medical Coverage Policy (2008). Retrieved on Aug. 30, 2020 from http://www.cigna.com/customer_care_professional/medical/mna_0221_coveragepositioncriteria_vision_therapy_orthoptics.pdf.
Convergency Insufficiency Treatment Trial (CITT) Study Group. (2008). The Convergence Insufficiency Treatment Trial: Design,methods, and baseline data. Ophthalmic Epidemiology, 15, 24-36.
Gregory, R.L. (1997). Eye and brain. Princeton: Princeton University Press.
Hubel, D., & Wiesel, T. (1965). Binocular interaction in striate cortex of kittens reared with artificial squint. Journal of Neurophysiology, 28, 1041-1059.
King-Hele, D.(1999). Erasmus Darwin. London: Giles de la Mare.
Shotton, K., & Elliott, S. (2008, Issue 2). Intervention for strabismic amblyopia. Cochrane Database of Systematic Reviews. Art. No.: CD006461. DOI: 10.1002/14651858.CD006461.pub2.
Worrall, R.S., Nevyas, J., & Barrett, S. (2009). Eye-related quackery. Retrieved Aug. 26, 2009, from www.quackwatch.com/01Quackery/RelatedTopics/eyequack.html.
small but intrusive disability-- strabismus, or “crossed eyes”, with its common effect, poor depth
perception. Barry recounts the events of her infancy, when her strabismus developed; the
surgeries that cosmetically corrected the condition without entirely dealing with problems of
vision; her awkward school days; difficulty with driving and other tasks requiring judgment of
depth; and so on until a behavioral treatment, she says, transformed her life. She describes
her early and recent experiences vividly, even poetically, and intersperses scientific discussions
with anecdotes that serve as the “spoonful of sugar that helps the medicine go down”. But she
seems to forget her training in neuroscience when she should be thinking critically about the
effectiveness of treatments. Regrettably, this rather charming and informative book serves as an
extended advertisement for “developmental optometry” or “orthoptics”, a program that claims to
correct some problems of vision by eye exercises that increase control of convergence and
divergence (co-ordinated eye movements that are needed for ideal visual ability). Barry notes
that she excludes from consideration self-help orthoptic methods like the Bates method, which
were critiqued by Worrall, Nevyas, and Barrett in 2009. However, the method she advocates, the
Brock method, appears to have as little plausibility and as weak an evidence basis as the others
do. In addition, Barry speaks of her improved visual functioning as due to “rewiring” of the
brain, and, like many others who resort to this inapt “wiring” metaphor, suggests that high levels
of juvenile brain plasticity persist throughout life. It’s possible that they do, but Barry’s examples
may be more parsimoniously explained by reference to the less dramatic brain changes we call
“learning”.
Barry’s experience of strabismus and its consequences was not unusual, but her book is
unique in providing an understanding of the subjective experience of a strabismic. She notes her
early problems with reading and later problems with looking into the distance and with driving
confidently-- these in spite of excellent acuity of vision in each eye tested separately.
Nevertheless, she did drive, used a stereomicroscope, played tennis, and did not have any sense
of missing the experience of depth. Indeed, because she could use monocular depth cues
(information that can come from one eye rather than needing both), she did have some ability to
judge depth. As far as is known, though, she could not experience the very clear and accurate
sense of distance that comes from using retinal disparity, or a comparison of images as they
occur simultaneously at the two eyes (the reason for this will be discussed a little later). In
middle age, she began to experience shifts or “jiggling” of vision to such an extent that she
consulted a number of specialists. Following treatment by a developmental optometrist, who
prescribed “orthoptic” or “vision therapy” eye exercises, she reports that she began to have vivid
experiences of depth and improved her confidence and skill in driving and other tasks needing
distance judgments. Barry attributes her improved visual skills to a type of vision therapy, a
treatment category that has been defined as “a proposed optometric treatment for developing
efficient visual skills and processing… as a treatment for accommodative disorders, amblyopia,
binocular disorders (strabismic and nonstrabismic), learning disabilities, and ocular motility
disorders” (CIGNA Medical Coverage Policy, 2008, p. 1).
Although there is much that is valuable in Barry’s description of her subjective visual
experience, it is notable that little in the way of objective measures can be retrieved from her
early life. She provides in endnotes some objective measures made at the beginning and at the
end of her treatment, a period of 7 years. These indicate improvements in co-ordinated control of
eye movements. However, at no time does there seem to have been measurement of actual
judgment of depth, using the simple Howard-Dolman apparatus so familiar to past generations
of psychology students. This device allows an observer to look through a small window into an
illuminated box in which there are two vertical rods. A system of strings allows the observer to
pull each rod backward or forward until the two appear to be side-by-side, the same distance
away. Normally, people do this poorly when using one eye at a time, and very well when they
use both eyes, and the difference between the monocular and binocular conditions shows how
well the eyes are used in coordination and how well retinal disparity is taken into account.
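The logic of the two-rod task can be made concrete with a little arithmetic. As a rough sketch of the geometry (my illustration, not the book's; the 63 mm interpupillary distance is an assumed typical value), the disparity between two rods at slightly different distances is approximately the difference of the vergence angles they require:

```python
import math

def disparity_arcsec(ipd_m, d1_m, d2_m):
    # Small-angle approximation: the binocular disparity between objects
    # at distances d1 and d2 is about ipd * (1/d1 - 1/d2) radians;
    # convert to arcseconds for comparison with stereoacuity thresholds.
    eta_rad = ipd_m * (1.0 / d1_m - 1.0 / d2_m)
    return math.degrees(eta_rad) * 3600.0

# Two rods at 6.00 m and 6.05 m, assumed interpupillary distance of 63 mm:
print(round(disparity_arcsec(0.063, 6.00, 6.05), 1))  # about 17.9 arcseconds
```

A disparity of this size is well above normal binocular thresholds, which is why observers with working stereopsis align the rods far more accurately with two eyes than with one.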
To examine the claims Barry makes on behalf of developmental optometrists, we need to
consider the plausibility of the claimed “rewiring” mechanism, the possibility of alternative
mechanisms, and the empirical evidence that the treatment Barry received was an effective
treatment for strabismic problems.
Is It Plausible That Orthoptic Treatment Could Cause “Rewiring”?
Barry suggests that she had been unable to use retinal disparity to judge distance, that this
was impossible because of the absence of binocular cells in the visual cortex, and that orthoptic
exercises allowed her to develop binocular cells receiving information from both eyes at once.
To discuss these issues, we need first to consider the question of plasticity. This term refers to
the extent to which development is guided by environmental stimulation. If a characteristic is
pretty well determined by heredity no matter what stimulation occurs (e.g., eye color), plasticity
is low. If it were possible to “rewire” all sorts of brain structures, as Barry implies, very high
plasticity would be present. But some aspects of development show high plasticity only during a
certain period of life (experience-expectant plasticity), while others can be guided by the
environment throughout life (experience-dependent plasticity). Quick language learning in early
life is an example of experience-expectant plasticity, and our slow, plodding acquisition of
vocabulary words later on is an example of experience-dependent plasticity.
The use of retinal disparity is usually considered to be a matter of experience-expectant
plasticity. As Barry points out, babies in the first months do not use their eyes together, but
switch attention from one to the other. By about 6 months, they are moving the eyes together.
This causes the image of an object they are looking at to fall on matching areas (corresponding
points) on the left and on the right retina. Messages are sent to the visual cortex simultaneously
from the two stimulated areas, and they cause activity in a single neuron which “lights up” only
when it gets a message from both eyes. This binocular neuron’s activity indicates that the two
eyes are looking at the same object in the same place, rather than two different objects, one seen
by the right and one by the left eye.
Repeating this experience many times makes the binocular neuron more responsive and
strengthens the connection between retinal areas and their associated neurons. But covering one
of the baby’s eyes for as little as a week (perhaps because of an injury), or a big difference in
the clearness of the images at the two eyes, can prevent the development of the connection with
the binocular neuron, and can even reduce the number of neurons from a single eye to the brain
until that eye is functionally blind. Similarly, if the baby is cross-eyed, and if the eyes often do
not co-ordinate, binocular neurons would not be likely to make normal connections. Studies of
cats and monkeys, by Hubel and Wiesel (1965), showed that animals deprived of co-ordinated
visual experiences in early life would develop abnormal use of their eyes and poor depth
perception. Another relevant point is that at about 8 months human babies go through a period of
exuberant synaptogenesis-- they create many synaptic connections between neurons, and indeed
have more synapses than they will ever have again in their lives. But some months later they
begin a process called “pruning”, in which unused synapses are destroyed and neurons
themselves disappear through programmed cell death (Blakemore, 1989).
All these points challenge the plausibility of a continuing plasticity of the visual cortex
that would allow for major changes during later adulthood. However, it would not be wise to
assume that we can completely reject the existence of such plasticity. The animal work by Hubel
and Wiesel is the only experimental evidence we have to depend on, and there are two problems
with generalizing from it. One is that Hubel and Wiesel had to kill the animals in order to
examine most of the brain features that interested them, and this was done fairly soon, so there
was no opportunity to see whether functions would be recovered or whether neural connections
would shift back to what they had been. Generalizing from non-humans to humans does not
necessarily give accurate answers about humans, either; even very similar species may differ in
important respects. A critical period, or time when experience had to occur in order to have an
effect, may have been present for cats, even for monkeys, but may not be the case for human
beings. If human beings are different in this way, it might be plausible that later experience could
affect the use of retinal disparity, although the other points made earlier suggest it is not. The
available information about human beings comes from uncontrolled studies showing the loss of
acuity in an eye following strabismus or patching of an eye in infancy. Rare reports of humans
who have had life-long cataracts removed in adulthood indicate that these people have not
recovered normal vision in spite of therapeutic and educational efforts (Gregory, 1997), but it is
hard to know whether visual problems were caused by the strabismus or cataract experience, or by additional factors that caused the original problem.
Is It Plausible That Other Mechanisms Could Be Affected by Eye Exercises?
Barry stresses the idea that the eye exercises she did might have altered the visual cortex
and other brain areas, perhaps through changes in long-term potentiation and consequent
improvement in the responsiveness of binocular neurons. However, there are other aspects of
vision that relate to retinal disparity; Barry mentions these, but returns to brain “rewiring” as her
preferred explanation. Given the arguments against persistence of early plasticity of the brain,
though, it is important to keep in mind that there are other plausible mechanisms by which
improved control over eye movements could cause better depth perception. These include the use
of the horopter for comparison of distances, constancy mechanisms, and visual adaptation.
Using the horopter.
How do people with a history of normal vision use retinal disparity?
The stimulation of a binocular neuron is not the only factor to be considered when we think
about judgment of depth using both eyes.
Older children or adults who have a normal visual history, and who have useful binocular
neurons, employ retinal disparity as their best way of judging distances. They can fixate (“fix”
the gaze) with both eyes on a single object, so the images at right and left eye fall on
corresponding points and activate the appropriate binocular neuron. This provides a single
“fused” image rather than separate images for each eye (double vision), and the fusion is
accompanied by a sense of depth. When this happens, however, the fixated object is not
necessarily the only one for which a single image is seen. Any object lying on a particular
imaginary curved surface passing through the fixated object also has its images at the
right and left eye fall on corresponding points, and is seen as a single object with fused images.
This imaginary surface, the horopter, shifts its position as the individual fixates objects at
different distances. Wherever the horopter may be, though, it determines which objects are seen
singly and which ones have double images. All objects that are off the horopter are experienced
as double images, but those that are very close to the horopter overlap so much that it is almost
as if they were fused. The farther the object from the horopter, the less the images overlap-- no
matter whether the object is on the same side of the horopter as the observer (near her) or on the
opposite side (far away). The overlapping or less-overlapping nature of the double images
provides information about depth.
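The geometry just described can be sketched numerically. In this illustrative fragment (mine, not Barry's; the interpupillary distance is again an assumed value), disparity measured relative to the horopter is zero at the fixation distance, positive for nearer objects, negative for farther ones, and larger in magnitude the farther the object lies from the horopter:

```python
def relative_disparity(ipd_m, fix_m, obj_m):
    # Signed disparity (radians, small-angle approximation) of an object
    # relative to the current fixation distance: zero on the horopter,
    # positive ("crossed") for objects nearer than it, and negative
    # ("uncrossed") for objects beyond it.
    return ipd_m * (1.0 / obj_m - 1.0 / fix_m)

ipd = 0.063
for d in (0.5, 1.0, 2.0, 4.0):   # fixating at 1 m
    print(d, round(relative_disparity(ipd, 1.0, d), 4))
```

One property worth noticing: the magnitude is bounded for far objects (it can never exceed the interpupillary distance divided by the fixation distance) but grows without limit for very near ones, which is one reason stereoscopic depth information is richest at close range.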
Why don’t we consciously experience all these double images? We don’t pay attention to
them. We only pay attention to the object we are “looking at”. But we are able to pay attention
to double images by voluntary efforts. For example, when you are driving, you pay attention to
the road, or perhaps to your speedometer or other dashboard instrument. However, your hands on
the steering wheel are probably within your visual field. If you attend to them while still looking
ahead, you may notice that they look large; glance down at a hand, and it seems of normal size
again. Why? When the hand is not at the horopter, you see an overlapping doubled image--
larger than the single image by the amount that does not overlap. Fixating the hand gives you a
single image (assuming that you’re not strabismic) whose size is determined by the distance of
the hand from the eye, not by an extra image.
A person with a history of strabismus-- especially varying amounts of strabismus
produced by repeated surgical interventions-- may have learned that double images have no
reliable relation to the horopter, and learned to exclude them from consideration as depth
judgments are made. But it is not implausible that practice of eye movements, and increased
attentiveness to double images, could enable an adult to develop skill at comparing distances
with information about double or single images. Barry herself refers to a broadening of the part
of the visual field she actually pays attention to, a change that would help the observer notice
images in the periphery.
Constancy and context.
Barry gives only slight attention to an essential aspect of
perception: the capacity for constancy. Constancy is the powerful tendency to experience objects
in the world as remaining the same, in spite of the continual changes in the ways they stimulate
our sense organs. Constancy is most obvious in visual perception. We see a square object as
retaining the same shape and size even though as we move relative to it its image becomes larger
or smaller and the image shape alters through a range of trapezoids. (Indeed, there are probably
few circumstances in which a square object creates a square image on the retina.)
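The size half of this observation is easy to quantify. In this hypothetical sketch (mine, not from the book), the visual angle an object subtends shrinks roughly in proportion to its distance, even though constancy keeps the object's apparent size fixed:

```python
import math

def visual_angle_deg(size_m, distance_m):
    # Visual angle subtended by an object of a given size at a given
    # distance: 2 * atan(size / (2 * distance)).
    return math.degrees(2.0 * math.atan(size_m / (2.0 * distance_m)))

# A 1.7 m person seen at 2 m and then at 8 m: the angular size drops to
# roughly a quarter, but the person does not appear to shrink.
near = visual_angle_deg(1.7, 2.0)
far = visual_angle_deg(1.7, 8.0)
print(round(near, 1), round(far, 1), round(near / far, 2))
```

Constancy amounts to the visual system dividing this angular size back out, using contextual information about distance.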
Although the neural foundation of constancy is not well understood, it is clear that this
ability involves context. Shapes, sizes, and other aspects are judged in the context of a complex
surrounding visual field. Look at the square object through a tube that excludes the rest of the
field, and you see it as trapezoidal, not square. Similarly, a person who walks away from you
does not seem to shrink, but look through the tube and you will see that the image is much
smaller than before. Even when there is no movement of the object or the observer, constancy is
needed to overcome the effects of involuntary eye, head, and body movements, which make
images move across the retina.
Barry reports that her visual experience involved jiggling or shifting of the field, that she
had trouble recognizing where she was when driving, and that hawks went too fast for her to
count them on a bird-watching expedition. While other visual abilities are needed for keeping
the field still and so on, constancy also plays a role in these visual tasks. Constancy requires
some attention to the visual field surrounding the object being fixated. It is plausible that practice
and increased control over eye movements would enable an observer to improve constancy and
minimize some of the disturbing apparent movements or other problems with the fixated object.
Barry calls attention to sudden changes in three-dimensional vision, as well as in other
aspects of visual experience, and attributes these to a “sudden and global change in brain state, a change in the activity of whole populations of neurons” (Barry, 2009, p. 236). Yet constancy
mechanisms are known for producing “flip-flops” of perceptual change in apparent distance,
size, brightness, and so on. Decades ago, the perception psychologist Adhemar Gelb
demonstrated abrupt changes in the perceived nature of a visual stimulus. He placed a piece of
coal so that it was illuminated by a spotlight that did not light any other part of the room.
Observers described the brightly-lit coal as appearing white. Gelb then introduced into the
spotlight beam a white piece of paper, holding it so that the observer saw coal and paper
simultaneously. Instantaneously, the coal “turned” black and the paper was seen as white.
Although one assumes that changes in neural activity underlie this effect of changed context, it
remains questionable whether there are “populations” involved, and if so, what the size of those
populations may be.
Adaptation.
Visual adaptation-- learning by experience to interpret visual stimulation in
different ways-- is a capacity well-documented through experimental work, but also personally
familiar to wearers of corrective lenses. A new lens prescription for eyeglasses usually takes
several days before vision seems completely normal, and the experience can be accompanied by
“swinging of the scene” as movements of images seen through the lens and at the periphery are
compared. Adaptation may be related to cues such as the feeling of the eyeglass frame pressing
on the nose. (After cataract surgery, I wear glasses when driving, to correct the myopia of my
unoperated eye. Initially I saw double when I looked into the side mirror. Now my experience is
normal as long as I sit in the car-- but if I get out while still wearing the glasses, I lose my
balance.)
Experimentally, adaptation can take place in a few hours for subjects wearing prism
lenses that shift all images to one side or make all images tilt by 10 or 15 degrees. Not only do
these prism-wearers report that their visual world recovers its normal appearance, but they show this in objectively measurable ways. Asked to position a dim light straight in front of them, or to
set an illuminated rod to the vertical position, they initially respond in ways that compensate for
the prism displacement, but after a few hours of activity while wearing the prisms, they make
more accurate placements. Taking off the prism lenses, they show temporary after-effects in
which they set straight-ahead or vertical as if objects appear to them to be displaced in the
opposite way from the original prism displacement.
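This adapt-then-aftereffect pattern is just what a simple error-correction learning process would produce. The following toy model is purely my illustration-- the learning rate and trial count are arbitrary, and no claim is made that the visual system literally computes this way:

```python
def adapt(n_trials, prism_shift_deg, rate=0.2):
    # Toy error-driven recalibration: on each reach, the visible error
    # (prism shift minus the current compensation) nudges an internal
    # offset a fraction of the way toward full compensation.
    offset = 0.0
    for _ in range(n_trials):
        error = prism_shift_deg - offset
        offset += rate * error
    return offset

compensation = adapt(20, 10.0)   # adapting to 10-degree displacing prisms
print(round(compensation, 2))    # approaches 10 degrees
# Remove the prisms: the learned offset remains, so responses now err in
# the opposite direction until it is unlearned-- the after-effect.
print(round(0.0 - compensation, 2))
```

Nothing in the model requires structural change; the offset is just an adjustable parameter, acquired and discarded at the pace of ordinary learning.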
Because visual adaptation occurs quite quickly, and because it can easily be reversed, it
seems to be a matter of ordinary learning, rather than the brain “rewiring” suggested by Barry.
Adaptation may be a plausible alternative explanation for Barry’s much-increased ability to use
information from retinal disparity. Such an explanation might be supported by her anecdotal
reports of people whose new abilities diminished when they stopped doing the exercises, but
recovered when they began again.
Is There Non-Anecdotal Evidence for the Effect of Eye Exercises?
Whether or not there are plausible ways in which eye exercises that improve control of
co-ordinated eye movements might help depth perception and other visual abilities, the most
important way to evaluate “orthoptics” is through systematic tests of perceptual changes
following treatments by developmental optometrists. Barry’s book emphasizes individual
experiences to the almost complete exclusion of the kind of systematic investigation we normally
call “science”. The book’s index does not include the words “research”, “evidence”, or
“experiment”.
CIGNA HealthCare has declined to cover vision therapy treatments on the grounds that
they are considered “experimental, investigational, or unproven for the management of visual
disorders and learning disabilities” (CIGNA Medical Coverage Policy, 2008, p. 1). The company
states that “insufficient evidence exists in the published, peer-reviewed literature to conclude that vision therapy is effective for the treatment of any of the strabismic disorders except preoperative prism adaptation for acquired esotropia” (CIGNA, 2008, p. 3). In its policy, Aetna provides coverage for some uses of vision therapy (Aetna Clinical Policy Bulletins, 2009) but considers it experimental and investigational for anomalous retinal correspondence, one of Susan Barry’s problems.
Most serious research on aspects of vision therapy has concentrated on its role in
treatment of amblyopia (lazy eye). A Cochrane review (Shotton & Elliott, 2008) reported three
randomized controlled trial studies on this subject, but said they showed no clear evidence of an
effect of near visual activity of the kind used in vision therapy protocols. One study which used
retrospective comparisons to case records reported that eye exercises did not appear to reduce
symptoms in patients with esophoria, a mild version of the “in-turned” eye position Susan Barry
had to cope with (Aziz, Cleary, Stewart, & Weir, 2006). A randomized controlled study using
several placebo and other treatments reported significant improvement using specific vision
therapy methods, but had excluded from the study strabismics and patients with a history of
strabismus surgery-- the category into which Susan Barry would fall (Convergence Insufficiency
Treatment Trial (CITT) Study Group, 2008). None of these high-quality investigations appear to have used the Brock protocol which Barry advocates.
In Conclusion
The idea that many aspects of vision can be corrected by training goes back a long way.
Erasmus Darwin, grandfather of Charles, described in the Philosophical Transactions of the Royal Society a 5-
year-old who used one eye only, turning his head so that images fell on the blind spot of the
other eye. Darwin proposed that the child wear a large false nose that would force use of the
problem eye by occluding the view of the good eye (King-Hele, 1999). As I noted earlier, there
are a number of plausible mechanisms for improvement of visual skills under this type of
regimen. However, Darwin found that the visual ability of his patient got worse in spite of his
efforts, and modern vision therapists have failed to present evidence that their more complicated
methods are any more effective.
One cannot argue with Barry’s subjective experiences and her sense that improved eye
co-ordination opened a new world of visual excitement as well as improved visually-related
skills. Whether others can benefit equally from vision therapy remains questionable, however,
and it is regrettable that Barry’s book, while stressing scientific facts, nevertheless fails to model
scientific thought for its readers. As has occurred before (Linus Pauling and Niko Tinbergen are
unfortunate examples), expertise in one area of science does not seem to guarantee critical
thinking on other topics. However, the general public, and even many more sophisticated
readers, are likely to accept questionable conclusions like Barry’s on the basis of her training in
biology combined with her personal experience. Indeed, Barry’s book was selected as a Library
Journal Best Sci-Tech Book of 2009. Certainly, it is in many ways a “good” book, vividly
written, with a suitable balance between subjective and objective material, and with an
informative discussion of neuroscience issues. But Barry has not shown her readers how to take
an investigative sip at the developmental optometry Kool-Aid. Instead, she drains the pitcher and offers packets for others to mix up.
References
Aetna Clinical Policy Bulletins: Vision Therapy. (2009). Retrieved on Aug. 31, 2009 from http://www.aetna.com/cpb/medical/data/400-499/0489.html.
Aziz, S., Cleary, M., Stewart, H.K., & Weir, C.R. (2006). Are orthoptic exercises an effective treatment for convergence and fusional deficiencies? Strabismus, 14(4), 183-189.
Barry, S. (2009). Fixing my gaze. New York: Basic.
Blakemore, C. (1989). Principles of development in the nervous system. In C. von Euler, H. Forssberg, & H. Lagerkrantz (Eds.), Neurology of early infant behavior (pp. 7-18). New York: Stockton Press.
CIGNA Medical Coverage Policy (2008). Retrieved on Aug. 30, 2009 from http://www.cigna.com/customer_care_professional/medical/mna_0221_coveragepositioncriteria_vision_therapy_orthoptics.pdf.
Convergence Insufficiency Treatment Trial (CITT) Study Group. (2008). The Convergence Insufficiency Treatment Trial: Design, methods, and baseline data. Ophthalmic Epidemiology, 15, 24-36.
Gregory, R.L. (1997). Eye and brain. Princeton: Princeton University Press.
Hubel, D., & Wiesel, T. (1965). Binocular interaction in striate cortex of kittens reared with artificial squint. Journal of Neurophysiology, 28, 1041-1059.
King-Hele, D. (1999). Erasmus Darwin. London: Giles de la Mare.
Shotton, K., & Elliott, S. (2008). Interventions for strabismic amblyopia. Cochrane Database of Systematic Reviews, Issue 2. Art. No.: CD006461. DOI: 10.1002/14651858.CD006461.pub2.
Worrall, R.S., Nevyas, J., & Barrett, S. (2009). Eye-related quackery. Retrieved Aug. 26, 2009, from www.quackwatch.com/01Quackery/RelatedTopics/eyequack.html.