Thursday, November 12, 2009
American Philosophical Association
Nevertheless, it makes me feel a little further removed from the mainstream of philosophy. I imagine I'll return to the fold in a few years.
Tuesday, December 30, 2008
SEP: Murphy on Concepts of Disease and Health
It's an excellent survey of the issues, and it inevitably discusses psychiatry, since most of the controversial cases concern mental illness. He sets up the debate by distinguishing between Objectivist and Constructivist views of disease. On this divide, Boorse is an objectivist and most other people (such as Cooper, Wakefield, and Reznek) are constructivists. I'm not a fan of the terminology: I think it is more helpful to distinguish between those who think that the concept of disease is intrinsically value-laden and those who don't. It also lumps together people who have quite different views, but that's just about inevitable in an encyclopedia article. The article's strengths are its connection of the issue with the philosophy of biology, its discussion of the nature of functions, and its treatment of the problems faced by the two sides.
On Murphy's view, the main problem faced by the Objectivists is in providing a scientific basis for the distinction between normal and abnormal. For Constructivists, the main problem is in justifying any significant distinction between medical and other forms of undesirable conditions. It seems relatively clear that it would be hard to provide any general justification for our present conception of what counts as a disease or medical condition, and that if we were to make our conceptual scheme for medicine more rational, we would have to redraw our existing conceptions of disease.
Friday, December 12, 2008
Buss on Autonomy in SEP

How can we be so confident about our own autonomy if we have not worked out the details of our theory of autonomy? More to the point, in the case of people with addictions, compulsions, and even delusions, how can we be sure that they lack autonomy if we haven't worked out our theory of autonomy? The answer must be that there are broad features of autonomous action that we can identify even if we haven't worked out the theoretical details. We can tell if a car is working well or broken down even if we don't know how the engine works. Agents can be deprived of their autonomy by brainwashing, depression, anxiety, fatigue; they can succumb to compulsions and addictions. To what, exactly, are we calling attention when we say that, under these conditions, an agent does not govern herself, even if she acts as she does because she thinks she has sufficient reason to do so, even if she has (thoroughly) considered the pros and cons of her options, and has endorsed her behavior on this basis, and even if she would have acted differently if there had been stronger reason to do so? Most agents who are capable of asking this question are confident that they are the authors of most of their actions, and are thus accountable for what they do. Nonetheless, as this brief survey indicates, the self-relation they thereby attribute to themselves is extremely difficult to pin down.
But aren't the broad features then all we need for a satisfactory theory of autonomy to make the distinctions we need it to make, at least with regard to working out who is autonomous and who is not? Do we need to sort out the details of the debates between coherentists and externalists, or how agents authorize their desires, if sorting this out does not help us make the distinctions we want to make? Even further, can't we conclude that whatever these debates achieve, they don't really tell us more about what autonomy is? We might use the car example: we can understand the concept of a functioning car without knowing how the engine works, and furthermore, knowing how the engine works does not add anything to our concept of a functioning car. To be sure, it is useful for other purposes, but not for the basic use of the concept of a functioning car. So with autonomy: the sophisticated debates about self-relations are interesting in their own terms, but they don't tell us more about what we mean by autonomy.
I'm not sure I accept this conclusion, but it certainly is tempting.
Sunday, November 16, 2008
Review of The Illusion of Conscious Will by Daniel Wegner
DANIEL M. WEGNER. The Illusion of Conscious Will. Cambridge, Massachusetts and London, England: Bradford Books, The MIT Press, 2002. Pp. xi+405. (Cloth: ISBN 0-262-23222-7.)
Daniel Wegner, Professor of Psychology at Harvard University, has devoted much of his career to understanding the nature of self-control and its limitations. He is perhaps best known to the general public as author of White Bears and Other Unwanted Thoughts (New York: Viking Press, 1989) that summarized research on the difficulty we have in controlling the contents of our thoughts. His new book, The Illusion of Conscious Will, branches out into the realm of philosophy, and surveys a wide range of phenomena and experimental work relevant to agency. It has chapters on neurophysiology, phenomenology, automatism (including automatic writing, Ouija boards, water divining, and dissociative personality), protecting the illusion of conscious agency (including posthypnotic suggestion, confabulation in split-brain patients, phenomena of 'alien control' in schizophrenia), projection of agency (including beliefs in intelligent horses and pigs, and facilitated communication), virtual agency (including possession, mediumship, multiple personalities), hypnosis, and a final chapter on the importance of our beliefs in free will and authorship. These chapters are for the most part sprawling and unfocused. Wegner examines many topics and gives his opinion of how best to interpret them, but the book is full of unsubstantiated interpretations. It is often unclear whether the facts he presents are meant to serve as evidence for his main thesis about the illusion of free will, whether they are meant to be consequences of his view, or whether they are merely interesting phenomena that are tangentially related to his main theme. There is repetition of ideas from chapter to chapter, but often the examination of particular topics is cursory. In short, the book reads like a rough draft rather than a finished version.
Wegner's writing style is often casual and he peppers his text with jokes and asides. There are many illustrations, from diagrams explaining his views about the will and setting out details of scientific experiments to drawings of mesmerism, a reproduction of an advertisement for the "hypno-coin," and a photograph of Peter Sellers in the role of Dr. Strangelove. One might hope this would make the book more readable, but instead, the book fails to be either good scholarship or popular psychology, and is likely to leave both academic philosophers and psychologists and the general reader unsatisfied.
Wegner's main claim is that "the experience of consciously willing an action is not a direct indication that the conscious thought has caused the action" (2). He defines will as a feeling (3) and says (apparently by way of explicating our concept of will) that an action is not willed if the person says it is not (4) – ignoring the possibility of error or deception on the part of the agent. Wegner then makes a great deal of cases of people who perform actions with no apparent experience of willing them, but he makes no effort to prove that his initial definition of will is satisfactory or that it is a conceptual truth that will is a feeling. It remains open to a defender of free will to argue that our knowledge of willing is defeasible, and so that Wegner's many cases of action without awareness of willing fail to prove that the will is an illusion.
A potentially useful distinction Wegner makes is between the phenomenal will – the person's reported experience of will – and empirical will – the causality of the person's conscious thoughts as established by a scientific analysis of their covariation with the person's behavior (14). At times, Wegner's main thesis seems to be the modest one that the phenomenal will and the empirical will are not the same, rather than a denial of the existence of will. He says we accept a simple explanation of our behavior, "We intended to do it, so we did it" and we do not see the physical and mental processes that go to make up the empirical will (27). However, Wegner never makes a strong case that the phenomenal will is indeed generally incompatible with the empirical will, and the claim is prima facie implausible. The common sense psychology of ordinary folk assumes that the empirical will and the phenomenal will are different, and that the former explains the latter.
The most interesting argument for the illusory nature of conscious will stems from the research of Libet and others on the timing of conscious awareness of willing relative to the action performed. The awareness of willing a finger movement occurs after the neurophysiological activity that leads to the movement, and this suggests the awareness is causally irrelevant to the action. Wegner concludes from such experiments that "consciousness is kind of a slug" (58). He seems oblivious to the need to be very careful about the interpretation of the experimental data and the risk in generalizing from such specialized experimental conditions to ordinary life. Suffice it to say, he casts very little doubt on the ordinary supposition that through deliberating about our lives we can often decide what is best and then act on our decision.
The remainder of the book provides a wealth of fascinating cases where a person's agency is contestable. Especially provocative is Wegner's claim that the experience of conscious will occurs only when conscious thoughts are (mistakenly) seen as causing perceived actions. Philosophers new to the psychological literature on the will should find the bibliography an excellent resource for further research, and Wegner's work makes a strong case that the psychological literature deserves attention from philosophers working on freedom of the will and personal autonomy. The central failing of Wegner's argument is that he attributes to defenders of the will implausible beliefs about the nature of will and its role in agency. When he proceeds to show how the experimental data are incompatible with those beliefs, the implications are not as significant as he claims. Psychology has shown how humans tend to be less rational than we like to suppose, and there are many cases where our self-understanding is limited. However, just as the claims of psychoanalysis and behaviorism to undermine our central beliefs in our self-control have in the past been shown to be overblown, so too Wegner's use of modern cognitive and social psychology to undermine our belief in conscious will is ultimately unpersuasive.
Christian Perring
Dowling College
Saturday, November 15, 2008
Worries about Psychotropic Medication: A Philosophical Guide
My aim in this paper is to explore the reasons people may have for being worried about the widespread use of psychotropic drugs, especially in cases where the drugs are not medically necessary. I will not take a strong position for or against this use, although I will express some skepticism towards some of the objections that have been made against the use of antidepressants. My motive for writing this paper is to try to encourage discussion of the issues, and in particular to get people who are against this use of medication to better articulate their reasons.
My approach here is solidly “bottom-up” as opposed to the alternative “top-down” approach of starting with a sophisticated theoretical view and showing how it applies to particular cases, an approach which I find to be of limited value. I will take some simple ideas and intuitions and try to develop them a little. Thus I will not start out with any discussion of great philosophers such as Aristotle, Hume, Kant, or Wittgenstein. Nor will I assume the truth of any fully worked-out theories of nature, autonomy, or the good life. I will refer to some well-known philosophical theories along the way, and I may argue that some have more plausibility than others, but my aim here is to be basically neutral in my stance among the various philosophical approaches that are relevant to this issue.
The Rise in Psychotropic Medication: Prozac and Ritalin
It is uncontroversial that there has been a massive increase in the use of psychotropic drugs. Eli Lilly boasts that 40 million people in 100 countries have taken Prozac. This may be due to an increase in mental illness:
Millions of us are falling prey to what is now identified as a disease. Five million of us each year have some sort of depressive illness that would justify medical intervention. That's not much less than a tenth of the population. A third of those who go to the GP have underlying depression. The young, with the world ahead of them, should have the blithest hearts. Yet 12% of male students and 15% of female students at university are depressed. Yesterday, meanwhile, it emerged that university counsellors are reporting a dramatic increase in the number of students seeking help for severe mental problems. Just over a year ago, the World Health Organisation declared that depression had reached epidemic proportions. Within 20 years, the WHO said, it would be the world's second most debilitating illness after cardiovascular disease in terms of lost years of human productivity. [1]
The rise in psychiatric medication is especially noticeable among children. Between 1992 and 1998, in one study:
Prescription prevalence in school-aged children 6 to 14 years increased from 4.4% to 9.5% for stimulants during the study period, and from 0.2% to 1.5% for SSRIs. In 1998, stimulant prescription prevalence was highest for white school-aged males (18.3%) vs black females (3.4%) and SSRI prescription prevalence was highest for white school-aged males (2.8%) vs black females (0.6%).[2]
Furthermore, young people often use stimulants without a prescription.
Among the findings from a soon-to-be-published Massachusetts Department of Public Health survey: 13 percent of 6,000 high-school students and 4 percent of middle-school students admitted to an “illicit, unprescribed use” of Ritalin in anonymous, written surveys.[3]
They might do this recreationally, because they enjoy the experience, or because it helps them study better, as often happens with students taking caffeine tablets. In this paper I am especially interested in the use of drugs to enhance one’s capabilities and to make up for one’s deficits, although if there are objections to the use of medication in these cases, they may also apply to cases in the gray area between illness and health, and even to the treatment of clear mental illness when alternative, non-medication treatments are available.
Natural Remedies
There’s also an increase in the use of herbal remedies and vitamin supplements in the hope that they will improve one’s mental life.[4] One report states that in the US, $400 million is spent on St. John’s wort each year. Furthermore, “Approximately 42 percent of U.S. health care consumers spent $27 billion on ‘complementary and alternative medicine’ therapies in 1997, the most recent year for which data is available.”[5] “Americans spent $4.13 billion on all herbal supplement sales in 2000, with $248 million on top-seller gingko biloba, a product that presumably improves memory. They shelled out $210 million on echinacea, for its alleged immune boosting fighting ability; another $174 million on garlic, for its supposed infection-fighting properties; and $170 million on St. John's wort, the so-called natural antidepressant.”[6] Sales are still growing, even though they are not growing at the same high rate as a few years ago.
Critiques of Psychopharmacology
Faced with this increased role of medication in our everyday lives, some social critics have suggested that it represents a deeply disturbing tendency. They have given various sorts of reasons, which can be divided into two kinds:
“Sociological Concerns” (based on assumptions about our society and other empirical claims)
Not addressing the real issues.
Many social critics have suggested that the rise in depression is a real phenomenon, but that it is a symptom of a deeper underlying problem. Generally, this problem is identified as the increasingly alienating nature of modern society, which involves less human contact and more impersonal technology. However, there are other possible candidates for the “real problem”, including environmental toxins, the change in family structure due to the changes in women’s roles, or capitalism. This approach normally acknowledges that depression and anxiety are real, but points out that treating them on an individual basis rather than a social basis means that we never address the real problem, and indeed, that mental health professionals may be perpetuating the problem or making it worse.
This is largely an empirical issue, although it may be hard to do studies that would decide which view is correct. Whether our society is really more alienating than it was one or two hundred years ago is hard to prove, since alienation is not an easy variable to measure, and there were no direct measures of alienation in previous centuries. Measures of the growth of the alienating nature of society have to be indirect, and are inevitably highly contestable. It is more feasible to do anthropological studies of alienation and depression, but even here, there’s a great deal of room for debate about what we can conclude from such studies. Cross-cultural comparisons introduce new concerns about what should count as depression, since symptoms of depression are often claimed to vary from culture to culture.[7] The alienation issue will arise again below, under “self” concerns.
The “increase in depression” is due to relabeling ordinary unhappiness
Some have suggested that the increase in the incidence of depression is not real: it is simply a result of lowering the criteria for what counts as depression, and also a result of introducing the category of dysthymia, chronic low-level depression.[8] This is related to the claim that the change in criteria serves the interests of mental health professionals, because it brings them more work. As non-psychiatrists do more psychotherapy, psychiatrists find a need to preserve a role for themselves in modern society, and that role is to prescribe medication.
One might think that whether it is true that the official criteria for depression have become more liberal should be easy to determine: one could simply compare manuals from different decades. But in fact the criteria are relatively vague, and it’s debatable to what extent they determine the judgments of clinicians. It’s clear that what makes a difference is not what it says in diagnostic manuals and textbooks, but whether the actual criteria used by clinicians have changed, and this is especially hard to discover.
The Undue Influence of the Pharmaceutical Industry.
It’s clear that medication is big business, which in the US is now able to advertise direct to the public. Clinicians are given financial incentives to diagnose mental disorders and psychiatrists are given financial incentives to prescribe medication (rather than, or in addition to, suggesting psychotherapy). The facts about the power of the pharmaceutical industry are very impressive, but of course the industry itself would say it is providing needed and helpful resources for clinicians, and that it is performing a valuable service. There have been studies on the extent to which psychiatrists are influenced by perks, incentives, and free lunches offered by the pharmaceutical companies, and the results often show that the influence is strong.
However, there has been no proof that this sort of influence can explain the rise in the diagnosis of depression over the last fifty years. Even once we admit the potential power of the industry, we have not shown that this is in fact the correct explanation for the rise in the diagnosis of depression and the use of antidepressants. The most we will have shown is that it has played some role, but not how great a role.
The Side Effects of Medication
Some authors have argued that antidepressant medication has side effects that are underestimated.[9] The main idea is that these drugs are far more dangerous than people realize, and that the government drug trials used to test their safety are conducted over a short period and do not detect long-term effects or effects on children. These medications are used a great deal with children, but their long-term effects on children are unknown.
Although these criticisms are often put in alarmist form, they have not received much attention, and have been largely ignored by the psychiatric establishment. There is often a pattern with popular drugs that after many years of use, it is found that they have unforeseen effects (e.g., Fen-Phen, Valium) and they then become prescribed much less often, so it would not be very surprising if this criticism turned out to be true.[10] Nevertheless, it remains an empirical issue.
Distinct from the above concerns are three rather different ones, which I will call “Self Concerns.” They are more philosophical than empirical.
Dehumanizing Ourselves
First, it might be argued that in taking medication we are treating ourselves like machines, or like objects, rather than humans. I have heard people make this sort of argument, but it is clearly rather weak. Dehumanization cannot simply be a matter of putting a foreign object into one’s body, because then eating would also be dehumanizing. It has to be a matter of ingesting technology, or possibly of using technology to alter one’s emotional outlook. But this sort of view would seem to mean that all manufactured medication is dehumanizing, which is far too extreme a conclusion to be plausible. I turn quickly to a more sophisticated version of the objection.
Against the Natural Order
Some may argue that in medicating ourselves, we are going against the natural order. The Christian Science religion takes such a view: its adherents believe that God has his plans for us, and that we should not resort to medicine to cure our illnesses because this goes against God’s plan. This view, I think, is implausible even for the theologically inclined, because it rests on unsupported Biblical interpretation. Nevertheless, to go back to the more general objection, the idea of a natural order has a great deal of intuitive appeal. Many people have a sense that there is a natural way of being and that it is wrong or dangerous to interfere with this natural order, especially when one is considering mind-altering drugs. The problem comes when they try to articulate this sense, because the concepts of nature and the natural order are very hard to pin down or to justify.
Authenticity
Finally, there is the most plausible of the self-concerns: that taking mind-altering medication in some way reduces one’s autonomy or authenticity. At its crudest, this sort of objection relies on a problematic empirical claim that psychotropic drugs cloud one’s thinking or give one an emotional high[11]: while this may be true of alcohol, of stimulants such as Ritalin, or of anti-anxiety drugs such as Xanax and Valium (although it probably is not true when the medications are working as intended), there is very little reason to take this assumption seriously in the case of antidepressants. I speak partly from personal experience, having taken antidepressants myself, but more importantly, there is no scientific and little anecdotal evidence to support the ideas that one’s cognitive abilities are impaired by antidepressants or that one’s values are significantly altered (in ways unconnected with the depression) by the medication.[12] I have heard and read reports of people who say that antidepressants make them feel numb and incapable of feeling anything, and this is an important phenomenon.[13] This isn’t the intended effect of the medication, although it might be a clinically acceptable effect if it is the only alternative to agonizing depression. But for the philosophical objection to taking medication for enhancement to be interesting, we need to consider the cases where the medication works well.
To make this “authenticity” objection plausible, one has two options. First, one can argue that the world is alienating, and thus that alienation and depression are rational responses to the world. To take antidepressants is then to interfere with a normal reaction, and can impair one’s ability to respond appropriately to the state of the world.
Secondly, one could argue that to bring such radically foreign material into one’s central nervous system to affect the functioning of one’s brain is inherently to reduce one’s autonomy or authenticity. I will return to this at the end of the paper.
The Popular Media
Remarkably, the literature in medical ethics has largely ignored these concerns. The books that have most forcefully addressed these issues have been written by psychiatrists and psychologists. (A partial list would include David Healy’s The Antidepressant Era, Peter Kramer’s Listening to Prozac, Lawrence Diller’s Running on Ritalin, Richard DeGrandpre’s Ritalin Nation, Lauren Slater’s Prozac Diary, and Peter Breggin’s series of books against psychotropic drugs. Shelves of less thoughtful self-help books have also been published on related topics.) Some might argue that these authors are indeed medical ethicists, and in a sense they are, but they are very much removed from the mainstream of medical ethics. They rarely publish their work in the prestigious medical ethics journals and they are not employed in bioethics institutes or centers.
The other main source of visible concern about psychotropic medication is the mainstream media, such as TV news shows and magazines like Newsweek and Time. As we might expect, the analyses in these media are mostly shallow; what is interesting is that the bioethics establishment has done so little to follow up on the public concern expressed in popular culture.
Medical Ethics and Enhancement
Insofar as mainstream medical ethics has addressed the issue, and setting aside some discussion of Prozac, the literature has mainly focused on how to draw the distinction between curing a disease and enhancing a normal condition.[14] Certainly this is a very important distinction, and it seems far less problematic to use medication to cure illness or to relieve the symptoms of illness than it is to enhance or alter a healthy person. It is worth discussing the relevance of this distinction to my main thesis here.
The general philosophical problem of the definition of disease is a long-standing and difficult one, which is still being discussed vigorously. Most people acknowledge that the categories of illness are value-laden, and there is much work to be done in identifying the ways in which the category of depression is value-laden. I’d prefer the claims of this paper to be independent of the debate about the definition of mental illness: whatever the correct definition, it’s clear there will be a large gray area between health and illness and that we will be able to get a good amount of intersubjective agreement about which cases constitute an enhancement of a normal person. My main point here concerns cases of enhancement, although I am ready for the conclusion of the argument to be applicable to cases of taking medication to relieve depression. I am primarily interested in what could be philosophically problematic about taking medication, and I don’t believe this depends on the definition of mental illness.
It is worth also noting briefly that there have been empirical studies of “patient compliance” – i.e. the extent to which people are willing to follow their doctor’s instructions, as opposed to “disobeying” their doctor. It’s not unusual for people to accept a prescription, but then to not get it filled, or else to get their medication, but then to only take it for a week or two, rather than the recommended 4-6 weeks it takes to see if antidepressants are effective. Often there are pronounced side-effects of the drugs (e.g., dry mouth, nausea, fatigue or sleeplessness) that decline or disappear after a week or two of taking the medication regularly. One of the reasons that Prozac became a best-selling medication is that it has fewer side-effects than the older antidepressants, so patient compliance was improved, and this was demonstrated in studies. We should bear in mind though that these kinds of studies tend to be rather crude and operationalized, and certainly will give us little insight into people’s philosophical qualms about taking medication.
Genetic Ethics and the Suspicion of Technology
Some of the discussion concerning the morality of genetic engineering is applicable also to the use of psychotropic drugs. For this reason, I will make a small detour into the ethics of cloning. Philosophers such as Leon Kass have tried to articulate their reasons for a deep discomfort with our interfering with the natural order.
In his widely reprinted article “The Wisdom of Repugnance”[15] Kass argues that although a strong reaction to something is not in itself an argument against it, repugnance can be an emotional expression of deep wisdom. He writes,
The repugnance at human cloning belongs in this category. We are repelled by the prospect of cloning human beings not because of the strangeness or novelty of the undertaking, but because we intuit and feel, immediately and without argument, the violation of things that we rightfully hold dear. Repugnance, here as elsewhere, revolts against the excesses of human willfulness, warning us not to transgress what is unspeakably profound. Indeed, in this age in which everything is held to be permissible so long as it is freely done, in which our given human nature no longer commands respect, in which our bodies are regarded as mere instruments of our autonomous rational wills, repugnance may be the only voice left that speaks up to defend the central core of our humanity. Shallow are the souls that have forgotten how to shudder. (page 19).
Kass goes on to set out some of the consequences that he holds to be repugnant, and I believe his argument is fairly weak because the supposedly awful consequences don’t seem so bad to me, certainly no worse than what we are already ready to accept in contemporary society. Nevertheless, his method and this passage in particular strike me as important, and it could be helpful for those who want to articulate their discomfort with the use of medication for enhancement. Note that his method is highly controversial. In a recent magazine article, Sheila Jasanoff of Harvard University’s Kennedy School of Government equates Kass’s method with the expression of a “yuck” reaction, and prominent bioethicist Dan Brock is quoted as saying, “it doesn’t have any intellectual content.”[16] While I’m sympathetic to these criticisms, and certainly grant that there is a danger of this approach amounting to nothing more than the expression of mere opinion, I am also sympathetic to the possibility that we might be able to educate our moral sensibilities and attune ourselves to moral reactions. Ultimately, I believe, moral sensibilities and intuitions play an important role in our moral epistemology, and so I am somewhat sympathetic to elements in the traditions of moral casuistry and moral particularism. Kass’s discussion of repugnance can be seen as an instance of particularist methodology. But even if we accept this approach, we still need some way to distinguish between irrational prejudices and reasonable moral perception. At a minimum, we should be able to articulate some similarity relation between fundamental intuitions, or some mutually supporting relation between intuitions and a plausible moral theory. Furthermore, it should not be possible to completely explain away the moral intuition as a prejudice or an effect of arbitrary moral conditioning. Finally, the stronger and more widespread a moral intuition is, the more reason there is to take it seriously.
Now it would be an exaggeration to say that people feel repugnance at the use of psychotropic drugs to enhance one’s abilities (although some people do have strong negative reactions to the recreational use of drugs such as marijuana and cocaine, which may be related). I’d say that most people have qualms, worries, or reservations about the widespread use of Prozac, Ritalin, and similar drugs. Even so, Kass’s approach could still be useful for this investigation.
Central to Kass’s thought here is his concern about using the body as an instrument of our will. Maybe this is a version of the idea of dehumanizing ourselves that I so swiftly dismissed earlier on, but at least this is a slightly more articulated approach. To take drugs to enhance one’s abilities could certainly be seen as treating one’s body as an object, although as I said previously, one needs to be able to distinguish this kind of case from that of consuming food and drink for sustenance. Presumably the distinction between the two kinds of cases rests on the idea that eating and drinking are natural activities necessary for living, while taking drugs is not.
It is worth briefly mentioning a related issue: some people have very strong intuitions against the use of genetically engineered food, body parts, or animals, often because they worry about the unforeseen results of such experimentation, but often simply because it is “messing with nature.” The worry about food can be strong when people consider that it means that they are ingesting something unusual and artificial. Many people say they only want to ingest natural foods.
One feature that distinguishes Prozac is that it is a designer drug, which makes it more parallel to the case of genetically engineered food and cloning. A great deal of research has gone into the creation of the drug: unlike most psychotropic drugs, it was not discovered by accident, but instead was created through years of research. This feature of Prozac makes it closer to the case of cybernetics, and I think people do have strong reactions, including repugnance, to this extremity of unnaturalness. We are facing the introduction of technology into our bodies and minds, and people do find this disturbing as well as exciting.
Conclusion
As I warned at the start of this paper, my aims have been modest: I simply aimed to explore the suspicion and worry people have about drugs that enhance our moods and cognitive capabilities. I have argued that there is a range of possible reasons for worry, some empirical, some philosophical; some plausible, some far more implausible. I want to end by suggesting that the three philosophical objections I have paid most attention to, that drugs are dehumanizing, unnatural, or inauthentic, are interconnected and may even amount to just one basic objection.
As I have suggested already, the idea that we treat ourselves as objects does not cause much concern if it applies equally to eating, drinking, showing concern for one’s body, and getting medical help. For the objection to carry much force, it needs to show that some kinds of treating oneself as an object are more problematic than others. It seems that the best way to do this is to distinguish between ordinary ways of manipulating oneself and extraordinary or unnatural ways of doing so: the more artificial the manipulation, the more worrisome it is.
But why should artificial manipulation of one’s mind be problematic? Of course, one might just say that the very fact that it is unnatural is enough. Taking such a view would push one into the difficult position of having to say that one needs to live as natural a life as possible. Even if one could define what is natural, it would seem to leave one in the extreme position of rejecting much of modern life, and rejecting technology that could be very helpful. So I suggest that for this objection to be successful, it needs to bring in an extra element: it should limit itself to criticizing treating oneself as an object in an unnatural way when this reduces the person’s autonomy or authenticity. The immediate problem with this move is to give a non-circular account of the reduction of autonomy or authenticity: it is possible to just go in a tight circle, and say that one’s authenticity is diminished because one is treating oneself as an object rather than a subject.
I am not sure how to resolve this problem. Maybe the best option is to simply say that self-objectification, unnaturalness, and inauthenticity are a closely related group of concepts, and that they come as a package. They all become involved in a critique of the widespread use of psychotropic drugs.
[1] http://www.guardianunlimited.co.uk/g2/story/0,3604,419761,00.html
The low country, Tuesday January 9, 2001, The Guardian
[2] http://archpedi.ama-assn.org/issues/v155n5/abs/poa00346.html
Arch Pediatr Adolesc Med. 2001;155:560-565, Jerry L. Rushton, MD, MPH; J. Timothy Whitmire, PhD
Pediatric Stimulant and Selective Serotonin Reuptake Inhibitor Prescription Trends: 1992 to 1998
[3] Ritalin Alert: As Abuse Rates Climb, Schools Are Scrutinized, Katy Abel. FamilyEducation.com (Nov 17, 2000)
http://www.familyeducation.com/article/0,1120,2-20061-0-1,00.html
[4] http://www.guardianunlimited.co.uk/comment/story/0,3604,482144,00.html
Threatened by a herb, Jerome Burne, Thursday May 3, 2001, The Guardian
[5] http://www0.mercurycenter.com/partners/docs1/083952.htm
Tuesday, May 22, 2001, New studies may boost credibility of products, BY LISA M. KRIEGER, Mercury News.
[6] Friday May 11, 2001. Health - ABCNEWS.com. Study: Herbal Supplement Sales Down.
http://dailynews.yahoo.com/h/abc/20010511/hl/herbalsupplements010511_1.html
[7] For a summary of some cross cultural work on depression, see Richard J. Castillo, Culture & Mental Illness: A Client-Centered Approach. (Pacific Grove, CA: Brooks/Cole, 1997), Chapter 12.
[8] See the last chapter of Edward Shorter, A History of Psychiatry. John Wiley, 1998.
[9] Most notorious are the books of Peter Breggin, including Toxic Psychiatry and Talking Back to Prozac. More measured criticism is to be found in Joseph Glenmullen’s Prozac Backlash. Also directly relevant is David Healy’s The Antidepressant Era.
[10] See A Social History of the Minor Tranquilizers: The Quest for Small Comfort in the Age of Anxiety, Mickey Smith, PhD., Haworth Press, 1991.
[11] This is suggested by Louis Marinoff in Plato Not Prozac, [find reference] and Jeffrey Schaler [find reference]. It is also suggested in a slightly different way by Joseph Glenmullen.
[12] I should, in a fuller treatment of this issue, address the point that in some ways depressed people have more realistic expectations of the world than non-depressed people. Some studies have shown that depressed people rate the probabilities of certain kinds of events or traits differently from normal people: normal people tend to have a rosy view of the world. For example, most people believe that they are better-than-average drivers and rate their children as above average. For certain kinds of estimates, depressed people do not have this bias. However, note that in other ways depressed people have a distorted view of the world: they view their situations as hopeless, their lives as pointless, and they don’t think they have any friends.
[13] See the first person accounts in Living With Prozac: And Other Serotonin-Reuptake Inhibitors, by Debra Elfenbein, Harpercollins, 1995.
[14] The main example of this is Enhancing Human Traits: Ethical and Social Implications, edited by Erik Parens, Georgetown University Press, Washington DC, 1998. A recent issue of The Hastings Center Report was devoted to “Prozac, Alienation, and the Self,” Vol. 30, No. 2, 2000. Note that in this paper I have ignored the idea, propounded most forcefully by Carl Elliott in both these collections as well as in other work, that antidepressants are problematic because they lead us to treat alienation as a medical condition when it is really an insightful reaction to the world.
[15] The Ethics of Human Cloning, by Leon R. Kass and James Q. Wilson, AEI Press, 1998.
[16] The American Prospect. “Irrationalist in Chief,” by Chris Mooney. 12, #17, September 24 – October 8, 2001.
Friday, November 14, 2008
Defining and Defending ADHD
Defining and Defending ADHD:
On The Category of Attention Deficit Hyperactivity Disorder and Its Implications for the Controversy About the Overprescription of Ritalin[1]
Introduction
Public concern about psychopharmacology has existed since psychotropic medications started being prescribed to large numbers of patients about 40 years ago. The major tranquilizers, such as Thorazine, have unpleasant short-term, and sometimes awful permanent, side effects. When minor tranquilizers such as Valium became prescribed on a large scale in the 1950s and 1960s, feminists and other social critics suggested that the drugs were used to keep women drugged up in their high-rise apartments or suburban homes, in order to prevent them from acting on their dissatisfaction. In the last few years, there has been worry that the antidepressant Prozac is being used to keep people from being critical about their surroundings, and that the chemical eradication of emotional depths will stifle human creativity. Many individuals suffering from bipolar mood disorder (more commonly known as manic depression) dislike mood-stabilizing medication such as lithium that prevents their manic periods, since they feel that it is during such times that they are at their most creative and productive.
Such worries also apply to some extent to the treatment of attention deficit hyperactivity disorder (ADHD) with stimulants. These worries are compounded by the fact that it is mostly children who are being prescribed these drugs. Recently, these concerns have centered on the prescription of Ritalin, which has been receiving much media attention. A search of Internet web sites and Usenet newsgroups, for instance, will reveal many people wondering how they should deal with their own children, and many others proposing a wide variety of solutions. Numerous articles have appeared in magazines and journals about childhood ADHD and when children should receive medication.[2] Maybe most striking is the number of books in the popular psychology and self-help sections of bookstores on attention-deficit disorder and hyperactivity, and the number of those which question conventional psychiatric wisdom about the causes of and treatments for the condition. It might not be surprising that so many self-help manuals and guides to current treatments are available, since the ‘Health’ sections of the same bookstores are also full of books giving advice about how to keep fit, eat well, and treat a wide variety of illnesses. But when it comes to mental health practice, what is offered is not just advice, but exposés, medical and political critiques at both academic and popular levels, and proposals for alternative treatments or solutions.[3]
Where there is such open public questioning of psychiatric authority, philosophers of medicine and bioethicists have an opportunity, and indeed a responsibility, to address the legitimacy of these criticisms.[4] It is not a job one can simply leave to psychiatry and the wider mental health profession, for three reasons. First, the mental health profession is not, as a matter of fact, very good at defending itself in public debate. This may be due to the amount of internal disagreement within the profession, a sense of complacency, an unwillingness to take outside detractors seriously, a lack of public representatives of the profession, or just the fact that the public is very skeptical about psychiatry. Second, even if the mental health profession did do more to speak on its own behalf, it might be seen as partisan.[5] Bioethicists could be seen by the public as more independent judges, and more trustworthy even despite their lack of full medical training. Third, and most importantly, the issues are not purely medical, but involve ethical and social issues, which brings them closer to the realm of expertise of biomedical ethicists. So I conclude that bioethicists have an appropriate role to play in this debate. It is not my aim to provide a definitive answer to the question 'Is Ritalin overprescribed?' or to list precise conditions for when its use is unjustified. Rather, it is to set out and clarify the different issues that should be addressed in this debate, in a deliberately provocative way.
Despite the fact that the drug Ritalin has been prescribed since the late 1960s and that its use has been controversial almost from the start, my review of the bioethics literature turned up little discussion of the issue.[6] So the controversy over Ritalin's use is fairly new in biomedical ethics, and the debates it raises promise to be of increasing importance if, as seems likely, the prescription of stimulants to children continues on a large scale, and new drugs are developed which will be more sophisticated in their ability to control and form the minds of children. It is important that we find appropriate questions to ask, and the right language to use in framing the discussion of these issues, since the form of the start of the debate may well influence its future course.[7],[8] For the most part, I will keep the discussion at an abstract level rather than rely on details of case histories and statistics of the use of Ritalin and other drug treatments for ADHD.
Thus the primary purpose of this paper is to survey ethical issues that arise when medicating children with Ritalin and to connect them with existing discussions within biomedical ethics. But I also aim to articulate a provocative view in favor of the widespread use of performance enhancing drugs, not just to treat illnesses, but for non-medical cases as well. I take this position because I think that many arguments against such use of drugs are not sound, and often rest on unarticulated and flawed assumptions. It will serve a useful purpose to clarify the issues by setting out a controversial position, and give others a set of arguments to which to react. While I tentatively endorse the position I arrive at by the end of this paper, I am not unequivocally advocating the use of drugs on children, and I would warn against an overly individualistic and biological approach to understanding the problems that children face. We should not lose sight of the political and social factors involved, on both national and international levels.[9]
This paper is in two main parts. First, I consider the issue of using the diagnosis of ADHD, and what can be said for and against it. This involves a fairly lengthy discussion of the normativity of the concept of illness, and its implication for ADHD. Second, I consider arguments against the widespread prescription of stimulants to children, and I rebut each one in turn.
Definitions and Data
To return to the focus of this paper, I will now set out some relevant data and history concerning the issue. The fourth edition of the Diagnostic and Statistical Manual of Mental Disorders (DSM-IV), published by the American Psychiatric Association in 1994, specifies the criteria by which all mental disorders are to be diagnosed in North America. ADHD is diagnosed by the presence of symptoms of hyperactivity and inattention that last for at least six months.[10] There are several estimates of the prevalence of ADHD. DSM-IV says it affects 3%-5% of school-age children, and the male-to-female ratio estimate ranges from 4:1 to 9:1 (DSM-IV, 82). Also relevant are related conditions, which can overlap with ADHD. Conduct disorder, which is characterized by aggression towards people and animals, destruction of property, deceitfulness or theft, and serious violation of rules, is much more prevalent, with estimates ranging from 6% to 16% for boys, and from 2% to 9% for girls (DSM-IV, 88). Oppositional Defiant Disorder (ODD), which is characterized by negativistic, hostile, and defiant behavior leading to significant impairment in social, academic or occupational functioning, is estimated to affect from 2% to 16% of the population, depending on the sample and methods of ascertainment, and rates are roughly equal among the genders after puberty (DSM-IV, 92).
History of Attention-Deficit/Hyperactivity/Impulsivity Diagnoses
ADHD has a history nearly a century old. In 1902 G.F. Still, publishing in the Lancet, identified children with hyperactive behavior patterns for which there was no reason to think brain damage was the cause.[11] In 1957 Laufer and his associates published a number of influential papers, introducing the terms "hyperkinetic behavior syndrome" and "hyperkinetic impulse disorder." They found that treatment with stimulants was effective. In fact, the stimulant D-amphetamine was first used by Bradley in 1937 to treat hyperactive children.[12] "By the late 1960s, the concept of hyperactivity was firmly established in the literature" (Ross and Ross, 14). There has been controversy about the diagnosis and its treatment ever since.
Reliability of Diagnosis
A criticism often leveled at psychiatric diagnostic methods is that they merely identify collections of symptoms and do not identify a cause of the illness or disorder that can be tested for. No sophisticated brain scans or blood tests are available for the diagnosis of any common mental illnesses. Furthermore, the identification of behavioral symptoms is often thought to be subjective and unreliable, because so much depends on the values and expectations of the person making the diagnosis. In order for us to be confident that we even have a condition to talk about, diagnosis has to be reliable, so that different health care professionals would for the most part agree on diagnoses. There is controversy about how much intersubjective agreement there is amongst doctors on ADHD. It seems that the U.S. has many more cases of ADHD than other countries, casting doubt on the objectivity of the diagnostic category. It would help consistent diagnosis if the symptoms of ADHD clustered together, but there is no good evidence that they do (Ross, 17). There is also little evidence that all cases of hyperactivity have a common cause.[13]
Treatment Options
Even if there is an identifiable condition of ADHD, there is a separate empirical question about what treatments can alter the condition, and which are most effective. The most common drug treatment is Ritalin. Ritalin (methylphenidate) is a central nervous system stimulant, similar to amphetamines in the nature and duration of its effects. It is believed to work by activating the brain stem arousal system and cortex. Pharmacologically, it works on the neurotransmitter dopamine, and in that respect resembles the stimulant characteristics of cocaine. There are also well-documented dangers associated with taking Ritalin. The Physicians’ Desk Reference states that
Sufficient data on safety and efficacy of long-term use of Ritalin in children are not yet available. Although a causal relationship has not been established, suppression of growth (i.e., weight gain, and/or height) has been reported with the long-term use of stimulants in children.
Common reactions to the drug include nervousness and insomnia. It is possible that some children can become dependent on Ritalin (although some dispute this) or abuse the drug. There is reportedly now a flourishing black market in Ritalin in high schools, where students take it for the pleasurable sensations it provides, rather than to enhance their school performance.
With so many children taking Ritalin (one estimate is that 5% of boys between the ages of 8 and 14 are prescribed the drug in the US[14]), it is clear that there is a huge market willing to spend money on possible treatments, and given the discomfort some feel with the use of drug treatment, alternative options have proliferated: self-help books, alternative drug treatments, and different parenting and schooling methods are all available.
The conventional treatment options apart from stimulants are behavior management, environmental engineering, and personal or family therapy. It is generally helpful to provide a child who has ADHD with a highly structured environment and clear, reliably enforced rules. It has been shown, however, that no single treatment plan will benefit all children with the disorder (Ross and Ross, 16). Different treatments can often be used in conjunction with each other, and conventional medical wisdom is that this is often what works best for the child.
Worries About Labeling Hyperactivity an Illness
Now that we have some background facts, we can discuss the ways in which the diagnosis of ADHD and the prescription of Ritalin might be a problem. I will start with concerns about diagnosis. It is clear that some conditions are labeled psychiatric illnesses in part because they cause suffering to the person who has them or to those who must deal with that person. To label a child hyperactive is to decide that there is a problem with his or her condition, and to indicate that something should be done to stop it if possible. As with other controversial cases, such as very short children, unhappy and pessimistic people, homosexuals, women's menopause, and premenstrual irritability, we may well worry that this medicalization demonstrates intolerance of people's distinctive traits and differences from the norm. Furthermore, some would say this list of purported illnesses and disorders suggests that the disadvantaged and victimized people in society are further stigmatized by medical labeling.
As far as hyperactivity is concerned, some critics of conventional approaches argue that it is normal for children, especially boys, to be extremely energetic, even aggressive, and to have short attention spans. On this view, the fault is not with the children, but with a hyperactive and inflexible society in which parents are too busy to play with their children and help them run off their energy. Making hyperactivity an illness also gives the medical and educational establishments more power over the lives of children, since they can use the status of the condition as an illness to overpower the wishes of parents or children who do not desire treatment. Institutional power is increased by medicalization. Philip Graham suggests this is inappropriate, writing, concerning doubts about the wisdom of medicating children for these sorts of problems, that it is not part of the psychiatrist's job to smooth out normal variations in learning ability, especially when a lower level of concentration is accompanied by greater vivacity, curiosity, and explorativeness, all of which have their own appeal and may be lost with exposure to medication.[15]
Furthermore, there is a general worry that labeling a child as having a mental illness, and letting the child know this diagnosis, will itself have a damaging effect upon the child. It may affect the self-image of the child, so he or she starts to think of him or herself as having a problem or being ill, and so he or she will continue to think and behave in such a way in the future. On top of this, once the child has been labeled, adults will treat the child differently, as if expecting him or her to behave in a certain way. The label will place the child on a different path, while if there were no label, the child might be able to move on and forget about the problem in a short period of time.
Reply to Worries about Using the ADHD Label
I do not find the worries I have listed to be very alarming. First, it is worth noting that the changes the label of ADHD introduces can also benefit the child, since the label leads to treatment which should normally give him or her a greater ability to concentrate on work, less disruption and conflict at home, and more attention from educators. Furthermore, a child is given many other labels in the course of his or her life, at home and at school, including diagnoses of illnesses. This labeling often does not have any damaging effects, and the child may just expect to get over his or her medical condition, or he or she may pay little attention to the diagnoses altogether. The worries about labeling may give us good reason to be careful how we label children, but they do not give us a strong reason to avoid labeling them altogether.
Behind these issues is the question of whether the label of ADHD is appropriate, and if so, what justifies it. To answer this, I will first go further into the objectivity of our categories of illness. I do not propose the impossible task of reviewing all the literature on the concepts of health, illness, disease, and disorder. The normativity of our concept of illness is one of the most debated issues within biomedical ethics, and the debate has been joined by some of the major names in the field, such as Christopher Boorse, Robert Veatch, Arthur Caplan, and Tristram Englehardt, as well as many more recent contributors with a range of different outlooks.[16] I am not trying to add anything fundamentally new to what others have written on the topic. Instead, I will base my position on what I consider to be two of the most carefully considered approaches to the concept of disease, those of William Fulford and Tristram Englehardt.
The main debate about the concept of disease concerns the possibility of specifying the nature of a disease, qua disease, purely in scientific, biological, or otherwise morally neutral terms. Despite Boorse's arguments that disease can be defined in purely biological terms, free from value judgments, there is now something approaching a consensus, at least among bioethicists and social scientists, if not among clinicians and medical researchers, that this is not possible, and that a negative evaluation of a condition is essential to classifying it as a disease. This idea has been formulated in various ways, such as "disease categories are essentially normative" or "diseases are socially constructed."
William Fulford, a British psychiatrist and philosopher, is one of the most prolific writers within the philosophy of psychiatry. Fulford’s argument for his view of illness and disease is set out in its most detailed form in his book Moral Theory and Medical Practice. I will not set out that argument here. Rather, I will use his more succinct statement of his views in a recent article.
Fulford allows that it is possible to stipulate a purely descriptive definition of disease, but says that “the concept is simply unable to do the work that is required of it in actual use without the reintroduction of an evaluative element into its meaning.”[17] He suggests that we should adopt a model in which we see that the terms ‘illness’ and ‘disease’ express a particular kind of medical value, as distinct from moral or aesthetic values. He says that insofar as there is a distinction between the concepts of illness and disease, “illness tends to refer to the patient’s experience, whereas disease refers to the doctor’s specialized scientific knowledge.”[18] On a science-based perspective, it is natural to try to define illness in terms of disease, but Fulford argues that it should be the other way around, i.e., “illness is the primary concept, our concepts of disease owing their logical properties, ultimately, to the patient’s experience of illness.” From such considerations, he draws the conclusion that not just doctors and scientists, but also philosophers should be involved in the process of deciding what classification schemes should be used in psychiatry.[19] Understanding illnesses requires not just scientific knowledge, but the ability to understand the logic of value judgments, as well as philosophy of science, philosophy of action, and even the mind-body problem.[20] The work left to do in his project is to elaborate which medical values, as opposed to moral or aesthetic ones, are and should be presupposed in our nosology.
I think Fulford's terminology of ‘medical values’ is misleading here, in that it leads us to suppose that he means a new realm of value distinct from the moral or the aesthetic. This is not what he means. He connects the evaluative element in the concept of illness to the idea of the incapacity of a person to act intentionally when afflicted by an illness.[21] Illness is the failure of action. Even though mental illnesses are characterized by forms of behavior and even action, it is the fact that the behavior is not fully voluntary that shows that it is a symptom of an illness. For instance, a person with a compulsion or who is suffering psychotic delusions is not acting freely, given that freedom requires both the absence of internal and external coercion and knowledge of what one is doing. Truly compulsive action is forced by a kind of inner coercion, while in psychosis, a person fails to understand what he is really doing. On this analysis, we can see why there is legitimate controversy about the classification of alcoholism as an illness, since it is debatable whether the actions of alcoholics are really forced by an inner need or craving.[22] The involuntariness of a behavioral symptom is of course only a necessary condition for a diagnosis of illness, not a sufficient condition. Fulford says that his project of spelling out what these medical values are is speculative, but essential to the enterprise of understanding the nature of illness. There are two major benefits that come from his approach.
In the first place, what now emerges is a more complete account of the properties of the medical concepts, in psychological medicine as well as in physical, and in primary health care (e.g., in general practice) as well as in hospital medicine. In the second place, it is with this more complete account that results, not merely explaining the logical properties of medical concepts, but with the implications for actual practice, are finally obtained.[23]
Thus Fulford's account of the nature of illness has useful practical implications, and I will bring it to bear on the case of ADHD.
But I also want to bring in one other writer, Tristram Englehardt. Englehardt has long defended the view that the concept of illness is value-laden and that it is this concept, rather than a zoological/evolutionary concept of disease, that is and should be used in medicine. He writes, in his The Foundations of Bioethics,
The experienced reality with which medicine deals is shaped by (1) evaluative assumptions regarding which functions, pains, and deformities are normal in the sense of proper and acceptable; (2) views of how descriptions are to be given; (3) causal explanatory models; and (4) social expectations regarding individual ills or particular forms of disease. (196)
Englehardt does not focus much attention on the difference between the internal and external factors that shape our experience of reality. I.e., our experiences depend partly on what we bring to them (internal factors), and partly on what there is in the world (external factors), although he is of course quite aware of the distinction. It is worth being clear about when our understanding of reality changes because we discover something new about the world, and when it changes because of a shift in our values or expectations. This is especially important in deciding what form medicine should take, and what classifications we want to use. With this proviso, then, Englehardt's list of reality-shaping factors provides a useful way to structure my discussion of ADHD.[24]
The Normativity of the Concept of ADHD
As Fulford shows, for ADHD to be properly classed as a disorder, it is crucial that the behavior associated with it is not chosen with full autonomy or voluntariness. This needs further investigation. Following Englehardt, I also want to consider what model of proper and acceptable child development lies behind the judgment that ADHD is undesirable and abnormal, what the causal explanatory models of the behavior associated with ADHD are, and what the social consequences of diagnosing children as having ADHD are. I will skip the issue of how descriptions of ADHD affect our experienced reality, because I do not see that it is of much relevance here. I will consider the above issues in a slightly different order from that in which I just listed them.
1. Causal Explanatory Models of ADHD
As we come to understand the causes of an illness better, we start to classify it differently. Some researchers have suggested that having ADHD is closely linked to being predisposed to addiction.[25] But we have seen that the causes of ADHD are not well understood, and what is now classified as one disorder may well eventually come to be seen as a collection of distinct conditions. Until we have such an explanation, however, our understanding of reality cannot be much affected by a causal explanatory model of ADHD.
2. Voluntariness of ADHD Behavior
Typical childhood ADHD symptoms include a lack of close attention to details, careless and messy work or activity, lack of sustained attention, uncompleted tasks, failure to perform requested tasks, fidgetiness, persistent getting up and moving around even when requested to stay still, being noisy, being always "on the go," talking excessively, being impatient, blurting out answers before questions have been completed, interrupting others, clowning around, and engaging in potentially dangerous activities (DSM-IV, pp. 78-9). On what grounds could these symptoms be judged involuntary or non-autonomous? I have two quick notes to make here. First, the diagnostic criteria of DSM-IV do not mention that the behavior should be involuntary, but I take it that this is at least an implicit assumption. Second, although strictly speaking the concepts of voluntariness and autonomy are different, in this paper I will ignore the subtle differences between them.
Lack of patience and attention are not themselves actions, and it is implausible to suppose that the children are intentionally messy and careless, even if it is in their power to pay more attention to what they are doing. Some of the impulsive behavior could be seen as beyond the children's control, in the same way that an overexcited child is unable to contain herself. However, the other forms of behavior do, on the face of it, look like intentionally performed actions. Are there reasons to question this surface appearance? I will tentatively argue that there are.
If we could show that this behavior was the result of an organic condition, it would at least be possible to blame that condition, rather than the child, for the behavior. There are plenty of chemical "imbalances" that have typical behavioral effects. High or low blood sugar can affect activity levels, and it is well known that children who have had too much candy are annoyingly active. A more unusual example is that the psychiatric symptoms of mercury poisoning include xenophobia, anxiety and severe irritability.[26]
However, since we have as yet no way to confidently attribute the symptoms of ADHD to an organic condition or a chemical imbalance, we are in a more difficult position. If ADHD occurred as a distinct set of symptoms, and formed a clear natural kind, then we would have some reason to class it as a distinct condition, and this could license us in saying that the symptoms are not voluntarily chosen. But again, as we have seen, ADHD symptoms do not generally come as a discrete package. Rather, they come in a large variety of forms and degrees.
I would suggest that this leaves us with three options. First, we could remain agnostic on this issue until more is found out about the condition. However, given the pragmatic realities we are faced with, this is not feasible. We have to make some judgment, because we have to know what to do with children who are behaving badly now. We cannot tell the parents to wait for ten years. Second, we could simply rule that no behavior of children is fully autonomous, because children have not reached the required level of maturity to make autonomous decisions. Autonomy requires the ability to reflect fully on one's behavior, and children do not have the necessary experience or moral sensibilities to do this. This second option is extreme, since having such a high standard of autonomy would probably mean that a good deal of adult behavior is also not autonomously chosen. Furthermore, saying that children lack autonomy goes against many modern intuitions and practices, according to which we think of children as capable of original, creative, and mature thought. This leaves the third option, which is to say that it is reasonable to suppose the children's behavior is in some sense involuntary, because they would really obey their parents and teachers if they could. I.e., on this option, we make some judgment of what the children's "real" desires are, despite their behavior. Some evidence for this option would be if the children themselves expressed frustration about their behavior, and were unable to stop it despite sincere attempts to do so. Indeed, many ADHD children do show such frustration and regret.
I conclude from this that some slim justification can be given for saying that the behavior typical of childhood ADHD is involuntary. Further understanding of the causes of ADHD might increase our justification here in the future, but we should not expect any scientific evidence to be decisive when it comes to attributions of voluntariness, since this is a concept that is firmly in the realm of philosophy rather than science.
3. Models of Normal Child Development
While the voluntariness of children's behavior when they fit ADHD diagnostic criteria is an interesting and philosophically productive question, I suspect more of the controversy centers on what our model of normal child development should be. Disagreement about conceptions of normality can lead to intense ideological argument, as is very apparent in the debate about homosexuality. If we accepted the "is/ought" distinction, we might expect that understanding human nature would be a purely descriptive enterprise, but of course it is highly prescriptive. There are many different conceptions of human nature and normal development. Some of these are meant to be universal, and so applicable to all cultures and races. However, even if a universal conception is applicable in the case of physical development, it is implausible to think that all cultures have or should have the same expectations for psychological development. Expectations vary over time and across cultures. There will be limits to the possible variation, but it will be hard to determine a priori what those limits are.
Given that it seems reasonable to take a fairly relativistic stance towards conceptions of psychological normality over time and across cultures, the question arises of by what criteria we judge, justify, or criticize standards of normality in our own culture. Relativism can sometimes lead to a pluralist, non-judgmental attitude, but it seems clear that we need at least a minimal conception of psychological normality, if only because many of our social institutions rely on such a conception. It is certainly hard to imagine how we could have a psychiatric profession that did not have a minimal conception of normal psychological development. What should determine this minimal conception?
The basic answer that psychiatry adopts, I venture, is that psychological health consists in the ability to work and love.[27] While this is overly broad, it is the answer I will also adopt. It is sufficient to give a justification for judging children with ADHD abnormal, since the condition strains their relationships with their family and friends, and makes their work in school suffer, which in turn affects their future prospects, even if the condition itself disappears by adulthood.
Critics will assert that it is a mistake to judge the child abnormal even if she fits the diagnostic criteria for ADHD, since either
(i) the expectations on all children to be able to be sensible, sit still and pay attention for long periods are inappropriate, or
(ii) while those expectations are not inappropriate, it is not a child's fault if she does not meet them, because the stresses of modern life and lack of adequate parenting make it very hard to do so. It is society itself that should be seen as abnormal and made to change.
This second version of the criticism depends on an empirical assumption about the influence of society and family on children. Presumably this assumption could be tested to some degree, although I am not aware of any studies that have done so. Even if there are data on this, I suspect that problems will arise similar to those surrounding empirical claims about the effects of TV violence on children, or of pornography on the rate of sexual assaults. There are relevant data, but there is controversy over their interpretation. Even if it is true that modern culture and society, with all its one-parent families and two-parent families in which both parents have full-time jobs, has a deleterious effect on children's attention span and obedience, it does not follow that children should not be diagnosed with ADHD. For they can be seen as having a mental disorder, caused by lack of care and nurturing, for instance.
Returning to the first criticism (i), that we should not place such high minimal expectations on children, I am unsure how to determine what these expectations should be. It is clear that this is not a purely medical or scientific issue. We might determine what minimal expectations we should have using utilitarian considerations, so that we choose whatever is best for society. Or we might use a variant of this, a Rawlsian 'maximin' principle, where we set the expectations on children's psychological abilities so as to maximize the benefit to the least well off. Or we could do what the American Psychiatric Association has chosen to do, and make the stated criteria vague, so that it is left largely to the discretion of individual psychiatrists whether a given child should be given an ADHD diagnosis.
I conclude then that these two criticisms of the conception of normal childhood development I am advocating do not pose significant problems for me.
4. Social Consequences of ADHD Diagnosis
I have already considered some of the social consequences of using the ADHD diagnosis, namely its effects on people's perceptions of the children so labeled. With admittedly brief reasoning, I concluded that these consequences would not necessarily be serious enough to be of major ethical concern. However, there are other consequences to be considered, viz., the treatment that is given for ADHD, in particular, the use of stimulants. I consider this in the next section.
Several Worries About Using Psychoactive Drugs Like Ritalin
Western culture has a strong suspicion of mind-altering drugs, both regulated and non-regulated. We don't see anything wrong with making people less depressed, or increasing their attention span if this can be achieved through 'natural' means, such as exercise, vitamins, or homeopathic medicine. Using a drug to accomplish the same end, however, is sometimes deemed problematic. I will consider some possible reasons for this suspicion.
The most accepted use of drugs is to cure an illness or lessen the symptoms of an illness. We saw in the last section that it is debatable what conditions count as illnesses. Suppose that we can achieve a strict definition of ADHD, and even find some biological causes of the condition. If the use of stimulants or other drugs is shown to help more than any other treatment for children who clearly have ADHD, with the diagnostic criteria strictly applied, I imagine that there would be relatively little worry about this medicinal use of drugs. For our purposes, the more ethically perplexing and intriguing cases are those in the gray area between illness and health. The public concern about overmedication with Ritalin suggests that it is here that people feel most uncomfortable about the use of drugs on children. So I will from here on restrict my discussion to cases in this gray area. What reasons are there against the use of drugs on children to reduce their elevated levels of activity and energy, and to increase their attention span?
Side Effects.
Most obviously, there are the side effects of drugs. These can be very unpleasant and even dangerous. If they are just short term effects, then it might be worth making a short term sacrifice in order to get the positive effects that the drugs can deliver. However, a more serious worry comes from the possible long term effects of drugs. These may be especially important to consider since children's brains are still developing, and taking drugs which affect the chemical interactions within the brain strikes many people as potentially dangerous, whatever assurances drug manufacturers might provide. Even if a drug has no long term effects on physical health, there might be long term effects on personality and cognitive skills. I take both short and long term side effects to be a serious worry. Some side effects are well documented, and even if others are not, there is still some risk involved in taking drugs. However, the fact is that most doctors consider Ritalin to be relatively safe and worth the risk, given the benefits it can provide in at least some cases.
Still, I think, there are misgivings about the widespread use of drugs on children over and above the possible side effects, and I want to investigate what the basis of such misgivings might be. I first list the possible concerns with short explanations, and then discuss some of them in more detail.
Unnaturalness.
The naturalness or unnaturalness of the drug seems to be a factor in people's views as to the wisdom of taking the drug. For instance, the use of lithium as a mood stabilizer seems to be more acceptable to some people as a treatment because lithium is naturally occurring. This might be due to an inclination to believe that naturally occurring drugs should be less dangerous.
Profit Motives.
Some are wary of psychotropic drugs because they are produced by large corporations, and these corporations are more concerned with their own profit margins than patients' health and best interests.
Thought Control.
Another worry about psychotropic drugs arises from a concern about the possibility of 'thought-control.' They are seen as a way for the medical establishment to enforce conformity on troublesome members of society. This sort of suspicion was widely voiced at the height of the Anti-psychiatry movement of the 1960s and early 1970s (e.g., the depiction of ECT and lobotomy in 'One Flew Over the Cuckoo's Nest'), and is certainly still around in libertarian ideas most clearly voiced by Thomas Szasz.[28] Anything which increases state and institutional power over the individual is seen as dangerous.
Competitiveness.
There is the worry that using drugs to enhance performance for some children will create a pressure on all parents (who can afford it) to get their children to take similar drugs.[29] We could imagine a time when most children take drugs to stimulate their curiosity, increase their attention span, make them more cheerful, and even make them less rebellious and more polite. Many would see this possible future as far from utopian, even if the children did seem to benefit from the drugs.
Doctors’ Power.
A final misgiving about the use of the drugs on children could arise from discomfort with giving doctors or other professionals unnecessary power over the lives of children and families. This may be seen as taking control away from parents and even from the children themselves.
Why Most of Those Worries about the Use of Psychotropic Drugs are Ill-Founded
How do these worries stand up when given careful scrutiny? I hope that it will be uncontroversial to assume that the ethical discussion should center around the question of whether the use of stimulants is in the best interests of the child, and the larger effects on the whole of society should take second place. We can divide interests into short-term and long-term. The benefits in the short term would be that the child on Ritalin would be better able to pay attention at school, and not bother those around him so much, which in turn might well mean that he alienates fewer people. In some cases of ADHD, the affected child finds life very hard without the drug, and much better with the drug. The short term costs of taking the drug would be the discomfort and annoyance of the side effects, which could in turn impair the child’s performance in school and other activities. In the long term, the child should, ideally, reap many benefits from improved performance at school. The long term costs of taking Ritalin are uncertain, and there may be none.
These benefits may or may not outweigh the costs. This is a hard decision to make, at least as a generality. I will not attempt to provide a calculus by which we could weigh the benefits against the problem of side-effects. Rather, I want to concentrate on the other concerns I have just listed, and to do so, I propose that we consider a hypothetical situation, in which we have a drug which has no detrimental side-effects. Furthermore, this hypothetical drug not only provides the benefits of Ritalin, but also turns naughty children into well-behaved, diligent children. We can refer to this drug as the Wonder-Drug.
We might still have a sense of discomfort about using the Wonder-Drug in such a case. I will consider each of the possible sources of discomfort.
Unnaturalness.
This can be swiftly dispatched as an irrational worry. There is no a priori or empirical reason to suppose that natural drugs, per se, are any less dangerous than artificially manufactured ones. Natural drugs can be extremely harmful, while artificial ones can be extremely helpful.
Profit Motives.
There may be good reasons to be suspicious of the motives of large international corporations, but they seem no more powerful here than in other areas of life. The fact is that we do place trust in these corporations in many areas of our lives, and so if we insist on rejecting the technology offered in this case, we should be consistent and reject it in many other areas of life as well. Few people are willing to do this, and although some may yearn for a simpler age less dominated by technology, I doubt that most people would be willing to make the necessary sacrifices to return to such a simpler state, even if they were offered the choice.
Thought Control.
This worry, as it applies to the Wonder-Drug, would go as follows. “The doctor will be imposing his or her own values on the family and the child. What counts as naughty should not be within the jurisdiction of health care professionals to impose on children. It is the job of schools and parents to decide what is good and bad behavior, and getting children to do good deeds is not one of the aims of medicine. So the use of the Wonder-Drug is akin to mind-control. Doctors should simply aim to alleviate suffering and cure illness. Being badly behaved is not an illness, but rather a matter of morals and lack of respect for other people and their property.”
This worry has little rational backing. It can be split up into three issues:
1) would use of such a drug be a form of thought control?
2) if so, is there anything wrong with such a form of thought control?
3) even if not, should the medical profession be party to such thought control?
My answers to these questions will be, roughly speaking, (1) yes, in a way; (2) not in moderate forms; and (3) only in limited ways. I need to explain these answers. I address the first two here, and the third in a few paragraphs, under the heading of Doctors' Power.
In what sense is the prescription of a good-behavior drug a form of thought control? It does alter the behavior of the child. However, doesn't all teaching alter the behavior of the child? Do we call all teaching a form of thought control? No. We like to think of it as the provision of knowledge and skills. It is seen as controlling when we force a view on a person where there is room for reasonable dispute about what the truth is. Giving children the Wonder-Drug would change behavior, but it is less clear that it forces a view of the world on children in an unjustified way.[30] Whether or not prescribing the Wonder-Drug meets some definition of thought control is largely a stipulative matter. What is clearer is that we generally feel that parents have the right to control their children's minds. Parents create and enforce their children's moral lives, to the extent that they can, and impose their aesthetic views as well. Children do sometimes rebel, but it is far more common for them to adopt the beliefs and values of their parents. We don't generally see anything morally problematic in parents' right to control their children's lives, even if we disagree with the particular views and choices involved. So if we find something wrong with the Wonder-Drug, it can't be in its effects. There is a lot of agreement about what counts as good behavior and politeness even within a society as diverse as that of North America. Cultural difference and relativism are not big problems here.
Competitiveness.
It may well be true that once some children start taking the Wonder-Drug for performance enhancement, there will be social pressure on other children to take it too. But again, we have not used this phenomenon as a reason to stop children from using calculators or computers. Many countries allow parents to send their children to expensive high schools, colleges, and universities where the advantage is not just, and sometimes not even, that they get a good education, but that the children gain useful connections with other people who could be beneficial to them in their future careers. We allow competitiveness and social pressure in all sorts of aspects of children’s lives, so it is arbitrary to draw the line at the Wonder-Drug just because it helps the children in a different way. Some might object that there is a principled moral difference between technology and social institutions that improve our lives through external means, and pharmaceutical technology that changes us internally, by interfering with the chemical processes in our bodies. But I do not see that there is any principled moral difference between "external" and "internal" technologies.
I will have to leave the claim that there is no significant moral difference per se between the internal and external use of technology unargued for here. I suggest that it is up to those who say there is a difference to explain what it is. It may be that many people have intuitions that conflict with mine on this issue, and there is much to say about how our concept of autonomy is related to internal and external ways of changing people's thoughts and behavior. But however we map out that relation, we will still be left with a distinction between coerced change and noncoercive enhancement of abilities. As I say, I see no reason to suppose that the use of psychopharmacology to help adults or children is necessarily or worryingly coercive in most cases.[31]
Doctors’ Power.
Should doctors be the ones to prescribe behavior-modifying drugs in cases of children who do not meet the strict criteria of illness? The obvious case against their doing so is that naughtiness is not an illness. Doctors are not in the business of prescribing good behavior, and should restrict themselves to curing illnesses.
This view, as it stands, is obviously faulty. Doctors do set themselves up in the business of cosmetic surgery. Some clinical psychologists offer to help people with their careers and with how they live their lives, even when their clients don't have identifiable mental illnesses. There are plenty of performance-enhancing methods that pediatric psychologists try on children. So doctors and health care workers do sometimes do more than cure illnesses. There are limits to what they will attempt to do, but they are not as narrow as some suppose. Could the role of a doctor or psychiatrist extend to ensuring that children behave well? Some would argue that the medicalization of bad behavior, such as conduct disorders, shows that doctors have already taken on such roles. The view that the medical profession should not be involved in enhancing the performance of children is hard to justify, because the roles of medical professionals in today’s society are not narrowly constrained to curing illness or alleviating suffering.
Conclusion
To recap the overall argument of this paper: I considered when and whether the label of ADHD should be applied to children. I argued that the extension of the concept of ADHD, being a concept of medical disorder, is determined by non-scientific considerations as well as scientific ones. In order to be an illness or disorder, a condition must have symptoms not under the direct voluntary control of the subject, and it must be abnormal and disvalued. Furthermore, the social consequences of applying the diagnosis must be taken into account, because they can be such as to legitimately prevent the use of the illness concept. I argued that ADHD does meet these requirements for being a disorder, since its symptoms can reasonably be counted as non-voluntary, and it is a condition which creates a problem for children's prospects of living satisfying lives and fulfilling their potential. The worries that arise from the effects, both individual and social, of children being labeled as having ADHD, and from the effects of drug treatments, are not serious enough to give us reason to stop using the diagnosis.
What I am saying, then, is that the fact that stimulants such as Ritalin are in such widespread use is not in itself a bad thing. This still leaves open the possibility that many cases are being mishandled, in that the children's individual best interests are not being served by the treatment they are receiving. We can still see the rapid rise in the use of Ritalin as a warning sign meriting further investigation, even if it is not bad in itself.
William Fulford has commented that although there is a large literature on the concept of mental illness and the general concepts of illness and disease, “there has been remarkably little contact between this area and psychiatric classification.”[32] This paper has tried to bridge the gap between philosophy and theoretical psychiatry. While some of the arguments here have been too brief for my own satisfaction, I at least hope that it will serve the function of provoking replies, and so start more discussion within biomedical ethics of these important issues.[33]
[1]A shorter version of this paper is forthcoming in a special issue of Bioethics, July 1997, under the title "Medicating Children: The Case of Ritalin."
[2]As well as the predictable smattering of “women’s magazines” which have covered the issue, I came across a number of references to articles about ADHD in magazines for military families.
[3]Recent books proposing or considering alternatives to established ideas about attention deficit disorder include: Hartmann, Tom Beyond ADD: Hunting for Reasons in the Past & Present Underwood Books, Grass Valley, CA, 1996; Miller, David and Kenneth Blum Overload: Attention Deficit Disorder and the Addictive Brain Andrews and McMeel, Kansas City, 1996; Reichenberg-Ullman, Judyth and Robert Ullman Ritalin-free kids : safe and effective homeopathic medicine for ADD and other behavioral and learning problems Prima Pub., Rocklin, CA, 1996; Block, Mary Ann. No more Ritalin : treating ADHD without drugs Kensington Books, New York, 1996; Ingersoll, Barbara D. and Sam Goldstein Attention Deficit Disorder and Learning Disabilities: Realities, Myths and Controversial Treatments Doubleday, New York, 1993.
[4]This leads me to suggest, parenthetically, a peculiar difference between standard medical ethics and issues in the philosophy of psychiatry. While issues in mainstream biomedical ethics, such as abortion or physician assisted suicide, are often on the front page of major newspapers, or argued about in front of the Supreme Court, controversial issues concerning psychiatry tend to receive a different sort of popular attention. I.e., controversies in mental health provoke enough interest in the general public for magazine editors, book publishers, and TV producers to consider it worthwhile covering them, and maybe this helps to continue the controversies.
[5]To call someone speaking in her own defense biased might seem strange logic. However, I am struck by how often students in introductory philosophy classes use such logic. In papers evaluating the contrasting views of two philosophers, it is not unusual for students to say "I think the arguments of philosopher X are better than those of philosopher Y, but I am biased because I agree with the views of philosopher X." Furthermore, students are willing to apply the same logic to others, and will very often counter an argument with a phrase such as "But that's just your opinion." Rational justification is hardly seen as relevant. For a more systematic development of such an analysis of trends in popular reasoning, and their connection with the popular media, see Susan Bordo's paper on the jury decision and reasoning in the O. J. Simpson trial.
[6]This is one more example of how biomedical ethics has persistently neglected issues in psychiatry.
[7]However, we should not forget that there has been discussion of hyperactivity and the use of stimulants to stop it for over 20 years. The following is a partial list of relevant literature concerning the ethical dimensions of the issue: Schrag, P. and Divorky, D.: The myth of the hyperactive child. New York, Pantheon, 1975; Bosco, J. J. and Robin, S. S. (ed.) The hyperactive child and stimulant drugs. Chicago, University of Chicago Press, 1976; Jackson, Jane E, The coerced use of Ritalin for behavior control in public schools: legal challenges. Clearinghouse Review 10(3): 181-193, Jul. 1976. Box. S.: "Hyperactivity: the scandalous silence" New Society, 1 December 1977, pp. 548-60; Peter Conrad, "On the Medicalization of Deviance and Social Control" in Critical Psychiatry: the Politics of Mental Health edited by David Ingleby (Pantheon, 1980); O'Leary, James C. "An analysis of the legal issue surrounding the forced use of Ritalin: protecting a child's right to 'just say no'" [Note]. New England Law Review. 1993 Summer; 27(4): 1173-1209.; Breggin, Peter R. Breggin, Ginger Ross "The Hazards of Treating 'Attention-Deficit/Hyperactivity Disorder' with Methylphenidate (Ritalin)" Journal of college student psychotherapy. 1995 v 10 n 2 Page: 55; Kolata, Gina "Boom in Ritalin sales raises ethical issues" [News]. New York Times. 1996 May 15: C8.; LynNell Hancock, "Mother's Little Helper: With Ritalin, the Son Also Rises," Newsweek 18 March 1996 pp. 50-56; Jennifer Cunningham, "A deficit of education", Living Marxism issue 88, March 1996; Diller, Lawrence H., "The run on Ritalin: attention deficit disorder and stimulant treatment in the 1990s." Hastings Center Report. 1996 Mar-Apr.; 26(2): 12-18, and Kristin Leutwyler, "Paying Attention" Scientific American August 1996, pp. 12-14.
[8]Furthermore, the whole area of psychiatric ethics is underdeveloped, and the recent advances and developments that have occurred in the area in the last few years may have profound effects on the direction of subsequent discussion. So it is important for several reasons that we be careful how we frame these issues.
[9]See my article "Prozac, Psychiatry, and Political Activism," published in Clio's Psyche 3 (2) September 1996, pp. 55-56.
[10]The diagnostic criteria for ADHD are the following:
A. Either (1) or (2):
(1) six (or more) of the following symptoms of inattention have persisted for at least 6 months to a degree that is maladaptive and inconsistent with developmental level:
Inattention
(a) often fails to give close attention to details or makes careless mistakes in schoolwork, work, or other activities
(b) often has difficulties sustaining attention in tasks or play activities
(c) often does not seem to listen when spoken to directly
(d) often does not follow through on instructions and fails to finish schoolwork, chores, or duties in the workplace (not due to oppositional behavior or failure to understand instructions)
(e) often has difficulty organizing tasks and activities
(f) often avoids, dislikes, or is reluctant to engage in tasks that require sustained mental effort (such as schoolwork or homework)
(g) often loses things necessary for tasks or activities (e.g., toys, school assignments, pencils, books, or tools)
(h) is often easily distracted by extraneous stimuli
(i) is often forgetful in daily activities
(2) six (or more) of the following symptoms of hyperactivity-impulsivity have persisted for at least 6 months to a degree that is maladaptive and inconsistent with developmental level:
Hyperactivity
(a) often fidgets with hands or feet or squirms in seat
(b) often leaves seat in classroom or in other situations in which remaining seated is expected
(c) often runs about or climbs excessively in situations in which it is inappropriate (in adolescents or adults, may be limited to subjective feelings of restlessness)
(d) often has difficulty playing or engaging in leisure activities quietly
(e) is often "on the go" or often acts as if "driven by a motor"
(f) often talks excessively
Impulsivity
(g) often blurts out answers before questions have been completed
(h) often has difficulty awaiting turn
(i) often interrupts or intrudes on others (e.g., butts into conversations or games)
B. Some hyperactive-impulsive or inattentive symptoms that caused impairment were present before age 7 years.
C. Some impairment from the symptoms is present in two or more settings (e.g., at school [or work] and at home).
D. There must be clear evidence of clinically significant impairment in social, academic, or occupational functioning.
E. The symptoms do not occur exclusively during the course of a Pervasive Developmental Disorder, Schizophrenia, or other Psychotic Disorder and are not better accounted for by another mental disorder (e.g., Mood Disorder, Anxiety Disorder, Dissociative Disorder, or a Personality Disorder). (DSM IV, pp. 83-5)
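Purely as an illustration of the logical structure of criteria A through E above, and not as anything drawn from DSM-IV itself, the decision rule can be sketched in a few lines of Python. The thresholds (six of nine symptoms from either list, at least six months' duration, onset before age seven, impairment in two or more settings) are taken from the quoted criteria; the function and parameter names are my own invention for the example.

# A schematic, hypothetical restatement of DSM-IV criteria A-E for ADHD.
# Thresholds come from the criteria quoted above; all names are illustrative only.
def meets_adhd_criteria(
    inattention_symptoms: int,                 # how many of the nine inattention items (a)-(i) apply
    hyperactivity_impulsivity_symptoms: int,   # how many of the nine hyperactivity-impulsivity items apply
    duration_months: int,                      # how long the symptoms have persisted
    onset_before_age_7: bool,                  # criterion B
    impaired_settings: int,                    # number of settings (e.g., school, home) showing impairment
    clinically_significant_impairment: bool,   # criterion D
    better_explained_by_other_disorder: bool,  # criterion E (exclusions)
) -> bool:
    # Criterion A: six or more symptoms from either list, persisting at least 6 months
    criterion_a = (
        (inattention_symptoms >= 6 or hyperactivity_impulsivity_symptoms >= 6)
        and duration_months >= 6
    )
    criterion_b = onset_before_age_7
    criterion_c = impaired_settings >= 2
    criterion_d = clinically_significant_impairment
    criterion_e = not better_explained_by_other_disorder
    return criterion_a and criterion_b and criterion_c and criterion_d and criterion_e

What such a sketch cannot capture, of course, is the evaluative work discussed in the body of the paper: whether a child "often" fidgets, or whether the behavior is "maladaptive and inconsistent with developmental level," is left to the judgment of the clinician.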
[11]Dorothea M. Ross and Sheila A. Ross, Hyperactivity: Current Issues, Research, and Theory (2nd ed.), John Wiley & Sons, New York, 1982. Page 13.
[12]Brian E. Leonard, Fundamentals of Psychopharmacology, John Wiley & Sons, Chichester, 1992. Page 224.
[13]For a recent short review of this evidence, see the article in Scientific American from August 1996, "Paying Attention."
[14]See LynNell Hancock, "Mother's Little Helper: With Ritalin, the Son Also Rises."
[15]Philip Graham "Ethics and child psychiatry," in Sidney Block and Paul Chodoff (eds.) Psychiatric Ethics (2nd ed.) Oxford, Oxford University Press, 1991. Page 347.
[16]Here is a very selective bibliography on this topic: Arthur Caplan, H. Tristram Englehardt, Jr., and James J. McCartney Concepts of Health and Disease: Interdisciplinary Perspectives Addison-Wesley Publishing Company, Reading, MA, 1981; Robert M. Veatch, "The Medical Model: Its Nature and Problems," and Christopher Boorse, "What a Theory of Mental Health Should Be," both reprinted in Psychiatry and Ethics, edited by Rem B. Edwards, Prometheus Books, Buffalo, 1982; Charles M. Culver and Bernard Gert Philosophy in Medicine: Conceptual and Ethical Issues in Medicine and Philosophy Oxford University Press, New York, 1982; Lennart Nordenfelt and B. Ingemar B. Lindahl (editors) Health, Disease, and Causal Explanation in Medicine (Philosophy and Medicine Volume 16) D. Reidel Pub. Co., Dordrecht, 1984; Martin Roth and Jerome Kroll The Reality of Mental Illness Cambridge University Press, Cambridge, 1986; Reznek, L. (1988). The Nature of Disease. London, Routledge & Kegan Paul; Arthur Caplan “The Concepts of Health and Disease,” in Robert Veatch (editor) Medical Ethics Jones and Bartlett Publishers, Boston, 1989; K.W.M. Fulford Moral Theory and Medical Practice Cambridge University Press, Cambridge, 1989; Lawrie Reznek, The Philosophical Defence of Psychiatry, Chapter 10, Routledge, London, 1991; Philosophical Perspectives on Psychiatric Classification edited by John Sadler et al. (Johns Hopkins University Press, Baltimore, 1994), [henceforth PPPC]; H. Tristram Englehardt, Jr. The Foundations of Bioethics (Second Edition) Oxford University Press, New York, 1996, Chapter 5.
[17]K.W.M. Fulford, “Closet Logics: Hidden Conceptual Elements in the DSM and ICD Classifications of Mental Disorders,” in PPPC, p. 216.
[18]Ibid., pp. 219-220.
[19]Ibid., p. 229.
[20]Ibid., p. 230.
[21]Ibid., p. 222.
[22]Fulford discusses alcoholism at length in Moral Theory and Medical Practice, pp. 154-164.
[23]Moral Theory and Medical Practice, p. xiv.
[24]I want to pay particular attention to the subtleties of the ways in which normative judgments go into the classification of particular diseases, and we should be on our guard against taking over-general slogans too literally. There are ways in which disease classification is dependent on normative judgments, but there are also ways in which the classification is quite independent of other normative judgments, and it is a complex matter to sort out in what ways a disease classification is normative. Furthermore, some disease concepts are far more normatively loaded than others, and are loaded with much more controversial normative assumptions. This much is relatively obvious even from reading the definition of disorder in the Introduction to DSM-IV (which was also used in DSM-III and DSM-III-R).
... each of the mental disorders is conceptualized as a clinically significant behavioral or psychological syndrome or pattern that occurs in an individual and that is associated with present distress (e.g., a painful symptom) or disability (i.e., impairment in one or more important areas of functioning) or with a significantly increased risk of suffering death, pain, disability, or an important loss of freedom. In addition, this syndrome or pattern must not be merely an expectable and culturally sanctioned response to a particular event, for example, the death of a loved one. Whatever its original cause, it must currently be considered a manifestation of a behavioral, psychological, or biological dysfunction in the individual. Neither deviant behavior (e.g., political, religious, or sexual) nor conflicts that are primarily between the individual and society are mental disorders unless the deviance or conflict is a symptom of a dysfunction in the individual, as described above. (DSM-IV, American Psychiatric Association, Washington, DC, 1994, pp. xxi-xxii.)
Obviously philosophers could pick holes in this definition for its circularity and vagueness. In fairness to the writers of DSM-IV, they do preface the definition with an admission that it is inadequate, and with the minimal justification that they have not been able to find a better one. I quote it to highlight the idea that values inevitably come into play in the interpretation of what are clinically significant or deviant forms of behavior, important areas of functioning, significantly increased risks, important losses of freedom, and so on. Furthermore, the way values come into play is complex. Disorders are not simply defined as undesirable conditions or conditions causing undesirable behavior. There is a whole host of other evaluative considerations that come into play when determining whether a particular condition, such as hyperactivity, is a disorder.
[25]Miller, David and Kenneth Blum, Overload: Attention Deficit Disorder and the Addictive Brain .
[26]See Mark S. Gold The Good News About Panic, Anxiety, and Phobias, Bantam Books, New York, 1989, p. 185.
[27]There is, I am sure, a quote of Freud to the same effect.
[28]This idea seems to have found a recent proponent in Louise Armstrong And They Call It Help: The Psychiatric Policing of America's Children Addison-Wesley, Reading, MA, 1993. See especially Chapter 8: The School Connection I. See also Schrag and Divorky, Op. cit.
[29]This has been discussed by Peter Kramer in connection with antidepressants in his Listening to Prozac Viking; New York, 1993.
[30]Consider the parallel case of depression. Antidepressants do not force more cheerful views of the world on patients, even if they do enable them to avoid despair and bleak views of their lives. Emotions have a cognitive component, and changing a person's emotions may be more than just changing their affect, so taking antidepressants may affect their intellectual views as well. This is often seen as empowering.
[31]I aim to write more fully about this issue in a paper on psychopharmacology and autonomy.
[32]Closet Logics, p. 215.
[33]I read this paper to the Philosophy Department at Loyola University Chicago, and received some useful comments. My thanks to Georgeann Higgins for her comments on the substance and style of an earlier draft.