The Troubled History of Psychiatry


Modern medicine can be seen as a quest to understand pathogenesis, the biological cause of an illness. Once pathogenesis—the word comes from the Greek pathos (suffering) and genesis (origin)—has been established by scientific experiment, accurate diagnoses can be made, and targeted therapies developed. In the early years of the AIDS epidemic, there were all kinds of theories about what was causing it: toxicity from drug use during sex, allergic reactions to semen, and so on. Only after the discovery of the human immunodeficiency virus helped lay such conjectures to rest did it become possible to use specific blood tests for diagnosis and, eventually, to develop antiretroviral drugs that suppress the virus and allow immune defenses to recover.

Sometimes a disease’s pathogenesis is surprising. As a medical student, I was taught that peptic ulcers were often caused by stress; treatments included bed rest and a soothing diet rich in milk. Anyone who had suggested that ulcers were the result of bacterial infection would have been thought crazy. The prevailing view was that no bacterium could thrive in the acidic environment of the stomach. But in 1982 two Australian researchers (who later won a Nobel Prize for their work) proposed that a bacterium called Helicobacter pylori was crucial to the onset of many peptic ulcers. Although the hypothesis was met with widespread scorn, experimental evidence gradually became conclusive. Now ulcers are routinely healed with antibiotics.

But what can medicine do when pathogenesis remains elusive? That’s a question that has bedevilled the field of psychiatry for nearly a century and a half. In “Mind Fixers” (Norton), Anne Harrington, a history-of-science professor at Harvard, follows “psychiatry’s troubled search for the biology of mental illness,” deftly tracing a progression of paradigms adopted by neurologists, psychiatrists, and psychologists, as well as patients and their advocates.

Her narrative begins in the late nineteenth century, when researchers explored the brain’s anatomy in an attempt to identify the origins of mental disorders. The studies ultimately proved fruitless, and their failure produced a split in the field. Some psychiatrists sought nonbiological causes, including psychoanalytic ones, for mental disorders. Others doubled down on the biological approach and, as she writes, “increasingly pursued a hodgepodge of theories and projects, many of which, in hindsight, look both ill-considered and incautious.” The split is still evident today.

The history that Harrington relays is a series of pendulum swings. For much of the book, touted breakthroughs disappoint, discredited dogmas give rise to counter-dogmas, treatments are influenced by the financial interests of the pharmaceutical industry, and real harm is done to patients and their loved ones. One thing that becomes apparent is that, when pathogenesis is absent, historical events and cultural shifts have an outsized influence on prevailing views on causes and treatments. By charting our fluctuating beliefs about our own minds, Harrington effectively tells a story about the twentieth century itself.

In 1885, the Boston Medical and Surgical Journal noted, “The increase in the number of the insane has been exceptionally rapid in the last decade.” Mental asylums built earlier in the century were overflowing with patients. Harrington points out that the asylum may have “created its own expanding clientele,” but it’s possible that insanity really was on the rise, in part because of the rapid spread of syphilis. What we now know to be a late stage of the disease was at the time termed “general paralysis of the insane.” Patients were afflicted by dementia and grandiose delusions and developed a wobbly gait. Toward the end of the century, as many as one in five people entering asylums had general paralysis of the insane.

Proof of a causal relationship between the condition and syphilis came in 1897, and marked the first time, Harrington writes, that “psychiatry had discovered a specific biological cause for a common mental illness.” The discovery was made by the psychiatrist Richard von Krafft-Ebing (today best known for “Psychopathia Sexualis,” his study of sexual “perversion”) and his assistant Josef Adolf Hirschl. They devised an experiment that made use of a fact that was already known: syphilis could be contracted only once. The pair took pus from the sores of syphilitics and injected it into patients suffering from general paralysis of the insane. Then they watched to see if the test subjects became infected. Any patient who did could be said with certainty not to have had the disease before. As it turned out, though, none of the subjects became infected, leading the researchers to conclude that the condition arose from previous infection with syphilis.

This apparent validation of the biological approach was influential. “If it could be done once,” Harrington writes, “maybe it could be done again.” But the work on syphilis proved to be something of a dead end. Neurologists of the time, knowing nothing of brain chemistry, were heavily focussed on what could be observed at autopsy, but there were many mental illnesses that left no trace in the solid tissue of the brain. Harrington frames this outcome in the Cartesian terms of a mind-body dualism: “Brain anatomists had failed so miserably because they focused on the brain at the expense of the mind.”

Meanwhile, two neurologists, Pierre Janet and Sigmund Freud, had been exploring a condition that affected both mind and body and that left no detectable trace in brain tissue: hysteria. The symptoms included wild swings of emotion, tremors, catatonia, and convulsions. Both men had studied under Jean-Martin Charcot, who believed that hysteria could arise from traumatic events as well as from physiological causes. Janet contended that patients “split off” memories of traumatic events and manifested them in an array of physical symptoms. He advocated hypnosis as a means of accessing these memories and discovering the causes of a patient’s malady. Freud believed that traumatic memories were repressed and consigned to the unconscious. He developed an interview method to bring them to consciousness, interpreted dreams, and argued that nearly all neuroses arose from repressed “sexual impressions.”

Freud acknowledged the fact “that the case histories I write should read like short stories and that, as one might say, they lack the serious stamp of science.” He justified the approach by pointing to the inefficacy of other methods and asserting that there was “an intimate connection between the story of the patient’s sufferings and the symptoms of his illness.” Many neurologists, responding to the demand for confessional healing, gave up on anatomy and adopted psychotherapeutics.

Soon, however, the limits of this approach, too, were exposed. During the First World War, men who returned from the trenches apparently uninjured displayed physical symptoms associated with hysteria. Clearly, they couldn’t all be manifesting neuroses caused by repressed sexual fantasies. The English physician Charles Myers coined the term “shell shock,” proposing a physiological cause: damage to the nervous system from the shock waves of artillery explosions. Yet that explanation wasn’t entirely satisfactory, either. Sufferers included soldiers who had not been in the trenches or exposed to bombing.

Harrington commends physicians who charted a middle course. Adolf Meyer, a Swiss-born physician who, in 1910, became the first director of the psychiatry clinic at the Johns Hopkins Hospital, advocated an approach he called, variously, “psychobiology” and “common sense” psychiatry—the gathering of data without a guiding dogma. Meanwhile, in Europe, Eugen Bleuler, credited with coining the term “schizophrenia,” took a view somewhat similar to Meyer’s and incurred the wrath of Freud. In 1911, Bleuler left the International Psychoanalytical Association. “Saying ‘he who is not with us is against us’ or ‘all or nothing’ is necessary for religious communities and useful for political parties,” he wrote in his resignation letter. “All the same I find that it is harmful for science.”

As the century progressed, the schism between the biological camp and the psychoanalytic camp widened. With advances in bacteriology, the biological camp embraced the idea that microbes in the intestine, the mouth, or the sinuses could release toxins that impaired brain functions. Harrington writes of schizophrenia treatments that included “removing teeth, appendixes, ovaries, testes, colons, and more.”

The most notorious mid-century surgical intervention was the lobotomy. Pioneered in the thirties, by Egas Moniz, whose work later won him the Nobel Prize, the treatment reached a grotesque apogee in America, with Walter Freeman’s popularization of the transorbital lobotomy, which involved severing connections near the prefrontal cortex with an icepick-like instrument inserted through the eye sockets. Freeman crisscrossed the country—a trip he called Operation Icepick—proselytizing for the technique in state mental hospitals.

On the nonbiological, analytic side of the discipline, world events again proved pivotal. The postwar period, dubbed “The Age of Anxiety” by W. H. Auden, was clouded by fears about the power of nuclear weapons, the Cold War arms race, and the possibility that communist spies were infiltrating society. In 1948, President Harry Truman told the annual meeting of the American Psychiatric Association, “The greatest prerequisite for peace, which is uppermost in the minds and hearts of all of us, must be sanity—sanity in its broadest sense, which permits clear thinking on the part of all citizens.”

Accordingly, American neo-Freudians substituted anxiety for sex as the underlying cause of psychological maladies. They replaced Freudian tropes with a focus on family dynamics, especially the need for emotional security in early childhood. Mothers bore the brunt of this new diagnostic scrutiny: overprotective mothers stunted their children’s maturation and were, according to a leading American psychiatrist, “our gravest menace” in the fight against communism; excessively permissive mothers produced children who would become juvenile delinquents; a mother who smothered a son with affection risked making him homosexual, while the undemonstrative “refrigerator mother” was blamed for what is now diagnosed as autism.

In 1963, Betty Friedan’s “Feminine Mystique” denounced neo-Freudian mother blamers. She wrote, “It was suddenly discovered that the mother could be blamed for almost everything. In every case history of a troubled child . . . could be found a mother.” Her indictment was later taken up by the San Francisco Redstockings, a group of female psychotherapists who distributed literature to their A.P.A. colleagues which declared, “Mother is not public enemy number one. Start looking for the real enemy.”

Feminism furnished just one of several sweeping attacks on psychiatry that saw the enterprise as a tool of social control. In 1961, three influential critiques appeared. “Asylums,” by the sociologist Erving Goffman, compared mental hospitals to prisons and concentration camps, places where personal autonomy was stripped from “inmates.” Michel Foucault’s history of psychiatry, “Madness and Civilization,” cast the mentally ill as an oppressed group and the medical establishment as a tool for suppressing resistance. Finally, Thomas Szasz, in “The Myth of Mental Illness,” argued that psychiatric diagnoses were too vague to meet scientific medical standards and that it was a mistake to label people as being ill when they were really, as he termed it, “disabled by living”—dealing with vicissitudes that were a natural part of life.

By the early seventies, such critiques had entered the mainstream. Activists created the Insane Liberation Front, the Mental Patients’ Liberation Project, and the Network Against Psychiatric Assault. Psychiatry, they argued, labelled people disturbed in order to deprive them of freedom.

Challenges to the legitimacy of psychiatry forced the profession to examine the fundamental question of what did and did not constitute mental illness. Homosexuality, for instance, had been considered a psychiatric disorder since the time of Krafft-Ebing. But, in 1972, the annual A.P.A. meeting featured a panel discussion titled “Psychiatry: Friend or Foe to Homosexuals?” One panelist, disguised with a mask and a wig, and using a voice-distorting microphone, said, “I am a homosexual. I am a psychiatrist. I, like most of you in this room, am a member of the A.P.A. and am proud to be a member.” He addressed the emotional suffering caused by social attitudes, and called for the embrace of “that little piece of humanity called homosexuality.” He received a standing ovation.

Homosexuality was still listed as a disorder in the Diagnostic and Statistical Manual of Mental Disorders, even as many psychiatrists clearly held a different view. Robert Spitzer, an eminent psychiatrist and a key architect of the DSM, was put in charge of considering the issue, and devised what has become a working criterion for mental illness: “For a behavior to be termed a psychiatric disorder it had to be regularly accompanied by subjective distress and/or ‘some generalized impairment in social effectiveness or functioning.’ ” Spitzer noted that plenty of homosexuals didn’t suffer distress (except as a result of stigma and discrimination) and had no difficulty functioning socially. In December, 1973, the A.P.A. removed homosexuality from the DSM.

Today, around one in six Americans takes a psychotropic drug of some kind. The medication era stretches back more than sixty years and is the most significant legacy of the biological approach to psychiatry. It has its roots in experiments on rodents that suggested paranoid behavior was linked to high dopamine levels in the brain. The idea that brain chemistry could offer a pathogenesis for mental illness led researchers to hunt for chemical imbalances, and for medications to treat them.

In 1954, the F.D.A., for the first time, approved a drug as a treatment for a mental disorder: the antipsychotic chlorpromazine (marketed with the brand name Thorazine). The pharmaceutical industry vigorously promoted it as a biological solution to a chemical problem. One ad claimed that Thorazine “reduces or eliminates the need for restraint and seclusion; improves ward morale; speeds release of hospitalized patients; reduces destruction of personal and hospital property.” By 1964, some fifty million prescriptions had been filled. The income of its maker—Smith, Kline & French—increased eightfold in a period of fifteen years.

Next came sedatives. Approved in 1955, meprobamate (marketed as Miltown and Equanil) was hailed as a “peace pill” and an “emotional aspirin.” Within a year, it was the best-selling drug in America, and by the close of the fifties one in every three prescriptions written in the United States was for meprobamate. An alternative, Valium, introduced in 1963, went on to become the most commonly prescribed drug in the country, a distinction it held until 1982.

One of the first drugs to target depression was Elavil, introduced in 1961, which boosted available levels of norepinephrine, a neurotransmitter related to adrenaline. Again there was a marketing blitz. Harrington mentions “Symposium in Blues,” a promotional record featuring Duke Ellington, Louis Armstrong, and Artie Shaw. Released by RCA Victor, it was paid for by Merck and distributed to doctors. The liner notes included claims about the benefits that patients would experience if the drug was prescribed for them.

Focus shifted from norepinephrine to the neurotransmitter serotonin, and, in 1988, Prozac appeared, soon followed by other selective serotonin reuptake inhibitors (SSRIs). Promotional material from GlaxoSmithKline couched the benefits of its SSRI Paxil in cozy terms: “Just as a cake recipe requires you to use flour, sugar, and baking powder in the right amounts, your brain needs a fine chemical balance.”

Yet, despite the phenomenal success of Prozac, and of other SSRIs, no one has been able to produce definitive experimental proof establishing neurochemical imbalances as the pathogenesis of mental illness. Indeed, quite a lot of evidence calls the assumption into question. Clinical trials have stirred up intense controversy about whether antidepressants greatly outperform the placebo effect. And, while SSRIs do boost serotonin, it doesn’t appear that people with depression have unusually low serotonin levels. What’s more, advances in psychopharmacology have been incremental at best; Harrington quotes the eminent psychiatrist Steven Hyman’s assessment that “no new drug targets or therapeutic mechanisms of real significance have been developed for more than four decades.” This doesn’t mean that the available psychiatric medication isn’t beneficial. But some drugs seem to work well for some people and not others, and a patient who gets no benefit from one may do well on another. For a psychiatrist, writing a prescription remains as much an art as a science.

Harrington’s book closes on a sombre note. In America, the final decade of the twentieth century was declared the Decade of the Brain. But, in 2010, the director of the National Institute of Mental Health reflected that the initiative hadn’t produced any marked increase in rates of recovery from mental illness. Harrington calls for an end to triumphalist claims and urges a willingness to acknowledge what we don’t know.

Although psychiatry has yet to find the pathogenesis of most mental illness, it’s important to remember that medical treatment is often beneficial even when pathogenesis remains unknown. After all, what I was taught about peptic ulcers and stress wasn’t entirely useless; though we now know that stress doesn’t cause ulcers, it can exacerbate their symptoms. Even in instances where the discovery of pathogenesis has produced medical successes, it has often worked in tandem with other factors. Without the discovery of H.I.V. we would not have antiretroviral drugs, and yet curbing the spread of the disease owes much to simple innovations, such as safe-sex education and the distribution of free needles and condoms.

Still, the search for pathogenesis in psychiatry continues. Genetic analysis may one day shed light on the causes of schizophrenia, although, even if current hypotheses are borne out, it would likely take years for therapies to be developed. Recent interest in the body’s microbiome has renewed scrutiny of gut bacteria; it’s possible that bacterial imbalance alters the body’s metabolism of dopamine and other molecules that may contribute to depression. Meanwhile, Edward Bullmore, the chief of psychiatry at Cambridge University, argues that the pathogenesis of mental disorders will be deciphered by linking the workings of the mind to those of the immune system. Bullmore’s evidence, presented in his recent book, “The Inflamed Mind” (Picador), is largely epidemiological: inflammatory illness in childhood is associated with adult depression, and people with inflammatory autoimmune disorders like rheumatoid arthritis are often depressed.

It’s too early to say whether any of these hypotheses could hold the key to mental illness. More important, we’d do better not to set so much store by the idea of a single key. It’s more useful to think in terms of cumulative advances in the field. Many people have been helped, and the stigma both of severe mental illness and of fleeting depressive episodes has been vastly reduced. Practitioners and potential patients are more knowledgeable than ever about the range of treatments available. In addition to medication and psychoanalytic talk therapy, other approaches have emerged, such as cognitive-behavioral therapy, propounded in the seventies by the psychiatrist Aaron Beck. He posited that depressed individuals habitually felt unworthy and helpless, and that such beliefs could be “unlearned” with training. An experiment in 1977 showed that cognitive-behavioral therapy outperformed one of the leading antidepressants of the time. Thanks to neuroscience, we can demonstrate that cognitive-behavioral therapy causes neuronal changes in the brain. (This is also true of learning a new language or a musical instrument.) It may be that the more we discover about the brain, the easier it will be to disregard the apparent divide between mind and body.

In the late nineties, as an oncologist, I treated a teacher in her fifties suffering from metastatic melanoma. It had spread from her upper arm to lymph nodes in one of her armpits and her neck. The surgeon had removed as much of the disease as he could, and referred her to me because I had previously conducted early clinical trials of an agent called interferon. Interferon is a naturally occurring protein that our bodies produce as part of the immune response to infection. Initially hailed as a possible panacea for all cancers, interferon eventually proved beneficial for some twenty per cent of patients with metastatic melanoma. But the treatment required high doses, which sometimes caused considerable side effects, including depression.

My patient had been widowed and she had no children. “My pupils are my kids,” she said. Unable to teach, she missed the uplift of the classroom. She told me that she was anxious and had been unable to sleep well; she knew that the treatment might not help, and would make her feel sick. In the past, she had experienced depression, and, before I administered interferon, I wanted her to consult a psychiatrist at the hospital who served as a liaison between his department and the oncology unit. He was in his early sixties, with a graying beard and a wry sense of humor: the staff often remarked that he reminded them of Freud. But, unlike Freud, he was not dogmatic. He treated his patients, variously, with medications, talk therapy, hypnosis, and relaxation techniques, often combining several of these.

It was a pragmatic, empirical approach, trying to find what worked for each patient. I admired his humility and reflected that his field was not so unlike my own, where, despite a growing knowledge of the pathogenesis of cancer, one could not precisely predict whether a patient would benefit from a treatment or suffer pointlessly from its side effects. In some sense, everything my colleague and I did for the patient was in the end biological. Words can alter, for better or worse, the chemical transmitters and circuits of our brain, just as drugs or electroconvulsive therapy can. We still don’t fully understand how this occurs. But we do know that all these treatments are given with a common purpose based on hope, a feeling that surely has its own therapeutic biology. ♦


