I first came across the term “pathological science” in 1989 while researching a book about a research fiasco known as cold fusion, which was quickly becoming a case study in how not to do science. The simple story is that two chemists working at the University of Utah claimed they had created nuclear fusion in a beaker of heavy water with a palladium electrode, all powered by an electric current from a wall socket. Their most compelling evidence was that the device had exploded once in the lab, and a physicist at nearby Brigham Young University, who should have known better, appeared to be working behind the scenes to steal their “invention.” Cold fusion was touted as a possible source of infinite clean energy and remained a prominent fixture in the news, often front-page, for months.
It took only three months, though, before serious researchers concluded it was as ludicrous as it sounded. In short, no such thing as “cold fusion” existed. When the experiments were done correctly, and they were relatively easy to do, the original flamboyant claims were irreproducible. As hope faded, physicists and chemists took to faxing around copies of a 1953 lecture given by the Nobel laureate chemist Irving Langmuir on this topic of science gone awry.
This is what Langmuir called “pathological science,” which he defined simply as the “science of things that aren’t so.” His lecture discussed a series of examples, already fading from institutional memory back then — N rays, the Davis-Barnes effect, the Allison effect — and the surprisingly consistent manner in which pathological science plays out. Cold fusion was just the latest thing that wasn’t so. It was by no means the first and certainly not the last.
The concept of pathological science may be among the most important, most misunderstood, and least discussed in all of science. It may describe an exceedingly common state of affairs, and yet, to discuss that possibility is often to be perceived as challenging the primacy of science and so to seem anti-science. As such, the only researchers who tend to bring it up are from disciplines such as experimental physics, which is relatively immune to the pathology (for reasons we’ll discuss). Other researchers will then typically respond with a “what the hell do they know about our discipline?” kind of intellectual shrug.
The implication is that scientists are not supposed to criticize the scientific endeavor, let alone the scientific endeavors of others, and yet the immune system of science is built on just that kind of criticism: institutionalized skepticism, as it’s often known. The situation is problematic. To understand what’s likely happening in the kinds of health-related research disciplines discussed on the CrossFit website, though, it may be necessary to have a familiarity with pathological science — what it looks like and, well, smells like.
Defining the pathology in pathological science
Let’s start with what sets pathological science apart from normal science. It is not whether some experimental result or conclusion happens to be wrong. Being wrong is a natural state of scientific research. If a researcher publishes a paper, makes a claim, and none of his peers take it seriously, if it remains on the fringes of research, that’s also a natural occurrence even in a healthy scientific endeavor. Ideas come and go. They have to earn credibility — as do the researchers who promote them — and other researchers have to see the ideas as sufficiently important and likely to be right that they are worth the time and effort to study. At that point, ideally, experimental results are either refuted or they’re not, and the research community embraces them or moves on.
The pursuit becomes pathological only when these bogus phenomena are accepted as more likely to be real than not, when they become serious subjects of scientific inquiry. In the case of Langmuir’s examples, hundreds of papers had been published on each, and that was in an era when hundreds of papers was a significant number. With this experimental support, these non-existent phenomena survived as subjects of research and discussion for 10 to 20 years before fading away. “The ratio of supporters to critics rises up to somewhere near 50%,” Langmuir noted, “and then falls gradually to oblivion.”
Another critical thing to understand about pathological science, as Langmuir described it, is that it is not the result of scientific misconduct — i.e., fraud. If someone commits fraud and uses manipulated data to make a claim for a meaningful new discovery, other researchers are going to try and replicate it and fail. The fraud will be exposed. That’s the end of the story. When scientists expose fraud, they’re doing their job as scientists. In Langmuir’s examples, the researchers promoting pathological science hadn’t faked or manipulated evidence in a way that would get them, if caught, expelled from the field or in danger of losing funding. They weren’t trying to deceive their peers, which is the essence of fraud; instead, they were deceiving themselves. The deception was internal, not external.
The pathology at work is more akin to what the medical community would call malpractice. Another way to describe it would simply be “bad science” (which was the title of my 1993 book on cold fusion). “These are cases,” as Langmuir put it, “where there is no dishonesty involved but where people are tricked into false results by a lack of understanding about what human beings can do to themselves in the way of being led astray by subjective effects, wishful thinking or threshold interactions.”
The relevant question today, and the reason for this post (and arguably for the direction of my career as a science journalist since the mid-1980s), is this: How common is this kind of pathological science? The subtext of many of the posts on CrossFit.com is that pathological science may be more the norm than the exception, specifically in the disciplines of science that are relevant to our health. Is it?
Researchers and philosophers of science have typically discussed pathological science as though it exists only in discrete episodes like cold fusion or fringe fields like homeopathy — isolated infections in what are otherwise healthy endeavors. The researchers either can’t believe or refuse to believe that pathological science could be a common state of affairs, systemic infections rather than localized ones. Hence, they discuss it in a way that makes it appear relatively benign.
But the possibility exists that entire disciplines may be essentially pathological, generating unreliable knowledge day in and day out, producing meaningless noise, in a sense, rather than a meaningful signal. And they do so because the researchers involved simply lack the understanding about how easy it is to be led astray, or they lack the experimental wherewithal to prevent this from happening.
This idea of pathological science as a systemic problem was the implication of all three of my books on nutrition, obesity, and chronic disease: Good Calories, Bad Calories (2007), Why We Get Fat (2011), and The Case Against Sugar (2016). If even a portion of what I was arguing in these books is right — and Ivor Cummins, Mike Eades, Jason Fung, Tim Noakes, and others have been arguing the same on this website — then the existing research in these disciplines is pathological, and the researchers involved, with a few rare exceptions, have been incapable of producing meaningful science.
The Goal of Science?
To understand the problems with pathological science, we have to first understand the goal of functional science. Here it is simply: to establish reliable knowledge about the subject of investigation. This phrase, reliable knowledge, is one I also picked up in my cold fusion research, in this case from the philosopher of science John Ziman and his 1978 book by that title. In the research I was documenting and assessing in my books, the researchers involved had done this job so poorly that they either failed to understand that establishing reliable knowledge was the goal of science or they didn’t care.
I focused on this problem in the epilogue to Good Calories, Bad Calories (GCBC), which began with two quotes, one from Robert Merton and one from Richard Feynman, that capture the essence of this concept of establishing reliable knowledge.
Merton was a sociologist of science. Feynman was a working physicist — a theorist — and Nobel laureate. They were saying the same thing in two different ways. The goal of science, what “the community of science thus provides for,” is to assure that what we know is really so. If we’re doing that, we’re unlikely to be fooling ourselves (and we’re all too easy to fool). The knowledge we gain is reliable. We can trust it. We can rest our convictions on it. We can build on it to extend our knowledge further into the unknown. That’s the goal. If that goal can’t be established, in a healthy field of science the scientists will be the first to admit it. In pathological science, they’ll be the last.
To make this argument, I began the GCBC epilogue with what I considered an example of modern obesity research in all its pathology: two supposed authorities proposing in the very high-profile journal Science a century-old idea about preventing obesity — that if we all ate maybe 100 calories a day fewer than we normally do (say three bites fewer of a McDonald’s hamburger) none of us would get fat — and then observing that it still required an “empirical test” to judge if it was right.
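For readers who want to see the arithmetic such proposals lean on, here is a minimal sketch, in Python, of the naive energy-balance bookkeeping behind the 100-calorie idea. It assumes the conventional 3,500-kcal-per-pound-of-fat figure and no physiological compensation; these are illustrative assumptions, not numbers taken from the Science article.

```python
# Illustrative arithmetic only: the naive energy-balance bookkeeping such
# proposals rest on, using the textbook ~3,500 kcal-per-pound-of-fat rule.
# Figures are assumptions for the sake of the example, not from the article.

KCAL_PER_POUND_OF_FAT = 3500  # conventional approximation

def naive_fat_change_lbs(kcal_imbalance_per_day: float, days: int = 365) -> float:
    """Pounds of fat gained (+) or lost (-) if a fixed daily calorie surplus
    or deficit were converted directly into stored fat, with no compensation."""
    return kcal_imbalance_per_day * days / KCAL_PER_POUND_OF_FAT

# Trimming 100 kcal/day "predicts" roughly 10 pounds less fat per year ...
print(round(naive_fat_change_lbs(-100), 1))                  # -10.4

# ... and, compounded over 20 years, an implausible 200-plus pounds, which is
# exactly why the claim needed the empirical test the authors conceded was missing.
print(round(naive_fat_change_lbs(-100, days=20 * 365), 1))   # -208.6
```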
I suggested these researchers — not just the two authors of the Science article but the entire field — clearly had little real desire to know if what they were proposing was really so. If they had, the necessary empirical tests would have been done long ago and replicated many times since. After all, the idea and the proposal dated essentially to the early 1900s. I suggested that maybe the test had never been done because copious evidence already existed to refute the idea. From my perspective, these researchers seemed to be playing a game, and not a very serious one at that; they were pretending to be scientists rather than acting as scientists.
Here’s how I set up that idea and described the problem, contrasting it with an example from an era when this particular research discipline was still healthy, so that the difference would be self-evident:
In the 1890s, Francis Benedict and Wilbur Atwater, pioneers of the science of nutrition in the United States, spent a year of their lives testing the assumption that the law of energy conservation applied to humans as well as animals. They did so not because they doubted that it did, but precisely because it seemed so obvious. “No one would question” it, they wrote. “The quantitative demonstration is, however, desirable, and an attested method for such demonstration is of fundamental importance for the study of the general laws of metabolism of both matter and energy.”
This is how functioning science works. Outstanding questions are identified or hypotheses proposed; experimental tests are then established to either answer the questions or refute the hypotheses, regardless of how obviously true they might appear to be. If assertions are made without the empirical evidence to defend them, they are vigorously rebuked. In science, as [the sociologist of science Robert] Merton noted, progress is only made by first establishing whether one’s predecessors have erred or “have stopped before tracking down the implications of their results or have passed over in their work what is there to be seen by the fresh eye of another.” Each new claim to knowledge, therefore, has to be picked apart and appraised. Its shortcomings have to be established unequivocally before we can know what questions remain to be asked, and so what answers to seek — what we know is really so and what we don’t. “This unending exchange of critical judgment,” Merton wrote, “of praise and punishment, is developed in science to a degree that makes the monitoring of children’s behavior by their parents seem little more than child’s play.”
This institutionalized vigilance, “this unending exchange of critical judgment,” is nowhere to be found in the study of nutrition, chronic disease and obesity, and it hasn’t been for decades. For this reason, it is difficult to use the term “scientist” to describe those individuals who work in these disciplines and, indeed, I have actively avoided doing so in this book. It’s simply debatable, at best, whether what these individuals have practiced for the past fifty years and whether the culture they have created, as a result, can reasonably be described as science, as most working scientists or philosophers of science would typically characterize it. Individuals in these disciplines think of themselves as scientists; they use the terminology of science in their work and they certainly borrow the authority of science to communicate their beliefs to the general public, but “the results of their enterprise,” as Thomas Kuhn, author of The Structure of Scientific Revolutions, might have put it, “do not add up to science as we know it.”
What perhaps I should have mentioned in that epilogue, particularly as one of my two epigraphs was from Richard Feynman’s famous 1974 Caltech commencement address, is that Feynman was making precisely the same argument about the dismal condition of some unknown proportion of modern research in that presentation. Feynman was speaking specifically about psychology and education, even softer sciences than nutrition (if such a thing is possible), but he was also implying that the problem — the pathology — could be wider. He was suggesting entire disciplines of science were indeed pathological. And he pointed out that physicists were aware of these things because in learning the history of their discipline, they were also learning the many embarrassing ways physicists had managed to delude themselves in the past. This history and the lessons that emerged from it about avoiding self-delusion, Feynman said, were never taught explicitly in any course at Caltech; the expectation was that maybe the students “caught on by osmosis.” (And maybe they did at Caltech, he was implying, but clearly not elsewhere.)
Here’s Feynman, using a typically imaginative and colorful metaphor — Cargo Cults — to capture the pathology:
… the educational and psychological studies I mentioned are examples of what I would like to call Cargo Cult Science. In the South Seas there is a Cargo Cult of people. During the [Second World] war they saw airplanes land with lots of good materials, and they want the same thing to happen now. So they’ve arranged to make things like runways, to put fires along the sides of the runways, to make a wooden hut for a man to sit in, with two wooden pieces on his head like headphones and bars of bamboo sticking out like antennas — he’s the controller — and they wait for the airplanes to land. They’re doing everything right. The form is perfect. It looks exactly the way it looked before. But it doesn’t work. No airplanes land. So I call these things Cargo Cult Science, because they follow all the apparent precepts and forms of scientific investigation, but they’re missing something essential, because the planes don’t land.
Now it behooves me, of course, to tell you what they’re missing. But it would be just about as difficult to explain to the South Sea Islanders how they have to arrange things so that they get some wealth in their system. It is not something simple like telling them how to improve the shapes of the earphones. But there is one feature I notice that is generally missing in Cargo Cult Science. That is the idea that we all hope you have learned in studying science in school — we never explicitly say what this is, but just hope that you catch on by all the examples of scientific investigation. It is interesting, therefore, to bring it out now and speak of it explicitly. It’s a kind of scientific integrity, a principle of scientific thought that corresponds to a kind of utter honesty — a kind of leaning over backwards. For example, if you’re doing an experiment, you should report everything that you think might make it invalid — not only what you think is right about it: other causes that could possibly explain your results; and things you thought of that you’ve eliminated by some other experiment, and how they worked — to make sure the other fellow can tell they have been eliminated.
Details that could throw doubt on your interpretation must be given, if you know them. You must do the best you can — if you know anything at all wrong, or possibly wrong — to explain it. If you make a theory, for example, and advertise it, or put it out, then you must also put down all the facts that disagree with it, as well as those that agree with it. There is also a more subtle problem. When you have to put a lot of ideas together to make an elaborate theory, you want to make sure, when explaining what it fits, that those things it fits are not just the things that gave you the idea for the theory; but that the finished theory makes something else come out right, in addition.
In summary, the idea is to try to give all of the information to help others to judge the value of your contribution; not just the information that leads to judgment in one particular direction or another.
Feynman contrasted this “kind of utter honesty” with the advertising business, in which the object is to get consumers to believe something about a product so they’ll buy it. What is stated in an advertisement may be strictly true, but the goal of the ad is to mislead, nonetheless; to prompt consumers to judge the product based on only a very carefully determined selection of the evidence, not the entirety. Scientific integrity, on the other hand, requires that “leaning over backward” kind of honesty, to assure that evidence is never oversold.
If this kind of intellectual integrity is absent, Feynman implied, then the researchers have a motive other than not fooling themselves, other than finding out if what they know is really so. They’re not doing science; they’re doing something else. Career advancement — procuring funding, which often means getting your work in the news (i.e., selling it), and getting tenure — and ego reinforcement are two usual suspects.
In modern science, researchers are encouraged to promote their work and, regrettably, perhaps oversell the implications. Their goal is to induce highly cited journals to publish their papers and maximize funding. As such the (dis)honesty of advertising can be more the norm than the utter honesty of a healthy science. In public health and medicine, a common rationale is that people are dying; hence researchers have to act quickly, even if it means cutting corners or taking leaps of faith. Public health authorities, as I learned in my research, will convince themselves they have to act on premature evidence for the same reason. And they then have to oversell this evidence to us, such that we will act on it and our premature deaths will be postponed. Their jobs require that they believe and promote the idea that they know something reliable (eating low-fat diets, for instance, will make us live longer), even in the absence of the kind of rigorous experimental tests that would provide reliable knowledge.
When that kind of integrity is absent, Feynman implied, and regardless of the justification, the research discipline is a pathological one. The pursuit of reliable knowledge will be corrupted by these other ambitions and fail. So this kind of utter honesty, of bending over backward, is necessary to establish reliable knowledge, although it is clearly not sufficient, as we’ll discuss.
A brief lesson from the history of science
Worth knowing here is that neither Feynman nor Langmuir was saying anything new. Feynman could have argued that the reason these lessons were never taught explicitly at Caltech is that they are woven into the very fabric of the scientific enterprise. What Feynman and Langmuir said amounts to only minor variations on what Francis Bacon wrote 400 years ago when he essentially inaugurated the scientific method with his book Novum Organum (loose translation: “a new instrument of reasoning”).
Bacon argued that such a methodological process, a new way of thinking, was necessary because humans are incapable of seeing the world the way it is. Rather, we are hardwired to delude ourselves. Hence, we need a structured way to approach and understand the unknown that minimizes this all-too-human tendency, if we’re ever going to learn anything reliable about how the universe works. What Langmuir called “pathological science” and Feynman called “Cargo Cult Science,” Bacon called “wishful science” and made the point that it was what humans do naturally:
The human understanding is not a dry light, but is infused by desire and emotion, which give rise to ‘wishful science.’ For man prefers to believe what he wants to be true. He therefore rejects difficulties, being impatient of inquiry; sober things, because they restrict his hope; deeper parts of nature, because of his superstition; the light of experience, because of his arrogance and pride, lest his mind should seem to concern itself with things mean and transitory; things that are strange and contrary to all expectation, because of common opinion. In short, emotion in numerous often imperceptible ways pervades and infects the understanding.
Adhering to a strict methodology designed specifically to minimize these tendencies, according to Bacon’s perspective, was the only hope of making progress.
In 1865, Claude Bernard, the legendary French physiologist, made the same point again in his seminal book An Introduction to the Study of Experimental Medicine. Bernard was arguing for the necessity of experiment (active intervention by the researcher) in medicine to establish that what we think we know really is so. He was not discussing how all of us come to delude ourselves, as Bacon was, but specifically how those who think of themselves as scientists do:
Men who have excessive faith in their theories or ideas are not only ill prepared for making discoveries; they also make very poor observations. Of necessity, they observe with a preconceived idea, and when they devise an experiment, they can see, in its results, only a confirmation of their theory. In this way they distort observation and often neglect very important facts because they do not further their aim. This is what made us say elsewhere that we must never make experiments to confirm our ideas, but simply to control them; which means, in other terms, that one must accept the results of experiments as they come, with all their unexpectedness and irregularity.
But it happens further quite naturally that men who believe too firmly in their theories, do not believe enough in the theories of others. So the dominant idea of these despisers of their fellows is to find others’ theories faulty and to try to contradict them. The difficulty, for science, is still the same. They make experiments only to destroy a theory, instead of to seek the truth. At the same time, they make poor observations, because they choose among the results of their experiments only what suits their object, neglecting whatever is unrelated to it, and carefully setting aside everything which might tend toward the idea they wish to combat. By these two opposite roads, men are thus led to the same result, that is, to falsify science and the facts.
Bernard was implying that pathological science was common, at least in the 19th century. The question we’re interested in is whether it is still, over a century and a half later.
The short answer, again, is that it certainly could be.
Notes
*In this 1953 speech, Irving Langmuir provides examples of what he calls “pathological science” or “the science of things that aren’t so”. Each of the examples illustrates cases where a scientific team, or even a small scientific community, became convinced some physical phenomenon was occurring that was entirely illusory. Through these examples — including research on the existence of ESP (extrasensory perception) and UFOs, as well as multiple examples of basic physics and chemistry research — he describes six consistent symptoms of pathological science:
1. The maximum effect that is observed is produced by a causative agent of barely detectable intensity, and the magnitude of the effect is substantially independent of the intensity of the cause.
2. The effect is of a magnitude that remains close to the limit of detectability; or, many measurements are necessary because of the very low statistical significance of the results.
3. Claims of great accuracy.
4. Fantastic theories contrary to experience.
5. Criticisms are met by ad hoc excuses thought up on the spur of the moment.
6. The ratio of supporters to critics rises up to somewhere near 50% and then falls gradually into oblivion.
When these symptoms are in place, scientists can easily come to believe a phenomenon exists when it does not — and continue to believe it does even when evidence begins to indicate its falsity. In Langmuir’s examples, each fallacy is only snuffed out when an improved data-gathering methodology, or a clever scientist, is able to conclusively disprove its existence.
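As a purely illustrative aid (not anything Langmuir himself proposed), the six symptoms can be read as a rough diagnostic checklist. The Python sketch below simply encodes them as yes/no flags and counts how many an episode exhibits; the field names paraphrase the list above, and any threshold for calling an episode “pathological” would be a judgment call, not a published rule.

```python
# A toy checklist, assumed for illustration only; Langmuir offered no formal scoring rule.
from dataclasses import dataclass, fields

@dataclass
class LangmuirSymptoms:
    effect_independent_of_cause_intensity: bool = False      # symptom 1
    effect_at_limit_of_detectability: bool = False            # symptom 2
    claims_of_great_accuracy: bool = False                    # symptom 3
    fantastic_theories_contrary_to_experience: bool = False   # symptom 4
    criticisms_met_by_ad_hoc_excuses: bool = False            # symptom 5
    support_rises_toward_half_then_fades: bool = False        # symptom 6

    def count(self) -> int:
        """Number of symptoms an episode exhibits."""
        return sum(getattr(self, f.name) for f in fields(self))

# Hypothetical episode ticking four of the six boxes.
episode = LangmuirSymptoms(
    effect_at_limit_of_detectability=True,
    claims_of_great_accuracy=True,
    fantastic_theories_contrary_to_experience=True,
    criticisms_met_by_ad_hoc_excuses=True,
)
print(f"{episode.count()} of 6 symptoms present")
```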
Gary Taubes is co-founder of the Nutrition Science Initiative (NuSI) and an investigative science and health journalist. He is the author of The Case Against Sugar (2016), Why We Get Fat (2011), and Good Calories, Bad Calories (2007). Taubes was a contributing correspondent for the journal Science and a staff writer for Discover. As a freelancer, he has contributed articles to The Atlantic Monthly, The New York Times Magazine, Esquire, Slate, and many other publications. His work has been included in numerous “Best of” anthologies including The Best of the Best American Science Writing (2010). He is the first print journalist to be a three-time winner of the National Association of Science Writers Science-in-Society Journalism Award and the recipient of a Robert Wood Johnson Foundation Investigator Award in Health Policy Research. Taubes received his B.S. in physics from Harvard University, his M.S. in engineering from Stanford University, and his M.S. in journalism from Columbia University.