
The Case for Keto — Exclusive Preview #1

By Gary Taubes, December 25, 2020

The article below is a portion of Chapter 12 from Gary Taubes’ book The Case for Keto, presented here with permission from the author and publisher. The book is set for publication on Dec. 29, 2020, and is available for pre-order here.



Chapter 12: The Path Well Traveled, Part 1

Given a choice between a hypothesis and an experience, go with the experience.


“Someone asked me the other day how I was losing weight. I told them I eat less than 20g of carbs a day. They proceeded to freak the heck out. Told me how dangerous it was. (No.) Asked me if my doc knew. (Yes.) Told me that carbs were essential to human survival. Finally I was like, dude, do you really believe I was healthier 90 pounds heavier than I am now? I really think he wanted to say yes but was worried that I was going to punch his lights out. He probably would’ve been right.”

—Rachelle Ploetz, on her Instagram account #eatbaconloseweight


The question Rachelle Ploetz asked speaks to the very heart of this endlessly controversial subject: “Dude, do you really believe I was healthier 90 pounds heavier than I am now?” Ultimately the goal is to be healthy. Whether ninety pounds are lost or ten, it’s quite possible that a way of eating that induces fat loss becomes harmful as the years go by.

Rachelle’s experience presents a good case study. Rachelle had wrestled with her weight throughout her life and had tried to eat healthy by the conventional definition. When she began her LCHF/ketogenic program, she weighed 380 pounds. She would eventually lose 150 pounds, documenting it all on her Instagram account and settling in at 230 pounds. Her husband lost seventy-five pounds eating as Rachelle did. Her teenage daughter dropped fifty pounds. They came to believe, as do I, that if they now changed how they were eating, if they went back to eating even “healthy” carbohydrates—say, from whole grains or from beans and legumes (and, of course, cut back on the butter and bacon)—they would eventually gain the weight back. They consider this to be a way of eating for life, out of necessity. Are they healthier for doing so?

When I first wrote about (and still barely understood) the paradox presented by LCHF/ketogenic eating to the medical community in my New York Times Magazine cover story in July 2002, I admitted to trying the Atkins diet as an experiment and effortlessly losing twenty-five pounds by doing so. Those were twenty-five pounds I had essentially been trying to lose every day of my life since I’d hit my thirties, despite an addiction to exercise and the better part of a decade—the 1990s—of low-fat, mostly plant, “healthy” eating. I avoided avocados and peanut butter because they were high in fat, and I thought of red meat, particularly a steak or bacon, as an agent of premature death. I ate only the whites of eggs. Having failed to make noticeable headway, I had come to accept those excess pounds as an inescapable fact of my life. When I changed how I ate—and not, as far as I could tell, how much—those pounds disappeared.

At the time I was simply fascinated by the experience, feeling as though a switch had been flipped (which I now understand to be the case). But I also acknowledged in the article something that remained true for years afterward: my anxiety. Every morning when I sat down to my breakfast of eggs—with the yolks—and sausage or bacon, I wondered whether, how, and when it was going to kill me. I didn’t worry about any lack of green vegetables in my diet because I was eating more of them than ever. I worried about the fat and the red and processed meat. Despite all my reporting and my journalistic skepticism, my thoughts on the nature of a healthy diet were a product of the nutritional belief system that had become firmly ensconced as I was becoming an adult, the theories or, technically, hypotheses of what constituted a healthy diet. Bacon, sausage, eggs (yolks, anyway), red meat, and copious butter were not included.

“After 20 years steeped in a low-fat paradigm,” I wrote in that 2002 article, “I find it hard to see the nutritional world any other way. I have learned that low-fat diets fail in clinical trials and in real life, and they certainly have failed in my life. I have read the papers suggesting that 20 years of low-fat recommendations have not managed to lower the incidence of heart disease in this country, and may have led instead to the steep increase in obesity and Type 2 diabetes. I have interviewed researchers whose computer models have calculated that cutting back on the saturated fats in my diet to the levels recommended by the American Heart Association would not add more than a few months to my life, if that. I have even lost considerable weight with relative ease by giving up carbohydrates on my test diet, and yet I can look down at my eggs and sausage and still imagine the imminent onset of heart disease and obesity, the latter assuredly to be caused by some bizarre rebound phenomena the likes of which science has not yet begun to describe.”

Little meaningful evidence existed then, as I also noted, to ease these anxieties. A critical fact in this debate, indeed, the reason it continues to exist at all, is that we still have precious little evidence. What we want to know, after all, is whether LCHF/ketogenic eating—rather than, say, a Mediterranean diet or a very-low-fat diet or a vegetarian diet—will not only lead to more or less weight loss but will kill us prematurely.

To establish this knowledge in any reliable manner, we have to do experiments, the finest of which known to medicine are randomized controlled trials. In concept, they’re simple: Choose two groups of people at random; have one group eat one diet and the other group eat another diet; see what happens. Which group of randomly chosen individuals lives longer, and which has more or less disease? The catch is that it takes decades for these chronic diseases to establish themselves and for us to find out how long we live, and the differences between groups in what is technically known as morbidity (sickness) and mortality (age at death) may be subtle. For these reasons the kinds of experiments that shed light on this question of which are the healthiest eating patterns (for all or some subset of the population) require at least a few tens of thousands of subjects, and then they have to proceed for long enough—perhaps decades—to reliably determine if the subjects are getting more or less heart disease, dying sooner or later, in a way that’s clearly the result of what they’re eating.

Medicine is a science, so the concept of hypothesis and test still holds, and these clinical trials are the tests of the relevant hypotheses about diet and health. To do these trials correctly, though, would cost a huge amount of money. Many such trials would have to be done, some just to see if the others got it right, and they are almost unimaginably challenging. The concept is simple, the reality anything but. They can fail in so many different ways that some prominent public health authorities have recently taken to arguing that they shouldn’t be done. They argue that we should trust what they think they know about the nature of a healthy diet, and that this knowledge should apply to all of us, whether we are predisposed to get fat on such a diet or not. I respectfully disagree.

Absent this kind of reliable evidence, we can speculate on whether a diet is likely to kill us prematurely or is healthier than some other way of eating (i.e., we’ll live longer and stay healthy longer) by applying certain rules, but we must always acknowledge that we are guessing. For instance, eating foods that humans have been eating for thousands or hundreds of thousands of years, and in the form in which these foods were originally eaten, is likely to have fewer risks and so to be more benign than eating foods that are relatively new to human diets or processed in a way that is relatively new. This argument was made famously in the context of guidelines for public health by the British epidemiologist Geoffrey Rose in 1981. If the goal is to prevent disease, Rose observed, which is what public health guidelines and recommendations are intended to do, then the only acceptable measures of prevention are those that remove what Rose called “unnatural factors” and restore “‘biological normality’—that is … the conditions to which presumably we are genetically adapted.”

Remove and unnatural are the operative words. Removing something unnatural implies that we’re getting rid of something that is likely to be harmful. Take, for example, the advice that we shouldn’t smoke cigarettes. We have very little reason to think that removing cigarettes from our lives will do physical harm, because there’s nothing “natural” about smoking cigarettes. They’re a relatively new addition to the human experience.

If we’re adding something that is new to our diets, hence “unnatural,” thinking it will make us healthier, we’re guessing that the benefits outweigh the harms. There are likely to be both. Now we have to treat that new thing just as we would a drug that we think is good for us and that we’re supposed to take for life (say, a drug that lowers our cholesterol levels or our blood pressure). How do we know it’s safe, even if it seems to be beneficial in the short term?

All this is a judgment call and depends on perspective. One reason all diet authorities now agree more or less that we should cut back on our consumption of highly processed grains (white flour) and sugars (sucrose and high-fructose syrups) is that these refined grains and sweet refined sugars are relatively new to human diets. We assume that no harm can come from not eating them and perhaps quite a bit of good. Eating or drinking sugar, for instance, might have benefits in the short run—the rush of energy might fuel athletic performance or allow us to perform better on a test in school—but that doesn’t tell us whether the long-term consumption is to our detriment. Health authorities have mostly come to believe it is.

The idea that we should all eat tubers, like sweet potatoes, as proponents of the paleo diet suggest, is based on the assumption that our hunter-gatherer ancestors ate them for a couple of million years, implying that they are safe. Some paleo advocates take this assumption a step further and propose that we’d be healthier eating tubers than not. But they’re only guessing. It may be true, or maybe it’s true for some of us but not for others. We have no way to tell, short of doing one of those incredibly expensive, unimaginably challenging clinical trials.

When we’re told that we should consume more omega-3 fatty acids (a kind of polyunsaturated fat in fish oil and flaxseeds, among other sources) and fewer omega-6s (another kind of fat), it is based on the assumption that this shift in the balance of fats we ingest will make us healthier and live longer. In this case, researchers have done a few long-term trials to test the assumption, and the results have been mixed: Maybe they do, maybe they don’t. Nonetheless, we continue to hear that we should eat more omega-3s and fewer omega-6s because we currently consume a lot of omega-6s in our diets (from corn and soybean oil, conspicuously, and from eating animals that have been raised on corn and soybeans), and that’s considered unnatural. By this thinking, we are not genetically adapted to have such a high percentage of fats from omega-6s. It might be the correct assumption, but we don’t know.

One reason I and others promote the idea that eating saturated fat from animal products is most likely benign is that we’ve been consuming these fats as a species for as long as humans have been a species. The evidence isn’t compelling enough to convince us that this assumption is likely to be wrong. We may or may not have been consuming as much of these saturated fats, but we can presume we are genetically adapted to eating them. They are “vintage fats,” to use a term I first saw employed by Jennifer Calihan and Adele Hite, a registered dietitian, in their book Dinner Plans: Easy Vintage Meals, and they place in this category some vegetable oils—from olives, peanuts, sesame, avocado, and coconuts—and all animal fats. Calihan and Hite contrast them to “modern fats”—margarine; shortenings of any kind; and industrially processed oils from rapeseeds (canola oil), corn, soy, cottonseed, grapeseed, and safflower. Vintage fats, by this thinking, can be trusted to be benign. Modern fats, not so much.

This is also why we believe that meat from grass-fed, pasture-raised animals is healthier for us than that from grain-fed, factory-farmed animals: The fat content of this meat will be more closely aligned to that of the animals our ancestors ate for the past million or so years. It will be more natural. (Perhaps more important, it is a way of eating that does not support the cruel and inhumane treatment that is common to factory-farming operations.) New foods or old foods in unnatural forms are more likely to be harmful than those foods to which we are presumably genetically adapted.

This belief also, ultimately, underpins the conventional thinking that a healthy diet includes ancient grains—quinoa, for instance, or couscous—or brown rice and whole grains rather than highly refined grains like white rice and white flour. Even without knowing any mechanisms for why this might be true—gluten content or glycemic index (how quickly or slowly the glucose hits our bloodstream)—and absent, once again, any meaningful experimental evidence, the assumption is that our ancestors ate these grains for maybe a few thousand years, in the form in which we’re eating them. Hence they are likely to be benign, at least for people who are predisposed to be lean and can tolerate a higher carbohydrate content in their diet.

The caveat, of course, is that definitions of natural and unnatural can depend on the perspective of the nutrition authority. When we’re parsing the latest diet advice, we have to make judgments about how the proponents of the advice define natural and unnatural. Are ancient grains natural because some populations (but not all) have consumed them for thousands of years, more or less since the invention of agriculture? Or are all grains unnatural because we’ve been consuming them for only a few thousand years, since the invention of agriculture? Are we safe adding something presumably natural to the diet (ancient grains or tubers or omega-3 fatty acids), or is it a better idea to remove only the unnatural elements (refined grains, sugars, some of the omega-6 fatty acids)? I think the latter is the safer bet. But this, too, gets complicated because as we remove sources of energy from the diet, we have to replace them.

What might be the most complicating factor in how we think about how we eat is the influence of the latest news, the latest media report on the latest study that is making a claim sufficiently interesting to constitute news. By definition that is what’s new, which means it either adds significantly to the conventional wisdom or contradicts it or speaks to whatever diets have indeed become particularly faddish these days. (After my 2002 article suggesting that Atkins was right all along, I was accused of taking a contrarian perspective not because I really thought the evidence supported it, but because it was more newsworthy and would earn me a large book contract. Reporting that the conventional wisdom was indeed right would not. The editors of The New York Times Magazine might not have even published such a version because it wouldn’t have been news.)

The best reason to ignore the latest study results, the latest media reports suggesting we should eat this and not that, is that the interpretation of these latest studies is most likely wrong. A discussion highlighted in the media these days is what science journalists refer to as the “reproducibility crisis”—some large proportion of the studies that are published either get the wrong results or are interpreted incorrectly or maybe both. If we include those studies that are just meaningless, only one in ten or one in twenty studies (that make the press or appear on your home page) may be worth our notice. This percentage may be even smaller in nutrition and lifestyle research, in which the researchers are so poorly trained and the research so challenging to do. This is one reason the committees that decide on Nobel Prizes traditionally wait decades before acknowledging work to be prizeworthy. Far more often than not, if we wait long enough, we’ll see other studies being published making the opposite claims of whatever we’re reading today. We won’t know which is right until long after their publication. Perhaps never.

“Trying to determine what is going on in the world by reading newspapers,” as a famously clever screenwriter/director/journalist named Ben Hecht once wrote, “is like trying to tell the time by watching the second hand of a clock.” The same is true of research and science. Trying to tell what’s true by looking at the latest articles published in a journal—and particularly in nutrition—is another fool’s game. The best idea is to attend little to the latest research and focus instead on the long-term trends, the accumulation of studies (one hopes, interpreted without bias), even if the long-term trends rarely, if ever, appear in the news.

Preview or purchase the book here.

Published December 29, 2020 by Alfred A. Knopf, an imprint of The Knopf Doubleday Publishing Group, a division of Penguin Random House LLC. Copyright © 2020 by Gary Taubes.