An understanding of the validity of science and scientific criticism, whether about cosmology, or climatology, or physiology and the efficacy of CrossFit, requires knowledge of the terms conjecture, hypothesis, theory, and law.
Be aware, though, that consensus on the meaning of these terms is fading. … In common use, scientists speak at once of probability theory and the laws of probability. Scientifically credentialed individuals advance unvalidated models by proclaiming a consensus. It’s an infection like university grade inflation. Nevertheless, here is a guideline that will improve your science literacy; give you a framework for evaluating all variety of supposedly objective or scientific claims, arguments, and models; and hold you in good stead with real scientists.
Science is all about models of the real world, whether natural (basic science) or manmade (applied science, or technology). These models are not discovered in nature, for nature has no numbers, no coordinate systems, no parameters, no equations, no logic, no predictions, neither linearity nor non-linearity, nor many of the other attributes of science. Models are man’s creations, written in the languages of science: natural language, logic, and mathematics. They are built upon the structure of a specified factual domain. The models are generally appreciated, if not actually graded, in four levels:
A conjecture is an incomplete model, or an analogy to another domain. Here are some examples of candidates for the designation:
- “Ephedrine enhances fitness.”
- “The cosmological red shift is caused by light losing energy as it travels through space.” (This is the “tired-light conjecture.”)
- “The laws of physics are constant in time and space throughout the universe.” (This one is known in geology as “uniformitarianism.”)
- “Species evolve to superior states.”
- “A carcinogen to one species will necessarily be carcinogenic to another.”
A hypothesis is a model based on all data in its specified domain, with no counterexample, and incorporating a novel prediction yet to be validated by facts. Candidates:
- “Mental aging can be delayed by applying the ‘use it or lose it’ dictum.”
- “The red shift of light is a Doppler shift.”
A theory is a hypothesis with at least one nontrivial validating datum. Candidates:
- Relativity.
- Big Bang cosmology.
- Evolution.
A law is a theory that has received validation in all possible ramifications, and to known levels of accuracy. Candidates:
- Newtonian mechanics.
- Gravity.
- Henry’s Law.
- The laws of thermodynamics.
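The four-level grading above is effectively a decision procedure, and it can be sketched as a toy classifier. This is only an illustration: the `Model` fields and the `grade` function are hypothetical names paraphrasing the article’s criteria, not anything from the original text.

```python
# Toy sketch of the article's four-level grading scheme.
# Field and function names are illustrative paraphrases of the criteria.

from dataclasses import dataclass

@dataclass
class Model:
    covers_all_domain_data: bool    # consistent with every known fact in its domain
    has_counterexample: bool        # at least one contradicting datum exists
    makes_novel_prediction: bool    # predicts something not already in the data
    validated_predictions: int      # nontrivial predictions confirmed by facts
    validated_in_all_ramifications: bool  # confirmed, to known accuracy, everywhere tested

def grade(m: Model) -> str:
    """Assign the article's grade: conjecture, hypothesis, theory, or law."""
    # Incomplete model, or one with a counterexample: still a conjecture.
    if m.has_counterexample or not m.covers_all_domain_data or not m.makes_novel_prediction:
        return "conjecture"
    # All data covered, novel prediction made, but nothing yet confirmed.
    if m.validated_predictions == 0:
        return "hypothesis"
    # At least one nontrivial validating datum, but not validated everywhere.
    if not m.validated_in_all_ramifications:
        return "theory"
    return "law"

# A model that fits all known data and predicts, but awaits confirmation:
print(grade(Model(True, False, True, 0, False)))  # hypothesis
```

Note that under this scheme a grade can only be earned by evidence, never by consensus: the function takes no argument for how many people believe the model.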
Each of these candidates can stir arguments worthy of a paper, if not a book, and no model is secure in its position. Weak scientists will strengthen their beliefs and stances by promoting their models while demoting the competition. Some familiar models fail even to be ranked because they are beyond science, usually for want of facts. Candidates:
- Creation science or notions of “intelligent design.”
- Astrology.
- Parapsychology.
- UFO-ology.
Jeff Glassman has a B.S., M.S., and Ph.D. from the UCLA Engineering Department of Systems Science, specializing in electronics, applied mathematics, applied physics, and communication and information theory. For more than half of his three decades at Hughes Aircraft Company, he was division chief scientist for Missile Development and Microelectronics Systems. Since retiring from Hughes, he has consulted in various high-tech fields. He is the author of the book “Evolution in Science: California Dreaming to American Awakening” (1992).
He has also worked as a bush pilot for Alaska Helicopters and was a naval aviator in helicopters and single- and multiengine aircraft, an instructor pilot, and a maintenance test pilot, making lieutenant commander in the reserves before resigning with a total of 12 years of service.
Comments on Conjecture, Hypothesis, Theory, Law: The Basis of Rational Argument
These four levels of grading are important to know even in the context of the training environment. Historically, individuals claiming to be experts in the field have tried to pull the strength and conditioning world toward their “Laws” of training. Not surprisingly, nutrition "experts" have done the same. Understanding the validity of these "expert" claims starts with an understanding of the four terms in the article’s title. This knowledge can “set a Trainer free,” so to speak, and open up further training and/or nutritional possibilities for their clients.
As usual, it reminds us that we should always know the definition of a word before we decide to use it. Far too often we say “theory” or “hypothesis” without knowing the difference, or even how to move from one to the other.
In the field of training, we all will benefit from this article.
Well-written article. Words certainly should have specific meanings when describing science and methods used for scientific purposes. Looking within, where does CrossFit methodology fall on the spectrum? Certainly there is data from years of training and teaching through affiliates and seminars alike. Using the “pruning” method described in the article, well-developed studies and observation could be done in CrossFit with great experimental design. There are many forms of data collection and assessment, so the question comes down to: is consensus for determining methods (not results) the best way? Consensus on results inherently places bias on the result itself, but consensus on methodology seems like the opportunity. And finally, once these questions are answered, by definition, CrossFit will be a transparent example of what corporations in science should look like.
Before I fully understood the differences in conjecture, hypothesis, theory, and law, I was easily swept away by the unvalidated models. Now, I know better. I am teaching my children that the strongest, loudest voice with the deepest conviction is often the furthest from the truth. In the meantime, we must look at the available data and think for ourselves. I will no longer unquestioningly trust those who rush to connect the data dots.
Dr. Glassman pretty much traces the arc of my career in this article. For my first book, Nobel Dreams, I lived with physicists at CERN as they “discovered” non-existent elementary particles. The book ends with me sitting in the CERN cafeteria with a famous Spanish theorist who is wondering whether he should get on the string “theory” bandwagon. (I never found out if he did.) My second book, Bad Science, was on cold fusion, and I was mentored by some of the by-all-accounts great experimental scientists of the era. Several of those (Steve Koonin, then provost of Caltech, now at NYU, and William Happer of Princeton), along with the theorist Freeman Dyson, went on to become noted critics of climate change science. They would agree with much of what Dr. Glassman says in this article. (Although they might not go so far as to use the term “quack” to describe believers. I’m sensitive about that, too, as I’ve been on the receiving end of the term myself, and it, too, has a very specific meaning.)
The physicists have a term to describe the kind of sloppy research and, worse, sloppy thinking that Dr. Glassman is describing. They call it “pathological science,” or the science “of things that aren’t so.” The term was coined by the Nobel laureate physicist Irving Langmuir back in the fifties and described in a famous lecture at Caltech, the transcript of which has circulated through the physics community ever since. Now, of course, it’s available for everyone to read. I don’t know if Dr. Glassman was aware of this terminology, but his use of the term infection – “it’s an infection like university grade inflation” – suggests he is thinking in the same terms. Maybe because I see the world from this perspective of “pathological science,” I, too, have come to see it everywhere. I think the physics community is still able to fight it off because, despite the sloppiness of terminology like string theory – and don’t get me started on dark matter – physicists recognize that they haven’t got anything until the predictions of their models or hypotheses are experimentally confirmed. They may be desperate that this hasn’t happened yet (and may never happen), but they understand the rules of the game. In effect, working in a discipline in which you can experimentally test hypotheses, and do it rigorously, allows researchers to continue to fight off the infection that is pathological science.
The problem is in the softer sciences, which include much of medicine and all of nutrition and exercise physiology. In these fields, the hypotheses of interest are exceedingly difficult to test, and it can take decades to do it. The researchers no longer understand that hypotheses that haven’t survived the rigorous trial of experimental tests can’t be believed, let alone embraced or their implications promulgated as public health advice. These researchers were never taught the kind of strict, rigorous thinking that drove science through the late decades of the last century. (As I’ve just written in my latest book, they tend to be medical doctors, not scientists – the two are very different things.) Tell them a model is not a hypothesis or a theory (and certainly not a law), or that correlation does not imply causation, and they’ll begrudgingly agree and then respond that this is the best they’ve got, so they’re going to treat these ideas as though they are. After all, people are dying out there, and so they have to jump quickly to conclusions. With that kind of pathological thinking, the infection spreads ever deeper. I could write books about this, but I think I already have.
Langmuir’s talk can be read here:
http://galileo.phys.virginia.edu/~rjh2j/misc/Langmuir.pdf
I love this. The original 2007 article was something I would reference regularly while fielding questions at CrossFit Level 1 Seminars. People have a tendency to ask a question or make a statement which contains some little piece of information which supports their belief. They typically present this little bit of info as if it is a "law", but in reality it is "conjecture". We would all be well served to familiarize ourselves with these terms and their meanings.
Love this logical progression of thought, and its concise simplicity. Smart educational piece; look forward to more of this quality publication.
Very important and relevant topic. As physicians, we are expected to adhere to guidelines imposed by our national academies. They might seem like good ideas, agreed upon by a roomful of academic experts, but that doesn’t mean the evidence is strong. Physicians are expected to recommend these actions to hundreds and thousands of patients. If we don’t, we are in jeopardy of not following the standard of care, with such negative consequences as license review or a malpractice suit. And so the recommendations manifest as “fact,” while in reality they may be conjecture or an unproven hypothesis.
Great point, Shaka. Even if a large number of experts agree that something is a good idea, their consensus means nothing absent true evidence. Yet the weight of "expert" opinion often gives conjecture, hypothesis and theory the status of law if no one blows the whistle. A prime example from the fitness world: classical periodization. The NSCA has promoted this periodization as the best programming method for years, and yet no evidence proves that it's without doubt better than other methods. Lon Kilgore wrote about this in "Periodization: Period or Question Mark?" A great quote from the article: "A responsible professional should demand more from a professional society than to ubiquitously adopt and disseminate opinion and conjecture as undisputed fact." Here's the URL for the article: https://journal.crossfit.com/article/periodization-period-or-question-mark-2
I'd recommend you read this one twice. It's worth the investment.
(Jeff) Glassman’s article is a push for us to more rigorously assess the level of evidence supporting our scientific beliefs, and to understand the implications of that assessment.
Glassman clarifies that a single factor separates conjecture from hypothesis from theory from law - the strength and consistency of the evidence supporting a given model. An abstractly satisfying conjecture with contradictory data is still a conjecture; an unvalidated hypothesis cannot become a theory. And so on.
If we believe a particular conjecture is “right”, it’s the data that will elevate it. If it lacks support, we must show how the behavior the model predicts occurs in reality; if some data contradicts it, we must edit the model until the contradictions are resolved. If we can’t do either, we’re stuck with a conjecture, and may do best to treat it with caution and explore alternatives.
Crucially, consensus is irrelevant. As Glassman argues via Smolin, string theory is in reality “string conjecture” - a model lacking validating observations that has been given unearned significance due to the number of people who believe it is true. There’s a risk to this - if we elevate a conjecture too quickly, we crowd out other conjectures, and increase the risk of investing our time and energy, or taking actions, based on an abstract shorthand that fails to predict reality.
The applications to health are obvious. It doesn’t matter whether 1% or 99% of a field believe a model is correct, only whether the model’s predictions match our observations. Is our preferred diet or program or technique consistent with all available data? If we support a model - say, the theoretical underpinning of a ketogenic diet, or a certain recovery technique - we elevate it by generating quantitative evidence (measurements) to support it and resolving any contradictions. We ought to be deeply skeptical of any model that is supported by consensus, and especially of any model that became stronger over time without any new evidence being produced, unless that consensus directly follows from uniquely clear and comprehensive data.
Glassman’s review of anthropogenic global warming (AGW) on these grounds is instructive. As he argues, AGW is strictly a conjecture, and the fact that 99% of scientists believe it is true in no way excuses the inconsistencies in its evidence base. Whether we should act on it is another question entirely. We can acknowledge AGW is a conjecture and still, in a Pascalian wager, believe it is best to act as if it were true. We could, in the 1970s, acknowledge the model linking saturated fat to heart disease is a conjecture and still believe it is best to recommend all Americans change their diets. In either case, we’re gambling that the theoretical benefits of complying with the model exceed the theoretical costs. We can make that bet. But we must acknowledge it’s a bet, that we might be wrong, and that both the benefits and costs in reality may be entirely different from what we anticipated.
Sometimes we’ll have to act on conjectures. But we should at least recognize exactly what we’re doing, and why, and not let those actions cloud our understanding of just how weakly supported those beliefs may be.