The statement “Mic drop moment!” was a social media reply in a thread where the original poster had provided an eloquent rebuttal to a nutrition article with which they’d disagreed. The rebuttal was well thought out and valid, and I didn’t wholly disagree with it. But it certainly didn’t mark the end of the debate on the issue – hardly a mic-drop moment. Dietary advice is ever changing, and this is a good thing. Science isn’t finite; it’s a process. Humans like convenient, robust answers to highly complex questions, and in few fields is this more apparent than in nutrition and dietetics. The nuance of ideas can make it seem like nutritionists are always changing their minds. When that variety stems from different bodies of evidence, or from different interpretations of the same evidence, it should be celebrated. Unfortunately, however, opinions are too often based on little more than emotionally charged conjecture deployed to assert a nutritional ideology.
So, how does one distinguish the good information from the bad? The credible from the mumbo jumbo? The authentic from the quackery? While science does indeed attempt to provide answers, how many of us are motivated to trawl through hundreds of papers on a topic and then listen to several experts’ interpretations of the ever-changing data? I highlighted the importance of the scientific method in Nutrition and Enlightenment Values. In it, I pointed out that, with biases abounding, the scientific process is slow, fallible, prone to corruption and poorly understood. Worse: even if we were to follow the scientific consensus, this too might be wrong and itself subject to bias. So, is there a convenient technique that might help us navigate the deathly labyrinths of misinformation? It just so happens that there might be: a tool that’s been around for a quarter of a millennium.
Bayes’ Theorem
While rifling through the papers of a recently deceased friend, Dr Richard Price came across an essay featuring a jumble of mathematical symbols and philosophical reflections. The paper’s author had been asking how a person newly brought into the world, having seen his first, second and third sunrises, should reason about the probability that the sun will rise every day. Despite witnessing daily sunrises, the new arrival should not conclude that the sun will always rise; rather, he should remain somewhat cautious since, even after a lifetime of sunrises, the sun’s rising is never a certainty. The essayist in question was the English statistician, philosopher and Presbyterian minister Thomas Bayes (1701-1761). Bayes reflected that nothing should be certain or taken for granted. Like Bayes, Price was a church minister interested in how new scientific discoveries could be squared with the miracles in the Bible, and this led him to publish the piece posthumously in 1763 under the title “An Essay Towards Solving a Problem in the Doctrine of Chances” [1]. Later, the insights of the French mathematician Pierre-Simon, Marquis de Laplace were incorporated, and Bayes’ theorem was born:
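P(A|B) = P(B|A) × P(A) / P(B)

In words: the probability of claim A given evidence B equals the probability of observing B if A were true, multiplied by the prior probability of A, divided by the overall probability of observing B.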
Bayes’ theorem is a mathematical formula for determining conditional probability: the likelihood of something happening based on previous knowledge of the outcome arising under similar circumstances. The probability of the event occurring before new data are collected is known as the prior, and it is the best rational assessment of the likelihood of a particular outcome based on current knowledge. That probability is then revised in light of any new information by updating the prior using Bayes’ theorem, giving the posterior probability. In statistical lingo, the posterior probability is the probability of event A given event B; it’s the probability of a hypothesis being true given the evidence observed.
Consider the following puzzle: Ten percent of diners in a restaurant have accidentally ingested toxic substance T. Using a particular blood test, 60 percent of those who have consumed T will test positive. However, 20 percent of subjects who haven’t consumed substance T will also test positive. What is the probability of having consumed T given a positive test result?
Intuitively, many people incorrectly state the answer to be 60 percent, as they fail to account for the broader context. Let’s break it down: out of 100 people, ten have ingested substance T, and the test will correctly identify six of them. However, 90 subjects haven’t consumed T, yet 20 percent of these receive a positive test result, which means that 18 will be incorrectly flagged as having had T. Therefore, 24 people would get a positive result, with only six of them actually having consumed T. The answer is therefore six out of 24, or 25 percent. Conversely, a negative result gives about a 95 percent probability that you’re clear, as only four of the 76 negatives will have consumed T.
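For those who prefer code to arithmetic, here is a minimal Python sketch of the same calculation (the variable names are mine, chosen purely for illustration):

```python
# Bayes' theorem applied to the toxin test puzzle:
# P(T | positive) = P(positive | T) * P(T) / P(positive)
prior = 0.10        # P(T): 10% of diners ingested substance T
sensitivity = 0.60  # P(positive | T): true positive rate
false_pos = 0.20    # P(positive | not T): false positive rate

# Total probability of a positive result (true positives + false positives)
p_positive = sensitivity * prior + false_pos * (1 - prior)  # 0.24

posterior = sensitivity * prior / p_positive
print(f"P(T | positive) = {posterior:.2f}")  # 0.25

# And the chance you're clear given a negative result
p_negative = 1 - p_positive  # 0.76
p_clear = (1 - false_pos) * (1 - prior) / p_negative
print(f"P(clear | negative) = {p_clear:.2f}")  # ~0.95
```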
Bayesian Reasoning
Confused? If, like me, you’re no mathematician, the idea of working through a complicated equation whenever you want to evaluate the credence of a statement seems both daunting and, well, silly. The good news is you don’t have to. What matters more than Bayes’ theorem itself is its core insight: we gradually get closer to the truth by constantly updating our knowledge in proportion to the weight of the evidence. Nothing in life is certain – no matter how strong the evidence – so, by keeping our minds open, we can adjust and adapt as new evidence becomes available. This style of thinking is known as Bayesian reasoning, and it helps us work out how likely something is given what we already know and any new evidence. A new belief depends on two things: your prior belief (along with all the knowledge that informed it) and the value of the new information. Viewing knowledge from a Bayesian perspective means updating one’s beliefs to align better with the available evidence.
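In practice, this updating has a simple mechanical form: posterior odds equal prior odds multiplied by a likelihood ratio, i.e. how much more likely the new observation is if the claim is true than if it is false. A toy sketch in Python (the likelihood ratios here are invented purely for illustration):

```python
def update(credence, likelihood_ratio):
    """One Bayesian update: posterior odds = prior odds * likelihood ratio."""
    prior_odds = credence / (1 - credence)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)

credence = 0.5  # start undecided about a claim
# Each ratio asks: how much more likely is this observation if the claim is true?
for lr in [2.0, 0.5, 3.0, 3.0]:  # hypothetical pieces of evidence
    credence = update(credence, lr)
    print(f"credence: {credence:.2f}")  # 0.67, 0.50, 0.75, 0.90
```

Notice that weak or ambiguous evidence (a ratio near 1) barely moves the needle, while evidence that would be surprising under the rival claim shifts credence substantially.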
Rarely do we have statistics available that allow us to calculate odds precisely. In their 2023 essay Bayesian Balance, Ed Gibney and Zafir Ivanov propose a modified version of “Reason’s Fulcrum”* [2]. Consider a plank balancing on a fulcrum positioned somewhere in the middle, like a seesaw. By adjusting the position of the fulcrum, you can evenly balance two objects of different weights placed at each end of the beam. Rather than two objects, imagine placing on each end the evidence for a claim and for its counterclaim: the ratio of how likely the evidence is to be observed under each, capturing both the absolute and relative qualities of that evidence. Once the evidence is weighed, the position of the fulcrum gives us our credence in the two competing claims, and the fulcrum itself represents our willingness to have our minds changed. Using the previous example, we have a beam with 24 units of evidence, and, to balance it evenly, the fulcrum would sit a quarter of the way along, at the point where an 18-unit weight is level with a six-unit one (see Figure 1 [3]). The beam is balanced on the fulcrum, yet the stronger odds are on one side. Reason’s Fulcrum also shows us that there can never be 100 percent certainty of anything; otherwise, rather than a beam, there’d be a ramp (Figure 2 [4]). The result? A breakdown of reasonable epistemology. Gibney and Ivanov propose that the beam-fulcrum analogy provides a tangible way of treating evidence in Bayesian reasoning to arrive at credence.
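The lever law makes the analogy concrete: weights w1 and w2 balance when w1 × d1 = w2 × d2, so on a beam of length one the fulcrum sits at w2 / (w1 + w2) from the heavier end. A quick sketch reusing the numbers from the toxin example (my own illustration, not taken from Gibney and Ivanov’s essay):

```python
# Fulcrum position for an 18-unit weight against a 6-unit weight,
# per the lever law w1 * d1 = w2 * d2 with d1 + d2 = 1.
w_heavy, w_light = 18, 6
fulcrum = w_light / (w_heavy + w_light)
print(f"fulcrum sits {fulcrum:.2f} of the way from the heavy end")  # 0.25
```

That 0.25 is exactly the 25 percent credence calculated earlier: the fulcrum’s position is the posterior probability.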
Bayesian Biases
Although Bayesian reasoning is a tool for minimising our biases, those biases are impossible to mitigate completely. Consequently, critics of Bayesian reasoning point out that Bayesians often fail to mention their sources of knowledge or the justification behind their credences. From one perspective, these criticisms are fair, as there are problems with this style of thinking. For example, one could cherry-pick any evidence that aligns with one’s beliefs, or one could attribute an arbitrary amount of weight to a piece of evidence and then use that to add credence to the belief. However, acting in this way would merely be self-manipulation, and anyone who genuinely seeks the truth will be mindful of these behaviours.
Another criticism is that we might have encountered only a small amount of evidence to back up our beliefs, in which case those beliefs will be fragile. Furthermore, not all evidence deserves equal credence. If, for example, 100 gym-goers claim they managed to add 5kg to their bench press in a month just by upping their protein intake by 20g per day, we would treat that body of evidence differently than if a randomised controlled trial of 20 experienced male weightlifters on a set training regimen and a controlled food intake showed a mean 2.2kg increase in weight lifted compared to controls. There’s a difference between the quality of evidence and the quantity of evidence.
Be Reasonable!
We can strengthen our beliefs through slow, small steps until eventually we’re carried along by a growing list of reasons to trust the science. Crucially, Bayesian reasoning is convenient. To use it effectively, we need only adjust our credences, without having to delve into a detailed examination of the evidence or perform elaborate calculations. We simply keep weighing the evidence and paying attention to which kinds of evidence deserve more weight, while remembering that observations can sometimes be misleading. A useful guiding principle is to continually ask oneself: “Could my evidence be observed even if I’m wrong?” Doing so, Gibney and Ivanov assert, fosters a properly sceptical mindset, freeing us from the truth trap and enabling us to move forward, proportioning our credences as wisely as the evidence allows [5].
Our worldview influences how we perceive our surroundings. This top-down processing plays a role in our perception, and our beliefs shape both how we respond to evidence and what we expect to see. Bayesian reasoning is a tool to help us increase (or decrease) our confidence in our prior position. Although imperfect, it nudges us closer to the truth. To accept this we need two things: a willingness to change our minds and a refusal of absolute certainty. Any action resulting from a belief is reasonable behaviour until it isn’t anymore; indeed, it may never be demonstrated to be unreasonable. Unfortunately, people crave certainty, and they seek it out. Communicators who project certainty when delivering their message have an advantage: by gaining more followers, their message reaches more widely and, consequently, is believed by a greater number of people. By thinking like a Bayesian, you can notice these nuances and learn to challenge what you hear. You’ll discover that the only mic-drop moment is realising that there is no mic-drop moment.
* Edit 08/07/2024: Reason’s Fulcrum was originally devised by Andy Norman in his 2021 book, Mental Immunity.
References:
1. Bayes, T. (1763). ‘An essay towards solving a problem in the doctrine of chances.’ By the late Rev. Mr. Bayes, F.R.S.; communicated by Mr. Price, in a letter to John Canton, A.M., F.R.S. Philosophical Transactions of the Royal Society of London, 53, 370-418.
2. Gibney, E. and Ivanov, Z. (2023) ‘Bayesian Balance: How a Tool for Bayesian Thinking Can Guide Us Between Relativism and the Truth Trap’. Skeptic, 28(4), 74-9.
3. Ibid.
4. Ibid.
5. Ibid.
Nicely done, and I'm pleased that our article caught your attention.
Andy Norman came up with Reason's Fulcrum a few years back; I encountered it in his 2021 book 'Mental Immunity'. Ed Gibney and I showed how a moveable fulcrum gives us a working model for Bayes.
Cheers
Zafir
What an interesting piece combining principles from philosophy, physics and nutrition science! I've not come across "Reason's Fulcrum" before – thank you for explaining this fascinating concept! The figures really helped me understand it.
Are there any other sciences or humanities that experience the same volatility that nutrition science does? Is it the case that once one delves into any subject deeply enough, one will be exposed to fluctuating views, or is there something particularly unique to the field of nutrition science (disregarding the influence of the media and celebrities endorsing diets)? If nutrition science were treated like any other science or humanity (i.e. without social media influencers hyping supplements and making unsubstantiated claims), is there anything specific to this field that makes it particularly susceptible to these fluctuations? Thank you!