★ Essential

What Is Risk? The Intuitive Primer

Before formal probability, you need an intuitive vocabulary for risk. This unit covers absolute risk, baselines, and why your gut feeling about risk is systematically, predictably wrong — and how that gets exploited.

Time: 12 minutes

Opening Hook

Picture a room with ten people in it. One of them will develop a particular condition over the next decade. You can feel the weight of that. One face out of ten. Bad odds.

Now picture a different room. Ten thousand people. One of them will develop the same condition. The number is the same, in a sense — one person, one outcome. But it feels completely different. A room of ten thousand people is an aircraft hangar. The one unlucky person is invisible in the crowd.

That gap between 1 in 10 and 1 in 10,000 — that felt sense of scale — is what this unit is about. Because most of the time, when someone presents you with a risk figure, they have decided in advance which room they are going to show you. And they are not going to show you both.

The Concept

Risk, at its simplest, is the probability that a specific bad thing happens to you. Not to someone else. Not in theory. To you, under specific circumstances, over a specific period of time.

That last part matters. A risk figure without those three anchors — who, what, how long — is not a risk figure. It is noise.

The number you actually need is called the absolute risk. Absolute risk is your personal probability of the outcome. It is a frequency: out of everyone like you, in your situation, how many will experience this event? If 3 in every 100 people in your demographic develop condition X within five years, your absolute risk is 3 percent, or 3 in 100. That is a concrete number. It has a denominator. You can think with it.

To know whether that number is alarming or reassuring, you also need the baseline. The baseline is the starting point: what is the background rate of this thing in people like you, before any intervention or exposure? If the baseline risk of a condition is 0.1 percent and something doubles it to 0.2 percent, you still have a 99.8 percent chance of not getting the condition. That is very different from a condition where the baseline is 20 percent, and something doubles it to 40 percent. The relative change is the same. The lived reality is entirely different.
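The contrast is easy to verify numerically. A minimal sketch in Python, using the two illustrative baselines from the paragraph above (the `doubled` helper is hypothetical, named here for clarity):

```python
def doubled(baseline):
    """Double a baseline risk; return (relative change, absolute change)."""
    new = baseline * 2
    relative_change = (new - baseline) / baseline   # always 1.0, i.e. +100%
    absolute_change = new - baseline
    return relative_change, absolute_change

rel_rare, abs_rare = doubled(0.001)      # 0.1% baseline -> 0.2%
rel_common, abs_common = doubled(0.20)   # 20%  baseline -> 40%

# Both relative changes are +100%, but the absolute change is
# 0.1 percentage points in one case and 20 percentage points in the other.
```

The relative figure is identical in both cases; only the absolute change tells you which situation you are actually in.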

This is the context problem, and it is how a great deal of risk communication misleads people without technically lying. A 20 percent increase in risk sounds alarming regardless of whether the baseline is 0.001 percent or 40 percent. The number “20 percent” triggers something in us. We do not automatically reach for the denominator. We do not ask: 20 percent of what?

Why not? Because our intuitive risk-sensing equipment was not built for statistics. It was built for immediate, vivid, concrete threats. The human brain is very good at noticing a growl in the dark. It is poorly equipped to evaluate the difference between a 0.02 percent risk and a 0.04 percent risk, even though doubling a risk is, in absolute terms, either catastrophic or trivial depending entirely on where you started.

Several well-documented patterns drive the miscalibration.

The first is vividness. A dramatic story about one person who was harmed by something — a news report, a personal account, a photograph — raises perceived risk far above what the frequency data justifies. Plane crashes are memorable. Car journeys are not. This is why people who fear flying will still text while driving.

The second is availability. The easier it is to bring an example to mind, the more probable it feels. If you have just read three stories about a rare side effect, that side effect will feel more common than it is. Media coverage is not distributed in proportion to actual risk. It is distributed in proportion to novelty and drama.

The third is proportion blindness. When a number is presented without its denominator, the brain tends to treat it as if the denominator were small. “There were 400 cases” sounds like a lot. “There were 400 cases out of 20 million people exposed” sounds like very few. The bare number lands first and biases everything that follows.

All three of these patterns are well known to anyone who communicates about risk professionally. Which means they are available to be exploited.

Why It Matters

Pfizer ran a widely circulated advertisement for Lipitor, its cholesterol-lowering drug. The headline claimed that Lipitor “reduces the risk of heart attack by 36%.” That figure came from a clinical trial: among the participants not taking the drug, 3 percent had a heart attack over the study period; among those taking Lipitor, 2 percent did. Three percent to two percent. One percentage point.

Express that as a relative change — how much did the risk in the treated group fall relative to the untreated group — and you get 33 percent. With slightly different baseline figures, you get numbers in the mid-thirties. Hence 36 percent.

Express it as an absolute change — what is the actual difference in probability between taking the drug and not taking it — and you get 1 percentage point.

Both numbers are mathematically honest. One of them is the number the advertisement uses. You do not have to ask which one.

The 36 percent figure is an example of relative risk reduction: the proportional change in risk between two groups. The 1 percentage point figure is the absolute risk reduction: the actual change in your probability of the bad outcome. Relative risk reduction always sounds larger than absolute risk reduction, sometimes by a small margin and sometimes by an enormous one. The smaller the baseline, the larger the gap.

In the statin case, the number needed to treat — how many people must take the drug for one person to benefit — works out to around 100 for that study population. A hundred people take the drug; ninety-nine of them receive no measurable protection against heart attack and are exposed to whatever side effects the drug carries; one person is spared. This is not an argument against statins. There are populations where the numbers are more favourable. It is an argument for knowing the actual numbers rather than the advertising number.
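All three numbers in the statin example follow from the same two event rates. A minimal sketch in Python, using the 3 percent and 2 percent figures quoted above (the `risk_reduction` helper is illustrative, not from any library):

```python
def risk_reduction(control_rate, treated_rate):
    """Return (relative risk reduction, absolute risk reduction, NNT)
    for event rates expressed as probabilities."""
    arr = control_rate - treated_rate   # absolute risk reduction
    rrr = arr / control_rate            # relative risk reduction
    nnt = 1 / arr                       # number needed to treat
    return rrr, arr, nnt

rrr, arr, nnt = risk_reduction(0.03, 0.02)
# rrr is about 0.33 (the "mid-thirties" headline figure),
# arr is about 0.01 (one percentage point),
# nnt is about 100 people treated per heart attack avoided
```

The same two inputs yield both framings; the only choice is which one goes in the headline.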

Media coverage operates by the same logic. A headline reporting that a substance “doubles your risk” of a condition sounds alarming. If the condition affects 1 person in 10,000 normally, doubling the risk makes it 2 people in 10,000. You have moved from a 0.01 percent chance to a 0.02 percent chance. If you are one of the 9,998 people who were not going to get the condition either way, this information should not change your behaviour much at all.

The confusion between relative and absolute risk has consequences in the clinic. A review published in the British Medical Journal found that both patients and physicians significantly overestimate the benefit of medical treatments when benefits are expressed in relative rather than absolute terms. When the same information was presented as absolute risk reduction, assessments of benefit dropped sharply. The format changes the decision without changing the facts.

How to Spot It

The documented case that shows the pattern most clearly is the Lipitor advertisement described above. It is not the only example, but it is the one that has been most carefully analysed and is publicly available.

The advertisement placed the “36%” figure in large type at the top. At the bottom, in noticeably smaller type, it noted that the underlying rates were 3 percent versus 2 percent — an absolute difference of 1 percentage point. The information was there. The design ensured that almost nobody would process both numbers with equal weight.

The tell is always the same: a risk claim expressed as a percentage change without a baseline. “Reduces risk by 36%.” “Doubles your risk.” “40% more likely.” Any of these should immediately trigger one question: what is the starting number?

If the starting number is not provided, you cannot evaluate the claim. Full stop. The absence of a baseline is not an oversight. Baselines are inconvenient for people who want to make a treatment, a product, or a danger sound significant.

A second tell: watch for the word “relative.” If a source specifies “relative risk reduction,” the absolute number exists somewhere and was not chosen as the headline figure. Ask what it is.

The third tell is the absence of a denominator in a frequency claim. “Hundreds of cases have been reported” — hundreds out of how many exposures? “The rate has increased” — from what to what, in which population? A missing denominator is almost always doing a job.

Your Challenge

A news story reports: “Scientists have found that people who regularly eat processed meat increase their brain tumour risk by 40 percent.”

Before you decide how alarmed to be, here is a short list of questions. What is the baseline rate of brain tumours in the population the study looked at? Is the 40 percent a relative risk increase or an absolute risk increase? What does “regularly” mean, and how was it measured? What counts as “processed meat” in this study? How large was the study, and how were participants selected?

Find out the baseline incidence of primary brain tumours in adults in the UK or your own country. Then calculate what a 40 percent relative increase would mean in absolute terms. Decide whether the headline matches the number.
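As a worked sketch of that calculation, taking the roughly 5-per-100,000 annual glioblastoma incidence cited in the references as a stand-in baseline (substitute the figure you find for your own country and for all primary brain tumours combined):

```python
# Stand-in baseline: annual glioblastoma incidence of about 5 per 100,000
# (Cancer Research UK figure cited in the references; all primary brain
# tumours combined would give a somewhat higher baseline).
baseline_per_100k = 5.0
relative_increase = 0.40          # the headline "40 percent"

new_per_100k = baseline_per_100k * (1 + relative_increase)
absolute_increase = new_per_100k - baseline_per_100k

# Roughly 5 -> 7 per 100,000 per year: an absolute increase of about
# 2 in 100,000, i.e. a move from a 0.005% to a 0.007% annual risk.
print(f"{baseline_per_100k:.0f} -> {new_per_100k:.0f} per 100,000 "
      f"(+{absolute_increase:.0f} per 100,000 per year)")
```

Whatever the exact baseline you find, the absolute change stays in the region of a few cases per 100,000 per year, and that is the number to weigh against the headline.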

There is no answer on this page. That is the point.

References

Lipitor “36%” advertisement and the absolute versus relative risk analysis: Woloshin, S. and Schwartz, L.M., “Communicating Data about the Benefits and Harms of Treatment: A Randomized Trial,” Annals of Internal Medicine (2011); and Gigerenzer, G., Wegwarth, O. and Feufel, M., “Misleading communication of risk,” British Medical Journal (2010). URL: https://www.bmj.com/content/341/bmj.c4830

Pfizer CARDS trial data underlying the “36%” claim: Colhoun, H.M. et al., “Primary prevention of cardiovascular disease with atorvastatin in type 2 diabetes in the Collaborative Atorvastatin Diabetes Study (CARDS),” Lancet (2004). The advertisement citing the 36% relative risk reduction is reproduced at ResearchGate, “An advertisement for Lipitor which emphasizes the relative risk reduction of a heart attack (36%).” URL: https://www.researchgate.net/figure/An-advertisement-for-Lipitor-which-emphasizes-the-relative-risk-reduction-of-a-heart_fig1_272189007

Statin absolute risk reductions in the low-risk population: Law, M.R., Wald, N.J. and Rudnicka, A.R. meta-analysis figures reproduced in multiple sources; primary prevention NNT of around 100-217: The NNT Group, “Statins for Persons at Low Risk of Cardiovascular Disease.” URL: https://thennt.com/nnt/statins-persons-low-risk-cardiovascular-disease/

Relative vs absolute risk framing and physician/patient overestimation: Malenka, D.J. et al., “The framing effect of relative and absolute risk,” Journal of General Internal Medicine (1993); and Covey, J., “A meta-analysis of the effects of presenting treatment benefits in different formats,” Medical Decision Making (2007). URL: https://journals.sagepub.com/doi/10.1177/0272989X07306781

Glioma baseline incidence in England (approximately 5 per 100,000 per year for glioblastoma specifically): Cancer Research UK, “Brain, other CNS and intracranial tumours incidence statistics.” URL: https://www.cancerresearchuk.org/health-professional/cancer-statistics/statistics-by-cancer-type/brain-other-cns-and-intracranial-tumours/incidence

INTERPHONE study 40% increased glioma risk in the highest decile of mobile phone users: International Agency for Research on Cancer, “INTERPHONE Study Group — Mobile phone use and brain tumours,” International Journal of Epidemiology (2010). The 40% figure (OR = 1.40) applied to the top 10% of users by cumulative exposure hours. URL: https://interphone.iarc.fr/pr200-e.pdf

Availability heuristic and risk perception: Tversky, A. and Kahneman, D., “Availability: A heuristic for judging frequency and probability,” Cognitive Psychology (1973).