MD Stein - Cholesterol, Health, And Disease: A National Experiment In What's Clinically Desirable
For blood test results, reference ranges set by the testing laboratory define health and disease. Health (that is, not having a disorder like diabetes or hypothyroidism) is defined as any test result for a specific condition that falls within the middle 95% of the millions of values reported to the lab for that test in that year. Disease means your lab value is at the margins, the top 2.5% or the bottom 2.5%. Unless some external authority decides to tinker with this reference range (and sometimes they do—check the blog on diabetes I wrote three weeks ago) to redefine normal as a “clinically desirable range,” one would think that reference ranges should be relatively stable across an entire population over time. The word reference, after all, suggests solidity, fixity, nearness to truth; think of reference books. They’re not reformulated every few years. Nor would we think that our definitions of health and disease can be dramatically altered.
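To make the "middle 95%" definition concrete, here is a minimal sketch of how a percentile-based reference range could be computed. The LDL numbers are simulated (the mean of 124 mg/dL echoes the 2001 average cited below, but the distribution itself is invented for illustration); real laboratories use more careful statistical methods.

```python
import random

def reference_range(values, central_fraction=0.95):
    """Return (low, high) bounds enclosing the central 95% of
    observed values -- the convention described above, where
    'normal' is simply the middle 95% of all reported results."""
    tail = (1.0 - central_fraction) / 2.0          # 2.5% in each tail
    ordered = sorted(values)
    n = len(ordered)
    low = ordered[int(tail * (n - 1))]             # ~2.5th percentile
    high = ordered[int((1.0 - tail) * (n - 1))]    # ~97.5th percentile
    return low, high

# Simulated LDL results in mg/dL (hypothetical bell curve around 124).
random.seed(0)
ldl = [random.gauss(124, 30) for _ in range(100_000)]
low, high = reference_range(ldl)

# If treatment lowers many results, both bounds of the "normal"
# range drift down with them -- the shift the post goes on to describe.
ldl_treated = [x - 12 for x in ldl]
low_t, high_t = reference_range(ldl_treated)
```

Note that nothing in this calculation refers to what is medically desirable; the range tracks whatever values the population happens to produce, which is exactly why it can move.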
But in the world of lab tests, reference ranges can change. Between 2001 and 2004, across 80 million cholesterol tests collected by laboratories, the average LDL cholesterol level of American patients under physicians' care fell from 124 to 112 mg/dL, roughly a ten percent drop. A decline this steep over so short a period is astounding, and the finding was considered very important for public health. Lower LDL cholesterol should translate to fewer heart attacks, strokes, and other cardiovascular events over the following decade for those whose cholesterol fell. But this decline in average cholesterol means that the entire reference range shifted: the middle 95% of all cholesterol values had changed. How could this have happened at the same time that the population was gaining weight, right in the midst of the obesity epidemic?
The most likely explanation for this trend is that during these years the use of cholesterol-lowering drugs exploded. The average American cholesterol level decreased because hundreds of thousands of people started taking cholesterol-lowering medications after the new National Cholesterol Education Program of 2002 suggested that a lower cholesterol level was clinically desirable, particularly for those at high risk of vascular disease. This meant that more Americans were candidates for treatment. The fact that the decrease in the average American LDL cholesterol was greatest (approximately 13%) for people aged 70 years and older (those presumably at higher risk, and so on medication) and least pronounced (approximately 7%) for the 20-to-39-year age range (those presumably at lower risk) supports this explanation.
Doctors responded to new guidelines for what was clinically desirable with prescriptions. Statin use increased, driving the cholesterol levels of those treated lower, and the midpoint of 80 million test results changed. There is, then, an invisible feedback loop tying the reference range to the clinically desirable range.
This loop, setting an ever-lower target for what's considered the optimal result of a cholesterol test, is a form of national, population-based experiment. We won't know for years whether the outcome will be positive, as evidenced by heart attacks and strokes prevented, or negative, which is possible too: all medications have side effects and costs, and the widespread use of any pharmaceutical can have unexpected results. It will take a long time to find out.