President Biden’s massive increase in home-based rapid antigen COVID tests will lead to a tsunami of false-positive results, which will indefinitely extend the current “casedemic.”
By Madhava Setty, M.D. - January 28, 2022
The idea is to allow people to test themselves before attending social events, going to school or work, etc., so they can know, almost in real time, whether they may be infectious.
On its face, this seems like a reasonable approach. Why wouldn’t having more information about possible infections be a good thing?
Here’s why that’s actually a really bad idea. Mass testing of people who are overwhelmingly asymptomatic (showing no symptoms) will in fact inevitably extend this pandemic nightmare for additional months — maybe even years — as “cases” continue to mount from false positives (a test result that incorrectly identifies infection when none exists).
To be clear: Biden’s mass testing approach is exactly the opposite of what is needed right now. We should not be testing asymptomatic people.
The reason for this becomes clear only by looking beyond the headlines that claim skyrocketing cases and deaths from the infection.
The accuracy of the case, hospitalization and death numbers is a function of the accuracy of the screening tests we are implementing. Inaccurate tests will naturally lead to inaccurate data.
However, the distortion of these numbers is more than just a matter of the accuracy of our screening tests, as will be explained below.
Though the public generally understands every test will have some amount of inherent error, we are told that the widely used COVID-19 tests are very accurate and thus we can trust the reports of “NEW CASES” shouted daily from most mainstream media platforms.
The reality is that even when a reasonably accurate test is used on a population that has a low background prevalence of active disease, the majority of positive test results will, in fact, be false.
Why is this the case? We must first examine what is meant by a test’s accuracy.
Sensitivity Versus Specificity — What’s The Difference?
A test’s accuracy is defined by two things: its ability to diagnose a condition when it exists and its ability to rule out a condition when it doesn’t.
A given diagnostic test does not necessarily have an equal ability to rule in and rule out the condition it is designed to identify. For this reason, the accuracy of a test is defined by its sensitivity and specificity.
Sensitivity and specificity have precise definitions. A test’s sensitivity is the proportion of people who have a disease that the test will correctly identify with a positive result. In other words, if a test has 90% sensitivity, it will return a positive result nine times out of 10 when testing people with the disease.
Specificity is the proportion of people who do not have the disease that the test will correctly identify with a negative result. A test with 90% specificity will return a negative result nine times out of 10 when testing people who don’t have the disease.
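As a concrete sketch of these two definitions, both proportions can be computed from a test’s results. All of the counts below are invented purely for illustration:

```python
# Sensitivity and specificity from hypothetical test results.
# These counts are made up for illustration; no real test is implied.
true_positives = 90    # diseased people the test correctly flags positive
false_negatives = 10   # diseased people the test misses
true_negatives = 90    # healthy people the test correctly clears
false_positives = 10   # healthy people the test wrongly flags

# Sensitivity: of everyone who HAS the disease, what share tests positive?
sensitivity = true_positives / (true_positives + false_negatives)

# Specificity: of everyone who does NOT have the disease, what share tests negative?
specificity = true_negatives / (true_negatives + false_positives)

print(f"sensitivity = {sensitivity:.0%}")  # prints "sensitivity = 90%"
print(f"specificity = {specificity:.0%}")  # prints "specificity = 90%"
```

Note that the two numbers are computed from entirely separate groups of people (the diseased and the healthy), which is why a test can score well on one and poorly on the other.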
Let’s demonstrate this further using an extreme example. Let’s say our test for diagnosing COVID-19 doesn’t involve PCR or antibody titers or antigen testing.
Instead, the test simply involves confirming that a person is alive.
If a person is alive, then in this hypothetical test, they must have COVID-19. If they are dead, they do not have COVID-19. Our hypothetical test’s sensitivity would be 100% because every person who has COVID-19 will test positive; no COVID-19 case will escape detection.
Obviously, this hypothetical test does not offer any meaningful information because every living person tested will test positive for the disease. Assuming we would test only living people, our test will never return a negative result.
In other words, this test will not identify anyone who doesn’t have the disease.
Another way of stating this is by saying that the specificity of our test is 0% because none of those who do not have COVID-19 will ever be identified.
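The extreme “alive test” above can be sketched in a few lines of code. The tiny population here is invented for illustration:

```python
# The hypothetical "alive test": every living person tests positive.
def alive_test(person):
    return person["alive"]  # positive whenever the person is alive

# A small made-up population; everyone tested is alive.
population = [
    {"alive": True, "has_covid": True},
    {"alive": True, "has_covid": False},
    {"alive": True, "has_covid": False},
]

diseased = [p for p in population if p["has_covid"]]
healthy = [p for p in population if not p["has_covid"]]

sensitivity = sum(alive_test(p) for p in diseased) / len(diseased)
specificity = sum(not alive_test(p) for p in healthy) / len(healthy)

print(sensitivity)  # 1.0: no COVID-19 case escapes detection
print(specificity)  # 0.0: no healthy person is ever cleared
```

A perfect sensitivity score, in other words, is worthless on its own; it must be weighed against specificity.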
The Metric We Really Need To Look At: Positive Predictive Value (PPV)
The sensitivity and specificity of a given test do not change with the prevalence of the disease in the population being tested.
However, the proportion of false positives (people who do not have the disease but test positive) rises as the prevalence of the disease falls.
Though it may seem initially mystifying, this is an inescapable reality with any diagnostic test that is not 100% accurate. This is demonstrated below.
The positive predictive value (PPV) of a test is the proportion of all positive results that are true positives. That is, it is the number of people who truly have the disease divided by the total number of people who test positive.
Hence, the PPV of a test varies with the true prevalence of the disease in the population being tested.
It is the PPV of a test that indicates the probability that a person who tests positive for a disease actually has the disease.
When one asks, “I tested positive for COVID. What are the chances I actually have the disease?” the PPV of the test is the answer they are looking for.
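Bayes’ theorem gives the PPV directly from sensitivity, specificity and prevalence. A minimal sketch, using a hypothetical test that is 90% sensitive and 90% specific, shows how the PPV collapses as the disease becomes rarer:

```python
# Positive predictive value (PPV) via Bayes' theorem.
# The 90%/90% test below is hypothetical, chosen only for illustration.
def ppv(sensitivity, specificity, prevalence):
    true_pos = sensitivity * prevalence                # P(positive AND diseased)
    false_pos = (1 - specificity) * (1 - prevalence)   # P(positive AND healthy)
    return true_pos / (true_pos + false_pos)

# The same test, applied to populations with falling prevalence:
for prevalence in (0.50, 0.10, 0.01):
    print(f"prevalence {prevalence:.0%} -> PPV {ppv(0.90, 0.90, prevalence):.0%}")
# prevalence 50% -> PPV 90%
# prevalence 10% -> PPV 50%
# prevalence  1% -> PPV 8%
```

Nothing about the test itself changes between the three rows; only the population being tested does. At 1% prevalence, a positive result from this hypothetical test is wrong more than nine times out of 10.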
What happens when a reasonably accurate test is deployed on a population that has a low prevalence of disease?
The U.S. Food and Drug Administration describes it here ...
https://www.fda.gov/medical-devices/letters-health-care-providers/potential-false-positive-results-antigen-tests-rapid-detection-sars-cov-2-letter-clinical-laboratory
... Using a test that has an impressive 98% specificity on a population where 1 in 100 actually have the disease (a disease prevalence of 1%) will result in a PPV of 30%.
In other words, 70% of those who test positive will not have the disease. Seven out of 10 will be false positives . . .
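The FDA’s figure can be checked arithmetically. The excerpt does not state the sensitivity the letter assumes, so the 85% used below is only a guess chosen because it reproduces the roughly 30% result; even a perfect 100% sensitivity would lift the PPV only to about 34%:

```python
# Checking the FDA example: 98% specificity, 1% prevalence.
# Sensitivity of 85% is an assumption (not stated in the excerpt),
# picked only because it yields a PPV of roughly 30%.
sensitivity, specificity, prevalence = 0.85, 0.98, 0.01

true_pos = sensitivity * prevalence               # 0.85 * 0.01  = 0.0085
false_pos = (1 - specificity) * (1 - prevalence)  # 0.02 * 0.99  = 0.0198
ppv = true_pos / (true_pos + false_pos)

print(f"PPV = {ppv:.0%}")                    # prints "PPV = 30%"
print(f"false positives = {1 - ppv:.0%}")    # prints "false positives = 70%"
```

The driver of the result is the false-positive term: at 1% prevalence, the 2% of healthy people who test positive (0.0198 of the whole population) outnumber the true positives roughly two to one.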
[SNIP]