"The tests used to assess Covid, particularly those using Polymerase Chain Reaction (PCR), are not specific enough, and produce errors around 1% of the time. If the true prevalence of the virus is low, that error rate means we'd end up seeing lots of 'positive cases' that aren't actually Covid infections. Maybe even 91% of the 'cases' we think we've found are actually false positives".
"This isn't a pandemic - it's a 'Casedemic', where there are many so-called cases, but not much actual disease."
There's nothing wrong with this reasoning per se. But, as we'll see below, the numbers plugged into it bear little relation to reality, so the conclusion doesn't hold.
Virus rates aren't low, breaking one of the assumptions of this idea. The logic above rests on only a small percentage of people having the virus (the first bullet point). But that's no longer realistic. At the time of writing, around 2% of people (1 in 50) are currently testing positive. If we use that number in the scenario described above (you may think this is somewhat circular, but the point is to show how changes in the background rate of infection can dramatically affect the final false positive rate), there are now 200 true positives and about 100 false positives. That reduces the final rate of false positives to about a third (roughly 100 false positives out of 300 positive results) - far below the 91% claimed. That's still high - but as we'll see in point (3), another of the assumptions above is wrong as well.
The specificity of the test is very high, breaking another of the assumptions. Above, we supposed a test that was 99% specific (the second bullet point). In other words, it would have a 1% error rate where it incorrectly told Covid-negative people they were Covid-positive. But we have good reason to believe that this is far from the real specificity of the test. A paper published in November, describing a mass-testing programme in Wuhan, China, reported nearly 1,000,000 tests but found only around 300 positive cases. Even if every single one of those cases was a false positive (and there's no reason to think that's the case - but just hypothetically), the specificity of the test would still be 99.97%. There's a similar story in New South Wales, which regularly tests hundreds of thousands of people and finds case numbers in the double figures. To put this another way: the results from these places, as well as from the summer in the UK (where the large-scale ONS survey found rates of around 0.05% positive results), show it's impossible for the specificity of the test to be as low as 99.0%, or anywhere near it.
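The Wuhan bound is a one-line calculation: if all positives were false, the specificity can be no lower than one minus the positive rate. A minimal sketch, using the rounded counts quoted above rather than the study's exact figures:

```python
# Worst-case (lower-bound) specificity implied by the Wuhan screening
# figures: treat every one of the ~300 positives as a false positive.
# Counts are the rounded numbers quoted in the text, not exact study data.
tests = 1_000_000
positives = 300
min_specificity = 1 - positives / tests
print(f"{min_specificity:.2%}")  # → 99.97%
```

The same calculation applies to any mass-screening result: the fewer positives per test, the higher the specificity must be, even in the worst case where none of the positives are real.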
Using a specificity of 99.97%, along with a true rate of infection of 2%, reduces our final rate of false positives in the scenario above to around 1.5% (roughly 3 false positives alongside 200 true positives). Since the specificity of the test is likely to be even higher than that, the final false positive rate is likely to be even lower. The vast, vast majority of cases here would be true Covid infections.
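All of this arithmetic can be checked with a short Python sketch. The population of 10,000, the assumption of perfect sensitivity (every true infection is detected), and the 0.1% prevalence used to reproduce the original "91%" claim are all illustrative choices on my part, not figures from the studies discussed:

```python
# Share of positive test results that are false positives, for a given
# prevalence and test specificity. Assumes the test detects every true
# infection (100% sensitivity) - an illustrative simplification that
# matches the rough counts used in the text.
def false_positive_share(prevalence, specificity, population=10_000):
    true_pos = population * prevalence
    false_pos = population * (1 - prevalence) * (1 - specificity)
    return false_pos / (true_pos + false_pos)

# 0.1% prevalence, 99% specific test: ~10 true vs ~100 false positives
print(f"{false_positive_share(0.001, 0.99):.0%}")   # → 91%
# 2% prevalence, 99% specific test: 200 true vs ~98 false positives
print(f"{false_positive_share(0.02, 0.99):.0%}")    # → 33%
# 2% prevalence, 99.97% specific test: 200 true vs ~3 false positives
print(f"{false_positive_share(0.02, 0.9997):.2%}")  # → 1.45%
```

The point of the sketch is how sharply the false positive share falls as either the background prevalence or the specificity rises, which is exactly the argument being made above.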
"We know the specificity of our test must be very close to 100% as the low number of positive tests in our study means that specificity would be very high even if all positives were false. For example, in the most recent six-week period (31 July to 10 September), 159 of the 208,730 total samples tested positive. Even if all these positives were false, specificity would still be 99.92%.
"We know that the virus is still circulating, so it is extremely unlikely that all these positives are false. However, it is important to consider whether many of the small number of positive tests we do have might be false. There are a couple of main reasons we do not think that is the case.
"Symptoms are an indication that someone has the virus; therefore, if there are many false-positives, we would expect to see more false-positives occurring among those not reporting symptoms. If that were the case, then risk factors such as working in health care would be more strongly associated with symptomatic infections than with asymptomatic infections. However, in our data the risk factors for testing positive are equally strong for both symptomatic and asymptomatic infections."
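The ONS's own worst-case calculation can be reproduced directly from the two numbers it reports, 159 positives out of 208,730 samples:

```python
# Worst-case specificity of the test in the ONS survey: assume every
# one of the 159 positives (out of 208,730 samples) was false.
samples = 208_730
positives = 159
min_specificity = 1 - positives / samples
print(f"{min_specificity:.2%}")  # → 99.92%
```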
The number of tests conducted, which has simply risen continuously over time, follows nowhere near the same pattern as cases, which rose and fell in waves. Since there's no reason to think the specificity of the tests mysteriously increased and decreased over time, it's far more realistic to think that the vast majority of the cases we observe in the second wave are true positives.