From the first weeks of the COVID-19 pandemic, media and governments have understandably sought out expert opinion to guide them. How did that go?
Most of us had never heard from any epidemiologists before 2020, but since then they have been quoted nearly every day. So many media articles start with variations on the same theme – ‘Experts have warned that COVID-19 cases are on the rise again,’ or ‘Experts have called for restrictions to be tightened,’ or ‘Experts have warned against complacency’ about COVID-19.
Experts and the media have worked together to create waves of fear (we are all equally at risk – but we aren’t) to justify eternal vigilance, placing our societies on a constant war footing and periodically placing whole populations in home detention. If the current pandemic is seen to end, they will warn against the next one. After COVID-19 will come COVID 2024 or 2025. The distinguished Italian philosopher Giorgio Agamben rightly declared: ‘A society that lives in a perennial state of emergency cannot be a free society.’
The authority of experts is being used to suppress dissent. A few rogue dissidents claim that the world took the wrong path, but surely they must be ignored because science is objective truth, isn’t it? There is much criticism of ‘armchair experts’ who opine about the correct approach to pandemic management despite having no background knowledge of epidemiology. Experts in other fields are warned to ‘stay in their lane.’ The experts in the field have spoken, the science is clear, this must be done. Is that the end of the matter?
Not necessarily.
It sometimes helps to use analogies from fields that are not currently controversial. Let’s look, for example, at two epic engineering projects in my part of the world.
First, the Danish architect Jørn Utzon won an international competition to design the Sydney Opera House with a lyrical sketch design featuring elegant, low shells of concrete. But the original design could not be built. The engineers had to explain ‘the facts of life’ to the architect, and eventually a variant was developed using shells based on a uniform sphere much closer to the vertical than in the original design. So, the technical team worked with the visionary architect to make his vision a reality.
Second, in the neighboring State of Victoria, we started building a high bridge across Melbourne’s river using the (at that time relatively new) box girder design. Unfortunately, the experts on this project got their calculations wrong; one of the big box sections collapsed during construction, crushing workers’ huts underneath with the loss of 35 lives (see this summary of the biggest civil engineering failure in our history).
From these examples we can draw two important lessons:
- Technical experts are essential and must be part of the team
- Experts can get it wrong, leading to disaster.
There was a critical decision point early in the COVID-19 pandemic when governments turned aside from the traditional approach of quarantining sick people and decided to quarantine the whole population, including massive numbers of healthy and asymptomatic people. They were greatly influenced by the autocratic Chinese government’s apparent success in suppressing the original Wuhan outbreak using extreme measures, and then by the infamous Report 9 (from Ferguson and the Imperial College London COVID-19 Response team), based on computational modeling.
This triggered a pandemic of modeling around the world, with teams competing against each other to persuade governments to support the Ferguson team’s recommendation: to suppress the COVID-19 pandemic through a 75% reduction in contacts outside the household, school, or workplace until a vaccine became available.
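To make concrete what such modeling involves, here is a minimal sketch of a compartmental (SIR) model with a contact-reduction parameter of the kind the recommendation turned on. Every value below is an illustrative assumption of mine, not a parameter from Report 9, and the actual Imperial College model was a far more elaborate individual-based simulation.

```python
# Minimal sketch of a compartmental (SIR) epidemic model with a
# contact-reduction knob. All parameter values are illustrative
# assumptions, not those of Report 9.

def simulate_sir(r0=2.5, infectious_days=7.0, contact_reduction=0.0,
                 population=1_000_000, initial_infected=100, days=365):
    """Run a discrete-time SIR model and return the peak number infected."""
    beta = (r0 / infectious_days) * (1.0 - contact_reduction)  # transmission rate
    gamma = 1.0 / infectious_days                               # recovery rate
    s, i, r = population - initial_infected, initial_infected, 0
    peak = i
    for _ in range(days):
        new_infections = beta * s * i / population
        new_recoveries = gamma * i
        s -= new_infections
        i += new_infections - new_recoveries
        r += new_recoveries
        peak = max(peak, i)
    return peak

# Compare no intervention with a 75% reduction in contacts,
# the figure the recommendation was built around.
print(f"Peak infections, no reduction:  {simulate_sir():,.0f}")
print(f"Peak infections, 75% reduction: {simulate_sir(contact_reduction=0.75):,.0f}")
```

Even this toy version shows the basic mechanics: the projected epidemic curve is driven entirely by the handful of parameters the modeler chooses to include, which is precisely the limitation discussed below.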
They assumed it was necessary to quarantine everybody in order to suppress transmission overall. But governments went even further than this, shutting down schools and workplaces as well.
There were several fundamental flaws in the reliance on modeling to shape public policy. First, although the models have evolved over the years to the point of being impressively sophisticated tools, they are nonetheless simplified virtual versions of reality, and the environment and the drivers that determine the evolution of pandemics include many unknown causative factors that cannot be included in the model.
Second, as I have pointed out before, the ICL team’s recommendation for universal quarantine did not arise from their actual results, which clearly show that a mix of measures including quarantine only for the over-70s led to the best outcomes. Their final recommendation was based on scientific opinion, which must be distinguished from scientific evidence.
This illustrates one of the critical principles at stake. Report 9 and its underpinning methodology show a high level of technical expertise, and it would be ridiculous for non-experts to dispute in detail the technical validity of the paper. However, there is a chain of logic which leads from technical findings to a policy recommendation that must be interrogated.
The recommendations in these papers had extraordinary impacts on people’s lives, leading to breaches of human rights (such as the right to walk outside your front door) on a scale never before seen. Experts can ascertain some facts using a methodology that only other experts can dispute, but the construction they put on those facts, their interpretation of them, does not always follow from the results.
There are many established principles in science which are not open to debate. It would be equally ridiculous for a non-expert to dispute the validity of the laws of thermodynamics, for example. The fundamental science for calculating the stresses in reinforced concrete structures, as in our opera house and bridge examples, was presumably settled, although these novel constructions presented numerous challenges of implementation.
But the science relating to COVID-19 management is still an emerging field, in a much ‘softer’ area of science. This science is not yet settled, there are diverse findings in the literature, and different experts interpret the findings in different ways. Even when scientific principles are beyond doubt, their application to particular scenarios and policy questions is not self-evident. And scientific opinion in the field of health is distorted by commercial pressures to an extent unknown in other fields.
Of course, all the experts believe they make up their own mind free from such pressures, but this is why the relevant concept is known as ‘unconscious bias.’
Of course, groups of experts are not conspiring with each other to defraud the public – they believe strongly and sincerely in the advice they give. But the entire environment in which they give their advice is molded by commercial pressures, including the research pipeline itself, starting with choices about what will be researched.
Billions of dollars of public and corporate money were devoted to the discovery of vaccines against COVID-19, and nothing to the role of nutrients. The panels of experts that advise the US government on applications for vaccine approval accept everything that is put in front of them, even in the case of the recent applications for approval to vaccinate children from the age of six months, based on thin data in which efficacy alternated between low and negative depending on the timeframe (summarized for the Pfizer vaccine here).
Earlier in the pandemic, one group of scientists published the ‘John Snow Memorandum,’ with the formal title: ‘Scientific consensus on the COVID-19 pandemic: we need to act now.’ They argued that there was a consensus that lockdowns were ‘essential to reduce mortality.’
The title was unwarranted as the purpose of their declaration was to condemn the authors of the Great Barrington Declaration for advocating the more traditional approach of selective quarantine and ‘focused protection.’
The mere existence of these two rival declarations falsifies the claim that there was a scientific consensus in favor of lockdowns. John Ioannidis undertook an analysis of the signatories and found that: ‘Both GBD and JSM include many stellar scientists, but JSM has far more powerful social media presence and this may have shaped the impression that it is the dominant narrative.’
So, there you have it – pro-lockdown scientists dominate the narrative, but this does not correspond to the actual balance of scientific opinion.
We should not be referring to ‘the science’ and ‘the experts’ on COVID-19 as if they are uniform entities. Two years on from the beginning of the pandemic, many observational studies of outcomes have been published. Some of these purport to show that lockdowns reduced transmission, a few that lockdowns reduced mortality.
Many of these pro-lockdown studies rely on contrasting actual outcomes with virtual reality, the projections of computational models of what might have been if governments had not intervened. Since no governments failed to intervene, this is a non-falsifiable scenario that consequently has little status as a scientific proposition.
Reviews of the literature that focus on empirical studies such as the Johns Hopkins University meta-analysis by Herby et al indicate that the benefits of lockdown are modest at best. The conclusions of meta-analyses are very dependent on the selection criteria that determine which studies are included and which are excluded.
A meta-analysis based on a different set of criteria might well come to different conclusions. But the Johns Hopkins team mount a strong case for their methodology, with a preference for a ‘counterfactual difference-in-difference approach’ comparing the difference between the epidemic curves in locations that imposed lockdowns as opposed to those that did not.
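For readers unfamiliar with the method, the following is a minimal sketch of the difference-in-differences logic with entirely hypothetical numbers; the Herby et al. meta-analysis itself aggregates published estimates from many studies rather than computing anything this simple.

```python
# Illustrative sketch of a difference-in-differences comparison between
# locations that imposed lockdowns and those that did not. The figures
# are entirely hypothetical placeholders.

# Weekly deaths per million, before and after the intervention period.
lockdown_regions    = {"before": 10.0, "after": 40.0}
no_lockdown_regions = {"before": 12.0, "after": 45.0}

# Change over time within each group.
change_lockdown    = lockdown_regions["after"] - lockdown_regions["before"]
change_no_lockdown = no_lockdown_regions["after"] - no_lockdown_regions["before"]

# The difference-in-differences estimate: the portion of the change
# attributable to the intervention, assuming both groups would otherwise
# have followed parallel trends.
did_estimate = change_lockdown - change_no_lockdown
print(f"Estimated lockdown effect: {did_estimate:+.1f} deaths per million per week")
```

The appeal of this design is that it compares real places with real places, rather than real places with the projections of a model.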
The Johns Hopkins team makes out a powerful case that the dominant narrative was mistaken, based on empirical data. Governments and their advisors need to consider contrarian findings as well as those that support the dominant narrative. In their advice to government, advisors and agencies should acknowledge the existence of these contrarian findings and justify their preference for the orthodox approach.
Governments need to have powerful reasons for imposing unprecedented restraints on individual liberties when there is in fact no scientific consensus that these are effective.
And they also need to take into account the other harms imposed by their policies in the form of ‘collateral damage’ or adverse effects. For example, the World Bank estimated that 97 million people were thrown into extreme poverty in 2020. These effects are usually seen as being caused by the pandemic, but were in fact caused by the countermeasures, including the closure of borders and the drastic reduction in mobility brought about by lockdowns.
The effect of poverty on mortality is well-established. Many experts have exaggerated the benefits of lockdowns and other coercive measures and ignored their adverse effects, a characteristic of medical culture more broadly. Governments need to be alerted to both sides of the ledger, credits and debits.
Governments would find it difficult to weigh competing technical findings in the balance, but it is not unreasonable to expect them to do so. We can make another analogy, this time to court proceedings. In a murder trial such as the famous case of Oscar Pistorius, both the prosecution and the defense can call expert witnesses to give their opinions about the forensic evidence (such as the trajectory of the bullets).
The opposing barristers will probe the testimony of each expert looking for weaknesses in their arguments and claims that they cannot support with scientific evidence. Then the court decides which witness is the more credible. A similar approach is taken in a commission of inquiry. And a similar approach can be taken in public policy through the use of ‘citizens’ juries.’ In my own professional experience of higher education regulation, panels of experts are invariably used to make assessments relating to the dark arts of academic quality or distributing research grants.
A court, a commission of inquiry and a citizens’ jury will use its own judgment in assessing the merits of expert opinion, and so should governments and the public. The age of deference to expert opinion is long gone. No group of experts is infallible, and no expert opinion is exempt from being challenged. We live in an age of accountability, and this applies just as much to experts as to any other group.
An important legal principle that needs to be considered carefully is the principle of necessity – was it necessary to impose mandates for both lockdowns and vaccination? The superficial approach is to cite the seriousness of the pandemic. Extreme situations may seem to call for extreme measures. But it is not self-evident that extreme measures are more effective than moderate measures – this has to be demonstrated in each case.
The authorities have to show that the marginal additional benefit from universal coercion through lockdown mandates made a significant difference compared to the voluntary reductions in mobility that happened before mandates were imposed.
What was the marginal benefit of confining everyone to their homes as opposed to confining only symptomatic and sick individuals? And what was the net marginal benefit (after subtracting harms)? These two strategies were not compared by the experts in their modeling, most probably because the parameters were not known.
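The ledger arithmetic being asked for can be stated in a few lines. The sketch below uses hypothetical placeholder figures only to show the structure of the comparison, not to assert any actual magnitudes.

```python
# Back-of-the-envelope sketch of the "both sides of the ledger" comparison.
# Every figure is a hypothetical placeholder.

# Outcomes (e.g. deaths averted per million) under two strategies.
benefit_selective_quarantine = 800    # quarantine only the sick and symptomatic
benefit_universal_lockdown   = 900    # confine the whole population

# Additional harms attributable to the stricter measure
# (e.g. deaths from poverty and delayed care, per million).
additional_harms_of_lockdown = 150

marginal_benefit = benefit_universal_lockdown - benefit_selective_quarantine
net_marginal_benefit = marginal_benefit - additional_harms_of_lockdown

print(f"Marginal benefit of universal lockdown:      {marginal_benefit:+d}")
print(f"Net marginal benefit after subtracting harms: {net_marginal_benefit:+d}")
```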
There can be no advantage from confining people who are completely healthy and not infected. The case for lockdowns can only rest on the uncertainty about who is infected at any point in time, and so everyone is locked down in order to catch the ones who are infected and pre-symptomatic. But what difference did this make to outcomes?
At the outset, it may not have been possible to include these parameters in the modeling as the values were unknown. But if critical parameters such as these were not able to be modeled, this only reinforces the point that the modeling could not be a reliable guide for public policy, because the virtual world did not accurately reflect the real world.
Technical issues need to be debated among the technical experts. If the experts can resolve the issues, well and good. But if the issues are not yet resolved among the technical experts, and policy decisions must be made on the basis of technical knowledge, then governments need to seek out the best experts available. They need to know when the technical experts do not agree on which policy options will be the most effective. Policy experts need to make their own inquiries.
The first duty of decision-makers is to ask probing questions, such as: where is the evidence (remember modeling is not evidence) that going beyond the traditional model of quarantining the sick only is necessary?
There is a common intellectual methodology, underlying all decision-making processes, for testing claims against the available evidence. It is also the basis of the continuously evolving principles that are the foundation of our legal system, which has to assimilate the findings of experts of every kind to settle disputes across all fields and sectors.
This has been extended into a new model of ‘concurrent evidence conclaves,’ referred to in more colorful informal parlance as ‘hot-tubbing.’ Instead of experts only giving evidence separately to the court and being cross-examined separately by the barristers for the two sides, they are invited to preliminary conferences where they debate the issues among themselves, sometimes with a neutral barrister chairing the discussion.
This deliberative process leads to a common report which is designed to elucidate where the experts agree, and isolate the areas where they disagree, which can be further explored in court. If diverse experts are needed, multiple conferences can be held, although there may also be benefit in having the experts from different disciplines enter into dialogue with each other.
Governments should seek out the best experts they can find, with a diversity of perspectives and disciplines, and put them into dialogue with each other. The aim in this case would be to arrive at recommendations for policy that all the experts can in fact agree on, as well as isolating those areas where they continue to disagree. Then the decision-maker should enter into dialogue with the experts.
Autocratic leaders will maintain that pandemics blow up suddenly and decisions must be made within 24 hours, so there is no time for a deliberative approach. But this is an excuse for not following a reliable process of decision-making. Interim measures can be put in place for a short period while the experts deliberate, but a searching process of examining and debating the evidence should then follow, to avoid the massive unintended consequences that can arise from persisting with the policies you first thought of when they cannot be justified by the evidence that emerges later.
Ultimately, governments should not be bound by the opinions of any particular group of experts who present their recommendations on the basis of what they see as objective science.
In his ruling in favor of a student nurse who had been denied placements after asking some probing questions about COVID-19 vaccine safety, Justice Parker of the New South Wales Supreme Court pointed out that:
Public health is a social science. It often requires that a balance be struck between people’s individual freedoms and the desirability of government action being taken in the collective interest to restrict the spread of disease. Inevitably that may be politically controversial.
Once we enter the sphere of public policy, this is everyone’s business, and everyone has the right to point out the issues in the policy formation process, including ethics and governance experts like myself who focus on the decision-making process.
There has been a general feeling that in a public health emergency, anything goes. But on the contrary, in a public health emergency, when so much is at stake, the utmost care needs to be taken to find the right path, and not to fall into error, leading to unintended consequences. This involves exploring different paths rather than mandating one path and preventing any possibility of reconsideration.
We should certainly take the advice of the best experts we can find. But when governments are considering imposing coercive measures, experts can only advise, they should not rule. Governments make these decisions (God help us!), and they should be made in the full knowledge of the range of expert opinions, their strengths and weaknesses.
So next time they should invite a broad range of experts to leap into a policy jacuzzi!