
Science Misconceived: How the Covid Epoch Wrecked Understanding


“Trust the science” and “Follow the science” have been mantras incessantly repeated over media airwaves, in print, and on the internet by select scientists, politicians, and journalists for nearly three years now, but have these claims confused political gain with scientific progress? In other words, do these pandemic buzzwords represent sound scientific reasoning, or are they the product of misconceptions about the accepted pathway of scientific inquiry?

The larger issue is that the use of these buzzwords may reflect deeper misconceptions about how research does and ought to operate. I discuss three such potential misconceptions and explain their relation to the current pandemic.

Misconception #1: Science tells you what to do

At the heart of “Follow the science” is the idea that scientific research instructs people on how to proceed given the resultant data of an experiment – if X is found, then you must do Y. Gabrielle Bauer, writing for Brownstone Institute, discusses this fallacious reasoning, focusing chiefly on the fact that people – not viruses or research findings – make decisions, and that those decisions are based upon values. But, one may reply, science provides data, and that data is integral to knowing what to do; therefore, the science does tell people how to act.

Although science provides data and, yes, it makes sense for personal and political decision-making to be “data driven,” it does not follow that the data alone instructs me or you or anyone to act one way or another. If you know that it is raining outside, does this fact alone tell you to bring an umbrella, wear a raincoat, put on galoshes, all of the above, or none of the above?

Facts in a vacuum are not instructions for how to act; rather, they inform us about what is preferable given our background beliefs and values. If you do not mind getting wet on your morning run, then your outfit will most likely differ from that of someone who is fearful of water damage to their clothing. In both cases the people know exactly the same thing – it is raining – but they do not come to the same conclusion. This is because data does not give orders; it informs and provides a basis for guidance.

Since data – that which is obtained during scientific research – informs decision-making, it is vital that parties tasked with making decisions have quality scientific data to use. One way this can occur is by including relevant parties in research as participants. When relevant parties are not included in research, the data obtained is of limited use to them. The Covid-19 Phase III efficacy trials are a case in point. The BNT162b2 and mRNA-1273 trials excluded pregnant and breastfeeding women; thus these individuals had no scientific evidence to use in deciding whether or not to vaccinate – no data on vaccine efficacy or safety.

Harriette Van Spall, in the European Heart Journal, has commented that this exclusion was unjustified because there was no evidence to suggest the vaccines would cause undue harm to pregnant women or their children. What is more, studies began to show that pregnant women were at higher risk of severe Covid-19 than non-pregnant individuals of the same age, meaning that if any group required scientific data on the efficacy of vaccination, it would be those at highest risk of negative outcomes.

Recent data from Hanna and colleagues published in JAMA Pediatrics showed that approximately 45% of participants provided breast milk samples containing vaccine mRNA – it is possible that pregnant and breastfeeding women would have benefited from knowing this before deciding whether to vaccinate.

To “Follow the science,” then, ought to mean that scientific research informs one with respect to some issue rather than telling one what to do – since it cannot do so. Science provides facts and figures, not instructions or commands. And since research provides facts, it is fundamental that those facts apply to the persons making decisions; it becomes extremely difficult to know whether to, say, vaccinate when the demographic to which you belong was excluded from participating – rendering the data inapplicable. It is tough to spout “Follow the science” when relevant demographics are not included in the science. What exactly are these individuals intended to follow?

Misconception #2: Science is value-free

Another potential misconception regarding scientific inquiry is that researchers leave their values at the door and conduct value-free research. In scholarly settings this position, often referred to as the value-free ideal, has been claimed to be untenable because values figure in various steps of the scientific method.

A canonical example comes from Thomas Kuhn’s book The Structure of Scientific Revolutions, in which he argues that far more than scientific evidence alone pushes and pulls researchers to endorse one theory over another. A more contemporary example is Heather Douglas’s book Science, Policy, and the Value-Free Ideal, in which she argues that social and ethical values play a role in the production and dissemination of science.

The earlier debate among scholars centered on whether values ought to exist in science; the more contemporary debate centers on what kinds of values ought to exist. Kuhn and those who share his view contend that truth-seeking, or epistemic, values should figure: values that aid in understanding the data and choosing appropriate conclusions to draw. Douglas and like-minded scholars maintain that additional values, such as ethical concerns, should be part and parcel of science as well. Regardless, it remains a currently unassailable position that values – however construed – do and should be part of science. This necessarily impacts what science is done and how.

One reason individuals may assume that values do not belong in science is that research ought to be objective and outside the purview of any one individual’s subjective beliefs – essentially, scientists should have a view from nowhere. However, this reasoning runs into trouble the moment it leaves the station. Let us look to research on the topic for inspiration.

Potentially unbeknownst to laypeople, researchers control what they study, how they study it, how the resultant data is collected and analyzed, and how the empirical results are reported. In fact, an article by Wicherts and colleagues published in Frontiers in Psychology describes 34 degrees of freedom – choices within the research process – that researchers can exercise virtually any way they like. These degrees of freedom have also been shown to be easily exploited, should researchers decide to, by Simmons and colleagues, who conducted two mock experiments showing that truly inane hypotheses can be supported by evidence if the experimentation is carried out in a particular manner.

It has also been shown that one’s astrological sign plays a role in one’s health – but of course this resulted from the exploitation of degrees of freedom, namely testing multiple, non-prespecified hypotheses. Obtaining certain results may be a function not of the phenomenon under investigation, but of the values that researchers import into their inquiry.
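To make this concrete, here is a minimal simulation of that kind of exploitation – a sketch using entirely made-up, effect-free data (the group sizes and study counts below are illustrative assumptions). Twelve non-prespecified hypotheses, one per astrological sign, are each tested at the conventional 5% significance level:

```python
# A minimal sketch of multiple-testing bias using purely simulated,
# effect-free data; all sample sizes here are illustrative assumptions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_signs, n_per_sign, n_studies = 12, 100, 1_000

studies_with_a_finding = 0
for _ in range(n_studies):
    # The health outcome is pure noise: no sign truly affects health.
    data = rng.normal(size=(n_signs, n_per_sign))
    # Test each sign's group against everyone else (12 hypotheses,
    # none pre-specified), stopping at the first "significant" result.
    for i in range(n_signs):
        rest = np.delete(data, i, axis=0).ravel()
        _, p = stats.ttest_ind(data[i], rest)
        if p < 0.05:
            studies_with_a_finding += 1
            break

print(f"{studies_with_a_finding / n_studies:.0%} of these null studies "
      "'found' a significant sign")   # roughly 1 - 0.95**12 ≈ 46%
```

Run enough analyses this way and nearly half will turn up a “significant” sign by chance alone, which is why pre-specifying a single hypothesis before collecting data matters.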

This may all be well and good, but how exactly do values impact researcher degrees of freedom – those aspects of experimentation under researcher control? For starters, imagine that you are a scientist. You first have to think about what you would like to research. You may choose a topic that interests you and would expand current understanding. Or you may be pulled toward a topic that concerns the well-being of others, because you value helping people in need.

Whether you choose the former or the latter topic, you have done so on the basis of values: epistemic (knowledge creation) or ethical (doing what is right). The same kind of reasoning figures in whom the experiment will be performed on, how the experiment will proceed, what data is collected, how the data is analyzed, and what data will be reported and how.

A case in point is the exclusion of children from some Phase III vaccine trials: individuals under the age of 18 were not enrolled. One reason may be that the researchers believed children would be at undue risk of harm if included. The ethical value of preventing harm was prioritized to the exclusion of the epistemic value of learning how efficacious the vaccines would be in children. The same reasoning may apply to the exclusion of pregnant and breastfeeding women, as well as immunocompromised individuals.

Values can also be seen in the choice of endpoints in the vaccine trials. According to Peter Doshi in the British Medical Journal, the primary endpoint – what the researchers were primarily concerned with understanding – for the Phase III trials was prevention of symptomatic infection. Importantly, transmission of the virus – in any direction between vaccinated and unvaccinated individuals – was not studied in these trials.

Recently, Janine Small, Pfizer’s President of Developed Markets, commented that the Pfizer vaccine was not tested for stopping transmission prior to being released on the market. Since the vaccines entered the market, evidence published in Nature Medicine indicates that they do not appear to stop transmission, because the viral load that can accumulate in vaccinated and unvaccinated individuals is similar. Even research published in the New England Journal of Medicine showing that vaccination does decrease transmission reports that this decrease wanes, such that by 12 weeks post-vaccination transmission is similar to that of the unvaccinated.

Once more we can see that the choice of whether to study the vaccines’ effect on transmission, or death, or hospitalization, or acute infection is up to those running the trial, and that these decisions tend to be based on values. For instance, Small remarked that Pfizer had to “move at the speed of science to understand what is taking place in the market.” Values stemming from capitalizing on a virgin market may thus be what oriented the research to focus on the endpoints it did.

The science performed during Covid-19 has often had a practical end goal – typically, providing advice or a product to the public to aid in combating the virus. The downside is that research has moved quite fast, potentially because the speed of information and helpful products has been deeply valued. For instance, the BNT162b2 and mRNA-1273 Phase III trials had an initial follow-up period of approximately two months, though both trials stated that an ongoing follow-up of two years was scheduled. Two years, not two months, is more aligned with FDA guidance on this issue, which is that Phase III trials ought to last from one to four years in order to ascertain efficacy and adverse reactions. This rapidity may have been prioritized because people really could have benefited from quick access. It could also have been prioritized for reasons stemming from financial gain or other less ethical foundations.

Regardless of the reasoning behind the pace of research, the variables studied, and the demographics excluded, what should be clear is that science contains – for better or for worse – personal values. This means that both the scientists and those who “Follow the science” are making value-based decisions, however “data driven” such decisions are made out to be. That is to say, the research being done is not purely objective, but contains researcher-held, subjective values.

Misconception #3: Science is unbiased

Throughout the pandemic I have heard individuals loudly insist that laypeople must “Trust the science,” which I continually find odd considering that the landscape of scientific literature is remarkably divided. Which science, then, am I or anyone else supposed to wholeheartedly trust? In a pointed article in Scientific American, Naomi Oreskes explains that science is a “process of learning and discovery.” This process moves in fits and starts; it is not linear in its progression but wanders hither and thither, sometimes relying on unexpected eureka moments.

Oreskes’ main point is that those who claim “science is right” are wrong because they fundamentally misunderstand how science works. One study does not “prove” anything, and politicized science is not true by virtue of being sensationalized by those in power. It follows that if skepticism is the correct way to meet scientific evidence, then people should hardly be scolded for not “Trusting the science,” as skepticism is the correct attitude to take.

This brings me to Misconception #3, because individuals who tout “Trust the science” appear to believe that science and its presentation are unbiased. In reality, science often involves swirls of disagreeing experts, some expounding that theory X is superior to theory Y while others maintain the opposite. The result is that additional empirical work is needed to iron out the details of each theory and show – experimentally and logically – why one theory really is superior. Bias, however, can seep into this process at two levels: researchers may knowingly or unknowingly construct experiments that favour one hypothesis or degrade another; and bias can enter in the presentation of the science, where one side of a debate is presented as if no debate exists.

With respect to the first level of bias, that of the research itself, the most pointed examples stem from funding sources: it has been found in multiple domains that industry-sponsored trials tend to produce more favourable results. For example, an analysis published in Intensive Care Medicine by Lundh and colleagues concluded, “Drug and device studies sponsored by manufacturing companies have more favorable efficacy results and conclusions than studies sponsored by other sources.”

Similarly, a study published in JAMA Internal Medicine showed that industry-sponsored studies on sugar (sucrose) downplayed its role in coronary heart disease and singled out fat and cholesterol as responsible. The authors go so far as to say, “Policymaking committees should consider giving less weight to food industry–funded studies,” and to recommend focusing instead on research that takes seriously the effect of added sugars on heart disease.

It may be an obvious point that those with a financial interest in the outcome of a study may act to ensure a positive result, but however obvious, there is research to back it up. More to the point: if it is so obvious, then with billions of dollars at stake, why assume that the pharmaceutical companies vying for vaccine and antiviral market space would not do things to bias results?

A potential source of bias in Pfizer’s Phase III vaccine trial has been described by Brook Jackson, who told the British Medical Journal about errors committed by Ventavia Research Group, a contractor tasked with testing the vaccine. According to Jackson, the errors included “Lack of timely follow-up of patients who experienced adverse events,” “Vaccines not being stored at proper temperatures,” and “Mislabelled laboratory specimens,” among others. Outright errors in conducting research can bias the results because the data obtained may reflect the errors made rather than the impact of the studied variables.

Another example of potential bias is the use of certain statistical measures over others. According to Olliaro and colleagues, in an article published in The Lancet Microbe, the vaccine trials reported relative risk reduction, which gave the vaccines high marks for efficacy. Had they used absolute risk reduction instead, the measured effect would have been far lower.

For instance, the authors note the “relative risk reductions of 95% for the Pfizer–BioNTech, 94% for the Moderna–NIH, 91% for the Gamaleya, 67% for the J&J, and 67% for the AstraZeneca–Oxford vaccines.” When absolute risk reduction is used instead, the efficacy drops substantially: “1.3% for the AstraZeneca–Oxford, 1.2% for the Moderna–NIH, 1.2% for the J&J, 0.93% for the Gamaleya, and 0.84% for the Pfizer–BioNTech vaccines.”
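The two measures are simple arithmetic on the same trial data. Here is a minimal sketch in Python; the attack rates are round numbers chosen only so that the output matches the ~95% and ~0.84% figures quoted above, not the trial’s actual case counts:

```python
# Illustrative only: these attack rates are assumed round numbers chosen
# to be consistent with the quoted ~95% RRR and ~0.84% ARR, not raw trial data.
risk_placebo    = 0.0088   # ~0.88% of the placebo arm infected during follow-up
risk_vaccinated = 0.00044  # ~0.044% of the vaccine arm infected

arr = risk_placebo - risk_vaccinated  # absolute risk reduction
rrr = arr / risk_placebo              # relative risk reduction

print(f"Relative risk reduction: {rrr:.0%}")   # -> 95%
print(f"Absolute risk reduction: {arr:.2%}")   # -> 0.84%
```

Both numbers are correct; they answer different questions. Relative risk reduction says how much vaccination shrinks an individual’s baseline risk, while absolute risk reduction says how much risk is removed outright – and when the baseline risk over a roughly two-month follow-up is below 1%, even a 95% relative reduction is a small absolute one.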

In addition to bias introduced during empirical research, there is bias that can occur in the representation of science by media, scientists, and politicians. Despite the fact that the scientific literature is not settled, those on the outside looking in – potentially with the aid of researchers – cherry-pick empirical information to present to the public. This permits those selecting the information to paint a picture that fits a particular narrative rather than the actual scientific landscape. Importantly, this variety of bias makes it appear as though the research is definitive, which further entrenches the idea of “Trust the science.”

A case in point is the different ways governments are handling vaccine booster programs. The CDC in the United States recommends that people aged five and older get a booster if their last vaccination was at least two months prior. Similarly, in Canada it is recommended, in certain circumstances, that individuals get a booster three months after their last vaccination.

These recommendations stand in stark contrast to Denmark’s, which reads: “The risk of becoming severely ill from covid-19 increases with age. Therefore, people who have reached the age of 50 and particularly vulnerable people will be offered vaccination.” These countries have access to the same data, yet have come to contrasting recommendations for their citizens – all of them supposedly based on the science.

Moreover, the slogan “Safe and effective,” applied to approved Covid-19 vaccines, may also be an example of bias in the presentation of research: a group of Canadian scientists recently penned a letter to the Chief Public Health Officer of Canada and the Minister of Health asking for more transparency regarding the risks and uncertainties of vaccination.

In essence, the letter makes plain that these scientists believe the Canadian Government has not properly informed Canadian citizens. Despite this imputation, Health Canada states, “All COVID-19 vaccines authorized in Canada are proven safe, effective and of high quality” (bold in original), and south of the border the CDC notes that “COVID-19 vaccines are safe and effective” (bold in original). At least some scientists, then, believe that additional scientific discourse is necessary to ensure citizens are properly informed and not biased, but the messages currently reaching citizens do not reflect this.

Another example concerns transmission. The CBC has reported that vaccines do in fact prevent transmission, but as mentioned earlier this is not the case. More intriguingly, around the time the vaccines entered the market, researchers theorized, based simply on their mechanism of action, that the vaccines would be unlikely to prevent transmission.

Science – its practice and dissemination – has the potential for bias to seep in at any time, and it would be a mistake, as Oreskes points out, to assume that science is correct because of how it was done, who was involved, or who presented the findings. Despite such claims, the Covid-19 pandemic, along with the slogan “Trust the science,” has shifted the desired perspective from healthy skepticism to blind acceptance. Such uncritical acceptance of any data, let alone of research occurring at “the speed of science,” should give pause. Science moves forward when objections are raised and hypotheses are fine-tuned, not when agreement ensues simply because an authority has decreed it so.

Recognizing Misconceptions

These misconceptions represent ways individuals may have incorrectly viewed scientific research and its use during the pandemic; they are reflected in the mantras employed and in the presentation and speed of discoveries. Recognizing them should provide a more solid base from which to judge the truthfulness of scientific claims, the necessity of slogans, and the rigour of scientific research. Being informed ought to be the preferred way of moving through and ending this pandemic, but being informed requires recognizing these misconceptions and knowing how to think differently.




Author


    Thomas Milovac is a PhD candidate in Applied Philosophy; his dissertation focuses on understanding the human and environmental impact of over-prescribed medications as assessed through the lens of environmental bioethics.

