
Regulatory Science as Propaganda


For many, the nagging inkling that the state of policy-relevant and regulatory science was less robust and trustworthy than official sources claimed came roaring into focus with COVID-19. For those who had a nose for contradictions and inconsistency, the perpetual urging to believe the scientific claims of a handful of special scientists on the telly fell rather flat.

The global population was required to acquiesce to a brand-new technology: a gene therapy unaccompanied by genotoxicity or carcinogenicity studies, or by completed trials in pregnant women. A technology whose heart risk was known from the get-go. Unbelievably, the endpoint in the clinical trials was never prevention of transmission, nor prevention of hospitalisation and death.

In a pattern akin to the respect demanded by high priests, the sole purveyors of God’s message, special scientists were the Final Word when it came to The Science and health-based risk throughout COVID-19. Like high priests, their scientific claims could not be questioned. If we didn’t acquiesce to the technology, we were not only anti-science and anti-vax; we were anti-health.

How has The Science come to be the Final Word in modern societies? At its core, powerful institutions have exploited public confidence that science is produced in a neutral and impartial manner. Governments have taken this trust that science is objective, and capitalised on it. Because of the opportunity this presents, ‘objectivity is a priceless adjunct to governmental power.’

Sociologist and lawyer Sheila Jasanoff has theorised that objectivity has the tool-like qualities of a talisman, one that wards off the appearance of political bias. For Jasanoff, impartiality through the use of science and evidence acts to ‘erase the stamps of agency and subjectivity.’

Yet policy-relevant science is a different beast from basic or research science. It does double duty: it must be acceptable both scientifically and politically. The effect is that any claimed objectivity is subjective. It depends on what science is used, who the experts are, and how that science is valued, all of which turn on political cultures and priorities. Such science is therefore ‘contingent, vulnerable to criticism and tends to unravel under adversarial challenge.’

But there’s more. Powerful shifts in the past 50 years have weakened the threads between the public and regulators, while more closely binding regulators to the industries they are charged with regulating. Like sliding dials on an amplifier, the power of corporations has increased as they have consolidated, while the capacity of public-sector and regulatory scientists to broadly research risk has declined.

Globally, funding for basic science and interdisciplinary research has shrunk dramatically, while the problems that these types of research could shine a light on have asymmetrically expanded.

Public-sector funding scopes direct science and research funding away from work that might untangle the relationships between biology, social life, and environmental emissions and exposures. Lawyers who seek to undertake interdisciplinary research also find themselves stymied. The consequence is that autonomous interdisciplinary experts who can inform government officials and challenge their decisions are scarce.

This long-read is drawn from a recent paper by New Zealand charity PSGR.

Regulation of technologies favours the interests of the regulated industries at every turn

Knowledge is the currency of private industry, and regulators come to depend on industry expertise. Regulatory capture can happen from the get-go. If regulators are neither required nor funded to pursue enquiry outside regulator-industry relations, they are unlikely to.

Government agencies can engage in practices of public engagement that resemble consultation. In practice, these substituted activities fail to address the core issues that the public wants discussed; they merely perform transparency, accountability, and debate. Seasoned public interest advocates will back up this claim.

The no-go zones are extensive. Industry findings are by convention kept secret through commercial-in-confidence arrangements. Regulators often don’t scrutinise raw data. Literature reviews are either not undertaken, or regulatory protocols narrow what data is considered and fail to address known risk pathways for the burden of disease, even the risk to human rights. Old modelling scenarios are prioritised while new modelling techniques are ignored. Outdated assumptions prevail, while real-world data, such as epidemiological science and the relevance of new evidence, is ignored or dismissed. The issues can be systemic rather than isolated.

These practices are the norm with few exceptions. 

But the problem is that, due to government policy decisions in public science and research, there’s no weight of scientific expertise to contradict regulatory positions or identify new risk pathways.

Scientists at the Stockholm Environment Institute have proposed that the release of chemicals and biotechnologies into the environment is out of control. Annual production and releases are increasing at a pace that outstrips global capacity for assessment and monitoring. It is precisely because of this undone monitoring and science that the planetary boundary for these novel entities has been exceeded.

It’s a big problem. Funding policies that direct scientists to draw attention to broad, risk-based issues, including long-term, complex, cross-system effects on biological systems that are difficult to predict and understand, have fallen off a cliff. At the same time, releases of technologies have ramped up.

The black hole where public good science should be, but isn’t

Policy levers delivered a big win to corporate industry. Public funding scopes have directed scientific research away from broad public-good enquiry, while government rules and guidelines lock in private industry information to support the release of technologies and their emissions onto the market.

In modern academic and public research environments, controversial information that contradicts government policy or industry partners (or potential partners) is politically and professionally unwelcome. Funding for expensive research is extraordinarily difficult to secure, and most institutions have private industry partners to help drive research income.

If scientists aren’t funded to consider difficult issues, that work won’t happen. They won’t review relevant science findings, provide context for issues that are ambiguous and complex, or help society navigate them. The work certainly won’t happen if it contradicts the interests of big business.

As with captured regulators, these research environments then pivot to reflect the aims and priorities of industry partners, and the funding scopes set by central government agencies. 

The effect is that policy-makers accept and defend private industry claims, instead of challenging them.

There’s no feedback loop where basic science and interdisciplinary teams are encouraged to critically review and triangulate the claims of corporations. Institutional knowledge and peer networks with expertise to pick apart complex issues have been eroded. Without the feedback into official and regulatory environments, raw data is not scrutinised, models reign supreme, and real-world data is neglected.

In this knowledge (and intelligence) abyss, private-industry scientists are the go-to for justifications and assurances that technologies and their effects are safe. Exclusively company-selected and supplied data dominates risk assessment. This unpublished data is directly used to establish so-called safe exposure levels.

That is, how much of a technology you will be subjected to, from conception.

This is the status quo at the very time when modern nation states broadly lack the interdisciplinary scientific expertise to contest corporate claims. 

The scientific ignorance reverberates outwards. Governments can use technical statutes to legally sideline and displace broader principles that would require their own officials to parse nebulous issues. Even if the law includes broader principles, when scientists lack autonomy (funding), officials will default to lower-level technical rules. There’s no quorum of expertise to pick through the inadequacy of the technical approaches.

When citizens protest and provide scientific studies, they’re dismissed because, well, they’re not scientists.

The effect is a fundamental democratic rift. It is the decoupling of nation states from independent information streams and meaningful critical enquiry. 

What is the term for information that is strategically managed and selectively presented to encourage a particular synthesis or perception? Propaganda. 

This is a massive issue, because in the 21st century scientific and technical information is fundamental to policy. As a political priority, the rails for science that tracks to safety claims are greased – in policy and in law. Feedback loops into legacy media then reflect these political positions.

Yet (apparently inconveniently), democracy is contingent on robust, unbiased information. Information, as intelligence, should enable elected members and officials to protect the public good: to protect health, rights, the democratic process, and the rule of law, and to prevent abuse of power. Such information should steward society and our resources into the future. But it’s a black hole.

The contradictions are growing. Stewardship can’t occur when established public law principles concerning transparency and accountability are corrupted through commercial-in-confidence arrangements and locked-in private industry data.

Like David and Goliath, information and expertise are now so lopsided that it doesn’t occur to government officials that their work is biased because of who they default to for information. Regulators are neither funded nor required to undertake critical enquiry. Officials wouldn’t contemplate contracting research enquiry on a complex issue. It would raise too many questions, and cost too much.

The game is weighted to private industry. Private industry science and knowledge has exploded, and public-good basic research has imploded.

Pick your technology, your medical solution, your emission, your digital solution 

Most are aware that chemical regulation is subpar, with chemicals used in the industrial, agrichemical, pharmaceutical, household, and personal care sectors underregulated. However, the democratic deficits, the captured regulatory processes, occur across a wide range of technologies, including nanotechnology, biotechnology, geoengineering, and radiofrequency radiation.

Do funding scopes enable researchers to review new digital IDs and central bank digital currencies (CBDCs) to the extent they deserve? How does the fiduciary relationship between the governed (you and me) and the governors change with increased surveillance capacity through public agency networks? Will CBDCs transfer power to reserve banks and the International Monetary Fund, away from elected representatives? Political cultures and processes make it extraordinarily difficult to contest the taken-for-granted assumption that this is all for the better.

We don’t look at crescive, slow-moving harm. When does neuro-developmental delay, gut dysregulation, or cancer start? When are freedom and autonomy lost? These issues do not start in the doctor’s office, or when a government is officially labelled a socialist or communist state.

The knowledge deficits reverberate throughout our democratic machinery, shaping how the media, the judiciary, Parliament, and the administrative sectors consider risk, wrangle with scientific concepts, and, relatedly, whom they turn to for advice.

Industry directly profits from societal ignorance. The exact point at which their technology may start to harm a human body, soil health, a waterway, or human rights, or at which a chilling effect on free speech takes hold, will always be nebulous and ambiguous. Of course, regulation means lost profit. Complex interdisciplinary science that draws attention to overarching principles and values is hard to do, and impossible when there’s no funding scope. How a receiving environment reacts depends on prior and cumulative stressors, and on the age, developmental stage, and health of that environment.

New Zealand’s science and research community, as an information (and intelligence) system, is insufficiently resourced to counter, contradict, or challenge political and financial power. If information is noise, intelligence is the signal: that which is important to the matter at hand. Undigested information is a distraction. When we lack experts who will sift through that information to identify risk, we are thwarted.

But if the information can never be contradicted, and we’re required to submit, it might be propaganda.

The axiom reverberates: innovation is central to the largest challenges the world faces

Science will solve every societal problem through innovation. Therefore, globally, science and research policies have mobilised research institutions to achieve this. The expansion of patent offices and joint ventures resides alongside persistent messaging that innovation, producing a new or improved product or process, will save us. The number of patents produced is a recognised proxy for GDP.

Innovation is so desirable in New Zealand that the entire science enterprise is locked inside the Ministry of Business, Innovation and Employment (MBIE). Science policy is directed to favour excellence and innovation.

Every scientist knows that funding committees have no idea how to judge ‘excellence’ when a proposal concerns complex interdisciplinary research. Which bit is excellent? Who on the funding panel can judge this? Of course, innovation involves the development of a product or process. If a proposal doesn’t involve applied research with potential for an innovation as an outcome, it’s also more likely to be pushed down the funding ladder.

A chilling effect occurs when science funding is precarious. No mid-career scientist will make politically controversial statements on technologies with complex unknowns. They’re not going to risk their professional reputations and potential funding streams. 

This is how public-good, interdisciplinary basic science and research has been toppled. It’s why scientists struggle to discuss broad, nebulous biological concepts and why new PhD students focus on narrow biological or technical areas of expertise. In modern environments, the polymaths and multidisciplinary experts are not expert enough. We simply don’t make policy to fund science that might produce experts to counter the claims of industry.

In this void, in scientific controversies, industry experts outgun public sector experts 

Saltelli et al (2022) describe these larger structural shifts as reflecting a broad colonisation of information: a form of strategic institutional and cultural capture of people, and of their governments’ role as their protector.

‘evidence can become a currency, which lobbyists use to purchase political leverage. This is due to the asymmetry of knowledge and research resources between the corporate powers and regulators or politicians: an individual congressman or woman, a staffer or civil servant may lack the information, often the crude data, which would be needed to design policy options. In these situations, the friendly lobbyist, provided with both, gains access and leverage.’

Without challenge, culture can act more like ideology. As Piers Robinson (2018) has described:

‘[T]he active promotion of particular world views can be seen as, in the first instance, the establisher of particular ideological constructs.’ 

Regulators often rely on very old science and unpublished studies to claim a particular level of exposure is safe. For example, the World Health Organization (WHO) safe drinking-water levels for pesticides are frequently derived from unpublished industry studies that are several decades old. It’s uncomfortable to think that the WHO’s safe level for glyphosate in drinking water is derived from an unpublished 1981 Monsanto study. Somewhat contradictorily, old authoritative data isn’t subject to the same high standards that regulators apply when deciding which new studies fit their guidelines for risk assessment.

No matter the burgeoning literature, nor the court cases that uncover boatloads of studies suggesting risk at much lower levels than a 1981 Monsanto study: that old study remains in place, ruling the roost.

Hormone-level risks are only vaguely considered by regulators. One or two studies might be provided by industry, but the broader scientific literature is largely ignored. Toxicologists might be employed by regulatory authorities, but not endocrinologists. Conventional toxicology dose-response rules don’t apply when it comes to hormone-level risk. Hormone-level effects and epidemiological studies can signal harm long before it is seen in toxicological studies.

Narrow regulatory reasoning doesn’t just apply to chemicals and biotechnologies. New Zealand’s standards for radiofrequency fields are over two decades old. No reviews have been undertaken to identify new pathways of risk, such as what the pulsing effect of radiofrequencies may do at the cellular level. 

With digital technologies, a great deal of fuss is made about protecting the public’s privacy from private interests. Alongside privacy rights, human rights must also be considered. Information sharing across government agencies, the embedding of fair or biased algorithms to aid official decision-making, and the extensive use of biometric data together vastly expand the surveillance powers of the administrative state.

Avoiding these technologies is not necessarily a choice. For young New Zealanders entering tertiary study, the digital identity scheme, RealMe, is the easiest way to slide into the tertiary system at a daunting time. 

Science advisers (known as honest brokers) could step up, but they don’t. They lack guidelines requiring them to be sceptical of private industry claims. Honest brokers could play a bigger role in drawing attention to the gaping differences between the science and technical information lodged by corporations and the evidence in the published literature on risk and harm. In being apolitical, they become directly political.

The potential for abuse of power is real. There’s no agency or department in New Zealand with sufficient powers and resourcing to investigate the capture and use of citizen information by allied public institutions. The culture of these agencies will be shaped by the laws and rules that keep them in check. But there are no external overseers, and laws written by Ministers intent on promoting an agency don’t encourage such activity. The private industry providers might have global ownership structures and collegial relationships that result in decisions, made over time, that advance private interests at the expense of New Zealand citizens. But we don’t have research institutions that would undertake this high-level work.

When private industry information is not subject to robust debate and challenge, it’s propaganda

The information is produced for the purpose of permitting an activity to occur. It has a tangible effect: to assure society that the activity is perfectly acceptable, and that society will not be adversely harmed. However, that information cannot be contested, and it is asymmetrically weighted to favour powerful institutions. Corporations and government work closely to ensure that the information is acceptable, and the rules and guidelines are often light years behind the scientific literature. Conversely, the technologies used by the industry scientists are leading edge. Over and over again, it can be demonstrated that the rules and guidelines are so inadequate and archaic that society is likely to be misled and deceived by the assurances of safety.

Should we call this scientific and technical information, which persuades or manipulates us to acquiesce, and which is selectively presented to, as Wikipedia puts it, ‘encourage a particular synthesis or perception,’ propaganda?

Yes. 

When a weight of information supporting a particular political trajectory is organised and persuasive, when it strategically manipulates us to comply with a particular agenda or position, this can be regarded as propagandistic. A paper by Bakir et al (2018) theorised that persuasive communications strategies involving deception, incentivisation, and coercion can manipulate our opinions and affect our behaviour. 

The authors theorised that when organised, non-consensual persuasive strategies are in play, questions can be raised as to how well our democracies function. Public naivety produces consequences:

Remaining unaware of how manipulation and propaganda function, through strategies of deception, incentivization and coercion, inhibits our ability to critically examine persuasive strategies and to develop better, less manipulative, modes of persuasion more suitable to democratic politics. 

The question of how uncontested private industry information might be considered propaganda, and a major stumbling block for democracy, was recently discussed in a paper published by New Zealand charity Physicians and Scientists for Global Responsibility (PSGR).

So often, individuals and groups that query the safety of a technology or its outcome are derided as conspiracy theorists. However, as we discuss in the paper, the conspiracy is not with us. 

‘The conspiracy is in the rules, the guidelines and laws which are produced behind closed doors. The conspiracy is when publics, expert and lay contribute to public consultations, but their discussions and evidence remain unaddressed and met with silence. The conspiracy is in public-private stakeholder meetings with dominant institutional providers; in global meetings where public access is forbidden or impossible; and in the entrenchment and upholding of commercial in-confidence agreements that privilege the corporate sector over societal interests. The conspiracy is in elite formations of publicly-paid officials and scientists that blind their eyes to years of evidence which demonstrate that industry-produced data finds in favour of industry. The conspiracy is when judges defer to Crown lawyers whose primary interest is in deploying the technology in question; and when select committees also defer to government departments whose key aim has been to deploy the technology in question.

When science and technical information is used in this way, it is not science and it is not impartial. It is a tool. An instrument. This market-science forms the backdrop of a form of organised, persuasive communication, referred to as propaganda.’

Corporate lobbies have colonised the world of science. The chain of industry influence extends from our personal devices, where our information is drawn from the messaging of our governments and legacy media channels, to policy development, the construction of laws, and the culture of institutional research and our regulatory agencies.

Until we recognise the game that is being played, it is difficult to step back and see that a grand distortion of democracy is occurring in front of our noses. In the words of economist Kaushik Basu, we are the elé belé, the player who

‘thinks he is participating but who is in truth merely being allowed to go through the motions of participation. Apart from him, everybody playing knows that he is not to be taken seriously. A goal scored by him is not a real goal.’

Scientific norms have been supplanted by scientific ideology, but we must believe it. The arbitrary rules that authorities accept in order to legitimise what science is acceptable have all the appearance of diktats by high priests.

The knowledge systems that could democratically inform, protect health and human rights, and prevent abuse of power are simply not entertained by policy-makers, nor included in the regulatory matrix.

Scientists and experts are on the frontlines, reasoning and demanding updates to regulatory rules to account for new knowledge and wider understandings of risk. But the barriers to change are extraordinary and gains often make little difference. Industry hegemony, arising from the networks of industry relations that surge between governments, regulatory agencies, and company computers, preserves the status quo. Scientists and researchers might get listened to, but their information won’t get actioned.

The intelligence that we depend upon to steward democracy has been denied, dismissed, and appropriated. It is church and state: the new church is the industry-funded laboratory, and the clergy are the industry experts. When the gospel of safety is preached, and we cannot challenge it, it is propaganda.

Further reading:

Bruning, J.R. (2023). When does science become propaganda? What does this suggest for democracy? Physicians & Scientists for Global Responsibility New Zealand (PSGR). ISBN 978-0-473-68632-1



Published under a Creative Commons Attribution 4.0 International License
For reprints, please set the canonical link back to the original Brownstone Institute Article and Author.

Author

  • J.R. Bruning is a consultant sociologist (B.Bus.Agribusiness; MA Sociology) based in New Zealand. Her work explores governance cultures, policy, and the production of scientific and technical knowledge. Her Master’s thesis explored the ways science policy creates barriers to funding, stymying scientists’ efforts to explore upstream drivers of harm. Bruning is a trustee of Physicians & Scientists for Global Responsibility (PSGR.org.nz). Papers and writing can be found at TalkingRisk.NZ, at JRBruning.Substack.com, and at Talking Risk on Rumble.

