Like the strands of a braided rope, scientific and technical knowledge, policy, and law intertwine to produce rules and permissions, inserting technology into daily life. Just as braided ropes evenly distribute tension, scientific and technical knowledge acts to underpin policy. These policies lock in laws, guidelines and standards – authoritative permissions – that theoretically shape the release and stewardship of chemical compounds, biotechnologies (also known as novel entities) and digital technologies.
These processes lie on a continuum between the democratic – where scientific knowledge arises through a social process whose values underpin how decisions are made – and the technocratic, a perspective favoured by commercial and industrial interests, where the ‘solution is to get more and better science into the decisions.’
The technocratic is winning.
Call it a portfolio, a recipe, an amalgam: institutional ways of thinking and resourcing persistently direct doubt and uncertainty to favour commercial and industrial interests. The scientific and technical knowledge that winds through policy and regulatory environments is inevitably produced by the stakeholder – the industry seeking market access for its commercial product.
In controversies over the safety of these compounds and technologies, new knowledges in the published scientific literature consistently remain outside government scopes and guidelines. Paradoxically, and undemocratically, the industry science and data – the substantive evidence that supports industry claims – are, by convention, locked away from public view.
At the same time, in the most perfect double movement, independent, public interest science and research that might look at the hazard or risk of these substances and technologies and triangulate industry claims is radically underfunded while regulators lack inquisitorial power.
A massive upswing in the release of technologies has occurred in the 21st century, so the pace of this fusing of science, policy and law has accelerated far beyond 20th century norms.
But digital technologies represent a massive frontier of risk – not just to health or the environment, but to democracy – and governments don’t want to talk about it.
The decline of long-form documentary and investigative journalism means that governments don’t have to. The legacy media persistently avoids discussion of contested and controversial issues at the interstices of policy, science, law and ethics. Experts in public law, ethicists and basic scientists – the very people who might draw attention to industry capture – are strangely silent. It’s a perfect storm.
Risk Beyond Privacy
New technological frontiers weld biometric and digital identity data into the mainframes of governments and large private institutions. In this new frontier, partnerships with the private sector are ordinary: industry consultants provide expertise, and apps and plugins enhance framework operability while creating new opportunities to manage information.
The closed-door public-private arrangements carry with them the potential for systemic and sustained abuse of power – political and financial.
Policy rhetoric, and the consequent legislation that provides oversight of digital identity frameworks and privacy in digital environments, normatively focuses on the risk of private information being released into the public sphere. In this frame, there is little discussion or problematisation of the interagency personal information-sharing processes that scale up government power.
What happens when citizens dissent or refuse to comply with policy? What happens when law consistently privileges private corporations, and citizens protest, in environments where access permissions to services and resources can be easily toggled on or off?
It’s not just surveillance to mine personal data for commercial profit, or data colonialism. These technologies, and the potential for repurposing private information through surveillance activities, magnify the potential for loss of bodily sovereignty over behaviour – human freedom – if such behaviour deviates from government policy and expectation.
The new technological frontiers permit, with the toggling of access permissions, the potential for nudging stripped bare. What we might call authoritarianism.
Under-Regulation in New Zealand’s Digital Ecosystem?
In New Zealand, new legislation, the Digital Identity Services Trust Framework Bill, is underway.
The public were permitted to make submissions on this Bill, and 4,500 did. Of those submissions, 4,049 were dismissed by the Economic Development, Science and Innovation Committee because they arrived in the final two days. Many issues were claimed to be out of scope, with the Select Committee stating:
Many submissions also compared this bill to social credit systems, centralised state control of identity (for example, the removal of physical driver licences), and moving to a cashless society using digital currencies. None of these ideas are related to the content of this bill.
The Select Committee are correct.
As my colleagues and I noted in a submission, the Bill is very narrowly configured, and the Framework Principles are shallowly drafted. It is a technical instrument. It is meant to govern decision-making in the public interest. The public was excluded from early consultation processes, while industry and the large information-swapping Ministries were included, and this set the stage for mindsets that did not speak to broader principles and risks.
The Honourable Dr David Clark is the Minister responsible for this statutory trust framework for digital identity services. The Bill provides for the establishment of a ‘trust framework’ authority and board, who will be responsible for guidance and oversight of ‘the framework.’ The Bill does not put in place at-arm’s-length funding to provide the new authority (the regulator) with autonomous inquisitorial power. Somehow the authority and board will come up with the answers. For the service providers, it’s an opt-in framework with a fee-paying model.
Unfortunately, regulatory environments are a product of institutional culture and resourcing. When a regulator’s service is paid for by the entities it oversees, absent other influences it comes to think like the institutions it is paid to regulate. Fee-paying models ultimately bend the institution towards a service mentality.
The Bill is yet to become an Act. But the performative ‘trust’ rhetoric has blithely skimmed over potential institutional conflicts of interest (COIs). Government contractors, stakeholders and private interests will not only be ‘accredited providers’ of digital services. These providers will be in positions where their activities can potentially overlap with national surveillance and security measures, where the global institutions that own these ‘providers’ are faced with tempting access to data and information.
The Privacy Commissioner is charged with protecting the privacy of individuals. In addition to education and encouraging the reporting of incidents, staff have a nominal budget of NZ$2 million for active compliance and enforcement. The Privacy Commissioner is not looking under the hood to check whether agencies are conducting themselves responsibly in their handling of private data.
The sharing of citizens’ biometric and digital data is operational across New Zealand government agencies and permitted by the Privacy Act 2020. Webbed networks of digital information sharing already operate in New Zealand through approved information sharing agreements (AISAs) across government platforms. AISAs have increased since the start of the pandemic. It’s the backend sharing of data that ordinary Kiwis don’t see.
(The Privacy Commissioner recently held a consultation on privacy regulation of biometrics, and while this was widely covered by consultancy firms, the legacy media didn’t report that it was happening.)
A mooted Consumer Data Right Bill, overseen by the Honourable Dr David Clark, will join this legislative framework. As Clark has explained:
A consumer data right (CDR) is a mechanism that requires data holders, such as banks and electricity retailers, to safely and securely share data with third parties (like fintech companies) following consent from the customer.
Not surprisingly, the fintech industry can’t wait. It’s difficult to see where the Privacy Act stops and this Bill might start.
Then we have RealMe, the front end of New Zealand’s digital identity system – the public login service. A facial photo is required, using a facial recognition system called Identity Check. RealMe is a mandated all-of-government ICT Common Capability: ‘it’s a technology that can be used by 1 or more agencies, or across all-of-government, to support business outcomes.’
The backend is the verified personal information that is held by the Department of Internal Affairs (DIA). It is maintained and developed by Datacom. Currently, the biometric data held by the DIA includes facial images and liveness testing. The liveness test is in the form of a video.
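As a rough illustration of the flow just described – a front-end facial match plus liveness video, checked against a back-end verified record – here is a minimal, self-contained sketch. Every name, type and check is an assumption for illustration only; this is not the actual RealMe, Identity Check or DIA implementation.

```python
# Minimal sketch of a front-end/back-end identity verification flow.
# All data, names and checks are hypothetical stand-ins.

from dataclasses import dataclass

@dataclass
class VerifiedRecord:
    """Back-end data: the verified personal information held centrally."""
    face_template: str
    name: str

# Hypothetical back-end store of verified records.
BACKEND_RECORDS = {"user-1": VerifiedRecord(face_template="abc123", name="A. Citizen")}

def faces_match(submitted_photo: str, stored_template: str) -> bool:
    # Stand-in for a facial recognition comparison.
    return submitted_photo == stored_template

def passes_liveness(video_frames: list[str]) -> bool:
    # Stand-in for a liveness test performed on a short video.
    return len(video_frames) > 1

def identity_check(login: str, photo: str, video: list[str]) -> bool:
    """Front-end check: look up the verified record, match face, test liveness."""
    record = BACKEND_RECORDS.get(login)
    return (
        record is not None
        and faces_match(photo, record.face_template)
        and passes_liveness(video)
    )

print(identity_check("user-1", "abc123", ["frame1", "frame2"]))  # True
```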
The DIA’s resources and operations expanded considerably between 2011 and 2022. In 2011, total appropriations were $268,239,000. In 2022 the budget sits at $1,223,005,000. The DIA’s annual appropriations have increased by nearly a billion dollars.
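A quick check on the arithmetic implied by those figures:

```latex
\[
\$1{,}223{,}005{,}000 \;-\; \$268{,}239{,}000 \;=\; \$954{,}766{,}000
\qquad \text{(an increase of roughly } 356\%\text{)}.
\]
```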
What’s also a little, well, whiffy: the DIA is the department responsible for the back-end management of personal data and the administration of the Electronic Identity Verification Act 2012 (which includes RealMe) – yet it is also slated to have oversight of the proposed Digital Identity Services Trust Framework Act.
And of course, the DIA already has a bundle of contracts with corporations as well.
A digital driver’s licence is in play. Of course, police already have digital access to driver data. But this would integrate biometric facial recognition data and contain more information which, presumably, might then be accessed by other agencies under AISAs. The DIA is leading the biometric database work that would enable digital driver’s licence functionality.
Of course, the economic and social benefits of digital identity are estimated at between 0.5 and 3 percent of GDP – roughly NZ$1.5 to $9 billion. A mere $2 million for the Privacy Commissioner is pitiful, and no apparent budgetary requirement is set aside as a foresight measure for the digital trust framework.
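That range, note, is benchmarked against a GDP baseline of roughly NZ$300 billion – an assumption worth making explicit:

```latex
\[
0.005 \times \mathrm{NZ}\$300\,\mathrm{B} = \mathrm{NZ}\$1.5\,\mathrm{B},
\qquad
0.03 \times \mathrm{NZ}\$300\,\mathrm{B} = \mathrm{NZ}\$9\,\mathrm{B}.
\]
```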
Civil society has been left outside the policy development stages, and then largely dismissed. Once new frameworks are in place, regulators that are underfunded, and under no obligation to conduct active inquiry, can only provide a smokescreen of legitimacy.
Through these processes, we can see that legislation turns towards narrow issues of individual privacy but neglects the increasing powers of the supervisory agencies and their existing relationships with the industries they will be charged with overseeing.
What often remains outside regulatory considerations is the potential for the scalability of new technologies to profoundly amplify risk and hazard. The scalability potential of biotechnology for example, is not a primary consideration in risk assessment.
Citizens who submitted on the ‘trust framework’ were interested in how ‘trust’ might be eroded – whether information and intelligence could be scaled up to shape behaviour and coerce publics at the population level.
Digital identity and privacy legislation has zeroed in on narrow, instrumental issues while failing to draw attention to larger democratic themes, including an obligation to protect the public interest. The regulators are under-resourced and lack strong inquisitorial powers.
What could possibly go wrong?
Power and Social Control
Environments shape knowledge systems, whether at the individual level, for the official in government, or at the population level. Knowledge aggregates as intelligence, shaping culture and behaviour – whether it’s autonomous and purposeful, or defensive and reactionary.
Surveillance is normal. From ancient China and Rome, to Jeremy Bentham’s 18th-century panopticon, to Five Eyes and pandemic management, surveillance and information management (or dominance) enables the tactical disarmament of threats and ensures minimal disruption of political agendas. Surveillance is one form of knowledge aggregation, and is accepted by the public in order (theoretically at least) to promote national security.
As James Madison, the fourth US president, conceded, ‘Knowledge will forever govern ignorance.’
The systemic ruptures, or crises, of the last 30 years have acted to intensify private interest power, as processes of democratic deliberation and the sovereignty of individual nation-states falter.
The structures that surround us shape our behaviour. Philosopher Michel Foucault described how the shift into offices and factories produced a ‘new mechanism of power’ arising from productivity and oversight over bodies. This new frontier was envisaged as a:
‘closely meshed grid of material coercions rather than the physical existence of a sovereign, and it therefore defined a new economy of power.’
For Foucault, the shift concerned not only the population under surveillance, but the ‘force and efficacy’ of those with oversight.
Foucault referred to this as disciplinary power – requiring both surveillance and training. In 1979 Foucault drew on Bentham’s panopticon – an all-seeing central point of observation producing a state of permanent visibility of the subject – to emphasise that the power came not only from being watched, but from not knowing when observation might occur. For Foucault the panopticon was not only a machine but a laboratory, to ‘carry out experiments, to alter behaviour, to train or correct individuals. To experiment with medicines and monitor their effects. To try out different punishments on prisoners, according to their crimes and character, and to seek the most effective ones.’
When civil society understands or suspects surveillance, it is more likely to modify its behaviour. What occurs at the level of the individual ripples out to population-level modification and hence supervisory control. The power of social control through observation was brought to life by Orwell in the book 1984.
Innovation Has Displaced Knowledge
Techno-scientific culture is the inevitable consequence of four decades of innovation-centric policies that valorise research and science for economic gain. Science and technology for innovation has displaced public-good basic science. Innovation produces new knowledge and valuable patents. Patent production is viewed as a proxy for GDP. Indeed, the majority of funding for New Zealand’s science system is controlled by the Ministry of Business, Innovation and Employment.
For techno-scientific, economic-growth-centric policymakers, benefit dovetails – for society, the economy and the commercial developer – and society advances. Feedback loops from the public and regulators correct issues when there are safety concerns, new discoveries improve technologies further, and so on.
However, it’s not quite like that.
Governments ordinarily codevelop policy and legal frameworks around potentially risky technologies with industry stakeholders. Officials and regulators default to sourcing advice from their reference network, the industry experts. This occurs when they develop state- and international-level policy (setting the scope) that informs local legislation, as well as through the design and development of regulatory policy.
The experts-as-stakeholders have spent proportionately more time in the lab and with the data, assessing information and identifying the problematic characteristics that might impact market access and the marketability of their products. They have the practical and theoretical expertise.
This produces automatic knowledge asymmetries, and it’s through this process that the regulators bend to think like the regulated.
The regulatory model for digital identities and trust frameworks is ripped from the corporate playbook for the authorisation of novel entities – manmade substances and biotechnologies.
The Point Where Stewardship Falters
There are two major steps to getting technologies onto the market and keeping them there. First, the introduction and authorisation of technologies when they are new, when we don’t know much about them. This includes the development of policies, regulatory protocols and guidelines, as well as endpoints that prove safety in laboratory studies.
Then, later, there is the process of understanding what happens as the social and scientific literature builds a picture of risk or harm, and of adjusting policies to ensure that human and environmental health are protected.
Our local, regional, nation-state and global governments are great at the first bit – supporting industries and partner organisations to develop the policies, protocols and guidelines (like endpoints) to get the technologies onto the market.
But they are terrible at the second bit – identifying risk or harm. They’re terrible at creating a space for research and science where non-industry researchers and scientists can identify not only acute risk, but low-level, chronic harm. Harm might arise from multiple contaminants in drinking water that are not regulated as a whole, or from multiple technical decisions that ensure permissions are granted based on behaviour.
Slow-moving, barely noticeable events can be just as devastating over longer time periods – or more so.
Blackboxing Knowledge and Risk
The shift to deferring to industry science favours under-regulation of technologies in at least five ways. First, through the development of complex laws and technical guidelines that can narrowly codify regulatory logics away from broader understandings of risk. This downplays discussions about values, such as when children, or democratic freedoms, are harmed by an activity. Second, through stakeholder networks, dominant industries with COIs secure privileged access to the development of policy. Third, through the primacy of commercial-in-confidence and data protection agreements that set aside democratic norms of transparency. Fourth, through the absence of non-industry-funded research and science that might identify and understand complex risk scenarios otherwise downplayed by industry science and regulatory frameworks. Fifth (and relatedly), through the absence of non-industry scientific expertise that can feed back into regulatory and policy arenas, triangulate and (where necessary) contest industry claims.
These processes produce ignorance and encourage techno-optimism. They lock in industry science as authoritative. They black-box risk. Black-boxing enables institutions to delay, dismiss and disregard uncomfortable knowledge that has the potential to undermine institutional principles, arrangements and goals. Industry power is amplified through the privileged, and often confidential, two-way cross-talk between governments and private sector institutions, which elides democratic norms of transparency and accountability.
This black-boxing decouples democracy from the development and oversight of policy and law. Norms of transparency and accountability are required in order to highlight errors, fraud and bad public and corporate practice. Non-industry experts can embed norms of protection and precaution in the governing of technologies, which might be dismissed by technical approaches.
These processes tip the regulatory scales in favour of organisations in times of controversy, as technocratic logics leave regulators without tools to navigate public-good knowledge, the impact of COIs, and cultural and social values – and to make socio-ethical judgements for public benefit.
Policy that judges how an invention may disrupt social and biological life can never be certain. Risk governance inevitably requires the juggling of forms of (imperfect) judgement, stretching beyond the technical to consider unknowns that encompass complexity, system dynamics and uncertainty. It involves experts, officials and publics coming together as a socio-technical demosphere.
The Points Where Science Bends
Governance and policy processes are drenched in conflicts of interest.
For the regulated technologies, the data used to identify risk and safety – for stewardship – is inevitably selected and supplied by the major industries with the financial COIs. Whether it’s a chemical compound, a biotechnology or a digital technology, government regulators deal with applicants, sponsors, or service providers. The industries seeking approval and seeking to maintain market access are responsible for supplying the data that proves safety and responsibility.
Institutional networks and early access to policy development create profound power asymmetries, keeping publics, including indigenous, civil and human rights groups, at arm’s length.
The COIs are buried in secret data, governance arrangements and system architecture.
Massive ownership structures drive and perpetuate feedback loops of power and influence. Power exerts itself in many ways: it can be instrumental (such as lobbying power), structural (based on size and insight from business activities), and discursive (the power to promote ideas and shape social, economic and cultural perspectives).
It is not just the dovetailing of manufactured ignorance, where controversial or non-industry science is suppressed and industry data is the default. Power lies in the global webs of relations, where massive institutional investors converge with global lobbyist organisations to shape policy for nation-state application. There’s no effort to engage with civil society, to codevelop policy, or to let indigenous and civil rights groups shape these policies. No effort at all.
Information aggregators such as Google can support governments to track population movements, join digital identity scheme lobby groups, and – as ‘stakeholders’ – gain early access to policy development processes that are unavailable to publics. Google, of course, is owned by institutional investors, and those institutions have complex, braided ownership structures.
Entities such as Google can join with other tech giants to establish self-governing ‘Trusted Cloud Principles,’ and they can have joint ventures with vaccine developers, such as Google parent Alphabet’s partnership with GlaxoSmithKline.
States surveil and then engage private industry to take action, whether through the Trusted News Initiative, Twitter and Facebook, or PayPal. Algorithms shape who is known, and consequently, what is known. Pandemic practices have provided the fertile soil for such complicities, enabling the rise of these secret arrangements.
At the same time, global central banks, governments and their associated lobbyist institutions produce information releases and white papers urging the benefits of central bank digital currencies. While rhetorically gifted lobbyists claim digital currency activities will promote financial inclusion, in reality this is the weak spot – the contested boundary – for ordinarily, those with the least often lack the capacity and resources to access technologies such as smartphones.
Unresolvable antinomies arise from these ownership structures, the pervasive political and financial conflicts of interest and the black-boxed digital information hidden in hard drives.
Reserve banks have always had the capacity to ‘print money’ whether as physical currency or as a digital ledger. In New Zealand, with NZ$8.5 billion in circulation, recent consultation affirmed the importance of ‘cold, hard, cash.’
The cold hard truth is that social policies are required that reduce inequality and lower the barriers to small-business entrepreneurship that can challenge institutional lock-in.
The golden egg – commercial in confidence agreements
Contrary to democratic norms of transparency, the industry data required by regulators for decision-making is ordinarily kept secret due to commercial in confidence agreements (CICAs). This occurs across every technology that you can think of.
At risk of being heretical: are CICAs modernity’s Ark of the Covenant, housing valuable secrets that only a privileged few ever have access to? Does the sheer quantity of these agreements now held by governments inevitably corrupt the original purposes of CICAs – instead weaponising them, so as to aggregate and sustain power and authority?
The absence of non-industry science
In contrast, governments don’t meaningfully fund our public science institutions or our regulators to ensure that they can broadly monitor and assess risk in order to triangulate industry claims once a technology is released. In addition, CICAs often prevent independent scientists from accessing compounds and technologies in order to research them.
Independently produced science and research can, and does, identify unknown, off-target and unanticipated risks that may be outside policy or regulatory consideration, outside the scope of the study design, or unidentified in reviews of industry data. We’ve seen this with pesticides, biotechnology, personal care products, ultraprocessed food, pharmaceuticals, PFAS, food additives, and plastics such as phthalates and bisphenols. Ensemble and over time, these exposures drive an appreciable disease burden.
This sort of public-good science, which is often interdisciplinary or transdisciplinary, can explore chemistry and biology, and integrate new techniques (such as machine learning) to scrutinise biomarker and epidemiological data. Public-good research traverses questions of ethics, such as the potential for harm in pregnancy or early childhood. It is the type of research that can analyse new knowledges about the technologies as the literature paints a picture of risk or harm.
Pick a chemical, a biotechnology, an emission, a digital platform. Then look for the non-industry scientists with secure tenure and secure funding that can confidently speak to complexity, uncertainty and risk, and stretch across disciplinary silos while they problematise.
They’re as rare as hens’ teeth and certainly not mid-career.
Now consider digital identity systems and the solid evidence indicating that data anonymisation likely doesn’t work, the pervasive human rights implications, omnipresent surveillance, and the predatory monetisation practices already in play. Dumb stuff will happen. Monitoring capability is scaling tremendously.
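For readers who wonder why anonymisation fails, here is a minimal sketch of the well-documented re-identification technique: linking a ‘de-identified’ dataset to a public register on quasi-identifiers (here postcode, birth year and sex). All records and names are invented for illustration.

```python
# Minimal sketch: re-identifying a "de-identified" record by joining on
# quasi-identifiers. All data here is invented.

deidentified_health = [
    {"postcode": "6011", "birth_year": 1984, "sex": "F", "diagnosis": "asthma"},
]

public_register = [
    {"name": "A. Citizen", "postcode": "6011", "birth_year": 1984, "sex": "F"},
]

QUASI_IDENTIFIERS = ("postcode", "birth_year", "sex")

def reidentify(anonymous_rows, identified_rows):
    """Link rows whose quasi-identifiers all match; no names needed up front."""
    matches = []
    for anon in anonymous_rows:
        for known in identified_rows:
            if all(anon[k] == known[k] for k in QUASI_IDENTIFIERS):
                matches.append((known["name"], anon["diagnosis"]))
    return matches

print(reidentify(deidentified_health, public_register))
# [('A. Citizen', 'asthma')]
```

The more attributes a digital identity system links together, the easier such joins become.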
Who undertakes critical work exploring institutional power, surveillance, digital technologies and ethics at a meaningful level, and where? If citizens are to trust, civil societies require robust critical thought, at arm’s length from the most highly funded agencies and Ministries.
Industry scientists do not discuss principles of protection, query right and wrong, challenge economic norms, or think about the long game of social and political life.
Regulators in name only
Regulators are simply never granted investigative, or inquisitorial, powers. This is common for chemical compounds and biotechnologies – but it is clearly evident in New Zealand’s ‘trust framework’ and privacy governance structures.
Technology and chemical regulators ordinarily lack meaningful budgets to detect anomalies, disruptions and threats before harm occurs. They fail to look at risk beyond guideline frameworks.
What could we require of regulators? That they conduct systematic (as opposed to cherry-picked) literature reviews of the published science; report on legal decisions from offshore jurisdictions; and demand that public scientists fill the gaps left unmet by industry science and data provision. This is not currently the case.
The failure to fund research and science to triangulate industry claims, and the diminishing of social science, ethics and public law, dovetail nicely with predominantly impotent regulatory environments.
Digital Expansionism
These shifts have encouraged policy, legal and regulatory cultures that marginalise and set aside a language of risk that should encompass uncertainty and complexity. These processes set aside, and outright dismiss, the values and principles enshrined as democratic norms, such as transparency and accountability.
They’re captured.
It’s not surprising, then, that scientists recently stated that the production and release of anthropogenic novel entities (chemicals and biotechnologies) have so extensively escaped our capacity to effectively steward them that the very out-of-control nature of their releases constitutes a transgression of the planetary boundary for chemicals and biotechnologies. They’ve escaped the safe operating space.
Man-made emissions and exposures are all-encompassing, permeating daily life and resulting in the subjection of the individual to potentially harmful technologies from conception. Dietary, atmospheric and other environmental exposures can’t be avoided.
The impossibility of avoidant action, as sociologist Ulrich Beck pointed out in his 2009 book Risk Society, effectively represented a loss of bodily sovereignty. Beck imagined civil society, in a risk society, engaged in navigating endless risk scenarios, as people struggled to judge endless exposures and emissions that their ancestors were never required to contemplate.
Repurposing Potential Built into System Architecture
The out-of-control, burgeoning risk now appears to underpin digital identity systems, where ‘trust’ and ‘responsibility’ are designed by the institutions with the COIs.
With the turn to digital regulatory frameworks, risk pivots from emissions or exposures, to risk from surveillance and policy instruments. These instruments contain exceptional potential to nudge, coerce and force compliance in daily life, distorting personal autonomy and sovereignty.
Digital identity systems and the associated technologies present a dual-purpose opportunity for governments. As much rhetoric tells us, they are convenient and prima facie trusted. They will reduce fraud and simplify access to public and private goods and services. The rhetorical focus concerns the crafting of legislation to protect privacy.
But with a back end of digital identity systems owned by the state, AISAs that permit cross-government sharing, biometrics that can sew together identities, and global providers of artificial intelligence and algorithms, there are new opportunities. The potential for this information to be repurposed as compliance-related behavioural technology, to control and shape the behaviour of citizens, is outside the scope of all the Bills and consultations.
An Official Information Act request to understand the current strategic direction of government regarding digital identity and citizen biometrics has just been delayed by the Honourable Dr David Clark. It’s concerning because, at the same time, Jacinda Ardern’s office has deflected a request to understand why she pushed out her original COVID-19 emergency powers in September 2022.
Governments can use data from identity systems to toggle access permissions on and off. This can promote or restrict certain behaviours.
When tied to central bank digital currency, access to resources (through digital currency and/or tokens) can be time-specified and for a limited purpose. Permissions can be shaped to restrict access to narrowly approved goods and services, and/or alter consumption patterns.
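To make that mechanism concrete, here is a minimal sketch of a hypothetical ‘programmable’ token, assuming an issuer-set expiry, an approved-purpose list and a kill-switch. None of these names or fields describe any real CBDC design; they simply show how the three restrictions compose.

```python
# Hypothetical sketch of a programmable-permission token. The fields and
# policy checks are illustrative assumptions, not any real CBDC design.

from dataclasses import dataclass
from datetime import datetime

@dataclass
class Token:
    holder_id: str
    expires: datetime                # time-specified access
    approved_categories: set[str]    # purpose-limited spending
    enabled: bool = True             # issuer can toggle off at any time

def authorise(token: Token, merchant_category: str, now: datetime) -> bool:
    """Permit spending only if every issuer-set condition is met."""
    return (
        token.enabled
        and now < token.expires
        and merchant_category in token.approved_categories
    )

token = Token("citizen-42", datetime(2023, 1, 1), {"groceries", "transport"})
now = datetime(2022, 12, 1)

print(authorise(token, "groceries", now))  # True: within window, approved purpose
print(authorise(token, "books", now))      # False: unapproved category
token.enabled = False
print(authorise(token, "groceries", now))  # False: access toggled off
```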
We’ve already seen pandemic policies require healthy populations to submit to injection with a novel biological entity for which the private safety and efficacy data was hidden via automatic data protection agreements. The Attorney-General, the Honourable David Parker, controlled development of the overarching legislation, the COVID-19 Public Health Response Bill. The Bill failed to incorporate the principles of the Health Act 1956 – leaving protection of health outside of statutory obligations, while ignoring principles of infectious disease.
Throughout 2020-2022, secret, unpublished clinical trial data privileged the mRNA gene therapy manufacturer, while secret guidelines consistently acted in its favour. Authoritative secret data ensured healthy people were required to submit to a novel gene therapy or be stripped of rights of access, participation and community.
In similar fashion to the Digital Identity Services Trust Framework Bill, the COVID-19 Response Amendment Bill (No.2) consultation resulted in the broad dismissal of New Zealand public input.
Direct presentations to members of Parliament drew attention to evidence in the scientific literature that the mRNA gene therapy was harmful, that its protection waned, and that breakthrough infections were common; this evidence was ignored in favour of clinical trial data. The Attorney-General advised the public that the amendment bill did not adversely impact human rights.
Through the privileging of the corporation and the corporate science, ethical norms – where health, fairness and freedom converge so as to navigate difference – were stripped from public debate. Also jettisoned was the capacity to act precautionarily in pervasively complex and uncertain environments, in order to prevent off-target harms.
The secret vaccine data, and the idea that a coronavirus could be contained by interventions, produced more secrets: the introduction of passports, the implicit permission across populations that surveillance was appropriate and possible, and the gagging of doctors. Passport acceptance locked in a novel precedent. Populations would accept a drug justified by secret industry data – even though it could permit or deny them access to taken-for-granted services and places of community, dependent on their medical status.
Cultural Capture
Opaque digital identity systems and the coexisting governmental and private sector frameworks can be repurposed – some might say weaponised – to shape behaviour. The digital instrumentation, the system architecture, and the evidence around the safety of the imagined biotechnologies and technical policy fixes lie in the arms of the companies, their lobbyist affiliates, the outsourced grunt work and the government relationships. If algorithms can create bellwethers for economic changes, what else can they do?
Due to the absence of public science to challenge, contradict and contest corporate science and data provision, and the pervasive default to industry data at all levels of government, we have before us not merely regulatory capture, but systemic, cultural capture.
The default position of relying on industry science to underpin policy is a function of the decline of public good science and the rise of industry power. Industry knowledge and expertise, and industry culture pervades the drafting of related laws and guidelines.
The inability to judge anything beyond economic and technical principles manifests as endemic structural corporatism. The two-way cross-talk directly privileges the institutions with the vested (political and financial) interest, while directly marginalising civil society and non-industry aligned scientists.
Saltelli et al (2022) have described how ways of thinking in policy and regulatory environments that privilege industry, and result in officials thinking like industry scientists, act to produce cultural capture:
‘Cultural capture linked to science as a source of evidence for policy-making have become a fertile ground for corporate penetration, leading to actions targeting different aspects of science for the policy system.’
Sociologist Ulrich Beck, in his 2009 book Risk Society, observed that this institutional shift of industry expertise upstream, from the regulatory environment into active policy-making, diminished parliament’s position as the political centre of decision-making. The rise of stakeholder experts produced a double movement: the ‘technocratic closing off of the scope for decision-making in the parliament and the executive, and the rise of power and influence groups organised corporatively.’
Thus, politics and decision-making inevitably ‘migrated from the official arenas – parliament, government, political administration – into the gray [sic] area of corporatism.’
When cultures are captured, the industry data is imagined as ‘apolitical’ while the publicly produced data is viewed as political and controversial.
It is cultural capture that reinforces the tensile strength, the working load, of the braided rope. Cultural capture reinforces technical dogma, alongside policy and law. The embedded narrative of economic supremacy sets aside uncertainty, precaution, and the messiness of co-deliberation.
In these environments democracy becomes performative – an administrative sham. There is little place for meaningful democracy.
This is how industry capture of science, policy and law pivots now – from human and environmental health risk to freedom, sovereignty and democracy risk.
The potential for abuse of political and financial power is tremendous.