Conflicts of Interest in Science: History of Influence, Scandal, and Denial



In December 1953, the CEOs of America’s leading tobacco companies cast aside competitive rancor and gathered at New York City’s Plaza Hotel to confront a menace to their incredibly profitable industry. An emergent body of science published in elite medical journals cast doubt on the safety of cigarettes and threatened to destroy a half-century of corporate success. Joining them at the Plaza was John W. Hill, the president of America’s top public relations firm, Hill & Knowlton. Hill would later prove a decisive savior. 

Hill had closely studied Edward Bernays, whose work on propaganda in the 1920s and 1930s laid the foundation of modern public relations and defined common techniques to manipulate popular opinion. Hill understood that any traditional campaign would fail to sway society, which perceived advertising as little more than corporate propaganda. Effective public relations required comprehensive off-stage management of the media. At its best, it left no fingerprints. 

Instead of ignoring or denigrating new data that found tobacco dangerous, Hill proposed the opposite: embrace science, trumpet new data, and demand more research, not less. By calling for more research, which they would then fund, tobacco companies could harness academic scientists in a battle to confront a major scientific controversy and amplify skeptical views of the relationship between tobacco and disease. Such a scheme would let companies shroud themselves in doubt and uncertainty—core principles of the scientific process, in which every answer leads to new questions. 

Hill & Knowlton’s campaign for the five largest US tobacco companies corrupted science and medicine for decades to follow, laying the foundation for financial conflicts of interest in science, as other industries mimicked tobacco’s techniques to protect their own products from government bans and regulations—later, from consumer lawsuits. While tactics have varied over time, the core strategy has changed little since tobacco wrote the playbook, providing a menu of techniques now employed across industries. 

To position themselves as more scientific than the science itself, corporations hire academics as advisors or speakers, appoint them to boards, fund university research, support vanity journals, and provide academic scholars with ghostwritten manuscripts to which they can add their names and publish in peer-reviewed journals, sometimes with little or no effort. These tactics create an alternative scientific realm that drowns out the voices of independent researchers and calls into question the soundness of impartial data. 

To further undermine impartial scientists, industries secretly support think tanks and corporate front groups. These organizations echo and amplify company studies and experts, counter articles in the media, and launch campaigns against independent academics, often trying to get their research retracted or portrayed to the public and media as second-rate and untrustworthy. 

To counter corporate influence, academic and government bodies have repeatedly turned to conflict of interest policies and calls for greater transparency and financial disclosure. Philip Handler, President of the National Academy of Sciences (NAS) during the early 1970s, proposed the first conflicts of interest policy, which the NAS Council approved in 1971.

The policy drew sharp rebukes from leading scientists who called it “insulting” and “undignified,” creating a pattern that continues today. Whenever a scandal erupts that finds companies exerting undue influence on science, calls for greater transparency and more stringent ethics requirements are countered with assertions that current rules are fine and further scrutiny is not needed. 

However, a growing body of literature finds that arguments against financial conflicts of interest reforms are unsubstantiated, lacking in intellectual rigor, and ignorant of the peer-reviewed research on financial influence. Although conflicts of interest policies have become more prevalent, their content and essential requirements have evolved little since the National Academies introduced their first rules.

In fact, the controversy over corporate control of science continues to dog the Academies. Over 40 years after introducing their first conflicts of interest policy, the Academies were once again caught in a scandal after complaints that committee members preparing reports for the Academies had cozy ties to corporations. 

Investigative reporters found that nearly half the members of a 2011 Academies report on pain management had ties to companies that manufacture narcotics, including opioids. A separate newspaper investigation discovered that the NAS staff member who selected the committee members for a report on the regulation of the biotechnology industry was simultaneously applying to work for a biotech nonprofit. Many of the committee members he chose were found to have undisclosed financial ties to biotech corporations. As this review of history will show, the Academy is not alone in confronting conflicts of interest in a cycle of denial, scandal, reform, and more denial. 

Early Years 

Concern over corporate influence on science is relatively modern, having emerged in the 1960s. In the early 20th century, private foundations and research institutes funded the vast majority of scientific research in the United States. This changed after World War II, when the federal government began pouring increasing amounts of money into scientific programs. Physicist Paul E. Klopsteg best expressed the apprehension many scientists felt about the government controlling the research agenda. As the Associate Director for Research at the National Science Foundation in 1955, he worried that federal funding for science could allow the government to hijack the mission of universities. 

“Does such a vision make you uneasy?” Klopsteg asked, in a rhetorical fashion. “It should; for it requires scant imagination to picture therein a bureaucratic operation that would irresistibly and inevitably take a hand in the affairs of our institutions of higher learning.” 

The government’s influence over science can be evaluated by examining budget numbers. The National Science Foundation’s budget ballooned from $3.5 million in its first year of operations, 1952, to almost $500 million in 1968. The National Institutes of Health saw equally large increases, growing from $2.8 million in 1945 to over $1 billion in 1967. By 1960, the government supported over 60 percent of research. 

During this period, the scientific community focused on conflicts of interest that affected scientists who either worked in government or who were funded by government agencies, especially researchers in military and space science research programs. Even while using the term “conflict of interest,” scientists discussed the matter only within a narrow legal context.

When Congress held hearings about conflicts of interest in science, they concerned scientists who were government contractors for the Atomic Energy Commission or National Aeronautics and Space Administration while also having financial interests in private research or consulting companies. 

Worries about government influence over science were also apparent in 1964. That year, both the American Council on Education and the American Association of University Professors developed conflicts of interest policies that only discussed research funded by the government. 

By examining the appearance of the phrase “conflict of interest” in the journal Science over the past century, we can see how the term has changed in context and meaning, reflecting researchers’ concerns about the power of external forces in shaping science. In the early years, the term surfaced in the journal’s pages in reference to scientists’ relations to government. Over time, this shifted to incidents and discussions involving industry. This uneasiness with industry seems to have increased with time and with the strengthening kinship between universities and corporate partners. 

Tobacco Creates Parallel Science 

After an initial meeting with tobacco company leaders in late 1953, Hill & Knowlton created a sophisticated strategy to shroud the emerging science about tobacco in skepticism. Skeptics have always existed in science. In fact, skepticism is a fundamental value of science. But tobacco repurposed skepticism by flooding the research field with money to study the relationship between smoking and disease, and positioning the industry as scientific advocates while shaping and amplifying a public message that tobacco’s potential dangers were an important scientific controversy. 

Historian Allan M. Brandt of Harvard University noted, “Doubt, uncertainty, and the truism that there is more to know would become the industry’s collective new mantra.” 

This Trojan Horse intrusion avoided many potential downfalls of a direct assault. Attacking researchers could backfire and be viewed as bullying; issuing statements of safety could be dismissed by a cynical public as self-serving, or worse, dishonest. But emphasizing the need for more research allowed the tobacco industry to seize the moral high ground from which they could then peer down onto emerging data, gently guiding new research to spur a spurious debate. While pretending the goal was science, tobacco companies would repurpose research for public relations.

Public relations firms had decades of expertise at stage-managing the media to counter information that harmed their clients. But by controlling the research agenda and the scientific process, tobacco companies could manage journalists even better than in the past. Instead of manipulating journalists to fight on their side of a public debate, companies would create the debate and then harness the media to publicize it for them. 

As part of their initial plan, tobacco companies sought experts to discredit new research that might find links between tobacco and lung cancer. After companies collected public statements of physicians and scientists, Hill & Knowlton then produced a compendium of experts and their quotes. Not content with just funding individual scientists and research projects, Hill proposed creating an industry-funded research center. This call for new research broadcast a subtle message that current data was outdated or flawed, and by partnering with academic scientists and their universities, it created the impression that the tobacco industry was committed to finding the right answers. 

“It is believed,” Hill wrote, “that the word ‘Research’ is needed in the name to give weight and added credence to the Committee’s statements.” By branding tobacco as a proponent of research, Hill made science the solution to possible government regulation. This strategy would lead to almost half a century of collusion between tobacco corporations and university researchers. 

The Tobacco Industry Research Committee (TIRC) became central to Hill & Knowlton’s strategy of co-opting academia. When TIRC was officially formed, over 400 newspapers ran an ad announcing the group with the title, “A Frank Statement to Cigarette Smokers.” The ad noted that tobacco had been accused of causing all sorts of human diseases, yet, “One by one those charges have been abandoned for lack of evidence.” The ads then pledged that companies would fund, on behalf of consumers, new research to study tobacco’s health effects: 

We accept an interest in people’s health as a basic responsibility, paramount to every other consideration in our business. We believe the products we make are not injurious to health. We always have and always will cooperate closely with those whose task it is to safeguard the public health. 

The Executive Director of TIRC was W.T. Hoyt, a Hill & Knowlton employee, who operated TIRC from his firm’s New York office. Hoyt had no scientific experience, and before joining the PR firm, he sold advertising for the Saturday Evening Post. The tobacco industry would later conclude “most of the TIRC research has been of a broad, basic nature not designed to specifically test the anti-cigarette theory.” 

After retiring as CEO of Brown & Williamson, Timothy Hartnett became the first full-time chairman of TIRC. The statement announcing his appointment reads: 

It is an obligation of the Tobacco Industry Research Committee at this time to remind the public of these essential points: 

  1. There is no conclusive scientific proof of a link between smoking and cancer. 
  2. Medical research points to many possible causes of cancer …. 
  3. A full evaluation of statistical studies now underway is impossible until these studies have been completed, fully documented and exposed to scientific analysis through publication in accepted journals. 
  4. The millions of people who derive pleasure and satisfaction from smoking can be reassured that every scientific means will be used to get all the facts as soon as possible. 

The TIRC began operating in 1954 and almost all of its $1 million budget was spent on fees to Hill & Knowlton, media ads, and administrative costs. Hill & Knowlton hand-picked TIRC’s science advisory board (SAB) of academic scientists who peer-reviewed grants which had been screened previously by TIRC staff. Hill & Knowlton favored scientists who were skeptics of tobacco’s ill health effects, especially skeptics who smoked. 

Instead of delving into research about tobacco’s links to cancer, most of TIRC’s program focused on answering basic questions about cancer in areas such as immunology, genetics, cell biology, pharmacology, and virology. The TIRC funding of universities helped chill discourse and debate that argued tobacco might cause disease, while also allowing tobacco companies the prestige of associating with academics, as few TIRC scientists took strong positions against tobacco. 

While launching TIRC, Hill & Knowlton also moved to reshape the media environment by developing a large, systematically cross-referenced library on tobacco-related issues. As one Hill & Knowlton executive explained:

One policy that we have long followed is to let no major unwarranted attack go unanswered. And that we would make every effort to have an answer in the same day—not the next day or the next edition. This calls for knowing what is going to come out both in publications and in meetings….This takes some doing. And it takes good contacts with the science writers. 

Although their positions were not grounded in substantive peer-reviewed literature, Hill & Knowlton broadcast the opinions of a small group of skeptics on cigarette science, making it appear as if their views were dominant in medical research. These skeptics allowed TIRC to quickly counter any assault against tobacco. In many cases, TIRC rebutted new findings even before they had become public. This campaign succeeded because it hijacked science journalists’ love of controversy and commitment to balance. 

“Given the penchant of the press for controversy and its often naive notion of balance, these appeals were remarkably successful,” Brandt concluded.

Not satisfied with passive forms of media control like advertising and press releases, Hill & Knowlton practiced aggressive outreach to authors, editors, scientists, and other opinion makers. Personal face-to-face contacts were critical, and after every press release, TIRC would initiate a “personal contact.” Hill & Knowlton systematically documented this courtship of newspapers and magazines to urge journalistic balance and fairness to the tobacco industry. During these encounters, TIRC emphasized that the tobacco industry was committed to the health of cigarette smokers and scientific research, while urging skepticism about statistical studies finding harm. 

Finally, TIRC presented journalists with contacts of “independent” skeptics to ensure accurate journalistic balance. In short, after creating the controversy, Hill & Knowlton then co-opted reporters to cover the debate, leading to stories that concluded tobacco science was “unresolved.” 

Despite Hill & Knowlton’s behind-the-scenes management of TIRC to provide a veneer of scientific credibility, scientists advising TIRC grew uneasy about the board’s independence and about their own professional credibility among peers. To calm these fears, Hill & Knowlton created the Tobacco Institute in 1958, at the behest of R. J. Reynolds. 

An industry attorney later recounted that “the creation of a separate organization for public information was hit upon as a way of keeping [TIRC scientists] inviolate and untainted in [their] ivory tower while giving a new group a little more freedom of action in the public relations field.” Having protected the “science” mission of TIRC, Hill & Knowlton operated the Tobacco Institute as an effective political lobby in Washington to counter congressional hearings and potential agency regulations. As it had in advertising and media, the tobacco industry innovated new strategies with the Tobacco Institute to manipulate the regulatory and political environment. 

Hill & Knowlton’s success became evident in 1961. When tobacco hired the firm in 1954, the industry sold 369 billion cigarettes. By 1961, companies sold 488 billion cigarettes, and per capita cigarette use rose from 3,344 annually to 4,025, the highest in American history.

In 1963, a New York Times story noted, “Surprisingly, the furor over smoking and health failed to send the industry into a slump. Instead, it sent it into an upheaval that has resulted in unforeseen growth and profits.” An official with the American Cancer Society told the paper, “When the tobacco companies say they’re eager to find out the truth, they want you to think the truth isn’t known…. They want to be able to call it a controversy.” 

During this time span, scientists seemed unperturbed by the conflicts of interest that arose when tobacco funded university research and academics allied themselves with a corporate campaign. When the Surgeon General established an advisory committee on smoking and health in 1963, the committee did not have a conflict of interest policy. In fact, the tobacco industry was allowed to nominate and reject committee members. 

Although documents detailing tobacco’s tactics to hijack science only became public following litigation in the 1990s, this playbook created in the 1950s remains effective and has been copied by other industries. In order to disrupt scientific norms and stave off regulation, many corporations now make boilerplate claims of scientific uncertainty and lack of proof, and divert attention from product health risks by placing blame on individual responsibility. 

Before tobacco, both the public and scientific community believed that science was free of undue influence from special interests. However, tobacco repurposed science not to advance knowledge, but to undo that which was already known: cigarette smoking is dangerous. Instead of funding research to make new facts, tobacco spread money around to unmake that which was already a fact. Historian Robert Proctor of Stanford University has used the term “agnotology” to describe this process of constructing ignorance. 

To this day, society struggles to create policies to limit corporate influence over areas of science that advance the public interest and intersect with government regulations. We can thank the tobacco industry for inventing our modern crisis with conflicts of interest and financial transparency in science. 

Modern Scandal 

The late 1960s and early 1970s marked a period of political turmoil and social change in the United States. Confidence in government and social institutions plummeted with the Watergate scandal and a series of exposés that shined a harsh light on special interests that were manipulating Congress. At the same time, Congress created new federal agencies with broad mandates for protecting public health, elevating the role of scientists in federal policymaking.

The Environmental Protection Agency and the Occupational Safety and Health Administration, created in 1970, were charged with developing regulatory standards for a wide range of substances for which limited data existed. At the same time, the 1971 National Cancer Act brought attention to environmental factors related to cancer risk. 

Describing this period, sociologist Sheila Jasanoff remarked that science advisors had become a “fifth branch” of the government. But as medicine and science began to have a more direct impact on policy, they simultaneously came under greater public scrutiny, leading to controversies about scientific integrity. Media outlets at the time ran front-page stories about financial interests and apparent corruption regarding several issues that touched on the environment, consumer safety, and public health.

Prior to this, the public was rarely confronted with evidence about the dangers of radiation, chemical pesticides, and food additives and how these substances might cause cancer. Yet, as scientists and physicians found their professions more heavily scrutinized, society also demanded that they create policies to protect public health. 

In 1970, the National Academies faced accusations of pro-industry bias after creating a committee to examine the health effects of airborne lead. DuPont and the Ethyl Corporation—the two companies that produced the most lead in the United States—employed 4 of the committee’s 18 experts. An Academies spokesperson defended the committee, arguing that members were selected on the basis of scientific qualifications, and that they advised the Academy as scientists, not as representatives of their employers. 

The President of the Academies during this period was Philip Handler, a former academic who consulted for numerous food and pharmaceutical companies and served on the Board of Directors of the food corporation Squibb Beech-Nut. Throughout his tenure, Handler continued to face criticism over his industry ties.

Handler attempted to thread the needle of conflicts of interest by pointing to the Academy’s obligation to work with the Department of Defense to protect the country. “[T]he question is not whether the Academy should do work for the Defense Department but how it goes about maintaining its objectivity in doing so,” he argued. Handler also advocated for more federal funding for graduate scientific education but cautioned that the “university must not become subservient to or the creature of the federal government by virtue of this financial dependency.” While arguing that government and industry funding was essential for science, he seemed to sidestep the obvious dilemma that this funding might compromise scientific independence. 

After the airborne lead committee kerfuffle, Handler proposed that new committee members disclose any potential conflicts that might arise during service for the Academy. This information would be shared among fellow committee members, not the public, and was aimed at providing information to the Academy that might be damaging if it became public through other avenues. The new conflicts of interest rules were not limited to explicit financial relationships but also considered “other conflicts” that might be perceived as creating bias. 

Before implementing the new policy, Handler conducted an informal survey of committees and boards at NAS. Some responded that all members were in conflict, while others said scientists could not be biased. One committee member wrote, “Isn’t it probably true that unless a committee member has some possibility of [conflict of interest], it is not too likely that he will be a useful committee member?” In short, when scientists were prodded about conflicts of interest and how these might bias their opinions, they inverted the problem by redefining conflicts of interest as “scientific expertise.” 

In August 1971, the Academy approved a one-page letter, titled “On Potential Sources of Bias,” to be filled out by potential advisory committee members. The letter noted that NAS committees were, to an “ever-increasing extent,” being asked to consider issues of “public interest or policy,” thus frequently requiring conclusions that rested on “value judgments” as well as data. Even when committee members are acting without bias, the letter stated, such charges can impugn committee reports and conclusions. Thus, individual members were asked to state “which [factors], in his judgment, others may deem prejudicial.” 

Many committee members viewed the statement as an accusation or challenge to their integrity, with some calling it “insulting” and “undignified.” Federal laws required government advisors to disclose financial conflicts such as grants or stocks, but the Academy’s statement delved into other sources of potential bias such as prior comments and membership in organizations. 

Still, concern about the Academy’s integrity arose the following year when its Food Protection Committee was accused of pro-industry bias and downplaying the cancer risks of food chemicals. Food companies partly funded the committee, which included academics who consulted for the food industry. Worries about industry influence were further inflamed in 1975, when Ralph Nader funded a former journalist for Science, Philip Boffey, to investigate the Academy’s ties to industry and how corporate financial support may have influenced its reports. 

Nonetheless, the Academy’s 1971 statement was a pioneering policy in conflicts of interest and the precursor to the Academy’s current practices. But a new element would enter the picture in 1980 when Congress passed the Bayh–Dole Act. This law allowed universities to own inventions created by professors with government funding and encouraged corporate collaborations to develop new products and bring them to market.

Within a year, many top academic centers and their faculty had signed lucrative licensing deals with pharmaceutical and biotechnology companies, dividing academics at American universities over uneasiness about scientific integrity and academic freedom. 

Current Evidence and Primacy of Pharmaceutical Companies 

In the early 1900s, the American Association of University Professors published a declaration of principles to guide academic life. In retrospect, this declaration seems quaint:

All true universities, whether public or private, are public trusts designed to advance knowledge by safeguarding the free inquiry of impartial teachers and scholars. Their independence is essential because the university provides knowledge not only to its students but also to the public agency in need of expert guidance and the general society in need of greater knowledge; and… these latter clients have a stake in disinterested professional opinion, stated without fear or favor, which the institution is morally required to respect. 

Current university practices resemble these principles about as closely as modern sexual behavior smacks of the prim morality of the Victorian era. Just as the 1960s sexual revolution altered sexual behavior, tobacco transformed university practices by blurring the boundaries between corporate public relations and academic research. These changes have been most profound in medicine, where academic partnerships with the biotechnology industry have created both cures for several diseases and a pandemic of financial conflicts of interest.

In effect, the pharmaceutical industry has repurposed tobacco’s campaign by co-opting academics to sell drugs. These financial conflicts of interest in academic biomedical research entered the public debate in the early 1980s, following a series of scientific misconduct scandals. In some cases, investigations revealed that faculty members fabricated or falsified data for products in which they had a financial interest. 

By then, two important laws helped bind academics to the biotech industry. In 1980, Congress passed the Stevenson-Wydler Technology Innovation Act and Bayh–Dole Act. The Stevenson-Wydler Act pushed federal agencies to transfer technologies they helped invent to the private sector, leading many universities to create technology transfer offices. The Bayh–Dole Act allowed small businesses to patent inventions created with federal grants, allowing universities to license products their faculty created. Both laws aimed to leverage federal agencies and funding to bring life-saving products to the public. However, the laws also pushed academics into a further alliance with industry. 

As the distinction between academic research and industry marketing continued eroding, the New England Journal of Medicine announced the first formal conflict of interest policy for any major science journal in 1984. In an editorial, the NEJM’s editor laid out concerns that required this new policy: 

Now, it is not only possible for medical investigators to have their research subsidized by businesses whose products they are studying, or act as paid consultants for them, but they are sometimes also principals in those businesses or hold equity interest in them. Entrepreneurialism is rampant in medicine today. Any new research development that has or might have commercial application attracts attention from established corporations or venture capitalists.

Reports of such developments released at press conferences, presented at scientific meetings, or published in journals may cause stock prices to rise abruptly and fortunes to be made almost overnight. Conversely, reports of unfavorable outcomes or serious side effects may rapidly devalue a particular stock. On more than one occasion during the past few years, the publication of an article in the Journal has been the direct cause of sharp fluctuations in stock prices. 

A year later, JAMA also instituted a conflict of interest policy. However, the two leading science journals did not catch up until 1992 (Science) and 2001 (Nature). Research finds that science disciplines have always lagged behind medicine in addressing financial bias. 

For example, in 1990, Harvard Medical School instituted financial conflict of interest policies by limiting the types of commercial relationships clinical research faculty could have and setting a ceiling on financial interests. This appears to be the first attempt by a university to sharpen the distinction between academic research and corporate product development. Both the Association of American Medical Colleges and the Association of Academic Health Centers followed up that year by publishing guidance on financial conflicts of interest. 

In these same years, the National Institutes of Health proposed new rules to require that academics disclose financial interests to their institution and not consult or have equity in companies that might be affected by their research. In response, the NIH received 750 letters, with 90 percent opposing the proposed regulations as overly intrusive and punitive.

When the new rules became active in 1995, they only required disclosure of interests “that would reasonably appear to be directly and significantly affected by the research.” Unfortunately, the public who would benefit from greater independence of science does not seem to have weighed in on this process, and the academic institutions receiving the grants ended up enforcing the regulations themselves. 

However, these initial steps seemed to have had little effect in controlling the industry’s growing influence over medicine and the culture of universities. In 1999, the American Society of Gene Therapy (ASGT) was forced to declare certain financial arrangements off-limits in gene therapy trials, following a scandal in the first gene therapy clinical trial. Nonetheless, industry financing continued to dominate biomedicine, a trend that became clear in 1999, when the National Institutes of Health spent $17.8 billion, mostly on basic research. In contrast, the leading 10 pharmaceutical companies spent $22.7 billion, mostly on clinical research. 

A spate of studies throughout the 1990s persisted in documenting corporate control over medicine. Research found that pharmaceutical companies affected clinicians’ decisions and that the research of academics with ties to industry was lower in quality and more likely to favor the study sponsor’s product. Negative findings were less likely to be published and more likely to have delayed publication. Especially worrying to academics was the media’s growing interest in stories that documented industry’s influence over medicine. 

While the Bayh–Dole Act generated profits for universities and academics, it also constructed a positive feedback loop, driving more academic research down an avenue of commercialization. Whatever boundaries between universities and industry that had previously existed seemed to have disappeared as academic interests became almost indistinguishable from corporate interests.

But the public’s demand for advanced medical discoveries was tempered by intolerance for even a whiff of impropriety by universities now firmly entangled in corporate research. A JAMA editorial described this as a struggle “to create a precarious equipoise between the world and values of commerce and those of traditional public service, a balance between Bayh–Dole and by-God.” 

Conflicts of interest captured attention again in 2000 when USA Today published an investigation that found that more than half of the advisors to the Food and Drug Administration (FDA) had financial relationships with pharmaceutical companies with interests in FDA decisions. Industry denied that these relationships created a problem and the FDA kept many of the financial details secret.

A separate study found that companies funded almost one of every three manuscripts published in the NEJM and JAMA. Experts concluded that financial conflict of interest “is widespread among the authors of published manuscripts and these authors are more likely to present positive findings.” 

In retrospect, 2000 was a watershed year at JAMA. That year, the journal published a series of editorials examining the pharmaceutical industry’s growing influence over physicians and called for barriers to protect medicine from corporate corruption. One editor noted that the industry’s cultivation of physicians began in the first year of medical school, when students received gifts from pharmaceutical companies.

“The enticement begins very early in a physician’s career: for my classmates and me, it started with black bags,” she wrote. The editor referenced one study which found that pharmaceutical companies fund purportedly “independent physicians” and that those academics were more likely to present positive findings. 

A steady trickle of research in the 2000s continued documenting widespread conflicts of interest which eroded scientific integrity, and explored disclosure as a primary tool for remediation. However, one study discovered that barely half of biomedical journals had policies requiring disclosure of conflicts of interest. Research also noted that companies appeared to be sponsoring studies as a tool to attack competitors’ products and that these studies were likely being funded for commercial, not scientific, reasons.

Management of conflicts of interest remained erratic, and a systematic review of journals found that they were increasingly adopting disclosure policies, but those policies varied widely across disciplines, with medical journals more likely to have rules. In response to that environment, the Natural Resources Defense Council convened a meeting and released a report on strengthening conflict of interest rules at journals. 

Government investigations in the mid-to-late 2000s forced more biomedical conflicts of interest scandals onto the public stage. After the Los Angeles Times reported that some researchers at the National Institutes of Health had lucrative consulting agreements with industry, Congress held hearings, resulting in a tightening of conflict of interest policies for NIH employees. Federal investigations also began to force drug companies to disclose their payments to physicians on publicly available websites as part of corporate integrity agreements. 

Merck’s Vioxx scandal threw a spotlight on the pharmaceutical industry’s abuse of medical research in 2007. Documents made public during litigation found that Merck transformed peer-reviewed research into marketing brochures by ghostwriting studies for academics who rarely disclosed their industry ties.

Analyzing published articles, information Merck provided to the Food and Drug Administration, and Merck’s internal analysis, researchers found that Merck may have misrepresented the risk–benefit profile of Vioxx in clinical trials and attempted to minimize mortality risk in reports to the FDA. For one trial, company documents revealed that the lack of a data and safety monitoring board (DSMB) may have endangered patients. 

Lest anyone think that Merck was somehow unique in its behavior, a JAMA editorial accompanying the papers referenced similar actions by other companies. “[M]anipulation of study results, authors, editors, and reviewers is not the sole purview of one company,” the editorial concluded.

In 2009, the Institute of Medicine (IOM) examined financial conflicts of interest in biomedicine, including research, education, and clinical practice. The IOM reported that companies paid large, undisclosed amounts to doctors to give marketing talks to colleagues, and that sales representatives provided gifts to physicians that influence prescribing. Clinical research with unfavorable results was sometimes not published, distorting the scientific literature for drugs prescribed for arthritis, depression, and elevated cholesterol levels.

In one example, negative studies about depression medications were withheld, causing a meta-analysis of the literature to find the drugs were safe and effective. A second meta-analysis that included the formerly withheld data found that risks outweighed benefits for all but one antidepressant. 

A fair reading of the IOM’s report would cause any reader to conclude that conflicts of interest are pervasive throughout medicine, corrupt academia, and sometimes lead to patient harm. One expert has argued that policies to stop bias and corruption have been completely ineffective, requiring nothing less than a paradigm change in medicine’s relationship with industry. Still, some research has found that the public remains largely unconcerned about these matters.

Perpetual Denial Machine 

The defensive response by academics to the National Academy’s first conflict of interest policy in 1971 and to the regulations proposed by the National Institutes of Health in 1990 remains common to this day. Every attempt to control financial conflicts of interest and push for greater transparency in science has been criticized by the scientific community, which seems perpetually satisfied with whatever ethics rules happen to be in place. 

For example, the NIH’s 1990 proposed guidelines were roundly denounced by the scientific community, resulting in gentler guidelines that allowed universities to self-regulate. Even with these weakened rules, a researcher later wrote, “At the present time, federal employees working in federal laboratories are constrained by numerous conflict of interest restrictions.” Because of this perceived harshness, the NIH director eased ethics policies for NIH employees in 1995 to increase recruitment of top scientists by allowing federal workers to consult with industry. 

Rolling back these rules led to inevitable scrutiny in the form of a 2003 investigation by the Los Angeles Times that uncovered senior NIH scientists consulting with pharmaceutical companies, with one researcher later prosecuted by the Department of Justice. Congressional hearings and internal investigations then forced the NIH to introduce more stringent ethics rules for employees that restricted stock ownership and consulting with pharmaceutical companies.

Announcing the new restrictions, the NIH Director stated a need to “preserve the public’s trust” and address public perceptions regarding conflicts of interest. But as earlier, some scientists saw this second round of rules as punitive and overly restrictive, arguing that it would negate the agency’s ability to recruit top scientists. 

Indeed, academics persisted in involving themselves in research that tested their own company’s products on patients. In 2008, the Senate Finance Committee discovered that a Stanford University researcher had $6 million in equity in a company and was the primary investigator for an NIH grant that funded patient research on his company’s drug. Stanford denied any wrongdoing while also retaining a financial interest in the company. The NIH later terminated the clinical trial. 

Investigations by the Senate Finance Committee also uncovered numerous examples of academics failing to report financial ties to pharmaceutical companies when receiving NIH grants. This led to reforms that required stronger conflicts of interest rules for NIH grantees and passage of the Physician Payments Sunshine Act. The Sunshine Act, which I helped to write and pass, required companies to report payments to physicians, and the law has been replicated in many other countries. 

Despite the legislative success, the welcome in academia has been colder. In one example, Tufts University disinvited me from appearing at a conference on conflicts of interest held on its campus, which led one conference organizer to resign. Since these changes were implemented, industry and academia have attempted to roll back both the provisions of the Sunshine Act and the new NIH rules.

The Food and Drug Administration has had equally erratic responses to conflicts of interest. In 1999, a gene transfer experiment at the University of Pennsylvania killed volunteer patient Jesse Gelsinger. Both the investigator and the institution had financial interests in the tested product. The FDA then instituted more stringent conflicts of interest disclosure requirements for researchers and forbade those dealing with patients from holding equity, stock options, or comparable arrangements in companies sponsoring the trial. 

“So, my son, doing the right thing, was killed by a system and people rife with conflicts of interest, and real justice has been found to be very lax. It’s essentially business as usual,” Gelsinger’s father later wrote.

Driven in part by the Vioxx scandal, the FDA commissioned a 2006 study by the Institute of Medicine. That report found excessive conflicts of interest on the FDA’s expert advisory panels which review new drugs and devices. The report recommended that a majority of the panelists should have no ties to the industry. “FDA’s credibility is its most crucial asset, and recent concerns about the independence of advisory committee members… have cast a shadow on the trustworthiness of the scientific advice received by the agency,” the report concluded. 

In 2007, Congress responded by passing a law updating the Food, Drug, and Cosmetic Act that placed more stringent requirements on how the FDA handled conflicts of interest. In classic fashion, a senior FDA official later protested that the rules were harming the agency’s ability to find qualified experts for advisory panels.

These claims were rebutted in a letter to the FDA Commissioner, citing evidence that nearly 50 percent of research academics have no ties to industry and that approximately one-third of these researchers are full professors. Nonetheless, the FDA outcry appeared effective, and when Congress updated FDA legislation in 2012, the new law removed the previous demands that the FDA tighten control of financial conflicts of interest. 

Even the journals themselves have joined the receding tide in handling conflicts of interest. After implementing the first conflict of interest policy in 1984, the NEJM updated its policies in 1990, prohibiting the authors of editorials and review articles from having any financial interests with a company that could benefit from a drug or medical device discussed in the article.

The new rules created a firestorm of protest, with some calling them “McCarthyism” and others referring to them as “censorship.” Eventually, the rules were weakened. Under a new editor in 2015, the NEJM published a series of essays that sought to deny that conflicts of interest corrupt science. 

Finally, another avenue for disclosing hidden conflicts of interest between industry and public scientists is through open records requests. Federal or state freedom of information laws enable investigative journalists and others to request documents relating to the publicly funded activity of many kinds, including scientific research. But in recent years, those laws have come under attack by the Union of Concerned Scientists and some members of the scientific community. Experts on freedom of information laws have dismissed these efforts as misguided, with one scholar referring to them as “gibberish.” 

Even if compliance with current public records laws remains intact, the number of journalists using this tool is not large and is declining. In recent years, many journalists have also gone to work for the industries they once reported on. And like medicine, journalism has struggled with conflicts of interest problems, with most media outlets lacking clear policies for both reporters and the sources they cite.

The Physician Payments Sunshine Act has been used to uncover doctors who are also reporters and who have received compensation from the pharmaceutical industry. And just as in science, the pharmaceutical, food, and biotech industries have secretly funded journalists to attend conferences on subjects that they cover in order to sway public perception. 

Endless Search for Solutions 

This brief history of financial conflicts of interest only attempts to examine the direct lineage that begins with tobacco, tracing it to modern problems in biomedicine. Other examples exist in which corporations sought to undermine scientific integrity for financial gain, but there is little evidence that those efforts continued into the future. History is important because it explains why these campaigns began, how they were implemented, and the tactics they deployed. 

Historical wisdom also makes clear that reform efforts are always opposed, erode over time, and are then implemented again in the face of new scandals. As I was writing this chapter, the National Academies were implementing new conflicts of interest rules to deal with scandals involving two of their panels that had been stacked with academics who had ties to industry.

Additionally, the National Institutes of Health has been swept up in another controversy, with NIH officials soliciting donations from alcoholic beverage manufacturers to fund a $100 million study on the health effects of alcohol. The NIH later ended the partnership. The resulting criticism seems to have curbed the NIH from partnering with the pharmaceutical industry on a planned opioids research partnership worth roughly $400 million, in which industry would fund half the costs. 

The Institute of Medicine’s 2009 report noted that the current evidence base for conflict of interest policies is not strong and that more research on the matter could help guide future rules or regulations. Federal agencies have not leaped at this recommendation.

The judicial branch may be more promising. Federal settlements with drug companies have forced them to disclose their payments to physicians, and private litigation has uncovered documents showing bias in purportedly independent scientific studies. The Senate has proposed the Sunshine in Litigation Act, which would require judges to make public court documents showing that products might harm the public, but this law has not been passed.

Tiny advances continue: PubMed announced in 2017 that it would include conflict of interest statements with study abstracts, and research on the subject continues, even if the results are often ignored. Searching PubMed for the term “conflict of interest” in 2006, a researcher found 4,623 entries, with only 240 appearing before 1990 and well more than half after 1999. 

Most fixes for conflicts of interest involve some type of funding disclosure. But even these can be ineffective and distracting as disclosure does not resolve or eliminate the problem. Institutions must also evaluate and act on this information in ways that include eliminating the relationship or restricting a scientist’s participation in some activities. 

Yet, some experts still attempt to dismiss the problem with conflicts of interest by recasting the term as “confluence of interest.” Others trivialize the matter by elevating so-called “intellectual conflicts of interest” as similar in value. The Institute of Medicine carefully rejected such notions, stating, “Although other secondary interests may inappropriately influence professional decisions and additional safeguards are necessary to protect against bias from such interests, financial interests are more readily identified and regulated.” The IOM report concluded, “Such conflicts of interest threaten the integrity of scientific investigations, the objectivity of medical education, the quality of patient care, and the public’s trust in medicine.”

Many scientists are incapable of understanding and accepting that financial conflicts of interest corrupt science because they believe that scientists are objective and too well trained to be influenced by the financial rewards that sway all other human beings. In one example, researchers surveyed medical residents and found that 61 percent reported that they would not be influenced by gifts from pharmaceutical companies, while asserting that 84 percent of their colleagues would be influenced. One academic who researches conflicts of interest grew so irritated with scientists denying the science of financial influence that he wrote a parody for the BMJ that listed many of their most common denials. 

“What I find most frustrating is the extent to which leading physicians and scientists whose profession seems to require a commitment to some kind of evidence-based practice are unaware of the best evidence on motivated bias,” he wrote. “This literature is robust and well developed.” Indeed, it is time for scientists to stop being unscientific about the science on conflicts of interest and to cease substituting their personal opinions for peer-reviewed research. 

A wide range of other industries have carefully studied the tobacco industry playbook. As a result, they have come to better understand the fundamentals of influence within the sciences and the value of uncertainty and skepticism in deflecting regulation, defending against litigation, and maintaining credibility despite marketing products that are known to harm public health. “By making science fair game in the battle of public relations, the tobacco industry set a destructive precedent that would affect future debates on subjects ranging from global warming to food and pharmaceuticals,” scholars observed.

At the heart of the matter lies money. As far back as 2000, experts questioned the ability of academic institutions to regulate financial conflicts of interests when they were so reliant on billions of dollars annually from the industry. In a 2012 symposium on conflicts of interest held at Harvard Law School, academic leaders noted that the problem has only grown more and more complex over time. University leaders avoid even discussing the imperative to regulate financial conflicts because they fear losing revenue. 

Brave policymakers must intervene and develop rules to avoid future scandals and continued loss of confidence in science. Most importantly, they must protect the public. 

This essay originally appeared as a chapter in “Integrity, Transparency and Corruption in Healthcare & Research on Health.” The book provides an overview of the healthcare sector and its struggle for effective corporate governance, and features essays by leading academics and journalists who detail cutting edge research and the real-world experiences of professionals.




Author

  • Paul Thacker

    Paul D. Thacker is an Investigative Reporter; Former Investigator United States Senate; Former Fellow Safra Ethics Center, Harvard University

