The Censors Are Coming for Mental Health

In today’s era of open access to information, any amateur can stuff any claim with enough sweetened pie charts and cherry-picked statistics to make any ideology appear appetizing. Truth has always been hard to come by, but nowadays it is further obscured by the roughly equal ability of anyone with WiFi to pontificate publicly. And then, a pandemic. When the stakes are high and lives are on the line, the blasé allowance of conflicting ideas suddenly becomes a liability. People will die without accuracy.

And so, as legitimate fear seeks the comfort of direction, a new way of talking about medical information appears. Attach a prefix, dis- or mis-, and good ideas shall trump the bad. In a utopian world where absolute truth is decipherable, we are surely obliged to separate fact from fiction. But in a corruptible world, it’s worth remembering that medical patients (though not psychiatric ones) are encouraged to seek a second opinion in matters of life and death. 

Human beings, no matter how credentialed, are fallible participants in the mysteries of life, and doctors institutionalized with narrow sets of knowledge can therefore make errors of judgment. Not because they’re evil, but because they’re limited. All of us, and our certainties, are subject to revision.

Given that, the question becomes, who is certain enough of their knowledge that they may damn medical information in prefixes for us all?

Major online content platforms have an answer. They defer to institutions authorized by government bodies, such as the Centers for Disease Control and Prevention and the World Health Organization. These elite bodies of experts provide sets of standards that demarcate medical truth from falsity, which a hodgepodge of third-party fact-checking organizations then rely upon to hunt down bad information across the web.

Now, in the old days, censorship meant blacklisting (which still happens), but in an internet age where liabilities for unfairness are more visible to the public square, online companies more often engage in a soft censorship—allow the dissenter to speak, but lessen the chances they’ll be heard. As Facebook puts it, “Each time a fact-checker rates a piece of content as false, Facebook significantly reduces the content’s distribution so that fewer people see it…and we show strong warning labels and notifications to people who still come across it, try to share it or already have.”

Perhaps you believe that demoting bad medical information during a pandemic is a necessary strategy for saving lives. Surely there’s a compassionate case to be made that the common good is more sacrosanct than an individual’s liberty to vibrate their vocal cords in whatever contortions they want, wherever they want, no matter the ruin. The trouble is, new powers of authority rarely contain themselves. Instead, incrementally, they parasitize new territories.

So I was unfortunately unsurprised to see the New York Times—paper of record—publish an opinion piece titled “Joe Rogan Is a Drop in the Ocean of Misinformation.” The authors, who worked on the imperiously named Global Commission on Evidence to Address Societal Challenges, insist we are living in a manipulated marketplace where specious cures for anything and everything find their way far too easily into ailing bodies. Their solution: the soft censorship of not just pandemic unorthodoxy, but bad information across all medical fields.

We must, they propose, regulate flows of information to ensure that whatever medical advice we encounter online is best for us. Of course, they fail to mention who will lead that discernment, but we can hazard a guess they’d prefer a cosmopolitan run-of-the-mill MD over your village witch, a psychiatrist over their client.

Let us apply these authors’ suggestions to mental health, now that the field has graduated in the public eye into a bona fide hard science worthy of the designation “medical.” How might the downgrading of dissent in mental health affect access to knowledge?

Imagine a Facebook group called “Coming Off Antipsychotics,” thousands of members strong. A commenter claims antipsychotics cause brain damage, maybe coaches another member restrained by a court order on how to quit taking them without getting caught. Now imagine that group in the censorious crosshairs of fact-checkers following standards set by major psychiatric institutions.

Indeed, to a profession that regularly uses coercion and force to keep clients medicated, any information that dissuades clients from treatment is hazardous. This is why, for instance, a peer support worker in a conventional setting might be eagerly invited to share their recovery process when it follows protocol, but discouraged when it includes non-compliance: Saying “I got better when I accepted my illness, went to group, and found the right med” is much preferred by the authorities to “I got better when I ditched Haldol, took up kratom and weed, hooked into poker night at the local bar, and joined a cult that worships Bastet, the ancient cat goddess.”

I fear a public health approach to so-called mental illness in the internet age will soon entail demoting online talk of violating treatment. To get rolling, all that’s needed is one incident in which a member of that aforementioned Facebook group quits medications and acts dangerously in public view; force-supporting organizations lie in wait, ready to capitalize on the public’s fear.

And let’s be honest, when prefixes land on mental health information, they’re going to tag alternative modalities like Reiki, claims about the damage done by shock treatment, unconventional theories of causation, criticism of diagnoses as bogus constructs, folksy herbal cures, and so on. Never mind that my own saving grace has been the renegade psychiatric survivor movement, wherein I’ve met others who speak on their own terms, who’ve helped me clarify mine, who’ve never read me through a hospital note but asked that I narrate my reality instead.

“Health misinformation,” such as the kind that challenges psychiatric orthodoxy, “is a serious threat to public health,” proclaims the US Surgeon General. “It can cause confusion, sow mistrust, harm people’s health, and undermine public health efforts. Limiting the spread of health misinformation is a moral and civic imperative that will require a whole-of-society effort.”

“Limiting the spread.” Apparently, misinformation is now a virus capable of inoculating vulnerable hosts with discursive toxins that “undermine” public health. The task upon us is “moral,” and we do our “civic” duty when ensuring people accept that doctor knows best.

For what it’s worth, Facebook’s parent company, Meta, welcomes prefixes on bad information. As Joseph Bernstein notes in his illuminating article, “Bad News: Selling the Story of Disinformation,” these companies’ bottom line, always cash, is not threatened by framing the problem as one of information itself. Such myopia keeps the trust-busters, who could use anti-monopoly powers to weaken social media’s sway, at bay, while allowing propaganda-producing algorithms to remain opaque to regulation and consumer control.

More importantly, it strategically obfuscates the structural reasons why people gravitate toward bad information—their economic lives are ruined, their communities have fallen apart, their religions are disintegrating, healthcare is bankrupting their families, drugs are destroying their neighbors, and their traditions are losing meaning. In the midst of such politically induced rot, people quite reasonably distrust institutions and their sneering spokespeople, who lied to them about WMDs, the 2008 financial crisis, the return of good jobs, the addictive nature of opioids, and on and on it goes.

So let me end with an anecdote—the mark of unscientific knowledge—for I’ve tasted my own flavor of rot: that of my body, decaying in autoimmune disease. When my spine was so bitten that I could no longer bend over to pull up my socks, I too did something crazy (as pain will have you do). I sat down at my computer, googled “Ankylosing Spondylitis natural pain relief,” and through a series of meandering clicks, headed ever further into an unguarded dungeon where risky potions lie. Eat poop? Get bitten by the Mexican bark scorpion?

Nah, I settled on an industrial solvent, a purely chemical byproduct of large-scale wood manufacturing. Even though the product’s intended use as a skin application was deemed dangerous by credentialed sources, I went further. I popped open the cap, recalled my halcyon days with Mr. Jack Daniel’s, flipped my head back, and gulped down a bitter shot. Like everything else, authorized or not, it didn’t take away the pain. But I felt a tingling sense of pride, maybe a little free. The Surgeon General would’ve been horrified.



Published under a Creative Commons Attribution 4.0 International License
For reprints, please set the canonical link back to the original Brownstone Institute Article and Author.

Author

  • Steven Morgan

    Steven Morgan has worked in mental health peer support since 2005. Beginning in 2013, he worked for seven years with Intentional Peer Support as an international trainer and Operations Manager.


