
You Should Be Very Worried About the Digital Services Act


Article 11 of the EU Charter of Fundamental Rights, which replicates part of Article 10 of the European Convention on Human Rights, protects the right of European citizens to “hold opinions and to receive and impart information and ideas without interference by public authority and regardless of frontiers,” and affirms that “the freedom and pluralism of the media shall be respected.” Sadly, the fate of freedom of expression in Europe now hangs very much in the balance, as the European Union has just enacted a law that empowers the Commission to significantly restrict the ability of citizens to use digital platforms to engage in robust and sincere democratic discourse.

Under the recently enacted Digital Services Act, the European Commission may apply significant pressure to digital platforms to curb “hate speech,” “disinformation,” and threats to “civic discourse,” all of which are notoriously vague and slippery categories that have historically been co-opted to reinforce the narrative of the ruling class. By giving the European Commission broad discretionary powers to oversee Big Tech content moderation policies, this piece of legislation holds freedom of speech hostage to the ideological proclivities of unelected European officials and their armies of “trusted flaggers.”

Purpose of the Digital Services Act

The stated purpose of the Digital Services Act (DSA), which has just come into force in Europe, is to ensure greater “harmonisation” of the conditions affecting the provision of “intermediary” digital services, in particular online platforms that host content shared by their customers. The Act covers a bewildering array of issues, from consumer protection and the regulation of advertising algorithms to child pornography and content moderation. Among the purposes that appear in the wording of the Act are the fostering of “a safe, predictable and trustworthy online environment,” the protection of citizens’ freedom of expression, and the harmonisation of EU regulations affecting online digital platforms, which currently depend on the laws of individual Member States.

The DSA Is Not As Innocent As It Appears

At a superficial glance, the Digital Services Act (DSA) might look rather innocuous. It places largely procedural requirements on “very large online platforms” such as Google, Twitter/X, Facebook, and TikTok to have clear appeals procedures and to be transparent about their regulation of harmful and illegal content. For example, Recital 45 of the Act reads as a fairly light-touch requirement that providers of online digital services (“intermediary services”) keep customers informed of their terms and conditions and company policies:

Providers of the intermediary services should clearly indicate and maintain up-to-date in their terms and conditions the information as to the grounds on the basis of which they may restrict the provision of their services. In particular, they should include information on any policies, procedures, measures and tools used for the purpose of content moderation, including algorithmic decision-making and human review, as well as the rules of procedure of their internal complaint-handling system. They should also provide easily accessible information on the right to terminate the use of the service.

But if you start to dig into the Act, you soon discover that it is poisonous to free speech and runs contrary to the spirit of Article 11 of the EU Charter of Fundamental Rights, which guarantees citizens the “freedom to hold opinions and to receive and impart information and ideas without interference by public authority and regardless of frontiers.” Below, I detail certain aspects of the Act that, taken together, pose an unprecedented threat to freedom of speech in Europe:

1. The DSA creates entities called “trusted flaggers” to report “illegal content” they identify on large online platforms. The Act requires online platforms to respond promptly to reports of illegal content submitted by these “trusted flaggers,” who are nominated by the “Digital Services Coordinators” appointed by each Member State. Large online platforms must “take the necessary measures to ensure that notices submitted by trusted flaggers, acting within their designated area of expertise, through the notice and action mechanisms required by this Regulation are treated with priority.”

2. Strictly speaking, while digital platforms are required to respond to reports of illegal content submitted by “trusted flaggers,” the wording of the Act suggests that platforms retain discretion over how exactly to act upon such reports. They might, for example, disagree with the legal opinion of a “trusted flagger” and decide not to take down flagged content. However, they will face periodic audits of their compliance with the Act, conducted by auditors working on behalf of the European Commission, and these reviews will hardly look favourably upon a pattern of inaction in the face of flagged content.

3. The Digital Services Act also requires “very large online platforms” such as Google, YouTube, Facebook, and Twitter to undertake periodic “risk mitigation” assessments, in which they address “systemic risks” associated with their platforms, including but not limited to child pornography, “gender-based violence” (whatever that means), public health “disinformation,” and “actual or foreseeable negative effects on democratic processes, civic discourse and electoral processes, as well as public security.” Platforms have “due diligence” obligations under the Act to take appropriate measures to manage these risks. This is no voluntary code of practice: opting out is not an option, and failure to comply with these “due diligence” obligations will attract hefty sanctions.

4. The sanctions attached to non-compliance with the Act are remarkable. If the Commission deems that a large online platform such as X/Twitter is not in compliance with the DSA, it may fine that platform up to 6 percent of its annual global turnover; for a platform with, say, €100 billion in annual worldwide turnover, that is a potential fine of up to €6 billion. Because the notion of non-compliance is vague and hard to quantify (what exactly is required in order to meet the “due diligence obligations” of systemic risk management?), it seems likely that companies wishing to avoid legal and financial headaches will err on the side of caution and put on a show of “compliance” to avoid getting fined.

5. The periodic audits envisaged by the Act will serve as a tool for the Commission to pressure large online platforms into “managing” the “risks” of disinformation and threats to “civic discourse and electoral processes,” risks that are notoriously vague and probably impossible to define in a politically impartial fashion. The threat lurking in the background of these audits and their associated “recommendations” is that the Commission may impose multi-billion-euro fines on online platforms for non-compliance. Because the notion of non-compliance with “due diligence obligations” is so vague, and because the financial sanctions threatened in the DSA are discretionary, the Act will create an atmosphere of legal uncertainty both for online platforms and for their users. It heavily incentivises online platforms to police speech around vague categories like “disinformation” and “hate speech” in a way that passes muster with the EU Commission, and this will obviously have repercussions for end-users.

6. According to the European Commission, “hate motivated crime and speech are illegal under EU law. The 2008 Framework Decision on combating certain forms of expressions of racism and xenophobia requires the criminalisation of public incitement to violence or hatred based on race, colour, religion, descent or national or ethnic origin.” It is important to point out that the EU Commission favours expanding the categories of illegal hate speech at a Europe-wide level to include not only “race, colour, religion, descent or national or ethnic origin,” but also new categories (presumably including things like gender identity). Illegal hate speech is thus a “moving target,” likely to become ever broader and more politically charged as time goes on. According to the European Commission’s own website:

On 9 December 2021, the European Commission adopted a Communication which prompts a Council decision to extend the current list of ‘EU crimes’ in Article 83(1) TFEU to hate crimes and hate speech. If this Council decision is adopted, the European Commission would be able, in a second step, to propose secondary legislation allowing the EU to criminalise other forms of hate speech and hate crime, in addition to racist or xenophobic motives.

7. The most disturbing aspect of the DSA is the enormous power and discretion it places in the hands of the European Commission (notably, an unelected commission) to oversee compliance with the DSA and to decide when online platforms have fallen short of their “due diligence obligations” to manage risks whose meaning is notoriously vague and manipulable, such as hate speech, disinformation, and threats to civic discourse. The European Commission is also giving itself the power to declare a Europe-wide emergency that would allow it to demand extra interventions by digital platforms to counter a public threat. There will be no legal certainty about when the EU Commission might declare an “emergency.” Nor is there any legal certainty about how the European Commission and its auditors will interpret “systemic risks” like disinformation and hate speech, or how they will assess the efforts of service providers to mitigate such risks, since these are discretionary powers.

8. Nor is it clear how the Commission could possibly undertake an audit of the “systemic risks” of disinformation, or of risks to civic discourse and electoral processes, without taking a particular view of what is true and untrue, salutary and harmful information, thus pre-empting the democratic process through which citizens assess these issues for themselves.

9. Nor is it clear which checks and balances will be in place to prevent the DSA from becoming a weapon for the EU Commission’s favourite causes, whether the war in Ukraine, vaccine uptake, climate policy, or a “war on terror.” The broad power to declare a public emergency and to require platforms to undertake “assessments” of their policies in response, combined with the broad discretionary power to fine online platforms for “non-compliance” with inherently vague “due diligence obligations,” gives the Commission a lot of leeway to lord it over online platforms and pressure them into advancing its favoured political narratives.

10. One particularly sneaky aspect of this Act is that the Commission is effectively making disinformation illegal through a backdoor, so to speak. Instead of clearly defining what it means by “disinformation” and making it illegal – which would probably cause an uproar – the Commission is placing a “due diligence” requirement upon large online platforms like Twitter and Facebook to take discretionary measures against disinformation and to mitigate “systemic risks” on their platforms (which include the risk of “public health disinformation”). Presumably, the periodic audits of these companies’ compliance with the Act would look unkindly on policies that barely enforced disinformation rules.

So the net effect of the Act will be to apply almost irresistible pressure on social media platforms to play the “counter-disinformation” game in a way that passes muster with the Commission’s auditors, and thus to avoid getting hit with hefty fines. There is a lot of uncertainty about how strict or lax such audits will be, and about which sorts of non-compliance might trigger financial sanctions. It is rather strange that a legal regulation purporting to defend free speech would place the fate of free speech at the mercy of the broadly discretionary and inherently unpredictable judgments of unelected officials.

The only hope is that this ugly, complicated, and regressive piece of legislation ends up before a judge who understands that freedom of expression means nothing if held hostage to the views of the European Commission on pandemic preparedness, the Russia-Ukraine war, or what counts as “offensive” or “hateful” speech.

P.S. Consider this analysis a preliminary attempt by someone not specialised in European law to grapple with the troubling implications of the Digital Services Act for free speech, based on a first reading. I welcome the corrections and comments of legal experts and of those who have had the patience to wade through the Act for themselves. This is the most detailed and rigorous interpretation of the DSA I have developed to date. It includes important nuances that were missing from my previous interpretations and corrects certain misinterpretations – in particular, platforms are not legally required to take down all flagged content, and the people who flag illegal content are referred to as “trusted flaggers,” not “fact-checkers.”

Republished from the author’s Substack




Author

  • David Thunder

    David Thunder is a researcher and lecturer at the University of Navarra’s Institute for Culture and Society in Pamplona, Spain, and a recipient of the prestigious Ramón y Cajal research grant (2017-2021, extended through 2023), awarded by the Spanish government to support outstanding research activities. Prior to his appointment to the University of Navarra, he held several research and teaching positions in the United States, including visiting assistant professor at Bucknell and Villanova, and Postdoctoral Research Fellow in Princeton University’s James Madison Program. Dr Thunder earned his BA and MA in philosophy at University College Dublin, and his Ph.D. in political science at the University of Notre Dame.

