Transatlantic Perspectives: Content Regulation, EU Digital Services Act and US Section 230

To better understand the European and American approaches to content regulation, we interviewed two distinguished experts: Florence G’sell, Digital, Governance and Sovereignty chair at Sciences Po, and Anupam Chander, professor of Law and Technology at Georgetown Law.

In recent years, both the European Union (EU) and the United States (US) have acted to redefine the relationship between tech platforms and society. The EU passed the Digital Services Act (DSA), while the US Supreme Court has examined cases relating to Section 230. At the end of March, the McCourt Institute will publish a governance brief on the decision-making processes and expected outcomes of the DSA and Section 230. This brief is informed by ongoing discussions with our academic partners, policymakers, and industry leaders, including during a roundtable on March 10th, 2023.

How did we get to this watershed moment? Can you give a historical perspective on the evolution of platform liability in the EU with the DSA and in the US with Section 230?

Florence G’sell: In the EU, the E-Commerce Directive 2000/31/EC, adopted in 2000, stipulated that providers of ‘hosting’ services cannot be held liable for the content they store for their users unless they obtain knowledge of its illegality and fail to act expeditiously to remove it. However, the E-Commerce Directive was not designed to regulate social networks as they exist today. As social networks have expanded, platforms have implemented content curation and moderation strategies and have increasingly relied on algorithmic tools to do so. But these strategies have proven insufficient in the face of the proliferation of toxic and even dangerous content, leading the EU to intervene.

After having long favored soft law and non-binding principles, the EU changed its strategy in the course of 2020 and opted for a genuine reform of the provisions of the E-Commerce Directive. The purpose of the DSA was never to abandon the principle of exemption from liability that has been enshrined in EU law for more than 20 years. The drafters of the DSA have simply rewritten and clarified this exemption principle, while adding new obligations on providers designed to ensure both an effective fight against illegal content and effective respect for users’ rights. These are the principles found in the final text of Regulation (EU) 2022/2065 of 19 October 2022 on a Single Market for Digital Services, which came into force on 16 November 2022.

“The purpose of the DSA was never to abandon the principle of exemption from liability that has been enshrined in EU law for more than 20 years. The drafters of the DSA have simply rewritten and clarified this exemption principle, while adding new obligations…”

– Florence G’sell

Anupam Chander: In June 1992, Bill Clinton, then Governor of Arkansas, selected for his Vice-Presidential running mate Senator Al Gore. This may seem an odd place to begin, but this was a fateful choice. It put someone who had been championing a national information network in the White House. Clinton had already begun promoting the idea of a “New Economy” that would support information industries. One of the cornerstones of the Clinton-Gore approach was not picking winners and losers, but the government making sure that there would be minimal regulatory barriers for commercial uses. In 1996, the White House published a draft white paper on “Global Electronic Commerce,” which declared that “[g]overnments should avoid undue restrictions on electronic commerce.”

That same year, Congress realized that vision through Section 230, a broad provision of the Communications Decency Act, which was itself a part of the Telecommunications Act of 1996. Section 230 was titled, “Protection for ‘Good Samaritan’ blocking and screening of offensive material,” but its crucial protection was not actually contingent on “Good Samaritan” behavior (though there are efforts to restrict its application in this way). Recognizing that the internet offered a “forum for a true diversity of political discourse,” that statute declared its goal “to promote the continued development of the Internet and other interactive computer services and other interactive media,” and “to preserve the vibrant and competitive free market that presently exists for the Internet and other interactive computer services, unfettered by Federal or State regulation…” In 1997, the New York Times summarized the Clinton Administration’s simple mandate for the internet: “Let a thousand Web sites bloom.”

Section 230 would prove even more successful in spurring internet innovation than its sponsors could have imagined. As I wrote in 2014, Section 230 was one of the key laws that “made Silicon Valley.” Another key pillar was the Digital Millennium Copyright Act’s Title II, which provided safe harbors for internet providers from copyright infringement claims.

But Section 230 is not a get-out-of-jail-free card, because it leaves open many claims targeting the platforms’ own behavior where the alleged harm does not emanate from the speech of others. If the interactive computer service develops the wrongful speech, it will lose the immunity. For example, when the website Roommates.com asked users to specify their age, sex, sexual orientation, and whether they had children, and then matched people according to their preferences along these lines, it developed the discriminatory speech and thus could not avail itself of Section 230 immunity. Courts have allowed claims against Amazon for defective products, imposed obligations on StubHub to collect taxes from ticket resellers, enforced ordinances prohibiting Airbnb from booking unlicensed rentals, and required internet companies to comply with the Fair Credit Reporting Act.

The U.S. Supreme Court had never taken a Section 230 case until it did so, somewhat unexpectedly, in 2022. That case, Gonzalez v. Google, prompted an outpouring of amicus briefs, many from internet companies large and small seeking to defend their liability shield. (I helped write the amicus brief from Internet Law Scholars, defending the application of the immunity to targeted recommendations.) And this year, the Supreme Court is expected to take up two cases brought by the industry association NetChoice against Florida and Texas social media laws, which seek to limit platforms’ power to remove speech. After a quarter-century of relatively liberal rules for internet platforms in the United States, we are seeing a significant regulatory push.

“After a quarter-century of relatively liberal rules for internet platforms in the United States, we are seeing a significant regulatory push.”

– Anupam Chander

How does the DSA, in its elaboration and nascent implementation, compare to the debate around Section 230 in the US?

Florence G’sell: Section 230 of the Communications Act of 1934 (47 U.S.C. §230) provides for an extremely broad exemption from liability compared to what we have in Europe. It is not, as in EU law, an exemption based on the absence of knowledge of unlawful content, but an absolute exemption for all content posted by third parties. It is interesting to note that the “Good Samaritan” principle (section 230(c)(2)(A)) was included in the DSA during the negotiations at the request of the major platforms. Article 7 of the DSA thus provides that platforms do not lose the benefit of the exemption from liability when they engage, “in good faith and in a diligent manner,” in investigations to identify and remove illegal content.

It is very difficult to compare the two, if only because the exemption from liability is more limited in the EU than in the US. Still, two major points must be highlighted. First, the DSA provides for a principle of limited exemption but also imposes obligations intended to ensure an effective fight against illicit content. Moreover, it is designed to operate in conjunction with national laws that define illegal content, and many European countries, such as France, have legislation that criminalizes hate speech and even, in some cases, disinformation. This is not at all the case in the US, where freedom of expression is understood very broadly on the basis of the First Amendment.

Second, the procedural safeguards provided by the DSA, which are intended to ensure that users are informed of moderation decisions affecting them, the reasons for them, and the possibility of challenging them, are debated in the US as to their compatibility with the First Amendment. It is usually considered that internet service providers have the constitutional right, as private entities, to choose the content they wish to host or publish, and thus to moderate content at their discretion. Requiring platforms to give reasons for their moderation decisions, or even to reverse such decisions, could run counter to this constitutionally guaranteed right. This debate is currently playing out over Florida’s SB 7072, which provided, among other things, that platforms must notify users of their moderation decisions and explain them. This law was struck down by the 11th Circuit, but the case could be reviewed by the Supreme Court (Moody v. NetChoice).

How future-proof are these regulations considering rapid technological innovations, including the rise of immersive environments like the metaverse, Web 3, decentralization, and mass AI-generated content?

Florence G’sell: The issue of decentralization is quite difficult for lawyers to grasp because it raises the question of the applicable law and the competent jurisdictions. This question has been raised since the beginning of the Internet and is now being raised again in the context of networks whose nodes are difficult to locate. My impression is that decentralized environments will be regulated more by contract than by statute, and that ad hoc dispute resolution tools will increasingly be developed for them.

“… Decentralized environments will be regulated more by contract than by statute, and ad hoc dispute resolution tools will increasingly be developed for them.”

– Florence G’sell

Still, the existing European regulations are intended to be applied or adapted to these new environments. Highly immersive environments, for example, raise renewed questions in terms of personal data protection but will definitely be governed by the General Data Protection Regulation (GDPR). It will be necessary to consider how to protect our emotional data or how to manage the risks of manipulation or mass surveillance, but a framework already exists. This framework is currently evolving with the negotiations on the AI Act, which are taking into consideration the development of extended reality and the issues raised by generative tools like ChatGPT.

The DSA is not particularly suited to decentralized or distributed environments, for which it will be difficult to identify a single administrator and within which there will probably not be any “Very Large Online Platform” as defined by the DSA (above 45 million users). In addition, the DSA may not be adapted to immersive environments, in which it will be up to those controlling the architecture to combat violent or deviant behavior that may not be manifested in written or verbal form. It will nonetheless be possible to keep existing principles and adapt their implementation, for example the rule that administrators of immersive spaces must inform users of the reasons that led them to suspend the use of an avatar. It should also be remembered that the DSA acknowledges, for the first time, the importance of “terms of use” in content moderation. The DSA thus imposes transparency requirements on those terms and allows European and national regulators to make recommendations on their content. The DSA creates the conditions for a dialogue between regulators and tech companies that will prove decisive as new environments develop in which contract, much more than statute, will serve to regulate online behavior.

Anupam Chander: Whether the current regulatory framework for liability can be adapted to new technologies remains an open question. Section 230 immunity is lifted for any content developed by the platform itself. This means that its applicability depends very much on what the company is doing in the specific case. When the AI generates the content, Section 230 is likely unavailable. Chatbots may face liability risks, as may search engines that purport to tell you the answer. Companies offering such services may seek to use contractual disclaimers to avoid liability. Indeed, some of these companies warn users not to trust the output of their own AI systems. Many of these companies also indicate that the services are in “beta” testing mode.

“When the AI generates the content, Section 230 is likely unavailable. Chatbots may face liability risks, as may search engines that purport to tell you the answer.”

– Anupam Chander

Web3 companies should be able to access Section 230’s liability shield when they are simply providing technologies for communication and not developing the communications themselves. Decentralized platforms like Mastodon should also be able to obtain Section 230’s protections, in part because 230 protects not only platforms but also users. Metaverse platforms should generally also be protected for carrying the speech of others, just as Facebook is immunized from claims based on the speech of its users.

Florence G’sell holds the Sciences Po Digital, Governance and Sovereignty Chair and is a Professor of private law at the University of Lorraine.

Anupam Chander is the Scott K. Ginsburg Professor of Law and Technology at Georgetown University Law Center.
