Section 230 of the Communications Decency Act, a pivotal law shaping the modern internet, provides broad immunity to online service providers for third-party content. Enacted in 1996 as part of the Telecommunications Act, it has been both praised for fostering online innovation and criticized for potentially enabling harmful content. As debates over its scope and impact continue, policymakers and tech companies grapple with balancing free speech, content moderation, and platform accountability in the digital age.
The Communications Decency Act, which included Section 230, was enacted in 1996 to address concerns about online indecency and obscenity. While the Supreme Court struck down the CDA's anti-indecency provisions in Reno v. American Civil Liberties Union (1997), Section 230 remained intact. Early legal challenges established the broad scope of Section 230's protections. In Zeran v. America Online, Inc. (1997), the Fourth Circuit Court of Appeals rejected attempts to narrow the immunity, ruling that Section 230 barred liability even when a provider had notice of allegedly defamatory content. Other courts widely adopted this interpretation, solidifying Section 230's role in shaping internet law and fostering the growth of online platforms.
The core of Section 230 lies in two key provisions. Section 230(c)(1) states that "no provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider." This shields online platforms from liability for user-generated content. Section 230(c)(2) offers "Good Samaritan" protection, allowing providers to moderate content in good faith without fear of legal repercussions. Courts have interpreted these provisions broadly, applying them to both publishers and distributors and covering a wide range of online services beyond social media platforms.
Recent years have seen intensified debates over Section 230's scope and impact, with calls for reform coming from both sides of the political aisle. Critics argue that the law's broad protections enable the spread of misinformation and harmful content, while supporters maintain it is crucial for preserving free speech online. In May 2024, a bipartisan proposal was introduced to sunset Section 230 by December 31, 2025, aiming to pressure tech companies and lawmakers to collaborate on a new legal framework. This proposal reflects growing concerns about the law's applicability in the age of artificial intelligence and the increasing power of large tech platforms.
While Section 230 provides broad immunity, it does contain specific exceptions. These carve-outs limit protection in cases involving:
Federal criminal law
Intellectual property law
State laws consistent with Section 230
Certain privacy laws applicable to electronic communications
Federal and state laws relating to sex trafficking
The most notable exception was introduced by FOSTA-SESTA in 2018, which removed immunity for content related to sex trafficking. These exceptions allow for legal action against online platforms in specific circumstances, balancing the need for innovation with accountability for certain types of harmful content.