Meta and Google Face Mounting Legal Pressure as Court Cases Chip Away at Section 230

For three decades, Section 230 of the Communications Decency Act has served as the bedrock legal protection for internet platforms, shielding companies like Meta and Google from liability over content posted by their users.


That protection is now being tested in courtrooms across the country, as plaintiffs' attorneys have developed targeted legal theories specifically designed to circumvent it — and in several high-profile cases, they are winning.


In a landmark verdict last week in Los Angeles, a jury found Meta and Google's YouTube negligent in a personal injury case that alleged the platforms had intentionally engineered addictive features — including autoplay, recommendation algorithms, and notification systems — that caused serious mental health harm to a minor.


It was the first time a jury held social media companies liable for the specific design of their products, rather than for the content hosted on them. Separately, a jury in New Mexico found Meta liable in a case involving child safety. The financial penalties so far are limited — less than $400 million combined — but the precedents the verdicts establish could reverberate across the industry for years.


A third case, filed in the days following those verdicts, takes aim at Google's AI Mode — the company's AI-powered search feature that generates its own summaries and links.


Victims of Jeffrey Epstein filed a class action lawsuit alleging that Google's AI Mode surfaced their personal identifying information, including names, phone numbers, and email addresses, in a way that exposed them to harassment and threats.


The plaintiffs' attorneys argue that because Google's AI is generating its own content rather than passively indexing third-party information, it falls outside the traditional Section 230 framework. "This is AI mode coming up with its own content," one attorney told CNBC. "That's something that's not been explored very thoroughly by the courts."


The common thread across these cases is a deliberate effort to route around Section 230 by targeting how platforms are designed rather than what content they host. Legal experts say the strategy has been years in the making.


"The plaintiffs' bar is winning the war against section 230 through systematic, relentless litigation that is causing divots and chinks in its protection," said Eric Goldman, a law professor at Santa Clara University. Both Meta and Google said they plan to appeal last week's verdicts, but the legal trajectory is becoming harder to dismiss as isolated setbacks.


The stakes are unusually high at this particular moment. As social media companies lean further into AI — feeding recommendation systems with more powerful models, generating more of their own content, and deploying AI-powered features that increasingly blur the line between platform and publisher — the legal exposure created by these verdicts is likely to grow, not shrink.


For the technology industry's largest players, the coming wave of appeals and potential Supreme Court cases could determine whether the legal architecture that enabled the modern internet survives the AI transition intact.

bottom of page