Section 230 of the Communications Decency Act is one of the shortest and simplest pieces of tech-policy legislation you'll ever read, but even some of the staunchest advocates of "CDA 230" are showing signs of disagreement over how to apply it to generative AI outputs.
The 26 words in the 1996 statute provide online platforms with civil immunity for what their users post. "No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider," it says. That assumes a clean divide between user and service, but the world of user-prompted gen-AI chatbots is already untidy and getting messier all the time.
So, a Cato Institute conference in Washington this week to mark 230’s 30th anniversary mostly yielded a two-word answer for how to parse those 26 words in the age of AI: “It depends.”
The most restrictive perspective came from one of the law’s co-authors, Sen. Ron Wyden (D-Ore.). After saying 230 would cover “forums that host AI-created works as well as the AI-based systems for organizing content,” he said that AI app developers are not entitled to that protection.
“We’ve been very clear that 230 should not protect AI companies over claims about the content they generate,” Wyden said in a video interview with Cato Senior Fellow Jennifer Huddleston. “Courts have made it clear that where a company plays a large role in the creation of content, like image or text generation, that is not to be protected by Section 230.”
In a Jan. 5 post on Bluesky, Sen. Wyden endorsed applying that principle to the most notorious image-creation chatbot, X’s people-undressing Grok: “I’ve long said AI chatbot outputs are not protected by 230 and that it is not a close call.”
‘We Don’t Want Zero Liability’
Subsequent speakers at the libertarian think tank’s event, however, did not seem ready to sign on to the senator’s reading of his own legislation. The panel on “Section 230 and emerging content moderation strategies and AI” that closed out the conference instead emphasized the human role in gen AI output.
“I appreciate Senator Wyden, and I very much appreciate Section 230, but I think that the answer is a little bit more complicated,” said Jess Miers, an assistant professor of law at the University of Akron School of Law.
Miers, who has a Section 230 tattoo on her arm, pointed to the human speech that animates a chatbot, which starts when “developers get together in a room, and they decide what is our model going to do?”
She compared an AI model to a book or a movie—"the output or the product of a lot of intentional editorial thinking"—and warned that excluding gen AI altogether from 230 would leave many automated AI functions vulnerable to lawsuits from critics alleging various civil offenses. "If we just arrive at this blanket case where Section 230 doesn't apply to generative AI, I think all of those activities start to fall out the window," she said.
A second speaker on the panel, however, wasn’t willing to go that far in applying 230’s protections to AI. “We want there to be cases, I think, where perpetrators are held liable,” said Matt Perault, head of AI policy at VC firm Andreessen Horowitz. “We don’t want zero liability.”
He didn’t say where to draw that line, aside from arguing that laws governing AI should focus on “regulating harmful use” rather than prescribing particular development methods, as some state laws already do. But Perault did observe that varying rulings from judges would make any such line blurry. “If even some judges think Section 230 doesn’t protect generative AI, we’re in a world where potentially there’s punitive liability for generative AI,” he said.
Mike Masnick, CEO and founder of the tech-policy site Techdirt, picked up on that thought.
"Whether or not 230 applies specifically to AI is almost beside the point," he said. "The question is really, can we have a regime that enables AI to exist in a useful way that is also plural? It's not just driven to a few large companies, which I think most of us agree is problematic."
The fourth panelist—representing a small social platform notable for downplaying AI—said the problem with getting sued is not losing but winning at a high cost, which CDA 230 safeguards against by allowing an early dismissal of litigation.
“The uncertainty by itself can threaten to crush the small players,” said Matt Reeder, head of legal at Bluesky. “They can’t afford even the cost of winning those lawsuits.” Acknowledging later that he essentially serves as the legal department for that decentralized social network, Reeder expressed confidence that 230’s protections remained “relatively stable” for Bluesky.
Grok (Probably) Can’t Hide Behind Section 230
Somehow, nobody had mentioned Grok in this discussion, so when the audience Q&A began, I had to bring up the X chatbot that has been repeatedly abused to sexualize pictures of women and girls.
Miers began by saying that actual child sexual abuse material is already illegal and would never qualify for 230 immunity: “Grok or X could be prosecuted for violating federal criminal law.”
But AI output that doesn’t break the law is another thing. “How do we get to the output in the first place?” Miers asked. “What’s the role of the user?”
Especially, she said, in cases of prompt engineering: “The user is sort of imbuing these models or training these models on information that the models otherwise would not produce.”
Masnick agreed: "So many of these discussions erase the user from the equation." And one of 230's key concepts is holding users accountable for their own conduct. If somebody defames you on an online forum, CDA 230 tells you to sue them, not the forum.
But this is a complex situation in which AI creates new capabilities, and Masnick could not avoid returning to a certain two-word phrase: “It depends very much on the context.”
Rob Pegoraro writes about interesting problems and possibilities in computers, gadgets, apps, services, telecom, and other things that beep or blink. He’s covered such developments as the evolution of the cell phone from 1G to 5G, the fall and rise of Apple, Google’s growth from obscure Yahoo rival to verb status, and the transformation of social media from CompuServe forums to Facebook’s billions of users. Pegoraro has met most of the founders of the internet and once received a single-word email reply from Steve Jobs.