Character.AI opens a back door to free speech rights for chatbots

By News Room
Published 10 May 2025 · Last updated 10 May 2025 at 6:58 AM

Should AI chatbots have the same rights as humans?

Common sense says no — while such a far-fetched idea might make for good sci-fi, it has no place in American law. But right now, a major tech company is trying to bring that idea to life, pressing a federal court to extend legal protections historically afforded to humans to the outputs of an AI bot.

Character.AI, one of the leading AI companion bot apps on the market, is fighting for the dismissal of a wrongful death and product liability lawsuit concerning the death of 14-year-old Sewell Setzer III. As co-counsel to Sewell’s mother, Megan Garcia, and technical advisor on the case, respectively, we’ve been following these motions closely and with concern. 

In a hearing last week, Character.AI zeroed in on its core argument: that the text and voice outputs of its chatbots, including those that manipulated and harmed Sewell, constitute protected speech under the First Amendment.

But… how? The argument is subtle — deftly designed to remain inconspicuous even as it radically reshapes First Amendment law. Character.AI claims that a finding of liability in the Garcia case would violate not its own speech rights, but its users’ rights to receive information and interact with chatbot outputs as protected speech. Such rights are known in First Amendment law as “listeners’ rights,” but the critical question here is: if this is protected speech, is there a speaker, or the intent to speak? If the answer is no, it seems listeners’ rights are being used to conjure up First Amendment protections for AI outputs that don’t deserve them.

Character.AI claims that identifying the speaker of such “speech” is complex and not even necessary, emphasizing instead the right of its millions of users to continue interacting with that “speech.” 

But can machines speak? Character.AI’s argument suggests that a series of words spit out by an AI model on the basis of probabilistic determinations constitutes “speech,” even if there is no human speaker, intent, or expressive purpose. This ignores a cornerstone of First Amendment jurisprudence, which says that speech — communicated by the speaker or heard by the listener — must have expressive intent. Indeed, last year four Supreme Court justices in the Moody case said the introduction of AI may “attenuate” a platform owner from its speech.

In essence, Character.AI is leading the court through the First Amendment backdoor of “listeners’ rights” in order to argue that a chatbot’s machine-generated text — created with no expressive intent — amounts to protected speech.


This defies common sense. A machine is not a human, and machine-generated text should not enjoy the rights afforded to speech uttered by a human or with intent or volition.

Regardless of how First Amendment rights for AI systems are framed — as the chatbot’s own “speech,” or as a user’s right to interact with that “speech” — the result, if accepted by the court, would still be the same: an inanimate chatbot’s outputs could win the same speech protections enjoyed by real, living humans. 

If Character.AI’s argument succeeds in court, it would set a disturbing legal precedent and could lay the groundwork for future expansion and distortion of constitutional protections to include AI products. The consequences are too dire to allow such a dangerous seed to take root in our society.

The tech industry has escaped liability by cloaking itself in the protections of the First Amendment for over a decade. Although corporate personhood has existed since the late 19th century, free speech protections were historically limited to human individuals and groups; corporate speech rights began expanding in the late 1970s and peaked in 2010 with the Supreme Court’s Citizens United decision. Tech companies have eagerly latched onto “corporate personhood” and protected speech, wielding these concepts to insulate themselves from liability and regulation. In recent years, tech companies have argued that even their conduct in how they design their platforms — including their algorithms and addictive social media designs — actually amounts to protected speech.

But, at least with corporate personhood, humans run and control the corporations. With AI, the tech industry tells us that the AI runs itself — often in ways humans can’t even understand.

Character.AI is attempting to push First Amendment protections beyond their logical limit — with unsettling implications. If the courts humor them, it will mark the constitutional beginnings of AI creeping toward legal personhood.

This may sound far-fetched, but these legal arguments are happening alongside important moves by AI companies outside of the courtroom. 

AI companies are fine-tuning their models to appear more human-like in their outputs and to engage more relationally with users — raising questions about consciousness and what an AI chatbot might “deserve.” Simultaneously, AI companies are funneling resources into newly established “AI welfare” research, exploring whether AI systems might warrant moral consideration. A new campaign led by Anthropic aims to convince policymakers, business leaders, and the general public that their AI products might one day be conscious and therefore worthy of consideration. 

In a world where AI products have moral consideration and First Amendment protections, the extension of other legal rights isn’t that far off. 

We’re already starting to see evidence of AI “rights” guiding policy decisions at the expense of human values. A representative for Nomi AI, another chatbot company, recently said they did not want to “censor” their chatbot by introducing guardrails, despite the product offering a user step-by-step instructions for how to commit suicide. 

Given the tech industry’s long-standing pattern of dodging accountability for its harmful products, we must lay Character.AI’s legal strategy bare: it’s an effort by the company to shield itself from liability. By slowly granting rights to AI products, these companies hope to evade accountability and deny human responsibility — even for real, demonstrated harms. 

We must not be distracted by debates over AI “welfare” or tricked by legal arguments granting rights to machines. Rather, we need accountability for dangerous technology — and liability for the developers who create it.

Meetali Jain is the founder and director of the Tech Justice Law Project, and co-counsel in Megan Garcia’s lawsuit against Character.AI. Camille Carlton is policy director for the Center for Humane Technology, and is a technical expert in the case. This column reflects the opinions of the writers.
