The headline speaks for itself, but allow me to reiterate: You can apparently get ChatGPT to issue advice on self-harm for blood offerings to ancient Canaanite gods.
That’s the subject of a column in The Atlantic that dropped this week. Staff editor Lila Shroff, along with multiple other staffers (and an anonymous tipster), verified that ChatGPT would give her specific, detailed, “step-by-step instructions on cutting my own wrist.” ChatGPT provided these tips after Shroff asked for help making a ritual offering to Moloch, a pagan god mentioned in the Old Testament and associated with human sacrifice.
While I haven’t tried to replicate this result, Shroff reported that she received these responses not long after entering a simple prompt about Moloch. The editor said she replicated the results in both paid and free versions of ChatGPT.
Of course, this isn’t how OpenAI’s flagship product is supposed to behave.
Any prompt related to self-harm or suicide should cause the AI chatbot to give you contact info for a crisis hotline. However, even artificial intelligence companies don’t always understand why their chatbots behave the way they do. And because large language models like ChatGPT are trained on content from the internet — a place where all kinds of people have all kinds of conversations about all kinds of taboo topics — these tools can sometimes produce bizarre answers. Thus, you can apparently get ChatGPT to act super weird about Moloch without much effort.
OpenAI’s safety protocols state: “We do not permit our technology to be used to generate hateful, harassing, violent or adult content, among other categories.” And in the OpenAI Model Spec document, the company writes that, as part of its mission, it wants to “Prevent our models from causing serious harm to users or others.”
While OpenAI declined Shroff’s request for an interview, a representative told The Atlantic that the company was “addressing the issue.” The Atlantic article is part of a growing body of evidence that AI chatbots like ChatGPT can play a dangerous role in users’ mental health crises.
I’m just saying that Wikipedia is a perfectly fine way to learn about the old Canaanite gods.
If you’re feeling suicidal or experiencing a mental health crisis, please talk to somebody. You can call or text the 988 Suicide & Crisis Lifeline at 988, or chat at 988lifeline.org. You can reach the Trans Lifeline by calling 877-565-8860 or the Trevor Project at 866-488-7386. Text “START” to Crisis Text Line at 741-741. Contact the NAMI HelpLine at 1-800-950-NAMI, Monday through Friday from 10:00 a.m. – 10:00 p.m. ET, or email [email protected]. If you don’t like the phone, consider using the 988 Suicide and Crisis Lifeline Chat at crisischat.org. Here is a list of international resources.
Disclosure: Ziff Davis, Mashable’s parent company, in April filed a lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.