A well-known Stanford professor is accused of including fake AI-generated citations in a legal argument on the dangers of deepfakes.
Minnesota has proposed a law that would impose legal restrictions on the use of deepfakes around election time. Professor Jeff Hancock, founding director of the Stanford Social Media Lab, submitted a legal argument in support of the bill, the Minnesota Reformer reports.
However, journalists and legal professors have been unable to locate some of the studies cited in the argument, such as “Deepfakes and the Illusion of Authenticity: Cognitive Processes Behind Misinformation Acceptance.”
Some commentators suggest this could be a sign that parts of the argument were generated by artificial intelligence, which has a history of making up answers.
Opponents of Minnesota’s bill have argued that these “AI hallucinations” undermine the reliability of the professor’s legal argument. A court filing on behalf of Republican Representative Mary Franson said the presence of the mysterious citations “calls the entire document into question.”
Professor Hancock is a well-known name in the field of misinformation. One of his TED talks, “The Future of Lying,” has racked up over 1.5 million views on YouTube, and he also stars in a documentary on misinformation that is available on Netflix.
Professor Hancock has yet to publicly respond to the allegations against him.
Last year, two New York lawyers were sanctioned after submitting a legal brief that included fake citations generated by OpenAI’s ChatGPT.
Elon Musk’s X is spearheading a comparable lawsuit, on First Amendment grounds, challenging California’s Defending Democracy From Deepfake Deception Act of 2024, which would also impose limits on creating and sharing deepfakes around election time.