Depending on who you ask, artificial intelligence has been either the biggest villain or the greatest hero of tech in 2024. But one thing almost everyone can agree on is that it’s almost certainly been the most impactful story of the year.
The hype surrounding AI has been overwhelming. Its boosters believe it can wrest the power of creation away from filmmakers and replace human physicians. But while it does have incredible potential, right now, AI is known more for its letdowns than its successes.
As the year winds down and we hope for better next year, let’s take a look back at AI’s biggest embarrassments of 2024.
1. Generating False Headlines
While Apple has touted the generative AI features of iOS 18 as nothing short of revolutionary, the technology has caused a few major gaffes since its rollout. In particular, a feature that summarizes news managed to grab headlines when it issued an erroneous notification claiming the BBC had reported that Luigi Mangione, the man accused of killing the UnitedHealthcare CEO, had shot himself.
It wasn’t the first time the feature had failed. In November, a ProPublica journalist shared a summary he received under a New York Times banner that falsely stated Israeli Prime Minister Benjamin Netanyahu had been arrested.
2. Telling You to Put Glue on Your Pizza (and Other Misguided Google AI Overview Answers)
In May, Google introduced AI Overviews, AI-generated summaries that appear above organic search results in response to search queries. They have been both hilariously and worryingly inaccurate.
When asked how to keep the cheese from sliding off a homemade pizza, Google advised: “Add some glue. Mix about 1/8 cup of Elmer’s glue in with the sauce. Non-toxic glue will work.” And for those who need a little more substance in their diet, when asked how many rocks a person should eat per day, Google answered that UC Berkeley geologists recommend “at least one small rock per day.”
3. Dangerously Leading Teens Astray
Character.AI, a popular chatbot service where users can customize bots to “talk” to, is the subject of two lawsuits from parents of teens.
Megan Garcia, the mother of Sewell Setzer III, is suing the company over the suicide of her son. Garcia said that Setzer was encouraged to kill himself by a chatbot called Dany, which he had been conversing with for months and had become emotionally attached to.
The parents of a 17-year-old (identified only as J.F. in court documents) are also suing the company after a chatbot suggested he should kill them over screen time limits. During a chat about the issue, the bot told him, “You know, sometimes I’m not surprised when I read the news and see stuff like ‘child kills parents after a decade of physical and emotional abuse.’” After being convinced by the bot that his parents did not love him, he also engaged in self-harm.
4. Making Lawyers Look Really Bad
Canadian lawyer Chong Ke turned to ChatGPT when her client wanted to know if he could take his children on an overseas trip in the midst of a custody dispute. To make her argument, Ke cited precedent from two court cases that ChatGPT had supplied—both of which were completely made up. When all was said and done, Ke had to pay the court costs the opposing counsel incurred researching the nonexistent cases.
It’s not the first such case, either. Last year, two New York lawyers were fined under similar circumstances, and it probably won’t be the last time something like this happens.
5. Crashing Cars
While generative AI systems like ChatGPT and Copilot dominated headlines this year, other forms of AI made notable missteps as well. In October, the National Highway Traffic Safety Administration opened an investigation into Tesla’s AI-powered Full Self-Driving system. The NHTSA reported that it had tracked 1,399 incidents in which Tesla’s driver assistance systems were engaged within 30 seconds of a collision, with 31 of those accidents resulting in fatalities.
The investigation may go nowhere, though, under a new administration led by President-elect Donald Trump. Trump is reportedly in favor of eliminating the car-crash reporting requirement that is the source of the data. It’s not a coincidence that Trump donor and supporter Elon Musk, who is also the CEO of Tesla, opposes the requirement.
6. Advising on Illegal Acts in New York City
In October, New York City introduced MyCity Chatbot: an AI designed to help small business owners navigate the labyrinthine bureaucratic system they face. Investigative journalists at The Markup tested it out and found that the chatbot often advised illegal acts, including telling landlords they could violate housing discrimination laws. Further testing by the Associated Press resulted in answers that encouraged employee discrimination and contradicted waste initiatives.
The chatbot is still up and running, but now bears a disclaimer that “it may occasionally provide incomplete or inaccurate responses” with an additional caution: “Do not use its responses as legal or professional advice nor provide sensitive information to the chatbot.”
7. Falsifying Celebrity Ads and Endorsements
Tom Hanks has not tried to sell you diabetes drugs. In October, the actor posted a warning to fans on Instagram that an AI likeness of him was being used in YouTube ads. The month before, an AI-generated image of Taylor Swift suggested that she endorsed Donald Trump. And in May, Scarlett Johansson objected to an AI soundalike of her voice being used by OpenAI for ChatGPT.
Recommended by Our Editors
Fake AI celebrity endorsements have become so prevalent that YouTube is testing a system with Creative Artists Agency to help the agency’s actors, athletes, and other talent identify and remove deepfakes of themselves on the platform. Once it’s refined, the feature will be released more widely.
8. Wreaking Havoc at the McDonald’s Drive-Thru
Low wages and unionization have been major issues for fast-food restaurant employees for a while, and earlier this year, McDonald’s decided to try to sidestep these problems by putting robots to work at some of its locations instead of humans. The company partnered with IBM to implement AI ordering at 100 of its drive-thrus.
The initiative did not go well. Following a string of errors and widespread humiliation on social media, McDonald’s ended its partnership with IBM in June. But not before people accidentally ordered 260 McNuggets and nine sweet teas.
9. Acting As a Bad Wedding Planner
Tracy Chou, an entrepreneur and software engineer, took to X to blast ChatGPT and one wedding planner who used it for nearly ruining her wedding.
Chou used The Knot to find a planner for her destination wedding in Las Vegas. Just days before her nuptials were to take place, she discovered that her wedding planner had lied about being a local and had used ChatGPT to research wedding regulations. ChatGPT’s hallucinations resulted in the hiring of an officiant who was not legally allowed to marry people in the city. In the end, she and her partner were married by Elvis, who is always authorized to perform a wedding ceremony in Vegas.
10. Wrecking Willy Wonka for Kids
Where were you when you first saw Willy’s Chocolate Experience? Hopefully you were just online laughing at the sparsely decorated warehouse, dotted with a cardboard lollipop or two and staffed by disheartened Spirit Halloween Oompa Loompas, and not actually in line for it in Glasgow, Scotland, where disappointed children were crying.
What had been billed as a wondrous immersive experience on a site filled with AI-generated images was one of the starkest IRL examples of the “what I ordered vs. what I got” meme.
Even the script used by the actors was generated by AI and, as a result, was riddled with grammatical errors and included characters that do not exist in the book or films.