Tiktok had planned to introduce a new AI feature: automatically generated summaries were meant to tell users what is happening in a video. The original idea was to provide more context and to recommend similar products. As Business Insider reports, however, the plan backfired: because the AI sometimes delivered bizarre results, the video platform had to withdraw the new feature.
AI produces bizarre hallucinations
Generative AI is now generally good at recognizing image and video content, and the new Tiktok tool reportedly summarized some posts in an apt and helpful way. For others, however, it failed completely. The AI described a video of influencer Charli D’Amelio, in which she speaks to the camera alone in front of a white wall, as “a collection of different blueberries with different toppings.” A dog trainer’s post, in which he explained how dogs behave in certain situations, was summed up as “a captivating display of complex origami art.”
The new AI tool had been tested in a limited number of markets for several months. Some of the feedback was devastating: on Reddit, for example, one user described the AI-generated summaries as random text that had nothing to do with the video shown. Tiktok then ended the test phase, as a company spokesperson confirmed to Business Insider. The operator of the video platform declined to say which AI models powered the feature. According to a feature description in the app, the tool relied either on Tiktok’s own AI technology or on third-party products.
Trust in AI remains low
Tiktok isn’t the only company to make headlines with AI hallucinations. When Google introduced its AI summaries, for example, the feature claimed that a dog had played in the NHL. A tip to mix glue into pizza sauce so the cheese doesn’t slide off also attracted a lot of attention. As Der Spiegel reported, that advice was based on a joke post on Reddit that the AI evidently failed to recognize as such.
Such incidents also affect public trust in AI: according to a 2025 KPMG survey, two thirds of people in Germany said they used AI tools, but only 32 percent trusted the reliability of their answers. Tiktok has since reworked the faulty feature: according to the company, the technology will in future focus on identifying products in videos instead of describing the entire content.
