AI Errors and Hallucinations
Using AI for some business operations, like streamlining sales or fast-tracking customer support interactions, is relatively low-risk. But when it comes to things like food safety, people are understandably trepidatious.
That’s because the technology is nothing if not prone to errors. Since ChatGPT launched in late 2022, AI models have been defined by their hallucinations: citing fake sources, providing incorrect answers, and generally getting things wrong when it matters most.
Even worse, the companies behind these AI models appear far from concerned about the technology’s negative impacts. In fact, a recent study found that nearly every company working on a major AI platform is missing the mark on safety.
All that to say: AI hasn’t proven it can get the job done without creating more work for the human employees who must check its every output. Until it does, we should be deeply skeptical of using it for things like food and drug safety inspections.
