The reliability of chatbots has been a topic of debate since the current artificial intelligence revolution began, even among those who work on the most advanced models. After the launch of GPT-5, the head of ChatGPT, Nick Turley, took part in a podcast in which he acknowledged that you still shouldn’t trust a chatbot as your main source of information.
When starting a conversation with ChatGPT, users can see the following text at the bottom of the screen: “ChatGPT can make mistakes. Check important info.” This remains the case with the new GPT-5 language model released by OpenAI last week: it can offer a second opinion, but it is still far from perfectly accurate.
Chatbot reliability
“The problem with reliability is that there is a sharp discontinuity between being very reliable and being 100% reliable, in terms of how the product is conceived,” Turley explains. “Until we consider ourselves demonstrably more reliable than a human expert in all areas, not just in some, I think we will continue to recommend that you double-check its answers.”
It may be tempting to believe the response of a chatbot or an AI-generated overview in Google’s search results. However, generative AI tools (not only ChatGPT) tend to “hallucinate,” or make things up. This is because they are mainly designed to predict the response to a query based on the information in their training data.
Generative AI models, however, have no concrete understanding of the truth. If you talk to a doctor, a therapist or a financial advisor, that person should be able to give you the correct answer for your situation, not just the most likely one. AI generally offers the answer it considers most probable, without checking it against specific expertise in the field.
While AI is quite good at guessing, it is still only guessing. Turley acknowledged that the tool works better when combined with something that provides a firmer grounding in facts, such as a traditional search engine or a company’s own internal data. “I still believe that the right product is LLMs connected to ground truth, and that is why we brought search into ChatGPT; I think it makes a big difference,” he said.
The executive explains that GPT-5, the new large language model that powers ChatGPT, is a “huge improvement” when it comes to hallucinations, but it is still far from perfect. “I am confident that we will eventually solve the problem of hallucinations,” he said, but not in the short term.
The bottom line is that chatbot reliability is far from perfect, even with the most advanced large language models. The recommendation for companies and users making decisions based on chatbot output is to have humans double-check the results.