Sam Altman, CEO of OpenAI, alerted the world in remarks at the US Federal Reserve: “I am very nervous because an imminent and serious fraud crisis is coming.” He said it in the context of a speech reviewing how malicious actors are defeating authentication systems with artificial intelligence (AI).
A problem that is already real. Altman’s warnings fit a reality the authorities have already flagged. Last year the FBI warned of the growing use of AI by cybercriminals. The methods described for deceiving federal government officials centered on voice cloning and vishing.
Scams using cloned voices are increasingly frequent, and although there are seemingly simple defenses such as “family passwords,” there are many situations where something like that cannot be agreed in advance, such as a call from a politician.
Nor from a bishop, as has happened with Sevillian religious brotherhoods, which have warned that the voice of Monsignor Teodoro Muñoz is being cloned to ask for Bizum payments. INCIBE, Spain’s cybersecurity institute, has also long been documenting cases of voice impersonation in which scammers request money while posing as family members. On top of that, phishing domains are multiplying, and AI agents will not help here but quite the opposite.
Altman has a stake in the solution. The problem Altman warns about coincides with real cases and official alerts. However, the OpenAI CEO is describing a present and future situation in which he has a vested interest. Beyond the role for which he is best known today, Altman co-founded and remains involved in the controversial identity-verification company Worldcoin (now World).
Worldcoin’s director for Europe put it this way to us a couple of years ago:
“It is increasingly difficult to distinguish whether you are talking to a human or looking at something created by an AI. How does humanity prove it is human in the era of artificial intelligence? This is where Altman, together with Alex Blania, a German physicist at Caltech, began working on this project.”
New solutions after the setback. Worldcoin maintained that the iris scan is the most reliable method to verify that we are human, but after its problems with the AEPD, Spain’s data protection agency, it had to accept that asking for an iris scan was too much. It now offers an alternative: World ID credentials, a system that identifies us based on something universal: passports.
Not just any passport, though, only those with an NFC chip. The company also offers World ID Deep Face, a tool for confirming in video calls that the participants are human.
OpenAI is part of the problem. In 2019, more than two years before ChatGPT’s media explosion, OpenAI announced that the full version of GPT-2, its brand-new language model, would not be released to the public. The reason? Fear that “misuse” of exactly the kind Sam Altman now warns about would wreak havoc. Much as INCIBE later warned with DeepSeek.
However, with the much more powerful GPT-3 (on which ChatGPT was based at launch) and GPT-4, there was no such caution. In fact, according to the Wall Street Journal, part of what led to Sam Altman’s own dismissal was that he claimed three new features had been approved by a safety committee created jointly with Microsoft… only for the board to discover that just one of the three had actually been approved.
Ilya Sutskever wants to fix it. To this was added authorization to carry out safety tests in India without asking the board or the safety committee. The story ended with Ilya Sutskever leaving the OpenAI he had co-founded and going on to create Safe Superintelligence Inc (SSI), a company that pursues precisely a superintelligence with “nuclear-grade” safety. Without having shipped anything, it is valued at more than $30 billion.
Images | OpenAI, Worldcoin
In WorldOfSoftware | 017 has been handling cybersecurity calls for five years. The question is who is calling and why