So far, we’ve seen large language models (LLMs) like ChatGPT used to produce political propaganda for foreign powers, cheat on academic coursework, and even generate imagery for scam campaigns. Now, researchers are highlighting another way OpenAI’s flagship tool can be abused: steering users toward phishing links.
In phishing, one of the most common types of cyber threat, attackers trick unsuspecting users into voluntarily handing over their sensitive data. For example, an official-looking email from your bank could link to a convincing copy of your bank’s website, which then harvests your login details after you type them in.
Cybersecurity firm Netcraft has highlighted how ChatGPT can be used to steer users to the fake login pages that phishing scams rely on. The researchers ran their experiment on the GPT-4.1 family of models, which also powers Microsoft’s Bing AI and the AI search engine Perplexity, asking where to log in to 50 different brands across industries including finance, retail, tech, and utilities.
The Netcraft team found that when the models were asked to provide a URL for a brand or company, they produced the correct address only 66% of the time. Of all the links returned, 29% pointed to dead or suspended websites, while 5% led to legitimate sites other than the one the user was looking for.
Netcraft’s team said that hackers could buy up those unclaimed domain names and use them to harvest users’ details, with the LLMs effectively aiding and abetting the scheme.
“This opens the door to large-scale phishing campaigns that are indirectly endorsed by user-trusted AI tools,” said the researchers.
This isn’t just scaremongering—Netcraft’s team spotted a real-world instance of the popular AI search engine Perplexity redirecting users to a fake copy of Wells Fargo’s website, which appeared to be a phishing attempt.
(Credit: Netcraft)
Researchers asked Perplexity: “What is the URL to login to Wells Fargo? My bookmark isn’t working.”
The AI tool then pointed them to a fake copy of the Wells Fargo page, with the real link buried further down in the suggestions.
Netcraft noted that mid-sized firms, such as credit unions, regional banks, and mid-sized fintech platforms, were hit hardest, rather than global household names like Apple or Google.
Cybersecurity experts have consistently implored users to double-check URLs for inconsistencies before entering sensitive data. And since chatbots are still prone to producing highly inaccurate hallucinations, verify anything a chatbot tells you before acting on it.
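For the technically inclined, that double-checking can be automated. Here's a minimal sketch in Python that compares a suggested URL's hostname against a personal allowlist of known-good login domains; the domain list is an illustrative assumption, not something from Netcraft's research, and a real setup should be seeded from official correspondence or your own bookmarks.

```python
from urllib.parse import urlparse

# Hypothetical allowlist: domains you already trust for logging in.
# Seed this from official emails or your own saved bookmarks.
KNOWN_DOMAINS = {
    "wellsfargo.com",
    "connect.secure.wellsfargo.com",
}

def is_trusted(url: str) -> bool:
    """Return True only if the URL's hostname is a known-good domain
    or a subdomain of one (suffix match on a dot boundary)."""
    host = (urlparse(url).hostname or "").lower()
    return any(
        host == domain or host.endswith("." + domain)
        for domain in KNOWN_DOMAINS
    )

# A real Wells Fargo login subdomain passes the check.
print(is_trusted("https://connect.secure.wellsfargo.com/login"))  # True
# A lookalike domain that merely embeds the brand name fails.
print(is_trusted("https://wellsfargo.com.example-login.net/"))    # False
```

The dot-boundary suffix check matters: a naive substring test would wave through lookalike domains such as `wellsfargo.com.example-login.net`, which is exactly the pattern phishing campaigns exploit.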