Scams enhanced by artificial intelligence (AI) have the potential to reach a new level of deception with the introduction of features such as those in ChatGPT 4o, which allow users to create convincing, photorealistic images, including fake documents, as well as realistic deepfake voices.
To consider how the industry should respond, a panel of Virginia Tech experts, including computer ethics educator Dan Dunlap, digital literacy educator Julia Feerrar, and cybersecurity researcher Murat Kantarcioglu, discussed the implications of this ever-advancing technology. Their key discussion points follow.
The panel cautioned against relying solely on the safety measures built into AI tools to avoid scams, and explained ways to stay vigilant and protect data, including the potential use of blockchain.
Dan Dunlap: Educating the public about fraud detection
According to Dunlap: “Scams using AI are certainly newer and more widespread, and the increasing scale and scope are immense and scary, but there is nothing fundamentally new or different about exploiting available technologies and vulnerabilities for committing fraud. These tools are more accessible, easier to use, higher quality, and faster, but not really fundamentally different from previous tools used for forgery and fraud”.
He adds: “There is a constant need to educate the public and update detection and policy as criminals use the available tools. Computer science professionals have a moral obligation to help both in educating the public and in developing tools that help identify and protect all sectors.”
“Unfortunately, disseminating knowledge can also help to exploit the weaknesses of the technology,” Dunlap concludes. “Powerful, available, and accessible tools are destined to be co-opted for both positive and negative ends.”
Julia Feerrar: Watching for telltale signs of scams
“We have some new things to look out for when it comes to AI-fuelled scams and misinformation. ChatGPT 4o’s image generator is really effective at creating not just convincing illustrations and photorealistic images, but documents with text as well,” Feerrar indicates. “We can’t simply rely on the visual red flags of the earliest image generators.”
“I encourage people to slow down for a few extra seconds, especially when we’re unsure of the original source,” she said. “Then look for more context using your search engine of choice.”
Feerrar follows this with: “Generative AI tools raise complex questions about copyright and intellectual property, as well as data privacy. If you upload your images to be transformed with an AI tool, be aware that the tool’s company may claim rights to use them, including to further train the AI model.”
“For receipts or documents, check the math, the address — basic errors can be telling. Large language models struggle with basic math. However, know that a committed scammer can likely fix these kinds of issues pretty easily. You should also be asking how this image got to you. Is it from a trusted, reputable source?” she adds.
Feerrar concludes, noting: “Basic digital security and anti-phishing advice applies whether a scammer uses generative AI or not. Now is also a great time to set up two-factor authentication. This kind of decision-making is a key part of what digital literacy and AI literacy mean today.”
Murat Kantarcioglu: Using blockchain to prove files are unaltered
“It’s very hard for end users to distinguish between what’s real versus what’s fake,” Kantarcioglu states. “We shouldn’t really trust AI to do the right thing. There are enough publicly available models that people can download and modify to bypass guardrails.”
“Blockchain can be used as a tamper-evident digital ledger to track data and enable secure data sharing. In an era of increasingly convincing AI-generated content, maintaining a blockchain-based record of digital information provenance could be essential for ensuring verifiability and transparency on a global scale,” Kantarcioglu continues.
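The core idea behind the tamper-evident ledger Kantarcioglu describes can be illustrated with a minimal sketch: record a cryptographic fingerprint of each file in a hash-chained log, then later check whether a file still matches its recorded fingerprint. This is a simplified, in-memory stand-in for a real blockchain; the `ProvenanceLedger` class and its method names are illustrative, not from any actual system.

```python
import hashlib
import json

def sha256_hex(data: bytes) -> str:
    """Content fingerprint: any alteration to the bytes changes the hash."""
    return hashlib.sha256(data).hexdigest()

class ProvenanceLedger:
    """Toy append-only, hash-chained ledger (a simplified stand-in
    for a blockchain-based provenance record)."""

    def __init__(self):
        self.entries = []

    def record(self, filename: str, content: bytes) -> dict:
        """Append a new provenance entry for a file."""
        prev = self.entries[-1]["entry_hash"] if self.entries else "0" * 64
        entry = {
            "filename": filename,
            "content_hash": sha256_hex(content),
            "prev_hash": prev,
        }
        # Each entry's hash covers the previous entry's hash, so tampering
        # with any earlier entry invalidates every later one.
        entry["entry_hash"] = sha256_hex(
            json.dumps(entry, sort_keys=True).encode()
        )
        self.entries.append(entry)
        return entry

    def verify(self, filename: str, content: bytes) -> bool:
        """Check that the chain is intact and the file matches a record."""
        prev = "0" * 64
        for e in self.entries:
            body = {k: e[k] for k in ("filename", "content_hash", "prev_hash")}
            expected = sha256_hex(json.dumps(body, sort_keys=True).encode())
            if e["prev_hash"] != prev or e["entry_hash"] != expected:
                return False  # the ledger itself was tampered with
            prev = e["entry_hash"]
        return any(
            e["filename"] == filename
            and e["content_hash"] == sha256_hex(content)
            for e in self.entries
        )

ledger = ProvenanceLedger()
ledger.record("press_photo.jpg", b"original image bytes")
print(ledger.verify("press_photo.jpg", b"original image bytes"))  # True
print(ledger.verify("press_photo.jpg", b"doctored image bytes"))  # False
```

A real deployment would anchor these entry hashes on a public blockchain so no single party could rewrite the log, but the verification step, recomputing a hash and comparing it to the recorded one, works the same way.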
He also offered a simple but powerful low-tech solution, noting: “A family could establish a secret password as a means of authentication. For instance, in my case, if someone were to claim that I had been kidnapped, my family would use this password to verify my identity and confirm the situation.”