How to Spot AI Cyber-Attacks
Practically speaking, your personal cybersecurity comes down to a laundry list of effective habits, paired with a few helpful tools like a high-quality VPN, and a healthy dose of good luck. Here’s where to start.
How to Spot AI Phishing Attempts
Phishing attempts most frequently arrive as an email or SMS message asking you to click a link or enter a password. The problem is that the message is a fake, designed with AI tools to mimic the official styling and logos that will convince you to part with your personal information.
AI-written phishing emails are longer than ever and contain fewer of the spelling mistakes that used to give scammers away. To avoid a phisher, look for these giveaways:
- Unusual requests – This may be a request for money, personal information, or anything else out of the norm.
- False sense of urgency – Scammers don’t want to give you time to think. Look for phrases like “Your account will be closed in 24 hours.”
- Links that don’t match the domain – Scammers won’t have the correct email address or domain name, but they may have one that looks very similar (see the sketch below).
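To make that last check concrete, here’s a minimal sketch in Python of how a link’s actual domain can be compared against the one you expect. The `example-bank.com` domain and the `looks_like_official_link` helper are illustrative assumptions, not part of any real service.

```python
# Minimal sketch: compare a link's real domain to the domain you expect.
# "example-bank.com" is a purely illustrative stand-in for a real brand's domain.
from urllib.parse import urlparse

def looks_like_official_link(url: str, official_domain: str) -> bool:
    """Return True only if the link's host is the official domain or one of its subdomains."""
    host = (urlparse(url).hostname or "").lower()
    official = official_domain.lower()
    return host == official or host.endswith("." + official)

# A genuine subdomain passes...
print(looks_like_official_link("https://login.example-bank.com/reset", "example-bank.com"))  # True
# ...but a lookalike that merely contains the brand name does not.
print(looks_like_official_link("https://example-bank.com.security-update.net/reset", "example-bank.com"))  # False
```

The second URL fails even though it contains the full brand name, which is exactly the trick lookalike domains rely on.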
The best advice? Don’t think you’re above being tricked. A shocking 98% of CEOs can’t spot all the signs of a phishing attack.
How to Spot Voice Clones
Scammers might use a voice generator to impersonate an elderly person’s grandchild over the phone, or pose as the CEO of an employee’s Fortune 500 company. In fact, scammers were already using both of those tactics before AI voice tools made them even easier to pull off.
To catch a voice clone scam, look for the following:
- Anything unusual or urgent – Voice clone scams are phishing emails in audio form, so the same tips apply.
- Unnatural pauses or robotic-sounding speech – Voice cloning technology isn’t perfect just yet.
- Inconsistencies – Scammers likely won’t know everything about the person they’re pretending to be.
In all these cases, the goal is the same as in any traditional scam. Bad actors are always after either sensitive data or free money (often in the form of untraceable gift cards that can be resold for cash).
How to Spot Deepfakes
AI-generated videos can also trick people into revealing sensitive data. However, the technology isn’t yet good enough to be undetectable. There are a host of telltale details that a fake talking head can’t quite get right.
- Unrealistic anatomy – Anything might be off. Take a close look at the cheeks, forehead, eyes, eyebrows, glasses, facial hair, and lips.
- Unnatural movements – Check for any apparent reaction that doesn’t fit the context of the video.
- Inconsistent audio – Slow responses or warped audio can be a giveaway.
To be fair, most of these giveaways could also be explained by a poor connection or a low-quality camera. Just remember not to commit to handing over money or personal data while on the call itself. Say you’ll call them back, and then contact them through a different channel that you trust.
How to Spot Malware
If your work computer falls victim to malware, it won’t matter much whether that malware was built with AI or not. That’s because AI mostly helps scammers “refine the same basic scripts” they would use anyway, according to IBM. In fact, IBM’s X-Force team has not yet found evidence of threat actors actually using AI to generate new malware. As a result, IT professionals are taking the same approach they would with non-AI malware, from patching assets to training workers.
As an employee, you’ll need to contact your IT team immediately to see if you can contain the damage. Here are the warning signs to look for:
- Lack of control – Any browser redirects, pop-up windows, new tabs, or new browser extensions are an indication of malware.
- Changes to your homepage or search engine – Be wary of any sudden changes to defaults you didn’t set yourself.
- Freezing up – Ransomware will lock up some or all access to your files.
How did the malware get there in the first place? Likely because someone in your company fell for a phishing attack.
How to Spot Data Poisoning
The term “data poisoning” refers to the act of compromising an AI model by tampering with the data it’s trained on. It’s only a concern if your operation runs its own AI model, but it can have a huge negative impact on the business. You can spot it by watching for unusual inputs or outputs, such as:
- Outliers in the data – Unexpected patterns or anomalies in a dataset are a tipoff of manipulation (see the sketch after this list).
- Poor predictions – A sudden change in the output of generative AI can be another sign.
- Discrepancies with real-world models – AI models are supposed to reflect reality (hallucinations aside).
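As a rough illustration of the first point on that list, the sketch below flags numeric values in a training column that sit unusually far from the rest. It assumes Python with NumPy, and the three-standard-deviation cutoff is an arbitrary example threshold rather than an industry standard.

```python
# Minimal sketch: flag training values that sit far from the rest of the column.
# The 3-standard-deviation cutoff is an illustrative assumption, not a standard.
import numpy as np

def flag_outliers(values, threshold=3.0):
    """Return the indices of values more than `threshold` standard deviations from the mean."""
    data = np.asarray(values, dtype=float)
    z_scores = np.abs((data - data.mean()) / data.std())
    return np.flatnonzero(z_scores > threshold)

# Example: three poisoned rows hiding in otherwise ordinary data.
clean = np.random.normal(loc=100, scale=10, size=1000)
poisoned = np.concatenate([clean, [480.0, 510.0, 495.0]])
print(flag_outliers(poisoned))  # flags the injected rows at indices 1000-1002
```

A check this simple won’t catch every form of poisoning, but it shows the kind of anomaly the first bullet is describing; subtler attacks usually require monitoring the model’s outputs as well.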
Data poisoning can dramatically impact an AI model, causing skewed outcomes that might go unnoticed for years. Catching them requires a little initiative and outside-the-box thinking. So far, at least, those are traits that we still can’t replicate with artificial intelligence.