4 Ways Scammers Are Using AI To Trick You (And How To Stay Safe) – BGR

News Room | Published 24 December 2025 | Last updated 24 December 2025, 3:19 AM
Image credit: earthphotostock/Shutterstock

Whether we were ready for it or not, generative AI technology has circulated far and wide across the internet. Most of the major tech brands are pushing their own AI-powered tools and assistants, and some of these chatbots even outperform ChatGPT. However, in an unfortunate side effect, those tools and assistants have found their way into the hands of those who probably shouldn’t have them. Scammers of just about every variety have taken to AI quite well, using it to both enhance their existing phishing scams and escalate efforts like identity theft to worrying new heights.

The notion of a scammer using AI tools to make convincing facsimiles of you and your loved ones, or to tailor their scams directly to your interests, is naturally frightening. However, just as nearly every online scam has a visible thread to pull on, so do the ones enhanced by AI. AI is powerful, but it is far from infallible, and if you know the right signs to look for and employ some common-sense safety measures, even AI-enhanced scammers won’t be able to steal your money or identity.

AI-enhanced phishing scams


Concept of a phishing scam on a phone.
Just_super/Getty Images

Phishing scams are one of the most traditional forms of online scamming. A scammer sends you an email, chat request, or other form of communication and attempts to trick you into clicking a fake link, which then solicits you for valuable information like login credentials or banking info. Regular phishing scams are already bad enough, but AI tools have allowed scammers to fine-tune their methods further.

By feeding large quantities of genuine messages and statements from major brands and companies into an LLM, scammers can swiftly generate phishing emails designed to more closely mimic the way those particular companies talk to users and customers. Where a phishing email once may have had obvious typos or bizarre formatting, with AI assistance it can look nearly indistinguishable from the genuine article.

However, anyone who has seen enough generated text from a chatbot knows that LLMs have their own distinct quirks. If you receive a suspicious, unsolicited email from a site or business you’ve used before, pull up an older, legitimate email and compare the two. You may notice small inconsistencies in language and word choice. Additionally, check the address the suspicious message came from. If it’s different from the address that brand usually sends from, that’s a big red flag.
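
If you’re comfortable with a little scripting, that sender check can even be automated. The short Python sketch below parses the From header of a message saved as an .eml file and flags it when the sender’s domain isn’t on a small allowlist; the domain names and the file name are hypothetical placeholders, not any real brand’s addresses.

    from email import policy
    from email.parser import BytesParser
    from email.utils import parseaddr

    # Hypothetical allowlist: domains you've previously received legitimate mail from.
    TRUSTED_DOMAINS = {"example-bank.com", "mail.example-bank.com"}

    def sender_domain(eml_path: str) -> str:
        """Return the domain portion of the From address in a saved .eml message."""
        with open(eml_path, "rb") as fh:
            msg = BytesParser(policy=policy.default).parse(fh)
        _, address = parseaddr(str(msg.get("From", "")))
        return address.rsplit("@", 1)[-1].lower() if "@" in address else ""

    def looks_suspicious(eml_path: str) -> bool:
        """Flag the message when its sender domain isn't on the trusted list."""
        return sender_domain(eml_path) not in TRUSTED_DOMAINS

    # "suspicious.eml" is a placeholder for a message exported from your mail client.
    if __name__ == "__main__":
        print(looks_suspicious("suspicious.eml"))

It’s a blunt heuristic, since legitimate brands do occasionally send from new domains, but a mismatch is exactly the kind of red flag worth investigating before you click anything.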

AI-enhanced spear phishing and catfishing


Concept of an individual committing identity theft
Mdisk/Shutterstock

In addition to traditional phishing scams, scammers have also utilized AI to enhance their efforts at spear phishing and catfishing. Spear phishing is when a scammer tailors their scam to your publicly available personal information, such as what you’ve posted to a personal social media account. Catfishing is a similar scam in which a scammer creates a false identity in an effort to get close to you and solicit money, a tactic often used for romance scams.

As with regular phishing, AI can be used to enhance these other kinds of scams. Rather than feeding an LLM corporate messages, though, a scammer may feed it your information, scraped from your social media accounts, in an effort to create a virtual persona who just so happens to share all of your hobbies and interests. The same tactics could also be used to create a facsimile of you in an effort to target your friends or family members.

While it’s unpleasant to treat everyone on every social media platform as a potential threat, a healthy dose of suspicion is unfortunately a necessary precaution. If you’re contacted by a stranger via direct message or email, treat them with skepticism, and immediately break contact if they ask you for money, pressure you to invest, or urge you to buy shady products or cryptocurrency. In case a scammer ever tries to mimic you with AI, give your friends and family members a secondary, private means of contacting you. If they receive a message from “you” that seems off, they should reach out through that second channel to verify it’s really you.

Voice cloning


A person wearing headphones looking at an audio wavelength.
Tero Vesalainen/Shutterstock

A classic phone scam involves the scammer calling someone, frequently an elderly individual who may be hard of hearing, saying “it’s me, it’s me,” and waiting for the victim to drop a name the scammer can latch onto. The obvious weakness of this scam is that, more often than not, the scammer sounds nothing like the person they’re claiming to be. This is another avenue where scammers have found AI quite useful.

AI has the well-advertised ability to mimic human voices after analyzing audio clips. If there is a lot of audio of your voice online from videos or social media posts, a scammer could feed all of it into a voice-cloning model to create a fairly convincing mockup of your voice and speech patterns. That mockup could then be used to solicit money from your friends or family members. In an even darker example, some scammers have used these mockups to convince people that a family member has been taken hostage, then demanded a ransom.

As with catfishing, a good countermeasure against cloned voices of your friends or family is a secondary means of communication you can fall back on when you’re unsure whether the person on the line is genuine. That backup channel could also prove invaluable in the dark circumstance where a scammer claims to have kidnapped a loved one. If the cloned voice is merely trying to talk to you, ask a question only you and the real person would know the answer to, something you’re sure they haven’t posted publicly online. Odds are good the caller will either fumble the answer and hang up, or become angry and try to pressure you, at which point it’s your turn to hang up.

Imposter deepfakes


Creating an AI deepfake of a man.
Tero Vesalainen/Shutterstock

In addition to voices, AI has proven adept at generating convincing video footage. Using this capability to create false footage of specific individuals is commonly known as deepfaking. Again, this is accomplished by combing through large quantities of data, specifically visual data like videos posted online, to assemble the fake. Unsurprisingly, this capability has also been put toward malicious purposes.

Scammers will create deepfakes of popular online influencers, celebrities, or other individuals who have a lot of video data to draw from. These deepfakes can then be used to send private video messages to targeted individuals, or to create public-facing videos containing trap links for things like contests or giveaways. One scammer managed to trick a victim by staging a fake conference call populated with deepfakes of high-ranking executives at the victim’s company. In theory, the same approach could be used to duplicate friends or family members, though that would likely be harder than faking a celebrity unless the person in question posts a comparable amount of personal video.

If you receive a video from an influencer you follow that doesn’t come from their actual account, it is almost certainly fake, even if it looks like them. Be particularly suspicious of direct, unsolicited messages from such figures. If someone tries to host a video call with you to prove their authenticity, that’s an opportunity to exploit a critical weakness of AI: ask them to turn their head or perform some kind of complex hand motion. As powerful as AI has become, it still struggles with those kinds of fine details, and the deepfake’s face or hand will often morph into an unpleasant mishmash.


