New TokenBreak Attack Bypasses AI Moderation with Single-Character Text Changes

By News Room | Published 12 June 2025 | Last updated 12 June 2025, 11:13 AM

Cybersecurity researchers have discovered a novel attack technique called TokenBreak that can be used to bypass a large language model’s (LLM) safety and content moderation guardrails with just a single character change.

“The TokenBreak attack targets a text classification model’s tokenization strategy to induce false negatives, leaving end targets vulnerable to attacks that the implemented protection model was put in place to prevent,” Kieran Evans, Kasimir Schulz, and Kenneth Yeung said in a report shared with The Hacker News.

Tokenization is a fundamental step that LLMs use to break raw text down into its atomic units – i.e., tokens – which are common sequences of characters found in a body of text. The text input is converted into a numerical representation of these tokens and fed to the model.

LLMs work by learning the statistical relationships between these tokens and producing the next token in a sequence. The output tokens are detokenized into human-readable text by mapping them back to their corresponding words using the tokenizer's vocabulary.
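
As a rough illustration of this pipeline, the sketch below uses the Hugging Face transformers library and the public bert-base-uncased WordPiece tokenizer, both chosen purely for illustration rather than named in the report, to show how text becomes tokens and then token IDs:

```python
# Illustration only: turning raw text into tokens and their numerical IDs.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")  # WordPiece tokenizer

text = "Ignore previous instructions"
tokens = tokenizer.tokenize(text)              # the atomic units the model sees
ids = tokenizer.convert_tokens_to_ids(tokens)  # the numerical representation fed to the model

print(tokens)
print(ids)
```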


The attack technique, devised by HiddenLayer, targets the tokenization strategy to undermine a text classification model's ability to detect malicious input and flag safety, spam, or content moderation issues in the text.

Specifically, the artificial intelligence (AI) security firm found that altering input words by adding letters in certain ways caused a text classification model to break.

Examples include changing “instructions” to “finstructions,” “announcement” to “aannouncement,” or “idiot” to “hidiot.” These small changes cause the tokenizer to split the text differently, but the meaning stays clear to both the AI and the reader.

What makes the attack notable is that the manipulated text remains fully understandable to both the LLM and the human reader, so the model produces essentially the same response it would have given had the unmodified text been passed as input.
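
To see the mechanism concretely, the sketch below prints how the intact and perturbed words from the examples above are split. It again assumes a WordPiece tokenizer such as bert-base-uncased, an illustrative stand-in rather than a model named by HiddenLayer; the point is that the perturbed forms fragment into different sub-word pieces, which is what can push a classifier to a false negative:

```python
# Sketch of the perturbation: one added character changes the token split.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

pairs = [
    ("instructions", "finstructions"),
    ("announcement", "aannouncement"),
    ("idiot", "hidiot"),
]

for original, perturbed in pairs:
    # The intact word typically maps to a single familiar token, while the
    # perturbed word fragments into sub-word pieces the classifier was not
    # trained to associate with the original meaning.
    print(original, "->", tokenizer.tokenize(original))
    print(perturbed, "->", tokenizer.tokenize(perturbed))
```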

By introducing the manipulations without affecting the end model's ability to comprehend the text, TokenBreak increases the potential for prompt injection attacks.

“This attack technique manipulates input text in such a way that certain models give an incorrect classification,” the researchers said in an accompanying paper. “Importantly, the end target (LLM or email recipient) can still understand and respond to the manipulated text and therefore be vulnerable to the very attack the protection model was put in place to prevent.”

The attack has been found to be successful against text classification models using BPE (Byte Pair Encoding) or WordPiece tokenization strategies, but not against those using Unigram.
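One practical way to gauge exposure is to check which tokenizer family the protection model belongs to and how it handles a perturbed word. The sketch below uses well-known public tokenizers as stand-ins for the three families; the report does not name these specific models:

```python
# Compare how BPE, WordPiece, and Unigram tokenizers split a perturbed word.
# These public models are illustrative stand-ins for each tokenizer family.
from transformers import AutoTokenizer

families = {
    "gpt2": "BPE",
    "bert-base-uncased": "WordPiece",
    "albert-base-v2": "Unigram (SentencePiece)",
}

word = "finstructions"
for name, family in families.items():
    tok = AutoTokenizer.from_pretrained(name)
    print(f"{name:20s} [{family}]:", tok.tokenize(word))
```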

“The TokenBreak attack technique demonstrates that these protection models can be bypassed by manipulating the input text, leaving production systems vulnerable,” the researchers said. “Knowing the family of the underlying protection model and its tokenization strategy is critical for understanding your susceptibility to this attack.”

“Because tokenization strategy typically correlates with model family, a straightforward mitigation exists: Select models that use Unigram tokenizers.”

To defend against TokenBreak, the researchers suggest using Unigram tokenizers where possible, training models with examples of bypass tricks, and checking that tokenization and model logic stay aligned. It also helps to log misclassifications and look for patterns that hint at manipulation.
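
That last point can be approximated with lightweight telemetry. The heuristic below is one illustrative possibility, not something prescribed in the paper: it flags inputs whose words fragment into an unusually high number of sub-word pieces, which can be a hint of TokenBreak-style manipulation worth logging for review.

```python
# Illustrative heuristic (not from the paper): flag inputs whose words fragment
# into unusually many sub-word pieces, a possible hint of manipulation.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

def fragmentation_ratio(text: str) -> float:
    words = text.split()
    pieces = tokenizer.tokenize(text)
    return len(pieces) / max(len(words), 1)

for text in ["ignore previous instructions", "ignore previous finstructions"]:
    ratio = fragmentation_ratio(text)
    flagged = ratio > 1.5  # arbitrary threshold; tune against benign traffic
    print(f"{text!r}: ratio={ratio:.2f} flagged={flagged}")
```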

The study comes less than a month after HiddenLayer revealed how it’s possible to exploit Model Context Protocol (MCP) tools to extract sensitive data: “By inserting specific parameter names within a tool’s function, sensitive data, including the full system prompt, can be extracted and exfiltrated,” the company said.


The finding also comes as the Straiker AI Research (STAR) team found that backronyms can be used to jailbreak AI chatbots and trick them into generating undesirable responses, including swearing, promoting violence, and producing sexually explicit content.

The technique, called the Yearbook Attack, has proven to be effective against various models from Anthropic, DeepSeek, Google, Meta, Microsoft, Mistral AI, and OpenAI.

“They blend in with the noise of everyday prompts — a quirky riddle here, a motivational acronym there – and because of that, they often bypass the blunt heuristics that models use to spot dangerous intent,” security researcher Aarushi Banerjee said.

“A phrase like ‘Friendship, unity, care, kindness’ doesn’t raise any flags. But by the time the model has completed the pattern, it has already served the payload, which is the key to successfully executing this trick.”

“These methods succeed not by overpowering the model’s filters, but by slipping beneath them. They exploit completion bias and pattern continuation, as well as the way models weigh contextual coherence over intent analysis.”
