If you recently lost money to a crypto scam, or if you invest in crypto and have watched your investments plummet for unknown reasons, you may be wondering where all that money ended up. Well, the answer might be North Korea, as we reported earlier this week. North Korean hackers have stolen over $2 billion in cryptocurrency this year and are turning to ransomware and other financial scams aimed at individuals more often than in previous years. Even worse, that number might still climb: the year isn’t over yet.
Next, if you’ve recently uploaded your government ID to Discord, you might want to keep an eye out for an email from [email protected]. The hackers responsible for last week’s Discord breach reportedly stole the personal data of 5.5 million unique users, including millions of photos of IDs and potentially other sensitive information. Discord says the number is closer to 70,000 people, all of whom uploaded their IDs to appeal age-verification decisions, and that it’s reaching out to those affected. All of this underscores the privacy issues inherent in age verification and the rush to adopt it. But hey, won’t someone think of the children’s data?
Of course, that’s not the only hack kicking around this week. We reported last Friday that hackers are threatening to release over 1 billion stolen records from more than 39 companies, including notable names such as Disney, Toyota, and McDonald’s, if those companies don’t pay up. Aside from being yet another indicator that the real data security threat to companies these days is extortion, it’s intriguing how they got the data: by compromising accounts at smaller companies (in this case, support tools built to manage massive Salesforce deployments at other large companies), attackers can steal high-value data by breaching low-value accounts. The irony isn’t lost on us, though, that this is all happening while the information-sharing program created by the Cybersecurity Information Sharing Act (CISA), which lets the public and private sectors exchange threat data, has lapsed amid the government shutdown.
In other news that made me facepalm this week, the Asahi ransomware attack I mentioned in last week’s roundup continues to wreak havoc on the Japanese brewer. With operations suspended, the company’s domestic beer supply has been running low, though representatives say they plan to restart limited production using “partial manual order processing,” which essentially means going back to pen, paper, and word of mouth. Everything old is new again.
Employees Regularly Paste Company Secrets Into ChatGPT
Among the many issues with the rapid adoption of LLM-based AI is that security and privacy are often afterthoughts rather than engineering principles. So it’s no surprise that if you tell people to use it for everything, they’re going to use it for everything, including things you might not have intended, like sensitive data and corporate documents. The Register reports that a survey by security company LayerX found that 45% of enterprise employees use generative AI tools like ChatGPT, and of those, 77% regularly copy and paste data into their queries. Within that group, 22% of respondents admitted to pasting personally identifiable information (PII), payment card information (PCI), or other financial data.
The survey’s authors blame the rapid and uncontrolled rise of these tools in and out of workplaces, and, of course, the assumption that if you have access to something at work, it must somehow be secure. They note that the problem extends beyond bad actors using LLMs to exfiltrate sensitive data: users were observed uploading sensitive code and other corporate documents to ChatGPT from both enterprise and personal accounts. Hilariously, the survey also revealed that people vastly prefer using ChatGPT at work, even over competitors eager to attract enterprise users, such as Microsoft’s Copilot.
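For the curious, here’s a rough idea of what catching these leaks before they leave the building can look like. This is a naive, hypothetical sketch (not LayerX’s product or any specific enterprise tool): it flags obvious card numbers and email addresses in a prompt before it’s sent to an LLM. The patterns and function names are illustrative; real data-loss-prevention tools go far beyond regexes.

```python
# Hypothetical sketch of pre-submission screening a DLP gateway or
# browser extension might apply before text reaches an LLM.
import re

CARD_RE = re.compile(r"\b(?:\d[ -]?){13,19}\b")
EMAIL_RE = re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b")

def luhn_valid(digits: str) -> bool:
    """Standard Luhn checksum, used to cut down false card-number hits."""
    nums = [int(d) for d in digits][::-1]
    total = sum(nums[0::2]) + sum(sum(divmod(2 * d, 10)) for d in nums[1::2])
    return total % 10 == 0

def flag_sensitive(prompt: str) -> list[str]:
    """Return a list of findings that should block or redact the prompt."""
    findings = []
    for match in CARD_RE.finditer(prompt):
        digits = re.sub(r"\D", "", match.group())
        if luhn_valid(digits):
            findings.append(f"possible card number: {match.group().strip()}")
    findings += [f"email address: {m}" for m in EMAIL_RE.findall(prompt)]
    return findings

if __name__ == "__main__":
    text = "Summarize this: customer jane@example.com paid with 4111 1111 1111 1111."
    for finding in flag_sensitive(text):
        print("BLOCKED:", finding)
```

Even a crude screen like this would catch the most careless of the 22% above; the harder problem, as the survey notes, is source code and internal documents, which don’t match any tidy pattern.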
Signal Adds New Cryptographic Defense Against Quantum Attacks
VPN providers aren’t the only ones concerned with post-quantum encryption. Bleeping Computer reports that Signal, our Editors’ Choice for secure messaging services, has introduced a new protocol component called the Sparse Post Quantum Ratchet (SPQR), designed to protect your most sensitive conversations and messages from future attacks. You don’t have to understand the details to benefit from the protection, though: the goal is to ensure that even if your encrypted Signal traffic is harvested today, a hypothetical quantum computer in the future still won’t be able to break it, the so-called “harvest now, decrypt later” threat.
Even better, the new protocol was developed with the help of cryptography researchers at institutions such as PQShield, NYU, and AIST, and its design has already been formally verified with the ProVerif protocol-analysis tool to ensure it’s robust enough to deploy today and protect tomorrow. The protocol will roll out gradually to all users and will automatically fall back to the current level of protection when a Signal user is communicating with a client or platform that doesn’t yet support SPQR. Once the upgrade is complete and all users are on board, the protocol will become mandatory.
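To illustrate the core idea (a conceptual sketch only, not Signal’s actual SPQR implementation): hybrid schemes derive session keys from both a classical key exchange and a post-quantum one, so an attacker would have to break both. In the Python sketch below, the `pq_secret` value is a placeholder standing in for a real post-quantum KEM shared secret (such as ML-KEM/Kyber), and all names are illustrative. It requires the third-party `cryptography` package.

```python
# Conceptual sketch of hybrid key derivation: combine a classical
# X25519 secret with a post-quantum secret so both must be broken.
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

# Classical Diffie-Hellman exchange (X25519), as in Signal's existing ratchet.
alice_priv = X25519PrivateKey.generate()
bob_priv = X25519PrivateKey.generate()
classical_secret = alice_priv.exchange(bob_priv.public_key())

# Placeholder for a post-quantum KEM shared secret (e.g., ML-KEM/Kyber).
# A real deployment would run the KEM; this random value just stands in.
pq_secret = os.urandom(32)

# Combine both secrets with a KDF. Recovering the session key now requires
# breaking X25519 AND the PQ KEM, which defeats harvest-now-decrypt-later.
session_key = HKDF(
    algorithm=hashes.SHA256(),
    length=32,
    salt=None,
    info=b"hybrid-ratchet-demo",
).derive(classical_secret + pq_secret)

print(session_key.hex())
```

Signal’s real ratchet is considerably more involved, advancing keys with every message, but the hybrid principle above is why harvested ciphertext stays useless even against future quantum hardware.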
BCI: The Stuff of Nightmares or Dreams?
Raise your hand if you’ve ever seen the classic film Ghost in the Shell or any of the TV installments that came afterward. If your hand is up, you’ve probably seen (in fiction, obviously) what could happen if a skilled hacker were able to manipulate the sight, hearing, or even perspective of someone with cybernetic components in their brain. After all, if everything is online, even our senses, wouldn’t someone intent on blinding you look for a way to disrupt the vision centers of your brain? This feature at Dark Reading examines the topic through the lens of current technology and security threats.
In short, the authors point out that even at this relatively early research and experimentation stage, brain-computer interfaces (BCIs) are built on existing technology with well-documented vulnerabilities. Whether those flaws have been exploited is anyone’s guess, but they argue that because the stakes are so high, security should be incorporated at the design stage. After all, no one wants to live in a future where an attacker could use ransomware to hold your limbs hostage, or encrypt data in your own mind until you pay to unlock it. As science fiction rapidly becomes science fact, it has been difficult to convince companies working on BCI technology to prioritize security over rapid trials and potential profits, and that could lead us to dangerous places.
About Our Expert

Alan Henry
Managing Editor, Security
Experience
For almost two decades, I’ve been writing and editing stories that help people use technology and productivity techniques to work better, live better, and protect their privacy and personal data. As managing editor of PCMag’s security team, it’s my responsibility to ensure that our product advice is evidence-based, lab-tested, and serves our readers.
I’ve been a technology journalist for close to 20 years, and I got my start freelancing here at PCMag before beginning a career that would lead me to become editor-in-chief of Lifehacker, a senior editor at The New York Times, and director of special projects at WIRED. I’m back at PCMag to lead our security team and renew my commitment to service journalism. I’m the author of Seen, Heard, and Paid: The New Work Rules for the Marginalized, a career and productivity book to help people of marginalized groups succeed in the workplace.