A federal government employee has reportedly leaked a sensitive API key linked to Elon Musk’s xAI platform — and it could have serious implications for both national security and the future of AI development.
According to a report from TechRadar, Marko Elez, a 25-year-old software developer with the Department of Government Efficiency (DOGE), accidentally uploaded xAI credentials to GitHub while working on a script titled agent.py.
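The contents of agent.py have not been published, but the underlying failure mode is well understood: a secret pasted directly into source code is captured in the repository’s history the moment it is committed, and deleting the file later does not remove it. Here is a minimal sketch of the anti-pattern and the conventional environment-variable fix; the placeholder value and variable names are hypothetical, not taken from the leaked script.

```python
import os
import sys

# ANTI-PATTERN: a literal secret like this enters git history on the first
# commit and remains recoverable even if the line is deleted afterwards.
# (Hypothetical placeholder, not the actual leaked value.)
# XAI_API_KEY = "xai-XXXXXXXXXXXXXXXXXXXX"

# Safer pattern: read the key from the environment at runtime, so it never
# appears in version control at all.
XAI_API_KEY = os.environ.get("XAI_API_KEY")
if not XAI_API_KEY:
    sys.exit("Set the XAI_API_KEY environment variable before running.")
```

Even a prompt deletion is not enough once a repository is public: the only reliable remedies are rewriting the repository history and, above all, rotating the exposed key.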
That key granted access to at least 52 private large language models from xAI, including the latest version of Grok (grok-4-0709), a GPT-4-class model powering some of Musk’s most advanced AI services.
The exposed credentials reportedly remained active even after the leak was discovered, raising major questions about access control, data security, and the growing use of AI across U.S. government systems.
Why this matters
Elez had high-level clearance and access to sensitive databases used by agencies including the Department of Justice, the Department of Homeland Security, and the Social Security Administration.
If the xAI credentials were abused before being revoked, attackers could have misused powerful language models in ways ranging from scraping proprietary data to impersonating internal tools.
This incident follows a string of DOGE-related security lapses and adds to a growing chorus of criticism over how the agency, formed under Elon Musk’s influence to improve government efficiency, manages internal safeguards.
What was leaked
The leaked key was embedded in a GitHub repository owned by Elez and exposed publicly.
It provided backend access to xAI’s model suite, including Grok-4, without any apparent usage restrictions.
Researchers who discovered the leak were able to confirm its validity before the repository was taken down, but not before it could have been scraped by others.
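TechRadar’s report does not describe exactly how the researchers verified the key, but the standard low-impact check is a single authenticated request to the provider’s model-listing endpoint, which confirms whether a key is live without actually invoking a model. A sketch of that check, assuming xAI’s documented OpenAI-compatible API at api.x.ai:

```python
import os

import requests

# A GET to the /models endpoint authenticates the key without generating
# any model output. Base URL per xAI's OpenAI-compatible API docs.
API_BASE = "https://api.x.ai/v1"
key = os.environ["XAI_API_KEY"]  # never hardcoded, per the earlier sketch

resp = requests.get(
    f"{API_BASE}/models",
    headers={"Authorization": f"Bearer {key}"},
    timeout=10,
)

if resp.status_code == 200:
    model_ids = [m["id"] for m in resp.json().get("data", [])]
    print(f"Key is ACTIVE: {len(model_ids)} models visible")
elif resp.status_code == 401:
    print("Key is revoked or invalid")
else:
    print(f"Inconclusive response: HTTP {resp.status_code}")
```

A 200 response listing dozens of private models would be exactly the kind of confirmation the researchers describe; a 401 would indicate the key had been revoked.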
The most recent Grok models are used not only for public-facing services like X (formerly Twitter) but also within Musk’s federal contracts.
This means the API leak may have created an attack surface spanning both commercial and government systems.
Bigger than just one key
This is a warning sign that enormously powerful AI tools are being handled casually, even by government insiders.
Philippe Caturegli, CTO at cybersecurity firm Seralys, told TechRadar: “If a developer can’t keep an API key private, it raises questions about how they’re handling far more sensitive government information behind closed doors.”
Elez has been involved in previous DOGE controversies, including inappropriate social media behavior and apparent disregard for cybersecurity protocols.
The takeaway
At the time of writing, xAI has not issued a statement, and according to reports the leaked API key has still not been revoked, making it a continuing security concern.
Meanwhile, government officials and watchdogs are calling for stricter credential management policies and better oversight of tech collaborations involving high-stakes AI infrastructure.
While this breach may not immediately affect the average user, it highlights a broader issue: the increasingly blurred lines between public and private AI development, and the very real need for transparency, accountability, and better data hygiene in both sectors.
For now, the key takeaway is this: as AI systems become more powerful, the humans behind them must be even more careful. As we are already seeing, one careless upload could unlock a world of risk.