Tech companies are investing hundreds of billions of dollars to build new U.S. datacenters where, if all goes to plan, radically powerful new AI models will be brought into existence.
But all of these datacenters are vulnerable to Chinese espionage, according to a recently published report.
At risk, the authors argue, is not just tech companies' money, but also U.S. national security amid the intensifying geopolitical race with China to develop advanced AI.
The unredacted report was circulated inside the Trump White House in recent weeks, according to its authors. TIME viewed a redacted version ahead of its public release. The White House did not respond to a request for comment.
Today's top AI datacenters are vulnerable to both asymmetrical sabotage, in which relatively cheap attacks could disable them for months, and exfiltration attacks, in which closely guarded AI models could be stolen or surveilled, the report's authors warn.
Even the most advanced datacenters currently under construction, including OpenAI's Stargate project, are likely vulnerable to the same attacks, the authors tell TIME.
"You could end up with dozens of datacenter sites that are essentially stranded assets that can't be retrofitted for the level of security that's required," says Edouard Harris, one of the authors of the report. "That's just a brutal gut-punch."
The report was authored by brothers Edouard and Jeremie Harris of Gladstone AI, a firm that consults for the U.S. government on AI's security implications. During their year-long research period, they visited a datacenter operated by a top U.S. technology company alongside a team of former U.S. special forces operators who specialize in cyberespionage.
In speaking with national security officials and datacenter operators, the authors say, they learned of one instance in which a top U.S. tech company's AI datacenter was attacked and intellectual property was stolen. They also learned of another instance in which a similar datacenter was targeted in an attack against a specific unnamed component, one which, had the attack been successful, would have knocked the entire facility offline for months.
The report addresses calls from some in Silicon Valley and Washington to begin a "Manhattan Project" for AI, aimed at developing what insiders call superintelligence: an AI technology so powerful that it could be used to gain a decisive strategic advantage over China. All the top AI companies are attempting to develop superintelligence, and in recent years both the U.S. and China have woken up to its potential geopolitical significance.
Although hawkish in tone, the report does not advocate for or against such a project. Instead, it says that if one were to begin today, existing datacenter vulnerabilities could doom it from the start. "There's no guarantee we'll reach superintelligence soon," the report says. "But if we do, and we want to prevent the [Chinese Communist Party] from stealing or crippling it, we need to start building the secure datacenters for it yesterday."
China controls key datacenter parts
Many critical components of modern datacenters are mostly or exclusively built in China, the report points out. And due to the booming datacenter industry, many of these parts are on multi-year back orders.
What that means is that an attack on the right critical component can knock a datacenter offline for months, or longer.
Some of these attacks, the report claims, can be incredibly asymmetric. One such potential attack, the details of which are redacted in the report, could be carried out for as little as $20,000, and if successful could knock a $2 billion datacenter offline for six months to a year.
China, the report points out, is likely to delay shipment of the components necessary to fix datacenters brought offline by these attacks, especially in a race to superintelligence. "We should expect that the lead time on China-sourced generators, transformers, and other critical data center components will start to lengthen mysteriously beyond what they already are today," the report says. "This will be a sign that China is quietly diverting components to its own facilities, since after all, they control the industrial base that is making most of them."
AI labs struggle with basic security, insiders warn
The report says that neither existing datacenters nor the AI labs themselves are secure enough to stop the theft of AI models by nation-state attackers.
The authors cite a conversation with a former OpenAI researcher who described two vulnerabilities that would allow attacks like that to happen, one of which had been reported on the company's internal channels but was left unaddressed for months. The specific details of the attacks are not included in the version of the report viewed by TIME.
An OpenAI spokesperson said in a statement: "It's not entirely clear what these claims refer to, but they appear outdated and don't reflect the current state of our security practices. We have a rigorous security program overseen by our Board's Safety and Security Committee."
The report's authors acknowledge that things are slowly getting better. "According to several researchers we spoke to, security at frontier AI labs has improved somewhat in the past year, but it remains completely inadequate to withstand nation-state attacks," the report says. "According to former insiders, poor controls at many frontier AI labs originally stem from a cultural bias towards speed over security."
Independent experts agree many problems remain. "There have been publicly disclosed incidents of cyber gangs hacking their way to the [intellectual property] assets of Nvidia not that long ago," Greg Allen, the director of the Wadhwani AI Center at the think tank the Center for Strategic and International Studies, tells TIME in a message. "The intelligence services of China are more capable and sophisticated than those gangs. There's a bad offense/defense mismatch when it comes to Chinese attackers and U.S. AI firm defenders."
Superintelligent AI may break free
A third crucial vulnerability identified in the report is the susceptibility of datacenters, and AI developers themselves, to powerful AI models.
In recent months, studies by leading AI researchers have shown top AI models beginning to exhibit both the drive, and the technical skill, to "escape" the confines placed on them by their developers.
In one example cited in the report, during testing, an OpenAI model was given the task of retrieving a string of text from a piece of software. But due to a bug in the test, the software didn't start. The model, unprompted, scanned the network in an attempt to understand why, and discovered a vulnerability on the machine it was running on. It used that vulnerability, also unprompted, to break out of its test environment and recover the string of text.
"As AI developers have built more capable models, behaviors like this have become more common," the report says. "This happens because highly capable and context-aware AI systems can invent dangerously creative strategies to achieve their goals that their developers never anticipated or intended them to pursue."
The report recommends that any effort to develop superintelligence must include methods for containing powerful AI systems, and must give developers the ability to halt the development of more powerful AI systems if they judge the risk to be too high.
"Of course," the authors note, "if we've actually trained a real superintelligence that has goals different from our own, it probably won't…