Earlier this week, social media was wowed by images from the streets of Chinese cities showing senior citizens lining up to have OpenClaw, the always-on AI assistant, installed on their laptops, desktops, and other devices. Areas like Shenzhen and Wuxi offered subsidies to try to scale up adoption of the tool and capitalize on its capabilities. An enormous proportion of all OpenClaw instances installed worldwide, as tracked by public dashboards, emanates from China.
But just as quickly as China adopted OpenClaw, it now appears to be shunning it. The country’s internet emergency response center has issued an official warning about the risks the technology poses. The central government has sent out diktats to government agencies and state-owned enterprises, warning them against installing OpenClaw on their systems. The private sector has also responded. The same pop-up providers of installation services are now offering to uninstall unwanted OpenClaw instances for a fee.
“It’s almost a notice from the Department of Stating the Bleeding Obvious,” says Alan Woodward, a cybersecurity professor at the University of Surrey in England. “Everyone has been saying ‘don’t be so silly as to give agentic AI access to any valuable data.’” Yet Woodward points out that China’s response is more than that—they appear to recognize that AI adoption has been so rapid that it presents a prime target for supply chain attacks. “Attackers were bound to produce malicious add-ons and plug-ins,” he says.
China can’t seem to make up its mind about OpenClaw, says Ryan Fedasiuk, a fellow at the American Enterprise Institute covering China and its tech development. “Beijing is simultaneously banning OpenClaw on government networks while local governments in Shenzhen and Wuxi are subsidizing companies that build on top of it,” he says. That points to a dual focus, Fedasiuk reckons.
“The Chinese government aims to capture the economic upside of agentic AI while keeping it out of the party-state’s own bloodstream,” Fedasiuk says. However, how long that balance can hold is debatable, not least because of the way every private-sector actor is trying to adopt agentic AI, he adds.
“Banning agents in 2026 is like trying to ban spreadsheets in 1985, or Google Sheets in 2013,” he says. “The productivity gains are enormous, and the opportunity cost of abstaining from the use of agents will eventually become untenable.”
Still, Fedasiuk points out that China’s OpenClaw ban seems eminently sensible. “Governments should be alarmed by the cybersecurity implications of AI agents,” he says. “Social norms around the technology are progressing such that many hackers will soon no longer need to crack the encryption that guards valuable files or digital services, but merely gaslight a piece of software that has already been given access to them.” The problem is that the ban is out of step with current thinking about AI.
