The outgoing head of the US Department of Homeland Security believes Europe’s “hostile” relationship with tech companies is hampering the development of a global approach to regulating artificial intelligence and could create security vulnerabilities.
Alejandro Mayorkas told the Financial Times that the US – home to the world’s top artificial intelligence groups, including OpenAI and Google – and Europe are not on a “strong footing” because of their differing regulatory approaches.
He stressed the need for “harmonization across the Atlantic” and expressed concern that relations between governments and the technology industry in Europe are “more adversarial” than in the US.
“Different governance of a single asset creates a potential for disorder, and disorder creates a vulnerability from a safety and security perspective,” Mayorkas said, adding that companies would also struggle to navigate different regulations in different jurisdictions.
The warning comes after the EU this year brought into force its AI Act, considered the strictest regime governing the emerging technology anywhere in the world. It imposes restrictions on high-risk AI systems and rules designed to create more transparency about how AI groups use data.
The UK government is also planning to introduce legislation that would force AI companies to give access to their models for security assessments.
In the US, President-elect Donald Trump has pledged to repeal his predecessor Joe Biden’s executive order on AI, which established a safety institute to carry out voluntary testing of models.
Mayorkas said he did not know whether the US safety institute “would remain” under the new administration, but warned that prescriptive laws could “stifle and harm US leadership” in the rapidly developing sector.
Mayorkas’ comments highlight the fault lines between European and US approaches to AI oversight, as policymakers try to balance innovation with security concerns. DHS is tasked with protecting the US from threats ranging from terrorism to cyber attacks.
That responsibility will fall to Kristi Noem, the South Dakota governor whom Trump chose to lead the department. The president-elect has also appointed venture capitalist David Sacks, a critic of technology regulation, as his AI and crypto czar.
In the US, attempts to regulate the technology have been thwarted by fears that regulation could stifle innovation. In September, California Governor Gavin Newsom vetoed an AI safety bill that would have governed the technology within the state, citing such concerns.
The Biden administration’s early approach to AI regulation has been accused both of being too heavy-handed and of not going far enough.
Silicon Valley venture capitalist Marc Andreessen said during a podcast interview this week that he was “very scared” of government officials’ plans for AI policy after meetings with Biden’s team this summer. He described the officials as “out for blood”.
Republican Senator Ted Cruz also recently warned of “heavy-handed” foreign regulatory influence on the sector by policymakers in Europe and Britain.
Mayorkas said: “I am concerned about the rush to legislate at the expense of innovation and inventiveness, because goodness knows our regulatory apparatus and our legislative apparatus is not nimble.”
He defended his department’s preference for “descriptive” rather than “prescriptive” guidelines. “A mandatory structure is dangerous in a rapidly evolving world,” he said.
DHS has been actively integrating AI into its operations, aiming to demonstrate that government agencies can adopt new technologies while deploying them safely.
It has deployed generative AI models to train refugee officers and conduct role-play interviews. This week it launched an internal DHS AI chatbot, powered by OpenAI through Microsoft’s Azure cloud computing platform.
During his tenure, Mayorkas established a framework for the secure deployment of AI in critical infrastructure, with recommendations for cloud and compute providers, AI developers, and infrastructure owners and operators on addressing risks. These included monitoring the physical security of the data centers that power AI systems, tracking activity, evaluating models for risks, biases and vulnerabilities, and protecting consumer data.
“We need to work well with the private sector,” he added. “They are a key stakeholder in our country’s critical infrastructure. Most of it is actually owned and operated by the private sector. We must adopt a model of partnership and not a model of adversity or tension.”