The Trump administration plans to use artificial intelligence to write federal transportation rules, according to records from the U.S. Department of Transportation and interviews with six agency staffers.
The plan was presented to DOT officials last month at a demonstration of AI's "potential to revolutionize the way we write regulations," agency attorney Daniel Cohen wrote to colleagues. The demonstration, Cohen wrote, would "demonstrate exciting new AI tools available to DOT rule writers to help us do our work better and faster."
Discussion of the plan continued last week among the agency's leadership, according to meeting notes reviewed by ProPublica. Gregory Zerzan, the agency's general counsel, said at that meeting that President Donald Trump "is very excited about this initiative." Zerzan suggested that the DOT was at the forefront of a broader federal effort, calling the department the "tip of the spear" and "the first agency fully capable of using AI to set rules."
Zerzan seemed primarily interested in the amount of regulation AI could produce, rather than its quality. “We don’t need a perfect rule on XYZ. We don’t even need a very good rule on XYZ,” he said, according to meeting minutes. “We want good enough.” Zerzan added, “We’re flooding the zone.”
These developments have alarmed some at DOT. The agency's rules cover virtually every facet of transportation safety, including regulations that keep planes in the air, prevent gas pipelines from exploding and keep freight trains carrying toxic chemicals from sliding off the rails. Why, some staffers wondered, would the federal government outsource the writing of such critical standards to an emerging technology notorious for making mistakes?
The answer from the plan's boosters is simple: speed. Writing and reviewing complex federal regulations can take months, sometimes years. But with DOT's version of Google Gemini, employees could generate a proposed rule in minutes or even seconds, two DOT employees who attended the December demonstration recalled the presenter saying. Besides, most of what fills the preambles of DOT regulatory documents is just "word salad," one employee recalled the presenter saying. Gemini can do word salad.
Zerzan reiterated the ambition to accelerate regulation with AI during the meeting last week. The goal, he said, is to dramatically compress the rulemaking timeline so that transportation rules can go from idea to completed draft in just 30 days, ready for review by the Office of Information and Regulatory Affairs. That should be possible, he said, because "it shouldn't take more than twenty minutes to get a draft rule out of Gemini."
The DOT plan, which has not been previously reported, represents a new front in the Trump administration's campaign to integrate artificial intelligence into the work of the federal government. This administration is not the first to use AI; federal agencies have been gradually integrating the technology into their work for years, including to translate documents, analyze data and categorize public comments, among other applications. But the current administration is particularly enthusiastic about the technology. Trump issued multiple executive orders last year in support of AI. In April, Office of Management and Budget Director Russell Vought distributed a memo calling for the federal government to accelerate its use of the technology. Three months later, the administration released an "AI Action Plan" containing a similar directive. None of these documents, however, explicitly called for using AI to write regulations, as DOT now plans to do.
Those plans are already underway. The department used AI to draft a yet-unpublished Federal Aviation Administration rule, according to a DOT official briefed on the matter.
Skeptics say that large language models such as Gemini and ChatGPT should not be trusted with the complex and consequential responsibilities of governance, because the models are prone to errors and incapable of human reasoning. But proponents see AI as a way to automate mindless tasks and wring efficiencies out of a slow-moving federal bureaucracy.
Such optimism was on display earlier this month in a windowless conference room in Northern Virginia, where federal technology officials gathered at an AI summit to discuss adopting an "AI culture" in government and "upskilling" the federal workforce to use the technology. Among them was Justin Ubert, division chief for cybersecurity and operations at DOT's Federal Transit Administration, who spoke on a panel about the Department of Transportation's plans for "rapid adoption" of artificial intelligence. Many people see humans as a "bottleneck" that slows AI, he noted. But eventually, Ubert predicted, humans will recede into a supervisory role, monitoring "AI-to-AI interactions." Ubert declined to speak to ProPublica on the record.
A similar optimism about AI's potential permeated the December presentation at DOT, which was attended by more than 100 DOT employees, including division heads, senior lawyers and regulatory agency officials. The presenter enthusiastically told them that Gemini could handle 80% to 90% of the work of writing regulations, leaving DOT employees to do the rest, one participant recalled the presenter saying.
To illustrate this, the presenter asked the audience to suggest a topic on which DOT might need to write a Notice of Proposed Rulemaking, a public document outlining an agency's plan to implement a new regulation or amend an existing one. He then plugged keywords for the topic into Gemini, which produced a document that looked like a notice of proposed rulemaking. The actual proposed text for the Code of Federal Regulations, however, appeared to be missing, an employee recalled.
The presenter expressed little concern that the AI-produced regulatory documents could contain so-called hallucinations, the erroneous text that large language models like Gemini often generate, according to three attendees. That, he said, is where DOT personnel would come into the picture. "It seemed like his vision for the future of regulation at DOT was that it would be our job to proofread this machine product," one employee said. "He was very excited." (Attendees could not clearly remember the name of the main presenter, but three said they thought it was Brian Brotsos, the agency's acting chief AI officer. Brotsos declined to comment and referred questions to the DOT press office.)
A DOT spokesperson did not respond to a request for comment; Cohen and Zerzan also did not respond to messages seeking comment. A Google spokesperson had no comment.
The December presentation left some DOT employees deeply skeptical. Regulation, they said, is complicated work that requires expertise not only in the subject at hand but also in existing statutes, regulations and case law. Errors in DOT regulations can lead to lawsuits, or even to injuries and deaths in the transportation system. Some rule writers have decades of experience. But the presenter seemed to dismiss all of that, attendees said. "It seems wildly irresponsible," said one, who like the others requested anonymity because they were not authorized to speak publicly on the matter.
Mike Horton, DOT’s former acting chief artificial intelligence officer, criticized the plan to use Gemini to write regulations, likening it to “having an intern in high school doing your regulations.” (He said the plan was not yet in the works when he left the agency in August.) Noting the life-or-death stakes of transportation safety rules, Horton said the agency’s leaders “want to go fast and break things, but going fast and breaking things means people are going to get hurt.”
Academics and researchers who track the use of AI in government had mixed opinions about the DOT plan. If agency rule writers use the technology as a kind of research assistant, with sufficient oversight and transparency, it could be useful and save time. But if they cede too much responsibility to AI, that could lead to deficiencies in crucial regulations and run afoul of the requirement that federal rules be based on reasoned decision-making.
“The fact that these tools can produce many words does not mean that those words together will produce a high-quality government decision,” said Bridget Dooling, a professor at Ohio State University who studies administrative law. “It’s so tempting to try to figure out how to use these tools, and I think it would make sense to try. But I think it should be done with a lot of skepticism.”
Ben Winters, director of AI and privacy at the Consumer Federation of America, said the plan was particularly problematic given the exodus of subject-matter experts from government following last year's cuts to the federal workforce. DOT has suffered a net loss of almost 4,000 of its 57,000 employees since Trump returned to the White House, including more than 100 lawyers, federal records show.
Elon Musk's Department of Government Efficiency has been a strong supporter of AI adoption within government. In July, The Washington Post reported on a leaked DOGE presentation calling for the use of AI to eliminate half of all federal regulations, in part by having AI draft regulatory documents. "Writing is automated," the presentation read. DOGE's AI program "automatically prepares all filing documents for attorneys to edit." DOGE and Musk did not respond to requests for comment.
The White House did not answer a question about whether the administration plans to use AI in rulemaking at other agencies as well. Four top federal technology officials said they were not aware of such a plan. As for DOT's "tip of the spear" claim, two of those officials expressed skepticism. "There's a lot of saying, 'We want to appear to be a leader in federal AI adoption,'" one said. "I think it's mainly a marketing thing."
Alex Mierjeski contributed research.
ProPublica is a nonprofit newsroom that investigates abuses of power.
