Following recent discussions over AI contributions to the LLVM open-source compiler project, the community has agreed to allow AI/tool-assisted contributions, provided a human first reviews the code before opening a pull request or similar. Strictly AI-driven contributions without any human vetting will not be permitted.
LLVM has committed its “human in the loop” policy for tool-assisted contributions, prompted by an increase in AI-generated nuisance contributions:
“Over the course of 2025, we observed an increase in the volume of LLM-assisted nuisance contributions to the project. Nuisance contributions have always been an issue for open-source projects, but until LLMs, we made do without a formal policy banning such contributions. However, LLMs are here, so we are adopting this policy, abbreviated as “human in the loop”, which requires that every contribution has a human author attesting to the value of that contribution, and that it is high enough quality that it is worth the time it takes to review the contribution.”
The policy in full can be found via this commit, but the heart of it comes down to:
“LLVM’s policy is that contributors can use whatever tools they would like to craft their contributions, but there must be a **human in the loop**. **Contributors must read and review all LLM-generated code or text before they ask other project members to review it.** The contributor is always the author and is fully accountable for their contributions. Contributors should be sufficiently confident that the contribution is high enough quality that asking for a review is a good use of scarce maintainer time, and they should be **able to answer questions about their work** during review.
We expect that new contributors will be less confident in their contributions, and our guidance to them is to **start with small contributions** that they can fully understand to build confidence. We aspire to be a welcoming community that helps new contributors grow their expertise, but learning involves taking small steps, getting feedback, and iterating. Passing maintainer feedback to an LLM doesn’t help anyone grow, and does not sustain our community.
Contributors are expected to **be transparent and label contributions that contain substantial amounts of tool-generated content**. Our policy on labelling is intended to facilitate reviews, and not to track which parts of LLVM are generated. Contributors should note tool usage in their pull request description, commit message, or wherever authorship is normally indicated for the work. For instance, use a commit message trailer like Assisted-by: [name of code assistant]. This transparency helps the community develop best practices and understand the role of these new tools.”
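As a simple illustration of that labelling guidance, a commit message using such a trailer could look like the following; the tool name is just a placeholder, and any trailer wording along these lines would satisfy the policy's transparency expectation:

```
Fix incorrect example in the documentation

Longer description of the change, written and reviewed by the
human author, who remains accountable for the contribution.

Assisted-by: <name of code assistant>
```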
Fairly straightforward, and it comes at a time when the Linux kernel and other open-source projects are also deciding on their own AI policies and documentation.
