The Trump Administration is weighing whether AI developers should be required to submit their models for government review before public release to address national security and misuse risks.
The White House is considering a policy that would require leading frontier AI companies to submit their models for government review before release, according to The New York Times.
The objective is to identify national security risks before deployment, including potential misuse for cyberattacks, biological threat modeling, or large-scale disinformation.
The details of this review mechanism remain undefined. The Administration hasn't identified which agency would oversee the process, what would trigger a review, or whether the requirement would apply broadly or only to the most advanced systems. It is also unclear how much power the government would have to delay or block a model's release, or whether the process would function as a risk-disclosure mechanism with no enforcement power.
The proposed policy marks a shift for the Administration, which had previously relied on large frontier developers to self-police their models for risks.
For companies, the proposal signals a potential move toward pre-market compliance obligations similar to the regulatory regimes used in pharmaceuticals and defense technology. If implemented, developers could be required to document model capabilities, disclose testing results, and demonstrate mitigation measures before release.