
House Introduces The TRAIN Act, Seeking Transparency In AI Training Models

The Transparency and Responsibility for Artificial Intelligence Networks Act would allow copyright owners to seek subpoenas for records showing whether their works were used to train AI models.


On January 22, 2026, Representative Madeleine Dean (D-Pa.) and Representative Nathaniel Moran (R-Tex.) introduced H.R. 7209, the Transparency and Responsibility for Artificial Intelligence Networks (TRAIN) Act, in the U.S. House of Representatives. The bill was referred that day to the House Committee on the Judiciary.

The proposed law would add a new Section 514 to Chapter 5 of Title 17 of the United States Code to establish an administrative subpoena process for copyright owners to obtain records or copies relating to artificial intelligence models.

Under the bill text, the subpoena process would enable a copyright owner or an authorized representative who has a good-faith belief that a generative AI model was trained on their copyrighted works to file a proposed subpoena with a U.S. district court clerk. If the proposed subpoena and sworn statement meet statutory criteria, the clerk would be required to issue and sign the subpoena.

The bill defines “artificial intelligence” by reference to the National Artificial Intelligence Initiative Act of 2020 and describes an “artificial intelligence model” as a component of an information system that uses computational, statistical, or machine-learning techniques to produce outputs from inputs. A “generative artificial intelligence model” is one that generates synthetic content, such as text, images, audio, or video. A “developer” is a person or a state or local government agency that designs, codes, produces, owns, or substantially modifies a generative model, excluding noncommercial end users. “Training material” covers individual works or components used to train a generative model.

If issued, subpoenas would require developers to provide records or copies sufficient to identify the training material used in model training. The bill would require confidentiality for disclosed material and permit courts to impose sanctions if a subpoena is sought in bad faith. Noncompliance with a subpoena could create a rebuttable presumption that the developer made copies of the copyrighted work.

Representative Dean said in a press release that “there is no path for creators to know if their work has been used — without their permission and without compensation — to train AI models” and that “our laws must catch up” to address AI training practices. Representative Moran added, “We must advance it [AI] responsibly by protecting American creators while encouraging technologies that reward creativity, collaboration, and proper attribution.”

The TRAIN Act remains at the committee stage. If enacted, it would take effect immediately and apply to proceedings in federal district courts nationwide.
