U.S. Senator Cynthia Lummis has introduced the Responsible Innovation and Safe Expertise (RISE) Act of 2025, the first standalone bill offering AI developers conditional legal immunity from civil lawsuits in exchange for comprehensive transparency requirements. The legislation would require companies to publicly disclose training data, evaluation methods, and system specifications while preserving traditional liability standards for professionals who use AI tools in their practice.
What you should know: RISE creates a “safe harbor” provision that shields AI developers from civil suits only when they meet strict disclosure requirements.
- Developers must publish detailed model cards documenting training data, evaluation methods, performance metrics, intended uses, and limitations.
- Companies must also provide model specifications including full system prompts and behavioral instructions, with any trade-secret redactions requiring written justification.
- The protection disappears if developers miss 30-day update deadlines or act recklessly, and it doesn’t apply to fraud or knowing misrepresentation.
The big picture: The bill addresses growing concerns about opaque AI systems by establishing transparency as the price for limited liability protection.
- If enacted, the measure would take effect December 1, 2025, applying only to conduct after that date.
- Both the Senate and House would need majority votes to pass the bill, followed by President Trump’s signature.
Why this matters: The legislation represents Congress’s first concrete attempt to balance AI innovation with accountability through mandatory transparency requirements.
- Senator Lummis, a Wyoming Republican, frames the approach as “simple reciprocity” where developers gain legal protection in exchange for openness about their systems.
- The bill aims to resolve the uncertainty around liability rules that currently “chills investment and leaves professionals unsure where responsibility lies.”
Professional liability remains intact: The legislation preserves existing standards of care for doctors, lawyers, engineers, and other “learned professionals.”
- A physician who misreads an AI-generated treatment plan or a lawyer who files an unvetted AI-written brief remains liable to clients.
- The safe harbor specifically excludes non-professional use and maintains traditional malpractice standards.
What they’re saying: Lummis emphasized the need for public accountability in AI development.
- “Bottom line: If we want America to lead and prosper in AI, we can’t let labs write the rules in the shadows,” Lummis wrote on X. “We need public, enforceable standards that balance innovation with trust.”
Industry reaction: Daniel Kokotajlo of the AI Futures Project, a nonprofit that advised on the bill’s drafting, offers cautious support with reservations.
- He flags an opt-out loophole where companies can simply accept liability and keep specifications secret, “limiting transparency gains in the riskiest scenarios.”
- The 30-day window the bill allows between a model’s release and its required documentation “could be too long during a crisis.”
- He also warns that firms might over-redact, blacking out excessive information under intellectual property protection claims.