U.S. Senator Cynthia Lummis' proposed Responsible Innovation and Safe Expertise (RISE) Act has ignited discussions about balancing AI innovation with accountability. The legislation, introduced last week as the nation's first targeted liability reform for professional-grade AI, seeks to shield developers from civil lawsuits while requiring transparency about system capabilities.
Legal experts remain divided on the bill's core provision protecting AI companies from lawsuits when professionals like doctors or lawyers misuse their tools. Syracuse University's Hamid Ekbia calls the approach "timely but imbalanced," noting it places disproportionate responsibility on end-users while offering developers broad immunity.
"This creates dangerous asymmetry in risk allocation," Ekbia told Cointelegraph, emphasizing that model specifications alone don't ensure safe deployment. Meanwhile, Shipkevich Attorneys' Felix Shipkevich defends the provision as necessary to prevent "limitless exposure" for unpredictable AI behaviors.
The AI Futures Project's Daniel Kokotajlo acknowledges the bill's progress but highlights critical gaps. "Publishing technical specs without third-party audits creates false security," he said, noting companies can opt out of transparency by accepting liability. Current disclosures omit crucial details about training data biases and system objectives that professionals need for informed decisions.
One notable omission: the legislation doesn't address direct consumer-AI interactions, leaving unresolved questions about cases like the Florida chatbot suicide incident, where no professional intermediary existed.
While the RISE Act adopts a risk-based framework focused on processes, the EU's withdrawn AI Liability Directive emphasized individual rights. Shipkevich observes European laws typically demand upfront compliance with safety standards, contrasting with the U.S. bill's post-deployment transparency model.
University of Surrey's Ryan Abbott underscores the healthcare implications, where AI sometimes outperforms human practitioners. "Who bears liability when removing the physician from the loop actually improves outcomes?" he questions, highlighting uncharted legal territory.
Most analysts view the bill as a starting point requiring refinement. Americans for Responsible Innovation's Justin Bullock suggests strengthening audit requirements, while Shipkevich advocates adding concrete risk management obligations. As the December 2025 effective date approaches, stakeholders anticipate heated debates over the appropriate balance between innovation incentives and public protection.
"The real test will be whether Congress can evolve this framework to address emerging AI harms without stifling progress," concluded an industry insider familiar with the drafting process.