Bryan Rotella on AI’s “Kill Switch”

California lawmakers are considering a sweeping new bill that would require tech companies to install a so-called “AI kill switch” — a government-regulated mechanism designed to prevent artificial intelligence from developing “hazardous capabilities,” such as launching weapons or causing other catastrophic harm.

But according to AI policy specialist and trial lawyer Bryan Rotella, the proposal is an overreaction that could stifle innovation in industries where AI is already saving lives.

“An AI kill switch would be like putting an expensive safety brake on a car that isn’t even moving yet,” Rotella explained on NewsNation. “It’s premature, and it risks stopping amazing developments — including early cancer detection tools in healthcare — before they ever reach patients.”

Rotella argued that the fear driving this legislation owes more to science fiction than to science fact. “A lot of people are treating AI like Oppenheimer’s atomic bomb,” he said. “They’re reacting as if Chernobyl already happened.”

The real issue, Rotella argued, isn’t that AI will destroy humanity — it’s that we haven’t yet built the ethical and operational frameworks to use it responsibly. “The real danger isn’t sentient AI. It’s negligence and greed. The question isn’t whether AI will take over — it’s whether companies are putting profits over people.”

Instead of forcing developers to design arbitrary kill switches, Rotella advocates for policies that require “humans in the loop.” These would ensure every AI-driven system has a qualified professional overseeing its actions — much like compliance officers in finance or healthcare.

“AI needs a culture of safety,” Rotella said. “Just as the Sarbanes-Oxley Act created accountability in financial institutions, AI companies need compliance plans and safety directors to ensure human oversight. That’s where regulation should start — not with science-fiction panic buttons.”

He cited recent class action lawsuits against health insurers accused of using AI to wrongly deny claims as an example of real-world harm that is already happening — and where oversight could make an immediate difference.

Rotella also warned that if the proposed law passes, the economic consequences for California could be severe.

“If this legislation becomes law, you’ll see a mass exodus of AI companies from California,” he said. “States like Florida or Texas — where I’m based — could become the next major hubs for technology in the 21st century. This could cripple Silicon Valley’s leadership in innovation for decades.”

While AI undoubtedly poses challenges that must be addressed through thoughtful governance, Rotella’s message is clear: the focus should be on building guardrails that protect people without derailing progress.

“AI is learning fast — but so can we,” he concluded. “Let’s keep a human in the driver’s seat instead of pulling the emergency brake before the car leaves the driveway.”