Bryan Rotella on Fox News Radio: Building Trust in AI Decisions
As artificial intelligence continues to evolve faster than regulation can keep pace, legal and privacy expert Bryan Rotella is sounding the alarm: the real threat of AI isn’t a robot uprising — it’s a privacy crisis unlike anything we’ve faced before.
“People think AI tools like ChatGPT are just smarter versions of Google,” Rotella said during a recent interview. “But they’re not search engines — they’re conversations.”
Unlike with a typical web search, users share their most personal questions and fears with AI: how to handle a divorce, how to talk to their children, or even health concerns they’d hesitate to bring up with a doctor. That deeply private information, he warns, is being stored “in the cloud,” without the kind of privacy protections we expect in healthcare or legal settings.
Rotella pointed to a recent court case involving OpenAI that required the company to preserve user prompt data — including deleted sessions — as an example of how easily these conversations could be exposed. “There’s no AI driver’s ed, and there’s no AI HIPAA,” he said. “That’s a dangerous combination.”
While the U.S. government has started to address AI’s role in energy and national policy, privacy remains largely untouched. Rotella argues it’s time for a federal standard — an AI HIPAA — to protect users’ digital conversations with the same care as their medical records.
Until then, his advice is simple: treat AI like email. Use paid versions that limit data sharing, avoid typing anything you wouldn’t want public, and assume every prompt could one day be read in court.
“AI can make us smarter,” Rotella concluded, “but only if we get smart about how we use it.”