
How AI Conversations Are Already Court Evidence
Date: October 03, 2025
In his recent Washington Examiner article, “Artificial Intelligence, Healthcare, Trust & Professional Guardrails,” Bryan Rotella argues that artificial intelligence is rapidly transforming medicine, but not necessarily in ways that patients can trust. The piece examines the tension emerging between innovation and accountability. (Read the full article at the Washington Examiner.)
Rotella opens by acknowledging that AI holds extraordinary promise for healthcare—more accurate diagnostics, faster drug discovery, smarter patient care. But he immediately warns that without professional boundaries and ethical guardrails, AI’s power could work against those it’s meant to help.
He points out three critical tension points:
Trust Gap: Patients already distrust large systems. If AI becomes the “doctor behind the scenes,” how can people know their data is safe, their care is personalized, and someone is held accountable for errors?
Lack of Regulation: AI in medicine is racing ahead of laws and standards. What happens when an algorithm errs? Who is liable—the hospital, the software company, or the clinician?
Standards of Practice: If AI tools act like clinicians, at what point must they be held to the same standards as doctors or nurses? Rotella argues that professionals must remain in control and accountable.
Rotella makes a compelling case that adopting AI first and regulating later is a dangerous gamble. He warns that care providers, tech companies, and even policymakers are already slipping into a mindset of “ship it, then fix it.” But that approach doesn’t work when human lives are at stake.
He suggests several guardrail principles:
Transparency: AI systems should disclose how decisions are made (where feasible).
Accountability: People—not just code—must take responsibility when errors occur.
Standards: The medical profession must define ethical norms for AI usage.
Patient Rights: Patients should retain control over how AI is used in their care.
Without proper guardrails, AI in healthcare risks becoming a black box: patients won’t know how decisions are made, clinicians won’t know how to defend their choices, and companies can hide behind “proprietary algorithms.” If trust collapses, people may refuse the very advanced care that cutting-edge AI promises to improve.
With guardrails in place, however, AI can fulfill its potential: to augment human clinicians, not replace them; to offer personalized treatment with transparency; and to deliver innovation people can trust.