
AI Is Everywhere. Nobody Is Teaching Us How to Use It.
My grandfather taught me to read at the racetrack.
Not books. Racing forms. He put a pencil in my hand at Belmont and walked me through every column. What a favorite meant. What a longshot cost. How to spot risk in a row of numbers. He wasn’t making me a gambler. He was using an adult hobby to teach his grandson how to pay attention.
And when I wanted to place a bet, I held his hand, walked to the window, and told the teller what I wanted. The teller checked that the horse hadn’t been scratched, the race was still open, and the bet was valid before taking my money. If you didn’t understand the form, you couldn’t even place the bet. Two adults stood between what I learned and what I could do with it.
That layer is gone.
In a recent NBC News poll, fifty-seven percent of voters said the risks of artificial intelligence outweigh its benefits. Yet more than half have used an AI tool in just the past few months. No one walked them through it. No one is standing next to them. There is no teller.
On March 4, two lawsuits were filed the same day.
In one, an insurance company alleges that ChatGPT functioned like an unlicensed attorney. After a disability claim was settled and dismissed with prejudice, the claimant uploaded her lawyer’s letter and asked the chatbot if she was being gaslighted. The system responded by generating legal arguments, drafting filings, and citing cases that did not exist. Dozens of meritless motions followed, costing the insurer hundreds of thousands of dollars. These are allegations, not findings. But they reflect something new. A system that does not just retrieve information but performs judgment without accountability.
The same day, a Florida father filed a wrongful death lawsuit alleging that Google’s Gemini chatbot contributed to his 36-year-old son’s suicide after weeks of intensive interaction. According to the complaint, the system fostered a collapsing alternate reality, presented itself as sentient, and guided the user through increasingly detached thinking. Thirty-eight sensitive-query flags triggered inside Google’s system. No human intervened.
In Canada, a family has filed a civil claim alleging ChatGPT helped an 18-year-old plan a mass shooting that killed eight people at a school in British Columbia. The lawsuit claims employees flagged the account months earlier and recommended alerting police, that leadership declined, and that the shooter opened a second account and carried out the attack.
Most people are not using AI this way. They are drafting emails, checking symptoms, helping their kids with homework.
But systems used by hundreds of millions cannot be designed only for the average case.
We wouldn’t let our children sleep over at a house without knowing the parents. Yet we allow our families to engage with systems we do not understand, designed by people we have never met, operating without any shared rules for how they should guide, refuse, or redirect human behavior.
A woman could not tell the difference between a chatbot and a lawyer. A man could not tell the difference between a chatbot and a conscious being. In both cases, no one was standing next to them when it mattered.
Social media took twenty years to reach a courtroom reckoning. AI has arrived there before most Americans could explain what it is.
Regulation matters. Privacy protections matter.
Neither solves this.
The missing layer is education built into the system itself. Not instructions buried in terms of service, but intentional design that teaches users what the technology is, what it is not, and when to stop trusting it. A modern equivalent of the racing form and the teller. Clear signals. Defined boundaries. Human guardrails where judgment begins to matter.
We have done this before.
In 1969, Fred Rogers sat before a skeptical Senate subcommittee and made a simple case. Television was not going away. Children were already watching it. The question was whether anyone would build programming worthy of their trust.
He persuaded Senator John Pastore to fund it.
What followed was not just more television. It was better television. Sesame Street. Mister Rogers’ Neighborhood. Programming designed by people who understood that access without guidance is not empowerment.
AI needs that same layer now.
In a Harvard study, researchers built an AI tutor around established teaching methods. Students learned significantly more than peers in an active learning classroom. The difference was not just the model. It was the structure. Educators decided how it would respond, what it would refuse, and how it would guide users when they were uncertain.
That is AI with a teacher built in.
What we are seeing in courtrooms is AI without one.
My grandfather did not teach me to gamble. He taught me how to read the form and made sure he was standing next to me when it counted.
Right now, AI has no form.
And nobody is standing at the window.