The Rules of the Road for AI: Trust Before Adoption

By Rotella Rotella. Published exclusively in The Hill, Dec. 18, 2025.

Every day, most Americans trust things they do not fully understand. 

They trust traffic lights. Crosswalks. Driver’s education. The rules are visible; the expectations are shared. Trust follows. 

That was not always true. 

In the early 1900s, American streets were chaos. Automobiles were hailed as the future, but they arrived faster than society could prepare for them. Horses, wagons, pedestrians and cars all fought for the same space. Accidents piled up. Public confidence collapsed. Innovation was real, but fear spread faster. 

What saved the automobile was not deregulation. It was trust. 

The person who understood that best is someone almost no one remembers today. William Phelps Eno watched traffic chaos consume American cities and reached a simple conclusion: innovation cannot scale unless ordinary people understand the rules that govern it. His rules of the road did not slow progress. They made modern life possible. 

Artificial intelligence has reached the same moment. 

President Trump’s recent executive order limiting state-level obstruction of national AI policy is an acknowledgment that AI is no longer theoretical. Federal clarification arrives when the road is already crowded. 

The risk is that most people will gloss over that reality. Executive orders can feel abstract: above our heads, easy to ignore. But they are not. 

America does not have an AI innovation problem. It has a trust problem. Trust only works when people understand the rules, whether they never touch AI, use it casually, or rely on it constantly.

AI adoption is moving at a pace we have never seen before. In roughly two years, generative AI use reached close to 40 percent of American adults, outpacing the early spread of the internet and personal computers. At the same time, only about a third of Americans trust businesses to use AI responsibly, even as usage continues to grow. 

That combination is volatile.  

Some argue that states should regulate AI independently. On paper, that sounds reasonable. In practice, it guarantees chaos. Fifty rulebooks do not create safety. They create confusion. 

Imagine if red lights meant stop in Florida but yield in Ohio. Imagine if crosswalks protected pedestrians in some states but not in others. Imagine if driver’s education taught different meanings for the same road signs depending on which border you crossed. 

No one would trust the system. No one would drive. 

AI works the same way. Algorithms do not stop at state lines. A hiring tool used in Texas screens applicants in New York. A healthcare model trained in California influences care decisions in Florida. Fragmentation does not protect people. It just confuses them. 

Uniform rules of the road never eliminated local control. Speed limits still vary. Licensing ages differ. Enforcement looks different. But the rules that make trust possible are shared. 

Here is the risk most Americans do not realize they are already taking. 

People are talking to AI as if it were a trusted adviser. They confide. They vent. They speculate. They assume those conversations disappear. 

They do not. 

AI chats are not privileged. They are not confidential. They are recorded, retained and increasingly discoverable in lawsuits and government investigations. In one recent arson case, investigators relied on a suspect’s AI chat history to identify and charge him. 

That was justice. But it should give people pause. If AI conversations can surface in criminal cases, they can surface in civil litigation, employment disputes, custody battles and regulatory probes. Words typed casually today can reappear years later, stripped of context and weaponized. 

This shadow AI exposure is already reshaping lawsuits, reputations and relationships. That is how trust breaks. 

Let’s look at another analogy. Nuclear power promised almost limitless progress until a single meltdown at Chernobyl, in Soviet Ukraine, poisoned global trust and rewrote the rules for everyone. One accident in one place reset expectations around the world.

AI is running the same risk. A single major failure, breach or scandal could become its Chernobyl, an event so corrosive to confidence that it halts adoption and invites heavy-handed control far beyond the borders where it occurs.

While Americans debate whether AI rules are too complex or arriving too soon, China is rapidly standardizing AI governance frameworks and promoting them internationally, positioning itself as the predictable, safety-first alternative in global rulemaking. Safety defined by control is not an American value. But trust vacuums get filled.

What William Phelps Eno understood then is what AI needs now: rules simple enough to explain, visible enough for people to see, and overseen by humans who can intervene. Even federal institutions are converging on this reality through the NIST AI Risk Management Framework.

Rules of the road only work when people understand them. If people do not understand the rules, they assume there are none. And when trust breaks, innovation does not slow — it melts down. 

America has done this before. It built trust at scale when innovation threatened chaos. We just need to remember how.