My Uber executed a perfect left turn before waiting with extraordinary patience at a red light, never inching forward as I have so often done in my frustration with all that stands between me and my destination. All things considered, the drive was going smoothly, and the experience was enhanced by the fact that I didn’t have to make conversation with the driver. Not because I had requested no talking—but because there was no driver.
Instead, the steering wheel turned of its own accord, guided by data from the array of LIDAR sensors around the car that effectively mapped the vehicle’s surroundings. Interestingly, the Waymo didn’t draw the surprised glances I had expected; the absence of a person in the driver’s seat seemed to be just another ordinary feature of Austin traffic.
As the Waymo demonstrated, AI has already become a well-incorporated part of daily life; we use it for everything from answering silly questions to navigating the complex maze of urban streets.
The rapidly expanding use of AI raises a series of questions: Who should regulate the technology as it continues to grow? Is it a state responsibility, as it is with Waymo’s operations? Or should the federal government also play a role in AI regulation?
The AI moratorium recently struck from President Donald Trump’s “One Big, Beautiful Bill” posed those same questions to U.S. leaders. The tension between consumer protection and competitive innovation is the driving force behind this dilemma, and Texas provides an excellent example of what good federal AI regulation should look like.
State-enacted laws offer a more effective, tailored approach to local problems and limitations; even so, the federal government has an important role to play in constructing nationwide AI standards.
A well-constructed federal framework could help states that would otherwise struggle to pass impactful AI legislation to curb obscene material and promote transparency, while making it easier for AI companies to conduct business across state lines.
OpenAI and Anduril were among the tech companies that voiced support for the moratorium, making the well-grounded argument that, absent a federal framework, AI businesses must comply with dozens of separate sets of regulations, slowing growth.
However, completely stripping regulatory power from the states would impose a one-size-fits-all, top-down system with no promise of good results.
As Justice Brandeis pointed out, “a single courageous state may, if its citizens choose, serve as a laboratory; and try novel social and economic experiments without risk to the rest of the country.” In other words, states having the power to regulate AI creates room for individual political experiments which the federal government can use to shape nationwide legislation.
That is the genius of state powers: they create a competitive marketplace of political ideas that the federal government can draw on for inspiration, adopting the approaches that succeed in walking the line between encouraging technological innovation and limiting its harmful aspects.
One state that has maintained a healthy balance between consumer protection and innovation is Texas, which has implemented a regulatory framework that prioritizes accountability, transparency, and security without the overburdensome regulations seen in California.
The Texas Responsible Artificial Intelligence Governance Act (TRAIGA) offers a good example of the kind of law that could anchor a federal framework for limiting AI’s harmful effects: it creates common-sense guidelines, increases accountability, and mandates transparency while giving AI companies a “sandbox” in which to innovate with greater freedom.
Legislation like TRAIGA has made Texas attractive to both consumers and AI companies, and the state’s record shows that its approach to balancing consumer protection with innovation in technology policy has been successful.
Put simply, Texas’ previous tech legislation has a history of creating an attractive environment for businesses and individuals alike, outpacing even states that claim to be leading AI innovation.
The U.S. should use Texas as a proven model for federal regulation that prevents the misuse of AI without removing regulatory power from the states. That structure would best maintain technological competitiveness with countries like China while keeping consumers protected from harmful AI content.