
Context for Congress’s failed moratorium on state Artificial Intelligence regulations

One of the many features of the One Big Beautiful Bill Act – removed before final passage of the bill – was a provision that would have prohibited states that chose to apply for a specific pot of tech funding from implementing legal regulations on Artificial Intelligence technologies for a period of time (initially 10 years, then amended to five years). Specifically, the language would have prevented states that received the federal funding from “enforc[ing] any law or regulation regulating … artificial intelligence systems.”

This proposed federal policy reflects the fact that AI is already making Americans’ lives better through things like personalizing education, improving healthcare, and enhancing our work. However, a survey commissioned by the Institute for Family Studies found that 55% of 2024 voters opposed the federal moratorium and only 18% supported it.

More interesting than public opinion on the AI moratorium idea is the federalism argument about it.

Federalism and AI regulation

One major concern was that the moratorium would have restricted the authority of the states – essentially that it contradicted the principle of federalism. The National Conference of State Legislatures argued that, for example, it would have prevented state and local governments from making necessary decisions about where AI facilities are located and how they are operated. It also argued that it would have prevented state-based experimentation in “privacy, cybersecurity, fraud, workforce, education and public safety” and “potentially leave communities vulnerable in the face of rapidly advancing technologies.”

However, this argument misses an important aspect of federalism. Our federal constitution was enacted more than 200 years ago in opposition to a system of unchecked lawmaking power for states, represented by the Articles of Confederation. Federalism actually protects and promotes the authority of Congress to make laws in the particular realms enumerated in the U.S. Constitution.

Article I, Section 8 of the U.S. Constitution states that “Congress shall have power to … regulate commerce with foreign nations, and among the several states.” Being no respecter of state borders, the development and deployment of AI across the nation is a matter of interstate commerce. Additionally, Article VI of the U.S. Constitution states that “the laws of the United States which shall be made in pursuance [of the Constitution] … shall be the supreme law of the land.” Being a matter of interstate commerce, Congress is well within its proper and constitutional role under federalism to preempt state regulation of AI.

Indeed, this type of provision was understood at the Founding to be a particularly important feature because Congress could prevent states from hampering trade and economic development by enacting laws that discriminated against businesses or citizens in other states. Congress could ensure that regional and national economic development would not be blocked by a patchwork of state and local laws.

When Congress acts under this authority, conflicting state laws are preempted.

These types of concerns are in play in the current debates over AI regulation. Donald Bryson of the John Locke Foundation argued: “Without a clear framework for cooperation between federal and state actors, we risk building a patchwork of conflicting local mandates that confuses developers, deters investment, and isolates jurisdictions from national progress.” He also warned about “premature regulation” – rules adopted before either the harms needing to be addressed or the risks of regulation itself are clear.

Neil Chilson at the Abundance Institute similarly argues that state laws could use broad and sweeping definitions that impact even technologies like spell checking.

AI regulation landscape in the states

Since the AI moratorium did not move forward, the regulatory landscape among the states has grown in importance.

About half of states prohibit the use of AI-generated “deepfakes” (“believable, realistic videos, pictures, audio, and text of events which never happened”) in elections. Arkansas has laws prohibiting sexual materials involving children created by AI and prohibiting nonconsensual use of an individual’s image in AI. Utah law prohibits the use of personal information collected by AI mental health chatbots. Other states have addressed additional implications of AI.

Sometimes state legislative initiatives shape federal law. Two states have enacted regulations of app stores, and other states are considering legislation. Now, Congress is considering a regulation similar to these state approaches.

So, perhaps state approaches to AI regulation will eventually prompt Congressional action. The risk, of course, is that states will enact divergent rules that serve their particular political or local interests but undermine the common good and general welfare created by the regulatory certainty that promotes interstate commerce.

Lawmakers at the state and federal levels should take these risks seriously. The wellbeing of American families, the success of American businesses, and the national security of America depend on it.

Moving forward, policymakers at the federal level should respond to the robust national debate over AI regulation and craft a thoughtful framework that protects Congress’s appropriate constitutional authority to regulate interstate commerce, responds to understandable state concerns, and ensures the principles of public health, safety, and innovation are properly balanced now and in the future.

Dallyn Edmunds, a policy intern at Sutherland Institute, provided excellent research assistance for this explainer.

