America’s AI Reckoning: What Every State Is Doing About Artificial Intelligence Right Now

Something remarkable happened in statehouses across America in 2025: for the first time in the country’s history, every single state legislature, along with Puerto Rico, the U.S. Virgin Islands, and Washington, D.C., introduced legislation dealing with artificial intelligence. The technology that powers chatbots, healthcare diagnostics, hiring algorithms, and weapons systems has finally forced itself into every corner of the American democratic process simultaneously.

AI, the use of computer systems to perform tasks that normally require human intelligence, such as learning and decision-making, has the potential to spur innovation and transform industry and government alike. As AI advances and widespread adoption of these tools increases, government, business, and the public are grappling with the risks and benefits of deploying these systems across nearly every domain of modern life.

According to the National Conference of State Legislatures (NCSL), 38 states adopted or enacted around 100 measures during the 2025 session, a historic legislative wave that touched everything from who legally “owns” AI-generated content to whether a robot can be used to stalk someone. What follows is a comprehensive look at the landscape, the themes, and the most consequential laws to emerge from this unprecedented moment in American governance.

Why States Are Acting Now

The surge in state-level AI legislation is not occurring in a vacuum. The federal government has moved fitfully on AI regulation, with the Trump administration revoking earlier executive orders on AI safety while simultaneously treating frontier AI as national security infrastructure. Into that regulatory vacuum, states have stepped, sometimes cooperatively, sometimes in ways that create a patchwork of conflicting obligations for companies operating nationally.

State governments are simultaneously users and regulators of AI. Governments across the country are exploring how AI can enhance customer service, improve health care facility inspections, and advance roadway safety. But legislators, industry, and other stakeholders have also engaged in robust discussions about the concerns around potential misuse or unintended consequences. The dual role as adopter and watchdog gives state legislatures a unique vantage point that federal regulators often lack.

State legislators across the nation introduced over 1,000 measures on artificial intelligence in 2025, as generative AI products and other AI uses surged. The debates in statehouses addressed questions about the potential for AI to create harm while also acknowledging the enormous economic opportunity it presents.

Six States, Six Different Frontiers

The breadth of state AI legislation in 2025 is best understood through the specific laws that emerged — each one addressing a different pressure point where AI intersects with people’s lives. The NCSL highlighted the following as representative of the session’s scope:

Arkansas

AI-Generated Content Ownership

Clarifies who owns AI-generated content: the person who provides data to train the model, or an employer when the content is produced in the course of employment duties. The law specifies that generated content cannot infringe on existing copyright or intellectual property rights.

Montana

“Right to Compute” Law

Sets requirements for AI-controlled critical infrastructure, including risk management policies based on NIST standards. Crucially, the law also prohibits the government from restricting private ownership or use of computational resources for lawful purposes, unless a compelling government interest exists.

New York

Government AI Transparency + Worker Protections

Requires state agencies to publish a public inventory of automated decision-making tools. Strengthens worker protections by prohibiting AI systems from affecting rights under collective bargaining agreements or causing the displacement of state employees.

New Jersey

AI Whistleblower Protections

Adopted a resolution urging generative AI companies to voluntarily commit to employee whistleblower protections, a softer but signal-setting approach ahead of potential mandatory legislation in future sessions.

North Dakota

AI Stalking & Harassment Ban

Prohibits individuals from using an AI-powered robot to stalk or harass others, expanding existing harassment and stalking laws to explicitly cover AI-driven conduct, an early example of applying traditional criminal law to new AI-enabled behaviors.

Oregon

Medical Title Protection

Specifies that a non-human entity, including an AI agent, cannot use the licensed titles of medical professionals such as “registered nurse” or “certified medication aide,” protecting patients from AI systems misrepresenting their qualifications.

The Major Themes Shaping State AI Law

Beyond individual laws, the 2025 session revealed consistent thematic priorities that cut across state lines, political affiliations, and industry sectors.

Consumer Protection and Algorithmic Discrimination. The most commonly enrolled or enacted frameworks address AI's application in healthcare, chatbots, and innovation safeguards. Legislatures signaled an interest in balancing consumer protection with support for AI growth, including testing innovation-forward mechanisms such as regulatory sandboxes and liability defenses. Several states focused on what the policy community calls "high-risk" AI systems: those that, when deployed, make consequential decisions about employment, housing, healthcare, insurance coverage, or financial services, areas that carry a significant risk of discrimination.

Healthcare AI. Compared with 2024, policymakers have shifted from broad governance frameworks toward more use-case-specific regulation, particularly for mental-health chatbots, payor algorithms, and high-risk clinical tools. Illinois enacted a landmark law prohibiting AI systems used in therapy from making independent therapeutic decisions or interacting directly with clients without licensed professional oversight. Several states introduced bills on AI in health insurance claim denials, requiring human review before adverse decisions can take effect. In all, states introduced 21 bills on AI-enabled chatbots and passed seven laws, five of them with a specific mental health focus, driven by concerns about patient safety.

Workplace Protections. While most cross-sectoral AI bills would cover the use of AI in employment, several states and cities enacted sectoral legislation targeting this use case. New York City's Local Law 144, for example, requires employers that use automated tools in employment decisions to conduct regular bias audits of those algorithms. Other states moved to require advance notice to workers whenever AI is used in hiring, discipline, or compensation decisions, a category of protection that labor advocates are pushing to expand nationally.

Transparency and Disclosure. User-facing disclosures became the most common safeguard, with eight of the enrolled or enacted laws and regulations requiring that individuals be informed when AI is being used. This ranged from requiring chatbots to identify themselves as non-human, to mandating that government agencies publish public inventories of their AI systems, to compelling health insurers to disclose when AI influenced a coverage decision.

The Federal Vacuum and What States Are Filling

A critical context for understanding the 2025 state AI wave is the near-absence of comprehensive federal AI regulation. While the EU’s AI Act has created a sweeping regulatory framework in Europe, the United States has relied on a patchwork of existing laws, agency guidance, and executive action, much of which has been inconsistent or reversed.

NCSL’s federal advocacy focus includes artificial intelligence, with a focus on more public education, more money for research, more control of deepfakes, and no or limited federal preemption. That last point is crucial: states are actively resisting attempts by the federal government or industry to preempt their authority to regulate AI. The argument from state legislators is that the people closest to constituents and the harms AI can cause them are better positioned to craft responsive rules than a single federal standard.

At the same time, a proliferating patchwork of 50 different AI regulatory frameworks creates genuine challenges for companies operating nationally. The same product may be legal in one state and penalized in another. A chatbot cleared in State A might trigger a licensing or disclosure violation in State B. Without a tailored rollout and compliance review, a uniform national strategy puts businesses at risk.

This tension between state autonomy and national coherence is likely to define the next phase of AI governance debates, and the NCSL's role as a convener and resource for state legislators will be central to how it plays out.

What’s Being Tracked Separately and Why It Matters

The NCSL’s 2025 AI legislation page intentionally excludes several high-profile AI technology categories that are tracked in separate databases: facial recognition, deepfakes, and autonomous vehicles. This scope decision is itself revealing. The volume of legislation on those topics alone was sufficient to warrant dedicated tracking, which underscores just how pervasive AI-specific lawmaking has become.

In 2024, NCSL tracked over 450 bills in 23 different AI-related categories and found three legislative trends rising to the top: consumer protection, deepfakes, and government use of AI. At least half the states addressed deepfakes through new laws targeting the technology's use in elections and sexually explicit materials. In 2025, those threads continued while new categories, including AI in housing, criminal justice, algorithmic pricing, and agentic AI systems, began emerging as the next frontier for state legislative action.

Looking Ahead: What 2026 Will Bring

The 2025 session was historic in its breadth, but many observers believe it was also a foundation rather than a conclusion. Several large states, including California and Colorado, have enacted laws with phased compliance dates extending into 2026 and beyond. Enforcement mechanisms across the states are only beginning to mature. And the technology itself is advancing faster than any legislative calendar.

Looking ahead to 2026, persistent issues such as definitional uncertainty remain, while newer trends around agentic AI and algorithmic pricing are starting to emerge. Agentic AI, systems that autonomously take sequences of actions in the world rather than merely generating text, poses novel legal questions about liability, oversight, and consent that no state has yet addressed comprehensively.

Meanwhile, the politics of AI regulation are themselves shifting. Once seen as a primarily progressive concern, AI accountability legislation has attracted bipartisan sponsorship in states from Texas to Connecticut. The harms that constituents experience from algorithmic decisions, such as denied insurance claims, biased hiring, and misleading health chatbots, do not map neatly onto partisan lines. That cross-partisan dimension may prove to be the most durable force sustaining the state AI legislation wave in the years to come.

What is clear from 2025 is that America’s statehouses are no longer waiting for Washington to lead on AI. They are writing the rules themselves, one state, one session, one bill at a time.

Source: NCSL, "Artificial Intelligence 2025 Legislation"

Blog Notes: I was not paid to write this blog post, and I will not receive any compensation if you follow the links. I have utilized AI technology and tools in the creation of this blog post, but everything has been edited by me for reader consumption and accuracy. If you have any questions, please feel free to contact me by completing the contact form on the front page of my website.
