Utah Has Found the Right Middle Ground on Artificial Intelligence


June 12, 2025

We don’t have to choose between innovation and safety. With the right structure, we can have both.

By Spencer Cox and Margaret Busse

As artificial intelligence becomes more integrated into our professional and personal lives, policymakers across the country are grappling with how, and whether, to respond. In the past two years, we’ve seen an explosion of interest in regulating AI, much of it focused on high-risk applications, existential concerns, or edge-case criminal activity. While the instinct to act is understandable, the risk of overreach is real. We’ve seen before how premature or overly broad regulation can choke off promising innovation before it ever reaches its potential.

Fortunately, there’s a better way, one that avoids both paralysis and overreaction. In Utah, we’ve developed a new model that protects the public interest without getting in the way of progress. It’s a framework grounded in flexibility, transparency, and a belief that innovation plays a critical role in our quality of life.

In 2024, Utah passed the Artificial Intelligence Policy Act, sponsored by Senator Kirk Cullimore and Representative Jeff Moss. The legislation, which passed unanimously, created the country’s first Office of AI Policy. It wasn’t a reaction to crisis. It was a proactive effort — developed in close collaboration with leaders from industry, academia, and government — to build a more adaptive approach to emerging technologies.

The office’s first responsibility is operating a voluntary regulatory mitigation program. Under this system, companies developing or deploying AI can request a mitigation agreement: temporary, tailored exemptions from existing state laws that may not yet account for AI’s capabilities. In exchange, the company agrees to oversight terms set by the office, including transparency and outcome reporting. The program serves several public policy goals at once: it promotes innovative AI deployment, maintains proportionate consumer protection, and lays the groundwork for long-term, data-driven regulation.

One company that has already taken advantage of the program is ElizaChat, which uses AI to support teen mental health. Given the complex regulatory landscape around health care and education, ElizaChat saw the program as a way to move forward responsibly — working with regulators, rather than around them. It’s a model that builds trust while keeping innovation moving.

The office also runs a learning laboratory, which studies policy issues where AI is already having real-world effects. This includes both areas where harm is occurring and areas where beneficial innovation might be constrained by unclear or outdated regulations. The lab brings together experts, gathers real-world data, and offers evidence-based recommendations to the legislature. Many essential AI policy questions demand attention: how the technology fits into regulated sectors like education, health care, and insurance; what it means for consumer data privacy; and the social effects of deepfakes and AI companions. Crucially, the office is charged with identifying policy solutions that address the effects of the technology while steering clear of mandates on how AI systems are designed and built.

Governments have handled previous technological shifts with varying success. In the case of social media, early inaction led to lasting harm: invasive data-collection practices that captured intimate details of people’s lives, and opaque algorithms that have contributed to mental health challenges, especially among young people. These weren’t inevitable outcomes. They were the result of delayed, fragmented policymaking. We shouldn’t repeat that mistake with AI.

In that spirit, the learning lab’s first report focused on AI and mental health. After consulting hundreds of people working in AI, in mental health, and at their intersection, the report provided policy recommendations in three areas:

  1. Legal protections for developers who follow best practices
  2. Guidance for mental health professionals using AI tools
  3. Stronger consumer protections — especially limits on data sales and targeted advertising

These recommendations became Utah’s HB 452, which passed unanimously and earned broad support from developers, professional associations, libertarian groups, and consumer advocates. The success of the bill shows what’s possible when policy is grounded in dialogue and data — not fear or ideology.

Our aim in Utah isn’t to regulate AI more. It’s to regulate it better. This framework creates space for innovation while giving policymakers the tools to learn quickly and act responsibly. It’s designed to encourage experimentation and adaptation, not constrain them.

As federal agencies and other states explore their own approaches, we hope they’ll consider the Utah model. Many of AI’s most consequential applications — especially in education, health, and public services — are shaped at both the state and federal level. Better coordination, flexible models, and real-world testing can help us unlock AI’s potential without repeating past mistakes.

Utah’s first year under this new model suggests something encouraging: We don’t have to choose between innovation and safety. With the right structure, we can have both.

Spencer Cox, a Republican, is governor of Utah. Margaret Busse is the executive director of Utah’s Department of Commerce.