Utah’s AI Regulatory Learning Laboratory: Early Signals From Health Care Mitigation Agreements

Introduction

Utah has positioned itself at the forefront of state-level artificial intelligence governance through enactment and implementation of the Utah Artificial Intelligence Policy Act (the “AI Policy Act”).  The statute establishes an AI Learning Laboratory and authorizes temporary regulatory mitigation agreements that allow the deployment of AI technologies under reduced regulatory constraint in a program generally referred to as an “AI Sandbox.”  

Utah is not alone in establishing an AI Sandbox, but it is well ahead of other jurisdictions.  Both Texas and Delaware have authorized the development of regulatory sandboxes, but in Texas the regulatory authorities continue to work on developing the program, and in Delaware the legislature is awaiting proposed legislation from the regulatory authority.  Other states, including New Hampshire, Oklahoma, Pennsylvania and Ohio, are considering AI Sandbox legislation.

With three health care–adjacent mitigation agreements now in place – Dentacor, ElizaChat, and Doctronic – the Utah AI Sandbox is up and running, allowing us to get a better picture of how it is being utilized and to begin evaluating the effectiveness of the approach.

I. A Distinctive Statutory Model for AI Governance (but will it last?)

The Utah AI Policy Act creates a regulatory learning laboratory designed to study AI technologies and their risks, and the effectiveness of regulatory approaches in real-world deployments.  The primary benefit of participation in the learning laboratory is the ability to enter a regulatory mitigation agreement, which allows the participant to operate outside the bounds of specified regulatory controls.

The structure creates an implicit quid pro quo between the state and participants:

  • Participants receive the ability to deploy AI technology under reduced regulatory risk or expanded operational latitude.

  • The state receives structured access to data, outcomes, and operational insight into emerging AI tools to inform its approach to regulation.

Although the AI Policy Act is expressly intended to encourage AI development while protecting consumers, uncertainty exists as to how it will be viewed under evolving federal AI policy.

The Executive Order titled Ensuring a National Policy Framework for Artificial Intelligence (Dec. 11, 2025) directs federal agencies to identify and discourage state laws deemed unduly burdensome to AI development, with potential consequences for federal funding tied to broadband and digital infrastructure programs, including the Broadband Equity, Access, and Deployment (BEAD) Program.  The Executive Order does not purport to invalidate state laws; rather, it uses the spectre of decreased funding to the states to compel action.  Whether the Executive Order is a legal exercise of executive authority has not yet been tested in the courts.

Regardless, whether Utah’s law could be characterized as “onerous” under that framework is unclear. On one hand, the Act facilitates experimentation and explicitly avoids permanent restrictions. On the other, it imposes eligibility criteria, reporting, and safeguards that could be framed as regulatory friction.

II. Participating in a Mitigation Agreement

Organizations or individuals wishing to enter a regulatory mitigation agreement must first apply to become a participant in the program.  Applications may be submitted during open call periods determined by the Office of Artificial Intelligence Policy (OAIP), and applicants have the opportunity to engage directly with the OAIP during the application process.

The OAIP will evaluate applicants against criteria that include how well positioned the applicant is, through expertise and business activity, relative to potential regulation and the OAIP’s learning agenda, as well as the applicant’s nexus to Utah and compliance history.  The OAIP is free to consider additional criteria as well.

To participate in a mitigation agreement, a learning laboratory participant must meet both statutory and regulatory requirements and, as with participation in the learning laboratory, the OAIP will evaluate applicants against a set of criteria.

The statute requires that to participate in a mitigation agreement, a learning laboratory participant must demonstrate to the OAIP that it has both the appropriate technical expertise and capability to develop and test the proposed AI technology and the financial resources to participate.  In addition, the AI technology must provide “substantial consumer benefits” that outweigh identified risks associated with the mitigated regulations, and the participant must have a plan to monitor deployment and minimize those risks.  Finally, the scale, scope, and duration of proposed AI testing must be appropriately limited based on risk assessments.

The OAIP may consult with other state agencies when evaluating an application.  The regulatory requirements are simple and include paying a fee, but the OAIP may consider any relevant factor when evaluating an application, including an assessment of the applicant’s ability to comply with reporting, data usage, cybersecurity, disclosure, and conflict-related requirements that may be applied, and whether the applicant participates in another state’s regulatory mitigation program.

Mitigation agreements are for a term of 12 months but can be extended for up to an additional 12 months, and the OAIP may consider a variety of factors in determining whether to grant an extension, including “the prospects of additional learning.”

These criteria reflect the limitations on the program.  The legislation does not create a “wild west” of AI technology deployment; rather, it contemplates a program that is rich in upfront due diligence, limited in scope and calibrated to mitigate risk.

III. What the First Three Health AI Agreements Reveal

With Dentacor, ElizaChat, and Doctronic now operating under mitigation agreements, we can begin to evaluate the program as implemented rather than merely as designed.  While we cannot evaluate the OAIP’s diligence standards or efforts, we can look at the mitigation agreements and see how the OAIP is implementing the statutory requirements around deployment of AI technology.

A. Narrow and Calibrated Mitigation

The mitigation agreements are short, largely standardized, and conservative in scope. They provide targeted regulatory relief, tailored to the service being offered:  

The ElizaChat offering consists of AI-supported services to improve teen mental health.  The mitigation requires that ElizaChat “identify and prevent the ElizaChat app from engaging in the practice of mental health therapy or any other licensed professional practice.”  ElizaChat must monitor the app for such incidents and, if one occurs, must report and cure it.  If ElizaChat takes those required steps, the Utah Division of Professional Licensing will forgo any enforcement action related to the unlawful practice of mental health therapy.

While the mitigation provided to Doctronic is the same in form – the commitment to forgo enforcement of laws and rules – the application is broader.  Doctronic’s AI offering assists consumers in refilling prescriptions, a tool designed to provide a service reserved for licensed professionals.  Unlike the ElizaChat model, it does not flirt with the line of what constitutes a professional activity; rather, it clearly steps over it.  The mitigation, accordingly, is broader and prospective, protecting the company from enforcement of professional licensing requirements for the scope of the service.

The mitigation provided to Dentacor goes one step further than the other two mitigation examples by affirmatively expanding the scope of practice for dental hygienists who use the Dentacor technology as contemplated.  Under this agreement, dental hygienists are permitted to diagnose, with the assistance of the Dentacor technology, a set of conditions, an activity not otherwise permitted to hygienists.  

The distinction between enforcement forbearance and scope-of-practice expansion seems to be rooted in the offering itself: neither ElizaChat nor Doctronic includes a human within the specific service offered, whereas the Dentacor offering enhances the capabilities of a hygienist.  This distinction demonstrates the breadth of the discretion the OAIP is willing to exercise and how the OAIP may be thinking about regulation for different kinds of offerings.

Regardless, these mitigations are narrowly drafted, and none provides relief from any laws or regulations not specifically referenced.  Each agreement explicitly preserves all legal requirements not expressly mitigated.  Interestingly, none of the agreements requires the parties to make representations regarding the regulatory status of their technology with the FDA, a common requirement in other sandbox programs, such as those for payment systems.  Perhaps wisely, the OAIP seems to want to avoid any challenges to the scope of its authority.

For participants, these mitigation provisions are instructive:

  • Regulatory mitigation will be narrow and rationally related to the offering.  Potential participants should analyze the approach taken in these agreements and plan for how the OAIP may approach mitigation based on the dynamics at work in their own offering.  Key questions to ask:

    • Is there a “human in the loop” that may narrow the need, in OAIP’s eyes, for mitigation?

    • Does the offering run squarely into prohibited conduct that will need immediate or prospective mitigation, or is mitigation required only under certain conditions?

    • Does the offering implicate in any way the activities of individuals (subject to professional licensing) or third parties who may be subject to regulatory jeopardy if the offering is deployed?

  • Regulatory relief from the OAIP will be narrowly drawn and will stop where the OAIP’s jurisdictional authority ends.  Accordingly, potential participants must understand that they remain subject to all other state and federal laws and should consider those laws in relation to the individuals and third parties who may be impacted by the offering.  If they see risk, then deployment may not be as effective as hoped.

B. Common Data, Cybersecurity, Consent, and Reporting Obligations

All three agreements impose data security and cybersecurity controls aligned with Utah governmental standards.  These controls include a prohibition on commercial uses of data.  These obligations do not appear onerous and are standardized across all three agreements, indicating that the state has a standard approach it intends to apply consistently.

The ElizaChat and Doctronic agreements require explicit patient or user notice and acknowledgment of AI use.  In both cases, non-clinical users are engaging directly with the AI tool.  Under the Dentacor agreement, no such obligation exists, presumably because the technology is being utilized by clinicians who must be trained on it.

All three agreements include detailed and tailored periodic reporting to the state on the utilization of the technology.  While these obligations vary in intensity, they are a consistent feature of every agreement and reflect a baseline expectation for participation in the program.

For potential participants, the takeaway is clear:

  • Participants must plan for robust data and cybersecurity efforts (which they may already have).

  • Participants must be prepared to report critical information to the state.  These are unique reporting obligations related only to Utah operations.  Accordingly, systems will have to be put in place to isolate the required information for reporting purposes.

C. Human Intervention as a Common Requirement

Although the sample size is small, all three programs require human intervention at defined points.  Dentacor requires dentist availability and oversight when hygienists use AI-assisted diagnostics.  ElizaChat mandates escalation to licensed clinicians for crisis, distress, or high-risk indicators.  Doctronic requires pharmacist override authority and physician review, including phased mandatory physician validation.

While this may reflect early caution rather than permanent policy, it aligns with existing FDA guidance on clinical decision support and prevailing professional licensure norms.

For participants, the critical lesson is that, at least for now, defined points of human intervention may be required even if they are not central to the offering, and even if intervention is triggered only under certain conditions.  While participants will likely already have a “human in the loop” in their offering in order to comply with existing requirements, the exact location of that intervention in a workflow may differ under the Utah program from its location in a normal regulated setting.

  • Participants may need to rethink human intervention in their offering to conform to the goals of the program, chiefly risk mitigation in a less regulated environment, rather than to the compliance-driven goals that shape their efforts in a more regulated environment.

D. Statutorily Required Safeguards

Consistent with the AI Policy Act, each agreement incorporates detailed safeguards addressing operational limits, escalation protocols, data handling, and auditability and monitoring.  These safeguards are not uniform; rather, they are tailored to the technology and risk profile of each participant, reinforcing the statute’s flexible design.  Generally, they are set out in a document that appears to be part of the participant’s proposal to the OAIP.  Accordingly, participants can play a role in defining the safeguards they will deploy.

  • Participants should take seriously the OAIP’s efforts to ensure that appropriate safeguards are designed for program participation.  Participants may have a significant voice in the design and should take the opportunity to address these issues.

IV. Preliminary Assessment and Risks

It is too soon to evaluate outcomes, consumer impact, or the effectiveness of the Utah AI Learning Lab in developing more effective regulation.  The anticipated reciprocal benefits of the lab are clear: controlled experimentation for the state for purposes of regulatory development, and regulatory clarity for participants, for at least one year and perhaps two, in the deployment of their technology.  While these benefits are valuable, there are questions that potential participants and other states considering similar sandbox schemes should consider.

For participants, how valuable is the opportunity to operate in one state under regulatory mitigation for a limited amount of time?  There is value, no doubt, in the operational experience and in the functional data that is produced, which can translate into quicker commercial traction when the regulatory environment changes and easier pathways to demonstrate safety and value where opportunity arises.  In addition, there may be reputational gains to be made that can translate into easier development of strategic relationships.

But these are generally longer-term value propositions and are unlikely to translate into immediate commercial value.  Certainly, the commercial learnings cannot be translated immediately into action in either Utah (post-mitigation agreement) or another state if the mitigation is necessary to operate.  This is particularly true given that the statute appears to permit a maximum term of 24 months for mitigation agreements.

Separately, potential participants should consider the costs associated with operating in a unique way for purposes of only this program, which may include unique reporting and compliance efforts.  This may be burdensome for early-stage companies.  In addition, potential participants should consider the reputational risk associated with participating in the program should they fail to meet program requirements.

Accordingly, potential participants should consider the value proposition, risks and costs carefully before applying.

For states, will highly particularized operating models generate sufficiently generalizable findings to produce better regulation?  While the particularity of a single operation may not be uniformly applicable across all possible operations, it is likely that the state can extrapolate across different operating models.  With these three agreements, for example, one can see how the basic models offered – clinical decision support, non-professional service offerings that come very close to the line, and a narrow band of professional services – could serve as models for services focused on different clinical areas using the same techniques.  Accordingly, the value to states seems clearer, though realizing it will require careful technical attention.

Conclusion

Utah’s AI Learning Laboratory represents a distinctive and pragmatic attempt to approach the tricky question of responsible AI governance through experimentation rather than abstraction.  The first health care mitigation agreements demonstrate both the promises and the constraints of this approach.  The factors that will allow us to determine how effective an AI Sandbox can be are now in place, but a full evaluation of the program will require (a) seeing the outcomes, that is, the state’s ability to translate narrow lessons into broader regulatory insight, (b) watching the evolution of federal policy and the extent to which it aligns with this kind of state effort, and (c) seeing how popular these programs are with AI innovators.

 Utah Code §§ 13-72-201, 13-70-301.

 Utah Code § 13-72-302(4).

 Utah Code § 13-72-302(1).

 Utah R166-72-3.

 Utah R166-72-4(1).

 Utah Code § 13-72-303(1).

 Utah Code § 13-72-303(2).

 Utah R166-72-6(2)(b).

 Utah R166-72-6(3).

 Utah Code § 13-72-305.

