ChatGPT told me that use of AI tools in digital health may implicate hundreds of different laws… is that true?
- Jarrod Rainey
- Feb 19, 2025
- 6 min read
- Updated: Mar 12
With the widespread focus on and rapid adoption of AI tools, many digital health organizations are feeling pressure from internal and external stakeholders and business partners to formalize the organization’s approach to AI (e.g., by preparing a formal policy/procedure on the use of AI). Larger organizations now often incorporate into every business arrangement an AI addendum or similar terms that attempt to force business partners to comply with the organization’s approach to the use of AI.
As a result of these pressures, our clients often ask us to provide generalized guidance for their organization’s use of AI tools. The purpose of this article is to explain the challenges in developing generalized guidance on this subject. The reality is that legal considerations on the use of AI tools are very use-case-specific, even in a specific industry sector like digital health.
Before digging in, it is helpful to level-set on the use of AI tools with two points of context. First, there is a wide variety of AI tools. The fact that a definition of AI is required at the outset of any commentary on the subject reflects the challenge of providing a generalized approach. I am using the term “AI tools” in a broad sense, capturing any machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments (a definition reflected in 15 U.S.C. § 9401).
Second, the perception that AI tools are innovative, new technologies is not entirely accurate. Many in organizational leadership do not realize that their organization already uses AI tools in its day-to-day operations and has been doing so for many years. For example, many existing workflows already incorporate “AI tools” (e.g., predictive text functionality in operating systems, facial recognition security software, email spam filters or summaries, etc.). In all likelihood, almost every organization, especially in digital health, is already using AI tools in its workflows even without having made a conscious decision to do so. So, it is true that some AI tools are innovative and rapidly developing, but it is also true that other legacy technologies may already qualify as AI tools (or will be updated in the near future to include AI functionality) under this broad definition.

There are common legal considerations across the various use cases for AI tools. The common issues of security, privacy, and safety arise with essentially any AI tool that an organization may seek to deploy. The Biden Administration’s 2023 Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence (which has since been withdrawn and removed from the White House website) contained eight “guiding principles” for responsible AI development that attempted to apply broadly to all use cases. Developing an organizational approach and/or policy to address these common legal considerations and/or guiding principles (even though they have been formally withdrawn) is relatively straightforward, though it is debatable how useful such a generalized approach or policy can be given the wide variation in use cases. Nonetheless, many enterprise organizations (like payors, employers, and other large organizations) now require digital health organizations to adopt and disclose their AI policies and procedures before entering into any kind of business arrangement with them, which effectively makes such policies a necessity even if they are not legally required or particularly useful at this time.
The challenge for digital health organizations is to understand how the various AI tool use cases implicate laws beyond those common legal considerations. In digital health, these legal considerations include FDA oversight; state medical board rules and standards of care (including licensure and telehealth laws); and health information privacy laws (including HIPAA, FTC rules, and state laws). It is true that these additional legal considerations could encompass hundreds of different laws. However, to underscore again how the application of these laws is use-case-specific, not all of these laws are implicated every time a digital health organization deploys an AI tool on its online platform or website. For example, the legal considerations for deploying a clinical AI tool for patient treatment purposes are different from the legal considerations for deploying a customer service chatbot on a marketing website or using an AI tool to screen prospective employment candidates. An organization can have an overarching AI policy and procedure that applies across these various use cases, but creating a truly comprehensive legal playbook for analyzing every potential new AI tool is not feasible under the current state of regulation.
The disjointed attempts to regulate AI tools with new laws underpin the challenge of creating a generalized, comprehensive AI playbook. Organizations are familiar with developing compliance frameworks that account for multiple, sometimes overlapping laws touching on different subjects. But keeping up with the various federal and state initiatives to regulate AI tools is very challenging without specific use cases in mind. There are many reputable organizations tracking AI policy, rules, and proposed legislation. These are laudable efforts, but a review of these trackers quickly reveals how many disparate subjects AI tools touch, even within a single industry like health care.
While it is not certain, it is possible that the FDA will become the primary federal regulator of AI tools used in clinical practice. The FDA has been regulating AI tools in medical devices for decades. More recently, the FDA has issued several action plans and guidance documents pertaining to use cases falling under its authority to regulate medical devices. It can be a complex legal analysis just to determine whether a specific use case currently falls within the FDA’s purview. But there is also debate about whether the FDA is the appropriate regulator for all AI tool use cases in clinical practice. For example, former FDA Commissioner Scott Gottlieb recently questioned whether the FDA’s latest guidance goes too far in regulating AI tools as medical devices. Given the Trump Administration’s focus on deregulation at the federal level and its latest Executive Order on AI, titled Removing Barriers to American Leadership in Artificial Intelligence (whose title not so subtly hints at less rather than more federal regulation), a plausible outcome, at least in the near term, is that there will be no primary federal regulator of the use of AI tools in clinical practice. Recent reports indicate that much of the FDA’s senior staff was recently dismissed by the Trump Administration, which may foreshadow a shift in the FDA’s fundamental approach to regulation.
Even if the FDA becomes the primary regulator for digital health AI tools (to the extent they fall within its jurisdiction), there are real questions about whether the FDA’s approach will be harmonized with state agencies, such as medical boards. A good example of this point is the debate over the use of AI to prescribe certain low-risk drugs. This debate actually predates the current AI phenomenon, arising under topics like standing orders/protocols and asynchronous care, which can be used to prescribe to a patient without the prescribing clinician ever having a face-to-face interaction with the patient. The next evolution of this debate is whether having a clinician directly involved in the patient care process is really necessary at all for certain low-risk drugs. Several bills introduced at the federal level over the past few years, including most recently the Healthy Technology Act of 2025 (H.R. 238), would create a pathway to FDA clearance for an AI tool that functions as a practitioner for prescribing certain drugs. However, even if such a law were enacted at the federal level, it is unclear how state regulators (particularly medical boards) would react. It is entirely possible that use of such AI tools would be legal at the federal level but impermissible under certain state rules. For example, an AI tool authorized to prescribe in California may not be authorized in New York.
In summary, I wrote this article to provide context on why a generalized approach to the use of AI tools is likely to be part of a digital health organization’s compliance efforts, but is unlikely to be a comprehensive way to vet every AI use case. Some AI tool use cases will trigger no different legal review than any other technology the organization uses. Other use cases may trigger additional legal considerations, including murky areas like FDA regulation where the future is not certain. For now, digital health organizations are forced to do legal scoping at the outset of any contemplated use case just to determine what laws are implicated.