My experience in medicine allows me to distinguish between genuine innovation and subtle reclassification that fundamentally alters practice while appearing unchanged. Artificial intelligence has recently attracted considerable attention, including the widely circulated assertion that AI has been “legally authorized to practice medicine” in the United States. Interpreted literally, this claim is inaccurate. No medical board has licensed a machine. No algorithm has sworn an oath, accepted fiduciary duty, or assumed personal liability for patient harm. No robot physician is opening a clinic, billing insurers, or standing before a malpractice jury.
However, stopping at this observation overlooks the broader issue. Legal concepts of liability are currently being redefined, often without public awareness.
A significant transformation is underway, warranting more than either reflexive dismissal or uncritical technological enthusiasm. The current development is not the licensure of artificial intelligence as a physician, but rather the gradual erosion of medicine’s core boundary: the intrinsic link between clinical judgment and human accountability. Clinical judgment involves making informed decisions tailored to each patient’s unique needs and circumstances, requiring empathy, intuition, and a deep understanding of medical ethics.
Human accountability refers to the responsibility healthcare providers assume for these decisions and their outcomes. This erosion is not the result of dramatic legislation or public debate, but occurs quietly through pilot programs, regulatory reinterpretations, and language that intentionally obscures responsibility. Once this boundary dissolves, medicine is transformed in ways that are difficult to reverse.
The main concern isn’t whether AI can refill prescriptions or spot abnormal lab results. Medicine has long used tools, and healthcare providers generally welcome help that reduces administrative tasks or improves pattern recognition. The real issue is whether medical judgment—deciding on the right actions, patients, and risks—can be viewed as a computer-generated outcome separated from moral responsibility. Historically, efforts to disconnect judgment from accountability have produced harm for which no one took ownership.
Recent developments clarify the origins of current confusion. In several states, limited pilot programs now allow AI-driven systems to assist with prescription renewals for stable chronic conditions under narrowly defined protocols. At the federal level, proposed legislation has considered whether artificial intelligence might qualify as a “practitioner” for specific statutory purposes, provided it is appropriately regulated. These initiatives are typically presented as pragmatic responses to physician shortages, access delays, and administrative inefficiencies. While none explicitly designates AI as a physician, collectively they normalize the more concerning premise that medical actions can occur without a clearly identifiable human decision-maker.
In practice, this distinction is fundamental. Medicine is defined not by the mechanical execution of tasks, but by the assignment of responsibility when outcomes are unfavorable. Writing a prescription is straightforward; accepting responsibility for its consequences—particularly when considering comorbidities, social context, patient values, or incomplete information—is far more complex. Throughout my career, this responsibility has continuously resided with a human who could be questioned, challenged, corrected, and held accountable. When Dr. Smith makes an error, the family knows whom to contact, ensuring a direct line to human accountability. No algorithm, regardless of sophistication, can fulfill this role.
The primary risk is not technological, but regulatory and philosophical. This transition represents a shift from virtue ethics to proceduralism. When lawmakers and institutions redefine medical decision-making as a function of systems rather than personal acts, the moral framework of medicine changes. Accountability becomes diffuse, harm is more difficult to attribute, and responsibility shifts from clinicians to processes, from judgment to protocol adherence. When errors inevitably occur, the prevailing explanation becomes that ‘the system followed established guidelines.’ This is the movement from individualized ethical decision-making to mechanized procedural compliance.
This concern is not theoretical. Contemporary healthcare already faces challenges related to diluted accountability. I have watched responsibility for patients harmed by algorithm-driven decisions dissolve among administrators, vendors, and opaque models, with no clear answer to the fundamental question: Who made this decision? Artificial intelligence significantly accelerates this problem. An algorithm cannot provide moral explanations, exercise restraint based on conscience, refuse actions due to ethical concerns, or admit error to a patient or family.
Proponents of increased AI autonomy frequently cite efficiency as justification. Clinics are overwhelmed, physicians are experiencing burnout, and patients often wait months for care that should take only minutes. These concerns are legitimate, and any honest clinician recognizes them. However, efficiency alone does not justify altering the ethical foundation of medicine. Systems optimized for speed and scale often sacrifice nuance, discretion, and individual dignity. Historically, medicine has resisted this tendency by emphasizing that care is fundamentally a relationship rather than a transaction.
Artificial intelligence risks inverting this relationship. When systems, rather than individuals, deliver care, the patient is no longer engaged in a covenant with a clinician but becomes part of a workflow. The physician assumes the role of machine supervisor or, more concerningly, serves as a legal buffer absorbing liability for decisions not personally made. Over time, clinical judgment gives way to protocol adherence, and moral agency gradually diminishes.
AI also introduces a subtler and more dangerous problem: the masking of uncertainty. Medicine lives in ambiguity. Evidence is probabilistic. Guidelines are provisional. Patients rarely present as clean datasets. Clinicians are trained not merely to act, but to hesitate—to recognize when information is insufficient, when intervention may cause more harm than benefit, or when the proper course is to wait. Imagine a scenario in which the AI recommends discharge but the patient’s spouse appears fearful: that friction between algorithmic output and human intuition is where the stakes of ambiguity become concrete.
AI systems do not experience uncertainty; they generate outputs. When incorrect, they often do so with unwarranted confidence. This characteristic is not a programming flaw, but an inherent feature of statistical modeling. Unlike experienced clinicians who openly express doubt, large language models and machine-learning systems cannot recognize their own limitations. They produce plausible responses even when the data is insufficient. In medicine, plausibility without substantiation can be hazardous.
As these systems are integrated earlier into clinical workflows, their outputs increasingly influence subsequent decisions. Over time, clinicians may begin to trust recommendations not due to their validity, but because they have become normalized. Judgment gradually shifts from active reasoning to passive acceptance. In such circumstances, the ‘human-in-the-loop’ serves as little more than a symbolic safeguard.
Advocates frequently assert that AI will only ‘augment’ clinicians rather than replace them. However, this reassurance is tenuous. Once AI demonstrates efficiency gains, economic and institutional pressures tend to drive increased autonomy. If a system can safely refill prescriptions, it may soon be permitted to initiate them. If it can accurately diagnose common conditions, the necessity of physician review is questioned. If it outperforms humans in controlled benchmarks, tolerance for human variability diminishes.
Given these trends, implementing specific safeguards is essential. For example, mandatory discrepancy audits on 5% of AI-driven decisions could serve as a concrete check, ensuring alignment between AI recommendations and human clinical judgment, while providing regulators and hospital boards with actionable metrics to monitor AI integration.
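To make that proposal concrete, here is a minimal illustrative sketch in Python of how a 5% discrepancy audit might be logged. The names (`Decision`, `DiscrepancyAudit`, `maybe_audit`, `AUDIT_RATE`) and the workflow are hypothetical assumptions introduced for illustration, not a description of any existing system: a random subset of AI-driven decisions receives an independent clinician review, disagreements are recorded, and the resulting discrepancy rate becomes a metric a hospital board or regulator could track over time.

```python
import random
from dataclasses import dataclass, field

AUDIT_RATE = 0.05  # audit roughly 5% of AI-driven decisions, per the proposal above


@dataclass
class Decision:
    """One AI-driven decision; the clinician's call is recorded only when audited."""
    patient_id: str
    ai_recommendation: str
    clinician_decision: str | None = None


@dataclass
class DiscrepancyAudit:
    """Hypothetical audit log comparing AI recommendations with human judgment."""
    audited: list[Decision] = field(default_factory=list)
    discrepancies: list[Decision] = field(default_factory=list)

    def maybe_audit(self, decision: Decision, clinician_review) -> None:
        # Randomly select ~5% of decisions for an independent human review.
        if random.random() >= AUDIT_RATE:
            return
        decision.clinician_decision = clinician_review(decision)
        self.audited.append(decision)
        if decision.clinician_decision != decision.ai_recommendation:
            self.discrepancies.append(decision)

    def discrepancy_rate(self) -> float:
        # The kind of actionable metric a hospital board or regulator could monitor.
        if not self.audited:
            return 0.0
        return len(self.discrepancies) / len(self.audited)
```

The point of the sketch is less the code than the design choice it encodes: a named human reviewer is pulled back into the loop on a sampled basis, and disagreement surfaces as a number someone must answer for rather than being buried in a workflow.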
Such pressures are not born of ill intent; they emerge naturally within systems focused on cost containment and scalability. However, they point toward a future where human judgment becomes the exception rather than the norm. In such a scenario, individuals with resources will continue to receive human care, while others are directed through automated processes. Two-tier medicine will result not from ideology, but from optimization.
What makes this moment especially precarious is the absence of clear lines of accountability. When an AI-driven decision harms a patient, who is responsible? Is the clinician nominally overseeing the system? The institution that deployed it? The vendor that trained the model? The regulator that approved its use? Without explicit answers, responsibility evaporates. And when responsibility evaporates, trust soon follows.
Medicine is fundamentally dependent on trust. Patients place their bodies, fears, and often their lives in the hands of clinicians. This trust cannot be transferred to an algorithm, regardless of its sophistication. It is grounded in the assurance that a human being is present—someone capable of listening, adapting, and being accountable for their actions.
Rejecting artificial intelligence entirely is unnecessary. When used judiciously, AI can reduce clerical burdens, identify patterns that may elude human detection, and support clinical decision-making. It can enable physicians to devote more time to patient care rather than administrative tasks. However, realizing this future requires a clear commitment to maintaining human responsibility at the core of medical practice.
‘Human-in-the-loop’ must signify more than symbolic oversight. It should require that a specific individual be responsible for each medical decision, understand its rationale, and retain both the authority and the obligation to override algorithmic recommendations. It must also entail transparency, explainability, and informed patient consent, as well as a commitment to investing in human clinicians rather than substituting them with AI.
The primary risk is not the excessive power of artificial intelligence, but rather the willingness of institutions to relinquish responsibility. In pursuit of efficiency and innovation, there is a danger that medicine will become a technically advanced, administratively streamlined field, yet lacking in moral substance.
As we consider the future, it is essential to ask: What kind of healer do we envision at the bedside in 2035? This question calls for collective moral imagination, encouraging us to shape a future where human responsibility and compassionate care remain at the heart of medical practice. Mobilizing collective agency will be crucial to ensuring that advances in artificial intelligence enhance, rather than undermine, these fundamental values.
Artificial intelligence has not been licensed to practice medicine. But medicine is being quietly reengineered around systems that do not bear moral weight. If that process continues unchecked, we may one day discover that the physician has not been replaced by a machine, but by a protocol—and that when harm occurs, there is no one left to answer for it.
That would not be progress. It would be an abdication.