
AI in Medicine: Support Tool or Legal Minefield for Physicians?
As artificial intelligence (AI) tools become increasingly prevalent in clinical settings, a new JAMA Health Forum viewpoint cautions that physicians face mounting legal and ethical risks with little institutional support. The article, co-authored by researchers from Johns Hopkins and the University of Texas, highlights how physicians are increasingly held solely accountable for AI-assisted decisions, even when the underlying algorithms are flawed or opaque. This “superhumanization” of physicians, the authors argue, intensifies burnout and may increase diagnostic errors, particularly in the absence of clear liability laws or institutional safeguards.
The authors urge health systems to implement AI training programs, decision-support checklists, and interdisciplinary feedback loops to help clinicians calibrate their trust in algorithmic output. Without such systemic support, the burden of interpreting and justifying AI recommendations remains dangerously one-sided. As New York practices rapidly adopt AI technologies, MSSNY continues to advocate for physician-centered regulations and institutional standards that protect both clinicians and patients.
Calibrating AI Reliance—A Physician’s Superhuman Dilemma (Patil, PhD; Myers, PhD; Lu-Myers, MD, MPH; JAMA Health Forum, 4/21).


