Description
Healthcare insurers are increasingly relying on artificial intelligence (AI) tools to make coverage decisions, approving or denying claims for patient care, sometimes in seconds. A recent survey by the National Association of Insurance Commissioners (NAIC) found that most insurers were using AI to decide whether to authorize coverage—either before treatment (in pre-authorization decisions) or after. While these AI tools offer the possibility of speeding coverage decisions, they can also allow insurers to maximize profits by denying patient claims. Patients, especially those with complex medical needs, may then be forced to pay for their own care, go without care, or attempt to appeal.
Health insurers’ use of AI and machine learning (ML) to make these decisions has caused an uproar. According to NAIC, AI “can present unique risks to consumers, including the potential for inaccuracy, unfair discrimination, data vulnerability, and lack of transparency and explainability.” Consumers are bringing class-action lawsuits, and numerous states have taken steps to limit insurers’ use of AI. Meanwhile, new companies are offering AI tools to help patients dispute coverage denials—creating an AI vs. AI insurance battleground.
What are the ethical, legal, and medical implications of this rise in the use of AI for health insurance coverage decisions? What regulations or other safeguards are needed? And what is the future of AI in controlling what health care patients receive? Join three experts to explore this difficult conflict between law, ethics, and medicine.
The webinar is free and open to the public.
Speakers
Daniel Schwarcz, JD
Daniel Schwarcz, JD, is Fredrikson & Byron Professor of Law and Distinguished University Teaching Professor at the University of Minnesota. An award-winning teacher and scholar, he focuses his research principally on insurance law and regulation, spanning a broad range of issues such as systemic risk, regulatory federalism, health insurance, and coverage litigation. His work also explores the impact of AI on legal education, the practice of law, and consumer protection. His scholarship has been widely published, and he is often interviewed and quoted by the media. Schwarcz has testified before U.S. Congressional committees on more than a half-dozen occasions and regularly serves as an expert witness in insurance-oriented disputes.
Jennifer D. Oliva, JD, MBA
Jennifer D. Oliva, JD, MBA, is Professor of Law and Val Nolan Faculty Fellow in the Maurer School of Law at Indiana University Bloomington. She serves as a Research Scholar at Georgetown Law’s O’Neill Institute for National & Global Health Law; as a Senior Scholar with the University of California San Francisco (UCSF) and University of California Law Consortium on Law, Science & Health Policy; on the Johns Hopkins University and UCSF Opioid Industry Documents Archive (OIDA) National Advisory Committee; and on the National Pain Advocacy Center’s Science & Policy Advisory Council. Oliva's research and teaching interests include health law and policy, privacy law, evidence, torts, and complex litigation. Her scholarship has been widely published, and she has won numerous awards for her research, teaching, and service. Oliva is an elected member of the American Law Institute (ALI).
Isaac S. Kohane, MD, PhD
Isaac S. Kohane, MD, PhD, is the inaugural chair of Harvard Medical School’s Department of Biomedical Informatics, whose mission is to develop the methods, tools, and infrastructure required for a new generation of scientists and care providers to move biomedicine rapidly forward by taking advantage of the insight and precision offered by big data. Kohane develops and applies computational techniques to address disease at multiple scales, from whole health care systems to the functional genomics of neurodevelopment. He has also worked on AI applications in medicine since the 1990s, including automated ventilator control, pediatric growth monitoring, detection of domestic abuse, diagnosing autism from multimodal data, and, most recently, assisting clinicians in using whole-genome sequences and clinical histories to diagnose patients with rare or unknown diseases. He is a member of the National Academy of Medicine, the American Society for Clinical Investigation, and the American College of Medical Informatics. He is the inaugural Editor-in-Chief of NEJM AI and co-author of The AI Revolution in Medicine.
Moderators
Francis X. Shen, JD, PhD
Francis X. Shen, JD, PhD, is a Professor of Law and Faculty Member of the Graduate Program in Neuroscience at the University of Minnesota, where he directs the Shen Neurolaw Lab. Shen is also Co-Chair of the Consortium on Law and Values in Health, Environment & the Life Sciences; Chief Innovation Officer for the MGH Center for Law, Brain & Behavior (CLBB) in the MGH Department of Psychiatry; founding Director of the Dana Foundation Career Network in Neuroscience & Society; and Co-Director of the Neurotech Justice Accelerator at Mass General Brigham, a Dana Center Initiative.
Susan M. Wolf, JD
Susan M. Wolf, JD, is a Regents Professor; McKnight Presidential Professor of Law, Medicine & Public Policy; Faegre Drinker Professor of Law; and Professor of Medicine at the University of Minnesota. She is Chair of the University’s Consortium on Law and Values in Health, Environment & the Life Sciences. She is an elected member of the National Academy of Medicine (NAM) and a Fellow of the American Association for the Advancement of Science (AAAS).
Additional Information
Disclosures
It is the policy of the University of Minnesota to ensure balance, independence, objectivity, and scientific rigor in all of its sponsored educational activities. All participating faculty are required to disclose to the program audience any financial relationships related to the subject matter of this program. Disclosure information is reviewed in advance to manage and resolve any possible conflicts of interest. Specific disclosure information for each faculty member will be shared with the audience prior to the faculty’s presentation.
None of the speakers or moderators for this event have relevant disclosures.
Resources
Farrar, Olivia. "AI Is Making Medical Decisions—But for Whom?" Harvard Medical School Department of Biomedical Informatics. May 23, 2025.
Monahan, Amy and Daniel Schwarcz. "Rules of Medical Necessity." Iowa Law Review 107, no. 2 (2022): 423-493.
Oliva, Jennifer D. "Regulating Healthcare Coverage Algorithms." Indiana Law Journal 100, no. 4 (2025): 1861-1889.
Phillips, Josephine A. "Algorithms Deny Humans Health Care." The Regulatory Review. March 18, 2025.
Prince, Anya E. R. and Daniel Schwarcz. "Proxy Discrimination in the Age of Artificial Intelligence and Big Data." Iowa Law Review 105, no. 3 (2020): 1257-1318.
Schwarcz, Daniel. "Health-Based Proxy Discrimination, Artificial Intelligence, and Big Data." Houston Journal of Health Law & Policy 21, no. 1 (2021): 95-124.
Schwarcz, Daniel and Josephine Wolff. "The Limits of Regulating AI Safety through Liability and Insurance: Lessons from Cybersecurity." Minnesota Legal Studies Research Paper 2025-46. Last revised on August 29, 2025.
Yu, Kun-Hsing, Elizabeth Healey, Tze-Yun Leong, et al. "Medical Artificial Intelligence and Human Values." The New England Journal of Medicine 390, no. 20 (2024): 1895-1904.
Land Acknowledgment
The University of Minnesota Twin Cities is built within the traditional homelands of the Dakota people. It is important to acknowledge the peoples on whose land we live, learn, and work as we seek to improve and strengthen our relations with our tribal nations. We also acknowledge that words are not enough. We must ensure that our institution provides support, resources, and programs that increase access to all aspects of higher education for our American Indian students, staff, faculty, and community members.