Designing AI That Patients Trust

Moral Foundations for Human-Centered Healthcare

A white paper published October 1, 2025, written by Richa Nevatia and Lanyi Zhu (University of Washington, Communication Leadership) and Emma Wigdahl, Alla Zarifyan, and Andrew Kucheriavy (Intechnic).


As generative AI becomes increasingly embedded in healthcare delivery, understanding how patients perceive and judge its use is vital. This white paper explores the moral triggers and mitigators that shape patient acceptance of AI through a national survey of 275 U.S. patients, grounded in social psychologist Jonathan Haidt’s moral foundations theory. 

Key findings reveal that 38.5% of U.S. patients would reject any use of AI in their healthcare outright, while the remaining 61.5% are highly sensitive to issues of privacy, control, data management, and expert oversight. Patients are most supportive of AI when it offers tangible benefits, complies with HIPAA, allows data opt-outs, and includes clear validation by medical professionals.

Importantly, patients' moral judgments engage five of the six foundational dimensions and extend beyond safety concerns to include autonomy, fairness, and transparency. The paper offers evidence-based recommendations spanning product design, communications, and organizational strategy for developing AI systems that align with human values, demonstrating that ethical implementation is not only a compliance necessity but a long-term competitive advantage.