Researchers across many disciplines are increasingly using artificial intelligence (AI), including large language models (LLMs) such as ChatGPT, to support empirical research and data analysis, academic writing, peer review, and the development of new tools. The broad reach of AI in research raises pressing ethical questions about scientific integrity, authorship, data privacy, bias, and equity. Related issues include how trainees and students should be instructed to use, and to acknowledge their use of, AI tools in their research. Ethical guidance from research institutions, professional organizations, journals, and governmental oversight authorities is only beginning to emerge, and ethical oversight of AI in research likewise remains in flux.
This conference will bring together leading experts from a range of disciplines, from the biomedical sciences to the humanities, to confront the challenge of using AI ethically in research. National leaders will discuss how AI is being used in research, the challenges it poses to research ethics and integrity, and current guidance on using AI in research and publication, including how to address concerns that training sets for LLMs may not be sufficiently representative and may therefore yield biased models. Speakers will also debate how LLMs should be used in academic writing and peer review, and how students should use these tools. The conference will consider when and how researchers should seek informed consent to the use of AI in research protocols, and how IRBs can effectively provide oversight for research with AI tools. The conference will offer recommendations for researchers, students, administrators, and IRB professionals on how to ensure the ethical use of AI in research.
Conference Agenda
Annual Research Ethics Day Conference - The Ethical Use of Artificial Intelligence in Research & Scholarship: Challenges & Emerging Guidance
9:00am Central Time | Welcome & Land Acknowledgment
Moderator: Susan M. Wolf, JD, Regents Professor; McKnight Presidential Professor of Law, Medicine & Public Policy; Faegre Drinker Professor of Law; Professor of Medicine; Chair, Consortium on Law and Values in Health, Environment & the Life Sciences, University of Minnesota

9:15am | Ethical Use of AI in Research -- How is AI being used in research? What guidance is emerging on ethical and trustworthy AI?
Moderator: Francis X. Shen, JD, PhD, Professor of Law; Co-Chair, Consortium on Law and Values in Health, Environment & the Life Sciences, University of Minnesota

10:15am | Bias & Inclusion -- How should AI/ML tools be developed and used to avoid bias and ensure responsible use?
Moderator: Genevieve Melton-Meaux, MD, PhD, FACMI, FACS, FACSRS, Professor of Surgery; Senior Associate Dean of Health Informatics & Data Science; Director, Center for Learning Health System Sciences; Core Faculty, Institute for Health Informatics; Co-Chair, Data Science Initiative; Associate Director for the Clinical NLP Research Group; Program Director for the Clinical Informatics Fellowship, University of Minnesota; Chief Health Informatics & AI Officer, M Health Fairview

11:15am | Break

11:30am | Norms on AI/ML in Scholarship -- How should AI and large language models (LLMs) be used in academic writing and peer review? What ethical norms should apply to students, faculty, researchers, peer reviewers, and journals?
Moderator: Connie White Delaney, PhD, RN, FAAN, FACMI, Professor and Dean, School of Nursing, University of Minnesota; Knowledge Generation Lead, National Center for Interprofessional Practice & Education

12:30pm | Lunch Break

1:00pm | Informed Consent -- When does use of AI constitute research with human subjects? Do researchers have duties to secure participant consent to the use of AI in a research protocol? Should informed consent address potential uses of AI in secondary research on the data collected?
Moderator: Constantin Aliferis, MD, PhD, FACMI, Professor of Medicine; Director, Institute for Health Informatics; Chief Research Informatics Officer, Clinical and Translational Science Institute, University of Minnesota

2:00pm | Oversight of AI in Research -- How should IRBs & other oversight bodies evaluate use of AI in research? Should research oversight programs themselves use AI to evaluate protocols and compliance?
Moderator: Stevie Chancellor, PhD, Assistant Professor, Department of Computer Science & Engineering, University of Minnesota

2:55pm | Closing Remarks

3:00pm | Adjourn
Speaker Biographies
Constantin Aliferis, MD, PhD, FACMI, is Professor of Medicine at the University of Minnesota and serves as the University’s CTSI Chief Research Informatics Officer as well as Director of the Institute for Health Informatics (IHI). Dr. Aliferis leads CTSI’s Biomedical Informatics Program, where he oversees the Best Practices Informatics Core, a consulting and collaborative science core created to generate and link datasets on demand, architect systems, perform sophisticated modeling and analysis, and assemble multidisciplinary teams to meet researcher needs. His research focuses on high-dimensional modeling and analysis designed to transform biomedical data into novel, actionable scientific knowledge. His broad interests include the use of advanced informatics and analytics to accelerate the sophistication, volume, quality, and reproducibility of scientific research; precision medicine; and quality and cost improvements in healthcare using Big Data approaches.
Leo Anthony Celi, MD, MSc, MPH, is Clinical Research Director and Principal Research Scientist at the MIT Laboratory for Computational Physiology (LCP), Associate Professor at Harvard Medical School, and a practicing intensive care unit (ICU) physician at the Beth Israel Deaconess Medical Center (BIDMC). He is also Editor-in-Chief of PLoS Digital Health. Dr. Celi is the Principal Investigator behind the Medical Information Mart for Intensive Care (MIMIC) and its offshoots, MIMIC-CXR, MIMIC-ED, MIMIC-ECHO, and MIMIC-ECG. With close to 100k users worldwide, an open codebase, and close to 10k publications in Google Scholar, the datasets have shaped the course of machine learning in healthcare in the United States and beyond. In partnership with hospitals, universities, and professional societies across the globe, Dr. Celi and his team have organized over 50 datathons in 22 countries, bringing together students, clinicians, researchers, and engineers to leverage data routinely collected in the process of care.
Stevie Chancellor, PhD, is an Assistant Professor in the Department of Computer Science & Engineering at the University of Minnesota. Her research combines approaches from Human-Computer Interaction and AI to build and critically evaluate human-centered AI systems, focusing on high-risk health behaviors in online communities. She applies machine learning to digital trace data from millions of social media interactions to identify high-risk behaviors. At the same time, she critically evaluates these predictive approaches and develops more ethical and compassionate research practices for computer science.
Connie White Delaney, PhD, RN, FAAN, FACMI, serves as Professor and Dean in the School of Nursing at the University of Minnesota, and is the Knowledge Generation Lead for the National Center for Interprofessional Practice and Education. From 2010 to 2015, she served as Associate Director of the Clinical Translational Science Institute – Biomedical Informatics and Acting Director of the Institute for Health Informatics (IHI) in the Academic Health Center. She is an active researcher in data and information technology standards for nursing, health care, and interprofessional practice and education; big data science; and integrative informatics. Dean Delaney is the first Fellow of the American College of Medical Informatics to serve as a Dean of Nursing, and was an inaugural appointee to the U.S. Health Information Technology Policy Committee, Office of the National Coordinator, and Office of the Secretary for the U.S. Department of Health and Human Services (HHS).
Judy Wawira Gichoya, MD, MS, is an Associate Professor in the Department of Radiology and Imaging Sciences at Emory University School of Medicine, where she serves as Co-Director of the Healthcare AI Innovation and Translational Informatics (HITI) Lab. Trained as both an informatician and an interventional radiologist, her work centers on using data science to study health equity. Her group works in four areas -- building diverse datasets for machine learning (e.g., the Emory Breast dataset), evaluating AI for bias and fairness, validating AI in real-world settings, and training the next generation of data scientists (both clinical and technical students) through hive learning and village mentoring. She serves as Associate Editor of the RSNA’s Radiology: Artificial Intelligence and as Program Director for its Trainee Editorial Board and the medical student machine learning elective. She was recognized as a 2023 Emerging Scholar by the National Academy of Medicine.
Mary L. Gray, PhD, is Senior Principal Researcher at Microsoft Research and a Fellow at Harvard University’s Berkman Klein Center for Internet and Society. She is on research leave from Indiana University, where she holds a faculty position in the Luddy School of Informatics, Computing, and Engineering with affiliations in Anthropology, Gender Studies, and the Media School. Dr. Gray chairs the Microsoft Research Ethics Review Program, the only federally registered institutional review board of its kind in the tech industry. She is a leading expert in the field of AI and ethics, particularly research methods at the intersections of computer and social sciences. Her research looks at how technology access, material conditions, and everyday uses of technologies transform people’s lives. In 2020, Dr. Gray was named a MacArthur Fellow for her contributions to anthropology and the study of technology, digital economies, and society.
Isaac (Zak) Kohane, MD, PhD, is the inaugural Chair of the Department of Biomedical Informatics, the Marion V. Nelson Professor of Biomedical Informatics at Harvard Medical School, and Professor of Pediatrics at Boston Children’s Hospital. He is also Editor-in-Chief of NEJM AI. Over the last 30 years, Dr. Kohane’s research agenda has been driven by the vision of what biomedical researchers could do to find new cures, provide new diagnoses, and deliver the best care available if data could be converted more rapidly to knowledge and knowledge to practice. He develops and applies computational techniques to address disease at multiple scales: from whole healthcare systems as “living laboratories” to the functional genomics of neurodevelopment with a focus on autism. Prof. Kohane has published several hundred papers in the medical literature and coauthored The AI Revolution in Medicine: GPT-4 and Beyond (2023). He is a member of the Institute of Medicine and the American Society for Clinical Investigation.
Alex John London, PhD, is the K&L Gates Professor of Ethics and Computational Technologies, co-lead of the K&L Gates Initiative in Ethics and Computational Technologies, Director of the Center for Ethics and Policy, and Chief Ethicist at the Block Center for Technology and Society at Carnegie Mellon University. An elected Fellow of The Hastings Center, Prof. London’s work focuses on ethical and policy issues surrounding the development and deployment of novel technologies in medicine, biotechnology, and artificial intelligence, on methodological issues in theoretical and practical ethics, and on cross-national issues of justice and fairness. He is a member of the World Health Organization (WHO) Expert Group on Ethics and Governance of AI and is currently a co-leader of the ethics core for the NSF AI Institute for Collaborative Assistance and Responsive Interaction for Networked Groups (AI-CARING).
Bradley Malin, PhD, is the Accenture Professor of Biomedical Informatics, Biostatistics, and Computer Science, as well as Vice Chair for Research Affairs in the Department of Biomedical Informatics at Vanderbilt University Medical Center. His research is on the development of technologies to enable artificial intelligence and machine learning (AI/ML) in the context of organizational, political, and health information architectures. He co-directs the AI Discovery and Vigilance to Accelerate Innovation and Clinical Excellence (ADVANCE) Center. He is also co-director of the Center for Genetic Privacy and Identity in Community Settings (GetPreCiSe), an NIH Center of Excellence on Ethical, Legal, and Social Implications Research (CEER); the Ethics Core of the NIH Bridge2AI program; and the Infrastructure Core of the NIH Artificial Intelligence/Machine Learning Consortium to Advance Health Equity and Researcher Diversity (AIM-AHEAD).
Genevieve Melton-Meaux, MD, PhD, FACMI, FACS, FACSRS, serves as Professor of Surgery, Senior Associate Dean of Health Informatics and Data Science, Director of the Center for Learning Health System Sciences, and Core Faculty in the Institute for Health Informatics at the University of Minnesota. She serves as the Chief Health Informatics and AI Officer for M Health Fairview, leading informatics, including Clinical Decision Support (CDS) and Health IT optimization, as well as Fairview's AI program. Her research interests include surgical informatics, improving note usage in EHRs, evaluating technology solutions in practice, clinical colorectal surgery, advancing learning health system capabilities and the generation of real-world evidence, and clinical natural language processing (NLP). Her national leadership roles include serving as Immediate Past President of the American College of Medical Informatics and President of the American Medical Informatics Association.
Shashank Priya, PhD, serves as the University of Minnesota’s Vice President for Research & Innovation. In this position, he oversees a $1+ billion research enterprise across all campuses and facilities. He manages units responsible for administration of sponsored projects, research and regulatory compliance, and technology commercialization, as well as 10 interdisciplinary academic centers and institutes. He also oversees a growing corporate engagement portfolio for the University. Dr. Priya is a Professor of Chemical Engineering and Materials Science.
David B. Resnik, JD, PhD, is a Bioethicist at the National Institute of Environmental Health Sciences (NIEHS), National Institutes of Health (NIH) and Senior Advisor for Research Integrity to the NIH Director of Intramural Research. From 1998 to 2004, he was an Associate and then Full Professor of Medical Humanities at the Brody School of Medicine at East Carolina University (ECU) and an Associate Director of the Bioethics Center at ECU and University Health Systems. He has published more than 300 articles and 10 books on ethical, social, legal, and philosophical issues in science, technology, and medicine. He is a Fellow of the American Association for the Advancement of Science. Dr. Resnik is an Associate Editor of the journal Accountability in Research and has written extensively on the ethics of using AI in research and publication.
Vardit Ravitsky, PhD, is the President and CEO of The Hastings Center, an independent, nonpartisan bioethics research institute. Dr. Ravitsky joined the Center from the University of Montreal, where she was a Professor in the Bioethics Program, School of Public Health. She is also a Senior Lecturer on Global Health and Social Medicine at Harvard Medical School. Her research in bioethics focuses on the ethical, legal, and social implications of genomics and assisted reproductive technologies, with an emphasis on emerging biotechnologies and their implications for women’s autonomy and for disability rights. She also studies the ethics of AI in biomedicine. She is a Principal Investigator on two Bridge2AI research projects funded by the National Institutes of Health that expand the use of AI in biomedical and behavioral research. She is Immediate Past President of the International Association of Bioethics and currently serves as its Vice President.
Francis X. Shen, JD, PhD, is a Professor and Solly Robins Distinguished Research Fellow at the University of Minnesota Law School, and also a Faculty Member in the UMN Graduate Program in Neuroscience. He is Co-Chair of the Consortium on Law and Values in Health, Environment & the Life Sciences. He directs the Shen Neurolaw Lab and is Chief Innovation Officer of the Center for Law, Brain & Behavior at Massachusetts General Hospital. His research interests include the ethical, legal, and social implications of emerging neuroimaging and neuromodulation technologies and AI.
Effy Vayena, PhD, is a Professor of Bioethics at the Swiss Federal Institute of Technology (ETH Zurich) whose research focuses on issues at the intersection of medicine, data, and ethics. As a professor of health policy, she founded the Health Ethics and Policy Lab to tackle pressing questions that arise through technological advances such as genomic technologies in healthcare and research. She has been appointed a Visiting Professor at the Center for Bioethics at Harvard Medical School and a Faculty Associate at the Berkman Klein Center for Internet and Society at Harvard University, where she was previously a Fellow. Prof. Vayena is an elected member of the Swiss Academy of Medical Sciences and co-chairs the WHO expert advisory group on the ethics and governance of artificial intelligence for health.
Jeannette M. Wing, PhD, is the Executive Vice President for Research at Columbia University and Professor of Computer Science. In her EVPR role, she has overall responsibility for the University’s research enterprise. She joined Columbia in 2017 as the inaugural Avanessians Director of the Data Science Institute. Prior to Columbia, Dr. Wing was Corporate Vice President of Microsoft Research and served as Assistant Director for Computer and Information Science and Engineering at the National Science Foundation. Dr. Wing’s research contributions have been in the areas of trustworthy AI, security and privacy, specification and verification, concurrent and distributed systems, programming languages, and software engineering. She is a Fellow of the American Academy of Arts and Sciences, the American Association for the Advancement of Science (AAAS), the Association for Computing Machinery (ACM), and the Institute of Electrical and Electronics Engineers (IEEE), and is a member of the National Academy of Engineering.
Susan M. Wolf, JD, is a Regents Professor; McKnight Presidential Professor of Law, Medicine & Public Policy; Faegre Drinker Professor of Law; and Professor of Medicine at the University of Minnesota. She is Chair of the University’s Consortium on Law and Values in Health, Environment & the Life Sciences. She is an elected member of the National Academy of Medicine (NAM) and a Fellow of the American Association for the Advancement of Science (AAAS). Prof. Wolf is a member of the National Academies Strategic Council for Research Excellence, Integrity, and Trust. Her research focuses on ethical, legal, and societal issues in biomedicine and the implications of emerging technologies including genomics, neuroscience, and bioengineering.
Planning Committee
Joanne Billings, MD, MPH, is an Associate Professor of Medicine in the Division of Pulmonary, Allergy, Critical Care and Sleep Medicine (PACCS). She holds a leadership role with the Research and Innovation Office (RIO) as Associate Vice President for Research Integrity and Compliance. She serves as Co-Medical Director of CTSI’s Clinical Translational Research Services (CTRS) core. Dr. Billings's primary research and clinical work focuses on people with cystic fibrosis. She oversees the PACCS Division clinical research program.
Shashank Priya, PhD, serves as the University of Minnesota’s Vice President for Research and Innovation. In this position, he oversees a $1+ billion research enterprise across all campuses and facilities. He manages units responsible for administration of sponsored projects, research and regulatory compliance, and technology commercialization, as well as 10 interdisciplinary academic centers and institutes. He also oversees a growing corporate engagement portfolio for the University. Dr. Priya is a Professor of Chemical Engineering and Materials Science.
Danielle Rintala, MS, directs the Risk Intelligence and Compliance Team (RIACT) in the Office of the Vice President for Research and Innovation (RIO) at the University of Minnesota. RIACT monitors near- and long-term research risks; conducts compliance investigations; ensures compliance in research-associated financial transactions and research registries; and manages Responsible Conduct of Research (RCR) training and the Certified Approver program. Prior to joining the University of Minnesota, she was the Associate Director of Research Compliance and Biosafety Officer at the University of Wisconsin-Milwaukee.
Francis Shen, JD, PhD, is an expert at the intersection of law and neuroscience, as well as law and artificial intelligence. He is a Member of the Faculty of the UMN Graduate Program in Neuroscience, and Co-Chair of the UMN Consortium on Law and Values in Health, Environment & the Life Sciences. He directs the Shen Neurolaw Lab and conducts empirical, legal, and ethical research to examine how insights from neuroscience and artificial intelligence can make the legal system more just and effective. He also explores the ethical, legal, and social implications of advances in neurotechnology.
Susan M. Wolf, JD, is a Regents Professor; McKnight Presidential Professor of Law, Medicine & Public Policy; Faegre Drinker Professor of Law; and Professor of Medicine at the University of Minnesota. She is Chair of the University’s Consortium on Law and Values in Health, Environment & the Life Sciences. She is an elected member of the National Academy of Medicine (NAM) and a Fellow of the American Association for the Advancement of Science (AAAS). Prof. Wolf is a member of the National Academies Strategic Council for Research Excellence, Integrity, and Trust. Her research focuses on ethical, legal, and societal issues in biomedicine and the implications of emerging technologies including genomics, neuroscience, and bioengineering.
Advisory Committee
2025 Research Ethics Day Conference Advisory Committee
Alonso Guedes, DVM, MS, PhD, DACVAA
Resources
- Aguilar N, Landau AY, Mathiyazhagan S, et al. Applying Reflexivity to Artificial Intelligence for Researching Marginalized Communities and Real-World Problems. Proceedings of the 56th Hawaii International Conference on System Sciences 2023;712-721. https://aisel.aisnet.org/cgi/viewcontent.cgi?article=1361&context=hicss-56.
- Amann J, Blasimme A, Vayena E, Frey D, Madai VI. Explainability for Artificial Intelligence in Healthcare: A Multidisciplinary Perspective. BMC Medical Informatics and Decision Making 2020;20(310):1-9. doi: https://doi.org/10.1186/s12911-020-01332-6.
- Bernstein MS, Levi M, Magnus D, et al. ESR: Ethics and Society Review of Artificial Intelligence Research. arXiv 2021;2106.11521v2:1-18. doi: https://doi.org/10.48550/arXiv.2106.11521.
- Blau W, Cerf VG, Enriquez J, et al. Protecting Scientific Integrity in an Age of Generative AI. PNAS 2024;121(22):e2407886121. doi: https://doi.org/10.1073/pnas.2407886121.
- Bouhouita-Guermech S, Gogognon P, Bélisle-Pipon JC. Specific Challenges Posed by Artificial Intelligence in Research Ethics. Frontiers in Artificial Intelligence 2023;6(1149082). doi: https://doi.org/10.3389/frai.2023.1149082.
- Celi LA, Cellini J, Charpignon M-L, et al. Sources of Bias in Artificial Intelligence that Perpetuate Healthcare Disparities – A Global Review. PLoS Digital Health 2022;1(3):e0000022. doi: https://doi.org/10.1371/journal.pdig.0000022.
- Chen Y, Clayton EW, Novak LL, Anders S, Malin B. Human-Centered Design to Address Biases in Artificial Intelligence. Journal of Medical Internet Research 2023;25:1-10. doi: https://doi.org/10.2196/43251.
- Cohen IG, Slottje A. Artificial Intelligence and the Law of Informed Consent. Research Handbook on Health, AI and the Law 2024;167-182. doi: https://doi.org/10.4337/9781802205657.00017.
- Cohen IG. Informed Consent and Medical Artificial Intelligence: What to Tell the Patient? Georgetown Law Journal 2020;108:1425-1469. doi: https://dx.doi.org/10.2139/ssrn.3529576.
- COPE. Authorship and AI Tools. First published Feb. 13, 2023. https://publicationethics.org/cope-position-statements/ai-author.
- Dalrymple D, Skalse J, Bengio Y, Russell S, Tegmark M, Seshia S, Omohundro S, Szegedy C, Goldhaber B, Ammann N, Abate A, Halpern J, Barrett C, Zhao D, Zhi-Xuan T, Wing J, Tenenbaum J. Towards Guaranteed Safe AI: A Framework for Ensuring Robust and Reliable AI Systems. arXiv 2024;1-30. doi: https://doi.org/10.48550/arXiv.2405.06624.
- Drazen JM, Haug CJ. Trials of AI Interventions Must Be Preregistered. NEJM AI 2024;1(4). doi: https://doi.org/10.1056/AIe2400146.
- Eto T, Heath E. Artificial Intelligence Human Subjects Research (AI HSR) IRB Reviewer Checklist. 2022. https://www.academia.edu/66425895/Artificial_Intelligence_Human_Subjects_Research_AI_HSR_IRB_Reviewer_Checklist.
- European Commission, ERA Forum Stakeholders’ Document. Living Guidelines on the Responsible Use of Generative AI in Research. 2024. https://research-and-innovation.ec.europa.eu/document/download/2b6cf7e5-36ac-41cb-aab5-0d32050143dc_en?filename=ec_rtd_ai-guidelines.pdf.
- Ferretti A, Ienca M, Sheehan M, et al. Ethics Review of Big Data Research: What Should Stay and What Should Be Reformed? BMC Medical Ethics 2021;22(51):1-13. doi: https://doi.org/10.1186/s12910-021-00616-4.
- Flanagin A, Bibbins-Domingo K, Berkwits M, et al. Nonhuman “Authors” and Implications for the Integrity of Scientific Publication and Medical Knowledge. JAMA 2023;329(8):637-639. doi: https://doi.org/10.1001/jama.2023.1344.
- Flanagin A, Kendall-Taylor J, Bibbins-Domingo K. Guidance for Authors, Peer Reviewers, and Editors on Use of AI, Language Models, and Chatbots. JAMA 2023;330(8):702-703. doi: https://doi.org/10.1001/jama.2023.12500.
- Flanagin A, Pirracchio R, Khera R. Reporting Use of AI in Research and Scholarly Publication – JAMA Network Guidance. JAMA 2024;331(13):1096-1098. doi: https://doi.org/10.1001/jama.2024.3471.
- Friesen P, Douglas-Jones R, Marks M, et al. Governing AI-Driven Health Research: Are IRBs Up to the Task? Ethics in Human Research 2021;43(2):35-42. doi: https://doi.org/10.1002/eahr.500085.
- Gallifant J, Bitterman DS, Celi LA, Gichoya JW, Matos J, McCoy LG, Pierce RL. Ethical Debates Amidst Flawed Healthcare Artificial Intelligence Metrics. npj Digital Medicine 2024;7(243):1-3. doi: https://doi.org/10.1038/s41746-024-01242-1.
- Gallifant J, Nakayama LF, Gichoya JW, et al. Equity Should Be Fundamental to the Emergence of Innovation. PLoS Digital Health 2023;2(4):e0000224. doi: https://doi.org/10.1371/journal.pdig.0000224.
- Gichoya JW, Banerjee I, Bhimireddy AR, et al. AI Recognition of Patient Race in Medical Imaging: A Modelling Study. Lancet Digital Health 2022;4(6):e406-e414. doi: https://doi.org/10.1016/S2589-7500(22)00063-2.
- Gichoya JW, Thomas K, Celi LA, et al. AI Pitfalls and What Not to Do: Mitigating Bias in AI. British Journal of Radiology 2023;96(1150):1-8. doi: https://doi.org/10.1259/bjr.20230023.
- Gil Y. Will AI Write Scientific Papers in the Future? Presidential Address. AI Magazine 2021;42:3-15. doi: https://doi.org/10.1609/aaai.12027.
- Godwin RC, Bryant AS, Wagener BM, et al. IRB-Draft Generator: A Generative AI Tool to Streamline the Creation of Institutional Review Board Applications. SoftwareX 2024;25:101601;1-5. doi: https://doi.org/10.1016/j.softx.2023.101601.
- Gray ML. A Human Rights Framework for AI Research Worthy of Public Trust. Issues in Science and Technology; May 21, 2024. doi: https://doi.org/10.58875/ERUU8159.
- Harvard Library, Research Guides. Artificial Intelligence for Research and Scholarship. Last updated Aug. 19, 2024. https://guides.library.harvard.edu/c.php?g=1330621&p=9798082.
- Hendricks-Sturrup R, Simmons M, Anders S, et al. Developing Ethics and Equity Principles, Terms, and Engagement Tools to Advance Health Equity and Researcher Diversity in AI and Machine Learning: Modified Delphi Approach. Journal of Medical Internet Research AI 2023;2(e52888). doi: https://doi.org/10.2196/52888.
- Hosseini M, Rasmussen LM, Resnik DB. Using AI To Write Scholarly Publications. Accountability in Research 2024;31(7):715-723. doi: https://doi.org/10.1080/08989621.2023.2168535.
- Hosseini M, Resnik DB. Guidance Needed for Using Artificial Intelligence to Screen Journal Submissions for Misconduct. Research Ethics 2024. doi: https://doi.org/10.1177/17470161241254052.
- Hosseini M, Resnik DB, Holmes K. The Ethics of Disclosing the Use of Artificial Intelligence Tools in Writing Scholarly Manuscripts. Research Ethics 2023;19(4):449-465. doi: https://doi.org/10.1177/17470161231180449.
- ICMJE. Recommendations for the Conduct, Reporting, Editing, and Publication of Scholarly Work in Medical Journals. Updated May 2023. https://www.icmje.org/news-and-editorials/icmje-recommendations_annotated_may23.pdf.
- ICMJE. Recommendations for the Conduct, Reporting, Editing, and Publication of Scholarly Work in Medical Journals. Updated January 2024. https://www.icmje.org/icmje-recommendations.pdf.
- Ienca M, Vayena E. Ethical Requirements for Responsible Research with Hacked Data. Nature Machine Intelligence 2021;3:744-748. doi: https://doi.org/10.1038/s42256-021-00389-w.
- Jordan SR. Designing an Artificial Intelligence Research Review Committee. Future of Privacy Forum 2019. https://fpf.org/wp-content/uploads/2019/10/DesigningAIResearchReviewCommittee.pdf.
- Kaebnick GE, Magnus DC, Kao A, et al. Editors’ Statement on the Responsible Use of Generative AI Technologies in Scholarly Journal Publishing. Medicine, Health Care and Philosophy 2023;26:499-503. doi: https://doi.org/10.1007/s11019-023-10176-6.
- Kaushik D, Lipton ZC, London AJ. Resolving the Human-Subjects Status of Machine Learning’s Crowdworkers: What Ethical Framework Should Govern the Interaction of ML Researchers and Crowdworkers? Queue 2024;21(6):101-127. doi: https://doi.org/10.1145/3639452.
- Koller D, Beam A, Manrai A, Ashley E, Liu X, Gichoya J, Holmes C, Zou J, Dagan N, Wong TY, Blumenthal D, Kohane I. Why We Support and Encourage the Use of Large Language Models in NEJM AI Submissions. NEJM AI 2023;1(1):1-3. doi: https://doi.org/10.1056/AIe2300128.
- Li H, Moon JT, Purkayastha S, et al. Ethics of Large Language Models in Medicine and Medical Research. Lancet Digital Health 2023;5(6):e333-e335. doi: https://doi.org/10.1016/S2589-7500(23)00083-3.
- Liebrenz M, Schleifer R, Buadze A, et al. Generating Scholarly Content with ChatGPT: Ethical Challenges for Medical Publishing. Lancet Digital Health 2023;5(3):e105-e106. doi: https://doi.org/10.1016/S2589-7500(23)00019-5.
- London AJ. Artificial Intelligence in Medicine: Overcoming or Recapitulating Structural Challenges to Improving Patient Care? Cell Reports Medicine 2022;3(5):1-8. doi: https://doi.org/10.1016/j.xcrm.2022.100622.
- Makridis CA, Boese A, Fricks R, et al. Informing the Ethical Review of Human Subjects Research Utilizing Artificial Intelligence. Frontiers in Computer Science 2023;5:1235226;1-8. doi: https://doi.org/10.3389/fcomp.2023.1235226.
- McCradden MD, Joshi S, Anderson JA, London AJ. A Normative Framework for Artificial Intelligence as a Sociotechnical System in Healthcare. Patterns 2023;4(11):1-9. doi: https://doi.org/10.1016/j.patter.2023.100864.
- Microsoft Research. Project Resolve. Accessed Dec. 9, 2024. https://www.microsoft.com/en-us/research/project/project-resolve/.
- Nasr M, Carlini N, Hayase J, Jagielski M, Cooper AF, Ippolito D, Choquette-Choo CA, Wallace E, Tramer F, Lee K. Scalable Extraction of Training Data from (Production) Language Models. arXiv 2023;1-64. doi: https://doi.org/10.48550/arXiv.2311.17035.
- National Academy of Medicine. Health Care Artificial Intelligence Code of Conduct. https://nam.edu/programs/value-science-driven-health-care/health-care-artificial-intelligence-code-of-conduct/.
- National Academy of Sciences. US-UK Scientific Forum on Science in the Age of AI. June 11-12, 2024. https://www.nasonline.org/event/us-uk-scientific-forum-on-science-in-the-age-of-ai/.
- National Academies of Sciences, Engineering, and Medicine. AI for Scientific Discovery: Proceedings of a Workshop. Washington, DC: National Academies Press 2024. doi: https://doi.org/10.17226/27457.
- National Academies of Sciences, Engineering, and Medicine. Fostering Responsible Computing Research: Foundations and Practices. National Academies Press 2022. doi: https://doi.org/10.17226/26507.
- National Institutes of Health (NIH). The Use of Generative Artificial Intelligence Technologies Is Prohibited for the NIH Peer Review Process. NOT-OD-23-149. June 23, 2023. https://grants.nih.gov/grants/guide/notice-files/NOT-OD-23-149.html.
- National Institutes of Health (NIH). Use of Generative AI in Peer Review. Frequently Asked Questions (FAQs). Last updated Aug. 2, 2024. https://grants.nih.gov/faqs#/use-of-generative-ai-in-peer-review.htm.
- National Science Foundation (NSF). Notice to Research Community: Use of Generative Artificial Intelligence Technology in the NSF Merit Review Process. Dec. 14, 2023. https://new.nsf.gov/news/notice-to-the-research-community-on-ai.
- Nature. Editorial. Tools Such as ChatGPT Threaten Transparent Science; Here Are Our Ground Rules for their Use. Nature 2023;613:612. doi: https://doi.org/10.1038/d41586-023-00191-1.
- Palmer K. AI Threatens to Cement Racial Bias in Clinical Algorithms. Could It Also Chart a Path Forward? STAT News, Sept. 11, 2024. https://www.statnews.com/2024/09/11/embedded-bias-series-artificial-intelligence-risks-of-bias-in-medical-data/.
- Patton DU, Landau AY, Mathiyazhagan S. ChatGPT for Social Work Science: Ethical Challenges and Opportunities. Journal of the Society for Social Work and Research 2023;14(3):553-562. doi: https://doi.org/10.1086/726042.
- Penn State, AI Hub. AI Guidelines. 2023. https://ai.psu.edu/guidelines/.
- Perni S, Lehmann LS, Bitterman DS. Patients Should Be Informed When AI Systems Are Used in Clinical Trials. Nature Medicine 2023;29;1890-1891. doi: https://doi.org/10.1038/s41591-023-02367-8.
- PNAS Nexus. Information for Authors. Accessed Oct. 23, 2024. https://academic.oup.com/pnasnexus/pages/general-instructions?login=false.
- PNAS. PNAS Author Center: Editorial and Journal Policies. Accessed Oct. 23, 2024. https://www.pnas.org/author-center/editorial-and-journal-policies#authorship-and-contributions.
- Porsdam Mann S, Vazirani AA, Aboy M, Earp BD, Minssen T, Cohen IG, Savulescu J. Guidelines for Ethical Use and Acknowledgement of Large Language Models in Academic Writing. Nature Machine Intelligence 2024. doi: https://doi.org/10.1038/s42256-024-00922-7.
- Resnik DB, Hosseini M. The Ethics of Using Artificial Intelligence in Scientific Research: New Guidance Needed for a New Tool. AI and Ethics 2024;1-19. doi: https://doi.org/10.1007/s43681-024-00493-8.
- Science. Science Journals: Editorial Policies. Accessed Oct. 23, 2024. https://www.science.org/content/page/science-journals-editorial-policies.
- Shaw J, Ali J, Atuire CA, et al. Research Ethics and Artificial Intelligence for Global Health: Perspectives from the Global Forum on Bioethics in Research. BMC Medical Ethics 2024;25(46):1-9. doi: https://doi.org/10.1186/s12910-024-01044-w.
- Sleigh J, Hubbs S, Blasimme A, Vayena E. Can Digital Tools Foster Ethical Deliberation? Humanities and Social Science Communications 2024;11(117):1-10. doi: https://doi.org/10.1057/s41599-024-02629-x.
- Thorp HH. ChatGPT Is Fun, But Not an Author. Science 2023;379(6630):13. doi: https://doi.org/10.1126/science.adg7879.
- Thorp HH. Genuine Images in 2024. Science 2024;383(6678):7. doi: https://doi.org/10.1126/science.adn7530.
- U.S. Department of Health and Human Services, Office for Human Research Protections (OHRP), SACHRP Recommendations. Considerations for IRB Review of Research Involving Artificial Intelligence. Approved July 1, 2022. https://www.hhs.gov/ohrp/sachrp-committee/recommendations/attachment-e-july-25-2022-letter/index.html.
- University of California Berkeley, Office of the Chancellor, Office of Ethics, Risk and Compliance Services. Appropriate Use of Generative AI Tools. 2024. https://oercs.berkeley.edu/privacy/privacy-resources/appropriate-use-generative-ai-tools.
- University of Michigan, Generative Artificial Intelligence. Accessed Nov. 21, 2024. https://genai.umich.edu/. U-M Guidance for Faculty/Instructors. Accessed Nov. 21, 2024. https://genai.umich.edu/resources/faculty.
- University of Minnesota, Technology Help. Artificial Intelligence: Appropriate Use of Generative AI Tools. 2024. https://it.umn.edu/services-technologies/resources/artificial-intelligence-appropriate-use. Navigating AI @ UMN. 2024. https://it.umn.edu/navigating-ai-umn.
- University of Minnesota Libraries. ChatGPT, Copilot, and Other AI Tools. Updated Sept. 17, 2024. https://libguides.umn.edu/chatgpt.
- University of Minnesota Duluth, Information Technology Systems and Services. Artificial Intelligence at UMD. 2024. https://itss.d.umn.edu/service-catalog/academic-technology/ai.
- University of Utah, Office of the Vice President for Research. Guidance on the Use of AI in Research. July 13, 2023. https://attheu.utah.edu/facultystaff/vpr-statement-on-the-use-of-ai-in-research/.
- Wing JM. Trustworthy AI. Communications of the ACM 2021;64(10):64-71. doi: https://doi.org/10.1145/3448248.
- Wing JM, Wooldridge M. Findings and Recommendations of the May 2022 US-UK AI Workshop. 2022;1-37. https://par.nsf.gov/servlets/purl/10390650.
- Yang Y, Zhang H, Gichoya JW, Katabi D, Ghessemi M. The Limits of Fair Medical Imaging AI in Real-World Generalization. Nature Medicine 2024;30:2838-2848. doi: https://doi.org/10.1038/s41591-024-03113-4.
- Yu KH, Healey E, Leong TY, Kohane IS, Manrai AK. Medical Artificial Intelligence and Human Values. The New England Journal of Medicine 2024;390(20):1895-1904. doi: https://doi.org/10.1056/NEJMra2214183.
Presented by the Research & Innovation Office (RIO); Consortium on Law and Values in Health, Environment & the Life Sciences; Masonic Cancer Center; and Clinical Translational Science Institute, University of Minnesota.
This conference is part of Research Ethics Week (March 3 - March 7, 2025), during which the University of Minnesota focuses on professional development and best practices to ensure safety and integrity in research. A list of Research Ethics Week events is posted online.
Follow us on Twitter: @UMNconsortium
Join the conversation by using #ResearchEthics2025
Disclosure information is pending.
Land Acknowledgment:
The University of Minnesota - Twin Cities is built within the traditional homelands of the Dakota people. It is important to acknowledge the peoples on whose land we live, learn, and work as we seek to improve and strengthen our relations with our tribal nations. We also acknowledge that words are not enough. We must ensure that our institution provides support, resources, and programs that increase access to all aspects of higher education for our American Indian students, staff, faculty, and community members.
Continuing Education Information:
Accreditation Statement
In support of improving patient care, University of Minnesota, Interprofessional Continuing Education is jointly accredited by the Accreditation Council for Continuing Medical Education (ACCME), the Accreditation Council for Pharmacy Education (ACPE), and the American Nurses Credentialing Center (ANCC) to provide continuing education for the healthcare team.
Credit Designation Statements
American Medical Association (AMA)
The University of Minnesota, Interprofessional Continuing Education designates this live activity for a maximum of 4.75 AMA PRA Category 1 Credits™. Physicians should claim only the credit commensurate with the extent of their participation in the activity.
Minnesota Board of Continuing Legal Education
An application will be submitted to the Minnesota Board of Continuing Legal Education. Determination of credit eligibility is pending review.
Other Healthcare Professionals
Other healthcare professionals who participate in this CE activity may submit their statement of participation to their appropriate accrediting organizations or state boards for consideration of credit. The participant is responsible for determining whether this activity meets the requirements for acceptable continuing education.
Note: You must participate in the live webinar to be eligible to claim credit. Credit will not be made available for viewing the recording.