Research Using AI: Checklist for Oversight, IRBs

Speakers at a recent meeting of Public Responsibility in Medicine and Research (PRIM&R) addressed the challenges that artificial intelligence (AI) poses when used in research protocols, including privacy and security concerns.[1] Donella S. Comeau, M.D.—a neuroscientist and vice chair of the Mass General Brigham institutional review board (IRB)—proposed strategies to “streamline IRB oversight of AI research.” As described on her slides, these are:

  • Demystifying AI: Tackle AI’s opacity to align understanding across developers, users, and regulators.

  • Governance Evolution: Craft governance frameworks that heighten transparency and accountability beyond conventional norms.

  • IRB Role Expansion: Broaden IRB functions to address AI risks across communities and systems proactively.

  • AI-Specific Protocols: Develop guidelines tailored to AI’s complexity, embedding ethical standards for sound innovation.

  • Regulatory Adaptation: Stay agile with [Food and Drug Administration] guidelines and AI’s intricacies to ascertain when oversight is required.

  • Full Lifecycle Oversight: Reform IRB strategies to cover AI’s entire lifecycle, ensuring accountability and ethical integrity.

  • Ethical Framework Development: Propel IRBs to adopt frameworks that support ethical AI development, acknowledging broader impacts.
