### Enhancing Health Regulation Through Aviation-Inspired Safety Standards

The probability of perishing in a plane crash is remarkably low: the International Air Transport Association’s 2022 safety report puts the industry’s fatality risk at 0.11. Put another way, a person would need to take a flight every day for approximately 25,214 years before experiencing a statistically certain fatal accident. The aviation sector, known for its stringent regulations, is now capturing the interest of MIT scientists as a potential model for overseeing artificial intelligence (AI) in healthcare.
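
As a rough sanity check on where a figure of that order comes from, the arithmetic below assumes the 0.11 figure denotes fatal accidents per million flights; that reading is an assumption made here for illustration, and IATA’s published methodology differs slightly, which accounts for the small gap from the quoted 25,214 years.

```latex
% Back-of-the-envelope reading (assumption: 0.11 = fatal accidents per million flights)
\[
  \underbrace{\frac{10^{6}}{0.11}}_{\text{flights per fatal accident}} \approx 9.1 \times 10^{6}
  \quad\Longrightarrow\quad
  \frac{9.1 \times 10^{6}\ \text{flights}}{365\ \text{flights/year}} \approx 2.5 \times 10^{4}\ \text{years}.
\]
```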

Marzyeh Ghassemi, an assistant professor in the MIT Department of Electrical Engineering and Computer Science (EECS) and the Institute for Medical Engineering and Science, and Julie Shah, the H. N. Slater Professor of Aeronautics and Astronautics at MIT, have identified transparency challenges in AI models. Drawing parallels between aviation and healthcare, they aim to prevent biased AI models from adversely affecting marginalized patients.

Ghassemi, a principal investigator at the MIT Abdul Latif Jameel Clinic for Machine Learning in Health (Jameel Clinic) and the Computer Science and Artificial Intelligence Laboratory (CSAIL), along with Shah, assembled a diverse team of experts from MIT, Stanford University, the Federation of American Scientists, Emory University, the University of Adelaide, Microsoft, and the University of California San Francisco. This collaborative effort produced a research paper accepted at the Equity and Access in Algorithms, Mechanisms, and Optimization (EAAMO) conference.

The historical evolution of aviation, particularly its transition into automation, mirrors the current trajectory of AI development. The debate over AI explainability, rooted in the “black box” problem, underscores how important it is for users to understand how AI models arrive at their decisions without being misled.

The rigorous training process for commercial airline captains, which requires 1,500 hours of flight time and specialized instruction, spans approximately 15 years. This meticulous regimen serves as a potential blueprint for training medical professionals to use AI tools effectively in clinical settings.

The research paper advocates for a reporting system akin to the Federal Aviation Administration’s (FAA) approach for pilots, offering “limited immunity” to encourage the reporting of unsafe health AI tools without fear of severe repercussions. This proactive reporting mechanism aims to enhance patient safety and foster a culture of accountability in healthcare.

In the realm of healthcare, the World Health Organization’s 2023 report highlights that one in ten patients in high-income countries suffers harm due to medical errors during hospitalization. Despite this alarming statistic, underreporting of medical errors persists in healthcare, primarily due to punitive measures that prioritize individual blame over systemic reform.

The paper proposes leveraging existing government agencies, such as the FDA, FTC, and NIH, to regulate health AI effectively. Additionally, it suggests establishing an independent auditing authority to assess malfunctioning health AI systems, similar to the National Transportation Safety Board (NTSB) in aviation.

As the regulatory landscape evolves rapidly, stakeholders emphasize the need for balanced oversight to ensure the safety and efficacy of AI technologies in healthcare. The convergence of technological advancement and regulatory frameworks presents a unique opportunity to shape the future of AI governance while fostering innovation and patient well-being.

Ultimately, the collaborative efforts of researchers and policymakers aim to cultivate a regulatory environment that safeguards patients, promotes equity, and harnesses the transformative potential of AI in healthcare.
