
### Integrating AI Assessment: Key Insights from Various Radiology Societies

Describing artificial intelligence (AI) as “the most disruptive force in radiology in many years,” researchers representing five prominent radiology societies, including the American College of Radiology (ACR), the Radiological Society of North America (RSNA), and the European Society of Radiology (ESR), have released a multinational statement outlining practical considerations for evaluating, implementing, and overseeing AI tools in radiology.

Published concurrently in five distinct journals, such as Radiology: Artificial Intelligence, Insights into Imaging, and the Journal of the American College of Radiology (JACR), this collaborative statement addresses potential biases associated with AI utilization, methods for assessing clinical precision, objectives for monitoring AI software, financial implications, and long-term sustainability.

Highlighted Insights from the Statement:

1. The experts underscored the importance of thorough cost-benefit and return-on-investment (ROI) analyses tailored to the healthcare environment and specific circumstances when contemplating the integration of adjunctive AI tools such as AI-enabled opportunistic screening. In outpatient imaging centers or fee-for-service hospitals, the benefits of AI may include an uptick in identified findings that warrant follow-up examinations or management, as well as improved efficiency in emergency departments and reduced lengths of stay.
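The ROI analysis described above can be sketched in a few lines. This is a minimal illustration only: the function name, inputs, and every dollar figure below are hypothetical assumptions, not values from the statement, and a real analysis would need to reflect the site's actual payer mix and workflow.

```python
def ai_roi(annual_license_cost, integration_cost, years,
           extra_findings_per_year, revenue_per_followup,
           minutes_saved_per_study, studies_per_year,
           cost_per_radiologist_minute):
    """Rough annualized ROI for an adjunctive AI deployment (illustrative)."""
    # Amortize one-time integration cost over the contract term.
    annual_cost = annual_license_cost + integration_cost / years
    # Benefit 1: additional findings that generate follow-up work.
    followup_revenue = extra_findings_per_year * revenue_per_followup
    # Benefit 2: reading-time savings valued at radiologist cost per minute.
    efficiency_savings = (minutes_saved_per_study * studies_per_year
                          * cost_per_radiologist_minute)
    annual_benefit = followup_revenue + efficiency_savings
    return (annual_benefit - annual_cost) / annual_cost

# Purely hypothetical numbers for a mid-size outpatient imaging center.
roi = ai_roi(annual_license_cost=60_000, integration_cost=30_000, years=3,
             extra_findings_per_year=120, revenue_per_followup=250,
             minutes_saved_per_study=0.5, studies_per_year=40_000,
             cost_per_radiologist_minute=3.0)
print(f"Estimated annual ROI: {roi:.0%}")
```

Even a toy model like this makes the statement's point concrete: the sign of the result flips easily with local assumptions, which is why the authors call for analyses tailored to each setting.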

2. While acknowledging AI's potential to alleviate escalating workloads amid a radiologist shortage, the researchers observed that ancillary benefits such as decreased burnout and improved radiologist recruitment are supplementary advantages that may not, on their own, offset the costs of AI implementation.

3. The researchers cautioned against AI implementations that solely transmit AI-generated results to an existing Picture Archiving and Communication System (PACS) due to the risk of automation bias among radiologists and the lack of information available to referring physicians regarding the accuracy and specifics of the AI model in use.

4. The statement authors advocate for the adoption of a system, such as a cloud-native environment, that allows radiologists to engage with and potentially adjust AI outcomes while providing feedback to AI vendors.

In a cloud-native setting, both the PACS and AI models can exchange radiology data and AI outcomes. Ensuring the acceptance and storage of AI results alongside radiologist feedback, enhancing data security, and perpetually monitoring AI accuracy are critical technical components that are facilitated in cloud-native systems, as articulated by lead author Adrian Brady, M.D., and colleagues.

Further Insights on Reported Error Rates

5. The discrepancy between reported error rates during AI model testing and real-world application is highlighted by the authors. They stress the need to account for variations in scanner manufacturers, protocols, disease prevalence, and local demographics when deploying AI software. In addition to error frequency, the evaluation of AI models should consider error detectability, correctability, and the potential patient impact of AI errors.

Strategic Deployment of AI Software

6. Concentrating the implementation of AI models in healthcare settings with a higher prevalence of specific diseases can enhance the acceptance of the model. For instance, focusing on conditions like pneumothorax (PTX) in inpatient chest X-rays can reduce false positives and increase radiologists’ acceptance based on accuracy.
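The prevalence argument above follows directly from Bayes' rule: at the same sensitivity and specificity, a model's positive predictive value (PPV) rises with disease prevalence. The sketch below uses invented performance figures purely to illustrate why the same PTX model produces proportionally fewer false positives on inpatient chest X-rays than in a low-prevalence screening population.

```python
def ppv(sensitivity, specificity, prevalence):
    """Positive predictive value via Bayes' rule."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

# Hypothetical model performance; the prevalence figures are assumptions.
inpatient = ppv(sensitivity=0.95, specificity=0.97, prevalence=0.05)
outpatient = ppv(sensitivity=0.95, specificity=0.97, prevalence=0.005)

print(f"PPV, inpatient (5% prevalence):    {inpatient:.0%}")
print(f"PPV, outpatient (0.5% prevalence): {outpatient:.0%}")
```

With these illustrative numbers, the identical model looks far more trustworthy to inpatient readers than to outpatient readers, which is the acceptance effect the authors describe.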

Acknowledging the Value of Continuous Monitoring with AI Models

7. Continuous monitoring of AI models and sharing assessments across various sites and regions through an AI data registry is emphasized. This approach enables participants in the registry to pinpoint local issues affecting AI model performance or broader problems linked to potential software updates.
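One simple form the continuous monitoring above can take is tracking the model's positive-call rate against a baseline established at validation, and alarming when it drifts (for example, after a scanner protocol change or a vendor software update). This is a minimal sketch, assuming an invented `DriftMonitor` class and arbitrary thresholds; registry-based monitoring in practice would compare far richer metrics across sites.

```python
from collections import deque

class DriftMonitor:
    """Flags when the rolling AI positive-call rate drifts from a
    validation-time baseline (illustrative thresholds only)."""

    def __init__(self, baseline_rate, window=500, tolerance=0.5):
        self.baseline = baseline_rate      # positive rate seen at validation
        self.window = deque(maxlen=window) # most recent studies
        self.tolerance = tolerance         # allowed relative deviation

    def record(self, ai_flagged):
        """Record one study; return True if the drift alarm fires."""
        self.window.append(1 if ai_flagged else 0)
        if len(self.window) < self.window.maxlen:
            return False  # not enough data yet
        rate = sum(self.window) / len(self.window)
        return abs(rate - self.baseline) > self.tolerance * self.baseline
```

Sharing such per-site alarm statistics through a registry is what lets participants distinguish a local issue (one site drifting) from a global one (every site drifting after an update).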

Factors Influencing Acceptance of AI Models

8. Identifying impactful cases can aid in gaining stakeholder support for AI models. Demonstrating instances where AI significantly influences patient outcomes or operational efficiencies can serve as compelling examples of AI’s potential impact to stakeholders like referring physicians and facility administrators.

9. Automation bias, algorithmic bias, and user-interface (UI) design play crucial roles in the evaluation and acceptance of AI models. Notably, in one study of pulmonary nodule detection, text-only AI output outperformed radiologist readers, while AI image overlays, though often preferred by radiologists, did not improve the performance of reviewing radiologists.

Last modified: January 23, 2024