
Unveiling the AI “Black Box” Dilemma: A Comprehensive Analysis


A research team based in Japan has presented evidence of the importance of intrasource balance in COVID-19 chest x-ray datasets, warning that its absence can undermine the performance of deep-learning AI models. The finding was published in Scientific Reports on November 3.

Although the dataset was balanced across categories, the group, led by Dr. Zhang Zhang of Tohoku University in Sendai, found that training on an intrasource-imbalanced x-ray database introduced substantial training bias.

The team wrote, “Our investigation underscores that the imbalance within the training data source can yield unreliable outcomes from deep-learning models.”

To avoid intercategory imbalance (ICI), which arises when the amount of data differs across categories, researchers typically aggregate data from multiple medical institutions when building deep-learning AI models for COVID-19 detection.
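The distinction between the two kinds of balance can be made concrete with a small sketch. The records below are hypothetical, assuming only that each image carries a class label and a source institution; the point is that a dataset can be balanced across categories yet imbalanced within its sources:

```python
from collections import Counter

# Hypothetical image records: (label, source institution).
records = [
    ("covid", "hospital_A"), ("covid", "hospital_A"),
    ("covid", "hospital_B"), ("covid", "hospital_B"),
    ("normal", "hospital_C"), ("normal", "hospital_C"),
    ("normal", "hospital_D"), ("normal", "hospital_D"),
]

# Intercategory balance: equal image counts per label.
label_counts = Counter(label for label, _ in records)
print(label_counts)  # Counter({'covid': 4, 'normal': 4}) -> balanced

# Intrasource balance: each source should contribute every category.
per_source = {}
for label, source in records:
    per_source.setdefault(source, Counter())[label] += 1
for source, counts in per_source.items():
    print(source, dict(counts))
# Every source here holds only one label, so the dataset is
# intercategory balanced but intrasource imbalanced (ISI).
```

A model trained on such data can learn to recognize the hospital rather than the disease, which is exactly the risk the researchers describe.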

Nevertheless, the researchers explained, medical data are frequently siloed and acquired under different conditions across healthcare settings, so individual sources often contribute unevenly to each category, a phenomenon referred to as intrasource imbalance (ISI). They underscored that, despite receiving relatively little attention, this imbalance significantly affects the performance of deep-learning (DL) models.

To investigate the effects of ISI on DL architectures, the team compared the performance of a deep-learning model trained on an intrasource-imbalanced chest x-ray dataset with that of a model trained on an intrasource-balanced dataset for COVID-19 diagnosis.

The Qata-COV19 dataset comprised 3,761 COVID-19-positive images drawn from five public sources and 3,761 negative x-ray images from seven other public sources. The BIMCV dataset, in contrast, contained 2,461 positive and 2,461 negative chest x-ray (CXR) images from a single public source.

The team then adopted a cross-dataset approach: they trained the VGG-16 deep-learning model on the original Qata-COV19 images and tested it on the BIMCV dataset.

When the VGG-16 model was trained and tested on the original Qata-COV19 images, it showed excellent performance, with all area-under-the-curve (AUC) values exceeding 0.99. Notably, performance held even when key anatomical regions were masked in the Qata-COV19 images, a striking but puzzling result.
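A masking sanity check along these lines can be sketched in NumPy. The `model_score` function below is a hypothetical stand-in for a trained classifier, not the study's model; a model that genuinely relies on lung anatomy should change its output when the lung region is occluded, whereas the study found the Qata-COV19-trained model's performance barely moved:

```python
import numpy as np

def occlude(image, box, fill=0.0):
    """Blank out a rectangular region (y0, y1, x0, x1) of a grayscale CXR."""
    y0, y1, x0, x1 = box
    out = image.copy()
    out[y0:y1, x0:x1] = fill
    return out

# Dummy stand-in for a trained classifier: it scores the mean intensity
# of the central (lung) region, so occluding that region changes the score.
def model_score(image):
    h, w = image.shape
    return image[h // 4 : 3 * h // 4, w // 4 : 3 * w // 4].mean()

cxr = np.random.default_rng(1).random((224, 224))  # synthetic image
box = (56, 168, 56, 168)  # central region standing in for the lungs

before = model_score(cxr)
after = model_score(occlude(cxr, box))
print(abs(before - after) > 0.1)  # a lung-reliant model's output shifts
```

If a model's predictions survive this kind of occlusion unchanged, it is likely keying on features outside the anatomy of interest, such as source-specific artifacts.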

The deep-learning model also performed well on the BIMCV images, yet the findings showed a significant decline in AUC values when the lung regions in these x-rays were masked or segmented out.

The researchers emphasized, “The divergent outcomes observed across varied datasets underscore the correlation between unreliable performance and ISI.”
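The effect the researchers describe can be reproduced in a toy simulation. This is not the authors' code: a simple logistic regression stands in for VGG-16, a weak feature stands in for genuine disease signal, and a strong synthetic "source artifact" feature stands in for hospital- or scanner-specific cues that correlate with the label only in an intrasource-imbalanced training set:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000

def make_dataset(intrasource_imbalanced):
    """Toy CXR stand-in: x0 is a weak disease feature, x1 a strong
    source artifact (e.g., scanner markings)."""
    y = rng.integers(0, 2, n)
    if intrasource_imbalanced:
        source = y                       # each source holds one class
    else:
        source = rng.integers(0, 2, n)   # sources mix both classes
    x0 = 0.5 * y + rng.normal(0, 1, n)          # weak genuine signal
    x1 = 2.0 * source + rng.normal(0, 0.1, n)   # strong source artifact
    return np.column_stack([x0, x1]), y

def fit_logreg(X, y, lr=0.1, steps=2000):
    """Plain gradient-descent logistic regression."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
        g = p - y
        w -= lr * X.T @ g / len(y)
        b -= lr * g.mean()
    return w, b

def auc(y, score):
    """Rank-based AUC (Mann-Whitney formulation)."""
    pos, neg = score[y == 1], score[y == 0]
    greater = (pos[:, None] > neg[None, :]).mean()
    ties = (pos[:, None] == neg[None, :]).mean()
    return greater + 0.5 * ties

X_tr, y_tr = make_dataset(intrasource_imbalanced=True)
w, b = fit_logreg(X_tr, y_tr)

X_in, y_in = make_dataset(intrasource_imbalanced=True)     # same-style test
X_out, y_out = make_dataset(intrasource_imbalanced=False)  # balanced sources

auc_in = auc(y_in, X_in @ w + b)
auc_out = auc(y_out, X_out @ w + b)
print("in-distribution AUC:", round(auc_in, 3))
print("cross-dataset AUC:  ", round(auc_out, 3))
# The shortcut on the source artifact keeps the first AUC near 1.0,
# while the second collapses toward the weak genuine signal.
```

The gap between the two AUC values mirrors the study's pattern: near-perfect numbers on the intrasource-imbalanced dataset, unreliable numbers once the source no longer predicts the label.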

The study also highlighted the “black-box problem” inherent in deep learning: despite the model’s strong performance in COVID-19 detection, there is little transparency into, or explanation of, how it arrives at its predictions.

The authors underscored ongoing concerns about the reliability of deep-learning models, stressing that errors in model performance are difficult to detect without an understanding of how the predictions are generated.

In light of their findings, the researchers advocate greater attention to intrasource balance during data collection to mitigate potential training bias. They concluded, “Our investigation highlights the inherent risks associated with leveraging an intrasource imbalanced dataset.”

Last modified: February 26, 2024