UnitedHealthcare, the largest US health insurer, is currently facing a class-action lawsuit alleging that an AI algorithm named nH Predict was used to unjustly deny extended-care claims for elderly patients.
The lawsuit, filed in the US District Court for the District of Minnesota, asserts that UnitedHealthcare used the nH Predict algorithm to make healthcare decisions, prematurely and unjustly cutting off payment for medical services. The plaintiffs, representing individuals covered by UnitedHealthcare’s Medicare Advantage plans, claim that the insurer wrongfully denied their claims, forcing them to pay for essential medical treatment out of pocket. If successful, the lawsuit could affect thousands of individuals and result in billions of dollars in damages, according to a plaintiffs’ attorney.
The lawsuit contends that requests by elderly patients for extended care, such as in-home care and skilled nursing facilities, were routinely denied by the nH Predict engine, developed by NaviHealth, a company UnitedHealth acquired in 2020. The plaintiffs allege that 90% of these denials are overturned on appeal to federal administrative law judges, underscoring the algorithm’s purported inaccuracy. They further argue that UnitedHealthcare’s use of the technology violates state and federal healthcare regulations and patients’ contractual agreements.
The Future of Reliable AI
Cases like this may be shaped by recent government directives on AI, notably the recent US executive order on AI, which stresses accountability and transparency in AI systems, particularly in sectors like healthcare where AI decisions can significantly affect lives. The order aims to prevent scenarios in which AI systems unfairly withhold crucial care from vulnerable groups, such as the elderly, by advocating responsible AI use and requiring that AI systems be developed and deployed in ways that uphold fairness and prevent bias.
The accusations against UnitedHealthcare underscore the critical need for ethical and responsible AI, especially in healthcare. As AI assumes a more prominent role in decision-making, organizations must weigh the societal implications of how their systems perform. That means ensuring AI algorithms are accurate, transparent, and aligned with patients’ best interests, especially in critical decisions about medical treatment and insurance coverage.
Ryan Elmore, an AI Innovation Fellow at West Monroe, highlights the delicate balance between effective oversight and misconceptions about AI regulation, noting that even well-meaning oversight can be hindered by a lack of expertise and conflicting financial motivations.
The Role of Data Governance in AI Decision-Making
Effective data governance and management practices are crucial in light of this legal action. Data governance establishes protocols for managing an organization’s data assets, ensuring their integrity, security, and usability. Organizations that mishandle their data risk undermining its value.
Here’s why it is particularly relevant in this context:
- Patient Safety and Privacy: Maintaining the confidentiality and integrity of patient data safeguards individual privacy and security in healthcare. Accurate, secure patient data is also a prerequisite for AI algorithms to make informed and ethical decisions about patient care.
- Data Integrity and Usability: By setting standards and procedures for data management, organizations ensure the integrity and security of their data while complying with regulatory mandates. This is critical when AI systems are employed to make pivotal decisions in patient care, as the reliability and integrity of the data directly impact patient outcomes.
- Shared Responsibility and Collaboration: Data governance fosters shared responsibility by engaging team members in developing the guidelines and best practices that improve the quality of patient data. This collective ownership helps ensure that the data feeding AI systems is reliable and reflects the combined expertise of healthcare professionals.
- Informed Decision-Making: Governance facilitates decision-making by ensuring that data is accessible to the right individuals at the right times for the right reasons. This is especially pertinent in healthcare AI applications, where informed decision-making is critical for patient well-being.
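The governance principles above — data integrity checks and access for the right individuals — can be sketched in code. The following is a minimal, hypothetical illustration; the roles, record fields, and thresholds are assumptions made for the example, not any insurer's actual system or the nH Predict algorithm.

```python
# Hypothetical sketch of governance-style gates applied before any
# AI-driven care recommendation runs. All names here are illustrative.

from dataclasses import dataclass

# Assumed roles permitted to trigger an automated recommendation.
AUTHORIZED_ROLES = {"care_coordinator", "clinical_reviewer"}

@dataclass
class PatientRecord:
    patient_id: str
    age: int
    diagnosis_code: str

def validate_record(record: PatientRecord) -> list[str]:
    """Return data-integrity problems; an empty list means the record is usable."""
    problems = []
    if not record.patient_id:
        problems.append("missing patient_id")
    if not (0 < record.age < 130):
        problems.append(f"implausible age: {record.age}")
    if not record.diagnosis_code:
        problems.append("missing diagnosis_code")
    return problems

def may_run_model(role: str, record: PatientRecord) -> bool:
    """Gate the AI model behind both access control and data validation."""
    return role in AUTHORIZED_ROLES and not validate_record(record)

good = PatientRecord("P-001", 82, "I50.9")
bad = PatientRecord("P-002", -1, "")

print(may_run_model("clinical_reviewer", good))  # True
print(may_run_model("billing_clerk", good))      # False: unauthorized role
print(may_run_model("clinical_reviewer", bad))   # False: failed integrity checks
```

The point of the sketch is that the model never sees a request unless both conditions hold; in a real governance program these gates would be logged and auditable, supporting the appeal and accountability processes the lawsuit highlights.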
Ethical Considerations in Business
The lawsuit against UnitedHealthcare serves as a stark reminder of the potential repercussions of AI misuse in healthcare, whether intentional or inadvertent. To prevent scenarios where AI algorithms unfairly withhold essential care from vulnerable populations, the recent executive order on AI underscores the importance of ethical and accountable AI practices, particularly in critical sectors like healthcare. It is imperative for healthcare organizations to advocate for ethical AI use, mitigate bias in algorithms and data, and uphold elevated standards of accountability and fairness in all automated decision-making processes.
Keyur Desai, a healthcare data professional, views this as another ethical challenge confronting leaders in data and analytics, one that can be addressed through close collaboration between data, technology, and business executives. Human progress does not pause for ethical deliberation, which makes it essential that ethical practice advance in tandem with technological innovation.