
Caution Issued in UK Regarding Unsupervised AI Chatbots for Crafting Social Care Plans

A University of Oxford study shows the benefits and risks of the technology for healthcare, but ethical issues…

Britain’s overburdened caregivers need all the help they can get, but the use of unregulated AI chatbots remains contentious, according to researchers calling for a stringent ethical framework to govern AI in social care.

A recent pilot study by researchers at the University of Oxford found that some care providers had used generative AI chatbots such as ChatGPT and Bard to draft care plans for the people they support.

This practice raises concerns regarding patient confidentiality, as highlighted by Dr. Caroline Green, an early career research fellow at the Institute for Ethics in AI at Oxford, who conducted a survey among care organizations for this study.

Dr. Green emphasized the risks of entering personal data into generative AI chatbots: such data may be used to train the underlying language model, creating a risk that sensitive information is later disclosed without authorization.

She cautioned that caregivers acting upon flawed or biased information generated by AI could inadvertently cause harm, potentially resulting in substandard care plans.

Despite these challenges, Dr. Green acknowledged the possible advantages of AI integration in social care, particularly in streamlining administrative tasks and facilitating more frequent reviews of care plans. While she currently refrains from endorsing such practices, she noted ongoing efforts by organizations to develop apps and websites for this purpose.

The healthcare and care sectors have already begun adopting AI-driven technology. For instance, PainChek uses AI-powered facial analysis through a mobile app to assess pain levels in non-verbal individuals, while Oxevision employs infrared cameras in seclusion rooms within NHS mental health trusts to monitor patient safety and activity levels.

Innovative projects like Sentai, which utilizes Amazon’s Alexa speakers for care reminders, and initiatives by the Bristol Robotics Lab targeting memory-impaired individuals demonstrate the potential of AI in enhancing care services and safety measures.

Amid broader concerns about AI replacing human roles, particularly in creative fields, the social care sector faces workforce shortages, with millions of unpaid carers supporting their loved ones. Experts such as Professor Lionel Tarassenko emphasize AI’s role in upskilling people with limited experience to improve their caregiving, rather than replacing them outright.

However, some care managers express apprehensions about potential regulatory violations and license implications associated with AI adoption. Mark Topps, a social care professional and podcast co-host, highlights the industry’s cautious approach pending regulatory guidance to avoid unintended repercussions.

In response to these challenges, a collaborative effort involving 30 social care organizations aims to establish guidelines for responsible generative AI use in social care. Dr. Green, leading this initiative, envisions the development of enforceable best practices in partnership with regulatory bodies to ensure ethical and effective AI integration in the sector.

Last modified: March 10, 2024