
### The High Cost of Implementing AI Surveillance in Educational Institutions


Suicide is the second leading cause of death among American youth aged 10 to 14. The problem has worsened in recent years, and a nationwide shortage of mental health professionals, especially in schools, has made it harder to address. An on-site psychologist, counselor, or social worker can help identify at-risk youth and initiate appropriate interventions.

To address this challenge, school administrators, grappling with significant funding and staffing shortages, are increasingly turning to technology for solutions. Companies like Bark, Gaggle, GoGuardian, and Securly have developed AI-based student monitoring software to track students’ computer activities and detect signs of mental health struggles. This technology operates discreetly in the background of students’ school-issued devices and accounts, flagging behaviors that may suggest a risk of self-harm.

Although this monitoring software is in use nationwide, many parents and community members are unaware of its existence. While students may have a sense that their school devices are being monitored, they likely have limited knowledge of the extent of surveillance. Despite the noble goal of identifying suicide risk, the use of AI surveillance raises concerns about privacy invasion and potential unintended consequences.

As researchers who study inequality, mental health, and technology policy, we interviewed school staff to explore the benefits and risks of this software. Some staff believe the monitoring software can surface at-risk students who had not been identified by school personnel, but concerns persist about its effectiveness and potential drawbacks.

One major issue is the threat to student privacy. Because the software runs continuously on school-issued devices, it can collect extensive data about students' lives. Families also have little practical ability to opt out: consenting to monitoring is often a condition of using school-issued devices, and families who cannot afford to purchase their own devices are left with no real choice.

Furthermore, there are concerns that using AI algorithms to identify at-risk students could exacerbate inequalities, such as disproportionately flagging LGBTQ+ students’ online activities. The lack of transparency in how alerts are generated makes it difficult to assess and correct any biases in the system.

Moreover, once an alert is triggered, it falls to the school to determine the appropriate response. Staff reported instances where alerts led to student discipline rather than mental health support, potentially escalating the situation. In some cases, alerts generated outside school hours are routed automatically to law enforcement, raising concerns about inappropriate interventions and exacerbating existing disparities in school discipline.

Ultimately, it remains unclear how accurately these tools detect suicide risk, as there is limited evidence on their outcomes and their impact on student well-being. Stakeholders must weigh the benefits and challenges of AI-based monitoring carefully, insisting on transparency, privacy safeguards, and regulatory oversight to prevent unintended harm and promote positive mental health outcomes for students.
