
Rising AI Advancements Heighten Government Social Media Surveillance Risks

New AI tools are exacerbating problems with social media surveillance.

It is not only your acquaintances who scrutinize your online activity: law enforcement agencies nationwide, including the FBI and the Department of Homeland Security (DHS), do so as well. These surveillance initiatives are poised to expand further as artificial intelligence (AI) transforms the digital landscape, offering faster, more precise data analysis and the ability to generate text, video, and audio that closely resembles human output.

Even without AI, these monitoring programs already have a broad reach. While social media can legitimately aid law enforcement in crime investigations, applications designed for "situational awareness," such as those operated by various DHS units or by police departments preparing for public events, often come with certain safeguards, yet frequently extend their focus to monitoring political and social movements, particularly those involving marginalized communities. For instance, the DHS National Operations Center issued numerous reports on the racial justice protests of 2020. Similarly, the Boston Police Department monitored the social media posts of Black Lives Matter activists and categorized discussions of Muslim cultural practices as "extremist," even absent any indication of violence. Nor do agencies limit their surveillance to public posts: the Memphis police, for example, created a fake Instagram profile to engage with Black Lives Matter activists and gather information on them.

Internal assessments within governmental bodies have raised doubts about the efficacy of extensive social media monitoring. In 2021, following complaints about the department's involvement in surveilling racial justice protestors, the DHS General Counsel's office scrutinized officials' collection of social media and other publicly available information. The review found that the data gathered yielded "limited value" in identifying new threats. Similarly, the Brennan Center challenged the Trump administration's mandate requiring nearly all visa applicants, approximately 15 million individuals annually, to disclose their social media handles to the State Department to assist in immigration screening. Intelligence officials who reviewed the program concluded that collecting social media identifiers did not contribute significantly to the screening process, echoing previous research findings. Even in programs targeting potential national security risks like animal trafficking, the Department of Homeland Security admitted that links between social media data and national security concerns were often inconclusive. An audit by the DHS Inspector General further revealed that these programs' effectiveness was never evaluated, rendering them inadequate for informing future strategies. Despite the absence of demonstrable benefits to national security, the government continues to gather, analyze, and retain social media data.

The proliferation of social media surveillance, and the challenges it poses, are likely to be exacerbated by the adoption of new AI tools, including generative models, which agencies are eager to implement.

Generative AI is expected to enable law enforcement to deploy covert accounts more swiftly. Online undercover operations have long been controversial, especially when used for general surveillance rather than to investigate specific criminal activity. Creating fake online personas is now simpler and cheaper, letting officers interact with individuals and elicit personal information, such as their social connections, from people who share it unwittingly. New AI tools can fabricate accounts with diverse interests and connections that engage convincingly with other users, saving law enforcement time and resources. The ease of surveillance that AI affords may upset the traditional balance between citizens and authorities that the Supreme Court has acknowledged. These concerns are compounded by the inconsistent restrictions police agencies place on undercover operations: many allow unsupervised online monitoring without proper documentation or justification, a practice also observed in governmental entities like DHS.

Despite the hype surrounding social media monitoring tools, they often operate on a rudimentary basis, as the Brennan Center's research has highlighted. Though vendors shroud them in secrecy, these tools exhibit significant shortcomings: many lack mechanisms to address bias or scientific methodologies for accurately identifying relevant data. Relying on keywords and phrases to flag potential threats strips away the context needed to distinguish genuine threats from benign discussions, such as those about video games. Advanced language models like ChatGPT could enhance this capability, or merely create the perception of improvement, leading to increased reliance on these tools.

The widespread adoption of AI is also expected to worsen the problem of unreliable data sources, further complicating questions of credibility and authenticity. False and misleading information already proliferates on social media platforms, amplified by bots and fake accounts that mimic human behavior. The ease with which generative AI can propagate fake news and fake identities further taints the online information environment. Moreover, AI systems are themselves prone to generating false information, a well-documented problem with generative models.

The integration of generative AI exacerbates longstanding concerns about the impact of social media monitoring on First Amendment rights. Bias in computational tools remains a critical challenge, from content moderation practices that favor certain speech to predictive policing algorithms that disproportionately target Black individuals. On Instagram, for instance, recent revelations showed that users whose Arabic bios contained particular phrases were labeled "terrorists."

In response to these challenges, President Biden's executive order on AI and the Office of Management and Budget's accompanying guidance for federal agencies begin to address these issues. The OMB directive identifies social media monitoring as an AI application that impacts individuals' rights, requiring agencies to uphold transparency, evaluate effectiveness, and mitigate bias and other risks when using the technology. These rules do not extend to intelligence and national security operations, though they may influence police practices.

Last modified: January 16, 2024