A recent report reveals that Meta’s Responsible AI division is now focused on other AI initiatives.
By Wes Davis, a weekend editor who covers the latest in tech. He has written news, reports, and reviews as a tech journalist since 2020.
Meta has reportedly restructured its Responsible AI (RAI) team to put more resources toward generative artificial intelligence. The development was first reported by The Information, citing an internal post it had access to.
According to the report, most RAI members will move to Meta’s generative AI team, while others will work on the company’s AI infrastructure. Meta has emphasized its commitment to developing AI responsibly, maintaining a dedicated page outlining its “pillars of responsible AI,” which include accountability, transparency, safety, privacy, and more.
According to The Information’s report, Meta spokesperson Jon Carvill said the company will continue to prioritize safe and responsible AI development. Despite the restructuring, Carvill said the team’s members will keep supporting relevant cross-Meta efforts on responsible AI development and use.
Meta did not respond to a request for comment by the time of publication.
The team already underwent a restructuring earlier this year that, as Business Insider reported, included layoffs that left the RAI department significantly diminished. That report also said the RAI team, which had existed since 2019, had little autonomy, with its initiatives requiring lengthy negotiations with stakeholders before they could be implemented.
RAI’s primary objective was to identify problems with Meta’s AI training approaches, including whether the company’s models were trained on sufficiently diverse data, with an eye toward preventing issues like bias on its platforms. Algorithmic failures on Meta’s social platforms, such as a Facebook translation error that led to a wrongful arrest, WhatsApp’s AI sticker generator producing biased images for certain prompts, and Instagram’s algorithms helping surface child sexual abuse material, underscored the need for RAI’s oversight.
Meta’s move, which follows a similar cut at Microsoft earlier this year, comes as governments worldwide race to establish regulatory guardrails for AI development. The US government has entered into agreements with AI companies, and President Biden has directed federal agencies to develop AI safety rules. Meanwhile, the European Union is still working to pass its proposed regulatory framework, the Artificial Intelligence Act, as it grapples with the complexities of regulating AI technologies.