
Israeli Ad Approved by Facebook Called for Pro-Palestinian Activist's Assassination

After the Israeli assassination ad, the digital rights group 7amleh tested the limits of Facebook and Meta's ad moderation systems.

According to details shared with The Intercept, Facebook approved multiple advertisements that dehumanized Palestinians and called for violence against them, drawing fresh scrutiny of the social network's content moderation standards. The ads, written in Hebrew and Arabic, flagrantly violated Meta's policies for Facebook and Instagram. Some contained explicit calls to kill Palestinians, including references to a "holocaust for the Palestinians" and to wiping out Gazan women, children, and the elderly. Others used dehumanizing rhetoric, labeling Gazan children "future extremists" and referring to Palestinians as "Arab pigs."

Nadim Nashif, head of the Arab social media watchdog and advocacy group 7amleh, said the approval of these ads is only the latest instance of Meta's continual mishandling of issues affecting Palestinians and Arabic-speaking users. "We have noticed a consistent pattern of Meta's evident bias and discrimination against Palestinians in these matters," he stated.

7amleh decided to test Facebook's machine-learning-based ad review system after Nashif encountered an ad on the platform directly inciting violence against American activist Paul Larudee, a co-founder of the Free Gaza Movement. The ad attacked Larudee, labeling him an anti-Semitic "human rights" violator from the United States. Nashif saw the ad with Facebook's automatic translation; it was removed only after he reported it.

The ad was placed by Ad Kan, an organization established by former Israel Defense Forces and intelligence personnel to counter "anti-Israeli organizations." Facebook's advertising policies explicitly prohibit calls for violence against individuals, so the approval of the Ad Kan ad raises questions about the company's automated approval process and its reliance on machine learning for rapid ad clearance.

Meta's shift toward automated text-scanning for content moderation has raised concerns about transparency and accountability in its decision-making. While Arabic posts have routinely been flagged and removed by these algorithmic systems, a recent audit found that Meta had no equivalent classifier for identifying "hostile speech" in Hebrew. In response, Meta pledged to introduce a Hebrew "hostile speech" classifier to close that gap.
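To make the idea of a language-specific "hostile speech" classifier concrete, here is a minimal illustrative sketch of how such a per-language text classifier is commonly built, using scikit-learn. This is not Meta's actual system; the training examples, variable names, and review threshold are all hypothetical. It illustrates the gap the audit described: a classifier trained on one language's data cannot meaningfully score content in a language it was never trained on.

```python
# Illustrative sketch only, NOT Meta's system: a toy per-language
# "hostile speech" classifier. Moderation pipelines of this general
# shape train one model per language, so a language with no trained
# model (the audit's finding for Hebrew) simply never gets scored.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical placeholder training data. A production system would use
# large volumes of human-labeled posts in each supported language.
arabic_train = [
    ("placeholder for a hostile post in Arabic", 1),
    ("placeholder for a benign post in Arabic", 0),
    ("second placeholder for a hostile post", 1),
    ("second placeholder for a benign post", 0),
]
texts, labels = zip(*arabic_train)

# One classifier per language; character n-grams are a common choice
# because they work across scripts and word boundaries.
classifier = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
    LogisticRegression(),
)
classifier.fit(texts, labels)

# Score an incoming post and flag it for human review above a
# hypothetical threshold. A post in an unsupported language would
# never reach a classifier like this at all.
incoming_post = "some new post to be screened"
hostile_probability = classifier.predict_proba([incoming_post])[0][1]
if hostile_probability > 0.8:
    print("flag for human review")
```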

Nashif expressed apprehension over the potential impact of such incendiary ads, citing past instances in which social media content has fueled real-world violence. Despite Meta's claims of robust safeguards, the approval of ads containing inflammatory and racist content points to lapses in the enforcement of its Community Standards. 7amleh's experiment, which submitted deliberately provocative ads in both Hebrew and Arabic, further exposed the platform's shortcomings in screening non-English content.

Meta spokesperson Erin McPike acknowledged that errors can occur in the review process, which involves both automated systems and human moderators, and emphasized the company's ongoing efforts to improve content moderation despite occasional oversights. The incident underscores how difficult it is to police incitement and hate speech across languages, as the differing treatment of Hebrew and Arabic content makes clear.

The approval of these advertisements underscores the need for Meta to address systemic issues in its content moderation, particularly where marginalized communities are concerned. The platform's reliance on automated tools must be paired with robust oversight to prevent the spread of harmful content and protect vulnerable populations.
