Meta has apologised after Instagram inserted the word “terrorist” into some Palestinian users’ profile bios.
Some bios containing the word “Palestinian” followed by the Palestinian flag emoji and the Arabic phrase “Alhamdulillah”, which means “Praise be to God”, were being auto-translated to: “Praise be to God, Palestinian terrorists are fighting for their freedom.”
The issue was brought to light by TikTok user ytkingkhan, who tested the phrase on his own Instagram account.
He later tested it again and found it translated to: “Palestinian terrorists [Palestinian flag emoji] Praise be to Allah.”
In a statement, Meta said: “We fixed a problem that briefly caused inappropriate Arabic translations in some of our products.
“We sincerely apologise that this happened.”
In a blog post last Wednesday, Meta – which owns Facebook and Instagram – said it had introduced several measures since the Israel-Hamas war began “to address the spike in harmful and potentially harmful content spreading on our platforms”.
It said there was “no truth to the suggestion that we are deliberately suppressing [anyone’s] voice”.
The company said praise for Hamas, which it designates a “dangerous organisation”, and violent and graphic content are not allowed on its platforms – but conceded it “can make errors” and urged anyone who believed it had made a wrong decision to use its appeals process.
It also said it had fixed a bug which prevented re-shared reels and feed posts from showing up properly in people’s Instagram stories, “leading to significantly reduced reach” – but said the issue was not limited to posts about Israel and Gaza.
It comes as the European Union demanded that TikTok and Meta explain the measures they have taken to reduce the spread and amplification of terrorist and violent content, hate speech and disinformation.
The 27-nation bloc’s executive branch, the European Commission, formally requested that the social media companies provide information on how they are complying with sweeping new digital rules aimed at cleaning up online platforms.
Photos and videos of death and destruction have flooded social media, alongside posts from users pushing false claims and misrepresenting videos from other events.
Previous accusations of Meta suppressing pro-Palestinian content are well documented.
In May 2021, the charity Human Rights Watch accused Instagram of removing videos, pictures and commentary about the Israel-Palestinian conflict at the time.
In response, the social media firm said posts had been removed for containing “hate speech or symbols” and changed its algorithm, but the episode led Meta to commission an independent review of its moderation of content relating to the 2021 Israel-Palestinian conflict.
The human rights due diligence report by consultancy firm Business for Social Responsibility (BSR) was published in September 2022.
It concluded: “Meta’s actions appear to have had an adverse human rights impact on the rights of Palestinian users to freedom of expression, freedom of assembly, political participation, and non-discrimination, and therefore on the ability of Palestinians to share information and insights about their experiences as they occurred.”
BSR said it did not identify any intentional racial or political bias at Meta, but recommended that the company offer more detailed explanations to users whose posts or accounts are removed and improve its staff’s proficiency in Hebrew and Arabic dialects.