New Delhi: Meta (formerly Facebook), in its first-ever human rights report, has largely defended its misinformation strategy and struck a self-congratulatory tone, saying the company protects people from unlawful or overbroad government surveillance.
The 83-page report, covering 2020 and 2021, included insights and actions from human rights due diligence on products, countries and responses to emerging crises.
“This report provides more details about our entire approach to managing human rights risks. While some areas of the report have been previously disclosed, many are new, such as the due diligence we performed in regard to Covid-19 and Ray-Ban Stories,” Meta said in a statement late on Thursday.
It highlights the important role that end-to-end encryption plays on WhatsApp in protecting people’s privacy — particularly journalists and human rights defenders — and how “we’re expanding it to our other messaging apps”.
“We show how we manage risks related to human trafficking and exploitation through in-product features that raise awareness, deter violating behaviour and offer support to victims,” said Miranda Sissons, Director of Human Rights at Meta.
The report also discusses work to increase teen safety on Instagram and the continuing work to fight child exploitation on WhatsApp, Facebook and Instagram.
Iain Levine, Product Policy Manager for Human Rights, said that the company has made significant investments in teams and technologies to “better protect free and fair elections, including dedicated teams focused on election integrity and products that bring people relevant and reliable voting information”.
Regulators and civil rights groups have claimed that Meta has failed to tackle hate speech in the US and countries like Myanmar, where Facebook has been used to promote violence against minorities.
Representatives of the Rohingya genocide victims in Myanmar last month alleged that Facebook encouraged and facilitated the violence and ethnic cleansing carried out by the Myanmar regime.
They also alleged that Facebook prioritised “growth and profit over safety” and that this directly led to the brutal suffering of the Rohingya community.
They emphasised how Facebook — despite being repeatedly put on notice by civil society and NGOs — was used over a number of years to spread hate speech and incitement to violence against this long-persecuted group, which culminated in the clearance operations of 2017-2018.
“For people in Burma at that time, Facebook was the internet — it was the dominant Internet site and app. Facebook ignored public appeals and even their own reporting mechanisms, and refused to stop the military and others using its platform to spread the lies, hatred, prejudice and incitement used to enable and build support for genocide against the Rohingya,” Tun Khin, President of the Burmese Rohingya Organisation UK, had said in a statement.
Rohingya people suffered atrocities at the hands of both the military and civilians. In 2017 alone, more than 10,000 people were killed and over 150,000 were subjected to physical violence.
Facebook — a company described in the 2018 UN report as having an “extraordinary and outsized role” in the country — had admitted that it did not do enough to stop its platform from being used to create division and incite real-world violence.
–IANS