The Australian Muslim Advocacy Network (AMAN) has lodged a complaint against social media giant Facebook through the Australian Human Rights Commission. The complaint alleges Facebook has breached discrimination and hate speech provisions of the Racial Discrimination Act.
As an advisor to AMAN, I am able to put forward this complaint as an Arab Australian woman and, in the course of my work, as a representative of other racial and ethnic communities within the Muslim community.
To keep the community in the loop, I share some of our arguments as follows.
Dehumanisation of Muslims, allowed openly on Facebook, has had dire consequences for many culturally and linguistically diverse communities that make up the Muslim community. It has also had perilous consequences for asylum seekers and immigrants, who are often conflated with Muslims in these narratives.
Discrimination protections under the Racial Discrimination Act include race, ethnicity, descent, colour and national origin, as well as immigrant status. Those protections cover both direct and indirect discrimination.
While these problems are unpleasant to confront, they are not intractable. Helpful strategies are within reach. It is largely a matter of Facebook's will and leadership.
Previously, Facebook has taken down pages only after AMAN has done all the heavy lifting: documenting every hate speech violation and escalating the issue through national media.
Facebook staff have advised me that it is unlikely any action will be taken unless we document all the violations of a page or group over time. This task involves resources we don’t have, with an immense psychological toll imposed by continually perusing hateful, dehumanising and violent material. The scale of the problem also makes this weeding approach a losing and everlasting battle.
While we have observed that Facebook's auto-detection and reporting tools do not reliably identify anti-Muslim hate speech and calls to violence in comment threads, this is not the only reason the 'weeding' approach is a problem.
Facebook focuses on ‘weeding out’ hateful comments rather than on the materials that dehumanise and trigger the hate speech – acting too late after the damage is done.
Facebook should be responsible for the harm it creates. It can act systemically to reduce the manipulation of its users before the hate speech occurs (for example by disrupting and deterring hateful online echo chambers and dehumanising materials); and it has the resources to apply Australian law. Leaving the burden on human rights advocates like me amounts to discrimination.
Facebook appears to allow Pages or Groups to continue on their platforms if they superficially characterise or position themselves as ‘anti-Islam’ or ‘counter jihad’, even when Middle Eastern, African, South Asian and Asian people are being routinely and dangerously dehumanised and discriminated against.
This harm is achieved through sharing stories (often published and curated by hate actors) that contain falsely contextualised, retitled or fabricated news, designed to incite dehumanising responses and disgust from users. Many of the images used in the 'stories' shared to those Pages or Groups represent Arabs who look like me and members of my family, attributing to them subhuman, depraved and inferior qualities.
These stories present people within our community as a hostile and homogenous mass, who are incapable of human warmth, independent thought or feeling, and who are trained by their religion to behave as subhumans.
Often these news blogs have no authors, editors or editorial guidelines, but a clear and destructive purpose. Australia's media guidelines recommend that racial and religious terms not be used in headlines unless critically relevant, to avoid typecasting over time.
Media’s regulatory framework also prizes factual accuracy. Misrepresentation or misleading reporting is discouraged. And yet a range of these curated blogs reach audiences of hundreds of thousands of people through Facebook.
Despite repeated representations by AMAN, Facebook has also failed to explicitly acknowledge or exclude anti-Muslim and anti-Islam conspiracy theories.
This failure causes us constant anxiety about a repeat of the Christchurch massacre. It enables us to be constructed as an 'existential threat' to society, which has immense impacts in terms of hate speech and hate crime, prejudice and discrimination. It legitimises the place of those narratives in mainstream discourse.
It erodes our security and sense of belonging and makes our children especially vulnerable. It also inflicts significant harm and disadvantage as we internalise those narratives. My ability to psychologically and socially integrate my religious identity, and to express my faith as an Arab Australian, is fundamentally compromised by the mainstreaming of these toxic narratives.
Facebook has recognised and taken action on COVID misinformation/disinformation because of its risk to public health and safety. Racism is a public health and safety concern.
While Facebook rightly acted in 2020 to recognise harmful stereotypes targeting Jewish people as part of violence-inducing conspiracy networks, it has failed to do so for Muslim people, despite the Christchurch massacre in 2019, the Oslo massacre in 2011 and numerous terror and hate crime attacks between and since.
Meanwhile, a recent study showed the extent to which Australian mosques have been targeted. Hate incidents continue to be severely under-reported due to a lack of hate crime laws, but academically scrutinised analysis of death threats towards and abuse of Muslims, and physical assault of Muslim women and girls, shows how Muslim identity has been crafted, dehumanised and demonised. Facebook contributes substantially to real-life endangerment for Australian Muslims and for people of various ethnicities who are also part of the Islamic community.
Further to the claim of discrimination, we contend that section 18C has been contravened. This section makes unlawful content that is reasonably likely, in all the circumstances, to offend, insult, humiliate or intimidate a person or group on the basis of their race, colour, descent, or national or ethnic origin. Where hate speech is done for two or more reasons, it still falls under s 18C if one of those reasons is a basis stated above, 'whether or not it is the dominant reason or a substantial reason for doing the act'.
The changes we seek aren't one-off, but aimed at generating more sustained and serious responsibility from the platform. As part of this engagement, we have developed a model for measuring and identifying dehumanising conduct over time, one that could be used in a range of geographical and social contexts.
Facebook’s human rights policy commits the platform to stand alongside human rights advocates and uphold international law standards. Now it’s time to put that to the test.