Lawyer and human rights advocate Nyadol Nyuon no longer shares pictures of her children online, fearing for their safety as a result of the sustained abuse she has experienced on social media.
Speaking at a federal parliamentary inquiry into online safety, Ms Nyuon said a volumetric attack of online hatred was a key factor in her decision to take three months off work to “try and reconnect with a sense of safety”.
Ms Nyuon told the inquiry she was “constantly on watch” as a result of the abuse she is subjected to online.
“I was struck across all social media platforms and trolled predominantly with racist abuse,” Ms Nyuon told the hearing on Thursday.
“I am constantly on watch through the abuse that pops up almost on a daily basis.”
The federal government commissioned the inquiry to scrutinise the toxic effects of social media and to put a spotlight on the shortcomings of online platforms in responding to abusive conduct and hate speech.
Ms Nyuon, chair of refugee advocacy group Harmony Alliance, said she had struggled to obtain help for women from migrant communities who were subject to a “campaign of terror” of online abuse by men, which included sharing personal images without consent.
“There are instances where the abuse online is conducted entirely in language different than English, which means that reporting it to Facebook, or to the social media organisation, doesn’t do anything because it seems that they don’t have staff or sufficient staff to respond to this.”
Rita Jabri Markwel, a lawyer with the Australian Muslim Advocacy Network, said AI technology used by Facebook and others was failing to detect large swathes of hate speech and dehumanising content, leaving minority groups like hers to shoulder the burden of monitoring it themselves.
“We monitor pages and groups. We collect evidence, and it’s exhausting and traumatising because you’re basically reading people who want to see you dead, who want to see your people dead because you’re Muslim,” Ms Markwel said.
She said the group had been engaged with Facebook for more than 18 months to help improve its moderation of hate speech, but likened the process to “trying to do something in quicksand”.
“We submitted a lot of proposals. Facebook continually sought us to do the heavy lifting in providing the evidence, which they would then delete from their platform while not progressing any of the recommendations about targeting the bad actors or accounts,” she said.
Mia Garlick, director of policy for Meta Australia, the company formerly known as Facebook Australia, said in a statement that the company’s policies prohibited hate speech and that its AI technology proactively detected 96.5 percent of the content ultimately removed.
“We regularly work with experts, non-profits, and stakeholders to help make sure Facebook is a safe place for everyone and update our policies in response to their feedback. If content doesn’t breach our policies but breaches local law we will restrict access to content,” Ms Garlick said.
Over the course of two days of public hearings this week, the inquiry has also heard evidence about the questionable utility of proposed “anti-trolling” laws for people seeking reprieve from online abuse.
The draft laws are aimed at helping ordinary people “unmask” anonymous commenters defaming them online, enabling them to sue those commenters for defamation, but have been criticised by leading legal experts.
Ms Nyuon said she had reservations about the laws, saying they would be of little use in fighting volumetric trolling by hundreds of people and, due to the cost of legal action, were not a “tool to be accessed equally”.
Michael Bradley, managing partner at Marque Lawyers, which has represented a number of people who received defamation concerns notices from Liberal MP Andrew Laming over tweets, said the proposed changes would make the law “even more inequitable because only the wealthy and powerful have access to it in reality”.
In a separate effort to address the issue of serious cyber bullying, the federal government beefed up the powers of the eSafety Commissioner through the passage of the Online Safety Act this year. Under a new adult cyber abuse scheme, which comes into operation from January, the eSafety Commissioner will be able to order digital platforms to remove seriously abusive online content within 24 hours or face fines of up to $555,000.
The threshold for intervention is high, covering only the most serious abusive posts or images, those intended to cause serious psychological or physical harm, including volumetric trolling. Name-calling and character attacks, such as accusing someone of being a paedophile or using violent language such as “I hope you get bashed”, will not alone trigger the eSafety Commissioner’s powers.
Peter Wertheim, chief executive of the Executive Council of Australian Jewry, said there was a gap in the eSafety Commissioner’s powers: by focusing on abuse aimed at individuals, they did not cover abuse aimed at groups or racial communities, such as anti-Semitic hate speech not targeted at a specific person.
“Focusing fully on discourse that relates to individuals misses harmful behaviour that focuses on collectivities that results in harm to individuals,” Mr Wertheim said.
Writer and disability advocate Carly Findlay told the inquiry she felt she had “absolutely no recourse” in addressing the onslaught of online abuse she experienced as someone who lives with ichthyosis, a rare genetic disorder that leaves her skin red and painful.
“I feel like there is a disconnect in the seriousness of online spaces,” she said. “There’s an expectation that we’re just going to switch off, that we’re going to be able to walk away, but for many of us particularly disabled people, marginalised people, it’s our safety, it’s our community. We can’t just switch off, we need it to be safer.”