Social media giant Meta is stepping in to manage online discourse around the proposed Indigenous Voice to Parliament, promising a comprehensive strategy for Facebook, Instagram and the new Twitter rival Threads to combat misinformation, voter interference, and hate speech.
Social media is one of the top sources of information about the proposal to enshrine an Indigenous advisory body in the constitution, but it is arguably the largest source of misinformation and disinformation.
There are 18 million Australian users of Facebook every month, 14 million Australian users of Instagram every month and, globally, more than 70 million people have signed up for Threads, so far.
Although no voting date has been set, and amid long-running concerns about social media influence and interference in elections, Meta Australia's director of public policy, Mia Garlick, has revealed Meta has been preparing for the Voice referendum for a "long time" and is "leaning into expertise from previous elections".
She announced in a newsroom post that specialised global teams are poised to identify and take action on offending material, and that Meta is engaging with the Australian government and security agencies. Meta is also providing a one-off funding boost to expand its third-party fact-checking program.
"Our fact checkers are independent and work to reduce the spread of misinformation across Meta's services," Ms Garlick wrote.
"When they rate something as false, we significantly reduce its distribution so fewer people see it. We also notify people who try to share something rated as false and add a warning label with a link to a debunking article."
Meta is also working with RMIT CrossCheck, a team of online verification experts, to increase monitoring for misinformation trends in the lead-up to the referendum. It is promising to share information with journalists and other stakeholders.
In December, RMIT CrossCheck found the conservative group behind the Fair Australia "no" campaign, Advance, had published false information about the Voice in a series of Facebook ads saying the proposition would provide "one race of people with special rights and privileges". Advance rejects the criticism. In May, RMIT's FactLab found that a man described by the "no" campaign as being the grandson of the land rights activist Vincent Lingiari denied the connection and said he was unsure about the Voice.
The social media giant also said it wants to empower users to spot false information about the Voice. It is to launch a new media literacy campaign with Australian Associated Press, building on its pre-2022 federal election "Check The Facts" campaign.
Outside influence is also on its radar.
"We have specialised global teams to identify and take action against threats to the elections and referendums, including signs of coordinated inauthentic behaviour across our apps," Ms Garlick said.
"We are also coordinating with the government's election integrity assurance taskforce and security agencies in the lead up to the referendum. We've also improved our AI so that we can more effectively detect and block fake accounts, which are often behind this activity."
There has been a series of scandals over social media's role in general elections around the world, most notably the 2016 US presidential election that catapulted Donald Trump into the White House. Mr Trump's campaign took full advantage of Facebook's algorithms, and data analytics firm Cambridge Analytica used highly sensitive personal data harvested from Facebook users without their knowledge to target them with manipulative digital ads. Russian agents were also accused of exploiting the platform to influence the vote.
One of the Voice initiatives is providing ad credits to enable charity and not-for-profit groups to amplify their messaging on Voice-related topics.
Describing the referendum as a "significant moment" for Australia, Ms Garlick said many Australians were expected to use digital platforms to engage in advocacy, express their views, or participate in democratic debate.
The other main arm of the Meta initiative is dealing with hate speech. Many social media users are already frustrated by the policies of social media giants regarding hate speech, but the racially charged debate over the Voice is heating up.
"No one should have to experience hate or racial abuse online and we don't want it on our platforms," she said.
"We recognise that hate speech can be offensive, even when implicit or veiled. We have rules against different types of harm, including hate speech, and don't allow attacks against people based on their protected characteristics, which includes race, religion and sexual orientation."
Meta promises that hate speech shared to attack will be removed, while content shared to condemn such speech will be allowed. It says dedicated teams are in place to review and remove offending content.
It has been consulting with Aboriginal and Torres Strait Islander people on its approach to the referendum.
"Feedback we've heard is that Aboriginal and Torres Strait Islander Peoples may need additional support, before, during and after the referendum," Ms Garlick said. "With this in mind, we're partnering with ReachOut to create a dedicated youth mental health initiative."
Later in July, Meta is planning to host a training session with MPs, advocacy groups and not-for-profit groups on safely using its platforms in the lead-up to the referendum. This includes Facebook's latest moderation tool, Moderation Assist.