Fake news: is tech friend or foe?
Misinformation and fake news have become so rampant there’s no telling what’s true anymore. Does the fault fall on tech platforms?
The crackdown Down Under
"Digital platforms must take responsibility for what is on their sites and take action when harmful or misleading content appears," said Australia’s Minister of Communications, Paul Fletcher.
Under planned laws that would bolster government efforts to rein in Big Tech, the Australian Communications and Media Authority (ACMA), the country's media regulator, would be empowered to compel internet companies to share data about how they handle misinformation and disinformation. About time, we would say.
The planned laws are a response to an ACMA investigation that found four in five Australian adults had encountered COVID-19 misinformation and that 76% thought online platforms should do more to police false and misleading content.
The authority also noted that disinformation, the intentional spreading of false information to influence politics or sow discord, was continuing to target Australians, even though Facebook said it had removed four such campaigns between 2019 and 2020.
DIGI, an Australian body representing Facebook, Alphabet's Google, Twitter, and video site TikTok, said it supported the recommendations and noted it had already set up a system to process complaints about misinformation—something that should have been done from the start.
“Bombing minds” in the EU
Australia's planned law aligns with efforts in Europe to curb damaging online content. The European Union says it wants even tougher measures to stop disinformation, given what Russian state-owned media has been spreading during the invasion of Ukraine.
"I will propose a new mechanism that will allow us to sanction those malign disinformation actors," claimed Foreign Policy Chief Josep Borrell. He told the European Parliament that the EU should be able to freeze assets and ban travel to those deemed responsible, singling out Russian state-owned television network Russia Today and news agency Sputnik as examples of "instruments to push this narrative to manipulate and mislead."
Turns out Moscow was not just bombing houses and infrastructure in Ukraine. It was targeting Russians with fake news and disinformation, too.
"They are bombing their minds," Borrell added.
Misinformation kills (literally)
In the US, Biden’s Surgeon General Dr. Vivek Murthy requested that major tech platforms submit information about the scale of COVID-19 misinformation on social networks, search engines, crowdsourced platforms, e-commerce platforms, and instant messaging systems, starting with common examples of vaccine misinformation documented by the Centers for Disease Control and Prevention.
The notice demanded that the companies submit “exactly how many users saw or may have been exposed to…misinformation,” as well as aggregate data on the demographics that may have been exposed to or affected by it. The request also covers sources of misinformation, including those that engaged in the sale of unproven COVID-19 products, services, and treatments.
On the flipside
Tech companies like Google, Facebook, and Twitter have been doing their part since even before the war in Ukraine broke out, or so say Google, Facebook, and Twitter.
Some of their initiatives included removing state-affiliated media in response to sanctions and terrorist designations, pausing political ads, and rolling out new features that add “friction” to the spread of disinformation. Considering events like the Capitol riot, this is quite hard to believe.
They’ve also formed special teams to respond to crises and, in emergency situations, introduced account security measures or pointed users to existing ones.
We all agree misinformation is a serious problem. But is it on us to spot fake news? According to a December 2021 survey, 51% of Filipinos found it difficult to spot fake news, so that might be a tall order. While tech companies say they’re doing their part, we can’t help but wonder: Is it enough, and is it consistent?