Meta vs. Misinformation
Content moderation in the time of the elections. (Photo: The Blue Diamond Gallery)
Ahead of the general elections on May 9, and in a redemption arc from the 2016 Philippine elections, which weaponized social media, Meta announced the role of its Filipino-trained artificial intelligence in the fight against misinformation. The AI proactively detects and removes hate speech, bullying, harassment, and content that violates Meta’s violence and incitement policies, preventing malicious coordinated campaigns from spreading.
Recently, Meta removed over 400 accounts, Pages, and Groups in the Philippines—including a network maintained by the New People’s Army—that systematically violated Meta’s Community Standards and evaded enforcement.
Meta stated that the people behind the activity claimed to be hacktivists and relied primarily on authentic and duplicate accounts to amplify content about Distributed Denial of Service (DDoS) attacks and account recovery, as well as claims of compromising Philippine news entities.
Meaningful connections not so meaningful
Several Pages and Groups switched their focus to the elections to grow their following. A few examples of this context switching: a Page that once shared dance videos renamed itself “Bongbong Marcos news,” while another that began as a politician’s Page changed its name to “Your Financial Answer” and started posting loan advice.
Others posed as authentic communities. Meta removed users from Vietnam, Thailand, and the US who catfished as members of local communities to monetize attention on the elections. In February, Meta identified a cluster of Pages operated by spammers in Vietnam who used VPNs to make it seem like they were based in the Philippines. Using names like Philippines Trending News, Duterte Live, Related to Francis Leo Marcos, and Pinas News, they shared live footage while purporting to be news sources on the ground, then drove people to clickbait websites filled with ads.
Meta also identified posts shared at spam-like rates to drive people to particular Pages or off-platform websites. In one case, a social media management agency used a network of over 700 accounts to post and share both political and entertainment content. In other cases, Meta found inauthentic engagement activity run by the same people in support of multiple candidates in the same election at once.
Because we live in a society, somehow an online Leni-Kiko rally on Animal Crossing and a BBM-Sara rally on Roblox have proven to be more authentic than other Facebook groups these days.
Teaming up and taking receipts
In this fight against misinformation, Meta has partnered with independent third-party fact-checkers in the Philippines—AFP (Agence France-Presse), Rappler, and Vera Files—to review the accuracy of certain content and provide additional context with increased capacity to promote reliable information. All fact-checking partners are certified by the nonpartisan International Fact-Checking Network (IFCN) and review content in English and Filipino.
The company explained that when a piece of content is rated false, Meta notifies users who shared it and adds a warning label linking to an article debunking the claim. For Pages, Groups, profiles, websites, and Instagram accounts that repeatedly share content rated False or Altered, Meta reduces the distribution of their posts, removes them from recommendations, and revokes their ability to monetize and advertise.
Moreover, advertisers are now required to be authorized before running ads about the elections, politics, and certain categories of social issues, complete with “Paid for by” disclaimers. These ads also appear in the Ads Library, where anyone can see what ads are running, who saw them, and how much was spent, if you ever want to see where your taxes go.