
This blog is a report from a workshop held in the Philippines on 12th April 2025, which explored the human rights risks posed by generative AI and deepfakes in the lead-up to the 2025 elections. Participants examined how these technologies are weaponized for disinformation and political repression and shared strategies to defend truth and democracy.

This blog was written by our partner, AlterMidya, with contributions from the WITNESS team.

Why This Workshop Mattered

As the 2025 Philippine midterm elections approach, the rise of generative AI and deepfakes has posed a serious threat to democratic integrity. In response, a workshop by WITNESS, in collaboration with AlterMidya, was held in April 2025—just a month before the election—to equip journalists, activists, and media workers with critical skills to identify, counter, and report on these digital threats.

As AI-generated content becomes more accessible and convincing, its misuse has rapidly evolved into a powerful tool for manipulating public opinion, spreading disinformation, and undermining trust in electoral processes.



Arul Prakkash, WITNESS’ Senior Program Manager for Asia and the Pacific, facilitated the program, opening with questions about the participants’ hopes and concerns around using generative AI tools. The responses revealed anxiety over the difficulty of verifying AI-generated content, as well as over its easy accessibility and the unclear ethics of its use. Participants also expressed concern about its potential to accelerate fake news and spread harmful narratives, especially in politically sensitive contexts; such cases have previously been documented in the region.

Generative AI platforms such as Sora, Runway, and Midjourney can now easily produce realistic photos, videos and audio. When exploited, these tools enable trolls and bad-faith actors to create disinformation at a large scale. Deepfakes, for example, have been used to fabricate endorsements, fake rallies and produce misleading campaign videos that are difficult for the public to verify.

AI as a Tool for Surveillance and Red-Tagging

Given the landscape of Philippine politics, generative AI tools and deepfakes have been weaponized to push propaganda, especially against progressive groups and candidates.

During the workshop, Ian Angelo Aragoza, Education Officer of the Computer Professionals’ Union, discussed how this supposed innovation is misused as a tool for red-tagging and surveillance of targeted groups and individuals.

Red-tagging refers to the act of labeling individuals or organizations as being linked to armed or subversive activities, often without credible evidence. This practice places them at risk of harassment, violence, or even extrajudicial actions. This tactic has been used to discredit, intimidate, and endanger activists, journalists, and civil society actors. With AI-generated content, red-tagging can now spread faster, appear more convincing, and become harder to debunk. This further escalates the risks for those targeted.



Citing one of the most recent studies, Aragoza noted the presence of at least 14 Facebook pages dedicated to labeling groups and individuals as members of leftist political movements. These pages often use AI-generated and deepfake content to spread red-tagging narratives, which attract high engagement and damage the reputations of their targets, even though the Commission on Elections (COMELEC) recognizes red-tagging and any form of discrimination as an election violation.

Meanwhile, the participants’ insights on this matter led to a productive exchange of potential actions to counter these circumstances. One proposed approach was a person-to-person strategy: teaching people basic ways to identify AI-generated content and deepfakes, such as unclear or deformed hands and low-quality text in images. However, given the fast pace of these advancements, such telltale signs may soon become harder to spot. Workshop participants therefore recognized the need to hold big tech companies accountable and to urge governments to regulate generative AI tools as long-term solutions.

“Fighting disinformation today isn’t just about fact-checking — it’s about building trust, standing together and understanding how fast these AI-generated lies can spread.”

Beyond disinformation campaigns, questions also remain about the transparency of the broader Philippine electoral system. The final report of Kontra Daya and Vote Report PH on the 2022 national election highlighted many technical issues, such as Vote Counting Machine (VCM) errors and Secure Digital (SD) card failures, as well as electioneering and other election process irregularities, all of which have eroded public trust. These systemic issues raise questions about the reliability of a technology-reliant electoral process.

Strengthening Our Defenses: The SIFT Approach

Recognizing an urgent need for verification tactics, the discussion turned to basic steps one can take when encountering potential AI-generated content: content that, left unchecked, could cause harm or spread disinformation.

Among the strategies presented was SIFT:

Stop before sharing the content and thoroughly check the details;

Investigate the source in terms of its credibility;

Find better coverage to gather more reliable information;

Trace claims, quotes, and media to the original context.

This approach was paired with discussions on ethical reporting, which highlighted the importance of transparency and of disclosing the processes and tools used to analyze content. Participants also raised the importance of educating the public on lesser-known aspects of verifying AI-generated content, as well as the need to provide proper context for the content analyzed.

The workshop also introduced tools and support systems, such as WITNESS’ Deepfakes Rapid Response Force (DRRF), which helps journalists and human rights defenders analyse potentially manipulated media. The DRRF offers fast-turnaround analysis for urgent cases, especially in high-stakes political contexts, such as elections or conflicts, where human rights and democracy are under threat.

Meanwhile, calls to action on this matter were also discussed, specifically focused on the need to advocate for manual counting and electronic transmission, also known as the hybrid system, to secure the integrity of the election.



From Tools to Movements: The Way Forward

As the workshop came to a close, participants reflected on the urgency of coordinated action. Technical tools alone are not enough. Defending democracy requires long-term investment in grassroots collaboration, community-based media, and legislative reform. As AI continues to evolve, defending truth and democracy will require both digital resilience and collective strategy.

“Disinformation is not just a tech issue; it is a political and economic one.”

For more practical tools and deeper context on documenting elections in the age of AI and disinformation, check out these resources:

Filming Tips for Documenting the Philippine Elections in the Age of AI

A practical guide for safely, ethically, and effectively filming during the 2025 elections — developed with local activists.

Community-Based Approaches to Verification

A guide to grassroots strategies for verifying visual media and countering disinformation.

Things to Know Before Using AI Detection Tools

Insights into the current capabilities—and limits—of AI detection.

Spotting Deepfakes in an Election Year: How AI Detection Tools Work – and Sometimes Fail

A practical guide on how to spot deepfakes during election cycles and protect against AI-driven disinformation.

In the face of deepfakes and disinformation, our strongest defense is not just detection but the power of communities to tell their own truths, shape their narratives, and hold power to account.

About AlterMidya: Altermidya – People’s Alternative Media Network is a national network of independent and progressive media outfits, institutions, and individuals that aims to promote pro-people journalism by amplifying the issues and stories of marginalized sectors.

Published on 8 May 2025.

