2
THE POLITICAL TROLLING INDUSTRY IN DUTERTE’S PHILIPPINES
Everyday Work Arrangements of Disinformation and Extreme Speech
Jonathan Corpus Ong
BEFORE THE TERMS FAKE NEWS FACTORIES AND TROLL ARMIES entered the global lexicon after the shocking revelations from the Brexit referendum and Donald Trump’s election, the Philippines had already elected President Rodrigo Duterte on the strength of a savvy campaign that ran largely on social media. While initially a cost-saving maneuver for an “outsider” candidate lacking national political machinery (BBC Trending 2016), this investment proved efficient as his angry anti-establishment narrative was amplified by the lurid posts of vociferous digital influencers and the “clickbaity” headlines of imposter “fake news” websites. A Facebook executive later dubbed the Philippines “patient zero” in the global misinformation epidemic (Harbath 2018), drawing attention to how social and historical convergences of deep populist sentiment, technological diffusion, and corruption within local creative industries gave rise to disinformation innovations and the proliferation of “extreme speech” in online environments (Pohjonen and Udupa 2017).
In this chapter, I discuss how and why the Philippines became this patient zero. Drawing from a larger project in which Jason Cabanes and I conducted interviews and participant observation with the actual authors of fake news and operators of fake accounts on Facebook (Ong and Cabanes 2018), I argue that the illness is much older and deeper than Duterte and his crass campaigners. Through an inquiry into the work arrangements behind the production of digital disinformation, this chapter sheds light on the ways in which “fake news factories” and “troll armies” grew primarily out of the professional practice of political consultancies within the Philippines’s creative industries. Arguing against “moral panic” explanations of fake news that attribute the phenomenon to dark technological alchemy, foreign interference, or the exceptional evils of current political leaders (e.g., Ressa 2016), this chapter examines the everyday aspects of deceptive political campaigning, which enlists many complicit strategists, influencers, and fake-account operators for professional projects.
This approach is inspired by the literature of production studies in media and communications research, which aims to explore, from the bottom up, workers’ “creativity within constraints” in media production processes (Mayer, Caldwell, and Banks 2009). By recording the intentions and experiences of disinformation producers in their own words, we can shed light on their opaque institutional procedures, the social conditions that led them to this kind of work, and the cultural scripts they use to justify this work to others and themselves. Bridging discussions in production studies and digital labor research (e.g., Casilli 2016; Roberts 2016) with the emerging area of disinformation studies, I argue that the chief architects of networked disinformation are themselves architects of precarious labor arrangements in the creative industries that make workers vulnerable to slipping into the digital underground. By exploring the Philippines as a non–Euro-American case of a phenomenon usually discussed in terms of the West, I also seek to contribute a global perspective to the issue of disinformation and digital publics and to provoke broader metatheoretical reflection on social transformations and digital innovations “from the South” (Arora 2019; Srinivasan, Diepeveen, and Karekwaivanane 2019).
Digital Disinformation as Everyday Digital Labor
This chapter contributes to the emerging area of disinformation studies in three significant ways.
The first aim is to reimagine disinformation producers and fake news authors not as exceptional villains but as ordinary digital workers. This approach requires a spirit of openness to understand why people take on political trolling projects in the first place. Considering them as precarious workers rather than evil masterminds helps us think about what safety nets we might set up to prevent them from slipping into the digital underground.
To achieve this perspective, this chapter engages with and is inspired by valuable ethnographic work on the sense-making processes of populist publics, such as Arlie Hochschild’s (2016) work with white working-class voters in Louisiana and Hilary Pilkington’s (2016) ethnography of members of the English Defence League in the United Kingdom. Their work invites us to dive into the “deep stories” behind populist publics’ anger and resentment—that is, the careful construction of good-and-evil narratives that establish people’s particular visions of the world. I build on their work in two ways. First, I unpack the ways in which disinformation strategists have strategically weaponized populist publics’ anger and resentment toward the liberal elite establishment by creating mistrust of establishment institutions and mainstream media while exploiting the vulnerabilities of social media’s attention economy. Second, I advance a comparative perspective to the Euro-American literature on global populism through a specific inquiry into the Philippine case. I am inspired by the growing literature on populism in the Philippines, such as Cleve Arguelles’s (2016) and Nicole Curato’s (2016) illuminating writings about Duterte’s supporters in poor slums in the Philippines. Unlike the working-class Americans who feel left behind as “strangers in their own land” (Hochschild 2016), the Filipino populist public, I argue, is one that feels it is finally finding a voice through new opportunities for speech on social media. I build on Arguelles’s and Curato’s work by expanding their focus on the rationalities of “working-class” or “lower-class” Duterte supporters to include the precarious middle class and professional elites who are complicit with an illiberal system as long as they gain or maintain power for themselves.
The second aim is to steer the narrative about technology in politics away from fetishizing new technologies and toward seeing technologies as part of broader communication environments. Previous studies of digital disinformation devote exclusive attention to the impact of one medium, platform, or technology on political outcomes—for example, devoting an entire study to Twitter bots (Woolley and Guilbeault 2017) and their impact on elections. Instead of panicked speculation about the powerful effects of a particular new technological platform on gullible masses, I zoom out to consider digital weapons as part of a broader artillery for mass deception. This big-picture approach, which follows the spirit of frameworks of polymedia (Madianou and Miller 2013) and media-as-environment (Silverstone 2007), enables better understanding of how architects of disinformation design campaigns that travel across new and traditional media platforms. This approach is also consistent with disinformation studies research that emphasizes the ways in which techniques of trolling (Phillips 2015) and “attention hacking” (Marwick and Lewis 2017) fundamentally target the vulnerabilities of mainstream media using strategies of influence and manipulation coming from social media. It also connects with important research tracing the historical precedents of digital disinformation and deceptive campaigning, such as how the phenomenon of “serial calling” in Kenyan and Ugandan radio talk shows predates contemporary political bloggers and social media influencers (Brisset-Foucault 2018; Gagliardone 2015).
The third aim is to shift the discussion from content regulation to process regulation. Intervention initiatives in the Philippines, such as those by Rappler—a key leader at the front lines of this worthy fight—involve lobbying Facebook and Google to flag fake or offensive content or to blacklist fake news sites. The alternative approach proposed here is to identify the systems and industries that normalize and incentivize fake news production. By shining a spotlight on the organizations and industries that are culpable and complicit in the production of fake news, we can then demand greater transparency and accountability from them with regard to their work arrangements and output. I am inspired by studies in digital labor that explore precarity in the work arrangements of global outsourcing (Casilli 2016; Roberts 2016). These studies highlight the interrelationship of production contexts with the (problematic, racist, misogynistic) media content that persists on digital platforms.
A key inspiration is Pohjonen and Udupa’s (2017) concept of “extreme speech.” Their concept rejects tendencies in current disinformation debates to apply, from the top down, the legal frame of “hate speech” and its “assumptions around politeness, civility, or abuse as universal features of communication with little cultural variation” (Pohjonen and Udupa 2017, 1174). As an intervention against binary classifications of hate speech versus acceptable speech, they propose that media and policy research draw from anthropological approaches that are sensitive to emic categories and the “complex politics involved in labeling certain kinds of speech as one thing or another” (1187). This insight is helpful in this study as I explore how deceptive political campaigning and professionalized online trolling exist on the same continuum as normalized corporate marketing practices of click-army mobilization and influencer management. But although this form of extreme speech has roots in corporate marketing, the key difference is the way in which angry narratives of mistrust of mainstream media and establishment institutions are systematically constructed and circulated for the benefit of particular political leaders (see Cabanes, Anderson, and Ong 2019).
Methods
The project from which this chapter draws involved one year of collaborative research with Jason Cabanes, consisting of in-depth interviews with twenty disinformation architects, at both managerial and staff levels, and participant observation of Facebook pages and communities and Twitter accounts used by our informants (see Ong and Cabanes 2018). Through our personal contacts in the advertising and public relations (PR) industry, we met the strategists behind corporate and political campaigns. We explained our research interest in digital labor, particularly in how digital political operations work in the Philippines. From our initial interviews with managers, we used snowball sampling to recruit the lower-level workers who translate strategy into the language of the street.
Interviews
Table 2.1 presents the different categories of operators we interviewed and their roles in digital disinformation campaigns. A digital campaign team is often led by a senior professional with a background in the advertising and PR industry. Based on the campaign objective, the strategist then assembles digital influencers and fake-account operators to carry out specific communication objectives.
The professional backgrounds of chief disinformation architects say a lot about the roots of digital political operations in the advertising and PR industry. These strategists maintain day jobs as account or creative directors for what is known in the industry as “boutique agencies,” local (rather than multinational) advertising and PR firms. They apply their expertise in social media content management for consumer household brands to political clients.
Chief architects work closely with anonymous digital influencers, who each maintain multiple social media accounts with large numbers of followers (between fifty thousand and two million). These popular pages carry distinctive branding and are regularly updated with humorous, inspirational, pop culture, or celebrity-oriented content. During campaign periods, these pages are “activated” and can be seen promoting hashtags and memes favorable to their clients, who are often corporate brands or celebrities but occasionally political clients. Page owners remain anonymous to their followers, and paid content is never disclosed. These anonymous digital influencers maintain day jobs as computer programmers, search engine optimization specialists, or marketing and finance staff.
Meanwhile, what we call “community-level fake-account operators” are usually new college graduates, members of politicians’ administrative staff, and online freelance workers who juggle various clients commissioning piecemeal digital work. We learned that many of these operators also work from provinces outside Metro Manila.
Table 2.1 Respondent list (N = 20)
Before the interviews, we briefed our informants that our approach was empathetic: we did not intend to cast moral judgments on their actions or to write investigative journalism that would “expose” troll account operations and name and shame the politicians involved in these activities. Conscious of the sensitive nature of labels such as “troll,” especially in our initial recruitment of participants, we used words such as trolling and fake accounts only once we had established rapport over the course of the interviews or once our respondents themselves had opened up about trolling work.
One ethnographic surprise from our interviews was discovering that many fake-account operators and anonymous digital influencers are gay and transgender people. We learned from informal chats with them how they switch between “male” and “female” voices when operating their multiple fake accounts and use snarky Filipino gay humor to poke fun at their online rivals. Strategically using affordances for anonymity in social media while maximizing opportunities for monetizing different identities, they make use of skills at “gender code-switching” to effectively deploy the appropriate digital persona to suit the objectives of a campaign. Although some gay and transgender people we met refused to be formally interviewed—presumably, to avoid the risk of their identities being “exposed”—we gained some insight into the specific conditions of “purple collar labor” (David 2015), as it applies to the digital disinformation industry. Gay and transgender people are often assumed to have mastery of the latest pop culture references, exuberant image management skills (e.g., from their dating profiles to message board memberships), and fan mobilization discipline (e.g., from Miss Universe online voting techniques) that guarantee vivaciousness and “spirit” for the social media accounts and campaigns they handle.
Online Participant Observation
We supplemented our interviews with participant observation of online communities. We observed more than twenty publicly accessible Facebook groups, Facebook pages, and Twitter accounts supporting various political players at both national and local levels. We made sure to include explicitly pro- and anti-Duterte groups as well as pages that do not disclose which candidates or political parties they support but claim to curate “social media news.”
Through participant observation, we examined the content and visual aesthetics of posts crafted by influencers and “followed the trail” of how those posts traveled across Facebook groups and were retweeted across platforms. We observed the tone and speech styles of replies and comments to the original posts. This allowed us to better understand how digital disinformation campaigns were translated into specific posts or memes. During our fieldwork, some participants showed us fake accounts they operated and even shared their passwords to those accounts. This access provided us with an opportunity to compare and contrast what our participants said in the context of the interview with what they actually did in the online profiles they created, as we were able to check the digital traces they left in their Facebook histories.
Ethics
Following the protocols of university research ethics, we told our informants that we would de-identify information that could be traced to individuals. For this reason, some details about the digital disinformation campaigns that our participants wanted to keep confidential are discussed only in general terms.
Networked Disinformation Projects and Moral Justifications
The common image of people involved in so-called paid troll work—as is the case for most digital laborers in the Global South—is that of the exploited worker in a “digital sweatshop” or a “click farm.” They are thought to spend their days executing monotonous, clerical tasks within highly regimented and exploitative arrangements. In the specific context of digital work for politics, for instance, Rongbin Han (2015) narrates the precarious labor arrangements that buttress China’s “fifty-cent army,” the state-sponsored workers who are paid to act like “spontaneous grassroots support[ers]” on online discussion boards. Han’s insightful research demonstrates how rigid work arrangements, including strict pay structures that emphasize quantity over quality of posts and often inflexible instructions for posting content, lead to the easy identification of posts authored by fifty-cent workers.
In contrast to the Chinese case, what I found in the Philippines is that digital political operations are more diversified, with operators working with clients across the political spectrum and occupying a hierarchy of roles. I define “networked disinformation” as the distribution of the labor of political deception across a set of loosely organized workers. This convenient structure provides a way for people to displace responsibility and project the stigma associated with the label of “paid troll” (bayaran) onto other people.
The first key feature of networked disinformation is its project-based nature, in which workers are employed on short-term contracts by clients who measure the delivery of output against specific industry criteria and metrics. These jobs are often taken on as added or sideline work, as people maintain day jobs in advertising or online marketing or serve as politicians’ administrative staff. As distributed labor, different workers are enlisted to achieve discrete objectives while having only loose and informal connections with their fellow workers. Often disinformation workers do not share the same office and are not always clear about how certain campaigns relate to the overall objectives of political clients.
The second key feature of networked disinformation is that it is rooted in the general principles and strategies of advertising and PR. We discovered in our project that networked disinformation campaigns for Filipino politicians are hyperextensions of corporate marketing practices, in which techniques of “attention hacking” (Marwick and Lewis 2017) were first tested in advertising and PR campaigns for soft drink or shampoo brands and then transposed to political marketing. Both campaign principles and work structures follow models developed in advertising and PR, and they are applied to campaigns for politicians across the political spectrum, beyond just Duterte’s party, contrary to misleading reports such as an Oxford Internet Institute study (Bradshaw and Howard 2017).
Because of the project-based and loosely organized nature of disinformation activity, workers within and across teams engaged in constant one-upmanship, which affected the quality of the disinformation work that they did. Although campaigns were designed at the top with a certain objective, distributing the execution of this objective among workers in competition with each other led to unpredictable consequences. Because disinformation producers are incentivized by strategists in a competitive matrix of reach and engagement, some end up producing racist or misogynist content that was not agreed upon at the beginning. For instance, one particularly misogynistic meme aiming to humiliate a journalist went viral even though the original intent was simply to discredit the news agency to which she belonged.
This aspect brings us to the third key feature of networked disinformation: its project-based and distributed labor arrangement enables moral displacement and denial. Among all the workers we met during the project, nobody self-identified as a “troll” or a producer of fake news; these labels were always projected onto either imagined others of pure villainy and total power or “real” supporters and political fans. Real fans with enthusiastic zeal for their candidate are said to be more likely to be invested in making personalized attacks and hateful expressions in online arguments than professional, but casual, disinformation architects like themselves. Disinformation producers often engage in moral justifications that their work is not actually trolling or fake news. They mobilize various denial strategies that allow them to displace moral responsibility, often by citing that political consultancy is only one project or sideline that does not define their whole identity. The project-based nature of disinformation work makes moral displacement easier, given the casual, short-term nature of the arrangement, which downplays commitment and responsibility to the broader sphere of political practice.
Moral justifications differ across the three levels of disinformation architects. At the top level, strategists are more likely to express discourses of gamification and fictionalization to justify their work. They draw on cultural scripts ranging from Western entertainment (“It’s like being Olivia Pope of Scandal”) to video games (“It’s game over when you’re found out”) to fictionalize the dangerous consequences of their actions and block feelings of real involvement. They even express a certain “thrill” in breaking the rules of the game, similar to experiences of “fun” in breaking taboos or generating humor in politics (Hervik 2019; Tuters and Hagen, chap. 5; Udupa, chap. 6).
During our fieldwork, for example, I met the 29-year-old digital strategist Rachel, who shared, “I’d actually like to phase out our company practice of paying out journalists to seed or delete news because they can be entitled or unscrupulous. The reason why I’m more passionate and committed about online work is because I don’t like the politics in journalism.” In this quote, it is interesting how corruption in mainstream media is used as a moral justification for disposing of one institutionalized practice and replacing it with another that is equally lacking in scruples and ultimately benefits the strategists themselves. By expressing statements that normalize or even exaggerate evil or corruption in existing public institutions, these ambitious workers imagine themselves as self-styled agents of positive change.
At the middle level, influencers are more likely to express discourses of normalization to justify disinformation production. They cite how they do exactly the same work to promote corporate brands and entertainment products or even volunteer their digital expertise for free to support fandoms of celebrities or beauty pageant titlists. At the bottom level, community-level fake-account operators cite primarily financial or material reasons to justify their work; they take it on as added work and are often persuaded by others to do so for extra cash. Unlike lead strategists and some digital influencers, who expressed “fun” and “thrill” in designing new forms of extreme speech for political clients, few lower-level workers cited ever experiencing fun while doing political trolling projects. Many lower-level workers cited being pressured, intimidated, and harassed in their jobs, including by their demanding bosses and clients. To me, this unevenness in experiences of fun in transgressive digital production highlights the precarity of those workers in exploitative and emotionally draining race-to-the-bottom work arrangements (Casilli 2016; Roberts 2016).
My colleague and I were struck to learn how disinformation workers create implicit rules for themselves and their colleagues to help them manage the social pressures and moral burdens. Workers drew their own moral boundaries (“In a flame war, I only poke fun at people’s bad grammar, but I will never slut shame”) and created support systems (“If I don’t really support the politician hiring me, then I pass on the account to someone I know who’s a real fan”) and even sabotaged the authenticity of their own avatar (see Ong and Cabanes 2018 for an ethnographic portrait of a politician’s staff member who was “peer-pressured” to create a fake account in the name of “team spirit” during the election campaign season).
In the following sections, I discuss the three kinds of disinformation producers and identify their motivations and backgrounds, reflecting on how they become complicit in the work of political deception.
Advertising and PR Strategists as Chief Disinformation Architects
At the top level of networked disinformation campaigns are advertising and PR executives who take on the role of high-level political operators. They usually occupy leadership roles in boutique agencies and handle portfolios of corporate brands while maintaining consultancies with political clients on the side. They transpose tried-and-tested industry techniques for reputation building and spin to networked disinformation campaigns. With a record of launching Facebook community pages and achieving worldwide trending status for digital campaigns for household brands, telecommunications companies, or celebrities, many executives saw political consultancy as a new challenge in which to apply their skills and leverage their networks.
The chief architects we met often complained that they are at times undervalued by politicians and their primary handlers. At the same time, they saw tremendous opportunity in traditional political campaigners’ reluctance to engage with digital media. This dynamic allows digital strategists to establish themselves as the de facto pioneers on a platform that they know will come to dominate the future of political propaganda. As one strategist told us, “The Philippines does not realize that it is sitting on a stockpile of digital weapons”; she recognizes that Filipino digital workers are highly entrepreneurial and resourceful, whether they be the computer hackers who infamously coded the “I Love You” virus or the platform freelance workers who diligently work with their global clientele.
Chief architects also see digital disinformation as an opportunity to disrupt existing social hierarchies and challenge established power players in political campaigning.
It is evident that some strategists also relish the thrill and adrenaline rush they get from their risky projects. The 40-year-old executive Dom told us, “Maybe if I had this power ten years ago, I would have abused it and I could toy with you guys (kung ano-ano gagawin ko sa inyo). But now I’m in my forties, it’s a good thing I have a little maturity on how to use it. But what I’m really saying is we can fuck your digital life without you even knowing about it.” In that moment, I shuddered to imagine the fates of powerless folks who had crossed this woman.
Anonymous Digital Influencers
At the middle level of the networked disinformation hierarchy are digital influencers. It is important to distinguish between key opinion leaders, such as celebrities and political pundits who maintain public personas, and anonymous microinfluencers who work on more clandestine political operations. In our research, we focused on the anonymous digital influencers, who usually operate one or more anonymous accounts (e.g., comedy or inspirational pages on Twitter or Facebook) that entertain their followers with a specific brand of hilarity or commentary while occasionally slipping paid content into their feeds. These influencers harness their astute understanding of the public’s pop culture tastes, political sentiments, and social media behaviors to become expert attention hackers.
These digital influencers expect their fifty thousand to two million followers to share and like their messages, with the aim of gaming Twitter trending rankings and creating viral posts on Facebook so as to influence mainstream media coverage. Translating the campaign plans of the advertising and PR strategists, they use snark, humor, or inspirational messaging consistent with the social media personas they operate to author posts that are favorable or unfavorable to particular politicians and are often anchored by a hashtag agreed upon with the chief architects.
A few digital influencers take on the role of second-level subcontractors, working as intermediaries between the chief architects and fellow digital influencers to whom they redistribute disinformation work.
All the digital influencers we met perform this role on a part-time, per-project basis. Many of them have day jobs in IT or corporate marketing and take on other sideline work, such as online community management for celebrities’ fan clubs. For the most part, this kind of work has the trappings of the “precarious middle-class” lifestyle common to most kinds of freelance digital work in the Philippines. Central to this role is the enjoyment of being in an aspirational work environment. They recall with pride how they do disinformation work while booked overnight in a five-star hotel suite or in a mansion in a gated village. They also get excited by the material and symbolic rewards that chief architects promise to the best-performing digital worker on the team, which include giving away the latest iPhone model or arranging a meet-and-greet with a top-level celebrity.
As part of the precarious middle class, anonymous digital influencers are driven by financial motivations in their disinformation work. They have previously endured less stable and less financially and socially rewarding jobs in the creative and digital industries and see influencer work as giving them more freedom, including in choosing clients. Curiously, we found that there is usually alignment between digital influencers and the political clients they serve—not in terms of ideology or issues but in terms of fan admiration. Some influencers hired by political clients they do not like would subcontract the work to a fellow influencer they know who is a “real fan” of that politician.
In our fieldwork, we met several digital influencers who are transgender women, each operating approximately six anonymous accounts with diverse “brands” and different gender and sexual identities. We observed “gender code-switching” (David 2015) in how they chose to translate campaign objectives in various ways by using male, female, and gay “voices” in these different fake accounts. Male accounts often aimed for positive campaigns, given the inspirational nature of their content; female “bikini troll” accounts used overt sexuality to gain followers and distribute deceptive content to them; and gay male accounts used snarky gay humor to poke fun at politicians’ actions for negative campaigning. These influencers maximize the monetization opportunities of the different identities they manage by creating gendered performances of political trolling for their clients.
Community-level Fake-Account Operators
At the lowest level of the networked disinformation hierarchy are community-level fake-account operators. These workers are tasked with what I call “script-based disinformation work”: posting the strategists’ previously designed written and/or visual content (memes) on a predetermined schedule and affirming and amplifying strategists’ and influencers’ key messages through likes and shares. Community-level fake-account operators are tasked with posting a prescribed number of posts or comments on Facebook community groups, news sites, or rival politicians’ pages each day. By actively posting content ranging from generic greetings to political messages within Facebook community groups, they are often responsible for maintaining activity and initiating bandwagon effects that would drive real grassroots supporters to come out and make their support for politicians visible.
The fake-account operators usually post positive messages of support for the politician and note their agreement with favorable news articles. At other times, they initiate quarrels with supporters of rival politicians. They use ad hominem attacks, making fun of other people’s bad grammar, often as a way of shutting down an opponent’s argument. They mention that their ultimate failure as fake-account operators on Facebook is when they are called out as a fake account (“That’s game over! That usually shuts us up.”).
Labor arrangements differ for this kind of low-level troll work. Most respondents in our study were fake-account operators working within a politician’s own administrative staff. These workers are usually junior-level employees tasked with “helping” a political campaign, and they usually begrudge the fact that there is no additional pay for this kind of publicly derided work that they did not originally sign up for. Other fake-account operators, whom we have yet to formally interview face-to-face, are freelancers paid on a per-day basis to produce a set number of posts or comments, or office-based operators who work in a “call center” type of arrangement; some operate in provinces that are the bailiwicks of politicians.
Community-level fake-account operators are driven primarily by financial motivations. We found that some of their fake accounts on Facebook or Twitter had histories that predate their political trolling work. Some accounts had been used when the operators were part of pyramid schemes: these “networking” schemes required them to visually display groups of friends, and fake accounts were one way to artificially manufacture group support. Many fake-account operators appear to be workers who have previously tried many other risky enterprises as a means to achieve financial stability.
From Content Regulation to Process Regulation
By discussing disinformation workers’ social and financial motivations and moral justifications, this chapter ultimately aims for a better understanding of the vulnerabilities in the political and media ecosystems that make political trolling a sideline job that is hard to refuse in the Philippine context. Digital disinformation is not an all-new Duterte novelty; it is the culmination of the most unscrupulous trends in the Philippines’s media and political culture. Many of the disinformation techniques were tried and tested in marketing shampoos and soft drinks before being hyperextended to marketing politicians and their ideas. The difference is that seeded hashtags that aim for historical revisionism or the drowning out of dissent pose greater and unfathomable dangers to political futures. Within the broader context of Duterte’s drug war and authoritarian populism, digital disinformation has volatile consequences in amplifying the culture of violence and impunity experienced in the streets. Although disinformation producers imagine themselves as ordinary entrepreneurs within an inherently corrupt media ecosystem, they neglect to say that they consistently circulate anti-establishment narratives that fuel mistrust not only in “elite” political leaders but also, crucially, in mainstream media. This narrative works to their advantage, as digital disinformation workers see themselves as “change-making” entrants competing with legacy media to best represent the voice of “the people” (see also Chakravartty and Roy 2017).
While I emphasize the significant impact of the structural and institutional contexts in which disinformation workers are embedded, it is important not to absolve these workers of their moral responsibility. As my colleague and I discovered (Ong and Cabanes 2018), these individuals have capacities for agency in the ways they translate, execute, or even resist directives in the production process. The moral failure is their complicity and collusion with evil infrastructures in the desire to gain political, social, and financial benefits. Although the ethnographically inspired approach of this study begins with an imperative for empathy to understand the conditions that push people to engage in precarious disinformation work, I assign great culpability to the chief architects, who are at the top level of influence and who benefit the most from the architecture they have built.
The production studies approach taken in this chapter also adds to the current public debate about what the “right” interventions are for digital disinformation. In the Philippines, most responses from journalists and civil society have focused on fact-checking initiatives. Journalists have enlisted the support of the influential Catholic Bishops’ Conference of the Philippines in circulating a blacklist of “fake news websites” (almost all associated with Duterte). Embattled journalists targeted by Duterte’s administration have also received some support from Facebook, which has awarded them contracts as third-party fact-checkers who flag content to be downranked (rather than completely censored) by the algorithm. Although these initiatives are well meaning, I am cautious because these approaches are not inclusive and comprehensive enough—they aim to catch content only after it has already been produced and, through repetition, risk giving this bad content further “oxygen” (Phillips 2018), which might even contribute to further polarization (Wardle and Derakhshan 2017).
A production studies approach demands spotlighting mechanisms that can prevent this kind of work from being produced in the first place. This approach means inviting open discussion about self-regulation in the media and creative industries, which have treated disinformation work as an open industry secret, and encouraging transparency in political marketing and advertising, particularly in the context of elections. The ethnographic material gathered in this chapter is meant to inform collaborations with civil society actors, election lobby groups, and lawyers to encourage greater transparency and accountability in digital campaigning. I argue that it is important for politicians in the Philippines to disclose their digital campaigns’ content and strategy, which escape public visibility given platform affordances of microtargeting. This matter is of urgent concern because the May 2019 midterm elections witnessed a proliferation of disinformation innovations and further expansion of the Philippine political trolling industry (Ong, Tapsell, and Curato 2019). This issue has global repercussions, as other democracies such as India (Sharma 2019) similarly struggle to engage with innovations brought on by big tech and their historical antecedents in the unregulated practice of political consultancies.
References
Arguelles, Cleve. 2016. “Grounding Populism: Perspective from the Populist Publics.” MA thesis, Central European University, Budapest.
Arora, Payal. 2019. “Politics of Algorithms, Indian Citizenship, and the Colonial Legacy.” In Global Digital Cultures: Perspectives from South Asia, edited by Aswin Punathambekar and Sriram Mohan, 37–52. Ann Arbor: University of Michigan Press.
Mayer, Vicki, Miranda J. Banks, and John T. Caldwell, eds. 2009. Production Studies: Cultural Studies of Media Industries. New York: Routledge.
BBC Trending. 2016. “Trolls and Triumph: A Digital Battle in the Philippines.” December 7. https://www.bbc.com/news/blogs-trending-38173842.
Bradshaw, Samantha, and Philip Howard. 2017. Troops, Trolls, and Troublemakers: A Global Inventory of Organized Social Media Manipulation. Computational Propaganda Research Project. Oxford: Oxford University. https://demtech.oii.ox.ac.uk/wp-content/uploads/sites/89/2017/07/Troops-Trolls-and-Troublemakers.pdf.
Brisset-Foucault, Florence. 2018. “Serial Callers: Communication Technologies and Political Personhood in Contemporary Uganda.” Ethnos 83 (2): 255–273.
Cabanes, Jason, C. W. Anderson, and Jonathan Corpus Ong. 2019. “Fake News and Scandal.” In The Routledge Companion to Media and Scandal, edited by Howard Tumber and Silvio Waisbord, 115–125. London: Routledge.
Casilli, Antonio. 2016. “Digital Labor Studies Go Global: Toward a Digital Decolonial Turn.” International Journal of Communication 11:3934–3954.
Chakravartty, Paula, and Srirupa Roy. 2017. “Mediatized Populisms: Inter-Asian Lineages.” International Journal of Communication 11:4073–4092.
Curato, Nicole. 2016. “Politics of Anxiety, Politics of Hope: Penal Populism and Duterte’s Rise to Power.” Journal of Current Southeast Asian Affairs 35 (3): 91–109.
David, Emmanuel. 2015. “Purple Collar Labor: Transgender Workers and Queer Value at Global Call Centers in the Philippines.” Gender and Society 29 (2): 169–194.
Gagliardone, Iginio. 2015. “‘Can You Hear Me?’ Mobile-Radio Interactions and Governance in Africa.” New Media and Society 18 (9): 2080–2095.
Han, Rongbin. 2015. “Manufacturing Consent in Cyberspace: China’s ‘Fifty-Cent Army.’” Journal of Current Chinese Affairs 44 (2): 105–134.
Harbath, Katie. 2018. “Protecting Election Integrity on Facebook.” Presented at 360/OS, Berlin, Germany. https://www.youtube.com/watch?time_continue=76&v=dJ1wcpsOtS4.
Hervik, Peter. 2019. “Ritualized Opposition in Danish Online Practices of Extremist Language and Thought.” International Journal of Communication 13:3104–3121.
Hochschild, Arlie. 2016. Strangers in Their Own Land. New York: New Press.
Madianou, Mirca, and Daniel Miller. 2013. Migration and New Media: Transnational Families and Polymedia. New York: Routledge.
Marwick, Alice, and Rebecca Lewis. 2017. Media Manipulation and Disinformation Online. New York: Data and Society Research Institute. https://datasociety.net/pubs/oh/DataAndSociety_MediaManipulationAndDisinformationOnline.pdf.
Ong, Jonathan Corpus, and Jason Cabanes. 2018. “Architects of Networked Disinformation: Behind the Scenes of Troll Accounts and Fake News Production in the Philippines.” Newton Tech4Dev Network. https://newtontechfordev.com/wp-content/uploads/2018/02/ARCHITECTS-OF-NETWORKED-DISINFORMATION-FULL-REPORT.pdf.
Ong, Jonathan Corpus, Ross Tapsell, and Nicole Curato. 2019. “Social Media in the 2019 Philippine Midterm Election: A Public Report of the Digital Disinformation Tracker Project.” New Mandala. https://www.newmandala.org/wp-content/uploads/2019/08/Digital-Disinformation-2019-Midterms.pdf.
Phillips, Whitney. 2015. This Is Why We Can’t Have Nice Things: Mapping the Relationship between Online Trolling and Mainstream Culture. Cambridge, MA: MIT Press.
———. 2018. “The Oxygen of Amplification: Better Practices for Reporting on Extremists, Antagonists, and Manipulators.” Data and Society Research Institute. https://datasociety.net/output/oxygen-of-amplification/.
Pilkington, Hilary. 2016. Loud and Proud: Passion and Politics in the English Defence League. Manchester: Manchester University Press.
Pohjonen, Matti, and Sahana Udupa. 2017. “Extreme Speech Online: An Anthropological Critique of Hate Speech Debates.” International Journal of Communication 11:1173–1191.
Ressa, Maria. 2016. “Propaganda War: Weaponizing the Internet.” Rappler, October 3. https://www.rappler.com/nation/148007-propaganda-war-weaponizing-internet.
Roberts, Sarah. 2016. “Commercial Content Moderation: Digital Laborer’s Dirty Work.” In The Intersectional Internet: Race, Sex, Class and Culture Online, edited by Safiya Umoja Noble and Brendesha M. Tynes, 147–160. New York: Peter Lang.
Sharma, Amogh Dhar. 2019. “How Far Can Political Parties in India Be Made Accountable for Their Digital Propaganda.” Scroll.in, May 10. https://scroll.in/article/921340/how-far-can-political-parties-in-india-be-made-accountable-for-their-digital-propaganda.
Silverstone, Roger. 2007. Media and Morality: On the Rise of the Mediapolis. Cambridge: Polity.
Srinivasan, Sharath, Stephanie Diepeveen, and George Karekwaivanane. 2019. “Rethinking Publics in Africa in a Digital Age.” Journal of Eastern African Studies 13 (1): 2–17.
Wardle, Claire, and Hossein Derakhshan. 2017. “Information Disorder: Toward an Interdisciplinary Framework for Research and Policy Making.” Council of Europe Report DGI(2017)09. https://rm.coe.int/information-disorder-toward-an-interdisciplinary-framework-for-researc/168076277c.
Woolley, Samuel C., and Douglas R. Guilbeault. 2017. “Computational Propaganda in the United States of America: Manufacturing Consensus Online.” Working Paper 2017.5, Project on Computational Propaganda. Oxford: Oxford University.