Fake News and Anthropology: A Conversation on Technology, Trust, and Publics in an Age of Mass Disinformation Part III

Emergent Conversation 9

A discussion with Andrew Graan, Adam Hodges, Meg Stalcup moderated by Mei-chun Lee

How to Spot Fake News, from Wikimedia Commons. CC BY 4.0, IFLA.

This Emergent Conversation is part of a PoLAR Online series, Digital Politics, which will also include a Virtual Edition with open access PoLAR articles. Anthropologists Adam Hodges, Andrew Graan, and Meg Stalcup joined this virtual conversation to share their thoughts on fake news, disinformation, and political propaganda. It was moderated by PoLAR Digital Editorial Fellow Mei-chun Lee. The conversation will be published in three installments. This is Part III of the discussion. Part I is available here, and Part II is available here.

Mei-chun Lee:  This rich conversation has led us to think about fake news from different angles: from how trust has now been shaped differently through new social networks, to how disseminating and policing fake news become practices of participating in “split publics”; from the “affective engagement” of reading, commenting, and forwarding fake news, to “a chain of authentication” that amplifies the power of fake news; from the undemocratic developments and effects of presumably decentralizing, democratizing digital media, to fake news as “an attention hijacker” that nonetheless confirms one’s worldview. For the last question, I want to turn from analysis to action. Let me ask straightforwardly: what should we do? In response to public criticism, Facebook and Twitter have been ramping up efforts to remove fake accounts and misinformation, deploying real name policies, and working with third-party fact-checking partners to identify potential “fake news.” Twitter has gone a step further by banning political ads. Meanwhile, some countries, for example, Singapore, Germany, and France, have begun to make laws to regulate misinformation. What do you think about these measures to fight fake news? How can effective regulation and accountability be established? How do we rebuild trust? Besides the tech giants and states, what actions can we, as academics and citizens, take?

Adam Hodges:  Thank you for raising this question, Mei-chun. I think this conversation would be incomplete without attempting to address what we should do. I’m a firm proponent of a public-facing anthropology, which means we, as scholars, need to look for ways to communicate the relevance of our work and ideas to the types of public conversations you mention. Beyond the theorizing we do and the descriptive work to better understand the participatory behaviors and practices that underlie the spread of fake news, can we draw out the implications of this work for the important public debates taking place today? In many ways, I think that’s easier said than done. But at a minimum, scholars can speak as informed citizens to weigh in on these debates.

First, I think it’s important to recognize that the problem of fake news is fundamentally a human problem even while it is largely perceived and dealt with as a technological problem. I would suggest that we need more focus on human-centered solutions if we are to truly tackle the problem of fake news. I think this is why we need to find ways to foster more anthropological thinking at companies such as Facebook and Twitter as they ramp up efforts to deal with this vexing issue.

Take, for example, the efforts by Facebook to remove fake accounts that Mei-chun cited. In his recent speech at Georgetown University, Mark Zuckerberg emphasized: “The solution [to misinformation] is to verify the identities of accounts getting wide distribution and get better at removing fake accounts.” Facebook’s efforts to tackle fake news largely center on this approach to identifying and removing inauthentic accounts. From the perspective of a technology company, this makes sense because the policy involves a technological solution: Facebook uses its own machine learning tools to detect inauthenticity and automates the process of verifying the identities and locations of those wanting to run political ads on the platform.

But this is only a partial solution. As we know from our discussion above, fake news involves an “intertextual web of participatory behaviors—behaviors that privilege forms of lateral or horizontal communication” that involve authentic actors in addition to fake accounts and posts spread by bots. Meg noted that “when fake news is very successful, it’s usually a combination of computational propaganda, genuine uptake by individuals, and mainstream media attention.”

Computer scientist Kate Starbird brings in the type of anthropological thinking—the human-centered, socially oriented perspectives—that engineering-focused companies like Facebook tend to lack, especially at the upper echelons of their leadership. Starbird and colleagues (2019) provide a rich examination of the different ways misinformation spreads online. As they suggest, policies centered on inauthenticity may fall short because the spread of misinformation is fundamentally “collaborative and participatory” in nature. There are multiple “entanglements” between real individuals and inauthentic accounts. This cannot be addressed by simply identifying and removing fake accounts.

Twitter’s approach to removing all political ads from its platform strikes me as a more pragmatic although temporary measure. To be sure, defining what constitutes “political” is fraught with its own controversies (something both Zuckerberg and Twitter’s Jack Dorsey apprehend), but at least Twitter recognizes that paid political advertisements that spread misinformation constitute a social and political problem that they are unable to deal with (technologically or otherwise) at this moment. The short-term solution is to issue a moratorium on political ads on their platform until they (as a company) and we (as a society) are able to come up with better solutions. I give Twitter credit for not pretending they can solve this all on their own with technological tools. Facebook’s hubris in this regard is more than problematic, especially coming from a company of its size and power.

In his Georgetown speech, Zuckerberg conflated “free expression” with paid political ads, equating giving people a “voice” with allowing politicians to spread misinformation. Simply “provide a government ID and prove your location if you want to run political ads or a large page,” Zuckerberg explained. “You can still say controversial things, but you have to stand behind them with your real identity and face accountability.” I don’t share Zuckerberg’s view of “accountability” in this context. In his Georgetown speech, he seemed to invoke a “marketplace of ideas” metaphor for Facebook’s hands-off approach to political ads: “We don’t fact-check political ads. We don’t do this to help politicians, but because we think people should be able to see for themselves what politicians are saying.” The implication seems to be that when people see politicians blatantly spreading misinformation in paid ads, they will reject their ideas and see them as lying demagogues. This certainly does not accord with lived experience in recent history, nor does it acknowledge the political economy that underlies information exchange in the so-called marketplace of ideas. Providing a platform for people to engage in free expression and exercise their voice is one thing. Providing additional opportunities for those with money to amplify intentionally false claims through paid advertising is something completely different. It’s disingenuous to pretend they’re the same. Facebook shirks responsibility with this mirage of neutrality.

At least when it comes to organic posts and news stories, Facebook is working with third-party fact-checkers, as Mei-chun cited. If the problem of fake news is fundamentally a human problem, then we need solutions that leverage human institutions. This returns to our opening discussion about trust. In my opening comments, I noted how we live in a post-trust era and that fact-checking efforts will fall short as long as trust in institutions continues to erode. I think the promise in this third-party fact-checking effort lies not so much in its attempt to correct the record. As I said earlier, the problem of fake news does not merely stem from a lack of correct information, and pretending that’s the root of the problem is misleading. But what I find promising about a third-party effort such as this one is that it attempts to build a new trustworthy institution of “community reviewers” that represents a cross-section of Facebook users in the United States.

If the spread of fake news is, in a sense, crowd-sourced, then shouldn’t a solution, in a sense, be crowd-sourced? Such an approach would help develop a new public (or publics), in Andrew’s terms, with its own “set of participation norms, metadiscourses and language ideologies that mediate how one (and thus who) participates” in social media spheres. Wikipedia provides a good example of such a public that (although imperfect) helps govern the dissemination of accurate information in an online space—an institution that operates in a way that garners trust among visitors to the site. It involves organic contributions and engages contributors in lateral forms of participatory behaviors through a “social organization of interdiscursivity” (Gal 2018, cited earlier by Andrew). There are plenty of other examples of online and offline institutions that provide positive models for developing the new social structures that will need to be put in place to regulate social media platforms and ensure accountability. The key will be to find and foster these human-centered solutions without falling into the trap of thinking that all we need is more AI tools, rather than doing the difficult relationship-building and institution-building work that is actually needed.

Also, I would argue for more critical media literacy education for the digital age. Cultivating our own internal spam filters is necessary for sifting through all the digital noise. How to recognize, judge, and evaluate the sources of information. How to judge the credibility of claims and assess evidence. How to distinguish between organic and paid content while recognizing the potential biases of any type of content. How to recognize trolling behavior and avoid feeding the trolls. How to question assumptions and read with a critical eye. How to recognize when fake news hijacks our attention and attempts to reset the agenda, as Meg noted. These are all skills that anthropology education—and the liberal arts more broadly—can foster.

Meg Stalcup:  What Adam said sounds right to me, particularly the inadequacy of thinking about disinformation as a technological problem when it’s fundamentally human; the need to rethink institutions of trust; and the importance of fostering strong media literacy skills.

I think there is a tendency to talk about “fake news,” and computational propaganda more generally, as something that “happens,” as the inevitable outcome of diffuse, agent-less forces. And from the view of the reader or receiver or target, I get that—it appears in front of us, and it is getting more convincing, with deep fakes and natural language processing bots, and botnets and so on. What the methodological tools of anthropology allow us to do, though, is take this to a human scale. We can examine the people, and money, behind such things. A wide range of sites, technologies, modes of power, and assemblages of governance and laws bear on these issues of trust and suspicion, and can be studied. Human beliefs and behaviors are also the intended targets, whether the aim is to sway us to click on a monetized site or to vote for a candidate. From an anthropological perspective, understanding what counts as “fake news” means looking at how something comes to count as true, what underlies belief or disbelief, and, beyond what people say, what they don’t say, and what they actually do. A good project will tackle the ways that information—which includes disinformation—is claimed as true, and acted upon: is it authorized by experts? Government officials? Scholars? Movie stars? Instagram influencers? Who is paying for it and who is making money? And the thing is, not just who but how? What are the practices and techniques? These are all possible sites for inquiry, and intervention.

The context of what is happening today is, of course, media: a complex media ecology in which television, film, video games, print literature, and journalism participate with the internet and social media. It’s worth mapping this for whatever area one develops as a research project. The general characteristics of digital media are pertinent for thinking about how to study disinformation specifically. As Andrew put it, there is something about contemporary media technologies and digital culture that has created an ecology in which fake news practices and discourses can thrive. Underlying inequalities structure access to the internet (Hine 2015, 6), shape how disinformation can spread, and demand attention. For example, net neutrality is routinely breached by mobile carriers who compete for customers by offering “zero rating” plans. These allow specific applications, typically Facebook and WhatsApp, to work without using one’s data. In many countries, it’s precisely those with less money who are most affected by this stripped-down version of the internet, as they effectively have unlimited social media access but would need wifi to fact-check anything. Or consider that facial recognition and search engines have racial biases (Noble 2018), suggesting that research is needed on the discursive erasure yet continued reality of embodiment (Amrute 2016). Basically, we can’t consider tech infrastructure and its material affordances in isolation. The legislation and policies governing how they are run have a major impact, which marks them as loci for action by citizens and academics.

What history shows is that it’s not the technology itself which is bad or good. We’ve seen that with the advent of every new medium. Instead, each technology has certain affordances that can be relatively authoritarian or relatively democratic (Benkler, Faris, and Roberts 2018, 8). Disinformation relies on the internet’s capillarity to spread, but it becomes dangerous in silos. Research in Network Propaganda, cited just above, shows that the problem in the US election was that the far right was only interacting with the far right, while the rest of the political spectrum was attentive to the far right and also broadly exchanging information. A very distorted version of reality was created in that isolated silo or bubble, which is one of the ingredients in radicalization (discussed mostly around YouTube, but related to what Letícia Cesarino [2019] has called the Bolsosphere, which includes Twitter and WhatsApp).

This siloing relied on certain tendencies. One of the main ones is the delegitimization, which we have been discussing, of previous sources of authority and their logics: journalism, universities, government, science itself. We see instead popular epistemologies gaining the upper hand. We can’t find a way out of the fake news impasse if we address only facts themselves or the ways they are communicated, as though these things stood alone. Media literacy can’t be considered in isolation from what is authorizing those “alternative facts” and how, or it just won’t work. Taking a step back means looking at the forms of information, their aesthetics, their political deployment, these kinds of specific epistemologies, and their internal logics. This is another place where I think anthropological work can make a difference. The internet, algorithms, and artificial intelligence are in many ways the infrastructure of the everyday. The dual task this presents is studying when and why this infrastructure takes center stage, as it has with fears about fake news and electoral manipulation, and calling it back to attention when it is naturalized and overlooked.

Andrew Graan:  Thank you, Mei-chun, for this question, the always important, often difficult question: What is to be done? First, I just want to say that I really appreciate how Adam and Meg each undertook the labor of “defining the problem” of fake news en route to imagining and proposing actions on it. From their responses, it is abundantly clear how fake news is not solely (or even mostly) a problem of technology but rather has social, legal, commercial, affective, and epistemic dimensions. My own reply is thus quite indebted to the work that they have already done, a humble addition to their thoughtful responses.

Thus, inspired by Mei-chun’s final question and thinking across our conversation, I cannot help but focus on the question that Adam and Meg bring into relief, namely, what kind of problem is fake news? One popular thesis that we have discussed is that fake news, as fabricated news reports, is simply propaganda for the digital age, and that like propaganda it risks tricking or duping media consumers. But, in our conversation, we have challenged that thesis. Adam persuasively argued that we need a new theory of propaganda in order to understand how fake news works as disinformation. In doing so, he refuted the commonplace “hypodermic needle” model of media influence (see also Spitulnik 1993) by showing how “fake news” exploits the existing social relations and media infrastructures by which media circulate. That is, fake news, to circulate at all, depends on friends and acquaintances sharing posts of all sorts, as well as on the radio programs, TV shows, print media, politicians, pundits, etc. that create an interdiscursive environment in which some fabricated news report makes sense. Adam’s phrase “chains of authentication” nicely summed this up. In effect, this argument decenters “fake news” as a problem. Fake news does not “inject” disinformation into people’s minds all by itself. Rather, it is a bug within a larger practice and political economy of publicity.

Meg’s excellent point that fake news more often than not functions as an “attention grabber” only adds credence to this argument. From this perspective, fake news is arguably just a form of waste produced by the particularities of social media environments. It is a nuisance that requires time and effort to clean up, from the mundane labor of media users who skim but ultimately disregard fake news posts to the exceptional labor of public officials and PR consultants engaged in “damage control” in the wake of fabricated allegations about, for example, the safety of vaccines or the health of a political candidate.

According to both of these analyses, fake news is a problem. But it falls far short of the occasionally apocalyptic framings in which fake news results in the collapse of democracy. (And arguably, democracy collapsed long ago. Or at least major and obvious impediments to democratic practice, like racism, economic inequality, and the oligarchic takeover of official politics, are obscured by this framing.)

So, again, what kind of problem is fake news? As I have argued elsewhere (Graan 2018), on one level, the discourse on fake news (as fabricated news reports) that emerged in the wake of the 2016 US presidential election constitutes a moral panic. This is not to say that fabricated news reports are innocuous but that the passions and fervor aroused by the fake news issue vastly exceed the magnitude of the problem. Arguably, fake news is the symptom not the cause.

Of what then is fake news a symptom? First, as I argued in my 2018 essay, fake news, at least in the US, is conditioned by a public sphere organized by logics of promotion and marketing, where “the semiotic packaging of news content seems to have become more significant than the veracity and plurality of the news content itself” (Boyer and Yurchak 2010, 198). That is, the commodity value of a news piece increasingly resides in a particular kind of aesthetics that elevates journalistic slant and even sensationalism. Furthermore, the procedures for constructing this commodity value of news items are increasingly divorced from the techniques of authentication that had been used to affix truth status. Adam’s analysis of Mark Zuckerberg’s speech at Georgetown makes a similar point. As Adam argues, by asserting that political advertising is free speech, Facebook relegates fact-checking to the public sphere all the while harvesting advertising revenue. Meg’s point that fake news does not just “happen,” but relies on people, money, and communication infrastructures, is thus indispensable.

Second, drawing again on Meg’s and Adam’s interventions, fake news is also symptomatic of a new social distribution of trust. Here, I find Meg’s point that, with fake news, trust has not been lost but rather replaced, to be especially useful. For part of what we have been describing, I think, is not blanket social mistrust, but new and differing patterns of deep trust and mistrust. Dedicated Trump supporters deeply trust the man and deeply mistrust the information sources that he and they label as fake news. Dedicated Trump opponents deeply mistrust the man, seeing him as a fount of vicious fabrications and lies, while trusting the “fact-checked” media. This is what I earlier called a split public. But I take Meg’s point that the participation norms differentiating these publics are partly epistemic and affective in nature.

And so, if we see the problem of fake news in this way, as a problem that reflects a particular kind of commodification and a particular distribution of (mis)trust, then, what is to be done?

Obviously, I don’t claim to have all the answers—or any answers for that matter. And, I felt Adam’s and Meg’s suggestions were resoundingly excellent. But, here are a few additional things that come to mind.

First, if we see fake news as the result of the particular way in which news items have been constructed as commodities, then I do think that some form of regulation could be in order. I second Adam’s argument that Twitter’s refusal to sell political advertisements is a responsible short-term action. Twitter’s decision in effect closes the market for a particular sort of news commodity.

On a larger scale, however, we should all take note of the European Union’s General Data Protection Regulation (GDPR), which went into effect in 2018. (A similar policy was adopted by the state of California at the start of this year.) The regulation sets standards on data security and restricts how companies collect and use consumers’ online data. And while I agree with many who find the GDPR to be far from perfect, it does constitute an important precedent.

In particular, the GDPR’s policies on user privacy and data security will affect how firms harvest and commodify user data. As is now well established, the growth of social media companies has depended on their ability to collect users’ data and then use or sell this data for the purpose of targeted advertising. As the saying goes, “if you are not paying for it, you are the product.” (See Srinivasan 2019 for a condensed, straightforward account of this.) The collection and monetization of user data is a general condition for much of the commercial internet, and it perpetuates the production of content (often described as clickbait) to attract users and the use of cookies to gather data on users’ further web browsing. Regulation like the GDPR, in principle at least, can shield users from unrevealed data collection and thereby interrupt the commoditization of user data that spurs on the clickbait-ification of the internet. And, while I don’t have high hopes that the GDPR will be totally transformative in this regard, it is nonetheless an important first attempt to address the issue and one from which we can all learn. As Shoshana Zuboff (2019) states, the GDPR “has already taken us much further ahead than we’ve been during the last 20 years. Now we have the possibility of standing on the shoulders of the GDPR in order to develop the kinds of regulatory regimes that are specifically targeted at these mechanisms” of surveillance capitalism.

Second, if we see fake news as the result of a new distribution of social (mis)trust, then I would see the cultivation of new and expanded practices of publicity as potentially positive. For example, as Adam highlights, media literacy education can be an effective tool for developing such practices of publicity in many places, and such curricula have been celebrated in my current home, Finland.

Beyond this, however, I also take inspiration from news consumption practices that I encountered while conducting research in Macedonia (now North Macedonia) in the 2000s. At the time, as one Macedonian friend described to me, Macedonia was a country of news addicts. And as I witnessed in my research on news media, it was not uncommon for Macedonians to read or skim several different newspapers on a regular basis or to switch from TV channel to TV channel to string together different networks’ broadcasts of the evening news. On one level, the general emphasis on news indexed the precarity and uncertainty of life in Macedonia in the 2000s and a widespread (and not inaccurate) sense that the future of the small country depended on policy decisions made elsewhere, whether in Washington, London, Frankfurt, or Brussels. On another level, though, this practice of consuming multiple news sources, or multiple versions of the “same” news story, reflected a general ideology of news as always incomplete truth. In talking to people about their news consumption habits, I would be told that all Macedonian newspapers were biased and to some degree untrustworthy. For the inquisitive news consumer, then, the burden of truth required a critical review of several different news accounts. Only then could a consumer “read between the lines” and determine what a story was really about.

This kind of news consumption practice enacts a particular kind of media literacy but it also, I feel, highlights the relation of news consumption practices to the distribution of (mis)trust. In Macedonia in the 2000s there was a general mistrust of news media and hence the artful hermeneutics of reading between the lines. But, in the polarized environment of the US under Trump, there is not a general mistrust of news media but a combination of intense mistrust and intense trust. For some, Breitbart speaks the truth. For others, the New York Times deserves the Nobel Peace Prize.

Thus, beyond formal media literacy curricula, part of me, inspired by my Macedonian interlocutors so many years ago, would even advocate for the cultivation of a generalized, mistrustful reader: what we need is not more trust but rather more suspicion!

Finally, I also feel that when we refine what we mean when we describe fake news as a problem, it also gives us a platform to understand what we might describe as positive developments. As Meg reminded us, the technology that enables fake news is neither bad nor good in itself. And, if we condemn profiteers maliciously or cynically trafficking in fabricated news reports, let us also recognize the epochal importance of #BlackLivesMatter, of #MeToo, of #Occupy, and so on.

And with that closing thought, let me give my heartfelt gratitude to Mei-chun for the invitation to participate in this conversation and to Mei-chun, Adam and Meg for a discussion which was fun, educational, and inspiring in equal measure. Thanks!

Andrew Graan is a Lecturer in Social and Cultural Anthropology at the University of Helsinki. A cultural and linguistic anthropologist, his research examines the politics of public spheres in North Macedonia. He earned his PhD in anthropology from the University of Chicago in 2010. His current project, “Brand Nationalism: Neoliberal Statecraft and the Politics of Nation Branding in Macedonia,” examines how the coordinated efforts to regulate public communication that are found in nation branding projects constitute a wider program of economic and social governance. 

 

Adam Hodges is a linguistic anthropologist and adjunct assistant professor at the University of Colorado Boulder. His books include When Words Trump Politics: Resisting a Hostile Regime of Language (2019, Stanford University Press) and The ‘War on Terror’ Narrative (2011, Oxford University Press). His articles have appeared in the American Anthropologist, Discourse & Society, Language & Communication, Language in Society, and the Journal of Linguistic Anthropology.

 

Meg Stalcup is Assistant Professor of Anthropology at the University of Ottawa, where she does research and teaches on media and visual anthropology, science and technology studies, ethics, and methods. Her current project, “Sensing Truth: The Aesthetic Politics of Information in Digital Brazil” looks at institutional and epistemological aspects of media in four cases: health, politics, environment, and security. Previous work has been published in Anthropological Theory, Visual Anthropology Review, and Theoretical Criminology, among other places.

Works Cited

Amrute, Sareeta. 2016. Encoding Race, Encoding Class: Indian IT Workers in Berlin. Durham: Duke University Press.

Benkler, Yochai, Robert Faris, and Hal Roberts. 2018. Network Propaganda: Manipulation, Disinformation, and Radicalization in American Politics. Oxford: Oxford University Press.

Boyer, Dominic, and Alexei Yurchak. 2010. “American Stiob: Or, What Late-Socialist Aesthetics of Parody Reveal about Contemporary Political Culture in the West.” Cultural Anthropology 25 (2): 179–221. https://doi.org/10.1111/j.1548-1360.2010.01056.x.

Cesarino, Letícia. 2019. “On Digital Populism in Brazil.” PoLAR: Political and Legal Anthropology Review (blog). April 25, 2019. https://polarjournal.org/2019/04/15/on-jair-bolsonaros-digital-populism/.

Graan, Andrew. 2018. “The Fake News Mills of Macedonia and Other Liberal Panics.” Society for Cultural Anthropology (blog). April 25, 2018. https://culanth.org/fieldsights/the-fake-news-mills-of-macedonia-and-other-liberal-panics-1.

Hine, Christine. 2015. Ethnography for the Internet: Embedded, Embodied and Everyday. London: Bloomsbury Publishing.

Noble, Safiya Umoja. 2018. Algorithms of Oppression: How Search Engines Reinforce Racism. New York: NYU Press.

Spitulnik, Debra. 1993. “Anthropology and Mass Media.” Annual Review of Anthropology 22: 293–315.

Starbird, Kate, Ahmer Arif, and Tom Wilson. 2019. “Disinformation as Collaborative Work: Surfacing the Participatory Nature of Strategic Information Operations.” Proceedings of the ACM on Human-Computer Interaction 3 (CSCW): 1–26. https://doi.org/10.1145/3359229.

 
