Authors:
(1) Filipo Sharevski, DePaul University;
(2) Benjamin Kessell, DePaul University.
Table of Links
Abstract and Introduction
2 Internet Activism and Social Media
2.1 Hashtag Activism
2.2 Hacktivism
3 Internet Activism and Misinformation
3.1 Grassroots Misinformation Operations
3.2 Mainstream Misinformation Operations
4 Hacktivism and Misinformation
4.1 Research Questions and 4.2 Sample
4.3 Methods and Instrumentation
4.4 Hacktivists’ Profiles
5 Misinformation Conceptualization and 5.1 Antecedents to Misinformation
5.2 Mental Models of Misinformation
6 Active Countering of Misinformation and 6.1 Leaking, Doxing, and Deplatforming
6.2 Anti-Misinformation “Ops”
7 Misinformation Evolution and 7.1 Counter-Misinformation Tactics
7.2 Misinformation Literacy
7.3 Misinformation Hacktivism
8 Discussion
8.1 Implications
8.2 Ethical Considerations
8.3 Limitations and 8.4 Future Work
9 Conclusion and References
7 Misinformation Evolution
As there is virtually no cost to disseminating misinformation [85], it is unlikely that online discourse will shed alternative narratives anytime soon. Whether this gloomy prediction will eventually materialize [78], or whether the Internet will improve because new technologies will upgrade the public’s ability to judge the quality and veracity of content [4], remains an open question. Because hacktivists are nonetheless stakeholders in resolving this issue, our third research question aimed to elicit their predictions of how online spaces will fare against trolling, memes, and falsehoods in the near future.
7.1 Counter-Misinformation Tactics
The hacktivists in our sample unanimously posit that “it is hard for social media platforms to keep up with removing it, so people stepping in to help is going to be of critical importance” [P13] for preserving a healthy discourse. The mobilization for “justice and truth as a cause” [P15] is important not just for curbing misinformation but “reclaiming information back from the political hold” [P1]. To help “expose misinformation charlatans” [P4], hacktivists call for maintaining a code of conduct where “no leak, doxing, or exposure action should cause anyone else harm (physical, reputation, mental)” [P3].
To begin with, P3 recommends that we “stop treating disinformation as a freedom of speech.” As misinformers usually use this cloak to act aggressively on social media, the next step is to “identify what their weakness are and what triggers them – deplatforming or provocation?” [P14]. If the misinformers are unresponsive spreaders, then “exposing, doxing, and putting their real faces through OSINT” [P15] is in order, not just on mainstream social media but also on alt-platforms, forums, and everywhere else on the Internet. If they itch for provocation, then “orchestrated saturation” [P5] might work better, with “shitposts, absurd trolling, and ridiculing memes” [P18]. Here, the hacktivists note, it is vitally important to distance oneself a priori from “political whataboutery” [P14] and avoid “coming across as censorship, disagreement, canceling that only could cause argument or dismissal” [P5].
Some of the hacktivists were of the opinion that “doxing is not hacking anymore per se because you can get stuff with a credit card and documents could be easily faked nowadays” [P1]. One possible tactic, proposed by P1, was to “find exploits, vulnerabilities in their platforms and step-by-step expose misinformers’ amateurish way of doing trolling, using bots, and feeding think tanks to get a credibility behind their propaganda.” Another tactic, proposed by P2, was “doxing for the purpose of having advertisers pull from supporting known misinformer influencers, like for example in the case of Andy Ngo.” Proposing a more hybrid hacktivist tactic, P4 suggested “a latent, yet coordinated psychological warfare where psychologists rip apart these people, conduct serious OSINT to find incriminating leaks on them, and even pay for billboards and radio ads to publicly shame them.” Along these lines, P11 even suggested throwing the book at them, “targeting them with a social engineering attack and attempting to compromise a piece of their core infrastructure, be that their servers, Internet access, or bot credentials.”
7.2 Misinformation Literacy
Hacktivists in our sample echo the sentiments found in the scientific literature regarding social media users’ susceptibility to false information: laziness to check facts [P2] [89], resistance to authoritative suggestions [P7] [57], allegiance [P13] [120], and simple ignorance [P16] [17]. As people who resort to action, hacktivists do feel an obligation to propose ways of addressing this susceptibility. In the view of P5, “misinformation needs to be seen as something everyone is being watched for, and not just one group of people on the left or the right.” A “misinformation social contract” [136] necessitates interventions such as “a critical thinking curricula in schools” [P18], “teaching hacking OpSec skills as social responsibility and rise to action” [P5], and “forcing professional communication norms on platforms” [P16].
As hacktivists have little control over these interventions, they were happy to help with the development of “truth-spreading bots for a ‘standoff’ with misinformation-spreading bots” as something that could augment the practice of leaks, doxing, and exposure [P13]. They recognized that these “truth-spreading bots” must help ordinary users better find and locate facts, as information literacy is the single most effective intervention for dispelling falsehoods [55]. Hacktivists reiterate that platforms have to let “misinformation float on social media and make bots visible, so they get overwhelmed with factual information” in order to demonstrate to ordinary users how to help themselves [P14].
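To ground this idea, the sketch below illustrates, at a purely conceptual level, what a minimal “truth-spreading bot” of the kind P13 and P14 describe might look like. It is an illustrative assumption on our part, not anything the participants specified: all names (Post, FACT_CHECK_INDEX, match_known_claim, reply_with_facts) are hypothetical, no real platform API is used, and the naive substring matcher stands in for the claim-matching models a real deployment would need.

```python
# Hypothetical sketch only: no real platform API is assumed, and the
# claim matching is a deliberately naive stand-in.
from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    author: str
    text: str
    author_is_bot_labeled: bool  # assumes the platform exposes a bot label

# Toy index mapping known false claims to fact-check URLs (fabricated example).
FACT_CHECK_INDEX = {
    "claim-x": "https://example.org/fact-check/claim-x",
}

def match_known_claim(text: str) -> str | None:
    """Naive substring match; a real system would need claim-matching models."""
    lowered = text.lower()
    for claim, url in FACT_CHECK_INDEX.items():
        if claim in lowered:
            return url
    return None

def reply_with_facts(post: Post, publish) -> bool:
    """If a bot-labeled account repeats a known false claim, reply with the
    fact-check so ordinary users can locate the factual source themselves."""
    url = match_known_claim(post.text)
    if url is None or not post.author_is_bot_labeled:
        return False
    publish(in_reply_to=post.post_id, text=f"Fact-check for this claim: {url}")
    return True

if __name__ == "__main__":
    # Stub publisher that just prints, standing in for a platform posting call.
    demo = Post("1", "bot123", "Breaking: CLAIM-X confirmed!", True)
    reply_with_facts(demo, lambda **kw: print(kw))
```

The design choice the participants emphasize, making bot accounts visible and meeting them with factual sources rather than removal, is reflected here in the bot-label check and the reply-with-a-source behavior.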
Regardless of whether these stances are realistic, the hacktivists in our sample believe that the current approach to raising misinformation literacy is ineffective because it does not signal an “unbiased attitude” [P7] to the social media users in the wrong. Instead of an educational and respectful tone, “rather a ‘cancel culture’ infused or a ‘your opinion is wrong’ tone” [P3] plagues any attempt to help people navigate to and locate factual information. Rejection of misinformation, as a result of misinformation literacy, must come with an agreement that “scientific facts do not have political properties, even if the social media platforms inherently do” [P5].
7.3 Misinformation Hacktivism
The participants in our sample acknowledge that orchestrated misinformation hacktivism, bar individual instances of ops against misinformers, is largely absent from social media. For hacktivists to take up misinformation as a cause worthy of action, the conflict between the past “hacking for political causes” [58] and the future “hacking against using falsehoods in furthering political causes” [22] must be resolved. Though this conflict is complex and evolving, several of the hacktivists worried that it could nevertheless create a “division between the hacktivists on political lines” [P2].
As a related threat to misinformation activism, one participant mentioned the hijacking of the hacktivist image for self-promotion, e.g. “some like to portrait themselves as woke gods of the web with zero fuck-ups” [P12]. Another threat is the temptation to use misinformation against misinformation, as in the #OpJane campaign [P10]. While this strategy is true to the “fight-fire-with-fire” approach, it might backfire in circumstances where abiding by the hacktivist ethic comes secondary to expressing social and political angst on social media [79]. On top of this, one could argue that this conflict might be particularly hard to resolve in the case of misinformation as external propaganda, because even if hacktivists are “hacking for the homeland,” they are nonetheless doing it on political terms [26].