Authors:
(1) Filipo Sharevski, DePaul University;
(2) Benjamin Kessell, DePaul University.
Table of Links
Abstract and Introduction
2 Internet Activism and Social Media
2.1 Hashtag Activism
2.2 Hacktivism
3 Internet Activism and Misinformation
3.1 Grassroots Misinformation Operations
3.2 Mainstream Misinformation Operations
4 Hacktivism and Misinformation
4.1 Research Questions and 4.2 Sample
4.3 Methods and Instrumentation
4.4 Hacktivists’ Profiles
5 Misinformation Conceptualization and 5.1 Antecedents to Misinformation
5.2 Mental Models of Misinformation
6 Active Countering of Misinformation and 6.1 Leaking, Doxing, and Deplatforming
6.2 Anti-Misinformation “Ops”
7 Misinformation Evolution and 7.1 Counter-Misinformation Tactics
7.2 Misinformation Literacy
7.3 Misinformation Hacktivism
8 Discussion
8.1 Implications
8.2 Ethical Considerations
8.3 Limitations and 8.4 Future Work
9 Conclusion and References
8.1 Implications
The new brand of misinformation, our findings show, draws the ire of the hacktivists, who condemn the hijacking of public discourse for political and propagandistic purposes. The “fight-fire-with-fire” response – leaks, doxing, and deplatforming – though individually employed by some of the participants in our sample, is yet to be orchestrated and tested against serious disinformation outfits that, unfortunately, are still out there on social media [48]. Early evidence outside of the US shows that such orchestration works, as the IT Army leaked data from Russian organizations in response to the troll farms’ disinformation narrative that Ukraine is committing genocide against Russians in the Donbas region [23].
The hacktivists’ resoluteness to go after the misinformers would certainly have implications for content/user moderation on social media, user participation, and the future of Internet activism overall. Moderating users and content on social media was, and still is, the mainstream platforms’ response to political and public health misinformation [108]. Alternative platforms like Gab, Gettr, and Parler, seen as the seeding grounds for this misinformation [133], on the other hand, never employed any content/user moderation, nor do they currently [107]. While content/user moderation incites a migration from the mainstream to the alt-platforms [133], it remains to be seen whether deplatforming will have the same effect. Mainstream social media had a mixed response to leaks and doxing in the past (e.g. allowing WikiLeaks [114] yet barring the Hunter Biden laptop leaks [28]), so this adds further uncertainty as to if and how the hacktivists’ “fight-fire-with-fire” approach will be allowed, moderated, or perhaps even forced to migrate entirely outside of the social media space.
Trolling and memes might still maintain their popularity amongst the misinformers; however, the latest modes of social media participation, like short videos on TikTok, open new “fronts” for both the misinformers and the hacktivists. TikTok has increasingly been tested as the next “battlefield” of alternative narratives, with evidence of health and abortion misinformation [8, 112] and individual engagement by at least one of our participants. Recalling that the hacktivists’ #OpJane was waged in response to the abortion ban laws in Texas and called for “misinformation-against-(mis)information” [38], it remains to be seen how leaks, doxing, and deplatforming will materialize alongside meme-ified videos and trolling. TikTok claims it moderates health and abortion misinformation [123], but evidence shows that this moderation is lax and largely ineffective [14], adding a further incentive for shifting disinformation campaigns to this platform.
TikTok is also the next platform for Internet activism, where hashtag activism is appended with videos expanding developing news narratives, such as the coverage of the Black Lives Matter movement and the Capitol riot [72]. TikTok surfaces content not just from viral hashtags but also from their variations (e.g. #abortion but also #abôrtion [112]), so the threats of hashtag hijacking, co-opting, and counter-hashtagging will inevitably materialize here too. This particular affordance will likely allow deepfakes to be weaponized in the hashtag wars of the near future, as they have already appeared in misinformation videos about the COVID-19 pandemic on TikTok [106]. All of these developments would certainly necessitate a dynamic adaptation in the way doxing, leaking, and deplatforming are performed, not just to avoid the disintegration of Internet activism and hacktivism, but also to prevent another paucity of action like the one that brought state-sponsored misinformation en masse to social media in the first place [42].
8.2 Ethical Considerations
The purpose of our study was not to generalize to a population but rather to explore the contemporary hacktivists’ relationship with misinformation in depth. To avoid misleading readers, we did not report percentages, names, or the tools, tactics, and procedures mentioned during the interviews. A full moral evaluation of the suggested countering and/or utilization of misinformation is out of the scope of this paper, though we condemn any leaks, doxing, exposure, or rumors that could result in individual harm of any form. We were careful in our study not to infringe upon the hacktivists’ aesthetics nor to precipitate any negative actions with our findings.
We would, however, point out that our engagement with, rather than disavowal of, the hacktivists can help refine and revisit some of the over-simplistic portraits of hacktivism as toxic vigilantism, nihilism, and criminality [39]. While we maintain that each operation – misinformation-hacktivism-related or not – has to be morally justified on its own, we find it reasonable to identify with the ideas and suggestions put forth by the hacktivists in our study, as they conform with Levy’s hacker ethos [65] and the democratic vision of the Internet [43]. We also accept and support the “fight-fire-with-fire” course of action identified in our findings, as it seeks to fill a resistance void arising from the mismatch in scale between institutional regulation, the lax participation policies and perverse incentives of the platforms, and the experience of living with misinformation in our everyday discourse [105].