The Data Science Behind r/antiwork’s Upvotes | HackerNoon


Table of Links

Abstract and 1. Introduction

2. Related Work

3. Methodology

4. Results

5. Discussion

6. Conclusion, References, and Appendix

3 METHODOLOGY

3.1 Data

We downloaded all posts and comments on the r/antiwork subreddit from January 1 2019 to July 31 2022 using the PushShift API[9] [3]. We only considered posts with at least one associated comment; this criterion serves as a proxy for filtering out duplicate posts referencing the same event, off-topic and spam posts, and posts that received no user engagement for other reasons. The resulting data set contained 304,096 posts and 12,141,548 comments. The posts were made by 119,746 users (posters) and the comments by 1,298,451 users (commenters).

We preprocessed the data set to remove comments that could potentially bias our analysis. We filtered out comments that: (i) were removed by users or moderators but remain in the data set as placeholders (comments are typically removed for violating community guidelines), or (ii) were posted by bots (e.g. the AutoModerator bot, or comments whose body began “I am a bot…”, as many do by convention). After filtering, 11,665,342 comments remained in the data set (96.1%). We then removed posts that had zero comments after filtering, leaving 284,449 posts (93.5%).
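
As a rough illustration of this filtering step, the sketch below assumes PushShift-style JSON-lines dumps with body, author, id and link_id fields; the file names and placeholder strings are assumptions rather than details taken from the paper.

```python
import pandas as pd

# Sketch of the comment-filtering step, assuming PushShift-style JSON-lines
# dumps; field names and placeholder strings are assumptions.
comments = pd.read_json("antiwork_comments.jsonl", lines=True)
posts = pd.read_json("antiwork_posts.jsonl", lines=True)

# (i) Comments removed by users or moderators remain as placeholder bodies.
placeholder = comments["body"].isin(["[removed]", "[deleted]"])

# (ii) Bot comments: the AutoModerator account, or bodies that begin with
#      the conventional "I am a bot" disclaimer.
bot = (comments["author"] == "AutoModerator") | (
    comments["body"].str.lstrip().str.startswith("I am a bot")
)

filtered_comments = comments[~placeholder & ~bot]

# Drop posts left with zero comments after filtering.
commented_ids = filtered_comments["link_id"].str.removeprefix("t3_")
filtered_posts = posts[posts["id"].isin(commented_ids)]
```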

3.2 Definitions

3.2.1 User Types. In our analysis, we compare the behaviour of two groups of users that we refer to as “light” and “heavy” users of r/antiwork. We define light posters or commenters as those with only a single post or comment in the data set, respectively. A majority of posters are light posters (75.1%) and a high percentage of commenters are light commenters (42.5%). We define heavy posters or commenters as the top 1% of users ranked in descending order by number of posts or comments, respectively. Overall, heavy posters made 10.1% of posts and heavy commenters were responsible for 29.8% of comments.
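
A minimal sketch of how these groups can be derived, continuing from the filtered comment table in the sketch above (the same logic applies to posts and posters):

```python
# Light/heavy commenter split on the filtered comment table.
counts = filtered_comments.groupby("author").size().sort_values(ascending=False)

light_commenters = counts[counts == 1].index                            # exactly one comment
heavy_commenters = counts.head(max(1, int(len(counts) * 0.01))).index   # top 1% by comment count

print(f"Light commenters: {len(light_commenters) / len(counts):.1%} of all commenters")
print(f"Heavy commenters wrote {counts.loc[heavy_commenters].sum() / counts.sum():.1%} of comments")
```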

3.2.2 Time Periods. For our topic modeling analysis, we divided the data set into three time periods:

• Period 1: January 1 2019–October 14 2021

• Period 2: October 15 2021–January 24 2022

• Period 3: January 25 2022–July 31 2022

These periods are delineated by two events in the mainstream media: the publication of a Newsweek article[10], which was the first example of a mainstream media article linking to a viral post[11] on r/antiwork (October 15 2021), and the Fox News interview with Doreen Ford (January 25 2022). Period 2 is highlighted as a grey box in all figures where the x-axis represents time.
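
A small sketch of how posts might be assigned to these periods, assuming a Unix created_utc timestamp column (as in PushShift data) on the filtered post table from the earlier sketch:

```python
import pandas as pd

# Assign each post to one of the three analysis periods.
edges = pd.to_datetime(["2019-01-01", "2021-10-15", "2022-01-25", "2022-08-01"], utc=True)

filtered_posts = filtered_posts.assign(
    period=pd.cut(
        pd.to_datetime(filtered_posts["created_utc"], unit="s", utc=True),
        bins=edges,
        labels=["Period 1", "Period 2", "Period 3"],
        right=False,  # left-closed bins: October 15 2021 is the first day of Period 2
    )
)
```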

3.3 Change Point Detection

We use Classification And Regression Trees (CART) for change point detection [5]. CART is a non-parametric method that uses a decision tree to recursively segment the predictor space into purer, more homogeneous intervals (often called “splitting”). This segmentation process is terminated by a complexity parameter that regularises the cost of growing the tree by adding a penalty for each additional partition (“pruning”). In our case, we fit a regression tree with the number of posts or comments as the dependent variable and each day from January 1 2019–July 31 2022 as the predictor space. We used the rpart R package to create the regression models [32], the Gini index for splitting, and a complexity parameter of 0.01 for pruning.

Figure 2: Total number of daily posts submitted to r/antiwork that received at least one comment. A large proportion of posts (29.6%) were made by light posters. Red dashed lines are results from change point detection.

Figure 3: Total number of daily comments on r/antiwork. A large proportion of comments (29.8%) were made by heavy commenters. Red dashed lines are results from change point detection.
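
The paper fits the tree with the rpart package in R; the sketch below illustrates the same idea in Python with scikit-learn's DecisionTreeRegressor, where ccp_alpha plays a role only loosely analogous to rpart's complexity parameter, so this is an analogy rather than the authors' code.

```python
import numpy as np
import pandas as pd
from sklearn.tree import DecisionTreeRegressor

# Regression-tree change point detection on the daily post counts,
# continuing from the filtered post table in the earlier sketch.
days = pd.to_datetime(filtered_posts["created_utc"], unit="s", utc=True).dt.floor("D")
daily_posts = days.value_counts().sort_index().asfreq("D", fill_value=0)

X = np.arange(len(daily_posts)).reshape(-1, 1)  # day index as the only predictor
y = daily_posts.to_numpy()

# ccp_alpha controls scikit-learn's cost-complexity pruning; 0.01 * var(y)
# only roughly mirrors rpart's cp = 0.01 and would need tuning in practice.
tree = DecisionTreeRegressor(ccp_alpha=0.01 * y.var())
tree.fit(X, y)

# Internal split thresholds correspond to detected change points.
split_positions = sorted(t for t in tree.tree_.threshold if t >= 0)
change_points = [daily_posts.index[int(t)] for t in split_positions]
```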

3.4 Topic Modelling

We use Latent Dirichlet Allocation (LDA) for topic modelling [4]. LDA is a generative model that defines a set of latent topics by estimating the document-topic and topic-word distributions within documents for a predefined number of topics. In our case, we consider each post to be a document and the contents of that document to be the concatenation of all comments on that post. We do not include the post text as part of the document because a large proportion of post bodies are composed of images. We preprocessed comments for topic modelling by removing URLs and stop words, replacing accented characters with their ASCII equivalents, replacing contractions with their constituent words, and lemmatizing all words. Finally, we filtered out posts with fewer than 50 comments, leaving 11,368,863 comments (97.5%) across 181,913 posts (64.0%) for topic modelling.
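
The preprocessing pipeline might look roughly like the sketch below; the specific libraries (spaCy for stop words and lemmatisation, the contractions package for expanding contractions) are assumptions, not necessarily what the authors used.

```python
import re
import unicodedata

import contractions  # pip install contractions
import spacy         # lemmatisation and stop-word removal

nlp = spacy.load("en_core_web_sm", disable=["parser", "ner"])
URL_RE = re.compile(r"https?://\S+")

def preprocess(text: str) -> list[str]:
    text = URL_RE.sub(" ", text)                                                    # remove URLs
    text = unicodedata.normalize("NFKD", text).encode("ascii", "ignore").decode()   # ASCII-fold accents
    text = contractions.fix(text)                                                   # expand contractions
    doc = nlp(text.lower())
    return [tok.lemma_ for tok in doc if tok.is_alpha and not tok.is_stop]          # lemmatise, drop stop words
```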

LDA was applied to each of the three time periods separately (see Section 3.2.2). Periods 1, 2 and 3 contained 40,794; 71,470; and 69,649 posts, respectively. We evaluate the quality of topic models using the C_UCI coherence score [24] to select the optimal number of topics. Each topic was labelled by a human annotator with knowledge of r/antiwork, and topics were aligned between models using those labels and the Jensen-Shannon distance between topic-word distributions. Topic modelling was performed using the Gensim Python library [26].
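
A compressed sketch of the per-period model selection with Gensim, where documents stands for the preprocessed token lists of a single period (e.g. produced by the preprocess() sketch above) and the candidate range of topic counts is an assumption:

```python
from gensim.corpora import Dictionary
from gensim.models import CoherenceModel, LdaModel

# Build the bag-of-words corpus for one time period.
dictionary = Dictionary(documents)
corpus = [dictionary.doc2bow(doc) for doc in documents]

# Fit LDA for several candidate topic counts and score each with C_UCI coherence.
scores = {}
for k in range(5, 31, 5):
    lda = LdaModel(corpus=corpus, id2word=dictionary, num_topics=k, random_state=0)
    cm = CoherenceModel(model=lda, texts=documents, dictionary=dictionary, coherence="c_uci")
    scores[k] = cm.get_coherence()

best_k = max(scores, key=scores.get)  # number of topics with the highest coherence
```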


[9] https://pushshift.io/

[10] https://www.newsweek.com/1639419

[11] https://www.reddit.com/r/antiwork/comments/q82vqk/
