Computing

Comedians vs. AI: The Ethics of Satire and Safety Filtering | HackerNoon

News Room · Published 7 March 2026, last updated 8:40 AM

Table of Links

Abstract and 1. Introduction

  1. Methods
  2. Quantitative Results and Creativity Support Index
  3. Qualitative Results from Focus Group Discussions
  4. Discussion
  5. Mitigations and Conclusion and Acknowledgments
  6. Ethical Guidance References

A. Related Work on Computational Humour, AI and Comedy

B. Participant Questionnaire

C. Focus

ABSTRACT

We interviewed twenty professional comedians who perform live shows in front of audiences and who use artificial intelligence in their artistic process as part of 3-hour workshops on “AI x Comedy” conducted at the Edinburgh Festival Fringe in August 2023 and online. The workshop consisted of a comedy writing session with large language models (LLMs), a human-computer interaction questionnaire to assess the Creativity Support Index of AI as a writing tool, and a focus group interrogating the comedians’ motivations for and processes of using AI, as well as their ethical concerns about bias, censorship and copyright. Participants noted that existing moderation strategies used in safety filtering and instruction-tuned LLMs reinforced hegemonic viewpoints by erasing minority groups and their perspectives, and qualified this as a form of censorship. At the same time, most participants felt the LLMs did not succeed as a creativity support tool, by producing bland and biased comedy tropes, akin to “cruise ship comedy material from the 1950s, but a bit less racist”. Our work extends scholarship about the subtle difference between, on the one hand, harmful speech, and on the other hand, “offensive” language as a practice of resistance, satire and “punching up”. We also interrogate the global value alignment behind such language models, and discuss the importance of community-based value alignment and data ownership to build AI tools that better suit artists’ needs. Warning: this study may contain offensive language and discusses self-harm.

1 INTRODUCTION

1.1 Motivation: investigate the potential and implications of LLMs for comedy writing

Recent work on the intersection of AI and comedy has demonstrated [24, 54, 68, 70, 99, 106] an appetite for comedians to (try to) write humorous material using AI tools like Large Language Models (LLMs). We conducted an empirical study to better understand the current state of LLMs as comedy-writing support tools, their use cases and limitations, and artists’ opinions on ethical questions regarding their use in a comedy-writing context. The complexity of comedy can help expose some limitations of LLMs. To participate, we recruited 20 professional comedians who use AI in their artistic processes and who perform live shows in front of audiences (10 in person at the Edinburgh Festival Fringe, 10 online) for a 3-hour-long workshop on “AI x Comedy”. As detailed in Section 2, the workshop consisted of a comedy writing session with generally-available instruction-tuned LLMs (ChatGPT [77, 79] and Bard [26, 98]), a human-computer interaction questionnaire, and focus group discussions on the use of LLMs in comedy writing and ethical concerns.

1.1.1 Using LLMs for humour, a task with human-level difficulty. Trying to combine humour and machine intelligence is a longstanding subject of scientific enquiry [85, 93, 102], and is perceived as a fundamental challenge. According to computational humour researchers like Winters [106], “humans are the only known species that use humor for making others laugh” [20, 38]. Winters [106] argues that one of the modern formal humor theories points to incongruity [51] (whereby the setup points in one direction and the punch line in another) as a basic element [38, 85, 87] [1]. As we discuss in Section 5.2.2, producing and resolving incongruity is a task with human-level difficulty. We situate LLMs for comedy within broader computational humour research and AI-assisted comedy performance [67, 71, 88] in Appendix A.

1.1.2 The utility of instruction-tuned LLMs as Creativity Support Tools. Similar to previous empirical studies on the use of LLMs for creative writing [19, 21–23, 37, 49, 53, 71, 83, 114], we asked the artists about their motivations and processes for using LLMs. We asked about the potential and limitations of language models as Creativity Support Tools [22], and quantified the Creativity Support Index of LLMs for comedy writing [25]. We report our results in Section 3.

1.1.3 Socio-Technical Systems concerns with LLMs for creative writing. Inspired by Dev et al. [29], we leveraged community engagement with large generative models, and interrogated the diverse, intersectional identities of the comedians using AI in a creative context. In addition to their reasons for using (or not using) LLMs as comedy-writing tools, we asked participants about the ethical considerations of using AI. The study was conducted at the time of the Writers’ Guild of America (WGA) 2023 strike [75]. Participants raised and addressed questions on the scrutiny of AI and on concerns around AI’s impacts, both on intellectual property and artistic copyright, and on artists’ livelihoods. We report their opinions in Section 4 and discuss these concerns in Section 5.3.

1.2 Investigating hypotheses about using LLMs to write comedy

Based on previous work and on the authors’ personal experience of AI as creativity support tools, we hypothesized—prior to conducting our study—that participants would express negative opinions of LLMs for co-creativity on four issues: expressing stereotyped (Sect. 1.2.1) or bland (Sect. 1.2.4) language, censorship (Sect. 1.2.2) and missing context (Sect. 1.2.3). We review literature on these four hypotheses below, as they become the basis of our mixed-methods study.

1.2.1 Biases in large language models. Gender and racial biases embedded within machine learning models have been extensively documented [1, 11, 12, 14, 15, 18, 30, 42, 94, 105]. These biases include sexism and racism [11, 94], homophobia and transphobia [31, 84], Islamophobia [1], the perpetuation of Western colonial mindsets [73], Anglocentrism [115], and in-group vs. out-group social identity biases [48]. In their extensive reviews, Bommasani et al. [15], Rauh et al. [86] identified two broad kinds of harms resulting from such biases: intrinsic harms such as representational bias (due to misrepresentation, overrepresentation, and underrepresentation of specific social groups), and extrinsic harms, the downstream consequences of biased models, including representational and performance disparities. As we show in Section 4, the study participants noticed a few examples of representational harm and many examples of underrepresentation harm (also called allocational harm in [86]), such as erasure when LLMs refused to generate content for certain demographic groups.

1.2.2 Potential censorship of speech labeled as “offensive”. Comedians often pepper their language with profanities and their material with provoking themes. As we discuss in Section 5.1.2 (and confirmed by the study participants in Section 4), offensive language that would be perfectly acceptable at a comedy club may get “censored” by instruction-tuned LLMs that “refuse” to answer “offensive” prompts.

This problem has been observed in automated moderation of online content, such as hate speech detectors that suppress social media posts by queer communities and drag queens [31, 32], or posts using African-American Vernacular English [3, 112]. Amironesei and Díaz [3] called it censorship. Rauh et al. [86] studied algorithmic moderation of social media posts by the Perspective API, noting that “authors of the comment may be harmed if their content is incorrectly flagged as toxic” by the moderation tool. Similar erasure due to the cultural hegemony embedded in image generators has been studied in [83]. We relate the participants’ experience, similarly frustrated that the LLM tools “considered” their own identity and comedy material as problematic and necessary to censor.

Díaz et al. [32] defined offensive language as non-normative “language that uses terminology that is noted as offensive but which is not perceived as offensive in particular contexts of use”, and studied its use by minorities as a form of resistance, for “socially productive uses of decoratively offensive language”, aiming to reclaim “offensive” language and resist oppression. Just like the minority groups described in [32], many comedians (who may be members of minority groups themselves) often use offensive jokes to punch up, and satire (“to challenge existing social structures”) to build empathy, rather than to punch down (“silence others”).

1.2.3 Missing context. Context is key to disambiguate offensive language from hate speech. LLMs, like social media posts, cause “context collapse” [65] by providing a limited amount of information to understand their meaning, particularly when using mock impoliteness. Specifically, “in-group usage of reclaimed slurs can be considered acceptable, depending on who uses them” [28, 86]. Moreover, the context of comedy extends beyond the language to other factors including the audience and the venue.
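The “context collapse” described above can be made concrete with a toy example: a filter that scores surface tokens alone flags the same sentence regardless of who says it or where, which is exactly how in-group reclaimed language becomes collateral damage. The blocklist term, function names and venue signal below are all hypothetical stand-ins, not any real moderation system or API.

```python
# Toy illustration of context collapse in keyword-based moderation.
# "rude_word" is a placeholder for a term a filter treats as toxic;
# nothing here reflects a real moderation system.

BLOCKLIST = {"rude_word"}

def naive_filter(text):
    # Flags on surface tokens alone: no speaker, venue, or intent.
    return any(tok.strip(".,!?").lower() in BLOCKLIST
               for tok in text.split())

def context_aware_filter(text, speaker_in_group, venue):
    # Sketch of the extra signals argued for in Section 1.2.3:
    # the same words may be acceptable in-group at a comedy club.
    if naive_filter(text) and not (speaker_in_group
                                   and venue == "comedy club"):
        return True
    return False

line = "That was a rude_word of a show!"
print(naive_filter(line))                               # True
print(context_aware_filter(line, True, "comedy club"))  # False
```

The naive filter cannot distinguish mock impoliteness among in-group members from hate speech directed at them; the second function merely gestures at the kind of contextual signals (speaker identity, venue, audience) that token-level moderation discards.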

1.2.4 Homogenisation. Bommasani et al. [15] warned that “the application of foundation models across domains has the potential to act as an epistemically and culturally homogenising force, spreading one perspective, often a socially dominant one, across multiple domains of application”. In the arts, this means that AI-generated artifacts may lead to a homogenisation of aesthetic styles [34, 104], further reinforced by curation algorithms [34, 64]. For creative writing, empirical studies showed that instruction-tuned LLMs reduce the diversity of content in co-writing tasks [80], and that LLM-generated stories did not pass the Torrance Test of Creative Writing according to metrics of “fluency, flexibility, originality and elaboration” [21]. Similarly to Qadri et al. [83], we ran focus groups with artists to interrogate cultural (Western) biases (see Sections 2.3 and 2.4).

1.2.5 Investigating hypotheses via a mixed-methods study. In our empirical study, we ask participants questions on all four problems identified in Section 1.2, namely about bias, censorship, context and homogeneity. In Section 5, we build upon scholarship on cultural value alignment of language models, the moderation of offensive and harmful speech, and the use of offensive speech and satire as a form of resistance, to revisit the global cultural value alignment of LLMs and propose community-based alignment to build LLMs that better suit comedians’ creative needs[2].

:::info
This paper is available on arxiv under CC BY 4.0 license.

:::

:::info
Authors:

(1) Piotr W. Mirowski∗, Google DeepMind London, UK ([email protected]);

(2) Juliette Love∗, Google DeepMind London, UK ( [email protected]);

(3) Kory Mathewson, Google DeepMind Montréal, QC, Canada ([email protected]);

(4) Shakir Mohamed, Google DeepMind London, UK ([email protected]).

:::

∗Both authors contributed equally to this research.

[1] Alternative humour theories include the Aristotelian Relief Theory [5] of tension and release whereby we let out our psychic energy connected with repressed topics, the Superiority Theory [46] whereby we laugh at others’ misfortunes to feel better about ourselves, and the Benign Violation Theory [69].

[2] We deliberately do not generalize our findings beyond comedy, as some professionals and the computational creativity community have historically embraced LLM tools in a way that fits their creative practice, whether building on the glitch aesthetic [64, 82] or designing interactive experiences [81].
