Authors:
(1) Arkadiy Saakyan, Columbia University ([email protected]);
(2) Shreyas Kulkarni, Columbia University;
(3) Tuhin Chakrabarty, Columbia University;
(4) Smaranda Muresan, Columbia University.
Editor’s note: this is part 1 of 6 of a study looking at how well large AI models handle figurative language.
Abstract
Large Vision-Language models (VLMs) have demonstrated strong reasoning capabilities in tasks requiring a fine-grained understanding of literal images and text, such as visual question-answering or visual entailment. However, there has been little exploration of these models’ capabilities when presented with images and captions containing figurative phenomena such as metaphors or humor, the meaning of which is often implicit. To close this gap, we propose a new task and a high-quality dataset: Visual Figurative Language Understanding with Textual Explanations (V-FLUTE). We frame the visual figurative language understanding problem as an explainable visual entailment task, where the model has to predict whether the image (premise) entails a claim (hypothesis) and justify the predicted label with a textual explanation. Using a human-AI collaboration framework, we build a high-quality dataset, V-FLUTE, that contains 6,027 instances spanning five diverse multimodal figurative phenomena: metaphors, similes, idioms, sarcasm, and humor. The figurative phenomena can be present either in the image, the caption, or both. We further conduct both automatic and human evaluations to assess current VLMs’ capabilities in understanding figurative phenomena.[1]
1 Introduction
Figurative language is integral to human communication, enabling a variety of communicative goals (Roberts and Kreuz, 1994), including affective communication (Fussell and Moss, 2014). Figurative language presents a significant challenge to computational approaches as it
requires understanding of the implicit meaning behind an expression (Stowe et al., 2022; Shutova, 2011; Veale et al., 2016; Zhou et al., 2021). Recently, Chakrabarty et al. (2022) proposed a task and dataset for Figurative Language Understanding through Textual Explanations (FLUTE) that frames the problem as an explainable textual entailment task covering a variety of figurative language phenomena in text: metaphors, similes, idioms, and sarcasm. This dataset has been used successfully to advance and benchmark the capabilities of LLMs for understanding figurative language in text (Saakyan et al., 2022; Ziems et al., 2024; Sravanthi et al., 2024; Dey et al., 2024).
However, figurative meaning is also prevalent in visual phenomena, such as visual metaphors (Akula et al., 2023; Chakrabarty et al., 2023), multimodal sarcasm (Desai et al., 2022), and humor (Hessel et al., 2023; Hwang and Shwartz, 2023). Yet so far most of the work on vision and language models (VLMs) has focused on understanding literal meaning in images and captions (e.g., ScienceQA (Lu et al., 2022), MMMU (Yue et al., 2024)) including work on explainable visual entailment (Kayser et al., 2021). Building on the idea of FLUTE (Chakrabarty et al., 2022) for text, we present a new dataset for visual figurative language understanding with textual explanations (V-FLUTE). Our dataset contains 6,027 instances spanning diverse figurative phenomena. Each instance contains an image (premise) and a textual claim (hypothesis) that is either entailed or contradicted by the image. Deciding the entailment relation requires the vision-language model to understand the implicit meaning in both the visual and textual modalities. Our dataset contains figurative phenomena present in the image, in the caption, or in both. In addition, to mitigate the dependence on spurious correlations, to more rigorously investigate reasoning capabilities, and to promote explainability, our task requires the model to generate a plausible explanation for the output label. See Figure 1 for two examples from our dataset.
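The task framing above (image as premise, textual claim as hypothesis, a binary label, and a free-text explanation) can be sketched as a simple record. Note this is an illustrative sketch only: the field names, label strings, and example values are assumptions for exposition, not the dataset's actual schema.

```python
from dataclasses import dataclass

# Hypothetical schema for one V-FLUTE-style instance (field names are
# illustrative assumptions, not the released dataset's actual format).
@dataclass
class VFluteInstance:
    image_path: str    # premise: the image
    claim: str         # hypothesis: a textual claim about the image
    label: str         # "entailment" or "contradiction"
    explanation: str   # free-text justification for the label
    phenomenon: str    # one of: metaphor, simile, idiom, sarcasm, humor

# Invented example for illustration only.
example = VFluteInstance(
    image_path="images/0001.png",
    claim="The office feels like a prison to the worker.",
    label="entailment",
    explanation="The image depicts a cubicle drawn with prison bars, "
                "conveying that the worker feels trapped.",
    phenomenon="metaphor",
)

assert example.label in {"entailment", "contradiction"}
assert example.phenomenon in {"metaphor", "simile", "idiom", "sarcasm", "humor"}
```

A model for this task would consume `image_path` and `claim` and be evaluated on both the predicted `label` and the plausibility of the generated `explanation`.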
We make the following contributions towards assessing VLMs’ ability to understand multimodal figurative phenomena:
• V-FLUTE, a high-quality dataset of 6,027 instances built using a human-LLM collaboration framework covering several phenomena: metaphors, similes, idioms, sarcasm, and humor (Section 3). We will make the dataset available.
• A suite of evaluations to assess current VLMs’ capabilities on this new task of explainable visual figurative entailment (Sections 4.2 and 4.3).
• A detailed human evaluation with error analysis yielding insights into types of errors for different classes of models (Section 5).
[1] Code and data will be available at github.com/asaakyan/V-FLUTE