Generative Images - Generative Imageries: Challenges of Visual Communication (Research) in the Age of AI
Visual Communication Conference
By Isaac Bravo in conference
November 20, 2024
Abstract
Here is my presentation, titled "Computational Analysis of Manipulated Visual Content in Climate Change Discourse on Twitter," given at the conference Generative Images - Generative Imageries: Challenges of Visual Communication (Research) in the Age of AI in Bremen, Germany.
Date
November 20 – 22, 2024
Time
12:00 AM
Location
Bremen, Germany
Abstract:
The integration of Artificial Intelligence (AI) into visual communication, particularly within climate change debates, marks a key shift in how public discourse is shaped by media (Chian & Lee, 2023; Krishnan et al., 2023). At the same time, manipulated images pose a misinformation risk when viewers cannot judge the credibility of what they see (He, 2021). We focus on the polarized topic of climate change to explore how manipulated images shared on Twitter may contribute to polarizing debates between believers in and sceptics (deniers) of anthropogenic climate change. This study contributes to two important aspects of the debate on visual communication in the age of AI: a) how manipulated images spread on social media, and the resulting impact on public debates, and b) how methodological advances in machine learning and computer vision can help scientists detect and analyse manipulated images.
Climate change is a global phenomenon that has received considerable media attention in recent decades (IPCC, 2022). Experts have recognized the urgency of addressing the impacts of this phenomenon and understanding how people perceive and engage with it (Falkenberg et al., 2022). In the digital media environment, the spread of climate change information on social media, especially visual content, has changed how individuals understand this phenomenon and how it encourages collective action (Pearce et al., 2019). Despite the scientific evidence for the anthropogenic origin of climate change, contrarian voices still reject this reality and the related risks, though in many cases these are minority opinions (Whitmarsh, 2011). The causes of such positions include disagreement between scientists (Patt, 2007) and people's attitudes and beliefs (Kahan et al., 2012).
Social media allows the circulation of opinions and generative content that may support or deny certain beliefs and/or facts. The emergence of generative image models has made it much easier for people to create 'deep fake' or synthetic images, and, simultaneously, computer science research has begun to improve their detection (Guan, 2022). Generative visual content can, for example, exaggerate or misrepresent climate change-related phenomena, mislead individuals, fuel scepticism about the veracity of climate change, and potentially erode trust in media and scientific institutions (Capstick & Pidgeon, 2014). Because computational studies in this area are still underdeveloped, previous work has mainly analysed visual elements of climate change using qualitative approaches and small samples (Schäfer, 2020; Harb et al., 2020; Metag et al., 2016). Notably, there is a lack of studies on the prevalence and impact of manipulated images and on how users interact with this type of visual content related to climate change on social media.
This study aims to answer the following research question: Do 'real' vs manipulated images about climate change on Twitter lead to different levels of engagement and interaction between believers and sceptics? We adopt a multimodal, computational approach combining automated image and text analysis to examine more than one million images and replies shared by Twitter users between 2019 and 2022 in the context of climate change. For data collection, we sampled all English-language tweets that included an image and the term "climate change" or the hashtag "#climatechange". We use a large language model to separate generative content from real images. We then apply computational techniques such as topic modelling (BERTopic) and Latent Semantic Scaling (LSS) to analyse the comments and classify them as coming from believers or sceptics.
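To illustrate the scoring step behind Latent Semantic Scaling, here is a minimal Python sketch. The word polarities and seed words below are hypothetical placeholders; in the actual method, polarities are fitted from seed words propagated through a latent semantic space (an SVD of the document-term matrix), not assigned by hand. Each comment is scored as the average polarity of its recognized words, with positive scores read as 'believer' and negative as 'sceptic'.

```python
import re
from typing import Dict, List

# Hypothetical word polarities on a believer (+) / sceptic (-) axis.
# In real LSS these values are estimated from seed words and a fitted
# latent semantic space, not hand-assigned.
WORD_POLARITY: Dict[str, float] = {
    "crisis": 1.0, "urgent": 0.8, "science": 0.6,
    "hoax": -1.0, "scam": -0.9, "alarmist": -0.7,
}

def tokenize(text: str) -> List[str]:
    """Lowercase and split a comment into word tokens."""
    return re.findall(r"[a-z']+", text.lower())

def lss_score(comment: str) -> float:
    """Average polarity of the comment's scored words (0.0 if none match)."""
    scores = [WORD_POLARITY[t] for t in tokenize(comment) if t in WORD_POLARITY]
    return sum(scores) / len(scores) if scores else 0.0

def classify(comment: str, threshold: float = 0.0) -> str:
    """Label a comment by which side of the threshold its score falls on."""
    return "believer" if lss_score(comment) > threshold else "sceptic"
```

For example, `classify("The climate crisis is urgent")` averages the polarities of "crisis" and "urgent" and returns "believer", while a comment dominated by negatively scored words such as "hoax" is labelled "sceptic".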
Preliminary results reveal differences in the distribution of engagement between generative and real images, as well as a concentration of user interactions around specific topics related mainly to the consequences of climate change. Furthermore, these differences concern not only the type of visual content engaged with by deniers and believers but also how these users react to it. Overall, believers exhibit more engagement than deniers when exposed to real images. While generative images make up only a small share of the total number of images shared on Twitter, they can generate considerable user engagement and directly impact people's understanding of climate change.
This study makes a theoretical and methodological contribution to the field of visual communication. From a theoretical perspective, we delve into the strategies adopted by believers and sceptics on Twitter, specifically comparing the use of generative and real images in a polarized climate change debate. The methodological contribution lies in applying a computer science approach to detect generative images within climate change-related data. By employing advanced detection techniques, the study also contributes to our understanding of how effective existing models are at discerning authentic from generative content in the climate change debate on social media platforms.