Synthetic Lies: Misinformation in the Age of Large Language Models
May 31 @ 10:00 am - 11:00 am
Over the past decade, large language models (LLMs) have rapidly evolved, demonstrating remarkable capabilities in generating text that is almost indistinguishable from human-written content and, in some cases, even perceived as more credible. As LLM tools like ChatGPT increasingly penetrate public discourse, it is critical to understand the potential risks posed by their scalability, effectiveness, and customizability. This talk presents our research examining the characteristics of AI-generated misinformation compared to human-created misinformation. Our work also evaluates the applicability of two common misinformation solutions: detection models and assessment guidelines. By highlighting the challenges posed by AI-generated misinformation, I will conclude by discussing implications for the future development of intervention strategies, detection models, and responsible design of LLM technologies.
Jiawei Zhou is a PhD student in Human-Centered Computing at the Georgia Institute of Technology, specializing in Human-AI Interaction and Social Computing. She adopts a theory-guided approach using quantitative and qualitative methods to understand the impacts of collective narratives (such as misinformation, hate speech, and counterspeech) and the role of generative AI in addressing or exacerbating related societal challenges. In particular, her work addresses real-world challenges such as harmful content, responsible use of language models, and social support for vulnerable groups. Her research has been published in top-tier computer science venues including ACM CHI, CSCW, UbiComp/IMWUT, and IEEE ICHI. She has received a paper award at CHI, and her work has been supported by grants from NSF, CDC, and NIH.