AI-generated content (AIGC)
Large Language Models
Mechanisms of belief
Visual misinformation
Personality
🎓 PhD in Psychology, 2021-2025
The University of Hong Kong (HKU)
🎓 B.S. in Psychobiology, 2016-2020
Minor in Cognitive Science
Specialization in Computing
University of California, Los Angeles (UCLA)
Python, R, Qualtrics, MATLAB (EEGLAB), PsychoPy, Jamovi
LLM APIs (e.g., ChatGPT, DeepSeek, Llama, Qwen), Midjourney
English, Chinese (Mandarin), Chinese (Cantonese), Japanese, French
Summary:
Advances in artificial intelligence (AI) allow the rapid creation of AI-synthesized images. In a preregistered experiment, we examined how properties of AI-synthesized images influence belief in misinformation and memory for corrections. Realistic and probative (i.e., providing strong evidence) images predicted greater belief in false headlines. We also found preliminary evidence that directing attention to properties of images can selectively lower belief in false headlines. Our findings suggest that advances in photorealistic image generation will likely increase susceptibility to misinformation, and that future interventions should consider shifting attention to images.
Summary:
AI-generated images are becoming increasingly realistic, raising concerns about a potential “infodemic” in which visual evidence online can no longer be trusted. In this preregistered study, we tested whether media literacy guidance can reduce susceptibility to AI-generated visual misinformation. Participants received either specific tips on how to identify AI-generated images, general misinformation-detection tips, or no guidance. Specific tips significantly improved participants’ ability to discern between true and false headlines and reduced belief in AI-generated misinformation more than general tips did. However, both interventions also slightly reduced belief in real headlines. The findings suggest that as AI-generated imagery becomes more widespread, targeted guidance on how to detect it may be more effective than general media literacy advice, while also highlighting the challenge of preserving trust in accurate information.
Summary:
Corrections alone rarely erase the impact of misinformation. In this EEG study, we examined how the brain encodes and later evaluates corrected information. Providing an alternative explanation alongside a retraction significantly reduced the continued influence effect and enhanced neural markers of memory encoding and recollection during later truth judgments. The findings suggest that replacing misinformation with a plausible alternative strengthens corrective processing in memory, offering new insight into the neurocognitive mechanisms that make misinformation so persistent and into how to counter it more effectively.
Last Update: 12 Mar. 2026