MTU Cybersecurity Colloquium

Organizer

Dr. Bo Chen (Computer Science)

Coordinators

Dr. Xinyu Lei (Computer Science)
Dr. Kaichen Yang (Electrical and Computer Engineering)
Dr. Ronghua Xu (Applied Computing)


Next Colloquium

March 23, 2026
Building 19 (ChemSci), Room 104
12:00 PM - 1:00 PM

Mirage in the Eyes: Hallucination Attack on Multi-modal Large Language Models with Only Attention Sink

Presenter: Haoyang Chen

Fusing visual understanding into language generation, Multimodal Large Language Models (MLLMs) are revolutionizing visual-language applications. Yet, these models are often plagued by the hallucination problem, which involves generating inaccurate objects, attributes, and relationships that do not match the visual content. In this work, we delve into the internal attention mechanisms of MLLMs to reveal the underlying causes of hallucination, exposing the inherent vulnerabilities in the instruction-tuning process. We propose a novel hallucination attack against MLLMs that exploits attention sink behaviors to trigger hallucinated content with minimal image-text relevance, posing a significant threat to critical downstream applications. Distinguished from previous adversarial methods that rely on fixed patterns, our approach generates dynamic, effective, and highly transferable visual adversarial inputs, without sacrificing the quality of model responses. Comprehensive experiments on 6 prominent MLLMs demonstrate the efficacy of our attack in compromising black-box MLLMs even with extensive mitigating mechanisms, as well as the promising results against cutting-edge commercial APIs, such as GPT-4o and Gemini 1.5. Our code is available at https://huggingface.co/RachelHGF/Mirage-in-the-Eyes.

Past Colloquia