Soheil Feizi’s Post

Soheil Feizi

Founder & CEO, Associate Professor, AI/ML

LLMs are powerful but prone to 'hallucinations': false yet plausible outputs. In our #NeurIPS2024 paper, we introduce a compute-efficient method for detecting hallucinations in single responses using hidden states, attention maps, and output probabilities. Our approach achieves >45x speedups over traditional methods while significantly boosting detection accuracy across diverse datasets. Link to the paper: https://lnkd.in/eBWNXRkx Joint work with Gaurang Sriramanan, Siddhant Bharti, Vinu Sankar Sadasivan, Shoumik Saha, and Priyatham Kattakinda.
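
The paper has the precise method; as a rough illustration, here is a minimal Python sketch of extracting the three internal signals the post mentions (hidden states, attention maps, and output probabilities) from a Hugging Face causal LM, plus a simple eigenvalue summary of the hidden states, in the spirit of the eigenvalue analysis noted in the comments. This is not the paper's detector: the model choice ("gpt2"), the example sentence, and feature names like top_eig_ratio are illustrative assumptions.

```python
# Minimal sketch (not the paper's exact method): pull hidden states,
# attention maps, and per-token output probabilities from a causal LM,
# then summarize them into a few scalar features that a lightweight
# hallucination classifier could consume.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # placeholder; swap in the LLM you want to inspect
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

text = "The Eiffel Tower is located in Berlin."  # candidate response to score
inputs = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    out = model(**inputs, output_hidden_states=True, output_attentions=True)

# Output probabilities: log-probability the model assigns to each observed token.
log_probs = torch.log_softmax(out.logits[0, :-1], dim=-1)
token_lp = log_probs.gather(1, inputs.input_ids[0, 1:, None]).squeeze(-1)

# Hidden states: last layer, shape (seq_len, hidden_dim).
H = out.hidden_states[-1][0]

# Hypothetical eigenvalue feature: spectrum of the token Gram matrix,
# measuring how concentrated the internal representation is.
eigvals = torch.linalg.eigvalsh(H @ H.T)  # ascending, nonnegative

# Attention maps: mean entropy of the last layer's attention distributions.
attn = out.attentions[-1][0]  # (heads, seq_len, seq_len)
attn_entropy = -(attn * (attn + 1e-12).log()).sum(-1).mean()

features = {
    "mean_logprob": token_lp.mean().item(),
    "top_eig_ratio": (eigvals[-1] / eigvals.sum()).item(),
    "attn_entropy": attn_entropy.item(),
}
print(features)
```

One forward pass yields all three signals, which hints at why scoring a single response can be far cheaper than detection schemes that require many sampled generations.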

Mehrin Kiani, PhD

ML Scientist @ Protect AI | PhD in Computer Science

1mo

Interesting read! Liked how the proposed method uses eigenvalue analysis of an LLM's internal representations to determine if a given text is hallucinated. I would also hypothesize that a greater understanding of an LLM's brain map might inform us about its capabilities and limitations.

Tom Hurst

Assistant Director of Graduate Education at University of Maryland College Park

1mo

Maybe someday LLMs will be ironically replaced by non-hallucinating Large Synaptic Diagrams.
