2025
Trust Me, I'm Wrong: High-Certainty Hallucinations in LLMs.
CoRR, 2025

2024
Distinguishing Ignorance from Error in LLM Hallucinations.
CoRR, 2024

Constructing Benchmarks and Interventions for Combating Hallucinations in LLMs.
CoRR, 2024

2023
Interpreting Embedding Spaces by Conceptualization.
Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, 2023

2022
BLOOM: A 176B-Parameter Open-Access Multilingual Language Model.
CoRR, 2022