Jingwei Yi, Rui Ye, Qisi Chen, Bin Zhu, Siheng Chen, Defu Lian, Guangzhong Sun, Xing Xie, Fangzhao Wu. On the Vulnerability of Safety Alignment in Open-Access LLMs. Findings of the Association for Computational Linguistics, 2024. [DOI]