Program Achievements — BK21 FOUR Industrial Innovation Analytics Education and Research Group

Conference Presentations

2024 Safe-Embed: Unveiling the Safety-Critical Knowledge of Sentence Encoders

Page Information

Posted by: Administrator | Date: 24-10-01 18:29


Authors
Jinseok Kim, Jaewon Jung, Sangyeop Kim, Sohyung Park, Sungzoon Cho
Conference
1st Workshop on Towards Knowledgeable Language Models (KnowLLM 2024) @ ACL 2024
Location
Thailand
Presentation Type
Poster presentation
Year
2024

Abstract

Despite the impressive capabilities of Large Language Models (LLMs) in various tasks, their vulnerability to unsafe prompts remains a critical issue. These prompts can lead LLMs to generate responses on illegal or sensitive topics, posing a significant threat to their safe and ethical use. Existing approaches attempt to address this issue using classification models, but they have several drawbacks. With the increasing complexity of unsafe prompts, similarity search-based techniques that identify specific features of unsafe prompts provide a more robust and effective solution to this evolving problem. This paper investigates the potential of sentence encoders to distinguish safe from unsafe prompts, and their ability to classify various unsafe prompts according to a safety taxonomy. We introduce new pairwise datasets and the Categorical Purity (CP) metric to measure this capability. Our findings reveal both the effectiveness and limitations of existing sentence encoders, proposing directions to improve sentence encoders so they can operate as more robust safety detectors. Our code is available at https://github.com/JwdanielJung/Safe-Embed
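To make the similarity-search idea concrete, the following is a minimal sketch (not the paper's implementation): a prompt is flagged as unsafe when its sentence-encoder embedding lies close, in cosine similarity, to any embedding of known unsafe exemplars. The 3-dimensional vectors and the threshold here are hypothetical stand-ins for real encoder outputs.

```python
import numpy as np

def cosine_sim(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy 3-d "embeddings" standing in for real sentence-encoder outputs
# of prompts already known to be unsafe (hypothetical values).
unsafe_exemplars = np.array([
    [0.9, 0.1, 0.0],
    [0.8, 0.2, 0.1],
])

def is_unsafe(embedding: np.ndarray, exemplars: np.ndarray,
              threshold: float = 0.85) -> bool:
    # Flag the prompt if it is sufficiently similar to ANY unsafe exemplar.
    return max(cosine_sim(embedding, e) for e in exemplars) >= threshold

# A vector near the unsafe cluster is flagged; a distant one is not.
print(is_unsafe(np.array([0.85, 0.15, 0.05]), unsafe_exemplars))  # True
print(is_unsafe(np.array([0.0, 0.1, 0.9]), unsafe_exemplars))     # False
```

Unlike a fixed classifier, this setup adapts by simply adding new exemplar embeddings as novel unsafe prompts are discovered, which is the robustness property the abstract attributes to similarity-search-based detection.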