Shielding Prompts from LLM Data Leaks

Opinion: An interesting IBM NeurIPS 2024 submission from late 2024 resurfaced on arXiv last week. It proposes a system that can automatically intervene to protect users from including personal or sensitive information in a message when they are conversing with a Large Language Model (LLM) such as ChatGPT. The mock-ups shown above were […]

The post Shielding Prompts from LLM Data Leaks appeared first on Unite.AI.