Blog Post

AI - Azure AI services Blog
8 MIN READ

Azure AI announces Prompt Shields for Jailbreak and Indirect prompt injection attacks

FedericoZarfati's avatar
Mar 28, 2024
Our Azure OpenAI Service and Azure AI Content Safety teams are excited to launch a new Responsible AI capability called Prompt Shields. Prompt Shields protects applications powered by Foundation Mode...
Published Mar 28, 2024
Version 1.0