Today, we are excited to announce a preview of "correction," a new capability within Azure AI Content Safety's groundedness detection feature. With this enhancement, groundedness detection not only identifies inaccuracies in AI outputs but also corrects them, fostering greater trust in generative AI technologies.
What is Groundedness Detection?
Groundedness detection is a feature that identifies ungrounded or hallucinated content in AI outputs, helping developers enhance generative AI applications by pinpointing responses that lack a foundation in connected data sources.
Since we introduced groundedness detection in March of this year, our customers have asked us: “Once ungrounded content is detected, what else can we do with it besides blocking?” This question highlights a significant challenge in the rapidly evolving generative AI landscape, where traditional content filters often fall short in addressing the unique risks posed by generative AI hallucinations.
Introducing the Correction Capability
This is why we are introducing the correction capability. Empowering our customers to both understand and take action on ungrounded content and hallucinations is crucial, especially as the demand for reliability and accuracy in AI-generated content continues to rise.
Building on our existing groundedness detection feature, this groundbreaking capability allows Azure AI Content Safety to both identify and correct hallucinations in real time, before users of generative AI applications encounter them.
How Correction Works
To use groundedness detection, a generative AI application must be connected to grounding documents, such as the source documents used in document summarization and RAG-based Q&A scenarios.
Step-by-Step Guide for Groundedness Detection
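As a starting point, the request below sketches what a groundedness check with correction enabled might look like. The endpoint path, API version, and field names here are assumptions for illustration and may differ from the actual preview API in your region; consult the service documentation for the exact contract.

```python
import json

# Hypothetical preview endpoint and API version -- verify against the
# official Azure AI Content Safety documentation before use.
ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com"
API_VERSION = "2024-09-15-preview"
URL = f"{ENDPOINT}/contentsafety/text:detectGroundedness?api-version={API_VERSION}"


def build_correction_request(text, grounding_sources, query=None):
    """Build the JSON body for a groundedness check with correction enabled.

    `text` is the LLM output to validate; `grounding_sources` are the
    connected documents the output must be supported by.
    """
    payload = {
        "domain": "Generic",
        "task": "QnA" if query else "Summarization",
        "text": text,
        "groundingSources": grounding_sources,
        # Ask the service not just to flag ungrounded spans, but to
        # rewrite them based on the grounding sources.
        "correction": True,
    }
    if query:
        payload["qna"] = {"query": query}
    return payload


body = build_correction_request(
    text="The patient was prescribed 500 mg of ibuprofen twice daily.",
    grounding_sources=[
        "The patient was prescribed 200 mg of ibuprofen twice daily."
    ],
    query="What dosage was prescribed?",
)
print(json.dumps(body, indent=2))
# To send: POST this body to URL with your resource key in the
# "Ocp-Apim-Subscription-Key" header and "Content-Type: application/json".
```

In a real deployment, the corrected text returned by the service would replace the original ungrounded response before it is shown to the user.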
What are generative AI hallucinations?
Hallucinations refer to the generation of content that lacks support in grounding data. This phenomenon is particularly associated with large language models (LLMs), which can unintentionally produce misleading information.
This issue can become critical in high-stakes fields like medicine, where accurate information is essential. While AI has the potential to improve access to vital information, hallucinations can lead to misunderstandings and misrepresentation, posing risks in these important domains.
Why the Correction Capability Matters
The introduction of this correction capability is significant: rather than simply blocking ungrounded content, it gives developers a way to act on detected hallucinations, correcting them before they ever reach end users. This closes the gap between detecting a hallucination and doing something useful about it.
Other Generative AI Grounding Tactics
In addition to using Groundedness Detection and its new correction capability, there are several steps you can take to enhance the grounding of your generative AI applications. Key actions include adjusting your system message and connecting your generative application to reliable data sources.
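To make the system-message tactic concrete, here is a minimal, illustrative sketch of assembling a chat request that instructs the model to answer only from supplied sources. The message wording and helper function are hypothetical examples, not an official recommendation.

```python
# Illustrative system message emphasizing grounding: the model is told to
# answer strictly from the provided sources and to decline otherwise.
GROUNDED_SYSTEM_MESSAGE = (
    "You are an assistant that answers strictly from the provided sources. "
    "If the sources do not contain the answer, say you don't know. "
    "Do not add facts that are not present in the sources."
)


def build_chat_messages(question, sources):
    """Pair a user question with its grounding data in a chat-style request."""
    context = "\n\n".join(
        f"[Source {i + 1}] {s}" for i, s in enumerate(sources)
    )
    return [
        {"role": "system", "content": GROUNDED_SYSTEM_MESSAGE},
        {
            "role": "user",
            "content": f"Sources:\n{context}\n\nQuestion: {question}",
        },
    ]


messages = build_chat_messages(
    "What dosage was prescribed?",
    ["The patient was prescribed 200 mg of ibuprofen twice daily."],
)
```

Keeping the grounding data and the question in the same request, with an explicit instruction to stay within the sources, reduces the model's tendency to fill gaps with unsupported content.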
Getting Started with Groundedness Detection