Exploring Azure AI Content Safety: Navigating Safer Digital Experience
Published Aug 17 2023 11:41 PM




This article serves as your compass for navigating the landscape of content safety. It explores Azure AI Content Safety, a Microsoft Azure offering designed to detect and address harmful content, whether generated by users or by AI systems, across digital platforms. These tools let developers build platforms that prioritize safety and inclusivity, and organizations can use the core features of Azure AI Content Safety to deliver a safer online experience for everyone.
Common use cases include:

  • Social Media comment moderation
  • AI-Generated Text in Customer Service
  • Gaming
  • Crisis response
  • Healthcare

    Azure AI Content Safety Studio is an online tool designed to handle potentially offensive, risky, or undesirable content using cutting-edge content moderation ML models. It provides templates and customized workflows, enabling users to choose and build their own content moderation system. Users can upload their own content or try it out with provided sample content.
    A Content Safety endpoint call returns four category flags: Hate, SelfHarm, Sexual, and Violence.

    Azure AI Content Safety Pricing


Start with Azure Content Safety Studio 

  • Create a Content Safety resource in the Azure portal
  • Get the endpoint and key for the Azure Content Safety resource



HTTP snippet (try it in Postman)
Request:

POST /contentsafety/text:analyze?api-version=2023-04-30-preview HTTP/1.1
Host: openaicontentsafetydemo.cognitiveservices.azure.com
Ocp-Apim-Subscription-Key: efae***********fa0ac*6affc
Content-Type: application/json

{
    "text": "Education is the most powerful weapon which you can use to change the world.",
    "categories": ["Hate", "SelfHarm", "Sexual", "Violence"]
}

Response:

{
    "blocklistsMatchResults": [],
    "hateResult": {
        "category": "Hate",
        "severity": 0
    },
    "selfHarmResult": {
        "category": "SelfHarm",
        "severity": 0
    },
    "sexualResult": {
        "category": "Sexual",
        "severity": 0
    },
    "violenceResult": {
        "category": "Violence",
        "severity": 0
    }
}
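The same call can be made from code instead of Postman. Here is a minimal Python sketch using only the standard library; the endpoint and key below are placeholders, not real values, so substitute your own resource details:

```python
import json
import urllib.request

# Hypothetical placeholders -- substitute your own resource endpoint and key.
ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com"
API_KEY = "<your-key>"


def build_request(text: str) -> urllib.request.Request:
    """Build the POST request shown in the HTTP snippet above."""
    url = f"{ENDPOINT}/contentsafety/text:analyze?api-version=2023-04-30-preview"
    body = {"text": text, "categories": ["Hate", "SelfHarm", "Sexual", "Violence"]}
    return urllib.request.Request(
        url,
        data=json.dumps(body).encode("utf-8"),
        headers={
            "Ocp-Apim-Subscription-Key": API_KEY,
            "Content-Type": "application/json",
        },
        method="POST",
    )


def analyze_text(text: str) -> dict:
    """Send the request and return the parsed JSON response."""
    with urllib.request.urlopen(build_request(text)) as resp:
        return json.loads(resp.read())
```

Calling `analyze_text("...")` against a real resource returns the JSON shown in the response above.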


Scenario 1: Customer feedback or comments are created in a SharePoint list, and you want to get Content Safety flags for each item.




Create a Power Automate flow

  • Here are the steps to create the flow:

    1. Visit https://make.powerautomate.com/ and open "My flows".
    2. Click "New Flow" and name it "OpenAIContentSafety".
    3. Add an HTTP action.



      Get the HTTP URI and key from Step 1 (the Content Safety resource created earlier)

      Set the Body:

      {
          "text": "PASTE YOUR CONTENT HERE"
      }

Add a Parse JSON action

Content: the Body variable from the HTTP action

Schema: paste the content below

    {
        "type": "object",
        "properties": {
            "blocklistsMatchResults": {
                "type": "array"
            },
            "hateResult": {
                "type": "object",
                "properties": {
                    "category": { "type": "string" },
                    "severity": { "type": "integer" }
                }
            },
            "selfHarmResult": {
                "type": "object",
                "properties": {
                    "category": { "type": "string" },
                    "severity": { "type": "integer" }
                }
            },
            "sexualResult": {
                "type": "object",
                "properties": {
                    "category": { "type": "string" },
                    "severity": { "type": "integer" }
                }
            },
            "violenceResult": {
                "type": "object",
                "properties": {
                    "category": { "type": "string" },
                    "severity": { "type": "integer" }
                }
            }
        }
    }
  Set one variable per Content Safety category; the value of each variable is the corresponding severity field from the Parse JSON step.

  Update the values in the SharePoint list item.
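Outside of Power Automate, the Parse JSON and variable steps amount to pulling one severity value per category out of the response. A minimal Python sketch of that mapping, with field names taken from the sample response earlier:

```python
def extract_flags(response: dict) -> dict:
    """Mirror the Parse JSON + variable steps: one severity value per category."""
    return {
        "Hate": response["hateResult"]["severity"],
        "SelfHarm": response["selfHarmResult"]["severity"],
        "Sexual": response["sexualResult"]["severity"],
        "Violence": response["violenceResult"]["severity"],
    }
```

The resulting dictionary holds exactly the values you would write back to the SharePoint list columns.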



Scenario 2: Upload a PDF, ask questions about it, and validate Content Safety before chatting with the data.
Refer to the GitHub repo for the full PDF-extraction demo.

  • The PDF is uploaded and its data is extracted using the Computer Vision API.
  • The Power Automate flow created in the previous scenario is also triggered on the extracted content, and the resulting Content Safety flags are displayed on screen.
  • In Power Apps, set the OnSelect property:
    Set(contentsafetytext, OpenAIContentSafety.Run(TextInput.Text))
    where OpenAIContentSafety is the name of the Power Automate flow, created in the previous scenario, that returns the content safety flags.
  • If any flag value is greater than 0, the Open AI chat button is disabled.
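The gating rule in the last bullet reduces to a single check over the four severities. A minimal sketch, assuming the response shape shown in the HTTP snippet earlier:

```python
def chat_allowed(response: dict) -> bool:
    """Enable the Open AI chat button only when every category severity is 0."""
    results = ["hateResult", "selfHarmResult", "sexualResult", "violenceResult"]
    return all(response[r]["severity"] == 0 for r in results)
```

In the Power App itself, the equivalent check over the flow's returned flags can drive the chat button's DisplayMode.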





Version history
Last update: Aug 24 2023 11:49 AM