Exploring Azure AI Content Safety: Navigating Safer Digital Experience
Published Aug 17 2023 11:41 PM
Microsoft


This article serves as your compass for navigating the landscape of content safety. Azure AI Content Safety is a Microsoft Azure service that identifies and addresses harmful content, whether produced by users or by AI systems, across digital platforms. It empowers developers to build platforms that prioritize safety and inclusivity, and its core features help organizations deliver a safer online experience for everyone.
Several use cases:

  • Social Media comment moderation
  • AI-Generated Text in Customer Service
  • Gaming
  • Crisis response
  • Healthcare

    Azure AI Content Safety Studio is an online tool for handling potentially offensive, risky, or undesirable content using cutting-edge content moderation ML models. It provides templates and customizable workflows, enabling users to choose and build their own content moderation system, and users can upload their own content or try it out with the provided samples.
    A call to the Content Safety text-analysis endpoint returns results for four categories, each with a severity score: Hate, SelfHarm, Sexual, Violence.

    Azure AI Content Safety Pricing
    SAVITAMITTAL_0-1692368548179.png

     

Start with Azure Content Safety Studio 
https://learn.microsoft.com/en-us/azure/ai-services/content-safety/studio-quickstart

  • Create Content safety resource in Azure portal
  • Get Endpoint and Key for Azure Content Safety Resource

SAVITAMITTAL_1-1692166130100.png

 

HTTP snippet: try it in Postman
POST /contentsafety/text:analyze?api-version=2023-04-30-preview HTTP/1.1
Host: openaicontentsafetydemo.cognitiveservices.azure.com
Ocp-Apim-Subscription-Key: efae***********fa0ac*6affc
Content-Type: application/json
Content-Length: 161

{
"text": "Education is the most powerful weapon which you can use to change the world.",
"categories": [
"Hate","Sexual","SelfHarm","Violence"
]
}

RESPONSE

{
    "blocklistsMatchResults": [],
    "hateResult": {
        "category": "Hate",
        "severity": 0
    },
    "selfHarmResult": {
        "category": "SelfHarm",
        "severity": 0
    },
    "sexualResult": {
        "category": "Sexual",
        "severity": 0
    },
    "violenceResult": {
        "category": "Violence",
        "severity": 0
    }
}
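The same request can also be made programmatically. Below is a minimal Python sketch using only the standard library; the `analyze_text` and `severities` helpers are illustrative (not part of the service SDK), and the endpoint, key, and api-version are placeholders to replace with your own values from STEP#1.

```python
import json
import urllib.request

API_VERSION = "2023-04-30-preview"  # preview api-version used in this article


def analyze_text(endpoint: str, key: str, text: str) -> dict:
    """Call the text:analyze endpoint and return the parsed JSON response.

    Requires a real endpoint URL and subscription key (makes a network call).
    """
    url = f"{endpoint}/contentsafety/text:analyze?api-version={API_VERSION}"
    body = json.dumps({
        "text": text,
        "categories": ["Hate", "Sexual", "SelfHarm", "Violence"],
    }).encode("utf-8")
    req = urllib.request.Request(
        url,
        data=body,
        headers={
            "Ocp-Apim-Subscription-Key": key,
            "Content-Type": "application/json",
        },
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)


def severities(result: dict) -> dict:
    """Flatten a response like the one above into {category: severity}."""
    return {
        result[k]["category"]: result[k]["severity"]
        for k in ("hateResult", "selfHarmResult", "sexualResult", "violenceResult")
        if k in result
    }
```

For the sample response shown above, `severities(...)` returns a severity of 0 for all four categories, meaning the quote is considered safe.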


STEP#2

Scenario 1: Customer feedback or comments get created in a SharePoint list, and you want to get Content Safety flags for them.

SAVITAMITTAL_7-1692334706801.png

 

STEP#3

Create a Power Automate flow

  • Here are the steps to create the flow:

    1. Go to https://make.powerautomate.com/ (flow.microsoft.com redirects there).
    2. Click "New flow" and name the flow "OpenAIContentSafety".
    3. Add an HTTP action.

      SAVITAMITTAL_1-1692435005369.png

       


      Get the HTTP URI and key from STEP#1 

      Set Body
      {
      "text": "PASTE YOUR CONTENT HERE",
      "categories": [
      "Hate",
      "Sexual",
      "SelfHarm",
      "Violence"
      ]
      }

Add Parse JSON
SAVITAMITTAL_1-1692331180290.png

Set Content to the Body output of the HTTP action.

Schema: paste the content below

{
    "type": "object",
    "properties": {
        "blocklistsMatchResults": {
            "type": "array"
        },
        "hateResult": {
            "type": "object"
        },
        "selfHarmResult": {
            "type": "object"
        },
        "sexualResult": {
            "type": "object"
        },
        "violenceResult": {
            "type": "object"
        }
    }
}
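As a quick sanity check, the Parse JSON schema above can be exercised against the sample response from STEP#1 with a few lines of Python. This is only a sketch: the `SCHEMA_TYPES` mapping and `matches_schema` helper are illustrative, with JSON "object" mapped to `dict` and "array" to `list`.

```python
# Expected Python type for each property in the Parse JSON schema above.
SCHEMA_TYPES = {
    "blocklistsMatchResults": list,  # "array"
    "hateResult": dict,              # "object"
    "selfHarmResult": dict,          # "object"
    "sexualResult": dict,            # "object"
    "violenceResult": dict,          # "object"
}


def matches_schema(response: dict) -> bool:
    """Return True if every property present has the expected JSON type.

    Mirrors the schema above: no property is marked required, so missing
    keys are allowed; only wrongly typed values fail.
    """
    return all(
        isinstance(response[name], expected)
        for name, expected in SCHEMA_TYPES.items()
        if name in response
    )
```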
  Set variables for the Content Safety categories as shown below. The value of each variable comes from the corresponding output of the Parse JSON step.
 
SAVITAMITTAL_2-1692331674795.png

  Update the values in the SharePoint list

 

SAVITAMITTAL_5-1692334496596.png

Scenario 2: Upload a PDF, ask questions, and validate Content Safety before chatting with the data.
Refer to the GitHub repo for the full PDF-extraction demo:
https://github.com/msavita-cloud/OpenAIPowerApp
SAVITAMITTAL_0-1692338327402.png

  • The PDF is uploaded and its data extracted using the Computer Vision API.
  • The Power Automate flow created in the previous step for Content Safety is also triggered on the extracted content, and the resulting Content Safety flags are displayed on the screen below.
  • In PowerApps, set the OnSelect property: 
    Set(contentsafetytext, OpenAIContentSafety.Run(TextInput.Text))
    where OpenAIContentSafety is the name of the Power Automate flow, created in the previous scenario, that returns the content safety flags.
  • If any flag value is greater than 0, the Open AI chat button is disabled.
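The gating rule in the last bullet can be sketched in Python. The `chat_enabled` helper is hypothetical, mirroring the PowerApps logic; the flag names are assumed to match the category variables set in the previous scenario.

```python
def chat_enabled(flags: dict) -> bool:
    """Mirror of the app logic: the Open AI chat button stays enabled
    only when every Content Safety severity flag is 0."""
    return all(severity == 0 for severity in flags.values())
```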
SAVITAMITTAL_1-1692339048204.png

 

 

 


 

Last update: ‎Aug 24 2023 11:49 AM