
AMA: Security for AI

Event Ended
Tuesday, Dec 03, 2024, 10:30 AM PST
Online

Event details

Looking for tips on how to prepare your environment for secure AI adoption? What’s the best way to protect your AI stack and sensitive data? Join this Ask Microsoft Anything session to get answers to your questions from the teams adopting security for AI at Microsoft! Big questions or small ones, we’re here to help you confidently embrace the age of AI with industry-leading cybersecurity and compliance solutions.

This session is part of Tech Community Live: Microsoft Security edition. Add it to your calendar, select Attend for event reminders, and post your questions and comments below! This session will be recorded and available on demand shortly after the conclusion of the live event.

Heather_Poulsen
Updated Dec 03, 2024
  • ChillBill77
    Copper Contributor

    Security Copilot uses both SCUs for processing and Log Analytics workspaces.
    Where will the results be stored? And what is stored: the raw data from the retrieval, or just the end result?

  • TrevorRusher
    Community Manager

    Welcome! The Security for AI AMA will start soon. What questions do you have for our experts? Post them now and we’ll use them to kick off the hour!

  • TrevorRusher
    Community Manager

    Welcome to the Security for AI AMA and the Tech Community Live: Microsoft Security edition. Let's get started! Please post your questions here in the Comments. We will be answering questions in the live stream, and others will be answering here in the Comments.

  • TrevorRusher
    Community Manager

    Don't be shy! This is a great forum to ask your questions about Security for AI, and also to share feedback and information about use cases and scenarios you need to support. Post your thoughts and questions now in the Comments.

  • TrevorRusher
    Community Manager

    That concludes today’s Security for AI AMA. Thanks to everyone who was able to join us live, and to those catching up on demand!

  • Pearl-Angeles
    Community Manager

    In addition to the questions posted on this page, we also answer questions posted in reply to the event on other social channels (LinkedIn, X, etc.). Below are the questions the panelists answered, along with the timestamp at which each was answered:

    Question -- What do you see as the main security challenges associated with AI adoption? What are we hearing from customers? - answered at 1:45.

    Question from LinkedIn -- What is Microsoft doing regarding AI and ethical hacking? - answered at 3:58.

    Question -- When you developed Security Copilot, how did you test it to ensure that it performed its intended use? As a customer, what if Copilot responds with unintended or unhelpful info? - answered at 6:30.

    Question -- You spoke about permissions: what type of cleanup or preparations should we take to make sure that the right people have access to the right details to help with risk analysis and mitigation? Are there recommendations for permission models? - answered at 10:27.

    Question -- If we want to leverage generative AI with something like M365 Copilot, do we have to use a Microsoft solution for identity and access, or do third-party solutions work? - answered at 13:42.

    Question -- What are the key differences between Microsoft Defender for Cloud’s AI Security Posture Management (SPM) and Microsoft Purview’s Data Security Posture Management (DSPM) for AI? - answered at 15:24.

    Question -- How does the ML model in Purview work to help us improve data governance for our organization specifically? What type of signals or policies does it analyze, and what types of recommendations can or will it provide? - answered at 17:52.

    Question -- We're still researching this. Our leadership team is worried about employees inadvertently sharing sensitive data with consumer AI apps. What solution would best allow us to control that flow of information? - answered at 20:57.

    Question -- What's the best way to detect threats to AI workloads and gen AI LOB apps within our organization? - answered at 26:00.

    Question -- Some employees are worried about how much access AI has to their email, Teams chats, etc. How do we address those concerns? - answered at 27:23.

    Question -- Curious for others watching today: do most organizations configure AI data ingestion at the company or team level (informing employees of what is shared), or is leaving it to the individual to opt in or out a more common scenario? - answered at 31:00.

    Question -- Could you provide some examples of governance policies or restrictions that admins can implement for deployable GPT models in Azure AI? - answered at 34:44.

    Question -- I'm an app developer, but I mostly build low-code apps. I'm looking at building a few "AI lite" solutions, but I don't have a lot of experience with security and compliance controls. Where should I start? - answered at 41:16.

    Question -- Keeping up with the threat landscape feels overwhelming in the era of AI. How can we differentiate between threat vectors, emerging attack surfaces, and new or amplified risks? - answered at 43:51.

    Question -- Do Microsoft AI solutions comply with the EU AI Act? - answered at 45:15.

    Question -- Is there an "AI security baseline" that covers a cross-solution view for basics on identity, access, data protection, and privacy? - answered at 49:35.

Date and Time
Dec 3, 2024, 10:30 AM - 11:30 AM PST