AMA: Security for AI
Event details
In addition to the questions posted on this page, we also answered questions posted in reply to the event on other social channels (LinkedIn, X, etc.). Below are the questions the panelists answered, each with a timestamped link:
Question -- What do you see as the main security challenges associated with AI adoption? What are we hearing from customers? - answered at 1:45.
Question from LinkedIn -- What is Microsoft doing with regard to AI and ethical hacking? - answered at 3:58.
Question -- When you developed Security Copilot, how did you test it to ensure that it performs as intended? As a customer, what if Copilot responds with unintended or unhelpful info? - answered at 6:30.
Question -- You spoke about permissions; what kind of cleanup or preparation should we do to make sure that the right people have access to the right details to help with risk analysis and mitigation? Are there recommendations for permission models? - answered at 10:27.
Question -- If we want to leverage generative AI with something like M365 Copilot, do we have to use a Microsoft solution for identity and access, or do third-party solutions work? - answered at 13:42.
Question -- What are the key differences between Microsoft Defender for Cloud’s AI Security Posture Management (SPM) and Microsoft Purview’s Data Security Posture Management (DSPM) for AI? - answered at 15:24.
Question -- How does the ML model in Purview work to help us improve data governance for our organization specifically? What types of signals or policies does it analyze, and what types of recommendations can or will it provide? - answered at 17:52.
Question -- We're trying to research this. Our leadership team is worried about employees inadvertently sharing sensitive data with consumer AI apps. What solution would best allow us to control that flow of information? - answered at 20:57.
Question -- What's the best way to detect threats to AI workloads and generative AI line-of-business (LOB) apps within our organization? - answered at 26:00.
Question -- Some employees are worried about how much access AI has to their email, Teams chats, etc. How do we address those concerns? - answered at 27:23.
Question -- Curious for others watching today: do most organizations configure AI data ingestion at the company or team level, informing employees of what is shared, or is leaving it to individuals to opt in or out more common? - answered at 31:00.
Question -- Could you provide some examples of governance policies or restrictions that admins can implement for deployable GPT models in Azure AI? - answered at 34:44.
Question -- I'm an app developer, but mostly low-code apps. I'm looking at building a few "AI lite" solutions, but I don't have a lot of experience with security and compliance controls. Where should I start? - answered at 41:16.
Question -- Keeping up with the threat landscape feels overwhelming in the era of AI. How can we differentiate between threat vectors, emerging attack surfaces, and new or amplified risks? - answered at 43:51.
Question -- Do Microsoft AI solutions comply with the EU AI Act? - answered at 45:15.
Question -- Is there an "AI security baseline" that covers a cross-solution view for basics on identity, access, data protection, and privacy? - answered at 49:35.