Event details
Security practitioners, bring your real questions.
Join Microsoft Security experts live at RSA Conference or online via Tech Community for a live AMA focused on data and AI security in real-world environments. Ask how data protection, AI security, and governance actually work in production—what teams are turning on first, what can wait, and where things commonly break.
This session is designed for practitioners who want practical answers, not future-state vision.
What to expect
• Live Q&A with Microsoft Security SMEs and real practitioners
• Questions submitted both in person and online
• No slides, no pitches, no roadmap discussions
• Real usage, real challenges, real answers
Whether you’re attending RSA in person or joining remotely, this is your chance to get straight answers from people building and securing AI at scale.
38 Comments
- Trevor_Rusher
Community Manager
The live event has concluded! We are going to do our best to post all the questions and answers that occurred live here on Tech Community so our online audience can share in the knowledge. Reminder that this event page is evergreen so feel free to come back and see all the wisdom imparted from our experts anytime. Thanks!
- chmcconnell
Microsoft
Question from the live event:
Do you differentiate between personal agents that act as an extension of an individual user and shared agents that multiple people or teams use? Do you treat them as two different entities from a security and governance perspective?
- David_Broaddus
Microsoft
Answer:
The rules and policies will be very similar, but we can determine whether an agent is a separate identity, and customize your experiences and policies to set guardrails for each of them. We divide agents into agents that assist and agents that are autonomous, which we call user agents. They are all AI agents and are treated as such, but whether they act on behalf of the user, an external customer, or the broader organization determines their access.
- chmcconnell
Microsoft
Question from the live event:
In the past, it was easier to distinguish between human users and infrastructure or workloads. Now with AI agents acting more autonomously and making decisions, how does Microsoft think about agents from a security perspective? Are AI agents treated more like a new type of insider, or more like infrastructure and automated workflows?
- David_Broaddus
Microsoft
Answer:
We treat agents as first-class citizens within the organization, and we are tuning all of our capabilities to recognize AI agents as such. We include checks on the agents, make sure we have guardrails on access, and give them their own set of permissions and policies, just as we would with human users.
- chmcconnell
Microsoft
Question from the live event:
When we think about customers’ compliance requirements, such as CMMC and GDPR, and the role Security Copilot plays as a force multiplier, do we have any data or metrics that show how much it helps accelerate compliance or improve compliance outcomes for customers?
- David_Broaddus
Microsoft
Answer:
We have benchmarks related to phishing trials, not specifically for compliance. On the phishing trial side, we have seen up to a 60% improvement in productivity.
- chmcconnell
Microsoft
Question from the live event:
You mentioned oversharing and daily sharing risks, which is something many of us are dealing with. Could you talk about how organizations should prepare their data environment to reduce oversharing and get it ready for AI and Copilot? What steps should we be taking today?
- David_Broaddus
Microsoft
Answer:
First, we recommend turning on your auditing capabilities so you know what's going on, then activating the quick template to understand what is being shared with your application and how much data. This runs quickly and gives you an analysis of how much of the data is sensitive and what its permissions are. These are what we call oversharing reports: they give you just-in-time detection of sensitive data as it flows into or out of AI, so the data can be blocked through Data Loss Prevention or audited.
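As a rough illustration of the oversharing-report idea described above: the sketch below flags sensitive files shared with broad groups. The `FileRecord` fields, label names, and group names are all invented for illustration, not Microsoft Purview's actual schema or API.

```python
from dataclasses import dataclass, field

# Hypothetical file metadata; these field names are illustrative,
# not an actual Microsoft Purview schema.
@dataclass
class FileRecord:
    path: str
    sensitivity_label: str        # e.g. "General", "Confidential"
    shared_with: list = field(default_factory=list)

def oversharing_report(files, broad_groups=("Everyone", "All Employees")):
    """Flag files carrying a sensitive label that are shared
    with broad, organization-wide groups."""
    flagged = []
    for f in files:
        if f.sensitivity_label != "General" and any(
            g in f.shared_with for g in broad_groups
        ):
            flagged.append(f.path)
    return flagged

files = [
    FileRecord("payroll.xlsx", "Confidential", ["Everyone"]),
    FileRecord("lunch-menu.docx", "General", ["Everyone"]),
    FileRecord("roadmap.pptx", "Confidential", ["Leads"]),
]
print(oversharing_report(files))  # ['payroll.xlsx']
```

In practice the file inventory and labels would come from the auditing and scanning capabilities the answer mentions; the point is that "sensitive label plus broad permissions" is the basic signal an oversharing report surfaces.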
- chmcconnell
Microsoft
Question from the live event:
One challenge we’ve had is knowing the right way to engage with different Microsoft teams when we’re building something together or integrating our benchmarks into Microsoft tools. Who should be the first point of contact in those situations to move things forward efficiently? Should we start with Product Management, our account team, or someone else?
- David_Broaddus
Microsoft
Answer:
The Product Management team is a great place to start: they can work out what connections need to be made and will be the focal point for all orchestration.
- chmcconnell
Microsoft
Question from the live event:
Recently, our Microsoft account team spoke to us about Security Copilot with GitHub integration. From a value perspective, what security scenarios does this integration help address, and who is it primarily for?
- David_Broaddus
Microsoft
Answer:
There is no integration between Security Copilot and GitHub. Security Copilot uses its own orchestration engine within the product.
- chmcconnell
Microsoft
Question from the live event:
Someone raised a good point. If I understand correctly, the approach here is to capture prompts based on certain risk criteria rather than collecting everything. Some competitors capture all activity using appliances or virtual machines and then let analysts search through everything. Is Microsoft’s approach to capture prompts selectively based on risk criteria, or is there a plan to capture everything in a similar way?
- David_Broaddus
Microsoft
Answer:
We capture everything by default, but administrators can consent to capture or not, which gives them the configurability and flexibility to decide what to capture and what action to take.
After that, you can make decisions based on your role and desired outcomes, which is also available in the experience.
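A capture-by-default policy with an administrator opt-out, as described above, might look like this in rough terms. The policy names and record fields here are invented for illustration; this is not an actual product configuration format.

```python
# Hypothetical capture policy: everything is captured by default,
# and administrators can opt out or narrow capture to risky activity.
DEFAULT_POLICY = {"capture": "all"}

def should_capture(record, policy=DEFAULT_POLICY):
    """Return True if this activity record should be stored."""
    if policy["capture"] == "all":
        return True                              # default: capture everything
    if policy["capture"] == "risk_only":
        return record.get("risk", "low") != "low"  # selective capture
    return False                                 # capture disabled by admin
```

The contrast with appliance-based competitors in the question is that the default is "all", with selectivity as an administrator choice rather than the starting point.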
- chmcconnell
Microsoft
Question from the live event:
From a security operations center perspective, how can teams get visibility into the prompts users are submitting, and what does the investigation or response workflow look like today?
- David_Broaddus
Microsoft
Answer:
We log who is prompting, what is being asked, and other key details. We call this activity auditing: telemetry captured end to end throughout the lifecycle of the AI agent. Anything the agent interacts with, or anything that interacts with the agent, including tools and knowledge sources, is captured and available as proof of compliance where needed, and it is the bedrock for all of our investigations.
Beyond that, whether the content passed to the agent, or the agent's responses, are stored by default depends on the configuration of your tenant, which you set as an admin.
All of this enriches the metadata available to data security administrators, analysts, and others who need it, and can be used, for example, to make decisions on agent risk during investigations and further analysis.
- chmcconnell
Microsoft
Follow-up question:
If a user submits a prompt requesting access to sensitive content, such as documents from the CEO’s mailbox, where is that prompt logged, and which service allows security teams to view or investigate that activity?
- David_Broaddus
Microsoft
Follow-up Answer:
That is captured as part of observability and is available in the different portals based on the roles and permissions you set.
Different roles have different needs when managing the same prompt data, so visibility is based on those needs and roles and can be configured within the appropriate portals.
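As a rough sketch of how captured agent telemetry like this might be filtered during an investigation: the record fields, actor names, and agent names below are invented for illustration, not the actual audit log schema.

```python
# Hypothetical end-to-end audit records for one agent: user prompts
# plus the tool calls the agent made on the user's behalf.
audit_log = [
    {"actor": "alice", "agent": "hr-agent", "action": "prompt",
     "detail": "summarize my tasks", "touched_sensitive": False},
    {"actor": "bob", "agent": "hr-agent", "action": "prompt",
     "detail": "list CEO mailbox documents", "touched_sensitive": True},
    {"actor": "hr-agent", "agent": "hr-agent", "action": "tool_call",
     "detail": "mail:/users/ceo/messages", "touched_sensitive": True},
]

def investigate(log, agent):
    """Return the end-to-end activity for one agent, with the
    records that touched sensitive data surfaced first."""
    activity = [r for r in log if r["agent"] == agent]
    return sorted(activity, key=lambda r: not r["touched_sensitive"])
```

The idea mirrors the answer: because both the prompt and the agent's downstream tool calls are logged, an analyst can reconstruct who asked for the CEO's mail and what the agent actually touched.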
- chmcconnell
Microsoft
Question from the live event:
With the new E7 licensing changes taking effect May 1, will these new capabilities work through the existing gateway, or will they be deployed and managed separately?