Microsoft Defender
13 TopicsProtect AI apps with Microsoft Defender
Stay in control with Microsoft Defender. You can identify which AI apps and cloud services are in use across your environment, evaluate their risk levels, and allow or block them as needed — all from one place. Whether it’s a sanctioned tool or a shadow AI app, you’re equipped to set the right policies and respond fast to emerging threats. Microsoft Defender gives you the visibility to track complex attack paths — linking signals across endpoints, identities, and cloud apps. Investigate real-time alerts, protect sensitive data from misuse in AI tools like Copilot, and enforce controls even for in-house developed apps using system prompts and Azure AI Foundry. Rob Lefferts, Microsoft Security CVP, joins me in the Mechanics studio to share how you can safeguard your AI-powered environment with a unified security approach. Identify and protect apps. Instantly surface all generative AI apps in use across your org — even unsanctioned ones. How to use Microsoft Defender for Cloud Apps. Extend AI security to internally developed apps. Get started with Microsoft Defender for Cloud. Respond with confidence. Stop attacks in progress and ensure sensitive data stays protected, even when users try to bypass controls. Get full visibility in Microsoft Defender incidents. Watch our video. QUICK LINKS: 00:00 — Stay in control with Microsoft Defender 00:39 — Identify and protect AI apps 02:04 — View cloud apps and website in use 04:14 — Allow or block cloud apps 07:14 — Address security risks of internally developed apps 08:44 — Example in-house developed app 09:40 — System prompt 10:39 — Controls in Azure AI Foundry 12:28 — Defender XDR 14:19 — Wrap up Link References Get started at https://aka.ms/ProtectAIapps Unfamiliar with Microsoft Mechanics? As Microsoft’s official video series for IT, you can watch and share valuable content and demos of current and upcoming tech from the people who build it at Microsoft. Subscribe to our YouTube: https://www.youtube.com/c/MicrosoftMechanicsSeries Talk with other IT Pros, join us on the Microsoft Tech Community: https://techcommunity.microsoft.com/t5/microsoft-mechanics-blog/bg-p/MicrosoftMechanicsBlog Watch or listen from anywhere, subscribe to our podcast: https://microsoftmechanics.libsyn.com/podcast Keep getting this insider knowledge, join us on social: Follow us on Twitter: https://twitter.com/MSFTMechanics Share knowledge on LinkedIn: https://www.linkedin.com/company/microsoft-mechanics/ Enjoy us on Instagram: https://www.instagram.com/msftmechanics/ Loosen up with us on TikTok: https://www.tiktok.com/@msftmechanics Video Transcript: - While generative AI can help you do more, it can also introduce new security risks. Today, we’re going to demonstrate how you can stay in control with Microsoft Defender to discover the GenAI cloud apps that people in your organization are using right now and approve or block them based on their risk. And for your in-house developed AI apps, we’ll look at preventing jailbreaks and prompt injection attacks along with how everything comes together with Microsoft Defender incident management, to give you complete visibility into your events. Joining me once again to demonstrate how to get ahead of everything is Microsoft Security CVP, Rob Lefferts. Welcome back. - So glad to be back. - It’s always great to have you on to keep us ahead of the threat landscape. In fact, since your last time on the show, we’ve seen a significant increase in the use of generative AI apps, and some of them are sanctioned by IT but many of them are not. So what security concerns does this raise? - Each of those apps really carries their own risk, and even in-house developed apps aren’t necessarily immune to risk. We see some of the biggest risks with Consumer apps, especially the free ones, which are often designed to collect training data as users upload files into them or paste content into their prompts that can then be used to retrain the underlying model. So, before you know it, your data might be part of the public domain, that is, unless you get ahead of it. - And as you showed, this use of your data is often written front and center in the terms and conditions of these apps. - True, but not everyone reads all the fine print. To be clear, people go into these apps with good intentions, to work more efficiently and get more done, but they don’t always know the risks; and that’s where we give you the capabilities you need to identify and protect Generative AI SaaS apps using Microsoft Defender for Cloud Apps. And you can combine this with Microsoft Defender for Cloud for your internally developed apps alongside the unified incident management capabilities in Microsoft Defender XDR where the activities from both of these services and other connected systems come together in one place. - So given just how many cloud apps there are out there and a lot of companies building their own apps, where would you even start? - Well, for most orgs, it starts with knowing which external apps people in your company are using. If you don’t have proactive controls in place yet, there’s a pretty good chance that people are bringing their own apps. Now to find out what they’re using, right from the unified Defender portal, you can use Microsoft Defender for Cloud Apps for a complete view of cloud apps and websites in use inside your organization. The signal comes in from Defender-onboarded computers and phones. And if you’re not already using Defender for Cloud Apps, let me start by showing you the Cloud app catalog. Our researchers at Microsoft are continually identifying and classifying new cloud apps as they surface. There are over 34,000 apps across all of these filterable categories that are all based on best practice use cases across industries. Now if I scroll back up to Generative AI, you’ll see that there are more than 1,000 apps. And I’ll click on this control to filter the list down, and it’s a continually expanding list. We even add to it when existing cloud apps integrate new gen AI capabilities. Now once your signal starts to come in from your managed devices, moving back over to the dashboard, you’ll see that I have visibility into the full breadth of Cloud Apps in use, including Generative AI apps and lots of other categories. The report under Discovered apps provides visibility into the cloud apps with the broadest use within your managed network. And from there, you can again see categories of discovered apps. I’ll filter by Generative AI again, and this time it returns the specific apps in use in my org. Like before, each app has a defined risk score of 0 to 10, with 10 being the best, based on a number of parameters. And if I click into any one of them, like Microsoft Copilot, I can see the details as well as how they fair for general areas, a breadth of security capabilities, as well as compliance with standards and regulations, and whether they appear to meet legal and privacy requirements. - And this can save a lot of valuable time especially when you’re trying to get ahead of risks. - And Defender for Cloud Apps doesn’t just give you visibility. For your managed devices enrolled into Microsoft Defender, it also has controls that can either allow or block people from using defined cloud apps, based on the policies you have set as an administrator. From each cloud app, I can see an overview with activities surrounding the app with a few tabs. In the cloud app usage tab, I can drill in even more to see usage, users, IP addresses, and incident details. I’ll dig into Users, and here you can see who has used this app in my org. If I head back to my filtered view of generative AI apps in use, on the right you can see options to either sanction apps so that people can keep using them, or unsanction them to block them outright from being used. But rather than unsanction these apps one-by-one like Whack-a-Mole, there’s a better way, and that’s with automation based on the app’s risk score level. This way, you’re not manually configuring 1,000 apps in this category; nobody wants to do that. So I’ll head over to policy management, and to make things easier as new apps emerge, you can set up policies based on the risk score thresholds that I showed earlier, or other attributes. I’ll create a new policy, and from the dropdown, I’ll choose app discovery policy. Now I’ll name it Risky AI apps, and I can set the policy severity here too. Now, I’m going to select a filter, and I’ll choose category first, I’ll keep equals, and then scroll all the way down to Generative AI and pick that. Then, I need to add another filter. In this case, I’m going to find and choose risk score. I’ll pause for a second. Now what I want to happen is that when a new app is documented, or an existing cloud app incorporates new GenAI capabilities and meets my category and risk conditions, I want Defender for Cloud Apps to automatically unsanction those apps to stop people from using them on managed devices. So back in my policy, I can adjust this slider here for risk score. I’ll set it so that any app with a risk score of 0 to 6 will trigger a match. And if I scroll down a little more, this is the important part of doing the enforcement. I’ll choose tag app as unsanctioned and hit create to make it active. With that, my policy is set and next time my managed devices are synced with policy, Defender for Endpoint will block any generative AI app with a matching risk score. Now, let’s go see what it looks like. If I move over to a managed device, you’ll remember one of our four generative AI apps was something called Fakeyou. I have to be a little careful with how I enunciate that app name, and this is what a user would see. It’s clearly marked as being blocked by their IT organization with a link to visit the support page for more information. And this works with iOS, Android, Mac, and, of course, Windows devices once they are onboarded to Defender. - Okay, so now you can see and control which cloud apps are in use in your organization, but what about those in-house developed apps? How would you control the AI risks there? - So internally developed apps and enterprise-grade SaaS apps, like Microsoft Copilot, would normally have the controls and terms around data usage in place to prevent data loss and disallow vendors from training their models on your data. That said, there are other types of risks and that’s where Defender for Cloud comes in. If you’re new to Defender for Cloud, it connects the security team and developers in your company. For security teams, for your apps, there’s cloud security posture management to surface actions to predict and give you recommendations for preventing breaches before they happen. For cloud infrastructure and workloads, it gives you insights to highlight risks and guide you with specific protections that you can implement for all of your virtual machines, your data infrastructure, including databases and storage. And for your developers, using DevOps, you can even see best practice insights and associated risks with API endpoints being used, and in Containers see misconfigurations, exposed secrets and vulnerabilities. And for cloud infrastructure entitlement management, you can find out where you have potentially overprovisioned or inactive entitlements that could lead to a breach. And the nice thing is that from the central SecOps team perspective, these signals all flow into Microsoft Defender for end-to-end security tracking. In fact, I have an example here. This is an in-house developed app running on Azure that helps an employee input things like address, tax information, bank details for depositing your salary, and finding information on benefits options that employees can enroll into. It’s a pretty important app to ensure that the right protections are in place. And for anyone who’s entered a new job right after graduation, it can be confusing to know what benefits options to choose from, things like 401k or IRA for example in the U.S., or do you enroll into an employee stock purchasing program? It’s actually a really good scenario for generative AI when you think about it. And if you can act on the options it gives you to enroll into these services, again, it’s super helpful for the employees and important to have the right controls in place. Obviously, you don’t want your salary, stock, or benefits going into someone else’s account. So if you’re familiar with how generative AI apps work, most use what’s called a system prompt to enforce basic rules. But people, especially modern adversaries, are getting savvy to this and figuring out how to work around these basic guardrails: for example, by telling these AI tools to ignore their instructions. And I can show you an example of that. This is our app’s system prompt, and you’ll see that we’ve instructed the AI to not display ID numbers, account numbers, financial information, or tax elections with examples given for each. Now, I’ll move over to a running session with this app. I’ve already submitted a few prompts. And in the third one, with a gentle bit of persuasion, basically telling it that I’m a security researcher, for the AI model to ignore the instructions, it’s displaying information that my company and my dev team did not want it to display. This app even lets me update the bank account IBAN number with a prompt: Sorry, Adele. Fortunately, there’s a fix. Using controls as part of Azure AI Foundry, we can prevent this information from getting displayed to our user and potentially any attacker if their credentials or token has been compromised. So this is the same app on the right with no changes to the system message behind it, and I’ll enter the prompts in live this time. You’ll see that my exact same attempts to get the model to ignore its instructions no matter what I do, even as a security researcher, have been stopped in this case using Prompt Shields and have been flagged for immediate response. And these types of controls are even more critical as we start to build more autonomous agentic apps that might be parsing messages from external users and automatically taking action. - Right, and as we saw in the generated response, protection was enforced, like you said, using content safety controls in Azure AI Foundry. - Right, and those activities are also passed to Defender XDR incidents, so that you can see if someone is trying to work around the rules that your developers set. Let me quickly show you where these controls were set up to defend our internal app against these types of prompt injection or jailbreak attempts. I’m in the new Azure AI Foundry portal under safety + security for my app. The protected version of the app has Prompt shields for jailbreak and indirect attacks configured here as input filters. That’s all I had to do. And what I showed before was a direct jailbreak attack. There can also be indirect attacks. These methods are a little sneakier where the attacker, for example, might poison reference data upstream with maybe an email sent previously or even an image with hidden instructions, which gets added to the prompt. And we protect you in both cases. - Okay, so now you have policy protections in place. Do I need to identify and track issues in their respective dashboards then? - You can, and depending on your role or how deep in any area you want to go, all are helpful. But if you want to stitch together multiple alerts as part of something like a multi-stage attack, that’s where Defender XDR comes in. It will find the connections between different events, whether the user succeeded or not, and give you the details you need to respond to them. I’m now in the Defender XDR portal and can see all of my incidents. I want to look at a particular incident, 206872. We have a compromised user account, but this time it’s not Jonathan Wolcott; it’s Marie Ellorriaga. - I have a feeling Jonathan’s been watching these shows on Mechanics to learn what not to do. - Good for him; it’s about time. So let’s see what Marie, or the person using her account, was up to. It looks like they found our Employee Assistant internal app, then tried to Jailbreak it. But because our protections were in place, this attempt was blocked, and we can see the evidence of that from this alert here on the right. Then we can see that they moved on to Microsoft 365 Copilot and tried to get into some other finance-related information. And because of our DLP policies preventing Copilot from processing labeled content, that activity also wouldn’t have been successful. So our information was protected. - And these controls get even more important, I think, as agents also become more mainstream. - That’s right, and those agents often need to send information outside of your trust boundary to reason over it, so it’s risky. And more than just visibility, as you saw, you have active protections to keep your information secure in real-time for the apps you build in-house and even shadow AI SaaS apps that people are using on your managed devices. - So for anyone who’s watching today right now, what do you recommend they do to get started? - So to get started on the things that we showed today, we’ve created end-to-end guidance for this that walks you through the entire process at aka.ms/ProtectAIapps; so that you can discover and control the generative AI cloud apps people are using now, build protections into the apps you’re building, and make sure that you have the visibility you need to detect and respond to AI-related threats. - Thanks, Rob, and, of course, to stay up-to-date with all the latest tech at Microsoft, be sure to keep checking back on Mechanics. Subscribe if you haven’t already, and we’ll see you again soon.1.5KViews1like0CommentsProtect data used in prompts with common AI apps | Microsoft Purview
Protect data while getting the benefits of generative AI with Microsoft Defender for Cloud Apps and Microsoft Purview. Safeguard against shadow IT risks with Microsoft Defender for Cloud Apps, unveiling hidden generative AI applications. Leverage Microsoft Purview to evaluate data exposure, automating policy enforcement for enhanced security. Ensure compliance with built-in data protections in Copilot for Microsoft 365, aligned with organizational policies set in Microsoft Purview, while maintaining trust and mitigating risks seamlessly across existing and future cloud applications. Erin Miyake, Microsoft Purview’s Principal Product Manager, shares how to take a unified approach to protecting your data. Block sensitive data from being used with generative AI. See how to use data loss prevention policies for content sensitivity in Microsoft Purview. Locate and analyze generative AI apps in use. Auto-block risky apps as they’re classified using updated risk assessments, eliminating the need to manually control allowed and blocked apps. See how it works. Create data loss prevention policies. Secure data for generative AI. Steps to get started in Microsoft Purview’s AI Hub. Watch our video here: QUICK LINKS: 00:00 — Secure your data for generative AI 01:16 — App level experiences 01:46 — Block based on data sensitivity 02:45 — Admin experience 03:57 — Microsoft Purview AI Hub 05:08 — Set up policies 05:53 — Tailor policies to your needs 06:35 — Set up AI Hub in Microsoft Purview 07:09 — Wrap Up Link References: For information on Microsoft Defender for Cloud Apps go to https://aka.ms/MDA Check out Microsoft Purview capabilities for AI go to https://aka.ms/PurviewAI/docs Watch our episode on Copilot for Microsoft 365 data protections at https://aka.ms/CopilotAdminMechanics Watch our episode about Data Loss Prevention policy options at https://aka.ms/DLPMechanics Unfamiliar with Microsoft Mechanics? As Microsoft’s official video series for IT, you can watch and share valuable content and demos of current and upcoming tech from the people who build it at Microsoft. Subscribe to our YouTube: https://www.youtube.com/c/MicrosoftMechanicsSeries Talk with other IT Pros, join us on the Microsoft Tech Community: https://techcommunity.microsoft.com/t5/microsoft-mechanics-blog/bg-p/MicrosoftMechanicsBlog Watch or listen from anywhere, subscribe to our podcast: https://microsoftmechanics.libsyn.com/podcast Keep getting this insider knowledge, join us on social: Follow us on Twitter: https://twitter.com/MSFTMechanics Share knowledge on LinkedIn: https://www.linkedin.com/company/microsoft-mechanics/ Enjoy us on Instagram: https://www.instagram.com/msftmechanics/ Loosen up with us on TikTok: https://www.tiktok.com/@msftmechanics Video Transcript: -Generative AI with large language models like GPT is fast becoming a central part of everyday app experiences. With hundreds of popular apps now available and growing. But do you know which generative AI apps are being adopted via shadow IT inside your organization? And if your sensitive data is at risk? -Today I am going to show you a unified approach to protecting your data while still getting the benefits of generative AI. With Microsoft Defender for Cloud Apps to help you quickly see what risky generative AI apps are in use and Microsoft Purview to assess your sensitive data exposure so that you can automate policy enforced protections based on data sensitivity and the AI app in use. -Now, this isn’t to say that there aren’t safe ways to take advantage of generative AI with work data right now. Copilot for Microsoft 365, for example, has the unique advantage of data protections built in that respect your organization’s data security and compliance needs. This is based on the policies you set in Microsoft Purview for your data in Microsoft 365. -That said, the challenge is in knowing which generative AI apps that people are using inside your organization to trust. What you want is to have policies where you can “set it and forget it” so that existing and future cloud apps are visible to IT. And if the risk thresholds you set are met, they’re blocked and audited. Let’s start with the user experience. Here, I’m on a managed device. I’m not signed in with a work account or connected to a VPN and I’m trying to access an AI app that is unsanctioned by my IT and security teams. -You’ll see that the Google Gemini app in this case, and this could be any app you choose, is blocked with a red SmartScreen page and a message for why it was blocked. This app level block is based on Microsoft Defender for Endpoint with Cloud App policies. More on that in a second. Beyond app level policies, let’s try something else. You can also act based on the sensitivity of the data being used with generative AI. For example, the copy and paste of sensitive work data from a managed device into a generative AI app. Let me show you. -I have a Word document open, which contains sensitive information on the left and on the right I have OpenAI’s ChatGPT web experience running, and I’m signed in using my personal account. This file is sensitive because it includes keywords we’ve flagged in data loss prevention policies for a confidential project named Obsidian. Let’s say I want to summarize the content from the confidential Word Doc. -I’ll start by selecting all the texts I want and copied it into my clipboard, but when I try to paste it into the prompt, you’ll see that I’m blocked and the reason why. This block was based on an existing data loss prevention policy for content sensitivity defined in Microsoft Purview, which we’ll explore in a moment. Importantly, these examples did not require that my device used a VPN with firewall controls to filter sites or IP addresses, and I didn’t have to use my working email account to sign into those generative AI apps for the protections to work. -So let’s switch gears to the admin perspective to see what you can do to find generative AI apps in use. To get started, you’ll run Cloud discovery and Microsoft Defender for cloud apps. It’s a process that can parse network traffic logs for most major providers to discover and analyze apps and use. Once you’ve uploaded your networking logs, analysis can take up to 24 hours. And that process then parses the traffic from your network logs and brings it together with Microsoft’s intelligent and continuously updated knowledge base of cloud apps. -The reports from your cloud discovery show you app categories, risk levels from visited apps, discovered apps with the most traffic, top entities, which can be users or IPs along with where various app headquarters locations are in the world. A lot of this information is easily filtered and there are links into categories, apps, and sub reports. In fact, I’ll click into generative AI here to filter on those discovered apps and find out which apps people are using. -From here, you can manually sanction or unsanction apps from the list, and you can create policies to automatically unsanction and block risky apps as they’re added to this category based on continuously updated risk assessments so that you don’t need to keep returning to the policy to manually add apps. Next, to protect high value sensitive information that’s where Microsoft Purview comes in. -And now with the new AI hub, it can even show you where sensitive information is used with AI apps. AI Hub gives you a holistic view of data security risks and Microsoft Copilot and in other generative AI assistants in use. It provides insights about the number of prompts sent to Microsoft Copilot experiences over time and the number of visits to other AI assistants. Below that is where you can see the total number of prompts with sensitive data across AI assistants used in your organization, and you can also see the sensitive information types being shared. -Additionally, there are charts that break down the number of users accessing AI apps by insider risk severity level, including Microsoft Copilot as well as other AI assistants in use. Insider risk severity levels for users reflect potentially risky activities and are calculated by insider risk management and Microsoft Purview. Next in the Activity Explorer, you’ll find a detailed view of the interactions with AI assistants, along with information about the sensitive information type, content labels, and file names. You can drill into each activity for more information with details about the sensitive information that was added to the prompt. -All of this detail super useful because it can help you fine tune your policies further. In fact, let’s take a look at how simple it is to set up policies. From the policies tab, you can easily create policies to get started. I’ll choose the fortify your data security for generative AI policy template. It’s designed to protect against unwanted content sharing with AI assistants. -You’ll see that this sets up built-in risk levels for Adaptive Protection. It also creates data loss prevention policies to prevent pasting or uploading sensitive information by users with an elevated risk level. This is initially configured in test mode, but as I’ll show you can edit this later, and if you don’t have labels already set up, default labels for content classification will be set up for you so that you can preserve document access rights in Copilot for Microsoft 365. -After you review the details, it’s just one click to create these policies. And as I mentioned, these policies are also editable once they’ve been configured, so you can tailor them to your needs. I’m in the DLP policy view and here’s the policy we just created in AI Hub. I’ll select it and edit the policy. To save time, I’ve gone directly to the advanced rules option, and I’ll edit the first one. -Now, I’ll add the sensitive info type we saw before. I’ll search for Obsidian, select it, and add. Now, if I save my changes, I can move to policy mode. Currently I’m in test mode, and when I’m comfortable with my configurations, I can select turn the policy on immediately, and within an hour the policy will be enforced. And for more information about data loss prevention policy options, check out our recent episode at aka.ms/DLPMechanics. -So that’s what AI Hub and Microsoft Purview can do. And if you’re wondering how to set it up for the first time, the good news is when you open AI Hub, once you have audit enabled, and if you have Copilot for Microsoft 365, you’ll already start to see analytics insights populated. Otherwise, once you turn on Microsoft Purview audit, it takes up to 24 hours to initiate. -Then you’ll want to install the Microsoft Purview browser extension to detect risky user activity and get insights into user interactions with other AI assistant. And onboard devices to Microsoft Purview to take advantage of endpoint DLP capabilities to protect sensitive data from being shared. So as I demonstrated today, the combination of both Microsoft Defender for Cloud Apps and Microsoft Purview gives you the visibility you need to detect risky AI apps in use with your sensitive data and enforce automated policy protections. -To learn more about implementing Microsoft Defender for Cloud Apps, go to aka.ms/MDA. To learn more about implementing Microsoft Purview capabilities for AI, go to aka.ms/PurviewAI/docs. And for a deeper dive on Copilot for Microsoft 365 protections, check out our recent episode at aka.ms/CopilotAdminMechanics. Of course, keep watching Microsoft Mechanics for the latest tech updates, and thanks for watching.7.9KViews1like0CommentsProtecting Public Data and Trust with Azure Security and Microsoft Entra – A State DOJ Case
On June 27, 2022 - California Department of Justice launched a new Firearms Dashboard Portal with altruistic intentions to “improve transparency and information sharing for firearms-related data” and “balance its duties to provide gun violence and firearms data to support research efforts while protecting the personal identifying information in the data the Department collects and maintains”. Fast forward less than 30 days, the Attorney General’s office is now being sued by two different parties, a national non-profit on July 1 and a group of four CA citizens on July 18 respectively. Both lawsuits are predicated on the assumption that the bold text was not upheld. The CA DOJ and the CA Attorney General are not alone in facing the three pressures that incited this particular incident. Citizens want greater transparency when it comes to community health data, criminal activity, and other politically impacted domains like firearm ownership. This data comes in many forms (databases, video footage, internal reports/memos, court documents, etc.) and resides on a broad array of digital locations. This data is of special interest to hackers looking to leverage Personal Identifiable Information (PII) for financial gain OR hacktivists desiring to expose a particular truth or perceived truth by leaking the information to the broader public.3.6KViews1like0CommentsEnterprise Grade Protection for Small & Medium Businesses | Microsoft Defender for Business
Specially built for businesses with up to 300 employees, go beyond traditional AV to proactively protect your devices, to help prevent attacks, and respond to sophisticated threats with the newly announced Microsoft Defender for Business.6KViews2likes0Comments