Securing multicloud (Azure, AWS & GCP) with Microsoft Defender for Cloud: Connector best practices
Many organizations run workloads across multiple cloud providers and need to maintain a strong security posture while ensuring interoperability. Microsoft Defender for Cloud is a cloud-native application protection platform (CNAPP) that helps secure these environments by providing unified visibility and protection for resources in AWS and GCP alongside Azure.

Planning for multicloud security with Microsoft Defender for Cloud

As customers adopt Microsoft Defender for Cloud in multicloud environments, Microsoft provides several resources to support planning, deployment, and scalable onboarding:

- Planning guides: the Multicloud Protection Planning Guide walks through key design considerations for securing multicloud environments with Microsoft Defender for Cloud.
- Deployment guides: Connect your Azure subscriptions - Microsoft Defender for Cloud.

With the right planning and adoption strategy, onboarding to Microsoft Defender for Cloud can be smooth and predictable. However, support cases show that some common challenges can still arise during or after onboarding AWS or GCP environments. Below, we walk through frequent multicloud scenarios, their symptoms, and recommended troubleshooting steps.

Common multicloud connector problems and how to resolve them

1. Problem: Removed cloud account still appears in Microsoft Defender for Cloud

The AWS or GCP account is deleted or removed from your organization, but it still appears under connected environments in Microsoft Defender for Cloud. Security recommendations for resources in the deleted account may also still show up on the recommendations page.

Cause

Microsoft Defender for Cloud does not automatically delete a cloud connector when the external account is removed. The security connector in Azure is a separate object that remains until it is explicitly removed. Microsoft Defender for Cloud isn't aware that the AWS/GCP side was decommissioned, because there's no automatic callback to Azure when an AWS account is closed.
Therefore, the connector and its last known data linger until they are manually removed.

Solution

Delete the connector to clean up the stale entry, using one of the following methods.

Option 1: Use the Azure portal
1. Sign in to the Azure portal.
2. Go to Microsoft Defender for Cloud > Environment settings.
3. Select the AWS account or GCP project that no longer exists.
4. Select Delete to remove the connector.

Option 2: Use the REST API
Delete the connector by using the REST API: Security Connectors - Delete - REST API (Azure Defender for Cloud).

Note: If a multicloud organization connector was set up and the organization was later decommissioned, or some accounts were removed, there may be several connectors to clean up. Start by deleting the organization's management account connector, then remove any remaining child connectors. Removing connectors in this order helps prevent leftover dependencies. For additional guidance, see: What you need to know when deleting and re-creating the security connector(s) in Defender for Cloud.

2. Problem: Identity provider is missing or partially configured

After running the AWS CloudFormation template, the connector setup fails. Microsoft Defender for Cloud shows the AWS environment in an error state because the identity link between Azure and AWS is not established. On the AWS side, the CloudFormation stack exists, but the required OIDC identity provider, or the IAM role trust policy that allows Microsoft Defender for Cloud to assume the role via web identity federation, is missing or misconfigured.

Cause

The AWS CloudFormation template doesn't match the correct Azure subscription or tenant. This can happen if:

- You were signed in to the wrong Azure directory when generating the template.
- You deployed the template to a different AWS account than intended.

In both cases, the Azure and AWS IDs won't align, and the connector setup will fail.

Solution

Step 1: Verify your Azure directory and subscription.
In the Azure portal, go to Directories + subscriptions and make sure the correct directory and subscription are selected before you set up the connector.

Step 2: Clean up the incorrect configuration. In AWS, delete the CloudFormation stack and any IAM roles or identity providers it created. In Microsoft Defender for Cloud, remove the failed connector from Environment settings.

Step 3: Re-create the connector. Follow the steps in Connect your Azure subscriptions - Microsoft Defender for Cloud to generate and deploy a new CloudFormation template using the correct Azure and AWS accounts.

Step 4: Verify the connection. After the connection succeeds, the AWS environment shows Healthy in Microsoft Defender for Cloud, and resources and recommendations begin appearing within about an hour.

3. Problem: Duplicate security connector prevents onboarding

When an AWS or GCP connector is added in Microsoft Defender for Cloud, onboarding fails with an error indicating that another connector with the same hierarchyId already exists. In the Azure portal, the environment shows Failed, and no resources appear in Microsoft Defender for Cloud.

Cause

Microsoft Defender for Cloud allows only one connector per cloud account within the same Microsoft Entra ID tenant. The hierarchyId uniquely identifies the cloud account (for example, an AWS account ID or a GCP project ID). If the account was previously onboarded in another Azure subscription within the same tenant, you can't onboard it again until the existing connector is removed.

Solution

Find and remove the existing connector, then retry onboarding.

Step 1: Identify the existing connector
1. Sign in to the Azure portal.
2. Go to Microsoft Defender for Cloud > Environment settings.
3. Check each subscription in the same tenant for a pre-existing AWS account or GCP project connector.
If you have access, you can also query Azure Resource Graph to locate existing connectors:

resources
| where type == "microsoft.security/securityconnectors"
| project name, location, properties.hierarchyIdentifier, tenantId, subscriptionId

Step 2: Remove the duplicate connector

Delete the connector that uses the same hierarchyId. Follow the steps outlined in the first troubleshooting scenario for deleting security connectors.

Step 3: Retry onboarding

After the connector is removed, add the AWS or GCP connector again in the target subscription. If the error persists, verify that all duplicate connectors were deleted and allow a short time for changes to propagate.

Conclusion

Microsoft Defender for Cloud supports a strong multicloud security strategy, but cloud security is an ongoing effort, and onboarding multicloud environments is only the first step. After onboarding, regularly review security recommendations, alerts, and compliance posture across all connected clouds. With the right configuration, Microsoft Defender for Cloud provides a single source of truth to maintain visibility and control as threats continue to evolve.

Further resources:
- Microsoft Defender for Cloud – Multicloud Security Planning Guide: start here to design your strategy for AWS/GCP integration, with guidance on prerequisites and best practices.
- Connect your AWS account - Microsoft Defender for Cloud.
- Connect your GCP project - Microsoft Defender for Cloud.
- Troubleshoot connectors guide - Microsoft Defender for Cloud.

If you have any questions, feel free to leave a comment below or reach out to us on X @MSFTSecSuppTeam.

How to Ingest Microsoft Intune Logs into Microsoft Sentinel
For many organizations using Microsoft Intune to manage devices, integrating Intune logs into Microsoft Sentinel is essential for security operations: it incorporates the device into the SIEM. By routing Intune's device management and compliance data into your central SIEM, you gain a unified view of endpoint events and can set up alerts on critical Intune activities, such as devices falling out of compliance or policy changes. This unified monitoring helps security and IT teams detect issues faster, correlate Intune events with other security logs for threat hunting, and improve compliance reporting. We're publishing these best practices to help unblock common customer challenges in configuring Intune log ingestion. In this step-by-step guide, you'll learn how to successfully send Intune logs to Microsoft Sentinel, so you can fully leverage Intune data for enhanced security and compliance visibility.

Prerequisites and Overview

Before configuring log ingestion, ensure the following prerequisites are in place:

- Microsoft Sentinel-enabled workspace: A Log Analytics workspace with Microsoft Sentinel enabled. For information about setting up a workspace and onboarding Microsoft Sentinel, see: Onboard Microsoft Sentinel - Log Analytics workspace overview. Microsoft Sentinel is now available in the Defender portal; connect your Microsoft Sentinel workspace to the Defender portal: Connect Microsoft Sentinel to the Microsoft Defender portal - Unified security operations.
- Intune Administrator permissions: You need appropriate rights to configure Intune diagnostic settings. For information, see: Microsoft Entra built-in roles - Intune Administrator.
- Log Analytics Contributor role: The account configuring diagnostics should have permission to write to the Log Analytics workspace. For more information on the different roles and what they can do, see Manage access to log data and workspaces in Azure Monitor.
- Intune diagnostic logging enabled: Ensure that Intune diagnostic settings are configured to send logs to Azure Monitor / Log Analytics, and that devices and users are enrolled in Intune so that relevant management and compliance events are generated. For more information, see: Send Intune log data to Azure Storage, Event Hubs, or Log Analytics.

Configure Intune to Send Logs to Microsoft Sentinel

1. Sign in to the Microsoft Intune admin center. Select Reports > Diagnostics settings. If this is your first visit, you may be prompted to "Turn on" diagnostic settings for Intune; enable it if so. Then click "+ Add diagnostic setting" to create a new setting.
2. Select Intune log categories. On the "Diagnostic setting" configuration page, give the setting a name (for example, "Microsoft Sentinel Intune Logs Demo"). Under Logs to send, you'll see checkboxes for each Intune log category. Select the categories you want to forward. For comprehensive monitoring, check AuditLogs, OperationalLogs, DeviceComplianceOrg, and Devices. The selected log categories will be sent to tables in the Microsoft Sentinel workspace.
3. Configure destination details. Under Destination details on the same page, select your Azure subscription, then select the Microsoft Sentinel workspace.
4. Save the diagnostic setting. After you click Save, the Microsoft Intune logs will be streamed to four tables in the Analytics tier. For pricing on the Analytics tier, see: Plan costs and understand pricing and billing.

Verify Data in Microsoft Sentinel

After configuring Intune to send diagnostic data to a Microsoft Sentinel workspace, it's crucial to verify that the Intune logs are successfully flowing into Microsoft Sentinel. You can do this by checking specific Intune log tables both in the Microsoft 365 Defender portal and in the Azure portal.
The key tables to verify are: IntuneAuditLogs, IntuneOperationalLogs, IntuneDeviceComplianceOrg, and IntuneDevices.

Microsoft 365 Defender portal (unified)

1. Open Advanced hunting: Sign in to https://security.microsoft.com (the unified portal) and navigate to Advanced hunting. This opens the unified query editor, where you can search across Microsoft Defender data and any connected Sentinel data.
2. Find the Intune tables: In the Advanced hunting Schema pane (on the left side of the query editor), scroll down past the Microsoft Sentinel tables. Under the LogManagement section, look for IntuneAuditLogs, IntuneOperationalLogs, IntuneDeviceComplianceOrg, and IntuneDevices in the list.

[Screenshot: Microsoft Sentinel in Defender portal - Tables]

Azure portal (Microsoft Sentinel)

1. Navigate to Logs: Sign in to https://portal.azure.com and open Microsoft Sentinel. Select your Sentinel workspace, then click Logs (under General).
2. Find the Intune tables: In the Logs query editor that opens, you'll see a schema/tables list on the left. If it's collapsed, click >> to expand it. Scroll down to find LogManagement and expand it; look for these Intune-related tables: IntuneAuditLogs, IntuneOperationalLogs, IntuneDeviceComplianceOrg, and IntuneDevices.

[Screenshot: Microsoft Sentinel in Azure portal - Tables]

Querying Intune log tables in Sentinel

Once the tables are present, use Kusto Query Language (KQL) in either portal to view and analyze Intune data.

In the Advanced hunting page of the Defender portal, ensure the query editor is visible (select New query if needed) and run a simple KQL query such as:

IntuneDevices | take 5

Click Run query to display sample Intune device records. If results are returned, it confirms that Intune data is being ingested successfully. Note that querying across Microsoft Sentinel data in the unified Advanced hunting view requires at least the Microsoft Sentinel Reader role.
In the Azure portal Logs blade, use the query editor to run a simple KQL query such as:

IntuneDevices | take 5

Select Run to view the results in a table showing sample Intune device data. If results appear, it confirms that your Intune logs are being collected successfully. You can select any record to view full event details and use KQL to further explore or filter the data, for example by querying IntuneDeviceComplianceOrg to identify devices that are not compliant, adjusting the query as needed.

Once Microsoft Intune logs are flowing into Microsoft Sentinel, the real value comes from transforming that raw device and audit data into actionable security signals. To achieve this, set up detection rules that continuously analyze the Intune logs and automatically flag any risky or suspicious behavior. In practice, this means creating custom detection rules in the Microsoft Defender portal (part of the unified XDR experience), see https://learn.microsoft.com/en-us/defender-xdr/custom-detection-rules, and scheduled analytics rules in Microsoft Sentinel (in either the Azure portal or the unified Defender portal interface), see: Create scheduled analytics rules in Microsoft Sentinel | Microsoft Learn. These detection rules continuously monitor your Intune telemetry, tracking device compliance status, enrollment activity, and administrative actions, and raise alerts whenever they detect suspicious or out-of-policy events. For example, you can be alerted if a large number of devices fall out of compliance, if an unusual spike in enrollment failures occurs, or if an Intune policy is modified by an unexpected account. Each alert generated by these rules becomes an incident in Microsoft Sentinel (and in the Defender portal's unified incident queue), enabling your security team to investigate and respond through the standard SOC workflow.
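To make the rule logic concrete before writing any KQL, here is a minimal Python sketch of the thresholding that such a scheduled rule performs over compliance records: count non-compliant reports per device inside a lookback window and flag devices above a threshold. The field names mirror the IntuneDeviceComplianceOrg columns used in this article; the sample records themselves are made up for illustration.

```python
from collections import Counter
from datetime import datetime, timedelta

def flag_noncompliant(records, now, lookback=timedelta(hours=24), threshold=3):
    """Flag devices reporting non-compliant more than `threshold` times in the window."""
    counts = Counter(
        r["DeviceName"]
        for r in records
        if r["ComplianceState"] != "Compliant"
        and now - r["TimeGenerated"] <= lookback
    )
    return {device: n for device, n in counts.items() if n > threshold}

now = datetime(2025, 1, 1, 12, 0)
records = [
    # LAPTOP-01 reports non-compliant five times in the last five hours
    {"DeviceName": "LAPTOP-01", "ComplianceState": "NonCompliant",
     "TimeGenerated": now - timedelta(hours=h)}
    for h in range(5)
] + [
    # LAPTOP-02 is compliant and should not be flagged
    {"DeviceName": "LAPTOP-02", "ComplianceState": "Compliant",
     "TimeGenerated": now - timedelta(hours=1)},
]

print(flag_noncompliant(records, now))  # LAPTOP-01 exceeds the threshold
```

The same grouping-and-threshold pattern is what the KQL detection logic shown later expresses with summarize and where.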
In turn, this converts raw Intune log data into high-value security insights: you'll achieve proactive detection of potential issues, faster investigation by pivoting on the enriched Intune data in each incident, and even automated response across your endpoints (for instance, by triggering playbooks or other automated remediation actions when an alert fires).

Use detection logic like the following to create a detection rule that flags devices repeatedly reporting as non-compliant:

IntuneDeviceComplianceOrg
| where TimeGenerated > ago(24h)
| where ComplianceState != "Compliant"
| summarize NonCompliantCount = count() by DeviceName, bin(TimeGenerated, 1h)
| where NonCompliantCount > 3

Additional tips: After confirming data ingestion and setting up alerts, you can leverage other Microsoft Sentinel features to get more value from your Intune logs. For example:

- Workbooks for visualization: Create custom workbooks to build dashboards for Intune data (or check whether community-contributed Intune workbooks are available). This can help you monitor device compliance trends and Intune activities visually.
- Hunting and queries: Use advanced hunting (KQL queries) to proactively search through Intune logs for suspicious activities or trends. The unified Defender portal's Advanced hunting page can query both Sentinel (Intune logs) and Defender data together, enabling correlation across Intune and other security data. For instance, you might join IntuneDevices data with Azure AD sign-in logs to investigate a device associated with risky sign-ins.
- Incident management: Leverage Sentinel's Incidents view (in the Azure portal) or the unified Incidents queue in Defender to investigate alerts triggered by your new rules. Incidents in Sentinel (whether created in the Azure or Defender portal) will appear in the connected portal, allowing your security operations team to manage Intune-related alerts just like any other security incident.
- Built-in rules and content: Remember that Microsoft Sentinel provides many built-in analytics rule templates and Content Hub solutions.
While there isn't a native pre-built Intune content pack as of now, you can use general Sentinel features to monitor Intune data.

Frequently Asked Questions

What should I check if I don't see logs in Sentinel? If you've set everything up but don't see logs, run through these checks:

1. Check diagnostic settings. Go to the Microsoft Intune admin center > Reports > Diagnostic settings. Make sure the setting is turned on and is sending the right log categories to the correct Microsoft Sentinel workspace.
2. Confirm the right workspace. Double-check that the correct Azure subscription and Microsoft Sentinel workspace are selected. If you have multiple tenants or directories, make sure you're in the right one.
3. Verify permissions. Confirm that your account has the roles listed in the prerequisites above.
4. Make sure logs are being generated. If no devices are enrolled or no actions have been taken, there may be nothing to log yet. Try enrolling a device or changing a policy to trigger logs.
5. Check your queries. Make sure you're querying the correct workspace and time range in Microsoft Sentinel. Try a direct query like: IntuneAuditLogs | take 5
6. Still nothing? Try deleting and re-adding the diagnostic setting. Most issues come down to permissions or selecting the wrong workspace.

How long are Intune logs retained, and how can I keep them longer? The Analytics tier keeps data in the interactive retention state for 90 days by default, extensible for up to two years. Interactive retention is more expensive than long-term retention, but it lets you query your data without restriction, with high performance and at no charge per query. For details, see: Log retention tiers in Microsoft Sentinel.

We hope this helps you successfully connect your resources and ingest Intune logs end to end into Microsoft Sentinel. If you have any questions, leave a comment below or reach out to us on X @MSFTSecSuppTeam!

Estimate Microsoft Sentinel Costs with Confidence Using the New Sentinel Cost Estimator
One of the first questions teams ask when evaluating Microsoft Sentinel is simple: what will this actually cost? Today, many customers and partners estimate Sentinel costs using the Azure Pricing Calculator, but it doesn't provide the Sentinel-specific usage guidance needed to understand how each Sentinel meter contributes to overall spend. As a result, it can be hard to produce accurate, trustworthy estimates, especially early on, when you may not know every input upfront. To make these conversations easier and budgets more predictable, Microsoft is introducing the new Sentinel Cost Estimator (public preview) for Microsoft customers and partners. The Sentinel Cost Estimator gives organizations better visibility into spend and more confidence in budgeting as they operate at scale.

You can access the Microsoft Sentinel Cost Estimator here: https://microsoft.com/en-us/security/pricing/microsoft-sentinel/cost-estimator

What the Sentinel Cost Estimator does

The new Sentinel Cost Estimator makes pricing transparent and predictable for Microsoft customers and partners. The Sentinel Cost Estimator helps you understand what drives costs at a meter level and ensures your estimates are accurate with step-by-step guidance. You can model multi-year estimates with built-in projections for up to three years, making it easy to anticipate data growth, plan for future spend, and avoid budget surprises as your security operations mature. Estimates can be easily shared with finance and security teams to support better budgeting and planning.
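As a rough illustration of the multi-year modeling described above, the sketch below compounds a daily ingestion figure by an assumed annual growth rate. Both inputs are hypothetical examples, and the estimator itself models far more than this (tiers, commitment levels, and benefits); this only shows why projecting growth matters when budgeting.

```python
def project_daily_ingestion(start_gb_per_day, annual_growth, years=3):
    """Daily ingestion (GB) at the start of each year, compounding yearly."""
    return [round(start_gb_per_day * (1 + annual_growth) ** y, 1)
            for y in range(years)]

# Example inputs: ~2 TB/day today, assumed 20% year-over-year data growth
print(project_daily_ingestion(2048, 0.20))
```

Even modest growth assumptions shift the later-year figures noticeably, which is why a flat single-year estimate tends to understate total spend.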
When to Use the Sentinel Cost Estimator

Use the Sentinel Cost Estimator to:

- Model ingestion growth over time as new data sources are onboarded
- Explore tradeoffs between Analytics and Data Lake storage tiers
- Understand the impact of retention requirements on total spend
- Estimate compute usage for notebooks and advanced queries
- Project costs across a multi-year deployment timeline

For broader Azure infrastructure cost planning, the Azure Pricing Calculator can still be used alongside the Sentinel Cost Estimator.

Cost Estimator Example

Let's walk through a practical example using the Cost Estimator. A medium-sized company that is new to Microsoft Sentinel wants a high-level estimate of expected costs. In their previous SIEM, they performed proactive threat hunting across identity, endpoint, and network logs; ran detections on high-security-value data sources from multiple vendors; built a small set of dashboards; and required three years of retention for compliance and audit purposes. Based on their prior SIEM, they estimate they currently ingest about 2 TB per day.

In the Cost Estimator, they select their region and enter their daily ingestion volume. As they are not currently using Sentinel data lake, they can explore different ways of splitting ingestion between tiers to understand the potential cost benefit of using the data lake. Their retention requirement is three years. If they choose to use Sentinel data lake, they can plan to retain 90 days in the Analytics tier (included with Microsoft Sentinel) and keep the remaining data in Sentinel data lake for the full three years.

As notebooks are new to them, they plan to evaluate notebooks for SOC workflows and graph building. They expect to start in the light usage tier and may move to medium as they mature. Since they occasionally query data older than 90 days to build trends, and anticipate using the Sentinel MCP server for SOC workflows on Sentinel lake data, they expect to start in the medium query volume tier.
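To see where this example's data volumes land, here is a simplified sketch of the 90-day Analytics / three-year data lake split. This is volume arithmetic only: it ignores compression and any mirroring of Analytics-tier data into the lake, and regional pricing must be applied separately.

```python
# Simplified volume math for the worked example: ~2 TB/day ingestion,
# three-year retention, 90 days kept in the Analytics tier (included
# with Microsoft Sentinel) and the remainder only in Sentinel data lake.
GB_PER_DAY = 2048          # ~2 TB/day, taken from the previous SIEM
RETENTION_DAYS = 3 * 365   # three-year compliance requirement
ANALYTICS_DAYS = 90        # Analytics-tier retention window

analytics_gb = GB_PER_DAY * ANALYTICS_DAYS                     # interactively queryable span
lake_only_gb = GB_PER_DAY * (RETENTION_DAYS - ANALYTICS_DAYS)  # long-tail span in the lake

print(f"Analytics-tier span: ~{analytics_gb:,} GB")
print(f"Lake-only span:      ~{lake_only_gb:,} GB")
```

The point the estimator makes visible: the vast majority of retained volume sits in the lower-cost lake tier, which is where the retention requirement drives spend.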
Note: These tiers are for estimation purposes only; they do not lock in pricing when using the Microsoft Sentinel platform. Because this customer is upgrading from Microsoft 365 E3 to E5, they may be eligible for free ingestion based on their user count. Combined with their eligible server data from Defender for Servers, this can reduce their billable ingestion.

In the review step, the Cost Estimator projects costs across a three-year window and breaks down drivers such as data tiers, commitment tiers, and comparisons with alternative storage options. From there, the customer can go back to earlier steps to adjust inputs and explore different scenarios. Once done, the estimate report can be exported for reference with Microsoft representatives and internal leadership when discussing the deployment of Microsoft Sentinel and the Sentinel platform.

Finalize Your Estimate with Microsoft

The Microsoft Sentinel Cost Estimator is designed to provide directional guidance and help organizations understand how architectural decisions may influence cost. Final pricing may vary based on factors such as deployment architecture, commitment tiers, and applicable discounts. We recommend working with your Microsoft account team or a Security sales specialist to develop a formal proposal tailored to your organization's requirements.

Try the Microsoft Sentinel Cost Estimator

Start building your Microsoft Sentinel cost estimate today: https://microsoft.com/en-us/security/pricing/microsoft-sentinel/cost-estimator

Accelerate Agent Development: Hacks for Building with Microsoft Sentinel data lake
As a Senior Product Manager / Developer Architect on the App Assure team working to bring Microsoft Sentinel and Security Copilot solutions to market, I interact with many ISVs building agents on Microsoft Sentinel data lake for the first time. I've written this article to walk you through one possible approach to agent development: the process I use when building sample agents internally at Microsoft. If you have questions about this, or other methods for building your agent, App Assure offers guidance through our Sentinel Advisory Service. Throughout this post, I include screenshots and examples from Gigamon's Security Posture Insight Agent.

This article assumes you have:

- An existing SaaS or security product with accessible telemetry.
- A small ISV team (2-3 engineers + 1 PM).
- A focus on a single high-value scenario for the first agent.

The Composite Application Model (What You Are Building)

When I begin designing an agent, I think end to end, from data ingestion requirements through agentic logic, following the composite application model, which consists of five layers:

1. Data sources: your product's raw security, audit, or operational data.
2. Ingestion: getting that data into Microsoft Sentinel.
3. Sentinel data lake & Microsoft Graph: normalization, storage, and correlation.
4. Agent: reasoning logic that queries data and produces outcomes.
5. End user: Security Copilot or SaaS experiences that invoke the agent.

This separation allows for evolving data ingestion and agent logic simultaneously. It also helps avoid downstream surprises that require going back and rearchitecting the entire solution.

Optional prerequisite: You are enrolled in the ISV Success Program, so you can earn Azure credits to provision Security Compute Units (SCUs) for Security Copilot agents.
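Purely as an illustration of the layer separation described above (none of these type or field names come from a Microsoft SDK), the five layers can be sketched as a typed record. Seeing the layers as independent fields makes the design point concrete: swapping the ingestion method or the end-user surface leaves the other layers untouched.

```python
from dataclasses import dataclass, replace

@dataclass
class CompositeAgentSolution:
    """Illustrative model of the five composite-application layers."""
    data_source: str  # your product's raw security/audit/operational telemetry
    ingestion: str    # e.g. "CCF" or "CCF Push"
    storage: str      # Sentinel data lake & Microsoft Graph
    agent: str        # reasoning logic that queries data and produces outcomes
    experience: str   # Security Copilot or SaaS surface that invokes the agent

solution = CompositeAgentSolution(
    data_source="ISV network telemetry",
    ingestion="CCF",
    storage="Sentinel data lake",
    agent="Posture insight agent",
    experience="Security Copilot",
)

# Changing one layer (here, the ingestion method) leaves the others intact.
push_variant = replace(solution, ingestion="CCF Push")
print(push_variant.ingestion)
```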
Phase 1: Data Ingestion Design & Implementation

Choose your ingestion strategy. The first choice I face when designing an agent is how the data is going to flow into my Sentinel workspace. Below are the two primary methods for ingestion.

Option A: Codeless Connector Framework (CCF). This is the best option for ISVs with REST APIs. To build a CCF solution, reference our documentation for getting started.

Option B: CCF Push (public preview). In this instance, an ISV pushes events directly to Sentinel via a CCF Push connector. Our MS Learn documentation is a great place to get started using this method.

Additional note: In the event you find that CCF does not support your needs, reach out to App Assure so we can capture your requirements for future consideration. Azure Functions remains an option if you've documented your CCF feature needs.

Phase 2: Onboard to Microsoft Sentinel data lake

Once my data is flowing into Sentinel, I onboard a single Sentinel workspace to data lake. This is a one-time action and cannot be repeated for additional workspaces.

Onboarding steps:

1. Go to the Defender portal.
2. Follow the Sentinel data lake onboarding instructions.
3. Validate that tables are visible in the lake. See Running KQL Queries in data lake for additional information.

Phase 3: Build and Test the Agent in Microsoft Foundry

Once my data is successfully ingested into data lake, I begin the agent development process. There are multiple ways to build agents depending on your needs and tooling preferences. For this example, I chose Microsoft Foundry because it fit my needs for real-time logging, cost efficiency, and greater control.

1. Create a Microsoft Foundry instance. Foundry is used as a tool for your development environment. Reference our QuickStart guide for setting up your Foundry instance. Required permissions: Security Reader (Entra or subscription) and Azure AI Developer at the resource group. After setup, click Create Agent.

2. Design the agent. A strong first agent solves one narrow security problem, has deterministic outputs, and uses explicit instructions rather than vague prompts. Example agent responsibilities: query Sentinel data lake (Sentinel data exploration tool); summarize recent incidents; correlate the ISV's specific signals with Sentinel alerts and other ISV tables (Sentinel data exploration tool).

3. Implement agent instructions. Well-designed agent instructions should include: a role definition ("You are a security investigation agent…"), the data sources the agent can access, step-by-step reasoning rules, and output format expectations. Sample instructions can be found here: Agent Instructions.

4. Configure the Microsoft Model Context Protocol (MCP) tooling for your agent. For your agent to query, summarize, and correlate all the data your connector has sent to data lake, select Tools, and under Catalog, type Sentinel, and then select Microsoft Sentinel Data Exploration. For more information about the data exploration tool collection in the MCP server, see our documentation.

I always test repeatedly with real data until outputs are consistent. For more information on testing and validating the agent, please reference our documentation.

Phase 4: Migrate the Agent to Security Copilot

Once the agent works in Foundry, I migrate it to Security Copilot. To do this:

- Copy the full instruction set from Foundry.
- Provision an SCU for your Security Copilot workspace. For instructions, please reference this documentation. Note that you will be charged per hour per SCU; once you are done testing, deprovision the capacity to prevent additional charges.
- Open Security Copilot and use the Create From Scratch Agent Builder as outlined here.
- Add the Sentinel data exploration MCP tools (the same ones the Foundry agent used in the previous step). For more information on linking the Sentinel MCP tools, please refer to this article.
Finally, paste and adapt the instructions. At this stage, I always validate the following:

- Agent permissions: confirm the agent has the necessary permissions to interact with the MCP tool and read data from your data lake instance.
- Agent performance: confirm a successful interaction with measured latency and benchmark results.

This step intentionally avoids reimplementation; I am reusing proven logic.

Phase 5: Execute, Validate, and Publish

After setting up my agent, I navigate to the Agents tab to manually trigger the agent. For more information on testing an agent, you can refer to this article. Once the agent has executed successfully, I download the agent manifest file from the environment so that it can be packaged: click View code on the agent under the Build tab, as outlined in this documentation.

Publishing to the Microsoft Security Store

If I were publishing my agent to the Microsoft Security Store, these are the steps I would follow:

1. Finalize ingestion reliability.
2. Document required permissions.
3. Define supported scenarios clearly.
4. Package agent instructions and guidance (by following these instructions).

Summary

Based on my experience developing Security Copilot agents on Microsoft Sentinel data lake, this playbook provides a practical, repeatable framework for ISVs to accelerate their agent development and delivery while maintaining high standards of quality. This foundation enables rapid iteration: future agents can often be built in days, not weeks, by reusing the same ingestion and data lake setup. When starting your own agent development journey, keep the following in mind: limit initial scope, reuse Microsoft-managed infrastructure, and separate ingestion from intelligence.

What Success Looks Like

At the end of this development process, you will have the following:

- A Microsoft Sentinel data connector live in Content Hub (or in process) that provides a data ingestion path.
- Data visible in data lake.
- A tested agent running in Security Copilot.
- Clear documentation for customers.

A key success factor I look for is clarity over completeness. A focused agent is far more likely to be adopted.

Need help? If you have any issues as you work to develop your agent, please reach out to the App Assure team for support via our Sentinel Advisory Service. Or if you have any other tips, please comment below; I'd love to hear your feedback.

Microsoft Defender for Cloud Customer Newsletter
What's new in Defender for Cloud?

Kubernetes gated deployment is now generally available for AKS automatic clusters. Use Helm to deploy the Defender for Containers sensor to use this feature. More information can be found here.

Grouped recommendations are being converted into individual ones that list each finding separately. While grouped recommendations are still available, the new individual recommendations are marked as preview and are not yet part of the Secure Score. This new format allows for better prioritization, actionable context, and better governance and tracking. For more details, please refer to this documentation.

Check out other updates from last month here! Check out monthly news for the rest of the MTP suite here!

Blogs of the month

In March, our team published the following blog posts we would like to share:

- Defending Container Runtime from Malware with Defender for Containers
- Modern Database Protection: From Visibility to Threat Detection with Defender for Cloud
- New innovations in Microsoft Defender to strengthen multi-cloud, containers, and AI model security
- Defending the AI Era: New Microsoft Capabilities to Protect AI

Defender for Cloud in the field

Revisit the malware automated remediation announcement now that this feature is generally available! Automated remediation for malware in storage. Visit our YouTube page.

GitHub Community

Check out the new Module 28 in the MDC Lab: Defending Container Runtime from Malware with Microsoft Defender for Containers. Visit our GitHub page.

Customer journey

Discover how other organizations successfully use Microsoft Defender for Cloud to protect their cloud workloads. This month we are featuring ManpowerGroup, a global workforce solutions leader that deployed Microsoft 365 E5 and Microsoft Security to modernize and future-proof its cybersecurity platform.
ManpowerGroup leverages Entra ID, Defender for Endpoint, Defender for Identity, Defender for Office 365, Defender for Cloud, Microsoft Security Copilot, and Purview to transform itself into an AI Frontier Firm.

Join our community!

We offer several customer connection programs within our private communities. By signing up, you can help us shape our products through activities such as reviewing product roadmaps, participating in co-design, previewing features, and staying up-to-date with announcements. Sign up at aka.ms/JoinCCP.

We greatly value your input on the types of content that enhance your understanding of our security products. Your insights are crucial in guiding the development of our future public content. We aim to deliver material that not only educates but also resonates with your daily security challenges. Whether it's through in-depth live webinars, real-world case studies, comprehensive best-practice guides in blogs, or the latest product updates, we want to ensure our content meets your needs. Please let us know which of these formats you find most beneficial, and whether there are specific topics you're interested in, at https://aka.ms/PublicContentFeedback.

Note: If you want to stay current with Defender for Cloud and receive updates in your inbox, please consider subscribing to our monthly newsletter: https://aka.ms/MDCNewsSubscribe

Modern Database Protection: From Visibility to Threat Detection with Microsoft Defender for Cloud
Databases sit at the heart of modern businesses. They support everyday apps, reports, and AI tools. For example, any time you sign in to a site that requires a username and password, there is a database at the back end that stores your login information. As organizations adopt multi-cloud and hybrid architectures, new databases are created all the time, producing database sprawl. As a result, tracking and managing every database, catching misconfigurations and vulnerabilities, and knowing where sensitive information lives all become increasingly difficult, leaving a huge security gap. And because companies store their most valuable data, such as login information, credit card numbers, and social security numbers, in databases, databases are a prime target for threat actors. Securing databases is no longer optional, yet getting started can feel daunting. Database security needs to address the gaps mentioned above: it must help organizations see their databases so they can monitor for misconfigurations, vulnerabilities, sensitive information, and any suspicious activity within the database that is indicative of an attack. Further, database security must meet customers where they are: in multi-cloud and hybrid environments. This five-part blog series will introduce and explore database-specific security needs and show how Defender for Cloud addresses these gaps through deep visibility into your database estate; detection of misconfigurations, vulnerabilities, and sensitive information; threat protection with alerts; and an integrated security platform to manage it all. This blog, part one, begins with an overview of today's database infrastructure security needs. Then we introduce Microsoft Defender for Cloud's unique database protection capabilities that help address this gap.

Modern Database Architectures and Their Security Implications

Modern databases can be deployed in two main ways: on your own infrastructure or as a cloud service.
In an on-premises or IaaS (Infrastructure as a Service) setup, you manage the underlying server or virtual machine. For example, running SQL Server on a self-managed Windows server, whether in your data center or on a cloud VM in Azure or AWS, is an IaaS deployment (Microsoft Defender for Cloud refers to these as "SQL servers on machines") that requires server maintenance. The other approach is PaaS (Platform as a Service), where a cloud provider manages the host server for you. In a PaaS scenario, you simply use a hosted database service (such as Azure SQL Database, Azure SQL Managed Instance, Azure Database for PostgreSQL, or Amazon RDS) without worrying about the operating system or server maintenance. In either case, you need to secure both the database host (the server or VM) and the database itself (the data and database engine). It's also important to distinguish between a database's control plane and data plane. The control plane includes the external settings that govern your database environment, like network firewall rules or who can access the system. The data plane involves the information and queries inside the database. An attacker might exploit a weak firewall setting on the control plane or use stolen credentials to run malicious queries on the data plane. To fully protect a database, you need visibility into both planes to catch suspicious behavior. Effective database protection must span both IaaS and PaaS environments and monitor both the control plane and the data plane, because they are common targets for threat actors. Security teams can then detect suspicious activity such as SQL injections, brute-force attempts, and lateral movement through your environment.
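To make the data-plane risk concrete, here is a small illustrative sketch of a SQL injection. It is not taken from Defender for Cloud, just a generic example using Python's built-in sqlite3 module, showing why string-built SQL is dangerous while a parameterized query is not:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
conn.execute("INSERT INTO users VALUES ('alice', 0), ('bob', 1)")

user_input = "alice' OR '1'='1"  # attacker-controlled value

# Vulnerable: the input is concatenated into the SQL text, so the
# injected OR clause becomes part of the query and matches every row.
vulnerable = conn.execute(
    "SELECT name FROM users WHERE name = '" + user_input + "'"
).fetchall()

# Safe: the driver passes the value as a bound parameter, so the whole
# string is compared literally and nothing matches.
safe = conn.execute(
    "SELECT name FROM users WHERE name = ?", (user_input,)
).fetchall()

print(len(vulnerable), len(safe))  # the vulnerable query returns both rows
```

The anomalous query shapes that attacks like the first one produce are exactly the kind of data-plane signal a database threat detection needs visibility into.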
Databases run across IaaS and PaaS, span control and data planes, and stretch across multiple clouds, yet protection is often pieced together from disconnected point solutions. Microsoft Defender for Cloud is a cloud-native application protection platform (CNAPP) solution that provides a unified, cloud-native approach to database protection, bringing together discovery, posture management, and threat detection across SQL (IaaS and PaaS), open-source relational databases (OSS), and Azure Cosmos DB. Defender for Cloud's database protection uses both agent-based and agentless solutions to protect database resources on-premises, in hybrid and multi-cloud environments, and in Azure. A lightweight agent-based solution is used for SQL servers on Azure virtual machines or virtual machines hosted outside Azure and allows for deeper inspection, while an agentless approach for managed databases in Azure or AWS RDS provides protection with seamless integration. Additionally, Defender for Cloud brings in other signals from the cloud environment, surfacing a secure score for security posture, an asset inventory, regulatory compliance, governance capabilities, and a cloud security graph that allows for proactive risk exploration. The value of database security in Defender for Cloud starts with pre- and post-breach visibility. Vulnerability assessment and data security posture management help security admins understand their database security posture, and by following Defender for Cloud's recommendations, security admins can harden their environment proactively. Vulnerability assessment scans surface remediation steps for configurations that do not follow industry best practices. These recommendations may include enabling encryption for data at rest where applicable, or restricting the public access ranges of a database server.
Data security posture management in Defender for Cloud automatically helps security admins prioritize the riskiest databases by discovering sensitive data and surfacing related exposure and risk. When databases are associated with certain risks, Defender for Cloud presents its findings in three ways: risk-based security recommendations, attack path analysis with Defender CSPM, and the data and AI dashboard. The risk level is determined by other context related to the resource, such as internet exposure or the presence of sensitive information. This way, security admins have a solid understanding of their database environment pre-breach and a prioritized list of resources to remediate based on risk or posture level. While we can do our best to harden the environment, breaches can still happen, and timely post-breach response is just as important. Threat detection capabilities within Defender for Cloud identify anomalous activity in near real time so SOC analysts can take action to contain the attack immediately. Defender for Cloud monitors both the control plane and the data plane for any anomalous activity that indicates a threat, from brute-force attack detections to access and query anomalies. To provide a unified security experience, Defender for Cloud natively integrates with the Microsoft Defender portal. The Defender portal brings in signals from Defender for Cloud to provide a single cloud-agnostic security experience, equipping security teams with tools like secure score for security posture, attack paths, and incidents and alerts. When anomalous activities occur in the environment, time is of the essence. Security teams must have the context and tools to investigate a database resource, in both the control plane and the data plane, to remediate and mitigate future attacks quickly.
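As a toy illustration only (Defender for Cloud's actual detection models and thresholds are not public), a brute-force detection can be thought of as flagging sources whose failed-login count exceeds a threshold within a sliding time window. All names and numbers below are made up for the example:

```python
from collections import defaultdict
from datetime import datetime, timedelta

# Hypothetical login events: (timestamp, source_ip, succeeded)
events = [
    (datetime(2025, 1, 1, 10, 0, s), "203.0.113.7", False) for s in range(50)
] + [
    (datetime(2025, 1, 1, 10, 5, 0), "198.51.100.4", True),
]

def flag_brute_force(events, window=timedelta(minutes=5), threshold=20):
    """Flag source IPs with more than `threshold` failed logins inside `window`."""
    failures = defaultdict(list)
    for ts, ip, ok in events:
        if not ok:
            failures[ip].append(ts)
    flagged = set()
    for ip, times in failures.items():
        times.sort()
        start = 0
        # Slide over the sorted timestamps, counting failures in the window.
        for end in range(len(times)):
            while times[end] - times[start] > window:
                start += 1
            if end - start + 1 > threshold:
                flagged.add(ip)
                break
    return flagged

print(flag_brute_force(events))  # only the noisy IP is flagged
```

Real detections layer far more context on top of simple thresholds (identity, geography, query content), but the windowed-anomaly idea is the same.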
Defender for Cloud and the Defender portal bring together a security ecosystem that allows SOC analysts to investigate, correlate activities and incidents with alerts, and contain and respond accordingly.

Take Action: Close the Database Blind Spot Today

Modern database environments demand more than isolated controls or point solutions. As databases span hybrid and multiple clouds, security teams need a unified approach that delivers visibility, context, and actionable protection where the data lives. Microsoft Defender for Cloud gives organizations visibility into all of their databases in a centralized Defender portal, using its unique control-plane and data-plane findings so that security teams can identify misconfigurations, prioritize them with cloud-context risk-based recommendations, or proactively identify other attack scenarios using attack path analysis, while SOC analysts investigate alerts and act quickly. Follow this story for part two, where we'll go into Defender for Cloud's unique visibility into database resources to find misconfiguration gaps, sensitive information exposure, and contextual risks that may exist in your environment.

Resources:

- Get started with Defender for Databases.
- Learn more about SQL vulnerability assessment.
- Learn more about Data Security Posture Management.
- Learn more about Advanced Threat Protection.

Reviewers: YuriDiogenes, lisetteranga, talberdah

Microsoft Defender for Cloud Customer Newsletter
Check out monthly news for the rest of the MTP suite here!

What's new in Defender for Cloud?

Now in public preview, Defender for Cloud provides threat protection for AI agents built with Foundry, as part of the Defender for AI Services plan. Learn more about this in our documentation.

Defender for Cloud's Defender for SQL on machines plan provides a simulated-alert feature to help validate deployment and let security teams test their detection, response, and automation workflows. For more details, please refer to this documentation. Check out other updates from last month here.

Blogs of the month

In February, our team published the following blog post we would like to share: Extending Defender's AI Threat Protection to Microsoft Foundry Agents

Defender for Cloud in the field

Revisit the announcement on the new Secure Score model and the enhancements available in the Defender portal: New Secure Score model and Defender portal enhancements

GitHub Community

Module 12 in Defender for Cloud's lab has been updated to include alert simulation! Database protection lab - module 12

Customer journey

Discover how other organizations successfully use Microsoft Defender for Cloud to protect their cloud workloads. This month we are featuring ContraForce, a cybersecurity startup that built its platform on Microsoft's security and AI ecosystem. While participating in the Microsoft for Startups Pegasus program, ContraForce addressed the problem of traditional, complex, and siloed security stacks by leveraging Microsoft Sentinel, Defender XDR, Entra ID, and Microsoft Foundry. ContraForce was able to deliver enterprise-grade protection at scale, without the enterprise-level overhead. As a result, it measured key outcomes like 90%+ incident automation, 93% lower cost per incident, and 60x faster incident response.

Join our community!

We offer several customer connection programs within our private communities.
By signing up, you can help us shape our products through activities such as reviewing product roadmaps, participating in co-design, previewing features, and staying up-to-date with announcements. Sign up at aka.ms/JoinCCP.

We greatly value your input on the types of content that enhance your understanding of our security products. Your insights are crucial in guiding the development of our future public content. We aim to deliver material that not only educates but also resonates with your daily security challenges. Whether it's through in-depth live webinars, real-world case studies, comprehensive best-practice guides in blogs, or the latest product updates, we want to ensure our content meets your needs. Please let us know which of these formats you find most beneficial, and whether there are specific topics you're interested in, at https://aka.ms/PublicContentFeedback.

Note: If you want to stay current with Defender for Cloud and receive updates in your inbox, please consider subscribing to our monthly newsletter: https://aka.ms/MDCNewsSubscribe

Data lake tier Ingestion for Microsoft Defender Advanced Hunting Tables is Now Generally Available
Today, we're excited to announce the general availability (GA) of data lake tier ingestion for Microsoft Defender XDR Advanced Hunting tables into Microsoft Sentinel data lake. Security teams continue to generate unprecedented volumes of high-fidelity telemetry across endpoints, identities, cloud apps, and email. While this data is essential for detection, investigation, and threat hunting, it also creates new challenges around scale, cost, and long-term retention. With this release, users can now ingest Advanced Hunting data from:

- Microsoft Defender for Endpoint (MDE)
- Microsoft Defender for Office 365 (MDO)
- Microsoft Defender for Cloud Apps (MDA)

directly into Sentinel data lake, without requiring ingestion into the Microsoft Sentinel analytics tier. Support for Microsoft Defender for Identity (MDI) Advanced Hunting tables will follow in the near future.

Supported Tables

This release enables data lake tier ingestion for Advanced Hunting data from:

- Defender for Endpoint (MDE) – DeviceInfo, DeviceNetworkInfo, DeviceProcessEvents, DeviceNetworkEvents, DeviceFileEvents, DeviceRegistryEvents, DeviceLogonEvents, DeviceImageLoadEvents, DeviceEvents, DeviceFileCertificateInfo
- Defender for Office 365 (MDO) – EmailAttachmentInfo, EmailEvents, EmailPostDeliveryEvents, EmailUrlInfo, UrlClickEvents
- Defender for Cloud Apps (MDA) – CloudAppEvents

Each source is ingested natively into Sentinel data lake, aligning with Microsoft's broader lake-centric security data strategy. As mentioned above, Microsoft Defender for Identity support will be available in the near future.

What's New with data lake Tier Ingestion

Until now, Advanced Hunting data was primarily optimized for near-real-time security operations and analytics. As users extend their detection strategies to include longer retention, retrospective analysis, AI-driven investigations, and cross-domain correlation, the need for a lake-first architecture becomes critical.
With data lake tier ingestion, Sentinel data lake becomes a must-have destination for XDR insights, enabling users to:

- Store high-volume Defender Advanced Hunting data efficiently at scale while reducing operational overhead
- Extend security analytics and data beyond traditional analytics lifespans for investigation, compliance, and threat research, with up to 12 years of retention
- Query data using KQL-based experiences across unified datasets with the KQL explorer, KQL jobs, and notebook jobs
- Integrate data with AI-driven tooling via the MCP server for quick, interactive insights into the environment
- Visualize threat landscapes and relational mappings while threat hunting with custom Sentinel graphs
- Decouple storage and retention decisions from real-time SIEM operations while building a more flexible and future-proof Sentinel architecture

Enabling Sentinel data lake Tier Ingestion for Advanced Hunting Tables

The ingestion pipeline for sending Defender Advanced Hunting data to Sentinel data lake leverages existing infrastructure and UI experiences. To enable Advanced Hunting tables for Sentinel data lake ingestion:

1. Within the Defender portal, expand the Microsoft Sentinel section in the left navigation.
2. Go to Configuration > Tables.
3. Find any of the tables listed above and select one.
4. In the side menu that opens, select Data Retention Settings.
5. Select the button next to 'Data lake tier' to set the table to ingest directly into Sentinel data lake.
6. Set the desired total retention for the data.
7. Click Save.

This configuration keeps Defender data in each Advanced Hunting table for 30 days, where it remains accessible to custom detections and queries, while a copy of the logs is sent to Sentinel data lake for use with custom graphs and the MCP server, with the option of retention up to 12 years.

Why data lake Tier Ingestion Matters

Built for Scale and Cost Efficiency

Advanced Hunting data is rich, and voluminous.
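The retention behavior described above can be sketched as a conceptual model. The snippet below is purely illustrative (it is not a Sentinel or Defender API, just a toy data model of the split between the 30-day Advanced Hunting window and the long-term data lake copy):

```python
from dataclasses import dataclass

ADVANCED_HUNTING_DAYS = 30           # data stays queryable in Advanced Hunting
MAX_TOTAL_RETENTION_DAYS = 12 * 365  # data lake copy can be kept up to 12 years

@dataclass
class TableRetention:
    table: str
    data_lake_tier: bool
    total_retention_days: int

    def validate(self) -> None:
        # Total retention must cover the Advanced Hunting window and
        # cannot exceed the 12-year maximum.
        if self.data_lake_tier and not (
            ADVANCED_HUNTING_DAYS <= self.total_retention_days <= MAX_TOTAL_RETENTION_DAYS
        ):
            raise ValueError("total retention must be between 30 days and 12 years")

    def queryable_in_advanced_hunting(self, age_days: int) -> bool:
        return age_days <= ADVANCED_HUNTING_DAYS

    def queryable_in_data_lake(self, age_days: int) -> bool:
        return self.data_lake_tier and age_days <= self.total_retention_days

# Example: DeviceLogonEvents kept in the data lake for two years.
cfg = TableRetention("DeviceLogonEvents", data_lake_tier=True, total_retention_days=2 * 365)
cfg.validate()
print(cfg.queryable_in_advanced_hunting(45), cfg.queryable_in_data_lake(45))
```

A 45-day-old event has aged out of the Advanced Hunting window but is still reachable in the data lake copy, which is exactly the layered-retention pattern this release enables.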
Sentinel data lake enables users to store this data using a lake-optimized model designed for high-volume ingestion and long-term analytical workloads, while making it easy to manage table tiers and usage.

A Foundation for Advanced Analytics

With Defender data co-located alongside other security and cloud signals, users can unlock:

- Cross-domain investigations across endpoint, identity, cloud, and email
- Retrospective hunting without re-ingestion
- AI-assisted analytics and large-scale pattern detection

Flexible Architecture for Modern Security Teams

Data lake tier ingestion supports a layered security architecture, where:

- Workspaces remain optimized for real-time detection and SOC workflows
- The data lake serves as the cost-effective and durable system for security telemetry

Users can choose the right level of ingestion depending on operational needs, without duplicating data paths or cost.

Designed to Work with Existing Sentinel and XDR Experiences

This GA release builds on Microsoft Sentinel's ongoing investment in unified data configuration and management:

- Native integration with Microsoft Defender XDR Advanced Hunting schemas
- Alignment with existing Sentinel data lake query and exploration experiences
- Consistent management alongside other first-party and third-party data sources
- Consistent experiences within the Defender portal

No changes are required to existing Defender deployments to begin using data lake tier ingestion.

Get started

To learn more about Microsoft Sentinel data lake and managing Defender XDR data within Sentinel, visit the Microsoft Sentinel documentation and explore how lake-based analytics can complement your existing security operations. We look forward to seeing how users use this capability to explore new detection strategies, perform deeper investigations, and build long-term security habits.