Ingesting Google Cloud Logs into Microsoft Sentinel: Native vs. Custom Architectures
Overview of GCP Log Types and SOC Value Modern Security Operations Centers (SOCs) require visibility into key Google Cloud Platform logs to detect threats and suspicious activities. The main log types include: GCP Audit Logs – These encompass Admin Activity, Data Access, and Access Transparency logs for GCP services. They record every administrative action (resource creation, modification, IAM changes, etc.) and access to sensitive data, providing a trail of who did what and when in the cloud. In a SOC context, audit logs help identify unauthorized changes or anomalous admin behavior (e.g. an attacker creating a new service account or disabling logging). They are essential for compliance and forensics, as they detail changes to configurations and access patterns across GCP resources. VPC Flow Logs – These logs capture network traffic flow information at the Virtual Private Cloud (VPC) level. Each entry typically includes source/destination IPs, ports, protocol, bytes, and an allow/deny action. In a SOC, VPC flow logs are invaluable for network threat detection: they allow analysts to monitor access patterns, detect port scanning, identify unusual internal traffic, and profile ingress/egress traffic for anomalies. For example, a surge in outbound traffic to an unknown IP or lateral movement between VMs can be spotted via flow logs. They also aid in investigating data exfiltration and verifying network policy enforcement. Cloud DNS Logs – Google Cloud DNS query logs record DNS requests/responses from resources, and DNS audit logs record changes to DNS configurations. DNS query logs are extremely useful in threat hunting because they can reveal systems resolving malicious domain names (C2 servers, phishing sites, DGA domains) or performing unusual lookups. Many malware campaigns rely on DNS; having these logs in Sentinel enables detection of known bad domains and anomalous DNS traffic patterns. DNS audit logs, on the other hand, track modifications to DNS records (e.g. newly added subdomains or changes in IP mappings), which can indicate misconfigurations or potential domain hijacking attempts. Together, these GCP logs provide comprehensive coverage: audit logs tell what actions were taken in the cloud, while VPC and DNS logs tell what network activities are happening. Ingesting all three into Sentinel gives a cloud security architect visibility to detect unauthorized access, network intrusions, and malware communication in a GCP environment. Native Microsoft Sentinel GCP Connector: Architecture & Setup Microsoft Sentinel offers native data connectors to ingest Google Cloud logs, leveraging Google’s Pub/Sub messaging for scalable, secure integration. The native solution is built on Sentinel’s Codeless Connector Framework (CCF) and uses a pull-based architecture: GCP exports logs to Pub/Sub, and Sentinel’s connector pulls from Pub/Sub into Azure. This approach avoids custom code and uses cloud-native services on both sides. Supported GCP Log Connectors: Out of the box, Sentinel provides connectors for: GCP Audit Logs – Ingests the Cloud Audit Logs (admin activity, data access, transparency). GCP Security Command Center (SCC) – Ingests security findings from Google SCC for threat and vulnerability management. GCP VPC Flow Logs – (Recently added) Ingests VPC network flow logs. GCP Cloud DNS Logs – (Recently added) Ingests Cloud DNS query logs and DNS audit logs. 
Others – Additional connectors exist for specific GCP services (Cloud Load Balancer logs, Cloud CDN, Cloud IDS, GKE, IAM activity, etc.) via CCF, expanding the coverage of GCP telemetry in Sentinel. Each connector typically writes to its own Log Analytics table (e.g. GCPAuditLogs, GCPVPCFlow, GCPDNS, etc.) and comes with built-in KQL parsers. Architecture & Authentication: The native connector uses Google Pub/Sub as the pipeline for log delivery. On the Google side, you will set up a Pub/Sub Topic that receives the logs (via Cloud Logging exports), and Sentinel will subscribe to that topic. Authentication is handled through Workload Identity Federation (WIF) using OpenID Connect: instead of managing static credentials, you establish a trust between Azure AD and GCP so that Sentinel can impersonate a GCP service account. The high-level architecture is: GCP Cloud Logging (logs from services) → Log Router (export sink) → Pub/Sub Topic → (Secure pull over OIDC) → Azure Sentinel Data Connector → Log Analytics Workspace. This ensures a secure, keyless integration. The Azure side (Sentinel) authenticates as a Google service account via OIDC tokens issued by Azure AD, which GCP trusts through the Workload Identity Provider. Below are the detailed setup steps: GCP Setup (Publishing Logs to Pub/Sub) Enable Required APIs: Ensure the GCP project hosting the logs has the IAM API and Cloud Resource Manager API enabled (needed for creating identity pools and roles). You’ll also need owner/editor access on the project to create the resources below. Create a Workload Identity Pool & Provider: In Google Cloud IAM, create a new Workload Identity Pool (e.g. named “Azure-Sentinel-Pool”) and then a Workload Identity Provider within that pool that trusts your Azure AD tenant. Google provides a template for Azure AD OIDC trust – you’ll supply your Azure tenant ID and the audience and issuer URIs that Azure uses. For Azure Commercial, the issuer is typically https://sts.windows.net/<TenantID>/ and audience api://<some-guid> as documented. (Microsoft’s documentation or Terraform scripts provide these values for the Sentinel connector.) Create a Service Account for Sentinel: Still in GCP, create a dedicated service account (e.g. email address removed for privacy reasons). This account will be what Sentinel “impersonates” via the WIF trust. Grant this service account two key roles: Pub/Sub Subscriber on the Pub/Sub Subscription that will be created (allows pulling messages). You can grant roles/pubsub.subscriber at the project level or on the specific subscription. Workload Identity User on the Workload Identity Pool. In the pool’s permissions, add a principal of the form principalSet://iam.googleapis.com/projects/<WIF_project_number>/locations/global/workloadIdentityPools/<Pool_ID>/* and grant it the role roles/iam.workloadIdentityUser on your service account. This allows the Azure AD identity to impersonate the GCP service account. Note: GCP best practice is often to keep the identity pool in a centralized project and service accounts in separate projects, but as of late 2023 the Sentinel connector UI expected them in one project (a limitation under review). It’s simplest to create the WIF pool/provider and the service account within the same GCP project to avoid connectivity issues (unless documentation confirms support for cross-project). Create a Pub/Sub Topic and Subscription: Open the GCP Pub/Sub service and create a Topic (for example, projects/yourproject/topics/sentinel-logs). 
It’s convenient to dedicate one topic per log type or use case (e.g. one for audit logs). As you create the topic, you can add a Subscription to it in Pull mode (since Sentinel will pull messages). Use a default subscription with an appropriate name (e.g. sentinel-audit-sub). You can leave the default settings (ack deadline, retention) as is, or extend retention if you want messages to persist longer in case of downtime (default is 7 days). Create a Logging Export (Sink): In GCP Cloud Logging, set up a Log Sink to route the desired logs into the Pub/Sub topic. Go to Logging > Logs Router and create a sink: Give it a descriptive name (e.g. audit-logs-to-sentinel). Choose Cloud Pub/Sub as the Destination and select the topic you created (or use the format pubsub.googleapis.com/projects/yourproject/topics/sentinel-logs). Scope and filters: Decide which logs to include. For Audit Logs, you might include ALL audit logs in the project (the sink can be set to include admin activity, data access, etc., by default for the whole project or even entire organization if created at org level). For other log types like VPC Flow Logs or DNS, you’d set an inclusion filter for those specific log names (e.g. logName:"compute.googleapis.com/vpc_flows" to capture VPC Flow Logs). You can also create organization-level sinks to aggregate logs from all projects. Permissions: When creating a sink, GCP will ask to grant the sink service account publish rights to the Pub/Sub topic. Accept this so logs can flow. Once the sink is created, verify logs are flowing: in Pub/Sub > Subscriptions, you can “Pull” messages manually to see if any logs appear. Generating a test event (e.g., create a VM to produce an audit log, or make a DNS query) can help confirm. At this point, GCP is set up to export logs. All requisite GCP resources (IAM federation, service account, topic/subscription, sink) are ready. Google also provides Terraform scripts (and Microsoft supplies Terraform templates in their GitHub) to automate these steps. Using Terraform, you can stand up the IAM and Pub/Sub configuration quickly if comfortable with code. Azure Sentinel Setup (Connecting the GCP Connector) With GCP publishing logs to Pub/Sub, configure Sentinel to start pulling them: Install the GCP Solution: In the Azure portal, navigate to your Sentinel workspace. Under Content Hub (or Data Connectors), find Google Cloud Platform Audit Logs (for example) and click Install. This deploys any needed artifacts (the connector definition, parser, etc.). Repeat for other GCP solutions (like GCP VPC Flow Logs or GCP DNS) as needed. Open Data Connector Configuration: After installation, go to Data Connectors in Sentinel, search for “GCP Pub/Sub Audit Logs” (or the relevant connector), and select it. Click Open connector page. In the connector page, click + Add new (or Add new collector) to configure a new connection instance. Enter GCP Parameters: A pane will prompt for details to connect: you need to supply the Project ID (of the GCP project where the Pub/Sub lives), the Project Number, the Topic and Subscription name, and the Service Account Email you created. You’ll also enter the Workload Identity Provider ID (the identifier of the WIF provider, usually in format projects/<proj>/locations/global/workloadIdentityPools/<pool>/providers/<provider>). All these values correspond to the GCP resources set up earlier – the UI screenshot in docs shows sample placeholders. 
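To illustrate the shape of these inputs, here is a hedged example of the values the connector pane asks for. Every name and ID below is a hypothetical placeholder; substitute the project, pool, provider, topic, subscription, and service account you actually created.

```python
# Hypothetical placeholder values for the Sentinel GCP connector pane.
# None of these are real resources; replace each with your own.
gcp_connector_params = {
    "project_id": "my-gcp-project",              # the project ID (name), not the number
    "project_number": "123456789012",            # the numeric project number
    "topic": "sentinel-logs",
    "subscription": "sentinel-audit-sub",
    "service_account_email": "sentinel-reader@my-gcp-project.iam.gserviceaccount.com",
    "workload_identity_provider_id": (
        "projects/123456789012/locations/global/workloadIdentityPools/"
        "azure-sentinel-pool/providers/azure-sentinel-provider"
    ),
}

for name, value in gcp_connector_params.items():
    print(f"{name}: {value}")
```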
Make sure there are no typos (a common error is mixing up project ID (name) with project number, or using the wrong Tenant ID). Data Collection Rule (DCR): The connector may also ask for a Data Collection Rule (DCR) and DCE (Data Collection Endpoint) names. Newer connectors based on CCF use the Log Ingestion API, so behind the scenes a DCR is used. If required, provide a name (the docs often suggest prefixing with “Microsoft-Sentinel-” e.g., Microsoft-Sentinel-GCPAuditLogs-DCR). The system will create the DCR and a DCE for you if not already created. (If you installed via Content Hub, this is often automated – just ensure the names follow any expected pattern.) Connect: Click Connect. Sentinel will attempt to use the provided info to establish connectivity. It performs checks like verifying the subscription exists and that the service account can authenticate. If everything is set up properly, the connector will connect and start streaming data. In case of an error, you’ll receive a detailed message. For example, an error about WIF Pool ID not found or subscription not found indicates an issue in the provided IDs or permissions. Double-check those values if so. Validation: After ~15-30 minutes, verify that logs are arriving. You can run a Log Analytics query on the new table, for example: GCPAuditLogs | take 5 (for audit logs) or GCPVPCFlow | take 5 for flow logs. You should see records if ingestion succeeded. Sentinel also provides a “Data connector health” feature – enable it to get alerts or status on data latency and volume for this connector. Data Flow and Ingestion: Once connected, the system works continuously and serverlessly: GCP Log Router pushes new log entries to Pub/Sub as they occur. The Sentinel connector (running in Azure’s cloud) polls the Pub/Sub subscription. It uses the service account credentials (via OIDC token) to call the Pub/Sub API and pull messages in batches. This happens at a defined interval (typically very frequently, e.g. every few seconds). Each message (which contains a log entry in JSON) is then ingested into Log Analytics. The CCF connector uses the Log Ingestion API on the backend, mapping the JSON fields to the appropriate table schema. The logs appear in the respective table (with columns for each JSON field or a dynamic JSON column, depending on the connector design). Sentinel’s built-in parser or Normalized Schemas (ASIM) can be used to query these logs in a friendly way. For instance, the Audit Logs solution includes KQL functions to parse out common fields like user, method, status, etc., from the raw JSON. This native pipeline is fully managed – you don’t have to run any servers or code. The use of Pub/Sub and OIDC makes it scalable and secure by design. Design Considerations & Best Practices for the Native Connector: Scalability & Performance: The native connector approach is designed for high scale. Pub/Sub itself can handle very large log volumes with low latency. The Sentinel CCF connectors use a SaaS, auto-scaling model – no fixed infrastructure means they will scale out as needed to ingest bursts of data. This is a significant advantage over custom scripts or function apps which might need manual scaling. In testing, the native connector can reliably ingest millions of log events per day. If you anticipate extremely high volumes (e.g. VPC flow logs from hundreds of VMs), monitor the connector’s performance but it should scale automatically. Reliability: By leveraging Pub/Sub’s at-least-once delivery, the integration is robust. 
Even if Azure or the connector has a transient outage, log messages will buffer in Pub/Sub. Once the connector resumes, it will catch up on the backlog. Ensure the subscription's message retention is adequate (the default 7 days is usually fine). The connector acknowledges messages only after they're ingested into Log Analytics, which prevents data loss on failures. This reliability is achieved without custom code – reducing the chance of bugs. Still, it's good practice to use Sentinel's connector health metrics to catch any issues (e.g., if the connector hasn't pulled data in X minutes, indicating a problem). Security: The elimination of persistent credentials is a best practice. By using Workload Identity Federation, the Azure connector obtains short-lived tokens to act as the GCP service account. There is no need to store a GCP service account key file, which could be a leak risk. Ensure that the service account has the minimal roles needed. Typically, it does not need broad viewer roles on all GCP resources – it just needs Pub/Sub subscription access (and logging viewer only if you choose to restrict log export by IAM – usually not necessary since the logs are already exported via the sink). Keep the Azure AD application's access limited too: the Azure AD app (which underpins the Sentinel connector) only needs to access the Sentinel workspace and doesn't need rights in GCP – the trust is handled by the WIF provider. Filtering and Log Volume Management: A common best practice is to filter GCP logs at the sink to avoid ingesting superfluous data. For instance, if only certain audit log categories are of interest (e.g., Admin Activity and security-related Data Access), you could exclude noisy Data Access logs like storage object reads. For VPC Flow Logs, you might filter on specific subnetworks or even specific metadata (though typically you'd ingest all flows and use Sentinel for filtering). Google's sink filters allow you to use boolean expressions on log fields. The community recommendation for Firewall or VPC logs, for example, is to set a filter so that only those logs go into that subscription. This reduces cost and noise in Sentinel. Plan your log sinks carefully: you may create multiple sinks if you want to separate log types (one sink to Pub/Sub for audit logs, another sink (with its own topic) for flow logs, etc.). The Sentinel connectors are each tied to one subscription and one table, so separating by log type can help manage parsing and access. Coverage Gaps: Check what the native connectors support as of the current date. Microsoft has been rapidly adding GCP connectors (VPC Flow Logs and DNS logs were in Preview in mid-2025 and are likely GA by now). If a needed log type is not supported (for example, if you have a custom application writing logs to Google Cloud Logging), you might consider the custom ingestion approach (see next section). For most standard infrastructure logs, the native route is available and preferable. Monitoring and Troubleshooting: Familiarize yourself with the connector's status in Azure. In the Sentinel UI, each configured GCP connector instance will show a status (Connected/Warning/Error) and possibly a last-received timestamp. If there's an issue, gather details from the error messages there. On GCP, monitor the Pub/Sub subscription's backlog: the unacked (undelivered) message count is shown on the subscription's page in the Cloud console and is exposed in Cloud Monitoring as the subscription/num_undelivered_messages metric. A healthy system should have near-zero backlog with steady consumption.
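If you would rather check that backlog from a script than from the console, a small query against Cloud Monitoring can read the undelivered-message metric. A minimal sketch, assuming the google-cloud-monitoring package, Application Default Credentials, and the hypothetical project and subscription names used earlier:

```python
import time
from google.cloud import monitoring_v3

client = monitoring_v3.MetricServiceClient()
now = int(time.time())
interval = monitoring_v3.TimeInterval(
    {"end_time": {"seconds": now}, "start_time": {"seconds": now - 600}}  # last 10 minutes
)

# Read the Pub/Sub backlog metric for the Sentinel subscription.
series = client.list_time_series(
    request={
        "name": "projects/my-gcp-project",   # hypothetical project ID
        "filter": (
            'metric.type = "pubsub.googleapis.com/subscription/num_undelivered_messages" '
            'AND resource.labels.subscription_id = "sentinel-audit-sub"'
        ),
        "interval": interval,
        "view": monitoring_v3.ListTimeSeriesRequest.TimeSeriesView.FULL,
    }
)

for ts in series:
    # Points are returned newest first; points[0] is the latest sample.
    print("Undelivered messages:", ts.points[0].value.int64_value)
```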
If backlog is growing, it means the connector isn’t keeping up or isn’t pulling – check Azure side for throttling or errors. Multi-project or Org-wide ingestion: If your organization has many GCP projects, you have options. You could deploy a connector per project, or use an organization-level log sink in GCP to funnel logs from all projects into a single Pub/Sub. The Sentinel connector can pull organization-wide if the service account has rights (the Terraform script allows an org sink by providing an organization-id). This centralizes management but be mindful of very large volumes. Also, ensure the service account has visibility on those logs (usually not an issue if they’re exported; the sink’s own service account handles the export). In summary, the native GCP connector provides a straightforward and robust way to get Google Cloud logs into Sentinel. It’s the recommended approach for supported log types due to its minimal maintenance and tight integration. Custom Ingestion Architecture (Pub/Sub to Azure Event Hub, etc.) In cases where the built-in connector doesn’t meet requirements – e.g., unsupported log types, custom formats, or corporate policy to use an intermediary – you can design a custom ingestion pipeline. The goal of custom architectures is the same (move logs from GCP to Sentinel) but you can incorporate additional processing or routing. One reference pattern is: GCP Pub/Sub → Azure Event Hub → Sentinel, which we’ll use as an example among other alternatives. GCP Export (Source): This part remains the same as the native setup – you create log sinks in GCP to continuously export logs to Pub/Sub topics. You can reuse what you’ve set up or create new, dedicated Pub/Sub topics for the custom pipeline. For instance, you might have a sink for Cloud DNS query logs if the native connector wasn’t used, sending those logs to a topic. Ensure you also create subscriptions on those topics for your custom pipeline to pull from. If you plan to use a GCP-based function to forward data, a Push subscription could be used instead (which can directly call an HTTP endpoint), but a Pull model is more common for custom solutions. Bridge / Transfer Component: This is the core of the custom pipeline – a piece of code that reads from Pub/Sub and sends data to Azure. Several implementation options: Google Cloud Function or Cloud Run (in GCP): You can deploy a Cloud Function that triggers on new Pub/Sub messages (using Google’s EventArc or a Pub/Sub trigger). This function will execute with the message as input. Inside the function, you would parse the Pub/Sub message and then forward it to Azure. This approach keeps the “pull” logic on the GCP side – effectively GCP pushes to Azure. For example, a Cloud Function (Python) could be subscribed to the sentinel-logs topic; each time a log message arrives, the function runs, authenticates to Azure, and calls the ingestion API. Cloud Functions can scale out automatically based on the message volume. Custom Puller in Azure (Function App or Container): Instead of running the bridging code in GCP, you can run it in Azure. For instance, an Azure Function with a timer trigger (running every minute) or an infinite-loop container in Azure Kubernetes Service could use Google’s Pub/Sub client library to pull messages from GCP. You would provide it the service account credentials (likely a JSON key) to authenticate to the Pub/Sub pull API. After pulling a batch of messages, it would send them to the Log Analytics workspace. 
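As a concrete illustration of that Azure-side puller, the sketch below uses Google's Pub/Sub client library with a service account key to pull and acknowledge a batch of messages. The key path, project, and subscription names are placeholders, and the forwarding step is left as a stub; a real function would send the decoded JSON on to Event Hub or the Logs Ingestion API before acknowledging.

```python
from google.cloud import pubsub_v1
from google.oauth2 import service_account

# Hypothetical names -- adjust to your own key location, project, and subscription.
KEY_PATH = "/secrets/gcp-sentinel-reader.json"   # e.g. mounted from Azure Key Vault
PROJECT_ID = "my-gcp-project"
SUBSCRIPTION = "sentinel-audit-sub"

credentials = service_account.Credentials.from_service_account_file(KEY_PATH)
subscriber = pubsub_v1.SubscriberClient(credentials=credentials)
sub_path = subscriber.subscription_path(PROJECT_ID, SUBSCRIPTION)

# Synchronous pull: grab up to 100 messages per invocation (e.g. per timer trigger).
response = subscriber.pull(request={"subscription": sub_path, "max_messages": 100})

ack_ids = []
for received in response.received_messages:
    raw_json = received.message.data.decode("utf-8")   # client library returns the payload as bytes
    # ... forward raw_json to Azure here (Event Hub or Logs Ingestion API) ...
    ack_ids.append(received.ack_id)

# Acknowledge only what was successfully forwarded so unsent messages are redelivered.
if ack_ids:
    subscriber.acknowledge(request={"subscription": sub_path, "ack_ids": ack_ids})
```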
This approach centralizes everything in Azure but requires managing GCP credentials securely in Azure. Using Google Cloud Dataflow (Apache Beam): For a heavy-duty streaming solution, you could write an Apache Beam pipeline that reads from Pub/Sub and writes to an HTTP endpoint (Azure). Google Dataflow runs Beam pipelines in a fully managed way and can handle very large scale with exactly-once processing. However, this is a complex approach unless you already use Beam – it’s likely overkill for most cases and involves significant development. No matter which method, the bridge component must handle reading, transforming, and forwarding logs efficiently. Destination in Azure: There are two primary ways to ingest the data into Sentinel: Azure Log Ingestion API (via DCR) – This is the modern method (introduced in 2022) to send custom data to Log Analytics. You’ll create a Data Collection Endpoint (DCE) in Azure and a Data Collection Rule (DCR) that defines how incoming data is routed to a Log Analytics table. For example, you might create a custom table GCP_Custom_Logs_CL for your logs. Your bridge component will call the Log Ingestion REST API endpoint (which is a URL associated with the DCE) and include a shared access signature or Azure AD token for auth. The payload will be the log records (in JSON) and the DCR rule ID to apply. The DCR can also perform transformations if needed (e.g., mappings of fields). This API call will insert the data into Log Analytics in real-time. This approach is quite direct and is the recommended custom ingestion method (it replaces the older HTTP Data Collector API). Azure Event Hub + Sentinel Connector – In this approach, instead of pushing directly into Log Analytics, you use an Event Hub as an intermediate buffer in Azure. Your GCP bridge will act as a producer, sending each log message to an Event Hub (over AMQP or using Azure’s SDK). Then, you need something to get data from Event Hub into Sentinel. There are a couple of options: Historically, Sentinel provided an Event Hub data connector (often used for Azure Activity logs or custom CEF logs). This connector can pull events from an Event Hub and write to Log Analytics. However, it typically expects the events to be in a specific format (like CEF or JSON with a known structure). If your logs are raw JSON, you might need to wrap them or use a compatible format. This method is somewhat less flexible unless you tailor your output to what Sentinel expects. Alternatively (and more flexibly), you can write a small Azure Function that triggers on the Event Hub (using Event Hub trigger binding). When a message arrives, the function takes it and calls the Log Ingestion API (similar to method (a) above) to put it into Log Analytics. Essentially, this just decouples the pulling from GCP (done by the first function) and the pushing to Sentinel (done by the second function). This two-stage design might be useful if you want to do more complex buffering or retries, but it does introduce more components. Using an Event Hub in the pipeline can be beneficial if you want a cloud-neutral queue between GCP and Azure (maybe your organization already consolidates logs in an Event Hub or Kafka). It also allows reusing any existing tools that read off Event Hubs (for example, maybe feeding the same data to another system in parallel to Sentinel). 
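If you choose the Event Hub route, the producer side of the bridge might look like the following sketch, using the azure-eventhub SDK. The connection string and hub name are placeholders, and the input is assumed to be the list of decoded JSON strings pulled from Pub/Sub.

```python
from azure.eventhub import EventData, EventHubProducerClient

# Hypothetical connection details; keep the real connection string in Key Vault.
EVENT_HUB_CONN_STR = "Endpoint=sb://example.servicebus.windows.net/;SharedAccessKeyName=send;SharedAccessKey=<key>"
EVENT_HUB_NAME = "gcp-logs"


def forward_to_event_hub(raw_json_messages):
    """Send a list of JSON log strings (already pulled from Pub/Sub) as Event Hub batches."""
    producer = EventHubProducerClient.from_connection_string(
        conn_str=EVENT_HUB_CONN_STR, eventhub_name=EVENT_HUB_NAME
    )
    with producer:
        batch = producer.create_batch()
        added = 0
        for raw in raw_json_messages:
            try:
                batch.add(EventData(raw))
                added += 1
            except ValueError:                  # current batch is full: send it and start another
                producer.send_batch(batch)
                batch = producer.create_batch()
                batch.add(EventData(raw))
                added = 1
        if added:
            producer.send_batch(batch)          # flush the final partial batch
```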
This pattern – Cloud Logging → Pub/Sub → Event Hub → Log Analytics – has been observed in real-world multi-cloud deployments, essentially treating Pub/Sub + Event Hub as a bridging message bus between clouds. Data Transformation: With a custom pipeline, you have full control (and responsibility) for any data transformations needed. Key considerations: Message Decoding: GCP Pub/Sub messages contain the log entry in the data field, which is a base64-encoded string of a JSON object. Your code must decode that (it’s a one-liner in most languages) to get the raw JSON log. After decoding, you’ll have a JSON structure identical to what you’d see in Cloud Logging. For example, an audit log entry JSON has fields like protoPayload, resourceName, etc. Schema Mapping: Decide how to map the JSON to your Log Analytics table. You could ingest the entire JSON as a single column (and later parse in KQL), but it’s often better to map important fields. For instance, for VPC Flow Logs, you might extract src_ip, dest_ip, src_port, dest_port, bytes_sent, action and map each to a column in a custom table. This requires that you create the custom table with those columns and configure the DCR’s transformation schema accordingly. If using the Log Ingestion API, the DCR can transform the incoming JSON to the table schema. If using the Data Collector API (legacy, not recommended now), your code would need to format records as the exact JSON that Log Analytics expects. Enrichment (optional): In a custom pipeline, you could enrich the logs before sending to Sentinel. For example, performing IP geolocation on VPC flow logs, or tagging DNS logs with threat intel (if a domain is known malicious) – so that the augmented information is stored in Sentinel. Be cautious: enrichment adds processing time and potential failure points. If it’s light (like a dictionary lookup), it might be fine; if heavy, consider doing it after ingestion using Sentinel analytics instead. Filtering: Another advantage of custom ingestion is that you can filter events at the bridge. You might decide to drop certain events entirely (to save cost or noise). For example, if DNS query logs are too verbose, you might only forward queries for certain domains or exclude known benign domains. Or for audit logs, you might exclude read-only operations. This gives flexibility beyond what the GCP sink filter can do, since you have the full event content to decide. The trade-off is complexity – every filter you implement must be maintained/justified. Batching: The Log Ingestion API allows sending multiple records in one call. It’s more efficient to batch a bunch of log events (say 100 at a time) into one API request rather than call per event. Your function can accumulate a short batch (with some timeout or max size) and send together. This improves throughput and lowers overhead. Ensure the payload stays within API limits (~1 MB per post, and 30,000 events per post for Log Ingestion API). Pub/Sub and Event Hub also have batch capabilities – you may receive multiple messages in one invocation or read them in a loop. Design your code to handle variable batch sizes. Authentication & Permissions (Custom Pipeline): You will effectively need to handle two authentications: GCP → Bridge, and Bridge → Azure: GCP to Bridge: If using a GCP Cloud Function triggered by Pub/Sub, GCP handles the auth for pulling the message (the function is simply invoked with the data). If pulling from Azure, you’ll need GCP credentials. 
The most secure way is to use a service account key with minimal permissions (just Pub/Sub subscriber on the subscription). Store this key securely (Azure Key Vault or as an App Setting in the Azure Function, possibly encrypted). The code uses this key (a JSON file or key string) to initialize the Pub/Sub client. Google's libraries support reading the key from an environment variable. Alternatively, you could explore using the same Workload Identity Federation concept in reverse (Azure to GCP), but that's non-trivial to set up manually for custom code. A service account key is straightforward but do rotate it periodically. On GCP's side, you might also restrict the service account so it cannot access anything except Pub/Sub. Bridge to Azure: To call the Log Ingestion API, you need a Microsoft Entra ID (Azure AD) identity that is allowed to send data to the DCR – either an app registration (client ID/secret) or, if the bridge runs in Azure, a managed identity. The modern approach: create the app (or enable a managed identity), then assign it the Monitoring Metrics Publisher role on the Data Collection Rule; this is the role the Logs Ingestion API checks before accepting data. Your code then uses the app's client ID and secret (or the managed identity) to obtain an Azure Monitor token and call the API – no keys or signatures are embedded in the request URL. The older Data Collector API used the workspace ID and a primary key for auth (HMAC SHA-256 header) – some existing solutions still use that, but since that API is being deprecated, it's better to use the new method. In summary: ensure the Azure credentials are stored safely (Key Vault, or GCP Secret Manager if the function is in GCP) and that you follow the principle of least privilege (only allow ingest, no read of other data). End-to-End Data Flow Example: To make this concrete, consider an example where we ingest firewall logs using a custom pipeline (this mirrors a solution published by Microsoft for when these logs weren't yet natively supported): A GCP Log Sink filters for VPC firewall logs (generated by Firewall Rules Logging, a close companion of VPC Flow Logs) and exports them to a Pub/Sub topic. An Azure Function (in PowerShell, as in the example, or any language) runs on a timer. Every minute, it pulls all messages from the Pub/Sub subscription (using the Google APIs). The function authenticates with a stored service account key to do this. It then decodes each message's JSON. The function constructs an output in the required format for Sentinel's Log Ingestion API. In this case, they created a custom Log Analytics table (say GCPFirewall_CL) with columns matching the log fields (source IP, dest IP, action, etc.). The function maps each JSON field to a column. For instance, json.payload.sourceIp -> src_ip column. It then calls the Log Ingestion REST API to send a batch of log records. The call is authorized with an Azure AD app's client ID/secret which the function has in its config. Upon successfully POSTing the data, the function acknowledges those messages back to Pub/Sub (or, if using the Pub/Sub client in pull mode, it acks as it pulls). This acknowledgment is important to ensure the messages don't get re-delivered. If the send to Azure fails for some reason, the function can choose not to ack, so that the message remains in Pub/Sub and will be retried on the next run (ensuring reliability).
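To make the Azure half of that example concrete in code, the sketch below decodes a pulled log entry, maps a few fields, and uploads a batch through the Logs Ingestion API using the azure-monitor-ingestion SDK. The DCE endpoint, DCR immutable ID, stream name, and field mappings are all hypothetical and must match the custom table and DCR you actually create; the original walkthrough used PowerShell, so treat this only as an equivalent outline.

```python
import json
from azure.identity import ClientSecretCredential
from azure.monitor.ingestion import LogsIngestionClient

# Hypothetical identifiers: replace with your DCE ingestion endpoint, DCR immutable ID,
# stream name (typically "Custom-<TableName>"), and Entra app credentials.
DCE_ENDPOINT = "https://my-dce-a1b2.eastus-1.ingest.monitor.azure.com"
DCR_RULE_ID = "dcr-00000000000000000000000000000000"
STREAM_NAME = "Custom-GCPFirewall_CL"

credential = ClientSecretCredential(
    tenant_id="<tenant-id>", client_id="<app-id>", client_secret="<secret>"
)
client = LogsIngestionClient(endpoint=DCE_ENDPOINT, credential=credential)


def to_record(raw_json: str) -> dict:
    """Map one decoded log entry to the custom table's columns (field names are illustrative)."""
    entry = json.loads(raw_json)
    conn = entry.get("jsonPayload", {}).get("connection", {})
    return {
        "TimeGenerated": entry.get("timestamp"),
        "src_ip": conn.get("src_ip"),
        "dest_ip": conn.get("dest_ip"),
        "action": entry.get("jsonPayload", {}).get("disposition"),
        "RawData": raw_json,                      # keep the original JSON for ad-hoc KQL parsing
    }


def upload_batch(raw_json_messages) -> bool:
    """Upload a batch; return True only on success so the caller acks Pub/Sub accordingly."""
    records = [to_record(m) for m in raw_json_messages]
    try:
        client.upload(rule_id=DCR_RULE_ID, stream_name=STREAM_NAME, logs=records)
        return True
    except Exception as err:                      # leave messages unacked so they are retried
        print(f"Log Ingestion upload failed: {err}")
        return False
```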
The logs show up in Sentinel under the custom table, and can now be used in queries and analytics just like any other log. The entire process from log generation in GCP to log ingestion in Sentinel can be only a few seconds of latency in this design, effectively near-real-time.

Tooling & Infrastructure: When implementing the above, some recommended tools:
- Use official SDKs where possible. For example, Google Cloud has a Pub/Sub client library for Python, Node.js, C#, etc., which simplifies pulling messages. Azure has an SDK for the Monitor Ingestion API (or you can call the REST endpoints directly with an HTTP client). This saves time versus manually crafting HTTP calls and auth.
- Leverage Terraform or IaC for repeatability. You can automate creation of Azure resources (Function App, Event Hub, etc.) and even the GCP setup. For instance, the community SCC->Sentinel example provides Terraform scripts. This makes it easier to deploy the pipeline in dev/test and prod consistently.
- Logging and monitoring: implement robust logging in your function code. In GCP Cloud Functions, use Cloud Logging to record errors (so you can see if something fails). In Azure Functions, use Application Insights to track failures or performance metrics. Set up alerts if the function fails repeatedly or if an expected log volume drops (which could indicate a broken pipeline). Essentially, treat your custom pipeline as you would any production integration – monitor its health continuously.

Example Use-Case – Ingesting All Custom GCP Logs: One key advantage of a custom approach is flexibility. Imagine you have a custom application writing logs to Google Cloud Logging (Stackdriver) that has no out-of-the-box Sentinel connector. You can still get those logs into Sentinel. As one cloud architect noted, they built a fully custom pipeline with GCP Log Sink -> Pub/Sub -> Cloud Function -> Sentinel, specifically to ingest arbitrary GCP logs beyond the built-in connectors. This unlocked visibility into application-specific events that would otherwise be siloed. While doing this, they followed many of the steps above, demonstrating that any log that can enter Pub/Sub can ultimately land in Sentinel. This extensibility is a major benefit of a custom solution – you're not limited by what Microsoft or Google have pre-integrated.

In summary, the custom ingestion route requires more effort up front, but it grants complete control. You can tune what you collect, transform data to your needs, and integrate logs that might not be natively supported. Organizations often resort to this if they have very specific needs or if they started building ingestion pipelines before native connectors were available. Many will start with custom for something like DNS logs and later switch to a native connector once available. A hybrid approach is also possible (using the native connector for audit logs, but custom for a niche log source).

Comparison of Native vs. Custom Ingestion Methods

Both native and custom approaches will get your GCP logs into Microsoft Sentinel, but they differ in complexity and capabilities. The comparison below summarizes the trade-offs between the native GCP connector (Sentinel's Pub/Sub integration) and a custom ingestion pipeline (DIY via Event Hubs or API), aspect by aspect, to help choose the right approach.

Ease of Setup
- Native connector – Low-code setup: Requires configuration in GCP and Azure, but no custom code. You use provided Terraform scripts and a UI wizard. In a few hours you can enable the connector if prerequisites (IAM, Pub/Sub) are met. Microsoft's documentation guides the process step by step.
- Custom pipeline – High-code setup: Requires designing and writing integration code (function or app) and configuring cloud services (Function Apps, Event Hub, etc.). More moving parts mean a longer setup time – possibly days or weeks to develop and thoroughly test. Suitable if your team has cloud developers or if requirements demand it.

Log Type Coverage
- Native connector – Supported logs: Out-of-the-box support for standard GCP logs (audit, SCC findings, VPC flow, DNS, etc.). However, it's limited to those data types Microsoft has released connectors for. (As of 2025, many GCP services are covered, but not necessarily all Google products.) If a connector exists, it will reliably ingest that log type.
- Custom pipeline – Any log source: Virtually unlimited – you can ingest any log from GCP, including custom application logs or niche services, as long as you can export it to Pub/Sub. You define the pipeline for each new log source. This is ideal for custom logs beyond the built-ins. The trade-off is you must build parsing/handling for each log format yourself.

Development & Maintenance
- Native connector – Minimal maintenance: After initial setup, the connector runs as a service. No custom code to maintain; Microsoft handles updates and improvements. You might need to update configuration if GCP projects or requirements change, but generally it's "configure and monitor." Support is available from Microsoft for connector issues.
- Custom pipeline – Ongoing maintenance: You own the code. Updates to log schemas, API changes, or cloud platform changes might require code modifications. You need to monitor the custom pipeline, handle exceptions, and possibly update credentials regularly. This approach is closer to software maintenance – expect to allocate effort for bug fixes or improvements over time.

Scalability
- Native connector – Cloud-scale (managed): The connector uses Azure's cloud infrastructure, which auto-scales. High volumes are handled by scaling out processing nodes behind the scenes. GCP Pub/Sub will buffer and deliver messages at whatever rate the connector can pull, and the connector is optimized for throughput. There's effectively no hard limit exposed to you (aside from Log Analytics ingestion rate limits, which are very high).
- Custom pipeline – Custom scaling required: Scalability depends on your implementation. Cloud Functions and Event Hubs can scale, but you must configure them (e.g., set concurrency, ensure enough throughput units on Event Hub). If logs increase tenfold, you may need to tweak settings or upgrade plans. There's more possibility of bottlenecks (e.g., a single-threaded function might lag). Designing for scale (parallelism, batching, multi-partition processing) is your responsibility.

Reliability & Resilience
- Native connector – Reliable by design: Built on proven Google Pub/Sub durability and Azure's reliable ingestion pipeline. The connector handles retries and acknowledgements. If issues occur, Microsoft likely addresses them in updates. Also, you get built-in monitoring in Sentinel for connector health.
- Custom pipeline – Reliability varies: Requires implementing your own retry and error-handling logic. A well-built custom pipeline can be very reliable (e.g., using Pub/Sub's ack/retry and durable Event Hub storage), but mistakes in code could drop logs or duplicate them. You need to test failure scenarios (network blips, API timeouts, etc.). Additionally, you must implement your own health checks and alerts to know if something breaks.

Flexibility & Transformation
- Native connector – Standardized ingestion: Limited flexibility – it ingests logs in their native structure into pre-defined tables. Little opportunity to transform data (beyond what the connector's mapping does). Essentially "what GCP sends is what you get," and you rely on Sentinel's parsing for analysis. All logs of a given type are ingested (you control scope via GCP sink filters, but not the content).
- Custom pipeline – Highly flexible: You can customize everything – which fields to ingest, how to format them, and even augment logs with external data. For example, you could drop benign DNS queries or mask sensitive fields before sending. You can consolidate multiple GCP log types into one table or split one log type into multiple tables if desired. This freedom lets you tailor the data to your environment and use cases. The flip side is complexity: every transformation is custom logic to maintain.

Cost Considerations
- Native connector – Cost-efficient pipeline: There is no charge for the Sentinel connector itself (it's part of the service). Costs on GCP: Pub/Sub charges are minimal (especially for pulling data) and logging export has no extra cost aside from the egress of the data. On Azure: you pay for data ingestion and storage in Log Analytics as usual. No need to run VMs or functions continuously. Overall, the native route avoids infrastructure costs – you're mainly paying for the data volume ingested (which is unavoidable either way) and a tiny cost for Pub/Sub (pennies for millions of messages).
- Custom pipeline – Additional costs: On top of Log Analytics ingestion costs, you will incur charges for the components you use. An Azure Function or Cloud Function has execution costs (though modest, they add up with high volume). An Event Hub has hourly charges based on throughput units and retention. Data egress from GCP to Azure will be charged by Google (network egress fees) – this also applies to the native connector, though in that case the GCP egress cost is typically quite small. If your pipeline runs 24x7, factor in those platform costs. Custom pipelines can also potentially reduce Log Analytics costs by filtering out data (saving money by not ingesting noise), so there's a trade-off: spend on processing to save on storage, if needed.

Support & Troubleshooting
- Native connector – Vendor-supported: Microsoft supports the connector – if things go wrong, you can open a support case. Documentation covers common setup issues. The connector UI will show error messages (e.g., authentication failures) to guide troubleshooting. Upgrades and improvements are handled by Microsoft (e.g., if a GCP API changes, Microsoft will update the connector).
- Custom pipeline – Self-support: You build it, you fix it. Debugging issues might involve checking logs across two clouds. Community forums and documentation can help (e.g., Google's docs for Pub/Sub, Azure docs for the Log Ingestion API). When something breaks, your team must identify whether it's on the GCP side (sink or Pub/Sub) or the Azure side (function error or DCR issue). This requires familiarity with both environments. There's no single vendor to take responsibility for the end-to-end pipeline since it's custom.

In short, use the native connector whenever possible – it's easier and reliably maintained. Opt for a custom solution only if you truly need the flexibility or to support logs that the native connectors can't handle. Some organizations start with custom ingestion out of necessity (before native support exists) and later migrate to native connectors once available, to reduce their maintenance burden.

Troubleshooting Common Issues

Finally, regardless of method, you may encounter some hurdles.
Here are common issues and ways to address them in the context of GCP-to-Sentinel log integration: No data appearing in Sentinel: If you’ve set up a connector and see no logs, first be patient – initial data can take ~10–30 minutes to show up. If nothing appears after that, verify the GCP side: Check the Log Router sink status in GCP (did you set the correct inclusion filters? Are logs actually being generated? You can view logs in Cloud Logging to confirm the events exist). Go to Pub/Sub and use the “Pull” option on the subscription to see if messages are piling up. If you can pull messages manually but Sentinel isn’t getting them, the issue is likely on the Azure side. In Sentinel, ensure the connector shows as Connected. If it’s in an error state, click on it to see details. A common misconfiguration is an incorrect Project Number or Service Account in the connector settings – one typo in those will prevent ingestion. Update the parameters if needed and reconnect. Authentication or Connectivity errors (native connector): These show up as errors like “Workload Identity Pool ID not found” or “Subscription does not exist” in the connector page. This usually means the values entered in the connector are mismatched: Double-check the Workload Identity Provider ID. It must exactly match the one in GCP (including correct project number). If you created the WIF pool in a different project than the Pub/Sub, remember the connector (until recently) expected them in one project. Ensure you used the correct project ID/number for all fields. Verify the service account email is correct and that you granted the Workload Identity User role on it. If not, the Azure identity cannot assume the service account. Check that the subscription name is correct and that the service account has roles/pubsub.subscriber on it. If you forgot to add that role, Azure will be denied access to Pub/Sub. Ensure the Azure AD app (which is automatically used by Sentinel) wasn’t deleted or disabled in your tenant. The Sentinel connector uses a multi-tenant app provided by Microsoft (identified by the audience GUID in the docs), which should be fine unless your org blocked third-party Azure apps. If you have restrictions, you might need to allow Microsoft’s Sentinel multi-cloud connector app. Tip: Try running the Terraform scripts provided by Microsoft if you did things manually and it’s failing. The scripts often can pinpoint what’s missing by setting everything up for you. Partial data or specific logs missing: If some expected events are not showing up: Revisit your sink filter in GCP. Perhaps the filter is too narrow. For example, for DNS logs, you might need to include both _Default logs and DNS-specific log IDs. Or for audit logs, remember that Data Access logs for certain services might be excluded by default (you have to enable Data Access audit logs in GCP for some services). If those aren’t enabled in GCP, they won’t be exported at all. If using the SCC connector, ensure you enabled continuous export of findings in SCC to Pub/Sub – those findings won’t flow unless explicitly configured. Check Sentinel’s table for any clues – sometimes logs might arrive under a slightly different table or format. E.g., if the connector was set up incorrectly initially, it might have sent data to a custom table with a suffix. Use Log Analytics query across all tables (or search by a specific IP or timestamp) to ensure the data truly isn’t there. 
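When checking the GCP side in any of these scenarios, you can also peek at the subscription from a script instead of using the console's Pull button. A minimal sketch, assuming the google-cloud-pubsub package, Application Default Credentials, and the placeholder project and subscription names used earlier; the messages are deliberately not acknowledged, so they remain queued for the connector.

```python
from google.cloud import pubsub_v1

subscriber = pubsub_v1.SubscriberClient()          # uses Application Default Credentials
sub_path = subscriber.subscription_path("my-gcp-project", "sentinel-audit-sub")

# Pull a handful of messages for inspection; do NOT acknowledge them.
response = subscriber.pull(
    request={"subscription": sub_path, "max_messages": 5}, timeout=10.0
)

if not response.received_messages:
    print("No messages waiting - check the sink filter and that events are being generated.")
for received in response.received_messages:
    print(received.message.data.decode("utf-8")[:300])   # first part of each log entry
```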
Duplicate logs or high event counts (custom ingestion): If your custom pipeline isn’t carefully handling acknowledgments, you might ingest duplicates. For instance, if your function crashes after sending data to Sentinel but before acking Pub/Sub, the message will be retried later – resulting in the same log ingested twice. Over time this could double-count events. Solution: Ensure idempotency or proper ack logic. One way is to include a unique ID with each log (GCP audit logs have an insertId which is unique per log event; VPC flow logs have unique flowID for each flow chunk). You could use that as a de-duplication key on the Sentinel side (e.g., ingest it and deduplicate in queries). Or design the pipeline to mark messages as processed in an external store. However, the simplest is to acknowledge only after successful ingestion and let Pub/Sub handle retries. If you notice duplicates in Sentinel, double-check that your code isn’t calling ack too early or multiple times. Log Ingestion API errors (custom pipeline): When calling the Log Ingestion API, you might encounter HTTP errors: 400 Bad Request – often schema mismatch. This means the JSON you sent doesn’t match the DCR’s expected format. Check the error details; the API usually returns a message indicating which field is wrong. Common issues: sending a string value for a column defined as integer, missing a required column, or having an extra column that’s not in the table. Adjust your transformation or DCR accordingly. 403 Forbidden – authentication failure. Your Azure AD token might be expired or your app doesn’t have rights. Make sure the token is fresh (fetch a new one for each function run, or use a managed identity if supported and authorized). Also verify the app’s role assignments. 429 Too Many Requests / Throttling – you might be sending data too fast. The Log Ingestion API has throughput limits (per second per workspace). If you hit these, implement a backoff/retry and consider batching more. This is rare unless you have a very high log rate. Azure Function timeouts – if using Functions, sometimes the default timeout (e.g., 5 minutes for an HTTP-triggered function) might be hit if processing a large batch. Consider increasing the timeout setting or splitting work into smaller chunks. Connector health alerts: If you enabled the health feature for connectors, you might get alerted that “no logs received from GCP Audit Logs in last X minutes” etc. If this is a false alarm (e.g., simply that there were genuinely no new logs in GCP during that period), you can adjust the alert logic or threshold. But if it’s a real issue, treat it as an incident: check GCP’s Cloud Logging to ensure new events exist (if not, maybe nothing happened – e.g., no admin activity in the last hour). If events do exist in GCP but none in Sentinel, you have a pipeline problem – refer to the earlier troubleshooting steps for auth/connectivity. Updating or Migrating pipelines: Over time, you might replace a custom pipeline with a native connector (or vice versa). Be cautious of duplicate ingestion if both are running simultaneously. For example, if you enable the new GCP DNS connector while your custom DNS log pipeline is still on, you’ll start ingesting DNS logs twice. Plan a cutover: disable one before enabling the other in production. Also, if migrating, note that the data may land in a different table (the native connector might use GCPDNS table whereas your custom went to GCP_DNS_Custom_CL). You may need to adjust your queries and workbooks to unify this. 
It could be worthwhile to backfill historical data for continuity if needed. By following these practices and monitoring closely, you can ensure a successful integration of GCP logs into Microsoft Sentinel. The end result is a centralized view in Sentinel where your Azure, AWS, on-prem, and now GCP logs all reside – empowering your SOC to run advanced detections and investigations across your multi-cloud environment using a single pane of glass.

What's new in Microsoft Sentinel: January 2026
Welcome back! As we kick off the new year, we're bringing key Ignite 2025 announcements into your day-to-day Sentinel experience so you can turn insights into measurable SecOps outcomes with the AI-ready Sentinel SIEM and platform. Building on last year's momentum around AI-ready platforms, agentic defense, data lake innovation, and advanced SIEM capabilities, the January edition delivers security features designed to elevate your operations. This release brings powerful enhancements, including streamlined ingestion of Microsoft Defender data into the Sentinel data lake, deeper partner connector integrations, QRadar migration support, enriched UEBA insights, and a refreshed ASIM schema for consistent normalization. Together, these advancements help security teams simplify operations, strengthen detection, and unlock greater value from their data.

What's new

Ingest Defender data directly into Sentinel data lake

At Ignite 2025, Microsoft announced that Defender for Endpoint (MDE) data could be ingested directly into the Sentinel data lake. Building on this, we now support direct ingestion of Microsoft Defender for Office (MDO) and Microsoft Defender for Cloud Apps (MDA) data as well. You can choose to ingest supported XDR tables exclusively into the data lake tier by selecting the data lake tier option when configuring the retention settings. Table settings are easily managed through the built-in table management experience in the Defender portal, enabling cost-effective, long-term data retention without moving data to the analytics tier. This expansion delivers improved visibility, deeper historical analysis, and reduced total cost of ownership, and empowers modern security operations with advanced capabilities.

Growing connector ecosystem

At Ignite 2025, Microsoft unveiled new connectors and integrations that seamlessly unify signals across multi-cloud and multiplatform environments. These innovations deliver enhanced visibility and scalable security insights across cloud, endpoint, and identity platforms. To explore the complete list of new solutions from Microsoft Sentinel and our third-party partners, see Ignite 2025: New Microsoft Sentinel Connectors Announcement.

Accelerate your migration from QRadar to Microsoft Sentinel

Microsoft is excited to announce support for QRadar-to-Microsoft Sentinel migrations through the enhanced, AI-powered SIEM migration experience. This new capability simplifies and streamlines the process by helping organizations efficiently migrate detection rules and enable required data connectors in Sentinel's cloud-native SIEM. As a result, customers gain improved visibility, accelerated threat detection, and more modern security operations powered by Microsoft's intelligent cloud. Early adopters are seeing smoother transitions with minimal disruption, more predictable outcomes, and greater value from their SIEM investment. In addition, Microsoft provides free migration support through the Cloud Accelerate Factory program. Eligible customers receive expert guidance to quickly deploy Sentinel and migrate from Splunk and QRadar using the new SIEM migration experience, in collaboration with their preferred migration partner. For details, contact your Microsoft representative or visit https://aka.ms/FactoryCustomerPortal. To learn more, tune in to our webinar on February 2, 2026, at 9:00 AM PST: https://aka.ms/SecurityCommunity

Announcing the Behaviors Layer within UEBA

Now in public preview!
Microsoft Sentinel's Behaviors layer is a new UEBA capability that provides a high-level behavioral lens on top of raw security telemetry. Its goal is to answer the question "What happened? Who did what to whom?" in your environment by aggregating, sequencing, and enriching events into human-readable behaviors. Each "behavior" is a synthesized security event or pattern that describes an action (or sequence of actions) an entity performed. This includes rich context such as involved entities, MITRE ATT&CK tactics/techniques, and a plain-English description. Learn more here: Turn Complexity into Clarity: Introducing the New UEBA Behaviors Layer in Microsoft Sentinel

Unified schema alignment for ASIM

Following our Advanced Security Information Model (ASIM) reaching General Availability in September, this latest refresh delivers comprehensive alignment across all ASIM schemas to the latest ASIM standard. This update ensures:
- Consistent field coverage across all major activity types.
- A stable baseline for accelerating parser development, normalization improvements, and future ASIM-driven experiences.

Key additions across older schemas:
- Inspection fields – enabling normalization of security findings across all activity types.
- Risk fields – providing consistent representation of source-reported risk.

Full details are available here: Advanced Security Information Model (ASIM) schemas

Additional resources

- Blogs: What's new in Microsoft Sentinel: December 2025, Automating IOC hunts in Microsoft Sentinel data lake, Efficiently process high volume logs and optimize costs with Microsoft Sentinel data lake
- Documentation: Microsoft Sentinel data lake overview - Microsoft Security | Microsoft Learn, Use the SIEM migration experience - Microsoft Sentinel | Microsoft Learn
- Sign up for upcoming webinars: Feb. 2 | 9:00am | Accelerate your SIEM migration to Microsoft Sentinel

Stay connected

Check back each month for the latest innovations, updates, and events to ensure you're getting the most out of Microsoft Sentinel. We'll see you in the next edition!

Microsoft Sentinel Platform: Audit Logs and Where to Find Them
Looking to understand where audit activities for Sentinel Platform are surfaced? Look no further than this writeup! With the launch of the Sentinel Platform, a new suite of features for the Microsoft Sentinel service, users may find themselves wanting to monitor who is using these new features and how. This blog sets out to highlight how auditing and monitoring can be achieved and where this data can be found.

*Thank you to my teammates Ian Parramore and David Hoerster for reviewing and contributing to this blog.*

What are Audit Logs?

Audit logs are documented activities that are eligible for usage within SOC tools, such as a SIEM. These logs are meant to exist as a paper trail to show:
- Who performed an action
- What type of action was performed
- When the action was performed
- Where the action was performed
- How the action was performed

Audit logs can be generated by many platforms, whether they are Microsoft services or platforms outside of the Microsoft ecosystem. Each source is a great option for a SOC to monitor.

Types of Audit Logs

Audit logs can vary in how they are classified or where they are placed. Focusing just on Microsoft, the logs can vary based on platform. A few examples are:
- Windows: Events generated by the operating system that are available in Event Viewer
- Azure: Diagnostic logs generated by services that can be sent to Azure Log Analytics
- Defender: Audit logs generated by Defender services that are sent to M365 Audit Logs

What is the CloudAppEvents Table?

The CloudAppEvents table is a data table that is provided via Advanced Hunting in Defender. This table contains events of applications being used within the environment. This table is also a destination for Microsoft audit logs that are being sent to Purview. Purview's audit log blade includes logs from platforms like M365, Defender, and now Sentinel Platform.

How to Check if Purview Unified Audit Logging is Enabled

For CloudAppEvents to receive data, audit logging within Purview must be enabled and M365 needs to be connected as a Defender for Cloud Apps component.

Enabling Audit Logs

Purview auditing is enabled by default within environments. In the event that it has been disabled, audit logging can be checked and enabled via PowerShell. To do so, the user must have the Audit Logs role within Exchange Online. The command to run is:

Get-AdminAuditLogConfig | Format-List UnifiedAuditLogIngestionEnabled

The result will be True if auditing is already turned on and False if it is disabled. If the result is False, the setting will need to be enabled. To do so:
1. Import the Exchange Online module (installing it first with Install-Module ExchangeOnlineManagement if needed): Import-Module ExchangeOnlineManagement
2. Connect and authenticate to Exchange Online with an interactive window: Connect-ExchangeOnline -UserPrincipalName USER PRINCIPAL NAME HERE
3. Run the command to enable auditing: Set-AdminAuditLogConfig -UnifiedAuditLogIngestionEnabled $true

Note: It may take 60 minutes for the change to take effect.

Connecting M365 to MDA

To connect M365 to MDA as a connector:
1. Within the Defender portal, go to System > Settings.
2. Within Settings, choose Cloud Apps.
3. Within the settings navigation, go to Connected apps > App Connectors.
4. If Microsoft 365 is not already listed, click Connect an app.
5. Find and select Microsoft 365.
6. Within the settings, select the boxes for Microsoft 365 activities.
7. Once set, click on the Connect Microsoft 365 button.

Note: You must have a license that includes Microsoft Defender for Cloud Apps in order to see these app connection settings.
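With auditing enabled and Microsoft 365 connected, you can spot-check programmatically that Sentinel platform events are reaching CloudAppEvents. The sketch below is a minimal, hedged example that calls the Microsoft Graph advanced hunting endpoint; it assumes you already have an app registration with the ThreatHunting.Read.All permission and an access token for Microsoft Graph.

```python
import requests

TOKEN = "<access token for https://graph.microsoft.com>"   # obtained via your app registration

# Count recent Sentinel platform audit events by operation name.
query = """
CloudAppEvents
| where Timestamp > ago(7d)
| where ActionType startswith "SentinelLake" or ActionType == "KQLQueryCompleted"
| summarize Events = count() by ActionType
"""

resp = requests.post(
    "https://graph.microsoft.com/v1.0/security/runHuntingQuery",
    headers={"Authorization": f"Bearer {TOKEN}", "Content-Type": "application/json"},
    json={"Query": query},
    timeout=30,
)
resp.raise_for_status()
for row in resp.json().get("results", []):
    print(row["ActionType"], row["Events"])
```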
Note: You have to have a proper license that includes Microsoft Defender for Cloud Apps in order to have these settings for app connections.

Monitoring Activities
As laid out within the public documentation, there are several categories of audit logs for Sentinel platform, including:
- Onboarding/offboarding
- KQL activities
- Job activities
- Notebook activities
- AI tool activities
- Graph activities
All of these events are surfaced within the Purview audit viewer and CloudAppEvents table. When querying the table, each of the activities will appear under a column called ActionType.

Onboarding
Friendly Name | Operation | Description
Changed subscription of Sentinel lake | SentinelLakeSubscriptionChanged | User modified the billing subscription ID or resource group associated with the data lake.
Onboarded data for Sentinel lake | SentinelLakeDefaultDataOnboarded | During onboarding, default data onboarding is logged.
Setup Sentinel lake | SentinelLakeSetup | At the time of onboarding, Sentinel data lake is set up and the details are logged.
For querying these activities, one example would be:
CloudAppEvents
| where ActionType == 'SentinelLakeDefaultDataOnboarded'
| project AccountDisplayName, Application, Timestamp, ActionType

KQL Queries
There is only one type of activity for KQL available. These logs function similarly to how LAQueryLogs work today, showing details like who ran the query, if it was successful, and what the body of the query was.
Friendly Name | Operation | Description
Completed KQL query | KQLQueryCompleted | User runs interactive queries via KQL on data in their Microsoft Sentinel data lake
For querying these activities, one example would be:
CloudAppEvents
| where ActionType == 'KQLQueryCompleted'
| project Timestamp, AccountDisplayName, Application, RawEventData.Interface, RawEventData.QueryText, RawEventData.TotalRows

Jobs
Jobs pertain to KQL jobs and Notebook jobs offered by the Sentinel Platform. These logs will detail job activities as well as actions taken against jobs. This will be useful for monitoring who is creating or modifying jobs, as well as monitoring that jobs are properly running.
Friendly Name | Operation | Description
Completed a job run adhoc | JobRunAdhocCompleted | Adhoc job execution completed.
Completed a job run scheduled | JobRunScheduledCompleted | Scheduled job execution completed.
Created job | JobCreated | A job is created.
Deleted a custom table | CustomTableDelete | As part of a job run, the custom table was deleted.
Deleted job | JobDeleted | A job is deleted.
Disabled job | JobDisabled | The job is disabled.
Enabled job | JobEnabled | A disabled job is re-enabled.
Ran a job adhoc | JobRunAdhoc | Job is triggered manually and run started.
Ran a job on schedule | JobRunScheduled | Job run is triggered due to schedule.
Read from table | TableRead | As part of the job run, a table is read.
Stopped a job run | JobRunStopped | User manually cancels or stops an ongoing job run.
Updated job | JobUpdated | The job definition and/or configuration and schedule details of the job are updated.
Writing to a custom table | CustomTableWrite | As part of the job run, data was written to a custom table.
For querying these activities, one example would be:
CloudAppEvents
| where ActionType == 'JobCreated'
| project Timestamp, AccountDisplayName, Application, ActionType, RawEventData.JobName, RawEventData.JobType, RawEventData.Interface

AI Tools
AI tool logs pertain to events generated by MCP server usage. These are generated any time that users operate with the MCP server and leverage one of the tools available today to run prompts and sessions.
Friendly Name | Operation | Description
Completed AI tool run | SentinelAIToolRunCompleted | Sentinel AI tool run completed
Created AI tool | SentinelAIToolCreated | User creates a Sentinel AI tool
Started AI tool run | SentinelAIToolRunStarted | Sentinel AI tool run started
For querying these activities, the query would be:
CloudAppEvents
| where ActionType == 'SentinelAIToolRunStarted'
| project Timestamp, AccountDisplayName, ActionType, Application, RawEventData.Interface, RawEventData.ToolName

Notebooks
Notebook activities pertain to actions performed by users via Notebooks. This can include querying data via a Notebook, writing to a table via Notebooks, or launching new Notebook sessions.
Friendly Name | Operation | Description
Deleted a custom table | CustomTableDelete | User deleted a table as part of their notebook execution.
Read from table | TableRead | User read a table as part of their notebook execution.
Started session | SessionStarted | User started a notebook session.
Stopped session | SessionStopped | User stopped a notebook session.
Wrote to a custom table | CustomTableWrite | User wrote to a table as part of their notebook execution.
For querying these activities, one example would be:
CloudAppEvents
| where ActionType == 'TableRead'

Graph Usage
Graph activities pertain to users modifying or running a graph based scenario within the environment. This can include creating a new graph scenario, deleting one, or running a scenario.
Friendly Name | Operation | Description
Created a graph scenario | GraphScenarioCreated | User created a graph instance for a pre-defined graph scenario.
Deleted a graph scenario | GraphScenarioDeleted | User deleted or disabled a graph instance for a pre-defined graph scenario.
Ran a graph query | GraphQueryRun | User ran a graph query.
For querying these activities, one example would be:
CloudAppEvents
| where ActionType == 'GraphQueryRun'
| project AccountDisplayName, IsExternalUser, IsImpersonated, RawEventData.GraphName, RawEventData.CreationTime

Monitoring without Access to the CloudAppEvents Table
If accessing the CloudAppEvents table is not possible in the environment, both Defender and Purview allow for manually searching for activities within the environment. For Purview (https://purview.microsoft.com), the audit page can be found by going to the Audit blade within the Purview portal. For Defender, the audit blade can be found under Permissions > Audit. To run a search that will match Sentinel Platform related activities, the easiest method is using the Activities – friendly names field to filter for Sentinel Platform.

Custom Ingestion of Audit Logs
If looking to ingest the data into a table, a custom connector can be used to fetch the information. The Purview audit logs use the Office Management API when calling events programmatically. This leverages registered applications with proper permissions to poll the API and forward the data into a data collection rule. As the Office Management API does not support filtering entirely within the content URI, making a custom connector for this source is a bit trickier. For a custom connector to work, it will need to:
- Call the API
- Review each content URL
- Filter for events that are related to Sentinel Platform
This leaves two options for accomplishing a custom connector route:
- A code-based connector that is hosted within an Azure Function
- A codeless connector paired with a filtering data collection rule
This blog will just focus on the codeless connector as an example.
A codeless connector can be made from scratch or by referencing an existing connector within the Microsoft Sentinel GitHub repository. For the connector, an example API call would appear as such:
https://manage.office.com/api/v1.0/{tenant-id}/activity/feed/subscriptions/content?contentType=Audit.General&startTime={startTime}&endTime={endTime}
When using a registered application, it will need the ActivityFeed.Read permission on the Office Management API for it to be able to call the API and view the information returned. The catch with the Management API is that it uses a content URL in the API response, thus requiring one more step. Luckily, codeless connectors support nested actions and JSON. An example of a connector that does this today is the Salesforce connector.
When looking to filter the events to be specifically the Sentinel Platform audit logs, the ActionType values from the queries listed above can be used in the body of a data collection rule. For example:
"streams": [ "Custom-PurviewAudit" ],
"destinations": [ "logAnalyticsWorkspace" ],
"transformKql": "source | where ActionType has_any ('GraphQueryRun', 'TableRead', ...)",
"outputStream": "Custom-SentinelPlatformAuditLogs"
Note that putting all of the audit logs into a single table may lead to a schema mismatch depending on how they are being parsed. If concerned about this, consider placing each event into different tables, such as SentinelLakeQueries, KQLJobActions, etc. This can all be defined within the data collection rule, though the custom tables for each action will need to exist before defining them within the data collection rule.

Closing
Now that audit logs are flowing, actions taken by users within the environment can be used for detections, hunting, Workbooks, and automation. Since the logs are being ingested via a data collection rule, they can also be sent to Microsoft Sentinel data lake if desired. May this blog lead to some creativity and stronger monitoring of the Sentinel Platform!

Turn Complexity into Clarity: Introducing the New UEBA Behaviors Layer in Microsoft Sentinel
Security teams today face an overwhelming challenge: every data point is now a potential security signal, and SOCs are drowning in fragmented, high-volume logs from countless sources - firewalls, cloud platforms, identity systems, and more. Analysts spend precious time translating between schemas, manually correlating events, and piecing together timelines across disparate data sources. For custom detections, it’s no different. What if you could transform this noisy complexity into clear, actionable security intelligence? Today, we're thrilled to announce the release of the UEBA Behaviors layer - a breakthrough AI-based UEBA capability in Microsoft Sentinel that fundamentally changes how SOC teams understand and respond to security events. The Behaviors layer translates low-level, noisy telemetry into human-readable behavioral insights that answer the critical question: "Who did what to whom, and why does it matter?" Instead of sifting through thousands of raw CloudTrail events or firewall logs, you get enriched, normalized behaviors - each one mapped to MITRE ATT&CK tactics and techniques, tagged with entity roles, and presented with a clear, natural-language explanation. All behaviors are aggregated and sequenced within a time window or specific trigger, to give you the security story that resides in the logs. What Makes the Behaviors Layer Different? Unlike alerts - which signal potential threats - or anomalies - which flag unusual activity - behaviors are neutral, descriptive observations. They don't decide if something is malicious; they simply describe meaningful actions in a consistent, security-focused way. The Behaviors layer bridges the gap between alerts (work items for the SOC, indicating a breach) and raw logs, providing an abstraction layer that makes sense of what happened without requiring deep familiarity with every log source. While existing UEBA capabilities provide insights and anomalies for a specific event (raw log), behaviors turn clusters of related events – based on time windows or triggers – into security data. The technology behind it: Generative AI powers the Behaviors layer to create and scale the insights it provides. AI is used to develop behavior logic, map entities, perform MITRE mapping, and ensure explainability - all while maintaining quality guardrails. Each behavior is mapped back to raw logs, so you can always trace which events contributed to it. Real-World Impact: We've been working closely with enterprise customers during private preview, and their feedback speaks volumes about the transformative potential of the Behaviors layer: "We're constantly exploring innovative ways to detect anomalous behavior for our detection engineering and incident enrichment. Behaviors adds a powerful new layer that also covers third-party data sources in a multi-cloud environment - seamlessly integrable and packed with rich insights, including MITRE mapping and detailed context for deeper correlation and context-driven investigation." (Glueckkanja) "Microsoft's new AI-powered extension for UEBA enhances behavioral capabilities for PaloAlto logs. By intelligently aggregating and sequencing low-level security events, it elevates them into high-fidelity 'behaviors' - powerful, actionable signals. This enhanced behavioral intelligence significantly can improve your security operations. During investigations, these behaviors are immediately pointing to unusual or suspicious activities and providing a rich, contextual understanding of an entity's actions. 
They serve as a stable starting point for the analysts, instead of sifting through millions of logs." (BlueVoyant)

How It Works: Aggregation and Sequencing
The Behaviors layer operates using two powerful patterns:
Aggregation Behaviors detect volume-based patterns. For example: "User accessed 50+ AWS resources in 1 hour." These are invaluable for spotting unusual activity levels and turning high-volume logs into actionable security insights.
Sequencing Behaviors detect multi-step patterns that surface complex chains invisible in individual events. For example: "Access key created → used from new IP → privileged API calls." This helps you spot sophisticated tactics and procedures across sources.
Once enabled, behaviors are aggregated and sequenced based on time windows and triggers tailored to each logic. When the time window closes or a pattern is identified, the behavior log is created immediately - providing near real-time availability. The behaviors are stored as records in Log Analytics. This means each behavior record contributes to your data volume and will be billed according to your Sentinel/Log Analytics data ingestion rates.

Use Cases: Empowering Every SOC Persona
The new Behaviors layer in Microsoft Sentinel enhances the daily workflows of SOC analysts, threat hunters, and detection engineers by providing a unified, contextual view of security activity across diverse data sources.
SOC analysts can now investigate incidents faster by querying behaviors tied to the entities involved in an incident. For example, instead of reviewing 20 separate AWS API calls, a single behavior like "Suspicious mass secret access via AWS IAM" provides immediate clarity and context, with or without filtering on specific MITRE ATT&CK mapping. Simply use the following query (choose the entity you're investigating):
let targetTechniques = dynamic(["Password Guessing (T1110.001)"]); // to filter on MITRE ATT&CK
let behaviorInfoFiltered = BehaviorInfo
| where TimeGenerated > ago(1d)
| where AttackTechniques has_any (targetTechniques)
| project BehaviorId, AttackTechniques;
BehaviorEntities
| where TimeGenerated > ago(1d)
| where AccountUpn == "user@domain.com"
| join kind=inner (behaviorInfoFiltered) on BehaviorId
Threat hunters benefit from the ability to proactively search for behaviors mapped to MITRE tactics or specific patterns, uncovering stealthy activity such as credential enumeration or lateral movement without complex queries. Another use case is looking for specific entities that move across the MITRE ATT&CK chain within a specific time window, for example:
let behaviorInfo = BehaviorInfo
| where TimeGenerated > ago(12h)
| where Categories has "Persistence" or Categories has "Discovery" // Replace with actual tactics
| project BehaviorId, Categories, Title, TimeGenerated;
BehaviorEntities
| where TimeGenerated > ago(12h)
| extend EntityName = coalesce(AccountUpn, DeviceName, CloudResourceId) // Replace with actual entity types
| join kind=inner (behaviorInfo) on BehaviorId
| summarize BehaviorTypes = make_set(Title), AffectedEntities = dcount(EntityName) by bin(TimeGenerated, 5m)
| where AffectedEntities > 5
Detection engineers can build simpler, more explainable rules using normalized, high-fidelity behaviors as building blocks. This enables faster deployment of detections and more reliable automation triggers, such as correlating a new AWS access key creation with privilege escalation within a defined time window.
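As a rough sketch of that last scenario, the query below looks for accounts that have both a key-creation behavior and a privilege-escalation-related behavior within an hour of each other. The Title and Categories filters are illustrative placeholders (take the actual behavior titles and category values from your own BehaviorInfo data), but the BehaviorInfo/BehaviorEntities join pattern mirrors the examples above:
let lookback = 1d;
let window = 1h;
let keyCreationBehaviors = BehaviorInfo
| where TimeGenerated > ago(lookback)
| where Title has "access key" // placeholder: match the behavior titles seen in your tenant
| project KeyTime = TimeGenerated, KeyBehaviorId = BehaviorId;
let privEscBehaviors = BehaviorInfo
| where TimeGenerated > ago(lookback)
| where Categories has "PrivilegeEscalation" // placeholder: use the category values seen in your data
| project EscTime = TimeGenerated, EscBehaviorId = BehaviorId;
BehaviorEntities
| where TimeGenerated > ago(lookback) and isnotempty(AccountUpn)
| join kind=inner (keyCreationBehaviors) on $left.BehaviorId == $right.KeyBehaviorId
| project AccountUpn, KeyTime, KeyBehaviorId
| join kind=inner (
    BehaviorEntities
    | where TimeGenerated > ago(lookback) and isnotempty(AccountUpn)
    | join kind=inner (privEscBehaviors) on $left.BehaviorId == $right.EscBehaviorId
    | project AccountUpn, EscTime, EscBehaviorId
) on AccountUpn
| where EscTime between (KeyTime .. (KeyTime + window))
| project AccountUpn, KeyTime, KeyBehaviorId, EscTime, EscBehaviorId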
Another example is joining the rarest behaviors with other signals that include the organization’s highest value assets:
BehaviorInfo
| where TimeGenerated > ago(5d)
| summarize Occurrences = dcount(BehaviorId), FirstSeen = min(TimeGenerated), LastSeen = max(TimeGenerated) by Title
| order by Occurrences asc

Supported Data Sources & Coverage
This release focuses on the most common non-Microsoft data sources that traditionally lack easy behavioral context in Sentinel. Coverage of more behaviors will expand over time - both within each data source and across new sources. Initial supported sources include:
- CommonSecurityLog - specific vendors and logs:
  - CyberArk Vault
  - Palo Alto Threats
- AWS CloudTrail - coverage for several AWS services like EC2, IAM, S3, EKS, Secrets Manager (common AWS management activities)
- GCPAuditLogs
Once enabled, two new tables (BehaviorInfo and BehaviorEntities) will populate in your Log Analytics workspace. You can query these tables in Advanced Hunting, use them in detection rules, or view them alongside incidents - just like any other Sentinel data. If you already benefit from Defender behaviors (such as Microsoft Defender for Cloud Apps), the same query will show results for all sources.

Ready to Experience the Power of Behaviors?
The future of security operations is here. Don't wait to modernize your SOC workflows. Enable the Behaviors layer in Microsoft Sentinel today and start transforming raw telemetry into clear, contextual insights that accelerate detection, investigation, and response. Get started now:
1. Understand pre-requisites, limitations, pricing, and use of AI in Documentation.
2. Navigate to your Sentinel workspace settings, enable the Behaviors layer (a new tab under the UEBA settings) and connect the data sources. This is currently supported for a single workspace per tenant (best chosen by the ingestion of the supported data sources).
3. Once enabled, explore the BehaviorInfo and BehaviorEntities tables in Advanced Hunting. If you already benefit from behaviors in XDR, querying the tables will show results from both XDR and UEBA.
4. Start building detection rules, hunting queries, and automation workflows using the behaviors as building blocks.
5. Share your feedback to help us improve and expand coverage.

Monthly news - December 2025
Microsoft Defender Monthly news - December 2025 Edition This is our monthly "What's new" blog post, summarizing product updates and various new assets we released over the past month across our Defender products. In this edition, we are looking at all the goodness from November 2025. Defender for Cloud has its own Monthly News post, have a look at their blog space. 😎 Microsoft Ignite 2025 - now on-demand! 🚀 New Virtual Ninja Show episode: Advancements in Attack Disruption Vulnerability Remediation Agent in Microsoft Intune Microsoft Defender Ignite 2025: What's new in Microsoft Defender? This blog summarizes our big announcements we made at Ignite. (Public Preview) Defender XDR now includes the predictive shielding capability, which uses predictive analytics and real-time insights to dynamically infer risk, anticipate attacker progression, and harden your environment before threats materialize. Learn more about predictive shielding. Security Copilot for SOC: bringing agentic AI to every defender. This blog post gives a great overview of the various agents supporting SOC teams. Account correlation links related accounts and corresponding insights to provide identity-level visibility and insights to the SOC. Coordinated response allows Defenders to take action comprehensively across connected accounts, accelerating response and minimizing the potential for lateral movement. Enhancing visibility into your identity fabric with Microsoft Defender. This blog describes new enhancements to the identity security experience within Defender that will help enrich your security team’s visibility and understanding into your unique identity fabric. (Public Preview) The IdentityAccountInfo table in advanced hunting is now available for preview. This table contains information about account information from various sources, including Microsoft Entra ID. It also includes information and link to the identity that owns the account. Microsoft Sentinel customers using the Defender portal, or the Azure portal with the Microsoft Sentinel Defender XDR data connector, now also benefit from Microsoft Threat Intelligence alerts that highlight activity from nation-state actors, major ransomware campaigns, and fraudulent operations. For more information, see Incidents and alerts in the Microsoft Defender portal. (Public Preview) New Entity Behavior Analytics (UEBA) experiences in the Defender portal! Microsoft Sentinel introduces new UEBA experiences in the Defender portal, bringing behavioral insights directly into key analyst workflows. These enhancements help analysts prioritize investigations and apply UEBA context more effectively. Learn more on our docs. (Public Preview) A new Restrict pod access response action is now available when investigating container threats in the Defender portal. This response action blocks sensitive interfaces that allow lateral movement and privilege escalation. (Public Preview) Threat analytics now has an Indicators tab that provides a list of all indicators of compromise (IOCs) associated with a threat. Microsoft researchers update these IOCs in real time as they find new evidence related to the threat. This information helps your security operations center (SOC) and threat intelligence analysts with remediation and proactive hunting. Learn more. 
In addition the overview section of threat analytics now includes additional details about a threat, such as alias, origin, and related intelligence, providing you with more insights on what the threat is and how it might impact your organization. Microsoft Defender for Identity (Public Preview) In addition to the GA release of scoping by Active Directory domains a few months ago, you can now scope by Organizational Units (OUs) as part of XDR User Role-Based Access Control. This enhancement provides even more granular control over which entities and resources are included in security analysis. For more information, see Configure scoped access for Microsoft Defender for Identity. (Public Preview). New security posture assessment: Change password for on-prem account with potentially leaked credentials. The new security posture assessment lists users whose valid credentials have been leaked. For more information, see: Change password for on-prem account with potentially leaked credentials. Defender for Identity is slowly rolling out automatic Windows event auditing for sensors v3.x, streamlining deployment by applying required auditing settings to new sensors and fixing misconfigurations on existing ones. As it becomes available, you will be able to enable automatic Windows event-auditing in the Advanced settings section in the Defender portal, or using the Graph API. Identity Inventory enhancements: Accounts tab, manual account linking and unlinking, and expanded remediation actions are now available. Learn more in our docs. Microsoft Defender for Cloud Apps (Public Preview) Defender for Cloud Apps automatically discovers AI agents created in Microsoft Copilot Studio and Azure AI Foundry, collects audit logs, continuously monitors for suspicious activity, and integrates detections and alerts into the XDR Incidents and Alerts experience with a dedicated Agent entity. For more information, see Protect your AI agents. Microsoft Defender for Endpoint Ignite 2025: Microsoft Defender now prevents threats on endpoints during an attack. This year at Microsoft Ignite, Microsoft Defender is announcing exciting innovations for endpoint protection that help security teams deploy faster, gain more visibility, and proactively block attackers during active attacks. (Public Preview) Defender for Endpoint now includes the GPO hardening and Safeboot hardening response actions. These actions are part of the predictive shielding feature, which anticipates and mitigates potential threats before they materialize. (Public Preview) Custom data collection enables organizations to expand and customize telemetry collection beyond default configurations to support specialized threat hunting and security monitoring needs. (Public Preview) Native root detection support for Microsoft Defender on Android. This enables proactive detection of rooted devices without requiring Intune policies, ensuring stronger security and validating that Defender is running on an uncompromised device, ensuring more reliable telemetry that is not vulnerable to attacker manipulation. (Public Preview) The new Defender deployment tool is a lightweight, self-updating application that streamlines onboarding devices to the Defender endpoint security solution. The tool takes care of prerequisites, automates migrations from older solutions, and removes the need for complex onboarding scripts, separate downloads, and manual installations. It currently supports Windows and Linux devices. 
Defender deployment tool:
- for Windows devices
- for Linux devices
(Public Preview) Defender endpoint security solution for Windows 7 SP1 and Windows Server 2008 R2 SP1. A Defender for endpoint security solution is now available for legacy Windows 7 SP1 and Windows Server 2008 R2 SP1 devices. The solution provides advanced protection capabilities and improved functionality for these devices compared to other solutions. The new solution is available using the new Defender deployment tool.

Microsoft Defender Vulnerability Management
(Public Preview) The Vulnerability Management section in the Microsoft Defender portal is now located under Exposure management. This change is part of the vulnerability management integration to Microsoft Security Exposure Management, which significantly expands the scope and capabilities of the platform. Learn more.
(General Availability) Microsoft Secure Score now includes new recommendations to help organizations proactively prevent common endpoint attack techniques.
- Require LDAP client signing and Require LDAP server signing - help ensure integrity of directory requests so attackers can't tamper with or manipulate group memberships or permissions in transit.
- Encrypt LDAP client traffic - prevents exposure of credentials and sensitive user information by enforcing encrypted communication instead of clear-text LDAP.
- Enforce LDAP channel binding - prevents man-in-the-middle relay attacks by ensuring the authentication is cryptographically tied to the TLS session. If the TLS channel changes, the bind fails, stopping credential replay.
(General Availability) These Microsoft Secure Score recommendations are now generally available:
- Block web shell creation on servers
- Block use of copied or impersonated system tools
- Block rebooting a machine in Safe Mode

Microsoft Defender for Office 365
Microsoft Ignite 2025: Transforming Phishing Response with Agentic Innovation. This blog post summarizes the following announcements:
- General Availability of the Security Copilot Phishing Triage Agent
- Agentic Email Grading System in Microsoft Defender
- Cisco and VIPRE Security Group join the Microsoft Defender ICES ecosystem. A separate blog explains these best practices in more detail and outlines three other routing techniques commonly used across ICES vendors.
Blog series: Best practices from the Microsoft Community
- Microsoft Defender for Office 365: Fine-Tuning: This blog covers our top recommendations for fine-tuning Microsoft Defender for Office 365 configuration from hundreds of deployments and recovery engagements, by Microsoft MVP Joe Stocker.
- You may be right after all! Disputing Submission Responses in Microsoft Defender for Office 365: Microsoft MVP Mona Ghadiri spotlights a new place AI has been inserted into a workflow to make it better… a feature that elevates the transparency and responsiveness of threat management: the ability to dispute a submission response directly within Microsoft Defender for Office 365.
Blog post: Strengthening calendar security through enhanced remediation.

Announcing AI Entity Analyzer in Microsoft Sentinel MCP Server - Public Preview
What is the Entity Analyzer? Assessing the risk of entities is a core task for SOC teams - whether triaging incidents, investigating threats, or automating response workflows. Traditionally, this has required building complex playbooks or custom logic to gather and analyze fragmented security data from multiple sources. With Entity Analyzer, this complexity starts to fade away. The tool leverages your organization’s security data in Sentinel to deliver comprehensive, reasoned risk assessments for any entity you encounter - starting with users and urls. By providing this unified, out-of-the-box solution for entity analysis, Entity Analyzer also enables the AI agents you build to make smarter decisions and automate more tasks - without the need to manually engineer risk evaluation logic for each entity type. And for those building SOAR workflows, Entity Analyzer is natively integrated with Logic Apps, making it easy to enrich incidents and automate verdicts within your playbooks. *Entity Analyzer is rolling out in Public Preview to Sentinel MCP server and within Logic Apps starting today. Learn more here. **Leave feedback on the Entity Analyzer here. Deep Dive: How the User Analyzer is already solving problems for security teams Problem: Drowning in identity alerts Security operations centers (SOCs) are inundated with identity-based threats and alert noise. Triaging these alerts requires analyzing numerous data sources across sign-in logs, cloud app events, identity info, behavior analytics, threat intel, and more, all in tandem with each other to reach a verdict - something very challenging to do without a human in the loop today. So, we introduced the User Analyzer, a specialized analyzer that unifies, correlates, and analyzes user activity across all these security data sources. Government of Nunavut: solving identity alert overload with User Analyzer Hear the below from Arshad Sheikh, Security Expert at Government of Nunavut, on how they're using the User Analyzer today: How it's making a difference "Before the User Analyzer, when we received identity alerts we had to check a large amount of data related to users’ activity (user agents, anomalies, IP reputation, etc.). We had to write queries, wait for them to run, and then manually reason over the results. We attempted to automate some of this, but maintaining and updating that retrieval, parsing, and reasoning automation was difficult and we didn’t have the resources to support it. With the User Analyzer, we now have a plug-and-play solution that represents a step toward the AI-driven automation of the future. It gathers all the context such as what the anomalies are and presents it to our analysts so they can make quick, confident decisions, eliminating the time previously spent manually gathering this data from portals." Solving a real problem "For example, every 24 hours we create a low severity incident of our users who successfully sign-in to our network non interactively from outside of our GEO fence. This type of activity is not high-enough fidelity to auto-disable, requiring us to manually analyze the flagged users each time. But with User Analyzer, this analysis is performed automatically. The User Analyzer has also significantly reduced the time required to determine whether identity-based incidents like these are false positives or true positives. Instead of spending around 20 minutes investigating each incident, our analysts can now reach a conclusion in about 5 minutes using the automatically generated summary." 
Looking ahead
"Looking ahead, we see even more potential. In the future, the User Analyzer could be integrated directly with Microsoft Sentinel playbooks to take automated, definitive action such as blocking user or device access based on the analyzer's results. This would further streamline our incident response and move us closer to fully automated security operations."
Want similar benefits in your SOC? Get started with our Entity Analyzer Logic Apps template here.

User Analyzer architecture: how does it work?
Let's take a look at how the User Analyzer works. The User Analyzer aggregates and correlates signals from multiple data sources to deliver a comprehensive analysis, enabling informed actions based on user activity. The diagram below gives an overview of this architecture:
Step 1: Retrieve Data
The analyzer starts by retrieving relevant data from the following sources:
- Sign-In Logs (Interactive & Non-Interactive): Tracks authentication and login activity.
- Security Alerts: Alerts from Microsoft Defender solutions.
- Behavior Analytics: Surfaces behavioral anomalies through advanced analytics.
- Cloud App Events: Captures activity from Microsoft Defender for Cloud Apps.
- Identity Information: Enriches user context with identity records.
- Microsoft Threat Intelligence: Enriches IP addresses with Microsoft Threat Intelligence.
Step 2: Correlate signals
Signals are correlated using identifiers such as user IDs, IP addresses, and threat intelligence. Rather than treating each alert or behavior in isolation, the User Analyzer fuses signals to build a holistic risk profile.
Step 3: AI-based reasoning
In the User Analyzer, multiple AI-powered agents collaborate to evaluate the evidence and reach consensus. This architecture not only improves accuracy and reduces bias in verdicts, but also provides transparent, justifiable decisions. Leveraging AI within the User Analyzer introduces a new dimension of intelligence to threat detection. Instead of relying on static signatures or rigid regex rules, AI-based reasoning can uncover subtle anomalies that traditional detection methods and automation playbooks often miss. For example, an attacker might try to evade detection by slightly altering a user-agent string or by targeting and exfiltrating only a few files of specific types. While these changes could bypass conventional pattern matching, an AI-powered analyzer understands the semantic context and behavioral patterns behind these artifacts, allowing it to flag suspicious deviations even when the syntax looks benign.
Step 4: Verdict & analysis
Each user is given a verdict. The analyzer outputs any of the following verdicts based on the analysis:
- Compromised
- Suspicious activity found
- No evidence of compromise
Based on the verdict, a corresponding recommendation is given. This helps teams make an informed decision whether action should be taken against the user.
*AI-generated content from the User Analyzer may be incorrect - check it for accuracy.

User Analyzer Example Output
See the following example output from the User Analyzer within an incident comment (IP addresses have been redacted for this blog). The output includes the relevant MITRE ATT&CK techniques, a list of malicious IP addresses the user signed in from, and a few suspicious user agents the user's activity originated from. Analysts, who typically have to query and analyze these details themselves, can feel more comfortable trusting the classification. The analyzer also gives recommendations to remediate the account compromise, and a list of data sources it used during analysis.
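For comparison, below is a rough sketch of the kind of manual correlation an analyst would otherwise perform across a couple of the data sources listed above - summarizing a user's recent sign-in activity and enriching it with identity context. The table and column names come from the standard SigninLogs and IdentityInfo schemas; the user and time window are placeholders:
let suspectUser = "user@contoso.com"; // placeholder UPN
SigninLogs
| where TimeGenerated > ago(7d)
| where UserPrincipalName =~ suspectUser
| summarize SignIns = count(),
            FailedSignIns = countif(ResultType != "0"),
            DistinctIPs = dcount(IPAddress),
            IPs = make_set(IPAddress, 20),
            UserAgents = make_set(UserAgent, 10)
            by UserPrincipalName
| join kind=leftouter (
    IdentityInfo
    | where TimeGenerated > ago(7d)
    | summarize arg_max(TimeGenerated, Department, JobTitle) by AccountUPN
) on $left.UserPrincipalName == $right.AccountUPN
| project UserPrincipalName, Department, JobTitle, SignIns, FailedSignIns, DistinctIPs, IPs, UserAgents
The User Analyzer performs this type of retrieval and correlation automatically across all of the sources above, and then applies AI-based reasoning on top of the combined result.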
Conclusion
Entity Analyzer in Microsoft Sentinel MCP server represents a leap forward in alert triage & analysis. By correlating signals and harnessing AI-based reasoning, it empowers SOC teams to act on investigations with greater speed, precision, and confidence.
*Leave feedback on the Entity Analyzer here

Ignite 2025: New Microsoft Sentinel Connectors Announcement
Microsoft Sentinel continues to set the pace for innovation in cloud-native SIEMs, empowering security teams to meet today’s challenges with scalable analytics, built-in AI, and a cost-effective data lake. Recognized as a leader by Gartner and Forrester, Microsoft Sentinel is a platform for all of security, evolving to unify signals, cut costs, and power agentic AI for the modern SOC. As Microsoft Sentinel’s capabilities expand, so does its connector ecosystem. With over 350+ integrations available, organizations can seamlessly bring data from a wide range of sources into Microsoft Sentinel’s analytics and data lake tiers. This momentum is driven by our partners, who continue to deliver new and enhanced connectors that address real customer needs. The past year has seen rapid growth in both the number and diversity of connectors, ensuring that Microsoft Sentinel remains robust, flexible, and ready to meet the demands of any security environment. Today we showcase some of the most recent additions to our growing Microsoft Sentinel ecosystem spanning categories such as cloud security, endpoint protection, identity, IT operations, threat intelligence, compliance, and more: New and notable integrations BlinkOps and Microsoft Sentinel BlinkOps is an enterprise-ready agentic security automation platform that integrates seamlessly with Microsoft Sentinel to accelerate incident response and streamline operations. With Blink, analysts can rapidly build sophisticated workflows and custom security agents—without writing a single line of code—enabling agile, scalable automation with both Microsoft Sentinel and any other security platform. This integration helps eliminate alert fatigue, reduce mean time to resolution (MTTR), and free teams to focus on what matters most: driving faster operations, staying ahead of cyber threats, and unlocking new levels of efficiency through reliable, trusted orchestration. Check Point for Microsoft Sentinel solutions Check Point’s External Risk Management (ERM) IOC and Alerts integration with Microsoft Sentinel streamlines how organizations detect and respond to external threats by automatically sending both alerts and indicators of compromise (IOCs) into Microsoft Sentinel. Through this integration, customers can configure SOAR playbooks to trigger automated actions such as updating security policies, blocking malicious traffic, and executing other security operations tasks. This orchestration reduces manual effort, accelerates response times, and allows IT teams, network administrators, and security personnel to focus on strategic threat analysis—strengthening the organization’s overall security posture. Cloudflare for Microsoft Sentinel Cloudflare’s integration with Microsoft Sentinel, powered by Logpush, brings detailed security telemetry from its Zero Trust and network services into your SIEM environment. By forwarding logs such as DNS queries, HTTP requests, and access events through Logpush, the connector enables SOC teams to correlate Cloudflare data with other sources for comprehensive threat detection. This integration supports automated workflows for alerting and investigation, helping organizations strengthen visibility across web traffic and identity-based access while reducing manual overhead. 
Contrast ADR for Microsoft Sentinel Contrast Security gives Microsoft Sentinel users their first-ever integration with Application Detection and Response (ADR), delivering real-time visibility into application and API attacks, eliminating the application-layer blind spot. By embedding security directly into applications, Contrast enables continuous monitoring and precise blocking of attacks, and with AI assistance, the ability to fix underlying software vulnerabilities in minutes. This integration helps security teams prioritize actionable insights, reduce noise, and better understand the severity of threats targeting APIs and web apps. GreyNoise Enterprise Solution for Microsoft Sentinel GreyNoise helps Microsoft Sentinel users cut through the noise by identifying and filtering out internet background traffic that clutters security alerts. Drawing from a global sensor network, GreyNoise classifies IP addresses that are scanning the internet, allowing SOC teams to deprioritize benign activity and focus on real threats. The integration supports automated triage, threat hunting, and enrichment workflows, giving analysts the context they need to investigate faster and more effectively. iboss Connector for Microsoft Sentinel The iboss Connector for Microsoft Sentinel delivers real-time ingestion of URL event logs, enriching your SIEM with high-fidelity web traffic insights. Logs are forwarded in Common Event Format (CEF) over Syslog, enabling streamlined integration without the need for a proxy. With built-in parser functions and custom workbooks, the solution supports rapid threat detection and investigation. This integration is especially valuable for organizations adopting Zero Trust principles, offering granular visibility into user access patterns and helping analysts accelerate response workflows. Mimecast Mimecast’s integration with Microsoft Sentinel consolidates email security telemetry into a unified threat detection environment. By streaming data from Mimecast into Microsoft Sentinel’s Log Analytics workspace, security teams can craft custom queries, automate response workflows, and prioritize high-risk events. This connector supports a wide range of use cases, from phishing detection to compliance monitoring, while helping reduce mean time to respond (MTTR). MongoDB Atlas Solution for Microsoft Sentinel MongoDB Atlas integrates with Microsoft Sentinel to provide visibility into database activity and security events across cloud environments. By forwarding database logs into Sentinel, this connector enables SOC teams to monitor access patterns, detect anomalies, and correlate database alerts with broader security signals. The integration allows for custom queries and dashboards to be built on real-time log data, helping organizations strengthen data security, streamline investigations, and maintain compliance for critical workloads. Onapsis Defend Onapsis Defend integrates with Microsoft Sentinel Solution for SAP to deliver real-time security monitoring and threat detection from both cloud and on-premises SAP systems. By forwarding Onapsis's unique SAP exploit detection, proprietary SAP zero-day rules, and expert SAP-focused insights into Microsoft Sentinel, this integration enables SOC teams to correlate SAP-specific risks with enterprise-wide telemetry and accelerate incident response. 
The integration supports prebuilt analytics rules and dashboards, helping organizations detect suspicious behavior and malicious activity, prioritize remediation, and strengthen compliance across complex SAP application landscapes. Proofpoint on Demand (POD) Email Security for Microsoft Sentinel Proofpoint’s Core Email Protection integrates with Microsoft Sentinel to deliver granular email security telemetry for advanced threat analysis. By forwarding events such as phishing attempts, malware detections, and policy violations into Microsoft Sentinel, SOC teams can correlate Proofpoint data with other sources for a unified view of risk. The connector supports custom queries, dashboards, and automated playbooks, enabling faster investigations and streamlined remediation workflows. This integration helps organizations strengthen email defenses and improve response efficiency across complex attack surfaces. Proofpoint TAP Solution Proofpoint’s Targeted Attack Protection (TAP), part of its Core Email Protection, integrates with Microsoft Sentinel to centralize email security telemetry for advanced threat detection and response. By streaming logs and events from Proofpoint into Microsoft Sentinel, SOC teams gain visibility into phishing attempts, malicious attachments, and compromised accounts. The connector supports custom queries, dashboards, and automated playbooks, enabling faster investigations and streamlined remediation workflows. This integration helps organizations strengthen email defenses while reducing manual effort across incident response processes. RSA ID Plus Admin Log Connector The RSA ID Plus Admin Log Connector integrates with Microsoft Sentinel to provide centralized visibility into administrative activity within RSA ID Plus Connector. By streaming admin-level logs into Sentinel, SOC teams can monitor changes, track authentication-related operations, and correlate identity events with broader security signals. The connector supports custom queries and dashboards, enabling organizations to strengthen oversight and streamline investigations across their hybrid environments. Rubrik Integrations with Microsoft Sentinel for Ransomware Protection Rubrik’s integration with Microsoft Sentinel strengthens ransomware resilience by combining data security with real-time threat detection. The connector streams anomaly alerts, such as suspicious deletions, modifications, encryptions, or downloads, directly into Microsoft Sentinel, enabling fast investigations and more informed responses. With built-in automation, security teams can trigger recovery workflows from within Microsoft Sentinel, restoring clean backups or isolating affected systems. The integration bridges IT and SecOps, helping organizations minimize downtime and maintain business continuity when facing data-centric threats. Samsung Knox Asset Intelligence for Microsoft Sentinel Samsung’s Knox Asset Intelligence integration with Microsoft Sentinel equips security teams with near real-time visibility into mobile device threats across Samsung Galaxy enterprise fleets. By streaming security events and logs from managed Samsung devices into Microsoft Sentinel via the Azure Monitor Log Ingestion API, organizations can monitor risk posture, detect anomalies, and investigate incidents from a centralized dashboard. 
This solution is especially valuable for SOC teams monitoring endpoints for large mobile workforces, offering data-driven insights to reduce blind spots and strengthen endpoint security without disrupting device performance. SAP S/4HANA Public Cloud – Microsoft Sentinel SAP S/4HANA Cloud, public edition integrates with Microsoft Sentinel Solution for SAP to deliver unified, real-time security monitoring for cloud ERP environments. This connector leverages Microsoft’s native SAP integration capabilities to stream SAP logs into Microsoft Sentinel, enabling SOC teams to correlate SAP-specific events with enterprise-wide telemetry for faster, more accurate threat detection and response. SAP Enterprise Threat Detection – Microsoft Sentinel SAP Enterprise Threat Detection integrates with Microsoft Sentinel Solution for SAP to deliver unified, real-time security monitoring across SAP landscapes and the broader enterprise. Normalized SAP logs, alerts, and investigation reports flow into Microsoft Sentinel, enabling SOC teams to correlate SAP-specific alerts with enterprise telemetry for faster, more accurate threat detection and response. SecurityBridge: SAP Data to Microsoft Sentinel SecurityBridge extends Microsoft Sentinel for SAP’s reach into SAP environments, offering real-time monitoring and threat detection across both cloud and on-premises SAP systems. By funneling normalized SAP security events into Microsoft Sentinel, this integration enables SOC teams to correlate SAP-specific risks with broader enterprise telemetry. With support for S/4HANA, SAP BTP, and NetWeaver-based applications, SecurityBridge simplifies SAP security auditing and provides prebuilt dashboards and templates to accelerate investigations. Tanium Microsoft Sentinel Connector Tanium’s integration with Microsoft Sentinel bridges real-time endpoint intelligence and SIEM analytics, offering a unified approach to threat detection and response. By streaming real-time telemetry and alerts into Microsoft Sentinel,Tanium enables security teams to monitor endpoint health, investigate incidents, and trigger automated remediation, all from a single console. The connector supports prebuilt workbooks and playbooks, helping organizations reduce dwell time and align IT and security operations around a shared source of truth. Team Cymru Pure Signal Scout for Microsoft Sentinel Team Cymru’s Pure Signal™ Scout integration with Microsoft Sentinel delivers high-fidelity threat intelligence drawn from global internet telemetry. By enriching Microsoft Sentinel alerts with real-time context on IPs, domains, and adversary infrastructure, Scout enables security teams to proactively monitor third-party compromise, track threat actor infrastructure, and reduce false positives. The integration supports external threat hunting and attribution, enabling analysts to discover command-and-control activity, signals of data exfiltration and compromise with greater precision. For organizations seeking to build preemptive defenses by elevating threat visibility beyond their borders, Scout offers a lens into the broader threat landscape at internet scale. Veeam App for Microsoft Sentinel The Veeam App for Microsoft Sentinel enhances data protection by streaming backup and recovery telemetry into your SIEM environment. The solution provides visibility into backup job status, anomalies, and potential ransomware indicators, enabling SOC teams to correlate these events with broader security signals. 
With support for custom queries and automated playbooks, this integration helps organizations accelerate investigations, trigger recovery workflows, and maintain resilience against data-centric threats.

WithSecure Elements via Function for Microsoft Sentinel
WithSecure's Elements platform integrates with Microsoft Sentinel to provide centralized visibility into endpoint protection and detection events. By streaming incident and malware telemetry into Microsoft Sentinel, organizations can correlate endpoint data with broader security signals for faster, more informed responses. The solution supports a proactive approach to cybersecurity, combining predictive, preventive, and responsive capabilities, making it well-suited for teams seeking speed and flexibility without sacrificing depth. This integration helps reduce complexity while enhancing situational awareness across hybrid environments, and helps companies prevent or minimize any disruption.

In addition to these solutions from our third-party partners, we are also excited to announce the following connectors published by the Microsoft Sentinel team, available now in Azure Marketplace and Microsoft Sentinel content hub:
- Alibaba Cloud Action Trail Logs
- AWS: Network Firewall
- AWS: Route 53 DNS
- AWS: Security Hub Findings
- AWS: Server Access
- Cisco Secure Endpoint
- GCP: Apigee
- GCP: CDN
- GCP: Cloud Monitor
- GCP: Cloud Run
- GCP: DNS
- GCP: Google Kubernetes Engine (GKE)
- GCP: NAT
- GCP: Resource Manager
- GCP: SQL
- GCP: VPC Flow
- GCP: IAM
- OneLogin IAM
- Oracle Cloud Infrastructure
- Palo Alto: Cortex Xpanse CCF
- Palo Alto: Prisma Cloud CWPP
- Ping One
- Qualys Vulnerability Management
- Salesforce Service Cloud
- Slack Audit
- Snowflake

App Assure: The Microsoft Sentinel promise
Every connector in the Microsoft Sentinel ecosystem is built to work out of the box, backed by the App Assure team and the Microsoft Sentinel promise. In the unlikely event that customers encounter any issues, App Assure stands ready to assist to ensure rapid resolution. With the new Microsoft Sentinel data lake features, we extend our promise for customers looking to bring their data to the lake. To request a new connector or features for an existing one, contact us via our intake form.

Learn More
- Microsoft Sentinel data lake
- Microsoft Sentinel data lake: Unify signals, cut costs, and power agentic AI
- Introducing Microsoft Sentinel data lake
- What is Microsoft Sentinel data lake
- Unlocking Developer Innovation with Microsoft Sentinel data lake
- Microsoft Sentinel Codeless Connector Framework (CCF)
- Create a codeless connector for Microsoft Sentinel
- What's New in Microsoft Sentinel
- Microsoft App Assure
- App Assure home page
- App Assure services
- App Assure blog
- App Assure's promise: Migrate to Sentinel with confidence
- App Assure's Sentinel promise now extends to Microsoft Sentinel data lake
- RSAC 2025 new Microsoft Sentinel connectors announcement
- Microsoft Security
- Microsoft's Secure Future Initiative
- Microsoft Unified SecOps

Automating Microsoft Sentinel: Part 2: Automate the mundane away
Welcome to the second entry of our blog series on automating Microsoft Sentinel. In this series, we’re showing you how to automate various aspects of Microsoft Sentinel, from simple automation of Sentinel Alerts and Incidents to more complicated response scenarios with multiple moving parts. So far, we’ve covered Part 1: Introduction to Automating Microsoft Sentinel where we talked about why you would want to automate as well as an overview of the different types of automation you can do in Sentinel. Here is a preview of what you can expect in the upcoming posts [we’ll be updating this post with links to new posts as they happen]: Part 1: Introduction to Automating Microsoft Sentinel Part 2: Automation Rules [You are here] – Automate the mundane away Part 3: Playbooks 1 – Playbooks Part I – Fundamentals Part 4: Playbooks 2 – Playbooks Part II – Diving Deeper Part 5: Azure Functions / Custom Code Part 6: Capstone Project (Art of the Possible) – Putting it all together Part 2: Automation Rules – Automate the mundane away Automation rules can be used to automate Sentinel itself. For example, let’s say there is a group of machines that have been classified as business critical and if there is an alert related to those machines, then the incident needs to be assigned to a Tier 3 response team and the severity of the alert needs to be raised to at least “high”. Using an automation rule, you can take one analytic rule, apply it to the entire enterprise, but then have an automation rule that only applies to those business-critical systems to make those changes. That way only the items that need that immediate escalation receive it, quickly and efficiently. Automation Rules In Depth So, now that we know what Automation Rules are, let’s dive in to them a bit deeper to better understand how to configure them and how they work. Creating Automation Rules There are three main places where we can create an Automation Rule: 1) Navigating to Automation under the left menu 2) In an existing Incident via the “Actions” button 3) When writing an Analytic Rule, under the “Automated response” tab The process for each is generally the same, except for the Incident route and we’ll break that down more in a bit. When we create an Automation Rule, we need to give the rule a name. It should be descriptive and indicative of what the rule is going to do and what conditions it applies to. For example, a rule that automatically resolves an incident based on a known false positive condition on a server named SRV02021 could be titled “Automatically Close Incident When Affected Machine is SRV02021” but really it’s up to you to decide what you want to name your rules. Trigger The next thing we need to define for our Automation Rule is the Trigger. Triggers are what cause the automation rule to begin running. They can fire when an incident is created or updated, or when an alert is created. Of the two options (incident based or alert based), it’s preferred to use incident triggers as they’re potentially the aggregation of multiple alerts and the odds are that you’re going to want to take the same automation steps for all of the alerts since they’re all related. It’s better to reserve alert-based triggers for scenarios where an analytic rule is firing an alert, but is set to not create an incident. Conditions Conditions are, well, the conditions to which this rule applies. There are two conditions that are always present: The Incident provider and the Analytic rule name. You can choose multiple criterion and steps. 
For example, you could have it apply to all incident providers and all rules (as shown in the picture above) or only a specific provider and all rules, or not apply to a particular provider, etc. etc. You can also add additional Conditions that will either include or exclude the rule from running. When you create a new condition, you can build it out by multiple properties ranging from information about the Incident all the way to information about the Entities that are tagged in the incident Remember our earlier Automation Rule title where we said this was a false positive about a server name SRV02021? This is where we make the rule match that title by setting the Condition to only fire this automation if the Entity has a host name of “SRV2021” By combining AND and OR group clauses with the built in conditional filters, you can make the rule as specific as you need it to be. You might be thinking to yourself that it seems like while there is a lot of power in creating these conditions, it might be a bit onerous to create them for each condition. Recall earlier where I said the process for the three ways of creating Automation Rules was generally the same except using the Incident Action route? Well, that route will pre-fill variables for that selected instance. For example, for the image below, the rule automatically took the rule name, the rules it applies to as well as the entities that were mapped in the incident. You can add, remove, or modify any of the variables that the process auto-maps. NOTE: In the new Unified Security Operations Platform (Defender XDR + Sentinel) that has some new best practice guidance: If you've created an automation using "Title" use "Analytic rule name" instead. The Title value could change with Defender's Correlation engine. The option for "incident provider" has been removed and replaced by "Alert product names" to filter based on the alert provider. Actions Now that we’ve tuned our Automation Rule to only fire for the situations we want, we can now set up what actions we want the rule to execute. Clicking the “Actions” drop down list will show you the options you can choose When you select an option, the user interface will change to map to your selected option. For example, if I choose to change the status of the Incident, the UX will update to show me a drop down menu with options about which status I would like to set. If I choose other options (Run playbook, change severity, assign owner, add tags, add task) the UX will change to reflect my option. You can assign multiple actions within one Automation Rule by clicking the “Add action” button and selecting the next action you want the system to take. For example, you might want to assign an Incident to a particular user or group, change its severity to “High” and then set the status to Active. Notably, when you create an Automation rule from an Incident, Sentinel automatically sets a default action to Change Status. It sets the automation up to set the Status to “Closed” and a “Benign Positive – Suspicious by expected”. This default action can be deleted and you can then set up your own action. In a future episode of this blog we’re going to be talking about Playbooks in detail, but for now just know that this is the place where you can assign a Playbook to your Automation Rules. 
There is one other option in the Actions menu that I wanted to specifically talk about in this blog post, though: Incident Tasks.

Incident Tasks

Like most cybersecurity teams, yours probably has a runbook of the different tasks or steps that analysts and responders should take in different situations. By using incident tasks, you can embed those runbook steps directly in the incident. Incident tasks can be as lightweight or as detailed as you need them to be and can include rich formatting, links to external content, images, and more. When an incident with tasks is generated, the SOC team sees the tasks attached to the incident and can take the defined actions and check them off as they are completed.

Rule Lifetime and Order

There is one last part of an automation rule that we need to define before we can start automating the mundane away: when the rule should expire, and in what order it should run compared to other rules. When you create a rule in the standalone automation UX, the default is for the rule to never expire (the expiration is set to “Indefinite”). You can change the expiration to any date and time in the future. If you are creating the automation rule from an incident, Sentinel assumes the rule should expire and automatically sets the expiration to 24 hours in the future. Just as with the default action when created from an incident, you can change the expiration to any date and time in the future, or set it back to “Indefinite” by deleting the date.

Conclusion

In this blog post, we talked about automation rules in Sentinel and how you can use them to automate mundane tasks, as well as how to leverage them to help your SOC analysts be more effective and consistent in their day-to-day work with capabilities like incident tasks. Stay tuned for more updates and tips on automating Microsoft Sentinel!

From “No” to “Now”: A 7-Layer Strategy for Enterprise AI Safety
The “block” posture on Generative AI has failed. In a global enterprise, banning these tools doesn’t stop usage; it simply pushes intellectual property into unmanaged channels and creates a massive visibility gap in corporate telemetry. The priority has now shifted from stopping AI to hardening the environment so that innovation can run at velocity without compromising data sovereignty.

Traditional security perimeters are ineffective against the “slow bleed” of AI leakage - where data moves through prompts, clipboards, and autonomous agents rather than bulk file transfers. To secure this environment, a 7-layer defense-in-depth model is required, one that treats the conversation itself as the new perimeter.

1. Identity: The Only Verifiable Perimeter

Identity is the primary control plane. Access to AI services must be treated with the same rigor as administrative access to core infrastructure. The strategy centers on enforcing device-bound Conditional Access, where access is strictly contingent on device health. To solve the “Account Leak” problem, deploying Tenant Restrictions v2 (TRv2) is essential to prevent users from signing into personal tenants on corporate-managed devices. For enhanced coverage, Universal Tenant Restrictions (UTR) via Global Secure Access (GSA) allow consistent enforcement at the cloud edge. While TRv2 authentication-plane protection is GA, data-plane protection is GA for the Microsoft 365 admin center and remains in preview for other workloads such as SharePoint and Teams.

2. Eliminating the Visibility Gap (Shadow AI)

You can’t secure what you can’t see. Microsoft Defender for Cloud Apps (MDCA) discovers and governs the enterprise AI footprint, while Purview DSPM for AI (formerly AI Hub) monitors Copilot and third-party interactions. By categorizing tools using MDCA risk scores and compliance attributes, organizations can apply automated sanctioning decisions and enforce session controls for high-risk endpoints.

3. Data Hygiene: Hardening the “Work IQ”

AI acts as a mirror of internal permissions. In a “flat” environment, AI behaves like a search engine for your over-shared data. Hardening the foundation requires automated sensitivity labeling in Purview Information Protection. Identifying PII and proprietary code before assigning AI licenses ensures that labels travel with the data, preventing labeled content from being exfiltrated via prompts or unauthorized sharing.

4. Session Governance: Solving the “Clipboard Leak”

The most common leak in 2025 is not a file upload; it’s a simple copy-paste action or a USB transfer. Deploying Conditional Access App Control (CAAC) via MDCA session policies allows sanctioned apps to function while specifically blocking cut/copy/paste. This is complemented by Endpoint DLP, which extends governance to the physical device level, preventing sensitive data from being moved to unmanaged USB storage or printers during an AI-assisted workflow. Purview Information Protection with IRM rounds this out by enforcing encryption and usage rights on the files themselves. When a user tries to print a “Do Not Print” document, Purview triggers an alert that flows into Microsoft Sentinel. This gives the SOC visibility into actual policy violations instead of having to hunt through generic activity logs.
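To show what that SOC visibility can look like in practice, here is a minimal sketch that pulls recent DLP-related alerts out of the Sentinel workspace with the azure-monitor-query SDK. The workspace ID is a placeholder, and the provider-name filter in the query is an assumption - check the exact ProviderName or ProductName values your Purview and Endpoint DLP alerts carry in the SecurityAlert table before relying on it.

```python
# Minimal sketch: list recent DLP-related alerts from a Sentinel workspace.
# Assumptions: alerts land in the standard SecurityAlert table and carry a
# provider name containing "Data Loss Prevention" - verify both in your tenant.
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

WORKSPACE_ID = "<log-analytics-workspace-guid>"  # placeholder

query = """
SecurityAlert
| where TimeGenerated > ago(24h)
| where ProviderName has "Data Loss Prevention"   // assumed provider string
| project TimeGenerated, AlertName, AlertSeverity, CompromisedEntity
| order by TimeGenerated desc
"""

client = LogsQueryClient(DefaultAzureCredential())
response = client.query_workspace(WORKSPACE_ID, query, timespan=timedelta(days=1))

# Print each returned alert row for quick triage.
for table in response.tables:
    for row in table.rows:
        print(list(row))
```

The same query can back an analytics rule or feed the kind of automated response covered under Continuous Ops later in this post.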
5. The “Agentic” Era: Agent 365 & Sharing Controls

Now that we’re moving from “Chat” to “Agents”, Agent 365 and Entra Agent ID provide the necessary identity and control plane for autonomous entities. A quick tip: in large-scale tenants, default settings often present a governance risk. A critical first step is navigating to the Microsoft 365 admin center (Copilot > Agents) to disable the default “Anyone in organization” sharing option. Restricting agent creation and sharing to a validated security group is essential to prevent unvetted agent sprawl and to ensure that only compliant agents are discoverable.

6. The Human Layer: “Safe Harbors” over Bans

Security fails when it creates more friction than the risk it seeks to mitigate. Instead of an outright ban, investment in AI skilling - teaching users context minimization (redacting specifics before interacting with a model) - is the better path. Providing a sanctioned, enterprise-grade “Safe Harbor” like M365 Copilot offers a superior tool that naturally cuts down the use of Shadow AI.

7. Continuous Ops: Monitoring & Regulatory Audit

Security is not a “set and forget” project, particularly with the EU AI Act on the horizon. Correlating AI interactions and DLP alerts in Microsoft Sentinel with Purview Audit data (specifically the CopilotInteraction logs) allows for real-time responses. Automated SOAR playbooks can then trigger protective actions - such as revoking an Agent ID - if an entity attempts to access sensitive HR or financial data.

Final Thoughts

Securing AI at scale is an architectural shift. By layering Identity, Session Governance, and Agentic Identity, AI moves from being a fragmented risk to a governed tool that actually works for the modern workplace.

Ingesting Windows Security Events into Custom Datalake Tables Without Using Microsoft‑Prefixed Table
Hi everyone,

I’m looking to see whether there is a supported method to ingest Windows Security Events into custom Microsoft Sentinel Data Lake–tiered tables (for example, SecurityEvents_CL) without writing to or modifying the Microsoft‑prefixed analytical tables. Essentially, I want to route these events directly into custom tables only, bypassing the default Microsoft‑managed tables entirely.

Has anyone implemented this, or is there a recommended approach? Thanks in advance for any guidance.

Best Regards,
Prabhu Kiran
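For context on the kind of routing being asked about, below is a minimal, untested sketch of the data collection rule (DCR) pattern typically involved: collecting Security Events with the Azure Monitor Agent and declaring a custom output stream so the data flows only to a custom table. It assumes a table named SecurityEvents_CL with a compatible schema already exists, and the resource IDs, API version, stream names, and the custom-table redirection itself are assumptions to verify against current Azure Monitor and Sentinel data lake documentation.

```python
# Minimal, untested sketch of a DCR that collects Windows Security Events with
# the Azure Monitor Agent and routes them only to a custom table via outputStream.
# All IDs, the API version, stream names, and the custom-table redirection
# itself are assumptions to verify before use.
from azure.identity import DefaultAzureCredential
import requests

SUBSCRIPTION = "<subscription-id>"                      # placeholder
RESOURCE_GROUP = "<resource-group>"                     # placeholder
WORKSPACE_RESOURCE_ID = "<full workspace resource ID>"  # placeholder
DCR_NAME = "dcr-security-events-custom"

url = (
    f"https://management.azure.com/subscriptions/{SUBSCRIPTION}"
    f"/resourceGroups/{RESOURCE_GROUP}"
    f"/providers/Microsoft.Insights/dataCollectionRules/{DCR_NAME}"
    "?api-version=2022-06-01"  # assumed API version
)

body = {
    "location": "eastus",  # placeholder region
    "properties": {
        "dataSources": {
            "windowsEventLogs": [
                {
                    "name": "securityEvents",
                    "streams": ["Microsoft-SecurityEvent"],
                    # Example XPath filter; adjust to the events you actually need.
                    "xPathQueries": ["Security!*[System[(EventID=4624 or EventID=4625)]]"],
                }
            ]
        },
        "destinations": {
            "logAnalytics": [
                {"name": "laDest", "workspaceResourceId": WORKSPACE_RESOURCE_ID}
            ]
        },
        "dataFlows": [
            {
                "streams": ["Microsoft-SecurityEvent"],
                "destinations": ["laDest"],
                "transformKql": "source",                   # pass-through transform
                "outputStream": "Custom-SecurityEvents_CL"  # route to the custom table only
            }
        ],
    },
}

token = DefaultAzureCredential().get_token("https://management.azure.com/.default")
resp = requests.put(url, json=body, headers={"Authorization": f"Bearer {token.token}"})
resp.raise_for_status()
print("Created DCR:", resp.json()["id"])
```

A DCR association to the target machines is still required, the custom table’s schema must match the transformed output, and whether the resulting table can then be kept purely on the data lake tier depends on the tier settings applied to SecurityEvents_CL itself, which is worth confirming separately.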