playbooks
How to Include Custom Details from an Alert in Email Generated by a Playbook
I have created an analytics rule that queries Sentinel for security events pertaining to group membership additions and triggers an alert for each event found. The rule does not create an incident. Within the rule logic, I have defined three custom details for specific fields within the event (TargetAccount, MemberName, SubjectAccount). I have also created a corresponding playbook that sends me an email when an alert is triggered. The associated automation rule is configured and is triggered by the analytics rule. All of this works as expected: when a member is added to a security group, I receive an email.

The one remaining piece is to populate the email message with the custom details I've defined in the rule, and I'm not sure how to do this. Essentially, I would like the values of the three custom details shown in the first screenshot to appear in the body of the email, shown in the second screenshot, next to their corresponding names. For example, say Joe Smith is added to the group "Admin" by Tom Jones. These are the fields and values in the event that I want to pull out:

TargetAccount = Admin
MemberName = Joe Smith
SubjectAccount = Tom Jones

The custom details would then be populated as:

Security_Group = Admin
Member_Added = Joe Smith
Added_By = Tom Jones

and the body of the email would contain:

Group: Admin
Member Added: Joe Smith
Added By: Tom Jones
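One way to see where those values land: in the SecurityAlert table, custom details typically surface inside ExtendedProperties under a "Custom Details" key, with each detail name mapping to an array of values. A minimal KQL sketch, assuming that shape and the Security_Group / Member_Added / Added_By names from the rule above (verify the exact JSON in your own alerts before wiring the playbook):

SecurityAlert
| where TimeGenerated > ago(7d)
| where ProviderName == "ASI Scheduled Alerts" // Sentinel scheduled analytics rules
// "Custom Details" is usually a JSON string inside ExtendedProperties, hence the double parse
| extend CustomDetails = parse_json(tostring(parse_json(ExtendedProperties)["Custom Details"]))
| extend
    SecurityGroup = tostring(CustomDetails.Security_Group[0]),
    MemberAdded = tostring(CustomDetails.Member_Added[0]),
    AddedBy = tostring(CustomDetails.Added_By[0])
| project TimeGenerated, AlertName, SecurityGroup, MemberAdded, AddedBy

In the playbook itself, the equivalent step is usually a Parse JSON action over the alert trigger's ExtendedProperties (or its "Custom Details" token), after which the parsed values can be dropped into the email body as dynamic content.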
Update: Changing the Account Name Entity Mapping in Microsoft Sentinel

The upcoming update introduces more consistent and predictable entity data across analytics, incidents, and automation by standardizing how the Account Name property is populated when using UPN-based mappings in analytics rules. Going forward, the Account Name property will consistently contain only the UPN prefix, with new dedicated fields added for the full UPN and the UPN suffix. While this improves consistency and enables more granular automation, customers who rely on specific Account Name values in automation rules or Logic App playbooks may need to take action.

Timeline

Effective date: July 1, 2026. The change will apply automatically - no opt-in is required.

Scope of impact

Analytics rules that map a User Principal Name (UPN) to the Account Name entity field, where the resulting alerts are processed by automation rules or Logic App playbooks that reference the AccountName property.

What's changing

Currently, when an analytics rule maps a full UPN (for example, 'user@domain.com') to the Account Name field, the resulting input value for the automation rule and/or Logic App playbook is inconsistent: in some cases it contains only the UPN prefix ('user'), and in other cases the full UPN ('user@domain.com').

After July 1, 2026:
- The Account Name property will consistently contain only the UPN prefix.
- The following new fields will be added to the entity object: AccountName (UPN prefix), UPNSuffix, UserPrincipalName (full UPN).

This change enables more precise filtering and automation logic based on the new fields.

Example

Before:
- Analytics rule maps: 'user@domain.com'
- Automation rule receives: Account Name: 'user' or 'user@domain.com' (inconsistent)

After:
- Analytics rule maps: 'user@domain.com'
- Automation rule receives: Account Name: 'user', UPNSuffix: 'domain.com'
- Logic App playbook and SecurityAlert table receive: AccountName: 'user', UPNSuffix: 'domain.com', UserPrincipalName: 'user@domain.com'

Feature / Location | Before | After
SecurityAlert table | AccountName: 'user' or 'user@domain.com' (inconsistent) | AccountName: 'user', UPNSuffix: 'domain.com', UserPrincipalName: 'user@domain.com'
Logic App playbook entity | accountName: 'user' or 'user@domain.com' (inconsistent) | accountName: 'user', UPNSuffix: 'domain.com', UserPrincipalName: 'user@domain.com'

Why does it matter?

If your automation logic relies on exact string comparisons against the full UPN stored in Account Name, those conditions may no longer match after the update. This most commonly affects:
- Automation rules using an "Equals" condition on Account Name
- Logic App playbooks comparing the entity field 'accountName' to a full UPN value

Call to action

Avoid strict equality checks against Account Name. Use flexible operators such as Contains or Starts with, and leverage the new UPNSuffix field for clearer intent.

Example update

Before the change, Account Name may show as 'user' or 'user@domain.com'; after the change, it will show as 'user'. Recommended conditions:
- Account Name Contains/Startswith 'user'
- UPNSuffix Equals/Startswith/Contains 'domain.com'

This approach ensures compatibility both before and after the change takes effect.

Where to update

Review any filters, conditions, or branching logic that depend on Account Name values.
- Automation rules: use the 'Account name' field
- Logic App playbooks: update conditions referencing the entity field 'accountName'

(Screenshots of an automation rule condition before and after the change omitted.)

Summary

- A consistency improvement to Account Name mapping is coming on July 1, 2026
- The change affects automation rules and Logic App playbooks that rely on UPN to Account Name mappings
- New UPN-related fields provide better structure and control
- Customers should follow the recommendations above before the effective date
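To scope the impact in your own tenant, one hedged starting point is to look for recent alerts whose Account entity Name already carries a full UPN; those come from the rules most likely to need condition updates. A sketch, assuming the default SecurityAlert schema (entity shapes vary by alert provider):

SecurityAlert
| where TimeGenerated > ago(14d)
| where isnotempty(Entities)
// Entities is a JSON-array string; expand it to inspect each mapped entity
| mv-expand Entity = todynamic(Entities)
| where tostring(Entity.Type) =~ "account"
| where tostring(Entity.Name) contains "@" // full UPN currently landing in Account Name
| summarize AffectedAlerts = count() by AlertName
| sort by AffectedAlerts desc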
SAP & Business-Critical App Security Connectors

I validated what it takes to make SAP and SAP-adjacent security signals operational in a SOC: reliable ingestion, stable schemas, and detections that survive latency and schema drift. I focus on four integrations into Microsoft Sentinel: SAP Enterprise Threat Detection (ETD) cloud edition (SAPETDAlerts_CL, SAPETDInvestigations_CL), SAP S/4HANA Cloud Public Edition agentless audit ingestion (ABAPAuditLog), Onapsis Defend (Onapsis_Defend_CL), and SecurityBridge (also ABAPAuditLog). Because vendor API specifics for the ETD Retrieval API / Audit Retrieval API aren't publicly detailed in the accessible primary sources I could retrieve, I explicitly label pagination/rate/time-window behaviors as unspecified where appropriate.

Connector architectures and deployment patterns

For SAP-centric telemetry I separate two planes:

First is SAP application telemetry that lands in SAP-native tables, especially ABAPAuditLog, ABAPChangeDocsLog, ABAPUserDetails, and ABAPAuthorizationDetails. These tables are the foundation for ABAP-layer monitoring and are documented with typed columns in the Azure Monitor Logs reference.

Second is external "security product" telemetry (ETD alerts, Onapsis findings). These land in custom tables (*_CL) and typically require a SOC-owned normalization layer to avoid brittle detections.

Within Microsoft's SAP solution itself, there are two deployment models: agentless and containerized connector agent. The agentless connector uses SAP Cloud Connector and SAP Integration Suite to pull logs, and Microsoft documents it as the recommended approach; the containerized agent is being deprecated and will be disabled on September 14, 2026.

On the "implementation technology" axis, Sentinel integrations generally show up as:
- Codeless Connector Framework (CCF) pollers/pushers (SaaS-managed ingestion definitions with DCR support).
- Function/Logic App custom pipelines using the Logs Ingestion API when you need custom polling, enrichment, or a vendor endpoint that isn't modeled in CCF.

In my view, ETD and S/4HANA Cloud connectors are "agentless" from the Sentinel side (API credentials only), while Onapsis Defend and SecurityBridge connectors behave like push pipelines because Microsoft requires an Entra app + DCR permissions (the typical Logs Ingestion API pattern).

Authentication and secrets handling

Microsoft documents the required credentials per connector:
- The ETD cloud connector requires a Client ID + Client Secret for the ETD Retrieval API (token mechanics unspecified).
- The S/4HANA Cloud Public Edition connector requires a Client ID + Client Secret for the Audit Retrieval API (token mechanics unspecified), and Microsoft notes "alternative authentication mechanisms" exist (details in the linked repo are unspecified in accessible sources).
- Onapsis Defend and SecurityBridge connectors require a Microsoft Entra ID app registration and Azure permission to assign Monitoring Metrics Publisher on DCRs. This maps directly to the Logs Ingestion API guidance, where a service principal is granted DCR access via that role (or the Microsoft.Insights/Telemetry/Write data action).

For production, I treat these as "SOC platform secrets":
- Store client secrets/certificates in Key Vault when you own the pipeline (Function/Logic App); rotate on an operational schedule; alert on auth failures and sudden ingestion drops.
- For vendor-managed ingestion (Onapsis/SecurityBridge), I still require documented ownership of the Entra app, explicit RBAC scope for the DCR, and change control for credential rotation, because a rotated secret is effectively a data outage.

API behaviors and ingestion reliability

For the ETD Retrieval API and Audit Retrieval API, pagination/rate limits/time windows are unspecified in the accessible vendor documentation I could retrieve. I therefore design ingestion and detections assuming non-ideal API behavior: late-arriving events, cursor/page limitations, and throttling. CCF's RestApiPoller model supports explicit retry policy, windowing, and multiple paging strategies, so if/when you can obtain vendor API semantics, you can encode them declaratively (rather than writing fragile code).

For the SAP solution's telemetry plane, Microsoft provides strong operational cues: agentless collection flows through Integration Suite, and troubleshooting typically happens in the Integration Suite message log; this is where I validate delivery failures before debugging Sentinel-side parsers.

For scheduled detections, I always account for ingestion delay explicitly. Microsoft's guidance is to widen the event lookback by the expected delay and then constrain on ingestion_time() to prevent duplicates from overlap.

Schema, DCR transformations, and normalization layer

Connector attribute comparison:

Connector | Auth method | Sentinel tables | Default polling | Backfill | Pagination | Rate limits
SAP ETD (cloud) | Client ID + Secret (ETD Retrieval API) | SAPETDAlerts_CL, SAPETDInvestigations_CL | unspecified | unspecified | unspecified | unspecified
SAP S/4HANA Cloud (agentless) | Client ID + Secret (Audit Retrieval API); alt auth referenced | ABAPAuditLog | unspecified | unspecified | unspecified | unspecified
Onapsis Defend | Entra app + DCR permission (Monitoring Metrics Publisher) | Onapsis_Defend_CL | n/a (push pattern) | unspecified | n/a | unspecified
SecurityBridge | Entra app + DCR permission (Monitoring Metrics Publisher) | ABAPAuditLog | n/a (push pattern) | unspecified | n/a | unspecified

Ingestion-time DCR transformations

Sentinel supports ingestion-time transformations through DCRs to filter, enrich, and mask data before it's stored. Example: I remove low-signal audit noise and mask email identifiers in ABAPAuditLog:

source
| where isnotempty(TransactionCode) and isnotempty(User)
| where TransactionCode !in ("SM21","ST22") // example noise; tune per tenant
| extend Email = iif(Email has "@", strcat(substring(Email,0,2), "***@", tostring(split(Email,"@")[1])), Email)

Normalization functions

Microsoft explicitly recommends querying through the SAP solution's functions instead of raw tables, because the underlying infrastructure can then change without breaking detections. I follow the same pattern for ETD/Onapsis custom tables: I publish SOC-owned functions as a schema contract.
.create-or-alter function with (folder="SOC/SAP") Normalize_ABAPAudit() {
    ABAPAuditLog
    | project TimeGenerated, SystemId, ClientId, User, TransactionCode, TerminalIpV6, MessageId, MessageClass, MessageText, AlertSeverityText, UpdatedOn
}

.create-or-alter function with (folder="SOC/SAP") Normalize_ETDAlerts() {
    SAPETDAlerts_CL
    | extend
        AlertId = tostring(coalesce(column_ifexists("AlertId",""), column_ifexists("id",""))),
        Severity = tostring(coalesce(column_ifexists("Severity",""), column_ifexists("severity",""))),
        SapUser = tostring(coalesce(column_ifexists("SAP_User",""), column_ifexists("User",""), column_ifexists("user","")))
    | project-reorder TimeGenerated, AlertId, Severity, SapUser
}

.create-or-alter function with (folder="SOC/SAP") Normalize_Onapsis() {
    Onapsis_Defend_CL
    | extend
        FindingId = tostring(coalesce(column_ifexists("FindingId",""), column_ifexists("id",""))),
        Severity = tostring(coalesce(column_ifexists("Severity",""), column_ifexists("severity",""))),
        SapUser = tostring(coalesce(column_ifexists("SAP_User",""), column_ifexists("user","")))
    | project-reorder TimeGenerated, FindingId, Severity, SapUser
}

Health/lag monitoring and anti-gap

I monitor both connector health and ingestion delay. SentinelHealth is the native health table, and Microsoft provides a health workbook and a schema reference for the fields.

let lookback = 24h;
union isfuzzy=true
    (ABAPAuditLog | extend T="ABAPAuditLog"),
    (SAPETDAlerts_CL | extend T="SAPETDAlerts_CL"),
    (Onapsis_Defend_CL | extend T="Onapsis_Defend_CL")
| where TimeGenerated > ago(lookback)
| summarize
    LastEvent = max(TimeGenerated),
    P95DelaySec = percentile(datetime_diff("second", ingestion_time(), TimeGenerated), 95),
    Events = count()
    by T

Anti-gap scheduled-rule frame (Microsoft pattern):

let ingestion_delay = 10m;
let rule_lookback = 5m;
ABAPAuditLog
| where TimeGenerated >= ago(ingestion_delay + rule_lookback)
| where ingestion_time() > ago(rule_lookback)

SOC detections for ABAP privilege abuse, fraud/insider behavior, and audit readiness

Privileged ABAP transaction monitoring

ABAPAuditLog includes TransactionCode, User, SystemId, and terminal/IP fields, so I start with a curated high-risk tcode set and then add baselines.

let PrivTCodes = dynamic(["SU01","PFCG","SM59","RZ10","SM49","SE37","SE16","SE16N"]);
Normalize_ABAPAudit()
| where TransactionCode in (PrivTCodes)
| summarize Actions = count(), Ips = make_set(TerminalIpV6, 5) by SystemId, User, TransactionCode, bin(TimeGenerated, 1h)
| where Actions >= 3

Fraud/insider scenario: sensitive object change near privileged audit activity

ABAPChangeDocsLog exposes ObjectClass, ObjectId, and change types; I correlate sensitive object changes to privileged transactions in a tight window.

let w = 10m;
let Sensitive = dynamic(["BELEG","BPAR","PFCG","IDENTITY"]);
ABAPChangeDocsLog
| where ObjectClass in (Sensitive)
| project ChangeTime=TimeGenerated, SystemId, User=tostring(column_ifexists("User","")), ObjectClass, ObjectId, TypeOfChange=tostring(column_ifexists("ItemTypeOfChange",""))
| join kind=innerunique (
    Normalize_ABAPAudit()
    | project AuditTime=TimeGenerated, SystemId, User, TransactionCode
) on SystemId, User
| where AuditTime between (ChangeTime - w .. ChangeTime + w)
| project ChangeTime, AuditTime, SystemId, User, ObjectClass, ObjectId, TransactionCode, TypeOfChange

Audit-ready pipeline: monitoring continuity and configuration touchpoints

I treat audit logging itself as a monitored control. A simple SOC-safe control is "volume drop" by system; it's vendor-agnostic and catches pipeline breaks and deliberate suppression.
Normalize_ABAPAudit()
| summarize PerHour = count() by SystemId, bin(TimeGenerated, 1h)
| summarize Avg = avg(PerHour), Latest = arg_max(TimeGenerated, PerHour) by SystemId
| where Latest_PerHour < (Avg * 0.2)

Where Onapsis/ETD are present, I increase fidelity by requiring "privileged ABAP activity" plus an external SAP-security product finding (field mappings are tenant-specific; normalize first):

let win = 1h;
Normalize_ABAPAudit()
| where TransactionCode in ("SU01","PFCG","SM59","SE16N")
| join kind=leftouter (Normalize_Onapsis()) on $left.User == $right.SapUser
| where isnotempty(FindingId) and TimeGenerated1 between (TimeGenerated .. TimeGenerated + win)
| project TimeGenerated, SystemId, User, TransactionCode, FindingId, OnapsisSeverity=Severity

Production validation, troubleshooting, and runbook

For acceptance, I validate in this order: table creation, freshness/lag percentiles, connector health state, and a cross-check of event counts against the upstream system for the same UTC window (where available). Connector health monitoring is built around SentinelHealth plus the Data collection health workbook. For SAP agentless ingestion, Microsoft states most troubleshooting happens in Integration Suite message logs - this is where I triage authentication/networking failures before tuning KQL. For Onapsis/SecurityBridge-style ingestion, I validate Entra app auth, DCR permission assignment (Monitoring Metrics Publisher), and a minimal ingestion test payload using the Logs Ingestion API tutorial flow.

Operational runbook items I treat as non-optional: health alerts on connector failure and freshness drift; scheduled-rule anti-gap logic; playbooks that capture evidence bundles (an ABAPAuditLog slice plus user context from ABAPUserDetails/ABAPAuthorizationDetails); DCR filters to reduce noise and cost; and change control for normalization functions and watchlists.

SOC "definition of done" checklist (short):
1) Tables present and steadily ingesting;
2) P95 ingestion delay measured and rules use the anti-gap pattern;
3) SentinelHealth enabled with alerts (sketch below);
4) SOC-owned normalization functions deployed;
5) at least one privileged-tcode rule + one change-correlation rule + one audit-continuity rule in production.

(Mermaid ingestion flow diagram omitted.)
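As a concrete starting point for the health-alert checklist item, a hedged SentinelHealth sketch (the health feature must be enabled first; the "Data connector" resource-type value and Status strings are worth verifying against the schema reference in your tenant):

SentinelHealth
| where TimeGenerated > ago(24h)
| where SentinelResourceType == "Data connector" // assumption: verify exact value in your workspace
| where Status != "Success"
| summarize Failures = count(), LastFailure = max(TimeGenerated) by SentinelResourceName, OperationName
| sort by LastFailure desc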
Unifying AWS and Azure Security Operations with Microsoft Sentinel

The Multi-Cloud Reality

Most modern enterprises operate in multi-cloud environments, using Azure for core workloads and AWS for development, storage, or DevOps automation. While this approach increases agility, it also expands the attack surface. Each platform generates its own telemetry:

- Azure: Activity Logs, Defender for Cloud, Entra ID sign-ins, Sentinel analytics
- AWS: CloudTrail, GuardDuty, Config, and CloudWatch

Without a unified view, security teams struggle to detect cross-cloud threats promptly. That's where Microsoft Sentinel comes in, bridging Azure and AWS into a single, intelligent Security Operations Center (SOC).

Architecture Overview

(Architecture diagram omitted.)

Connect AWS Logs to Sentinel

AWS CloudTrail via S3 Connector
- Enable the AWS CloudTrail connector in Sentinel.
- Provide your S3 bucket and an IAM role ARN with read access.
- Sentinel will automatically normalize logs into the AWSCloudTrail table.

AWS GuardDuty Connector
- Use the AWS GuardDuty API integration for threat detection telemetry.
- Detected threats, such as privilege escalation or reconnaissance, appear in Sentinel in the AWSGuardDuty table.

Normalize and Enrich Data

Once logs are flowing, enrich them to align with Azure activity data. Example KQL for mapping CloudTrail to Sentinel entities (the documented AWSCloudTrail schema exposes user identity fields as flattened columns, so no JSON parsing is needed):

AWSCloudTrail
| extend AccountId = UserIdentityAccountId
| extend User = UserIdentityUserName
| extend IPAddress = SourceIpAddress
| project TimeGenerated, EventName, User, AccountId, IPAddress, AWSRegion

Then correlate AWS and Azure activities:

let AWS = AWSCloudTrail
    | summarize AWSActivity = count() by User = UserIdentityUserName, bin(TimeGenerated, 1h);
let Azure = AzureActivity
    | summarize AzureActivity = count() by Caller, bin(TimeGenerated, 1h);
AWS
| join kind=inner (Azure) on $left.User == $right.Caller
| where AWSActivity > 0 and AzureActivity > 0
| project TimeGenerated, User, AWSActivity, AzureActivity

Automate Cross-Cloud Response

Once incidents are correlated, Microsoft Sentinel Playbooks (Logic Apps) can automate your response.

Example Playbook: "CrossCloud-Containment.json"
1. Disable the user in Entra ID
2. Send a command via AWS Lambda to deactivate the IAM access key
3. Notify the SOC in Teams
4. Create a ServiceNow ticket

Illustrative API calls (the IAM call must be SigV4-signed and is typically made from Lambda or an SDK rather than directly from the Logic App):

POST https://iam.amazonaws.com/?Action=UpdateAccessKey&AccessKeyId={key-id}&Status=Inactive&Version=2010-05-08

PATCH https://graph.microsoft.com/v1.0/users/{user-id}
{
  "accountEnabled": false
}

Build a Multi-Cloud SOC Dashboard

Use Sentinel Workbooks to visualize unified operations.

Query 1 - CloudTrail Events by Region:

AWSCloudTrail
| summarize Count = count() by AWSRegion
| render barchart

Query 2 - Unified Security Alerts:

union SecurityAlert, AWSGuardDuty
| summarize TotalAlerts = count() by ProviderName, Severity
| render piechart

Scenario

Incident: a compromised developer account accesses EC2 instances on AWS and then logs into Azure from the same IP.

Detection flow (see the correlation sketch at the end of this post):
1. CloudTrail logs → Sentinel detects unusual API calls
2. Entra ID sign-ins → Sentinel correlates IP and user
3. Sentinel incident triggers playbook → disables user in Entra ID, suspends AWS IAM key, notifies SOC

Strengthen Governance with Defender for Cloud

Enable Microsoft Defender for Cloud to:
- Monitor both Azure and AWS accounts from a single portal
- Apply CIS benchmarks for AWS resources
- Surface findings in Sentinel's SecurityRecommendations table
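A hedged KQL sketch for the scenario above - correlating AWS API activity and Entra ID sign-ins from the same IP within an hour. It assumes the documented AWSCloudTrail columns and the SigninLogs table (Entra ID connector); tune the window and lookback for your tenant:

let lookback = 1d;
AWSCloudTrail
| where TimeGenerated > ago(lookback)
| project AwsTime = TimeGenerated, AwsUser = UserIdentityUserName, IPAddress = SourceIpAddress, EventName
| join kind=inner (
    SigninLogs
    | where TimeGenerated > ago(lookback)
    | project AzureTime = TimeGenerated, UserPrincipalName, IPAddress
) on IPAddress // same source IP seen on both clouds
| where abs(datetime_diff("minute", AwsTime, AzureTime)) <= 60
| project AwsTime, AzureTime, AwsUser, UserPrincipalName, IPAddress, EventName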
Entity playbook in XDR

Hello all! In my Logic Apps Sentinel automations I often use the entity trigger to run workflows. Some time ago it was announced that Sentinel will be moved into Microsoft XDR, and some Sentinel elements are already there. In XDR I can run a playbook at the incident level, but I can't do it at the entity level - for example, when I click on an IP or open an IP address page in XDR, I can't find a "Run playbook" button or anything similar. Do you know if the "Run playbook on entity" feature will also be moved to XDR? Best, Piotr K.
Call to Action: Migrate Your Classic Alert‑trigger Automations to Automation Rules Before March 2026

Reminder: following the retirement announcement published in March 2023, classic alert-trigger automation in Microsoft Sentinel - where playbooks are triggered directly from analytics rules - will be deprecated in March 2026. To ensure your alert-driven automations continue to run without interruption, please migrate to automation rules.

How to Identify the Impacted Rules

To help you quickly identify whether any analytics rules should be updated, we created a Logic App solution that identifies all the analytics rules which use classic automation. The solution is available in two deployment options:

- Subscription-level version - scans all workspaces in the subscription
- Workspace-level version - scans a single workspace (useful when subscription Owner permissions are not available)

Both options produce a list of analytics rules that need to be updated, as described in the next section. The solution is published as part of Sentinel's GitHub community solutions with deployment instructions and further details: sentinel-classic-automation-detection

After deploying and running the tool, open the Logic App run history. The final action contains the list of analytics rules that require migration.

How to Migrate the Identified Analytics Rules from Classic Alert-trigger Automations

Once you have identified the analytics rules using classic automation, follow the official Microsoft Learn guidance to migrate them to automation rules: migrate-playbooks-to-automation-rules

In short:
1. Identify classic automation in your analytics rules
2. Create automation rules to replace the classic automation
3. To complete the migration, remove the classic alert-automation actions identified in step 1 by clicking the three dots and selecting 'Remove'

Summary

- Classic automation will retire in March 2026
- Use the detection tool to identify any remaining classic alert-trigger playbooks
- Follow the official Microsoft documentation to complete the required changes
- Act now to ensure your SOC automations continue to run smoothly
XDR advanced hunting region specific endpoints

Hi, I am exploring the XDR advanced hunting API to fetch data specific to Microsoft Defender for Endpoint tenants. The official documentation (https://learn.microsoft.com/en-us/defender-xdr/api-advanced-hunting) says to switch to the Microsoft Graph advanced hunting API. I have the following questions:

1. To fetch the region-specific (US, China, Global) token and Microsoft Graph service root endpoints (https://learn.microsoft.com/en-us/graph/deployments#app-registration-and-token-service-root-endpoints), is the recommended way to fetch the OpenID configuration document (https://learn.microsoft.com/en-us/entra/identity-platform/v2-protocols-oidc#fetch-the-openid-configuration-document) for a tenant ID and, based on the response, derive the region-specific service/token endpoints? Using it, there would be no need to maintain different endpoints for tenants in different regions. And do we use the global service URL https://login.microsoftonline.com to fetch the OpenID config document for a tenant ID in any region?

2. As per the documentation, the Microsoft Graph advanced hunting API is not supported in the China region (https://learn.microsoft.com/en-us/graph/api/security-security-runhuntingquery?view=graph-rest-1.0&tabs=http). In this case, is it recommended to use the Microsoft XDR advanced hunting APIs (https://learn.microsoft.com/en-us/defender-xdr/api-advanced-hunting) to support tenants in all regions (China, US, Global)?
Ingest IOC from Google Threat Intelligence into Sentinel

Hi all, I'm trying to ingest IOCs from Google Threat Intelligence into Sentinel. I followed the guide at gtidocs.virustotal.com/docs/gti4sentinel-guide. The API key is correct. PS: I'm using the standard free public API (created in VirusTotal). The managed identity has been configured with the correct role. When I run the Logic App, I receive an HTTP 403 error:

"code": "ForbiddenError",
"message": "You are not authorized to perform the requested operation"

What's the problem? Regards, HA
Sentinel Data Connector: Google Workspace (G Suite) (using Azure Functions)

I'm encountering a problem when attempting to run the GWorkspace_Report workbook in Azure Sentinel. The query throws this error related to the union operator:

'union' operator: Failed to resolve table expression named 'GWorkspace_ReportsAPI_gcp_CL'

I've double-checked, and the GoogleWorkspaceReports connector is installed and updated to version 3.0.2. Has anyone seen this or know what might be causing the table GWorkspace_ReportsAPI_gcp_CL to be unresolved? Thanks!
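A hedged diagnostic sketch: custom tables such as GWorkspace_ReportsAPI_gcp_CL are only created once the first rows are ingested, so one quick check is whether any GWorkspace_* tables exist and hold recent data (if the query below itself fails to resolve, no Workspace tables have been created yet, pointing at the ingestion side rather than the workbook):

union withsource=TableName isfuzzy=true GWorkspace_*
| where TimeGenerated > ago(7d)
| summarize Events = count(), LastEvent = max(TimeGenerated) by TableName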
Defender is missing logs for files copied to USB device on Mac devices

Hello, I am currently facing an issue with Defender not logging files copied to USBs. Using the KQL below, I can only see .exe files copied, but nothing when it comes to .pdf, .docx, .zip, and other standard file extensions. Has someone come across this issue before? Any help is greatly appreciated.

let UsbDriveMount = DeviceEvents
| where ActionType == "UsbDriveMounted"
| extend ParsedFields = parse_json(AdditionalFields)
| project DeviceId, DeviceName, DriveLetter = ParsedFields.DriveLetter, MountTime = TimeGenerated,
    ProductName = ParsedFields.ProductName, SerialNumber = ParsedFields.SerialNumber, Manufacturer = ParsedFields.Manufacturer
| order by DeviceId asc, MountTime desc;
let FileCreation = DeviceFileEvents
| where InitiatingProcessAccountName != "system"
| where ActionType == "FileCreated"
| where FolderPath !startswith "C:\\"
| where FolderPath !startswith "\\"
| project ReportId, DeviceId, InitiatingProcessAccountDomain, InitiatingProcessAccountName, InitiatingProcessAccountUpn,
    FileName, FolderPath, SHA256, TimeGenerated, SensitivityLabel, IsAzureInfoProtectionApplied
| order by DeviceId asc, TimeGenerated desc;
FileCreation
| lookup kind=inner (UsbDriveMount) on DeviceId
| where FolderPath startswith DriveLetter
| where TimeGenerated >= MountTime
| partition hint.strategy=native by ReportId (top 1 by MountTime)
| order by DeviceId asc, TimeGenerated desc
| extend HostName = iff(DeviceName has '.', substring(DeviceName, 0, indexof(DeviceName, '.')), DeviceName)
| extend DnsDomain = iff(DeviceName has '.', substring(DeviceName, indexof(DeviceName, '.') + 1), "")
| extend FileHashAlgorithm = 'SHA256'