<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>Azure Observability topics</title>
    <link>https://techcommunity.microsoft.com/t5/azure-observability/bd-p/AzureObservability</link>
    <description>Azure Observability topics</description>
    <pubDate>Mon, 20 Apr 2026 05:31:19 GMT</pubDate>
    <dc:creator>AzureObservability</dc:creator>
    <dc:date>2026-04-20T05:31:19Z</dc:date>
    <item>
      <title>Azure VMs host (platform) metrics (not guest metrics) to the log analytics workspace ?</title>
      <link>https://techcommunity.microsoft.com/t5/azure-observability/azure-vms-host-platform-metrics-not-guest-metrics-to-the-log/m-p/4510014#M4672</link>
      <description>&lt;P&gt;Hi Team,&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Can someone help me send Azure VM host (platform) metrics (not guest metrics) to a Log Analytics workspace?&lt;/P&gt;&lt;P&gt;Some years ago I could do this by clicking on “Diagnostic Settings”, but now the “Diagnostic Settings” tab asks me to enable guest-level monitoring (guest-level metrics I don’t want) and points me to a storage account. I don’t see the option to send these metrics to a Log Analytics workspace.&lt;/P&gt;&lt;P&gt;I have around 500 Azure VMs whose host (platform) metrics (not guest metrics) I want to send to a Log Analytics workspace.&lt;/P&gt;</description>
      <pubDate>Thu, 09 Apr 2026 15:56:42 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-observability/azure-vms-host-platform-metrics-not-guest-metrics-to-the-log/m-p/4510014#M4672</guid>
      <dc:creator>roopesh_shetty</dc:creator>
      <dc:date>2026-04-09T15:56:42Z</dc:date>
    </item>
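One route the poster may be after (an assumption on my part, not something confirmed in the thread): platform metrics can be exported per resource through a diagnostic setting whose metrics section enables the AllMetrics category and targets a workspace. The Python sketch below only builds the ARM request body for such a setting; the resource and workspace IDs are placeholders, and at 500-VM scale this would typically be applied via Azure Policy or a scripted loop.

```python
# Sketch (assumption): ARM payload for a diagnostic setting that routes a
# VM's host (platform) metrics to a Log Analytics workspace.

def diagnostic_setting_body(workspace_resource_id):
    """Body for PUT .../providers/Microsoft.Insights/diagnosticSettings/NAME."""
    return {
        "properties": {
            "workspaceId": workspace_resource_id,
            "metrics": [
                # AllMetrics = the platform (host) metrics category
                {"category": "AllMetrics", "enabled": True}
            ],
        }
    }

body = diagnostic_setting_body(
    "/subscriptions/SUB/resourceGroups/RG/providers/"
    "Microsoft.OperationalInsights/workspaces/WS"
)
```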
    <item>
      <title>Dependency Agent Alternatives</title>
      <link>https://techcommunity.microsoft.com/t5/azure-observability/dependency-agent-alternatives/m-p/4438361#M4649</link>
      <description>&lt;P&gt;Hello. The retirement notice for the Azure Dependency Agent (https://learn.microsoft.com/en-us/azure/azure-monitor/vm/vminsights-maps-retirement) recommends selecting an Azure Marketplace product as a replacement but is not specific about what product(s) offer similar functionality. Would appreciate more specific guidance and experiences from the wider community. Thanks.&lt;/P&gt;</description>
      <pubDate>Wed, 30 Jul 2025 19:21:45 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-observability/dependency-agent-alternatives/m-p/4438361#M4649</guid>
      <dc:creator>Cory_Matieyshen</dc:creator>
      <dc:date>2025-07-30T19:21:45Z</dc:date>
    </item>
    <item>
      <title>Recent Logic Apps Failures with Defender ATP Steps – "TimeGenerated" No Longer Recognized</title>
      <link>https://techcommunity.microsoft.com/t5/azure-observability/recent-logic-apps-failures-with-defender-atp-steps-quot/m-p/4419645#M4644</link>
      <description>&lt;P&gt;Hi everyone,&lt;/P&gt;&lt;P&gt;I’ve recently encountered an issue with Logic Apps failing on Defender ATP steps. Requests containing the&amp;nbsp;TimeGenerated&amp;nbsp;parameter no longer work—the column seems to be unrecognized. My code hasn’t changed at all, and the same queries run successfully in Defender 365’s Advanced Hunting.&lt;/P&gt;&lt;P&gt;For example, this basic KQL query:&lt;/P&gt;&lt;LI-CODE lang="kusto"&gt;DeviceLogonEvents 
| where TimeGenerated &amp;gt;= ago(30d)
| where LogonType != "Local" 
| where DeviceName !contains ".fr" 
| where DeviceName !contains "shared-"
| where DeviceName !contains "gdc-" 
| where DeviceName !contains "mon-"
| distinct DeviceName&lt;/LI-CODE&gt;&lt;P&gt;Now throws the error:&lt;BR /&gt;&lt;SPAN class="lia-text-color-8"&gt;Failed to resolve column or scalar expression named 'TimeGenerated'. Fix semantic errors in your query.&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;Removing&amp;nbsp;TimeGenerated&amp;nbsp;makes the query work again, but this isn’t a viable solution. Notably, the identical query still functions in Defender 365’s Advanced Hunting UI.&lt;/P&gt;&lt;P&gt;This issue started affecting a Logic App that runs weekly—it worked on&amp;nbsp;&lt;STRONG&gt;May 11th&lt;/STRONG&gt;&amp;nbsp;but failed on&amp;nbsp;&lt;STRONG&gt;May 18th&lt;/STRONG&gt;.&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;Questions:&lt;/STRONG&gt;&lt;/P&gt;&lt;OL&gt;&lt;LI&gt;Has there been a recent schema change or deprecation of&amp;nbsp;TimeGenerated&amp;nbsp;in Defender ATP's KQL for Logic Apps?&lt;/LI&gt;&lt;LI&gt;Is there an alternative column or syntax we should use now?&lt;/LI&gt;&lt;LI&gt;Are others experiencing this?&lt;/LI&gt;&lt;/OL&gt;&lt;P&gt;Any insights or workarounds would be greatly appreciated!&lt;/P&gt;</description>
      <pubDate>Mon, 02 Jun 2025 12:34:19 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-observability/recent-logic-apps-failures-with-defender-atp-steps-quot/m-p/4419645#M4644</guid>
      <dc:creator>Trevax</dc:creator>
      <dc:date>2025-06-02T12:34:19Z</dc:date>
    </item>
    <item>
      <title>AKS Pod resource utilization (CPU/Memory) alert</title>
      <link>https://techcommunity.microsoft.com/t5/azure-observability/aks-pod-resource-utilization-cpu-memory-alert/m-p/4413242#M4637</link>
      <description>&lt;P&gt;Hi All,&lt;/P&gt;&lt;P&gt;I am trying to set up an alert for AKS pod CPU/memory utilization that fires when the maximum utilization hits a certain threshold (let's say &amp;gt;95%). Here is a sample query for CPU utilization.&lt;/P&gt;&lt;LI-CODE lang="kusto"&gt;let cpuusage = materialize(Perf
    | where ObjectName == 'K8SContainer'
    | where CounterName has 'cpuUsageNanoCores'
    | extend ContainerNameParts = split(InstanceName, '/')
    | extend ContainerNamePartCount = array_length(ContainerNameParts)
    | extend
        PodUIDIndex = ContainerNamePartCount - 2,
        ContainerNameIndex = ContainerNamePartCount - 1
    | extend ContainerName = strcat(ContainerNameParts[PodUIDIndex], '/', ContainerNameParts[ContainerNameIndex])
    | summarize AggregatedValue=max(CounterValue) by bin(TimeGenerated, 15m), ContainerName
    | project TimeGenerated, ContainerName, AggregatedValue
    | join kind = inner (
        KubePodInventory
        | summarize arg_max(TimeGenerated, *) by ContainerName
        | project Name, ContainerName, Namespace, ServiceName
        )
        on ContainerName
    | project
        TimeGenerated,
        Name,
        ServiceName,
        ContainerName,
        Namespace,
        CPU_mCores_Usage=AggregatedValue / 1000000);
let cpurequest = materialize(Perf
    | where ObjectName == 'K8SContainer'
    | where CounterName == 'cpuRequestNanoCores'
    | extend ContainerNameParts = split(InstanceName, '/')
    | extend ContainerNamePartCount = array_length(ContainerNameParts)
    | extend
        PodUIDIndex = ContainerNamePartCount - 2,
        ContainerNameIndex = ContainerNamePartCount - 1
    | extend ContainerName = strcat(ContainerNameParts[PodUIDIndex], '/', ContainerNameParts[ContainerNameIndex])
    | project ContainerName, CounterValue
    | join kind = inner (KubePodInventory
        //| summarize arg_max(TimeGenerated, 24h) by ContainerName, Name, Namespace
        | project Name, ContainerName, Namespace
        )
        on ContainerName
    | project Name, Namespace, ContainerName, CpuReq_in_mcores=(CounterValue / 1000000));
let cpulimits = materialize(Perf
    | where ObjectName == 'K8SContainer'
    | where CounterName == 'cpuLimitNanoCores'
    | extend ContainerNameParts = split(InstanceName, '/')
    | extend ContainerNamePartCount = array_length(ContainerNameParts)
    | extend
        PodUIDIndex = ContainerNamePartCount - 2,
        ContainerNameIndex = ContainerNamePartCount - 1
    | extend ContainerName = strcat(ContainerNameParts[PodUIDIndex], '/', ContainerNameParts[ContainerNameIndex])
    | extend CpuNanoCoreLimit = CounterValue
    | project ContainerName, CpuNanoCoreLimit
    | join kind = inner (
        KubePodInventory
        | summarize arg_max(TimeGenerated, *) by ContainerName
        | project Name, ContainerName, Namespace, ServiceName
        )
        on ContainerName
    | project
        Name,
        ServiceName,
        Namespace,
        ContainerName,
        CPU_mCores_Limit=CpuNanoCoreLimit / 1000000);
cpulimits
| join cpurequest on ContainerName
| join cpuusage on ContainerName
| order by Namespace asc, ContainerName asc
| extend CName = split(ContainerName, '/')
| extend PodName = Name
| extend Cpu_Perct_utilization = round((CPU_mCores_Usage / CPU_mCores_Limit) * 100, 2)
| project
    TimeGenerated,
    Namespace,
    ServiceName,
    PodName,
    CPU_mCores_Usage,
    CPU_mCores_Limit,
    CpuReq_in_mcores,
    Cpu_Perct_utilization
| sort by TimeGenerated desc&lt;/LI-CODE&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;I would like to modify the query slightly so that the alert fires only when utilization hits the maximum three times in a row within 30 minutes (keeping the evaluation frequency at 10 minutes). Please advise.&lt;/P&gt;</description>
      <pubDate>Tue, 13 May 2025 06:58:59 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-observability/aks-pod-resource-utilization-cpu-memory-alert/m-p/4413242#M4637</guid>
      <dc:creator>Ashok42470</dc:creator>
      <dc:date>2025-05-13T06:58:59Z</dc:date>
    </item>
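For the "three consecutive breaches within 30 minutes" requirement above, Azure Monitor scheduled query rules expose failing-period settings rather than requiring a query change. The property names below follow the Microsoft.Insights/scheduledQueryRules ARM schema as I understand it, so treat them as an assumption to verify; the sketch only builds the criteria fragment.

```python
# Sketch (assumption: property names follow the Microsoft.Insights/
# scheduledQueryRules ARM schema): alert only when the threshold is breached
# in 3 consecutive 10-minute evaluations, i.e. a 30-minute window.

def consecutive_breach_criteria(query, threshold):
    return {
        "allOf": [
            {
                "query": query,  # the KQL from the post goes here
                "metricMeasureColumn": "Cpu_Perct_utilization",
                "operator": "GreaterThan",
                "threshold": threshold,
                "timeAggregation": "Maximum",
                "failingPeriods": {
                    # 3 evaluation periods of 10 min each = 30 min lookback
                    "numberOfEvaluationPeriods": 3,
                    "minFailingPeriodsToAlert": 3,
                },
            }
        ]
    }

criteria = consecutive_breach_criteria("PLACEHOLDER_KQL", 95.0)
```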
    <item>
      <title>Getting empty response while running a kql query using rest api</title>
      <link>https://techcommunity.microsoft.com/t5/azure-observability/getting-empty-response-while-running-a-kql-query-using-rest-api/m-p/4390632#M4634</link>
      <description>&lt;P&gt;Hello All,&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;We are trying to run a KQL query using PowerShell via the REST API, passing a Microsoft Entra application ID and client secret, but we are getting an empty response. The Log Analytics Reader role is assigned on the LA workspace, and we are able to retrieve an access token. When we run the KQL query manually, we see results. Below is the sample snippet I used; I am not sure what is wrong with it. Any help would be highly appreciated.&lt;/P&gt;&lt;LI-CODE lang="powershell"&gt;$tenantId = &amp;lt;Tenant id&amp;gt;
$clientId = &amp;lt;azure entra application app id&amp;gt;
$clientSecret = &amp;lt;app secret key&amp;gt;

# Log Analytics Workspace details
$workspaceId = &amp;lt;workspace ID&amp;gt;

# Acquire a token
$body = @{
    client_id = $clientId
    scope = "https://api.loganalytics.io/.default"
    client_secret = $clientSecret
    grant_type = "client_credentials"
}

$query = "AppRequests | limit 10"

$uri = "https://login.microsoftonline.com/$tenantId/oauth2/v2.0/token"
$response = Invoke-RestMethod -Uri $uri -Method Post -ContentType "application/x-www-form-urlencoded" -Body $body

$accessToken = $response.access_token

# Define the Log Analytics REST API endpoint
$baseUri = "https://api.loganalytics.io/v1/workspaces/$workspaceId/query"

# Set headers for the query
$headers = @{
    Authorization = "Bearer $accessToken"
    "Content-Type" = "application/json"
}

# Prepare the request body
$requestbody = @{
    query = $query
} | ConvertTo-Json

# Send the request
$response = Invoke-RestMethod -Uri $baseUri -Method Post -Headers $headers -Body $requestbody -Debug

# Display the results
$response&lt;/LI-CODE&gt;</description>
      <pubDate>Fri, 07 Mar 2025 10:48:56 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-observability/getting-empty-response-while-running-a-kql-query-using-rest-api/m-p/4390632#M4634</guid>
      <dc:creator>Ashok42470</dc:creator>
      <dc:date>2025-03-07T10:48:56Z</dc:date>
    </item>
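Independent of why the response above is empty, it is worth noting that the query API returns rows nested under a tables/columns/rows structure rather than as a flat list, so a successful call can look empty when only the top-level response object is printed. A minimal Python sketch of flattening that documented shape (the sample payload is illustrative, not real data):

```python
# Flatten the Log Analytics query API response shape
# {"tables": [{"name": ..., "columns": [...], "rows": [...]}]}
# into a list of dicts, one dict per result row.

def flatten_tables(payload):
    records = []
    for table in payload.get("tables", []):
        names = [col["name"] for col in table["columns"]]
        for row in table["rows"]:
            records.append(dict(zip(names, row)))
    return records

# Illustrative payload with the documented shape (not real data).
sample = {
    "tables": [
        {
            "name": "PrimaryResult",
            "columns": [{"name": "Name", "type": "string"},
                        {"name": "DurationMs", "type": "real"}],
            "rows": [["GET /", 12.5], ["POST /api", 40.0]],
        }
    ]
}

records = flatten_tables(sample)
```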
    <item>
      <title>Need assistance on KQL query for pulling AKS Pod logs</title>
      <link>https://techcommunity.microsoft.com/t5/azure-observability/need-assistance-on-kql-query-for-pulling-aks-pod-logs/m-p/4383508#M4629</link>
      <description>&lt;P&gt;I am trying to pull historical pod logs using the KQL query below. Joining the ContainerLog and KubePodInventory tables didn't go well, as I see a lot of duplicates in the output.&lt;/P&gt;&lt;LI-CODE lang="kusto"&gt;ContainerLog
//| project TimeGenerated, ContainerID, LogEntry
| join kind= inner (
    KubePodInventory
    | where ServiceName == "&amp;lt;&amp;lt;servicename&amp;gt;&amp;gt;"
    )
    on ContainerID
| project TimeGenerated, Namespace, ContainerID, ServiceName, LogEntrySource, LogEntry, Name1
| sort by TimeGenerated asc&lt;/LI-CODE&gt;&lt;P&gt;Can someone suggest a better query?&lt;/P&gt;</description>
      <pubDate>Thu, 20 Feb 2025 09:59:48 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-observability/need-assistance-on-kql-query-for-pulling-aks-pod-logs/m-p/4383508#M4629</guid>
      <dc:creator>Ashok42470</dc:creator>
      <dc:date>2025-02-20T09:59:48Z</dc:date>
    </item>
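The duplicates in the question above typically come from joining every ContainerLog row against every KubePodInventory snapshot row that shares a ContainerID; reducing the inventory side to one row per container before joining (arg_max in KQL) removes them. A language-neutral sketch of that dedupe-then-join idea in Python, with hypothetical field names:

```python
# Dedupe-then-join sketch: keep only the latest inventory record per
# container ID before joining, mirroring KQL's
#   KubePodInventory | summarize arg_max(TimeGenerated, *) by ContainerID
# Field names here are hypothetical.

def latest_per_container(inventory):
    latest = {}
    for rec in inventory:
        cid = rec["ContainerID"]
        if cid not in latest or rec["TimeGenerated"] > latest[cid]["TimeGenerated"]:
            latest[cid] = rec
    return latest

def join_logs(logs, inventory):
    latest = latest_per_container(inventory)
    out = []
    for log in logs:
        inv = latest.get(log["ContainerID"])
        if inv is not None:
            # one inventory row per container -> no duplicated log lines
            out.append({**log, "ServiceName": inv["ServiceName"]})
    return out

logs = [{"ContainerID": "c1", "LogEntry": "started"}]
inventory = [
    {"ContainerID": "c1", "TimeGenerated": 1, "ServiceName": "svc-a"},
    {"ContainerID": "c1", "TimeGenerated": 2, "ServiceName": "svc-a"},
]
joined = join_logs(logs, inventory)
```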
    <item>
      <title>Sentinel Incident Priority Mapping to SIR</title>
      <link>https://techcommunity.microsoft.com/t5/azure-observability/sentinel-incident-priority-mapping-to-sir/m-p/4374240#M4626</link>
      <description>&lt;P&gt;Hi, we are working on implementing the SIR module within our ServiceNow platform. We have 5 levels of priority within SIR (Critical, High, Moderate, Low, Planning), whereas Sentinel has only 4 severities (Informational, Low, Medium, High). I am interested to know how other organizations have handled and mapped these priorities. Thanks in advance.&lt;/P&gt;</description>
      <pubDate>Wed, 05 Feb 2025 19:58:21 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-observability/sentinel-incident-priority-mapping-to-sir/m-p/4374240#M4626</guid>
      <dc:creator>AmiShinu</dc:creator>
      <dc:date>2025-02-05T19:58:21Z</dc:date>
    </item>
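One way organizations approach the 4-to-5 mapping asked about above is to map the four Sentinel severities directly and reserve the extra SIR level (Critical) for a subset of High incidents, for example those touching business-critical assets. The escalation rule in this sketch is purely an example, not a recommendation:

```python
# Example Sentinel severity -> ServiceNow SIR priority mapping.
# The escalation of some High incidents to Critical is illustrative only.

BASE_MAP = {
    "Informational": "Planning",
    "Low": "Low",
    "Medium": "Moderate",
    "High": "High",
}

def sir_priority(sentinel_severity, business_critical_asset=False):
    # Promote High to Critical only under an extra condition of your choosing.
    if sentinel_severity == "High" and business_critical_asset:
        return "Critical"
    return BASE_MAP[sentinel_severity]

p = sir_priority("High", business_critical_asset=True)
```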
    <item>
      <title>Multiple Failed SignIn Events</title>
      <link>https://techcommunity.microsoft.com/t5/azure-observability/multiple-failed-signin-events/m-p/4372465#M4621</link>
      <description>&lt;P&gt;I have a user for whom, within the last week, I'm consistently seeing more than 100 failed login events from an authorized device. Something seems to be running in the back end, as these events arrive at 2-3 minute intervals. The error message reads: "Due to a configuration change made by your administrator, or because you moved to a new location, you must use multi-factor authentication to access the resource." The applications that show as interrupted are:&lt;/P&gt;&lt;UL&gt;&lt;LI&gt;Office Online Maker SSO&lt;/LI&gt;&lt;LI&gt;Office Online Core SSO&lt;/LI&gt;&lt;LI&gt;Office365 Shell WCSS-Client&lt;/LI&gt;&lt;LI&gt;SharePoint Online Web Client Extensibility -&amp;gt; Microsoft Graph&lt;/LI&gt;&lt;/UL&gt;&lt;P&gt;Does anyone have insights into addressing this issue? Thanks.&lt;/P&gt;</description>
      <pubDate>Thu, 30 Jan 2025 19:35:48 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-observability/multiple-failed-signin-events/m-p/4372465#M4621</guid>
      <dc:creator>AmiShinu</dc:creator>
      <dc:date>2025-01-30T19:35:48Z</dc:date>
    </item>
    <item>
      <title>Sentinel Threat Intelligence Detection Rule</title>
      <link>https://techcommunity.microsoft.com/t5/azure-observability/sentinel-threat-intelligence-detection-rule/m-p/4371536#M4620</link>
      <description>&lt;P&gt;I'm working on connecting various threat intelligence TAXII feeds to our Sentinel platform. Does anyone have suggestions on the kinds of KQL detection rules we can build around these feeds? Most of them come with IPs, URLs, domains, and hash values. Thanks in advance.&lt;/P&gt;</description>
      <pubDate>Wed, 29 Jan 2025 00:11:39 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-observability/sentinel-threat-intelligence-detection-rule/m-p/4371536#M4620</guid>
      <dc:creator>AmiShinu</dc:creator>
      <dc:date>2025-01-29T00:11:39Z</dc:date>
    </item>
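A common pattern behind the detection rules asked about above is a simple IOC match: compare event fields against the indicator sets (IPs, URLs, domains, hashes) from the TAXII feeds. In KQL this is typically a join or has_any against the ThreatIntelligenceIndicator table; the Python sketch below only illustrates the matching logic itself, with made-up indicator and event data:

```python
# IOC-matching sketch: flag events whose fields appear in the TAXII-sourced
# indicator sets. All data below is made up for illustration.

indicators = {
    "ip": {"203.0.113.7"},
    "domain": {"bad.example"},
    "sha256": {"deadbeef"},
}

def matches(event):
    """Return which indicator types the event matched."""
    hits = []
    if event.get("RemoteIP") in indicators["ip"]:
        hits.append("ip")
    if event.get("RemoteDomain") in indicators["domain"]:
        hits.append("domain")
    if event.get("FileHash") in indicators["sha256"]:
        hits.append("sha256")
    return hits

hits = matches({"RemoteIP": "203.0.113.7", "RemoteDomain": "ok.example"})
```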
    <item>
      <title>Are you getting the most out of your Azure Log Analytics Workspace (LAW) investment?</title>
      <link>https://techcommunity.microsoft.com/t5/azure-observability/are-you-getting-the-most-out-of-your-azure-log-analytics/m-p/4367651#M4617</link>
      <description>&lt;P&gt;Using a LAW is a great way to consolidate various types of data (performance, events, security, etc.) and signals from multiple sources.&amp;nbsp;That's the easy part - mining this data for actionable insights is often the real challenge.&lt;BR /&gt;&lt;BR /&gt;One way we did this was by surfacing events related to disks across our physical server estate.&amp;nbsp;We were already sending event data to our LAW; it was just a matter of parsing it with KQL and adding it to a Power BI dashboard for additional visibility.&lt;BR /&gt;&lt;BR /&gt;The snippet from the Power BI dashboard shows when the alert was first triggered and when the disk was eventually replaced.&lt;BR /&gt;&lt;BR /&gt;Here's the KQL query we came up with.&lt;/P&gt;&lt;LI-CODE lang="kusto"&gt;let start_time=ago(30d);
let end_time=now();
Event
| where TimeGenerated &amp;gt; start_time and TimeGenerated &amp;lt; end_time
| where EventLog contains 'System'
| where Source contains 'Storage Agents'
| where RenderedDescription contains 'Drive Array Physical Drive Status Change'
| parse kind=relaxed RenderedDescription with * 'Drive Array Physical Drive Status Change. The ' Drive ' with serial number ""' Serial '"", has a new status of ' Status '. (Drive status values:'*
| project Computer, Drive, Serial, Status, TimeGenerated, EventLevelName&lt;/LI-CODE&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;You can of course set up alerting with Azure Monitor alerts.&lt;BR /&gt;&lt;BR /&gt;I hope this example helps you get more value from your LAW.&lt;/P&gt;</description>
      <pubDate>Fri, 17 Jan 2025 14:40:39 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-observability/are-you-getting-the-most-out-of-your-azure-log-analytics/m-p/4367651#M4617</guid>
      <dc:creator>Adeelaziz</dc:creator>
      <dc:date>2025-01-17T14:40:39Z</dc:date>
    </item>
    <item>
      <title>Effective Cloud Governance: Leveraging Azure Activity Logs with Power BI</title>
      <link>https://techcommunity.microsoft.com/t5/azure-observability/effective-cloud-governance-leveraging-azure-activity-logs-with/m-p/4367649#M4616</link>
      <description>&lt;P&gt;We all generally accept that governance in the cloud is a continuous journey, not a destination. There's no one-size-fits-all solution, and depending on the size of your Azure cloud estate, staying on top of things can be challenging even at the best of times.&lt;BR /&gt;&lt;BR /&gt;One way of keeping your finger on the pulse is to closely monitor your Azure Activity Log.&amp;nbsp;This log contains a wealth of&amp;nbsp;information ranging from noise to interesting to actionable data.&amp;nbsp;One could set up alerts for delete and update signals; however, that can result in a flood of notifications.&lt;BR /&gt;&lt;BR /&gt;To address this challenge, you could develop a Power BI report, similar to this one, that pulls in the Azure Activity Log&amp;nbsp;and allows you to group and summarize data by various dimensions.&amp;nbsp;You still need someone to review the report regularly; however, consuming the data this way makes it a whole lot&amp;nbsp;easier.&amp;nbsp;This by no means replaces the need for setting up alerts for key signals; however, it does give you a great view of what's happened in your environment.&lt;/P&gt;&lt;P&gt;&lt;BR /&gt;If you're interested, this is the KQL query I'm using in Power BI:&lt;/P&gt;&lt;LI-CODE lang="kusto"&gt;let start_time = ago(24h);
let end_time = now();
AzureActivity
| where TimeGenerated &amp;gt; start_time and TimeGenerated &amp;lt; end_time
| where OperationNameValue contains 'WRITE' or OperationNameValue contains 'DELETE'
| project
    TimeGenerated,
    Properties_d.resource,
    ResourceGroup,
    OperationNameValue,
    Authorization_d.scope,
    Authorization_d.action,
    Caller,
    CallerIpAddress,
    ActivityStatusValue
| order by TimeGenerated asc&lt;/LI-CODE&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Fri, 17 Jan 2025 14:38:11 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-observability/effective-cloud-governance-leveraging-azure-activity-logs-with/m-p/4367649#M4616</guid>
      <dc:creator>Adeelaziz</dc:creator>
      <dc:date>2025-01-17T14:38:11Z</dc:date>
    </item>
    <item>
      <title>Azure AD Powershell module logs in sentinel</title>
      <link>https://techcommunity.microsoft.com/t5/azure-observability/azure-ad-powershell-module-logs-in-sentinel/m-p/4365830#M4613</link>
      <description>&lt;P&gt;Hello Team, as part of a clean-up activity, our SOC has been assigned a task to find the list of regular users who are using Azure AD PowerShell and what activities they're performing, as we want that to be limited to admin accounts managing Azure resources.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;I was able to find sign-in activities for many users to "Azure Active Directory PowerShell", but I'm unable to find what activities they performed using PowerShell. I looked under audit logs and a few other tables. Can someone tell me which table to use, or what KQL I can run, to see the operation logs associated with Azure AD PowerShell? Thanks in advance.&lt;/P&gt;</description>
      <pubDate>Mon, 13 Jan 2025 21:01:15 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-observability/azure-ad-powershell-module-logs-in-sentinel/m-p/4365830#M4613</guid>
      <dc:creator>AmiShinu</dc:creator>
      <dc:date>2025-01-13T21:01:15Z</dc:date>
    </item>
    <item>
      <title>How to Monitor New Management Group Creation and Deletion.</title>
      <link>https://techcommunity.microsoft.com/t5/azure-observability/how-to-monitor-new-management-group-creation-and-deletion/m-p/4365710#M4612</link>
      <description>&lt;P&gt;I am writing this post to show how to monitor new management group creation and deletion using Azure Activity Logs and trigger an incident in Microsoft Sentinel. You can also use these steps to monitor subscription creation.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;By default, diagnostic settings at the management group level are not enabled. They cannot be enabled using Azure Policy or from the portal interface. Use the article below to enable Management Group Diagnostic Settings.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;A href="https://learn.microsoft.com/en-us/rest/api/monitor/management-group-diagnostic-settings/create-or-update?view=rest-monitor-2020-01-01-preview&amp;amp;tabs=HTTP" target="_blank" rel="noopener"&gt;Management Group Diagnostic Settings - Create Or Update - REST API (Azure Monitor) | Microsoft Learn&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Below is the message body to use if you would like to forward the logs only to the Log Analytics workspace where Sentinel is enabled. Also make sure you enable the diagnostic settings at the tenant management group level to track all changes in your tenant.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;LI-CODE lang="json"&gt;{
  "properties": {
    "workspaceId": "&amp;lt;&amp;lt; replace with workspace resource ID&amp;gt;&amp;gt;",
    "logs": [
      {
        "category": "Administrative",
        "enabled": true
      },
      {
        "category": "Policy",
        "enabled": true
      }
    ]
  }
}&lt;/LI-CODE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Once you have enabled the Diagnostic settings, you can use the below KQL query to monitor the New Management group creation and Deletion using Azure Activity Logs.&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;// KQL query to identify if a management group is deleted&lt;/STRONG&gt;&lt;/P&gt;
&lt;LI-CODE lang="kusto"&gt;AzureActivity
| where OperationNameValue == "MICROSOFT.MANAGEMENT/MANAGEMENTGROUPS/DELETE"
| where ActivityStatusValue == "Success"
| extend mg = split(tostring(Properties_d.entity), "/")
| project TimeGenerated, activityStatusValue_ = tostring(Properties_d.activityStatusValue), Managementgroup = mg[4], message_ = tostring(parse_json(Properties).message), caller_ = tostring(Properties_d.caller)&lt;/LI-CODE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;// KQL query to identify if a management group is created&lt;/STRONG&gt;&lt;/P&gt;
&lt;LI-CODE lang="kusto"&gt;AzureActivity
| where OperationNameValue == "MICROSOFT.MANAGEMENT/MANAGEMENTGROUPS/WRITE"
| where ActivityStatusValue == "Success"
| extend mg = split(tostring(Properties_d.entity), "/")
| project TimeGenerated, activityStatusValue_ = tostring(Properties_d.activityStatusValue), Managementgroup = mg[4], message_ = tostring(parse_json(Properties).message), caller_ = tostring(Properties_d.caller)&lt;/LI-CODE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;This log can also be used to monitor &lt;STRONG&gt;new subscription&lt;/STRONG&gt; creation, using the query below.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;LI-CODE lang="kusto"&gt;AzureActivity
| where OperationNameValue == "Microsoft.Management" and ActivityStatusValue == "Succeeded" and isnotempty(SubscriptionId)&lt;/LI-CODE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;If you need to trigger an incident in Sentinel, use the above query in a custom scheduled analytics rule and create an alert.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Note: Enabling this API on the management group diagnostic settings will also be inherited by the subscriptions downstream for the specific category.&lt;/STRONG&gt;&lt;/P&gt;</description>
      <pubDate>Thu, 20 Feb 2025 17:41:07 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-observability/how-to-monitor-new-management-group-creation-and-deletion/m-p/4365710#M4612</guid>
      <dc:creator>hemanthselva</dc:creator>
      <dc:date>2025-02-20T17:41:07Z</dc:date>
    </item>
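The REST call described in the post above can be scripted. The sketch below only assembles the PUT URL and body for the management group diagnostic settings API; the api-version is the one named in the linked article, and the IDs in caps are placeholders, so verify both before use.

```python
# Assemble the PUT request for Management Group Diagnostic Settings
# (api-version taken from the linked article; placeholder values in CAPS).

API_VERSION = "2020-01-01-preview"

def mg_diagnostic_put(mg_id, setting_name, workspace_id):
    url = (
        "https://management.azure.com/providers/Microsoft.Management/"
        "managementGroups/" + mg_id +
        "/providers/microsoft.insights/diagnosticSettings/" + setting_name +
        "?api-version=" + API_VERSION
    )
    body = {
        "properties": {
            "workspaceId": workspace_id,
            "logs": [
                {"category": "Administrative", "enabled": True},
                {"category": "Policy", "enabled": True},
            ],
        }
    }
    return url, body

url, body = mg_diagnostic_put("MG_ID", "toSentinel", "WORKSPACE_RESOURCE_ID")
```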
    <item>
      <title>Integrate Threat Intelligence with Sentinel that have no API</title>
      <link>https://techcommunity.microsoft.com/t5/azure-observability/integrate-threat-intelligence-with-sentinel-that-have-no-api/m-p/4365059#M4609</link>
      <description>&lt;P&gt;We receive threat intelligence from a third-party vendor through a CSV file sent to our shared mailbox. As they do not have an API, I am wondering if there is a way to automate this and have the feed populate in Sentinel, rather than manually entering each IOC.&lt;/P&gt;</description>
      <pubDate>Fri, 10 Jan 2025 22:31:23 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-observability/integrate-threat-intelligence-with-sentinel-that-have-no-api/m-p/4365059#M4609</guid>
      <dc:creator>AmiShinu</dc:creator>
      <dc:date>2025-01-10T22:31:23Z</dc:date>
    </item>
    <item>
      <title>Hold user reported Emails to see if later they become malicious.</title>
      <link>https://techcommunity.microsoft.com/t5/azure-observability/hold-user-reported-emails-to-see-if-later-they-become-malicious/m-p/4365054#M4608</link>
      <description>&lt;P&gt;Hello Team,&lt;/P&gt;&lt;P&gt;&lt;BR /&gt;Our Security Operations Center has identified a phishing report from a user. We activated email notifications so that users receive updates from Microsoft about investigation results. In this case, the user was initially informed that the email was safe, but soon after they received another similar email from the same malicious sender, which was quarantined by ZAP. After this, ZAP went back and quarantined the initial email too.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Even though ZAP and Safe Links continuously re-evaluate emails post-delivery, it is concerning that the report initially came back clean and the email was later quarantined based on the investigation of another email. I would like to know if there are additional measures, aside from ZAP, we can take to detect emails that may turn malicious after delivery.&lt;/P&gt;&lt;P&gt;Also, can we implement a mechanism to hold reported emails for 2-3 hours, until we assess their safety, to see whether they later become malicious? That would prevent users from receiving a false "safe" notification only to see ZAP quarantine the email afterwards. Thanks in advance.&lt;/P&gt;</description>
      <pubDate>Fri, 10 Jan 2025 22:21:28 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-observability/hold-user-reported-emails-to-see-if-later-they-become-malicious/m-p/4365054#M4608</guid>
      <dc:creator>AmiShinu</dc:creator>
      <dc:date>2025-01-10T22:21:28Z</dc:date>
    </item>
    <item>
      <title>Question about "anomalous token" alert</title>
      <link>https://techcommunity.microsoft.com/t5/azure-observability/question-about-quot-anomalous-token-quot-alert/m-p/4364608#M4606</link>
      <description>&lt;P&gt;Hi Everyone,&lt;/P&gt;&lt;P&gt;I am a security analyst working with Sentinel, and every now and again we get the alert "Anomalous token involving one user": "This detection indicates that there are abnormal characteristics in the token such as an unusual token lifetime or a token that is played from an unfamiliar location. This detection covers Session Tokens and Refresh Tokens."&lt;/P&gt;&lt;P&gt;I need to understand more about this. I know that malicious actors can spoof and abuse these tokens, but I have no idea where to go from here. There is very little support online with regard to further mitigations.&lt;/P&gt;&lt;P&gt;So I am just wondering whether anyone deals with these, what the protocol is at your business, and what controls we can implement to limit such alerts.&lt;/P&gt;</description>
      <pubDate>Thu, 09 Jan 2025 20:51:22 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-observability/question-about-quot-anomalous-token-quot-alert/m-p/4364608#M4606</guid>
      <dc:creator>AmiShinu</dc:creator>
      <dc:date>2025-01-09T20:51:22Z</dc:date>
    </item>
    <item>
      <title>Azure Monitor AMA Migration helper workbook question for subscriptions with AKS clusters</title>
      <link>https://techcommunity.microsoft.com/t5/azure-observability/azure-monitor-ama-migration-helper-workbook-question-for/m-p/4363437#M4604</link>
      <description>&lt;P&gt;Hi,&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;In an ongoing project, I have been helping a customer migrate their agents from the Microsoft Monitoring Agent (MMA) to the new Azure Monitor Agent (AMA), which consolidates into one installation the previous Log Analytics agent, Telegraf agent, and diagnostics extension (Azure Event Hubs, Storage, etc.), and then configuring Data Collection Rules (DCRs) to collect data using the new agent.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;One of the first steps is of course to identify which resources are affected and need to be migrated. There are multiple tools for this, such as a PowerShell script as well as the built-in AMA Migration workbook in Azure Monitor, which is what I used as the initial option at the start of the AMA migration process.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;When run, the workbook lists all VMs, VMSSs, etc. in the subscription that do not have the AMA installed (e.g., through an Azure Policy or automatically by having configured a DCR), or that still have the old MMA installed and thus need to be migrated.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Azure Kubernetes Service (AKS) is a rather specific hosting service, almost its own mini-ecosystem in regard to networking, storage, scaling, etc. It exposes the underlying infrastructure composing the cluster and its master node, giving IT administrators, power users, etc. potentially fine-grained control of these resources. However, in most typical use cases the underlying AKS infrastructure resources should not be modified, as doing so could break configured SLOs.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;By default, the AMA migration workbook includes all resources that do not already have the AMA installed, no matter the resource type, including the underlying cluster infrastructure resources created by AKS in the "MC_" resource group(s), such as the virtual machine scale sets handling the creation and scaling of nodes and node pools of an AKS cluster. Perhaps the underlying AKS infrastructure resources could be excluded from the workbook's results by default, or, when non-migrated AKS infrastructure resources are found, the results could be accompanied by text describing potential remediation steps for AMA migration of AKS cluster infrastructure resources. Has anyone encountered the same issue, and if so, how did you work around it? It would be great to hear some input, and whether there are already solutions/workarounds readily available out there (if not, I have been thinking of proposing a PR with a filter and exclusion added to the default workbook, e.g. here&amp;nbsp;&lt;A href="https://github.com/microsoft/AzureMonitorCommunity/tree/master/Azure%20Services/Azure%20Monitor/Agents/Migration%20Tools/Migration%20Helper%20Workbook" target="_blank"&gt;https://github.com/microsoft/AzureMonitorCommunity/tree/master/Azure%20Services/Azure%20Monitor/Agents/Migration%20Tools/Migration%20Helper%20Workbook&lt;/A&gt;). Thanks!&lt;/P&gt;</description>
      <pubDate>Tue, 07 Jan 2025 12:17:57 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-observability/azure-monitor-ama-migration-helper-workbook-question-for/m-p/4363437#M4604</guid>
      <dc:creator>KristofferAxelssonACC</dc:creator>
      <dc:date>2025-01-07T12:17:57Z</dc:date>
    </item>
    <item>
      <title>Behavior when Batch Send Failed</title>
      <link>https://techcommunity.microsoft.com/t5/azure-observability/behavior-when-batch-send-failed/m-p/4361371#M4602</link>
      <description>&lt;P&gt;Hi All,&lt;BR /&gt;&lt;BR /&gt;I am looking to send messages in batches to both Log Analytics and Event Hub services. My solution requires that the sent batches be all-or-none, meaning either all messages are sent successfully, or all messages are dropped in case of failure.&lt;BR /&gt;&lt;BR /&gt;Could you please clarify how Log Analytics and Event Hub handle failures during batch sends?&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;/P&gt;</description>
      <pubDate>Mon, 30 Dec 2024 21:00:27 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-observability/behavior-when-batch-send-failed/m-p/4361371#M4602</guid>
      <dc:creator>btsui</dc:creator>
      <dc:date>2024-12-30T21:00:27Z</dc:date>
    </item>
    <item>
      <title>Symantec software Disabling Recovery Mode during installations</title>
      <link>https://techcommunity.microsoft.com/t5/azure-observability/symantec-software-disabling-recovery-mode-during-installations/m-p/4358859#M4597</link>
      <description>&lt;P&gt;The security team has often been receiving alerts that, during the installation of Symantec Encryption Desktop, Windows uses bcdedit.exe to modify the boot configuration, disabling the default Windows system recovery.&lt;BR /&gt;This might be expected behavior, a defense mechanism to ensure no one can bypass the encryption at boot time. As we are receiving lots of alerts on this, we want to get to the root cause and confirm that this is expected behavior, so that we can document it and fine-tune our detection.&lt;BR /&gt;&lt;BR /&gt;Does anyone know whether the installer would interact with the system boot configuration, and whether there is any mention of bcdedit tasks being used during installation?&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;Command Line: "cmd.exe" /c schtasks.exe /Create /RU %USERNAME% /SC DAILY /TN runBCDEDIT /RL HIGHEST /TR "bcdedit.exe /set recoveryenabled No " &amp;amp; schtasks.exe /run /TN runBCDEDIT &amp;amp; schtasks.exe /Delete /TN runBCDEDIT /F &amp;amp; schtasks.exe /Delete /TN "runBCDEDIT" /F&lt;/P&gt;</description>
      <pubDate>Wed, 18 Dec 2024 21:38:54 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-observability/symantec-software-disabling-recovery-mode-during-installations/m-p/4358859#M4597</guid>
      <dc:creator>AmiShinu</dc:creator>
      <dc:date>2024-12-18T21:38:54Z</dc:date>
    </item>
    <item>
      <title>Has anyone integrated VISA Threat Intelligence with Sentinel or any SIEM.</title>
      <link>https://techcommunity.microsoft.com/t5/azure-observability/has-anyone-integrated-visa-threat-intelligence-with-sentinel-or/m-p/4357378#M4592</link>
      <description>&lt;P&gt;I'm looking to integrate threat intelligence from VISA directly into Microsoft Sentinel and automate the ingestion process. Has anyone in the community integrated VISA's threat intelligence platform with their SIEM solution? Thanks in advance!&lt;/P&gt;</description>
      <pubDate>Fri, 13 Dec 2024 20:04:18 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/azure-observability/has-anyone-integrated-visa-threat-intelligence-with-sentinel-or/m-p/4357378#M4592</guid>
      <dc:creator>AmiShinu</dc:creator>
      <dc:date>2024-12-13T20:04:18Z</dc:date>
    </item>
  </channel>
</rss>

