Azure Log Analytics
Are you getting the most out of your Azure Log Analytics Workspace (LAW) investment?

Using a LAW is a great way to consolidate various types of data (performance, events, security, etc.) and signals from multiple sources. That's the easy part - mining this data for actionable insights is often the real challenge. One way we did this was by surfacing disk-related events across our physical server estate. We were already sending event data to our LAW; it was just a matter of parsing it with KQL and adding it to a Power BI dashboard for additional visibility. The snippet from the Power BI dashboard shows when the alert was first triggered and when the disk was eventually replaced. Here's the KQL query we came up with:

let start_time=ago(30d);
let end_time=now();
Event
| where TimeGenerated > start_time and TimeGenerated < end_time
| where EventLog contains 'System'
| where Source contains 'Storage Agents'
| where RenderedDescription contains 'Drive Array Physical Drive Status Change'
| parse kind=relaxed RenderedDescription with * 'Drive Array Physical Drive Status Change. The ' Drive ' with serial number ""' Serial '"", has a new status of ' Status '. (Drive status values:'*
| project Computer, Drive, Serial, Status, TimeGenerated, EventLevelName

You can of course set up alerting with Azure Monitor alerts. I hope this example helps you get more value from your LAW.
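
If you want to go one step further and alert only on drives that are still in a degraded state, a summarized variant of the query can serve as the basis for a log search alert rule. This is a minimal sketch that reuses the parse pattern from the post; the exact status text reported by the Storage Agents source (e.g. whether a healthy drive reads "OK") is an assumption and should be checked against your own events:

let start_time = ago(1d);
Event
| where TimeGenerated > start_time
| where EventLog contains 'System'
| where Source contains 'Storage Agents'
| where RenderedDescription contains 'Drive Array Physical Drive Status Change'
| parse kind=relaxed RenderedDescription with * 'with serial number ""' Serial '"", has a new status of ' Status '. (Drive status values:'*
// keep only the most recent status per physical drive, then flag anything not healthy
| summarize arg_max(TimeGenerated, Status) by Computer, Serial
| where Status !has 'OK'

An Azure Monitor log search alert rule pointed at this query (firing, for example, when the result count is greater than zero) would notify on drives whose latest reported status is not healthy.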

Azure Monitor AMA Migration helper workbook question for subscriptions with AKS clusters

Hi,

In an ongoing project, I've been helping a customer update their agents from the Microsoft Monitoring Agent (MMA) to the new Azure Monitor Agent (AMA), which consolidates the previous Log Analytics agent, Telegraf agent, and diagnostics extension (sending to Azure Event Hubs, Storage, etc.) into a single installation, and then configure Data Collection Rules (DCRs) to collect data using the new agent.

One of the first steps is of course to identify which resources are affected and need to be migrated. There are multiple tools for this, such as this PowerShell script as well as the built-in AMA Migration Helper workbook in Azure Monitor, which is what I used at the start of the migration process. The workbook lists all VMs, VMSSs, etc. in the subscription that do not yet have the AMA installed (e.g., through an Azure Policy or automatically via an associated DCR), or that still have the old MMA installed and thus need to be migrated.

Azure Kubernetes Service (AKS) is a rather specific hosting service, almost a mini-ecosystem of its own with regard to networking, storage, scaling, etc. It exposes the underlying infrastructure that makes up the cluster, giving IT administrators and power users fine-grained control over those resources. In most typical use cases, however, the underlying AKS infrastructure resources should not be modified, as doing so could break configured SLOs.

When running the built-in AMA migration workbook, it includes by default all resources that do not already have the AMA installed, regardless of resource type - including the cluster infrastructure resources created by AKS in the "MC_" resource group(s), such as the virtual machine scale sets that handle the creation and scaling of the cluster's nodes and node pools. Perhaps these underlying AKS infrastructure resources could be excluded from the workbook's results by default, or, when non-migrated AKS infrastructure resources are found, the results could be accompanied by a note describing potential remediation steps for migrating AKS cluster infrastructure.

Has anyone encountered the same issue, and if so, how did you work around it? It would be great to hear some input, and whether there are already solutions or workarounds available (if not, I've been thinking of proposing a PR with a filter and exclusion added to the default workbook, e.g. here: https://github.com/microsoft/AzureMonitorCommunity/tree/master/Azure%20Services/Azure%20Monitor/Agents/Migration%20Tools/Migration%20Helper%20Workbook). Thanks!
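
As a stopgap while the workbook itself has no such filter, a similar inventory can be pulled with an Azure Resource Graph query that skips AKS node resource groups. This is a rough sketch, and it assumes the clusters use the default "MC_" node resource group naming prefix; clusters created with a custom node resource group name would not be filtered out:

resources
| where type in~ ("microsoft.compute/virtualmachines", "microsoft.compute/virtualmachinescalesets")
// skip resources living in AKS-managed node resource groups (default "MC_..." naming)
| where resourceGroup !startswith "mc_"
| project name, type, resourceGroup, location, subscriptionId

The same exclusion condition could also be added to the resource queries inside a copy of the Migration Helper workbook.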

Behavior when Batch Send Failed

Hi All,

I am looking to send messages in batches to both Log Analytics and Event Hubs. My solution requires that the sent batches be all-or-none, meaning either all messages are sent successfully, or all messages are dropped in case of failure. Could you please clarify how Log Analytics and Event Hubs handle failures during batch sends?
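
Not an authoritative answer on the atomicity guarantees of either service, but one practical way to see what actually landed in the workspace after a failed batch send is to check its ingestion diagnostics. A minimal sketch, assuming the Operation table is populated for your workspace (it records ingestion and collection issues):

Operation
| where TimeGenerated > ago(1d)
// ingestion-related warnings and errors reported by the workspace itself
| where OperationCategory == "Ingestion"
| project TimeGenerated, OperationStatus, Detail
| sort by TimeGenerated desc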

Audit user accessing enterprise App by SPN sign-in

I'm in a hybrid Entra ID environment. Some users can use an "Enterprise Application" by utilizing an app ID and a certificate. In the activity or sign-in logs, I can find the access entries, but I don't have information on which user used the app registration or which certificate was used. I would like logs that allow me to identify WHO is using an SPN/app registration. Do you have any ideas? Thank you.

Here is an example: in this screenshot, I can see access made to an app using, for example, an app ID + secret/certificate connection. So it's "logical" not to see a username, since one isn't required for this type of connection. However, I would really like to have this information, or some indicator to identify which of my users accessed it. Currently, I only have the machine's IP address, but I would like more information. Maybe in Purview or with another service, but I haven't found anything.
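
One place to start, if the Entra ID diagnostic settings export service principal sign-in logs to the workspace, is the AADServicePrincipalSignInLogs table. It won't name an interactive user (none exists for client-credential sign-ins), but the credential key ID can indicate which certificate or secret was used, which you can then map back to whoever holds that credential. A hedged sketch; the <app id> placeholder and the 7-day window are assumptions, and the credential column may be empty in some tenants:

AADServicePrincipalSignInLogs
| where TimeGenerated > ago(7d)
| where AppId == "<app id>"
// which credential (certificate/secret key ID) and source IP were used for each sign-in
| project TimeGenerated, ServicePrincipalName, AppId, ServicePrincipalCredentialKeyId, IPAddress, ResourceDisplayName, ResultType
| sort by TimeGenerated desc

Correlating the IPAddress column with interactive sign-ins or DHCP/VPN logs from the same timeframe is another rough way to tie the connection back to a workstation and, from there, to a user.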

APIM ApiManagementGatewayLogs

Hi! I have published a couple of APIs through APIM. Now I'm trying to read some diagnostic logs. When I choose APIM -> Logs -> API Management services -> ApiManagementGatewayLogs -> preview data, or run the query:

ApiManagementGatewayLogs
| where TimeGenerated > ago(24h)
| limit 10

I get:

'where' operator: Failed to resolve table or column expression named 'ApiManagementGatewayLogs'. If issue persists, please open a support ticket.

What am I doing wrong?

Thanks, Jani
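
A common cause, though not necessarily the one here, is that the ApiManagementGatewayLogs table only appears once a diagnostic setting on the APIM instance sends GatewayLogs to the workspace in resource-specific mode and data has actually arrived. If the diagnostic setting uses the legacy Azure diagnostics mode instead, the same records land in the general AzureDiagnostics table; a quick check under that assumption:

AzureDiagnostics
// APIM gateway request logs when the diagnostic setting targets the AzureDiagnostics table
| where ResourceProvider == "MICROSOFT.APIMANAGEMENT"
| where Category == "GatewayLogs"
| take 10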

Can I use regex in a DCR custom text logfile filepath?

Hi, I have about 50 servers attached to a DCR to collect a custom text log into a Log Analytics workspace custom table. Is it possible, or does anyone have experience with, using a regex file path in this DCR situation? The logs are in the same format, but the paths differ slightly on each server. There are two structures, and they include the server names, so we have 50 different file paths:

App Server: c:\appserver\logs\<server Fully Qualified Name>\server\*.log
App Portal: c:\appportal\logs\<server Fully Qualified Name>\portal\*.log

When I use static paths it works (there's a limit of 20, by the way). I have tried the following regex file path, but nothing comes in:

c:\app(server|portal)\logs\SYS[a-zA-Z0-9]{4}wm[0-9]{2}.domain.net\(server|portal)\*.log

Can someone confirm whether I can use regex in the file path pattern in the DCR Data Source text log setup? If so, how do I get it to work? Am I missing some escapes somewhere? Many thanks in advance.
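
As far as I know, the custom text log data source accepts wildcard patterns rather than regular expressions, so, if directory-level wildcards are accepted in your environment, two entries such as c:\appserver\logs\*\server\*.log and c:\appportal\logs\*\portal\*.log may cover both structures without listing every server. If the server name is still needed per record, it can be recovered at ingestion time. A rough sketch of a transformKql for the DCR, assuming the destination table includes the FilePath column that the text log data source can populate, and where ServerFQDN is a hypothetical column you would have to add to the table:

source
// pull the FQDN directory segment out of the collected file path
| extend ServerFQDN = extract(@"\\logs\\([^\\]+)\\", 1, FilePath)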

how to parse logs in DCR if RawMessage is in JSON

Dear Fellow Members,

I am going through the tutorial on ingesting logs through the Azure Logs Ingestion API. At the moment I am at the point where I need to create a DCR for ingesting the logs. I managed to upload the sample logs, and now I have to set up the schema/transformation rules for the log ingestion. My problem is that the RawMessage part of the ingested logs is basically a JSON document:

[
  {
    "RawData": "{\"SourceName\":\"Microsoft-Windows-DNSServer\",\"ProviderGuid\":\"{EB79061A-A566-4698-9119-3ED2807060E7}\",\"EventID\":256,\"Version\":0,\"ChannelID\":16,\"Channel\":\"Microsoft-Windows-DNS-Server/Analytical \",\"LevelValue\":4,\"Level\":\"Information \",\"OpcodeValue\":0,\"TaskValue\":1,\"Category\":\"LOOK_UP \",\"Keywords\":\"9223372036854775809\",\"EventTime\":\"2023-04-13T10:22:14.043901+02:00\",\"ExecutionProcessID\":6624,\"ExecutionThreadID\":4708,\"EventType\":\"INFO\",\"SeverityValue\":2,\"Severity\":\"INFO\",\"Hostname\":\"windns\",\"Domain\":\"NT AUTHORITY\",\"AccountName\":\"SYSTEM\",\"UserID\":\"S-1-5-18\",\"AccountType\":\"User\",\"Flags\":\"256\",\"TCP\":\"0\",\"InterfaceIP\":\"172.18.88.20\",\"Source\":\"172.18.88.20\",\"RD\":\"1\",\"QNAME\":\"v10.events.data.microsoft.com.\",\"QTYPE\":\"1\",\"XID\":\"21030\",\"Port\":\"59130\",\"ParsedPacketData\":{\"dns.id\":21030,\"dns.flags.recursion_desired\":\"true\",\"dns.flags.truncated_response\":\"false\",\"dns.flags.authoritative\":\"false\",\"dns.opcode\":\"QUERY\",\"dns.flags.query_or_response\":\"false\",\"dns.response.code\":\"NOERROR\",\"dns.flags.checking_disabled\":\"false\",\"dns.flags.authentic_data\":\"false\",\"dns.flags.recursion_available\":\"false\",\"dns.query\":[{\"dns.query.name\":\"v10.events.data.microsoft.com\",\"dns.query.type\":\"A\",\"dns.query.class\":\"IN\"}]},\"PacketData\":\"0x52260100000100000000000003763130066576656E74730464617461096D6963726F736F667403636F6D0000010001\",\"AdditionalInfo\":\".\",\"GUID\":\"{B021826E-78B1-4574-8B19-0FF06408A144}\",\"EventReceivedTime\":\"2023-04-13T10:22:16.140231+02:00\",\"SourceModuleName\":\"in_windowsdns_auditanalytics_sentinel_windows\",\"SourceModuleType\":\"im_etw\",\"HostIP\":\"172.18.88.20\",\"BufferSize\":\"N/A\"}",
    "Time": "2023-04-19T07:30:08.5953753Z",
    "Application": "LogGenerator"
  }
]

Now, that is already in a structured format which should be reasonably easy to parse. However, I haven't seen any examples of doing that. I have only encountered JSON parsing examples where the JSON text was contained in some field and the result of the parsing was assigned to a different/new field. In this case the JSON content is filled with key-value pairs that should each go to a different field in the new table. Have any of you encountered a similar situation? If so, how did you manage to solve it? Is anything like this even possible in a DCR?

source
| parse RawData as json

Thanks,
János
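
Something along these lines is possible directly in the DCR's transformKql. A rough sketch, assuming the incoming stream exposes the RawData column shown above and that the destination table has matching columns; the output names (Computer, QueryName, ClientIP) are only illustrative, and only a few of the fields are projected:

source
| extend d = parse_json(RawData)
// promote selected key-value pairs from the JSON payload into their own columns
| extend TimeGenerated = todatetime(d.EventTime)
| extend Computer = tostring(d.Hostname), EventID = toint(d.EventID), QueryName = tostring(d.QNAME), ClientIP = tostring(d.Source)
// drop the helper and raw columns so the output matches the destination schema
| project-away d, RawData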

Important Update: Azure Automation Update Management and Log Analytics Agent Retirement

Attention Azure users! This is a critical notice regarding the retirement of two key services: Azure Automation Update Management and the Log Analytics agent. Both will be discontinued on August 31, 2024. To ensure uninterrupted update management for your virtual machines, migrating to Azure Update Manager before the retirement date is essential.

Why the change? Microsoft is streamlining its update management offerings by focusing on Azure Update Manager, a robust solution with several advantages. These include:

Simplified onboarding: Azure Update Manager leverages existing Azure features for effortless integration.
Enhanced control: Granular access controls allow for precise management of update deployment.
Flexible automation: Automatic patching capabilities streamline the update process.

Taking action: to avoid disruptions after August 31st, migrating to Azure Update Manager is necessary. Microsoft provides a comprehensive guide to facilitate this transition: "Move from Automation Update Management to Azure Update Manager" - https://learn.microsoft.com/en-us/azure/automation/update-management/overview

This guide details the migration process, ensuring a smooth transfer to the new platform. Don't wait! Begin the migration process today to ensure your virtual machines receive updates seamlessly after the retirement of Azure Automation Update Management and the Log Analytics agent.
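
A quick way to scope the migration is to inventory which machines still have the legacy agent extension installed. A sketch using an Azure Resource Graph query; the extension names below are the usual ones for the Windows and Linux Log Analytics agents, but verify them against your own estate:

resources
| where type == "microsoft.compute/virtualmachines/extensions"
// legacy Log Analytics agent extensions (Windows MMA and Linux OMS agent)
| where name in~ ("MicrosoftMonitoringAgent", "OmsAgentForLinux")
| extend vmId = tolower(substring(id, 0, indexof(id, "/extensions/")))
| project vmId, extensionName = name, resourceGroup, subscriptionId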

Azure Metric vs Performance counters show different values

The returned values for network traffic are totally off, regardless of time frame, between the portal metrics, a Log Analytics query against Perf, and InsightsMetrics (see the Excel screenshot). I have opened the Log Analytics workspace and selected the time range Last 24 hours, and also a single day, 26/03/2024:

Perf
| where TimeGenerated between (datetime(2024-03-26) .. datetime(2024-03-27))
| where Computer == "**********"
| where ObjectName == "Network Interface" and CounterName == "Bytes Sent/sec" or CounterName == "Bytes Received/sec"
| summarize BytsSent = sum(CounterValue) by bin(TimeGenerated, 1d), CounterName

InsightsMetrics
| where TimeGenerated between (datetime(2024-03-26) .. datetime(2024-03-27))
| where Origin == "vm.azm.ms"
| where Computer == "*******"
| where Namespace == "Network"
| where Name == "ReadBytesPerSecond" or Name == "WriteBytesPerSecond"
| extend Tags = parse_json(Tags)
| extend BytestoSec = toreal(Tags.["vm.azm.ms/bytes"])
| sort by TimeGenerated
| project TimeGenerated, Name, Val, BytestoSec
| summarize AggregatedValue = sum(BytestoSec) by bin(TimeGenerated, 1d), Name

I don't know what I'm doing wrong or what I'm misunderstanding. The sample interval in the data collection rule is 15 s, and the sample interval of the metric is 60 s.
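
Part of the gap is likely arithmetic rather than data: Perf and InsightsMetrics store rate samples (bytes per second), so summing every sample over a whole day produces a figure that scales with how many samples were collected (a 15-second interval yields four times as many samples as a 60-second one), while the portal's network metrics report byte totals. A sketch of a more comparable aggregation, averaging the rate per hour and scaling by the interval length (note the parenthesized OR so the filter only matches the Network Interface object):

Perf
| where TimeGenerated between (datetime(2024-03-26) .. datetime(2024-03-27))
| where ObjectName == "Network Interface"
| where CounterName in ("Bytes Sent/sec", "Bytes Received/sec")
// average rate per hour, then convert to an approximate byte count for that hour
| summarize AvgBytesPerSec = avg(CounterValue) by bin(TimeGenerated, 1h), CounterName
| extend ApproxBytesPerHour = AvgBytesPerSec * 3600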

Can I filter what logs need to be sent to my Azure Log Analytics Workspace?

Hello, is it possible to filter which logs are sent to my Azure Log Analytics workspace? In my case, I am sending all the AuditLogs from Microsoft Entra ID to my workspace, but my organization is large and I only need a small group of people's activities to be logged and sent to the workspace. Thank you!
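
One option worth evaluating is a workspace transformation DCR on the AuditLogs table, which filters rows at ingestion time before they are stored (and billed). A rough sketch, assuming AuditLogs supports ingestion-time transformations in your workspace and that the initiating users can be matched by UPN; the two addresses are placeholders:

source
// keep only audit records initiated by the users of interest
| extend InitiatorUpn = tostring(parse_json(tostring(InitiatedBy)).user.userPrincipalName)
| where InitiatorUpn in~ ("alice@contoso.com", "bob@contoso.com")
// drop the helper column so the output matches the AuditLogs schema
| project-away InitiatorUpn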