User Profile
ScottAllison
Iron Contributor
Joined 8 years ago
Recent Discussions
query multiple "contains"
Greetings Community, I'm trying to come up with a way to query for multiple computers, but I have different strings to search for. For example:

    Heartbeat
    | where TimeGenerated >= ago(1h)
    | where Computer contains 'ACOMPUTER1'
    | summarize max(TimeGenerated) by Computer

I can run this query, but I have to execute it for a different string each time:

    Heartbeat
    | where TimeGenerated >= ago(1h)
    | where Computer contains 'ACOMPUTER1'
    | summarize max(TimeGenerated) by Computer

    Heartbeat
    | where TimeGenerated >= ago(1h)
    | where Computer contains 'SERVERABC'
    | summarize max(TimeGenerated) by Computer

    Heartbeat
    | where TimeGenerated >= ago(1h)
    | where Computer contains 'THISMACHINE_B'
    | summarize max(TimeGenerated) by Computer

Is there a way to go through multiple "contains" or "has" statements in a single query? I was thinking that I'd have to build an array in a function or something... any help is appreciated.
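A possible way to collapse this into one query (a sketch, not necessarily the accepted answer) is to put the search strings into an array and use has_any; the list below is just the example strings from the post:

    // Match any of several names in a single pass.
    // has_any matches whole terms; if true substring matching is needed,
    // fall back to chained "or Computer contains '...'" clauses.
    let searchList = dynamic(["ACOMPUTER1", "SERVERABC", "THISMACHINE_B"]);
    Heartbeat
    | where TimeGenerated >= ago(1h)
    | where Computer has_any (searchList)
    | summarize max(TimeGenerated) by Computer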
Need help with a parsing query

I'm having a hard time querying out this bit of JSON (extracted from a larger JSON) into its own columns:

    [{"name":"Category","value":"Direct Agent"},{"name":"Computer","value":"servername.domeain.net"}]

Essentially I want to have a column named agentCategory and a column named serverName with these values in them. Thanks in advance!
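One possible sketch, assuming the array always carries Category first and Computer second, and that the JSON sits in a string column called RawJson in a table called YourTable (both placeholder names):

    // Parse the name/value array and lift the two values into named columns.
    // If the array order is not guaranteed, mv-expand the array and filter on the "name" field instead.
    YourTable
    | extend parsed = parse_json(RawJson)
    | extend agentCategory = tostring(parsed[0].value),
             serverName    = tostring(parsed[1].value)
    | project agentCategory, serverName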
Automatically resolve Heartbeat alerts

Greetings, I'm trying to use the newer feature "Automatically resolve alerts" with a Heartbeat alert, but I am having no luck no matter how I configure the alert. My query:

    Heartbeat
    | where TimeGenerated >= ago(24h)
    | summarize LastHeartbeat=max(TimeGenerated) by Computer, _ResourceId
    | where LastHeartbeat < ago(15m)

Signal logic: ...and under Alert rule details, I have "Automatically resolve alerts" selected. My alerts trigger just fine (I've never had a problem with that), but when I bring a VM back online, the alert does not switch to a "Resolved" condition. All the alerts remain in a "Fired" condition. Am I missing something, or does this feature even work? Thanks in advance!
Need help with a query with multiple if/thens

So I'm having a hard time coming up with a query that will get me the intended results. Any help would be appreciated. My data looks like this:

    Computer    ID
    SERVER-A    12345
    SERVER-A
    SERVER-A    67890
    SERVER-B
    SERVER-C
    SERVER-C    34567

What I'm trying to get in a result:

    - If a Computer has an ID (or multiple IDs), return those rows but not the blanks.
    - If a Computer has ONLY a blank ID, then return that row.

Expected result:

    Computer    ID
    SERVER-A    12345
    SERVER-A    67890
    SERVER-B
    SERVER-C    34567

I'm sure I'm missing something simple...
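One way to express the "keep blank rows only when a computer has nothing else" rule (a sketch using the sample rows above as an inline datatable):

    // Count non-blank IDs per computer, then keep a row if it has an ID,
    // or if its computer has no non-blank IDs at all.
    let data = datatable(Computer:string, ID:string)
    [
        "SERVER-A", "12345",
        "SERVER-A", "",
        "SERVER-A", "67890",
        "SERVER-B", "",
        "SERVER-C", "",
        "SERVER-C", "34567"
    ];
    data
    | summarize NonBlankIds = countif(isnotempty(ID)) by Computer
    | join kind=inner data on Computer
    | where isnotempty(ID) or NonBlankIds == 0
    | project Computer, ID
    | order by Computer asc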
Referencing Key Vault to access external data from Log Analytics

Hi everyone! I've built a nifty solution that allows me to use the externaldata operator (https://docs.microsoft.com/en-us/azure/data-explorer/kusto/query/externaldata-operator?pivots=azuremonitor) to query some static data from blob storage via Log Analytics. This works great, but now I want to save this as a Function in Log Analytics, and the token is easily read. So my questions are:

1. Is there a way to "mask" the token in the KQL query so that it isn't visible? Or...
2. Is there a way to reference Key Vault to get the secret token?

Any help is appreciated!
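For context, a sketch of what such an externaldata call usually looks like (the storage account, container, file, schema, and token below are all placeholders); the SAS token sits inside the query text, which is why it is readable once the query is saved as a function. Prefixing the literal with h obfuscates it in query telemetry, but as far as I know it does not hide it from anyone who can open the saved function, and I'm not aware of a native way to pull the token from Key Vault inside KQL:

    // Sketch only: placeholder storage URI, schema, and SAS token.
    externaldata (Computer:string, Owner:string)
    [
        // h prefix = obfuscated in query telemetry, still visible in the saved function body.
        h@"https://mystorageaccount.blob.core.windows.net/static/servers.csv?<SAS-token-placeholder>"
    ]
    with (format = "csv", ignoreFirstRecord = true)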
Kusto Explorer broken for Log Analytics

Recently, and up until this week, I've been utilizing Kusto Explorer (https://docs.microsoft.com/en-us/azure/kusto/tools/kusto-explorer) instead of the sorely lacking web interface for Log Analytics. However, now I'm unable to connect to my Log Analytics workspaces using the tool: "An error occurred while sending the request." I've been following https://docs.microsoft.com/en-us/azure/data-explorer/query-monitor-data, which lets the Azure Data Explorer Web UI query Log Analytics (also superior to the out-of-the-box Log Analytics experience), to connect Kusto Explorer. Is anyone else having success?
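For anyone comparing setups: the connection I believe that article describes goes through the Azure Data Explorer proxy, added to Kusto Explorer as a cluster with a URI of this shape (every segment below is a placeholder):

    https://ade.loganalytics.io/subscriptions/<subscription-id>/resourcegroups/<resource-group>/providers/microsoft.operationalinsights/workspaces/<workspace-name>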
Ignore bucket based on offset

I'm trying to visualize the number of Heartbeats in 1-hour buckets for the last 48 hours. However, no matter what I use for an offset, I get a drop in the count at the beginning and end of the dataset. How can I get rid of these in the visual (they're confusing)?
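A sketch of one way to trim the confusing edges, assuming the drops come from the first and last bins covering less than a full hour:

    // Bucket heartbeats per hour, then drop the leading partial bucket
    // and the current bucket that is still being filled.
    Heartbeat
    | where TimeGenerated >= ago(48h)
    | summarize Heartbeats = count() by bin(TimeGenerated, 1h)
    | where TimeGenerated >= bin(ago(48h), 1h) + 1h
    | where TimeGenerated < bin(now(), 1h)
    | render timechart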
New Log Analytics experience

If Microsoft is trying to get me to stop using their UI for Log Analytics, then the latest update might actually work. Please bring back WORKSPACE FAVORITES. Our organization has multiple workspaces, and having to click SELECT SCOPE and click through blades and multiple lists just to switch to a different workspace is extremely frustrating and time consuming. I just want my list of saved workspaces that I had before, please.
Usage stats for Log Analytics?

Is there a way to view usage statistics for Log Analytics? Specifically, is there a way to view "when/what/who" for the queries that have been executed in Log Analytics? I'd like to be able to generate some usage reports so that we can make sure that the service is being properly utilized by our users.
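One avenue worth checking (a sketch, and it assumes the workspace's query audit logs are enabled so that the LAQueryLogs table is populated):

    // Who ran what, and when, over the last 7 days.
    LAQueryLogs
    | where TimeGenerated >= ago(7d)
    | project TimeGenerated, AADEmail, QueryText, ResponseCode
    | order by TimeGenerated desc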
Log Analytics table growth

Greetings community! I'm using the following query to keep a close eye on my top tables in Log Analytics:

    search *
    | summarize count() by $table
    | project Table=$table, Count=count_
    | top 5 by Count

This is great, but I'd also like to track the growth on a day-to-day basis so that I can graph it and catch when there is a big jump in consumption. Any ideas? Thanks!
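A sketch of one way to get the day-over-day view from the Usage table (Quantity is reported in MB), which is cheaper to run repeatedly than search *:

    // Daily billable volume per table, ready to render as a timechart.
    Usage
    | where TimeGenerated >= ago(31d)
    | where IsBillable == true
    | summarize VolumeGB = sum(Quantity) / 1024 by DataType, bin(TimeGenerated, 1d)
    | render timechart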
SQL cluster performance counters

Good morning everyone, our organization is nearly done with our migration from SCOM to Azure Monitor; it's been quite a ride. I've run into a challenge that I can't seem to find an answer for. We have some on-premises SQL clusters that we've added the MMA to and moved to Azure Monitor. However, they do not appear to be pulling SQL performance counters. I suspect this is due to the virtual nature of the SQL cluster:

    SQL Server name = SERVERV001
    Physical nodes = SERVERP001a and SERVERP002a

Since the virtual name (SERVERV001) is not added to Log Analytics, I have to rely on the physical nodes (SERVERP001a and SERVERP002a) to view performance counters, except for SQL counters. How does one circumvent this issue? Thanks in advance!
Heartbeat query, show negative results

Greetings Community, I'm trying to formulate a query whereby I provide a list of servers to check for a heartbeat in the last 6 hours, but I only want to return the servers THAT DO NOT HAVE A RECORD in the Heartbeat table. For example, in the following query, "SERVER123" and "SERVER456" are heartbeating, but "SERVER789" has never sent a heartbeat. How can I get the query below to only spit out "SERVER789" as not having a heartbeat (or any entry at all, for that matter)?

    Heartbeat
    | project TimeGenerated, Computer
    | where TimeGenerated >= ago(6h)
    | where Computer in ("SERVER123","SERVER456","SERVER789")
    | summarize arg_max(TimeGenerated, *) by Computer
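A sketch of one way to invert the result: put the watch list in its own table and anti-join away the machines that did report:

    // Keep only the listed servers that have NO Heartbeat row in the last 6 hours.
    let watchlist = datatable(Computer:string)
    [
        "SERVER123", "SERVER456", "SERVER789"
    ];
    watchlist
    | join kind=leftanti (
        Heartbeat
        | where TimeGenerated >= ago(6h)
        | distinct Computer
    ) on Computer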
Alerting on Heartbeat issue

Surely there is a better solution for this? My use case doesn't work:

1. Create a computer group.
2. Alert when an agent in the computer group has not "heartbeated" for over 24 hours.

By the logic in Alerts, even if I set the query as I do below, the time span that I define is ignored because of the "Period" in Alerts:

    Heartbeat
    | project TimeGenerated, Computer
    | where TimeGenerated < now()
    | where Computer in (COMPUTERGROUP)
    | summarize ["Last Heartbeat"]=max(TimeGenerated) by Computer
    | where ["Last Heartbeat"] < ago(24h)

This query, when run outside of Alerts, returns several machines that have no heartbeat in the last 24 hours, going back as far as we've been collecting the data. But because Alerts confines me to a maximum 24-hour period to check against, I get 0 results. I essentially want an alert generated every 24 hours as a "nag alert" with a list of the machines that have not sent heartbeat data in over 24 hours. Is there any way to get around this extremely limiting design?
Identify workspace after a Union

I'm executing a query with a union on the Heartbeat table of two workspaces. I'd like to have an additional column that identifies which workspace the result is from. The query:

    union isfuzzy=true workspace("thisworkspace").Heartbeat, workspace("thatworkspace").Heartbeat
    | project TimeGenerated, Computer, ResourceId, OSType
    | where TimeGenerated < now()
    | where ResourceId == ""
    | where OSType == "Windows" or OSType == "Linux"
    | summarize arg_max(TimeGenerated, *) by Computer
    | distinct Computer

I'd like the results to look like this:

    ServerABC    thisworkspace
    ServerDEF    thatworkspace
    ServerGHI    thatworkspace
    Server123    thisworkspace

Is there a way to do this?
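One possible approach (a sketch using the same placeholder workspace names as the post) is to tag each leg of the union before combining them:

    // Stamp each workspace's rows with a label and carry it through arg_max.
    union isfuzzy=true
        (workspace("thisworkspace").Heartbeat | extend SourceWorkspace = "thisworkspace"),
        (workspace("thatworkspace").Heartbeat | extend SourceWorkspace = "thatworkspace")
    | where TimeGenerated < now()
    | where ResourceId == ""
    | where OSType == "Windows" or OSType == "Linux"
    | summarize arg_max(TimeGenerated, SourceWorkspace) by Computer
    | project Computer, SourceWorkspace

If a GUID is acceptable instead of a friendly name, the Heartbeat table's TenantId column should already identify the source workspace without any tagging.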
Application Pool monitoring in Log Analytics

We're in the process of moving all of our monitoring from SCOM to Log Analytics, and obviously Management Packs are not always going to translate to LA. Has anyone done IIS Application Pool monitoring in Log Analytics yet? Specifically, I'm looking for a way to monitor whether an application pool is up or not. It's not clear how the MP in SCOM does this, so it's not clear what I should be looking for in Log Analytics. EventLog?
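One hedged idea, not taken from the SCOM MP: Windows exposes an APP_POOL_WAS performance object with a "Current Application Pool State" counter per pool. If that counter is added to the workspace's Windows performance counter collection, a query along these lines could flag pools that are not running (the counter name and state values should be verified; 3 is commonly documented as the running state):

    // Latest reported state per application pool; alert on anything not running.
    Perf
    | where ObjectName == "APP_POOL_WAS" and CounterName == "Current Application Pool State"
    | summarize arg_max(TimeGenerated, CounterValue) by Computer, InstanceName
    | where CounterValue != 3   // 3 = running (verify against your environment)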
Row count for all tables

Good morning everyone! I'm looking for a query that will return all DataTypes and their current row count. This is easy to do for individual DataTypes:

    AzureMetrics
    | where TimeGenerated < now()
    | count

The Usage table, while useful, does not have this detail, unfortunately. Any suggestions are appreciated. Thanks!
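A sketch using union withsource to stamp each row with its table name; note that this scans every table, so it should be time-bounded and can be slow on large workspaces:

    // Row count per table over the chosen window.
    union withsource=TableName *
    | where TimeGenerated >= ago(1d)
    | summarize RowCount = count() by TableName
    | order by RowCount desc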
Filter preview

When using the Log Analytics query portal, every time we execute a query the portal automatically switches to the Filter (preview) pane. When working with complex data (such as AzureDiagnostics or Syslog), this hangs the browser, sometimes for several minutes. Can we please have the option to turn this feature OFF? I personally find it useless for my day-to-day work anyway (and I live in Log Analytics).
Last 3 values for multiple categories

All, I'm trying to get the last three values of a dataset by a particular category, and I am having trouble figuring out the best way to do this. I have the following columns:

    TimeGenerated, MonitorName, SiteName, Availability

What I need is the last (most recent) 3 values of Availability by SiteName and MonitorName. I want my result to look something like this:

    TimeGenerated              SiteName       MonitorName   Availability
    2018-10-05T06:44:37.353    Houston        Test1         0
    2018-10-05T06:34:37.353    Houston        Test1         1
    2018-10-05T06:24:37.353    Houston        Test1         0
    2018-10-05T06:31:00.000    Houston        Test2         1
    2018-10-05T06:21:00.000    Houston        Test2         1
    2018-10-05T06:11:00.000    Houston        Test2         1
    2018-10-05T06:51:00.000    Houston        Test3         0
    2018-10-05T06:41:00.000    Houston        Test3         0
    2018-10-05T06:31:00.000    Houston        Test3         1
    2018-10-05T06:38:00.000    Los Angeles    Test1         1
    2018-10-05T06:28:00.000    Los Angeles    Test1         1
    2018-10-05T06:18:00.000    Los Angeles    Test1         1
    2018-10-05T06:55:00.000    Los Angeles    Test2         0
    2018-10-05T06:45:00.000    Los Angeles    Test2         0
    2018-10-05T06:35:00.000    Los Angeles    Test2         1

I tried using "top-nested" to get a result like this, but can't seem to get it right. Does anyone have any thoughts? Thanks!
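A sketch of one approach using row_number with a restart condition instead of top-nested, assuming the source table is called AvailabilityData (a placeholder):

    // Number rows within each SiteName/MonitorName group, newest first, and keep the first three.
    AvailabilityData
    | order by SiteName asc, MonitorName asc, TimeGenerated desc
    | extend Rank = row_number(1, prev(SiteName) != SiteName or prev(MonitorName) != MonitorName)
    | where Rank <= 3
    | project TimeGenerated, SiteName, MonitorName, Availability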