Latest Discussions
KQL query - specific laptops with certain CPU
Hi all, can I specify a specific range of CPUs within KQL? There's no such field within DeviceInfo.
sifsfisfops · Nov 01, 2025 · Copper Contributor · 297 Views · 0 likes · 1 Comment
Logic Flow name in Azure Log Analytics

dependencies
| where type == "Ajax"
| where success == "False"
| where name has "logicflows"
| project timestamp, name, resultCode, duration, type, target, data, operation_Name, appName
| order by timestamp desc

This KQL query in Azure Application Insights > Azure Log Analytics is used to get errors for logic flows. It returns the data, but I cannot see the logic flow name or ID anywhere. Is there any way to fetch the logic flow ID? The Azure Application Insights resource is registered for a Power App, where we are using Power Automate flows to call APIs. We need the flow's name in analytics. I looked through the database and there is no field for the logic flow's name or ID, though when viewed under Users > Sessions it shows the name in requestHeaders.
oseverma9 · Nov 01, 2025 · Copper Contributor · 353 Views · 0 likes · 1 Comment
Azure Functions vs. Azure Container Apps: Choosing Your Serverless Compute

As organizations continue to embrace cloud-native architectures, the demand for serverless computing has skyrocketed. Microsoft Azure offers multiple options for deploying applications without worrying about managing infrastructure. Two of the most popular choices are Azure Functions and Azure Container Apps. While both enable developers to focus on code rather than servers, their use cases, scaling behavior, and operational models differ significantly. Let’s break down the key distinctions and help you choose the right tool for your next project. https://dellenny.com/azure-functions-vs-azure-container-apps-choosing-your-serverless-compute/
25 Views · 0 likes · 0 Comments
Should I ingest AADNonInteractiveUserSignInLogs from Entra ID to a LAW

As the title says, I am interested in expert opinions on whether I should include the AADNonInteractiveUserSignInLogs from Entra ID in a LAW, as this table dwarfs the SignInLogs in terms of data volume (by a factor of 8x) and therefore creates higher costs. Secondly, I am curious whether there are ways to reduce the amount of non-interactive sign-in logs that are generated in the first place.
CSI · Aug 05, 2025 · Copper Contributor · 168 Views · 6 likes · 2 Comments
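An editor's note, not from the thread: one way to put a number on the trade-off before deciding is to compare the billable volume of the two tables over the last 30 days using the Usage table. A minimal Python sketch, assuming the azure-identity and azure-monitor-query packages and a placeholder workspace ID:

from datetime import timedelta
from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

workspace_id = "<workspace-guid>"  # placeholder

# Billable GB ingested per sign-in table over the last 30 days (Usage.Quantity is in MB).
query = """
Usage
| where IsBillable == true
| where DataType in ("SigninLogs", "AADNonInteractiveUserSignInLogs")
| summarize BillableGB = sum(Quantity) / 1024 by DataType
"""

client = LogsQueryClient(DefaultAzureCredential())
response = client.query_workspace(workspace_id, query, timespan=timedelta(days=30))
for table in response.tables:
    for row in table.rows:
        print(dict(zip(table.columns, row)))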
Cost-effective alternatives to control table for processed files in Azure Synapse

Hello, good morning. In Azure Synapse Analytics, I want to have a control table for the files that have already been processed by the bronze or silver layers. For this, I wanted to create a dedicated pool, but I see that at the minimum performance level it charges 1.51 USD per hour (as I show in the image), so I wanted to know what other more economical alternatives I have, since I will need to do inserts and updates to this control table and with a serverless option this is not possible.
JuanMahecha · Jul 07, 2025 · Copper Contributor · 143 Views · 1 like · 2 Comments
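An editor's sketch, not from the thread: one commonly used alternative is to keep the control table as a Delta Lake table in ADLS and maintain it from a Synapse Spark pool, which supports inserts and updates via MERGE and is only billed while the pool is running. The path and column names below are illustrative, and spark is the session provided in a Synapse notebook:

from delta.tables import DeltaTable
from pyspark.sql import functions as F

# Illustrative location for the control table; adjust to your lake layout.
control_path = "abfss://bronze@<storageaccount>.dfs.core.windows.net/control/processed_files"

# Files processed by the run that just finished (illustrative values).
updates = spark.createDataFrame(
    [("sales/2025/07/07/file1.parquet", "processed")],
    ["file_name", "status"],
).withColumn("processed_at", F.current_timestamp())

if DeltaTable.isDeltaTable(spark, control_path):
    # Upsert: update existing rows by file_name, insert new ones.
    (DeltaTable.forPath(spark, control_path).alias("t")
        .merge(updates.alias("s"), "t.file_name = s.file_name")
        .whenMatchedUpdateAll()
        .whenNotMatchedInsertAll()
        .execute())
else:
    # First run: create the control table.
    updates.write.format("delta").save(control_path)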
How to trigger an Azure Synapse pipeline after a file is dropped into an Azure file share

I am currently looking for ideas to trigger an Azure Synapse pipeline after a file is dropped into an Azure file share. I feel that Azure Functions might be a suitable candidate to implement this; Azure Synapse pipelines don't natively support this at the moment. Microsoft does offer a custom events trigger extension capability; however, so far I have found very little evidence demonstrating how to leverage this capability to trigger my Synapse pipeline. Any assistance with approaches to solving this would be greatly appreciated. Thanks.
mikejminto · Jul 07, 2025 · Copper Contributor · 331 Views · 0 likes · 1 Comment
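An editor's sketch, not from the thread, of the Azure Functions route: once the function has detected the new file (for example, via a timer-triggered poll of the file share, omitted here), it can start the Synapse pipeline through the workspace's data-plane REST API. Workspace name, pipeline name, and the parameter name are placeholders, and the function's identity needs an appropriate Synapse RBAC role on the workspace:

import requests
from azure.identity import DefaultAzureCredential

workspace_name = "<synapse-workspace>"  # placeholder
pipeline_name = "<pipeline-name>"       # placeholder

# Acquire a token for the Synapse data plane.
token = DefaultAzureCredential().get_token("https://dev.azuresynapse.net/.default").token

url = (
    f"https://{workspace_name}.dev.azuresynapse.net/"
    f"pipelines/{pipeline_name}/createRun?api-version=2020-12-01"
)

# Optional pipeline parameters; "sourceFile" is an illustrative parameter name.
body = {"sourceFile": "incoming/report_2025-07-07.csv"}

response = requests.post(url, json=body, headers={"Authorization": f"Bearer {token}"}, timeout=30)
response.raise_for_status()
print("Started pipeline run:", response.json().get("runId"))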
ADF Pipeline Data flow issue

I've created a data flow where the source is ADLS and the sink is an ADLS delta file. I attempted to run the data flow, but I encountered the following issue:

Job failed due to reason: com.microsoft.dataflow.Issues: DF-EXPR-200 - Function 'outputs' applies to only sink transformation with right saveOrder - [418 464 564 737],[207 319 418 464 564 737],EXE-0001, surrogateKey1 derive( ) ~> derivedColumn1,Dataflow cannot be analyzed as a graph,[62 207 319 418 464 564 737]
DF-EXPR-200 - Function 'outputs' applies to only sink transformation with right saveOrder - [418 464 564 737],EXE-0001, surrogateKey1 derive( ) ~> derivedColumn1,Dataflow cannot be analyzed as a graph,[209 319 418 464 564 737]

Sheeraz27 · Jun 01, 2025 · Copper Contributor · 660 Views · 0 likes · 1 Comment
Specify which Entra ID Sign-in logs are sent to Log Analytics Workspace

Hi, as the title says, I am curious whether it is possible to limit which sign-in logs are sent to a Log Analytics Workspace. We currently have a couple of service accounts in use that generate a high amount of traffic (an issue being worked on separately) and would like to exclude the logs from these specific users from being sent to the LAW.
Solved · CSI · Mar 17, 2025 · Copper Contributor · 124 Views · 0 likes · 1 Comment
Unable to retrieve query data using Log Analytics API

I have been trying to access Azure KQL data with the help of the Log Analytics REST API. The connection is successful, showing a 200 response, but I am only getting the table headers and not getting any data in the table. Does anyone know how to resolve this? Code snippet:

import requests
import urllib3
from azure.identity import DefaultAzureCredential
from datetime import datetime, timedelta, timezone
import certifi
import os

os.environ["REQUESTS_CA_BUNDLE"] = certifi.where()
verify_cert = certifi.where()

credential = DefaultAzureCredential()

# Set the start and end time for the query
end_time = datetime.now(timezone.utc)
start_time = end_time - timedelta(hours=6)

# Set the query string
query = '''
KubePodInventory
| take 5
'''

# Set the workspace ID
workspace_id = "XXXXXXXXXXXXXXXXXXXXXXXX"

# Set the API endpoint
api_endpoint = f"https://api.loganalytics.io/v1/workspaces/{workspace_id}/query"

# Set the request payload
payload = {
    "query": query,
    "timespan": f"{start_time.isoformat()}Z/{end_time.isoformat()}Z"
}

# Set the request headers
headers = {
    "Content-Type": "application/json"
}

# Disable SSL certificate verification
urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)

# Authenticate the request using the Azure credential
access_token = credential.get_token("https://api.loganalytics.io/.default").token
headers["Authorization"] = f"Bearer {access_token}"

# Send the POST request
response = requests.post(api_endpoint, json=payload, headers=headers, verify=False)

# Check the response status
if response.status_code == 200:
    data = response.json()
    tables = data.get('tables', [])
    if tables:
        table = tables[0]  # Assuming there is only one table returned
        columns = table.get('columns', [])
        rows = table.get('rows', [])
        if columns and rows:
            for row in rows:
                for i, column in enumerate(columns):
                    column_name = column['name']
                    column_type = column['type']
                    row_value = row[i]
                    print(f"Column name: {column_name}, Data type: {column_type}, Value: {row_value}")
        else:
            print("Empty table or no data in table")
    else:
        print("No tables found in the response")
else:
    print(f"Request failed with status code: {response.status_code}")
    print(f"Error message: {response.text}")

Krishna1994 · Jan 23, 2025 · Copper Contributor · 679 Views · 1 like · 1 Comment
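An editor's observation, not part of the original post: datetime.now(timezone.utc).isoformat() already ends in "+00:00", so appending "Z" makes the timespan value sent to the API a malformed ISO-8601 interval, which is worth ruling out as the reason the query window returns no rows. A small sketch of one way to build the timespan instead, reusing start_time and end_time from the snippet above:

# Build the interval from UTC timestamps that are valid with a trailing "Z".
timespan = (
    f"{start_time.strftime('%Y-%m-%dT%H:%M:%SZ')}/"
    f"{end_time.strftime('%Y-%m-%dT%H:%M:%SZ')}"
)
payload = {"query": query, "timespan": timespan}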
Data archiving of delta table in Azure Databricks

Hi all, currently I am researching data archiving for Delta table data on the Azure platform, as there is a data retention policy within the company. I have studied the official Databricks documentation on archival support (https://docs.databricks.com/en/optimizations/archive-delta.html). It says: "If you enable this setting without having lifecycle policies set for your cloud object storage, Databricks still ignores files based on this specified threshold, but no data is archived." Therefore, I am thinking about how to configure the lifecycle policy in the Azure storage account, and I have read the Microsoft documentation on lifecycle management (https://learn.microsoft.com/en-us/azure/storage/blobs/lifecycle-management-overview). Say the Delta table data is stored in "test-container/sales" and there are lots of "part-xxxx.snappy.parquet" data files in that folder. Should I simply specify "tierToArchive", "daysAfterCreationGreaterThan: 1825", and "prefixMatch: ["test-container/sales"]"? However, I am worried about whether this archive mechanism will impact normal Delta table operations. Besides, what if a parquet data file moved to the archive tier contains both data created before five years ago and data created after; is that possible? Could it move data to the archive tier earlier than five years? I would highly appreciate it if someone could help me out with the questions above. Thanks in advance.
Brian_169 · Jan 04, 2025 · Copper Contributor · 316 Views · 0 likes · 1 Comment
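For reference, an editor's sketch, not from the thread, of what the rule described in the post looks like as a lifecycle-management policy definition; the rule name is illustrative, 1825 days approximates five years, and prefixMatch starts with the container name as in the post. The resulting JSON can be applied to the storage account from the portal, the Azure CLI, or the management SDK.

import json

# Editor's sketch of the lifecycle rule described in the post.
policy = {
    "rules": [
        {
            "name": "archive-sales-parquet",  # illustrative rule name
            "enabled": True,
            "type": "Lifecycle",
            "definition": {
                "filters": {
                    "blobTypes": ["blockBlob"],
                    "prefixMatch": ["test-container/sales"],
                },
                "actions": {
                    "baseBlob": {
                        "tierToArchive": {"daysAfterCreationGreaterThan": 1825}
                    }
                },
            },
        }
    ]
}

print(json.dumps(policy, indent=2))  # e.g. save as policy.json and apply to the storage account

Whether moving a live Delta table's data files to the archive tier is safe is exactly what the Databricks archival-support setting linked in the post is meant to coordinate, so that part is best confirmed against the Databricks documentation rather than this sketch.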
Tags
- AMA (18 Topics)
- Log Analytics (6 Topics)
- azure (6 Topics)
- Synapse (3 Topics)
- azure monitor (3 Topics)
- Log Analytics Workspace (3 Topics)
- Stream Analytics (2 Topics)
- azure databricks (2 Topics)
- Azure Synapse Analytics (2 Topics)
- Azure Log Analytics (2 Topics)