Building a Custom Continuous Export Pipeline for Azure Application Insights
1. Introduction

MonitorLiftApp is a modular Python application designed to export telemetry data from Azure Application Insights to custom sinks such as Azure Data Lake Storage Gen2 (ADLS). This solution is ideal when Event Hub integration is not feasible, providing a flexible, maintainable, and scalable alternative for organizations that need to move telemetry data for analytics, compliance, or integration scenarios.

2. Problem Statement

Exporting telemetry from Azure Application Insights is commonly achieved via Event Hub streaming. However, there are scenarios where Event Hub integration is not possible due to cost, complexity, or architectural constraints. In such cases, organizations need a reliable, incremental, and customizable way to extract telemetry data and move it to a destination of their choice, such as ADLS, SQL, or REST APIs.

3. Investigation

Why not Event Hub?
- Event Hub integration may not be available in all environments.
- Some organizations have security or cost concerns.
- Custom sinks (like ADLS) may be required for downstream analytics or compliance.

Alternative approach
- Use the Azure Monitor Query SDK to periodically export data.
- Build a modular, pluggable Python app for maintainability and extensibility.

4. Solution

4.1 Architecture Overview

MonitorLiftApp is structured into four main components:
- Configuration: centralizes all settings and credentials.
- Main application: orchestrates the export process.
- Query execution: runs KQL queries and serializes results.
- Sink abstraction: allows easy swapping of data targets (e.g., ADLS, SQL).

4.2 Configuration (app_config.py)

All configuration is centralized in app_config.py, making it easy to adapt the app to different environments.

    CONFIG = {
        "APPINSIGHTS_APP_ID": "<your-app-id>",
        "APPINSIGHTS_WORKSPACE_ID": "<your-workspace-id>",
        "STORAGE_ACCOUNT_URL": "<your-adls-url>",
        "CONTAINER_NAME": "<your-container>",
        "Dependencies_KQL": "dependencies \n limit 10000",
        "Exceptions_KQL": "exceptions \n limit 10000",
        "Pages_KQL": "pageViews \n limit 10000",
        "Requests_KQL": "requests \n limit 10000",
        "Traces_KQL": "traces \n limit 10000",
        "START_STOP_MINUTES": 5,
        "TIMER_MINUTES": 5,
        "CLIENT_ID": "<your-client-id>",
        "CLIENT_SECRET": "<your-client-secret>",
        "TENANT_ID": "<your-tenant-id>"
    }

Explanation: This configuration file contains all the parameters needed to connect to Azure resources, define the KQL queries, and schedule the export job. Centralizing these settings keeps the app easy to maintain and adapt.
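One practical adaptation: for anything beyond local testing, you may not want secrets sitting in a source file. A minimal sketch of letting environment variables override app_config.py values (the get_setting helper is hypothetical, not part of MonitorLiftApp):

    import os

    from app_config import CONFIG

    def get_setting(key, default=None):
        # Environment variables (e.g., injected by the host, a container
        # runtime, or a CI/CD pipeline) take precedence over app_config.py.
        return os.environ.get(key, CONFIG.get(key, default))

    client_secret = get_setting("CLIENT_SECRET")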
4.3 Main Application (main.py)

The main application is the entry point and can be run as a Python console app. It loads the configuration, sets up credentials, and runs the export job on a schedule.

    from app_config import CONFIG
    from azure.identity import ClientSecretCredential
    from monitorlift.query_runner import run_all_queries
    from monitorlift.target_repository import ADLSTargetRepository


    def build_env():
        env = {}
        keys = [
            "APPINSIGHTS_WORKSPACE_ID", "APPINSIGHTS_APP_ID",
            "STORAGE_ACCOUNT_URL", "CONTAINER_NAME",
            # Copy the scheduling settings too; without them the
            # env.get(...) fallbacks below would always be used.
            "START_STOP_MINUTES", "TIMER_MINUTES"
        ]
        for k in keys:
            env[k] = CONFIG[k]
        for k, v in CONFIG.items():
            if k.endswith("KQL"):
                env[k] = v
        return env


    class MonitorLiftApp:
        def __init__(self, client_id, client_secret, tenant_id):
            self.env = build_env()
            self.credential = ClientSecretCredential(tenant_id, client_id, client_secret)
            self.target_repo = ADLSTargetRepository(
                account_url=self.env["STORAGE_ACCOUNT_URL"],
                container_name=self.env["CONTAINER_NAME"],
                cred=self.credential
            )

        def run(self):
            run_all_queries(self.target_repo, self.credential, self.env)


    if __name__ == "__main__":
        import time

        client_id = CONFIG["CLIENT_ID"]
        client_secret = CONFIG["CLIENT_SECRET"]
        tenant_id = CONFIG["TENANT_ID"]
        app = MonitorLiftApp(client_id, client_secret, tenant_id)
        timer_interval = app.env.get("TIMER_MINUTES", 5)
        print(f"Starting continuous export job. Interval: {timer_interval} minutes.")
        while True:
            print("\n[INFO] Running export job at", time.strftime('%Y-%m-%d %H:%M:%S'))
            try:
                app.run()
                print("[INFO] Export complete.")
            except Exception as e:
                print(f"[ERROR] Export failed: {e}")
            print(f"[INFO] Sleeping for {timer_interval} minutes...")
            time.sleep(timer_interval * 60)

Explanation: The app can be run from any machine with Python and the required libraries installed, whether locally or in the cloud (VM, container, etc.). No compilation is needed; just run it as a Python script. Optionally, you can package it as an executable using tools like PyInstaller. The main loop schedules the export job at regular intervals.
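Because the loop above runs forever, a container host that stops the app mid-export will kill it abruptly. A minimal, hypothetical sketch (not part of the original app, and POSIX-oriented) of handling termination signals so an in-flight export can finish first:

    import signal
    import time

    shutdown_requested = False

    def request_shutdown(signum, frame):
        # Flip a flag instead of exiting immediately, so an in-flight
        # export iteration can complete before the loop exits.
        global shutdown_requested
        shutdown_requested = True

    signal.signal(signal.SIGTERM, request_shutdown)  # sent by most container hosts on stop
    signal.signal(signal.SIGINT, request_shutdown)   # Ctrl+C

    while not shutdown_requested:
        # app.run() would go here
        time.sleep(5)

    print("[INFO] Shutdown requested; exiting cleanly.")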
4.4 Query Execution (query_runner.py)

This module orchestrates the KQL queries, runs them in parallel, and serializes the results.

    import datetime
    import json
    from concurrent.futures import ThreadPoolExecutor, as_completed

    from azure.monitor.query import LogsQueryClient


    def run_query_for_kql_var(kql_var, target_repo, credential, env):
        query_name = kql_var[:-4]
        print(f"[START] run_query_for_kql_var: {kql_var}")
        query_template = env[kql_var]
        app_id = env["APPINSIGHTS_APP_ID"]
        workspace_id = env["APPINSIGHTS_WORKSPACE_ID"]
        try:
            latest_ts = target_repo.get_latest_timestamp(query_name)
            print(f"Latest timestamp for {query_name}: {latest_ts}")
        except Exception as e:
            print(f"Error getting latest timestamp for {query_name}: {e}")
            return
        # Export the window starting at the last exported timestamp.
        start = latest_ts
        time_window = env.get("START_STOP_MINUTES", 5)
        end = start + datetime.timedelta(minutes=time_window)
        query = f"app('{app_id}')." + query_template
        logs_client = LogsQueryClient(credential)
        try:
            response = logs_client.query_workspace(workspace_id, query, timespan=(start, end))
            if response.tables and len(response.tables[0].rows) > 0:
                print(f"Query for {query_name} returned {len(response.tables[0].rows)} rows.")
                table = response.tables[0]
                # Datetime values are not JSON-serializable; convert them to ISO strings.
                rows = [
                    [v.isoformat() if isinstance(v, datetime.datetime) else v for v in row]
                    for row in table.rows
                ]
                result_json = json.dumps({"columns": table.columns, "rows": rows})
                target_repo.save_results(query_name, result_json, start, end)
                print(f"Saved results for {query_name}")
        except Exception as e:
            print(f"Error running query or saving results for {query_name}: {e}")


    def run_all_queries(target_repo, credential, env):
        print("[INFO] run_all_queries triggered.")
        kql_vars = [k for k in env if k.endswith('KQL') and not k.startswith('APPSETTING_')]
        print(f"Number of KQL queries to run: {len(kql_vars)}. KQL vars: {kql_vars}")
        with ThreadPoolExecutor(max_workers=len(kql_vars)) as executor:
            futures = {
                executor.submit(run_query_for_kql_var, kql_var, target_repo, credential, env): kql_var
                for kql_var in kql_vars
            }
            for future in as_completed(futures):
                kql_var = futures[future]
                try:
                    future.result()
                except Exception as exc:
                    print(f"[ERROR] Exception in query {kql_var}: {exc}")

Explanation: Queries are executed in parallel for efficiency. Results are serialized and saved to the configured sink. Incremental export is achieved by tracking the latest exported timestamp for each query.

4.5 Sink Abstraction (target_repository.py)

This module abstracts the sink implementation, allowing you to swap out ADLS for SQL, a REST API, or other targets.

    from abc import ABC, abstractmethod
    import datetime

    from azure.storage.blob import BlobServiceClient


    class TargetRepository(ABC):
        @abstractmethod
        def get_latest_timestamp(self, query_name):
            pass

        @abstractmethod
        def save_results(self, query_name, data, start, end):
            pass


    class ADLSTargetRepository(TargetRepository):
        def __init__(self, account_url, container_name, cred):
            self.account_url = account_url
            self.container_name = container_name
            self.credential = cred
            self.blob_service_client = BlobServiceClient(account_url=account_url, credential=cred)

        def get_latest_timestamp(self, query_name, fallback_hours=3):
            blob_client = self.blob_service_client.get_blob_client(
                self.container_name, f"{query_name}/latest_timestamp.txt")
            try:
                timestamp_str = blob_client.download_blob().readall().decode()
                return datetime.datetime.fromisoformat(timestamp_str)
            except Exception as e:
                if hasattr(e, 'error_code') and e.error_code == 'BlobNotFound':
                    print(f"[INFO] No timestamp blob for {query_name}, starting from {fallback_hours} hours ago.")
                else:
                    print(f"[WARNING] Could not get latest timestamp for {query_name}: {type(e).__name__}: {e}")
                return datetime.datetime.utcnow() - datetime.timedelta(hours=fallback_hours)

        def save_results(self, query_name, data, start, end):
            filename = f"{query_name}/{start:%Y%m%d%H%M}_{end:%Y%m%d%H%M}.json"
            blob_client = self.blob_service_client.get_blob_client(self.container_name, filename)
            try:
                blob_client.upload_blob(data, overwrite=True)
                print(f"[SUCCESS] Saved results to blob for {query_name} from {start} to {end}")
            except Exception as e:
                print(f"[ERROR] Failed to save results to blob for {query_name}: {type(e).__name__}: {e}")
            ts_blob_client = self.blob_service_client.get_blob_client(
                self.container_name, f"{query_name}/latest_timestamp.txt")
            try:
                ts_blob_client.upload_blob(end.isoformat(), overwrite=True)
            except Exception as e:
                print(f"[ERROR] Failed to update latest timestamp for {query_name}: {type(e).__name__}: {e}")

Explanation: The sink abstraction allows you to easily switch between different storage backends. The ADLS implementation saves both the results and the latest timestamp used for incremental exports.
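As an illustration of that pluggability, here is a minimal, hypothetical sink (not part of the original app) that writes results to the local filesystem instead of ADLS; it only needs to implement the two abstract methods:

    import datetime
    import pathlib

    from monitorlift.target_repository import TargetRepository

    class LocalFileTargetRepository(TargetRepository):
        """Hypothetical sink: stores exports under a local directory tree."""

        def __init__(self, base_dir):
            self.base_dir = pathlib.Path(base_dir)

        def get_latest_timestamp(self, query_name, fallback_hours=3):
            ts_file = self.base_dir / query_name / "latest_timestamp.txt"
            if ts_file.exists():
                return datetime.datetime.fromisoformat(ts_file.read_text())
            # Same fallback behavior as the ADLS implementation.
            return datetime.datetime.utcnow() - datetime.timedelta(hours=fallback_hours)

        def save_results(self, query_name, data, start, end):
            out_dir = self.base_dir / query_name
            out_dir.mkdir(parents=True, exist_ok=True)
            (out_dir / f"{start:%Y%m%d%H%M}_{end:%Y%m%d%H%M}.json").write_text(data)
            (out_dir / "latest_timestamp.txt").write_text(end.isoformat())

Passing an instance of this class to run_all_queries in place of ADLSTargetRepository would redirect the whole pipeline without touching the query code.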
End-to-End Setup Guide

Prerequisites
- Python 3.8+
- Azure SDKs: azure-identity, azure-monitor-query, azure-storage-blob
- Access to Azure Application Insights and ADLS
- Service principal credentials with appropriate permissions

Steps
1. Clone or download the repository and place all code files in a working directory.
2. Configure app_config.py: fill in your Azure resource IDs, credentials, and KQL queries.
3. Install dependencies: pip install azure-identity azure-monitor-query azure-storage-blob
4. Run the application locally (python monitorliftapp/main.py), or deploy it to a VM, Azure Container Instance, or as a scheduled job.
5. (Optional) Package as an executable using PyInstaller or similar tools if you want a standalone binary.
6. Verify data movement: check your ADLS container for exported JSON files (see the snippet below for a quick inspection).
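Once files appear, an exported blob can be loaded straight back into Python for inspection. The filename below is hypothetical; use one downloaded from your container:

    import json

    # Hypothetical local copy of an exported blob,
    # e.g. <container>/requests/202501010000_202501010005.json
    with open("202501010000_202501010005.json") as f:
        payload = json.load(f)

    print(payload["columns"])       # column names of the exported table
    for row in payload["rows"][:5]:
        print(row)                  # first few telemetry rows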
6. Screenshots

App Insights Logs: [screenshot]
Exported Data in ADLS: [screenshot]

7. Final Thoughts

MonitorLiftApp provides a robust, modular, and extensible solution for exporting telemetry from Azure Monitor to custom sinks. Its design supports parallel query execution, pluggable storage backends, and incremental data movement, making it suitable for a wide range of enterprise scenarios.

AZ-500: Microsoft Azure Security Technologies Study Guide

The AZ-500 certification provides professionals with the skills and knowledge needed to secure Azure infrastructure, services, and data. The exam covers identity and access management, data protection, platform security, and governance in Azure. Learners can prepare for the exam with Microsoft's self-paced curriculum, instructor-led course, and documentation. The certification measures the learner's knowledge of managing, monitoring, and implementing security for resources in Azure, multi-cloud, and hybrid environments. Azure Firewall, Key Vault, and Azure Active Directory are some of the topics covered in the exam.

[Story from The field] How to monitor vNet subnet IP usage
Customer business case

"Using Azure Monitor, I want to get alerted when we are running out of available IP addresses on any virtual network (vNet) subnet."

Investigation

Together with CoE for Monitoring member Robert Seso, we found an Azure Resource Graph query that could do the trick. The query was created by a Microsoft CSA; see Retrieve available IP addresses from Azure Resource Graph – Stefan Stranger's Blog.

Enhancing the query

However, we found that this query was not 100% correct: it failed to count internal load balancer front-end IP assignments and needed some optimizations. After fixing this, the next step was to convert the Azure Resource Graph (ARG) query so it can run in a Log Analytics workspace. This is done by simply adding arg("") in front of any Azure Resource Graph table used. Next, due to Log Analytics limitations, we needed to tell the join operator to execute locally by adding hint.remote=local. (See also the reference links at the end of this post for more learnings.) All changes are reflected in the script below.

    // Get all vNet subnet IP usage
    // Execute in a Log Analytics workspace
    arg('').resources
    | where type =~ 'Microsoft.Network/virtualNetworks'
    | extend addressPrefixes=array_length(properties.addressSpace.addressPrefixes)
    | extend vNetAddressSpace=properties.addressSpace.addressPrefixes
    | where (isnotnull(properties.subnets))
    | mv-expand subnet=properties.subnets limit 2000
    | extend subnetId = subnet.id
    | extend virtualNetwork = name
    | extend subnetPrefix = subnet.properties.addressPrefix
    | extend subnets = properties.subnets
    | extend subnetName = tostring(subnet.name)
    | extend prefixLength = toint(split(subnetPrefix, "/")[1])
    | extend addressPrefix = split(subnetPrefix, "/")[0]
    | extend numberOfIpAddresses = trim_end(".0", tostring(pow(2, 32 - prefixLength) - 5))
    | extend startIp = strcat(strcat_array((array_slice(split(addressPrefix, '.'), 0, 2)), "."), ".", tostring(0))
    | extend endIp = strcat(strcat_array((array_slice(split(addressPrefix, '.'), 0, 2)), "."), ".", trim_end(".0", tostring(pow(2, 32 - prefixLength) - 5)))
    | extend endIPNew = case(
        prefixLength == 23, strcat(strcat(strcat_array((array_slice(split(startIp, '.'), 0, 1)), "."), "."), '1.255'),
        prefixLength == 22, strcat(strcat(strcat_array((array_slice(split(startIp, '.'), 0, 1)), "."), "."), '3.255'),
        prefixLength == 21, strcat(strcat(strcat_array((array_slice(split(startIp, '.'), 0, 1)), "."), "."), '7.255'),
        prefixLength == 20, strcat(strcat(strcat_array((array_slice(split(startIp, '.'), 0, 1)), "."), "."), '15.255'),
        prefixLength == 19, strcat(strcat(strcat_array((array_slice(split(startIp, '.'), 0, 1)), "."), "."), '31.255'),
        prefixLength == 18, strcat(strcat(strcat_array((array_slice(split(startIp, '.'), 0, 1)), "."), "."), '63.255'),
        prefixLength == 17, strcat(strcat(strcat_array((array_slice(split(startIp, '.'), 0, 1)), "."), "."), '127.255'),
        prefixLength == 16, strcat(strcat(strcat_array((array_slice(split(startIp, '.'), 0, 1)), "."), "."), '255.255'),
        'unknown')
    | extend finalendIPaddress = iff(endIPNew == "unknown", endIp, endIPNew)
    | join hint.remote=local kind=leftouter (
        arg('').resources
        | where type =~ 'microsoft.network/networkinterfaces' or type =~ 'microsoft.network/loadbalancers'
        | project id, ipConfigurations = iff(isnull(properties.ipConfigurations), properties.frontendIPConfigurations, properties.ipConfigurations), subscriptionId
        | mvexpand ipConfigurations
        | project id, subnetId = tostring(iff(isnull(ipConfigurations.properties.subnet.id), ipConfigurations.properties.subnet.id, ipConfigurations.properties.subnet.id)), subscriptionId
        | parse kind=regex subnetId with '/virtualNetworks/' virtualNetwork '/subnets/' subnetName
        | summarize usedIPAddresses = count() by subnetName, virtualNetwork, subscriptionId
    ) on subnetName, virtualNetwork, subscriptionId
    | extend usedIPAddresses_new = iff(isnull(usedIPAddresses), 0, usedIPAddresses)
    | extend AvailableIPAddresses = (toint(numberOfIpAddresses) - usedIPAddresses_new)
    | extend PercentageUsed = round(100.0 * (todouble(usedIPAddresses_new) / todouble(numberOfIpAddresses)), 2)
    | project id, subscriptionId, resourceGroup, virtualNetwork, subnetId, subnetName, IPRange = strcat(startIp, " - ", finalendIPaddress), numberOfIpAddresses, usedIPAddresses, AvailableIPAddresses, PercentageUsed
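The arithmetic behind numberOfIpAddresses is worth noting: Azure reserves five addresses in every subnet (the network address, the default gateway, two for Azure DNS, and the broadcast address), so a subnet offers pow(2, 32 - prefixLength) - 5 usable addresses. A quick Python check of the formula:

    def usable_ips(prefix_length: int) -> int:
        # Azure reserves 5 addresses per subnet, hence the "- 5".
        return 2 ** (32 - prefix_length) - 5

    for prefix in (24, 23, 20):
        print(f"/{prefix}: {usable_ips(prefix)} usable addresses")
    # /24: 251, /23: 507, /20: 4091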
To test the query, open a Log Analytics workspace (LAW) and execute it. An output example: [screenshot]

Defining the alert

Once the query is tested and verified for accuracy in the Log Analytics workspace, the next step is to integrate it into Azure Monitor. This includes setting up an alert mechanism that tracks IP utilization and notifies the relevant stakeholders. For correct monitoring, the query must include the required columns, such as 'id', which associates the vNet resource ID with the alert dimensions. By leveraging dimensions, alerts can be configured to trigger separately for each vNet or subnet, enabling granular insights. After selecting a 'Custom Log Search' query and defining the metric to alert on, configuring thresholds and actions ensures the monitoring is both proactive and responsive.

Create alert steps
1. Go to the Azure portal, open the Azure Monitor pane, and select Add alert.
2. Set the target; since this is an Azure Resource Graph (ARG) query, it can be anything.

Alert condition
In our case we don't want to send out one alert if any of the subnets runs out of IP space; we want to generate an alert per vNet/subnet. This is done using dimensions. Before this option can be enabled, the query output must contain a column named 'id' holding the vNet resource ID. Next, we configure the threshold:
1. Select a 'Custom Log Search' query.
2. Add the created query.
3. Select the field name of the metric to alert on. In this case we alert on AvailableIPAddresses, but we could also use PercentageUsed, for example.
4. Enable the dimensions and select the two dimensions (virtualNetwork and subnetName).
5. Configure the threshold for the monitor to fire.

Alert actions (optional step)
When an alert fires, it can send out an email/SMS or trigger a Logic App. To configure this, add an action group, or for better control use alert processing rules if possible. (See also the reference links at the end of this post for more learnings.)
1. Continue to the Actions pane.
2. Select an existing action group or create a new one.
3. Configure the email target. If creating a new action group: provide a name, add an email notification type, enable it, and provide an email address.

Alert details
The final step is to provide a name and severity, decide how alerts resolve, and start the creation:
1. Configure the subscription and resource group that will store this alert.
2. Configure the alert severity.
3. Configure the identity that runs this alert rule (see details below).
4. If you want the alert to resolve once the threshold is no longer hit, enable auto-resolution; if you want a new alert generated every time the threshold is hit, disable this option.
Configure the Managed identity access

After deploying the alert rule, ensure proper role assignments and access levels to maintain functionality and security. Use a system-assigned or user-assigned identity that matches the operational needs of your application, granting the least privilege necessary for monitoring. In our case, assigning a managed identity is required because the query uses the Azure Resource Graph provider (see the arg('') statements in the query). Go to the created alert rule and enable the 'Identity' with at least a 'Monitoring Reader' role on, for example, the subscription scope:
1. Navigate to the newly created Azure Monitor rule.
2. Open the Identity pane.
3. Select 'System assigned' if you want the application principal to be removed together with the alert rule, or 'User assigned' if you want to reuse an existing application principal.
4. Enable the identity.
5. Open the 'Assign role' wizard.

Assign role wizard
Once the identity is set up, narrow the monitoring scope down to what the alerts should cover. Since in this example we want to report on all vNets in our subscription, we configure the scope to 'subscription':
1. Apply the scope; in this use case we want all vNet IP data, so select the subscription.
2. Apply the least-privileged role: 'Monitoring Reader'.

The Result
Now that the alert rule is created and activated, a fired alert looks like the example below. [screenshot]
Deployable example

As a starter, see the ARM template below. It creates the Azure Monitor alert rule, which triggers when the threshold on PercentageUsed indicates that 95% of the IP address space is being used.

    {
      "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
      "contentVersion": "1.0.0.0",
      "parameters": {
        "scheduledqueryrules_ipsubnetmonitoring_name": {
          "defaultValue": "ipsubnetmonitoring",
          "type": "String"
        },
        "scheduledqueryrules_ipsubnetmonitoring_scope": {
          "type": "string",
          "defaultValue": "Full Resource ID of the resource emitting the metric that will be used for the comparison. For example scope subscription level /subscriptions/00000000-0000-0000-0000-0000-00000000",
          "metadata": {
            "description": "The scope for the Azure Monitor resources."
          }
        }
      },
      "variables": {},
      "resources": [
        {
          "type": "microsoft.insights/scheduledqueryrules",
          "apiVersion": "2024-01-01-preview",
          "name": "[parameters('scheduledqueryrules_ipsubnetmonitoring_name')]",
          "location": "westeurope",
          "kind": "LogAlert",
          "identity": {
            "type": "SystemAssigned"
          },
          "properties": {
            "displayName": "[parameters('scheduledqueryrules_ipsubnetmonitoring_name')]",
            "severity": 1,
            "enabled": true,
            "evaluationFrequency": "PT5M",
            "scopes": [
              "[parameters('scheduledqueryrules_ipsubnetmonitoring_scope')]"
            ],
            "targetResourceTypes": [],
            "windowSize": "PT5M",
            "criteria": {
              "allOf": [
                {
                  "query": "arg(\"\").resources\n| where type =~ 'Microsoft.Network/virtualNetworks'\n//| where id has 'SCOMSKYPE01'\n| extend addressPrefixes=array_length(properties.addressSpace.addressPrefixes)\n| extend vNetAddressSpace=properties.addressSpace.addressPrefixes\n| where (isnotnull(properties.subnets))\n| mv-expand subnet=properties.subnets limit 2000\n| extend subnetId = subnet.id\n| extend virtualNetwork = name\n| extend subnetPrefix = subnet.properties.addressPrefix\n| extend subnets = properties.subnets\n| extend subnetName = tostring(subnet.name)\n| extend prefixLength = toint(split(subnetPrefix, \"/\")[1])\n| extend addressPrefix = split(subnetPrefix, \"/\")[0]\n| extend numberOfIpAddresses = trim_end(\".0\", tostring(pow(2, 32 - prefixLength) - 5))\n| extend startIp = strcat(strcat_array((array_slice(split(addressPrefix, '.'), 0, 2)), \".\"), \".\", tostring(0))\n| extend endIp = strcat(strcat_array((array_slice(split(addressPrefix, '.'), 0, 2)), \".\"), \".\", trim_end(\".0\", tostring(pow(2, 32 - prefixLength) - 5)))\n| extend endIPNew = case(\n    prefixLength == 23, strcat(strcat(strcat_array((array_slice(split(startIp, '.'), 0, 1)), \".\"), \".\"), '1.255'),\n    prefixLength == 22, strcat(strcat(strcat_array((array_slice(split(startIp, '.'), 0, 1)), \".\"), \".\"), '3.255'),\n    prefixLength == 21, strcat(strcat(strcat_array((array_slice(split(startIp, '.'), 0, 1)), \".\"), \".\"), '7.255'),\n    prefixLength == 20, strcat(strcat(strcat_array((array_slice(split(startIp, '.'), 0, 1)), \".\"), \".\"), '15.255'),\n    prefixLength == 19, strcat(strcat(strcat_array((array_slice(split(startIp, '.'), 0, 1)), \".\"), \".\"), '31.255'),\n    prefixLength == 18, strcat(strcat(strcat_array((array_slice(split(startIp, '.'), 0, 1)), \".\"), \".\"), '63.255'),\n    prefixLength == 17, strcat(strcat(strcat_array((array_slice(split(startIp, '.'), 0, 1)), \".\"), \".\"), '127.255'),\n    prefixLength == 16, strcat(strcat(strcat_array((array_slice(split(startIp, '.'), 0, 1)), \".\"), \".\"), '255.255'),\n    'unknown')\n| extend finalendIPaddress = iff(endIPNew == \"unknown\", endIp, endIPNew)\n| join hint.remote=local kind=leftouter (\n    arg(\"\").resources\n    | where type =~ 'microsoft.network/networkinterfaces' or type =~ 'microsoft.network/loadbalancers'\n    | project id, ipConfigurations = iff(isnull(properties.ipConfigurations), properties.frontendIPConfigurations, properties.ipConfigurations), subscriptionId\n    | mvexpand ipConfigurations\n    | project id, subnetId = tostring(iff(isnull(ipConfigurations.properties.subnet.id), ipConfigurations.properties.subnet.id, ipConfigurations.properties.subnet.id)), subscriptionId\n    | parse kind=regex subnetId with '/virtualNetworks/' virtualNetwork '/subnets/' subnetName\n    | summarize usedIPAddresses = count() by subnetName, virtualNetwork, subscriptionId\n) on subnetName, virtualNetwork, subscriptionId\n| extend usedIPAddresses_new = iff(isnull(usedIPAddresses), 0, usedIPAddresses)\n| extend AvailableIPAddresses = (toint(numberOfIpAddresses) - usedIPAddresses_new)\n| extend PercentageUsed = round(100.0 * (todouble(usedIPAddresses_new) / todouble(numberOfIpAddresses)), 2)\n| project id, subscriptionId, resourceGroup, virtualNetwork, subnetId, subnetName, IPRange = strcat(startIp, \" - \", finalendIPaddress), numberOfIpAddresses, usedIPAddresses, AvailableIPAddresses, PercentageUsed\n",
                  "timeAggregation": "Total",
                  "metricMeasureColumn": "PercentageUsed",
                  "dimensions": [
                    {
                      "name": "virtualNetwork",
                      "operator": "Include",
                      "values": [ "*" ]
                    },
                    {
                      "name": "subnetName",
                      "operator": "Include",
                      "values": [ "*" ]
                    }
                  ],
                  "resourceIdColumn": "id",
                  "operator": "GreaterThanOrEqual",
                  "threshold": 95,
                  "failingPeriods": {
                    "numberOfEvaluationPeriods": 1,
                    "minFailingPeriodsToAlert": 1
                  }
                }
              ]
            },
            "autoMitigate": true
          }
        }
      ]
    }

Deploy ARM template

To proceed, ensure the ARM template is correctly formatted and validated to avoid deployment errors. Once validated, deploy it through the portal:
1. Open the 'Deploy a custom template' pane.
2. Select 'Build your own template in the editor'; the custom template window will open.
3. Paste the ARM template into the pane and save it.
4. Back in the main wizard, specify the resource group where the alert will be deployed.
5. Specify the alert name and the alert scope (in this example, at subscription level).
6. Start the deployment.
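If you prefer to script the deployment instead of using the portal, a minimal sketch with the azure-mgmt-resource SDK follows. The deployment name and file path are hypothetical, and exact model shapes may vary by SDK version, so treat this as a starting point rather than a definitive implementation:

    import json

    from azure.identity import DefaultAzureCredential
    from azure.mgmt.resource import ResourceManagementClient

    subscription_id = "<subscription-id>"
    credential = DefaultAzureCredential()
    client = ResourceManagementClient(credential, subscription_id)

    # The ARM template above, saved locally (hypothetical filename).
    with open("ipsubnetmonitoring.json") as f:
        template = json.load(f)

    poller = client.deployments.begin_create_or_update(
        "<resource-group>",                # where the alert rule will live
        "ip-subnet-alert-deployment",      # hypothetical deployment name
        {
            "properties": {
                "template": template,
                "parameters": {
                    "scheduledqueryrules_ipsubnetmonitoring_scope": {
                        "value": f"/subscriptions/{subscription_id}"
                    }
                },
                "mode": "Incremental",
            }
        },
    )
    poller.result()  # block until the deployment finishes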
Check the created alert rule: go to the resource group you specified in the template and search for the type 'Log search alert rule'.

Conclusion

By following these steps, you can set up a streamlined deployment to monitor IP usage with Azure Monitor alerts. This approach not only enhances resource monitoring but also simplifies alert management for your infrastructure.

Related Resources

To learn more about this capability, refer to:
- How Azure Resource Graph uses alerts to monitor resources - Azure Resource Graph | Microsoft Learn
- Create Azure Monitor alert rules - Azure Monitor | Microsoft Learn
- Alert processing rules for Azure Monitor alerts - Azure Monitor | Microsoft Learn
- Cross-cluster join - Kusto | Microsoft Learn
- Troubleshoot Azure Resource Graph alerts - Azure Resource Graph | Microsoft Learn
- Managed identities for Azure resources | Microsoft Learn

Happy Monitoring!
Michel Kamp
TZ lead CoE for Monitoring at Microsoft SfMC (Support for Mission Critical)

Monitor cloud resources with Carnegie Mellon University and Microsoft Learn
Learn how to use cloud applications to monitor cloud resources with Peter De Tender and Rodanthi Alexiou in the Carnegie Mellon University Microsoft Learn module, Monitor cloud resources. Cloud monitoring allows you to keep track of your cloud infrastructure's health.