containers
Seamless Metric Export: Simplifying AKS Platform Metrics Routing to Any Destination
Platform metrics (also called resource metrics) are telemetry data automatically collected by Azure Monitor for resources running in the Azure environment. These metrics include performance indicators such as CPU usage, memory consumption, network traffic, and disk I/O, which are critical for resource monitoring and performance tuning. For Azure Kubernetes Service (AKS), platform metrics provide insights into both the Kubernetes cluster and its underlying infrastructure. Examples include:
- Node metrics: CPU utilization, memory usage
- Pod metrics: pod status
- Control plane metrics: API server in-flight requests
These metrics enable administrators and developers to monitor, troubleshoot, and optimize their applications effectively. The list of platform metrics for AKS is available here. This blog explores how to export platform metrics from AKS to destinations such as Log Analytics, Event Hubs, and Storage Accounts, with a step-by-step example.

Exporting AKS platform metrics
Azure Monitor Metrics Export is configured through Data Collection Rules (DCRs), which can route Azure resource metrics to Azure Storage Accounts, Azure Event Hubs, and Azure Log Analytics workspaces for 18 resource types, including AKS, across 10 Azure public regions. The Metrics Export feature is a more scalable, flexible, and reliable way to export platform metrics than Azure Monitor Diagnostic Settings. Exporting platform metrics lets users co-locate their metrics in a single store so that they can use a wide variety of monitoring and dashboarding tools. Additionally, since platform metrics are retained in Azure Monitor for only 93 days, exporting them is crucial for long-term, business-critical decisions. Metrics Export also delivers these metrics in near-real time, with full fidelity and at scale.
Using Metrics Export, platform metrics can be sent to the following destinations:
- Log Analytics workspaces: metrics are stored in the AzureMetricsV2 table. The workspace and DCR must reside in the same region, but the monitored resources can be in any region.
- Azure Event Hubs: enables integration with external systems for real-time analytics. The Event Hub, DCR, and monitored resources must be in the same region.
- Azure Storage Accounts: suitable for long-term storage. Similar regional constraints apply.

Example: Exporting AKS platform metrics (CLI and Portal)

Step 1: Create an AKS cluster
First, create a new AKS cluster using the Azure CLI or the portal. You can skip this step if you have an existing AKS cluster.
CLI:
az aks create --resource-group myResourceGroup --name myAKSCluster --node-count 2
Portal:
1. Search for AKS in the Azure Portal Marketplace.
2. Create the AKS cluster using the creation wizard; you can choose the defaults.
Save the resource ID of the AKS cluster for the following steps. You can find the resource ID in the Properties tab under Settings in the portal, or use the following command:
az aks show --resource-group $myResourceGroup --name $aksClusterName --query id --output tsv

Step 2: Configure Data Collection Rules (DCRs) in Azure Monitor
Create a DCR to specify the metrics to collect and the destination. We will look at examples of sending metrics to both Log Analytics and Event Hubs.

CLI - Log Analytics as destination
1. Create a DCR with Log Analytics as the destination. In this example, we export all metrics; if you are interested in specific metrics, you can specify them in the streams field by following the documentation here. First, create a rule file (named rule-file.json) with the details of the destination and the source. Make sure you have created a Log Analytics workspace and have its resource ID handy for this step.
The Log Analytics workspace can be located in a different region from the cluster.

{
  "identity": { "type": "systemAssigned" },
  "kind": "PlatformTelemetry",
  "location": "westus2",
  "properties": {
    "dataSources": {
      "platformTelemetry": [
        {
          "streams": [ "Microsoft.ContainerService/managedClusters:Metrics-Group-All" ],
          "name": "myPlatformTelemetryDatasource"
        }
      ]
    },
    "destinations": {
      "logAnalytics": [
        {
          "workspaceResourceId": "/subscriptions/aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e/resourcegroups/rg-001/providers/microsoft.operationalinsights/workspaces/laworkspace001",
          "name": "ladestination"
        }
      ]
    },
    "dataFlows": [
      {
        "streams": [ "Microsoft.ContainerService/managedClusters:Metrics-Group-All" ],
        "destinations": [ "ladestination" ]
      }
    ]
  }
}

az monitor data-collection rule create --name AKSMetricsToLogAnalytics --location myRegion -g myResourceGroup --rule-file rule-file.json

Save the resource ID of the DCR for the next step:
az monitor data-collection rule show --name AKSMetricsToLogAnalytics -g myResourceGroup --query id --output tsv

2. Link the DCR to the AKS resource using a DCRA (Data Collection Rule Association):
az monitor data-collection rule association create --resource-group myResourceGroup -n logAnalyticsDCRAssociation --rule-id "<DCR resource ID>" --resource "<AKS cluster ID>"
Replace the rule ID with the resource ID of the DCR, and the AKS cluster ID with the resource ID of the AKS cluster.

CLI - Event Hubs as destination
First, create an Event Hub in your desired namespace:
az eventhubs namespace create --name myEventHubNamespace --resource-group myResourceGroup
az eventhubs eventhub create --name $eventhubName --resource-group $rgName --namespace-name $namespaceName
Save the resource ID of the Event Hub you created for the next step. You can find it with the following command:
az eventhubs eventhub show --name $eventHubName --namespace-name $namespaceName --resource-group $resourceGroup --query id --output tsv
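Rather than hand-editing JSON, the rule file for the next step can be templated from the saved Event Hub resource ID. Below is a minimal sketch; the JSON shape and names such as myPlatformTelemetryDatasource and myHub are taken from the rule files shown in this walkthrough, while the helper function itself is just an illustration:

```python
import json

STREAM = "Microsoft.ContainerService/managedClusters:Metrics-Group-All"

def build_event_hub_rule_file(location: str, event_hub_resource_id: str) -> dict:
    """Build a PlatformTelemetry DCR body with an Event Hub destination,
    following the same schema as the rule files in this walkthrough."""
    return {
        "identity": {"type": "systemAssigned"},
        "kind": "PlatformTelemetry",
        "location": location,
        "properties": {
            "dataSources": {
                "platformTelemetry": [
                    # Export all platform metrics for the AKS resource type.
                    {"streams": [STREAM], "name": "myPlatformTelemetryDatasource"}
                ]
            },
            "destinations": {
                "eventHubs": [
                    {"eventHubResourceId": event_hub_resource_id, "name": "myHub"}
                ]
            },
            # Each dataFlow routes the stream to a destination declared above.
            "dataFlows": [{"streams": [STREAM], "destinations": ["myHub"]}],
        },
    }

if __name__ == "__main__":
    # Paste the resource ID saved from `az eventhubs eventhub show` here.
    body = build_event_hub_rule_file("westus2", "<event hub resource ID>")
    print(json.dumps(body, indent=2))
```

Redirect the output to rule-file.json and pass it to az monitor data-collection rule create with --rule-file, as in the next step.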
Create a DCR with Event Hub as the destination. First create the rule file with the details of the destination, replacing eventHubResourceId with the ID of the Event Hub created above.

{
  "identity": { "type": "systemAssigned" },
  "kind": "PlatformTelemetry",
  "location": "westus2",
  "properties": {
    "dataSources": {
      "platformTelemetry": [
        {
          "streams": [ "Microsoft.ContainerService/managedClusters:Metrics-Group-All" ],
          "name": "myPlatformTelemetryDatasource"
        }
      ]
    },
    "destinations": {
      "eventHubs": [
        {
          "eventHubResourceId": "/subscriptions/aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e/resourceGroups/rg-001/providers/Microsoft.EventHub/namespaces/event-hub-001/eventhubs/hub-001",
          "name": "myHub"
        }
      ]
    },
    "dataFlows": [
      {
        "streams": [ "Microsoft.ContainerService/managedClusters:Metrics-Group-All" ],
        "destinations": [ "myHub" ]
      }
    ]
  }
}

Create the DCR based on the rule file above:
az monitor data-collection rule create --name AKSMetricsToEventHub --location myRegion -g myResourceGroup --rule-file rule-file.json
Save the resource ID of the DCR for the next step:
az monitor data-collection rule show --name AKSMetricsToEventHub -g myResourceGroup --query id --output tsv
Finally, link the DCR to the AKS resource using a DCRA:
az monitor data-collection rule association create --resource-group myResourceGroup -n eventHubDCRAssociation --rule-id "<DCR resource ID>" --resource "<AKS cluster ID>"

Portal - Log Analytics as destination
In the DCR creation wizard, click the option for creating DCRs for platform metrics. Specify the properties for the DCR as shown below (you can ignore the managed identity configuration option, as it is not required for Log Analytics as a destination). Select the AKS resource(s) whose platform metrics you want to export.
Note that you can select multiple resources across subscriptions here (with no region restrictions for Log Analytics as a destination). In the next "Collect and Deliver" step, click the "Add new dataflow" button. In the side panel, you will see that "Data source type" and "Resource types" are already populated with Platform metrics and Kubernetes service. If you wish to add more resource types in the same DCR, either add those resources in step 3 above, or opt not to include any resources in step 3 and associate resources after DCR creation (described in the later steps). Click "Next: Destinations" in the side panel to add a destination Log Analytics workspace. Select the "Azure Monitor Logs" destination type; the workspace can be in any accessible subscription, as long as its region is the same as the DCR region. Click "Save" in the side panel to add this dataflow. You can optionally add tags, then click "Review + Create" to create your DCR and start the platform metrics export. You can always associate more resource types and resources with a single DCR. Please note that only one destination is allowed per DCR for platform metrics export.

Portal - Event Hubs as destination
In the DCR creation wizard, make sure to select the "Enable Managed Identity" checkbox. You can choose either System Assigned or User Assigned to enable export to Event Hubs. Add the resources as described in the Log Analytics section above. Please note that the resource(s) must be in the same region as the DCR for Event Hubs export. In the "Collect and Deliver" tab, in the Destinations tab of "Add new dataflow", make sure to select the appropriate Event Hub. You can optionally add tags, then click "Review + Create" to create your DCR and start the platform metrics export. You can always associate more resource types and resources with a single DCR.
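Once an Event Hub export is flowing, a quick sanity check is to decode a message body and pull out the metric fields. The sketch below assumes the common Azure Monitor envelope of a top-level records array; the record fields shown are illustrative only, so compare them against a real captured message before relying on them:

```python
import json

# Hypothetical sample of one Event Hub message body. The exact schema of
# exported platform metric records may differ -- inspect a real message.
SAMPLE_MESSAGE = json.dumps({
    "records": [
        {
            "time": "2025-01-01T00:00:00Z",
            "metricName": "node_cpu_usage_percentage",
            "total": 37.5,
            "count": 1,
        }
    ]
})

def summarize_metrics(message_body: str) -> list:
    """Return (time, metricName, total) tuples from one Event Hub message."""
    payload = json.loads(message_body)
    return [
        (record["time"], record["metricName"], record["total"])
        for record in payload.get("records", [])
    ]

print(summarize_metrics(SAMPLE_MESSAGE))
```

Hook a function like this into whatever consumer you use (an Event Hubs SDK receiver, Stream Analytics preview, etc.) to confirm metric names and values are arriving as expected.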
Please note that only one destination is allowed per DCR for platform metrics export.

Step 3: Verify the export
For Log Analytics, navigate to the AzureMetricsV2 table in your workspace to view the exported metrics. For Event Hubs, set up a consumer application or use Azure Stream Analytics to verify incoming metrics.

Summary
Platform metrics are a powerful feature for monitoring AKS clusters and their workloads. By leveraging Azure Monitor's Data Collection Rules, you can seamlessly export metrics to destinations like Log Analytics, Event Hubs, and Storage Accounts, enabling advanced analysis and integration. Start using these tools today to gain deeper insights into your AKS clusters!

Troubleshooting Azure Container App Networking Made Simple with Network Tester
Azure Container Apps provide a robust environment for deploying modern applications, but navigating their networking complexities can sometimes feel like solving a Rubik's cube blindfolded. That's where the Network Tester image comes to the rescue. This handy utility is designed to help users diagnose and troubleshoot network issues in Azure Container Apps with ease. Below, we'll explore how to use it.

Deploy Smarter, Scale Faster – Secure, AI-Ready, Cost-Effective Kubernetes Apps at Your Fingertips!
In our previous blog post, we explored the exciting launch of Kubernetes Apps on Azure Marketplace. This follow-up will take you a step further by demonstrating how to programmatically deploy Kubernetes Apps using tools like Terraform, Azure CLI, and ARM templates. As organizations scale their Kubernetes environments, the demand for secure, intelligent, and cost-effective deployments has never been higher. By programmatically deploying Kubernetes Apps through Azure Marketplace, organizations can harness powerful security frameworks, cost-efficient deployment options, and AI solutions to elevate their Azure Kubernetes Service (AKS) and Azure Arc-enabled clusters. This automated approach significantly reduces operational overhead, accelerates time-to-market, and allows teams to dedicate more time to innovation. Whether you're aiming to strengthen security, streamline application lifecycle management, or optimize AI and machine learning workloads, Kubernetes Apps on Azure Marketplace provide a robust, flexible, and scalable solution designed to meet modern business needs. Let's explore how you can leverage these tools to unlock the full potential of your Kubernetes deployments.

Secure Deployment You Can Trust
- Certified and Secure from the Start – Every Kubernetes app on Azure Marketplace undergoes a rigorous certification process and vulnerability scans before becoming available. Solution providers must resolve any detected security issues, ensuring the app is safe from the outset.
- Continuous Threat Monitoring – After publication, apps are regularly scanned for vulnerabilities. This ongoing monitoring helps maintain the integrity of your deployments by identifying and addressing potential threats over time.
- Enhanced Security with RBAC – Eliminates the need for direct cluster access, reducing the attack surface by managing permissions and deployments through Azure Role-Based Access Control (RBAC).
Lowering the Cost of Your Applications
If your organization has a Microsoft Azure Consumption Commitment (MACC) agreement, you can unlock significant cost savings when deploying your applications. Kubernetes Apps available on Azure Marketplace are MACC-eligible, giving you the following benefits:
- Significant Cost Savings and Predictable Expenses – Reduce overall cloud costs with discounts and credits for committed usage, while ensuring stable, predictable expenses to enhance financial planning.
- Flexible and Comprehensive Commitment Usage – Allocate your commitment across various Marketplace solutions, maximizing flexibility and value for evolving business needs.
- Simplified Procurement and Budgeting – Benefit from unified billing and streamlined procurement, driving efficiency and performance.

AI-Optimized Apps
- High-Performance Compute and Scalability – Deploy AI-ready apps on Kubernetes clusters with dynamic scaling and GPU acceleration. Optimize performance and resource utilization for intensive AI/ML workloads.
- Accelerated Time-to-Value – Pre-configured solutions reduce setup time, accelerating progress from proof-of-concept to production, while one-click deployments and automated updates keep AI environments up to date effortlessly.
- Hybrid and Multi-Cloud Flexibility – Deploy AI workloads seamlessly on AKS or Azure Arc-enabled Kubernetes clusters, ensuring consistent performance across on-premises, multi-cloud, or edge environments while maintaining portability and robust security.

Lifecycle Management of Kubernetes Apps
- Automated Updates and Patching – The auto-upgrade feature keeps your Kubernetes applications up to date with the latest features and security patches, seamlessly applied during scheduled maintenance windows to ensure uninterrupted operations.
Our system guarantees consistency and reliability by continuously reconciling the cluster state with the desired declarative configuration, and maintains stability by automatically rolling back unauthorized changes.
- CI/CD Automation with ARM Integration – Leverage ARM-based APIs and templates to automate deployment and configuration, simplifying application management and boosting operational efficiency. This approach enables seamless integration with Azure policies, monitoring, and governance tools, ensuring streamlined and consistent operations.

Flexible Billing Options for Kubernetes Apps
We support a variety of billing models to suit your needs:
- Private Offers for Upfront Billing – Lock in pricing with upfront payments to gain better control and predictability over your expenditures.
- Multiple Billing Models – Choose from flexible billing options, including usage-based billing, where you pay per core, per node, or by other usage metrics, allowing you to scale as required. Opt for flat-rate pricing for predictable monthly or annual costs, ensuring financial stability and peace of mind.

Programmatic Deployments of Apps
There are several ways to deploy a Kubernetes app:
- Programmatically deploy using Terraform: utilize the power of Terraform to automate and manage your Kubernetes applications.
- Deploy programmatically with Azure CLI: leverage the Azure CLI for straightforward, command-line based deployments.
- Use ARM templates for programmatic deployment: define and deploy your Kubernetes applications efficiently with ARM templates.
- Deploy via AKS in the Azure portal: take advantage of the user-friendly Azure portal for a seamless deployment experience.
We hope this guide has been helpful and has simplified the process of deploying Kubernetes Apps. Stay tuned for more tips and tricks, and happy deploying!

Additional Links:
Get started with Kubernetes Apps: https://aka.ms/deployK8sApp
Find other Kubernetes Apps listed on Azure Marketplace: https://aka.ms/KubernetesAppsInMarketplace
For customer support, please visit: https://learn.microsoft.com/en-us/azure/aks/aks-support-help#create-an-azure-support-request
Partner with us: if you are an ISV or Azure partner interested in listing your Kubernetes App, please visit: http://aka.ms/K8sAppsGettingStarted
Learn more about partner benefits: https://learn.microsoft.com/en-us/partner-center/marketplace/overview#why-sell-with-microsoft
For partner support, please visit: https://partner.microsoft.com/support/?stage=1

Seamlessly Integrating Azure KeyVault with Jarsigner for Enhanced Security
Dive into the world of enhanced security with our step-by-step guide on integrating Azure KeyVault with Jarsigner. Whether you're a beginner or an experienced developer, this guide will walk you through the process of securely signing your Java applications using Azure's robust security features. Learn how to set up, execute, and verify digital signatures with ease, ensuring your applications are protected in an increasingly digital world. Join us to boost your security setup now!

Languages & Runtime Community Standup - .NET 8 + Containers = 💖
Containers are _the_ way to deploy applications in today's cloud-native architectures, and .NET has embraced them fully. Come chat with Rich Lander and Chet Husk about how .NET embraces containers from the runtime all the way up through the SDK to the editors you use daily, and learn new techniques for making your containerized applications the best they can be! Featuring: Rich Lander (@runfaster2000), Chet Husk (@chethusk) #docker #containers #dotnet

Learn Live: Build your first microservice with .NET
Microservice applications are composed of small, independently versioned, and scalable customer-focused services that communicate with each other over standard protocols with well-defined interfaces. Each microservice typically encapsulates simple business logic, which you can scale out or in, test, deploy, and manage independently. Smaller teams develop a microservice based on a customer scenario and use any technologies that they want to use. This module will teach you how to build your first microservice with .NET. In this episode, you will:
- Explain what microservices are.
- Learn what technologies are involved in microservices and how they relate.
- Build a microservice using .NET.

Build your first Microservice with ASP.NET Core and Docker | #SamosaChai.NET
Microservice applications are composed of small, independently versioned, and scalable customer-focused services that communicate over standard protocols with well-defined interfaces. This session will explore the microservices architecture, and we will write our first microservice with .NET and Docker. Register -> https://developer.microsoft.com/reactor/eventregistration/register/14964
Speaker info: Nish Anil
Nish is a Program Manager on the .NET Community team at Microsoft. He helps developers build production-ready apps with .NET and maintains the popular architecture reference guides @ dot.net/architecture.
Twitter - https://twitter.com/nishanil
Speaker info: Vivek Sridhar
Vivek Sridhar is a technophile and an open-source contributor with around 15 years of experience in the software industry, and works at Microsoft as a Senior Cloud Advocate. In his previous role at DigitalOcean, he mentored startups and developers and spoke at conferences and meetups as a Senior Developer Advocate. He was Co-Founder / Chief Architect of NoodleNext Technology, headed DevOps and QA at BlackBuck, and was a DevOps Solution Architect at HCL (Australia) in client engagement. Vivek started his career with IBM Rational (India Software Labs) as a Software Developer.
Twitter - https://twitter.com/vivek_sridhar

Deploy ASP.NET Core apps on Kubernetes | #SamosaChai.NET
Microservices applications deployed in containers make it possible to scale out apps and respond to increased demand by deploying more container instances, and to scale back if demand decreases. In complex solutions of many microservices, the process of deploying, updating, monitoring, and removing containers introduces challenges. This session will explore the basics of Kubernetes and deploying our first microservice with .NET to Kubernetes. Register -> https://developer.microsoft.com/reactor/eventregistration/register/14965
Speaker info: Nish Anil
Nish is a Program Manager on the .NET Community team at Microsoft. He helps developers build production-ready apps with .NET and maintains the popular architecture reference guides @ dot.net/architecture.
Twitter - https://twitter.com/nishanil
Speaker info: Vivek Sridhar
Vivek Sridhar is a technophile and an open-source contributor with around 15 years of experience in the software industry, and works at Microsoft as a Senior Cloud Advocate. In his previous role at DigitalOcean, he mentored startups and developers and spoke at conferences and meetups as a Senior Developer Advocate. He was Co-Founder / Chief Architect of NoodleNext Technology, headed DevOps and QA at BlackBuck, and was a DevOps Solution Architect at HCL (Australia) in client engagement. Vivek started his career with IBM Rational (India Software Labs) as a Software Developer.
Twitter - https://twitter.com/vivek_sridhar

.NET Conf 2021
.NET Conf is a free, three-day, virtual developer event that celebrates the major releases of the .NET development platform. It is co-organized by the .NET community and Microsoft, and sponsored by the .NET Foundation and our ecosystem partners. Come celebrate and learn about what you can do with .NET 6. Check out the full schedule at https://www.dotnetconf.net/agenda
Day 1 - November 9
Day one is all about the big news, .NET 6! Join the .NET team on all the new things you can do with the latest release.
8:00 - 9:00 Keynote with Scott Hunter and members of the .NET team
9:00 - 17:00 Sessions from the .NET teams at Microsoft
17:00 - 19:00 Virtual Attendee Party (CodeParty #1). Have fun and win prizes from our sponsors.
Day 2 - November 10
Day two is where we dive deeper into all the things you can do with .NET, and our 24-hour broadcast begins with community speakers around the world.
7:00 - 9:00 Virtual Attendee Party (CodeParty #2). Have fun and win prizes from our sponsors.
9:00 - 17:00 Sessions from teams all around Microsoft
17:00 - 23:59 Community sessions in local time zones around the world
Day 3 - November 11
Day three continues our all-day-and-night broadcast with speakers around the world in their own time zones.
0:00 - 17:00 Community sessions in local time zones around the world