Azure Kubernetes Service
Getting started with Azure Fleet Manager
Why?
A solution to manage multiple Azure Kubernetes Service (AKS) clusters at scale: a secure and compliant way to streamline operations and maintenance, improve performance, and ensure efficient resource utilization. It addresses the common challenges of multi-cluster scenarios:
- orchestrating cluster updates
- propagating Kubernetes resources
- balancing loads across multiple clusters

Pointers
- Orchestrates application updates and upgrades across multiple clusters.
- Lets you deploy selected Kubernetes objects to all clusters or to a specific set of clusters.
- Member clusters can be in the same or different subscriptions, and even in different regions, but they must be under the same tenant.
- Update groups: updates are applied in parallel.
- Make sure member clusters are in the Running state before joining them to the fleet.
- There are two fleet options available: with a hub cluster and without a hub cluster.
- Private clusters are supported.
- ClusterResourcePlacement (CRP) is a cluster-scoped resource.

HOW?
Run the commands below with the Azure CLI.

Step 1: Add the fleet extension

az extension add -n fleet

Step 2: Create a Fleet Manager

az fleet create --resource-group <name of the resource group> --name <name of the fleet> --location <region> --enable-hub --enable-private-cluster --enable-managed-identity --agent-subnet-id <subnet ID> --vm-size <vm size>

Step 3: An AKS hub cluster with one node gets created. Get credentials for this hub cluster

az fleet get-credentials --resource-group <name of the resource group> --name <name of the fleet>

Step 4: Add and view member clusters

az fleet member create --resource-group <name of the resource group> --fleet-name <name of the fleet> --name <name of the member cluster> --member-cluster-id <resource ID of the member cluster>

az fleet member list --resource-group <name of the resource group> --fleet-name <name of the fleet> -o table

Sample CRP deployment

kubectl label membercluster <name of the member cluster> <name of the label>=<value of the label> --overwrite

apiVersion: placement.kubernetes-fleet.io/v1beta1
kind: ClusterResourcePlacement
metadata:
  name: crp-asd-prod
spec:
  policy:
    placementType: PickAll
    affinity:
      clusterAffinity:
        requiredDuringSchedulingIgnoredDuringExecution:
          clusterSelectorTerms:
            - labelSelector:
                matchLabels:
                  crp: prod
  resourceSelectors:
    - group: ""
      kind: Namespace
      name: dev
      version: v1
    - group: ""
      kind: Namespace
      name: qa
      version: v1

Utilization Best Practices
Plan to integrate with your DevOps platform to manage the dynamic nature of workloads. The integration is extremely helpful and easy for dev teams to adopt, since deployments are synced across the other clusters. The service is a viable candidate for multi-cluster management, disaster recovery, and migration strategies.
Happy Learning 🙂
Reference link: fleet/docs at main · Azure/fleet · GitHub
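As a quick follow-up check after applying the sample CRP above (a minimal sketch; run it against the hub cluster credentials from Step 3, and note that the placement name simply reuses the one from the sample), you can confirm which member clusters the namespaces were propagated to:

# Run against the hub cluster kubeconfig obtained via "az fleet get-credentials"
kubectl get clusterresourceplacement crp-asd-prod

# Inspect per-cluster placement status and scheduling conditions
kubectl describe clusterresourceplacement crp-asd-prod

The describe output lists each selected member cluster and whether the resources were applied successfully, which is handy before wiring the CRP into a DevOps pipeline.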
Persistent volume with Azure Files on AKS
Based on the documentation, when we need to statically create a persistent volume with Azure Files integration, we need to create a Kubernetes Secret to store the storage account name and access key, and reference that secret in the PV YAML file. https://learn.microsoft.com/en-us/azure/aks/azure-csi-files-storage-provision#create-a-kubernetes-secret However, this mechanism allows anyone with read permission on the AKS cluster to read the Kubernetes secret and obtain the storage account key. Our customer is concerned about this and wants to know whether there is another mechanism that can prevent this risk (for example, fetching the account key from Key Vault first, rather than putting the storage account key directly into a Kubernetes secret).
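For context, this is the mechanism the question refers to - a minimal sketch of the static-provisioning secret from the linked document (the secret name, namespace, and account values are placeholders). The second command illustrates the concern: anyone allowed to read secrets in that namespace can recover the key in plain text.

# Store the storage account name and key in a Kubernetes secret, as the static PV guide describes
kubectl create secret generic azure-secret \
  --namespace default \
  --from-literal=azurestorageaccountname=<storage account name> \
  --from-literal=azurestorageaccountkey=<storage account key>

# Anyone with "get" on secrets in this namespace can decode the key
kubectl get secret azure-secret -n default -o jsonpath='{.data.azurestorageaccountkey}' | base64 -d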
Self Hosted AI Application on AKS in a Day with KAITO and Copilot
In this blog post I document my experience of spending a full day using KAITO and Copilot to accelerate the deployment and development of a self-managed, AI-enabled chatbot deployed in a managed cluster. The goal is to showcase how quickly, using a mix of AI tooling, we can go from zero to a self-hosted, tuned LLM and chatbot application. At the top of this article I want to share my perspective on the future of projects such as KAITO. At the moment I believe KAITO to be somewhat ahead of its time: as most enterprises begin adopting abstracted artificial intelligence, it is brilliant to see projects like KAITO being developed, ready for the eventual abstraction pendulum to swing back, motivated by the usual factors such as increased skills in the market, cost, and governance. Enterprises will undoubtedly look to take centralised control of the AI models used across the business as GPUs become cheaper, more readily available, and more powerful. When this shift happens, open-source projects like KAITO will become commonplace in enterprises. It is also my opinion that Kubernetes lends itself perfectly to being the AI platform of the future, a position shared by the CNCF (albeit both sources here may be somewhat biased). The resiliency, scaling, and existence of Kubernetes primitives such as "Jobs" mean that Kubernetes is already the de facto platform for machine learning training and inference. These same reasons also make Kubernetes the best underlying platform for AI development. Companies including DHL, Wayve, and even OpenAI already run ML or AI workloads on Kubernetes. That does not mean that data scientists and engineers will suddenly be creating Dockerfiles or exploring admission controllers; Kubernetes, as a platform, will instead sit multiple layers of abstraction away (full-scale self-service platform engineering). However, the engineers responsible for running and operating the platform will hail projects like KAITO.
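As a rough starting point for trying this yourself, enabling the KAITO managed add-on on an existing cluster looks roughly like the sketch below. This is a sketch only: the --enable-ai-toolchain-operator flag is an assumption based on the AI toolchain operator managed add-on preview, and it may require the aks-preview CLI extension; check the current AKS documentation for the exact flags.

# Assumed preview flags for the AI toolchain operator (KAITO) managed add-on
az extension add --name aks-preview
az aks update \
  --resource-group <resource group> \
  --name <cluster name> \
  --enable-oidc-issuer \
  --enable-ai-toolchain-operator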
Seamless Metric Export: Simplifying AKS Platform Metrics Routing to Any Destination
Platform metrics (also called resource metrics) are telemetry data automatically collected by Azure Monitor for resources running within the Azure environment. These metrics include performance indicators such as CPU usage, memory consumption, network traffic, and disk I/O, which are critical for resource monitoring and performance tuning. For Azure Kubernetes Service (AKS), platform metrics provide insights into both the Kubernetes cluster and its underlying infrastructure. Examples include:
- Node metrics: CPU utilization, memory usage
- Pod metrics: Pod status
- Control plane metrics: API server inflight requests
These metrics enable administrators and developers to monitor, troubleshoot, and optimize their applications effectively. The list of platform metrics for AKS is available here. This blog explores how to export and utilize platform metrics from AKS to other destinations like Log Analytics, Event Hubs, and Storage Accounts, with a step-by-step example.

Exporting AKS platform metrics
Azure Monitor Metrics Export is configured through Data Collection Rules (DCRs), which provide the capability to route Azure resource metrics data to Azure Storage Accounts, Azure Event Hubs, and Azure Log Analytics workspaces for 18 resource types across 10 Azure public regions, including AKS. The Metrics Export feature provides a more scalable, flexible, and reliable way to export platform metrics than Azure Monitor diagnostic settings. Exporting platform metrics enables users to co-locate their metrics in a single store so that they can use a wide variety of monitoring and dashboarding tools. Additionally, since platform metrics are retained in Azure Monitor for only 93 days, exporting them is crucial when making long-term, business-critical decisions. Metrics Export also enables users to export these metrics in near-real time, with full fidelity and at scale. Using Metrics Export, platform metrics can be sent to the following destinations:
- Log Analytics workspaces: Metrics are stored in the AzureMetricsV2 table. The workspace and DCR must reside in the same region, but the monitored resources can be in any region.
- Azure Event Hubs: Enables integration with external systems for real-time analytics. The Event Hub, DCR, and monitored resources must be in the same region.
- Azure Storage Accounts: Suitable for long-term storage. Similar regional constraints apply.

Example: Exporting AKS platform metrics (CLI and Portal)

Step 1: Create an AKS cluster
First, create a new AKS cluster using the Azure CLI or the portal. You can skip this step if you have an existing AKS cluster.

CLI

az aks create --resource-group myResourceGroup --name myAKSCluster --node-count 2

Portal
1. Search for AKS on the Azure Portal Marketplace.
2. Start creating the AKS cluster using the creation wizard. You can choose the defaults.

Save the resource ID of the AKS cluster for further steps. You can find the resource ID in the Properties tab under Settings in the portal, or use the following command:

az aks show --resource-group $myResourceGroup --name $aksClusterName --query id --output tsv

Step 2: Configure Data Collection Rules (DCRs) in Azure Monitor
Create a DCR to specify the metrics to collect and the destination. We will look at examples of sending metrics to both Log Analytics and Event Hubs as destinations.

CLI - Log Analytics as destination
1. Create a DCR with Log Analytics as the destination. In this example, we are exporting all metrics.
If you are interested in specific metrics, you can specify them in the streams field by following the documentation here. First, we need to create a rule file (named rule.json) with the details of the destination and the source. Make sure to create a Log Analytics workspace and have the workspace resource ID handy for this step. The Log Analytics workspace can be located in a different region from the cluster or the DCR.

{
  "identity": {
    "type": "systemAssigned"
  },
  "kind": "PlatformTelemetry",
  "location": "westus2",
  "properties": {
    "dataSources": {
      "platformTelemetry": [
        {
          "streams": [
            "Microsoft.ContainerService/managedClusters:Metrics-Group-All"
          ],
          "name": "myPlatformTelemetryDatasource"
        }
      ]
    },
    "destinations": {
      "logAnalytics": [
        {
          "workspaceResourceId": "/subscriptions/aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e/resourcegroups/rg-001/providers/microsoft.operationalinsights/workspaces/laworkspace001",
          "name": "ladestination"
        }
      ]
    },
    "dataFlows": [
      {
        "streams": [
          "Microsoft.ContainerService/managedClusters:Metrics-Group-All"
        ],
        "destinations": [
          "ladestination"
        ]
      }
    ]
  }
}

az monitor data-collection rule create --name AKSMetricsToLogAnalytics --location myRegion -g myResourceGroup --rule-file rule.json

Save the resource ID of the DCR for the next step:

az monitor data-collection rule show --name AKSMetricsToLogAnalytics -g myResourceGroup --query id --output tsv

2. Link the DCR to the AKS resource using a DCRA (Data Collection Rule Association):

az monitor data-collection rule association create --resource-group myResourceGroup -n logAnalyticsDCRAssociation --rule-id "<DCR resource ID>" --resource "<AKS cluster ID>"

Replace the rule ID with the resource ID of the DCR, and the AKS cluster ID with the resource ID of the AKS cluster.

CLI - Event Hubs as destination
1. Create an Event Hub in your desired namespace:

az eventhubs namespace create --name myEventHubNamespace --resource-group myResourceGroup

az eventhubs eventhub create --name $eventhubName --resource-group $rgName --namespace-name $namespaceName

Save the resource ID of the Event Hub created for the next step. You can find it through the CLI using the following command:

az eventhubs eventhub show --name $eventHubName --namespace-name $namespaceName --resource-group $resourceGroup --query id --output tsv

2. Create a DCR with Event Hub as the destination. We first create the rule file with the details of the destination. Replace the eventHubResourceId with the ID of the Event Hub created in step 1.

{
  "identity": {
    "type": "systemAssigned"
  },
  "kind": "PlatformTelemetry",
  "location": "westus2",
  "properties": {
    "dataSources": {
      "platformTelemetry": [
        {
          "streams": [
            "Microsoft.ContainerService/managedClusters:Metrics-Group-All"
          ],
          "name": "myPlatformTelemetryDatasource"
        }
      ]
    },
    "destinations": {
      "eventHubs": [
        {
          "eventHubResourceId": "/subscriptions/aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e/resourceGroups/rg-001/providers/Microsoft.EventHub/namespaces/event-hub-001/eventhubs/hub-001",
          "name": "myHub"
        }
      ]
    },
    "dataFlows": [
      {
        "streams": [
          "Microsoft.ContainerService/managedClusters:Metrics-Group-All"
        ],
        "destinations": [
          "myHub"
        ]
      }
    ]
  }
}

We create the DCR based on the rule file above:

az monitor data-collection rule create --name AKSMetricsToEventHub --location myRegion -g myResourceGroup --rule-file rule.json

Save the resource ID of the DCR for the next step:

az monitor data-collection rule show --name AKSMetricsToEventHub -g myResourceGroup --query id --output tsv
3. Link the DCR to the AKS resource using a DCRA:

az monitor data-collection rule association create --resource-group myResourceGroup -n eventHubDCRAssociation --rule-id "<DCR resource ID>" --resource "<AKS cluster ID>"

Once the data is routed to Event Hubs, you can then integrate with external tools such as Splunk.

Portal - Log Analytics as destination
1. In the DCR creation wizard, select the option for creating DCRs for platform metrics.
2. Specify the properties for the DCR (you can ignore the managed identity configuration option, as it is not required for Log Analytics as a destination).
3. Select the AKS resource(s) to export platform metrics from. Note that you can select multiple resources across subscriptions here (without any region restrictions for Log Analytics as a destination).
4. In the next "Collect and Deliver" step, click the "Add new dataflow" button. In the side panel, you will see that the "Data source type" and "Resource types" are already populated with Platform metrics and Kubernetes service. If you wish to add more resource types in the same DCR, either add those resources in step 3 above, or opt not to include any resources in step 3 and associate resources after DCR creation (described below).
5. Click "Next: Destinations" in the side panel to add a destination Log Analytics workspace. Select the "Azure Monitor Logs" destination type; it can be in any accessible subscription, as long as the Log Analytics region is the same as the DCR region.
6. Click "Save" in the side panel to add this dataflow.
7. You can optionally add tags, and then click "Review + Create" to create your DCR and start platform metrics export.
8. You can always associate more resource types and resources with a single DCR. Please note that only one destination is allowed per DCR for platform metrics export.

Portal - Event Hubs as destination
1. In the DCR creation wizard, make sure to select the "Enable Managed Identity" checkbox. You can choose either System Assigned or User Assigned to enable export to Event Hubs.
2. Add the resources as described in the Log Analytics destination section above. Please note that the resource(s) must be in the same region as the DCR for Event Hubs export.
3. In the "Collect and Deliver" tab, in the Destinations tab of "Add new dataflow", make sure to select the appropriate Event Hub.
4. You can optionally add tags, and then click "Review + Create" to create your DCR and start platform metrics export.
5. You can always associate more resource types and resources with a single DCR. Please note that only one destination is allowed per DCR for platform metrics export.

Step 3: Verify the export
For Log Analytics, navigate to the AzureMetricsV2 table in your workspace to view the exported metrics. For Event Hubs, set up a consumer application or use Azure Stream Analytics to verify incoming metrics.

Summary
Platform metrics are a powerful feature for monitoring AKS clusters and their workloads. By leveraging Azure Monitor's Data Collection Rules, you can seamlessly export metrics to destinations like Log Analytics, Event Hubs, and Storage Accounts, enabling advanced analysis and integration. Start using these tools today to gain deeper insights into your AKS clusters!
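If you prefer to script the Log Analytics verification in Step 3 rather than browsing the table in the portal, a minimal sketch is shown below. It assumes the AzureMetricsV2 table described above and a workspace (customer) GUID placeholder; the command may require the log-analytics CLI extension.

# If the command is not available, add the extension first: az extension add --name log-analytics
az monitor log-analytics query \
  --workspace <Log Analytics workspace GUID> \
  --analytics-query "AzureMetricsV2 | where TimeGenerated > ago(1h) | take 20" \
  --output table

An empty result within the first few minutes is normal; metrics typically start appearing shortly after the DCR association is created.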
Deploy Smarter, Scale Faster – Secure, AI-Ready, Cost-Effective Kubernetes Apps at Your Fingertips!
In our previous blog post, we explored the exciting launch of Kubernetes Apps on Azure Marketplace. This follow-up blog will take you a step further by demonstrating how to programmatically deploy Kubernetes Apps using tools like Terraform, Azure CLI, and ARM templates. As organizations scale their Kubernetes environments, the demand for secure, intelligent, and cost-effective deployments has never been higher. By programmatically deploying Kubernetes Apps through Azure Marketplace, organizations can harness powerful security frameworks, cost-efficient deployment options, and AI solutions to elevate their Azure Kubernetes Service (AKS) and Azure Arc-enabled clusters. This automated approach significantly reduces operational overhead, accelerates time-to-market, and allows teams to dedicate more time to innovation. Whether you're aiming to strengthen security, streamline application lifecycle management, or optimize AI and machine learning workloads, Kubernetes Apps on Azure Marketplace provide a robust, flexible, and scalable solution designed to meet modern business needs. Let's explore how you can leverage these tools to unlock the full potential of your Kubernetes deployments.

Secure Deployment You Can Trust
- Certified and Secure from the Start – Every Kubernetes app on Azure Marketplace undergoes a rigorous certification process and vulnerability scans before becoming available. Solution providers must resolve any detected security issues, ensuring the app is safe from the outset.
- Continuous Threat Monitoring – After publication, apps are regularly scanned for vulnerabilities. This ongoing monitoring helps maintain the integrity of your deployments by identifying and addressing potential threats over time.
- Enhanced Security with RBAC – Eliminates the need for direct cluster access, reducing the attack surface by managing permissions and deployments through Azure role-based access control (RBAC).

Lowering the Cost of Your Applications
If your organization has a Microsoft Azure Consumption Commitment (MACC) agreement, you can unlock significant cost savings when deploying your applications. Kubernetes Apps available on Azure Marketplace are MACC eligible, and you gain the following benefits:
- Significant Cost Savings and Predictable Expenses – Reduce overall cloud costs with discounts and credits for committed usage, while ensuring stable, predictable expenses to enhance financial planning.
- Flexible and Comprehensive Commitment Usage – Allocate your commitment across various Marketplace solutions to maximize flexibility and value for evolving business needs.
- Simplified Procurement and Budgeting – Benefit from unified billing and streamlined procurement, driving efficiency and performance.

AI-Optimized Apps
- High-Performance Compute and Scalability – Deploy AI-ready apps on Kubernetes clusters with dynamic scaling and GPU acceleration. Optimize performance and resource utilization for intensive AI/ML workloads.
- Accelerated Time-to-Value – Pre-configured solutions reduce setup time, accelerating progress from proof-of-concept to production, while one-click deployments and automated updates keep AI environments up-to-date effortlessly.
- Hybrid and Multi-Cloud Flexibility – Deploy AI workloads seamlessly on AKS or Azure Arc-enabled Kubernetes clusters, ensuring consistent performance across on-premises, multi-cloud, or edge environments, while maintaining portability and robust security.
Lifecycle Management of Kubernetes Apps
- Automated Updates and Patching – The auto-upgrade feature keeps your Kubernetes applications up to date with the latest features and security patches, seamlessly applied during scheduled maintenance windows to ensure uninterrupted operations. The system guarantees consistency and reliability by continuously reconciling the cluster state with the desired declarative configuration, and maintains stability by automatically rolling back unauthorized changes.
- CI/CD Automation with ARM Integration – Leverage ARM-based APIs and templates to automate deployment and configuration, simplifying application management and boosting operational efficiency. This approach enables seamless integration with Azure policies, monitoring, and governance tools, ensuring streamlined and consistent operations.

Flexible Billing Options for Kubernetes Apps
We support a variety of billing models to suit your needs:
- Private Offers for Upfront Billing – Lock in pricing with upfront payments to gain better control and predictability over your expenditures.
- Multiple Billing Models – Choose from flexible billing options to suit your needs, including usage-based billing, where you pay per core, per node, or other usage metrics, allowing you to scale as required. Opt for flat-rate pricing for predictable monthly or annual costs, ensuring financial stability and peace of mind.

Programmatic Deployments of Apps
There are several ways of deploying a Kubernetes app:
- Programmatically deploy using Terraform: Utilize the power of Terraform to automate and manage your Kubernetes applications.
- Deploy programmatically with Azure CLI: Leverage the Azure CLI for straightforward, command-line based deployments (a rough CLI sketch follows after the links below).
- Use ARM templates for programmatic deployment: Define and deploy your Kubernetes applications efficiently with ARM templates.
- Deploy via AKS in the Azure portal: Take advantage of the user-friendly Azure portal for a seamless deployment experience.

We hope this guide has been helpful and has simplified the process of deploying Kubernetes Apps. Stay tuned for more tips and tricks, and happy deploying!

Additional Links:
Get started with Kubernetes Apps: https://aka.ms/deployK8sApp
Find other Kubernetes Apps listed on Azure Marketplace: https://aka.ms/KubernetesAppsInMarketplace
For customer support, please visit: https://learn.microsoft.com/en-us/azure/aks/aks-support-help#create-an-azure-support-request
Partner with us: If you are an ISV or Azure partner interested in listing your Kubernetes App, please visit: http://aka.ms/K8sAppsGettingStarted
Learn more about Partner Benefits: https://learn.microsoft.com/en-us/partner-center/marketplace/overview#why-sell-with-microsoft
For Partner Support, please visit: https://partner.microsoft.com/support/?stage=1
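As promised above, here is a rough sketch of the Azure CLI path. It is not a definitive recipe: the extension-type and plan values are placeholders for whichever Marketplace offer you choose, and the plan flags are an assumption based on the Marketplace Kubernetes Apps flow, so consult the offer's own deployment instructions for exact values.

# The k8s-extension CLI extension is required for cluster extension management
az extension add --name k8s-extension

# Install a Marketplace Kubernetes App as a cluster extension (all offer/plan values are placeholders)
az k8s-extension create \
  --resource-group <resource group> \
  --cluster-name <AKS cluster name> \
  --cluster-type managedClusters \
  --name <extension instance name> \
  --extension-type <publisher.offerName> \
  --plan-name <plan ID> \
  --plan-product <offer ID> \
  --plan-publisher <publisher ID>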
Unlock New AI and Cloud Potential with .NET 9 & Azure: Faster, Smarter, and Built for the Future
.NET 9, now available to developers, marks a significant milestone in the evolution of the .NET platform, pushing the boundaries of performance, cloud-native development, and AI integration. This release, shaped by contributions from over 9,000 community members worldwide, introduces thousands of improvements that set the stage for the future of application development. With seamless integration with Azure and a focus on cloud-native development and AI capabilities, .NET 9 empowers developers to build scalable, intelligent applications with unprecedented ease.

Expanding Azure PaaS Support for .NET 9
With the release of .NET 9, a comprehensive range of Azure Platform as a Service (PaaS) offerings now fully support the platform's new capabilities, including the latest .NET SDK for any Azure developer. This extensive support allows developers to build, deploy, and scale .NET 9 applications with optimal performance and adaptability on Azure. Additionally, developers can access a wealth of architecture references and sample solutions to guide them in creating high-performance .NET 9 applications on Azure's powerful cloud services:
- Azure App Service: Run, manage, and scale .NET 9 web applications efficiently. Check out this blog to learn more about what's new in Azure App Service.
- Azure Functions: Leverage serverless computing to build event-driven .NET 9 applications with improved runtime capabilities.
- Azure Container Apps: Deploy microservices and containerized .NET 9 workloads with integrated observability.
- Azure Kubernetes Service (AKS): Run .NET 9 applications in a managed Kubernetes environment with expanded ARM64 support.
- Azure AI Services and Azure OpenAI Service: Integrate advanced AI and OpenAI capabilities directly into your .NET 9 applications.
- Azure API Management, Azure Logic Apps, Azure Cognitive Services, and Azure SignalR Service: Ensure seamless integration and scaling for .NET 9 solutions.
These services provide developers with a robust platform to build high-performance, scalable, and cloud-native applications while leveraging Azure's optimized environment for .NET.

Streamlined Cloud-Native Development with .NET Aspire
.NET Aspire is a game-changer for cloud-native applications, enabling developers to build distributed, production-ready solutions efficiently. Available in preview with .NET 9, Aspire streamlines app development, with cloud efficiency and observability at its core. The latest updates in Aspire include secure defaults, Azure Functions support, and enhanced container management. Key capabilities include:
- Optimized Azure Integrations: Aspire works seamlessly with Azure, enabling fast deployments, automated scaling, and consistent management of cloud-native applications.
- Easier Deployments to Azure Container Apps: Designed for containerized environments, .NET Aspire integrates with Azure Container Apps (ACA) to simplify the deployment process. Using the Azure Developer CLI (azd), developers can quickly provision and deploy .NET Aspire projects to ACA, with built-in support for Redis caching, application logging, and scalability.
- Built-In Observability: A real-time dashboard provides insights into logs, distributed traces, and metrics, enabling local and production monitoring with Azure Monitor.
With these capabilities, .NET Aspire allows developers to deploy microservices and containerized applications effortlessly on ACA, streamlining the path from development to production in a fully managed, serverless environment.
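To make the azd path mentioned above concrete, the sketch below shows the generic Azure Developer CLI workflow for an Aspire solution; exact prompts and generated resources depend on your project, so treat it as an outline rather than a definitive procedure.

# From the root of a .NET Aspire solution containing an AppHost project
azd init      # initialize the azd environment for this project
azd up        # provision Azure resources (including Azure Container Apps) and deploy
azd deploy    # on later iterations, redeploy code changes without re-provisioning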
Integrating AI into .NET: A Seamless Experience
In our ongoing effort to empower developers, we've made integrating AI into .NET applications simpler than ever. Our strategic partnerships, including collaborations with OpenAI, LlamaIndex, and Qdrant, have enriched the AI ecosystem and strengthened .NET's capabilities. This year alone, usage of Azure OpenAI services has surged to nearly a billion API calls per month, illustrating the growing impact of AI-powered .NET applications.

Real-World AI Solutions with .NET
.NET has been pivotal in driving AI innovations. From internal teams like Microsoft Copilot creating AI experiences with .NET Aspire to tools like GitHub Copilot, developed with .NET to enhance productivity in Visual Studio and VS Code, the platform showcases AI at its best. KPMG Clara is a prime example, developed to enhance audit quality and efficiency for 95,000 auditors worldwide. By leveraging .NET and scaling securely on Azure, KPMG implemented robust AI features aligned with strict industry standards, underscoring .NET and Azure as the backbone for high-performing, scalable AI solutions.

Performance Enhancements in .NET 9: Raising the Bar for Azure Workloads
.NET 9 introduces substantial performance upgrades with over 7,500 merged pull requests focused on speed and efficiency, ensuring .NET 9 applications run optimally on Azure. These improvements contribute to reduced cloud costs and provide a high-performance experience across Windows, Linux, and macOS. To see how significant these performance gains can be for cloud services, take a look at what past .NET upgrades achieved for Microsoft's high-scale internal services:
- Bing achieved a major reduction in startup times, enhanced efficiency, and decreased latency across its high-performance search workflows.
- Microsoft Teams improved efficiency by 50%, reduced latency by 30-45%, and achieved up to 100% gains in CPU utilization for key services, resulting in faster user interactions.
- Microsoft Copilot and other AI-powered applications benefited from optimized runtime performance, enabling scalable, high-quality experiences for users.
Upgrading to the latest .NET version offers similar benefits for cloud apps, optimizing both performance and cost-efficiency. For more information on updating your applications, check out the .NET Upgrade Assistant. For additional details on ASP.NET Core, .NET MAUI, NuGet, and more enhancements across the .NET platform, check out the full Announcing .NET 9 blog post.

Conclusion: Your Path to the Future with .NET 9 and Azure
.NET 9 isn't just an upgrade - it's a leap forward, combining cutting-edge AI integration, cloud-native development, and unparalleled performance. Paired with Azure's scalability, these advancements provide a trusted, high-performance foundation for modern applications. Get started by downloading .NET 9 and exploring its features. Leverage .NET Aspire for streamlined cloud-native development, deploy scalable apps with Azure, and embrace new productivity enhancements to build for the future. For additional insights on ASP.NET, .NET MAUI, NuGet, and more, check out the full Announcing .NET 9 blog post. Explore the future of cloud-native and AI development with .NET 9 and Azure - your toolkit for creating the next generation of intelligent applications.
Karpenter: Run Your Workloads up to 80% Off Using Spot with AKS
Using Spot Nodes with Karpenter
- Add a toleration to the sample AKS vote application, i.e. "karpenter.sh/disruption:NoSchedule", which comes by default on Spot nodes provisioned with the AKS cluster.
- Please refer to my GitHub repo for the application YAML and a sample NodePool configuration.
- Scale down your application replicas to allow Karpenter to evict existing on-demand nodes and replace them with Spot nodes.
- Deploy and scale the vote application replicas so that Karpenter spins up Spot nodes based on the NodePool configuration and schedules the pods after toleration validation on Spot.
- Karpenter spins up new Spot nodes and nominates those nodes for scheduling the sample vote app.

Configuring Multiple NodePools
To configure separate NodePools for Spot and on-demand capacity: Spot nodes are configured with an E-series VM size ("Standard_E2s_v5") and on-demand nodes with a D-series VM size ("Standard_D4s_v5"). In a multi-NodePool scenario, each NodePool needs to be configured with a 'weight' attribute; the NodePool with the highest weight is prioritized over the others. Here the Spot NodePool has weight: 100 and the on-demand NodePool has weight: 60.
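As a quick way to confirm that Karpenter actually placed the workload on Spot capacity, the sketch below uses the upstream Karpenter labels and resource names (karpenter.sh/capacity-type, NodePool, NodeClaim); these are assumptions based on standard Karpenter conventions and may vary slightly depending on your AKS/node auto-provisioning setup.

# List nodes with their Karpenter capacity type and owning NodePool (expect "spot" for the new nodes)
kubectl get nodes -L karpenter.sh/capacity-type,karpenter.sh/nodepool

# Inspect the NodePools and the NodeClaims Karpenter created for them
kubectl get nodepools
kubectl get nodeclaims -o wide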
Welcome to KubeCon + CloudNativeCon India 2024! We're excited to be part of the inaugural event, where we'll highlight the newest advancements in Azure and Azure Kubernetes Service (AKS) and engage with the vibrant cloud-native community in India. We are pleased to announce several new capabilities in Azure Kubernetes Service focused on AI apps development, ease of use, and features to enhance security, scalability, and networking. Here are the key highlights: Simplifying AI Apps Development AI is becoming increasingly crucial as it empowers organizations to leverage cutting-edge technologies to drive innovation and improve their products and services. By providing intuitive tools and extensions, we aim to make AI more accessible to developers, enabling them to deploy and manage AI models with ease. The AI toolchain operator (KAITO) managed add-on is now available in the AKS Visual Studio Code extension. This add-on simplifies AI inference development with an intuitive and visually engaging UI, allowing customers to deploy open-source AI models to their AKS cluster directly from VSCode. AKS plugins in GitHub Copilot for Azure enable various tasks related to AKS directly from the GitHub Copilot Chat view, including creating an AKS cluster, deploying a manifest, and generating kubectl commands. Easily specify your GPU driver type for your Windows GPU node pools to ensure workload and driver compatibility and run compute-intensive Kubernetes workloads. Enhanced Security, Scalability, and Networking Security, scalability, and networking are critical for ensuring the robustness and reliability of Kubernetes deployments. We provide users with the tools they need to maintain high availability and secure their environments, and are rolling out features that improve disaster recovery, protection, and network management. A new managed solution in AKS restricts IMDS endpoint access for customer pods, enhancing overall security. Vaulted backups for AKS enable cross-region disaster recovery, long-term retention, and enhanced security with immutable backups through Azure Backup. Support for private ingress on cluster creation or through API grants users more granular control over ingress controller configuration. Ease of use AKS is also introducing new capabilities to streamline the user experience and reduce the complexity of managing Kubernetes environments. This includes simplifying notifications, defaulting to parallel image pulls in AKS 1.31, improving the UI for automated deployments, and enhancing logging capabilities to help users save time. AKS Communication Manager simplifies notifications for all AKS maintenance tasks, providing timely alerts on event triggers and outcomes. Enhanced AKS logs with Kubernetes metadata and logs filtering, provide richer context and improved visibility into workloads. We’re excited to meet up with you at KubeCon + CloudNativeCon We hope you’re as excited as we are about the first ever KubeCon + CloudNativeCon India 2024. Azure and Kubernetes have some exciting innovations to offer, and we’re eager to share them with you. Be sure to connect with our team on site: Don’t miss the keynote with Microsoft speaker: On Thursday 12 December 2024 at 10:00 AM IST, Lachlan Evenson will deliver a keynote on how to get started in the open-source and Kubernetes community. 
Check out these sessions by Microsoft engineers:
- 11 Dec 2024, 5:40pm - 6:15pm IST: Effortless Clustering: Rethinking ClusterAPI with Systemd-Sysext
- 12 Dec 2024, 11:30am - 12:55pm IST: Flatcar Container Linux Deep Dive: Deploying, Managing and Automating Workloads Securely

Visit the Microsoft booth (G1): Stop by our booth to watch live demos, learn from experts, ask questions, and more.

Demos at Booth G1:
11 Dec 2024
- Running LLMs on Azure Kubernetes Service with KAITO
- Partner demo: Cost optimization for AI on Kubernetes with CAST AI
- Persistent storage options for Kubernetes deployment
- Azure Linux for AKS
- Azure Backup for AKS
- Managed Prometheus and Grafana for AKS
- Application security in Kubernetes
- Enhance the security of your container images with Continuous Patching
- Partner demo: Ultra-fast testing with HyperExecute on AKS
12 Dec 2024
- End-to-end developer experience with AKS Automatic
- Application Gateway for Containers
- Partner demo: Choreo Internal Developer Platform
- Azure Container Networking Services
- Workload Identity at scale with SpinKube on AKS
- Securing AKS deployments with Azure Firewall
- AKS add-ons: KEDA, Dapr, NAP, and more

We look forward to connecting with you and hearing your feedback and suggestions. You can also follow us on X for more updates and news. Happy KubeCon + CloudNativeCon!
Build and Modernize Intelligent Java Apps at Scale
Java on Microsoft Azure
Java customers and developers are constantly exploring how they can bring their Java applications to the cloud. Some are looking to modernize existing applications, while others are building new cloud-native solutions from scratch. With these changes, they need a platform that lets them keep working the way they know, without sacrificing control or performance. That's where Microsoft Azure comes in. As a company, Microsoft is committed to making Java developers as efficient and productive as possible, empowering them to use any tool, framework, and application server on any operating system. Microsoft Azure makes it easy to work with the tools and frameworks Java developers already know and love. Whether using IntelliJ, Eclipse, or VS Code, or managing dependencies with Maven or Gradle, developers can keep using their preferred setup. Azure supports trusted Java application servers and popular open-source tools like Spring Boot, JBoss EAP, and WebLogic, making the transition to the cloud seamless and natural. Scaling on Azure is designed with simplicity and security in mind. Developers can count on built-in tools for monitoring, automation, data support, and caching, along with robust security features. With Azure's flexible services, they can scale confidently, manage costs, and build resilient applications that meet business needs. Azure provides everything Java developers need to build and modernize their applications at scale, letting them do so on their own terms.

Tooling for Java app migration and modernization priorities
Moving your Java applications to the cloud is easier with the right tools. Azure offers a full set of solutions for every type of migration, whether you are rehosting, re-platforming, refactoring, or rearchitecting. These tools work together to help you transition smoothly, allowing you to work faster, more efficiently, and with greater insight. With Azure, you can achieve meaningful results for your business as you modernize your applications.

Azure Migrate and partner-built solutions
Azure Migrate is a key resource in this process. It provides a holistic view of your server and application estate and generates a cloud-readiness report. With app centricity, you can now assess applications at a portfolio level rather than server by server. This makes it easier for IT decision-makers to plan migrations on a larger scale while aligning with business priorities. In addition to Azure Migrate, you can leverage several partner-built solutions such as CAST, Unify, Dr. Migrate, and others to support additional use cases and scenarios.

Azure Migrate application and code assessment
For developers, Azure Migrate's application and code assessment tool (AppCAT) offers in-depth code scanning for Java applications. With this tool, you can assess the code changes needed to run your apps in the cloud right from within your preferred terminal, like Bash. GitHub Copilot Chat integration further simplifies the planning process, making it easy to explore modernization options through a conversational approach. AppCAT is especially useful for detailed assessments for refactoring and rearchitecting.

GitHub Copilot upgrade assistant for Java
A major advancement in this toolkit is the new GitHub Copilot upgrade assistant for Java. Upgrading Java code, runtimes, frameworks, and dependencies can be time-consuming, but with the upgrade assistant, you can streamline the process significantly.
Start with your local Java project, receive an AI-powered upgrade strategy, and let Copilot handle the bulk of the work. This powerful tool helps you modernize faster, allowing you to focus on building intelligent applications at scale with confidence. Ready to save time upgrading Java? You can apply for the waitlist to the Technical Preview right here – aka.ms/GHCP-for-Java. This early access is open to a limited number of customers, so we encourage you to sign up soon and share your feedback!

Deploy and Scale Java Apps on Azure
The Java ecosystem is diverse, encompassing technologies like Java SE, Jakarta EE, Spring, and various application servers. Whatever your Java workload – whether building a monolithic app or a cloud-native microservice – Azure provides a comprehensive platform to support it. Azure offers multiple deployment paths to help meet your specific project goals. For those migrating existing Java applications, infrastructure-as-a-service (IaaS) options like Azure Virtual Machines allow you to lift and shift applications without significant re-architecture. Meanwhile, container options, such as Azure Kubernetes Service (AKS), Azure Container Apps, and Azure Red Hat OpenShift, make it easier to manage Java applications in containers. Fully managed platform-as-a-service (PaaS) offerings, like Azure App Service, provide out-of-the-box scalability, DevOps integration, and automation for streamlined management. The following diagram shows recommended Azure services for every Java application type deployed as source or binaries. The following diagram shows the recommended Azure services for every Java application type deployed as containers.

Building on Azure's reputation as a versatile platform for various applications, we now turn our focus to three specific offerings that demonstrate this flexibility. Today, we highlight JBoss EAP on Azure App Service, Java on Azure Container Apps, and WebSphere Liberty on Azure Kubernetes Service, and how to quickly bring your apps to production with the Landing Zone Accelerator. We will also walk you through how to build and modernize intelligent Java apps at scale with the latest AI tools and models.

JBoss EAP on Azure App Service
Azure App Service offers a fully managed platform with specific enhancements for Java, making it an excellent choice for running enterprise Java applications. Recently, several updates have been introduced to bring even greater value to Java developers using JBoss EAP on App Service:
- Reduced Licensing Costs: Licensing fees for JBoss EAP on App Service have been cut by over 60%, making it more accessible to a wider range of users.
- Free Tier Availability: A new free tier is available for those interested in testing the service without an upfront cost, providing an easy entry point for trials and evaluation.
- Affordable Paid Tiers: Lower-cost paid tiers of the App Service Plan for JBoss EAP have been introduced, catering to businesses seeking a cost-effective, production-ready environment.
- Bring Your Own License Support: Soon, customers will be able to apply existing Red Hat volume licenses to further reduce operational costs, adding flexibility for organizations already invested in Red Hat JBoss EAP.
These updates provide significant savings, making JBoss EAP on App Service a smart choice for those looking to optimize costs while running Java applications on a reliable, managed platform.
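To make the JBoss EAP option more concrete, here is a minimal sketch of creating a JBoss EAP web app on App Service with the Azure CLI. The exact runtime string is an assumption and varies by CLI version, so list the currently available values first and adjust accordingly.

# List available Linux runtimes to find the current JBoss EAP value (e.g. something like "JBOSSEAP:8-java17")
az webapp list-runtimes --os-type linux

# Create a Linux App Service plan and a JBoss EAP web app (runtime string assumed; adjust to the list above)
az appservice plan create --resource-group <resource group> --name <plan name> --is-linux --sku P1V3
az webapp create --resource-group <resource group> --plan <plan name> --name <app name> \
  --runtime "JBOSSEAP:8-java17"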
Java on Azure Container Apps
Azure Container Apps is a popular serverless platform for Java developers who want to deploy and scale containerized applications with minimal management overhead. Designed for microservices, APIs, and event-driven workloads, Azure Container Apps makes it simple to scale applications from zero up to millions of requests, adapting dynamically to meet real-time demand. Azure Container Apps includes several features tailored specifically for Java:
- Managed Components for Java: With built-in Spring Cloud services like Service Registry and Config Server, managing Java applications is straightforward. These components simplify service registration, discovery, and configuration management.
- Enhanced Java Monitoring: Azure Monitor provides Java-specific insights, giving developers visibility into their applications and enabling proactive management with detailed metrics.
- Effortless Scaling: Container Apps can scale down to zero during periods of low demand and scale out as traffic grows, helping optimize costs. The platform also supports GPU-enabled workloads, perfect for AI-powered Java applications.
This fully managed platform supports a range of Java frameworks and runtimes, from Spring Boot to Quarkus to Open Liberty and beyond. With built-in DevOps, secure networking, role-based access, and pay-as-you-go pricing, Azure Container Apps offers a powerful and flexible foundation to build, deploy, and monitor any Java application type.

WebSphere Liberty on Azure Kubernetes Service
IBM's WebSphere is one of the most widely used middleware platforms globally, especially in large enterprises. Many organizations rely on WebSphere Traditional applications, which have strong market penetration in enterprise environments. As IBM focuses on cloud-native solutions, it is encouraging organizations to migrate from WebSphere Traditional to WebSphere Liberty - a more modern, cloud-native Java runtime. With Azure Kubernetes Service, this migration becomes straightforward and manageable, allowing organizations to bring existing WebSphere Traditional apps into a more flexible, scalable environment.

Why Azure Kubernetes Service? AKS provides a powerful platform for running containerized Java applications without the complexity of setting up and maintaining Kubernetes yourself. It's a fully managed Kubernetes service, integrated end-to-end with Azure's foundational infrastructure, CI/CD, registry, monitoring, and managed services. Because AKS is based on vanilla Kubernetes, all Kubernetes tools work, and there's no risk of lock-in. AKS offers global availability, enterprise-grade security, automated upgrades, and compliance, making it a reliable choice for organizations aiming to modernize WebSphere applications. Competitive pricing and cost optimization make AKS even more attractive.

Why Transform to WebSphere Liberty? WebSphere Liberty, along with Open Liberty, offers compatibility with WebSphere Traditional, creating an easy migration path. Liberty is a lightweight, modular runtime that's better suited for cloud-native applications. It reduces resource costs, requiring less memory and CPU than WebSphere Traditional, and has quicker startup times. Liberty also embraces modern standards, like the Jakarta EE Core Profile and MicroProfile, making it ideal for cloud-native applications. Organizations can even re-purpose existing WebSphere Traditional licenses, significantly reducing migration costs.

Running WebSphere Liberty on Azure Kubernetes Service is simple and flexible.
IBM and Microsoft have certified Liberty on AKS, providing a reliable path for enterprises to move their WebSphere applications to the cloud. With a solution template available in the Azure Marketplace, you can deploy WebSphere Liberty on AKS in a few clicks. This setup works with both new and existing AKS clusters, as well as any container registry, allowing you to deploy quickly and scale as needed. By combining WebSphere Liberty with AKS, you gain the agility of containers and Kubernetes, along with the robust features of a cloud-native runtime on a trusted enterprise platform.

Build Right and Fast! Build Your Java or Spring Apps Environment: Development, Test, or Production in Just 15-30 Minutes with the Landing Zone Accelerator!
To ensure the scalability and quality of your cloud journey, we re-introduce Landing Zone Accelerators, specifically designed for Azure app destinations such as App Service, Azure Container Apps, and Azure Kubernetes Service. An accelerator allows you to establish secure, compliant, and scalable development, test, or production environments within 15-30 minutes. Adhering to Azure's best practices and embedding security by default, a Landing Zone Accelerator ensures that your cloud transition is not only swift but also robust and scalable. It paves the way for both application and platform teams to thrive in the cloud environment. From realizing cost efficiency to streamlining your migration and modernization journey to ensuring the scalability of your cloud operations, our goal is to demonstrate how your cloud transition can drive innovation and efficiency, and accelerate business value. The Landing Zone Accelerators for App Service, Azure Container Apps, and Azure Kubernetes Service represent an authoritative, proven, and prescriptive infrastructure-as-code solution, designed to assist enterprise customers in establishing a robust environment for deploying Java, Spring, and polyglot apps. It not only expedites the deployment process but also provides a comprehensive design framework, allowing for the clear planning and design of Azure environments based on established standards.

Build Intelligent Java Apps at Scale
Today, many enterprise applications are built with Java. As AI grows in popularity and delivers greater business outcomes, Java developers wonder how to integrate it with their apps. Python is popular for AI - particularly for model building, deploying and fine-tuning LLMs, and data handling - but moving an app to a new language can be complex and costly. Instead, Java developers can use Azure to combine their Java apps with AI, building intelligent apps without needing to master Python. Azure makes it simple to bring AI into your existing Java applications. Many customers are already using the Azure platform to add intelligence to their Java apps, delivering more value to their businesses. Whether starting fresh or modernizing existing systems, Azure provides the tools needed to build powerful, intelligent applications that scale.

Modernize and Build New Intelligent Apps with Azure
Wherever you are in your cloud journey, Azure helps you modernize and build intelligent apps. Azure offers app platform services, data handling at scale, and AI tools that make it easy to create applications that deliver meaningful business value. Intelligent apps can drive growth, amplify team capabilities, and improve productivity. With Azure, you can bring your Java apps into the future and stay ahead of the competition.

The Right Tools for Intelligent Java Apps
Building intelligent applications requires a strong foundation. Azure provides essential services like a robust cloud platform, scalable data solutions, and AI tools, including pretrained models and responsible AI practices. These tools ensure your apps are efficient, scalable, and aligned with best practices. Azure offers several key services for this:
- Azure AI Studio: A one-stop platform for experimenting with and deploying AI solutions. It provides tools for model benchmarking, solution testing, and monitoring, making it easy to develop use cases like customer segmentation and predictive maintenance.
- Azure OpenAI Service: With access to advanced AI models like GPT-4, this service is ideal for content generation, summarization, and semantic search. Build chatbots, create marketing content, or add AI-driven features to your Java apps.
- Azure Machine Learning: An end-to-end platform for building and deploying machine learning models. It supports various use cases such as recommendation systems, predictive analytics, and anomaly detection. MLOps capabilities ensure your models are continuously improved and managed.
- Azure AI Search: Uses retrieval-augmented generation (RAG) technology for powerful search capabilities. Enhance the user experience with intelligent search options, helping users quickly find relevant information.
- Azure Cosmos DB: A globally distributed, multi-model database service ideal for high-performance, low-latency applications. It offers turnkey global distribution, automatic scalability, and integration with other Azure services, making it a strong choice for intelligent apps that handle large amounts of data.
- Azure Database for PostgreSQL with pgvector: This managed PostgreSQL service now includes the pgvector extension, designed for handling vector embeddings in AI applications. It's a valuable tool for applications requiring fast, similarity-based searches and supports recommendation engines, semantic search, and personalization.
- Azure AI Infrastructure: Provides high-performance infrastructure for AI workloads. Whether training large models or performing real-time inference, Azure's AI infrastructure meets demanding needs.

Get Started with AI in Java
If you are a Java app developer, now is a great time to start integrating AI into your apps. Spring developers can use Spring AI for quick integration, and developers using Quarkus, Jakarta EE, or any other app type can take advantage of LangChain4j. You can also use the Microsoft Azure AI client libraries for Java. No matter what your framework is, Azure has the tools to help you add intelligence to your applications.

Meet the Java team at Microsoft Ignite 2024
Come meet the Java team at Microsoft Ignite 2024! Join our breakout session, "Java on Azure: Modernize and scale enterprise Java applications on Azure" (BRK147), for a close look at the newest ways to build, scale, and modernize Java apps on Azure. In this session, our engineers and product experts will share the latest updates and tools for Java developers. You'll learn about cost-saving options, new cloud tools, and how to add smart features to your apps. This is a session for all Java developers, whether you're moving apps to the cloud or building cloud-native apps from scratch. Everyone can join - either in person at Ignite or virtually from anywhere in the world. The virtual option is free, so you can attend without leaving your desk.
Don't miss the chance to connect with the Java team, ask questions, and get tips to make your Java apps succeed on Azure!

Start Today! Join Us at Java + AI Events Worldwide
- Sign up for upcoming Java and AI events like JDConf 2025 and JavaOne 2025. You'll also find our developer advocates sharing insights and tips at Java conferences and events around the world.
- Begin framing your app migration plans with resources to guide you through each step. Get started here – aka.ms/Start-Java.
- Explore the docs and deploy your first Java or Spring app in the cloud. Follow the quick steps here – aka.ms/Java-Hub.
- Use our tools and information to build a plan and show your leaders the benefits of Java app modernization. Get the details here – azure.com/Java.
Start building, planning, and exploring Azure for Java today!
Exciting Updates Coming to Conversational Diagnostics (Public Preview)
Last year, at Ignite 2023, we unveiled Conversational Diagnostics (Preview), a revolutionary tool integrated with AI-powered capabilities to enhance problem-solving for Windows Web Apps. This year, we're thrilled to share what's new and forthcoming for Conversational Diagnostics (Preview). Get ready to experience a broader range of functionalities and expanded support across various Azure products, making your troubleshooting journey even more seamless and intuitive.