Announcing Native Azure Functions Support in Azure Container Apps
Azure Container Apps is introducing a new, streamlined method for running Azure Functions directly in Azure Container Apps (ACA). This integration lets you leverage the full features and capabilities of Azure Container Apps while benefiting from the auto-scaling simplicity of Azure Functions. With the new native hosting model, you can deploy Azure Functions directly onto Azure Container Apps through the Microsoft.App resource provider by setting the "kind=functionapp" property on the container app resource. You can deploy Azure Functions using ARM templates, Bicep, the Azure CLI, or the Azure portal. Get started today and explore the complete feature set of Azure Container Apps, including multi-revision management, easy authentication, metrics and alerting, health probes, and more. To learn more, visit: https://aka.ms/fnonacav2
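As a minimal sketch of the native hosting model described above: it assumes a recent Azure CLI version that exposes a --kind flag on az containerapp create, so treat the flag and the quickstart image as assumptions rather than a definitive recipe; the resource names are placeholders.

```azurecli
# Sketch: deploy a Functions image as a container app with kind=functionapp
# (assumes CLI support for --kind; names and image are placeholders)
az containerapp create \
  --name my-func-on-aca \
  --resource-group my-rg \
  --environment my-aca-environment \
  --image mcr.microsoft.com/azure-functions/dotnet8-quickstart-demo:1.0 \
  --kind functionapp \
  --ingress external \
  --target-port 80
```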
Optimize Azure Functions for Performance and Costs using Azure Load Testing

Performance Optimizer is a tool that helps you find the optimal balance between cost and performance for your Azure Functions. It runs load tests on different configurations and recommends the best one for your app.
Azure Functions Flex Consumption is now generally available

We are excited to announce that Azure Functions Flex Consumption is now generally available. This hosting plan provides the highest performance for Azure Functions, with concurrency-based scaling for both HTTP and non-HTTP triggers, scale from zero to 1000 instances, and no cold start with the Always Ready feature. Flex Consumption also allows you to enjoy seamless integration with your virtual network at no extra cost, ensuring secure and private communication with no considerable impact on your app's scale-out performance. Learn more about How to achieve high HTTP scale with Azure Functions Flex Consumption, the engineering innovation behind it, and project Legion, the platform behind Flex Consumption.

In addition to fast scaling based on per-instance concurrency, you can choose between 2048 MB and 4096 MB instance sizes. As the function app receives requests, it automatically scales from zero to as many instances of that size as needed, based on per-instance concurrency, and back to zero for cost efficiency when there are no more requests to process. You can also take advantage of the built-in integration with Azure Load Testing and the Performance Optimizer to optimize your HTTP functions for performance and cost.

Flex Consumption is now generally available for .NET 8 on the isolated worker model, Java 11, Java 17, Node 20, PowerShell 7.4, Python 3.10, and Python 3.11 in Australia East, East Asia, East US, North Europe, Southeast Asia, Sweden Central, UK South, and West US 2, and in preview in East US 2, South Central US, and West US 3. By December 9th, 2024, .NET 9 will also be generally available in Australia East, East Asia, East US, North Europe, Southeast Asia, Sweden Central, and UK South.

Besides the currently supported DevOps and dev tools like VS Code, Java tooling, Azure Pipeline tasks, and GitHub Actions, you can now use the latest Visual Studio 2022 v17.12 update or newer to create and publish to Flex Consumption apps.

The Flex Consumption plan offers competitive pricing with flexible options to fit your needs, with GA pricing taking effect on December 1, 2024. For detailed pricing information, please refer to the pricing page.

Customer adoption and scenarios

We have been working with several internal and external customers during the public preview period, with hundreds of external customers actively using Flex Consumption.

"At Yggdrasil, we immediately started adopting Flex Consumption functions when they went into public preview, as they offer the combination of cost-efficiency, scalability, and security features we need to run our company. We already have 100 Flex Consumption functions running in production, and expect to move at least another 50 functions now that the product has reached GA. We migrated to Flex from Consumption to have VNet integration and private endpoints." – Andreas Strandfelt, Partner & Senior Cloud Specialist at Yggdrasil Commodities ApS

"What really matters to us is that the app scales up and down based on demand. Azure Functions Flex Consumption is very appealing to us because of how it dynamically scales based on the number of messages that are queued up in Azure Event Hubs." – Stephan Miehe, GitHub Senior Director. Public case study

"We had a need to process a large queue, representing a significant volume of data with inconsistent availability.
Azure Functions Flex Consumption dramatically simplified the code footprint needed to perform this embarrassingly parallel task and helped us complete it in a much shorter timeframe than we had expected." – Craig Presti, Office of the CTO, Microsoft AI

Going Forward

In the upcoming months we look forward to rolling out even more features for Flex Consumption, including:

- Availability zones: Enabling availability zones will be possible for new and existing Flex Consumption apps.
- 512 MB instance size: We will introduce a new, smaller instance size for more granular control.
- Enhanced tooling support: PowerShell modules and Terraform AzureRM support.
- New language versions: Support for the latest language versions like Node 22, Python 3.12, and Java 21.
- Expanded regional availability: The number of regions will continue to expand in early 2025, with UAE North, Central US, West US 3, South Central US, East US 2, West US, Canada Central, France Central, and Norway East coming first.
- Metrics support: Full Azure Monitor metrics support for Flex Consumption apps.
- Deployment improvements: Zero-downtime deployment to ensure no disruption to running executions.
- More triggers: Kafka and SQL triggers.
- Closing features: Addressing the limitations identified in Considerations.

Please let us know which ones are most important to you!

Get Started!

Explore our reference samples, quickstarts, and comprehensive documentation to get started with the Azure Functions Flex Consumption hosting plan today!
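As a hedged illustration, here is a minimal Azure CLI sketch for creating a Flex Consumption app. It assumes the --flexconsumption-location and --instance-memory parameters from the current CLI quickstarts; the resource group, app, and storage account names are placeholders.

```azurecli
# Sketch: create a Flex Consumption app (names are placeholders)
az functionapp create \
  --resource-group my-rg \
  --name my-flex-app \
  --storage-account mystorageacct \
  --flexconsumption-location eastus \
  --runtime python \
  --runtime-version 3.11 \
  --instance-memory 2048   # or 4096
```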
Keep Your Azure Functions Up to Date: Identify Apps Running on Retired Versions

Running Azure Functions on retired language versions can lead to security risks, performance issues, and potential service disruptions. While the Azure Functions team notifies users about upcoming retirements through the portal, emails, and warnings, identifying affected Function Apps across multiple subscriptions can be challenging. To simplify this, we've provided Azure CLI scripts to help you:

✅ Identify all Function Apps using a specific runtime version
✅ Find apps running on unsupported or soon-to-be-retired versions
✅ Take proactive steps to upgrade and maintain a secure, supported environment

Read on for the full set of Azure CLI scripts and instructions on how to upgrade your apps today!

Why Upgrading Your Azure Functions Matters

Azure Functions supports six different programming languages, with new stack versions being introduced and older ones retired regularly. Staying on a supported language version is critical to ensure:

- Continued access to support and security updates
- Avoidance of performance degradation and unexpected failures
- Compliance with best practices for cloud reliability

Failure to upgrade can lead to security vulnerabilities, performance issues, and unsupported workloads that may eventually break. Azure's language support policy follows a structured deprecation timeline, which you can review here.

How Will You Know When a Version Is Nearing Its End-of-Life?

The Azure Functions team communicates retirements well in advance through multiple channels:

- Azure portal notifications
- Emails to subscription owners
- Warnings in client tools and the Azure portal UI when an app is running on a version that is either retired, or about to be retired in the next 6 months
- The official Azure Functions Supported Languages document here

To help you track these changes, we recommend reviewing the language version support timelines in the Azure Functions Supported Languages document. However, identifying all affected apps across multiple subscriptions can be challenging. To simplify this process, I've built some Azure CLI scripts below that can help you list all impacted Function Apps in your environment.
Linux* Function Apps with their language stack versions:

```azurecli
az functionapp list --query "[?siteConfig.linuxFxVersion!=null && siteConfig.linuxFxVersion!=''].{Name:name, ResourceGroup:resourceGroup, OS:'Linux', LinuxFxVersion:siteConfig.linuxFxVersion}" --output table
```
*Running on Elastic Premium and App Service Plans

Linux* Function Apps on a specific language stack version (e.g., Node.js 18):

```azurecli
az functionapp list --query "[?siteConfig.linuxFxVersion=='Node|18'].{Name:name, ResourceGroup:resourceGroup, OS:'Linux', LinuxFxVersion:siteConfig.linuxFxVersion}" --output table
```
*Running on Elastic Premium and App Service Plans

Windows Function Apps only:

```azurecli
az functionapp list --query "[?!contains(kind, 'linux')].{Name:name, ResourceGroup:resourceGroup, OS:'Windows'}" --output table
```

Windows Function Apps with their language stack versions:

```powershell
az functionapp list --query "[?!contains(kind, 'linux')].{name: name, resourceGroup: resourceGroup}" -o json | ConvertFrom-Json | ForEach-Object {
    $appSettings = az functionapp config appsettings list -n $_.name -g $_.resourceGroup --query "[?name=='FUNCTIONS_WORKER_RUNTIME' || name=='WEBSITE_NODE_DEFAULT_VERSION']" -o json | ConvertFrom-Json
    $siteConfig  = az functionapp config show -n $_.name -g $_.resourceGroup --query "{powerShellVersion: powerShellVersion, netFrameworkVersion: netFrameworkVersion, javaVersion: javaVersion}" -o json | ConvertFrom-Json
    $runtime = ($appSettings | Where-Object { $_.name -eq 'FUNCTIONS_WORKER_RUNTIME' }).value
    $version = switch ($runtime) {
        'node'       { ($appSettings | Where-Object { $_.name -eq 'WEBSITE_NODE_DEFAULT_VERSION' }).value }
        'powershell' { $siteConfig.powerShellVersion }
        'dotnet'     { $siteConfig.netFrameworkVersion }
        'java'       { $siteConfig.javaVersion }
        default      { 'Unknown' }
    }
    [PSCustomObject]@{
        Name          = $_.name
        ResourceGroup = $_.resourceGroup
        OS            = 'Windows'
        Runtime       = $runtime
        Version       = $version
    }
} | Format-Table -AutoSize
```

Windows Function Apps running on the Node.js runtime:

```powershell
az functionapp list --query "[?!contains(kind, 'linux')].{name: name, resourceGroup: resourceGroup}" -o json | ConvertFrom-Json | ForEach-Object {
    $appSettings = az functionapp config appsettings list -n $_.name -g $_.resourceGroup --query "[?name=='FUNCTIONS_WORKER_RUNTIME' || name=='WEBSITE_NODE_DEFAULT_VERSION']" -o json | ConvertFrom-Json
    $runtime = ($appSettings | Where-Object { $_.name -eq 'FUNCTIONS_WORKER_RUNTIME' }).value
    if ($runtime -eq 'node') {
        $version = ($appSettings | Where-Object { $_.name -eq 'WEBSITE_NODE_DEFAULT_VERSION' }).value
        [PSCustomObject]@{
            Name          = $_.name
            ResourceGroup = $_.resourceGroup
            OS            = 'Windows'
            Runtime       = $runtime
            Version       = $version
        }
    }
} | Format-Table -AutoSize
```

Windows Function Apps running on a specific language version (e.g., Node.js 18):

```powershell
az functionapp list --query "[?!contains(kind, 'linux')].{name: name, resourceGroup: resourceGroup}" -o json | ConvertFrom-Json | ForEach-Object {
    $appSettings = az functionapp config appsettings list -n $_.name -g $_.resourceGroup --query "[?name=='FUNCTIONS_WORKER_RUNTIME' || name=='WEBSITE_NODE_DEFAULT_VERSION']" -o json | ConvertFrom-Json
    $runtime     = ($appSettings | Where-Object { $_.name -eq 'FUNCTIONS_WORKER_RUNTIME' }).value
    $nodeVersion = ($appSettings | Where-Object { $_.name -eq 'WEBSITE_NODE_DEFAULT_VERSION' }).value
    if ($runtime -eq 'node' -and $nodeVersion -eq '~18') {
        [PSCustomObject]@{
            Name          = $_.name
            ResourceGroup = $_.resourceGroup
            OS            = 'Windows'
            Runtime       = $runtime
            Version       = $nodeVersion
        }
    }
} | Format-Table -AutoSize
```

All Windows Function Apps running on unsupported language runtimes (as of March 2025):
```powershell
az functionapp list --query "[?!contains(kind, 'linux')].{name: name, resourceGroup: resourceGroup}" -o json | ConvertFrom-Json | ForEach-Object {
    $appSettings = az functionapp config appsettings list -n $_.name -g $_.resourceGroup --query "[?name=='FUNCTIONS_WORKER_RUNTIME' || name=='WEBSITE_NODE_DEFAULT_VERSION']" -o json | ConvertFrom-Json
    $siteConfig  = az functionapp config show -n $_.name -g $_.resourceGroup --query "{powerShellVersion: powerShellVersion, netFrameworkVersion: netFrameworkVersion}" -o json | ConvertFrom-Json
    $runtime = ($appSettings | Where-Object { $_.name -eq 'FUNCTIONS_WORKER_RUNTIME' }).value
    $version = switch ($runtime) {
        'node' {
            $nodeVer = ($appSettings | Where-Object { $_.name -eq 'WEBSITE_NODE_DEFAULT_VERSION' }).value
            if ([string]::IsNullOrEmpty($nodeVer)) { 'Unknown' } else { $nodeVer }
        }
        'powershell' { $siteConfig.powerShellVersion }
        'dotnet'     { $siteConfig.netFrameworkVersion }
        default      { 'Unknown' }
    }
    # Check if the runtime version is unsupported
    $isUnsupported = switch ($runtime) {
        'node' {
            $ver = $version -replace '~', ''
            [double]$ver -le 16
        }
        'powershell' {
            $ver = $version -replace '~', ''
            [double]$ver -le 7.2
        }
        'dotnet' {
            $ver = $siteConfig.netFrameworkVersion
            $ver -notlike 'v7*' -and $ver -notlike 'v8*'
        }
        default { $false }
    }
    if ($isUnsupported) {
        [PSCustomObject]@{
            Name          = $_.name
            ResourceGroup = $_.resourceGroup
            OS            = 'Windows'
            Runtime       = $runtime
            Version       = $version
        }
    }
} | Format-Table -AutoSize
```

Take Action Now

By using these scripts, you can proactively identify and update Function Apps before they reach end-of-support status; a sketch of the upgrade commands follows below. Stay ahead of runtime retirements and ensure the reliability of your Function Apps. For step-by-step instructions to upgrade your Function Apps, check out the Azure Functions language version upgrade guide. For more details on Azure Functions' language support lifecycle, visit the official documentation. Have any questions? Let us know in the comments below!
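As a hedged illustration of what an upgrade can look like once you've identified an affected app, the commands below bump a Linux app's stack version and a Windows Node.js app's version setting. The app and resource group names are placeholders; always follow the upgrade guide for your specific stack.

```azurecli
# Sketch: upgrade a Linux Function App from Node 18 to Node 20 (names are placeholders)
az functionapp config set --name my-func-app --resource-group my-rg --linux-fx-version "Node|20"

# Sketch: for a Windows Function App on Node.js, update the version app setting instead
az functionapp config appsettings set --name my-func-app --resource-group my-rg \
  --settings WEBSITE_NODE_DEFAULT_VERSION=~20
```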
Announcing Public Preview of Larger Container Sizes on Azure Container Instances

Azure Container Instances (ACI) provides a fast and simple way to run containers in the cloud. As a serverless solution, ACI eliminates the need to manage underlying infrastructure, automatically scaling to meet application demands. Customers benefit from using ACI because it offers flexible resource allocation, pay-per-use pricing, and rapid deployment, making it easier to focus on development and innovation without worrying about infrastructure management.

Today, we are excited to announce the public preview of larger container sizes on Azure Container Instances. Customers can now deploy workloads with higher vCPU and memory for standard containers, confidential containers, containers with virtual networks, and containers utilizing virtual nodes to connect to Azure Kubernetes Service (AKS). ACI now supports vCPU counts greater than 4 and memory capacities greater than 16 GB, up to a maximum of 32 vCPU and 256 GB for standard containers and a maximum of 32 vCPU and 192 GB for confidential containers.

Benefits of Larger Container Sizes on ACI

Enhanced Performance

More vCPUs mean more processing power, allowing for more efficient handling of complex tasks and applications. The enhanced performance from more vCPUs and larger memory capacity offers faster processing times and reduced latency, which can translate to cost savings in terms of time and productivity. Larger container groups with more memory can handle bigger datasets and more extensive workloads, making them ideal for data-intensive applications.

Simplified Scalability

Larger container groups provide the flexibility to scale up resources even higher as needed, accommodating growing business demands without compromising performance. Larger container SKUs can also simplify the scaling process: instead of managing many smaller containers, you can scale your applications with fewer, larger ones, potentially reducing the need for frequent scaling adjustments.

Scenarios for Larger Container Sizes

Data Inferencing

Larger container SKUs are ideal for data inferencing tasks that require robust computational power. Examples include real-time fraud detection in financial transactions, predictive maintenance in manufacturing, and personalized recommendation engines in e-commerce. These containers ensure efficient and secure processing of large datasets for accurate predictions and insights.

Collaborative Analytics

When multiple parties need to share and analyze data, larger container SKUs provide a secure and efficient solution. For instance, companies in healthcare can collaborate on patient data analytics while maintaining confidentiality. Similarly, research institutions can share large datasets for scientific studies without compromising data privacy.

Big Data Processing

Organizations dealing with large-scale data processing can benefit from the enhanced capacity of larger container SKUs. Examples include processing customer data for targeted marketing campaigns, analyzing social media trends for sentiment analysis, and conducting large-scale financial modeling for risk assessment. These containers ensure efficient handling of extensive workloads.

High-Performance Computing

High-performance computing applications, such as climate modeling, genomic research, and computational fluid dynamics, demand substantial computational power. Larger container SKUs provide the necessary resources to support these intensive tasks, enabling precise simulations and faster results.

How to start using Larger Container Sizes

To begin using larger container sizes, follow these steps.
If you plan to run containers larger than 4 vCPU and 16 GB, you must first request quota. Once your quota has been allocated, you can deploy your container groups through the Azure portal, Azure CLI, PowerShell, an ARM template, or any other method that allows you to connect to your container groups in Azure; a deployment sketch follows the tutorials below. Here are some tutorials for how to deploy containers using different methods:

- Quickstart - Deploy Docker container to container instance - Azure CLI - Azure Container Instances | Microsoft Learn
- Quickstart - Deploy Docker container to container instance - Portal - Azure Container Instances | Microsoft Learn
- Quickstart - Deploy Docker container to container instance - PowerShell - Azure Container Instances | Microsoft Learn
- Quickstart - create a container instance - Bicep - Azure Container Instances | Microsoft Learn
- Quickstart - Create a container instance - Azure Resource Manager template - Azure Container Instances | Microsoft Learn

To learn more about Azure Container Instances, see Serverless containers in Azure - Azure Container Instances | Microsoft Learn.
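As a sketch, assuming your larger-size quota request has been approved and the preview is available in your region, a CLI deployment beyond the old 4 vCPU / 16 GB ceiling might look like this (the resource names and sample image are placeholders):

```azurecli
# Sketch: deploy a container group larger than 4 vCPU / 16 GB
# (assumes approved quota; names and image are placeholders)
az container create \
  --resource-group my-rg \
  --name large-aci-demo \
  --image mcr.microsoft.com/azuredocs/aci-helloworld \
  --os-type Linux \
  --cpu 8 \
  --memory 64
```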
Announcing GA for Azure Container Apps Serverless GPUs

Azure Container Apps Serverless GPUs accelerated by NVIDIA are now generally available. Serverless GPUs enable you to seamlessly run AI workloads with per-second billing and scale down to zero when not in use, reducing the operational overhead of real-time custom model inferencing and other GPU-accelerated workloads. Serverless GPUs accelerate AI development teams by allowing customers to focus on core AI code rather than managing infrastructure when using GPUs. This provides an excellent middle-layer option between the Azure AI Model Catalog's serverless APIs and hosting custom models on managed compute.

Now customers can build their own serverless API endpoints for inferencing AI models, including custom models. Customers can also provision on-demand GPU-powered Jupyter Notebooks or run other compute-intensive AI workloads that are ephemeral in nature. Serverless GPUs provide full data governance, as your data never leaves the boundaries of the container, while still providing a managed, serverless platform from which to build your applications.

This GA release also adds support for NVIDIA NIM microservices. NVIDIA NIM™, part of NVIDIA AI Enterprise, is a set of easy-to-use microservices designed for secure, reliable deployment of high-performance AI model inferencing at scale. Supporting a wide range of AI models, including open-source community and NVIDIA AI Foundation models, NVIDIA NIM ensures seamless, scalable AI inferencing leveraging industry-standard APIs.

Key benefits of serverless GPUs

- Scale-to-zero GPUs: Support for serverless scaling of NVIDIA A100 and T4 GPUs.
- Per-second billing: Pay only for the GPU compute you use.
- Built-in data governance: Your data never leaves the container boundary.
- Flexible compute options: Choose between NVIDIA A100 and T4 GPUs.
- Middle layer for AI development: Bring your own model on a managed, serverless compute platform and easily run your AI applications alongside your existing apps.

Scenarios

Our customers have been running a wide range of workloads on serverless GPUs. Below are some common use cases.

NVIDIA T4

- Real-time and batch inferencing: Using custom open-source models with fast startup times, automatic scaling, and a per-second billing model, serverless GPUs are ideal for dynamic applications that don't already have a serverless API in the model catalog.

NVIDIA A100

- Compute-intensive machine learning scenarios: Significantly speed up applications that implement fine-tuned custom generative AI models, deep learning, or neural networks.
- High-performance computing (HPC) and data analytics: Applications that require complex calculations or simulations, such as scientific computing and financial modeling, as well as accelerated data processing and analysis across massive datasets.

Serverless GPUs with NVIDIA NIM

Serverless GPUs now support NVIDIA NIM microservices, which simplify and accelerate the development of AI applications and agentic AI workflows with pre-packaged, scalable, and performance-tuned models that can be deployed as secure inference endpoints on Azure Container Apps. In order to leverage the power of NVIDIA NIM, go to NVIDIA's API catalog (Try NVIDIA NIM APIs) and select the NIM you wish to run with the 'Run Anywhere' NIM type. You will need to set your NGC_API_KEY as an environment variable when deploying to Azure Container Apps. For a full set of instructions on how to add a NIM to your container app, follow the instructions here.
(Note: Each NIM model has certain hardware requirements. Azure Container Apps serverless GPUs support A100 and T4 GPUs; please ensure the NIM you are selecting is supported by that hardware.)

Quota changes for GA

With GA, we are introducing default GPU quotas for enterprise and pay-as-you-go customers. All enterprise agreement customers will have quota for A100 and T4 GPUs. The feature is supported in West US 3, Australia East, and Sweden Central.

Get started with serverless GPUs

From the portal, you can enable GPUs for your Consumption app in the container tab when creating your Container App or your Container App Job.

Note: In order to achieve the best performance with serverless GPUs, use an Azure Container Registry (ACR) with artifact streaming enabled for your image tag. Follow the steps here to enable artifact streaming on your ACR.

To learn more about getting started with serverless GPUs, see our quickstart. You can also add a new consumption GPU workload profile to your existing Container App environment through the workload profiles UX in the portal or through the CLI commands for managing workload profiles; a sketch is shown after the links below.

Learn more about serverless GPUs and NIMs

With serverless GPUs, Azure Container Apps now simplifies the development of your AI applications by providing scale-to-zero compute, pay-as-you-go pricing, reduced infrastructure management, and more. To learn more, visit:

- Using serverless GPUs in Azure Container Apps (preview) | Microsoft Learn
- Tutorial: Generate images using serverless GPUs in Azure Container Apps (preview) | Microsoft Learn
- Tutorial: Deploy an NVIDIA Llama3 NIM to Azure Container Apps
- Try NVIDIA NIM APIs
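As a final sketch of the CLI route mentioned above, the commands below add a consumption GPU workload profile to an existing environment. The workload profile type name here is an assumption; list the types actually supported in your region first, and treat the environment and resource group names as placeholders.

```azurecli
# List the workload profile types supported in a region
az containerapp env workload-profile list-supported --location westus3 --output table

# Sketch: add a serverless (consumption) T4 GPU workload profile to an existing environment
# (profile type name is an assumption; names are placeholders)
az containerapp env workload-profile add \
  --resource-group my-rg \
  --name my-aca-environment \
  --workload-profile-name gpu-t4 \
  --workload-profile-type Consumption-GPU-NC8as-T4
```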