serverless
Azure Functions Flex Consumption is now generally available
We are excited to announce that Azure Functions Flex Consumption is now generally available. This hosting plan provides the highest performance for Azure Functions, with concurrency-based scaling for both HTTP and non-HTTP triggers, scale from zero to 1,000 instances, and no cold start with the Always Ready feature. Flex Consumption also gives you seamless integration with your virtual network at no extra cost, ensuring secure and private communication with no significant impact on your app's scale-out performance. Learn more about How to achieve high HTTP scale with Azure Functions Flex Consumption, the engineering innovation behind it, and project Legion, the platform behind Flex Consumption.

In addition to fast scaling based on per-instance concurrency, you can choose between 2048 MB and 4096 MB instance sizes. As the function app receives requests, it automatically scales from zero to as many instances of that size as needed, based on per-instance concurrency, and back to zero for cost efficiency when there are no more requests to process. You can also take advantage of the built-in integration with Azure Load Testing and the Performance Optimizer to tune your HTTP functions for performance and cost.

Flex Consumption is now generally available for .NET 8 on the isolated worker model, Java 11, Java 17, Node 20, PowerShell 7.4, Python 3.10, and Python 3.11 in Australia East, East Asia, East US, North Europe, Southeast Asia, Sweden Central, UK South, and West US 2, and in preview in East US 2, South Central US, and West US 3. By December 9th, 2024, .NET 9 will also be generally available in Australia East, East Asia, East US, North Europe, Southeast Asia, Sweden Central, and UK South. Besides the currently supported DevOps and dev tools such as VS Code, Java tooling, Azure Pipelines tasks, and GitHub Actions, you can now use the Visual Studio 2022 v17.12 update or newer to create and publish to Flex Consumption apps.

The Flex Consumption plan offers competitive pricing with flexible options to fit your needs, with GA pricing taking effect on December 1, 2024. For detailed pricing information, please refer to the pricing page.
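To make this concrete, here is a minimal sketch (not from the original announcement) of creating a Flex Consumption app with the Azure CLI. The resource names are placeholders, the resource group and storage account are assumed to already exist, and the flag names (such as --flexconsumption-location and --instance-memory) reflect recent Azure CLI versions; verify them with az functionapp create --help before relying on this.

```bash
# Sketch only: creates a Flex Consumption function app for .NET 8 on the isolated worker model.
# Assumes the resource group and storage account already exist; all names are placeholders.
az functionapp create \
  --resource-group my-rg \
  --name my-flex-app \
  --storage-account myflexstorage \
  --flexconsumption-location eastus \
  --runtime dotnet-isolated \
  --runtime-version 8.0 \
  --instance-memory 2048   # or 4096; a smaller 512 MB size is on the roadmap
```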
Customer adoption and scenarios

We have been working with several internal and external customers during the public preview period, and hundreds of external customers are actively using Flex Consumption.

"At Yggdrasil, we immediately started adopting Flex Consumption functions when they went into public preview, as they offer the combination of cost-efficiency, scalability, and security features we need to run our company. We already have 100 Flex Consumption functions running in production, and expect to move at least another 50 functions now that the product has reached GA. We migrated to Flex from Consumption to have VNet integration and private endpoints." – Andreas Strandfelt, Partner & Senior Cloud Specialist at Yggdrasil Commodities ApS

"What really matters to us is that the app scales up and down based on demand. Azure Functions Flex Consumption is very appealing to us because of how it dynamically scales based on the number of messages that are queued up in Azure Event Hubs." – Stephan Miehe, GitHub Senior Director (public case study)

"We had a need to process a large queue, representing a significant volume of data with inconsistent availability. Azure Functions Flex Consumption dramatically simplified the code footprint needed to perform this embarrassingly parallel task and helped us complete it in a much shorter timeframe than we had expected." – Craig Presti, Office of the CTO, Microsoft AI project

Going Forward

In the upcoming months we look forward to rolling out even more features for Flex Consumption, including:

- Availability zones: Enabling availability zones will be possible for new and existing Flex Consumption apps.
- 512 MB instance size: We will introduce a new, smaller instance size for more granular control.
- Enhanced tooling support: PowerShell modules and Terraform AzureRM support.
- New language versions: Support for the latest language versions such as Node 22, Python 3.12, and Java 21.
- Expanded regional availability: The number of regions will continue to expand in early 2025, with UAE North, Central US, West US 3, South Central US, East US 2, West US, Canada Central, France Central, and Norway East coming first.
- Metrics support: Full Azure Monitor metrics support for Flex Consumption apps.
- Deployment improvements: Zero-downtime deployment to ensure no disruption to running executions.
- More triggers: Kafka and SQL triggers.
- Closing features: Addressing the limitations identified in Considerations.

Please let us know which ones are most important to you!

Get Started!

Explore our reference samples, quickstarts, and comprehensive documentation to get started with the Azure Functions Flex Consumption hosting plan today!
How to Query Spark Tables from Serverless SQL Pools in Azure Synapse

Introduction

Say goodbye to constantly running Spark clusters! With the shared metadata functionality, you can shut down your Spark pools while still being able to query your Spark external tables using a serverless SQL pool. In this blog, we dive into how the serverless SQL pool streamlines your data workflow by automatically synchronizing metadata from your Spark pools.

Shared Metadata functionality

Azure Synapse Analytics allows the different workspace computational engines to share databases and tables between its Apache Spark pools and serverless SQL pool. When we create tables in an Apache Spark pool, whether managed or external, the serverless SQL pool automatically synchronizes their metadata. This metadata synchronization automatically creates a corresponding external table in a serverless SQL pool database. After a short delay, we can see the table in our serverless SQL pool.

Creating a managed table in Spark and querying from the serverless SQL pool

Now we can shut down our Spark pools and still query Spark external tables from the serverless SQL pool.

NOTE: Azure Synapse currently only shares managed and external Spark tables that store their data in Parquet, Delta, or CSV format. Tables backed by other formats are not automatically synced. You may be able to sync such tables explicitly yourself as an external table in your own SQL database if the SQL engine supports the table's underlying format. Also, external tables created in Spark are not available in dedicated SQL pool databases.

Why do we get an error if we use the dbo schema in a Spark pool, or if we don't use the dbo schema in a serverless SQL pool? The dbo schema (short for "database owner") is the default schema in SQL Server and Azure Synapse SQL pools. A Spark pool only supports user-defined schemas, which means it does not recognize dbo as a valid schema name. In a serverless SQL pool, on the other hand, all synchronized tables belong to the dbo schema, regardless of their original schema in the Spark pool or other sources.
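As an illustration of the end result, here is a hedged sketch of querying a Spark-created table through the serverless SQL endpoint with no Spark pool running. The workspace, database, and table names are placeholders, and it assumes the sqlcmd utility with Microsoft Entra (Azure AD) authentication via -G; adjust the authentication method to your environment.

```bash
# Sketch only: query a Spark-created table through the Synapse serverless SQL endpoint.
# <workspace>, sparkdb, and mytable are placeholders; -G uses Azure AD / Entra authentication.
# Note the dbo schema: synchronized tables always appear under dbo in the serverless SQL pool.
sqlcmd -S "<workspace>-ondemand.sql.azuresynapse.net" -d sparkdb -G \
  -Q "SELECT TOP 10 * FROM dbo.mytable;"
```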
Simplify Full-stack Java Development with JHipster Online, Terraform and Bicep

In the previous blog, Build and deploy full-stack Java Web Applications on Azure Container Apps with JHipster, we explored the fundamental features of JHipster Azure Container Apps. Specifically, we demonstrated how to create and deploy a project to Azure Container Apps in just a few steps. In this blog, we introduce some new features in JHipster Azure Container Apps that make project creation even simpler and deployment more seamless.

JHipster Online: Quick Prototyping Made Easy

JHipster Online is a quick prototyping website that allows you to generate a full-stack Spring Boot project without requiring any installation! You can start building your Azure project by clicking the Create Azure Application button.

🌟 Generate the project

Simply answer a few guided questions, and JHipster Online will generate a project ready for building and deployment. In the final step of the questionnaire, you can choose to generate either a Terraform or Bicep file for deployment. If you prefer using the CLI version, install it with the following command:

npm install -g generator-jhipster-azure-container-apps

You can then create the project with:

jhipster-azure-container-apps

🚀 Deploy the project

💚 Terraform

Terraform is an infrastructure-as-code (IaC) tool that allows you to build, modify, and version cloud and on-premises resources securely and efficiently. It supports a wide range of popular cloud providers, including AWS, Azure, Google Cloud Platform (GCP), Oracle Cloud Infrastructure (OCI), and Docker.

To deploy using Terraform, ensure that Terraform is selected during the project generation step. Additionally, you must have Terraform installed and properly configured. After generating the project, navigate to the Terraform folder:

cd terraform

Initialize Terraform by running the following command:

terraform init

Once that finishes, provision the necessary resources on Azure with:

terraform apply -auto-approve

Now you can deploy the project with:

Linux/macOS: ./deploy.sh (you can pass the options subId, region, and resourceGroupName to the deployment script)

Windows: .\deploy.ps1 (you will be prompted to provide subId, region, and resourceGroupName)

❤️ Bicep

Bicep is a domain-specific language that uses declarative syntax to deploy Azure resources. In order to deploy with Bicep, make sure you select Bicep in the project generation step. You may also need to have the Azure CLI installed and configured. Once the project has been created, change into the Bicep folder:

cd bicep

Set up Bicep with:

az deployment sub create -f ./main.bicep --location=eastus2 --name jhipster-aca --only-show-errors

Here you can replace the location and name parameters with your own choices. Now you can deploy the project with:

Linux/macOS: ./deploy.sh (you can pass the options subId, region, and resourceGroupName to the deployment script)

Windows: .\deploy.ps1 (you will be prompted to provide subId, region, and resourceGroupName)

💛 Deploy from Source Code, Artifact and more

In addition to the options mentioned, Azure Container Apps provides a wide range of deployment methods designed to suit diverse project needs. Whether you prefer deploying directly from source code, pre-built artifacts, or container images, Azure Container Apps streamlines the entire process with its robust built-in Java support. This enables developers to focus on innovation rather than infrastructure management.
From integrating with popular CI/CD pipelines to leveraging advanced deployment techniques such as deploying from GitHub, Azure Container Apps offers the flexibility to match your workflow. Discover how to effortlessly deploy and scale your project by visiting Launch your first Java application in Azure Container Apps.
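As one illustration of the source-based path, here is a hedged sketch that uses the Azure CLI to build and deploy directly from a local project folder. The app and resource group names are placeholders, and the target port assumption is the Spring Boot default of 8080; adjust both to your project.

```bash
# Sketch only: build and deploy the app in the current folder straight to Azure Container Apps.
# my-jhipster-app and my-rg are placeholder names; --target-port assumes the Spring Boot default.
az containerapp up \
  --name my-jhipster-app \
  --resource-group my-rg \
  --location eastus2 \
  --source . \
  --ingress external \
  --target-port 8080
```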
Connect Privately to Azure Front Door with Azure Container Apps

Azure Container Apps is a fully managed serverless container service that enables you to deploy and run containerized applications with per-second billing and autoscaling, without having to manage infrastructure. The service also supports a number of enhanced networking capabilities to address security and compliance needs, such as network security groups (NSGs), Azure Firewall, and more.

Today, Azure Container Apps is excited to announce the public preview of another key networking capability: private endpoints for workload profile environments. This feature allows customers to connect to their Container Apps environment using a private IP address in their Azure Virtual Network, thereby eliminating exposure to the public internet and securing access to their applications. With the introduction of private endpoints for workload profile environments, you can now also establish a direct connection from Azure Front Door to your Container Apps environment via Private Link. By enabling Private Link for an Azure Container Apps origin, customers benefit from an extra layer of security that further isolates their traffic from the public internet. Currently, you can configure this connectivity through the CLI (portal support is coming soon). In this post, we give a brief overview of private endpoints on Azure Container Apps and the process of privately connecting an environment to Azure Front Door.

Getting started with private endpoints on Azure Container Apps

Private endpoints can be enabled either during the creation of a new environment or within an existing one. For new environments, you simply navigate to the Networking tab, disable public network access, and enable private endpoints. To manage the creation of private endpoints in an existing environment, you can use the new Networking blade, which is also in public preview. Since private endpoints use a private IP address, the endpoint for a container app is inaccessible through the public internet; this can be confirmed by the lack of connectivity when opening the application URL. If you prefer using the CLI, you can find further guidance on enabling private endpoints at Use a private endpoint with an Azure Container Apps environment (preview).

Adding container apps as a private origin for Azure Front Door

With private endpoints, you can securely connect your environment to Azure Front Door through Private Link as well. The current process involves CLI commands that guide you in enabling an origin for Private Link and approving the private endpoint connection. Once approved, Azure Front Door assigns a private IP address from a managed regional private network, and you can verify the connectivity between your container app and Azure Front Door. For a detailed tutorial, please navigate to Create a private link to an Azure Container App with Azure Front Door (preview).

Troubleshooting

Having trouble testing the private endpoints? After creating a private endpoint for a container app, you can build and deploy a virtual machine to test the private connection. With no public inbound ports, this virtual machine would be associated with the virtual network defined during creation of the private endpoint. After creating the virtual machine, you can connect via Bastion and verify the private connectivity. You can find step-by-step instructions at Verify the private endpoint connection.
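To make the flow above concrete, here is a hedged CLI sketch of creating a private endpoint for an existing workload profile environment and approving a pending Private Link connection (for example, one requested by Azure Front Door). All resource names are placeholders, the managedEnvironments group ID is an assumption based on the preview tooling, and the tutorial linked above remains the authoritative reference.

```bash
# Sketch only: resource names are placeholders; the group-id value is an assumption for the preview.
ENV_ID=$(az containerapp env show --name my-env --resource-group my-rg --query id -o tsv)

# 1. Create a private endpoint for the Container Apps environment in your virtual network.
az network private-endpoint create \
  --name my-aca-pe \
  --resource-group my-rg \
  --vnet-name my-vnet \
  --subnet pe-subnet \
  --private-connection-resource-id "$ENV_ID" \
  --group-id managedEnvironments \
  --connection-name my-aca-pe-connection

# 2. List pending private endpoint connections on the environment (for example, the one
#    requested by Azure Front Door's Private Link origin) and approve the one you expect.
az network private-endpoint-connection list --id "$ENV_ID" -o table
az network private-endpoint-connection approve \
  --id "<connection-resource-id-from-previous-output>" \
  --description "Approved for Azure Front Door"
```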
Conclusion

The public preview of private endpoints and private connectivity to Azure Front Door for workload profile environments is a long-awaited feature in Azure Container Apps. We encourage you to implement private endpoints for enhanced security, and we look forward to your feedback on this experience at our GitHub page.

Additional Resources

To learn more, please visit the following links to official documentation:

- Networking in Azure Container Apps environment - Private Endpoints
- Use a private endpoint with an Azure Container Apps environment
- Create a private link to an Azure Container App with Azure Front Door (preview)
- What is a private endpoint?
- What is Azure Private Link?
Easily deploy .NET apps to Azure Container Apps with default configuration for data protection

The Azure Container Apps and .NET teams have made it easier than ever to deploy your .NET application by supporting automatic configuration for data protection. This support is currently available as an opt-in feature in the Container Apps API version 2024-02-02-preview. This blog post discusses the feature and what it enables, how to determine whether your application is correctly configured, and how to enable configuration for data protection across a variety of .NET versions.
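For orientation, here is a heavily hedged sketch of what the opt-in could look like as a raw ARM call against the preview API. The resource names are placeholders, and the property path (properties.configuration.runtime.dotnet.autoConfigureDataProtection) is an assumption about the 2024-02-02-preview schema rather than a confirmed name; verify it against the official documentation before using it.

```bash
# Sketch only: opt an existing container app into automatic .NET data protection configuration.
# The property path below is an assumption about the 2024-02-02-preview API, not a confirmed name.
APP_ID=$(az containerapp show --name my-dotnet-app --resource-group my-rg --query id -o tsv)

az rest --method patch \
  --url "https://management.azure.com${APP_ID}?api-version=2024-02-02-preview" \
  --body '{"properties":{"configuration":{"runtime":{"dotnet":{"autoConfigureDataProtection":true}}}}}'
```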