Azure Container Apps
Azure Container Apps with Application Gateway and custom domain: hostname mismatch
Introduction

Azure Container Apps offers a robust platform for deploying microservices and containerized applications. When integrated with Azure Application Gateway, an internal Container Apps environment can be reached from the public internet, and users often bind custom domains to improve accessibility and user experience. A common challenge arises when we bind the custom domain on Application Gateway and try to access the container app: if the container app acts as a middleware service and needs to forward requests to another API server or complete an authentication flow, users may encounter an HTTP 403 Forbidden error caused by a hostname/redirect URL mismatch. What's more, you definitely don't want to expose your backend service's default domain. This blog explores these challenges and offers practical solutions.

Why do we encounter this kind of issue?

Following our documentation Protect Azure Container Apps with Application Gateway and Web Application Firewall (WAF) | Microsoft Learn, we put the Application Gateway in front of the internal container app, resolve the custom domain to the Application Gateway public IP, and use the container app's default domain as the backend pool. When Application Gateway receives a request for the custom domain, it routes the request to the container app via its default domain. So far everything seems normal, and users can successfully access the internal container app over the internet via the custom domain name. However, if the container app is a middleware service, or authentication is required, the container app uses its default domain name in redirects, which often results in a 403 Forbidden error due to the hostname/redirect URL mismatch.

Proposed solutions

To resolve this issue and ensure seamless integration between Azure Container Apps and other services, consider the following steps:

1. Bind the custom domain on the container app as well. Go to the container app in the portal --> Custom domains and add the same custom domain used on Application Gateway. Because this is an internal container app, we don't need to worry about domain name duplication.
2. Modify the backend setting in Application Gateway. Navigate to the Application Gateway --> Backend settings, select "Override with specific domain name", and enter your custom domain in "Host name".
3. The container app is now able to reach the other service or finish authentication using the custom domain (see the CLI sketch after the references below).

Reference:
Protect Azure Container Apps with Application Gateway and Web Application Firewall (WAF) | Microsoft Learn
Host name preservation - Azure Architecture Center | Microsoft Learn
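For readers who prefer scripting the portal steps above, here is a minimal Azure CLI sketch. The resource names and domain are placeholders (assumptions), and the DNS validation and certificate binding normally required for a custom domain on a container app are omitted; treat this as a starting point rather than a complete runbook.

# 1. Register the same custom domain on the (internal) container app
#    (a certificate binding via "az containerapp hostname bind" and DNS validation would follow; omitted here)
az containerapp hostname add \
  --name my-container-app \
  --resource-group my-rg \
  --hostname www.contoso.com

# 2. Override the host name in the Application Gateway backend HTTP settings so requests
#    are forwarded with the custom domain instead of the default *.azurecontainerapps.io name
az network application-gateway http-settings update \
  --gateway-name my-appgw \
  --resource-group my-rg \
  --name my-backend-setting \
  --host-name www.contoso.com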
How to use Azure Table Storage with .NET Aspire and a Minimal API

Azure Storage is a versatile cloud storage solution that I've used in many projects. In this post, I'll share my experience integrating it into a .NET Aspire project from two perspectives: first, by building a simple demo project to learn the basics, and then by applying those learnings to migrate a real-world application, AzUrlShortener.

This post is part of a series about modernizing the AzUrlShortener project:
- Migrating AzUrlShortener from Azure SWA to Azure Container Apps
- Converting a Blazor WASM to FluentUI Blazor server
- Azure Developer CLI (azd) in a real-life scenario
- How to use Azure Table Storage with .NET Aspire and a Minimal API

Part 1: Learning using a simple project

For this post we will use a simpler project instead of the full AzUrlShortener solution to make it easier to follow. All the code of this simple project is also available on GitHub: AspireAzStorage. Make a copy (fork it) and explore it.

💡 All the code is available on GitHub: AspireAzStorage

The Context

This tutorial demonstrates how to create a .NET Aspire solution with a Minimal API that retrieves employee data from Azure Table Storage. We'll build a clean, structured solution that can run both locally and in Azure. The skeleton of the solution was created with a simple command:

dotnet new webapi -n EmployeeApi -o EmployeeDemo\EmployeeApi

Then, from your favorite editor, "Add .NET Aspire Orchestration" by right-clicking on the project in the Solution Explorer. For the AppHost to be able to orchestrate an Azure Storage account, we need to add the Aspire.Hosting.Azure.Storage package. This can be done in several ways, but with the CLI it looks like this:

dotnet add AppHost package Aspire.Hosting.Azure.Storage

Defining the Orchestration to use Azure Storage

We want the API to read data from an Azure Table Storage table and return the result. Using dependency injection (DI), we can add an Azure Storage account to the AppHost project, specify that we need the table client, and pass it to the API project. The code of Program.cs in the AppHost project looks like this:

using Microsoft.Extensions.Hosting;

var builder = DistributedApplication.CreateBuilder(args);

var azStorage = builder.AddAzureStorage("azstorage");

if (builder.Environment.IsDevelopment())
{
    azStorage.RunAsEmulator();
}

var strTables = azStorage.AddTables("strTables");

builder.AddProject<Projects.Api>("api")
    .WithExternalHttpEndpoints()
    .WithReference(strTables)
    .WaitFor(strTables);

builder.Build().Run();

azStorage is the reference to the Azure Storage account, and strTables is the reference to the table client. To be able to execute the solution locally, we check whether the environment is Development and, if so, run the Azure Storage emulator. This lets .NET Aspire create an Azurite container to emulate the Azure Storage account. In production the emulator is not needed, and a real Azure Storage account is used. Finally, we pass the strTables reference to the API project and make sure the client is ready before starting the API.

The Minimal API project

We already know that our project expects an Azure Table Storage client, so we can add the Aspire.Azure.Data.Tables package to the API project. With the CLI, the command is:

dotnet add EmployeeApi package Aspire.Azure.Data.Tables

Then we add builder.AddAzureTableClient("strTables"); just before the app creation in the Program.cs file. The beauty of a Minimal API is that it is very flexible and can be as minimal or as structured as you want.
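The API works against an EmployeeEntity model stored in the Employee table (it appears in the project structure below, but the file itself isn't shown in this post). Here is a plausible sketch of what it could look like: the namespace and the FirstName/LastName/Email properties are assumptions, but any entity read with the Azure.Data.Tables client has to implement ITableEntity as shown.

using Azure;
using Azure.Data.Tables;

namespace Api.Models;

// Minimal sketch of an entity for the "Employee" table.
// ITableEntity requires PartitionKey, RowKey, Timestamp, and ETag;
// the remaining properties are illustrative assumptions.
public class EmployeeEntity : ITableEntity
{
    public string PartitionKey { get; set; } = default!;
    public string RowKey { get; set; } = default!;
    public DateTimeOffset? Timestamp { get; set; }
    public ETag ETag { get; set; }

    public string FirstName { get; set; } = string.Empty;
    public string LastName { get; set; } = string.Empty;
    public string Email { get; set; } = string.Empty;
}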
When the project is created, everything is in the Program.cs file. That makes it easy to follow and understand, but as the project grows it can become hard to maintain. To make it easier to maintain, we can move the endpoints, models, and services into distinct files and folders. That leaves our Program.cs with only the following code:

using Api.Endpoints;

var builder = WebApplication.CreateBuilder(args);

builder.AddServiceDefaults();

// Add services to the container.
// Learn more about configuring OpenAPI at https://aka.ms/aspnet/openapi
builder.Services.AddOpenApi();

builder.AddAzureTableClient("strTables");

var app = builder.Build();

app.MapDefaultEndpoints();

// Configure the HTTP request pipeline.
if (app.Environment.IsDevelopment())
{
    app.MapOpenApi();
}

app.UseHttpsRedirection();

// Add the Employee Endpoints
app.MapEmployeeEndpoints();

app.Run();

The rest of the code is split across different files and folders. The structure of the project is as follows:

EmployeeApi/
├── Endpoints/
│   └── EmployeeEndpoints.cs       # Endpoints for the Employee API
├── Models/
│   └── EmployeeEntity.cs          # Model for the Azure Table Storage
├── Services/
│   └── AzStorageTablesService.cs
├── Api.http                       # HTTP file to test the API
└── Program.cs                     # Main file of the Minimal API

Employee Endpoints

You may have noticed that, at the end of the Program.cs file, we call app.MapEmployeeEndpoints(). This is a custom extension method that adds the endpoints to the API.

public static void MapEmployeeEndpoints(this IEndpointRouteBuilder app)
{
    var endpoints = app.MapGroup("api")
        .WithOpenApi();

    MapGetAllEmployees(endpoints);
    MapGetAllEmployeesAsync(endpoints);
    MapGetEmployeesByFirstLetter(endpoints);
    MapGetEmployeesGroupByFirstLetterFirstNameAsync(endpoints);
    MapGenerateEmployees(endpoints);
}

This groups all the endpoints under the /api path and adds the OpenAPI documentation. We can then define each endpoint in a separate method. For example, the MapGetAllEmployees method looks like this:

private static void MapGetAllEmployees(IEndpointRouteBuilder endpoints)
{
    endpoints.MapGet("/GetEmployeesAsync", (TableServiceClient client) => GetAllEmployeeAsync(new AzStrorageTablesService(client)))
        .WithName("GetAllEmployees")
        .WithDescription("Get all employees from the table storage");
}

Note the TableServiceClient client parameter. This is the Azure Table Storage client that was created earlier and passed in using DI. We pass it to the AzStrorageTablesService service, which is responsible for interacting with Azure Table Storage. The WithName and WithDescription methods add metadata to the endpoint that is used in the OpenAPI documentation.

The Azure Table Storage Service

To make sure the Employee table exists when the queries are executed, we can use the AzStrorageTablesService constructor to create the table if it does not exist and to instantiate the table client.

private readonly TableClient _employeeTableClient;

public AzStrorageTablesService(TableServiceClient client)
{
    client.CreateTableIfNotExists("Employee");
    _employeeTableClient = client.GetTableClient("Employee");
}

The only thing left is to implement the GetAllEmployeeAsync method that queries the table and returns the result.
public async Task<List<EmployeeEntity>> GetAllEmployeeAsync()
{
    var lstEmployees = new List<EmployeeEntity>();

    var queryResult = _employeeTableClient.QueryAsync<EmployeeEntity>();

    await foreach (var emp in queryResult.AsPages().ConfigureAwait(false))
    {
        lstEmployees.AddRange(emp.Values);
    }

    return lstEmployees;
}

To make sure all records are returned, we use the AsPages method. This fetches the employees from every page of results, adds them to a list, and returns it.

Testing the API

To test the API manually, we can use the Api.http file. This file is a simple text file that contains HTTP requests. For example, to get all employees, the content of the file looks like this:

@Api_HostAddress = https://localhost:7125

### Get all employees
GET {{Api_HostAddress}}/api/GetEmployeesAsync
Accept: application/json

Putting everything together

The demo solution contains more endpoints, but the structure is the same. There is a /generate/{quantity?} endpoint to populate the Employee table. It uses Bogus, a simple open-source fake data generator for .NET languages.

To run the solution locally, a simple F5 should be enough. Aspire will start the Azurite container and the API. You can then use the Api.http file to generate some employees and get the list of employees.

To deploy the solution to Azure, you can use the Azure Developer CLI (azd). With azd init you can create a new project, and with azd up you can deploy the solution to Azure. In a few minutes the solution will be available in the cloud, but this time it will be using a real Azure Storage account. Nothing else needs to change; the code is the same.

Part 2: Lessons learned while migrating AzUrlShortener

The little experiment with AspireAzStorage convinced me: using Azure Table Storage with .NET Aspire is simple. But we all know a real project is more complex, so I was expecting some challenges. To my pleasant surprise, there were none; everything worked as expected.

The AzUrlShortener project was written a few years ago and was using the Microsoft.Azure.Cosmos.Table package. This package is still valid today, but there is now a dedicated package for Azure Table Storage. The migration to the Azure.Data.Tables package wasn't completely straightforward: a few objects had different names, and the queries were a bit different, but the migration was done in a few hours. The deployment worked on the first try. I tested the data migration using the Azure Storage Explorer. The GitHub Actions workflow will have to be updated, but with the Bicep files that azd generates it should be simple.

Conclusion

I really enjoyed this journey of migrating the AzUrlShortener project as much as building AspireAzStorage. I invite you to fork that repository and play with it. Would you have done something differently? Do you have any questions? Feel free to ask in the comments below or reach out to me directly at @fboucheros.bsky.social.

Want to learn more?

To learn more about Azure Container Apps, I strongly suggest this repository: Getting Started .NET on Azure Container Apps. It contains many step-by-step tutorials (with videos) on how to use Azure Container Apps with .NET.
Announcing GA for Azure Container Apps Serverless GPUs

Azure Container Apps serverless GPUs, accelerated by NVIDIA, are now generally available. Serverless GPUs enable you to seamlessly run AI workloads with per-second billing and scale to zero when not in use, reducing the operational overhead required to support real-time custom model inferencing and other GPU-accelerated workloads.

Serverless GPUs accelerate AI development teams by allowing customers to focus on core AI code and less on managing infrastructure when using GPUs. This provides an excellent middle-layer option between the Azure AI Model Catalog's serverless APIs and hosting custom models on managed compute. Customers can now build their own serverless API endpoints for inferencing AI models, including custom models. Customers can also provision on-demand GPU-powered Jupyter Notebooks or run other compute-intensive AI workloads that are ephemeral in nature. It provides full data governance, as the customer's data never leaves the boundaries of the container, while still providing a managed, serverless platform from which to build your applications.

This GA release of serverless GPUs also adds support for NVIDIA NIM microservices. NVIDIA NIM™, part of NVIDIA AI Enterprise, is a set of easy-to-use microservices designed for secure, reliable deployment of high-performance AI model inferencing at scale. Supporting a wide range of AI models, including open-source community and NVIDIA AI Foundation models, NVIDIA NIM ensures seamless, scalable AI inferencing leveraging industry-standard APIs.

Key benefits of serverless GPUs

- Scale-to-zero GPUs: Support for serverless scaling of NVIDIA A100 and T4 GPUs.
- Per-second billing: Pay only for the GPU compute you use.
- Built-in data governance: Your data never leaves the container boundary.
- Flexible compute options: Choose between NVIDIA A100 and T4 GPUs.
- Middle layer for AI development: Bring your own model on a managed, serverless compute platform and easily run your AI applications alongside your existing apps.

Scenarios

Our customers have been running a wide range of workloads on serverless GPUs. Below are some common use cases.

NVIDIA T4
- Real-time and batch inferencing: Using custom open-source models with fast startup times, automatic scaling, and a per-second billing model, serverless GPUs are ideal for dynamic applications that don't already have a serverless API in the model catalog.

NVIDIA A100
- Compute-intensive machine learning scenarios: Significantly speed up applications that implement fine-tuned custom generative AI models, deep learning, or neural networks.
- High-performance computing (HPC) and data analytics: Applications that require complex calculations or simulations, such as scientific computing and financial modeling, as well as accelerated data processing and analysis over massive datasets.

Serverless GPUs with NVIDIA NIM

Serverless GPUs now support NVIDIA NIM microservices, which simplify and accelerate the development of AI applications and agentic AI workflows with pre-packaged, scalable, performance-tuned models that can be deployed as secure inference endpoints on Azure Container Apps. To leverage the power of NVIDIA NIM, go to NVIDIA's API catalog (Try NVIDIA NIM APIs) and select the NIM you wish to run with the 'Run Anywhere' NIM type. You will need to set your NGC_API_KEY as an environment variable when deploying to Azure Container Apps. For a full set of instructions on how to add a NIM to your container app, follow the instructions here.
(Note: Each NIM model has certain hardware requirements. Azure Container Apps serverless GPUs support A100 and T4 GPUs, so please ensure the NIM you select is supported by that hardware.)

Quota changes for GA

With GA, we are introducing default GPU quotas for enterprise and pay-as-you-go customers. All enterprise agreement customers will have quota for A100 and T4 GPUs. The feature is supported in West US 3, Australia East, and Sweden Central.

Get started with serverless GPUs

From the portal, you can enable GPUs for your Consumption app in the Container tab when creating your container app or your Container Apps job.

Note: In order to achieve the best performance with serverless GPUs, use an Azure Container Registry (ACR) with artifact streaming enabled for your image tag. Follow the steps here to enable artifact streaming on your ACR.

To learn more about getting started with serverless GPUs, see our quickstart. You can also add a new Consumption GPU workload profile to your existing Container Apps environment through the workload profiles UX in the portal or through the CLI commands for managing workload profiles (see the CLI sketch after the links below).

Learn more about serverless GPUs and NIMs

With serverless GPUs, Azure Container Apps now simplifies the development of your AI applications by providing scale-to-zero compute, pay-as-you-go pricing, reduced infrastructure management, and more. To learn more, visit:

Using serverless GPUs in Azure Container Apps (preview) | Microsoft Learn
Tutorial: Generate images using serverless GPUs in Azure Container Apps (preview) | Microsoft Learn
Tutorial: Deploy an NVIDIA Llama3 NIM to Azure Container Apps
Try NVIDIA NIM APIs
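For reference, adding a serverless GPU workload profile to an existing environment from the CLI might look like the sketch below. The environment, resource group, and profile names are placeholders, and the workload profile type string is an assumption; list the types actually supported in your region first.

# List the workload profile types supported in your region (names vary by region and over time)
az containerapp env workload-profile list-supported --location westus3

# Add a serverless (consumption) GPU workload profile to an existing environment.
# "Consumption-GPU-NC8as-T4" is an assumed type name for the T4 option; use a value from the list above.
az containerapp env workload-profile add \
  --name my-aca-environment \
  --resource-group my-rg \
  --workload-profile-name gpu-t4 \
  --workload-profile-type Consumption-GPU-NC8as-T4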
Get Ready for .NET Conf: Focus on Modernization

We're excited to announce the topics and speakers for .NET Conf: Focus on Modernization, our latest virtual event on April 22-23, 2025! This event features live sessions from .NET and cloud computing experts, providing attendees with the latest insights into modernizing .NET applications, including technical upgrades, cloud migration, and tooling advancements.

To get ready, visit the .NET Conf: Focus on Modernization home page and click Add to Calendar so you can save the date on your calendar. From this page, on the day of the event, you'll be able to join a live stream on YouTube and Twitch. We will also make the source code for the demos available on GitHub, and the on-demand replays will be available on our YouTube channel. Learn more: https://focus.dotnetconf.net/

Why attend?

In the fast-changing technological environment we now find ourselves in, it has never been more urgent to modernize enterprise .NET applications to maintain competitiveness and stay ahead of the next innovation. Updating .NET applications for the cloud is a major business priority and involves not only technical upgrades and cloud migration, but also improvements in tooling, processes, and skills. At this event, you will get end-to-end insights across the latest tools, innovations, and best practices for successful .NET modernization.

What can developers expect?

The event will run live for up to five hours each day, covering different aspects of .NET modernization. Scott Hanselman will set the tone for day one with a discussion of the experiences and processes needed to modernize .NET applications in the era of AI. This will be followed by expert sessions on upgrading .NET apps and modernizing both your apps and data to the cloud. Day two will soar higher into the clouds, with sessions to help with cloud migration, cloud development, and infusing AI into your apps. You can interact with experts and ask questions to deepen your expertise as we broadcast live on YouTube or Twitch. Recordings of all sessions will be available with materials after the event.

Agenda

Here's a quick snapshot of the schedule. Things may change, and we recommend that you visit the event home page for the latest agenda and session times: https://focus.dotnetconf.net/agenda

Day 1 – April 22, Tuesday (all times PDT)
8:00 am – Modernizing .NET: Future-ready applications in the era of AI (Scott Hanselman, Chet Husk, McKenna Barlow)
9:00 am – Deep dive into the upcoming AI-assisted tooling to upgrade .NET apps (Chet Husk, McKenna Barlow)
10:00 am – Use Reliable Web App patterns to confidently replatform your web apps (Pablo Lopes)
11:00 am – Modernize Data-Driven Apps (No AI Needed) (Jerry Nixon)
12:00 pm – Modernize from ASP.NET to ASP.NET Core: The Future is Now (Taylor Southwick)

Day 2 – April 23, Wednesday (all times PDT)
8:00 am – Unblock .NET modernization with AI-assisted app and code assessment tools (Michael Yen-Chi Ho)
9:00 am – Cloud development doesn't have to be painful thanks to .NET Aspire (Maddy Montaquila (Leger))
10:00 am – Introducing Artificial Intelligence to your application (Jordan Matthiesen)
11:00 am – Modernizing your desktop: From WinForms to Blazor, Azure, and AI (Santiago Arango Toro)

Save the Date!

.NET Conf: Focus on Modernization is a free, two-day livestream event that you won't want to miss. Tune in on April 22 and 23, 2025, ask questions live, and learn how to get your .NET applications ready for the AI revolution. Save the date! Stay tuned for more updates and detailed session information.
We can't wait to see you there!
Getting Started with .NET on Azure Container Apps

Great news for .NET developers who would like to become familiar with containers and Azure Container Apps (ACA)! We just released a new Getting Started guide for .NET developers on Azure Container Apps. This guide is designed to help you get started with Azure Container Apps and understand how to build and deploy your applications using this service.
Connect Azure SQL Server via System Assigned Managed Identity under ASP.NET

TOC
Why we use it
Architecture
How to use it
References

Why we use it

This tutorial introduces how to integrate Microsoft Entra with Azure SQL Server to avoid using fixed usernames and passwords. By utilizing a system-assigned managed identity as a programmatic bridge, it becomes easier for Azure PaaS services (such as Container Apps) to communicate with the database without storing connection information in plain text.

Architecture

I will introduce each service or component and their configurations in the following chapters, in the order A-C:

A: The company's account administrator needs to create or designate a user as the database administrator. This role can only be assigned to one person within the database and is responsible for basic configuration and for the creation and maintenance of other database users. It is not intended for development or actual system operations.

B: The company's development department needs to create a Container App (or another service) as the basic unit of the business system. Programmers within this unit will write business logic (e.g., accessing the database) and deploy it here.

C: The company's data department needs to create or maintain a database and designate Microsoft Entra as the only login method, eliminating fixed username/password combinations.

How to use it

A: As this article does not dive into the detailed configuration of Microsoft Entra, it only outlines the process. The company's account administrator needs to create or designate a user as the database administrator. In this example, we will call this user "cch", and the account "cch@thexxxxxxxxxxxx" will be used in subsequent steps.

B-1: In this example, we can create a Container App with any SKU/region. Please note that during the initial setup we will temporarily use the nginx:latest image from docker.io; after creating our own ASP.NET image, we will update it accordingly. For testing convenience, please enable ingress traffic and allow requests from all regions. Once the Container App has been created, enable the system-assigned managed identity. Lastly, make a note of your app name (e.g., mine is az-1767-aca), as we will use it in the following steps.

C-1: Create a database and SQL server. During this process, you need to specify the user created in Step A as the database administrator, and be sure to select "Microsoft Entra-only authentication". In this mode, usernames and passwords are no longer used. Then click "Next: Networking". (If both Microsoft Entra and username/password login methods are offered, for security reasons it is strongly recommended to choose Microsoft Entra-only; the username/password option will not be used in this tutorial.) Since this article does not cover the detailed network configuration of the database, temporarily allow public access during the tutorial. Use the default values for the other settings, click "Review + Create", and then click "Create" to finish the setup. Later, you will register the system-assigned managed identity created in Step B as the entity that actually operates the database.

C-2: After the database has been created, you can log in using the identity "cch@thexxxxxxxxxxxx" from Step A, which is the database administrator. Open a PowerShell terminal and, using the "cch" account, enter the following command to log in to SQL Server.
You will need to change the <YOUR_...> placeholders to follow your company's naming conventions.

sqlcmd -S <YOUR_SERVER_NAME>.database.windows.net -d <YOUR_DB_NAME> -U <YOUR_FULL_USER_EMAIL> -G

You will be prompted for two-step verification. Now we can create database users for the identities set up in Step B. First, we will cover the system-assigned managed identity. The purpose of the commands is to grant database-related operational permissions to the newly created user. This is just an example; in real scenarios, you should follow your company's security policies and adjust accordingly. Please enter the following commands.

CREATE USER [<YOUR_APP_NAME>] FROM EXTERNAL PROVIDER;
USE [<YOUR_DB_NAME>];
EXEC sp_addrolemember 'db_owner', '<YOUR_APP_NAME>';

For testing purposes, we will create a test table and insert some data.

CREATE TABLE TestTable (
    Column1 INT,
    Column2 NVARCHAR(100)
);

INSERT INTO TestTable (Column1, Column2) VALUES (1, 'First Record');
INSERT INTO TestTable (Column1, Column2) VALUES (2, 'Second Record');

B-2: Developers can now start building the Docker image. In my sample development environment, I'm using .NET 8.0. Run the following command in your development environment to create a Hello World project:

dotnet new web -n WebApp --no-https

This command generates the files used by the project. You will need to modify both Program.cs and WebApp.csproj.

using Microsoft.Data.SqlClient;

var builder = WebApplication.CreateBuilder(args);
var app = builder.Build();

app.MapGet("/", async context =>
{
    var response = context.Response;
    var connectionString = "Server=az-1767-dbserver.database.windows.net;Database=az-1767-db;Authentication=ActiveDirectoryMsi;TrustServerCertificate=True;";

    await response.WriteAsync("Hello World\n\n");

    try
    {
        using var conn = new SqlConnection(connectionString);
        await conn.OpenAsync();

        var cmd = new SqlCommand("SELECT Column1, Column2 FROM TestTable", conn);
        using var reader = await cmd.ExecuteReaderAsync();

        while (await reader.ReadAsync())
        {
            var line = $"{reader.GetInt32(0)} - {reader.GetString(1)}";
            await response.WriteAsync(line + "\n");
        }
    }
    catch (Exception ex)
    {
        await response.WriteAsync($"[Error] {ex.Message}");
    }
});

app.Run("http://0.0.0.0:80");

<Project Sdk="Microsoft.NET.Sdk.Web">
  <PropertyGroup>
    <TargetFramework>net8.0</TargetFramework>
    <Nullable>enable</Nullable>
    <ImplicitUsings>enable</ImplicitUsings>
  </PropertyGroup>
  <ItemGroup>
    <PackageReference Include="Microsoft.Data.SqlClient" Version="5.1.4" />
  </ItemGroup>
</Project>

Please note the connectionString in Program.cs. The string must follow a specific format; you'll need to replace az-1767-dbserver and az-1767-db with your own server and database names. After making the modifications, run the following command in the development environment. It compiles the project into a DLL and immediately runs it (press Ctrl+C to stop).

dotnet run

Once the build is complete, you can package the entire project into a Docker image. Create a Dockerfile in the root of your project.
FROM mcr.microsoft.com/dotnet/sdk:8.0

# Install ODBC Driver
RUN apt-get update \
    && apt-get install -y unixodbc odbcinst unixodbc-dev curl vim \
    && curl -sSL -O https://packages.microsoft.com/debian/12/prod/pool/main/m/msodbcsql18/msodbcsql18_18.5.1.1-1_amd64.deb \
    && ACCEPT_EULA=Y DEBIAN_FRONTEND=noninteractive dpkg -i msodbcsql18_18.5.1.1-1_amd64.deb \
    && rm msodbcsql18_18.5.1.1-1_amd64.deb

# Setup Project Code
RUN mkdir /WebApp
COPY ./WebApp /WebApp

# OTHER
EXPOSE 80
CMD ["dotnet", "/WebApp/bin/Debug/net8.0/WebApp.dll"]

In this case, we are using mcr.microsoft.com/dotnet/sdk:8.0 as the base image. To allow access to Azure SQL Database, you also need to install the ODBC driver in the image. Use the following commands to build the image and push it to your Docker Hub (docker.io) account. Adjust the image tag (for example, az-1767-aca:202504091739 can be renamed to your preferred version) and replace theringe with your own Docker Hub username.

docker build -t az-1767-aca:202504091739 . --no-cache
docker tag az-1767-aca:202504091739 theringe/az-1767-aca:202504091739
docker push theringe/az-1767-aca:202504091739

After building and uploading the image, go back to your Container App and update the image configuration. Once the new image is applied, visit the app's homepage and you will see the result.

References:
Connect Azure SQL Server via User Assigned Managed Identity under Django | Microsoft Community Hub
Managed identities in Azure Container Apps | Microsoft Learn
Azure Identity client library for Python | Microsoft Learn
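As the tutorial notes, granting db_owner is only for demonstration. A more restrictive sketch following the same pattern, giving the managed identity only read/write access to data, could look like this (the role choices are an assumption; align them with your company's security policies):

CREATE USER [<YOUR_APP_NAME>] FROM EXTERNAL PROVIDER;
-- Grant only data read/write instead of full db_owner
ALTER ROLE db_datareader ADD MEMBER [<YOUR_APP_NAME>];
ALTER ROLE db_datawriter ADD MEMBER [<YOUR_APP_NAME>];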
Code the Future with Java and AI – Join Me at JDConf 2025

JDConf 2025 is just around the corner, and whether you're a Java developer, architect, team leader, or decision maker, I hope you'll join me as we explore how Java is evolving with the power of AI and how you can start building the next generation of intelligent applications today.

Why JDConf 2025?

With over 22 expert-led sessions and 10+ hours of live content, JDConf is packed with learning, hands-on demos, and real-world solutions. You'll hear from Java leaders and engineers on everything from modern application design to bringing AI into your Java stack. It's free, virtual, and your chance to connect from wherever you are. (On-demand sessions will also be available globally from April 9-10, so you can tune in anytime from anywhere.)

Bring AI into Java Apps

At JDConf 2025, we are going beyond buzzwords. We'll show you how to bring AI into real Java apps, using patterns and tools that work today.

First, we'll cover Retrieval-Augmented Generation (RAG), a design pattern where your app retrieves the right business data in real time and combines it with AI models to generate smart, context-aware responses. Whether it is answering support queries, optimizing schedules, or generating insights, RAG enables your app to think in real time.

Second, we'll introduce AI agents: software entities that do more than respond; they act. Think about automating production-line scheduling at an auto manufacturer or rebooking delayed flights for passengers. These agents interact with APIs, reason over data, and make decisions, all without human intervention.

Third, we'll explore the complete AI application platform on Azure. It is built to work with the tools Java developers already know, from Spring Boot to Quarkus, and includes OpenAI and many other models, vector search with PostgreSQL, and libraries like Spring AI and LangChain4j. Here are just two example stacks:

- Spring Boot AI Stack: any app hosting service such as Azure Container Apps or App Service + Spring AI + OpenAI + PostgreSQL for business data and vector data store.
- Quarkus AI Stack: any app hosting service such as Azure Container Apps or App Service + LangChain4j + OpenAI + PostgreSQL for business data and vector data store.

This is how you turn existing Java apps into intelligent, interactive systems, without reinventing everything. Whether you are an experienced developer or just starting out, JDConf offers valuable opportunities to explore the latest advancements in Java, cloud, and AI technologies, gain practical insights, and connect with Java experts from across the globe. Topics include Java 25, Virtual Threads, Spring Boot, Jakarta EE 12, AI developer experiences, Spring AI, LangChain4j, combining data and AI, and automated refactoring for Java app code modernization.

We'll also show you how GitHub Copilot helps you modernize faster. GitHub Copilot's new "upgrade assistant" can help refactor your project, suggest dependency upgrades, and guide you through framework transitions, freeing you up to focus on innovation.

Get the Right Fit for Your Java App

And what if your apps run on JBoss, WebLogic, or Tomcat? We will walk you through how to map those apps to the right Azure service:

- Monoliths (JAR, WAR, EAR) → Deploy to App Service
- Microservices or containers → Use Azure Container Apps or AKS
- WebLogic & WebSphere → Lift and shift to Azure Virtual Machines
- JBoss EAP containers → Run on Azure Red Hat OpenShift

You'll get clear guidance on where your apps fit and how to move forward, with no guesswork or dead ends.
Let's Code the Future, Together

I'll be there, along with Josh Long from the Spring AI community and Lize Raes from the LangChain4j community, delivering a technical keynote packed with practical insights. If you haven't started building intelligent Java apps, you can start with JDConf. If you've already started on the journey, tune in to learn how you can enrich your experiences with the latest in tech.

So, mark your calendar. Spread the word. Bring your team. JDConf 2025 is your place to build what is next with Java and AI.

👉 Register now at jdconf.com. Check out the 20+ exclusive sessions brought to you by Java experts from across the globe in all major time zones.
Configure time-based scaling in Azure Container Apps

Azure Container Apps leverages cron-type KEDA scaling rules to schedule autoscaling actions at specific times. This feature is ideal for applications with predictable workload fluctuations (e.g., batch jobs, reporting systems) that need to scale based on time-of-day or day-of-week patterns. This guide walks you through configuring and optimizing time-based scaling.

Prerequisites

- An active Azure subscription with access to Azure Container Apps.
- A basic understanding of KEDA (Kubernetes Event-driven Autoscaling) concepts.
- A deployed application in Azure Container Apps (see the Quickstart Guide).

How Time-Based Scaling Works

Time-based scaling in Azure Container Apps is achieved by defining cron-type scale rules (https://keda.sh/docs/2.15/scalers/cron/). It uses cron expressions to define start and end times for scaling actions. During the active window, the app scales to the specified desiredReplicas count. Outside this window, scaling falls back to the minReplicas/maxReplicas settings.

Configuration Example: Weekday vs. Weekend Scaling

Scale to 1 replica on weekdays (Mon–Fri) and 0 replicas on weekends (Sat–Sun) to optimize costs.

triggers:
- type: cron
  metadata:
    timezone: Asia/Shanghai   # Uses TZ database names (e.g., "America/New_York")
    start: "0 8 * * 1"        # 08:00 AM every Monday (1 = Monday in cron syntax)
    end: "0 0 * * 6"          # 00:00 (midnight) every Saturday (6 = Saturday)
    desiredReplicas: "1"      # Maintain 1 replica during the active period

Key Parameters Explained

- type: Set to cron for time-based scaling.
- timezone: Time zone for cron schedules (e.g., Europe/London). See the full list of TZ database names.
- start / end: Cron expressions defining the active window.
- desiredReplicas: Replica count during the start–end window.

Cron syntax notes:
- Format: [minute] [hour] [day] [month] [day-of-week] (e.g., 0 8 * * 1 = 8:00 AM every Monday).
- Days: 0 = Sunday, 1 = Monday, ..., 6 = Saturday.

Step-by-Step Configuration

Azure Portal:
1. Navigate to your Container App in the Azure portal.
2. Under Application, select Scale.
3. Add a new scaling rule:
   - Rule Type: Select Custom → cron.
   - Metadata: Add timezone, start, end, and desiredReplicas (use the example above).
4. Save changes.

Azure CLI:
By running the CLI command below, we can add a new time-based scale rule.

az containerapp update \
  --name <APP_NAME> \
  --resource-group <RESOURCE_GROUP> \
  --environment <ENVIRONMENT_NAME> \
  --min-replicas 0 \
  --max-replicas 1 \
  --scale-rule-name "weekdayscale" \
  --scale-rule-type "cron" \
  --scale-rule-metadata "timezone=Asia/Shanghai" "start=0 8 * * 1" "end=0 0 * * 6" "desiredReplicas=1"

ARM Template:
This snippet is an excerpt of an ARM template, shown to illustrate where each section fits in the context of the overall template.

{
  "scale": {
    "minReplicas": 0,
    "maxReplicas": 1,
    "rules": [
      {
        "name": "weekdayscale",
        "type": "cron",
        "metadata": {
          "timezone": "Asia/Shanghai",
          "start": "0 8 * * 1",
          "end": "0 0 * * 6",
          "desiredReplicas": "1"
        }
      }
    ]
  }
}

Benefits of time-based scaling

- Cost efficiency: Scale down during off-peak hours to minimize resource costs.
- Resource optimization: Automatically adjust resources to align with predictable workload patterns.
- Simplicity: Straightforward configuration and management of scaling rules based on time intervals.

Conclusion

Time-based scaling in Azure Container Apps simplifies resource management for time-sensitive workloads. By combining cron schedules with KEDA, you can automate scaling actions to match demand while minimizing costs.
For advanced scenarios, explore KEDA cron scaler documentation and the Azure Container Apps scaling guide.
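As a quick way to confirm that a cron rule is behaving as expected, you can check the current replica count from the CLI during and outside the scheduled window. This is a small sketch; the app and resource group names are placeholders, and flag availability can vary by CLI version.

# Inspect the currently running replicas for the app's latest revision
az containerapp replica list \
  --name <APP_NAME> \
  --resource-group <RESOURCE_GROUP> \
  --output table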