Azure App Service
Announcing the Public Preview of the New App Service Quota Self-Service Experience
Update 9/15/2025: The App Service Quota Self-Service experience has been temporarily taken offline to incorporate feedback received during this public preview. Because this is a public preview, availability and features are subject to change as we receive and incorporate feedback. We will post another update when the self-serve experience is available once more. In the meantime, if you require assistance, please file a support ticket following the guidance at the bottom of this post in the Filing a Support Ticket section. We appreciate your patience while we work to build the best experience possible for this scenario.

What's New?

The updated experience introduces a dedicated App Service Quota blade in the Azure portal, offering a streamlined and intuitive interface to:

- View current usage and limits across the various SKUs
- Set custom quotas tailored to your App Service plan needs

This new experience empowers developers and IT admins to proactively manage resources, avoid service disruptions, and optimize performance.

Quick Reference - Start here!

- If your deployment requires quota for ten or more subscriptions, file a support ticket with problem type Quota, following the instructions at the bottom of this post.
- If any subscription included in your request requires zone redundancy, file a support ticket with problem type Quota, following the instructions at the bottom of this post.
- Otherwise, use the new self-service experience to increase your quota automatically.

Self-service Quota Requests

For non-zone-redundant needs, quota alone is sufficient to enable App Service deployment or scale-out. Follow these steps to place your request.

1. Navigate to the Quotas resource provider in the Azure portal.

2. Select App Service.

Navigating the primary interface: each App Service VM size is represented as a separate SKU. If you intend to scale up or down within a specific offering (e.g., Premium v3), request an equivalent number of VMs for each applicable size of that offering (e.g., request 5 instances for both P1v3 and P3v3). As with other quotas, you can filter by region, subscription, provider, or usage, and you can group the results by usage, quota (App Service VM type), or location (region). Current usage is expressed in App Service VMs, which lets you quickly identify which SKUs are nearing their quota limits. Adjustments can be made inline, with no need to visit another page; this is covered in detail in the next section.

3. Request quota adjustments.

Clicking the pen icon opens a flyout window that captures the quota request. The quota type (App Service SKU) is already populated, along with current usage. Note that your request is not incremental: you must specify the new limit that you wish to see reflected in the portal. For example, to request two additional P1v2 VMs, you would file a request for a new limit equal to your current limit plus two. Click Submit to send the request for automatic processing.

How quota approvals work: immediately upon submitting a quota request, you will see a processing dialog. If the quota request can be automatically fulfilled, no support request is needed, and you should receive confirmation within a few minutes of submission. If the request cannot be automatically fulfilled (for example, when the requested new limit exceeds what can be automatically granted for the region), you will be given the option to file a support request with the same information.
4. If applicable, create a support ticket.

When creating a support ticket, you will need to repopulate the Region and App Service plan details; the new limit has already been populated for you. If you forget the region or SKU that was requested, you can reference them in your notifications pane. If you choose to create a support ticket, you will interact with the capacity management team for that region. This is a 24x7 service, so requests may be created at any time. Once you have filed the support request, you can track its status via the Help + support dashboard.

Known issues

The self-service quota request experience for App Service is in public preview. Here are some caveats worth mentioning while the team finalizes the release for general availability:

- Closing the quota request flyout window stops meaningful notifications for that request. You can still view the outcome of your quota requests by checking actual quota, but if you want to rely on notifications for alerts, we recommend leaving the quota request window open for the few minutes that it is processing.
- Some SKUs are not yet represented in the quota dashboard. These will be added later in the public preview.
- The Activity Log does not currently provide a meaningful summary of previous quota requests and their outcomes. This will also be addressed during the public preview.
- As noted in the walkthrough, the new experience does not enable zone-redundant deployments. Quota is an inherently regional construct, and zone-redundant enablement requires a separate step that can only be taken in response to a support ticket being filed.
- Quota API documentation is being drafted to enable bulk non-zone-redundant quota requests without requiring you to file a support ticket.

Filing a Support Ticket

If your deployment requires zone redundancy or contains many subscriptions, we recommend filing a support ticket with issue type "Technical" and problem type "Quota".

We want your feedback! If you notice any aspect of the experience that does not work as expected, or you have feedback on how to make it better, please use the comments below to share your thoughts!
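Until that Quota API documentation is published, there is no confirmed programmatic contract for App Service quota. As a purely hypothetical sketch, if the service were to adopt the generic Microsoft.Quota resource provider pattern used elsewhere in Azure, a bulk request across subscriptions might look like the following. The scope format, quota resource name, and API version are all assumptions, so treat this as illustration only and prefer the portal experience described above until the official documentation ships.

#!/bin/bash
# HYPOTHETICAL: assumes App Service quota is exposed through the generic
# Microsoft.Quota provider; the scope, SKU name, and api-version are guesses.
NEW_LIMIT=10   # the absolute new limit, not an increment
REGION="westus2"
SKU="P1v3"
for SUB in "<sub-id-1>" "<sub-id-2>"; do
  az rest --method put \
    --url "https://management.azure.com/subscriptions/${SUB}/providers/Microsoft.Web/locations/${REGION}/providers/Microsoft.Quota/quotas/${SKU}?api-version=2023-02-01" \
    --body "{\"properties\": {\"limit\": {\"limitObjectType\": \"LimitValue\", \"value\": ${NEW_LIMIT}}}}"
done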
Deployment and Build from Azure Linux based Web App

TOC

1. Introduction
2. Deployment Sources: From Laptop, From CI/CD tools
3. Build Source: From Oryx Build, From Runtime, From Deployment Sources
4. Walkthrough: Laptop + Oryx, Laptop + Runtime, Laptop, CI/CD concept
5. Conclusion

1. Introduction

Deployment on Azure Linux Web Apps can be done through several different methods. When a deployment issue occurs, the first step is usually to identify which method was used. The core of these methods revolves around the concept of Build: the process of preparing and loading the third-party dependencies required to run an application. For example, a Python app defines its build process as pip install packages, a Node.js app uses npm install modules, and PHP or Java apps rely on libraries. In this tutorial, I'll use a simple Python app to demonstrate four different Deployment/Build approaches. Each method has its own use cases and limitations. You can even combine them, for example, using your laptop as the deployment tool while still using Oryx as the build engine. The same concepts apply to other runtimes such as Node.js, PHP, and beyond.

2. Deployment Sources

From Laptop

Scenarios:
- Setting up a proof of concept
- Developing in a local environment

Advantages:
- Fast development cycle
- Minimal configuration required

Limitations:
- Difficult for the local test environment to interact with cloud resources
- OS differences between local and cloud environments may cause integration issues

From CI/CD tools

Scenarios:
- Projects with established development and deployment workflows
- Codebases requiring version control and automation

Advantages:
- Developers can focus purely on coding
- Automatic deployment upon branch commits

Limitations:
- Build and runtime environments may still differ slightly at the OS level

3. Build Source

From Oryx Build

Scenarios:
- Offloading resource-intensive build tasks from your local or CI/CD environment directly to the Azure Web App platform, reducing local computing overhead

Advantages:
- Minimal extra configuration
- Multi-language support

Limitations:
- Build performance is limited by the App Service SKU and may face performance bottlenecks
- The build environment may differ from the runtime environment, so apps sensitive to minor package versions should take caution

From Runtime

Scenarios:
- When you want the benefits and pricing of a PaaS solution but need control similar to an IaaS setup

Advantages:
- Build occurs in the runtime environment itself
- Allows greater flexibility for low-level system operations

Limitations:
- Certain system-level settings (e.g., NTP time sync) remain inaccessible

From Deployment Sources

Scenarios:
- Pre-package all dependencies and deploy them together, eliminating the need for a separate build step

Advantages:
- Supports proprietary or closed-source company packages

Limitations:
- Incompatibility may arise if the development and runtime environments differ significantly in OS or package support

Type | Method | Scenario | Advantage | Limitation
Deployment | From Laptop | POC / Dev | Fast setup | Poor cloud link
Deployment | From CI/CD | Auto pipeline | Focus on code | OS mismatch
Build | From Oryx | Platform build | Simple, multi-lang | Performance cap
Build | From Runtime | High control | Flexible ops | Limited access
Build | From Deployment | Pre-built deploy | Use private pkg | Env mismatch

4. Walkthrough

Laptop + Oryx

Add Environment Variables (a CLI sketch for applying these settings follows the list):

- SCM_DO_BUILD_DURING_DEPLOYMENT=false (Purpose: prevents the deployment environment from packaging during publish; this must also be set in the deployment environment itself.)
- WEBSITE_RUN_FROM_PACKAGE=false (Purpose: tells Azure Web App not to run the app from a prepackaged file.)
- ENABLE_ORYX_BUILD=true (Purpose: allows the Azure Web App platform to handle the build process automatically after a deployment event.)
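If you prefer scripting over the portal, these app settings (and the startup command used in the next step) can also be applied with the Azure CLI. The resource group and app name below are placeholders.

#!/bin/bash
# Apply the walkthrough's app settings and startup command via the Azure CLI.
# <rg> and <app-name> are placeholders for your resource group and web app.
az webapp config appsettings set \
  --resource-group <rg> --name <app-name> \
  --settings SCM_DO_BUILD_DURING_DEPLOYMENT=false \
             WEBSITE_RUN_FROM_PACKAGE=false \
             ENABLE_ORYX_BUILD=true

# Set the startup command (the "Add startup command" step below).
az webapp config set \
  --resource-group <rg> --name <app-name> \
  --startup-file "bash /home/site/wwwroot/run.sh"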
Add startup command:

bash /home/site/wwwroot/run.sh

(The run.sh file corresponds to the script of the same name in your project code.)

Check sample code:

requirements.txt: defines Python packages (similar to package.json in Node.js).

Flask==3.0.3
gunicorn==23.0.0

app.py: the main Python application code.

from flask import Flask

app = Flask(__name__)

@app.route("/")
def home():
    return "Deploy from Laptop + Oryx"

if __name__ == "__main__":
    import os
    app.run(host="0.0.0.0", port=8000)

run.sh: the script used to start the application.

#!/bin/bash
gunicorn --bind=0.0.0.0:8000 app:app

.deployment: VS Code deployment configuration file.

[config]
SCM_DO_BUILD_DURING_DEPLOYMENT=false

Deployment

Once both the deployment and build processes complete successfully, you should see the expected result.

Laptop + Runtime

Add Environment Variables (screenshots omitted since the process is similar to the previous steps):

- SCM_DO_BUILD_DURING_DEPLOYMENT=false (Purpose: prevents the deployment environment from packaging during the publishing process; this must also be set in the deployment environment itself.)
- WEBSITE_RUN_FROM_PACKAGE=false (Purpose: instructs Azure Web App not to run the application from a prepackaged file.)
- ENABLE_ORYX_BUILD=false (Purpose: ensures that Azure Web App does not perform any build after deployment; all build tasks are handled during the startup script execution instead.)

Add Startup Command (screenshots omitted since the process is similar to the previous steps):

bash /home/site/wwwroot/run.sh

(The run.sh file corresponds to the script of the same name in your project code.)

Check Sample Code (screenshots omitted since the process is similar to the previous steps):

requirements.txt: defines Python packages (similar to package.json in Node.js).

Flask==3.0.3
gunicorn==23.0.0

app.py: the main Python application code.

from flask import Flask

app = Flask(__name__)

@app.route("/")
def home():
    return "Deploy from Laptop + Runtime"

if __name__ == "__main__":
    import os
    app.run(host="0.0.0.0", port=8000)

run.sh: the startup script. In addition to launching the app, it also creates a virtual environment and installs dependencies; all build-related tasks happen here.

#!/bin/bash
python -m venv venv
source venv/bin/activate
pip install -r requirements.txt
gunicorn --bind=0.0.0.0:8000 app:app

.deployment: VS Code deployment configuration file.

[config]
SCM_DO_BUILD_DURING_DEPLOYMENT=false

Deployment (screenshots omitted since the process is similar to the previous steps)

Once both deployment and build are completed, you should see the expected output.
Laptop

Add Environment Variables (screenshots omitted as the process is similar to the previous steps):

- SCM_DO_BUILD_DURING_DEPLOYMENT=false (Purpose: prevents the deployment environment from packaging during publish; this must also be set in the deployment environment itself.)
- WEBSITE_RUN_FROM_PACKAGE=false (Purpose: instructs Azure Web App not to run the app from a prepackaged file.)
- ENABLE_ORYX_BUILD=false (Purpose: prevents Azure Web App from building after deployment; all build tasks execute during the startup script instead.)

Add Startup Command (screenshots omitted as the process is similar to the previous steps):

bash /home/site/wwwroot/run.sh

(The run.sh corresponds to the same-named file in your project code.)

Check Sample Code (screenshots omitted as the process is similar to the previous steps):

requirements.txt: defines Python packages (like package.json in Node.js).

Flask==3.0.3
gunicorn==23.0.0

app.py: the main Python application.

from flask import Flask

app = Flask(__name__)

@app.route("/")
def home():
    return "Deploy from Laptop"

if __name__ == "__main__":
    import os
    app.run(host="0.0.0.0", port=8000)

run.sh: the startup script. In addition to launching the app, it activates an existing virtual environment; the creation of that environment and the installation of dependencies happen before deployment, as shown next.

#!/bin/bash
source venv/bin/activate
gunicorn --bind=0.0.0.0:8000 app:app

.deployment: VS Code deployment configuration file.

[config]
SCM_DO_BUILD_DURING_DEPLOYMENT=false

Deployment

Before deployment, you must perform the build process locally. Run the following commands (these vary by language; for Python they install dependencies into a virtual environment):

python -m venv venv
source venv/bin/activate
pip install -r requirements.txt

After completing the local build, deploy your app. Once deployment finishes, you should see the expected result. (A scriptable deployment sketch follows.)
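The walkthrough above publishes from VS Code, but the same flow can be scripted. A minimal sketch, assuming a zip deployment of the project folder (including the venv built locally) to a hypothetical app named <app-name>:

#!/bin/bash
# Package the project (including the locally built venv) and zip-deploy it.
# <rg> and <app-name> are placeholders; run this from the project root.
zip -r app.zip .
az webapp deploy \
  --resource-group <rg> --name <app-name> \
  --src-path app.zip --type zip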
CI/CD concept

For example, when using Azure DevOps (ADO) as your CI/CD tool, its behavior conceptually mirrors deploying directly from a laptop, but with enhanced automation, governance, and reproducibility. Essentially, ADO pipelines translate your manual local deployment steps into codified, repeatable workflows defined in a YAML pipeline file, executed by Microsoft-hosted or self-hosted agents. A typical azure-pipelines.yml defines the stages (e.g., build, deploy) and their corresponding jobs and steps. Each stage runs on a specified VM image (e.g., ubuntu-latest) and executes commands: the same npm install or pip install you would normally run on your laptop. The ADO pipeline acts as your automated laptop; every build command, environment variable, and deployment step you'd normally execute locally is simply formalized in YAML. Whether you build inline, use Oryx, or deploy pre-built artifacts, the underlying concept remains identical: compile, package, and deliver code to Azure. The distinction lies in who performs it.

5. Conclusion

Different deployment and build methods lead to different debugging and troubleshooting approaches. Therefore, understanding the selected deployment method and its corresponding troubleshooting process is an essential skill for every developer and DevOps engineer.

From Timeouts to Triumph: Optimizing GPT-4o-mini for Speed, Efficiency, and Reliability

The Challenge

Large-scale generative AI deployments can stretch system boundaries, especially when thousands of concurrent requests require both high throughput and low latency. In one such production environment, GPT-4o-mini deployments running under Provisioned Throughput Units (PTUs) began showing sporadic 408 (timeout) and 429 (throttling) errors. Requests that normally completed in seconds were occasionally hitting the 60-second timeout window, causing degraded experiences and unnecessary retries. Initial suspicion pointed toward PTU capacity limitations, but deeper telemetry revealed a different cause.

What the Data Revealed

Using Azure Data Explorer (Kusto), API Management (APIM) logs, and OpenAI billing telemetry, a detailed investigation uncovered several insights:

- Latency was not correlated with PTU utilization: PTU resources were healthy and performing within SLA even during spikes.
- Time-Between-Tokens (TBT) stayed consistently low (~8–10 ms): the model was generating tokens steadily.
- Excessive token output was the real bottleneck: requests generating 6K–8K tokens simply required more time than allowed in the 60-second completion window.

In short, the model wasn't slow; the workload was oversized.

The Optimization Opportunity

The analysis opened a broader optimization opportunity:

- Balance token length with throughput targets.
- Introduce architectural patterns to prevent timeout or throttling cascades under load.
- Enforce automatic token governance instead of relying on client-side discipline.

The Solution

Three engineering measures delivered immediate impact: token optimization, spillover routing, and policy enforcement.

Right-size the Token Budget

- Empirical throughput for GPT-4o-mini: ~33 tokens/sec, which works out to roughly 2K tokens in a 60-second window.
- Enforced max_tokens = 2000 for synchronous requests.
- Enabled streaming responses for longer outputs, allowing incremental delivery without hitting timeout limits.

(A request-level sketch of both settings follows this list.)
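For illustration, here is what those two mitigations look like on a raw Azure OpenAI chat-completions call: capping max_tokens for synchronous requests, and switching to streaming for longer outputs. The resource and deployment names are placeholders, and the api-version shown is simply a recent stable one, not necessarily the one this team used.

#!/bin/bash
ENDPOINT="https://<your-resource>.openai.azure.com"
DEPLOYMENT="<gpt-4o-mini-deployment>"

# Synchronous request, capped at the ~2K-token budget that fits a 60s window.
curl "$ENDPOINT/openai/deployments/$DEPLOYMENT/chat/completions?api-version=2024-02-01" \
  -H "Content-Type: application/json" \
  -H "api-key: $AZURE_OPENAI_API_KEY" \
  -d '{"messages":[{"role":"user","content":"Summarize the incident report."}],
       "max_tokens": 2000}'

# Longer outputs: stream instead, so tokens arrive incrementally and the
# request never sits against the 60-second completion window.
curl -N "$ENDPOINT/openai/deployments/$DEPLOYMENT/chat/completions?api-version=2024-02-01" \
  -H "Content-Type: application/json" \
  -H "api-key: $AZURE_OPENAI_API_KEY" \
  -d '{"messages":[{"role":"user","content":"Write the full postmortem."}],
       "stream": true}'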
Enable Spillover for Continuity

- Implemented multi-region spillover using Azure Front Door and APIM Premium gateways.
- When PTU queues reached capacity or 429s appeared, requests were routed to Standard deployments in secondary regions.
- The result: graceful degradation and an uninterrupted user experience.

Govern with APIM Policies

- Added inbound policies to inspect and adjust max_tokens dynamically.
- On 408/429 responses, APIM retried and rerouted traffic based on the spillover logic.

The Results

After optimization, improvements were immediate and measurable:

- Latency reduction: significant improvement in end-to-end response times across high-volume workloads.
- Reliability gains: 408/429 errors fell from more than 1% to near zero.
- Cost efficiency: average token generation decreased by ~60%, reducing per-request costs.
- Scalability: spillover routing ensured consistent performance during regional or capacity surges.
- Governance: APIM policies established a reusable token-control framework for future AI workloads.

Lessons Learned

- Latency isn't always about capacity: investigate workload patterns before scaling hardware.
- Token budgets define the user experience: over-generation can quietly break SLA compliance.
- Design for elasticity: spillover and multi-region routing maintain continuity during spikes.
- Measure everything: combine KQL telemetry with latency and token tracking for faster diagnostics.

The Outcome

By applying data-driven analysis, architectural tuning, and automated governance, the team turned an operational bottleneck into a model of consistent, scalable performance. The result: faster responses, lower costs, and higher trust, and a blueprint for building resilient, high-throughput AI systems on Azure.
Expanding the Public Preview of the Azure SRE Agent

We are excited to share that the Azure SRE Agent is now available in public preview for everyone instantly, with no sign-up required. A big thank you to all our preview customers who provided feedback and helped shape this release! Watching teams put the SRE Agent to work taught us a ton, and we've baked those lessons into a smarter, more resilient, and enterprise-ready experience. You can now find Azure SRE Agent directly in the Azure Portal and get started, or use the link below.

📖 Learn more about SRE Agent.
👉 Create your first SRE Agent (Azure login required)

What's New in Azure SRE Agent - October Update

The Azure SRE Agent now delivers secure-by-default governance, deeper diagnostics, and extensible automation, built for scale. It can even resolve incidents autonomously by following your team's runbooks. With native integrations across Azure Monitor, GitHub, ServiceNow, and PagerDuty, it supports root cause analysis using both source code and historical patterns. And since September 1, billing and reporting are available via Azure Agent Units (AAUs). Please visit the product documentation for the latest updates. Here are a few highlights for this month:

- Prioritizing enterprise governance and security: By default, the Azure SRE Agent operates with least-privilege access and never executes write actions on Azure resources without explicit human approval. Additionally, it uses role-based access control (RBAC) so organizations can assign read-only or approver roles, providing clear oversight and traceability from day one. This allows teams to choose their desired level of autonomy, from read-only insights to approval-gated actions to full automation, without compromising control.
- Covering the breadth and depth of Azure: The Azure SRE Agent helps teams manage and understand their entire Azure footprint. With built-in support for AZ CLI and kubectl, it works across all Azure services. But it doesn't stop there: diagnostics are enhanced for platforms like PostgreSQL, API Management, Azure Functions, AKS, Azure Container Apps, and Azure App Service. Whether you're running microservices or managing monoliths, the agent delivers consistent automation and deep insights across your cloud environment.
- Automating incident management: The Azure SRE Agent now plugs directly into Azure Monitor, PagerDuty, and ServiceNow to streamline incident detection and resolution. These integrations let the agent ingest alerts and trigger workflows that match your team's existing tools, so you can respond faster with less manual effort.
- Engineered for extensibility: The Azure SRE Agent's incident management approach lets teams reuse existing runbooks and customize response plans to fit their unique workflows. Whether you want to keep a human in the loop or empower the agent to autonomously mitigate and resolve issues, the choice is yours. This flexibility gives teams the freedom to evolve, from guided actions to trusted autonomy, without ever giving up control.
- Root cause, meet source code: The Azure SRE Agent now supports code-aware root cause analysis (RCA) by linking diagnostics directly to source context in GitHub and Azure DevOps. This tight integration helps teams trace incidents back to the exact code changes that triggered them, accelerating resolution and boosting confidence in automated responses. By bridging operational signals with engineering workflows, the agent makes RCA faster, clearer, and more actionable.
- Close the loop with DevOps: The Azure SRE Agent now generates incident summary reports directly in GitHub and Azure DevOps, complete with diagnostic context. These reports can be assigned to a GitHub Copilot coding agent, which automatically creates pull requests and merges validated fixes. Every incident becomes an actionable code change, driving permanent resolution instead of temporary mitigation.

Getting Started

- Start here: Create a new SRE Agent in the Azure portal (Azure login required)
- Blog: Announcing a flexible, predictable billing model for Azure SRE Agent
- Blog: Enterprise-ready and extensible - Update on the Azure SRE Agent preview
- Product documentation
- Product home page

Community & Support

We'd love to hear from you! Please use our GitHub repo to file issues, request features, or share feedback with the team.
Choosing the Right Azure Containerisation Strategy: AKS, App Service, or Container Apps?

Azure Kubernetes Service (AKS)

What is it? AKS is Microsoft's managed Kubernetes offering, providing full access to the Kubernetes API and control plane. It's designed for teams that want to run complex, scalable, and highly customisable container workloads, with direct control over orchestration, networking, and security.

When to choose AKS:
- You need advanced orchestration, custom networking, or integration with third-party tools.
- Your team has Kubernetes expertise and wants granular control.
- You're running large-scale, multi-service, or hybrid/multi-cloud workloads.
- You require Windows container support (with some limitations).

Advantages:
- Full Kubernetes API access and ecosystem compatibility.
- Supports both Linux and Windows containers.
- Highly customisable (networking, storage, security, scaling).
- Suitable for complex, stateful, or regulated workloads.

Disadvantages:
- Steeper learning curve; requires Kubernetes knowledge.
- You manage cluster upgrades, scaling, and security patches (though Azure automates much of this).
- Potential for over-provisioning and higher operational overhead.

Azure App Service

What is it? App Service is a fully managed Platform-as-a-Service (PaaS) for hosting web apps, APIs, and backends. It supports both code and container deployments, but is optimised for web-centric workloads.

When to choose App Service:
- You're building traditional web apps, REST APIs, or mobile backends.
- You want to deploy quickly with minimal infrastructure management.
- Your team prefers a PaaS experience with built-in scaling, SSL, and CI/CD.
- You need to run Windows containers (with some limitations).

Advantages:
- Easiest to use, minimal configuration, fast deployments.
- Built-in scaling, SSL, custom domains, and staging slots.
- Tight integration with Azure DevOps, GitHub Actions, and other Azure services.
- Handles infrastructure, patching, and scaling for you.

Disadvantages:
- Less flexibility for complex microservices or custom orchestration.
- Limited access to underlying infrastructure and networking.
- Not ideal for event-driven or non-HTTP workloads.

Azure Container Apps

What is it? Container Apps is a fully managed, serverless container platform built on Kubernetes and open-source tech like Dapr and KEDA. It abstracts away Kubernetes complexity, letting you focus on microservices, event-driven workloads, or background jobs.

When to choose Container Apps:
- You want to run microservices or event-driven workloads without managing Kubernetes.
- You need automatic scaling (including scale to zero) based on HTTP traffic or events.
- You want to use Dapr for service discovery, pub/sub, or state management.
- You're building modern, cloud-native apps but don't need direct Kubernetes API access.

Advantages:
- Serverless scaling (including to zero); pay only for what you use.
- Built-in support for microservices patterns, event-driven architectures, and background jobs.
- No cluster management; Azure handles the infrastructure.
- Integrates with Azure DevOps and GitHub Actions, and supports Linux containers from any registry.

Disadvantages:
- No direct access to Kubernetes APIs or custom controllers.
- Linux containers only (no Windows container support).
- Some advanced networking and customisation options are limited compared to AKS.
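To make the serverless-scaling point concrete, here is a minimal sketch of deploying a container to Container Apps with scale-to-zero enabled via the Azure CLI. The resource names and image are placeholders, and flags may evolve as the CLI matures.

#!/bin/bash
# Minimal Container Apps deployment with scale-to-zero.
# <rg>, <env-name>, <app-name>, and the image are placeholders.
az containerapp env create \
  --resource-group <rg> --name <env-name> --location westeurope

az containerapp create \
  --resource-group <rg> --name <app-name> \
  --environment <env-name> \
  --image myregistry.azurecr.io/myapp:latest \
  --target-port 8080 --ingress external \
  --min-replicas 0 --max-replicas 5   # scales to zero when idle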
Key Differences

Feature | AKS | App Service | Container Apps
Best for | Complex, scalable, custom workloads | Web apps, APIs, backends | Microservices, event-driven, jobs
Management | You manage (with Azure help) | Fully managed | Fully managed, serverless
Scaling | Manual/auto (pods, nodes) | Auto (HTTP traffic) | Auto (HTTP/events, scale to zero)
API Access | Full Kubernetes API | No infra access | No Kubernetes API
OS Support | Linux & Windows | Linux & Windows | Linux only
Networking | Advanced, customisable | Basic (web-centric) | Basic, with VNet integration
Use Cases | Hybrid/multi-cloud, regulated, large-scale | Web, REST APIs, mobile | Microservices, event-driven, background jobs
Learning Curve | Steep (Kubernetes skills needed) | Low | Low-medium
Pricing | Pay for nodes (even idle) | Pay for plan (fixed/auto) | Pay for usage (scale to zero)
CI/CD Integration | Azure DevOps, GitHub, custom | Azure DevOps, GitHub | Azure DevOps, GitHub

How to Decide?

- Start with App Service if you're building a straightforward web app or API and want the fastest path to production.
- Choose Container Apps for modern microservices, event-driven, or background-processing workloads where you want serverless scaling and minimal ops.
- Go with AKS when you need full Kubernetes power, advanced customisation, or are running at enterprise scale with a skilled team.

Conclusion

Azure's containerisation portfolio is broad, but each service is optimised for different scenarios. For most new cloud-native projects, Container Apps offers the best balance of simplicity and power. For web-centric workloads, App Service remains the fastest route. For teams needing full control and scale, AKS is unmatched.

Tip: Start simple, and only move to more complex platforms as your requirements grow.
Announcing Public Preview: ASEv3 Outbound Network Segmentation

🔍 What Is Outbound Network Segmentation?

Outbound Network Segmentation allows you to define and control how outbound traffic is routed from your App Service Environment v3 apps. You can now segment outbound traffic at the app level, enabling fine-grained egress control that aligns with enterprise security policies and compliance requirements. Previously, all outbound traffic from an App Service Environment v3 originated from the full subnet range hosting the App Service Environment, making it difficult for networking teams to apply per-app restrictions like those available with the multi-tenant App Service offering. With this new capability, you can now:

- Define, for each app, the subnet all outbound traffic is routed through.
- Assign dedicated outbound IPs per app via NAT Gateways.
- Route traffic through custom firewalls or appliances.
- Apply Network Security Groups (NSGs) with greater precision.
- Improve auditability and compliance for regulated workloads.

In an App Service Environment, each worker is assigned an IP from the subnet, but there is no way to group IPs from various apps/plans to allow for routing, blocking, or allowing specific app traffic from a networking perspective. With outbound network segmentation, you can now direct traffic from multiple apps to the same subnet/virtual network and gain this level of control. For example, consider a scenario where you would like to ensure that only App A can talk to Database A. To guarantee this, you join App A to an alternate subnet (vnet-integration-subnet). The alternate subnet has network access to the private endpoint subnet via NSG. This means that only traffic from the virtual network integration subnet can reach the private endpoint subnet, which in turn gives access to the database.

🧪 What's Included in the Public Preview?

This feature is currently available in all public Azure regions. If you're interested in trying it out, you will need to create a new App Service Environment and enable the following cluster setting during creation. Cluster settings can be configured using an ARM/Bicep template; for guidance, see Custom configuration settings for App Service Environments.

"clusterSettings": [
  {
    "name": "MultipleSubnetJoinEnabled",
    "value": "true"
  }
]

Once the App Service Environment is created and this cluster setting is enabled, you can join apps to alternate subnets at any time. However, if you don't set the cluster setting during creation, the App Service Environment will not support this feature; enabling it on existing App Service Environments is not supported. Portal support for enabling this cluster setting, as well as for joining alternate subnets, is not available at this time. To configure the cluster setting, use an ARM/Bicep template to create the App Service Environment. To join an alternate subnet, use the Azure CLI command shown after the sketch below. The alternate subnet must be empty and delegated to Microsoft.Web/serverFarms before you attempt to join it, and application traffic routing must be enabled for your app; this is key to ensuring all traffic is routed through the alternate subnet and not the default route.
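The delegation prerequisite can be scripted ahead of time. A minimal sketch, with placeholder resource names:

#!/bin/bash
# Delegate the (empty) alternate subnet to Microsoft.Web/serverFarms
# before joining it. <rg>, <vnet>, and <subnet> are placeholders.
az network vnet subnet update \
  --resource-group <rg> --vnet-name <vnet> --name <subnet> \
  --delegations Microsoft.Web/serverFarms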
With the subnet prepared, join the app to it:

az webapp vnet-integration add \
  --resource-group <APP-RESOURCE-GROUP> \
  --name <APP-NAME> \
  --vnet <VNET-NAME> \
  --subnet <ALTERNATE-SUBNET-NAME>

If your alternate subnet is in a different resource group than your app, run "az webapp vnet-integration add -h" and see the help text to learn how to specify its resource ID.

🔧 Tech Specs

If you're familiar with the multi-plan subnet join feature available in the multi-tenant App Service offering, note that App Service Environments and the alternate subnet join feature are, unfortunately, incompatible with multi-plan subnet join. For App Service Environments, each app from a given plan can only integrate with one alternate subnet. As with regular virtual network integration, however, a given plan can have multiple connections, and apps in the same plan can use any of them. For multi-tenant App Service, this is limited to 2 connections per plan; for App Service Environment v3, you can have up to 4 connections. If you need to remove or change the alternate subnet join for a specific app, you can do so at any time: first remove the existing join, then add a new one following the same process as before.

💡 Why This Feature Matters

App Service Environment v3 has always been about isolation, scalability, and control. With outbound segmentation, we're taking that control to the next level. Whether you're running high-scale web apps, handling sensitive data, or managing complex environments, this feature gives you the tools to secure outbound traffic without compromising performance.

📚 Learn More

To dive deeper into App Service Environment v3 networking capabilities, check out the App Service Environment v3 networking overview. Have questions or feedback? Drop them in the comments below.
How to connect Azure SQL database from Python Function App using managed identity or access token

This blog will demonstrate how to connect to an Azure SQL database from a Python Function App using managed identity or an access token. If you are looking for how to implement this in a Windows App Service, you may refer to this post: https://techcommunity.microsoft.com/t5/apps-on-azure-blog/how-to-connect-azure-sql-database-from-azure-app-service-windows/ba-p/2873397. Note that the Azure Active Directory managed identity authentication method was added in ODBC Driver version 17.3.1.1 for both system-assigned and user-assigned identities. In the Azure blessed image for Python Functions, the ODBC Driver version is 17.8, which makes it possible to leverage this feature in Linux App Service. Briefly, this post provides step-by-step guidance with sample code and an introduction to the authentication workflow.

Steps:

1. Create a Linux Python Function App from the portal.

2. Set up the managed identity in the new Function App by enabling Identity and saving from the portal. An Object (principal) ID is generated for you automatically. (A CLI sketch for this step appears after step 5.)

3. Assign a role in the Azure SQL database. Search for your own account and save it as admin. Note: alternatively, you can search for the function app's name and set it as admin; the function app would then own admin permission on the database, and you can skip steps 4 and 5 as well.

4. Go to the Query editor in the database and be sure to log in using the account set in the previous step rather than a username and password, or step 5 will fail with the exception: "Failed to execute query. Error: Principal 'xxxx' could not be created. Only connections established with Active Directory accounts can create other Active Directory users."

5. Run the queries below to create a user for the function app and alter roles. You can choose to alter only some of these roles per your needs.

CREATE USER "yourfunctionappname" FROM EXTERNAL PROVIDER;
ALTER ROLE db_datareader ADD MEMBER "yourfunctionappname";
ALTER ROLE db_datawriter ADD MEMBER "yourfunctionappname";
ALTER ROLE db_ddladmin ADD MEMBER "yourfunctionappname";
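For reference, step 2 and the local access-token retrieval discussed later can also be done from the Azure CLI. A small sketch, with placeholder names:

#!/bin/bash
# Step 2 equivalent: enable the system-assigned managed identity.
# <rg> and <function-app> are placeholders.
az functionapp identity assign --resource-group <rg> --name <function-app>

# For local testing: fetch an access token for Azure SQL (valid ~1 hour).
az account get-access-token --resource=https://database.windows.net/ \
  --query accessToken -o tsv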
6. Leverage the sample code below to build your own project and deploy it to the function app.

Sample Code:

Below is sample code that uses an Azure access token when run locally and managed identity when run in the Function App. The token part needs to be replaced with your own. Basically, it uses pyodbc.connect(connection_string + ';Authentication=ActiveDirectoryMsi') to authenticate with managed identity. The MSI_SECRET environment variable is used to tell whether the code is running locally or in the function app; it is created automatically when the function app has managed identity enabled. The complete demo project can be found at: https://github.com/kevin808/azure-function-pyodbc-MI

import logging
import os
import struct

import azure.functions as func
import pyodbc


def main(req: func.HttpRequest) -> func.HttpResponse:
    logging.info('Python HTTP trigger function processed a request.')

    server = "your-sqlserver.database.windows.net"
    database = "your_db"
    driver = "{ODBC Driver 17 for SQL Server}"
    query = "SELECT * FROM dbo.users"
    # Optional: use username and password for authentication
    # username = 'name'
    # password = 'pass'
    db_token = ''
    connection_string = 'DRIVER=' + driver + ';SERVER=' + server + ';DATABASE=' + database

    # When MSI is enabled
    if os.getenv("MSI_SECRET"):
        conn = pyodbc.connect(connection_string + ';Authentication=ActiveDirectoryMsi')
    # Used when run from local
    else:
        SQL_COPT_SS_ACCESS_TOKEN = 1256
        exptoken = b''
        for i in bytes(db_token, "UTF-8"):
            exptoken += bytes({i})
            exptoken += bytes(1)
        tokenstruct = struct.pack("=i", len(exptoken)) + exptoken
        conn = pyodbc.connect(connection_string, attrs_before={SQL_COPT_SS_ACCESS_TOKEN: tokenstruct})

    # Uncomment the line below to use username and password for authentication
    # conn = pyodbc.connect('DRIVER=' + driver + ';SERVER=' + server + ';DATABASE=' + database + ';UID=' + username + ';PWD=' + password)

    cursor = conn.cursor()
    cursor.execute(query)
    row = cursor.fetchone()
    while row:
        print(row[0])
        row = cursor.fetchone()

    return func.HttpResponse('Success', status_code=200)

Workflow:

Below are the workflows for these two authentication methods; with them in mind, we can understand what happens under the hood.

Managed Identity: when we enable managed identity for the function app, a service principal is generated automatically for it, and authentication then follows these steps: Function App with managed identity -> sends request to database with service principal -> database checks the corresponding database user and its permissions -> authentication passes.

Access Token: the access token can be generated by executing 'az account get-access-token --resource=https://database.windows.net/ --query accessToken' locally; we then hold this token to authenticate. Please note that the default lifetime of the token is one hour, which means we need to retrieve it again when it expires. az login -> az account get-access-token -> the local function uses the token to authenticate against the SQL database -> the database checks whether the database user exists and whether permissions are granted -> authentication passes.

Thanks for reading. I hope you enjoy it.
Powering Observability: Dynatrace Integration with Linux App Service through Sidecars

In this blog we continue to dive into the world of observability with Azure App Service. If you've been following our recent updates, you'll know that we announced the Public Preview for the Sidecar Pattern for Linux App Service. Building upon this architectural pattern, we're going to demonstrate how you can leverage it to integrate Dynatrace, an Azure Native ISV Services partner, with your .NET custom container application. We'll guide you through the process of harnessing Dynatrace's powerful monitoring capabilities, allowing you to gain invaluable insights into your application's metrics and traces.

Setting up your .NET application

To get started, you'll need to containerize your .NET application. This tutorial walks you through the process step by step. Here is a sample Dockerfile for a .NET 8 application:

# Stage 1: Build the application
FROM mcr.microsoft.com/dotnet/sdk:8.0 AS build
WORKDIR /app

# Copy the project file and restore dependencies
COPY *.csproj ./
RUN dotnet restore

# Copy the remaining source code
COPY . .

# Build the application
RUN dotnet publish -c Release -o out

# Stage 2: Create a runtime image
FROM mcr.microsoft.com/dotnet/aspnet:8.0 AS runtime
WORKDIR /app

# Copy the build output from stage 1
COPY --from=build /app/out ./

# Set the entry point for the application
ENTRYPOINT ["dotnet", "<your app>.dll"]

You're now ready to build the image and push it to your preferred container registry, be it Azure Container Registry, Docker Hub, or a private registry.

Create your Linux Web App

Create a new Linux Web App from the portal and choose the options for Container and Linux. On the Container tab, make sure that Sidecar support is Enabled, then specify the details of your application image. Note: .NET typically uses port 8080, but you can change it in your project.

Setup your Dynatrace account

If you don't have a Dynatrace account, you can create an instance of Dynatrace on the Azure portal by following this Marketplace link. You can choose the Free Trial plan to get a 30-day subscription.

AppSettings for Dynatrace Integration

You need to set the following app settings. You can get more details about the Dynatrace-related settings here.

- DT_TENANT – The environment ID
- DT_TENANTTOKEN – Same as DT_API_TOKEN; this is the PaaS token for your environment
- DT_CONNECTIONPOINT
- DT_HOME – /home/dynatrace
- LD_PRELOAD – /home/dynatrace/oneagent/agent/lib64/liboneagentproc.so
- DT_LOGSTREAM – stdout
- DT_LOGLEVELCON – INFO

We encourage you to store sensitive information like DT_TENANTTOKEN in Azure Key Vault and reference it from app settings; see Use Key Vault references - Azure App Service | Microsoft Learn. (A CLI sketch follows.)
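A minimal sketch of applying these settings with the Azure CLI, assuming a secret named dynatrace-paas-token already exists in a Key Vault that the app's managed identity can read; all names and the DT_CONNECTIONPOINT value are placeholders:

#!/bin/bash
# Apply the Dynatrace settings; <rg>, <app-name>, and <kv-name> are placeholders.
# DT_TENANTTOKEN is pulled from Key Vault via an app-setting reference.
az webapp config appsettings set \
  --resource-group <rg> --name <app-name> \
  --settings \
    DT_TENANT=<environment-id> \
    "DT_TENANTTOKEN=@Microsoft.KeyVault(SecretUri=https://<kv-name>.vault.azure.net/secrets/dynatrace-paas-token/)" \
    DT_CONNECTIONPOINT=<connection-endpoint> \
    DT_HOME=/home/dynatrace \
    LD_PRELOAD=/home/dynatrace/oneagent/agent/lib64/liboneagentproc.so \
    DT_LOGSTREAM=stdout \
    DT_LOGLEVELCON=INFO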
Add the Dynatrace Sidecar

Go to the Deployment Center for your application and add a sidecar container with the following details:

- Image Source: Docker Hub and other registries
- Image type: Public
- Registry server URL: mcr.microsoft.com
- Image and tag: appsvc/docs/sidecars/sample-experiment:dynatrace-dotnet
- Port: <any port other than your main container port>

Once you have added the sidecar, you will need to restart your website to see data start flowing to the Dynatrace backend. Please note that this is an experimental container image for Dynatrace; we will be updating this blog with a new image soon.

Disclaimer: Dynatrace Image Usage. It's important to note that the Dynatrace image used here is sourced directly from Dynatrace and is provided 'as-is.' Microsoft does not own or maintain this image, so its usage is subject to the terms of use outlined by Dynatrace.

Visualizing your Observability data in Dynatrace

You are all set! You can now see your observability data flow to the Dynatrace backend. The Hosts tab gives you metrics about the VM that is hosting the application. Dynatrace also has a Services view, which lets you look at application-specific information like response time, failed requests, and application traces. You can learn more about Dynatrace's observability capabilities by going through the documentation: Observe and explore - Dynatrace Docs.

Next Steps

As you've seen, the Sidecar Pattern for Linux App Service opens a world of possibilities for integrating powerful tools like Dynatrace into your Linux App Service-hosted applications. With Dynatrace being an Azure Native ISV Services partner, this integration marks just the beginning of a journey towards a closer and more simplified experience for Azure users. This is just the start. We're committed to providing even more guidance and resources to help you seamlessly integrate Dynatrace with your code-based Linux web applications and other language stacks. Stay tuned for upcoming updates and tutorials as we continue to empower you to make the most of your Azure environment. In the meantime, don't hesitate to explore further, experiment with different configurations, and leverage the full potential of observability with Dynatrace and Azure App Service.