web apps
Logic App Workflow() function returning unexpected results
I am new to Logic Apps, but fairly well-versed in Power Automate. In Power Automate I have a small child flow that I use as an error handler; I call it from other flows to send me an email with a link to the failed flow run. I re-created this for my Logic Apps, but I have run into a problem.

The error-handler Logic App is triggered by an HTTP request that takes the outputs of the workflow() expression from the calling app. Within the error-handler app I parse through the definition to build a clickable link back to the failed run of the calling app. When I was testing, the workflow() outputs from my ERROR_TESTER app looked like this:

```json
{
  "id": "/subscriptions/<SUBSCRIPTION ID>/resourceGroups/<RESOURCE GROUP NAME>/providers/Microsoft.Logic/workflows/ERROR_TESTER",
  "name": "ERROR_TESTER",
  "type": "Microsoft.Logic/workflows",
  "location": "centralus",
  "run": {
    "id": "/subscriptions/<SUBSCRIPTION ID>/resourceGroups/<RESOURCE GROUP NAME>/providers/Microsoft.Logic/workflows/ERROR_TESTER/runs/<RUN ID>",
    "name": "<RUN ID>",
    "type": "Microsoft.Logic/workflows/runs"
  }
}
```

I just caught an error from an actual app I was running. The link didn't work, and on further inspection I realized that the failed app's workflow() outputs looked like this instead:

```json
{
  "id": "/workflows/<WORKFLOW ID>",
  "name": "<LOGIC APP NAME>",
  "type": "Microsoft.Logic/workflows",
  "location": "centralus",
  "run": {
    "id": "/workflows/<WORKFLOW ID>/runs/<RUN ID>",
    "name": "<RUN ID>",
    "type": "Microsoft.Logic/workflows/runs"
  }
}
```

Despite the other differences, I was using the run.id property to build the URL to the flow run. Now that it seems to be getting truncated, my original error-handler app doesn't work. Can anyone enlighten me on why the outputs are so different, and how I might fix or plan for the differing outputs?
Azure App Service Premium v4 plan is now in public preview

The Azure App Service Premium v4 plan is the latest offering in the Azure App Service family, designed to deliver enhanced performance, scalability, and cost efficiency. We are excited to announce the public preview of this major upgrade to one of our most popular services.

Key benefits:

- Fully managed platform-as-a-service (PaaS) to run your favorite web stack, on both Windows and Linux.
- Built using next-gen Azure hardware for higher performance and reliability.
- Lower total cost of ownership, with new pricing tailored for large-scale app modernization projects.
- And more to come!

[Note: As of September 1st, 2025, Premium v4 is Generally Available on Azure App Service! See the launch blog for more details!]

Fully managed platform-as-a-service (PaaS)

As the next generation of one of the leading PaaS solutions, Premium v4 abstracts infrastructure management, allowing businesses to focus on application development rather than server maintenance. This reduces operational overhead: tasks like patching, load balancing, and auto-scaling are handled automatically by Azure, saving time and IT resources. App Service's auto-scaling optimizes costs by adjusting resources based on demand and spares you the cost and overhead of under- or over-provisioning. Modernizing applications with PaaS delivers a compelling economic impact by helping you eliminate legacy inefficiencies, decrease long-term costs, and increase your competitive agility through seamless cloud integration, CI/CD pipelines, and support for multiple languages.

Higher performance and reliability

Built on the latest Dadsv6/Eadsv6 series virtual machines and NVMe-based temporary storage, the App Service Premium v4 plan offers higher performance than previous Premium generations. According to preliminary measurements during private preview, you may expect to see:

- >25% performance improvement using Pmv4 plans, relative to the prior generation of memory-optimized Pmv3 plans
- >50% performance improvement using Pv4 plans, relative to the prior generation of non-memory-optimized Pv3 plans

Please note that these features and metrics are preliminary and subject to change during public preview.

Premium v4 provides a similar line-up to Premium v3, with four non-memory-optimized options (P0v4 through P3v4) and five memory-optimized options (P1mv4 through P5mv4). Features like deployment slots, integrated monitoring, and enhanced global zone resilience further improve reliability and the user experience, increasing customer satisfaction.

Lower total cost of ownership (TCO)

Driven by the urgency to adopt generative AI and stay competitive, application modernization has rapidly become one of the top priorities in boardrooms everywhere. Whether you are a large enterprise or a small shop running your web apps in the cloud, you will find App Service Premium v4 is designed to offer the most compelling performance-per-dollar of any previous generation, making it an ideal managed solution for running high-demand applications. Using the agentic AI GitHub Copilot app modernization tooling announced in preview at Microsoft Build 2025, you can save up to 24% when you upgrade and modernize your .NET web apps running on Windows Server to Azure App Service for Windows on Premium v4 compared with Premium v3. You will also be able to use deeper commitment-based discounts such as reserved instances and savings plans for Premium v4 when the service is generally available (GA).
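If you want to script plan creation while you evaluate the preview, here is a minimal sketch using the azure-mgmt-web Python SDK. The resource names are placeholders, and the Pv4 SKU strings ("P1v4" / "PremiumV4") are assumptions based on the naming above; verify them against the pricing pages before relying on this.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.web import WebSiteManagementClient

# Placeholders: substitute your own subscription, resource group, and region.
client = WebSiteManagementClient(DefaultAzureCredential(), "<subscription-id>")

poller = client.app_service_plans.begin_create_or_update(
    "my-resource-group",
    "my-premium-v4-plan",
    {
        "location": "westus3",
        "reserved": True,  # True = Linux plan
        "sku": {"name": "P1v4", "tier": "PremiumV4"},  # assumed SKU strings
    },
)
print(poller.result().provisioning_state)
```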
For more detailed pricing on the various CPU and memory options, see the pricing pages for Windows and Linux, as well as the Azure Pricing Calculator.

Get started

The preview will roll out globally over the next few weeks. Premium v4 is currently available in the following regions [updated 08/22/2025]:

- Australia East
- Canada Central
- Central US
- East US
- East US 2
- France Central
- Japan East
- Korea Central
- North Central US
- North Europe
- Norway East
- Poland Central
- Southeast Asia
- Sweden Central
- Switzerland North
- UK South
- West Central US
- West Europe
- West US
- West US 3

App Service is continuing to expand the Premium v4 footprint, with many additional regions planned to come online over the coming weeks and months. Customers can reference the product documentation for details on how to configure Premium v4, as well as a regularly updated list of regional availability. We encourage you to start assessing your apps using the partners and tools for Azure App Modernization, start using Premium v4 to better understand its benefits and capabilities, and build a plan to hit the ground running when the service is generally available. Watch this space for more information on GA!

Key Resources

- Microsoft Build 2025 on-demand session: https://aka.ms/Build25/BRK200
- Azure App Service documentation: https://aka.ms/AppService/Pv4docs
- Azure App Service web page: https://www.azure.com/appservice
- Join the Community Standups: https://aka.ms/video/AppService/community
- Follow us on X: @AzAppService
Build your first AI Agent with Azure App Service

Want to build your first AI agent? Already have apps you want to add agentic capabilities to? Maybe you've heard of Azure App Service, or you're already running your applications on it. Either way, App Service makes building and deploying AI-powered applications incredibly straightforward—and here's how to get started.

Azure App Service is the go-to platform for building your first AI application because it eliminates infrastructure complexity while delivering enterprise-grade capabilities. As a fully managed platform-as-a-service (PaaS), it offers native integration with Azure AI services like Azure OpenAI, built-in DevOps support with GitHub Actions and Codespaces, and multi-language flexibility so you can build with what you know. Security is built in, with network isolation, encryption, and role-based access control, while Azure handles patching, updates, and high availability—letting you focus on innovation, not infrastructure management.

Getting Started with AI on App Service

Whether you're building your first chatbot or a sophisticated multi-agent system, here are some of the key capabilities you can explore:

Chatbots & RAG Applications
- Build chatbots powered by Azure OpenAI
- Create RAG (Retrieval Augmented Generation) apps with Azure AI Search
- Develop intelligent web apps that combine your data with large language models

Agentic Web Applications
- Transform traditional CRUD apps with conversational, agentic capabilities
- Use frameworks like Microsoft Semantic Kernel, LangGraph, or Azure AI Foundry Agent Service
- Build agents that can reason, plan, and take actions on behalf of users

OpenAPI Tool Integration
- Expose your existing web app APIs to AI agents using OpenAPI specifications
- Add your App Service apps as tools in Azure AI Foundry Agent Service
- Let AI agents interact with your existing REST APIs seamlessly

Model Context Protocol (MCP) Servers
- Integrate your web apps as MCP servers to extend AI agent capabilities
- Connect to personal AI agents like GitHub Copilot Chat
- Expose your app's functionality without major rearchitecture

Ready to get hands-on? Check out the App Service AI Integration landing page for complete tutorials, code samples, and step-by-step guides across .NET, Python, Java, and Node.js.

Ready to Go Deeper?

If you're ready for more advanced scenarios and want to dive into production-grade agent development, explore Microsoft Agent Framework—Microsoft's new direction for building robust, scalable AI agents. We've created a comprehensive 3-part blog series that walks through building sophisticated agents on App Service:

- Part 1: Build Long-Running AI Agents – Learn how to build stateful agents that can handle complex, multi-turn conversations and long-running tasks.
- Part 2: Advanced Agent Patterns – Explore advanced patterns for building production-ready agents with proper error handling, observability, and scaling.
- Part 3: Client-Side Multi-Agent Orchestration – Discover how to orchestrate multiple specialized agents working together to solve complex problems.

Start Building Today

Whether you're just getting started with AI agents or looking to add intelligent capabilities to your existing applications, Azure App Service provides the platform, tools, and integrations you need to succeed. Explore the resources above and start building your next intelligent application today!
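To make the chatbot scenario concrete, here is a minimal sketch of calling an Azure OpenAI deployment from Python with the openai package. The endpoint, key, API version, and deployment name are placeholders for your own resource's values.

```python
import os
from openai import AzureOpenAI

# Placeholders: point these at your own Azure OpenAI resource and deployment.
client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-06-01",
)

response = client.chat.completions.create(
    model="my-gpt-4o-deployment",  # your deployment name, not the model family
    messages=[
        {"role": "system", "content": "You are a concise assistant for my web app."},
        {"role": "user", "content": "Suggest three ideas for an agentic feature."},
    ],
)
print(response.choices[0].message.content)
```

From here, frameworks like Semantic Kernel or Microsoft Agent Framework layer planning and tool calling on top of this same chat-completion primitive.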
Part I: OTEL sidecar extension on Azure App Service for Linux - Intro + PHP walkthrough

Sidecar extensions let you attach a companion container to your App Service for Linux app to add capabilities—without changing your core app or container. If you're new to sidecars on App Service, start here: Sidecars in Azure App Service.

OpenTelemetry (OTEL) is the vendor-neutral standard for collecting traces, metrics, and logs, with auto and manual instrumentation across popular languages and backends. See the official docs for concepts and quick starts. (OpenTelemetry)

In this post, we'll use the new sidecar extension—OpenTelemetry - Azure Monitor—and show the end-to-end setup for a PHP code-based app (the same extension also works for other language stacks and for container-based apps).

Walkthrough: add the OpenTelemetry – Azure Monitor sidecar to a PHP (code-based) app

This section shows the exact portal steps plus the code/config you need. The PHP code is in sidecar-samples/otel-sidecar/php/php-blessed-app at main · Azure-Samples/sidecar-samples.

1) Create Application Insights and copy the Connection string

Create (or reuse) an Application Insights resource and copy the Connection string from the Overview blade.

2) Create the PHP Web App (Linux)

Create an Azure App Service (Linux) app and choose any supported PHP version (e.g., PHP 8.4).

3) Set environment variables on the main app

In Environment variables → Application settings, add:

```
OTEL_PHP_AUTOLOAD_ENABLED = true
# (optional)
SCM_DO_BUILD_DURING_DEPLOYMENT = true
```

When you add the sidecar extension, these environment variables are set by default:

```
APPLICATIONINSIGHTS_CONNECTION_STRING = <your-connection-string>
OTEL_EXPORTER = azuremonitor
OTEL_EXPORTER_OTLP_ENDPOINT = http://127.0.0.1:4318
OTEL_SERVICE_NAME = php-blessed-otel   # pick a meaningful name
```

4) Get the app code

```bash
git clone <repo>
cd php-blessed-app
```

5) PHP dependencies (already in composer.json)

The repo already includes the OpenTelemetry libraries and auto-instrumentation plugins:

```json
{
  "require": {
    "open-telemetry/sdk": "^1.7",
    "open-telemetry/exporter-otlp": "^1.3",
    "open-telemetry/opentelemetry-auto-slim": "^1.2",
    "open-telemetry/opentelemetry-auto-psr18": "^1.1",
    "monolog/monolog": "^3.0",
    "open-telemetry/opentelemetry-logger-monolog": "^1.0",
    "...": "..."
  },
  "config": {
    "allow-plugins": {
      "open-telemetry/opentelemetry-auto-slim": true,
      "open-telemetry/opentelemetry-auto-psr18": true
    }
  }
}
```

6) Minimal bootstrap in index.php

```php
use OpenTelemetry\API\Globals;

require __DIR__ . '/vendor/autoload.php';
```

7) Startup script (installs the PECL extension if missing)

startup.sh, included in the repo:

```bash
#!/bin/bash

# Install OpenTelemetry extension if needed
if ! php -m | grep -q opentelemetry; then
    echo "Installing OpenTelemetry extension..."
    pecl install opentelemetry
    echo "extension=opentelemetry.so" > /usr/local/etc/php/conf.d/99-opentelemetry.ini
    echo "OpenTelemetry extension installed successfully"
fi

# Start PHP-FPM
echo "Starting PHP-FPM..."
php-fpm
```

8) Deploy the app

Use your preferred method (GitHub Actions, ZIP deploy, local Git, etc.).

9) Add the sidecar extension on the Web App

Go to Deployment Center → Containers (new) → Add → Sidecar Extension, pick Observability: OpenTelemetry – Azure Monitor, and paste your Connection string.
10) Map the autoload flag into the sidecar

Open the created sidecar container (Edit container) and map the autoload flag from the main app:

- Name: OTEL_PHP_AUTOLOAD_ENABLED
- Value: OTEL_PHP_AUTOLOAD_ENABLED (select it from the drop-down to reference the app setting)

11) Set the Startup command for PHP

In Configuration (preview) → Stack settings, set:

```bash
cp /home/site/wwwroot/default /etc/nginx/sites-enabled/default && nginx -s reload && bash /home/site/wwwroot/startup.sh
```

12) Verify telemetry in Application Insights

After the app restarts, open your Application Insights resource and check Application map, Live metrics, or Search for spans with service.name = php-blessed-otel (or the value you set).

Part I — Conclusion

Sidecar extensions turn observability into an additive step: just a few settings and a lightweight startup script. With OTEL wired up for PHP, you now have portable traces, metrics, and logs you can query and dashboard.

Next: In Part II, we'll connect the same app to Elastic APM using the OpenTelemetry – Elastic APM sidecar, with the few settings changes you need.
Part II: OTEL sidecar extension on Azure App Service for Linux - Elastic APM setup

Picking up from Part I, this post shows how to obtain the Elastic APM Server URL and Secret token, add the OTEL Elastic sidecar, and validate telemetry in Kibana. Most steps are identical to the Azure Monitor walkthrough (create the PHP app, add the OTEL libraries, deploy, map OTEL_PHP_AUTOLOAD_ENABLED, keep the same startup command, and point your app to the sidecar at http://127.0.0.1:4318). The only differences: get your Elastic APM Server URL and Secret token, choose the OpenTelemetry – Elastic APM extension, and set the Elastic-specific app settings.

You can use the sample code (based on your language stack) from here: https://github.com/Azure-Samples/sidecar-samples/tree/main/otel-sidecar

1) Get the Elastic APM Server URL

In Kibana, go to Observability → Data management → Fleet → Agent policies → Elastic Cloud agent policy → Elastic APM. Copy the URL shown in the "Server configuration" section.

2) Get or generate the Secret token

Still on the Elastic APM integration page, scroll to Agent authorization. Use the existing Secret token, or generate one if needed.

3) Add the sidecar extension (Web App → Deployment Center)

Deployment Center → Containers (new) → Add → Sidecar Extension → choose Observability: OpenTelemetry – Elastic APM. Provide the APM Server URL and Secret token you copied above, then Save.

4) View the Elastic-specific app settings (main app)

These are added by default under Environment variables → Application settings:

```
ELASTIC_APM_ENDPOINT = https://<your-elastic-apm-server-url>
ELASTIC_APM_SECRET_TOKEN = <your-secret-token>
OTEL_EXPORTER = elastic
OTEL_EXPORTER_OTLP_ENDPOINT = http://127.0.0.1:4318
OTEL_SERVICE_NAME = <your-app-name>
```

(Keep using :4318 for OTLP/HTTP to the sidecar. Your Elastic URL is the remote APM server the sidecar forwards to.)

Everything else—code, Composer dependencies, autoload flag mapping, and startup command—remains the same as in the Azure Monitor walkthrough.

5) Validate telemetry in Kibana

In Kibana, open Observability → APM → Services. Find your service name (the value of OTEL_SERVICE_NAME) and open the service to view transactions, traces, and dependencies. You can also check logs/fields in Discover.

That's it—your PHP app is instrumented with OTEL, sends signals to the local sidecar, and the sidecar ships them to Elastic APM.

Sample repo (code & containers)

We've published a repo with working samples for PHP, Node.js, Python, and .NET showing both code-based and container-based setups using the OTEL sidecar extensions. Use it as your starting point: sidecar-samples/otel-sidecar at main · Azure-Samples/sidecar-samples
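The pattern is the same in any of those stacks: point an OTLP/HTTP exporter at the sidecar on 127.0.0.1:4318 and let it forward to Elastic. Here is a minimal Python sketch using the standard opentelemetry packages (the service and span names are illustrative):

```python
from opentelemetry import trace
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter

# Send spans to the local sidecar; it forwards them to Elastic APM.
provider = TracerProvider(resource=Resource.create({"service.name": "my-python-app"}))
provider.add_span_processor(
    BatchSpanProcessor(OTLPSpanExporter(endpoint="http://127.0.0.1:4318/v1/traces"))
)
trace.set_tracer_provider(provider)

tracer = trace.get_tracer(__name__)
with tracer.start_as_current_span("demo-span"):
    pass  # your request handling goes here
```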
In Part III, we'll share a language cheat-sheet, a copy/paste app-settings reference, and troubleshooting tips for common issues.

Low-Light Image Enhancer (Python + OpenCV) on Azure App Service

Low-light photos are everywhere: indoor team shots, dim restaurant pics, grainy docs. This post shows a tiny Python app (Flask + OpenCV) that fixes them with explainable image processing—not a heavyweight model. We'll walk through the code that does the real work (CLAHE → gamma → brightness → subtle saturation) and deploy it to Azure App Service for Linux.

What you'll build

A Flask web app that accepts an image upload, runs a fast enhancement pipeline (CLAHE → gamma → brightness → subtle saturation), and returns base64 data URIs for an instant before/after view in the browser—no storage required for the basic demo.

Architecture at a glance

- Browser → /enhance (multipart form): sends an image plus optional tunables (clip_limit, gamma, brightness).
- Flask → Enhancer: converts the upload to a NumPy RGB array and calls LowLightEnhancer.enhance_image(...).
- Response: returns JSON with the original and enhanced images as base64 PNG data URIs for immediate rendering.

Prerequisites

- An Azure subscription
- Azure Developer CLI (azd) installed
- (Optional) Python 3.9+ on your dev box for reading the code or extending it

Deploy with azd

```bash
git clone https://github.com/Azure-Samples/appservice-ai-samples.git
cd appservice-ai-samples/lowlight-enhancer
azd init
azd up
```

When azd up finishes, open the printed URL, upload a low-light photo, and compare the results side by side.

Code walkthrough (the parts that matter)

1) Flask surface area (app.py)

File size guard:

```python
app = Flask(__name__)
app.config['MAX_CONTENT_LENGTH'] = 16 * 1024 * 1024  # 16 MB
```

Two routes:

- GET / — renders the simple UI
- POST /enhance — the JSON API the UI calls via XHR/fetch

Parameter handling with sane defaults:

```python
clip_limit = float(request.form.get('clip_limit', 2.0))
gamma = float(request.form.get('gamma', 1.2))
brightness = float(request.form.get('brightness', 1.1))
```

2) Zero-temp-file processing + data-URI response (app.py)

process_uploaded_image keeps the hot path tight: convert to RGB → enhance → convert both versions to base64 PNG and return them inline.
```python
def process_uploaded_image(file_storage, clip_limit=2.0, gamma=1.2, brightness=1.1):
    # PIL → NumPy (RGB)
    img_pil = Image.open(file_storage)
    if img_pil.mode != 'RGB':
        img_pil = img_pil.convert('RGB')
    img_array = np.array(img_pil)

    # Enhance
    enhanced = LowLightEnhancer().enhance_image(
        img_array, clip_limit=clip_limit, gamma=gamma, brightness_boost=brightness
    )

    # Back to base64 data URIs for instant display
    def pil_to_base64(pil_img):
        buf = io.BytesIO()
        pil_img.save(buf, format='PNG')
        return base64.b64encode(buf.getvalue()).decode('utf-8')

    enhanced_pil = Image.fromarray(enhanced)
    return {
        'original': f'data:image/png;base64,{pil_to_base64(img_pil)}',
        'enhanced': f'data:image/png;base64,{pil_to_base64(enhanced_pil)}'
    }
```

3) The enhancement core (enhancer.py)

LowLightEnhancer implements a classic pipeline that runs great on CPU:

```python
class LowLightEnhancer:
    def __init__(self):
        self.clip_limit = 2.0
        self.tile_grid_size = (8, 8)

    def enhance_image(self, image, clip_limit=2.0, gamma=1.2, brightness_boost=1.1):
        # Normalize to RGB if the input came in as OpenCV BGR
        is_bgr = self._detect_bgr(image)
        rgb = cv2.cvtColor(image, cv2.COLOR_BGR2RGB) if is_bgr else image

        # 1) CLAHE on L-channel (LAB) for local contrast without color blowout
        lab = cv2.cvtColor(rgb, cv2.COLOR_RGB2LAB)
        l, a, b = cv2.split(lab)
        clahe = cv2.createCLAHE(clipLimit=clip_limit, tileGridSize=self.tile_grid_size)
        l = clahe.apply(l)

        # 2) Gamma correction (perceptual brightness curve)
        l = self._apply_gamma_correction(l, gamma)

        # 3) Gentle overall lift
        l = np.clip(l * brightness_boost, 0, 255).astype(np.uint8)

        # Recombine + small saturation nudge for a natural look
        enhanced = cv2.cvtColor(cv2.merge([l, a, b]), cv2.COLOR_LAB2RGB)
        enhanced = self._boost_saturation(enhanced, factor=1.1)

        return cv2.cvtColor(enhanced, cv2.COLOR_RGB2BGR) if is_bgr else enhanced
```

Notes:
- CLAHE on L (not RGB) avoids the "oversaturated neon" artifact common with naive histogram equalization.
- Gamma via LUT (below) is fast and lets you brighten mid-tones without crushing highlights.
- A tiny brightness multiplier brings the image up just a bit after the contrast/curve changes.
- A +10% saturation boost helps counter the desaturation that often follows brightening.

4) Fast gamma with a lookup table (enhancer.py)

```python
def _apply_gamma_correction(self, image, gamma: float) -> np.ndarray:
    inv_gamma = 1.0 / gamma
    table = np.array([((i / 255.0) ** inv_gamma) * 255 for i in range(256)], dtype=np.uint8)
    return cv2.LUT(image, table)
```

Notes:
- With gamma = 1.2, inv_gamma ≈ 0.833, so the curve brightens mid-tones (exponent < 1).
- cv2.LUT applies the 256-entry mapping efficiently across the image.

5) Bounded color pop: subtle saturation boost (enhancer.py)

```python
def _boost_saturation(self, image: np.ndarray, factor: float) -> np.ndarray:
    hsv = cv2.cvtColor(image, cv2.COLOR_RGB2HSV).astype(np.float32)
    hsv[:, :, 1] = np.clip(hsv[:, :, 1] * factor, 0, 255)
    return cv2.cvtColor(hsv.astype(np.uint8), cv2.COLOR_HSV2RGB)
```

Notes:
- Working in HSV keeps hue and brightness stable while gently lifting color.
- The clip to [0, 255] prevents out-of-gamut surprises.

Tuning cheatsheet (which knob to turn, and when)

- Too flat / muddy → raise clip_limit from 2.0 to 3.0–4.0 (more local contrast).
- Still too dark → raise gamma from 1.2 to 1.4–1.6 (brightens mid-tones).
- Harsh or "crunchy" → lower clip_limit, or drop brightness_boost to 1.05–1.1.
- Colors feel washed out → increase the saturation factor a touch (e.g., 1.1 → 1.15).
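To experiment with those knobs from a script instead of the browser, here is a hedged client sketch against the /enhance endpoint. The app URL and the multipart field name are assumptions; match them to your deployment and the upload form:

```python
import requests

url = "https://<your-app>.azurewebsites.net/enhance"  # placeholder hostname

with open("dim-photo.jpg", "rb") as f:
    resp = requests.post(
        url,
        files={"image": f},  # field name assumed; match your upload form
        data={"clip_limit": 3.0, "gamma": 1.4, "brightness": 1.1},
        timeout=60,
    )
resp.raise_for_status()

result = resp.json()
# result["original"] and result["enhanced"] are data:image/png;base64,... URIs
print(result["enhanced"][:64])
```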
What to try next

- Expose more controls in the UI (e.g., tile grid size, saturation factor).
- Persist originals/results to Azure Blob Storage and add shareable links (a sketch follows this list).
- Add a background job for batch processing using the CLI helper.
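For the Blob Storage idea, the upload side could look like the sketch below, assuming the azure-storage-blob package and a pre-created container; the connection string and names are placeholders:

```python
from azure.storage.blob import BlobServiceClient

# Placeholders: use your storage account connection string and container name.
service = BlobServiceClient.from_connection_string("<storage-connection-string>")
container = service.get_container_client("enhanced-images")

with open("photo-enhanced.png", "rb") as data:
    container.upload_blob(name="photo-enhanced.png", data=data, overwrite=True)

# Shareable URL (reachable if the container/blob access level allows it)
print(container.get_blob_client("photo-enhanced.png").url)
```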
Conclusion

The complete sample code and deployment templates are available in the appservice-ai-samples repository. Ready to build your own low-light enhancer app? Clone the repo and run azd up to get started in minutes! For more Azure App Service AI samples and best practices, check out the Azure App Service AI integration documentation.

Introducing AI Playground on Azure App Service for Linux

If you're running a Small Language Model (SLM) as a sidecar with your web app, there's now a faster way to try prompts, measure latency, and copy working code into your app—all without leaving your site. AI Playground is a lightweight, built-in experience available from the Kudu endpoint of every Linux App Service.

What is AI Playground?

AI Playground is a simple UI that talks to the SLM you've attached to your App Service app (for example, Phi or BitNet via the Sidecar extension). It lets you:

- Send system and user prompts and view responses in-line
- See performance metrics like Time to First Token (TTFT), total time, and tokens/sec
- Grab ready-to-use code snippets for popular languages from the right sidebar (when you're ready to integrate)
- Confirm whether a sidecar SLM is configured—and get clear guidance if it isn't

Sidecar SLMs were introduced earlier this year; they let you run models like Phi and BitNet alongside your app. Learn more: https://learn.microsoft.com/en-us/azure/app-service/tutorial-ai-slm-dotnet

Where to find the AI Playground

1. In the Azure portal, go to your App Service (Linux).
2. Open Advanced Tools (Kudu) → Go.
3. In the Kudu left navigation, select AI Playground.

Note: A prerequisite for the playground is an SLM sidecar already set up with your application. Here is a tutorial to set one up: https://learn.microsoft.com/en-us/azure/app-service/tutorial-ai-slm-dotnet

A quick tour

Prompts panel
- Set a System Prompt (e.g., "You speak like a pirate.") to steer behavior.
- Enter a User Prompt, then click Send to SLM.

Performance metrics displayed
- TTFT: how quickly the first token arrives—great for responsiveness checks.
- Total: overall response time.
- Tokens/sec: sustained throughput for the generation.

Code integration examples
- On the right, you'll find minimal snippets for C#, Python, and Node.js that you can paste into your app later (no need to leave Kudu).

Tip: Keep prompts compact for SLMs. If output slows, shorten the prompt or reduce the requested length.

Don't have a sidecar yet?

If AI Playground can't find an SLM, you'll see an inline notice with setup steps. Full tutorial: https://learn.microsoft.com/en-us/azure/app-service/tutorial-ai-slm-dotnet

Troubleshooting

No responses / timeouts
- Confirm the sidecar is Running in Deployment Center → Containers.
- Check the sidecar's port and endpoint.

Slow TTFT or tokens/sec
- Warm up with a couple of short prompts.
- Consider scaling up to a Premium plan.
- Keep prompts and requested outputs short.

Roadmap

This is v1. We're already working on:
- Bring-your-own LLMs (play with different models beyond SLMs)
- Richer evaluation (prompt presets, saved sessions, exportable traces)
- Better observability (per-call logs, quick links to Log Stream)

Conclusion

AI Playground makes building AI features on App Service feel immediate: type, run, measure, and ship. We'll keep smoothing the experience and unlocking more model choices so you can go from idea to integrated AI faster than ever.
What's New for Python on App Service for Linux: pyproject.toml, uv, and more

Python apps on Azure App Service for Linux just got a lot easier to build and ship! We've modernized the build pipeline to support new deployment options—whether you're on classic setup.py, fully on pyproject.toml with Poetry or uv, or somewhere in between. This post walks through the upgrades that reduce friction end-to-end—from local dev to GitHub Actions to the App Service build environment:

- pyproject.toml + uv (and Poetry): modern, reproducible Python builds
- setup.py support
- .bashrc quality-of-life improvements in the App Service container shell
- GitHub Actions samples for common Python flows (setup.py, uv.lock, local venv, and pyproject.toml deployments)

pyproject.toml + uv

uv is an extremely fast Python package and project manager written in Rust—think "pip + virtualenv + pip-tools," but much faster and with first-class project workflows. (Astral Docs)

On App Service for Linux, we've added automatic uv builds: when your repo contains both pyproject.toml and uv.lock, you get reproducible installs with uv's resolver—no extra switches needed.

What's pyproject.toml? It's the standardized configuration file for modern Python projects (PEP 621) where you declare metadata, dependencies, and your build backend. (Python Enhancement Proposals (PEPs))

Quickstart (new to uv?)

```bash
# in your project folder
pip install uv
uv init
```

uv init scaffolds a project and creates a pyproject.toml (and, for application projects, a sample main.py). Try it with uv run. (Astral Docs)

Add dependencies:

```bash
uv add flask
# add more as needed, e.g.:
# uv add requests pillow
```

A uv.lock file is generated to pin your dependency graph; uv then "syncs" from the lock file for consistent installs. (Astral Docs)

A minimal pyproject.toml for a Flask app:

```toml
[project]
name = "uv-pyproject"
version = "0.1.0"
description = "Add your description here"
readme = "README.md"
requires-python = ">=3.14"
dependencies = [
    "flask>=3.1.2",
]
```

If you prefer to keep main.py: App Service's default entry point is app.py, so either rename main.py to app.py or set a startup command:

```bash
uv run uvicorn main:app --host 0.0.0.0 --port 8000
```

Run it locally with uv run app.py. (uv run executes your script inside the project's environment.) (Astral Docs)

Deploy to Azure App Service for Linux using your favorite method (e.g., azd up, GitHub Actions, or VS Code). During the build, you'll see logs like:

```
Detected uv.lock (and no requirements.txt); creating virtual environment with uv...
Installing uv...
Requirement already satisfied: uv in /tmp/oryx/platforms/python/3.14.0/lib/python3.14/site-packages (0.9.7)
Executing: uv venv --link-mode=copy --system-site-packages antenv
Using CPython 3.14.0 interpreter at: /tmp/oryx/platforms/python/3.14.0/bin/python3.14
Creating virtual environment at: antenv
Activate with: source antenv/bin/activate
Activating virtual environment...
Detected uv.lock. Installing dependencies with uv...
Resolved 9 packages in 1ms
Installed 7 packages in 1.82s
 + blinker==1.9.0
 + click==8.3.0
 + flask==3.1.2
 + itsdangerous==2.2.0
 + jinja2==3.1.6
 + markupsafe==3.0.3
 + werkzeug==3.1.3
```

Using pyproject.toml with Poetry

Already on Poetry? Great—Poetry uses pyproject.toml (typically with [tool.poetry] plus a poetry.lock) and complies with PEP 517/PEP 621. If your project is Poetry-managed, App Service's pyproject.toml support applies just the same. For details on fields and build configuration, see Poetry's official docs: the pyproject.toml reference and basic usage. (python-poetry.org)
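For reference, a minimal Poetry-managed pyproject.toml using the PEP 621 [project] table could look like this (illustrative only; see Poetry's docs for the authoritative layout):

```toml
[project]
name = "poetry-flask-app"
version = "0.1.0"
requires-python = ">=3.12"
dependencies = [
    "flask>=3.1",
]

[build-system]
requires = ["poetry-core>=2.0"]
build-backend = "poetry.core.masonry.api"
```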
Want to see a working uv example? Check out the lowlight-enhancer-uv Flask app in our samples repo (deployable with azd up).

Support for setup.py

setup.py is the Python build/config script used by Setuptools to declare your project's metadata and dependencies. Setuptools offers first-class support for setup.py, and it remains a valid way to package and install apps. (Setuptools)

A minimal setup.py for a Flask app:

```python
# setup.py
from setuptools import setup, find_packages

setup(
    name="flask-app",
    version="0.1.0",
    packages=find_packages(exclude=("tests",)),
    python_requires=">=3.14",
    install_requires=[
        "Flask>=3.1.2",
    ],
    include_package_data=True,
)
```

Tip: install_requires and other fields are defined by Setuptools; see the keywords reference for everything you can configure. (Setuptools)

What you'll see during an App Service deployment:

```
Python Version: /tmp/oryx/platforms/python/3.14.0/bin/python3.14
Creating directory for command manifest file if it does not exist
Removing existing manifest file
Python Virtual Environment: antenv
Creating virtual environment...
Executing: /tmp/oryx/platforms/python/3.14.0/bin/python3.14 -m venv antenv --copies
Activating virtual environment...
Running pip install setuptools...
Collecting setuptools
  Downloading setuptools-80.9.0-py3-none-any.whl.metadata (6.6 kB)
Downloading setuptools-80.9.0-py3-none-any.whl (1.2 MB)
   ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 1.2/1.2 MB 6.1 MB/s 0:00:00
Installing collected packages: setuptools
Successfully installed setuptools-80.9.0
...
Running python setup.py install...
[09:05:14+0000] Processing /tmp/8de1d13947ba65f
[09:05:14+0000] Installing build dependencies: started
[09:05:15+0000] Installing build dependencies: finished with status 'done'
[09:05:15+0000] Getting requirements to build wheel: started
[09:05:15+0000] Getting requirements to build wheel: finished with status 'done'
[09:05:15+0000] Preparing metadata (pyproject.toml): started
[09:05:15+0000] Preparing metadata (pyproject.toml): finished with status 'done'
```

Bash shell experience: friendlier .bashrc in SSH

We've started refreshing the SSH banner and shell behavior so it's easier to orient yourself when you land in a Linux App Service container.

What changed:

- Clear header with useful links. We now show both the general docs and a Python quickstart link right up front.
- Runtime at a glance. The header prints the Python version explicitly.
- Instance details for troubleshooting. You'll see the Instance Name and Instance Id in the banner—handy when filing a support ticket or comparing logs across instances.
- No more noisy errors on login. Previously, the shell tried to auto-activate antenv and printed "No such file or directory" if it didn't exist. The new logic checks first and shows a gentle tip instead.

What's next

- More language-specific tips based on the detected stack.
- Shortcuts for common SSH tasks.
- Small UX touches (spacing, color, and prompts) to make SSH sessions feel consistent.
New GitHub Actions samples

We have also published several GitHub Actions sample workflows that make it easy to deploy your Python apps and take advantage of the new features:

- Deployment using pyproject.toml + uv: https://github.com/Azure/actions-workflow-samples/blob/master/AppService/Python-GHA-Samples/Python-PyProject-Uv-Sample.yml
- Deployment using Poetry: actions-workflow-samples/AppService/Python-GHA-Samples/Python-Poetry-Sample.yml at master · Azure/actions-workflow-samples
- Deployment using setup.py: actions-workflow-samples/AppService/Python-GHA-Samples/Python-SetupPy-Sample.yml at master · Azure/actions-workflow-samples
- Deployment of Python apps that are built locally: actions-workflow-samples/AppService/Python-GHA-Samples/Python-Local-Built-Deploy-Sample.yml at master · Azure/actions-workflow-samples

To use these templates:

1. Copy the relevant YAML into the .github/workflows/ folder in your repo.
2. Set up auth: use OIDC with azure/login (or a service principal/publish profile if you must); see the sketch after this list. (Microsoft Learn)
3. Fill in the inputs: app name, resource group, and sidecar details (image or extension parameters, env vars/ports).
4. Commit & run: trigger on push or via Run workflow.
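For step 2, the OIDC login portion of a workflow typically looks like the sketch below. The secret names are placeholders, and the federated credential must already be configured for your app registration:

```yaml
permissions:
  id-token: write   # required for OIDC
  contents: read

steps:
  - uses: actions/checkout@v4
  - name: Log in to Azure (OIDC)
    uses: azure/login@v2
    with:
      client-id: ${{ secrets.AZURE_CLIENT_ID }}
      tenant-id: ${{ secrets.AZURE_TENANT_ID }}
      subscription-id: ${{ secrets.AZURE_SUBSCRIPTION_ID }}
```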
Conclusion

In the coming months, we'll be announcing more improvements to Python on Azure App Service for Linux, focused on faster builds, better performance for AI workloads, and clearer diagnostics. Try the flows that fit your team, and let us know what else would make your Python deployments even easier.

Strapi on App Service: Quick start

In this quick start guide, you will learn how to create and deploy your first Strapi site on Azure App Service for Linux, using Azure Database for MySQL or PostgreSQL, along with the other necessary Azure resources. This guide uses an ARM template to install the resources required to host your Strapi application.

Where is the subscription key?
I am following the exercise "Exercise: Create a backend API" at https://learn.microsoft.com/en-us/training/modules/explore-api-management/8-import-api

In the step "Configure the backend settings," nothing shows or indicates that you should check "Subscription required." However, in the test stage, step 2, it says:

"The Ocp-Apim-Subscription-Key is filled in automatically for the subscription key associated with this API"

which is not true, as you can see in my API snapshot. How come it is generated and bound to the API when we don't see it anywhere? Is this a mistake by the author?