# Part I: OTEL sidecar extension on Azure App Service for Linux - Intro + PHP walkthrough
Sidecar extensions let you attach a companion container to your App Service for Linux app to add capabilities—without changing your core app or container. If you’re new to sidecars on App Service, start here: Sidecars in Azure App Service.

OpenTelemetry (OTEL) is the vendor-neutral standard for collecting traces, metrics, and logs, with auto/manual instrumentation across popular languages and backends. See the official OpenTelemetry docs for concepts and quick starts.

In this post, we’ll use the new sidecar extension—OpenTelemetry - Azure Monitor—and show end-to-end setup for a PHP code-based app (the same extension will also work for other language stacks and container-based apps).

## Walkthrough: add the OpenTelemetry – Azure Monitor sidecar to a PHP (code-based) app

This section shows the exact portal steps plus the code/config you need. The PHP code is in sidecar-samples/otel-sidecar/php/php-blessed-app at main · Azure-Samples/sidecar-samples.

### 1) Create Application Insights and copy the Connection string

Create (or reuse) an Application Insights resource and copy the Connection string from the Overview blade.

### 2) Create the PHP Web App (Linux)

Create an Azure App Service (Linux) app and choose any supported PHP version (e.g., PHP 8.4).

### 3) Set environment variables on the main app

In Environment variables → Application settings, add:

```
OTEL_PHP_AUTOLOAD_ENABLED = true
SCM_DO_BUILD_DURING_DEPLOYMENT = true   # (optional)
```

When you add the sidecar extension, these environment variables are set by default:

```
APPLICATIONINSIGHTS_CONNECTION_STRING = <your-connection-string>
OTEL_EXPORTER = azuremonitor
OTEL_EXPORTER_OTLP_ENDPOINT = http://127.0.0.1:4318
OTEL_SERVICE_NAME = php-blessed-otel   # pick a meaningful name
```

### 4) Get the app code

```
git clone <repo>
cd php-blessed-app
```

### 5) PHP dependencies (already in composer.json)

The repo already includes the OpenTelemetry libraries and auto-instrumentation plugins:

```json
{
  "require": {
    "open-telemetry/sdk": "^1.7",
    "open-telemetry/exporter-otlp": "^1.3",
    "open-telemetry/opentelemetry-auto-slim": "^1.2",
    "open-telemetry/opentelemetry-auto-psr18": "^1.1",
    "monolog/monolog": "^3.0",
    "open-telemetry/opentelemetry-logger-monolog": "^1.0",
    "...": "..."
  },
  "config": {
    "allow-plugins": {
      "open-telemetry/opentelemetry-auto-slim": true,
      "open-telemetry/opentelemetry-auto-psr18": true
    }
  }
}
```

### 6) Minimal bootstrap in index.php

```php
use OpenTelemetry\API\Globals;

require __DIR__ . '/vendor/autoload.php';
```

### 7) Startup script (installs PECL extension if missing)

startup.sh, included in the repo:

```bash
#!/bin/bash

# Install OpenTelemetry extension if needed
if ! php -m | grep -q opentelemetry; then
    echo "Installing OpenTelemetry extension..."
    pecl install opentelemetry
    echo "extension=opentelemetry.so" > /usr/local/etc/php/conf.d/99-opentelemetry.ini
    echo "OpenTelemetry extension installed successfully"
fi

# Start PHP-FPM
echo "Starting PHP-FPM..."
php-fpm
```

### 8) Deploy the app

Use your preferred method (GitHub Actions, ZIP deploy, local Git, etc.).

### 9) Add the sidecar extension on the Web App

Go to Deployment Center → Containers (new) → Add → Sidecar Extension, pick Observability: OpenTelemetry – Azure Monitor, and paste your Connection string.
### 10) Map the autoload flag into the sidecar

Open the created sidecar container (Edit container) and map the autoload flag from the main app:

- Name: OTEL_PHP_AUTOLOAD_ENABLED
- Value: OTEL_PHP_AUTOLOAD_ENABLED (select from the drop-down to reference the app setting)

### 11) Set the Startup command for PHP

In Configuration (preview) → Stack settings, set:

```
cp /home/site/wwwroot/default /etc/nginx/sites-enabled/default && nginx -s reload && bash /home/site/wwwroot/startup.sh
```

### 12) Verify telemetry in Application Insights

After the app restarts, open your Application Insights resource and check Application map, Live metrics, or Search for spans with service.name = php-blessed-otel (or the value you set).
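The portal views above are usually enough, but you can also check from code. Here is a minimal sketch using the azure-monitor-query package (pip install azure-monitor-query azure-identity). It assumes a workspace-based Application Insights resource and an identity with reader access to the backing Log Analytics workspace; the workspace ID is a placeholder.

```python
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

# Placeholders: the Log Analytics workspace ID backing your Application
# Insights resource, and the service name you chose in step 3.
WORKSPACE_ID = "<log-analytics-workspace-id>"
QUERY = 'AppRequests | where AppRoleName == "php-blessed-otel" | take 10'

client = LogsQueryClient(DefaultAzureCredential())
result = client.query_workspace(
    workspace_id=WORKSPACE_ID,
    query=QUERY,
    timespan=timedelta(hours=1),  # look at the last hour of telemetry
)

# Print any matching request rows; an empty result usually means the
# sidecar has not exported yet or the role name does not match.
for table in result.tables:
    for row in table.rows:
        print(row)
```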
## Part I — Conclusion

Sidecar extensions turn observability into an additive step - with just a few settings and a lightweight startup script. With OTEL wired for PHP, you now have portable traces, metrics, and logs you can query and dashboard.

Next: in Part II, we’ll connect the same app to Elastic APM using the OpenTelemetry – Elastic APM sidecar, with the few settings changes you need.

# Part II: OTEL sidecar extension on Azure App Service for Linux - Elastic APM setup

Picking up from Part I, this post shows how to obtain the Elastic APM Server URL and Secret token, add the OTEL Elastic sidecar, and validate telemetry in Kibana.

Most steps are identical to the Azure Monitor walkthrough (create the PHP app, add the OTEL libraries, deploy, map OTEL_PHP_AUTOLOAD_ENABLED, keep the same startup command, and point your app to the sidecar at http://127.0.0.1:4318). The only differences: get your Elastic APM Server URL and Secret token, choose the OpenTelemetry – Elastic APM extension, and set the Elastic-specific app settings. You can use the sample code (based on your language stack) from https://github.com/Azure-Samples/sidecar-samples/tree/main/otel-sidecar

### 1) Get the Elastic APM Server URL

In Kibana go to: Observability → Data management → Fleet → Agent policies → Elastic Cloud agent policy → Elastic APM. Copy the URL shown in the “Server configuration” section.

### 2) Get or generate the Secret token

Still in the Elastic APM integration page, scroll to Agent authorization. Use the existing Secret token or generate one if needed.

### 3) Add the sidecar extension (Web App → Deployment Center)

Deployment Center → Containers (new) → Add → Sidecar Extension → choose Observability: OpenTelemetry – Elastic APM. Provide the APM Server URL and Secret token you copied above, then Save.

### 4) View Elastic-specific app settings (main app)

These are added by default under Environment variables → Application settings:

```
ELASTIC_APM_ENDPOINT = https://<your-elastic-apm-server-url>
ELASTIC_APM_SECRET_TOKEN = <your-secret-token>
OTEL_EXPORTER = elastic
OTEL_EXPORTER_OTLP_ENDPOINT = http://127.0.0.1:4318
OTEL_SERVICE_NAME = <your-app-name>
```

(Keep using :4318 for OTLP/HTTP to the sidecar. Your Elastic URL is the remote APM server the sidecar forwards to.)

Everything else—code, Composer deps, autoload flag mapping, and startup command—remains the same as the Azure Monitor section.

### 5) Validate telemetry in Kibana

In Kibana, open Observability → APM → Services. Find your service name (the value of OTEL_SERVICE_NAME) and open the service to view transactions, traces, and dependencies. You can also check logs/fields in Discover.
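If nothing shows up in Kibana, it helps to first confirm the sidecar is accepting OTLP at all before debugging the Elastic side. Below is a minimal, hedged smoke test you can run from an SSH session in the main app container (standard library only; the payload follows the OTLP/HTTP JSON encoding, and the span it sends is throwaway test data):

```python
import json
import secrets
import time
import urllib.request

# OTLP/HTTP JSON endpoint exposed by the sidecar (per the app settings above)
ENDPOINT = "http://127.0.0.1:4318/v1/traces"

now = time.time_ns()
payload = {
    "resourceSpans": [{
        "resource": {"attributes": [{
            "key": "service.name",
            "value": {"stringValue": "sidecar-smoke-test"},
        }]},
        "scopeSpans": [{"spans": [{
            "traceId": secrets.token_hex(16),  # 16 random bytes -> 32 hex chars
            "spanId": secrets.token_hex(8),    # 8 random bytes -> 16 hex chars
            "name": "smoke-test-span",
            "kind": 1,                         # SPAN_KIND_INTERNAL
            "startTimeUnixNano": str(now),
            "endTimeUnixNano": str(now + 1_000_000),
        }]}],
    }]
}

req = urllib.request.Request(
    ENDPOINT,
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    # HTTP 200 means the sidecar accepted the span for export
    print(resp.status)
```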
That’s it—your PHP app is instrumented with OTEL, sends signals to the local sidecar, and the sidecar ships them to Elastic APM.

## Sample repo (code & containers)

We’ve published a repo with working samples for PHP, Node.js, Python, and .NET showing both code-based and container-based setups using the OTEL sidecar extensions. Use it as your starting point: sidecar-samples/otel-sidecar at main · Azure-Samples/sidecar-samples

In Part III, we’ll share a language cheat-sheet, a copy/paste app-settings reference, and troubleshooting tips for common issues.

# Introducing AI Playground on Azure App Service for Linux

If you’re running a Small Language Model (SLM) as a sidecar with your web app, there’s now a faster way to try prompts, measure latency, and copy working code into your app—all without leaving your site. AI Playground is a lightweight, built-in experience available from the Kudu endpoint for every Linux App Service.

## What is AI Playground?

AI Playground is a simple UI that talks to the SLM you’ve attached to your App Service app (for example, Phi or BitNet via the Sidecar extension). It lets you:

- Send system and user prompts and view responses in-line
- See performance metrics like Time to First Token (TTFT), total time, and tokens/sec
- Grab ready-to-use code snippets for popular languages from the right sidebar (when you’re ready to integrate)
- Confirm whether a sidecar SLM is configured—and get clear guidance if it isn’t

Sidecar SLMs were introduced earlier this year; they let you run models like Phi and BitNet alongside your app. Learn more: https://learn.microsoft.com/en-us/azure/app-service/tutorial-ai-slm-dotnet

## Where to find the AI Playground

1. In the Azure portal, go to your App Service (Linux).
2. Open Advanced Tools (Kudu) → Go.
3. In the Kudu left navigation, select AI Playground.

Note: a prerequisite for the playground is already having an SLM sidecar set up with your application. Here is a tutorial to set it up: https://learn.microsoft.com/en-us/azure/app-service/tutorial-ai-slm-dotnet

## A quick tour

Prompts panel:

- Set a System Prompt (e.g., “You speak like a pirate.”) to steer behavior.
- Enter a User Prompt, then click Send to SLM.

Performance metrics displayed:

- TTFT: how quickly the first token arrives—great for responsiveness checks.
- Total: overall response time.
- Tokens/sec: sustained throughput for the generation.

Code integration examples:

- On the right, you’ll find minimal snippets for C#, Python, and Node.js you can paste into your app later (no need to leave Kudu). A hedged example of the same pattern appears just below.

Tip: keep prompts compact for SLMs. If output slows, shorten the prompt or reduce requested length.
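To give a flavor of what those snippets look like, here is a hedged Python sketch of calling a sidecar SLM over Ollama’s OpenAI-compatible route. The port (11434, Ollama’s default) and the model name are assumptions; match them to the sidecar you actually configured:

```python
import requests

# Assumed Ollama default port; adjust to your sidecar's configured endpoint.
SIDECAR_URL = "http://localhost:11434/v1/chat/completions"

payload = {
    "model": "phi3",  # placeholder: use the model your sidecar serves
    "messages": [
        {"role": "system", "content": "You speak like a pirate."},
        {"role": "user", "content": "Say hello to App Service."},
    ],
}

# SLMs can be slow to produce the first token while warming up,
# so allow a generous timeout.
resp = requests.post(SIDECAR_URL, json=payload, timeout=120)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```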
## Don’t have a sidecar yet?

If AI Playground can’t find an SLM, you’ll see an inline notice with setup steps. Full tutorial: https://learn.microsoft.com/en-us/azure/app-service/tutorial-ai-slm-dotnet

## Troubleshooting

No responses / timeouts:

- Confirm the sidecar is Running in Deployment Center → Containers.
- Check the sidecar’s port and endpoint.

Slow TTFT or tokens/sec:

- Warm up with a couple of short prompts.
- Consider scaling up to a Premium plan.
- Keep prompts and requested outputs short.

## Roadmap

This is v1. We’re already working on:

- Bring-your-own LLMs (play with different models beyond SLMs)
- Richer evaluation (prompt presets, saved sessions, exportable traces)
- Better observability (per-call logs, quick links to Log Stream)

## Conclusion

AI Playground makes building AI features on App Service feel immediate - type, run, measure, and ship. We’ll keep smoothing the experience and unlocking more model choices so you can go from idea to integrated AI faster than ever.

# What’s New for Python on App Service for Linux: pyproject.toml, uv, and more

Python apps on Azure App Service for Linux just got a lot easier to build and ship! We’ve modernized the build pipeline to support new deployment options—whether you’re on classic setup.py, fully on pyproject.toml with Poetry or uv, or somewhere in between. This post walks through four upgrades that reduce friction end-to-end—from local dev to GitHub Actions to the App Service build environment:

1. pyproject.toml + uv (and Poetry): modern, reproducible Python builds
2. setup.py support
3. .bashrc quality-of-life improvements in the App Service container shell
4. GitHub Actions samples for common Python flows (setup.py, uv.lock, local venv, and pyproject.toml deployments)

## pyproject.toml + uv

uv is an extremely fast Python package & project manager written in Rust—think “pip + virtualenv + pip-tools,” but much faster and with first-class project workflows. (Astral Docs)

On App Service for Linux, we’ve added automatic uv builds when your repo contains both pyproject.toml and uv.lock. That means reproducible installs with uv’s resolver—no extra switches needed.

What’s pyproject.toml? It’s the standardized configuration for modern Python projects (PEP 621) where you declare metadata, dependencies, and your build backend. (Python Enhancement Proposals (PEPs))

Quickstart (new to uv?):

```
# in your project folder
pip install uv
uv init
```

uv init scaffolds a project and creates a pyproject.toml (and, for application projects, a sample main.py). Try it with uv run. (Astral Docs)

Add dependencies:

```
uv add flask
# add more as needed, e.g.:
# uv add requests pillow
```

A uv.lock file is generated to pin your dependency graph; uv then “syncs” from the lock for consistent installs. (Astral Docs)

A minimal pyproject.toml for a Flask app:

```toml
[project]
name = "uv-pyproject"
version = "0.1.0"
description = "Add your description here"
readme = "README.md"
requires-python = ">=3.14"
dependencies = [
    "flask>=3.1.2",
]
```

If you prefer to keep main.py: App Service’s default entry is app.py, so either rename main.py to app.py, or set a startup command:

```
uv run uvicorn main:app --host 0.0.0.0 --port 8000
```

Run locally with uv run app.py (uv run executes your script inside the project’s environment). (Astral Docs)

Deploy to Azure App Service for Linux using your favorite method (e.g., azd up, GitHub Actions, or VS Code). During the build, you’ll see logs like:

```
Detected uv.lock (and no requirements.txt); creating virtual environment with uv...
Installing uv...
Requirement already satisfied: uv in /tmp/oryx/platforms/python/3.14.0/lib/python3.14/site-packages (0.9.7)
Executing: uv venv --link-mode=copy --system-site-packages antenv
Using CPython 3.14.0 interpreter at: /tmp/oryx/platforms/python/3.14.0/bin/python3.14
Creating virtual environment at: antenv
Activate with: source antenv/bin/activate
Activating virtual environment...
Detected uv.lock. Installing dependencies with uv...
Resolved 9 packages in 1ms
Installed 7 packages in 1.82s
 + blinker==1.9.0
 + click==8.3.0
 + flask==3.1.2
 + itsdangerous==2.2.0
 + jinja2==3.1.6
 + markupsafe==3.0.3
 + werkzeug==3.1.3
```

## Using pyproject.toml with Poetry

Already on Poetry? Great—Poetry uses pyproject.toml (typically with [tool.poetry] plus a poetry.lock) and complies with PEP-517/PEP-621. If your project is Poetry-managed, App Service’s pyproject.toml support applies just the same. For details on fields and build configuration, see Poetry’s official docs: the pyproject.toml reference and basic usage. (python-poetry.org)

Want to see a working uv example? Check the lowlight-enhancer-uv Flask app in our samples repo (deployable with azd up).
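If you are starting from scratch rather than from the sample, a minimal app.py entry point to pair with the pyproject above might look like this (a sketch, not the sample’s code):

```python
# app.py - minimal Flask entry point; App Service's default entry is app.py
from flask import Flask

app = Flask(__name__)

@app.route("/")
def home():
    return "Hello from uv on App Service for Linux"

if __name__ == "__main__":
    # App Service for Linux routes traffic to port 8000 for Python apps by default
    app.run(host="0.0.0.0", port=8000)
```

Run it locally with uv run app.py, exactly as described above.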
## Support for setup.py

setup.py is the (Python) build/config script used by Setuptools to declare your project’s metadata and dependencies. Setuptools offers first-class support for setup.py, and it remains a valid way to package and install apps. (Setuptools)

Minimal setup.py for a Flask app:

```python
# setup.py
from setuptools import setup, find_packages

setup(
    name="flask-app",
    version="0.1.0",
    packages=find_packages(exclude=("tests",)),
    python_requires=">=3.14",
    install_requires=[
        "Flask>=3.1.2",
    ],
    include_package_data=True,
)
```

Tip: install_requires and other fields are defined by Setuptools; see the keywords reference for what you can configure. (Setuptools)

What you’ll see during an App Service deployment:

```
Python Version: /tmp/oryx/platforms/python/3.14.0/bin/python3.14
Creating directory for command manifest file if it does not exist
Removing existing manifest file
Python Virtual Environment: antenv
Creating virtual environment...
Executing: /tmp/oryx/platforms/python/3.14.0/bin/python3.14 -m venv antenv --copies
Activating virtual environment...
Running pip install setuptools...
Collecting setuptools
  Downloading setuptools-80.9.0-py3-none-any.whl.metadata (6.6 kB)
Downloading setuptools-80.9.0-py3-none-any.whl (1.2 MB)
   ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 1.2/1.2 MB 6.1 MB/s 0:00:00
Installing collected packages: setuptools
Successfully installed setuptools-80.9.0
...
Running python setup.py install...
[09:05:14+0000] Processing /tmp/8de1d13947ba65f
[09:05:14+0000] Installing build dependencies: started
[09:05:15+0000] Installing build dependencies: finished with status 'done'
[09:05:15+0000] Getting requirements to build wheel: started
[09:05:15+0000] Getting requirements to build wheel: finished with status 'done'
[09:05:15+0000] Preparing metadata (pyproject.toml): started
[09:05:15+0000] Preparing metadata (pyproject.toml): finished with status 'done'
```

## Bash shell experience: friendlier .bashrc in SSH

We’ve started refreshing the SSH banner and shell behavior so it’s easier to orient yourself when you land in a Linux App Service container. What changed (see screenshots below):

- Clear header with useful links. We now show both the general docs and a Python quickstart link right up front.
- Runtime at a glance. The header prints the Python version explicitly.
- Instance details for troubleshooting. You’ll see the Instance Name and Instance Id in the banner—handy when filing a support ticket or comparing logs across instances.
- No more noisy errors on login. Previously, the shell tried to auto-activate antenv and printed “No such file or directory” if it didn’t exist. The new logic checks first and shows a gentle tip instead.

What’s next:

- More language-specific tips based on the detected stack.
- Shortcuts for common SSH tasks.
- Small UX touches (spacing, color, and prompts) to make SSH sessions feel consistent.
## New GitHub Actions samples

We have also published a few GitHub Actions sample workflows that make it easy to deploy your Python apps and take advantage of our new features:

- Deployment using pyproject.toml + uv: https://github.com/Azure/actions-workflow-samples/blob/master/AppService/Python-GHA-Samples/Python-PyProject-Uv-Sample.yml
- Deployment using Poetry: actions-workflow-samples/AppService/Python-GHA-Samples/Python-Poetry-Sample.yml at master · Azure/actions-workflow-samples
- Deployment using setup.py: actions-workflow-samples/AppService/Python-GHA-Samples/Python-SetupPy-Sample.yml at master · Azure/actions-workflow-samples
- Deploying Python apps that are built locally: actions-workflow-samples/AppService/Python-GHA-Samples/Python-Local-Built-Deploy-Sample.yml at master · Azure/actions-workflow-samples

To use these templates:

1. Copy the relevant YAML into the .github/workflows/ folder in your repo.
2. Set auth: use OIDC with azure/login (or a service principal/publish profile if you must). (Microsoft Learn)
3. Fill in inputs: app name, resource group, and sidecar details (image or extension parameters, env vars/ports).
4. Commit & run: trigger on push or via Run workflow.

## Conclusion

In the coming months, we’ll be announcing more improvements to Python on Azure App Service for Linux, focused on faster builds, better performance for AI workloads, and clearer diagnostics. Try the flows that fit your team, and let us know what else would make your Python deployments even easier.
# Node.js 24 is now available on Azure App Service for Linux

Node.js 24 LTS is live on Azure App Service for Linux. You can create a new Node 24 app through the Azure portal, automate it with the Azure CLI, or roll it out using your favorite ARM/Bicep templates - faster runtime, tighter tooling, same App Service simplicity.

A quick look at what the new runtime gives you:

Faster, more modern JavaScript: Node.js 24 ships with the V8 13.6 engine and npm 11. You get newer JavaScript capabilities like RegExp.escape, Float16Array for tighter numeric data, improved async context handling, global URLPattern, and better WebAssembly memory support. All of this means cleaner code and better performance without extra polyfills or libraries. This is an even-numbered release line and moved into Long Term Support (LTS) in October 2025, which makes it a safe target for production apps.

Cleaner built-in testing workflows: The built-in node:test runner in Node.js 24 now automatically waits on nested subtests, so you get reliable, predictable test execution without wiring up manual await logic or pulling in a third-party test framework. That means fewer flaky “test didn’t finish” errors in CI.

For full release details, see the official Node.js 24 release notes: https://nodejs.org/blog/release/v24.0.0

Bring your Node.js 24 app to App Service for Linux, scale it, monitor it, and take advantage of the latest runtime improvements.
# Follow-Up to ‘Important Changes to App Service Managed Certificates’: October 2025 Update

This post provides an update to the Tech Community article ‘Important Changes to App Service Managed Certificates: Is Your Certificate Affected?’ and covers the latest changes introduced since July 2025. With the November 2025 update, ASMC now remains supported even if the site is not publicly accessible, provided all other requirements are met. Details on requirements, exceptions, and validation steps are included below.

## Background: Context for the July 2025 Changes

As of July 2025, all ASMC certificate issuance and renewals use HTTP token validation. Previously, public access was required because DigiCert needed to access the endpoint https://<hostname>/.well-known/pki-validation/fileauth.txt to verify the token before issuing the certificate. App Service automatically places this token during certificate creation and renewal. If DigiCert cannot access this endpoint, domain ownership validation fails, and the certificate cannot be issued.

## October 2025 Update

Starting October 2025, App Service allows DigiCert’s requests to the https://<hostname>/.well-known/pki-validation/fileauth.txt endpoint, even if the site blocks public access. When there is a request to create an App Service Managed Certificate (ASMC), App Service places the domain validation token at the validation endpoint. When DigiCert tries to reach the validation endpoint, App Service front ends present the token, and the request terminates at the front-end layer; DigiCert’s request does not reach the workers running the application. This behavior is now the default for ASMC issuance, for both initial certificate creation and renewals. Customers do not need to specifically allow DigiCert’s IP addresses.

## Exceptions and Unsupported Scenarios

This update addresses most scenarios that restrict public access, including App Service Authentication, disabled public access, IP restrictions, private endpoints, and client certificates. However, a public DNS record is still required. For example, sites using a private endpoint with a custom domain on a private DNS cannot validate domain ownership and obtain a certificate.

Even with all validations now relying on HTTP token validation and DigiCert requests being allowed through, certain configurations are still not supported for ASMC:

- Sites configured as “Nested” or “External” endpoints behind Traffic Manager. Only “Azure” endpoints are supported.
- Certificates requested for domains ending in *.trafficmanager.net.

## Testing

Customers can easily test whether their site’s configuration or setup supports ASMC by attempting to create one for their site. If the initial request succeeds, renewals should also work, provided all requirements are met and the site is not listed in an unsupported scenario.
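Because a public DNS record remains a hard requirement, one quick pre-check before requesting a certificate is to confirm your custom domain resolves on public DNS. A minimal sketch follows (the hostname is a placeholder; run it from a machine outside your private network so that private DNS zones don’t mask the result):

```python
import socket

HOSTNAME = "www.contoso.com"  # placeholder: your custom domain

try:
    # Collect the unique addresses the hostname resolves to on this machine's DNS
    addresses = sorted({info[4][0] for info in socket.getaddrinfo(HOSTNAME, 443)})
    print(f"{HOSTNAME} resolves to: {', '.join(addresses)}")
except socket.gaierror as err:
    print(f"{HOSTNAME} does not resolve on this machine's DNS: {err}")
```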
# Expanding the Public Preview of the Azure SRE Agent

We are excited to share that the Azure SRE Agent is now available in public preview for everyone instantly – no sign-up required. A big thank you to all our preview customers who provided feedback and helped shape this release! Watching teams put the SRE Agent to work taught us a ton, and we’ve baked those lessons into a smarter, more resilient, and enterprise-ready experience. You can now find Azure SRE Agent directly in the Azure Portal and get started, or use the link below.

📖 Learn more about SRE Agent.
👉 Create your first SRE Agent (Azure login required)

## What’s New in Azure SRE Agent - October Update

The Azure SRE Agent now delivers secure-by-default governance, deeper diagnostics, and extensible automation—built for scale. It can even resolve incidents autonomously by following your team’s runbooks. With native integrations across Azure Monitor, GitHub, ServiceNow, and PagerDuty, it supports root cause analysis using both source code and historical patterns. And since September 1, billing and reporting are available via Azure Agent Units (AAUs). Please visit the product documentation for the latest updates. Here are a few highlights for this month:

- Prioritizing enterprise governance and security: By default, the Azure SRE Agent operates with least-privilege access and never executes write actions on Azure resources without explicit human approval. Additionally, it uses role-based access control (RBAC) so organizations can assign read-only or approver roles, providing clear oversight and traceability from day one. This allows teams to choose their desired level of autonomy, from read-only insights to approval-gated actions to full automation, without compromising control.
- Covering the breadth and depth of Azure: The Azure SRE Agent helps teams manage and understand their entire Azure footprint. With built-in support for AZ CLI and kubectl, it works across all Azure services. But it doesn’t stop there—diagnostics are enhanced for platforms like PostgreSQL, API Management, Azure Functions, AKS, Azure Container Apps, and Azure App Service. Whether you’re running microservices or managing monoliths, the agent delivers consistent automation and deep insights across your cloud environment.
- Automating incident management: The Azure SRE Agent now plugs directly into Azure Monitor, PagerDuty, and ServiceNow to streamline incident detection and resolution. These integrations let the agent ingest alerts and trigger workflows that match your team’s existing tools—so you can respond faster, with less manual effort.
- Engineered for extensibility: The Azure SRE Agent incident management approach lets teams reuse existing runbooks and customize response plans to fit their unique workflows. Whether you want to keep a human in the loop or empower the agent to autonomously mitigate and resolve issues, the choice is yours. This flexibility gives teams the freedom to evolve, from guided actions to trusted autonomy, without ever giving up control.
- Root cause, meet source code: The Azure SRE Agent now supports code-aware root cause analysis (RCA) by linking diagnostics directly to source context in GitHub and Azure DevOps. This tight integration helps teams trace incidents back to the exact code changes that triggered them, accelerating resolution and boosting confidence in automated responses. By bridging operational signals with engineering workflows, the agent makes RCA faster, clearer, and more actionable.
- Close the loop with DevOps: The Azure SRE Agent now generates incident summary reports directly in GitHub and Azure DevOps—complete with diagnostic context. These reports can be assigned to a GitHub Copilot coding agent, which automatically creates pull requests and merges validated fixes. Every incident becomes an actionable code change, driving permanent resolution instead of temporary mitigation.

## Getting Started

- Start here: Create a new SRE Agent in the Azure portal (Azure login required)
- Blog: Announcing a flexible, predictable billing model for Azure SRE Agent
- Blog: Enterprise-ready and extensible – Update on the Azure SRE Agent preview
- Product documentation
- Product home page

## Community & Support

We’d love to hear from you! Please use our GitHub repo to file issues, request features, or share feedback with the team.
# Using Scikit-learn on Azure Web App

## TOC

- Introduction to Scikit-learn
- System Architecture
  - Architecture
  - Focus of This Tutorial
- Setup Azure Resources
  - Web App
  - Storage
- Running Locally
  - File and Directory Structure
  - Training Models and Training Data
  - Predicting with the Model
- Publishing the Project to Azure
  - Deployment
  - Configuration
- Running on Azure Web App
  - Training the Model
  - Using the Model for Prediction
- Troubleshooting
  - Missing Environment Variables After Deployment
  - Virtual Environment Resource Lock Issues
  - Package Version Dependency Issues
  - Default Binding
  - Missing System Commands in Restricted Environments
- Conclusion
- References

## 1. Introduction to Scikit-learn

Scikit-learn is a popular open-source Python library for machine learning, built on NumPy, SciPy, and matplotlib. It offers an efficient and easy-to-use toolkit for data analysis, data mining, and predictive modeling. Scikit-learn supports a variety of machine learning algorithms, including classification, regression, clustering, and dimensionality reduction (e.g., SVM, Random Forest, K-means). Its preprocessing utilities handle tasks like scaling, encoding, and missing-data imputation. It also provides tools for model evaluation (e.g., accuracy, precision, recall) and pipeline creation, enabling users to chain preprocessing and model training into seamless workflows.

## 2. System Architecture

### Architecture

Development Environment:

- OS: Windows 11, Version 24H2
- Python Version: 3.7.3

Azure Resources:

- App Service Plan: SKU - Premium Plan 0 V3
- App Service: Platform - Linux (Python 3.9, Version 3.9.19)
- Storage Account: SKU - General Purpose V2
- File Share: No backup plan

### Focus of This Tutorial

This tutorial walks you through the following stages:

1. Setting up Azure resources
2. Running the project locally
3. Publishing the project to Azure
4. Running the application on Azure
5. Troubleshooting common issues

Each of the mentioned aspects has numerous corresponding tools and solutions. The relevant information for this session is listed in the table below (the last column shows the option used in this tutorial).

| Aspect | Options | Used here |
| --- | --- | --- |
| Local OS | Windows / Linux / Mac | Windows |
| How to set up Azure resources | Portal (i.e., REST API) / ARM / Bicep / Terraform | Portal |
| How to deploy the project to Azure | VSCode / CLI / Azure DevOps / GitHub Action | VSCode |

## 3. Setup Azure Resources

### Web App

We need to create the following resources or services:

| Resource/Service | Manual creation required | Type |
| --- | --- | --- |
| App Service Plan | No | Resource |
| App Service | Yes | Resource |
| Storage Account | Yes | Resource |
| File Share | Yes | Service |

Go to the Azure Portal and create an App Service. Important configuration:

- OS: Select Linux (default if the Python stack is chosen).
- Stack: Select Python 3.9 to avoid dependency issues.
- SKU: Choose at least a Premium plan to ensure enough memory for your AI workloads.

### Storage

1. Create a Storage Account in the Azure Portal.
2. Create a file share named data-and-model in the Storage Account.
3. Mount the File Share to the App Service, using the name data-and-model for consistency with tutorial paths.

At this point, all Azure resources and services have been successfully created. Let’s take a slight detour and mount the recently created File Share to your Windows development environment. Navigate to the File Share you just created, and refer to the diagram below to copy the required command. Before copying, please ensure that the drive letter remains set to the default “Z”, as the sample code in this tutorial relies on it. Return to your development environment, open a PowerShell terminal (do not run it as Administrator), and input the command copied in the previous step, as shown in the diagram.
After executing the command, the network drive will be successfully mounted. You can open File Explorer to verify, as illustrated in the diagram.

## 4. Running Locally

### File and Directory Structure

Please use VSCode to open a PowerShell terminal and enter the following commands:

```
git clone https://github.com/theringe/azure-appservice-ai.git
cd azure-appservice-ai
.\scikit-learn\tools\add-venv.cmd
```

If you are using a Linux or Mac platform, use the following alternative commands instead:

```
git clone https://github.com/theringe/azure-appservice-ai.git
cd azure-appservice-ai
bash ./scikit-learn/tools/add-venv.sh
```

After completing the execution, you should see the following directory structure:

| File and Path | Purpose |
| --- | --- |
| scikit-learn/tools/add-venv.* | The script executed in the previous step (cmd for Windows, sh for Linux/Mac) to create all Python virtual environments required for this tutorial. |
| .venv/scikit-learn-webjob/ | A virtual environment specifically used for training models. |
| scikit-learn/webjob/requirements.txt | The list of packages (with exact versions) required for the scikit-learn-webjob virtual environment. |
| .venv/scikit-learn/ | A virtual environment specifically used for the Flask application, enabling API endpoint access for querying predictions. |
| scikit-learn/requirements.txt | The list of packages (with exact versions) required for the scikit-learn virtual environment. |
| scikit-learn/ | The main folder for this tutorial. |
| scikit-learn/tools/create-folder.* | A script to create all directories required for this tutorial in the File Share, including train, model, and test. |
| scikit-learn/tools/download-sample-training-set.* | A script to download a sample training set from the UCI Machine Learning Repository, containing heart disease data, into the train directory of the File Share. |
| scikit-learn/webjob/train_heart_disease_model.py | A script for training the model. It loads the training set, applies a machine learning algorithm (logistic regression), and saves the trained model in the model directory of the File Share. |
| scikit-learn/webjob/train_heart_disease_model.sh | A shell script for Azure App Service web jobs. It activates the scikit-learn-webjob virtual environment and starts the train_heart_disease_model.py script. |
| scikit-learn/webjob/train_heart_disease_model.zip | A ZIP file containing the shell script for Azure web jobs. It must be recreated manually whenever train_heart_disease_model.sh is modified. Ensure it does not include any directory structure. |
| scikit-learn/api/app.py | Code for the Flask application, including routes, port configuration, input parsing, model loading, predictions, and output generation. |
| scikit-learn/.deployment | A configuration file for deploying the project to Azure using VSCode. It disables the default Oryx build process in favor of custom scripts. |
| scikit-learn/start.sh | A script executed after deployment (as specified in the Portal’s startup command). It sets up the virtual environment and starts the Flask application to handle web requests. |
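Before running the scripts in the next section, here is a condensed, hedged sketch of what a training script like train_heart_disease_model.py does: load the preprocessed training set from the file share, fit a logistic-regression classifier, and save the model. File paths, the CSV name, and the column names are illustrative placeholders; the script in the sample repo is the source of truth.

```python
# Hedged sketch of a training script like train_heart_disease_model.py.
# Paths and file/column names are illustrative placeholders.
import joblib
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

COLUMNS = [
    "age", "sex", "cp", "trestbps", "chol", "fbs", "restecg",
    "thalach", "exang", "oldpeak", "slope", "ca", "thal", "label",
]

# The mounted file share (drive Z on Windows) holds the preprocessed
# training data: 13 physiological features plus 1 label column.
data = pd.read_csv("Z:/train/heart-disease.csv", header=None, names=COLUMNS)

X, y = data[COLUMNS[:-1]], data["label"]
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)
print(f"Held-out accuracy: {model.score(X_test, y_test):.2f}")

# Persist the model so the Flask API can load it for predictions.
joblib.dump(model, "Z:/model/heart-disease-model.pkl")
```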
### Training Models and Training Data

Return to VSCode and execute the following commands (their purpose has been described earlier):

```
.\.venv\scikit-learn-webjob\Scripts\Activate.ps1
.\scikit-learn\tools\create-folder.cmd
.\scikit-learn\tools\download-sample-training-set.cmd
python .\scikit-learn\webjob\train_heart_disease_model.py
```

If you are using a Linux or Mac platform, use the following alternative commands instead:

```
source .venv/scikit-learn-webjob/bin/activate
bash ./scikit-learn/tools/create-folder.sh
bash ./scikit-learn/tools/download-sample-training-set.sh
python ./scikit-learn/webjob/train_heart_disease_model.py
```

After execution, the File Share will now include the new directories and files. Let’s take a brief detour to examine the structure of the training data downloaded from the public dataset website. The right side of the figure describes the meaning of each column in the dataset, while the left side shows the actual training data (after preprocessing). This is a predictive model that uses an individual’s physiological characteristics to determine the likelihood of having heart disease. Columns 1-13 represent various physiological features and background information of the patients, while Column 14 (originally Column 58) is the label indicating whether the individual has heart disease. The supervised learning process involves using a large dataset containing both features and labels. Machine learning algorithms (such as neural networks, SVMs, or in this case, logistic regression) identify the key features and their ranges that differentiate between labels. The trained model is then saved and can be used in services to predict outcomes in real time by simply providing the necessary features.

### Predicting with the Model

Return to VSCode and execute the following commands. First, deactivate the virtual environment used for training the model, then activate the virtual environment for the Flask application, and finally start the Flask app.

Commands for Windows:

```
deactivate
.\.venv\scikit-learn\Scripts\Activate.ps1
python .\scikit-learn\api\app.py
```

Commands for Linux or Mac:

```
deactivate
source .venv/scikit-learn/bin/activate
python ./scikit-learn/api/app.py
```

When you see a screen similar to the following, it means the server has started successfully. Press Ctrl+C to stop the server if needed.

Before conducting the actual test, let’s construct some sample human feature data:

```
[63, 1, 3, 145, 233, 1, 0, 150, 0, 2.3, 0, 0, 1]
[63, 1, 3, 305, 233, 1, 0, 150, 0, 2.3, 0, 0, 1]
```

Referring to the feature description table from earlier, we can see that the only modified field is Column 4 (“Resting Blood Pressure”), with the second sample having an abnormally high value. (Note: normal resting blood pressure typically ranges from 90–139 mmHg.)

Next, open a PowerShell terminal and use the following curl commands to send requests to the app:

```
curl -X GET http://127.0.0.1:8000/api/detect -H "Content-Type: application/json" -d '{"info": [63, 1, 3, 145, 233, 1, 0, 150, 0, 2.3, 0, 0, 1]}'
curl -X GET http://127.0.0.1:8000/api/detect -H "Content-Type: application/json" -d '{"info": [63, 1, 3, 305, 233, 1, 0, 150, 0, 2.3, 0, 0, 1]}'
```

You should see the prediction results, confirming that the trained model is working as expected.
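If you prefer Python over curl, the same request can be sent with the requests package. This sketch mirrors the sample API’s GET-with-JSON-body convention shown above; the URL is the local development server:

```python
import requests

# One of the sample feature vectors from above (13 physiological features)
FEATURES = [63, 1, 3, 145, 233, 1, 0, 150, 0, 2.3, 0, 0, 1]

# The sample API accepts a GET with a JSON body, matching the curl calls above.
resp = requests.get(
    "http://127.0.0.1:8000/api/detect",
    json={"info": FEATURES},
)
print(resp.status_code, resp.json())
```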
## 5. Publishing the Project to Azure

### Deployment

In the VSCode interface, right-click on the target App Service where you plan to deploy your project. Manually select the local project folder named scikit-learn as the deployment source, as shown in the image below.

### Configuration

After deployment, the App Service will not be functional yet and will still display the default welcome page. This is because the App Service has not been configured to build the virtual environment and start the Flask application. To complete the setup, go to the Azure Portal and navigate to the App Service. The following steps are critical, and their execution order must be correct. To avoid delays, it’s recommended to open two browser tabs beforehand, complete the settings in each, and apply them in sequence. Refer to the following two images for guidance.

You need to do the following:

1. Set the Startup Command, specifying the path to the script you deployed:

```
bash /home/site/wwwroot/start.sh
```

2. Set two App Settings:

- WEBSITES_CONTAINER_START_TIME_LIMIT=600: the value is in seconds, ensuring the Startup Command can continue execution beyond the default timeout of 230 seconds. This tutorial’s Startup Command typically takes around 300 seconds, so setting it to 600 seconds provides a safety margin and accommodates future project expansion (e.g., adding more packages).
- WEBSITES_ENABLE_APP_SERVICE_STORAGE=1: this setting is required to enable the App Service storage feature, which is necessary for using web jobs (e.g., for model training).

Step-by-step process:

1. Before clicking Continue, switch to the next browser tab and set up all the app settings.
2. In the second tab, apply all app settings, then switch back to the first tab.
3. Click Continue in the first tab and wait several seconds for the operation to complete.
4. Once completed, switch to the second tab and click Continue within 5 seconds.
5. Ensure you click Continue promptly within 5 seconds after the previous step to finish all settings.

After completing the configuration, wait about 10 minutes for the settings to take effect. Then navigate to the WebJobs section in the Azure Portal and upload the ZIP file mentioned in the earlier sections. Set its trigger type to Manual. At this point, the entire deployment process is complete. For future code updates, you only need to redeploy from VSCode; there is no need to reconfigure settings in the Azure Portal.

## 6. Running on Azure Web App

### Training the Model

Go to the Azure Portal, locate your App Service, and navigate to the WebJobs section. Click Start to initiate the job and wait for the results. During this process, you may need to manually refresh the page to check the status of the job execution. Refer to the image below for guidance. Once you see the model report in the Logs, it indicates that the model training is complete, and the Flask app is ready for predictions. You can also find the newly trained model in the File Share mounted in your local environment.

### Using the Model for Prediction

Just like in local testing, open a PowerShell terminal and use the following curl commands to send requests to the app:

```
# Note: Replace both instances of scikit-learn-portal-app with the name of your web app.
curl -X GET https://scikit-learn-portal-app.azurewebsites.net/api/detect -H "Content-Type: application/json" -d '{"info": [63, 1, 3, 145, 233, 1, 0, 150, 0, 2.3, 0, 0, 1]}'
curl -X GET https://scikit-learn-portal-app.azurewebsites.net/api/detect -H "Content-Type: application/json" -d '{"info": [63, 1, 3, 305, 233, 1, 0, 150, 0, 2.3, 0, 0, 1]}'
```

As with the local environment, you should see the expected results.
## 7. Troubleshooting

### Missing Environment Variables After Deployment

- Symptom: Even after setting values in App Settings (e.g., WEBSITES_CONTAINER_START_TIME_LIMIT), they do not take effect.
- Cause: App Settings (e.g., WEBSITES_CONTAINER_START_TIME_LIMIT, WEBSITES_ENABLE_APP_SERVICE_STORAGE) are reset after updating the startup command.
- Resolution: Use the Azure CLI or the Azure Portal to reapply the App Settings after deployment. Alternatively, set the startup command first, and then apply the app settings.

### Virtual Environment Resource Lock Issues

- Symptom: The app fails to redeploy, even though no configuration or code changes were made.
- Cause: The virtual environment folder cannot be deleted due to active resource locks from the previous process; files or processes from the previous virtual environment session remain locked.
- Resolution: Deactivate processes before deletion and use unique epoch-based folder names to avoid conflicts. Refer to scikit-learn/start.sh in this tutorial for the implementation.

### Package Version Dependency Issues

- Symptom: Conflicts occur between package versions specified in requirements.txt and the versions required by the Python environment, resulting in errors during installation or at runtime.
- Cause: Azure deployment environments enforce specific versions of Python and pre-installed packages, leading to mismatches when older or newer versions are explicitly defined. Additionally, the read-only file system in Azure App Service prevents modifying global packages like typing-extensions.
- Resolution: Pin compatible dependency versions. For example, follow the instructions for installing scikit-learn from the scikit-learn 1.5.2 documentation. Refer to scikit-learn/requirements.txt in this tutorial.

### Default Binding

- Symptom: Despite setting the WEBSITES_PORT parameter in App Settings to match the port Flask listens on (e.g., Flask’s default 5000), the deployment still fails.
- Cause: The Flask framework’s default settings are not overridden to bind to 0.0.0.0 or the required port.
- Resolution: Explicitly bind Flask to 0.0.0.0:8000 in app.py. To avoid additional issues, it’s recommended to use the Azure Python Linux Web App’s default port (8000), as this minimizes the need for extra configuration.

### Missing System Commands in Restricted Environments

- Symptom: In the WebJobs log, an error is logged stating that the ls command is missing.
- Cause: This typically occurs in minimal environments, such as Azure App Services, containers, or highly restricted shells.
- Resolution: Use predefined paths or variables in the script instead of relying on system commands. Refer to scikit-learn/webjob/train_heart_disease_model.sh in this tutorial for an example of handling such cases.

## 8. Conclusion

Azure App Service, while being a PaaS product with less flexibility compared to a VM, still offers several powerful features that allow us to fully leverage the benefits of AI frameworks. For example, the resource-intensive model training phase can be offloaded to a high-performance local machine, letting the App Service focus solely on loading models and serving predictions. Additionally, if the training dataset is frequently updated, we can configure WebJobs with scheduled triggers to retrain the model periodically, ensuring the prediction service always uses the latest version. These capabilities make Azure App Service well suited for most business scenarios.

## 9. References

- Scikit-learn Documentation
- UCI Machine Learning Repository
- Azure App Service Documentation
# Deployment and Build from Azure Linux based Web App

## TOC

- Introduction
- Deployment Sources
  - From Laptop
  - From CI/CD tools
- Build Source
  - From Oryx Build
  - From Runtime
  - From Deployment Sources
- Walkthrough
  - Laptop + Oryx
  - Laptop + Runtime
  - Laptop
  - CI/CD concept
- Conclusion

## 1. Introduction

Deployment on Azure Linux Web Apps can be done through several different methods. When a deployment issue occurs, the first step is usually to identify which method was used. The core of these methods revolves around the concept of Build: the process of preparing and loading the third-party dependencies required to run an application. For example, a Python app defines its build process as pip install packages, a Node.js app uses npm install modules, and PHP or Java apps rely on libraries.

In this tutorial, I’ll use a simple Python app to demonstrate four different Deployment/Build approaches. Each method has its own use cases and limitations. You can even combine them, for example, using your laptop as the deployment tool while still using Oryx as the build engine. The same concepts apply to other runtimes such as Node.js, PHP, and beyond.

## 2. Deployment Sources

### From Laptop

- Scenarios: Setting up a proof of concept; developing in a local environment
- Advantages: Fast development cycle; minimal configuration required
- Limitations: Difficult for the local test environment to interact with cloud resources; OS differences between local and cloud environments may cause integration issues

### From CI/CD tools

- Scenarios: Projects with established development and deployment workflows; codebases requiring version control and automation
- Advantages: Developers can focus purely on coding; automatic deployment upon branch commits
- Limitations: Build and runtime environments may still differ slightly at the OS level

## 3. Build Source

### From Oryx Build

- Scenarios: Offloading resource-intensive build tasks from your local or CI/CD environment directly to the Azure Web App platform, reducing local computing overhead
- Advantages: Minimal extra configuration; multi-language support
- Limitations: Build performance is limited by the App Service SKU and may face performance bottlenecks; the build environment may differ from the runtime environment, so apps sensitive to minor package versions should take caution

### From Runtime

- Scenarios: When you want the benefits and pricing of a PaaS solution but need control similar to an IaaS setup
- Advantages: Build occurs in the runtime environment itself; allows greater flexibility for low-level system operations
- Limitations: Certain system-level settings (e.g., NTP time sync) remain inaccessible

### From Deployment Sources

- Scenarios: Pre-package all dependencies and deploy them together, eliminating the need for a separate build step
- Advantages: Supports proprietary or closed-source company packages
- Limitations: Incompatibility may arise if the development and runtime environments differ significantly in OS or package support

| Type | Method | Scenario | Advantage | Limitation |
| --- | --- | --- | --- | --- |
| Deployment | From Laptop | POC / Dev | Fast setup | Poor cloud link |
| Deployment | From CI/CD | Auto pipeline | Focus on code | OS mismatch |
| Build | From Oryx | Platform build | Simple, multi-lang | Performance cap |
| Build | From Runtime | High control | Flexible ops | Limited access |
| Build | From Deployment | Pre-built deploy | Use private pkg | Env mismatch |

## 4. Walkthrough

### Laptop + Oryx

Add Environment Variables:

- SCM_DO_BUILD_DURING_DEPLOYMENT=false (Purpose: prevents the deployment environment from packaging during publish; this must also be set in the deployment environment itself.)
- WEBSITE_RUN_FROM_PACKAGE=false (Purpose: tells Azure Web App not to run the app from a prepackaged file.)
- ENABLE_ORYX_BUILD=true (Purpose: allows the Azure Web App platform to handle the build process automatically after a deployment event.)

Add startup command:

```
bash /home/site/wwwroot/run.sh
```

(The run.sh file corresponds to the script in your project code.)

Check sample code:

requirements.txt: defines Python packages (similar to package.json in Node.js).

```
Flask==3.0.3
gunicorn==23.0.0
```

app.py: main Python application code.

```python
from flask import Flask

app = Flask(__name__)

@app.route("/")
def home():
    return "Deploy from Laptop + Oryx"

if __name__ == "__main__":
    import os
    app.run(host="0.0.0.0", port=8000)
```

run.sh: script used to start the application.

```bash
#!/bin/bash
gunicorn --bind=0.0.0.0:8000 app:app
```

.deployment: VS Code deployment configuration file.

```
[config]
SCM_DO_BUILD_DURING_DEPLOYMENT=false
```

Deployment: Once both the deployment and build processes complete successfully, you should see the expected result.

### Laptop + Runtime

Add Environment Variables (screenshots omitted since the process is similar to previous steps):

- SCM_DO_BUILD_DURING_DEPLOYMENT=false (Purpose: prevents the deployment environment from packaging during the publishing process; this must also be added in the deployment environment itself.)
- WEBSITE_RUN_FROM_PACKAGE=false (Purpose: instructs Azure Web App not to run the application from a prepackaged file.)
- ENABLE_ORYX_BUILD=false (Purpose: ensures that Azure Web App does not perform any build after deployment; all build tasks will instead be handled during the startup script execution.)

Add Startup Command (screenshots omitted since the process is similar to previous steps):

```
bash /home/site/wwwroot/run.sh
```

(The run.sh file corresponds to the script of the same name in your project code.)

Check Sample Code (screenshots omitted since the process is similar to previous steps):

requirements.txt: defines Python packages (similar to package.json in Node.js).

```
Flask==3.0.3
gunicorn==23.0.0
```

app.py: the main Python application code.

```python
from flask import Flask

app = Flask(__name__)

@app.route("/")
def home():
    return "Deploy from Laptop + Runtime"

if __name__ == "__main__":
    import os
    app.run(host="0.0.0.0", port=8000)
```

run.sh: startup script. In addition to launching the app, it also creates a virtual environment and installs dependencies; all build-related tasks happen here.

```bash
#!/bin/bash
python -m venv venv
source venv/bin/activate
pip install -r requirements.txt
gunicorn --bind=0.0.0.0:8000 app:app
```

.deployment: VS Code deployment configuration file.

```
[config]
SCM_DO_BUILD_DURING_DEPLOYMENT=false
```

Deployment (screenshots omitted since the process is similar to previous steps): Once both deployment and build are completed, you should see the expected output.

### Laptop

Add Environment Variables (screenshots omitted as the process is similar to previous steps):

- SCM_DO_BUILD_DURING_DEPLOYMENT=false (Purpose: prevents the deployment environment from packaging during publish; this must also be set in the deployment environment itself.)
- WEBSITE_RUN_FROM_PACKAGE=false (Purpose: instructs Azure Web App not to run the app from a prepackaged file.)
- ENABLE_ORYX_BUILD=false (Purpose: prevents Azure Web App from building after deployment; all build tasks will instead execute during the startup script.)

Add Startup Command (screenshots omitted as the process is similar to previous steps):

```
bash /home/site/wwwroot/run.sh
```

(The run.sh corresponds to the same-named file in your project code.)
Check Sample Code (screenshots omitted as the process is similar to previous steps):

requirements.txt: defines Python packages (like package.json in Node.js).

```
Flask==3.0.3
gunicorn==23.0.0
```

app.py: the main Python application.

```python
from flask import Flask

app = Flask(__name__)

@app.route("/")
def home():
    return "Deploy from Laptop"

if __name__ == "__main__":
    import os
    app.run(host="0.0.0.0", port=8000)
```

run.sh: the startup script. In addition to launching the app, it activates an existing virtual environment. The creation of that environment and the installation of dependencies occur in the next step.

```bash
#!/bin/bash
source venv/bin/activate
gunicorn --bind=0.0.0.0:8000 app:app
```

.deployment: VS Code deployment configuration file.

```
[config]
SCM_DO_BUILD_DURING_DEPLOYMENT=false
```

Deployment: Before deployment, you must perform the build locally. Run the following commands (they vary by language; here they create the virtual environment and install dependencies):

```
python -m venv venv
source venv/bin/activate
pip install -r requirements.txt
```

After completing the local build, deploy your app. Once deployment finishes, you should see the expected result.

### CI/CD concept

For example, when using Azure DevOps (ADO) as your CI/CD tool, its behavior conceptually mirrors deploying directly from a laptop, but with enhanced automation, governance, and reproducibility. Essentially, ADO pipelines translate your manual local deployment steps into codified, repeatable workflows defined in a YAML pipeline file, executed by Microsoft-hosted or self-hosted agents. A typical azure-pipelines.yml defines the stages (e.g., build, deploy) and their corresponding jobs and steps. Each stage runs on a specified VM image (e.g., ubuntu-latest) and executes commands: the same npm install or pip install you would normally run on your laptop. The ADO pipeline acts as your automated laptop; every build command, environment variable, and deployment step you’d normally execute locally is just formalized in YAML. Whether you build inline, use Oryx, or deploy pre-built artifacts, the underlying concept remains identical: compile, package, and deliver code to Azure. The distinction lies in who performs it.

## 5. Conclusion

Different deployment and build methods lead to different debugging and troubleshooting approaches. Therefore, understanding the selected deployment method and its corresponding troubleshooting process is an essential skill for every developer and DevOps engineer.
# Building Agents on Azure Container Apps with Goose AI Agent, Ollama and gpt-oss

Azure Container Apps (ACA) is redefining how developers build and deploy intelligent agents. With serverless scale, GPU-on-demand, and enterprise-grade isolation, ACA provides the ideal foundation for hosting AI agents securely and cost-effectively. Last month we highlighted how you can deploy n8n on Azure Container Apps to go from click-to-build to a running AI-based automation platform in minutes, with no complex setup or infrastructure management overhead. In this post, we’re extending that same simplicity to AI agents and showing why Azure Container Apps is the best platform for running open-source agentic frameworks like Goose. Whether you’re experimenting with open-source models or building enterprise-grade automation, ACA gives you the flexibility and security you need.

## Challenges when building and hosting AI agents

Building and running AI agents in production presents its own set of challenges. These systems often need access to proprietary data and internal APIs, making security and data governance critical, especially when agents interact dynamically with multiple tools and models. At the same time, developers need the flexibility to experiment with different frameworks without introducing operational overhead or losing isolation. Simplicity and performance are also key: managing scale, networking, and infrastructure can slow down iteration, while separating the agent’s reasoning layer from its inference backend can introduce latency and added complexity from managing multiple services. In short, AI agent development requires security, simplicity, and flexibility to ensure reliability and speed at scale.

## Why ACA and serverless GPUs for hosting AI agents

Azure Container Apps provides a secure, flexible, and developer-friendly platform for hosting AI agents and inference workloads side by side within the same ACA environment. This unified setup gives you centralized control over network policies, RBAC, observability, and more, while ensuring that both your agentic logic and model inference run securely within one managed boundary. ACA also provides the following key benefits:

- Security and data governance: Your agent runs in your private, fully isolated environment, with complete control over identity, networking, and compliance. Your data never leaves the boundaries of your container.
- Serverless economics: Scale automatically to zero when idle, and pay only for what you use — no overprovisioning, no wasted resources.
- Developer simplicity: One-command deployment, integrated with Azure identity and networking. No extra keys, infrastructure management, or manual setup are required.
- Inferencing flexibility with serverless GPUs: Bring any open-source, community, or custom model, and run your inferencing apps on serverless GPUs alongside your agentic applications within the same environment. For example, running gpt-oss models via Ollama inside ACA containers avoids costly hosted inference APIs and keeps sensitive data private.

These capabilities let teams focus on innovation, not infrastructure, making ACA a natural choice for building intelligent agents.

## Deploy the Goose AI Agent to ACA

The Goose AI Agent, developed by Block, is an open-source, general-purpose agent framework designed for quick deployment and easy customization. Out of the box, it supports many features like email integration, GitHub interactions, and local CLI and system tool access.
It’s great for building ready-to-run AI assistants: it connects to other systems out of the box, ships with sensible defaults, and has a modular design that keeps customization simple. By deploying Goose on ACA, you gain all the benefits of serverless scale, secure isolation, and GPU-on-demand, while maintaining the ability to customize and iterate quickly.

Get started: Deploy Goose on Azure Container Apps using this open-source starter template. In just a few minutes, you’ll have a private, self-contained AI agent running securely on Azure Container Apps, ready to handle real-world workloads without compromise.

*Goose running on Azure Container Apps: adding some content to a README, submitting a PR, and sending a summary email to the team.*

## Additional benefits of running Goose on ACA

Running the Goose AI Agent on Azure Container Apps (ACA) showcases how simple and powerful hosting AI agents can be.

- Always available: Goose can run continuously—handling long-lived or asynchronous workloads for hours or days—without tying up your local machine.
- Cost efficiency: ACA’s pay-per-use, serverless GPU model eliminates high per-call inference costs, making it ideal for sustained or compute-intensive workloads.
- Seamless developer experience: The Goose-on-ACA starter template sets up everything for you—model server, web UI, and CLI endpoints—with no manual configuration required.

With ACA, you can go from concept to a fully running agent in minutes, without compromising on security, scalability, or cost efficiency.

## Part of a growing ecosystem of agentic frameworks on ACA

ACA is quickly becoming the go-to platform for containerized AI and agentic workloads. From n8n and Goose to other emerging open-source and commercial agent frameworks, developers can use ACA to experiment, scale, and secure their agents - all while taking advantage of serverless scale, GPU-on-demand, and complete network isolation. It’s the same developer-first workflow that powers modern applications, now extended to intelligent agents. Whether you’re building a single agent or an entire automation ecosystem, ACA provides the flexibility and reliability you need to innovate faster.