# Bulletproof agents with the durable task extension for Microsoft Agent Framework
Today, we're thrilled to announce the public preview of the durable task extension for Microsoft Agent Framework. This extension transforms how you build production-ready, resilient, and scalable AI agents by bringing the proven durable execution (survives crashes and restarts) and distributed execution (runs across multiple instances) capabilities of Azure Durable Functions directly into the Microsoft Agent Framework. Now you can deploy stateful, resilient AI agents to Azure that automatically handle session management, failure recovery, and scaling, freeing you to focus entirely on your agent logic.

Whether you're building customer service agents that maintain context across multi-day conversations, content pipelines with human-in-the-loop approval workflows, or fully automated multi-agent systems coordinating specialized AI models, the durable task extension gives you production-grade reliability, scalability, and coordination with serverless simplicity.

Key features of the durable task extension include:

- **Serverless Hosting**: Deploy agents on Azure Functions with auto-scaling from thousands of instances down to zero, while retaining full control in a serverless architecture.
- **Automatic Session Management**: Agents maintain persistent sessions with full conversation context that survives process crashes, restarts, and distributed execution across instances.
- **Deterministic Multi-Agent Orchestrations**: Coordinate specialized durable agents with predictable, repeatable, code-driven execution patterns.
- **Human-in-the-Loop with Serverless Cost Savings**: Pause for human input without consuming compute resources or incurring costs.
- **Built-in Observability with Durable Task Scheduler**: Get deep visibility into agent operations and orchestrations through the Durable Task Scheduler UI dashboard.

Click here to create and run a durable agent.

```python
# Python
endpoint = os.getenv("AZURE_OPENAI_ENDPOINT")
deployment_name = os.getenv("AZURE_OPENAI_DEPLOYMENT_NAME", "gpt-4o-mini")

# Create an AI agent following the standard Microsoft Agent Framework pattern
agent = AzureOpenAIChatClient(
    endpoint=endpoint,
    deployment_name=deployment_name,
    credential=AzureCliCredential()
).create_agent(
    instructions="""You are a professional content writer who creates engaging,
    well-structured documents for any given topic. When given a topic, you will:
    1. Research the topic using the web search tool
    2. Generate an outline for the document
    3. Write a compelling document with proper formatting
    4. Include relevant examples and citations""",
    name="DocumentPublisher",
    tools=[
        AIFunctionFactory.Create(search_web),
        AIFunctionFactory.Create(generate_outline)
    ]
)

# Configure the function app to host the agent with durable session management
app = AgentFunctionApp(agents=[agent])
app.run()
```

```csharp
// C#
var endpoint = Environment.GetEnvironmentVariable("AZURE_OPENAI_ENDPOINT");
var deploymentName = Environment.GetEnvironmentVariable("AZURE_OPENAI_DEPLOYMENT") ?? "gpt-4o-mini";

// Create an AI agent following the standard Microsoft Agent Framework pattern
AIAgent agent = new AzureOpenAIClient(new Uri(endpoint), new DefaultAzureCredential())
    .GetChatClient(deploymentName)
    .CreateAIAgent(
        instructions: """
            You are a professional content writer who creates engaging,
            well-structured documents for any given topic. When given a topic, you will:
            1. Research the topic using the web search tool
            2. Generate an outline for the document
            3. Write a compelling document with proper formatting
            4. Include relevant examples and citations
            """,
        name: "DocumentPublisher",
        tools: [
            AIFunctionFactory.Create(SearchWeb),
            AIFunctionFactory.Create(GenerateOutline)
        ]);

// Configure the function app to host the agent with durable thread management
// This automatically creates HTTP endpoints and manages state persistence
using IHost app = FunctionsApplication
    .CreateBuilder(args)
    .ConfigureFunctionsWebApplication()
    .ConfigureDurableAgents(options =>
        options.AddAIAgent(agent)
    )
    .Build();

app.Run();
```

## Why the durable task extension?

As AI agents evolve from simple chatbots to sophisticated systems handling complex, long-running tasks, new challenges emerge:

- Conversations span days or weeks, requiring persistent state across process restarts, crashes, and disruptions.
- Tool calls might take longer than typical timeouts allow, needing automatic checkpointing and recovery.
- High-volume workloads require elastic scaling across distributed instances to handle thousands of concurrent agent conversations.
- Multiple specialized agents need coordination with predictable, repeatable execution for reliable business processes.
- Agents sometimes must wait for human approval before proceeding, ideally without consuming resources.

The durable task extension addresses these challenges by extending Microsoft Agent Framework with capabilities from Azure Durable Functions, enabling you to build AI agents that survive failures, scale elastically, and execute predictably through durable and distributed execution. The extension is built on four foundational value pillars, which we refer to as the 4 D's:

**Durability.** Every agent state change (messages, tool calls, decisions) is automatically and durably checkpointed. Agents survive and automatically resume from infrastructure updates and crashes, and can be unloaded from memory during long waiting periods without losing context. This is essential for agents that orchestrate long-running operations or wait for external events.

**Distributed.** Agent execution is accessible across all instances, enabling elastic scaling and automatic failover. Healthy nodes seamlessly take over work from failed instances, ensuring continuous operation. This distributed execution model allows thousands of stateful agents to scale out and run in parallel.

**Deterministic.** Agent orchestrations execute predictably using imperative logic written as ordinary code. You define the execution path, enabling automated testing, verifiable guardrails, and business-critical workflows that stakeholders can trust. This complements agent-directed workflows by providing explicit control flow when needed.

**Debuggability.** Use familiar development tools (IDEs, debuggers, breakpoints, stack traces, and unit tests) and programming languages to develop and debug. Your agents and agent orchestrations are expressed as code, making them easily testable, debuggable, and maintainable.

## Features in action

### Serverless hosting

Deploy agents to Azure Functions (with expansion to other Azure compute options coming soon) with automatic scaling to thousands of instances, or down to zero when not in use. Pay only for the compute resources you consume. This code-first deployment approach gives you full control over the compute environment while maintaining the benefits of a serverless architecture.
```python
# Python
endpoint = os.getenv("AZURE_OPENAI_ENDPOINT")
deployment_name = os.getenv("AZURE_OPENAI_DEPLOYMENT_NAME", "gpt-4o-mini")

# Create an AI agent following the standard Microsoft Agent Framework pattern
agent = AzureOpenAIChatClient(
    endpoint=endpoint,
    deployment_name=deployment_name,
    credential=AzureCliCredential()
).create_agent(
    instructions="""You are a professional content writer who creates engaging,
    well-structured documents for any given topic. When given a topic, you will:
    1. Research the topic using the web search tool
    2. Generate an outline for the document
    3. Write a compelling document with proper formatting
    4. Include relevant examples and citations""",
    name="DocumentPublisher",
    tools=[
        AIFunctionFactory.Create(search_web),
        AIFunctionFactory.Create(generate_outline)
    ]
)

# Configure the function app to host the agent with durable session management
app = AgentFunctionApp(agents=[agent])
app.run()
```

### Automatic session management

Agent sessions are automatically checkpointed in durable storage that you configure in your function app, enabling durable and distributed execution across multiple instances. Any instance can resume an agent's execution after interruptions or process failures, ensuring continuous operation.

Under the hood, agents are implemented as durable entities: stateful objects that maintain their state across executions. This architecture enables each agent session to function as a reliable, long-lived entity with preserved conversation history and context.

Example scenario: a customer service agent handling a complex support case over days or weeks. The conversation history, context, and progress are preserved even if the agent is redeployed or moves to a different instance.

```bash
# First interaction - start a new thread to create a document
curl -X POST https://your-function-app.azurewebsites.net/api/agents/DocumentPublisher/threads \
  -H "Content-Type: application/json" \
  -d '{"message": "Create a document about the benefits of Azure Functions"}'

# Response includes thread ID and initial document outline/draft
# {"threadId": "doc789", "response": "I'll create a comprehensive document about Azure Functions benefits. Let me search for the latest information... [Document Draft] # Benefits of Azure Functions\n\n## Introduction\nAzure Functions is a serverless compute service that enables you to run event-driven code without managing infrastructure...\n\n## Cost Efficiency\n- Pay only for execution time\n- No charges for idle resources\n- Automatic scaling reduces over-provisioning...\n\n## Developer Productivity\n- Multiple language support (C#, Python, JavaScript, Java)\n- Integrated development tools and CI/CD...\n\n## Scalability\n- Automatic scaling based on demand\n- Handle millions of requests seamlessly...\n\nReferences: [Azure documentation, serverless computing studies]"}

# Second interaction - continue the same thread to refine the document
curl -X POST https://your-function-app.azurewebsites.net/api/agents/DocumentPublisher/threads/doc789 \
  -H "Content-Type: application/json" \
  -d '{"message": "Can you add a section about integration with other Azure services?"}'

# Agent remembers the Azure Functions document context and adds the requested section
# {"threadId": "doc789", "response": "I've added a comprehensive integration section to your Azure Functions document:\n\n## Integration with Azure Services\n\n### Azure Storage\nSeamless triggers and bindings for Blob Storage, Queue Storage, and Table Storage enable event-driven architectures...\n\n### Azure Event Grid & Event Hubs\nProcess real-time event streams and implement publish-subscribe patterns at scale...\n\n### Azure Cosmos DB\nBuilt-in bindings for document database operations with automatic change feed processing...\n\n### Azure Service Bus\nReliable message processing with enterprise messaging features...\n\n### Azure AI Services\nEasily integrate OpenAI, Cognitive Services, and AI Search for intelligent applications...\n\nThe document now includes this section after the Scalability section. Would you like me to add use cases or deployment best practices?"}
```

### Deterministic multi-agent orchestrations

Coordinate multiple specialized durable agents using imperative code where you define the control flow. This differs from agent-directed workflows, where the agent decides the next steps. Deterministic orchestrations provide predictable, repeatable execution patterns with automatic checkpointing and recovery.

Example scenario: an email processing system that runs a spam detection agent, then conditionally routes to different specialized agents based on the classification. The orchestration automatically recovers if any step fails, and completed agent calls are not re-executed.

```python
# Python
@app.orchestration_trigger(context_name="context")
def document_publishing_orchestration(context: DurableOrchestrationContext):
    """Deterministic orchestration coordinating multiple specialized agents."""
    doc_request = context.get_input()

    # Get specialized agents from the orchestration context
    research_agent = context.get_agent("ResearchAgent")
    writer_agent = context.get_agent("DocumentPublisherAgent")

    # Step 1: Research the topic using web search
    research_result = yield research_agent.run(
        messages=f"Research the following topic and gather key information: {doc_request.topic}",
        response_schema=ResearchResult
    )

    # Step 2: Generate outline based on research findings
    outline = yield context.call_activity("generate_outline", {
        "topic": doc_request.topic,
        "research_data": research_result.findings
    })

    # Step 3: Write the document with the research and outline
    document = yield writer_agent.run(
        messages=f"""Create a comprehensive document about {doc_request.topic}.
        Research findings: {research_result.findings}
        Outline: {outline}
        Write a well-structured, engaging document with proper formatting and citations.""",
        response_schema=DocumentResponse
    )

    # Step 4: Save and publish the generated document
    return (yield context.call_activity("publish_document", {
        "title": doc_request.topic,
        "content": document.text,
        "citations": document.citations
    }))
```

### Human-in-the-loop

Orchestrations and agents can pause for human input, approval, or review without consuming compute resources. Durable execution enables orchestrations to wait for days or even weeks for human responses, even if the app crashes or restarts. When combined with serverless hosting, all compute resources are spun down during the wait period, eliminating compute costs until the human provides their input.

Example scenario: a content publishing agent that generates drafts, sends them to human reviewers, and waits days for approval without running (or paying for) compute resources during the review period. When the human response arrives, the orchestration automatically resumes with full conversation context and execution state intact.

```python
# Python
@app.orchestration_trigger(context_name="context")
def content_approval_workflow(context: DurableOrchestrationContext):
    """Human-in-the-loop workflow with zero-cost waiting."""
    topic = context.get_input()

    # Step 1: Generate content using an agent
    content_agent = context.get_agent("ContentGenerationAgent")
    draft_content = yield content_agent.run(f"Write an article about {topic}")

    # Step 2: Send for human review
    yield context.call_activity("notify_reviewer", draft_content)

    # Step 3: Wait for approval - no compute resources consumed while waiting
    approval_event = context.wait_for_external_event("ApprovalDecision")
    timeout_task = context.create_timer(context.current_utc_datetime + timedelta(hours=24))

    winner = yield context.task_any([approval_event, timeout_task])

    if winner == approval_event:
        timeout_task.cancel()
        approved = approval_event.result
        if approved:
            result = yield context.call_activity("publish_content", draft_content)
            return result
        else:
            return "Content rejected"
    else:
        # Timeout - escalate for review
        result = yield context.call_activity("escalate_for_review", draft_content)
        return result
```

### Built-in agent observability

Configure your function app with the Durable Task Scheduler as the durable backend (the store that persists agent and orchestration state). The Durable Task Scheduler is the recommended durable backend for your durable agents, offering the best throughput performance, fully managed infrastructure, and built-in observability through a UI dashboard.

The Durable Task Scheduler dashboard provides deep visibility into your agent operations:

- **Conversation history**: View complete conversation threads for each agent session, including all messages, tool calls, and conversation context at any point in time.
- **Multi-agent visualization**: See the execution flow when calling multiple specialized agents, with visual representation of agent handoffs, parallel executions, and conditional branching.
- **Performance metrics**: Monitor agent response times, token usage, and orchestration duration.
- **Execution history**: Access detailed execution logs with full replay capability for debugging.
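If you want to try the Durable Task Scheduler as the backend, the host.json wiring is small. The sketch below is an assumption based on public Durable Task Scheduler samples (the `azureManaged` provider type and a connection-string app setting); verify the exact schema against the current documentation.

```jsonc
// host.json - sketch based on public Durable Task Scheduler samples; verify against current docs
{
  "version": "2.0",
  "extensions": {
    "durableTask": {
      "hubName": "agents",
      "storageProvider": {
        "type": "azureManaged",
        "connectionStringName": "DURABLE_TASK_SCHEDULER_CONNECTION_STRING"
      }
    }
  }
}
```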
### Demo Video

### Language support

The durable task extension supports:

- C# (.NET 8.0+) with Azure Functions
- Python (3.10+) with Azure Functions

Support for additional compute options is coming soon.

## Get started today

- Click here to create and run a durable agent
- Learn more: Overview documentation
- C# Samples
- Python Samples

# Low-Light Image Enhancer (Python + OpenCV) on Azure App Service
Low-light photos are everywhere: indoor team shots, dim restaurant pics, grainy docs. This post shows a tiny Python app (Flask + OpenCV) that fixes them with explainable image processing, not a heavyweight model. We'll walk through the code that does the real work (CLAHE → gamma → brightness → subtle saturation) and deploy it to Azure App Service for Linux.

## What you'll build

A Flask web app that accepts an image upload, runs a fast enhancement pipeline (CLAHE → gamma → brightness → subtle saturation), and returns base64 data URIs for an instant before/after view in the browser. No storage is required for the basic demo.

## Architecture at a glance

- Browser → `/enhance` (multipart form): sends an image plus optional tunables (`clip_limit`, `gamma`, `brightness`).
- Flask → Enhancer: converts the upload to a NumPy RGB array and calls `LowLightEnhancer.enhance_image(...)`.
- Response: returns JSON with original and enhanced images as base64 PNG data URIs for immediate rendering.

## Prerequisites

- An Azure subscription
- Azure Developer CLI (azd) installed
- (Optional) Python 3.9+ on your dev box for reading the code or extending it

## Deploy with azd

```bash
git clone https://github.com/Azure-Samples/appservice-ai-samples.git
cd appservice-ai-samples/lowlight-enhancer
azd init
azd up
```

When `azd up` finishes, open the printed URL, upload a low-light photo, and compare the result side by side.

## Code walkthrough (the parts that matter)

### 1) Flask surface area (app.py)

File size guard:

```python
app = Flask(__name__)
app.config['MAX_CONTENT_LENGTH'] = 16 * 1024 * 1024  # 16 MB
```

Two routes:

- `GET /` renders the simple UI.
- `POST /enhance` is the JSON API the UI calls via XHR/fetch.

Parameter handling with sane defaults:

```python
clip_limit = float(request.form.get('clip_limit', 2.0))
gamma = float(request.form.get('gamma', 1.2))
brightness = float(request.form.get('brightness', 1.1))
```

### 2) Zero-temp-file processing + data-URI response (app.py)

`process_uploaded_image` keeps the hot path tight: convert to RGB → enhance → convert both versions to base64 PNG and return them inline.
```python
def process_uploaded_image(file_storage, clip_limit=2.0, gamma=1.2, brightness=1.1):
    # PIL → NumPy (RGB)
    img_pil = Image.open(file_storage)
    if img_pil.mode != 'RGB':
        img_pil = img_pil.convert('RGB')
    img_array = np.array(img_pil)

    # Enhance
    enhanced = LowLightEnhancer().enhance_image(
        img_array,
        clip_limit=clip_limit,
        gamma=gamma,
        brightness_boost=brightness
    )

    # Back to base64 data URIs for instant display
    def pil_to_base64(pil_img):
        buf = io.BytesIO()
        pil_img.save(buf, format='PNG')
        return base64.b64encode(buf.getvalue()).decode('utf-8')

    enhanced_pil = Image.fromarray(enhanced)
    return {
        'original': f'data:image/png;base64,{pil_to_base64(img_pil)}',
        'enhanced': f'data:image/png;base64,{pil_to_base64(enhanced_pil)}'
    }
```

### 3) The enhancement core (enhancer.py)

`LowLightEnhancer` implements a classic pipeline that runs great on CPU:

```python
class LowLightEnhancer:
    def __init__(self):
        self.clip_limit = 2.0
        self.tile_grid_size = (8, 8)

    def enhance_image(self, image, clip_limit=2.0, gamma=1.2, brightness_boost=1.1):
        # Normalize to RGB if the input came in as OpenCV BGR
        is_bgr = self._detect_bgr(image)
        rgb = cv2.cvtColor(image, cv2.COLOR_BGR2RGB) if is_bgr else image

        # 1) CLAHE on L-channel (LAB) for local contrast without color blowout
        lab = cv2.cvtColor(rgb, cv2.COLOR_RGB2LAB)
        l, a, b = cv2.split(lab)
        clahe = cv2.createCLAHE(clipLimit=clip_limit, tileGridSize=self.tile_grid_size)
        l = clahe.apply(l)

        # 2) Gamma correction (perceptual brightness curve)
        l = self._apply_gamma_correction(l, gamma)

        # 3) Gentle overall lift
        l = np.clip(l * brightness_boost, 0, 255).astype(np.uint8)

        # Recombine + small saturation nudge for a natural look
        enhanced = cv2.cvtColor(cv2.merge([l, a, b]), cv2.COLOR_LAB2RGB)
        enhanced = self._boost_saturation(enhanced, factor=1.1)

        return cv2.cvtColor(enhanced, cv2.COLOR_RGB2BGR) if is_bgr else enhanced
```

- CLAHE on L (not RGB) avoids the "oversaturated neon" artifact common with naive histogram equalization.
- Gamma via LUT (below) is fast and lets you brighten mid-tones without crushing highlights.
- A tiny brightness multiplier brings the image up just a bit after contrast/curve changes.
- A +10% saturation boost helps counter the desaturation that often follows brightening.

### 4) Fast gamma with a lookup table (enhancer.py)

```python
def _apply_gamma_correction(self, image, gamma: float) -> np.ndarray:
    inv_gamma = 1.0 / gamma
    table = np.array([((i / 255.0) ** inv_gamma) * 255 for i in range(256)], dtype=np.uint8)
    return cv2.LUT(image, table)
```

Notes:

- With `gamma = 1.2`, `inv_gamma ≈ 0.833`, so the curve brightens mid-tones (exponent < 1).
- `cv2.LUT` applies the 256-entry mapping efficiently across the image.

### 5) Bounded color pop: subtle saturation boost (enhancer.py)

```python
def _boost_saturation(self, image: np.ndarray, factor: float) -> np.ndarray:
    hsv = cv2.cvtColor(image, cv2.COLOR_RGB2HSV).astype(np.float32)
    hsv[:, :, 1] = np.clip(hsv[:, :, 1] * factor, 0, 255)
    return cv2.cvtColor(hsv.astype(np.uint8), cv2.COLOR_HSV2RGB)
```

Notes:

- Working in HSV keeps hue and brightness stable while gently lifting color.
- The clip to [0, 255] prevents out-of-gamut surprises.

## Tuning cheatsheet (which knob to turn, and when)

- Too flat / muddy → raise `clip_limit` from 2.0 to 3.0–4.0 (more local contrast).
- Still too dark → raise `gamma` from 1.2 to 1.4–1.6 (brightens mid-tones).
- Harsh or "crunchy" → lower `clip_limit`, or drop `brightness_boost` to 1.05–1.1.
- Colors feel washed out → increase the saturation `factor` a touch (e.g., 1.1 → 1.15).
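If you want to exercise the API outside the browser UI, a request shaped like the sketch below should work. The multipart field name (`image`) and the local port are assumptions here; match them to the form field and port the sample's UI and app.py actually use.

```bash
# Sketch: assumes the app runs locally and the upload field is named "image"
curl -X POST http://127.0.0.1:5000/enhance \
  -F "image=@dark-photo.jpg" \
  -F "clip_limit=3.0" \
  -F "gamma=1.4" \
  -F "brightness=1.1"
# Returns JSON: {"original": "data:image/png;base64,...", "enhanced": "data:image/png;base64,..."}
```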
## What to try next

- Expose more controls in the UI (e.g., tile grid size, saturation factor).
- Persist originals/results to Azure Blob Storage and add shareable links.
- Add a background job for batch processing using the CLI helper.

## Conclusion

The complete sample code and deployment templates are available in the appservice-ai-samples repository. Ready to build your own low-light enhancer app? Clone the repo and run `azd up` to get started in minutes! For more Azure App Service AI samples and best practices, check out the Azure App Service AI integration documentation.

# What's New for Python on App Service for Linux: pyproject.toml, uv, and more
Python apps on Azure App Service for Linux just got a lot easier to build and ship! We've modernized the build pipeline to support new deployment options, whether you're on classic setup.py, fully on pyproject.toml with Poetry or uv, or somewhere in between. This post walks through several upgrades that reduce friction end to end, from local dev to GitHub Actions to the App Service build environment:

- pyproject.toml + uv (and Poetry): modern, reproducible Python builds
- setup.py support
- .bashrc quality-of-life improvements in the App Service container shell
- GitHub Actions samples for common Python flows (setup.py, uv.lock, local venv, and pyproject.toml deployments)

## pyproject.toml + uv

uv is an extremely fast Python package and project manager written in Rust: think "pip + virtualenv + pip-tools," but much faster and with first-class project workflows. (Astral Docs)

On App Service for Linux, we've added automatic uv builds when your repo contains both `pyproject.toml` and `uv.lock`. That means reproducible installs with uv's resolver, with no extra switches needed.

What's pyproject.toml? It's the standardized configuration for modern Python projects (PEP 621) where you declare metadata, dependencies, and your build backend. (Python Enhancement Proposals (PEPs))

### Quickstart (new to uv?)

```bash
# in your project folder
pip install uv
uv init
```

`uv init` scaffolds a project and creates a pyproject.toml (and, for application projects, a sample main.py). Try it with `uv run`. (Astral Docs)

Add dependencies:

```bash
uv add flask
# add more as needed, e.g.:
# uv add requests pillow
```

A `uv.lock` file is generated to pin your dependency graph; uv then "syncs" from the lock for consistent installs. (Astral Docs)

A minimal pyproject.toml for a Flask app:

```toml
[project]
name = "uv-pyproject"
version = "0.1.0"
description = "Add your description here"
readme = "README.md"
requires-python = ">=3.14"
dependencies = [
    "flask>=3.1.2",
]
```

If you prefer to keep main.py: App Service's default entry is app.py, so either rename main.py to app.py, or set a startup command:

```bash
uv run uvicorn main:app --host 0.0.0.0 --port 8000
```

Run locally with `uv run app.py` (`uv run` executes your script inside the project's environment). (Astral Docs)

Deploy to Azure App Service for Linux using your favorite method (e.g., `azd up`, GitHub Actions, or VS Code). During the build, you'll see logs like:

```
Detected uv.lock (and no requirements.txt); creating virtual environment with uv...
Installing uv...
Requirement already satisfied: uv in /tmp/oryx/platforms/python/3.14.0/lib/python3.14/site-packages (0.9.7)
Executing: uv venv --link-mode=copy --system-site-packages antenv
Using CPython 3.14.0 interpreter at: /tmp/oryx/platforms/python/3.14.0/bin/python3.14
Creating virtual environment at: antenv
Activate with: source antenv/bin/activate
Activating virtual environment...
Detected uv.lock. Installing dependencies with uv...
Resolved 9 packages in 1ms
Installed 7 packages in 1.82s
 + blinker==1.9.0
 + click==8.3.0
 + flask==3.1.2
 + itsdangerous==2.2.0
 + jinja2==3.1.6
 + markupsafe==3.0.3
 + werkzeug==3.1.3
```

## Using pyproject.toml with Poetry

Already on Poetry? Great: Poetry uses pyproject.toml (typically with a `[tool.poetry]` section plus a `poetry.lock`) and complies with PEP 517/PEP 621. If your project is Poetry-managed, App Service's pyproject.toml support applies just the same. For details on fields and build configuration, see Poetry's official docs: the pyproject.toml reference and basic usage. (python-poetry.org)
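For orientation, a minimal Poetry-managed pyproject.toml might look like the sketch below; the project name, author, and version pins are placeholders, so adapt them to your app.

```toml
# Minimal sketch of a Poetry-managed project (names and pins are placeholders)
[tool.poetry]
name = "poetry-flask-app"
version = "0.1.0"
description = "Flask app managed with Poetry"
authors = ["Your Name <you@example.com>"]

[tool.poetry.dependencies]
python = ">=3.10"
flask = "^3.1"

[build-system]
requires = ["poetry-core"]
build-backend = "poetry.core.masonry.api"
```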
Want to see a working uv example? Check the lowlight-enhancer-uv Flask app in our samples repo (deployable with `azd up`).

## Support for setup.py

setup.py is the Python build/config script used by Setuptools to declare your project's metadata and dependencies. Setuptools offers first-class support for setup.py, and it remains a valid way to package and install apps. (Setuptools)

A minimal setup.py for a Flask app:

```python
# setup.py
from setuptools import setup, find_packages

setup(
    name="flask-app",
    version="0.1.0",
    packages=find_packages(exclude=("tests",)),
    python_requires=">=3.14",
    install_requires=[
        "Flask>=3.1.2",
    ],
    include_package_data=True,
)
```

Tip: `install_requires` and other fields are defined by Setuptools; see the keywords reference for what you can configure. (Setuptools)

What you'll see during an App Service deployment:

```
Python Version: /tmp/oryx/platforms/python/3.14.0/bin/python3.14
Creating directory for command manifest file if it does not exist
Removing existing manifest file
Python Virtual Environment: antenv
Creating virtual environment...
Executing: /tmp/oryx/platforms/python/3.14.0/bin/python3.14 -m venv antenv --copies
Activating virtual environment...
Running pip install setuptools...
Collecting setuptools
  Downloading setuptools-80.9.0-py3-none-any.whl.metadata (6.6 kB)
Downloading setuptools-80.9.0-py3-none-any.whl (1.2 MB)
   ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 1.2/1.2 MB 6.1 MB/s 0:00:00
Installing collected packages: setuptools
Successfully installed setuptools-80.9.0
...
Running python setup.py install...
[09:05:14+0000] Processing /tmp/8de1d13947ba65f
[09:05:14+0000] Installing build dependencies: started
[09:05:15+0000] Installing build dependencies: finished with status 'done'
[09:05:15+0000] Getting requirements to build wheel: started
[09:05:15+0000] Getting requirements to build wheel: finished with status 'done'
[09:05:15+0000] Preparing metadata (pyproject.toml): started
[09:05:15+0000] Preparing metadata (pyproject.toml): finished with status 'done'
```

## Bash shell experience: friendlier .bashrc in SSH

We've started refreshing the SSH banner and shell behavior so it's easier to orient yourself when you land in a Linux App Service container. What changed (see screenshots below):

- Clear header with useful links. We now show both the general docs and a Python quickstart link right up front.
- Runtime at a glance. The header prints the Python version explicitly.
- Instance details for troubleshooting. You'll see the Instance Name and Instance Id in the banner, which is handy when filing a support ticket or comparing logs across instances.
- No more noisy errors on login. Previously, the shell tried to auto-activate `antenv` and printed "No such file or directory" if it didn't exist. The new logic checks first and shows a gentle tip instead.

What's next:

- More language-specific tips based on the detected stack.
- Shortcuts for common SSH tasks.
- Small UX touches (spacing, color, and prompts) to make SSH sessions feel consistent.
## New GitHub Actions samples

We have also published a few GitHub Actions sample workflows that make it easy to deploy your Python apps and take advantage of these new features:

- Deployment using pyproject.toml + uv: https://github.com/Azure/actions-workflow-samples/blob/master/AppService/Python-GHA-Samples/Python-PyProject-Uv-Sample.yml
- Deployment using Poetry: actions-workflow-samples/AppService/Python-GHA-Samples/Python-Poetry-Sample.yml at master · Azure/actions-workflow-samples
- Deployment using setup.py: actions-workflow-samples/AppService/Python-GHA-Samples/Python-SetupPy-Sample.yml at master · Azure/actions-workflow-samples
- Deploying Python apps that are built locally: actions-workflow-samples/AppService/Python-GHA-Samples/Python-Local-Built-Deploy-Sample.yml at master · Azure/actions-workflow-samples

To use these templates (a minimal workflow skeleton follows this list):

1. Copy the relevant YAML into the `.github/workflows/` folder in your repo.
2. Set auth: use OIDC with `azure/login` (or a service principal/publish profile if you must). (Microsoft Learn)
3. Fill in the inputs: app name, resource group, and any sample-specific details (env vars/ports).
4. Commit & run: trigger on push or via "Run workflow".
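As a rough sketch of the shape these workflows take (an illustrative skeleton, not one of the published samples verbatim; the app name, Python version, and secret names are placeholders):

```yaml
# Illustrative skeleton only - see the linked samples for the maintained versions
name: Deploy Python app to Azure App Service

on:
  push:
    branches: [main]

jobs:
  build-and-deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - uses: actions/setup-python@v5
        with:
          python-version: "3.12"

      - name: Log in to Azure (OIDC)
        uses: azure/login@v2
        with:
          client-id: ${{ secrets.AZURE_CLIENT_ID }}
          tenant-id: ${{ secrets.AZURE_TENANT_ID }}
          subscription-id: ${{ secrets.AZURE_SUBSCRIPTION_ID }}

      - name: Deploy to App Service
        uses: azure/webapps-deploy@v3
        with:
          app-name: <your-app-name>
          package: .
```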
## Conclusion

In the coming months, we'll be announcing more improvements to Python on Azure App Service for Linux, focused on faster builds, better performance for AI workloads, and clearer diagnostics. Try the flows that fit your team, and let us know what else would make your Python deployments even easier.

# Using Scikit-learn on Azure Web App

## TOC

- Introduction to Scikit-learn
- System Architecture
  - Architecture
  - Focus of This Tutorial
- Setup Azure Resources
  - Web App
  - Storage
- Running Locally
  - File and Directory Structure
  - Training Models and Training Data
  - Predicting with the Model
- Publishing the Project to Azure
  - Deployment
  - Configuration
- Running on Azure Web App
  - Training the Model
  - Using the Model for Prediction
- Troubleshooting
  - Missing Environment Variables After Deployment
  - Virtual Environment Resource Lock Issues
  - Package Version Dependency Issues
  - Default Binding
  - Missing System Commands in Restricted Environments
- Conclusion
- References

## 1. Introduction to Scikit-learn

Scikit-learn is a popular open-source Python library for machine learning, built on NumPy, SciPy, and matplotlib. It offers an efficient and easy-to-use toolkit for data analysis, data mining, and predictive modeling. Scikit-learn supports a variety of machine learning algorithms, including classification, regression, clustering, and dimensionality reduction (e.g., SVM, Random Forest, K-means). Its preprocessing utilities handle tasks like scaling, encoding, and missing-data imputation. It also provides tools for model evaluation (e.g., accuracy, precision, recall) and pipeline creation, enabling users to chain preprocessing and model training into seamless workflows.

## 2. System Architecture

### Architecture

Development environment:

- OS: Windows 11, Version 24H2
- Python Version: 3.7.3

Azure resources:

- App Service Plan: SKU - Premium Plan 0 V3
- App Service: Platform - Linux (Python 3.9, Version 3.9.19)
- Storage Account: SKU - General Purpose V2
- File Share: No backup plan

### Focus of This Tutorial

This tutorial walks you through the following stages:

1. Setting up Azure resources
2. Running the project locally
3. Publishing the project to Azure
4. Running the application on Azure
5. Troubleshooting common issues

Each of these stages has numerous corresponding tools and solutions. The choices used in this session are listed in the table below.

| Aspect | Used in this tutorial | Alternatives |
|---|---|---|
| Local OS | Windows | Linux, Mac |
| How to set up Azure resources | Portal (i.e., REST API) | ARM, Bicep, Terraform |
| How to deploy the project to Azure | VSCode | CLI, Azure DevOps, GitHub Actions |

## 3. Setup Azure Resources

### Web App

We need to create the following resources or services:

| Resource/Service | Manual creation required | Type |
|---|---|---|
| App Service Plan | No | Resource |
| App Service | Yes | Resource |
| Storage Account | Yes | Resource |
| File Share | Yes | Service |

Go to the Azure Portal and create an App Service. Important configuration:

- OS: Select Linux (the default if the Python stack is chosen).
- Stack: Select Python 3.9 to avoid dependency issues.
- SKU: Choose at least a Premium Plan to ensure enough memory for your AI workloads.

### Storage

1. Create a Storage Account in the Azure Portal.
2. Create a file share named `data-and-model` in the Storage Account.
3. Mount the File Share to the App Service, using the name `data-and-model` for consistency with tutorial paths.

At this point, all Azure resources and services have been successfully created. Let's take a slight detour and mount the newly created File Share to your Windows development environment. Navigate to the File Share you just created, and refer to the diagram below to copy the required command. Before copying, please ensure that the drive letter remains set to the default "Z", as the sample code in this tutorial relies on it. Return to your development environment, open a PowerShell terminal (do not run it as Administrator), and input the command copied in the previous step, as shown in the diagram.
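For reference, the command the Portal generates generally has the following shape. This is a sketch with placeholders; your storage account name and key will differ, so use the exact command from the Portal.

```powershell
# Sketch only - the Portal generates this pre-filled with your account name and key
cmdkey /add:"<storage-account>.file.core.windows.net" /user:"localhost\<storage-account>" /pass:"<storage-account-key>"
New-PSDrive -Name Z -PSProvider FileSystem -Root "\\<storage-account>.file.core.windows.net\data-and-model" -Persist
```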
After executing the command, the network drive will be successfully mounted. You can open File Explorer to verify, as illustrated in the diagram.

## 4. Running Locally

### File and Directory Structure

Please use VSCode to open a PowerShell terminal and enter the following commands:

```powershell
git clone https://github.com/theringe/azure-appservice-ai.git
cd azure-appservice-ai
.\scikit-learn\tools\add-venv.cmd
```

If you are using a Linux or Mac platform, use the following alternative commands instead:

```bash
git clone https://github.com/theringe/azure-appservice-ai.git
cd azure-appservice-ai
bash ./scikit-learn/tools/add-venv.sh
```

After completing the execution, you should see the following directory structure:

| File and Path | Purpose |
|---|---|
| scikit-learn/tools/add-venv.* | The script executed in the previous step (cmd for Windows, sh for Linux/Mac) to create all Python virtual environments required for this tutorial. |
| .venv/scikit-learn-webjob/ | A virtual environment specifically used for training models. |
| scikit-learn/webjob/requirements.txt | The list of packages (with exact versions) required for the scikit-learn-webjob virtual environment. |
| .venv/scikit-learn/ | A virtual environment specifically used for the Flask application, enabling API endpoint access for querying predictions. |
| scikit-learn/requirements.txt | The list of packages (with exact versions) required for the scikit-learn virtual environment. |
| scikit-learn/ | The main folder for this tutorial. |
| scikit-learn/tools/create-folder.* | A script to create all directories required for this tutorial in the File Share, including train, model, and test. |
| scikit-learn/tools/download-sample-training-set.* | A script to download a sample training set from the UCI Machine Learning Repository, containing heart disease data, into the train directory of the File Share. |
| scikit-learn/webjob/train_heart_disease_model.py | A script for training the model. It loads the training set, applies a machine learning algorithm (logistic regression), and saves the trained model in the model directory of the File Share. |
| scikit-learn/webjob/train_heart_disease_model.sh | A shell script for Azure App Service WebJobs. It activates the scikit-learn-webjob virtual environment and starts the train_heart_disease_model.py script. |
| scikit-learn/webjob/train_heart_disease_model.zip | A ZIP file containing the shell script for Azure WebJobs. It must be recreated manually whenever train_heart_disease_model.sh is modified. Ensure it does not include any directory structure. |
| scikit-learn/api/app.py | Code for the Flask application, including routes, port configuration, input parsing, model loading, predictions, and output generation. |
| scikit-learn/.deployment | A configuration file for deploying the project to Azure using VSCode. It disables the default Oryx build process in favor of custom scripts. |
| scikit-learn/start.sh | A script executed after deployment (as specified in the Portal's startup command). It sets up the virtual environment and starts the Flask application to handle web requests. |

### Training Models and Training Data

Return to VSCode and execute the following commands (their purpose has been described earlier).
```powershell
.\.venv\scikit-learn-webjob\Scripts\Activate.ps1
.\scikit-learn\tools\create-folder.cmd
.\scikit-learn\tools\download-sample-training-set.cmd
python .\scikit-learn\webjob\train_heart_disease_model.py
```

If you are using a Linux or Mac platform, use the following alternative commands instead:

```bash
source .venv/scikit-learn-webjob/bin/activate
bash ./scikit-learn/tools/create-folder.sh
bash ./scikit-learn/tools/download-sample-training-set.sh
python ./scikit-learn/webjob/train_heart_disease_model.py
```

After execution, the File Share will include the corresponding directories and files.

Let's take a brief detour to examine the structure of the training data downloaded from the public dataset website. The right side of the figure describes the meaning of each column in the dataset, while the left side shows the actual training data (after preprocessing). This is a predictive model that uses an individual's physiological characteristics to determine the likelihood of having heart disease. Columns 1–13 represent various physiological features and background information of the patients, while Column 14 (originally Column 58) is the label indicating whether the individual has heart disease.

The supervised learning process uses a large dataset containing both features and labels. Machine learning algorithms (such as neural networks, SVMs, or, in this case, logistic regression) identify the key features and the value ranges that differentiate between labels. The trained model is then saved and can be used in services to predict outcomes in real time by simply providing the necessary features.

### Predicting with the Model

Return to VSCode and execute the following commands. First, deactivate the virtual environment used for training the model, then activate the virtual environment for the Flask application, and finally start the Flask app.

Commands for Windows:

```powershell
deactivate
.\.venv\scikit-learn\Scripts\Activate.ps1
python .\scikit-learn\api\app.py
```

Commands for Linux or Mac:

```bash
deactivate
source .venv/scikit-learn/bin/activate
python ./scikit-learn/api/app.py
```

When you see a screen similar to the following, the server has started successfully. Press Ctrl+C to stop the server if needed.

Before conducting the actual test, let's construct some sample human feature data:

```
[63, 1, 3, 145, 233, 1, 0, 150, 0, 2.3, 0, 0, 1]
[63, 1, 3, 305, 233, 1, 0, 150, 0, 2.3, 0, 0, 1]
```

Referring to the feature description table from earlier, we can see that the only modified field is Column 4 ("Resting Blood Pressure"), with the second sample having an abnormally high value. (Note: normal resting blood pressure typically ranges from 90–139 mmHg.)

Next, open a PowerShell terminal and use the following curl commands to send requests to the app:

```bash
curl -X GET http://127.0.0.1:8000/api/detect -H "Content-Type: application/json" -d '{"info": [63, 1, 3, 145, 233, 1, 0, 150, 0, 2.3, 0, 0, 1]}'
curl -X GET http://127.0.0.1:8000/api/detect -H "Content-Type: application/json" -d '{"info": [63, 1, 3, 305, 233, 1, 0, 150, 0, 2.3, 0, 0, 1]}'
```

You should see the prediction results, confirming that the trained model is working as expected.
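For orientation, the /api/detect route in scikit-learn/api/app.py follows roughly this shape. This is a simplified sketch, not the repository's exact code; the model file path and response format are assumptions, so consult the sample repo for the real implementation.

```python
# Simplified sketch of scikit-learn/api/app.py (model path and response shape are assumptions)
import pickle
from flask import Flask, request, jsonify

app = Flask(__name__)

# Load the model trained by the WebJob from the mounted File Share
with open("/data-and-model/model/heart_disease_model.pkl", "rb") as f:
    model = pickle.load(f)

@app.route("/api/detect", methods=["GET", "POST"])
def detect():
    # Expect a JSON body like {"info": [63, 1, 3, 145, ...]} with 13 features
    features = request.get_json()["info"]
    prediction = model.predict([features])[0]
    return jsonify({"result": int(prediction)})

if __name__ == "__main__":
    # Bind to 0.0.0.0:8000 (the App Service default port; see Troubleshooting below)
    app.run(host="0.0.0.0", port=8000)
```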
## 5. Publishing the Project to Azure

### Deployment

In the VSCode interface, right-click on the target App Service where you plan to deploy your project. Manually select the local project folder named `scikit-learn` as the deployment source, as shown in the image below.

### Configuration

After deployment, the App Service will not be functional yet and will still display the default welcome page. This is because the App Service has not been configured to build the virtual environment and start the Flask application. To complete the setup, go to the Azure Portal and navigate to the App Service. The following steps are critical, and their execution order must be correct. To avoid delays, it's recommended to open two browser tabs beforehand, complete the settings in each, and apply them in sequence. Refer to the following two images for guidance. You need to do the following:

Set the startup command, specifying the path to the script you deployed:

```
bash /home/site/wwwroot/start.sh
```

Set two app settings:

- `WEBSITES_CONTAINER_START_TIME_LIMIT=600`: The value is in seconds, ensuring the startup command can continue execution beyond the default timeout of 230 seconds. This tutorial's startup command typically takes around 300 seconds, so setting it to 600 seconds provides a safety margin and accommodates future project expansion (e.g., adding more packages).
- `WEBSITES_ENABLE_APP_SERVICE_STORAGE=1`: This setting is required to enable the App Service storage feature, which is necessary for using WebJobs (e.g., for model training).

Step-by-step process:

1. Before clicking Continue, switch to the second browser tab and set up all the app settings.
2. In the second tab, apply all app settings, then switch back to the first tab.
3. Click Continue in the first tab and wait several seconds for the operation to complete.
4. Once completed, switch to the second tab and click Continue within 5 seconds.
5. Ensure you click Continue promptly within 5 seconds after the previous step to finish all settings.

After completing the configuration, wait about 10 minutes for the settings to take effect. Then navigate to the WebJobs section in the Azure Portal and upload the ZIP file mentioned in the earlier sections. Set its trigger type to Manual. At this point, the entire deployment process is complete. For future code updates, you only need to redeploy from VSCode; there is no need to reconfigure settings in the Azure Portal.

## 6. Running on Azure Web App

### Training the Model

Go to the Azure Portal, locate your App Service, and navigate to the WebJobs section. Click Start to initiate the job and wait for the results. During this process, you may need to manually refresh the page to check the status of the job execution. Refer to the image below for guidance. Once you see the model report in the logs, the model training is complete and the Flask app is ready for predictions. You can also find the newly trained model in the File Share mounted in your local environment.

### Using the Model for Prediction

Just like in local testing, open a PowerShell terminal and use the following curl commands to send requests to the app:

```bash
# Note: Replace both instances of scikit-learn-portal-app with the name of your web app.
curl -X GET https://scikit-learn-portal-app.azurewebsites.net/api/detect -H "Content-Type: application/json" -d '{"info": [63, 1, 3, 145, 233, 1, 0, 150, 0, 2.3, 0, 0, 1]}'
curl -X GET https://scikit-learn-portal-app.azurewebsites.net/api/detect -H "Content-Type: application/json" -d '{"info": [63, 1, 3, 305, 233, 1, 0, 150, 0, 2.3, 0, 0, 1]}'
```

As with the local environment, you should see the expected results.

## 7. Troubleshooting

### Missing Environment Variables After Deployment

- Symptom: Even after setting values in App Settings (e.g., WEBSITES_CONTAINER_START_TIME_LIMIT), they do not take effect.
- Cause: App Settings (e.g., WEBSITES_CONTAINER_START_TIME_LIMIT, WEBSITES_ENABLE_APP_SERVICE_STORAGE) are reset after updating the startup command.
- Resolution: Use the Azure CLI or the Azure Portal to reapply the App Settings after deployment. Alternatively, set the startup command first, and then apply the app settings.

### Virtual Environment Resource Lock Issues

- Symptom: The app fails to redeploy, even though no configuration or code changes were made.
- Cause: The virtual environment folder cannot be deleted due to active resource locks from the previous process; files or processes from the previous virtual environment session remain locked.
- Resolution: Deactivate processes before deletion and use unique epoch-based folder names to avoid conflicts. Refer to scikit-learn/start.sh in this tutorial for the implementation.

### Package Version Dependency Issues

- Symptom: Conflicts occur between package versions specified in requirements.txt and the versions required by the Python environment, resulting in errors during installation or runtime.
- Cause: Azure deployment environments enforce specific versions of Python and pre-installed packages, leading to mismatches when older or newer versions are explicitly defined. Additionally, the read-only file system in Azure App Service prevents modifying global packages like typing-extensions.
- Resolution: Pin compatible dependency versions. For example, follow the instructions for installing scikit-learn from the scikit-learn 1.5.2 documentation, and refer to scikit-learn/requirements.txt in this tutorial.

### Default Binding

- Symptom: Despite setting the WEBSITES_PORT app setting to match the port Flask listens on (e.g., Flask's default 5000), the deployment still fails.
- Cause: The Flask framework's default settings are not overridden to bind to 0.0.0.0 or the required port.
- Resolution: Explicitly bind Flask to 0.0.0.0:8000 in app.py. To avoid additional issues, it's recommended to use the Azure Python Linux Web App's default port (8000), as this minimizes the need for extra configuration.

### Missing System Commands in Restricted Environments

- Symptom: In the WebJobs log, an error is logged stating that the ls command is missing.
- Cause: This typically occurs in minimal environments, such as Azure App Service, containers, or highly restricted shells.
- Resolution: Use predefined paths or variables in the script instead of relying on system commands. Refer to scikit-learn/webjob/train_heart_disease_model.sh in this tutorial for an example of handling such cases.

## 8. Conclusion

Azure App Service, while a PaaS product with less flexibility than a VM, still offers several powerful features that let us fully leverage the benefits of AI frameworks. For example, the resource-intensive model training phase can be offloaded to a high-performance local machine, allowing the App Service to focus solely on loading models and serving predictions. Additionally, if the training dataset is frequently updated, we can configure WebJobs with scheduled triggers to retrain the model periodically, ensuring the prediction service always uses the latest version. These capabilities make Azure App Service well suited to most business scenarios.

## 9. References

- Scikit-learn Documentation
- UCI Machine Learning Repository
- Azure App Service Documentation

# Scaling Azure Functions Python with orjson
Azure Functions now supports orjson in the Python worker, giving developers an easy way to boost performance by simply adding the library to their environment. Benchmarks show that orjson delivers measurable gains in throughput and latency, with the biggest improvements on the small-to-medium payloads common in real-world workloads. In tests, orjson improved throughput by up to 6% on 35 KB payloads and significantly reduced response times under load, while also eliminating dropped requests in high-throughput scenarios. With its Rust-based speed, standards compliance, and drop-in adoption, orjson offers a straightforward path to faster, more scalable Python Functions without any code changes.
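Adoption really is just a dependency change: the worker picks orjson up automatically when it is installed. As a minimal sketch, assuming a standard Python v2 programming-model function app (the route name here is a placeholder):

```python
# requirements.txt - adding the library is the only change needed:
#   azure-functions
#   orjson
#
# The Python worker detects orjson when it is present and uses it for its JSON
# handling where it can; the function code itself stays exactly the same.
import azure.functions as func

app = func.FunctionApp()

@app.route(route="echo", auth_level=func.AuthLevel.ANONYMOUS)
def echo(req: func.HttpRequest) -> func.HttpResponse:
    payload = req.get_body()  # unchanged application code
    return func.HttpResponse(payload, mimetype="application/json")
```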
# Deployment and Build from Azure Linux based Web App

## TOC

- Introduction
- Deployment Sources
  - From Laptop
  - From CI/CD tools
- Build Source
  - From Oryx Build
  - From Runtime
  - From Deployment Sources
- Walkthrough
  - Laptop + Oryx
  - Laptop + Runtime
  - Laptop
  - CI/CD concept
- Conclusion

## 1. Introduction

Deployment on Azure Linux Web Apps can be done through several different methods. When a deployment issue occurs, the first step is usually to identify which method was used. The core of these methods revolves around the concept of Build: the process of preparing and loading the third-party dependencies required to run an application. For example, a Python app defines its build process as pip-installing packages, a Node.js app npm-installs modules, and PHP or Java apps rely on libraries.

In this tutorial, I'll use a simple Python app to demonstrate four different deployment/build approaches. Each method has its own use cases and limitations. You can even combine them, for example, using your laptop as the deployment tool while still using Oryx as the build engine. The same concepts apply to other runtimes such as Node.js, PHP, and beyond.

## 2. Deployment Sources

### From Laptop

- Scenarios: Setting up a proof of concept; developing in a local environment
- Advantages: Fast development cycle; minimal configuration required
- Limitations: Difficult for the local test environment to interact with cloud resources; OS differences between local and cloud environments may cause integration issues

### From CI/CD tools

- Scenarios: Projects with established development and deployment workflows; codebases requiring version control and automation
- Advantages: Developers can focus purely on coding; automatic deployment upon branch commits
- Limitations: Build and runtime environments may still differ slightly at the OS level

## 3. Build Source

### From Oryx Build

- Scenarios: Offloading resource-intensive build tasks from your local or CI/CD environment directly to the Azure Web App platform, reducing local computing overhead
- Advantages: Minimal extra configuration; multi-language support
- Limitations: Build performance is limited by the App Service SKU and may face performance bottlenecks; the build environment may differ from the runtime environment, so apps sensitive to minor package versions should take caution

### From Runtime

- Scenarios: When you want the benefits and pricing of a PaaS solution but need control similar to an IaaS setup
- Advantages: Build occurs in the runtime environment itself; allows greater flexibility for low-level system operations
- Limitations: Certain system-level settings (e.g., NTP time sync) remain inaccessible

### From Deployment Sources

- Scenarios: Pre-packaging all dependencies and deploying them together, eliminating the need for a separate build step
- Advantages: Supports proprietary or closed-source company packages
- Limitations: Incompatibility may arise if the development and runtime environments differ significantly in OS or package support

| Type | Method | Scenario | Advantage | Limitation |
|---|---|---|---|---|
| Deployment | From Laptop | POC / Dev | Fast setup | Poor cloud link |
| Deployment | From CI/CD | Auto pipeline | Focus on code | OS mismatch |
| Build | From Oryx | Platform build | Simple, multi-lang | Performance cap |
| Build | From Runtime | High control | Flexible ops | Limited access |
| Build | From Deployment | Pre-built deploy | Use private pkg | Env mismatch |

## 4. Walkthrough

### Laptop + Oryx

Add environment variables:

- `SCM_DO_BUILD_DURING_DEPLOYMENT=false` (Purpose: prevents the deployment environment from packaging during publish; this must also be set in the deployment environment itself.)
- `WEBSITE_RUN_FROM_PACKAGE=false` (Purpose: tells Azure Web App not to run the app from a prepackaged file.)
- `ENABLE_ORYX_BUILD=true` (Purpose: allows the Azure Web App platform to handle the build process automatically after a deployment event.)

Add the startup command (the run.sh file corresponds to the script in your project code):

```
bash /home/site/wwwroot/run.sh
```

Check the sample code.

requirements.txt defines Python packages (similar to package.json in Node.js):

```
Flask==3.0.3
gunicorn==23.0.0
```

app.py is the main Python application code:

```python
from flask import Flask

app = Flask(__name__)

@app.route("/")
def home():
    return "Deploy from Laptop + Oryx"

if __name__ == "__main__":
    import os
    app.run(host="0.0.0.0", port=8000)
```

run.sh is the script used to start the application:

```bash
#!/bin/bash
gunicorn --bind=0.0.0.0:8000 app:app
```

.deployment is the VS Code deployment configuration file:

```
[config]
SCM_DO_BUILD_DURING_DEPLOYMENT=false
```

Deployment: once both the deployment and build processes complete successfully, you should see the expected result.

### Laptop + Runtime

Add environment variables (screenshots omitted since the process is similar to previous steps):

- `SCM_DO_BUILD_DURING_DEPLOYMENT=false` (Purpose: prevents the deployment environment from packaging during the publishing process; this must also be set in the deployment environment itself.)
- `WEBSITE_RUN_FROM_PACKAGE=false` (Purpose: instructs Azure Web App not to run the application from a prepackaged file.)
- `ENABLE_ORYX_BUILD=false` (Purpose: ensures that Azure Web App does not perform any build after deployment; all build tasks are instead handled during the startup script execution.)

Add the startup command (screenshots omitted since the process is similar to previous steps; the run.sh file corresponds to the script of the same name in your project code):

```
bash /home/site/wwwroot/run.sh
```

Check the sample code (screenshots omitted since the process is similar to previous steps).

requirements.txt defines Python packages (similar to package.json in Node.js):

```
Flask==3.0.3
gunicorn==23.0.0
```

app.py is the main Python application code:

```python
from flask import Flask

app = Flask(__name__)

@app.route("/")
def home():
    return "Deploy from Laptop + Runtime"

if __name__ == "__main__":
    import os
    app.run(host="0.0.0.0", port=8000)
```

run.sh is the startup script. In addition to launching the app, it also creates a virtual environment and installs dependencies; all build-related tasks happen here:

```bash
#!/bin/bash
python -m venv venv
source venv/bin/activate
pip install -r requirements.txt
gunicorn --bind=0.0.0.0:8000 app:app
```

.deployment is the VS Code deployment configuration file:

```
[config]
SCM_DO_BUILD_DURING_DEPLOYMENT=false
```

Deployment (screenshots omitted since the process is similar to previous steps): once both deployment and build are completed, you should see the expected output.

### Laptop

Add environment variables (screenshots omitted as the process is similar to previous steps):

- `SCM_DO_BUILD_DURING_DEPLOYMENT=false` (Purpose: prevents the deployment environment from packaging during publish; this must also be set in the deployment environment itself.)
- `WEBSITE_RUN_FROM_PACKAGE=false` (Purpose: instructs Azure Web App not to run the app from a prepackaged file.)
- `ENABLE_ORYX_BUILD=false` (Purpose: prevents Azure Web App from building after deployment; all build tasks instead execute during the startup script.)

Add the startup command (screenshots omitted as the process is similar to previous steps; the run.sh corresponds to the same-named file in your project code):

```
bash /home/site/wwwroot/run.sh
```
Check the sample code (screenshots omitted as the process is similar to previous steps).

requirements.txt defines Python packages (like package.json in Node.js):

```
Flask==3.0.3
gunicorn==23.0.0
```

app.py is the main Python application:

```python
from flask import Flask

app = Flask(__name__)

@app.route("/")
def home():
    return "Deploy from Laptop"

if __name__ == "__main__":
    import os
    app.run(host="0.0.0.0", port=8000)
```

run.sh is the startup script. In addition to launching the app, it activates an existing virtual environment; the creation of that environment and the installation of dependencies occur in the next section:

```bash
#!/bin/bash
source venv/bin/activate
gunicorn --bind=0.0.0.0:8000 app:app
```

.deployment is the VS Code deployment configuration file:

```
[config]
SCM_DO_BUILD_DURING_DEPLOYMENT=false
```

Deployment: before deploying, you must perform a local build process. Run these commands locally (depending on the language, they are usually for installing dependencies):

```bash
python -m venv venv
source venv/bin/activate
pip install -r requirements.txt
```

After completing the local build, deploy your app. Once deployment finishes, you should see the expected result.

### CI/CD concept

For example, when using Azure DevOps (ADO) as your CI/CD tool, its behavior conceptually mirrors deploying directly from a laptop, but with enhanced automation, governance, and reproducibility. Essentially, ADO pipelines translate your manual local deployment steps into codified, repeatable workflows defined in a YAML pipeline file, executed by Microsoft-hosted or self-hosted agents.

A typical azure-pipelines.yml defines the stages (e.g., build, deploy) and their corresponding jobs and steps. Each stage runs on a specified VM image (e.g., ubuntu-latest) and executes commands, the same npm install or pip install you would normally run on your laptop. The ADO pipeline acts as your automated laptop: every build command, environment variable, and deployment step you'd normally execute locally is simply formalized in YAML, as the sketch below illustrates. Whether you build inline, use Oryx, or deploy pre-built artifacts, the underlying concept remains identical: compile, package, and deliver code to Azure. The distinction lies in who performs it.
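A minimal sketch of such a pipeline might look like the following. The service connection name, app name, and Python version are placeholders, and a real pipeline would typically add caching, tests, and staged deployments.

```yaml
# Illustrative sketch only - names and versions are placeholders
trigger:
  - main

pool:
  vmImage: ubuntu-latest

steps:
  - task: UsePythonVersion@0
    inputs:
      versionSpec: "3.12"

  # The same build commands you would run on your laptop
  - script: |
      python -m venv venv
      source venv/bin/activate
      pip install -r requirements.txt
    displayName: Build (install dependencies)

  - task: ArchiveFiles@2
    inputs:
      rootFolderOrFile: $(System.DefaultWorkingDirectory)
      includeRootFolder: false
      archiveFile: $(Build.ArtifactStagingDirectory)/app.zip

  - task: AzureWebApp@1
    inputs:
      azureSubscription: <service-connection-name>
      appName: <your-web-app-name>
      package: $(Build.ArtifactStagingDirectory)/app.zip
```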
5. Conclusion

Different deployment and build methods lead to different debugging and troubleshooting approaches. Therefore, understanding the selected deployment method and its corresponding troubleshooting process is an essential skill for every developer and DevOps engineer.

Search Less, Build More: Inner Sourcing with GitHub CoPilot and ADO MCP Server

Developers burn cycles context-switching: opening five repos to find a logging example, searching a wiki for a data masking rule, scrolling chat history for the latest pipeline pattern. Organisations that I speak to are often on the path of transformational platform engineering projects but always carry the fear or doubt of "what if my engineers don't use these resources?" While projects like Backstage still play a pivotal role in inner sourcing and discoverability, I also empathise with developers who would argue, "How would I even know in the first place which modules have or haven't been created for reuse?"

In this blog we explore how we can ensure organisational standards and developer satisfaction without any heavy lifting on either side: no custom model training, no rewriting or relocating of repositories, and no stagnant local data. Using GitHub CoPilot + the Azure DevOps MCP server (with the free `code_search` extension) we turn the IDE into an organisational knowledge interface. Instead of guessing or re-implementing, engineers can start scaffolding projects or solving issues as they would normally (hopefully using CoPilot) and without extra prompting. GitHub CoPilot can lean on organisational standards and ensure recommendations are made with code snippets generated directly from existing examples.

What Is the Azure DevOps MCP Server + code_search Extension?

MCP (Model Context Protocol) is an open standard that lets agents (like GitHub Copilot) pull in structured, on-demand context from external systems. MCP servers contain natural language explanations of the tools the agent can utilise, allowing dynamic decisions about when to use certain toolsets over others. The Azure DevOps MCP Server is the ADO product team's implementation of that standard. It exposes your ADO environment in a way CoPilot can consume. Out of the box it gives you access to:

- Projects: list and navigate across projects in your organization.
- Repositories: browse repos, branches, and files.
- Work items: surface user stories, bugs, or acceptance criteria.
- Wikis: pull policies, standards, and documentation.

This means CoPilot can ground its answers in live ADO content, instead of hallucinating or relying only on what's in the current editor window. The ADO server runs locally on your own machine, ensuring that all sensitive project information remains within your secure network boundary. This also means that existing permissions on ADO objects such as projects or repositories are respected.

The wiki search tooling available out of the box with the ADO MCP server is very useful; however, if I am honest, I have seen these wikis go unused, with documentation stored elsewhere, either inside the repository or in a project management tool. This means any tool that needs to implement code requires the ability to accurately search the code stored in the repositories themselves. That is where enabling the code_search extension in ADO is so important. Most organisations have this enabled already; however, it is worth noting that this prerequisite is the real unlock of cross-repo search. It allows CoPilot to:

- Query for symbols, snippets, or keywords across all repos.
- Retrieve usage examples from code, not just docs.
- Locate standards (like logging wrappers or retry policies) wherever they live.
- Back every recommendation with specific source lines.

In short: MCP connects CoPilot to Azure DevOps. code_search makes that connection powerful by turning it into a discovery engine.
What is the relevance of CoPilot Instructions?

One of the less obvious but most powerful features of GitHub CoPilot is its ability to follow instructions files. CoPilot automatically looks for these files and uses them as a "playbook" for how it should behave. There are different types of instructions you can provide:

- Organisational instructions: apply across your entire workspace, regardless of which repo you're in.
- Repo-specific instructions: scoped to a particular repository, useful when one project has unique standards or patterns.
- Personal instructions: smaller overrides layered on top of global rules when a local exception applies (stored in .github/copilot-instructions.md).

In this solution, I'm using a single personal instructions file. It tells CoPilot:

- When to search (e.g., always query repos and wikis before answering a standards question).
- Where to look (Azure DevOps repos, wikis, and, with code_search, the code itself).
- How to answer (responses must cite the repo/file/line or wiki page; if no source is found, say so).
- How to resolve conflicts (prefer dated wiki entries over older README fragments).

As a small example, a section of a CoPilot instructions file could look like this:

````markdown
# GitHub Copilot Instructions for Azure DevOps MCP Integration

This project uses Azure DevOps with MCP server integration to provide organizational context awareness. Always check to see if the Azure DevOps MCP server has a tool relevant to the user's request.

## Core Principles

### 1. Azure DevOps Integration
- **Always prioritize Azure DevOps MCP tools** when users ask about:
  - Work items, stories, bugs, tasks
  - Pull requests and code reviews
  - Build pipelines and deployments
  - Repository operations and branch management
  - Wiki pages and documentation
  - Test plans and test cases
  - Project and team information

### 2. Organizational Context Awareness
- Before suggesting solutions, **check existing organizational patterns** by:
  - Searching code across repositories for similar implementations
  - Referencing established coding standards and frameworks
  - Looking for existing shared libraries and utilities
  - Checking architectural decision records (ADRs) in wikis

### 3. Cross-Repository Intelligence
- When providing code suggestions:
  - **Search for existing patterns** in other repositories first
  - **Reference shared libraries** and common utilities
  - **Maintain consistency** with organizational standards
  - **Suggest reusable components** when appropriate

## Tool Usage Guidelines

### Work Items and Project Management
When users mention bugs, features, tasks, or project planning:
```
✅ Use: wit_my_work_items, wit_create_work_item, wit_update_work_item
✅ Use: wit_list_backlogs, wit_get_work_items_for_iteration
✅ Use: work_list_team_iterations, core_list_projects
```
````

The result...

To test this I created three ADO projects, each with one or two repositories. The repositories were light, containing only READMEs with descriptions of the "repo" and some example code snippets for usage. I then created a brand-new workspace with no context apart from a CoPilot instructions document (which could be part of a repo scaffold or organisation-wide) telling CoPilot to search the code and wikis across all ADO projects in my demo environment. It returns guidance and standards from all available repos and starts to use them to formulate its response. In the screenshot I have highlighted some key parts with red boxes.
The first is a section of the README that CoPilot identified, which is also highlighted within the CoPilot chat response. I have highlighted the rather generic prompt I used to get this response at the bottom of that window too. Above, I have highlighted CoPilot using the MCP server tooling to search through projects, repos, and code. Finally, the largest box highlights the instructions given to CoPilot on how to search, and how easily these could be optimised or changed depending on the requirements and organisational coding standards.

How did I implement this?

Implementation is actually incredibly simple. As mentioned, I created multiple projects and repositories within my ADO organisation in order to test cross-project and cross-repo discovery. I then did the following:

1. Enable code_search in your Azure DevOps organization (Marketplace → install extension).
2. Login to Azure: use the Azure CLI to authenticate with "az login".
3. Create the .vscode/mcp.json file: a snippet is provided below; the organisation name should be changed to your organisation's name.
4. Start and enable your MCP server: in the mcp.json file you should see a "Start" button. Using the snippet below, you will be prompted to add your organisation name. Ensure your CoPilot agent has access to the server under "tools" too. View this setup guide for full setup instructions (azure-devops-mcp/docs/GETTINGSTARTED.md at main · microsoft/azure-devops-mcp).
5. Create a CoPilot instructions file with a search-first directive. I have included the full version used in this demo at the bottom of the article.
6. Experiment with prompts: start generic ("How do we secure APIs?"), review the output and the tools used, and then tailor your instructions.

Considerations

While this is a great approach, I still have some considerations when going to production:

- Latency: using MCP tooling on every request adds some latency to developer requests. We can optimise usage through CoPilot instructions to better identify when CoPilot should or shouldn't use the ADO MCP server.
- Complex projects and repositories: while I have demonstrated cross-project and cross-repository retrieval, my demo environment does not truly simulate an enterprise ADO environment. Performance should be tested and closely monitored as organisational complexity increases.
- Public preview: the ADO MCP server is moving quickly but is currently still in public preview.

We have demonstrated in this article how quickly we can make our Azure DevOps content discoverable. While there are considerations moving forward, this showcases a direction towards agentic inner sourcing. Feel free to comment below how you think this approach could be extended or augmented for other use cases!

Resources

MCP Server Config (/.vscode/mcp.json)

```json
{
  "inputs": [
    {
      "id": "ado_org",
      "type": "promptString",
      "description": "Azure DevOps organization name (e.g. 'contoso')"
    }
  ],
  "servers": {
    "ado": {
      "type": "stdio",
      "command": "npx",
      "args": ["-y", "@azure-devops/mcp", "${input:ado_org}"]
    }
  }
}
```

CoPilot Instructions (/.github/copilot-instructions.md)

````markdown
# GitHub Copilot Instructions for Azure DevOps MCP Integration

This project uses Azure DevOps with MCP server integration to provide organizational context awareness. Always check to see if the Azure DevOps MCP server has a tool relevant to the user's request.

## Core Principles
### 1. Azure DevOps Integration
- **Always prioritize Azure DevOps MCP tools** when users ask about:
  - Work items, stories, bugs, tasks
  - Pull requests and code reviews
  - Build pipelines and deployments
  - Repository operations and branch management
  - Wiki pages and documentation
  - Test plans and test cases
  - Project and team information

### 2. Organizational Context Awareness
- Before suggesting solutions, **check existing organizational patterns** by:
  - Searching code across repositories for similar implementations
  - Referencing established coding standards and frameworks
  - Looking for existing shared libraries and utilities
  - Checking architectural decision records (ADRs) in wikis

### 3. Cross-Repository Intelligence
- When providing code suggestions:
  - **Search for existing patterns** in other repositories first
  - **Reference shared libraries** and common utilities
  - **Maintain consistency** with organizational standards
  - **Suggest reusable components** when appropriate

## Tool Usage Guidelines

### Work Items and Project Management
When users mention bugs, features, tasks, or project planning:
```
✅ Use: wit_my_work_items, wit_create_work_item, wit_update_work_item
✅ Use: wit_list_backlogs, wit_get_work_items_for_iteration
✅ Use: work_list_team_iterations, core_list_projects
```

### Code and Repository Operations
When users ask about code, branches, or pull requests:
```
✅ Use: repo_list_repos_by_project, repo_list_pull_requests_by_repo
✅ Use: repo_list_branches_by_repo, repo_search_commits
✅ Use: search_code for finding patterns across repositories
```

### Documentation and Knowledge Sharing
When users need documentation or want to create/update docs:
```
✅ Use: wiki_list_wikis, wiki_get_page_content, wiki_create_or_update_page
✅ Use: search_wiki for finding existing documentation
```

### Build and Deployment
When users ask about builds, deployments, or CI/CD:
```
✅ Use: pipelines_get_builds, pipelines_get_build_definitions
✅ Use: pipelines_run_pipeline, pipelines_get_build_status
```

## Response Patterns

### 1. Discovery First
Before providing solutions, always discover organizational context:
```
"Let me first check what patterns exist in your organization..."
→ Search code, check repositories, review existing work items
```

### 2. Reference Organizational Standards
When suggesting code or approaches:
```
"Based on patterns I found in your [RepositoryName] repository..."
"Following your organization's standard approach seen in..."
"This aligns with the pattern established in [TeamName]'s implementation..."
```

### 3. Actionable Integration
Always offer to create or update Azure DevOps artifacts:
```
"I can create a work item for this enhancement..."
"Should I update the wiki page with this new pattern?"
"Let me link this to the current iteration..."
```

## Specific Scenarios

### New Feature Development
1. **Search existing repositories** for similar features
2. **Check architectural patterns** and shared libraries
3. **Review related work items** and planning documents
4. **Suggest implementation** based on organizational standards
5. **Offer to create work items** and documentation

### Bug Investigation
1. **Search for similar issues** across repositories and work items
2. **Check related builds** and recent changes
3. **Review test results** and failure patterns
4. **Provide solution** based on organizational practices
5. **Offer to create/update** bug work items and documentation

### Code Review and Standards
1. **Compare against organizational patterns** found in other repositories
2. **Reference coding standards** from wiki documentation
3. **Suggest improvements** based on established practices
4. **Check for reusable components** that could be leveraged

### Documentation Requests
1. **Search existing wikis** for related content
2. **Check for ADRs** and technical documentation
3. **Reference patterns** from similar projects
4. **Offer to create/update** wiki pages with findings

## Error Handling
If Azure DevOps MCP tools are not available or fail:
1. **Inform the user** about the limitation
2. **Provide alternative approaches** using available information
3. **Suggest manual steps** for Azure DevOps integration
4. **Offer to help** with configuration if needed

## Best Practices

### Always Do:
- ✅ Search organizational context before suggesting solutions
- ✅ Reference existing patterns and standards
- ✅ Offer to create/update Azure DevOps artifacts
- ✅ Maintain consistency with organizational practices
- ✅ Provide actionable next steps

### Never Do:
- ❌ Suggest solutions without checking organizational context
- ❌ Ignore existing patterns and implementations
- ❌ Provide generic advice when specific organizational context is available
- ❌ Forget to offer Azure DevOps integration opportunities

---

**Remember: The goal is to provide intelligent, context-aware assistance that leverages the full organizational knowledge base available through Azure DevOps while maintaining development efficiency and consistency.**
````

How to connect Azure SQL database from Python Function App using managed identity or access token
This blog will demonstrate how to connect to an Azure SQL database from a Python Function App using managed identity or an access token. If you are looking for how to implement this in a Windows App Service, you may refer to this post: https://techcommunity.microsoft.com/t5/apps-on-azure-blog/how-to-connect-azure-sql-database-from-azure-app-service-windows/ba-p/2873397.

Note that the Azure Active Directory managed identity authentication method was added in ODBC Driver version 17.3.1.1 for both system-assigned and user-assigned identities. In the Azure blessed image for Python Functions, the ODBC Driver version is 17.8, which makes it possible to leverage this feature in Linux App Service. Briefly, this post provides step-by-step guidance with sample code and an introduction to the authentication workflow.

Steps:

1. Create a Linux Python Function App from the portal.

2. Set up the managed identity in the new Function App by enabling Identity and saving from the portal. It will generate an Object (principal) ID for you automatically.

3. Assign a role in the Azure SQL database. Search for your own account and save it as admin.

Note: alternatively, you can search for the function app's name and set it as admin; that function app would then have admin permission on the database and you can skip steps 4 and 5 as well.

4. Go to the Query editor in the database and be sure to log in using the account set in the previous step rather than a username and password. Otherwise step 5 will fail with the exception below.

"Failed to execute query. Error: Principal 'xxxx' could not be created. Only connections established with Active Directory accounts can create other Active Directory users."

5. Run the queries below to create a user for the function app and alter its roles. You can choose to alter a subset of these roles per your needs.

```sql
CREATE USER "yourfunctionappname" FROM EXTERNAL PROVIDER;
ALTER ROLE db_datareader ADD MEMBER "yourfunctionappname";
ALTER ROLE db_datawriter ADD MEMBER "yourfunctionappname";
ALTER ROLE db_ddladmin ADD MEMBER "yourfunctionappname";
```

6. Use the sample code below to build your own project and deploy it to the function app.

Sample Code:

Below is sample code that uses an Azure access token when run locally and managed identity when run in the Function App. The token part needs to be replaced with your own. Basically, it uses pyodbc.connect(connection_string + ';Authentication=ActiveDirectoryMsi') to authenticate with managed identity. Also, MSI_SECRET is used to tell whether we are running locally or in the function app; it is created automatically as an environment variable when the function app has managed identity enabled.
The complete demo project can be found at: https://github.com/kevin808/azure-function-pyodbc-MI

```python
import logging
import os
import struct

import azure.functions as func
import pyodbc


def main(req: func.HttpRequest) -> func.HttpResponse:
    logging.info('Python HTTP trigger function processed a request.')

    server = "your-sqlserver.database.windows.net"
    database = "your_db"
    driver = "{ODBC Driver 17 for SQL Server}"
    query = "SELECT * FROM dbo.users"

    # Optional: use username and password for authentication
    # username = 'name'
    # password = 'pass'

    db_token = ''
    connection_string = 'DRIVER=' + driver + ';SERVER=' + server + ';DATABASE=' + database

    # When managed identity is enabled (MSI_SECRET only exists when running in Azure)
    if os.getenv("MSI_SECRET"):
        conn = pyodbc.connect(connection_string + ';Authentication=ActiveDirectoryMsi')
    # Used when run from local with an access token
    else:
        SQL_COPT_SS_ACCESS_TOKEN = 1256
        # Expand each token byte with a trailing zero byte and prefix the total
        # length, which is the structure the ODBC driver expects for tokens
        exptoken = b''
        for i in bytes(db_token, "UTF-8"):
            exptoken += bytes([i])
            exptoken += bytes(1)
        tokenstruct = struct.pack("=i", len(exptoken)) + exptoken
        conn = pyodbc.connect(connection_string,
                              attrs_before={SQL_COPT_SS_ACCESS_TOKEN: tokenstruct})

    # Uncomment the line below to use username and password for authentication
    # conn = pyodbc.connect('DRIVER='+driver+';SERVER='+server+';DATABASE='+database+';UID='+username+';PWD='+password)

    cursor = conn.cursor()
    cursor.execute(query)
    row = cursor.fetchone()
    while row:
        print(row[0])
        row = cursor.fetchone()

    return func.HttpResponse(
        'Success',
        status_code=200
    )
```

Workflow:

Below are the workflows for these two authentication methods; with them in mind, we can understand what happens under the hood.

Managed identity: when we enable managed identity for the function app, a service principal is generated for it automatically; the app then follows the steps below to authenticate against the database.

Function App with managed identity -> send request to database with service principal -> database checks the corresponding database user and its permissions -> authentication passes.

Access token: the access token can be generated by executing 'az account get-access-token --resource=https://database.windows.net/ --query accessToken' locally; we then hold this token to authenticate. Please note that the default lifetime of the token is one hour, which means we need to retrieve it again once it expires.

az login -> az account get-access-token -> local function uses the token to authenticate against the SQL database -> DB checks whether the database user exists and which permissions are granted -> authentication passes.
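If you would rather fetch that token programmatically than shell out to the Azure CLI, the azure-identity package exposes the same login. A minimal sketch, assuming you are already signed in with az login; adapt the wiring to your own project:

```python
# Sketch: obtain the database access token with azure-identity instead of
# running 'az account get-access-token' by hand. AzureCliCredential reuses
# your existing 'az login' session; tokens still expire after about an hour,
# so request a fresh one before long-lived work.
from azure.identity import AzureCliCredential

credential = AzureCliCredential()
token = credential.get_token("https://database.windows.net/.default")
db_token = token.token  # plug into the access-token branch of the sample above
```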
Thanks for reading. I hope you enjoy it.

Unlocking Application Modernisation with GitHub Copilot

AI-driven modernisation is unlocking new opportunities you may not have even considered yet. It is also allowing organisations to re-evaluate previously discarded modernisation attempts that were considered too hard or too complex, or for which they simply didn't have the skills or time.

During Microsoft Build 2025, we were introduced to the concept of agentic AI modernisation, and this post from Ikenna Okeke does a great job of summarising the topic: Reimagining App Modernisation for the Era of AI | Microsoft Community Hub. This blog post, however, explores the modernisation opportunities that you may not even have thought of yet, the business benefits, how to start preparing your organisation, empowering your teams, and identifying where GitHub Copilot can help. I've spent the last 8 months working with customers exploring usage of GitHub Copilot, and want to share what my team members and I have discovered in terms of new opportunities to modernise and transform your applications, bringing some fun back into those migrations!

Let's delve into how GitHub Copilot is helping teams update old systems, move processes to the cloud, and achieve results faster than ever before.

Background: The Modernisation Challenge (Then vs Now)

Modernising legacy software has always been hard. In the past, teams faced steep challenges: brittle codebases full of technical debt, outdated languages (think decades-old COBOL or VB6), sparse documentation, and original developers long gone. Integrating old systems with modern cloud services often required specialised skills that were in short supply. For example, check out this fantastic post from Arvi LiVigni (@arilivigni) about migrating from COBOL: "the number of developers who can read and write COBOL isn't what it used to be," making those systems much harder to update. Common pain points included compatibility issues, data migrations, high costs, security vulnerabilities, and the constant risk that any change could break critical business functions. It's no wonder many modernisation projects stalled or were "put off" due to their complexity and risk.

So, what's different now (circa 2025) compared to two years ago? In a word: intelligent AI assistance. Tools like GitHub Copilot have emerged as AI pair programmers that dramatically lower the barriers to modernisation. Arvi's post describes how, only a couple of years ago, developers had to comb through documentation and Stack Overflow for clues when deciphering old code or upgrading frameworks. Today, GitHub Copilot can act like an expert co-developer inside your IDE, ready to explain mysterious code, suggest updates, and even rewrite legacy code in modern languages. This means less time fighting old code and more time implementing improvements. As Arvi says, "nine times out of 10 it gives me the right answer… That speed – and not having to break out of my flow – is really what's so impactful." In short, AI coding assistants have evolved from novel experiments to indispensable tools, reimagining how we approach software updates and cloud adoption. I'd also add from my own experience: the models we were using 12 months ago have already been superseded by far superior models with the ability to ingest larger context and tackle even greater complexity.
It is also easier to experiment and fail, which brings more robust outcomes. With such speed in creating proofs of concept, experimenting and failing faster has unlocked the ability to test multiple hypotheses and reach the most confident outcome in a much shorter space of time.

Modernisation is easier now because AI reduces the heavy lifting. Instead of reading a 10,000-line legacy program alone, a developer can ask Copilot to explain what the code does or even propose a refactored version. Rather than manually researching how to replace an outdated library, they can get instant recommendations for modern equivalents. These advancements mean that tasks which once took weeks or months can now be done in days or hours, with more confidence and less drudgery. More fun! The following sections dive into specific opportunities unlocked by GitHub Copilot across the modernisation journey which you may not even have thought of.

Modernisation Opportunities Unlocked by Copilot

Modernising an application isn't just about updating code; it involves bringing everyone and everything up to speed with cloud-era practices. Below are several scenarios and how GitHub Copilot adds value, with the specific benefits highlighted.

1. AI-Assisted Legacy Code Refactoring and Upgrades

Instant Code Comprehension: GitHub Copilot can explain complex legacy code in plain English, helping developers quickly understand decades-old logic without scouring scarce documentation. For example, you can highlight a cryptic COBOL or C++ function and ask Copilot to describe what it does, an invaluable first step before making any changes. This saves hours and reduces errors when starting a modernisation effort.

Automated Refactoring Suggestions: The AI suggests modern replacements for outdated patterns and APIs, and can even translate code between languages. For instance, Copilot can help convert a COBOL program into JavaScript or C# by recognising equivalent constructs. It also uses transformation tools (like OpenRewrite for Java/.NET) to systematically apply code updates, e.g. replacing all legacy HTTP calls with a modern library in one sweep. Developers remain in control, but GitHub Copilot handles the tedious bulk edits.

Bulk Code Upgrades with AI: GitHub Copilot's App Modernisation capabilities can analyse an entire codebase and generate a detailed upgrade plan, then execute many of the code changes automatically. It can upgrade framework versions (say from .NET Framework 4.x to .NET 6, or Java 8 to Java 17) by applying known fix patterns and even fixing compilation errors after the upgrade. Teams can finally tackle those hundred-thousand-line enterprise applications, a task that could otherwise take years, with GitHub Copilot handling the repetitive changes.

Technical Debt Reduction: By cleaning up old code and enforcing modern best practices, GitHub Copilot helps chip away at years of technical debt. The modernised codebase is more maintainable and stable, which lowers the long-term risk hanging over critical business systems. Notably, the tool can even scan for known security vulnerabilities during refactoring as it updates your code. In short, each legacy component refreshed with GitHub Copilot comes out safer and easier to work on, instead of remaining a brittle black box.

2. Accelerating Cloud Migration and Azure Modernisation

Guided Azure Migration Planning: GitHub Copilot can assess a legacy application's cloud readiness and recommend target Azure services for each component.
For instance, it might suggest migrating an on-premises database to Azure SQL, moving file storage to Azure Blob Storage, and converting background jobs to Azure Functions. This provides a clear blueprint to confidently move an app from servers to Azure PaaS.

One-Click Cloud Transformations: GitHub Copilot comes with predefined migration tasks that automate the code changes required for cloud adoption. With one click, you can have the AI apply dozens of modifications across your codebase. For example:

- File storage: replace local file read/writes with Azure Blob Storage SDK calls.
- Email/comms: swap out SMTP email code for Azure Communication Services or SendGrid.
- Identity: migrate authentication from Windows AD to Azure AD (Entra ID) libraries.
- Configuration: remove hard-coded configurations and use Azure App Configuration or Key Vault for secrets.

GitHub Copilot performs these transformations consistently, following best practices (like using connection strings from Azure settings). After applying the changes, it even fixes any compile errors automatically, so you're not left with broken builds. What used to require reading countless Azure migration guides is now handled in minutes.

Automated Validation & Deployment: Modernisation doesn't stop at code changes. GitHub Copilot can also generate unit tests to validate that the application still behaves correctly after the migration. It helps ensure that your modernised, cloud-ready app passes all its checks before going live. When you're ready to deploy, GitHub Copilot can produce the necessary Infrastructure-as-Code templates (e.g. Azure Resource Manager Bicep files or Terraform configs) and even set up CI/CD pipeline scripts for you. In other words, the AI can configure the Azure environment and deployment process end-to-end. This dramatically reduces manual effort and error, getting your app to the cloud faster and with greater confidence.

Integrations: GitHub Copilot also helps tackle larger migration scenarios that were previously considered too complex. For example, many enterprises want to retire expensive proprietary integration platforms like MuleSoft or Apigee and use Azure-native services instead, but rewriting hundreds of integration workflows was daunting. Now, GitHub Copilot can assist in translating those workflows: for instance, converting an Apigee API proxy into an Azure API Management policy, or a MuleSoft integration into an Azure Logic App.

Multi-Cloud Migrations: if you plan to consolidate from other clouds into Azure, GitHub Copilot can suggest equivalent Azure services and SDK calls to replace AWS or GCP-specific code. These AI-assisted conversions significantly cut down the time needed to reimplement functionality on Azure. The business impact can be substantial: by lowering the effort of such migrations, GitHub Copilot makes it feasible to pursue opportunities that deliver big cost savings and simplification.

3. Boosting Developer Productivity and Quality

Instant Unit Tests (TDD Made Easy): Writing tests for old code can be tedious, but GitHub Copilot can generate unit test cases on the fly. Developers can highlight an existing function and ask Copilot to create tests; it will produce meaningful test methods covering typical and edge scenarios. This makes it practical to apply test-driven development practices even to legacy systems: you can quickly build a safety net of tests before refactoring. By catching bugs early through these AI-generated tests, teams gain confidence to modernise code without breaking things. It essentially injects quality into the process from the start, which is crucial for successful modernisation.
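To illustrate the kind of output this produces, here is a hedged sketch: a made-up legacy pricing helper together with the sort of pytest cases Copilot might draft for it. Both the function and the scenarios are hypothetical, not taken from a real codebase.

```python
# Hypothetical legacy helper and AI-drafted tests (illustrative only)
def calculate_discount(order_total: float, customer_tier: str) -> float:
    """Legacy pricing rule: gold customers get 10% off orders over 100."""
    if customer_tier == "gold" and order_total > 100:
        return round(order_total * 0.90, 2)
    return order_total


def test_gold_customer_over_threshold_gets_discount():
    assert calculate_discount(200.0, "gold") == 180.0


def test_gold_customer_at_threshold_pays_full_price():
    # Edge case a generated suite should pin down: the rule is "over 100"
    assert calculate_discount(100.0, "gold") == 100.0


def test_standard_customer_never_discounted():
    assert calculate_discount(500.0, "standard") == 500.0
```

A safety net like this, generated before any refactoring starts, is what makes the later bulk edits safe to apply.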
DevOps Automation: GitHub Copilot helps modernise your build and deployment process as well. It can draft CI/CD pipeline configurations, Dockerfiles, Kubernetes manifests, and other DevOps scripts by leveraging its knowledge of common patterns. For example, when setting up a GitHub Actions workflow to deploy your app, GitHub Copilot will autocomplete significant parts (like build steps, test runs, and deployment jobs) based on the project structure. This not only saves time but also ensures best practices (proper caching, dependency installation, etc.) are followed by default. Microsoft even provides an extension where you can describe your Azure infrastructure needs in plain language and have GitHub Copilot generate the corresponding templates and pipeline YAML. By automating these pieces, teams can move to cloud-based, automated deployments much faster.

Behaviour-Driven Development Support: Teams practicing BDD write human-readable scenarios (e.g. using Gherkin syntax) describing application behaviour. GitHub Copilot's AI is adept at interpreting such descriptions and suggesting step definition code or test implementations to match. For instance, given a scenario "When a user with no items checks out, then an error message is shown," GitHub Copilot can draft the code for that condition or the test steps required. This helps bridge the gap between non-technical specifications and actual code. It makes BDD more efficient and accessible, because even if team members aren't strong coders, the AI can translate their intent into working code that developers can refine.

Quality and Consistency: By using AI to handle boilerplate and repetitive tasks, developers can focus more on high-value improvements. GitHub Copilot's suggestions are based on a vast corpus of code, which often means it surfaces well-structured, idiomatic patterns. Starting from these suggestions, developers are less likely to introduce errors or reinvent the wheel, which leads to more consistent code quality across the project. The AI also often reminds you of edge cases (for example, suggesting input validation or error handling code that might be missed), contributing to a more robust application. In practice, many teams find that adopting GitHub Copilot results in fewer bugs and quicker code reviews, as the code is cleaner on the first pass. It's like having an extra set of eyes on every pull request, ensuring standards are met.

Business Benefits of AI-Powered Modernisation

Bringing together the technical advantages above, what's the payoff for the business and stakeholders? Modernising with GitHub Copilot can yield multiple tangible and intangible benefits:

Accelerated Time-to-Market: Modernisation projects that might have taken a year can potentially be completed in a few months, or an upgrade that took weeks can be done in days. This speed means you can deliver new features to customers sooner and respond faster to market changes. It also reduces downtime and disruption, since migrations happen more swiftly.

Cost Savings: By automating repetitive work and reducing the effort required from highly paid senior engineers, GitHub Copilot can trim development costs. Faster project completion also means lower overall project cost. Additionally, running modernised apps on cloud infrastructure (with updated code) often lowers operational costs due to more efficient resource usage and easier maintenance.
There's also an opportunity cost benefit: developers freed up by Copilot can work on other value-adding projects in parallel.

Improved Quality & Reliability: GitHub Copilot's contributions to testing, bug-fixing, and even security (like patching known vulnerabilities during upgrades) result in more robust applications. Modernised systems have fewer outages and security incidents than shaky legacy ones. Stakeholders will appreciate that with GitHub Copilot, modernisation doesn't mean "trading one set of bugs for another"; instead, you can increase quality as you modernise (GitHub's research noted higher code quality when using Copilot, as developers are less likely to introduce errors or skip tests).

Business Agility: A modernised application (especially one refactored for cloud) is typically more scalable and adaptable. New integrations or features can be added much faster once the platform is up to date. GitHub Copilot helps clear the modernisation hurdle, after which the business can innovate on a solid, flexible foundation (for example, once a monolith is broken into microservices or moved to Azure PaaS, you can iterate on it much faster in the future). AI-assisted modernisation thus unlocks future opportunities (like easier expansion, integrations, and AI features) that were impractical on the legacy stack.

Employee Satisfaction and Innovation: Developer happiness is a subtle but important benefit. When tedious work is handled by AI, developers can spend more time on creative tasks: designing new features, improving user experience, exploring new technologies. This can foster a culture of innovation. Moreover, being seen as a company that leverages modern tools (like AI copilots) helps attract and retain top tech talent. Teams that successfully modernise critical systems with Copilot will gain confidence to tackle other ambitious projects, creating a positive feedback loop of improvement.

To sum up, GitHub Copilot acts as a force multiplier for application modernisation. It enables organisations to do more with less: convert legacy "boat anchors" into modern, cloud-enabled assets rapidly, while improving quality and developer morale. This aligns IT goals with business goals: faster delivery, greater efficiency, and readiness for the future.

Call to Action: Embrace the Future of Modernisation

GitHub Copilot has proven to be a catalyst for transforming how we approach legacy systems and cloud adoption. If you're excited about the possibilities, here are next steps and what to watch for:

Start Experimenting: If you haven't already, try GitHub Copilot on a sample of your code. Use Copilot or Copilot Chat to explain a piece of old code or generate a unit test. Seeing it in action on your own project can build confidence and spark ideas for where to apply it.

Identify a Pilot Project: Look at your application portfolio for a candidate that's ripe for modernisation, maybe a small legacy service that could be moved to Azure, or a module that needs a refactor. Use GitHub Copilot to assess and estimate the effort. Often, you'll find tasks once deemed "too hard" might now be feasible. Early successes will help win support for larger initiatives.

Stay Tuned for Our Upcoming Blog Series: This post is just the beginning. In forthcoming posts, we'll dive deeper into:

Setting Up Your Organisation for Copilot Adoption: Practical tips on preparing your enterprise environment, from licensing and security considerations to training programs.
We'll discuss best practices (like running internal awareness campaigns, defining success metrics, and creating Copilot champions in your teams) to ensure a smooth rollout.

Empowering Your Colleagues: How to foster a culture that embraces AI assistance. This includes enabling continuous learning, sharing prompt techniques and knowledge bases, and addressing any scepticism. We'll cover strategies to support developers in using Copilot effectively, so that everyone from new hires to veteran engineers can amplify their productivity.

Identifying High-Impact Modernisation Areas: Guidance on spotting where GitHub Copilot can add the most value. We'll look at different domains (code, cloud, tests, data) and how to evaluate opportunities, for example using telemetry or feedback to find repetitive tasks suited for AI, or legacy components with high ROI if modernised.

Engage and Share: As you start leveraging Copilot for modernisation, share your experiences and results. Success stories (even small wins like "GitHub Copilot helped reduce our code review times" or "we migrated a component to Azure in one sprint") can build momentum within your organisation and the broader community. We invite you to discuss and ask questions in the comments or in our tech community forums.

Take a look at the new App Modernisation Guidance, a comprehensive, step-by-step playbook designed to help organisations:

- Understand what to modernise and why
- Migrate and rebuild apps with AI-first design
- Continuously optimise with built-in governance and observability

Modernisation is a journey, and AI is the new compass and co-pilot to guide the way. By embracing tools like GitHub Copilot, you position your organisation to break through modernisation barriers that once seemed insurmountable. The result is not just updated software, but a more agile, cloud-ready business and a happier, more productive development team. Now is the time to take that step. Empower your team with Copilot, and unlock the full potential of your applications and your developers. Stay tuned for more insights in our next posts, and let's modernise what's possible together!

Build an AI Image-Caption Generator on Azure App Service with Streamlit and GPT-4o-mini
This tiny app just does one thing: upload an image → get a natural one-line caption. Under the hood:

- Azure AI Vision extracts high-confidence tags from the image.
- Azure OpenAI (GPT-4o-mini) turns those tags into a fluent caption.
- Streamlit provides a lightweight, Python-native UI so you can ship fast.

All code + infra templates: image_caption_app in the App Service AI Samples repo: https://github.com/Azure-Samples/appservice-ai-samples/tree/main/image_caption_app

What are these components?

What is Streamlit? An open-source Python framework to build interactive data/AI apps with just a few lines of code, perfect for quick, clean UIs.

What is Azure AI Vision (Vision API)? A cloud service that analyzes images and returns rich signals like tags with confidence scores, which we use as grounded inputs for captioning.

How it works (at a glance)

1. User uploads a photo in Streamlit.
2. The app calls Azure AI Vision → gets a list of tags (keeps only high-confidence ones).
3. The app sends those tags to GPT-4o-mini → generates a one-line caption.
4. The caption is shown instantly in the browser.

Prerequisites

- Azure subscription — https://azure.microsoft.com/en-us/pricing/purchase-options/azure-account
- Azure CLI — https://learn.microsoft.com/azure/cli/azure/install-azure-cli-linux
- Azure Developer CLI (azd) — https://learn.microsoft.com/azure/developer/azure-developer-cli/install-azd
- Python 3.10+ — https://www.python.org/downloads/
- Visual Studio Code (optional) — https://code.visualstudio.com/download
- Streamlit (optional for local runs) — https://docs.streamlit.io/get-started/installation
- Managed Identity on App Service (recommended) — https://learn.microsoft.com/azure/app-service/overview-managed-identity

Resources you'll deploy

You can create everything manually or with the provided azd template. What you need:

- Azure App Service (Linux) to host the Streamlit app.
- Azure AI Foundry/OpenAI with a gpt-4o-mini deployment for caption generation.
- Azure AI Vision (Computer Vision) for image tagging.
- Managed Identity enabled on the Web App, with RBAC grants so the app can call Vision and OpenAI without secrets.

One-command deploy with azd (recommended)

The sample includes infra under image_caption_app/infra so azd up can provision + deploy in one go.

```bash
# 1) Clone and move into the sample
git clone https://github.com/Azure-Samples/appservice-ai-samples
cd appservice-ai-samples/image_caption_app

# 2) Log in and provision + deploy
azd auth login
azd up
```

Manual path (if you prefer doing it yourself)

1. Create Azure AI Vision and note the endpoint (custom subdomain).
2. Create Azure AI Foundry/OpenAI and deploy gpt-4o-mini.
3. Create App Service (Linux, Python) and enable System-Assigned Managed Identity.
4. Assign roles to the Web App's Managed Identity (a scripted sketch follows below):
   - Cognitive Services OpenAI User on your OpenAI resource.
   - Cognitive Services User on your Vision resource.
5. Add app settings for endpoints and deployment names (see repo), deploy the code, and run.

Startup command (manual setting): if you're configuring the Web App yourself (instead of using the Bicep), set the Startup Command to:

```bash
streamlit run app.py --server.port 8000 --server.address 0.0.0.0
```

Portal path: App Service → Configuration → General settings → Startup Command.

CLI example:

```bash
az webapp config set \
  --name <your-webapp-name> \
  --resource-group <your-rg> \
  --startup-file "streamlit run app.py --server.port 8000 --server.address 0.0.0.0"
```

(The provided Bicep template already sets this for you.)
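For step 4 of the manual path, the role assignments can also be scripted rather than clicked through the portal. A minimal sketch, assuming placeholder identifiers: the principal ID is the web app identity's object ID, and the scopes are your actual resource IDs.

```bash
# Sketch: grant the web app's managed identity access to OpenAI and Vision.
# All <...> values are placeholders to replace with your own identifiers.
az role assignment create \
  --assignee <principal-id> \
  --role "Cognitive Services OpenAI User" \
  --scope "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.CognitiveServices/accounts/<openai-account>"

az role assignment create \
  --assignee <principal-id> \
  --role "Cognitive Services User" \
  --scope "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.CognitiveServices/accounts/<vision-account>"
```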
Code tour (the important bits)

Top-level flow (app.py): first we get tags from Vision, then ask GPT-4o-mini for a one-liner.

```python
tags = extract_tags(image_bytes)
caption = generate_caption(tags)
```

Vision call (utils/vision.py): call the Vision REST API, parse the JSON, and keep high-confidence tags (> 0.6).

```python
response = requests.post(
    VISION_API_URL,
    headers=headers,
    params=PARAMS,
    data=image_bytes,
    timeout=30,
)
response.raise_for_status()
analysis = response.json()

tags = [
    t.get('name')
    for t in analysis.get('tags', [])
    if t.get('name') and t.get('confidence', 0) > 0.6
]
```

Caption generation (utils/openai_caption.py): join the tags and ask GPT-4o-mini for a natural caption.

```python
tag_text = ", ".join(tags)
prompt = f"""
You are an assistant that generates vivid, natural-sounding captions for images.
Create a one-line caption for an image that contains the following: {tag_text}.
"""

response = client.chat.completions.create(
    model=DEPLOYMENT_NAME,
    messages=[
        {"role": "system", "content": "You are a helpful AI assistant."},
        {"role": "user", "content": prompt.strip()}
    ],
    max_tokens=60,
    temperature=0.7
)
return response.choices[0].message.content.strip()
```

Security & auth: Managed Identity by default (recommended)

This sample ships using Managed Identity on App Service, with no keys in config. The Web App's Managed Identity authenticates to Vision and Azure OpenAI via Microsoft Entra ID. Prefer Managed Identity in production; if you need to test locally, you can switch to key-based auth by supplying the service keys in your environment.

Run it locally (optional)

```bash
# From the sample folder
python -m venv .venv && source .venv/bin/activate  # Windows: .venv\Scripts\activate
pip install -r requirements.txt

# Set env vars for endpoints + deployment (and keys if not using MI locally)
streamlit run app.py
```

Repo map

- App + Streamlit UI + helpers: image_caption_app/
- Bicep infrastructure (used by azd up): image_caption_app/infra/

What's next — ways to extend this sample

- Richer vision signals: add object detection, OCR, or brand detection; blend those into the prompt for sharper captions.
- Persistence & gallery: save images to Blob Storage and captions/metadata to Cosmos DB or SQLite; add a Streamlit gallery.
- Performance & cost: cache tags by image hash; cap image size; track tokens/latency.
- Observability: wire up Application Insights with custom events (e.g., caption_generated).

Looking for more Python samples? Check out the repo: https://github.com/Azure-Samples/appservice-ai-samples/tree/main

For more Azure App Service AI samples and best practices, check out the Azure App Service AI integration documentation.