# Building with Azure OpenAI Sora: A Complete Guide to AI Video Generation
In this comprehensive guide, we'll explore how to integrate both the Sora 1 and Sora 2 models from Azure OpenAI Service into a production web application. We'll cover API integration, request body parameters, cost analysis, limitations, and the key differences between using Azure AI Foundry endpoints and OpenAI's native API.

## Table of Contents

1. Introduction to Sora Models
2. Azure AI Foundry vs. OpenAI API Structure
3. API Integration: Request Body Parameters
4. Video Generation Modes
5. Cost Analysis per Generation
6. Technical Limitations & Constraints
7. Resolution & Duration Support
8. Implementation Best Practices

## Introduction to Sora Models

Sora is OpenAI's groundbreaking text-to-video model that generates realistic videos from natural language descriptions. Azure AI Foundry provides access to two versions:

- **Sora 1**: The original model, focused primarily on text-to-video generation, with extensive resolution options (480p to 1080p) and flexible duration (1-20 seconds)
- **Sora 2**: The enhanced version, with native audio generation and multiple generation modes (text-to-video, image-to-video, video-to-video remix), but more constrained resolution options (720p only in public preview)

## Azure AI Foundry vs. OpenAI API Structure

### Key Architectural Differences

Sora 1 uses Azure's traditional deployment-based API structure:

- **Endpoint pattern**: `https://{resource-name}.openai.azure.com/openai/deployments/{deployment-name}/...`
- **Parameters**: Uses Azure-specific naming such as `n_seconds` and `n_variants`, with separate `width`/`height` fields
- **Job management**: Uses `/jobs/{id}` for status polling
- **Content download**: Uses `/video/generations/{generation_id}/content/video`

Sora 2 adopts OpenAI's v1 API format while still being hosted on Azure:

- **Endpoint pattern**: `https://{resource-name}.openai.azure.com/openai/deployments/{deployment-name}/videos`
- **Parameters**: Uses OpenAI-style naming such as `seconds` (string) and `size` (a combined dimension string like `"1280x720"`)
- **Job management**: Uses `/videos/{video_id}` for status polling
- **Content download**: Uses `/videos/{video_id}/content`

### Why This Matters

This architectural difference requires conditional request formatting in your code:

```javascript
const isSora2 = deployment.toLowerCase().includes('sora-2');

if (isSora2) {
  requestBody = {
    model: deployment,
    prompt,
    size: `${width}x${height}`,   // Combined format
    seconds: duration.toString(), // String type
  };
} else {
  requestBody = {
    model: deployment,
    prompt,
    height,                         // Separate dimensions
    width,
    n_seconds: duration.toString(), // Azure naming
    n_variants: variants,
  };
}
```
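The same conditional applies to status polling and downloads, since the two models use different URL shapes. Here is a minimal sketch built directly from the endpoint patterns listed above; the function name and parameters are illustrative, not part of the API:

```javascript
// Derive the status-polling URL for a job, following the endpoint
// patterns listed above. Name and signature are illustrative.
function statusUrl(baseUrl, id, isSora2, apiVersion) {
  return isSora2
    ? `${baseUrl}/videos/${id}?api-version=${apiVersion}` // Sora 2: /videos/{video_id}
    : `${baseUrl}/jobs/${id}?api-version=${apiVersion}`;  // Sora 1: /jobs/{id}
}
```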
## API Integration: Request Body Parameters

### Sora 1 API Parameters

Standard text-to-video request:

```json
{
  "model": "sora-1",
  "prompt": "Wide shot of a child flying a red kite in a grassy park, golden hour sunlight, camera slowly pans upward.",
  "height": "720",
  "width": "1280",
  "n_seconds": "12",
  "n_variants": "2"
}
```

Parameter details:

- `model` (string, required): Your Azure deployment name
- `prompt` (string, required): Natural language description of the video (max 32,000 characters)
- `height` (string, required): Video height in pixels
- `width` (string, required): Video width in pixels
- `n_seconds` (string, required): Duration (1-20 seconds)
- `n_variants` (string, optional): Number of variations to generate (1-4, constrained by resolution)

### Sora 2 API Parameters

Text-to-video request:

```json
{
  "model": "sora-2",
  "prompt": "A serene mountain landscape with cascading waterfalls, cinematic drone shot",
  "size": "1280x720",
  "seconds": "12"
}
```

Image-to-video request (uses FormData):

```javascript
const formData = new FormData();
formData.append('model', 'sora-2');
formData.append('prompt', 'Animate this image with gentle wind movement');
formData.append('size', '1280x720');
formData.append('seconds', '8');
formData.append('input_reference', imageFile); // JPEG/PNG/WebP
```

Video-to-video remix request:

- Endpoint: `POST .../videos/{video_id}/remix`
- Body: Only `{ "prompt": "your new description" }`
- The original video's structure, motion, and framing are reused while the new prompt is applied

Parameter details:

- `model` (string, optional): Your deployment name
- `prompt` (string, required): Video description
- `size` (string, optional): Either `"720x1280"` or `"1280x720"` (defaults to `"720x1280"`)
- `seconds` (string, optional): `"4"`, `"8"`, or `"12"` (defaults to `"4"`)
- `input_reference` (file, optional): Reference image for image-to-video mode
- `remix_video_id` (string, URL parameter): ID of the video to remix

## Video Generation Modes

### 1. Text-to-Video (Both Models)

The foundational mode, where you provide a text prompt describing the desired video.

Implementation:

```javascript
const response = await fetch(endpoint, {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    'api-key': apiKey,
  },
  body: JSON.stringify({
    model: deployment,
    prompt: "A train journey through mountains with dramatic lighting",
    size: "1280x720",
    seconds: "12",
  }),
});
```

Best practices:

- Include shot type (wide, close-up, aerial)
- Describe subject, action, and environment
- Specify lighting conditions (golden hour, dramatic, soft)
- Add camera movement if desired (pans, tilts, tracking shots)

### 2. Image-to-Video (Sora 2 Only)

Generate a video anchored to, or starting from, a reference image.

Key requirements:

- Supported formats: JPEG, PNG, WebP
- Image dimensions must exactly match the selected video resolution
- Our implementation automatically resizes uploaded images to match

Implementation detail:

```javascript
// Resize image to match video dimensions
const targetWidth = parseInt(width);
const targetHeight = parseInt(height);
const resizedImage = await resizeImage(inputReference, targetWidth, targetHeight);

// Send as multipart/form-data
formData.append('input_reference', resizedImage);
```
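Putting those pieces together, a full image-to-video submission might look like the sketch below. This is a minimal illustration, not the post's production code: `submitImageToVideo` is a hypothetical wrapper, and it assumes the `resizeImage` helper defined later in this post.

```javascript
// Minimal sketch of an end-to-end image-to-video submission.
// Assumes the resizeImage helper shown later and a Sora 2 deployment.
async function submitImageToVideo(endpoint, apiKey, imageFile, prompt) {
  const [width, height] = [1280, 720]; // Must match the requested size
  const resized = await resizeImage(imageFile, width, height);

  const formData = new FormData();
  formData.append('model', 'sora-2');
  formData.append('prompt', prompt);
  formData.append('size', `${width}x${height}`);
  formData.append('seconds', '8');
  formData.append('input_reference', resized);

  // Note: no Content-Type header here; the browser sets the multipart boundary
  const response = await fetch(endpoint, {
    method: 'POST',
    headers: { 'api-key': apiKey },
    body: formData,
  });
  if (!response.ok) {
    throw new Error(`Submission failed: ${await response.text()}`);
  }
  return response.json(); // Contains the job/video id to poll
}
```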
### 3. Video-to-Video Remix (Sora 2 Only)

Create variations of existing videos while preserving their structure and motion.

Use cases:

- Change weather conditions in the same scene
- Modify the time of day while keeping the camera movement
- Swap subjects while maintaining composition
- Adjust artistic style or color grading

Endpoint structure:

```
POST {base_url}/videos/{original_video_id}/remix?api-version=2024-08-01-preview
```

Implementation:

```javascript
let requestEndpoint = endpoint;
if (isSora2 && remixVideoId) {
  const [baseUrl, queryParams] = endpoint.split('?');
  const root = baseUrl.replace(/\/videos$/, '');
  requestEndpoint = `${root}/videos/${remixVideoId}/remix${queryParams ? '?' + queryParams : ''}`;
}
```

## Cost Analysis per Generation

### Sora 1 Pricing Model

- **Base rate**: ~$0.05 per second per variant at 720p
- **Resolution scaling**: Cost scales linearly with pixel count

Formula:

```javascript
const basePrice = 0.05;
const basePixels = 1280 * 720; // Reference resolution
const currentPixels = width * height;
const resolutionMultiplier = currentPixels / basePixels;
const totalCost = basePrice * duration * variants * resolutionMultiplier;
```

Examples:

- 720p (1280×720), 12 seconds, 1 variant: $0.60
- 1080p (1920×1080), 12 seconds, 1 variant: $1.35
- 720p, 12 seconds, 2 variants: $1.20

### Sora 2 Pricing Model

- **Flat rate**: $0.10 per second per variant (no resolution scaling in public preview)

Formula:

```javascript
const totalCost = 0.10 * duration * variants;
```

Examples:

- 720p (1280×720), 4 seconds: $0.40
- 720p (1280×720), 12 seconds: $1.20
- 720p (720×1280), 8 seconds: $0.80

Note: Since Sora 2 currently supports only 720p in public preview, resolution doesn't affect cost; only duration matters.

### Cost Comparison

| Scenario | Sora 1 (720p) | Sora 2 (720p) | Winner |
| --- | --- | --- | --- |
| 4s video | $0.20 | $0.40 | Sora 1 |
| 12s video | $0.60 | $1.20 | Sora 1 |
| 12s + audio | N/A (no audio) | $1.20 | Sora 2 (unique) |
| Image-to-video | N/A | $0.40-$1.20 | Sora 2 (unique) |

Recommendation: Use Sora 1 for cost-effective silent videos at various resolutions. Use Sora 2 when you need audio, image/video inputs, or remix capabilities.
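To keep both pricing rules in one place, the two formulas above can be folded into a single helper. This is a minimal sketch using the rates quoted in this post; the rates are preview-era figures, so treat them as assumptions and check the current pricing page before relying on them:

```javascript
// Unified cost estimate for both models, based on the rates described above.
// Rates are assumptions taken from this post's examples, not authoritative pricing.
function estimateCost({ model, width, height, seconds, variants = 1 }) {
  if (model.toLowerCase().includes('sora-2')) {
    return 0.10 * seconds * variants; // Flat rate; 720p only in preview
  }
  const resolutionMultiplier = (width * height) / (1280 * 720);
  return 0.05 * seconds * variants * resolutionMultiplier;
}

// Example: 12s at 1080p on Sora 1 -> 1.35
console.log(estimateCost({ model: 'sora-1', width: 1920, height: 1080, seconds: 12 }));
```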
## Technical Limitations & Constraints

### Sora 1 Limitations

- **Resolution options**: 9 supported resolutions from 480×480 to 1920×1080, including square, portrait, and landscape formats. Full list: 480×480, 480×854, 854×480, 720×720, 720×1280, 1280×720, 1080×1080, 1080×1920, 1920×1080
- **Duration**: Flexible: 1 to 20 seconds, any integer value within range
- **Variants**: Depends on resolution:
  - 1080p: Variants disabled (`n_variants` must be 1)
  - 720p: Max 2 variants
  - Other resolutions: Max 4 variants
- **Concurrent jobs**: Maximum 2 jobs running simultaneously
- **Job expiration**: Videos expire 24 hours after generation
- **Audio**: No audio generation (silent videos only)

### Sora 2 Limitations

- **Resolution options (public preview)**: Only 2 options: 720×1280 (portrait) or 1280×720 (landscape). No square formats and no 1080p support in the current preview
- **Duration**: Fixed options only: 4, 8, or 12 seconds. No custom durations; defaults to 4 seconds if not specified
- **Variants**: Not prominently supported in the current API documentation; the focus is on single high-quality generations with audio
- **Concurrent jobs**: Maximum 2 jobs (same as Sora 1)
- **Job expiration**: 24 hours (same as Sora 1)
- **Audio**: Native audio generation included (dialogue, sound effects, ambience)

### Shared Constraints

- **Concurrent processing**: Both models enforce a limit of 2 concurrent video jobs per Azure resource. You must wait for one job to complete before starting a third.
- **Job lifecycle**: queued → preprocessing → processing/running → completed
- **Download window**: Videos are available for 24 hours after completion. After expiration, you must regenerate the video.
- **Generation time**: Typically 1-5 minutes, depending on resolution, duration, and API load; it can occasionally take longer during high demand.

## Resolution & Duration Support Matrix

### Sora 1 Support Matrix

| Resolution | Aspect Ratio | Max Variants | Duration Range | Use Case |
| --- | --- | --- | --- | --- |
| 480×480 | Square | 4 | 1-20s | Social thumbnails |
| 480×854 | Portrait | 4 | 1-20s | Mobile stories |
| 854×480 | Landscape | 4 | 1-20s | Quick previews |
| 720×720 | Square | 4 | 1-20s | Instagram posts |
| 720×1280 | Portrait | 2 | 1-20s | TikTok/Reels |
| 1280×720 | Landscape | 2 | 1-20s | YouTube shorts |
| 1080×1080 | Square | 1 | 1-20s | Premium social |
| 1080×1920 | Portrait | 1 | 1-20s | Premium vertical |
| 1920×1080 | Landscape | 1 | 1-20s | Full HD content |

### Sora 2 Support Matrix

| Resolution | Aspect Ratio | Duration Options | Audio | Generation Modes |
| --- | --- | --- | --- | --- |
| 720×1280 | Portrait | 4s, 8s, 12s | ✅ Yes | Text, Image, Video Remix |
| 1280×720 | Landscape | 4s, 8s, 12s | ✅ Yes | Text, Image, Video Remix |

Note: Sora 2's limited resolution options in public preview are expected to expand in future releases.

## Implementation Best Practices

### 1. Job Status Polling Strategy

Implement adaptive backoff to avoid overwhelming the API:

```javascript
const maxAttempts = 180; // Upper bound on polling attempts
let attempts = 0;
const baseDelayMs = 3000; // Start with 3 seconds

while (attempts < maxAttempts) {
  const response = await fetch(statusUrl, {
    headers: { 'api-key': apiKey },
  });

  if (response.status === 404) {
    // Job not ready yet, wait longer
    const delayMs = Math.min(15000, baseDelayMs + attempts * 1000);
    await new Promise(r => setTimeout(r, delayMs));
    attempts++;
    continue;
  }

  const job = await response.json();

  // Check completion (different status values for Sora 1 vs 2)
  const isCompleted = isSora2
    ? job.status === 'completed'
    : job.status === 'succeeded';
  if (isCompleted) break;

  // Adaptive backoff
  const delayMs = Math.min(15000, baseDelayMs + attempts * 1000);
  await new Promise(r => setTimeout(r, delayMs));
  attempts++;
}
```

### 2. Handling Different Response Structures

Sora 1 video download:

```javascript
const generations = Array.isArray(job.generations) ? job.generations : [];
const genId = generations[0]?.id;
const videoUrl = `${root}/${genId}/content/video`;
```

Sora 2 video download:

```javascript
const videoUrl = `${root}/videos/${jobId}/content`;
```

### 3. Error Handling

```javascript
try {
  const response = await fetch(endpoint, fetchOptions);
  if (!response.ok) {
    const error = await response.text();
    throw new Error(`Video generation failed: ${error}`);
  }
  // ... handle successful response
} catch (error) {
  console.error('[VideoGen] Error:', error);
  // Implement retry logic or user notification
}
```
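The comment above leaves the retry logic open. One simple way to fill it in — shown here as a sketch rather than the post's production code — is a small wrapper that retries transient failures with exponential backoff; `submitGeneration` in the usage comment is a hypothetical name:

```javascript
// Illustrative helper: retry an async call on transient failures.
async function withRetry(fn, { retries = 3, baseDelayMs = 2000 } = {}) {
  for (let attempt = 0; attempt <= retries; attempt++) {
    try {
      return await fn();
    } catch (error) {
      if (attempt === retries) throw error; // Out of retries
      const delayMs = baseDelayMs * 2 ** attempt; // 2s, 4s, 8s...
      console.warn(`[VideoGen] Attempt ${attempt + 1} failed, retrying in ${delayMs}ms`);
      await new Promise(r => setTimeout(r, delayMs));
    }
  }
}

// Usage: wrap the submission call
// const job = await withRetry(() => submitGeneration(requestBody));
```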
### 4. Image Preprocessing for Image-to-Video

Always resize images to match the target video resolution:

```typescript
async function resizeImage(file: File, targetWidth: number, targetHeight: number): Promise<File> {
  return new Promise((resolve, reject) => {
    const img = new Image();
    const canvas = document.createElement('canvas');
    const ctx = canvas.getContext('2d');

    img.onload = () => {
      canvas.width = targetWidth;
      canvas.height = targetHeight;
      ctx.drawImage(img, 0, 0, targetWidth, targetHeight);
      canvas.toBlob((blob) => {
        if (blob) {
          const resizedFile = new File([blob], file.name, { type: file.type });
          resolve(resizedFile);
        } else {
          reject(new Error('Failed to create resized image blob'));
        }
      }, file.type);
    };
    img.onerror = () => reject(new Error('Failed to load image'));
    img.src = URL.createObjectURL(file);
  });
}
```

### 5. Cost Tracking

Implement cost estimation before generation and tracking after:

```javascript
// Pre-generation estimate
const estimatedCost = calculateCost(width, height, duration, variants, soraVersion);

// Save generation record
await saveGenerationRecord({
  prompt,
  soraModel: soraVersion,
  duration: parseInt(duration),
  resolution: `${width}x${height}`,
  variants: parseInt(variants),
  generationMode: mode,
  estimatedCost,
  status: 'queued',
  jobId: job.id,
});

// Update after completion
await updateGenerationStatus(jobId, 'completed', { videoId: finalVideoId });
```

### 6. Progressive User Feedback

Provide detailed status updates during the generation process:

```typescript
const statusMessages: Record<string, string> = {
  'preprocessing': 'Preprocessing your request...',
  'running': 'Generating video...',
  'processing': 'Processing video...',
  'queued': 'Job queued...',
  'in_progress': 'Generating video...',
};

onProgress?.(statusMessages[job.status] || `Status: ${job.status}`);
```

## Conclusion

Building with Azure OpenAI's Sora models requires understanding the nuanced differences between Sora 1 and Sora 2, both in API structure and capabilities. Key takeaways:

- **Choose the right model**: Sora 1 for resolution flexibility and cost-effectiveness; Sora 2 for audio, image inputs, and remix capabilities
- **Handle API differences**: Implement conditional logic for parameter formatting and status polling based on the model version
- **Respect limitations**: Plan around concurrent job limits, resolution constraints, and 24-hour expiration windows
- **Optimize costs**: Calculate estimates upfront and track actual usage for better budget management
- **Provide great UX**: Implement adaptive polling, progressive status updates, and clear error messages

The future of AI video generation is exciting, and Azure AI Foundry provides production-ready access to these powerful models. As Sora 2 matures and its limitations are lifted (especially the resolution options), we'll see even more creative applications emerge.

Resources:

- Azure AI Foundry Sora Documentation
- OpenAI Sora API Reference
- Azure OpenAI Service Pricing

This blog post is based on real-world implementation experience building LemonGrab, my AI video generation platform that integrates both Sora 1 and Sora 2 through Azure AI Foundry. The code examples are extracted from production usage.
# Logic Apps Aviators Newsletter - February 2026

In this issue:

- Ace Aviator of the Month
- News from our product group
- News from our community

## Ace Aviator of the Month

February 2026's Ace Aviator: Camilla Bielk

**What's your role and title? What are your responsibilities?**

I'm a developer and solution architect, working as a consultant (at XLENT) in teams involved across the integration lifecycle, from understanding business needs and building new solutions to operations. Previously I worked a lot with BizTalk and with customers using both Azure and BizTalk. In my current role, I work exclusively with Azure.

**Can you give us some insights into your day-to-day activities and what a typical day in your role looks like?**

A typical day starts by syncing operational status with the team to ensure everything is running as expected (Logic Apps, Service Bus, Functions, API Management, etc.) and discussing any findings from the previous day. Then I continue with ongoing work, ranging from stakeholder meetings and designing new integration flows to C# development, operations, and maintenance of the existing landscape. I enjoy being involved across the entire chain. Working closely with operational issues provides valuable input into design decisions, and vice versa, which strengthens me as a designer and architect.

**What motivates and inspires you to be an active member of the Aviators/Microsoft community?**

The openness to feedback and knowledge sharing from both community developers and the product team motivates me to stay active, continuously learn, and become the best consultant I can be. Passing knowledge to new team members is rewarding and completes the circle. Integration conferences are inspiring: networking with amazing people and discussing the technology we use every day. That's also why we started the Nordic Integration Summit: to give more people the chance to take part in such an inspiring event.

**Looking back, what advice do you wish you had been given earlier that you'd now share with those looking to get into STEM/technology?**

Your social skills are more valuable than you might think. Teamwork is everything, and understanding business needs, even from non-technical colleagues, is just as important as technology. Listen to your heart and find environments where you can thrive. Programming languages change over time; the real takeaway is mastering common concepts and patterns and developing the ability to learn continuously. Stay curious and embrace new things as they come!

**What has helped you grow professionally?**

100% of my professional growth comes from working with kind, humble, and collaborative people: teams where knowledge is shared freely in all directions, mistakes are allowed, and learning from them is encouraged.

**If you had a magic wand that could create a feature in Logic Apps, what would it be and why?**

Better cost transparency: clearly showing which actions drive costs (executions, retries, data volume, logging), with visibility before production deployment, plus design-time guidance that warns about expensive operations, anti-patterns, and inefficient configurations.

## News from our product group
**Introducing Unit Test Agent Profiles for Logic Apps & Data Maps**

Focused unit test agent profiles help Logic Apps Standard teams discover workflows and data maps, write reusable specifications, generate typed mocks and test data, and implement MSTest suites against the Automated Test SDK. The approach promotes spec-first testing, consistent artifacts, and reliable validations across scenarios (happy path, error handling, boundaries). The project demonstrates how GitHub Copilot custom agents and prompts can accelerate unit test authoring while enforcing constraints for maintainability in enterprise integrations.

**Automated Test Framework - Missing Tests in Test Explorer**

If Logic Apps Standard tests disappear from VS Code's Test Explorer, the cause is typically an MSTest version mismatch introduced by a recent C# Dev Kit update. The fix is to update the package references in the project (MSTest, Test SDK, and related dependencies) to supported versions. After adjusting the packages and restarting VS Code, tests reappear and run as expected. The extension is being updated, but existing projects should apply the manual changes to restore a stable testing experience.

**Upcoming Agentic Azure Logic Apps Workshops**

Free workshops introduce agentic business processes in Azure Logic Apps, including MCP Server integrations and agent loop patterns. Sessions cover connecting the enterprise with MCP Servers from Copilot Studio and building agentic workflows in a day using Logic Apps (Standard). Registration links are provided, with dates set in January. These events help practitioners explore agent tools, orchestration patterns, and practical scenarios for intelligent automation in modern integration landscapes.

## News from our community

**Enterprise AI ≠ Copilot**
Post by Al Ghoniem, MBA

Deploying Copilot is not the same as building an enterprise AI strategy. This article distinguishes between adopting a product and developing a core organizational capability. It explores why many AI rollouts stall when treated as point solutions and outlines what it takes to embed AI as a strategic, scalable function across the enterprise. For leaders evaluating their AI roadmaps, this piece offers a framework for thinking beyond tools toward sustainable transformation.

**BizTalk Server and WinSCP Error**
Post by Sandro Pereira

A common SFTP adapter issue resurfaces when BizTalk Server cumulative updates are applied. This troubleshooting guide explains why the WinSCPnet assembly fails to load after upgrading from CU5 to CU6 and provides a version compatibility matrix for BizTalk Server 2020. It includes step-by-step instructions to resolve the error by copying the correct WinSCP binaries to the BizTalk installation folder, along with an automated PowerShell script for streamlined remediation.

**More on finding application registrations used by Logic Apps**
Post by Mikael Sand

Building on earlier work around API connection discovery, this post extends KQL queries to find Logic Apps that authenticate via Client ID and secret in HTTP actions. Using Azure Resource Graph Explorer, the technique scans workflow parameters to identify application registration references. While the approach assumes parameters are used for credentials, it provides a practical method for auditing which Logic Apps depend on a given app registration across subscriptions.

**MCP Servers in Azure Logic Apps Agent Loops (Step-by-Step)**
Video by Stephen W. Thomas

This walkthrough demonstrates how to move tool logic out of Logic App agent loops and into reusable Model Context Protocol servers. The video covers setting up an MCP server, registering tools, and invoking them from within an agent workflow. By decoupling tool definitions from orchestration logic, developers can build modular, maintainable agentic systems that scale across multiple workflows and scenarios.
**From Rigid Choreography to Intelligent Collaboration: Agentic Orchestration as the Evolution of SOA**
Post by Steef-Jan Wiggers

Integration has evolved from rigid, deterministic workflows to adaptive, goal-driven orchestrations powered by intelligent agents. This article traces the journey from BizTalk-era SOA composites to modern agentic patterns, explaining how AI-enabled orchestrators dynamically plan, self-correct, and communicate intent rather than follow fixed sequences. With examples from Azure Logic Apps Agent Loop, it shows how integration professionals can leverage existing connectors and APIs as tools within reasoning-based architectures.

**Azure Integrations That Actually Work in Production**
Post by Devarajan Gurusamy

Architecture diagrams often look perfect in presentations but fail under real-world conditions. This article examines what it takes to build Azure integrations that survive production workloads, covering resilience patterns, error handling strategies, and operational considerations that separate proof-of-concepts from enterprise-grade solutions. It offers practical guidance for teams moving beyond demos toward reliable, maintainable integration implementations.

**Configuring BizTalk Server Backup Jobs for Azure Blob Storage**
Post by Sandro Pereira

Modernizing BizTalk infrastructure can start with small, targeted improvements. This guide shows how to redirect native BizTalk backup jobs to Azure Blob Storage using SQL Server's BACKUP TO URL feature. It covers generating SAS tokens, creating SQL credentials, updating job steps, and validating backups. The approach improves disaster recovery posture with geo-redundant cloud storage while keeping the BizTalk environment unchanged: no new components or custom code required.

**Building an AI Agent with Logic Apps Consumption and Agent Loop**
Post by Stephen W. Thomas

Logic Apps Consumption with Agent Loop offers a fast path to practical AI agents, particularly for experimentation and low-volume workloads. This post walks through building a stateful poker-playing agent, demonstrating tool orchestration, agent instructions, and execution tracing. It compares the Consumption and Standard tiers, explores cost efficiency, and shows how to offload reasoning to external models. At roughly a dollar for 25 runs, it's an accessible entry point for agentic development.

**Microsoft Agent Framework**
Post by Al Ghoniem, MBA

Microsoft's open-source Agent Framework provides a comprehensive toolkit for building, orchestrating, and deploying AI agents in Python and .NET. It features graph-based workflows with streaming and checkpointing, multi-agent patterns, built-in observability via OpenTelemetry, and support for multiple LLM providers. The repository includes quickstart samples, migration guides from Semantic Kernel and AutoGen, and experimental labs for benchmarking and reinforcement learning.

**Storage Account: You do not have permissions to list the data using your user account with Entra ID**
Post by Sandro Pereira

Being an Owner of an Azure Storage Account does not automatically grant access to its data. This common confusion trips up many developers working with Logic Apps and blob storage. The article explains the distinction between management roles and data-plane permissions, then walks through assigning Storage Blob Data roles via Access Control (IAM) to resolve the "You do not have permissions" error when authenticating with Microsoft Entra ID.
**Using Office 365 Outlook Connector in Logic Apps Standard**
Post by Srikanth Gunnala

When configuring Office 365 Outlook (V2) with Managed Identity in Logic Apps Standard, the workflow designer may fail to auto-bind existing API connections, even when the runtime works correctly. This post details the issue, explaining how a null referenceName in the workflow JSON causes the trigger to appear unbound. The fix involves manually setting the connection reference name in the workflow definition, after which the designer immediately recognizes the connection.

**Inside Logic Apps Standard: Understanding Compute Units (CU) for Storage Scaling**
Post by Daniel Jonathan

High-volume Logic Apps can hit Azure Storage throttling limits. This deep dive explains how Compute Units enable horizontal storage scaling by distributing workflow execution data across up to 32 storage accounts. It covers CU routing via Run ID suffixes, table distribution patterns, and configuration via host.json. Understanding this architecture is essential for building monitoring tools, querying run histories programmatically, or troubleshooting performance at scale.

**E59 - Roles & Personas**
Video by Sebastian Meyer

This episode recaps highlights from SAP TechEd and Microsoft Ignite, with a focus on the SAP Innovation Guide and Logic Apps announcements. It explores roles and personas in the integration space, discussing how different team members contribute to enterprise integration projects. The conversation provides context for practitioners following both the SAP and Microsoft ecosystems and offers insights into recent platform developments relevant to hybrid integration scenarios.

**Architecting Trust: Leveraging Microsoft Foundry to Solve AI Governance Challenges**
Post by Kent Weare

As organizations scale AI deployments, governance becomes critical. This article explores how Microsoft Foundry addresses common AI governance challenges, including model management, access controls, auditability, and compliance. It outlines architectural patterns for building trust into AI systems from the ground up and provides guidance for teams navigating the balance between innovation velocity and responsible AI practices in enterprise environments.
# Introducing Unit Test Agent Profiles for Logic Apps & Data Maps

I did some experimentation this week with GitHub Copilot custom agents and custom prompts. The result was a couple of agents to help with the creation of unit tests for Logic Apps Standard workflows and data maps.

These agent profiles are designed to keep test artifacts consistent and maintainable by separating discovery, specification, data generation, and implementation. Specifications serve as the single source of truth; generated code and data follow from the specs.

- **Discover**: Enumerate workflows/maps, triggers/actions, dependencies, and control flow to build a testability inventory.
- **Spec (Speckit)**: Write reusable test specifications (intent, inputs/mocks, expected outcomes) stored in a predictable location.
- **Cases**: Design scenario catalogs with categories (Happy Path, Error Handling, Boundary Conditions, etc.).
- **Test Data**: Generate typed mocks (Logic Apps) or input/expected data files (Data Maps) per scenario.
- **Implement**: Produce MSTest code against the Automated Test SDK on net8.0, enforcing strict constraints for reliability.
- **Project Batch**: Run operations across all workflows/maps with pre-checks, scaffolding verification, and summary reporting.

ℹ️ You can find this project on GitHub: https://github.com/wsilveiranz/logicapps-unittest-custom-agent. This is an example of how those agents can be created using what is available in GitHub Copilot, so feel free to fork it and adjust it to your own requirements.

## Agent Profiles Available

### Logic Apps Workflow Unit Test Author

Specialist agent for authoring Logic Apps Standard workflow unit tests using MSTest (net8.0) and the Automated Test SDK. Produces spec-first test cases, typed mocks, and runnable MSTest implementations.

How it works:

1. **Discover workflows**: Locate workflow.json, summarize triggers/actions, and identify external dependencies to mock.
2. **Spec-first**: Write reusable specs capturing intent, mock plan, inputs, and expected outcomes.
3. **Create cases**: Build scenario catalogs (Happy Path, Error Handling, Boundary Conditions, etc.).
4. **Generate test data**: Produce typed MockOutput and ActionMock class instances mapped to action names. This step is optional, as implementation will create test data if not available.
5. **Implement tests**: Generate MSTest code enforcing SDK constraints and required annotations.
6. **Validate**: Build and run tests; refine specs and data as needed.
7. **Batch (optional)**: Apply the flow across all workflows once scaffolding is verified.

| Skill | Description | Prompt File |
| --- | --- | --- |
| Discover Workflows | Discover workflow definitions, summarize triggers/actions, and produce a mock inventory. | .github/prompts/la-unit-tests-discover.prompt.md |
| Create Test Cases | Create unit test cases from a workflow definition; define intent, inputs, mock plan, and expected outcomes. | .github/prompts/la-unit-tests-create-cases.prompt.md |
| Write Speckit Specs | Write and maintain Speckit-style specification documents for reusable scenarios. | .github/prompts/la-unit-tests-speckit-specs.prompt.md |
| Implement Tests | Implement MSTest unit tests using the Automated Test SDK with enforced constraints. | .github/prompts/la-unit-tests-implement.prompt.md |
| Generate Test Data | Generate trigger/action mock payloads and typed outputs for success and failure variants. | .github/prompts/la-unit-tests-generate-test-data.prompt.md |
| Project Batch | Execute project-wide operations with scaffolding verification and roll-up reporting. | .github/prompts/la-unit-tests-project-batch.prompt.md |

### Logic Apps Data Map Unit Test Author

Specialist agent for authoring Data Map unit tests from LML or XSLT using MSTest (net8.0).
Produces spec-first test cases, sample input/expected data, and MSTest implementations.

How it works:

1. **Discover data maps**: Locate Artifacts/MapDefinitions/*.lml and Artifacts/Maps/*.xslt; summarize transformations and schema requirements.
2. **Spec-first**: Write reusable specs capturing intent, input data, expected output, and validation criteria.
3. **Create cases**: Build scenario catalogs (Happy Path, Empty/Missing, Boundary Values, Special Characters, Large Data, etc.).
4. **Generate test data**: Produce representative input files and expected output files aligned to schemas and mapping rules. This step is optional, as implementation will create test data if not available.
5. **Implement tests**: Generate MSTest classes using DataMapTestExecutor.
6. **Validate**: Build and run tests; refine specs and data as needed.
7. **Batch (optional)**: Apply the flow across all maps; verify project scaffolding first.

| Skill | Description | Prompt File |
| --- | --- | --- |
| Discover Data Maps | Discover LML/XSLT maps and schemas; summarize transformation structure and complexity. | .github/prompts/dm-unit-tests-discover.prompt.md |
| Create Test Cases | Design scenarios covering happy path, edge cases, and error handling. | .github/prompts/dm-unit-tests-create-cases.prompt.md |
| Write Speckit Specs | Write and maintain Speckit-style specs with mapping rules and validations. | .github/prompts/dm-unit-tests-speckit-specs.prompt.md |
| Implement Tests | Implement MSTest suites using DataMapTestExecutor with required defaults. | .github/prompts/dm-unit-tests-implement.prompt.md |
| Generate Test Data | Generate input and expected output files matched to map rules and schemas. | .github/prompts/dm-unit-tests-generate-test-data.prompt.md |
| Project Batch | Execute project-wide operations with scaffolding and summary reporting. | .github/prompts/dm-unit-tests-project-batch.prompt.md |

## FAQ

**Why Speckit-style specs?**
Specs are human-readable contracts that drive consistent test implementation across teams and time. They decouple scenario design from code generation, improving reuse.

**Can I run batch operations?**
Yes. Use the project-batch profiles to orchestrate discovery and implementation across all workflows/maps once scaffolding is verified.

**What if scaffolding is missing?**
The Implement and Project Batch agents will stop and report the exact folders/files you need to create (via the Logic Apps Designer's Create Unit Test command or manual steps).
# Announcing the General Availability (GA) of the Premium v2 tier of Azure API Management

Superior capacity, the highest entity limits, unlimited included calls, and the most comprehensive feature set distinguish the Premium v2 tier from the other API Management tiers. Customers rely on the Premium v2 tier for running enterprise-wide API programs at scale, with high availability and performance.

The Premium v2 tier has a new architecture that eliminates management traffic from the customer VNet, making private networking much more secure and easier to set up. During the creation of a Premium v2 instance, you can choose between VNet injection and VNet integration (introduced in the Standard v2 tier) options.

In addition, today we are also adding three new features to Premium v2:

- **Inbound Private Link**: You can now enable private endpoint connectivity to restrict inbound access to your Premium v2 instance. It can be enabled along with VNet injection or VNet integration, or without a VNet.
- **Availability zone support**: Premium v2 now supports availability zones (zone redundancy) to enhance the reliability and resilience of your API gateway.
- **Custom CA certificates**: The Azure API Management v2 gateway can now validate TLS connections with the backend service using custom CA certificates.

## New and improved VNet injection

Using VNet injection in Premium v2 no longer requires configuring routes or service endpoints. Customers can secure their API workloads without impacting API Management dependencies, while Microsoft can secure the infrastructure without interfering with customer API workloads. In short, the new VNet injection implementation enables both parties to manage network security and configuration settings independently and without affecting each other.

You can now configure your APIs with complete networking flexibility: force-tunnel all outbound traffic to on-premises, send all outbound traffic through an NVA, or add a WAF device to monitor all inbound traffic to your API Management Premium v2, all without constraints.

## Inbound Private Link

Customers can now configure an inbound private endpoint for their API Management Premium v2 instance to let API consumers securely access the API Management gateway over Azure Private Link. The private endpoint uses an IP address from the Azure virtual network in which it's hosted. Network traffic between a client on your private network and API Management traverses the virtual network and a Private Link on the Microsoft backbone network, eliminating exposure to the public internet. Further, you can configure custom DNS settings or an Azure DNS private zone to map the API Management hostname to the endpoint's private IP address.

With a private endpoint and Private Link, you can:

- Create multiple Private Link connections to an API Management instance.
- Use the private endpoint to send inbound traffic on a secure connection.
- Apply different API Management policies based on whether traffic comes from the private endpoint.
- Limit incoming traffic to private endpoints only, preventing data exfiltration.
- Combine with inbound virtual network injection or outbound virtual network integration to provide end-to-end network isolation of your API Management clients and backend services.

More details can be found here.

Today, only the API Management instance's Gateway endpoint supports inbound Private Link connections. Each API Management instance can support at most 100 Private Link connections.

## Availability zones

Azure API Management Premium v2 now supports availability zone (AZ) redundancy to enhance the reliability and resilience of your API gateway.
When deploying an API Management instance in an AZ-enabled region, users can choose to enable zone redundancy. This distributes the service's units, including the gateway, management plane, and developer portal, across multiple physically separate AZs within that region. Learn how to enable AZs here.

## CA certificates

If the API Management gateway needs to connect to backends secured with TLS certificates issued by private certificate authorities (CAs), you need to configure custom CA certificates in the API Management instance. Custom CA certificates can be added and managed as authorization credentials in the Backend entities. The Backend entity has been extended with new properties that let customers specify a list of certificate thumbprints, or subject name + issuer thumbprint pairs, that the gateway should trust when establishing a TLS connection with the associated backend endpoint. More details can be found here.

## Region availability

The Premium v2 tier is now generally available in six public regions (Australia East, East US 2, Germany West Central, Korea Central, Norway East, and UK South), with additional regions coming soon. For pricing information and regional availability, please visit the API Management pricing page.

## Learn more

- API Management v2 tiers FAQ
- API Management v2 tiers documentation
- API Management overview documentation
# Logic Apps Aviators Newsletter - January 2026

In this issue:

- Ace Aviator of the Month
- News from our product group
- Community Playbook
- News from our community

## Ace Aviator of the Month

January's Ace Aviator: Sagar Sharma

**What's your role and title? What are your responsibilities?**

I'm a cross-domain Business Solution Architect specializing in delivering new business capabilities to customers. I design end-to-end architectures, especially on Azure platforms, and also work in the integration domain using Azure Integration Services. My role involves making architectural decisions, defining standards, ensuring platform reliability, guiding teams, and helping organizations transition from legacy integration systems to modern cloud-native patterns.

**Can you give us some insights into your day-to-day activities and what a typical day in your role looks like?**

My day usually blends architecture work with hands-on collaboration. I review integration designs, refine patterns, help teams troubleshoot integration flows, and ensure deployments run smoothly through DevOps pipelines. A good part of my time is spent translating business needs into integration patterns and making sure the overall platform stays secure, scalable, and maintainable.

**What motivates and inspires you to be an active member of the Aviators/Microsoft community?**

The community has shaped a big part of my career. Many of my early breakthroughs came from blogs, samples, and talks shared by others. Contributing back feels like closing the loop. I enjoy sharing real-world lessons, learning from peers, and helping others adopt integration patterns with confidence. The energy of the community and the conversations it creates keep me inspired.

**Looking back, what advice do you wish you had been given earlier that you'd now share with those looking to get into STEM/technology?**

Focus on core concepts - messaging, APIs, security, and distributed systems - because tools evolve, but fundamentals stay relevant. Share your learning early, even if it feels small. Be curious about the "why" behind patterns. Build side projects, not just follow tutorials. And don't fear a nonlinear career path: diverse experience is an asset in technology.

**What has helped you grow professionally?**

Hands-on customer work, strong mentors, and consistent learning habits have been key. Community involvement - writing, speaking, and collaborating - has pushed me to structure my knowledge and stay current. And working in environments that encourage experimentation has helped me develop faster and with more confidence.

**If you had a magic wand that could create a feature in Logic Apps, what would it be and why?**

I'd love to see a unified, out-of-the-box business transaction tracing experience across Logic Apps, Service Bus, APIM, Functions, and downstream services. Something that automatically correlates events, visualizes the full journey of a transaction, and simplifies root-cause analysis. This would make operational troubleshooting dramatically easier in enterprise environments.

## News from our product group

**Microsoft BizTalk Server Product Lifecycle Update**

BizTalk Server 2020 will be the final release, with support extending through 2030. Microsoft encourages a gradual transition to Azure Logic Apps, offering migration tooling, hybrid deployment options, and reuse of existing BizTalk artifacts. Customers can modernize at their own pace while maintaining operational continuity.
**Data Mapper Test Executor: A New Addition to Logic Apps Standard Test Framework**

The Data Mapper Test Executor adds native support for testing XSLT and Data Mapper transformations directly within the Logic Apps Standard test framework. It streamlines validation, improves feedback cycles, and integrates with the latest SDK to enable reliable, automated testing of map generation and execution.

**Announcing General Availability of AI & RAG Connectors in Logic Apps (Standard)**

Logic Apps Standard AI and RAG connectors are now generally available, enabling native document processing, semantic search, embeddings, and agentic workflows. These capabilities let teams build intelligent, context-aware automations using their own data, reducing complexity and enhancing decisioning across enterprise integrations.

**Logic Apps Labs**

Logic Apps Labs introduces an Azure Logic Apps agentic workflows learning path, offering guided modules on building conversational and autonomous agents, extending capabilities with MCP tools, and orchestrating multi-agent workflows. It serves as a starting point for hands-on labs covering design, deployment, and advanced patterns for intelligent automation.

## News from our community

**Handling Empty SQL Query Results in Azure Logic Apps**
Post by Anitha Eswaran

If a SQL stored procedure returns no rows, you can detect the empty result set in Logic Apps by checking whether the output's ResultSets object is {}. When empty, the workflow can be cleanly terminated or used to trigger alerts, ensuring predictable behavior and more resilient integrations.

**Azure Logic Apps MCP Server**
Post by Laveesh Bansal

Our own Laveesh Bansal spent some time creating an Azure Logic Apps MCP Server that enables natural-language debugging, workflow inspection, and updates without using the portal. It supports both Standard and Consumption apps, integrates with AI clients like Copilot and Claude, and offers tools for local or cloud-hosted setups, testing, and workflow lifecycle operations.

**Azure Logic Apps: Change Detection in JSON Objects and Arrays**
Post by Suraj Somani

Logic Apps offers native functions to detect changes in JSON objects and arrays without worrying about field or item order. Using equals() for objects and intersection() for arrays, you can determine when data has truly changed and trigger workflows only when updates occur, improving efficiency and reducing unnecessary processing.

**Logic Apps Standard: A clever hack to use JSON schemas in your Artifacts folder for JSON message validation (Part 1)**
Post by Şahin Özdemir

Şahin outlines a workaround for using JSON schemas stored in the Artifacts folder to validate messages in Logic Apps Standard. It revisits integration needs from BizTalk migrations and shows how to bring structured validation into modern workflows without relying on Integration Accounts. This is a two-part series, and you can find part two here.

**Let's integrate SAP with Microsoft**
Video by Sebastian Meyer

Sebastian has a new video out, and in this episode he and Martin Pankraz break down SAP GROW and RISE for Microsoft integration developers, covering key differences and integration options across IDoc, RFC, BAPI, SOAP, HTTPS, and OData, giving a concise overview of today's SAP landscape and what it means for building integrations on Azure.

**Logic Apps Initialize variables action has a max limit of 20 variables**
Post by Sandro Pereira

Logic Apps allows only 20 variables per Initialize variables action, and exceeding that limit triggers a validation error.
This limit applies per action, not per workflow. Using objects, parameters, or Compose actions often reduces the need for many scalar variables and leads to cleaner, more maintainable workflows. Did you know that? It is a Friday Fact!
# Logic Apps Aviators Newsletter - December 2025

In this issue:

- Ace Aviator of the Month
- News from our product group
- Community Playbook
- News from our community

## Ace Aviator of the Month

December's Ace Aviator: Daniel Jonathan

**What's your role and title? What are your responsibilities?**

I'm an Azure Integration Architect at Cnext, helping organizations modernize and migrate their integrations to Azure Integration Services. I design and build solutions using Logic Apps, Azure Functions, Service Bus, and API Management. I also work on AI solutions using Semantic Kernel and LangChain to bring intelligence into business processes.

**Can you give us some insights into your day-to-day activities and what a typical day in your role looks like?**

My day usually begins by attending to customer requests and handling recent deployments. Most of my time goes into designing integration patterns, building Logic Apps, mentoring the team, and helping customers with technical solutions. Lately, I've also been integrating AI capabilities into workflows.

**What motivates and inspires you to be an active member of the Aviators/Microsoft community?**

The community is open, friendly, and full of knowledge. I enjoy sharing ideas, writing posts, and helping others solve real-world challenges. It's great to learn and grow together.

**Looking back, what advice do you wish you had been given earlier that you'd now share with those looking to get into STEM/technology?**

Start small and stay consistent. Learn the basics well - like messaging, retries, and error handling - before diving into complex tools. Keep learning and share what you know.

**What has helped you grow professionally?**

Hands-on experience, teamwork, and continuous learning. Working across different projects taught me how to design reliable and scalable systems. Exploring AI with Semantic Kernel and LangChain has also helped me think beyond traditional integrations.

**If you had a magic wand that could create a feature in Logic Apps, what would it be and why?**

I'd add an "Overview Page" in Logic Apps containing the HTTP URLs for each workflow, so developers have quick access for testing from one place. It would save time and make working with multiple workflows much easier.

## News from our product group

**Logic Apps Community Day 2025 Playlist**

Did you miss, or want to catch up on, individual sessions from Logic Apps Community Day 2025? Here is the full playlist: choose your favorite sessions and have fun!

**The future of integration is here and it's agentic**

Missed Kent Weare and Divya Swarnkar's session at Ignite? It is here for you to watch on demand. Enterprise integration is being reimagined. It's no longer just about connecting systems, but about enabling adaptive, agentic workflows that unify apps, data, and systems. In this session, discover how to modernize integration, migrate from BizTalk, and adopt AI-driven patterns that deliver agility and intelligence. Through customer stories and live demos, see how to bring these workflows to life with Agent Loop in Azure Logic Apps.

**Public Preview: Azure Logic Apps Connectors as MCP Tools in Microsoft Foundry**

Unlock secure enterprise connectivity with Azure Logic Apps connectors as MCP tools in Microsoft Foundry. Agents can now use hundreds of connectors natively, with no custom code required. Learn how to configure and register MCP servers for seamless integration.

**Announcing AI Foundry Agent Service Connector v2 (Preview)**

AI Foundry Agent Service Connector v2 (Preview) is here!
Azure Logic Apps can now securely invoke Foundry agents, enabling low-code AI integration, multi-agent workflows, and faster time-to-value. Explore new operations for orchestration and monitoring.

**Announcing the General Availability of the XML Parse and Compose Actions in Azure Logic Apps**

XML Parse and Compose actions are now GA in Azure Logic Apps! Easily handle XML with XSD schemas, streamline workflows, and support internationalization. Learn best practices for arrays, encoding, and safe transport of content.

**Clone a Consumption Logic App to a Standard Workflow**

Clone your Consumption Logic Apps into Standard workflows with ease! This new feature accelerates migration, preserves design, and unlocks advanced capabilities for modern integration solutions.

**Announcing the HL7 connector for Azure Logic Apps Standard and Hybrid (Public Preview)**

Connect healthcare systems effortlessly! The new HL7 connector for Azure Logic Apps (Standard and Hybrid) enables secure, standardized data exchange and automation using HL7 protocols, now in Public Preview.

**Announcing Foundry Control Plane support for Logic Apps Agent Loop (Preview)**

Foundry Control Plane now supports Logic Apps Agent Loop (Preview)! Manage, govern, and observe agents at scale with built-in integration, with no extra steps required. Ensure trust, compliance, and scalability in the agentic era.

**Announcing General Availability of Agent Loop in Azure Logic Apps**

Agent Loop transforms Logic Apps into a multi-agent automation platform, enabling AI agents to collaborate with workflows and humans. Build secure, enterprise-ready agentic solutions for business automation at scale.

**Agent Loop Ignite Update - New Set of AI Features Arrive in Public Preview**

We are releasing a broad set of new and powerful AI-first Agent Loop capabilities in Public Preview that dramatically expand what developers can build: run agents in the Consumption SKU, bring your own models through APIM AI Gateway, call any tool through MCP, deploy agents directly into Teams, secure RAG with document-level permissions, onboard with Okta, and build in a completely redesigned workflow designer.

**Announcing MCP Server Support for Logic Apps Agent Loop**

Agent Loop in Azure Logic Apps now supports the Model Context Protocol (MCP), enabling secure, standardized tool integration. Bring your own MCP connector, use Azure-managed servers, or build custom connectors for enterprise workflows.

**Enabling API Key Authentication for Logic Apps MCP Servers**

Logic Apps MCP Servers now support API key authentication alongside OAuth2 and anonymous options. Configure keys via host.json or Azure APIs, retrieve and regenerate keys easily, and connect MCP clients securely for agentic workflows.

**Announcing Public Preview of Agent Loop in Azure Logic Apps Consumption**

Agent Loop now brings AI-powered automation to Logic Apps Consumption with a frictionless, pay-as-you-go model. Build autonomous and conversational agents using 1,400+ connectors, with no dedicated infrastructure required. Ideal for rapid prototyping and enterprise workflows.

**Moving the Logic Apps Designer Forward**

A major redesign of the Azure Logic Apps designer enters Public Preview for Standard workflows. Phase I focuses on faster onboarding, unified views, draft mode with auto-save, improved search, and enhanced debugging. Feedback will shape future phases for a seamless development experience.
**Announcing the General Availability of the RabbitMQ Connector**

The RabbitMQ Connector for Azure Logic Apps is now generally available, enabling reliable message exchange for Standard and Hybrid workflows. It supports triggers, publishing, and advanced routing, with a global rollout underway for robust, scalable integration scenarios.

**Duplicate Detection in Logic App Trigger**

Prevent duplicate processing in Logic Apps triggers with a REST API-based solution. It checks recent runs using clientTrackingId to avoid reprocessing items caused by edits or webhook updates. It works with Logic Apps Standard and is adaptable for Consumption or Power Automate.

**Announcing the BizTalk Server 2020 Cumulative Update 7**

BizTalk Server 2020 Cumulative Update 7 is out, adding support for Visual Studio 2022, Windows Server 2022, SQL Server 2022, and Windows 11. It includes all prior fixes. Upgrade from older versions, or consider migrating to Azure Logic Apps for modernization.

## News from our community

**Logic Apps Local Development Series**
Post by Daniel Jonathan

Last month I shared an article from Daniel about debugging XSLT in VS Code. This month, I bumped into not one, but five articles in a series about building, testing, and running Logic Apps Standard locally. Definitely worth the read!

**Working with sessions in Agentic Workflows**
Post by Simon Stender

Build AI-powered chat experiences with session-based agentic workflows in Azure Logic Apps. Learn how they enable dynamic, stateful interactions, integrate with APIs and apps, and avoid common pitfalls like workflows stuck in "running" forever.

**Integration Love Story with Mimmi Gullberg**
Video by Ahmed Bayoumy and Robin Wilde

Meet Mimmi Gullberg, Green Cargo's new integration architect driving smarter, sustainable rail logistics. With experience from BizTalk to Azure, she blends tech and business insight to create real value. Her mantra: understand the problem first, then choose the right tools - Logic Apps, Functions, or AI.

**Integration Love Story with Jenny Andersson**
Video by Ahmed Bayoumy and Robin Wilde

Discover Jenny Andersson's inspiring journey from skepticism to creativity in tech. In this episode, she shares insights on life as an integration architect, tackling system challenges, listening to customers, and how AI is shaping the future of integration.

**You Can Get an XPath value in Logic Apps without returning an array**
Post by Luis Rigueira

Working with XML in Azure Logic Apps? The xpath() function always returns an array, even for a single node. Or does it? Find out how to return just the values you want in this Friday Fact from Luis Rigueira.

**Set up Azure Standard Logic App Connectors as MCP Server**
Video by Srikanth Gunnala

Expose your Azure Logic Apps integrations as secure tools for AI assistants. Learn how to turn connectors like SAP, SQL, and Jira into MCP tools, protect them with Entra ID/OAuth, and test in GitHub Copilot Chat for safe, action-ready AI workflows.

**Making Logic Apps Speak Business**
Post by Al Ghoniem

Stop forcing Logic Apps to look like business diagrams. With Business Process Tracking, you can keep workflows technically sound while giving business users clear, stage-based visibility into processes: decoupled, visual, and KPI-driven.
# Unleashing the Power of Model Context Protocol (MCP): A Game-Changer in AI Integration

Artificial Intelligence is evolving rapidly, and one of the most pressing challenges is enabling AI models to interact effectively with external tools, data sources, and APIs. The Model Context Protocol (MCP) solves this problem by acting as a bridge between AI models and external services, creating a standardized communication framework that enhances tool integration, accessibility, and AI reasoning capabilities.

## What is Model Context Protocol (MCP)?

MCP is a protocol designed to enable AI models, such as Azure OpenAI models, to interact seamlessly with external tools and services. Think of MCP as a universal USB-C connector for AI, allowing language models to fetch information, interact with APIs, and execute tasks beyond their built-in knowledge.

### Key Features of MCP

- **Standardized communication**: MCP provides a structured way for AI models to interact with various tools.
- **Tool access & expansion**: AI assistants can now utilize external tools for real-time insights.
- **Secure & scalable**: Enables safe and scalable integration with enterprise applications.
- **Multi-modal integration**: Supports STDIO, SSE (Server-Sent Events), and WebSocket communication methods.

## MCP Architecture & How It Works

MCP follows a client-server architecture that allows AI models to interact with external tools efficiently. Here's how it works:

### Components of MCP

- **MCP Host**: The AI model (e.g., Azure OpenAI GPT) requesting data or actions.
- **MCP Client**: An intermediary service that forwards the AI model's requests to MCP servers.
- **MCP Server**: Lightweight applications that expose specific capabilities (APIs, databases, files, etc.).
- **Data Sources**: Various backend systems, including local storage, cloud databases, and external APIs.

### Data Flow in MCP

1. The AI model sends a request (e.g., "fetch user profile data").
2. The MCP client forwards the request to the appropriate MCP server.
3. The MCP server retrieves the required data from a database or API.
4. The response is sent back to the AI model via the MCP client.

## Integrating MCP with Azure OpenAI Services

Microsoft has integrated MCP with Azure OpenAI Services, allowing GPT models to interact with external services and fetch live data. This means AI models are no longer limited to static knowledge but can access real-time information.

### Benefits of Azure OpenAI Services + MCP Integration

- ✔ **Real-time data fetching**: AI assistants can retrieve fresh information from APIs, databases, and internal systems.
- ✔ **Contextual AI responses**: Enhances AI responses by providing accurate, up-to-date information.
- ✔ **Enterprise-ready**: Secure and scalable for business applications, including finance, healthcare, and retail.

## Hands-On Tools for MCP Implementation

To implement MCP effectively, Microsoft provides two powerful tools: Semantic Workbench and AI Gateway.

### Microsoft Semantic Workbench

A development environment for prototyping AI-powered assistants and integrating MCP-based functionalities.

Features:

- Build and test multi-agent AI assistants.
- Configure settings and interactions between AI models and external tools.
- Supports GitHub Codespaces for cloud-based development.

Explore Semantic Workbench. Workbench interface examples.

### Microsoft AI Gateway

A plug-and-play interface that allows developers to experiment with MCP using Azure API Management.

Features:

- **Credential Manager**: Securely handle API credentials.
- **Live experimentation**: Test AI model interactions with external tools.
- **Pre-built labs**: Hands-on learning for developers.
Setting Up MCP with Azure OpenAI Services

Step 1: Create a Virtual Environment
First, create a virtual environment using Python:

python -m venv .venv

Activate the environment:

# Windows
.venv\Scripts\activate
# MacOS/Linux
source .venv/bin/activate

Step 2: Install Required Libraries
Create a requirements.txt file and add the following dependencies:

langchain-mcp-adapters
langgraph
langchain-openai

Then, install the required libraries:

pip install -r requirements.txt

Step 3: Set Up OpenAI API Key
Ensure you have your OpenAI API key set up:

# Windows
setx OPENAI_API_KEY "<your_api_key>"
# MacOS/Linux
export OPENAI_API_KEY=<your_api_key>

Building an MCP Server
This server performs basic mathematical operations like addition and multiplication.

Create the Server File
First, create a new Python file:

touch math_server.py

Then, implement the server:

from mcp.server.fastmcp import FastMCP

# Initialize the server
mcp = FastMCP("Math")

@mcp.tool()
def add(a: int, b: int) -> int:
    """Add two integers."""
    return a + b

@mcp.tool()
def multiply(a: int, b: int) -> int:
    """Multiply two integers."""
    return a * b

if __name__ == "__main__":
    # Serve over stdio so a client can launch this file as a subprocess
    mcp.run(transport="stdio")

Your MCP server is now ready to run.

Building an MCP Client
This client connects to the MCP server and interacts with it.

Create the Client File
First, create a new file:

touch client.py

Then, implement the client:

import asyncio

from langchain_mcp_adapters.tools import load_mcp_tools
from langchain_openai import ChatOpenAI
from langgraph.prebuilt import create_react_agent
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

# Define server parameters: the client launches math_server.py itself
server_params = StdioServerParameters(
    command="python",
    args=["math_server.py"],
)

# Define the model
model = ChatOpenAI(model="gpt-4o")

async def run_agent():
    async with stdio_client(server_params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            # Expose the server's MCP tools to the LangGraph agent
            tools = await load_mcp_tools(session)
            agent = create_react_agent(model, tools)
            agent_response = await agent.ainvoke({"messages": "what's (4 + 6) x 14?"})
            # The last message in the transcript holds the agent's final answer
            return agent_response["messages"][-1].content

if __name__ == "__main__":
    result = asyncio.run(run_agent())
    print(result)

Your client is now set up and ready to interact with the MCP server.

Running the MCP Server and Client

Step 1: Sanity-Check the MCP Server
Open a terminal and run:

python math_server.py

Because the server uses the stdio transport, the client launches math_server.py as a subprocess on its own; running it manually is just a quick check that the server starts without errors (stop it with Ctrl+C before continuing).

Step 2: Run the MCP Client
In a terminal, run:

python client.py

Expected Output

140

This means the AI agent correctly computed (4 + 6) x 14 using both the MCP server and GPT-4o.

Conclusion
Integrating MCP with Azure OpenAI Services enables AI applications to securely interact with external tools, enhancing functionality beyond text-based responses. With standardized communication and improved AI capabilities, developers can build smarter and more interactive AI-powered solutions. By following this guide, you can set up an MCP server and client, unlocking the full potential of AI with structured external interactions.

Next Steps:
- Explore more MCP tools and integrations.
- Extend your MCP setup to work with additional APIs.
- Deploy your solution in a cloud environment for broader accessibility.

For further details, visit the GitHub repository for MCP integration examples and best practices.
MCP GitHub Repository | MCP Documentation | Semantic Workbench | AI Gateway | MCP Video Walkthrough | MCP Blog | MCP Github End to End Demo
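One closing caveat on the client code in this walkthrough: although the article targets Azure OpenAI Services, the sample authenticates against OpenAI directly via ChatOpenAI and OPENAI_API_KEY. If your gpt-4o model is deployed in an Azure OpenAI resource instead, the same agent should run unchanged after swapping in AzureChatOpenAI — a minimal sketch, where the deployment name and API version are assumptions; substitute your own values:

from langchain_openai import AzureChatOpenAI

# Assumes these environment variables point at your Azure OpenAI resource:
#   AZURE_OPENAI_ENDPOINT = https://<resource-name>.openai.azure.com
#   AZURE_OPENAI_API_KEY  = <your_api_key>
# The deployment name and API version below are illustrative placeholders.
model = AzureChatOpenAI(
    azure_deployment="gpt-4o",
    api_version="2024-08-01-preview",
)

Everything else in client.py stays the same, since create_react_agent only needs a chat model object.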
🤖 Agent Loop Demos 🤖

We announced the public preview of agent loop at Build 2025. Agent Loop is a new feature in Logic Apps to build AI Agents for use cases that span industry domains and patterns. Here are some resources to learn more:
- Logic Apps Labs - https://aka.ms/lalabs
- Agent in a day workshop - https://aka.ms/la-agent-in-a-day

In this article, we share use cases implemented in Logic Apps using agent loop and other features.

- This video shows a conversational agent that answers questions about health plans and their coverage. The demo features document ingestion of policy documents into AI Search, then retrieval in agent loop using natural language.
- This video shows an autonomous agent that generates sales reports. The demo features the Python Code Interpreter, which analyzes Excel data and aggregates it for the LLM to reason over.
- This video shows a conversational agent that helps recruiters with candidate screening and interview scheduling. The demo features OBO (On-Behalf-Of), where the agent uses tools in the security context of the user.
- This video shows a conversational agent for a utility company. The demo features multi-agent orchestration using handoff, plus a native chat client that supports multi-turn conversations and streaming and is secured via Entra for user authentication.
- This video shows an autonomous loan approval agent that handles auto loans for a bank. The demo features an AI agent that uses an Azure OpenAI model, the company's policies, and several tools to process loan applications. For edge cases, a human is involved via the Teams connector.
- This video shows an autonomous product return agent for the Fourth Coffee company. Returns are processed by the agent based on company policy and other criteria. Here too, a human is involved when decisions fall outside the agent's boundaries.
- This video shows a commercial agent that grants credit for purchases of groceries and other products for Northwind Stores. The agent extracts financial information from an IBM Mainframe and an IBM i system to assess each requestor and updates the internal Northwind systems with the approved customer information.
- Multi-agent scenario including both a codeful and a declarative method of implementation. Note: this is pre-release functionality and is subject to change. If you are interested in further discussing Logic Apps codeful agents, please fill out the following feedback form.
- Operations Agent (part 1): In this conversational agent, we perform Logic Apps operations such as repair and resubmit to ensure our integration platform is healthy and processing transactions. To ensure compliance, all operational activities are logged in ServiceNow.
- Operations Agent (part 2): In this autonomous agent, we perform the same Logic Apps operations such as repair and resubmit, again logging all operational activities in ServiceNow to ensure compliance.

Logic Apps Aviators Newsletter - November 2025
In this issue:
- Ace Aviator of the Month
- News from our product group
- News from our community

Ace Aviator of the Month

November’s Ace Aviator: Al Ghoniem

What's your role and title? What are your responsibilities?
As a Senior Integration Consultant, I design and deliver enterprise-grade integration on Microsoft Azure, primarily using Logic Apps Standard, API Management, Service Bus, Event Grid and Azure Functions. My remit covers reference architectures, “golden” templates, governance and FinOps guardrails, CI/CD automation (Bicep and YAML), and production-ready patterns for reliability, observability and cost efficiency. Alongside my technical work, I lead teams of consultants and engineers, helping them adopt standardised delivery models, mentoring through code reviews and architectural walkthroughs, and ensuring we deliver consistent, high-quality outcomes across projects. I also help teams apply decisioning patterns (embedded versus external rules) and integrate AI responsibly within enterprise workflows.

Can you give us some insights into your day-to-day activities and what a typical day in your role looks like?
- Architecture and patterns: refining solution designs, sequence diagrams and rules models for new and existing integrations.
- Build and automation: evolving reusable Logic App Standard templates, Bicep modules and pipelines, embedding monitoring, alerts and identity-first security.
- Problem-solving: addressing performance tuning, transient fault handling, poison/DLQ flows and “design for reprocessing.”
- Leadership and enablement: mentoring consultants, facilitating technical discussions, and ensuring knowledge is shared across teams.
- Community and writing: publishing articles and examples to demystify real-world integration trade-offs.

What motivates and inspires you to be an active member of the Aviators/Microsoft community?
The community continuously turns hard-won lessons into reusable practices. Sharing patterns (and anti-patterns) saves others time and incidents, while learning from peers strengthens my own work. Microsoft’s product teams also listen closely, and seeing customer feedback directly shape the platform is genuinely rewarding.

Looking back, what advice do you wish you had been given earlier that you'd now share with those looking to get into STEM/technology?
- Optimise for learning speed, not titles. Choose problems that stretch you and deliver in small, measurable increments.
- Master the fundamentals. Naming, idempotency, retries and observability are not glamorous but make systems dependable.
- Document everything. Diagrams, runbooks and ADRs multiply your impact.
- Understand trade-offs. Every decision buys something and costs something; acknowledge both sides clearly.
- Value collaboration over heroics. Ask questions, share knowledge and give credit freely.

What has helped you grow professionally?
- Reusable scaffolding: creating golden templates and reference repositories that capture best practice once and reuse it everywhere.
- Feedback loops: leveraging telemetry, post-incident reviews and peer critique to improve.
- Teaching and mentoring: explaining concepts to others brings clarity and strengthens leadership.
- Cross-disciplinary curiosity: combining architecture, DevOps, FinOps and AI to address problems holistically.

If you had a magic wand that could create a feature in Logic Apps, what would it be and why?
"Stateful Sessions and Decisions” as a first-class capability: Built-in session state across multiple workflows, durable correlation and resumable orchestrations without external storage. A native decisioning activity with versioned decision tables and rule auditing (“why this rule fired”). A local-first developer experience with fast testing and contract validation for confident iteration. This would simplify complex, human-in-the-loop and event-driven scenarios, reduce custom plumbing, and make advanced orchestration patterns accessible to a wider audience. News from our product group Logic Apps Community Day 2025 Did you miss or want to catch up again on your favorite Logic Apps Community Day videos – jump back into action on this four hours long learning session, with 10 sessions from our Community Experts. And stay tuned for individual sessions being shared throughout the week. Announcing Parse & Chunk with Metadata in Logic Apps: Build Context-Aware RAG Agents New Parse & Chunk actions add metadata like page numbers and sentence completeness—perfect for context-aware document Q&A using Azure AI Search and Agent Loop. Introducing the RabbitMQ Connector (Public Preview) The new connector (Public Preview) lets you send and receive messages with RabbitMQ in Logic Apps Standard and Hybrid—ideal for scalable, reliable messaging across industries. News from our community EventGrid And Entra Auth In Logic Apps Standard Post by Riccardo Viglianisi Learn how to use Entra Auth for webhook authentication, ditch SAS tokens, and configure private endpoints with public access rules—perfect for secure, scalable integrations. Debugging XSLT Made Easy in VS Code: .NET-Based Debugging for Logic Apps Post by Daniel Jonathan A new .NET-based extension brings real debugging to XSLT for Logic Apps. Set breakpoints, step through transformations, and inspect variables—making XSLT development clear and productive. This is the 3 rd post in a 5 part series, so worth checking out the other posts too. Modifying the Logic App Azure Workbook: Custom Views for Multi Workflow Monitoring Post by Jeff Wessling Learn how to tailor dashboards with KQL, multi-workflow views, and context panes—boosting visibility, troubleshooting speed, and operational efficiency across your integrations. Azure AI Agents in Logic Apps: A Guide to Automate Decisions Post by Imashi Kinigama Discover how GPT-powered agents, created using Logic Apps Agent Loop, automate decisions, extract data, and adapt in real time. Build intelligent workflows with minimal effort—no hardcoding, just instructions and tools. How to Turn Logic App Connectors into MCP Servers (Step-by-Step Guide) Post by Stephen W. Thomas Learn how to expose connectors like Google Drive or Salesforce as MCP endpoints using Azure API Center—giving AI agents secure, real-time access to 1,400+ services directly from VS Code. Custom SAP MCP Server with Logic Apps Post by Sebastian Meyer Learn how to turn Logic Apps into AI-accessible tools using MCP. From workflow descriptions to Easy Auth setup and VS Code integration—this guide unlocks SAP automation with Copilot. How Azure Logic Apps as MCP Servers Accelerate AI Agent Development Post by Monisha S Turn 1,400+ connectors into AI tools with Logic Apps Standard. Build agents fast, integrate with legacy systems, and scale intelligent workflows across your organization. 
Designing Business Rules in Azure Logic Apps: When to Go Embedded vs External
Post by Al Ghoniem
Learn when to use Logic Apps' native Rules Engine or offload to Azure Functions with NRules or JSON RulesEngine. Discover hybrid patterns for scalable, testable decision automation.

Syncing SharePoint with Azure Blob Storage using Logic Apps & Azure Functions for Azure AI Search
Post by Daniel Jonathan
Solve folder delete issues by tagging blobs with SharePoint metadata. Use Logic Apps and a custom Azure Function to clean up orphaned files and keep Azure AI Search in sync.

Step-by-Step Guide: Building a Conversational Agent in Azure Logic Apps
Post by Stephen W. Thomas
Use Azure AI Foundry and Logic Apps Standard to create chatbots that shuffle cards, answer questions, and embed into websites—no code required, just smart workflows and EasyAuth.

You can hide sensitive data from the Logic App run history
Post by Francisco Leal
Learn how to protect sensitive data like authentication tokens, credentials, and personal information in Logic Apps, so this data doesn't appear in the run history, where it could pose security and privacy risks.

Power Up Your Open WebUI with Azure AI Speech: Quick STT & TTS Integration
Introduction
Ever found yourself wishing your web interface could really talk and listen back to you? With a few clicks (and a bit of code), you can turn your plain Open WebUI into a full-on voice assistant. In this post, you’ll see how to spin up an Azure Speech resource, hook it into your frontend, and watch as user speech transforms into text and your app’s responses leap off the screen in a human-like voice. By the end of this guide, you’ll have a voice-enabled web UI that actually converses with users, opening the door to hands-free controls, better accessibility, and a genuinely richer user experience. Ready to make your web app speak? Let’s dive in.

Why Azure AI Speech?
We use the Azure AI Speech service in Open WebUI to enable voice interactions directly within web applications. This allows users to:
- Speak commands or input instead of typing, making the interface more accessible and user-friendly.
- Hear responses or information read aloud, which improves usability for people with visual impairments or those who prefer audio.
- Enjoy a more natural and hands-free experience, especially on devices like smartphones or tablets.
In short, integrating the Azure AI Speech service into Open WebUI helps make web apps smarter, more interactive, and easier to use by adding speech recognition and voice output features.

If you haven’t hosted Open WebUI already, follow my other step-by-step guide to host Ollama WebUI on Azure. Proceed to the next step if you have Open WebUI deployed already. Learn more about Open WebUI here.

Deploy Azure AI Speech service in Azure
1. Navigate to the Azure Portal and search for Azure AI Speech in the portal's search bar.
2. Create a new Speech service by filling in the fields on the resource creation page.
3. Click “Create” to finalize the setup.
4. After the resource has been deployed, click the “View resource” button and you should be redirected to the Azure AI Speech service page. The page displays the API keys and endpoints for the Azure AI Speech service, which you can use in Open WebUI.

Setting things up in Open WebUI

Speech to Text settings (STT)
1. Head to the Open WebUI Admin page > Settings > Audio.
2. Paste the API key obtained from the Azure AI Speech service page into the API key field.
3. Unless you use a different Azure region or want to change the default STT configuration, leave all other settings blank.

Text to Speech settings (TTS)
1. Now configure the TTS settings in Open WebUI by toggling the TTS Engine to the Azure AI Speech option.
2. Again, paste the API key obtained from the Azure AI Speech service page and leave all other settings blank.
3. You can change the TTS voice from the dropdown selection in the TTS settings.
4. Click Save to apply the changes.

Expected Result
Now, let’s test if everything works well. Open a new chat / temporary chat in Open WebUI and click the Call / Record button. The STT engine (Azure AI Speech) should recognize your voice and provide a response based on the voice input. To test the TTS feature, click Read Aloud (the speaker icon) under any response from Open WebUI. The TTS engine should now use the Azure AI Speech service!
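If either engine misbehaves, it can help to confirm the key and region work outside Open WebUI first. Below is a minimal sketch using the Azure Speech SDK (pip install azure-cognitiveservices-speech); the region and voice are assumptions — substitute the values from your own Speech resource:

import azure.cognitiveservices.speech as speechsdk

# Assumed placeholders: replace with the key and region shown on your
# Azure AI Speech resource page.
speech_config = speechsdk.SpeechConfig(subscription="<your_speech_key>", region="eastus")

# Text to speech: synthesizes a short phrase to the default speaker.
speech_config.speech_synthesis_voice_name = "en-US-JennyNeural"  # any neural voice works
synthesizer = speechsdk.SpeechSynthesizer(speech_config=speech_config)
tts_result = synthesizer.speak_text_async("Azure AI Speech is configured correctly.").get()
print("TTS:", tts_result.reason)

# Speech to text: listens on the default microphone for one utterance.
recognizer = speechsdk.SpeechRecognizer(speech_config=speech_config)
stt_result = recognizer.recognize_once_async().get()
print("STT:", stt_result.text)

If both calls succeed here but Open WebUI still fails, the issue is almost certainly in the Audio settings (wrong region or a trailing space in the pasted key) rather than the Azure resource itself.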
Conclusion
And that’s a wrap! You’ve just given your Open WebUI the gift of capturing user speech, turning it into text, and then talking right back with Azure’s neural voices. Along the way you saw how easy it is to spin up a Speech resource in the Azure portal, wire up real-time transcription in the browser, and pipe responses through the TTS engine.

From here, it’s all about experimentation. Try swapping in different neural voices or dialing in new languages. Tweak how you start and stop listening, play with silence detection, or add custom pronunciation tweaks for those tricky product names. Before you know it, your interface will feel less like a web page and more like a conversation partner.