Streamline Azure NetApp Files Management—Right from Your IDE
The Azure NetApp Files VS Code Extension is designed to streamline storage provisioning and management directly within the developer's IDE. Traditional workflows often require extensive portal navigation, manual configuration, and policy management, leading to inefficiencies and context switching. The extension addresses these challenges by enabling AI-powered automation through natural language commands, reducing provisioning time from hours to minutes while minimizing errors and improving compliance. Key capabilities include generating production-ready ARM templates, validating resources, and delivering optimization insights—all without leaving the coding environment.
How Azure NetApp Files Object REST API powers Azure and ISV Data and AI services – on YOUR data

This article introduces the Azure NetApp Files Object REST API, a transformative solution for enterprises seeking seamless, real-time integration between their data and Azure's advanced analytics and AI services. By enabling direct, secure access to enterprise data—without costly transfers or duplication—the Object REST API accelerates innovation, streamlines workflows, and enhances operational efficiency. With S3-compatible object storage support, it empowers organizations to make faster, data-driven decisions while maintaining compliance and data security. Discover how this new capability unlocks business potential and drives a new era of productivity in the cloud.
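Because the interface is S3-compatible, existing object-storage tooling should be able to point at it with little more than an endpoint change. Below is a minimal sketch using Python's boto3; the endpoint URL, bucket name, and credential values are illustrative placeholders rather than documented Azure NetApp Files values, which come from your own Object REST API configuration.

```python
import boto3

# Placeholder endpoint and credentials for an ANF Object REST API-enabled volume;
# the real values come from your Azure NetApp Files configuration.
s3 = boto3.client(
    "s3",
    endpoint_url="https://<anf-object-endpoint>.example.com",
    aws_access_key_id="<access-key>",
    aws_secret_access_key="<secret-key>",
)

# List objects in a bucket backed by an ANF volume, ready to feed analytics or AI jobs.
for obj in s3.list_objects_v2(Bucket="design-data").get("Contents", []):
    print(obj["Key"], obj["Size"])
```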
Accelerating HPC and EDA with Powerful Azure NetApp Files Enhancements

High-Performance Computing (HPC) and Electronic Design Automation (EDA) workloads demand uncompromising performance, scalability, and resilience. Whether you're managing petabyte-scale datasets or running compute-intensive simulations, Azure NetApp Files delivers the agility and reliability needed to innovate without limits.
Boosting Hybrid Cloud Data Efficiency for EDA: The Power of Azure NetApp Files cache volumes

Electronic Design Automation (EDA) is the foundation of modern semiconductor innovation, enabling engineers to design, simulate, and validate increasingly sophisticated chip architectures. As designs push the boundaries of PPA (Power, Performance, and reduced Area) to meet escalating market demands, the volume of associated design data has surged exponentially: a single System-on-Chip (SoC) project can generate multiple petabytes of data during its development lifecycle, making data mobility and accessibility critical bottlenecks. To overcome these challenges, Azure NetApp Files (ANF) cache volumes are purpose-built to optimize data movement and minimize latency, delivering high-speed access to massive design datasets across distributed environments. By mitigating data gravity, Azure NetApp Files cache volumes empower chip designers to leverage cloud-scale compute resources on demand and at scale, thus accelerating innovation without being constrained by physical infrastructure.
Synthetic Monitoring in Application Insights Using Playwright: A Game-Changer

Monitoring the availability and performance of web applications is crucial to ensuring a seamless user experience. Azure Application Insights provides powerful synthetic monitoring capabilities to help detect issues proactively. However, Microsoft has deprecated two key features:

- (Deprecated) Multi-step web tests: Previously, these allowed developers to record and replay a sequence of web requests to test complex workflows. They were created in Visual Studio Enterprise and uploaded to the portal.
- (Deprecated) URL ping tests: These tests checked if an endpoint was responding and measured performance. They allowed setting custom success criteria, dependent request parsing, and retries.

With these features being phased out, we are left without built-in logic to test application health beyond simple endpoint checks. The solution? Custom TrackAvailability tests using Playwright.

What is Playwright?

Playwright is a powerful end-to-end testing framework that enables automated browser testing for modern web applications. It supports multiple browsers (Chromium, Firefox, WebKit) and can run tests in headless mode, making it ideal for synthetic monitoring.

Why Use Playwright for Synthetic Monitoring?

- Simulate real user interactions (login, navigate, click, etc.)
- Catch UI failures that simple URL ping tests cannot detect
- Execute complex workflows like authentication and transactions
- Integrate with Azure Functions for periodic execution
- Log availability metrics in Application Insights for better tracking and alerting

Step-by-Step Implementation (Repo link)

Set Up an Azure Function App

1. Navigate to the Azure Portal.
2. Create a new Function App.
3. Select Runtime Stack: Node.js.
4. Enable Application Insights.

Install Dependencies

In your local development environment, create a Node.js project:

```bash
mkdir playwright-monitoring && cd playwright-monitoring
npm init -y
npm install @azure/functions playwright applicationinsights dotenv
```

Implement the Timer-Triggered Azure Function

Create timerTrigger1.js:

```js
const { app } = require('@azure/functions');
const { runPlaywrightTests } = require('../playwrightTest.js'); // Import the Playwright test function

app.timer('timerTrigger1', {
    schedule: '0 */5 * * * *', // Runs every 5 minutes
    handler: async (myTimer, context) => {
        try {
            context.log("Executing Playwright test...");
            await runPlaywrightTests(context);
            context.log("Playwright test executed successfully!");
        } catch (error) {
            // v4 programming model exposes context.error(), not context.log.error()
            context.error("Error executing Playwright test:", error);
        } finally {
            context.log("Timer function processed request.");
        }
    }
});
```

Implement the Playwright Test Logic

Create playwrightTest.js:

```js
require('dotenv').config();
const playwright = require('playwright');
const appInsights = require('applicationinsights');

// Initialize Application Insights
appInsights
    .setup(process.env.APPLICATIONINSIGHTS_CONNECTION_STRING || process.env.APPINSIGHTS_INSTRUMENTATIONKEY)
    .setSendLiveMetrics(true)
    .setDistributedTracingMode(appInsights.DistributedTracingModes.AI_AND_W3C)
    .setAutoDependencyCorrelation(true)
    .setAutoCollectRequests(true)
    .setAutoCollectPerformance(true)
    .setAutoCollectExceptions(true)
    .setAutoCollectDependencies(true)
    .setAutoCollectConsole(true)
    .setUseDiskRetryCaching(true)   // Enables retry caching for telemetry
    .setInternalLogging(true, true) // Enables internal logging for debugging
    .start();

const client = appInsights.defaultClient;

async function runPlaywrightTests(context) {
    const start = Date.now();
    let browser;
    try {
        context.log(`[${new Date().toISOString()}] Running Playwright login test...`);

        // Launch browser and open the login page
        browser = await playwright.chromium.launch({ headless: true });
        const page = await browser.newPage();
        await page.goto('https://www.saucedemo.com/');

        // Perform login
        await page.fill('#user-name', 'standard_user');
        await page.fill('#password', 'secret_sauce');
        await page.click('#login-button');

        // Verify successful login
        await page.waitForSelector('.inventory_list', { timeout: 5000 });

        // Log success to Application Insights with the measured duration
        client.trackAvailability({
            name: "SauceDemo Login Test",
            success: true,
            duration: Date.now() - start,
            runLocation: "Azure Function",
            message: "Login successful",
            time: new Date()
        });
        context.log("✅ Playwright login test successful.");
    } catch (error) {
        context.error("❌ Playwright login test failed:", error);

        // Log failure to Application Insights
        client.trackAvailability({
            name: "SauceDemo Login Test",
            success: false,
            duration: Date.now() - start,
            runLocation: "Azure Function",
            message: error.message,
            time: new Date()
        });
    } finally {
        if (browser) await browser.close(); // Always release the browser
    }
}

module.exports = { runPlaywrightTests };
```

Configure Environment Variables

Create a .env file and set your Application Insights connection string:

```
APPLICATIONINSIGHTS_CONNECTION_STRING=<your_connection_string>
```

Deploy and Monitor

Deploy the Function App using the Azure Functions Core Tools:

```bash
func azure functionapp publish <your-function-app-name>
```

Monitor the availability results in Application Insights → Availability.

Setting Up Alerts for Failed Tests

To get notified when availability tests fail:

1. Open Application Insights in the Azure portal.
2. Go to Alerts → Create Alert Rule.
3. Select Signal Type: Availability Results.
4. Configure a condition where Success = 0 (Failure).
5. Add an action group (email, Teams, etc.).
6. Click Create Alert Rule.
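Beyond portal alerts, the same availability telemetry can be queried programmatically, which is handy for dashboards or scheduled reports. Here is a minimal sketch using the azure-monitor-query Python SDK; it assumes a workspace-based Application Insights resource (table AppAvailabilityResults), and the workspace ID is a placeholder.

```python
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

# Placeholder: the Log Analytics workspace backing the Application Insights resource.
WORKSPACE_ID = "<your-workspace-id>"

# Count failed availability runs per test name over the last 24 hours.
QUERY = """
AppAvailabilityResults
| where Success == false
| summarize failures = count() by Name
| order by failures desc
"""

client = LogsQueryClient(DefaultAzureCredential())
response = client.query_workspace(WORKSPACE_ID, QUERY, timespan=timedelta(hours=24))

for table in response.tables:
    for row in table.rows:
        print(f"{row[0]}: {row[1]} failed runs")
```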
Conclusion

With Playwright-based synthetic monitoring, you can go beyond basic URL ping tests and validate real user interactions in your application. Since Microsoft has deprecated Multi-step web tests and URL ping tests, this approach ensures better availability tracking, UI validation, and proactive issue detection in Application Insights.

Building AI Agents: Workflow-First vs. Code-First vs. Hybrid

AI Agents are no longer just a developer's playground. They're becoming essential for enterprise automation, decision-making, and customer engagement. But how do you build them? Do you go workflow-first with drag-and-drop designers, code-first with SDKs, or adopt a hybrid approach that blends both worlds? In this article, I'll walk you through the landscape of AI Agent design. We'll look at workflow-first approaches with drag-and-drop designers, code-first approaches using SDKs, and hybrid models that combine both. The goal is to help you understand the options and choose the right path for your organization.

Why AI Agents Need Orchestration

Before diving into tools and approaches, let's talk about why orchestration matters. AI Agents are not just single-purpose bots anymore. They often need to perform multi-step reasoning, interact with multiple systems, and adapt to dynamic workflows. Without orchestration, these agents can become siloed and fail to deliver real business value. Here's what I've observed as the key drivers for orchestration:

- Complexity of enterprise workflows: Modern business processes involve multiple applications, data sources, and decision points. AI Agents need a way to coordinate these steps seamlessly.
- Governance and compliance: Enterprises require control over how AI interacts with sensitive data and systems. Orchestration frameworks provide guardrails for security and compliance.
- Scalability and maintainability: A single agent might work fine for a proof of concept, but scaling to hundreds of workflows requires structured orchestration to avoid chaos.
- Integration with existing systems: AI Agents rarely operate in isolation. They need to plug into ERP systems, CRMs, and custom apps. Orchestration ensures these integrations are reliable and repeatable.

In short, orchestration is the backbone that turns AI Agents from clever prototypes into enterprise-ready solutions.

Behind the Scenes

I've always been a pro-code guy. I started my career on open-source coding in Unix and hardly touched the mouse. Then I discovered Visual Studio, and it completely changed my perspective. It showed me the power of a hybrid approach, the best of both worlds. That said, I won't let my experience bias your ideas of what you'd like to build. This blog is about giving you the full picture so you can make the choice that works best for you.

Workflow-First Approach

Workflow-first platforms are more than visual designers and not just about drag-and-drop simplicity. They represent a design paradigm where orchestration logic is abstracted into declarative models rather than imperative code. These tools allow you to define agent behaviors, event triggers, and integration points visually, while the underlying engine handles state management, retries, and scaling. For architects, this means faster prototyping and governance baked into the platform. For developers, it offers extensibility through connectors and custom actions without sacrificing enterprise-grade reliability.

Copilot Studio

Building conversational agents becomes intuitive with a visual designer that maps prompts, actions, and connectors into structured flows. Copilot Studio makes this possible by integrating enterprise data and enabling agents to automate tasks and respond intelligently without deep coding.
Building AI Agents using Copilot Studio:
- Design conversation flows with adaptive prompts
- Integrate Microsoft Graph for contextual responses
- Add AI-driven actions using Copilot extensions
- Support multi-turn reasoning for complex queries
- Enable secure access to enterprise data sources
- Extend functionality through custom connectors

Logic Apps

Adaptive workflows and complex integrations are handled through a robust orchestration engine. Logic Apps introduces Agent Loop, allowing agents to reason iteratively, adapt workflows, and interact with multiple systems in real time.

Building AI Agents using Logic Apps:
- Implement Agent Loop for iterative reasoning
- Integrate Azure OpenAI for goal-driven decisions
- Access 1,400+ connectors for enterprise actions
- Support human-in-the-loop for critical approvals
- Enable multi-agent orchestration for complex tasks
- Provide observability and security for agent workflows

Power Automate

Multi-step workflows can be orchestrated across business applications using AI Builder models or external AI APIs. Power Automate enables agents to make decisions, process data, and trigger actions dynamically, all within a low-code environment.

Building AI Agents using Power Automate:
- Automate repetitive tasks with minimal effort
- Apply AI Builder for predictions and classification
- Call Azure OpenAI for natural language processing
- Integrate with hundreds of enterprise connectors
- Trigger workflows based on real-time events
- Combine flows with human approvals for compliance

Azure AI Foundry

Visual orchestration meets pro-code flexibility through Prompt Flow and Connected Agents, enabling multi-step reasoning flows while allowing developers to extend capabilities through SDKs. Azure AI Foundry is ideal for scenarios requiring both agility and deep customization (a minimal SDK sketch follows at the end of this section).

Building AI Agents using Azure AI Foundry:
- Design reasoning flows visually with Prompt Flow
- Orchestrate multi-agent systems using Connected Agents
- Integrate with VS Code for advanced development
- Apply governance and deployment pipelines for production
- Use Azure OpenAI models for adaptive decision-making
- Monitor workflows with built-in observability tools

Microsoft Agent Framework (Preview)

I've been exploring Microsoft Agent Framework (MAF), an open-source foundation for building AI agents that can run anywhere. It integrates with Azure AI Foundry and Azure services, enabling multi-agent workflows, advanced memory services, and visual orchestration. With public preview live and GA coming soon, MAF is shaping how we deliver scalable, flexible agentic solutions. Enterprise-scale orchestration is achieved through graph-based workflows, human-in-the-loop approvals, and observability features. The Microsoft Agent Framework lays the foundation for multi-agent systems that are durable and compliant.

Building AI Agents using Microsoft Agent Framework:
- Coordinate multiple specialized agents in a graph
- Implement durable workflows with pause and resume
- Support human-in-the-loop for controlled autonomy
- Integrate with Azure AI Foundry for hosting and governance
- Enable observability through OpenTelemetry integration
- Provide SDK flexibility for custom orchestration patterns

Visual-first platforms make building AI Agents feel less like coding marathons and more like creative design sessions. They're perfect for those scenarios when you'd rather design than debug and still want the option to dive deeper when complexity calls.
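As a taste of that pro-code escape hatch, here is a minimal sketch that creates an agent with the azure-ai-projects Python package. Treat it as illustrative rather than definitive: the Agent Service API is still evolving, client construction varies across SDK versions, and the endpoint, model deployment name, and instructions below are placeholders.

```python
from azure.identity import DefaultAzureCredential
from azure.ai.projects import AIProjectClient

# Placeholder Foundry project endpoint, copied from the project's overview page.
project = AIProjectClient(
    endpoint="https://<your-foundry-resource>.services.ai.azure.com/api/projects/<project>",
    credential=DefaultAzureCredential(),
)

# Create a lightweight agent bound to a model deployment in the project.
agent = project.agents.create_agent(
    model="gpt-4o",  # illustrative deployment name
    name="ops-assistant",
    instructions="Answer questions about our Azure workloads concisely.",
)
print(f"Created agent {agent.id}")
```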
Pro-Code Approach

Remember I told you how I started as a pro-code developer early in my career and later embraced a hybrid approach? I'll try to stay neutral here as we explore the pro-code world. Pro-code frameworks offer integration with diverse ecosystems, multi-agent coordination, and fine-grained control over logic. While workflow-first and pro-code approaches both provide these capabilities, the difference lies in how they balance factors such as ease of development, ease of maintenance, time to deliver, monitoring capabilities, and other non-functional requirements. Choosing the right path often depends on which of these trade-offs matter most for your scenario.

LangChain

When I first explored LangChain, it felt like stepping into a developer's playground for AI orchestration. I could stitch together prompts, tools, and APIs like building blocks, and I enjoyed the flexibility (a minimal chain sketch appears after this section). It reminded me why pro-code approaches appeal to those who want full control over logic and integration with diverse ecosystems.

Building AI Agents using LangChain:
- Define custom chains for multi-step reasoning (it is called Lang"Chain", after all)
- Integrate external APIs and tools for dynamic actions
- Implement memory for context-aware conversations
- Support multi-agent collaboration through orchestration patterns
- Extend functionality with custom Python modules
- Deploy agents across cloud environments for scalability

Semantic Kernel

I've worked with Semantic Kernel when I needed more control over orchestration logic, and what stood out was its flexibility. It provides both .NET and Python SDKs, which makes it easy to combine natural language prompts with traditional programming logic. I found the planners and skills especially useful for breaking down goals into smaller steps, and connectors helped integrate external systems without reinventing the wheel.

Building AI Agents using Semantic Kernel:
- Create semantic functions for prompt-driven tasks
- Use planners for dynamic goal decomposition
- Integrate plugins for external system access
- Implement memory for persistent context across sessions
- Combine AI reasoning with deterministic code logic
- Enable observability and telemetry for enterprise monitoring

Microsoft Agent Framework (Preview)

Although I introduced MAF in the earlier section, its SDK-first design makes it relevant here as well for advanced orchestration and its pro-code nature… and so I'll probably write this again in the Hybrid section. The Agent Framework is designed for developers who need full control over multi-agent orchestration. It provides a pro-code approach for defining agent behaviors, implementing advanced coordination patterns, and integrating enterprise-grade observability.

Building AI Agents using Microsoft Agent Framework:
- Define custom orchestration logic using SDK APIs
- Implement graph-based workflows for multi-agent coordination
- Extend agent capabilities with custom code modules
- Apply durable execution patterns with pause and resume
- Integrate OpenTelemetry for detailed monitoring and debugging
- Securely host and manage agents through Azure AI Foundry integration
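To make the pro-code style concrete, here is a minimal LangChain "chain" sketch in Python. It assumes the langchain-openai and langchain-core packages and an Azure OpenAI deployment; the endpoint, key, and deployment name are placeholders, and the API surface may shift between LangChain releases.

```python
import os

from langchain_openai import AzureChatOpenAI
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser

# Placeholder Azure OpenAI settings, normally supplied via environment variables.
os.environ.setdefault("AZURE_OPENAI_ENDPOINT", "https://<your-resource>.openai.azure.com")
os.environ.setdefault("AZURE_OPENAI_API_KEY", "<your-key>")

llm = AzureChatOpenAI(azure_deployment="gpt-4o", api_version="2024-06-01")

# The "chain": prompt -> model -> plain-string output.
prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a concise Azure operations assistant."),
    ("human", "{question}"),
])
chain = prompt | llm | StrOutputParser()

print(chain.invoke({"question": "When should I pick a workflow-first agent platform?"}))
```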
Hybrid Approach and Decision Framework

I've always been a fan of both worlds: the flexibility of pro-code and the simplicity of workflow drag-and-drop style IDEs and GUIs. A hybrid approach is not about picking one over the other; it's about balancing them. In practice, this means combining the speed and governance of workflow-first platforms with the extensibility and control of pro-code frameworks.

Hybrid design shines when you need agility without sacrificing depth. For example, I can start with Copilot Studio to build a conversational agent using its visual designer. But if the scenario demands advanced logic or integration, I can call an Azure Function for custom processing, trigger a Logic Apps workflow for complex orchestration, or even invoke the Microsoft Agent Framework for multi-agent coordination. This flexibility delivers the best of both worlds: low-code for rapid development (remember RAD?) and pro-code for enterprise-grade customization with complex logic or integrations.

Why go hybrid:
- Balance speed and control: rapid prototyping with workflow-first tools, deep customization with code.
- Extend functionality: call APIs, Azure Functions, or SDK-based frameworks from visual workflows.
- Optimize for non-functional requirements: address maintainability, monitoring, and scalability without compromising ease of development.
- Enable interoperability: combine connectors, plugins, and open standards for diverse ecosystems.
- Support multi-agent orchestration: integrate workflow-driven agents with pro-code agents for complex scenarios.

The hybrid approach for building AI Agents is not just a technical choice but a design philosophy. When I need rapid prototyping or business automation, workflow-first is my choice. For multi-agent orchestration and deep customization, I go with code-first. Hybrid makes sense for regulated industries and large-scale deployments where flexibility and compliance are critical. The choice isn't binary; it's strategic.

I've worked with both workflow-first tools like Copilot Studio, Power Automate, and Logic Apps, and pro-code frameworks such as LangChain, Semantic Kernel, and the Microsoft Agent Framework. Each approach has its strengths, and the decision often comes down to what matters most for your scenario. If rapid prototyping and business automation are priorities, workflow-first platforms make sense. When multi-agent orchestration, deep customization, and integration with diverse ecosystems are critical, pro-code frameworks give you the flexibility and control you need. Hybrid approaches bring both worlds together for regulated industries and large-scale deployments where governance, observability, and interoperability cannot be compromised. Understanding these trade-offs will help you create AI Agents that work so well, you'll wonder if they're secretly applying for your job!

About the author

Pradyumna (Prad) Harish is a Technology leader in the WW GSI Partner Organization at Microsoft. He has 26 years of experience in Product Engineering, Partner Development, Presales, and Delivery. He is responsible for revenue growth through Cloud, AI, Cognitive Services, ML, Data & Analytics, Integration, DevOps, Open-Source Software, Enterprise Architecture, IoT, and Digital strategies for business generation and transformation, achieving revenue targets via extensive experience in managing global functions, global accounts, products, and solution architects across over 26 countries.
Selecting the Right Agentic Solution on Azure – Part 2 (Security)

Let's pick up from where we left off in the previous post, Selecting the Right Agentic Solution on Azure - Part 1. Earlier, we explored a decision tree to help identify the most suitable Azure service for building your agentic solution. Following that discussion, we received several requests to dive deeper into the security considerations for each of these services. In this post, we'll examine the security aspects of each option, one by one. Before doing so, I highly recommend reviewing the list of Azure AI Services Technologies made available by Microsoft; it covers the services that were part of the erstwhile Cognitive Services as well as the latest additions.

Workflows with AI agents and models in Azure Logic Apps (Preview)

This approach focuses on running your agents as an action or as part of an "agent loop" with multiple actions within Azure Logic Apps. It's important not to confuse this with the alternative setup, where Azure Logic Apps integrates with AI Agents in the Foundry Agent Service, either as a tool or as a trigger (announcement: Power your Agents in Azure AI Foundry Agent Service with Azure Logic Apps | Microsoft Community Hub). In that scenario, your agents are hosted under the Azure AI Foundry Agent Service, which we'll discuss separately below. To create an agent workflow, you'll need to establish a connection, either to Azure OpenAI or to an Azure AI Foundry project, for connecting to a model. When connected to a Foundry project, you can view agents and threads directly within that project's lists. Since agents here run as Logic Apps actions, their security is governed by the Logic Apps security framework. Let's look at the key aspects:

- Easy Auth or App Service Auth (Preview): Agent workflows often integrate with a broader range of systems, including models, MCPs, APIs, agents, and even human interactions. You can secure these workflows using Easy Auth, which integrates with Microsoft Entra ID for authentication and authorization. Read more here: Protect Agent Workflows with Easy Auth - Azure Logic Apps | Microsoft Learn.
- Securing and encrypting data at rest: Azure Logic Apps stores data in Azure Storage, which uses Microsoft-managed keys for encryption by default. You can further enhance security by restricting access to Logic App operations via Azure RBAC, limiting access to run history data, securing inputs and outputs, controlling parameter access for webhook-triggered workflows, and managing outbound call access to external services. More info here: Secure access and data in workflows - Azure Logic Apps | Microsoft Learn.
- Securing data in transit: When exposing your Logic App as an HTTP(S) endpoint, consider using Azure API Management for access policies and documentation, and Azure Application Gateway or Azure Front Door for WAF (Web Application Firewall) protection.

I highly recommend the labs provided by the Logic Apps product group to learn more about agentic workflows: https://azure.github.io/logicapps-labs/docs/intro.

Azure AI Foundry Agent Service

As of this writing, the Azure AI Foundry Agent Service abstracts the underlying infrastructure where your agents run. Microsoft manages this secure environment, so you don't need to handle compute, network, or storage resources, though bring-your-own-storage is an option.

- Securing and encrypting data at rest: Microsoft guarantees that your prompts and outputs remain private, never shared with other customers or AI providers (such as OpenAI or Meta).
Data from messages, threads, runs, and uploads is encrypted using AES-256 and remains stored in the same region where the Agent Service is deployed. You can optionally use Customer-Managed Keys (CMK) for encryption. Read more here: Data, privacy, and security for Azure AI Agent Service - Azure AI Services | Microsoft Learn.
- Network security: The service allows integration with your private virtual network using a private endpoint. Note that there are known limitations, such as subnet IP restrictions, the need for a dedicated agent subnet, same-region requirements, and limited regional availability. Read more here: How to use a virtual network with the Azure AI Foundry Agent Service - Azure AI Foundry | Microsoft Learn.
- Securing data in transit: Upcoming enhancements include API Management support (soon in Public Preview) for AI APIs, including Model APIs, Tool APIs/MCP servers, and Agent APIs. Here is another great article about using APIM to safeguard the HTTP APIs exposed by Azure OpenAI that let your applications perform embeddings or completions using OpenAI's language models.

Agent Orchestrators

We've introduced the Agent Framework, which succeeds both AutoGen and Semantic Kernel. According to the product group, it combines the best capabilities of both predecessors. Support for Semantic Kernel and related documentation for AutoGen will continue to be available for some time to allow users to transition smoothly to the new framework. When discussing the security aspects of agent orchestrators, it's important to note that these considerations also extend to the underlying services hosting them, whether on AKS or Container Apps. However, this discussion will not focus on the security features of those hosting environments, as comprehensive resources already exist for them. Instead, we'll focus on common security concerns applicable across different orchestrators, including AutoGen, Semantic Kernel, and other frameworks such as LlamaIndex, LangGraph, or LangChain. Key areas to consider include (but are not limited to):

- Secure secrets / key management: Avoid hard-coding secrets (e.g., API keys for Foundry, OpenAI, Anthropic, Pinecone, etc.). Use secret management solutions such as Azure Key Vault or environment variables (see the Key Vault sketch after this list). Encrypt secrets at rest and enforce strict limits on scope and lifetime.
- Access control and least privilege: Grant each agent or tool only the minimum required permissions. Implement Role-Based Access Control (RBAC) and enforce least-privilege principles. Use strong authentication (e.g., OAuth2, Azure AD) for administrative or tool-level access. Restrict the scope of external service credentials (e.g., read-only vs. write) and rotate them regularly.
- Isolation / sandboxing: Isolate plugin execution and use inter-process separation as needed. Prevent user inputs from executing arbitrary code on the host. Apply resource limits for model or function execution to mitigate abuse.
- Sensitive data protection: Encrypt data both at rest and in transit. Mask or remove PII before sending data to models. Avoid persisting sensitive context unnecessarily. Ensure logs and memory do not inadvertently expose secrets or user data.
- Prompt and query security: Sanitize or escape user input in custom query engines or chat interfaces. Protect against prompt injection by implementing guardrails to monitor and filter prompts. Set context length limits and use safe output filters (e.g., profanity filters, regex validators).
- Observability, logging, and auditing: Maintain comprehensive logs, including tool invocations, agent decisions, and execution paths. Continuously monitor for anomalies or unexpected behaviour.
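To ground the secret-management guidance above, here is a minimal sketch that reads an API key from Azure Key Vault with the azure-keyvault-secrets SDK; the vault URL and secret name are placeholders.

```python
from azure.identity import DefaultAzureCredential
from azure.keyvault.secrets import SecretClient

# Placeholder vault URL and secret name.
VAULT_URL = "https://<your-vault>.vault.azure.net"

# DefaultAzureCredential resolves a managed identity in Azure and `az login` locally,
# so no key material is hard-coded in the orchestrator.
client = SecretClient(vault_url=VAULT_URL, credential=DefaultAzureCredential())
openai_key = client.get_secret("azure-openai-api-key").value

# Hand the secret to the agent framework at runtime instead of baking it into config.
```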
I hope this overview assists you in evaluating and implementing the appropriate security measures for your chosen agentic solution.

Agentic Integration with SAP, ServiceNow, and Salesforce

Copilot/Copilot Studio Integration with SAP (No Code)

By integrating SAP Cloud Identity Services with Microsoft Entra ID, organizations can establish secure, federated identity management across platforms. This configuration enables Microsoft Copilot and Teams to seamlessly connect with SAP's Joule digital assistant, supporting natural language interactions and automating business processes efficiently. Key resources, as given in the SAP docs (image courtesy SAP):

- Configuring SAP Cloud Identity Services and Microsoft Entra ID for Joule
- Enable Microsoft Copilot and Teams to Pass Requests to Joule

Copilot Studio Integration with ServiceNow and Salesforce (No Code)

Integration with ServiceNow and Salesforce has two main approaches:

- Copilot Agents using Copilot Studio: Custom agents can be built in Copilot Studio to interact directly with Salesforce CRM data or ServiceNow knowledge bases and helpdesk tickets. This enables organizations to automate sales and support processes using conversational AI.
  - Create a custom sales agent using your Salesforce CRM data (YouTube)
  - ServiceNow Connect Knowledge Base + Helpdesk Tickets (YouTube)
- 3rd-party agents using Copilot for Service Agent: Microsoft Copilot can be embedded within Salesforce and ServiceNow interfaces, providing users with contextual assistance and workflow automation directly inside these platforms.
  - Set up the embedded experience in Salesforce
  - Set up the embedded experience in ServiceNow

MCP or Agent-to-Agent (A2A) Interoperability (Pro Code)

(Image courtesy SAP.) If you choose a pro-code approach, you can either implement the Model Context Protocol (MCP) in a client/server setup for SAP, ServiceNow, and Salesforce, or leverage existing agents for these third-party services using Agent-to-Agent (A2A) integration. Depending on your requirements, you may use either method individually or combine them. The recently released Azure Agent Framework offers practical examples for both MCP and A2A implementations; a minimal MCP server sketch appears at the end of this article. Below is the detailed SAP reference architecture, illustrating how A2A solutions can be layered on top of SAP systems to enable modular, scalable automation and data exchange: Agent2Agent Interoperability | SAP Architecture Center.

Logic Apps as Integration Actions

Logic Apps is the key component of the Azure Integration platform. Alongside its many other connectors, it provides connectors for all three of these platforms (SAP, ServiceNow, Salesforce). Logic Apps can be invoked from a custom agent (as a built-in action in Foundry) or from a Copilot agent. The same can be said for Power Platform/Power Automate.
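As promised above, here is a minimal MCP tool-server sketch using the official MCP Python SDK (the mcp package). The ServiceNow lookup is a stub invented for illustration; a real server would call the ServiceNow REST API with proper authentication.

```python
from mcp.server.fastmcp import FastMCP

# A tiny MCP server exposing one tool an agent can call.
mcp = FastMCP("servicenow-tools")

@mcp.tool()
def get_ticket_status(ticket_id: str) -> str:
    """Return the status of a ServiceNow ticket (stubbed for illustration)."""
    # In a real server this would query the ServiceNow REST API.
    fake_db = {"INC0012345": "In Progress", "INC0067890": "Resolved"}
    return fake_db.get(ticket_id, "Unknown ticket")

if __name__ == "__main__":
    mcp.run()  # defaults to stdio transport for local agent hosts
```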
Conclusion

This article provides a comprehensive overview of how Microsoft Copilot, Copilot Studio, Foundry (via A2A/MCP), and Azure Logic Apps can be combined to deliver robust, agentic integrations with SAP, ServiceNow, and Salesforce. The narrative highlights the importance of secure identity federation, modular agent orchestration, and low-code/pro-code automation in building next-generation enterprise solutions.

AI for Operations - Copilot Agent Integration

Solution ideas

The original framework introduced several Logic App and Function App patterns for SQL BPA, Update Manager, Cost Management, Anomaly Detection, and Smart Doc creation. In this article we add two Copilot Studio Agents, packaged in the GitHub repository Microsoft Azure AI for Operation Framework and designed to be deployed in a dedicated subscription (e.g., OpenAI-CoreIntegration):

- Copilot FinOps Agent: interactive cost and usage analysis
- Copilot Update Manager Agent: interactive patch status and one-time updates

Architecture

Copilot FinOps Agent

A Copilot Studio agent that lets stakeholders chat in natural language to retrieve, compare, and summarise cost data, without leaving Teams.

Dataflow:

| # | Stage | Description |
|---|-------|-------------|
| – | Initial Trigger | User message (Teams / Copilot Studio web) invokes the topic. The conversation kicks off the topic "Analyze Azure Costs". |
| 1 | Pre-Processing | Power Automate flow captures tenant ID, subscription filters, date range. |
| 2 | Cost Query | Azure Cost Management APIs pull actual and previous spend, returning JSON rows (service name, cost €). |
| 3 | OpenAI Analysis | Data is analyzed by the OpenAI/Copilot agent following the flow structure. |
| 4 | Response Formatting | A Copilot Studio flow formats the output as a table. |
| 5 | Chat Reply | The Copilot agent posts the insight list. Users can ask any kind of question related to the FinOps topic. |

Components:
- Microsoft Copilot Studio (Developer licence): low-code agent designer
- Power Automate Premium: orchestrates REST calls, prompt assembly, file handling
- Azure Cost Management + Billing: source of spend data (REST API)
- Azure OpenAI Service: GPT-4o and o3-mini reasoning and text generation
- Microsoft Teams: chat surface for Q&A, cards, and adaptive actions

Potential use cases:
- Finance teams asking "Why did VM spend jump last week?"
- Engineers requesting a monthly cost overview before sprint planning
- Leadership dashboards that can be drilled into via natural-language chat

Copilot Update Manager Agent

A Copilot Studio agent that surfaces patch compliance and can trigger ad-hoc One-Time Updates for selected VMs directly from the chat.

Dataflow:

| # | Stage | Description |
|---|-------|-------------|
| – | Initial Trigger | User message (Teams / Copilot Studio web) invokes the topic. The conversation kicks off the topic "Analyze Azure Costs". |
| 1 | Pre-Processing | Flow validates RBAC and captures target scope (subscription / RG / VM). |
| 2 | Patch Status Query | Azure Update Manager & Resource Graph query patchassessmentresources for KBs, severities, pending counts (see the query sketch after this table). |
| 3 | OpenAI Report | GPT-4o / o3-mini generates a VM-level summary (English) and a general overview. |
| 4 | Adaptive Card | Power Automate builds an Adaptive Card listing non-compliant VMs with "One-time Update" / "No action" buttons. |
| 5a | User Action – Review | User inspects details or asks follow-up questions. |
| 5b | User Action – Patch Now | Clicking One-time Update calls the Update Manager REST API to start a One-Time Update job. |
| 6 | Confirmation | Agent posts job ID, live status, and final success / error summary. |
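For reference, stage 2's patch-status query can be reproduced outside the flow for testing. Here is a minimal sketch with the azure-mgmt-resourcegraph Python SDK; the subscription ID is a placeholder and the KQL is a simplified stand-in for what the flow might run.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.resourcegraph import ResourceGraphClient
from azure.mgmt.resourcegraph.models import QueryRequest

SUBSCRIPTION_ID = "<your-subscription-id>"  # placeholder

# Simplified KQL against the patch assessment data surfaced by Update Manager.
QUERY = """
patchassessmentresources
| where type =~ 'microsoft.compute/virtualmachines/patchassessmentresults'
| project vm = tostring(split(id, '/')[8]),
          pending = properties.availablePatchCountByClassification
"""

client = ResourceGraphClient(DefaultAzureCredential())
result = client.resources(QueryRequest(subscriptions=[SUBSCRIPTION_ID], query=QUERY))

# result.data is a list of dicts in the default objectArray result format.
for row in result.data:
    print(row["vm"], row["pending"])
```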
Components:
- Microsoft Copilot Studio: conversational front-end
- Power Automate Premium: API orchestration and status polling
- Azure Update Manager: compliance data and patch execution
- Azure OpenAI Service: explanation and remediation text
- Microsoft Teams: Adaptive Cards with action buttons

Potential use cases:
- Service owners getting a daily compliance digest with the ability to remediate on demand
- Security officers validating zero-day patch rollout status via chat
- Help-desk agents triaging "Is VM X missing critical updates?" without opening the Azure portal

Prerequisites:

| Resource | Quantity | Notes |
|----------|----------|-------|
| Copilot Studio Developer licence | 1 | Assign in the Microsoft 365 Admin Center |
| Power Automate Premium licence | 1 user | Needed for HTTP, Azure AD, OpenAI connectors |
| Microsoft Teams | 1 user | Chat interface |
| Azure subscription | 1 | Dedicated OpenAI-CoreIntegration subscription recommended |
| GitHub repo | latest | Microsoft Azure AI for Operation Framework Copilot Agent |

Copilot Studio User Experience

Deployment steps (high level):
1. Assign licences: Copilot Studio Developer and Power Automate Premium.
2. Create the Copilot Studio agent: New Agent → Skip to configure → fill basics → Create → Settings → disable GenAI orchestration.
3. Import topics: Copilot topic Update Manager (link to configuration file) and Copilot topic FinOps (link to configuration file).
4. Publish and share the agent to Teams.
5. Verify permission scopes for the Cost Management and Update Manager APIs.
6. Start chatting!

Feel free to clone the GitHub repo, adapt the topics to your tag taxonomy or FinOps dashboard structure, and let us know in the comments how Copilot Agents are transforming your operational workflows and... stay tuned for the next updates!

Contributors

Principal authors:
- Tommaso Sacco | Cloud Solutions Architect
- Simone Verza | Cloud Solution Architect

Special thanks:
- Carmelo Ferrara | Director CSA
- Antonio Sgrò | Sr CSA Manager
- Marco Crippa | Sr CSA Manager