Announcing the availability of Azure Databricks connector in Azure AI Foundry
At Microsoft, the Databricks Data Intelligence Platform is available as a fully managed, native, first-party data and AI solution called Azure Databricks. This makes Azure the optimal cloud for running Databricks workloads. Because of our unique partnership, we can bring you seamless integrations that leverage the power of the entire Microsoft ecosystem to do more with your data. Azure AI Foundry is an integrated platform for developers and IT administrators to design, customize, and manage AI applications and agents.

Today we are excited to announce the public preview of the Azure Databricks connector in Azure AI Foundry. With this launch you can build enterprise-grade AI agents that reason over real-time Azure Databricks data while being governed by Unity Catalog. These agents are also enriched by the responsible AI capabilities of Azure AI Foundry. Here are a few ways this can benefit you and your organization:

Native Integration: Connect to Azure Databricks AI/BI Genie from Azure AI Foundry
Contextual Answers: Genie agents provide answers grounded in your unique data
Supports Various LLMs: Secure, authenticated data access
Streamlined Process: Real-time data insights within GenAI apps
Seamless Integration: Simplifies AI agent management with data governance
Multi-Agent Workflows: Leverages Azure AI agents and Genie Spaces for faster insights
Enhanced Collaboration: Boosts productivity between business and technical users

To further democratize the use of data for those in your organization who aren't directly interacting with Azure Databricks, you can take it one step further with Microsoft Teams and AI/BI Genie. AI/BI Genie enables you to get deep insights from your data using natural language, without needing to access Azure Databricks. Here is an example of what an agent built in AI Foundry with Azure Databricks data looks like when surfaced in Microsoft Teams.

We'd love to hear your feedback as you use the Azure Databricks connector in AI Foundry. Try it out today – to help you get started, we've put together some samples here. Read more on the Databricks blog, too.

Doubt exercise networks
Hello, I'm taking the Azure Fundamentals course, and this exercise asks me to type into the PowerShell console what is shown in the image above. I don't know how to write it: I enter it exactly as in the image below, but it gives me an error. If anyone knows how, please help.

Automating Microsoft Sentinel: A blog series on enabling Smart Security
Welcome to the first entry of our blog series on automating Microsoft Sentinel. We're excited to share insights and practical guidance on leveraging automation to enhance your security posture. In this series, we'll explore the various facets of automation within Microsoft Sentinel. Whether you're a seasoned security professional or just starting, our goal is to empower you with the knowledge and tools to streamline your security operations and stay ahead of threats. Join us on this journey as we uncover the power of automation in Microsoft Sentinel and learn how to transform your security strategy from reactive to proactive. Stay tuned for our upcoming posts, where we'll dive deeper into specific automation techniques and share success stories from the field. Let's make your security smarter, faster, and more resilient together.

In this series, we will show you how to automate various aspects of Microsoft Sentinel, from simple automation of Microsoft Sentinel alerts and incidents to more complicated response scenarios with multiple moving parts. We're doing this as a series so that we can build up our knowledge step by step, finishing with a "capstone project" that takes SOAR into areas most people aren't aware of or didn't think were possible. Here is a preview of what you can expect in the upcoming posts [we'll be updating this post with links to new posts as they happen]:

Part 1: [You are here] – Introduction to Automating Microsoft Sentinel
Part 2: Automation Rules – Automate the mundane away
Part 3: Playbooks Part I – Fundamentals
o Triggers
o Entities
o In-App Content / GitHub
o Consumption plan vs. dedicated – which to choose and why?
Part 4: Playbooks Part II – Diving Deeper
o Built-in 1st and 3rd party connections (ServiceNow, etc.)
o REST APIs (everything else)
Part 5: Azure Functions / Custom Code
o Why Azure Functions?
o Consumption vs. Dedicated – which to choose and why?
Part 6: Capstone Project (Art of the Possible) – Putting it all together

Part 1: Introduction to Automating Microsoft Sentinel

Microsoft Sentinel is a cloud-native security information and event management (SIEM) platform that helps you collect, analyze, and respond to security threats across your enterprise. But did you know that it also has a native, integrated Security Orchestration, Automation, and Response (SOAR) platform? A SOAR platform that can do just about anything you can think of? It's true!

What is SOAR and why would I want to use it?

A Security Orchestration, Automation, and Response (SOAR) platform helps your team take action in response to alerts or events in your SIEM. For example, let's say Contoso Corp has a policy that if a user has a medium sign-in risk in Entra ID and fails their login three times in a row within a ten-minute timeframe, they must re-confirm their identity with MFA. While an analyst could certainly take the actions required, wouldn't it be better if we could do that automatically? Using the Sentinel SOAR capabilities, you could have an analytic rule that automatically takes the action without the analyst being involved at all.

Why Automate Microsoft Sentinel?

Automation is a key component of any modern security operations center (SOC).
Automation can help you:
Reduce manual tasks and human errors
Improve the speed and accuracy of threat detection and response
Optimize the use of your resources and skills
Enhance your visibility and insights into your security environment
Align your security processes with your business objectives and compliance requirements

Reduce manual tasks and human errors
Alexander Pope famously wrote, "To err is human; to forgive, divine." Busy and distracted humans make mistakes. If we can reduce their workload and errors, then it makes sense to do so. Using automation, we can make sure that all of the proper steps in our response playbook are followed, and we can make our analysts' lives easier by giving them a simpler "point and click" response capability for scenarios where a human is "in the loop," or by having the system run the automation in response to events without waiting for an analyst to respond.

Improve the speed and accuracy of threat detection and response
Letting machines do machine-like things (such as working twenty-four hours a day) is a good practice. Leveraging automation, we can let our security operations center (SOC) run around the clock by tying automation to analytics. Rather than waiting for an analyst to come online, triage an alert, and then take action, Microsoft Sentinel can stand guard and respond when needed.

Optimize the use of your resources and skills
Having our team members repeat the same mundane tasks is not optimal for the speed of response or for their work satisfaction. By automating the mundane away, we can give our teams more time to learn new things or work on other tasks.

Enhance your visibility and insights into your security environment
Automation can be leveraged for more than just responding to an alert or incident. We can augment the information we have about entities involved in an alert or incident by using automation to call REST-based APIs for point-in-time lookups of the latest threat information, vulnerability data, patching statuses, and so on.

Align your security processes with your business objectives and compliance requirements
If you have to meet particular regulatory requirements or internal KPIs, automation can help your team achieve their goals quickly and consistently.

What Tools and Frameworks Can You Use to Automate Microsoft Sentinel?

Microsoft Sentinel provides several tools that enable you to automate your security workflows, such as:

Automation Rules
o Automation rules can be used to automate Microsoft Sentinel itself. For example, let's say there is a group of machines that have been classified as business critical; if there is an alert related to those machines, the incident needs to be assigned to a Tier 3 response team and the severity of the alert needs to be raised to at least "high". Using an automation rule, you can take one analytic rule, apply it to the entire enterprise, and then have an automation rule that applies only to those business-critical systems. That way, only the items that need immediate escalation receive it, quickly and efficiently.
o Another great use of automation rules is to create incident tasks for analysts to follow. If you have a process and workflow, incident tasks let those steps appear right inside an incident for the analysts to follow. No need to go "look it up" in a PDF or other document.

Playbooks: You can use playbooks to automatically execute actions based on triggers, such as alerts, incidents, or custom events.
Playbooks are based on Azure Logic Apps, which allow you to create workflows using various connectors, such as Microsoft Teams, Azure Functions, Azure Automation, and third-party services. Azure Functions can be leveraged to run custom code like PowerShell or Python and can be called from Sentinel via playbooks. This way, if you have a process or code that's beyond a playbook, you can still call it from the normal Sentinel workflow.
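To make that concrete, here is a minimal sketch of the kind of HTTP-triggered Azure Function a playbook could call. It is illustrative only and not taken from this series: the function name, the ipAddress field, and the stubbed reputation lookup are all assumptions, and you would replace the placeholder with calls to whatever enrichment API you actually use.

    // Hypothetical enrichment function for illustration only. A Sentinel playbook
    // (Logic App) could call it over HTTP with an entity value and merge the JSON
    // response back into the incident, for example as a comment or tag.
    using System.Text.Json;
    using System.Threading.Tasks;
    using Microsoft.AspNetCore.Http;
    using Microsoft.AspNetCore.Mvc;
    using Microsoft.Azure.WebJobs;
    using Microsoft.Azure.WebJobs.Extensions.Http;
    using Microsoft.Extensions.Logging;

    public static class EnrichIpAddress
    {
        [FunctionName("EnrichIpAddress")]
        public static async Task<IActionResult> Run(
            [HttpTrigger(AuthorizationLevel.Function, "post")] HttpRequest req,
            ILogger log)
        {
            // The playbook posts a small JSON body, e.g. {"ipAddress":"203.0.113.10"}.
            using var document = await JsonDocument.ParseAsync(req.Body);
            string ipAddress = document.RootElement.GetProperty("ipAddress").GetString();

            log.LogInformation("Enriching entity {IpAddress}", ipAddress);

            // Placeholder lookup: in a real function you would call your threat
            // intelligence or CMDB REST API here and map its response.
            var enrichment = new
            {
                ipAddress,
                reputation = "unknown",   // stand-in value
                lastSeenUtc = (string)null
            };

            // The playbook reads these fields from the HTTP action's output.
            return new OkObjectResult(enrichment);
        }
    }

In the playbook, an HTTP action (or the Azure Functions connector) posts the entity value and parses the JSON response before writing it back to the incident.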
Conclusion
In this blog post, we introduced the automation capabilities and benefits of SOAR in Microsoft Sentinel, and some of the tools and frameworks that you can use to automate your security workflows. In the next blog posts, we will dive deeper into each of these topics and provide some practical examples and scenarios of how to automate Microsoft Sentinel. Stay tuned for more updates and tips on automating Microsoft Sentinel!

Additional Resources
What are Automation Rules?
Automate Threat Response with playbooks in Microsoft Sentinel

Announcing the General Availability of the Azure Logic Apps Rules Engine

This week we announced our agent loop, a groundbreaking new capability in Azure Logic Apps to build AI agents into your enterprise workflows. With agent loop, you can embed advanced AI decision-making directly into your processes – enabling your apps and automation to not just follow predefined steps, but to reason, adapt, and act autonomously towards goals. Now, we are announcing the General Availability of our Azure Logic Apps Rules Engine: a deterministic rules engine runtime based on the RETE algorithm that allows in-memory execution, prioritization, and re-evaluation of business rules in Azure Logic Apps.

The Azure Logic Apps Rules Engine is a decision management inference engine in Azure Logic Apps that lets customers build Standard workflows and integrate readable, declarative, and semantically rich rules that operate on multiple data sources. The native data sources available today for the Rules Engine are XML and .NET objects. These data sources are called "facts" and are used to construct rules from small building blocks of business logic, or "rulesets". To create rules, you need the Rules Composer, which can be downloaded from the download center. The Rules Engine can also interact with the data exchanged by all the connectors available for Standard logic app resources. This design pattern promotes code reuse, design simplicity, and business logic modularity. The Rules Engine uses a VS Code experience to create Logic Apps projects with Rules Engine support. For more information on how to create projects with the Rules Engine, visit here.

Now, what can I do with it?

In a world of AI that essentially follows a probabilistic approach, rules engines are vital because they provide consistency, clarity, and compliance across different business goals. When you use rules with a workflow in Azure Logic Apps, you can define the logic, constraints, and policies that govern how to process, validate, and exchange data across systems, while you avoid AI hallucinations. Rules also help you make sure that applications follow the regulations and standards of their respective industries and markets. By using a rules engine, you can manage and update your workflow's business logic independently from the code and without having to alter your workflow. This approach helps you reduce the complexity and maintenance costs of your applications and increase their agility and scalability.

From a technical perspective, the Azure Logic Apps Rules Engine allows you to do forward chaining, or forward reasoning: a re-evaluation of rules triggered by changes in the facts caused by a rule's execution. This is one of the scenarios where a rules engine is unique; instead of writing complex code or creating complex "state-machine" workflows, the Logic Apps Rules Engine handles this with an instruction called "Update".
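Before the walkthrough, here is a minimal sketch of what a .NET fact might look like. It is hypothetical and not taken from the Rules Engine documentation: the class name and properties are assumptions chosen to mirror the rate-correction and cross-sell scenario described below, and the rules themselves would be authored separately in the Rules Composer.

    // Hypothetical .NET fact for illustration; not part of the Rules Engine SDK.
    // A ruleset authored in the Rules Composer could read Amount and RequestedRate,
    // assert a corrected ApprovedRate, and use the engine's Update instruction so
    // that rules depending on ApprovedRate (for example, cross-sell eligibility)
    // are re-evaluated through forward chaining.
    public class LoanQuoteFact
    {
        public decimal Amount { get; set; }
        public decimal RequestedRate { get; set; }
        public decimal ApprovedRate { get; set; }
        public bool CrossSellEligible { get; set; }
        public string Recommendation { get; set; }
    }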
Getting started

In the example below, I will show how to use the Logic Apps Rules Engine to ground an AI workflow loop. For this scenario, I am adding a Rules Engine workflow to an existing agent loop workflow and using it to correct rates and provide a "cross-sell" recommendation. First, I need to deploy the workflow from VS Code to Azure. As the Rules Engine currently only supports XML and .NET Framework objects, I create an XSD schema (using Copilot if you don't have an existing one) and use it with a "Compose XML with schema" action to create the XML fact that is needed. To obtain the returned data, I am using the "Parse XML with schema" action as well.

After the logic app was deployed, I added it as a tool in the Logic Apps agent loop workflow, using a "Call workflow in this logic app" action. I then pass the values that the Rules Engine needs as parameters and leave the Rules Engine return values empty. Then I updated my system prompt to indicate how I want the Rules Engine to be used; the agent loop will find the right tool for the right job. Once the system prompt has been updated, I proceed to run the workflow with a payload. I have highlighted in red in the agent chat the guardrails imposed by the Rules Engine. Those rules have been used to make sure that the AI responses fall within the internal compliance and cross-sell company criteria. Some of the business rules can have different priorities and might require re-calculation for accuracy. The Logic Apps Rules Engine takes care of this without coding or adding complex business logic through additional workflows. Further adjustments to the rules using the Rules Composer will ground the agent's results even more.

What else can I do with it?

You can use a rules engine in any context. In fact, decision management, which falls under intelligent business process automation, is growing among customers who want flexibility, governance, and compliance in their cloud workloads. Another well-known scenario is BizTalk migrations to Azure Logic Apps, for customers who have implemented the BizTalk BRE for decision management, content redirection, SWIFT, or the .NET Framework.

Demo
Please watch the following short demo of this sample.

Contact Us
Have feedback or questions about the Rules Engine? We'd love to hear from you. Reply directly to this blog post or reach out to us through this form. Your input helps shape the future of Logic Apps and the Rules Engine.

Codeful Workflows: A New Authoring Model for Logic Apps Standard
📝 This blog introduces early concepts of pre-release functionality and is subject to change.

Azure Logic Apps Standard offers you a powerful cloud orchestration engine, enabling you to build and run automated workflows that effortlessly integrate resources from various services, systems, apps, and data sources. Whether you're looking to streamline processes across a complex enterprise or simply reduce the need for extensive coding, this platform provides a solution that's both efficient and flexible. For those of you who require more control over workflow designs or want to leverage your expertise in frameworks like .NET and the Durable Tasks framework, Logic Apps Standard now introduces an exciting new feature: Codeful Workflows. With Codeful Workflows, you can define workflows using an imperative programming style, blending the flexibility of coding with the simplicity and operational strengths of Logic Apps. This means you can structure your workflows the way that makes sense to you while still tapping into the rich ecosystem of connectors and tools built into Logic Apps.

What Are Codeful Workflows?

Codeful Workflows expand the authoring and execution models of Logic Apps Standard, offering developers the ability to implement, test, and run workflows using an imperative programming model, both locally and in the cloud. Built on frameworks like .NET and the Durable Tasks framework, Codeful Workflows allow you to structure workflows in code while seamlessly integrating with Logic Apps Standard's rich connector ecosystem and leveraging its operational capabilities. The core elements of a Logic App workflow—triggers, actions, and connections—are translated into durable task concepts within this codeful model:
Triggers are implemented as Client Functions that invoke durable orchestrations, which contain the body of the workflow, blending logic implemented with language primitives and connector actions for external connectivity.
Connector actions are presented as Activity Functions. The Logic Apps connector ecosystem is exposed to you via an SDK, bringing discoverability and rich IntelliSense support when creating action inputs, invoking actions, or reusing action outputs in later steps. The SDK vastly simplifies the execution of those connectors by wrapping them internally in an Activity Function, so you don't need to create new activities for each connector action you want to invoke.
Connections, which manage the connectivity between actions and end systems, remain unchanged, allowing you to set them up once and share connections between multiple orchestrations and Logic Apps declarative workflows. Connector actions use a reference to a connection, providing flexibility between local and cloud configurations (a minimal sketch of this client/orchestrator/activity split follows at the end of this section).

Using those building blocks, you can create workflows using familiar programming paradigms, while still benefiting from the easy configuration and operational features of Logic Apps Standard. If you are an existing Logic Apps Standard customer, your codeful and visual workflows can coexist within the same application, bridging the gap between pro-code and low-code approaches. With those two execution models working hand in hand on the same application, Logic Apps Standard becomes a comprehensive orchestration tool that caters to all developer personas, from integration specialists to enterprise teams, with no cliffs on their experience.
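As promised above, here is a minimal sketch of the client/orchestrator/activity split, written with the standard Durable Functions programming model. It is illustrative only: the orchestrator and activity names, and the plain SendGreetingActivity standing in for a real connector action, are assumptions rather than part of the Codeful Workflows SDK, whose connector surface is still in preview.

    // Minimal sketch only. "HelloOrchestrator" matches the name started by the
    // HelloTrigger client function shown in the next section; the activity here
    // is a plain Durable Functions activity standing in for a connector action.
    using System.Threading.Tasks;
    using Microsoft.Azure.WebJobs;
    using Microsoft.Azure.WebJobs.Extensions.DurableTask;

    public static class HelloOrchestration
    {
        [FunctionName("HelloOrchestrator")]
        public static async Task<string> RunOrchestrator(
            [OrchestrationTrigger] IDurableOrchestrationContext context)
        {
            // The client function passes the workflow input when it starts the orchestration.
            var input = context.GetInput<HTTPHelloInput>();

            // Each call below is replayed deterministically by the Durable Task framework.
            string result = await context.CallActivityAsync<string>("SendGreetingActivity", input.Greeting);

            return result;
        }

        [FunctionName("SendGreetingActivity")]
        public static string SendGreeting([ActivityTrigger] string greeting)
        {
            // A real codeful workflow would invoke a connector action here via the SDK.
            return $"Processed: {greeting}";
        }
    }

    // Assumed shape of the input type referenced by the trigger sample in the next section.
    public class HTTPHelloInput
    {
        public string Greeting { get; set; }
    }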
Creating Codeful Workflows

Designing codeful workflows begins with creating a new Logic Apps project within Visual Studio Code, configured for .NET and the Durable Tasks framework. From triggers to actions, developers gain full flexibility to define their workflows programmatically.

Implementing Triggers

Triggers are the entry points of workflows, and in Codeful Workflows they are defined as Client Functions. For example, an HTTP trigger can start a workflow when a request is received:

    [FunctionName("HelloTrigger")]
    public static async Task<HttpResponseMessage> HttpStart(
        [HttpTrigger(AuthorizationLevel.Anonymous, "get", "post")] HttpRequestMessage req,
        [DurableClient] IDurableOrchestrationClient starter,
        ILogger log)
    {
        var requestContent = await req.Content.ReadAsStringAsync();
        var workflowInput = new HTTPHelloInput
        {
            Greeting = $"Hello from Codeful workflows. You said '{requestContent}'"
        };

        log.LogInformation("Workflow Input = '{workflowInput}'.", JsonSerializer.Serialize(workflowInput));

        string instanceId = await starter.StartNewAsync("HelloOrchestrator", workflowInput);
        log.LogInformation("Started orchestration with ID = '{instanceId}'.", instanceId);

        return await starter.WaitForCompletionOrCreateCheckStatusResponseAsync(req, instanceId);
    }

Using Connector Actions

Both Managed and Service Provider actions are available to be used within your orchestrations. They are organized in the SDK by type, making it easy to find the right connector to use. Once you identify the action to use, you can use the rich IntelliSense interface to generate inputs and call the action directly in your orchestration code.

Deployment and Operations

Deploying a Logic Apps Standard app that uses both codeful and codeless workflows follows the same practices already available in Logic Apps Standard. Operational insights, such as endpoint visibility and execution monitoring, are provided within the Azure portal, ensuring parity with the functionality available for codeless workflows. This cohesive deployment model allows organizations to maximize their resources and cater to diverse development needs, whether they require quick prototyping via low-code tools or robust, scalable solutions through pro-code implementations.

Codeful Workflows and Intelligent Agents

You can take advantage of codeful workflows and Logic Apps Standard Agent Loop to create new intelligent applications that embed advanced AI decision-making directly into your processes – enabling your apps and automation to not just follow predefined steps, but to reason, adapt, and act autonomously towards goals. See this demo where we share two approaches to implementing agent loops: combining codeful and codeless workflows, where you can reuse existing workflows as tools, and writing agent loop actions directly with code.

Looking for feedback on Codeful Workflows

We are looking for early feedback on this feature. If you are interested in participating in a private preview, please use the form below to register your interest and we will contact you to share the instructions. https://aka.ms/lacodeful/privatepreview/form

Healthcare Agent Orchestrator: Multi-agent Framework for Domain-Specific Decision Support
At Microsoft Build, we introduced the Healthcare Agent Orchestrator, now available in the Azure AI Foundry Agent Catalog. In this blog, we unpack the science: how we structured the architecture, curated real tumor board data, and built robust agent coordination that brings AI into real healthcare workflows.

Healthcare Agent Orchestrator assisting a simulated tumor board meeting.

Introduction

Healthcare is inherently collaborative. Critical decisions often require input from multiple specialists—radiologists, pathologists, oncologists, and geneticists—working together to deliver the best outcomes for patients. Yet most AI systems today are designed around narrow tasks or single-agent architectures, failing to reflect the real-world teamwork that defines healthcare practice. That's why we developed the Healthcare Agent Orchestrator: an orchestrator and code sample built around Microsoft's industry-leading healthcare AI models, designed to support reasoning and multidisciplinary collaboration, enabling modular, interpretable AI workflows that mirror how healthcare teams actually work. The orchestrator brings together Microsoft healthcare AI models—such as MedImageParse for image recognition, CXRReportGen for automated radiology reporting, and MedImageInsight for retrieval and similarity analysis—into a unified, task-aware system that enables developers to build an agent that reflects real-world healthcare decision-making patterns.

Healthcare Is Naturally Multi-Agent

Healthcare decision-making often requires synthesizing diverse data types—radiologic images, pathology slides, genetic markers, and unstructured clinical narratives—while reconciling differing expert perspectives. In a molecular tumor board, for instance, a radiologist might highlight a suspicious lesion on CT imaging, a pathologist may flag discordant biopsy findings, and a geneticist could identify a mutation pointing toward an alternate treatment path. Effective collaboration in these settings hinges not on isolated analysis, but on structured dialogue—where evidence is surfaced, assumptions are challenged, and hypotheses are iteratively refined. To support the development of the Healthcare Agent Orchestrator, we partnered with a leading healthcare provider organization, which independently curated and de-identified a proprietary dataset comprising longitudinal patient records and real tumor board transcripts—capturing the complexity of multidisciplinary discussions. We provided guidance on the data types most relevant for evaluating agent coordination, reasoning handoffs, and task alignment in collaborative settings. We then applied LLM-based structuring techniques to convert de-identified free-form transcripts into interpretable units, followed by expert review to ensure domain fidelity and relevance. This dataset provides a critical foundation for assessing agent coordination, reasoning handoffs, and task alignment in simulated collaborative settings.
Why General-Purpose LLMs Fall Short for Healthcare Collaboration

While general-purpose large language models have delivered remarkable results in many domains, they face key limitations in high-stakes healthcare environments:
Precision is critical: Even small hallucinations or inconsistencies can compromise safety and decision quality.
Multi-modal integration is required: Many healthcare decisions involve interpreting and correlating diverse data types—images, reports, structured records—much of which is not available in public training sets.
Transparency and traceability matter: Users must understand how conclusions are formed and be able to audit intermediate steps.

The Healthcare Agent Orchestrator addresses these challenges by pairing general reasoning capabilities with specialized agents that operate over imaging, genomics, and structured EHRs—ensuring grounded, explainable results aligned with clinical expectations. Each agent contributes domain-specific expertise, while the orchestrator ensures coherence, oversight, and explainability—resulting in outputs that are both grounded and verifiable.

Architecture: Coordinating Specialists Through Orchestration

Healthcare Agent Orchestrator.

The Healthcare Agent Orchestrator's multi-agent framework is built on modular AI infrastructure, designed for secure, scalable collaboration:
Semantic Kernel: A lightweight, open-source development kit for building AI agents and integrating the latest AI models into C#, Python, or Java codebases. It acts as efficient middleware for rapidly delivering enterprise-grade solutions—modular, extensible, and designed to support responsible AI at scale.
Model Context Protocol (MCP): An open standard that enables developers to build secure, two-way connections between their data sources and AI-powered tools.
Magentic-One: Microsoft's generalist multi-agent system for solving open-ended web and file-based tasks across domains—built on Microsoft AutoGen, our popular open-source framework for developing multi-agent applications.

Each agent is orchestrated within the system and integrated via Semantic Kernel's group chat infrastructure, with support for communication and modular deployment via Azure. This orchestration ensures that each model—whether interpreting a lung nodule, analyzing a biopsy image, or summarizing a genomic variant—is applied precisely where its expertise is most relevant, without overloading a single system with every task. The modularity of the framework also future-proofs it: as new health AI models and tools emerge, they can be seamlessly incorporated into the ecosystem without disrupting existing workflows—enabling continuous innovation while maintaining clinical stability.

Microsoft's Healthcare AI Models at the Core

The Healthcare Agent Orchestrator also enables developers to explore the capabilities of Microsoft's latest healthcare AI models:
CXRReportGen: Integrates multimodal inputs—including current and prior X-ray images and report context—to generate grounded, interpretable radiology reports. The model has shown improved accuracy and transparency in automated chest X-ray interpretation, evaluated on both public and private data.
MedImageParse [3]: A biomedical foundation model for image parsing that can jointly conduct segmentation, detection, and recognition across 9 imaging modalities.
MedImageInsight [4]: Facilitates fast retrieval of clinically similar cases and supports disease classification across a broad range of medical image modalities, accelerating second-opinion generation and diagnostic review workflows.

Each model has the ability to act as a specialized agent within the system, contributing focused expertise while allowing flexible, context-aware collaboration orchestrated at the system level. CXRReportGen is included in the initial release and supports the development and testing of grounded radiology report generation. Other Microsoft healthcare models, such as MedImageParse and MedImageInsight, are being explored in internal prototypes to expand the orchestrator's capabilities across segmentation, detection, and image retrieval tasks.

Seamless Integration with Microsoft Teams

Rather than creating new silos, the Healthcare Agent Orchestrator integrates directly into the tools clinicians already use—specifically Microsoft Teams. Developers are investigating how clinicians can engage with agents through natural conversation, asking questions, requesting second opinions, or cross-validating findings—all without leaving their primary collaboration environment. This approach minimizes friction, improves user experience, and brings cutting-edge AI into real-world care settings.

Building Toward Robust, Trustworthy Multi-Agent Collaboration

Think of the orchestrator as managing a secure, structured group chat. Each participant is a specialized AI agent—such as a 'Radiology' agent, 'PatientHistory' agent, or 'ClinicalTrials' agent. At the center is the 'Orchestrator' agent, which moderates the interaction: assigning tasks, maintaining shared context, and resolving conflicting outputs. Agents can also communicate directly with one another, exchanging intermediate results or clarifying inputs. Meanwhile, the user can engage either with the orchestrator or with specific agents as needed. Each agent is configured with instructions (the system prompt that guides its reasoning) and a description (used by both the UI and the orchestrator to determine when the agent should be activated). For example, the Radiology agent is paired with the cxr_report_gen tool, which wraps Microsoft's CXRReportGen model for generating findings from chest X-ray images. Tools like this are declared under the agent's tools field and allow it to call foundation models or other capabilities on demand—such as the clinical_trials tool [5] for querying ClinicalTrials.gov. Only one agent is marked as facilitator, designating it as the moderator of the conversation; in this scenario, the Orchestrator agent fills that role.
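To make the group-chat pattern concrete, here is a hedged sketch using the Semantic Kernel agents preview, which the orchestrator builds on. It is illustrative only and not taken from the Healthcare Agent Orchestrator code: the agent names and instructions are placeholders, the real system also wires in tools such as cxr_report_gen plus selection and termination strategies, and the preview type names (ChatCompletionAgent, AgentGroupChat) may shift between package versions. It assumes a .NET 8 console project with implicit usings and the Semantic Kernel agents preview packages.

    // Illustrative sketch of the Semantic Kernel group-chat pattern (preview APIs;
    // type names and the experimental diagnostic ID may change between versions).
    #pragma warning disable SKEXP0110
    using Microsoft.SemanticKernel;
    using Microsoft.SemanticKernel.Agents;
    using Microsoft.SemanticKernel.Agents.Chat;
    using Microsoft.SemanticKernel.ChatCompletion;

    Kernel kernel = Kernel.CreateBuilder()
        .AddAzureOpenAIChatCompletion(
            deploymentName: "<deployment>",   // placeholder values
            endpoint: "<endpoint>",
            apiKey: "<api-key>")
        .Build();

    ChatCompletionAgent orchestrator = new()
    {
        Name = "Orchestrator",
        Instructions = "Moderate the discussion, assign tasks, and reconcile conflicting findings.",
        Kernel = kernel,
    };

    ChatCompletionAgent radiology = new()
    {
        Name = "Radiology",
        Instructions = "Summarize imaging findings; defer non-imaging questions to other agents.",
        Kernel = kernel,
    };

    // The group chat keeps shared context and lets agents respond in turn.
    AgentGroupChat chat = new(orchestrator, radiology);

    chat.AddChatMessage(new ChatMessageContent(AuthorRole.User,
        "Summarize the imaging findings for the simulated tumor board case."));

    await foreach (ChatMessageContent reply in chat.InvokeAsync())
    {
        Console.WriteLine($"{reply.AuthorName}: {reply.Content}");
    }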
Early observations highlight that multi-agent orchestration introduces new complexities—even as it improves specialization and task alignment. To address these emergent challenges, we are actively evolving the framework across several dimensions:
Mitigating Error Propagation Across Agents: Ensuring that early-stage errors by one agent do not cascade unchecked through subsequent reasoning steps. This includes introducing critical checkpoints where outputs from key agents are verified before being consumed by others.
Optimizing Agent Selection and Specialization: Recognizing that more agents are not always better; adding unnecessary or redundant agents can introduce noise and confusion. We've implemented a systematic framework that emphasizes a few highly suited agents per task—dynamically selected based on case complexity and domain needs—while continuously tracking performance gains and catching regressions early.
Improving Transparency and Hand-off Clarity: Structuring agent interactions to make intermediate outputs and rationales visible, enabling developers (and the system itself) to trace how conclusions were reached, catch inconsistencies early, and intervene when necessary.

Adapting General Frameworks for Healthcare Complexity

Generic orchestration frameworks like Semantic Kernel provide a strong foundation—but healthcare demands more. The stakes are higher, the data more nuanced, and the workflows require precision, traceability, and regulatory compliance. Here's how we've extended and adapted these systems to help address healthcare demands:
Precision and Safety: We introduced domain-aware verification checkpoints and task-specific agent constraints to reduce inappropriate tool usage—supporting more reliable reasoning. To help uphold the high standards required in healthcare, we defined two complementary metric systems (see Healthcare Agent Orchestrator Evaluation for more details):
Core Metrics: Monitor health agent selection accuracy, intent resolution, contextual relevance, and information aggregation.
RoughMetric: A composite score based on ROUGE that helps quantify the precision of generated outputs and conversation reliability.
TBFact: A modified version of RadFact [2] that measures the factuality of claims in agents' messages and helps identify omissions and hallucinations.
Domain-Specific Tool Planning: Healthcare agents must reason across multimodal inputs—such as chest X-rays, CT slices, pathology images, and structured EHRs. We've customized Semantic Kernel's tool invocation and planning modules to reflect clinical workflows, not generic task chains.

These infrastructure-level adaptations are designed to complement Microsoft healthcare AI models—such as CXRReportGen, MedImageParse, and MedImageInsight—working together to enable coordinated, domain-aware reasoning across complex healthcare tasks.

Enabling Collaborative, Trustworthy AI in Healthcare

Healthcare demands AI systems that are as collaborative, adaptive, and trustworthy as the clinical teams they aim to support. The Healthcare Agent Orchestrator is a concrete step toward that vision—pairing specialized health AI models with a flexible, multi-agent coordination framework, purpose-built to reflect the complexity of real clinical decision-making. By aligning with existing healthcare workflows and enabling transparent, role-specific collaboration, this system shows promise to empower clinicians to work more effectively—with AI as a partner, not a replacement.

The Healthcare Multi-Agent Orchestrator and the Microsoft healthcare AI models are intended for research and development use. They are not designed or intended to be deployed in clinical settings as-is, nor are they intended for use in the diagnosis or treatment of any health or medical condition, and their performance for such purposes has not been established.
You bear sole responsibility and liability for any use of the Healthcare Multi-Agent Orchestrator or the healthcare AI models, including verification of outputs and incorporation into any product or service intended for a medical purpose or to inform clinical decision-making, compliance with applicable healthcare laws and regulations, and obtaining any necessary clearances or approvals.

References
[1] arXiv, Universal Abstraction: Harnessing Frontier Models to Structure Real-World Data at Scale, February 2, 2025
[2] arXiv, MAIRA-2: Grounded Radiology Report Generation, June 6, 2024
[3] Nature Methods, A foundation model for joint segmentation, detection and recognition of biomedical objects across nine modalities, November 18, 2024
[4] arXiv, MedImageInsight: An open-source embedding model for general domain medical imaging, October 9, 2024
[5] Machine Learning for Healthcare Conference, Scaling Clinical Trial Matching Using Large Language Models: A Case Study in Oncology, August 4, 2023

Join the Fabric Partner Community for the next Fabric Engineering Connection calls!
Are you a Microsoft partner that is interested in data and analytics? Be sure to join us for the next Fabric Engineering Connection call, now offered at two different times! 🎉 Next week's Fabric Engineering Connection calls will include Mirroring Updates from Maraki Ketema and Swetha Mannepalli along with a presentation from Amir H. Jafari on Data Agent Integrations with Power BI Copilot, AI Foundry, and Copilot Studio. The Americas/EMEA Fabric Engineering Connection call will take place Wednesday, May 28, from 8-9 am PDT. The APAC Fabric Engineering Connection call will take place Thursday, May 29, from 1-2 am UTC/Wednesday, May 28, from 5-6 pm PDT. This is your opportunity to learn more, ask questions, and provide feedback. To join the call, you must be a member of the Fabric Partner Community Teams channel. To join, complete the participation form at https://aka.ms/JoinFabricPartnerCommunity. We can't wait to see you later this week!

🔒 Strengthening Azure DNS Zone Security with RBAC and Resource Locks
🔎 DNS security is more than just configuration: it's about protecting critical assets against unauthorized changes and accidental deletions.
🔎 Managing DNS zones effectively requires a layered security approach.
🔎 Two powerful mechanisms in Azure: Role-Based Access Control (RBAC) and Resource Locks

🚀 Role-Based Access Control (RBAC) 🚀
* Granular DNS Access Control *
RBAC ensures controlled access management at both the DNS zone and record set levels. Instead of assigning broad permissions, RBAC enables precise delegation using built-in roles such as:
🔹 Owner – Full control over the DNS zone, including configurations and deletions.
🔹 Contributor – Can modify DNS settings but cannot change access permissions.
🔹 Network Contributor – Can manage networking configurations related to DNS, but not modify records.
🔹 DNS Zone Contributor – Dedicated role for managing DNS zones without broader networking privileges.

✅ Key Advantages of RBAC in DNS Security:
✔ Prevent unauthorized modifications by restricting access to only necessary roles.
✔ Ensure operational integrity by limiting exposure to critical configurations.
✔ Improve governance by aligning roles with organizational security policies.

🔐 Resource Locks 🔐
* Guardrails for DNS Protection *
Even with well-defined RBAC settings, accidental deletions can still occur. Azure Resource Locks add an additional safeguard by preventing changes to a DNS zone or specific record sets.
🔹 Zone Lock ----> Protects an entire DNS zone from being deleted, preserving all associated record sets.
🔹 SOA Lock ----> Prevents unintentional zone deletions while allowing record modifications within the zone.

✅ How Resource Locks Enhance Security:
✔ Shields DNS zones from accidental or malicious deletions.
✔ Maintains continuity by ensuring record sets remain intact.
✔ Strengthens compliance controls for critical infrastructure.
A minimal sketch of applying a delete lock with the Azure SDK for .NET follows at the end of this post.

🛠 Best Practices for Securing DNS with RBAC & Resource Locks
🔸 Assign least privilege roles—never give unnecessary access.
🔸 Implement locks on essential zones to prevent configuration errors.
🔸 Regularly audit access permissions using Azure Policy & Activity Logs.
🔸 Use Automation & Alerts to track modifications for enhanced security.

🔹 Implementing RBAC & Resource Locks ensures your cloud environment remains secure, operational, and fault-tolerant.
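Here is the lock sketch mentioned above: applying a CanNotDelete lock to a DNS zone with the Azure SDK for .NET. Treat it as an assumption-laden illustration rather than a verified snippet: it presumes the Azure.Identity, Azure.ResourceManager, and Azure.ResourceManager.Dns packages, the member names follow the SDK's usual generated patterns but may differ slightly between versions, and the resource group, zone, and lock names are placeholders.

    // Sketch only: applies a CanNotDelete ("delete") lock to a DNS zone.
    // Package and member names are assumptions based on the Azure SDK's
    // standard patterns and may need adjusting to your SDK version.
    using Azure;
    using Azure.Identity;
    using Azure.ResourceManager;
    using Azure.ResourceManager.Dns;
    using Azure.ResourceManager.Resources;
    using Azure.ResourceManager.Resources.Models;

    var armClient = new ArmClient(new DefaultAzureCredential());

    // Placeholder names: replace with your own resource group and zone.
    SubscriptionResource subscription = await armClient.GetDefaultSubscriptionAsync();
    ResourceGroupResource resourceGroup = await subscription.GetResourceGroupAsync("rg-dns-prod");
    DnsZoneResource zone = await resourceGroup.GetDnsZoneAsync("contoso.com");

    // CanNotDelete still allows record-set changes but blocks deleting the zone.
    var lockData = new ManagementLockData(ManagementLockLevel.CanNotDelete)
    {
        Notes = "Protect production DNS zone from accidental deletion."
    };

    await zone.GetManagementLocks().CreateOrUpdateAsync(
        WaitUntil.Completed, "dns-zone-delete-lock", lockData);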
413 Request Entity Too Large error while updating watchlist

We are getting an error while using the Watchlists API. We delete the watchlists and then re-create them. While creating a watchlist, we previously got the error "Cannot upload watchlist ''. Delete is currently in progress", but since last week we have been getting a different error, pasted below. We need to resolve both errors, as many customers are impacted by this. We have tried working with watchlist items rather than whole watchlists, but getting watchlist items only returns 100 records at a time, and this times out if the lists are big. FYI, our largest list is around 1.5 MB, and I see in the documentation that up to 3.8 MB is supported when pushing watchlists. Here is the full error detail – Encountered an unknown error while processing technology action - Error handling method 'AzureSentinelManagementClient updateWatchListForIntelPush due to [] with response status 413 and body 413 Request Entity Too Large