<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>Healthcare and Life Sciences Blog articles</title>
    <link>https://techcommunity.microsoft.com/t5/healthcare-and-life-sciences/bg-p/HealthcareAndLifeSciencesBlog</link>
    <description>Healthcare and Life Sciences Blog articles</description>
    <pubDate>Sun, 26 Apr 2026 00:10:40 GMT</pubDate>
    <dc:creator>HealthcareAndLifeSciencesBlog</dc:creator>
    <dc:date>2026-04-26T00:10:40Z</dc:date>
    <item>
      <title>Reimagining Cancer R&amp;D with Agentic AI Using GigaTIME in Microsoft Discovery</title>
      <link>https://techcommunity.microsoft.com/t5/healthcare-and-life-sciences/reimagining-cancer-r-d-with-agentic-ai-using-gigatime-in/ba-p/4513545</link>
      <description>&lt;P&gt;&lt;A class="lia-external-url" href="https://www.linkedin.com/in/alberto-santamaria/" target="_blank" rel="noopener"&gt;@Alberto Santamaria-Pang&lt;/A&gt;, &lt;BR /&gt;Principal AI Data Scientist, Industry Solutions Engineering Healthcare&lt;BR /&gt;Adjunct Faculty at Johns Hopkins School of Medicine&lt;BR /&gt;&lt;A href="https://www.linkedin.com/in/mersoy/" target="_blank" rel="noopener"&gt;@Alexander Mehmet Ersoy&lt;/A&gt;, &lt;BR /&gt;Dir. Industry Advisory, Healthcare &amp;amp; Life Sciences&amp;nbsp;&lt;/P&gt;
&lt;H1&gt;1. Introduction: From images to insight in modern oncology&lt;/H1&gt;
&lt;P class="lia-align-justify"&gt;What if we could characterize every single cell in a tumor not just by how it looks under the microscope, but by the biological signals that shape how it behaves, how it evades the immune system, and how it responds to therapy? This question sits at the heart of modern oncology and precision medicine. Advances in artificial intelligence and spatial biology are rapidly lowering the barrier to understanding cancer at cellular and molecular resolution, supporting research into more precise, more personalized, and ultimately more effective treatments. Immuno-oncology already offers a glimpse of what becomes possible when therapy is guided by biology rather than averages. For example, the FDA approval of tisagenlecleucel for relapsed or refactory B-cell acute lymphoblastic leukemia was supported by an overall remission rate of 82.5%, underscoring how meaningful outcomes can be when treatment aligns with the right biological signals [1]. The challenge is scale: how do we make this type of biologically informed decision-making feasible across millions of patients, diverse tumor types, and real-world clinical settings?&lt;/P&gt;
&lt;P class="lia-align-justify"&gt;Two recent Microsoft innovations help address that challenge, at different layers of the R&amp;amp;D stack: The GigaTIME AI Framework (a model and workflow for virtual mIF generation from routine pathology) and Microsoft Discovery platform (the agentic R&amp;amp;D platform that orchestrates data, tools, and AI Agents). In this time, we introduce GigaTIME in general (including a practical tutorial on how model can be used), and then show how GigaTIME could be used within, and in the context of, the Discovery platform as one tool that helps accelerate precision oncology discovery.&lt;/P&gt;
&lt;H1&gt;2. GigaTIME: Scaling tumor microenvironment insight from routine pathology&lt;/H1&gt;
&lt;P class="lia-align-justify"&gt;A routine hematoxylin and eosin (H&amp;amp;E) slide is a common cost-efficient diagnostic tool used to understand the specifics of patient’s oncological condition. It is like a high-resolution photograph of a complex cellular community. An H&amp;amp;E slide captures structure, morphology, and organization in remarkable detail, but it cannot fully reveal how cells are communicating or which molecular programs are active beneath the surface. This is why multiplex immunofluorescence (mIF) and related spatial proteomics assays have become so valuable in oncology research: they reveal protein patterns linked to immune identity, checkpoint signaling, proliferation, and tumor context. Their broad use, however, is limited by cost and throughput, which makes large-scale tumor immune microenvironment analysis difficult [2]. GigaTIME provides an important bridge. It translates routine H&amp;amp;E pathology slides into virtual mIF images across 21 protein channels, making it possible to infer spatially resolved, biologically meaningful virtual mIF patterns from a much more accessible input. In this blog, we focus on what that means at the tissue level: how to interpret selected virtual mIF signals, how to localize them in cellular context, and why that matters for understanding tumor–immune interactions in oncology [3].&lt;/P&gt;
&lt;img /&gt;
&lt;P class="lia-align-center"&gt;&lt;STRONG&gt;&lt;EM&gt;Figure 1. GigaTIME Workflow schematic.&lt;/EM&gt;&lt;/STRONG&gt;&lt;/P&gt;
&lt;H2&gt;2.1 Reading virtual mIF signals in context&lt;/H2&gt;
&lt;P&gt;To make the virtual mIF panel easier to interpret, it helps to think of the tissue as two interacting compartments: the tumor compartment, where malignant growth and tumor-associated programs dominate, and the stroma or host compartment, where immune cells, vasculature, and connective tissue either resist, reshape, or sometimes enable tumor progression. The most important biology often happens at the boundary between these two worlds. Rather than reading the panel as a flat list of proteins, we can read it as a guide to tumor geography, immune access, checkpoint context, proliferation, and tissue infrastructure.&lt;/P&gt;
&lt;P class="lia-align-center"&gt;&lt;BR /&gt;&lt;STRONG&gt;&lt;EM&gt;Table 1. Selected markers produced in GigaTIME.&lt;/EM&gt;&lt;/STRONG&gt;&lt;/P&gt;
&lt;DIV class="styles_lia-table-wrapper__h6Xo9 styles_table-responsive__MW0lN"&gt;&lt;table border="1" style="width: 93.5185%; height: 310.4px; border-width: 1px;"&gt;&lt;tbody&gt;&lt;tr style="height: 38.8px;"&gt;&lt;td style="height: 38.8px;"&gt;
&lt;P&gt;&lt;STRONG&gt;Marker&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td style="height: 38.8px;"&gt;
&lt;P&gt;&lt;STRONG&gt;What it represents&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td style="height: 38.8px;"&gt;
&lt;P&gt;&lt;STRONG&gt;Why it matters biologically&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr style="height: 38.8px;"&gt;&lt;td style="height: 38.8px;"&gt;
&lt;P&gt;&lt;STRONG&gt;CK&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td style="height: 38.8px;"&gt;
&lt;P&gt;Tumor-rich epithelial regions&lt;/P&gt;
&lt;/td&gt;&lt;td style="height: 38.8px;"&gt;
&lt;P&gt;Defines where the tumor compartment is located.&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr style="height: 38.8px;"&gt;&lt;td style="height: 38.8px;"&gt;
&lt;P&gt;&lt;STRONG&gt;DAPI&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td style="height: 38.8px;"&gt;
&lt;P&gt;Cell nuclei&lt;/P&gt;
&lt;/td&gt;&lt;td style="height: 38.8px;"&gt;
&lt;P&gt;Anchors localization at the single-cell level.&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr style="height: 38.8px;"&gt;&lt;td style="height: 38.8px;"&gt;
&lt;P&gt;&lt;STRONG&gt;CD8&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td style="height: 38.8px;"&gt;
&lt;P&gt;Cytotoxic T cells&lt;/P&gt;
&lt;/td&gt;&lt;td style="height: 38.8px;"&gt;
&lt;P&gt;Helps assess whether immune cells are infiltrating tumor regions.&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr style="height: 38.8px;"&gt;&lt;td style="height: 38.8px;"&gt;
&lt;P&gt;&lt;STRONG&gt;CD68&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td style="height: 38.8px;"&gt;
&lt;P&gt;Macrophage-associated signal&lt;/P&gt;
&lt;/td&gt;&lt;td style="height: 38.8px;"&gt;
&lt;P&gt;Highlights myeloid context at tumor borders or within tumor-rich tissue.&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr style="height: 38.8px;"&gt;&lt;td style="height: 38.8px;"&gt;
&lt;P&gt;&lt;STRONG&gt;PD-1 / PD-L1&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td style="height: 38.8px;"&gt;
&lt;P&gt;Checkpoint-associated signaling&lt;/P&gt;
&lt;/td&gt;&lt;td style="height: 38.8px;"&gt;
&lt;P&gt;Provides context on whether immune activity may be locally restrained.&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr style="height: 38.8px;"&gt;&lt;td style="height: 38.8px;"&gt;
&lt;P&gt;&lt;STRONG&gt;Ki67&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td style="height: 38.8px;"&gt;
&lt;P&gt;Proliferation&lt;/P&gt;
&lt;/td&gt;&lt;td style="height: 38.8px;"&gt;
&lt;P&gt;Indicates whether tumor-rich regions are actively cycling.&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr style="height: 38.8px;"&gt;&lt;td style="height: 38.8px;"&gt;
&lt;P&gt;&lt;STRONG&gt;CD34&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td style="height: 38.8px;"&gt;
&lt;P&gt;Vasculature&lt;/P&gt;
&lt;/td&gt;&lt;td style="height: 38.8px;"&gt;
&lt;P&gt;Helps interpret access routes and stromal context around the tumor.&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;colgroup&gt;&lt;col style="width: 33.33%" /&gt;&lt;col style="width: 33.33%" /&gt;&lt;col style="width: 33.33%" /&gt;&lt;/colgroup&gt;&lt;/table&gt;&lt;/DIV&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P class="lia-align-justify"&gt;In this blog, we focus on a small set of markers that are especially useful for reading tumor geography, immune access, checkpoint biology, proliferation, and vascular organization. To make that concrete, we implemented a practical notebook that shows how the GigaTIME model can be deployed as an endpoint, used for inference on H&amp;amp;E patches, and combined with single-cell localization to support downstream phenotyping and interpretation. The main point is not any one marker in isolation, but how marker combinations organize in space and help us ask more meaningful questions about tumor–host interaction.&lt;/P&gt;
&lt;H2&gt;2.2 From H&amp;amp;E to virtual mIF: how GigaTIME works&lt;/H2&gt;
&lt;P class="lia-align-justify"&gt;The starting point is a sample-level H&amp;amp;E patch from the test dataset, paired with a compressed label file that contains binary marker masks and cell-segmentation scaffolds used downstream. The workflow is intentionally practical: load the H&amp;amp;E input, generate or reuse GigaTIME predictions, visualize selected virtual mIF channels, refine those predictions with single-cell localization, and summarize the results as virtual phenotypes and per-marker counts [4].&lt;/P&gt;
&lt;P class="lia-align-justify"&gt;At the model-output stage, GigaTIME produces a multi-channel spatial prediction stack from the H&amp;amp;E patch. In the notebook, each channel can be visualized as a virtual mIF map indicating where the model predicts marker-associated signal in the tissue. However, these raw virtual mIF maps are not yet cell phenotypes. To make them biologically interpretable, the notebook converts dense predictions into cell-aware assignments. It uses labels_dapi for nuclear regions and labels_dapi_expanded for expanded cell regions, then computes the fraction of positive pixels within each segmented region. Marker positivity is assigned only when the overlap exceeds a threshold, with localization adjusted according to expected marker biology, such as nuclear versus non-nuclear signal [5].&lt;/P&gt;
&lt;P class="lia-align-justify"&gt;This same localization scaffold also supports validation. Because the reference files provide binarized marker masks together with shared nuclei and expanded-cell labels, predicted signal and reference signal can be compared in the same segmented cellular space rather than only as unstructured image intensities. Once virtual mIF maps are tied to individual nuclei or cell regions, they become both quantitative and spatial, supporting measurements of infiltration, compartment-specific localization, and per-marker cell counts that can be aggregated across samples. You can access the tutorial here: &lt;A href="https://aka.ms/gigatime-sample" target="_blank" rel="noopener"&gt;https://aka.ms/gigatime-sample&lt;/A&gt;&lt;STRONG&gt;.&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;img /&gt;
&lt;P class="lia-align-center"&gt;&lt;STRONG&gt;&lt;EM&gt;Figure 2. Example H&amp;amp;E patch and virtual mIF output.&lt;/EM&gt;&lt;/STRONG&gt;&lt;/P&gt;
&lt;H2&gt;2.3 Virtual Phenotyping&lt;/H2&gt;
&lt;P class="lia-align-justify"&gt;Once the virtual mIF maps have been localized to segmented cells, they can be interpreted as spatial phenotypes rather than diffuse prediction maps. In this tutorial, we use a limited sample dataset to demonstrate how these localized overlays can be reproduced and read biologically in practice. The goal is not to make broad claims from a small set of examples, but to show how virtual phenotyping connects marker prediction, cellular localization, and tumor microenvironment interpretation. In real applications, this type of workflow would typically require additional fine-tuning and validation to account for imaging conditions, tissue context, cohort composition, and study-specific marker panels.&lt;/P&gt;
&lt;P class="lia-align-justify"&gt;At a high level, the figures in this section can be read through four themes: tumor–immune interaction, immune system structure, immune checkpoint biology, and stromal and vascular context. These themes translate localized virtual mIF signals into biologically meaningful spatial patterns. Rather than reading each marker in isolation, we can read how marker combinations organize near tumor-rich regions, immune niches, and tissue boundaries. These same concepts are already used in modern oncology, where immune infiltration, immune organization, checkpoint signaling, and vascular or stromal remodeling all shape how therapies are developed and interpreted [6–9].&lt;/P&gt;
&lt;DIV class="styles_lia-table-wrapper__h6Xo9 styles_table-responsive__MW0lN"&gt;&lt;table border="1" style="width: 99.7222%; border-width: 1px;"&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;&lt;STRONG&gt;Theme&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;&lt;STRONG&gt;Biological interpretation&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;&lt;STRONG&gt;Example marker trends&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;&lt;STRONG&gt;Therapy relevance&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;&lt;STRONG&gt;Tumor–immune interaction [6]&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Tumor-rich compartment is being accessed by immune cells, shaped by myeloid cells, and actively proliferating.&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;• Higher CD8 near CK-rich regions suggests immune infiltration;&lt;BR /&gt;• CD68 concentrated at the tumor border suggests a myeloid interface or barrier;&lt;BR /&gt;• Higher Ki67 within CK-rich regions suggests active tumor proliferation.&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Higher intratumoral CD8 is generally favorable for anti-tumor immunity; border-restricted CD68 may reflect a suppressive interface; high Ki67 in CK-rich regions is generally unfavorable because it suggests active tumor growth.&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;&lt;STRONG&gt;Immune system structure [7]&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Immune compartment appears coordinated, sparse, balanced, or spatially segregated.&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;• CD3 and CD20 co-localized in organized clusters suggest structured lymphoid neighborhoods;&lt;BR /&gt;• Balanced CD4 and CD8 distributions suggest a coordinated immune context;&lt;BR /&gt;• Fragmented or separated patterns suggest a less organized response.&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Organized lymphoid structure and balanced adaptive immune populations are generally favorable; fragmented or sparse immune organization may indicate weaker local immune coordination.&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;&lt;STRONG&gt;Immune checkpoint biology [8]&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Immune cells are present but may be locally restrained by inhibitory signaling.&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;• CD8 overlapping with PD-L1 suggests immune presence in a potentially suppressive niche;&lt;/P&gt;
&lt;P&gt;• CD3 overlapping with PD-1 suggests T cells in a checkpoint-associated state consistent with local restraint.&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Context-dependent: this may indicate a restrained immune response that could be relevant to checkpoint blockade, but not automatically a positive or negative finding in isolation.&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;&lt;STRONG&gt;Stromal and vascular context [9]&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Tissue structure supports access, creates barriers, or concentrates inflammatory niches.&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;• CD34 aligned near CK-rich regions suggests vascular routes close to tumor compartments;&lt;BR /&gt;• Tryptase and CD68 clustered in stromal or perivascular regions suggest innate inflammatory niches that may shape local signaling and access.&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Context-dependent: vascular proximity can support access, while stromal or perivascular inflammatory niches may either facilitate response or reinforce barriers depending on the broader microenvironment.&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;colgroup&gt;&lt;col style="width: 25.00%" /&gt;&lt;col style="width: 25.00%" /&gt;&lt;col style="width: 25.00%" /&gt;&lt;col style="width: 25.00%" /&gt;&lt;/colgroup&gt;&lt;/table&gt;&lt;/DIV&gt;
&lt;P class="lia-align-center"&gt;&lt;STRONG&gt;&lt;EM&gt;Table 2. Quick guide to interpreting virtual phenotyping themes.&lt;/EM&gt;&lt;/STRONG&gt;&lt;/P&gt;
&lt;H3&gt;2.3.1 Tumor–immune interaction&lt;/H3&gt;
&lt;img /&gt;
&lt;P class="lia-align-center"&gt;&lt;STRONG&gt;&lt;EM&gt;Figure 3. Tumor–immune interaction.&lt;/EM&gt;&lt;/STRONG&gt;&lt;/P&gt;
&lt;P class="lia-align-justify"&gt;We begin with a central question in the tumor microenvironment: can immune cells reach the tumor? In Figure 3, the CK-centered overlays provide a compact way to read this biology. CK + CD8 shows tumor-rich regions alongside cytotoxic T-cell signal, allowing us to ask whether immune cells are infiltrating tumor nests, remaining at the border, or being excluded from the tumor core. CK + CD68 adds macrophage context and helps highlight whether myeloid cells are embedded within tumor-rich regions or concentrated at the tumor–stroma interface. CK + Ki67 complements these immune overlays by showing whether the same tumor-rich regions also display strong proliferative activity.&lt;/P&gt;
&lt;P class="lia-align-justify"&gt;Read together, these panels provide a concise illustrative summary of tumor geography, immune access, myeloid interface biology, and growth state. Are immune cells entering the malignant compartment, or is access limited? Are macrophages mixing with tumor cells or forming a border-associated niche? Are tumor-rich regions relatively quiescent, or are they actively cycling? Even in a tutorial setting, this combination of overlays shows how virtual markers can move beyond visualization and support structured interpretation of the tumor immune microenvironment.&lt;/P&gt;
&lt;H3&gt;2.3.2 Immune system structure&lt;/H3&gt;
&lt;img /&gt;
&lt;P class="lia-align-center"&gt;&lt;STRONG&gt;&lt;EM&gt;Figure 4. Immune system structure.&lt;/EM&gt;&lt;/STRONG&gt;&lt;/P&gt;
&lt;P class="lia-align-justify"&gt;Virtual phenotyping is also useful for understanding how immune populations are organized beyond the tumor border itself. In Figure 4, overlays such as CD3 + CD20 and CD4 + CD8 provide a view into the composition and organization of the lymphoid compartment. Rather than asking only whether immune cells are present, these panels help us ask whether the immune landscape appears coordinated, sparse, balanced, or spatially segregated. This matters because immune presence alone does not fully capture immune effectiveness; spatial arrangement can suggest very different biological states.&lt;/P&gt;
&lt;H3&gt;2.3.3 Immune checkpoint biology&lt;/H3&gt;
&lt;img /&gt;
&lt;P class="lia-align-center"&gt;&lt;STRONG&gt;&lt;EM&gt;Figure 5. Immune checkpoint biology.&lt;/EM&gt;&lt;/STRONG&gt;&lt;/P&gt;
&lt;P class="lia-align-justify"&gt;Checkpoint biology provides another layer of interpretation that is especially relevant in immuno-oncology. In Figure 5, overlays such as CD8 + PD-L1 and CD3 + PD-1 help connect immune presence with local regulatory signals. These panels are useful because they show that immune cells may be present in the tissue and still not be fully effective if their activity is being restrained by checkpoint-associated biology. Spatial overlap between T-cell markers and checkpoint-associated signal does not, by itself, prove immune exhaustion or therapeutic response, but it can provide context that is consistent with restrained or suppressed immune activity.&lt;/P&gt;
&lt;H3&gt;2.3.4 Stromal and vascular context&lt;/H3&gt;
&lt;img /&gt;
&lt;P class="lia-align-center"&gt;&lt;EM&gt;&lt;STRONG&gt;Figure 6. Stromal and vascular context.&lt;/STRONG&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;/EM&gt;&lt;/P&gt;
&lt;P class="lia-align-justify"&gt;The tumor microenvironment is also shaped by the surrounding tissue infrastructure. In Figure 6, overlays such as CD34 + CK and Tryptase + CD68 help reveal how vessels, stromal niches, and innate immune populations are positioned relative to tumor-rich regions. These patterns matter because immune access, tumor expansion, and local signaling are all influenced by the organization of the supporting tissue around the tumor. By including vascular and stromal context, the notebook helps show how virtual markers can support a more complete spatial interpretation of tumor–host interaction.&lt;/P&gt;
&lt;P class="lia-align-justify"&gt;These examples show how virtual phenotyping transforms raw virtual mIF maps into interpretable spatial summaries of the tumor microenvironment. After localization, the outputs are no longer just probability maps; they become cell-aware patterns that can be read in terms of immune infiltration, tumor growth, checkpoint context, stromal organization, and compartment-specific localization.&lt;/P&gt;
&lt;P class="lia-align-justify"&gt;The goal of examples is reproducibility and interpretation rather than broad biological generalization. The limited dataset is useful because it makes the workflow easy to follow and the figures easy to inspect, but real deployment would require additional tuning, validation, and adaptation for the target imaging workflow and marker set. Even with that caveat, this workflow illustrates the practical value of GigaTIME: virtual mIF predictions become most useful when they are localized, contextualized, and interpreted as part of a spatial system rather than as isolated channels.&lt;/P&gt;
&lt;H1&gt;3. Microsoft Discovery: Transform the end‑to‑end discovery process from hypothesis generation to simulation, evaluation, iteration, and design&lt;/H1&gt;
&lt;P class="lia-align-justify"&gt;&lt;A class="lia-external-url" href="http://aka.ms/discoveryplatform" target="_blank" rel="noopener"&gt;Microsoft Discovery&lt;/A&gt; is designed as an enterprise agentic AI platform. It is built around a graph-based knowledge engine and teams of specialized AI agents that collaborate with scientists throughout the discovery cycle from literature reasoning and hypothesis formation to simulation and iterative learning. With Microsoft Discovery, teams can:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;
&lt;DIV class="lia-align-justify"&gt;Accelerate end‑to‑end research with autonomous, multi‑agent systems that conduct literature analysis, scientific reasoning, simulation, and tool execution at scale&lt;/DIV&gt;
&lt;/LI&gt;
&lt;LI&gt;
&lt;DIV class="lia-align-justify"&gt;Unify institutional knowledge through GraphRAG‑powered Bookshelves that transform proprietary documents and scientific data into structured, queryable knowledge graphs&lt;/DIV&gt;
&lt;/LI&gt;
&lt;LI&gt;
&lt;DIV class="lia-align-justify"&gt;Scale advanced computation on Azure supercomputing infrastructure to support large‑scale simulation, modeling, and design‑space exploration&lt;/DIV&gt;
&lt;/LI&gt;
&lt;LI&gt;
&lt;DIV class="lia-align-justify"&gt;Collaborate with confidence in enterprise‑grade workspaces featuring built‑in RBAC, managed identities, and full data sovereignty&lt;/DIV&gt;
&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;img /&gt;
&lt;P class="lia-align-center"&gt;&lt;STRONG&gt;&lt;EM&gt;Figure 7. Microsoft Discovery.&lt;/EM&gt;&lt;/STRONG&gt;&lt;/P&gt;
&lt;P class="lia-align-justify"&gt;Importantly, Discovery does not treat AI outputs as final answers. Instead, it embeds them into an explicit scientific reasoning loop, where:&lt;/P&gt;
&lt;UL class="lia-align-justify"&gt;
&lt;LI&gt;Knowledge is represented as contextual, versioned graphs rather than static text&lt;/LI&gt;
&lt;LI&gt;Conflicting evidence and assumptions are surfaced, not hidden&lt;/LI&gt;
&lt;LI&gt;AI agents specialize, adapt, and learn across iterations&lt;/LI&gt;
&lt;LI&gt;Researchers remain in control, with traceable sources and explainable steps&lt;/LI&gt;
&lt;/UL&gt;
&lt;P class="lia-align-justify"&gt;All outputs are intended to support, not replace, expert scientific and clinical judgment.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;img /&gt;
&lt;P class="lia-align-center"&gt;&lt;STRONG&gt;&lt;EM&gt;Figure 8. Microsoft Discovery Scientific Reasoning Loop.&lt;/EM&gt;&lt;/STRONG&gt;&lt;/P&gt;
&lt;P class="lia-align-justify"&gt;Built on Microsoft Azure, Microsoft Discovery orchestrates teams of specialized AI agents using a graph-based knowledge engineering framework and able to leverage AI models available through Microsoft Foundry. The platform integrates advanced AI, high-performance computing (HPC) and quantum capabilities, and can connect insights back to the physical world to enable continuous experimentation and refinement. Meanwhile, Microsoft Discovery remains fully extensible to an organization’s own models, agents, tools, and datasets while meeting stringent enterprise requirements for trust, governance, security and compliance.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;img /&gt;
&lt;P class="lia-align-center"&gt;&lt;STRONG&gt;&lt;EM&gt;Figure 9. Enterprise Agentic R&amp;amp;D Platform Microsoft Discovery.&lt;/EM&gt;&lt;/STRONG&gt;&lt;/P&gt;
&lt;H1&gt;4. Using GigaTIME within Microsoft Discovery for precision oncology R&amp;amp;D&lt;/H1&gt;
&lt;P class="lia-align-justify"&gt;Microsoft Discovery is the overall agentic R&amp;amp;D platform. GigaTIME is one of the many AI tools that can be used on the Discovery platform to generate spatially resolved tumor microenvironment features from routine pathology, and then connect those features to downstream reasoning, validation, and iteration. GigaTIME provides population-scale, spatially resolved tumor microenvironment features derived from routine pathology.&lt;/P&gt;
&lt;P class="lia-align-justify"&gt;When GigaTIME runs as a standalone notebook or point solution, the pipeline is often held together by ad hoc storage, cross-team handoffs, and manual input/output tracking (for example, whole slide images and patches in one location, predictions in another, single-cell localization outputs elsewhere, and downstream analyses in separate scripts).&lt;/P&gt;
&lt;P class="lia-align-justify"&gt;In Microsoft Discovery, the pipeline is reshaped with governed ingestion, model execution, post-processing/feature extraction, and iterative reasoning. So that each stage produces typed, versioned inputs for the next instead of “files you have to hunt down”. Operationalizing GigaTIME in Discovery shifts the day-to-day experience from “run a model, then assemble context elsewhere” to “ask, explore, and iterate in one governed workspace”. In addition to that, Microsoft Discovery provides comprehensive suite of tools that transform data from sources like science catalog and AI models into actionable insights and validated findings. These tools include intelligent multi-agent orchestration, a cognitive discovery engine, a bookshelf, high-performance compute and validation of hypotheses, scientific reasoning, and an iteration framework.&lt;/P&gt;
&lt;P class="lia-align-justify"&gt;Within a Discovery Platform, researcher can build customized analytics workflows for image ingestion, model inference, visualization, and these can become standardized building blocks rather than one-off analyses. Because the platform is extensible, teams can integrate additional models from Microsoft Foundry, third-party tools, or in-house pipelines alongside GigaTIME, creating a governed, end-to-end tumor immune phenotyping and discovery workflow.&lt;/P&gt;
&lt;img /&gt;
&lt;P class="lia-align-center"&gt;&lt;STRONG&gt;&lt;EM&gt;Figure 10. Microsoft Discovery platform using GigaTIME as R&amp;amp;D tool (alongside other models, data sources, and R&amp;amp;D capabilities)&lt;/EM&gt;&lt;/STRONG&gt;&lt;/P&gt;
&lt;P class="lia-align-justify"&gt;In the future, we expect Discovery to empower the research community to explore several other R&amp;amp;D applications by incorporating new models like GigaTIME alongside additional tools, datasets, experimental systems, and domain knowledge, including:&lt;/P&gt;
&lt;UL class="lia-align-justify"&gt;
&lt;LI&gt;Exploring tumor responses to immunotherapy by linking them to spatial immune context&lt;/LI&gt;
&lt;LI&gt;Supporting drug-discovery research by connecting spatial phenotypes to molecular pathways and targets&lt;/LI&gt;
&lt;LI&gt;Helping researchers generate hypotheses about candidate biomarkers and therapeutic targets by contextualizing population-scale signals against prior evidence in a knowledge graph.&lt;/LI&gt;
&lt;LI&gt;Informing research on treatment stratification using cell-aware spatial signatures beyond bulk averages&lt;/LI&gt;
&lt;/UL&gt;
&lt;P class="lia-align-justify"&gt;GigaTIME and Microsoft Discovery are intended for research and development purposes. They are not medical devices and are not intended to diagnose, prevent, monitor, predict, prognose, treat, or alleviate any disease or condition. Any clinical application would require separate validation and applicable regulatory clearance.&lt;/P&gt;
&lt;H1&gt;5. From tutorial to platform-scale impact&lt;/H1&gt;
&lt;P class="lia-align-justify"&gt;The virtual phenotyping the tumor immune microenvironment with GigaTIME shows that virtual mIF outputs are most useful value when they are localized, contextualized, and interpreted as part of a spatial system rather than&amp;nbsp; isolated channels. When integrated into Microsoft Discovery, these outputs form the foundation for scalable, auditable, and collaborative oncology R&amp;amp;D.&lt;/P&gt;
&lt;P class="lia-align-justify"&gt;With this integration, Microsoft Discovery reflects a broader shift in how AI is applied to science. The objective is no longer simply to run individual models or analyses faster, but to help evolve how R&amp;amp;D is conducted by embedding reasoning, learning, and orchestration directly into the scientific process. In this way, outputs from tools like GigaTIME can be translated into testable hypotheses and validated decisions.&lt;/P&gt;
&lt;P class="lia-align-justify"&gt;Ultimately, this about providing tools that can help researchers examine complex systems, structure their reasoning, and iterate on their analyses.&lt;/P&gt;
&lt;P class="lia-align-justify"&gt;Microsoft Discovery is now available in preview. Ready to take the next steps and try out platform with GigaTIME and any other Microsoft 1P or 3P Models available through Microsoft Foundry:&lt;/P&gt;
&lt;P class="lia-align-justify"&gt;Microsoft Discovery expended preview announcement &amp;nbsp;&lt;A href="https://aka.ms/MicrosoftDiscoveryBlog" target="_blank" rel="noopener"&gt;https://aka.ms/MicrosoftDiscoveryBlog&lt;/A&gt;&lt;/P&gt;
&lt;P class="lia-align-justify"&gt;Learn and practice how Microsoft Discovery can help scientists and engineers transform research and development at &lt;A href="https://aka.ms/microsoftdiscovery" target="_blank" rel="noopener"&gt;https://aka.ms/microsoftdiscovery&lt;/A&gt;&lt;/P&gt;
&lt;P class="lia-align-justify"&gt;Follow our tutorial notebook to understand how to deploy GigaTIME using Microsoft Foundry model catalog, reproduce the results described here, and understand how to use it for your own workloads: &lt;A href="https://aka.ms/gigatime-sample" target="_blank" rel="noopener"&gt;https://aka.ms/gigatime-sample&lt;/A&gt;&lt;/P&gt;
&lt;P class="lia-align-justify"&gt;Access &lt;A href="https://ai.azure.com/catalog/models/GigaTIME/" target="_blank" rel="noopener"&gt;GigaTIME model card&lt;/A&gt;, learn model details and access deployment.&lt;/P&gt;
&lt;P class="lia-align-justify"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P class="lia-align-justify"&gt;&lt;EM&gt;This post contains forward-looking statements regarding potential future capabilities, research directions, and applications of GigaTIME and Microsoft Discovery. These statements reflect current plans and expectations, are subject to change without notice, and do not constitute a commitment to deliver any functionality, feature, code, or service. Actual results may differ.&lt;/EM&gt;&lt;/P&gt;
&lt;H4&gt;Special thanks to the Microsoft cross-functional team for their great support:&lt;/H4&gt;
&lt;P&gt;&lt;A class="lia-external-url" href="https://www.linkedin.com/in/jeya-maria-jose-357951130/" target="_blank" rel="noopener"&gt;@Jeya Maria Jose Valanarasu&lt;/A&gt;, Sr. Scientist, Microsoft Research Health Futures&lt;BR /&gt;&lt;A class="lia-external-url" href="https://www.linkedin.com/in/naoto-usuyama/" target="_blank" rel="noopener"&gt;@Naoto Usuyama&lt;/A&gt;, Principal Researcher at Microsoft Research Health Futures&lt;BR /&gt;&lt;A class="lia-external-url" href="https://www.linkedin.com/in/hao-qiu-996126127/" target="_blank" rel="noopener"&gt;@Hao Qiu&lt;/A&gt;, Data Scientist, HLS Frontiers&lt;BR /&gt;&lt;A class="lia-external-url" href="https://www.linkedin.com/in/itarapov/" target="_blank" rel="noopener"&gt;@Ivan Tarapov&lt;/A&gt;, Senior Director, Multimodal Healthcare AI at Microsoft &lt;BR /&gt;&lt;A class="lia-external-url" href="https://www.linkedin.com/in/saumilshri/" target="_blank" rel="noopener"&gt;@Saumil Shrivastava&lt;/A&gt;, Principal Product Manager, Microsoft Foundry&lt;BR /&gt;&lt;A class="lia-external-url" href="https://www.linkedin.com/in/bella11/" target="_blank" rel="noopener"&gt;@Bella Chan&lt;/A&gt;, Principal Product Manager, Microsoft Discovery&lt;BR /&gt;&lt;A class="lia-external-url" href="https://www.linkedin.com/in/ash-jogalekar-0649934/" target="_blank" rel="noopener"&gt;@Ash Jogalekar&lt;/A&gt;, Senior Program Manager, Microsoft Discovery&lt;BR /&gt;&lt;A class="lia-external-url" href="https://www.linkedin.com/in/nihitpokhrel/" target="_blank" rel="noopener"&gt;@Nihit Pokhrel&lt;/A&gt;, Senior Product Manager, Microsoft Discovery&lt;BR /&gt;&lt;A class="lia-external-url" href="https://www.linkedin.com/in/lily-k-kim/" target="_blank" rel="noopener"&gt;@Lily Kim&lt;/A&gt;, General Manager, Microsoft Discovery&lt;BR /&gt;&lt;A class="lia-external-url" href="https://www.linkedin.com/in/samueldefreitasmartins/" target="_blank" rel="noopener"&gt;@Samuel De Freitas Martins&lt;/A&gt;, Senior Director, Strategy and 
Partnerships&lt;BR /&gt;&lt;A href="https://www.linkedin.com/in/mu-wei-038a3849/" target="_blank" rel="noopener"&gt;@Mu Wei&lt;/A&gt;, Principal Applied Science Manager, Health and Life Sciences&lt;BR /&gt;&lt;A class="lia-external-url" href="https://www.linkedin.com/in/hoifung-poon-9559943/" target="_blank" rel="noopener"&gt;@Hoifung Poon&lt;/A&gt;, General Manager, Microsoft Research Health Futures&lt;/P&gt;
&lt;H1&gt;References&lt;/H1&gt;
&lt;P&gt;[1] U.S. Food and Drug Administration. FDA approves tisagenlecleucel for B-cell ALL and tocilizumab for cytokine release syndrome. 2017.&lt;/P&gt;
&lt;P&gt;[2] Valanarasu JMJ, et al. Multimodal AI generates virtual population for tumor microenvironment modeling. Cell. 2026.&lt;/P&gt;
&lt;P&gt;[3] Valanarasu JMJ, et al. Multimodal AI generates virtual population for tumor microenvironment modeling. Cell. 2026.&lt;/P&gt;
&lt;P&gt;[4] Sood A, et al. &lt;A href="https://pmc.ncbi.nlm.nih.gov/articles/PMC7472296/" target="_blank" rel="noopener"&gt;Comparison of Multiplexed Immunofluorescence Imaging to Chromogenic Immunohistochemistry of Skin Biomarkers in Response to Monkeypox. Viruses 12(8):787&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;[5] Santamaria-Pang A, et al. &lt;A href="https://arxiv.org/pdf/2007.09471" target="_blank" rel="noopener"&gt;Automated Phenotyping via Cell Auto Training (CAT) on the Cell DIVE Platform. 2019 IEEE BIBM&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;[6] Brummel K, Eerkens AL, de Bruyn M, et al. Tumour-infiltrating lymphocytes: from prognosis to treatment selection. British Journal of Cancer. 2023.&lt;/P&gt;
&lt;P&gt;[7] Zhao L, Jin S, Wu H. Tertiary lymphoid structures in diseases: immune mechanisms and therapeutic advances. Signal Transduction and Targeted Therapy. 2024.&lt;/P&gt;
&lt;P&gt;[8] Sun Q, Hong Z, Zhang C, et al. Immune checkpoint therapy for solid tumours: clinical dilemmas and future trends. Signal Transduction and Targeted Therapy. 2023.&lt;/P&gt;
&lt;P&gt;[9] Choi Y, Jung K. Normalization of the tumor microenvironment by harnessing vascular and immune modulation to achieve enhanced cancer therapy. Experimental &amp;amp; Molecular Medicine. 2023.&lt;/P&gt;</description>
      <pubDate>Thu, 23 Apr 2026 16:14:57 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/healthcare-and-life-sciences/reimagining-cancer-r-d-with-agentic-ai-using-gigatime-in/ba-p/4513545</guid>
      <dc:creator>Alberto_Santamaria</dc:creator>
      <dc:date>2026-04-23T16:14:57Z</dc:date>
    </item>
    <item>
      <title>Modernizing Digital Health Record Governance with Microsoft Entra Identity Governance</title>
      <link>https://techcommunity.microsoft.com/t5/healthcare-and-life-sciences/modernizing-digital-health-record-governance-with-microsoft/ba-p/4512739</link>
      <description>&lt;P&gt;The digital transformation of healthcare continues to accelerate. Clinicians expect near-instant access to Electronic Health Records (EHRs), clinical workflows increasingly span cloud and on-premises systems, and regulatory pressures around identity, access, and auditability have never been higher.&lt;/P&gt;
&lt;P&gt;For healthcare security and IT leaders, one challenge consistently rises to the top: ensuring the right clinicians have the right access to EHR systems—no more, no less—throughout their lifecycle.&lt;/P&gt;
&lt;P&gt;Microsoft Entra Identity Governance was built to help address these challenges. By connecting authoritative workforce data to Microsoft Entra, automating joiner-mover-leaver processes, governing access through access packages, and recertifying access over time with access reviews, organizations can move from manual administration to policy-driven automation across the workforce lifecycle.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;This represents an important evolution for healthcare organizations that have historically relied on on-premises identity tooling to synchronize data among HR systems, directories, and clinical applications. With Entra Identity Governance, Microsoft provides cloud-driven identity lifecycle automation, application provisioning, entitlement management, and access reviews that can be applied to users, guests, agents, groups, and enterprise applications—including EHR systems.&lt;/P&gt;
&lt;P&gt;EHR platforms such as Epic, Oracle Health (Cerner), and Meditech were designed to support complex clinical roles, dynamic care teams, and granular security models. Our goal with Entra Identity Governance is to simplify and automate the provisioning and lifecycle of these digital health records.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;Provisioning&lt;/H2&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Provisioning starts with a source of authority. Microsoft Entra Identity Governance HR-driven provisioning creates digital identities based on human resources systems, and Microsoft’s API-driven inbound provisioning extends that model by supporting integration with virtually any system of record, including credential systems, payroll systems, spreadsheets, flat files, and SQL tables.&lt;/P&gt;
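As a rough sketch of what API-driven inbound provisioning consumes, the snippet below assembles a SCIM BulkRequest payload of the kind a system of record would send to the provisioning endpoint. The worker records and attribute names here are illustrative placeholders, not a definitive schema for any particular HR or credential system.

```python
import json

# Sketch: build a SCIM BulkRequest payload for Microsoft Entra API-driven
# inbound provisioning. The employee records below are invented placeholders.
def build_bulk_request(workers):
    """workers: list of dicts from any system of record (HR, payroll, CSV)."""
    operations = []
    for i, w in enumerate(workers, start=1):
        operations.append({
            "method": "POST",
            "bulkId": str(i),
            "path": "/Users",
            "data": {
                "schemas": ["urn:ietf:params:scim:schemas:core:2.0:User"],
                "externalId": w["employee_id"],  # key used to match identities
                "userName": w["upn"],
                "active": True,
                "name": {"givenName": w["first"], "familyName": w["last"]},
            },
        })
    return {
        "schemas": ["urn:ietf:params:scim:api:messages:2.0:BulkRequest"],
        "Operations": operations,
    }

payload = build_bulk_request([
    {"employee_id": "E1001", "upn": "rn.nguyen@contoso.com",
     "first": "Riley", "last": "Nguyen"},
])
print(json.dumps(payload, indent=2))
```

In a real deployment this JSON body would be posted to the provisioning job's bulk-upload endpoint, after which attribute mappings in Entra take over.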
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Once workforce data is in Microsoft Entra ID, IT administrators can standardize attribute mappings and establish the identity foundation for joiner, mover, and leaver processes. Entra Identity Governance Lifecycle Workflows can automate downstream tasks after the identity is established, helping organizations coordinate onboarding, internal moves, and offboarding with less manual effort.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;From there, Microsoft Entra automatic app provisioning can create, maintain, and remove user identities and entitlements in connected applications. Provisioning is supported through connectors, protocols, agents, and Azure Functions and Logic Apps for SCIM, LDAP, SQL, REST, SOAP, PowerShell, and even custom ECMA- and API-based scenarios. For healthcare organizations, that means Microsoft Entra can serve as the control plane for governed downstream access to the directories, groups, enterprise applications, and electronic health record (EHR) systems of their choice.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;Entitlement Management&lt;/H2&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Provisioning establishes the identity, but Microsoft Entra Entitlement Management governs what that identity can request and maintain access to. Entitlement management is the identity governance capability that automates access request workflows and access assignments. The core construct is the Access Package, which bundles all resources a user needs together in one governed unit.&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Access packages can include applications, entitlements, groups, Teams, and SharePoint Online sites. Policies control who can request access, whether approvals are required, whether business justification is collected, and how long the assignment should last. This helps organizations move away from one-off entitlement decisions and toward a repeatable, policy-driven model that is automated.&lt;/P&gt;
&lt;P&gt;Electronic health record systems may contain hundreds or even thousands of granular entitlements. Using Microsoft Entra entitlement management and access packages, customers can model clinical roles and automatically assign entitlements to users throughout their lifecycle. This enables both role-based access control (RBAC) and attribute-based access control (ABAC) scenarios. Instead of manually stitching together individual permissions, organizations can publish business-friendly access packages for healthcare roles that are approved, time-bound, and easier to audit.&lt;/P&gt;
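To make the role-to-entitlement bundling concrete, here is a minimal sketch of how clinical roles might map to access package contents. The role names, resources, and policy fields are hypothetical stand-ins for the groups, app roles, and EHR security classes an organization would actually govern.

```python
# Sketch: clinical roles modeled as access packages. All names and policy
# values are hypothetical examples, not real Entra resource identifiers.
ACCESS_PACKAGES = {
    "ED Nurse": {
        "resources": ["EHR-ED-Clinical", "SecureMessaging", "ED-Teams-Site"],
        "requires_approval": True,
        "max_duration_days": 365,  # time-bound assignment
    },
    "Travel Nurse": {
        "resources": ["EHR-ED-Clinical", "SecureMessaging"],
        "requires_approval": True,
        "max_duration_days": 90,   # shorter window for contingent staff
    },
}

def entitlements_for(role):
    """Return the governed bundle a user receives for a clinical role."""
    pkg = ACCESS_PACKAGES[role]
    return sorted(pkg["resources"])

print(entitlements_for("Travel Nurse"))
```

The point of the bundle is that approval, justification, and expiry apply once to the whole package rather than to each entitlement separately.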
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;Access Reviews&lt;/H2&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Assigning access is only part of the governance challenge; organizations also need a way to verify that access is still appropriate over time. Access reviews in Microsoft Entra Identity Governance help organizations manage group memberships, access to enterprise applications, and role assignments so that only the right people retain access at the right time.&lt;/P&gt;
&lt;P&gt;Access reviews can be scheduled or ad hoc; delegated to managers, resource owners, or users for self-attestation; and tracked for compliance or policy purposes. Reviews can cover business-critical application access, external users, and even scenarios where systems are disconnected from Entra ID.&lt;/P&gt;
&lt;P&gt;When a review finishes, Microsoft Entra Identity Governance will apply the outcome and remove access from users who no longer need it. In a healthcare context, that gives security and compliance teams a structured way to recertify access to the groups, access packages, and applications tied to EHR workflows that clinicians need.&amp;nbsp; Overall, this reduces access creep and maintains clearer audit evidence for ongoing governance and compliance.&lt;/P&gt;
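The outcome-application step can be illustrated with a small sketch. The user names are invented, and in practice Entra Identity Governance performs this removal automatically when a review closes.

```python
# Sketch: the effect of applying an access review outcome. Users the
# reviewer did not approve lose access, which is how access creep is
# cleaned up over time. Names are illustrative placeholders.
current_members = {"dr.lee", "rn.patel", "contractor.kim"}
approved_by_reviewer = {"dr.lee", "rn.patel"}  # reviewer attestations

def apply_review_outcome(members, approved):
    """Split membership into retained and removed based on attestations."""
    removed = members - approved
    return members - removed, removed

retained, removed = apply_review_outcome(current_members, approved_by_reviewer)
print(sorted(retained), sorted(removed))
```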
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;Microsoft Entra Suite&lt;/H2&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;You can experience the benefits described in this article by deploying Microsoft Entra Identity Governance, which is part of the &lt;A href="https://learn.microsoft.com/en-us/entra/fundamentals/licensing" target="_blank" rel="noopener"&gt;Microsoft Entra Suite&lt;/A&gt;, the industry’s most comprehensive Zero Trust access solution for the workforce.&amp;nbsp;The Microsoft Entra Suite provides everything needed to verify users, prevent overprivileged permissions, improve threat detections, and enforce granular access controls for all users and resources, including electronic health records.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Get started with the Microsoft Entra Suite with a&amp;nbsp;&lt;A href="https://aka.ms/EntraSuiteTrial" target="_blank" rel="noopener"&gt;free 90-day trial&lt;/A&gt;.&lt;/P&gt;
&lt;P&gt;For additional details, please reach out to your Microsoft Representative or Microsoft Partner.&lt;/P&gt;
&lt;P&gt;&lt;EM&gt;&amp;nbsp;&lt;/EM&gt;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Read more on this topic&lt;/STRONG&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;A href="https://learn.microsoft.com/en-us/entra/id-governance/identity-governance-overview" target="_blank" rel="noopener"&gt;What is Microsoft Entra ID Governance?&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://learn.microsoft.com/en-us/entra/id-governance/scenarios/automate-identity-lifecycle" target="_blank" rel="noopener"&gt;Automate identity lifecycle management with Microsoft Entra ID Governance&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://learn.microsoft.com/en-us/entra/identity/app-provisioning/inbound-provisioning-api-concepts" target="_blank" rel="noopener"&gt;API-driven inbound provisioning concepts&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://learn.microsoft.com/en-us/entra/identity/app-provisioning/inbound-provisioning-api-logic-apps" target="_blank" rel="noopener"&gt;API-driven inbound provisioning with Azure Logic Apps&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://learn.microsoft.com/en-us/entra/identity/app-provisioning/user-provisioning" target="_blank" rel="noopener"&gt;What is app provisioning in Microsoft Entra ID?&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://learn.microsoft.com/en-us/entra/id-governance/entitlement-management-overview" target="_blank" rel="noopener"&gt;What is entitlement management?&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://learn.microsoft.com/en-us/entra/id-governance/access-reviews-overview" target="_blank" rel="noopener"&gt;What are access reviews?&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://learn.microsoft.com/en-us/entra/id-governance/deploy-access-reviews" target="_blank" rel="noopener"&gt;Plan a Microsoft Entra access reviews deployment&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://edgile.com/information-security/microsoft-entra-id-epic-connector/" target="_blank" rel="noopener"&gt;Microsoft Entra ID Epic Connector (Wipro)&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;
&lt;P&gt;&lt;A class="lia-external-url" href="https://www.majorkeytech.com/our-success-story/migrating-healthcare-institutions-to-microsoft-entra-id-governance" target="_blank"&gt;Customer Story with MajorKey&lt;/A&gt;&amp;nbsp;&lt;/P&gt;
&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Learn more about Microsoft Entra&amp;nbsp;&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;Prevent identity attacks, ensure least privilege access, unify access controls, and improve the experience for users with comprehensive identity and network access solutions across on-premises and clouds.&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;⁠&lt;A href="https://www.microsoft.com/en-us/security/blog/products/microsoft-entra/" target="_blank" rel="noopener"&gt;Microsoft Entra News and Insights | Microsoft Security Blog&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;⁠&lt;A href="https://techcommunity.microsoft.com/t5/microsoft-entra-blog/bg-p/Identity" target="_blank" rel="noopener"&gt;⁠Microsoft Entra blog | Tech Community&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;⁠&lt;A href="https://learn.microsoft.com/en-us/entra/" target="_blank" rel="noopener"&gt;Microsoft Entra documentation | Microsoft Learn&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://techcommunity.microsoft.com/t5/microsoft-entra/bd-p/Azure-Active-Directory" target="_blank" rel="noopener"&gt;Microsoft Entra discussions | Microsoft Community&amp;nbsp;&lt;/A&gt;&lt;/LI&gt;
&lt;/UL&gt;</description>
      <pubDate>Mon, 20 Apr 2026 13:38:42 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/healthcare-and-life-sciences/modernizing-digital-health-record-governance-with-microsoft/ba-p/4512739</guid>
      <dc:creator>Randall_Irwin</dc:creator>
      <dc:date>2026-04-20T13:38:42Z</dc:date>
    </item>
    <item>
      <title>Driving AI‑Powered Healthcare: A Data &amp; AI Webinar and Workshop Series</title>
      <link>https://techcommunity.microsoft.com/t5/healthcare-and-life-sciences/driving-ai-powered-healthcare-a-data-ai-webinar-and-workshop/ba-p/4509450</link>
      <description>&lt;P&gt;Across these sessions, you’ll learn how healthcare organizations are using Microsoft Fabric, advanced analytics, and AI to unify fragmented data, modernize analytics, and enable intelligent, scalable solutions, from enterprise reporting to AI‑powered use cases.&lt;/P&gt;
&lt;P&gt;Whether you’re just getting started or looking to accelerate adoption, these sessions offer practical guidance, real‑world examples, and hands‑on learning to help you build a strong data foundation for AI in healthcare.&lt;/P&gt;
&lt;DIV class="styles_lia-table-wrapper__h6Xo9 styles_table-responsive__MW0lN"&gt;&lt;table border="1" style="width: 1120px; height: 1524px; border-width: 1px;"&gt;&lt;tbody&gt;&lt;tr style="height: 67px;"&gt;&lt;td style="height: 67px;"&gt;
&lt;P&gt;&lt;STRONG&gt;Date&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td style="height: 67px;"&gt;
&lt;P&gt;&lt;STRONG&gt;Topic&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td style="height: 67px;"&gt;
&lt;P&gt;&lt;STRONG&gt;Details&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-align-center" style="height: 67px;"&gt;
&lt;P&gt;&lt;STRONG&gt;Location&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-align-center" style="height: 67px;"&gt;
&lt;P&gt;&lt;STRONG&gt;Registration Link&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr style="height: 123px;"&gt;&lt;td style="height: 123px;"&gt;
&lt;P&gt;&lt;STRONG&gt;May 6&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td style="height: 123px;"&gt;
&lt;P&gt;&lt;EM&gt;&lt;SPAN class="lia-text-color-15"&gt;&lt;STRONG&gt;Webinar:&lt;/STRONG&gt;&lt;/SPAN&gt;&lt;/EM&gt; Microsoft Fabric Foundations - A Simple Path to Modern Analytics and AI&lt;/P&gt;
&lt;/td&gt;&lt;td style="height: 123px;"&gt;
&lt;P&gt;Discover how Microsoft Fabric consolidates fragmented analytics into a single integrated data platform, making it easier to deliver trusted insights and adopt AI without added complexity.&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-align-center" style="height: 123px;"&gt;
&lt;P&gt;&lt;STRONG&gt;Virtual&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-align-center" style="height: 123px;"&gt;
&lt;P&gt;&lt;A class="lia-external-url" href="https://msit.events.teams.microsoft.com/event/msit.daaca78c-0165-4b5e-862a-a16f7ef0a510@72f988bf-86f1-41af-91ab-2d7cd011db47" target="_blank" rel="noopener"&gt;Register&lt;/A&gt;&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr style="height: 95px;"&gt;&lt;td style="height: 95px;"&gt;
&lt;P&gt;&lt;STRONG&gt;May 13&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td style="height: 95px;"&gt;
&lt;P&gt;&lt;SPAN class="lia-text-color-15"&gt;&lt;EM&gt;&lt;STRONG&gt;Webinar: &lt;/STRONG&gt;&lt;/EM&gt;&lt;/SPAN&gt;Reduce BI Sprawl, Cut Cost and Build an AI-Ready Analytics Foundation&lt;/P&gt;
&lt;/td&gt;&lt;td style="height: 95px;"&gt;
&lt;P&gt;Learn how Power BI enables enterprise BI consolidation, consistent metrics, and secure, scalable analytics that support both operational reporting and emerging AI use cases.&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-align-center" style="height: 95px;"&gt;
&lt;P&gt;&lt;STRONG&gt;Virtual&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-align-center" style="height: 95px;"&gt;
&lt;P&gt;&lt;A class="lia-external-url" href="https://msit.events.teams.microsoft.com/event/msit.b2d7ec67-efc6-4a26-a0f4-d88a075ea6e1@72f988bf-86f1-41af-91ab-2d7cd011db47" target="_blank" rel="noopener"&gt;Register&lt;/A&gt;&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr style="height: 179px;"&gt;&lt;td style="height: 179px;"&gt;
&lt;P&gt;&lt;STRONG&gt;May 19-20&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td style="height: 179px;"&gt;
&lt;P&gt;&lt;SPAN class="lia-text-color-20"&gt;&lt;EM&gt;&lt;STRONG&gt;In Person Workshop: &lt;/STRONG&gt;&lt;/EM&gt;&lt;/SPAN&gt;Driving AI‑Powered Healthcare: Advanced Analytics, AI, and Real‑World Impact&lt;/P&gt;
&lt;/td&gt;&lt;td style="height: 179px;"&gt;
&lt;P&gt;Attend this two‑day, in‑person event to learn how healthcare organizations use Microsoft Fabric to unify data, accelerate AI adoption, and deliver measurable clinical and operational value. Day 1 focuses on strategy, architecture, and real‑world healthcare use cases, while Day 2 offers hands‑on workshops to apply those concepts through guided labs and agent‑powered solutions.&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-align-center" style="height: 179px;"&gt;
&lt;P&gt;&lt;STRONG&gt;Chicago&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-align-center" style="height: 179px;"&gt;&lt;A class="lia-external-url" href="https://ms-workshops.cloudevents.ai/ms-innovation-workshops/events/43044F9C-802E-4FEA-A741-65070E3AD4D0" data-interception="off" target="_blank"&gt;Register&lt;/A&gt;&amp;nbsp;&lt;/td&gt;&lt;/tr&gt;&lt;tr style="height: 151px;"&gt;&lt;td style="height: 151px;"&gt;
&lt;P&gt;&lt;STRONG&gt;May 27&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td style="height: 151px;"&gt;
&lt;P&gt;&lt;SPAN class="lia-text-color-15"&gt;&lt;EM&gt;&lt;STRONG&gt;Webinar:&lt;/STRONG&gt;&lt;/EM&gt; &lt;/SPAN&gt;Unified Data Foundation for AI &amp;amp; Analytics - Leveraging OneLake and Microsoft Fabric&lt;/P&gt;
&lt;/td&gt;&lt;td style="height: 151px;"&gt;
&lt;P&gt;This session shows how organizations can simplify fragmented data architectures by using Microsoft Fabric and OneLake as a single, governed foundation for analytics and AI.&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-align-center" style="height: 151px;"&gt;
&lt;P&gt;&lt;STRONG&gt;Virtual&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-align-center" style="height: 151px;"&gt;
&lt;P&gt;&lt;A class="lia-external-url" href="https://msit.events.teams.microsoft.com/event/msit.25785d2f-684a-42d0-8be2-7221f519463c@72f988bf-86f1-41af-91ab-2d7cd011db47" target="_blank" rel="noopener"&gt;Register&lt;/A&gt;&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr style="height: 179px;"&gt;&lt;td style="height: 179px;"&gt;
&lt;P&gt;&lt;STRONG&gt;May 27-28&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td style="height: 179px;"&gt;
&lt;P&gt;&lt;SPAN class="lia-text-color-20"&gt;&lt;EM&gt;&lt;STRONG&gt;In Person Workshop: &lt;/STRONG&gt;&lt;/EM&gt;&lt;/SPAN&gt;Driving AI‑Powered Healthcare: Advanced Analytics, AI, and Real‑World Impact&lt;/P&gt;
&lt;/td&gt;&lt;td style="height: 179px;"&gt;
&lt;P&gt;Attend this two‑day, in‑person event to learn how healthcare organizations use Microsoft Fabric to unify data, accelerate AI adoption, and deliver measurable clinical and operational value. Day 1 focuses on strategy, architecture, and real‑world healthcare use cases, while Day 2 offers hands‑on workshops to apply those concepts through guided labs and agent‑powered solutions.&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-align-center" style="height: 179px;"&gt;
&lt;P&gt;&lt;STRONG&gt;Silicon Valley&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-align-center" style="height: 179px;"&gt;&lt;A class="lia-external-url" href="https://ms-workshops.cloudevents.ai/ms-innovation-workshops/events/06EB9EA8-F680-4FAC-B11C-F52D4445EA80" target="_blank"&gt;Register&lt;/A&gt;&amp;nbsp;&lt;/td&gt;&lt;/tr&gt;&lt;tr style="height: 123px;"&gt;&lt;td style="height: 123px;"&gt;
&lt;P&gt;&lt;STRONG&gt;June 2&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td style="height: 123px;"&gt;
&lt;P&gt;&lt;SPAN class="lia-text-color-15"&gt;&lt;EM&gt;&lt;STRONG&gt;Webinar: &lt;/STRONG&gt;&lt;/EM&gt;&lt;/SPAN&gt;Delivering Personalized Patient Experiences at Scale with Microsoft Fabric and Adobe&lt;/P&gt;
&lt;/td&gt;&lt;td style="height: 123px;"&gt;
&lt;P&gt;Learn how healthcare organizations can improve patient engagement by unifying trusted data in Microsoft Fabric and activating it through Adobe’s personalization platform.&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-align-center" style="height: 123px;"&gt;
&lt;P&gt;&lt;STRONG&gt;Virtual&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-align-center" style="height: 123px;"&gt;
&lt;P&gt;&lt;A class="lia-external-url" href="https://msit.events.teams.microsoft.com/event/msit.3c6ce6e4-7fbe-4146-a318-3ada460d932b@72f988bf-86f1-41af-91ab-2d7cd011db47" target="_blank" rel="noopener"&gt;Register&lt;/A&gt;&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr style="height: 179px;"&gt;&lt;td style="height: 179px;"&gt;
&lt;P&gt;&lt;STRONG&gt;June 3-4&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td style="height: 179px;"&gt;
&lt;P&gt;&lt;SPAN class="lia-text-color-20"&gt;&lt;EM&gt;&lt;STRONG&gt;In Person Workshop: &lt;/STRONG&gt;&lt;/EM&gt;&lt;/SPAN&gt;Driving AI‑Powered Healthcare: Advanced Analytics, AI, and Real‑World Impact&lt;/P&gt;
&lt;/td&gt;&lt;td style="height: 179px;"&gt;
&lt;P&gt;Attend this two‑day, in‑person event to learn how healthcare organizations use Microsoft Fabric to unify data, accelerate AI adoption, and deliver measurable clinical and operational value. Day 1 focuses on strategy, architecture, and real‑world healthcare use cases, while Day 2 offers hands‑on workshops to apply those concepts through guided labs and agent‑powered solutions.&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-align-center" style="height: 179px;"&gt;
&lt;P&gt;&lt;STRONG&gt;New York&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-align-center" style="height: 179px;"&gt;&lt;A class="lia-external-url" href="https://ms-workshops.cloudevents.ai/ms-innovation-workshops/events/7225CDFA-982D-4C6A-892E-3083C4333457" target="_blank"&gt;Register&lt;/A&gt;&amp;nbsp;&lt;/td&gt;&lt;/tr&gt;&lt;tr style="height: 108px;"&gt;&lt;td style="height: 108px;"&gt;
&lt;P&gt;&lt;STRONG&gt;June 10&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td style="height: 108px;"&gt;
&lt;P&gt;&lt;SPAN class="lia-text-color-15"&gt;&lt;STRONG&gt;Webinar: &lt;/STRONG&gt;&lt;/SPAN&gt;From Data to Decisions: How AI Data Agents in Microsoft Fabric Redefine Analytics&lt;/P&gt;
&lt;/td&gt;&lt;td style="height: 108px;"&gt;
&lt;P&gt;Join us to learn how Fabric Data Agents enable users to interact with enterprise data through AI‑powered, governed agents that understand both data and business context.&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-align-center" style="height: 108px;"&gt;
&lt;P&gt;&lt;STRONG&gt;Virtual&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-align-center" style="height: 108px;"&gt;
&lt;P&gt;&lt;A class="lia-external-url" href="https://msit.events.teams.microsoft.com/event/msit.d29ba7ec-5eea-4f44-a742-b44cd6aed0f6@72f988bf-86f1-41af-91ab-2d7cd011db47" target="_blank" rel="noopener"&gt;Register&lt;/A&gt;&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr style="height: 141px;"&gt;&lt;td style="height: 141px;"&gt;
&lt;P&gt;&lt;STRONG&gt;June 17&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td style="height: 141px;"&gt;
&lt;P&gt;&lt;EM&gt;&lt;SPAN class="lia-text-color-15"&gt;&lt;STRONG&gt;Webinar: &lt;/STRONG&gt;&lt;/SPAN&gt;&lt;/EM&gt;Building the Intelligent Factory: A Unified Data and AI Approach to Life Science Manufacturing&lt;/P&gt;
&lt;/td&gt;&lt;td style="height: 141px;"&gt;
&lt;P&gt;Discover how life science &amp;amp; MedTech manufacturers use Microsoft Fabric to integrate operational, quality, and enterprise data and apply AI‑powered analytics for smarter, faster manufacturing decisions.&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-align-center" style="height: 141px;"&gt;
&lt;P&gt;&lt;STRONG&gt;Virtual&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-align-center" style="height: 141px;"&gt;
&lt;P&gt;&lt;A class="lia-external-url" href="https://msit.events.teams.microsoft.com/event/msit.e0cbf960-3fd6-4524-9e31-3bb8d40d2af7@72f988bf-86f1-41af-91ab-2d7cd011db47" target="_blank" rel="noopener"&gt;Register&lt;/A&gt;&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr style="height: 179px;"&gt;&lt;td style="height: 179px;"&gt;
&lt;P&gt;&lt;STRONG&gt;June 23-24&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td style="height: 179px;"&gt;
&lt;P&gt;&lt;EM&gt;&lt;SPAN class="lia-text-color-20"&gt;&lt;STRONG&gt;In Person Workshop: &lt;/STRONG&gt;&lt;/SPAN&gt;&lt;/EM&gt;Driving AI‑Powered Healthcare: Advanced Analytics, AI, and Real‑World Impact&lt;/P&gt;
&lt;/td&gt;&lt;td style="height: 179px;"&gt;
&lt;P&gt;Attend this two‑day, in‑person event to learn how healthcare organizations use Microsoft Fabric to unify data, accelerate AI adoption, and deliver measurable clinical and operational value. Day 1 focuses on strategy, architecture, and real‑world healthcare use cases, while Day 2 offers hands‑on workshops to apply those concepts through guided labs and agent‑powered solutions.&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-align-center" style="height: 179px;"&gt;
&lt;P&gt;&lt;STRONG&gt;Dallas&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td class="lia-align-center" style="height: 179px;"&gt;&lt;A class="lia-external-url" href="https://ms-workshops.cloudevents.ai/ms-innovation-workshops/events/828AD8CD-F69C-4E3B-BD55-B3228F49F5AA" target="_blank"&gt;Register&lt;/A&gt;&amp;nbsp;&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;colgroup&gt;&lt;col style="width: 109px" /&gt;&lt;col style="width: 258px" /&gt;&lt;col style="width: 516px" /&gt;&lt;col style="width: 111px" /&gt;&lt;col style="width: 126px" /&gt;&lt;/colgroup&gt;&lt;/table&gt;&lt;/DIV&gt;</description>
      <pubDate>Fri, 10 Apr 2026 13:26:40 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/healthcare-and-life-sciences/driving-ai-powered-healthcare-a-data-ai-webinar-and-workshop/ba-p/4509450</guid>
      <dc:creator>CamilleWhicker</dc:creator>
      <dc:date>2026-04-10T13:26:40Z</dc:date>
    </item>
    <item>
      <title>How to Compute GPU Capacity for GPT Models (GPT‑4o and Later)</title>
      <link>https://techcommunity.microsoft.com/t5/healthcare-and-life-sciences/how-to-compute-gpu-capacity-for-gpt-models-gpt-4o-and-later/ba-p/4506930</link>
      <description>&lt;P&gt;When deploying large language models like&amp;nbsp;&lt;STRONG&gt;GPT‑4o&lt;/STRONG&gt;, capacity planning is no longer about picking a GPU SKU. Instead, Azure abstracts GPU compute behind &lt;STRONG&gt;Provisioned Throughput Units (PTUs)&lt;/STRONG&gt;—a model‑centric way to reason about GPU usage, throughput, and latency.&lt;/P&gt;
&lt;P&gt;This post explains &lt;STRONG&gt;how GPU capacity is computed for GPT‑4o‑class models&lt;/STRONG&gt;, and how to translate your workload into the right number of PTUs.&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;From GPUs to Tokens: The Mental Shift&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;With GPT‑4o and newer models, Azure does &lt;STRONG&gt;not&lt;/STRONG&gt; expose GPUs directly. Instead:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;GPU compute is consumed as &lt;STRONG&gt;token throughput&lt;/STRONG&gt;&lt;/LI&gt;
&lt;LI&gt;Throughput is measured in &lt;STRONG&gt;tokens per minute (TPM)&lt;/STRONG&gt;&lt;/LI&gt;
&lt;LI&gt;Capacity is provisioned using &lt;STRONG&gt;PTUs&lt;/STRONG&gt;, which represent a fixed slice of GPU processing capacity&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;A PTU is not “one GPU.” It is a &lt;STRONG&gt;guaranteed amount of model‑processing capacity&lt;/STRONG&gt;, backed by GPUs under the hood and optimized by Azure for that specific model. &lt;A href="https://learn.microsoft.com/en-us/azure/foundry/openai/how-to/provisioned-throughput-onboarding" target="_blank"&gt;[learn.microsoft.com]&lt;/A&gt;, &lt;A href="https://learn.microsoft.com/en-us/azure/foundry/openai/concepts/provisioned-throughput" target="_blank"&gt;[learn.microsoft.com]&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;The Key Change with GPT‑4o&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;For GPT‑4o and later models, &lt;STRONG&gt;input and output tokens are metered separately&lt;/STRONG&gt;.&lt;/P&gt;
&lt;P&gt;That matters because:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Input tokens (prompt processing) are handled largely in parallel, so they are cheaper per token&lt;/LI&gt;
&lt;LI&gt;Output tokens (generation) are produced sequentially, so they are more GPU‑intensive per token&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;Azure therefore assigns &lt;STRONG&gt;separate TPM budgets per PTU&lt;/STRONG&gt; for input and output tokens.&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;GPT‑4o Throughput per PTU&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;For &lt;STRONG&gt;gpt‑4o&lt;/STRONG&gt;, the effective per‑PTU capacities are:&lt;/P&gt;
&lt;DIV class="styles_lia-table-wrapper__h6Xo9 styles_table-responsive__MW0lN"&gt;&lt;table&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;&lt;STRONG&gt;Metric&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;&lt;STRONG&gt;Value&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;Input TPM per PTU&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;~2,500&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;Output TPM per PTU&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;~625&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;Input : Output ratio&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;4 : 1&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;/DIV&gt;
&lt;P&gt;These ratios are baked into Azure’s PTU calculators and provisioning logic.&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;The Core Formula&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;To compute required GPU capacity (PTUs):&lt;/P&gt;
&lt;P&gt;&lt;EM&gt;PTUs = max(Input TPM ÷ 2,500, Output TPM ÷ 625)&lt;/EM&gt;&lt;/P&gt;
&lt;P&gt;Then:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Round up&lt;/LI&gt;
&lt;LI&gt;Apply minimum deployment constraints (e.g., 15 PTUs for Global / Data Zone)&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&lt;STRONG&gt;Step‑by‑Step Example&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;Assume this workload:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;800 input tokens&lt;/LI&gt;
&lt;LI&gt;150 output tokens&lt;/LI&gt;
&lt;LI&gt;30 requests per minute&lt;/LI&gt;
&lt;/UL&gt;
&lt;OL&gt;
&lt;LI&gt;&lt;STRONG&gt;Compute TPM&lt;/STRONG&gt;&lt;/LI&gt;
&lt;/OL&gt;
&lt;P&gt;&lt;STRONG&gt;Input TPM&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&lt;EM&gt;Input TPM = 800 tokens × 30 requests/min = 24,000 TPM&lt;/EM&gt;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Output TPM&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&lt;EM&gt;Output TPM = 150 tokens × 30 requests/min = 4,500 TPM&lt;/EM&gt;&lt;/P&gt;
&lt;OL start="2"&gt;
&lt;LI&gt;&lt;STRONG&gt;Convert to PTUs&lt;/STRONG&gt;&lt;/LI&gt;
&lt;/OL&gt;
&lt;P&gt;&lt;STRONG&gt;Input side&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&lt;EM&gt;Input PTUs = 24,000 ÷ 2,500 = 9.6&lt;/EM&gt;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Output side&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&lt;EM&gt;Output PTUs = 4,500 ÷ 625 = 7.2&lt;/EM&gt;&lt;/P&gt;
&lt;OL start="3"&gt;
&lt;LI&gt;&lt;STRONG&gt;Take the bottleneck&lt;/STRONG&gt;&lt;/LI&gt;
&lt;/OL&gt;
&lt;P&gt;&lt;EM&gt;PTUs = max(9.6, 7.2) = 9.6 → round up to 10&lt;/EM&gt;&lt;/P&gt;
&lt;P&gt;Apply Azure’s &lt;STRONG&gt;minimum deployment size&lt;/STRONG&gt; → &lt;STRONG&gt;15 PTUs required&lt;/STRONG&gt;.&lt;/P&gt;
&lt;P&gt;This is why tables often show PTUs higher than a simple TPM ÷ constant calculation.&lt;/P&gt;
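&lt;P&gt;The sizing arithmetic above can be sketched as a small helper. This is an illustration, not an official API: the function name is hypothetical, the per‑PTU rates (2,500 input TPM, 625 output TPM) and the 15‑PTU minimum are the GPT‑4o figures quoted in this post, and real sizing should be validated with the Azure PTU Calculator.&lt;/P&gt;

```javascript
// Sketch of the PTU sizing arithmetic described above (hypothetical helper).
// Per-PTU rates and the 15-PTU minimum are the GPT-4o figures from this post;
// confirm current values with the Azure PTU Calculator before committing.
function requiredPTUs(inputTokens, outputTokens, requestsPerMinute) {
  const INPUT_TPM_PER_PTU = 2500;
  const OUTPUT_TPM_PER_PTU = 625;
  const MIN_DEPLOYMENT_PTUS = 15; // Global / Data Zone minimum

  const inputTPM = inputTokens * requestsPerMinute;   // 800 * 30 = 24,000
  const outputTPM = outputTokens * requestsPerMinute; // 150 * 30 = 4,500

  // Take the bottleneck of the two sides, round up, apply the minimum.
  const raw = Math.max(inputTPM / INPUT_TPM_PER_PTU,
                       outputTPM / OUTPUT_TPM_PER_PTU);
  return Math.max(Math.ceil(raw), MIN_DEPLOYMENT_PTUS);
}

// The worked example: bottleneck is max(9.6, 7.2) = 9.6, ceil to 10,
// then the 15-PTU deployment minimum applies.
console.log(requiredPTUs(800, 150, 30)); // 15
```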
&lt;P&gt;&lt;STRONG&gt;Why Output Tokens Matter More&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;Output tokens:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Are generated sequentially&lt;/LI&gt;
&lt;LI&gt;Consume GPU compute longer per token&lt;/LI&gt;
&lt;LI&gt;Drive latency and tail performance&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;That’s why GPT‑4o uses a &lt;STRONG&gt;4:1 input‑to‑output ratio&lt;/STRONG&gt;, and why output TPM often becomes the bottleneck in chatty or agentic workloads. &lt;A href="https://modelavailability.com/tools/azure-ptu-calculator" target="_blank"&gt;[modelavailability.com]&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Practical Guidance&lt;/STRONG&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;Short prompts, long answers&lt;/STRONG&gt; → output‑bound → output TPM drives the PTU count&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Large prompts, short answers&lt;/STRONG&gt; → input‑bound → input TPM drives the PTU count&lt;/LI&gt;
&lt;LI&gt;Stable traffic → PTUs give predictable latency&lt;/LI&gt;
&lt;LI&gt;Spiky traffic → consider Standard + spillover&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;Azure recommends validating sizing with the &lt;STRONG&gt;PTU Calculator&lt;/STRONG&gt; and real traffic benchmarks before committing long‑term reservations.&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Final Takeaway&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;For GPT‑4o and newer models, GPU sizing is token‑driven, not hardware‑driven.&lt;BR /&gt;&lt;STRONG&gt;PTUs abstract GPUs&lt;/STRONG&gt;, and the required capacity is simply the &lt;STRONG&gt;maximum of input‑bound and output‑bound throughput needs&lt;/STRONG&gt;.&lt;/P&gt;
&lt;P&gt;Once you understand that, GPT‑4o capacity planning becomes predictable, explainable, and much easier to operate at scale.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Mon, 30 Mar 2026 15:07:49 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/healthcare-and-life-sciences/how-to-compute-gpu-capacity-for-gpt-models-gpt-4o-and-later/ba-p/4506930</guid>
      <dc:creator>Yan_Liang</dc:creator>
      <dc:date>2026-03-30T15:07:49Z</dc:date>
    </item>
    <item>
      <title>Configuring Noise Detection and Barge‑In with Azure Voice Live API</title>
      <link>https://techcommunity.microsoft.com/t5/healthcare-and-life-sciences/configuring-noise-detection-and-barge-in-with-azure-voice-live/ba-p/4506916</link>
      <description>&lt;P&gt;Natural voice conversations depend on two things:&amp;nbsp;&lt;STRONG&gt;knowing when a user is speaking&lt;/STRONG&gt; and &lt;STRONG&gt;letting them interrupt naturally&lt;/STRONG&gt;. Without reliable noise detection and barge‑in, voice agents feel rigid and frustrating—especially in real‑world environments like call centers or mobile scenarios.&lt;/P&gt;
&lt;P&gt;Azure &lt;STRONG&gt;Voice Live API&lt;/STRONG&gt; addresses this by providing &lt;STRONG&gt;built‑in noise handling, server‑side Voice Activity Detection (VAD), and native barge‑in support&lt;/STRONG&gt;—all configurable with a single session update.&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;How Voice Live Handles Noise and Interruption&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;Voice Live performs &lt;STRONG&gt;server‑side speech detection&lt;/STRONG&gt; on the incoming audio stream. Instead of relying on simple volume thresholds, it can use &lt;STRONG&gt;Azure Semantic VAD&lt;/STRONG&gt;, which is more resilient to background noise and conversational fillers.&lt;/P&gt;
&lt;P&gt;When enabled:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Background noise is ignored&lt;/LI&gt;
&lt;LI&gt;Speech start and stop are detected automatically&lt;/LI&gt;
&lt;LI&gt;User speech can interrupt the assistant mid‑response (barge‑in)&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;All of this happens without stitching together separate STT, silence detection, or TTS cancellation logic.&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;The Key Configuration: session.update&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;Noise detection and barge‑in are configured using the &lt;STRONG&gt;session.update&lt;/STRONG&gt; event, typically sent immediately after opening the Voice Live WebSocket session from the client (for example, an Azure Container Apps service or an Azure Function).&lt;/P&gt;
&lt;P&gt;Below is a recommended baseline configuration:&lt;/P&gt;
&lt;pre&gt;{
  "type": "session.update",
  "session": {
    "modalities": ["text", "audio"],
    "input_audio_format": "pcm16",
    "output_audio_format": "pcm16",
    "input_audio_sampling_rate": 24000,

    "turn_detection": {
      "type": "azure_semantic_vad",
      "threshold": 0.5,
      "prefix_padding_ms": 300,
      "silence_duration_ms": 500,
      "interrupt_response": true,
      "auto_truncate": true
    }
  }
}&lt;/pre&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;What this configuration enables:&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Azure Semantic VAD&lt;/STRONG&gt; for robust speech detection&lt;/P&gt;
&lt;P&gt;Uses Azure’s semantic model to detect &lt;STRONG&gt;actual speech&lt;/STRONG&gt;, not just sound energy. This dramatically reduces false positives from background noise.&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;Noise‑tolerant turn detection&lt;/STRONG&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;True barge‑in&lt;/STRONG&gt; via interrupt_response: true&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Immediate stop and truncation&lt;/STRONG&gt; of AI audio when interrupted&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;threshold&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;Controls how sensitive speech detection is.&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Lower value → more sensitive&lt;/LI&gt;
&lt;LI&gt;Higher value → less sensitive&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;Typical values:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;0.3–0.4 for quiet environments&lt;/LI&gt;
&lt;LI&gt;0.5–0.7 for noisy call centers&lt;/LI&gt;
&lt;/UL&gt;
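&lt;P&gt;As a minimal sketch, the environment‑based tuning above can be captured in a small helper that builds the turn_detection payload. The helper name is hypothetical; the threshold values come from the ranges suggested in this post, and the remaining fields match the baseline configuration shown earlier.&lt;/P&gt;

```javascript
// Hypothetical helper: choose a VAD threshold by environment using the
// ranges suggested above (0.3-0.4 for quiet rooms, 0.5-0.7 for noisy
// call centers), then build the turn_detection object for session.update.
function buildTurnDetection(environment) {
  const threshold = environment === "noisy" ? 0.6 : 0.35;
  return {
    type: "azure_semantic_vad",
    threshold: threshold,
    prefix_padding_ms: 300,
    silence_duration_ms: 500,
    interrupt_response: true, // required for barge-in
    auto_truncate: true       // discard unplayed assistant audio
  };
}

console.log(buildTurnDetection("noisy").threshold); // 0.6
```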
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;What Happens at Runtime&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;Once configured:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;The assistant begins speaking&lt;/LI&gt;
&lt;LI&gt;The user starts talking&lt;/LI&gt;
&lt;LI&gt;Voice Live detects speech server‑side&lt;/LI&gt;
&lt;LI&gt;AI audio stops immediately&lt;/LI&gt;
&lt;LI&gt;Unplayed audio is discarded&lt;/LI&gt;
&lt;LI&gt;The user’s speech becomes the active turn&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;No custom interruption logic is required—your application simply reacts to speech start events.&lt;/P&gt;
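&lt;P&gt;On the client, reacting to the speech start event can be as small as the sketch below. The event name ("input_audio_buffer.speech_started") follows the Realtime‑style protocol Voice Live exposes; verify it against the Voice Live event reference. The playback object is a hypothetical stand‑in for your audio output component.&lt;/P&gt;

```javascript
// Minimal client-side reaction to server VAD events (sketch).
// The event name follows the Realtime-style protocol Voice Live exposes;
// verify it against the Voice Live event reference. `playback` is a
// hypothetical stand-in for your audio output component.
function handleServerEvent(rawMessage, playback) {
  const event = JSON.parse(rawMessage);
  if (event.type === "input_audio_buffer.speech_started") {
    // Barge-in: the server detected user speech, so stop the assistant
    // audio immediately and drop anything still queued for playback.
    playback.stop();
    playback.clearQueue();
    return "interrupted";
  }
  return "ignored";
}
```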
&lt;P&gt;&lt;STRONG&gt;Common Mistakes to Avoid&lt;/STRONG&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Sending audio &lt;STRONG&gt;before&lt;/STRONG&gt; session.update&lt;/LI&gt;
&lt;LI&gt;Forgetting interrupt_response: true&lt;/LI&gt;
&lt;LI&gt;Using overly aggressive thresholds&lt;/LI&gt;
&lt;LI&gt;Ignoring speech start events on the client&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Best Practices&lt;/STRONG&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;Use Semantic VAD&lt;/STRONG&gt; in noisy environments (call centers, mobile)&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Tune the threshold&lt;/STRONG&gt; (higher for noisy spaces, lower for quiet rooms)&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Enable echo cancellation and noise suppression&lt;/STRONG&gt; on the client microphone&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Always enable auto_truncate&lt;/STRONG&gt; when using barge‑in&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Sample JavaScript code:&lt;/P&gt;
&lt;pre&gt;import WebSocket from "ws";

const VOICELIVE_URL =
  "wss://&amp;lt;your-resource&amp;gt;.services.ai.azure.com/voice-live/realtime" +
  "?api-version=2025-10-01&amp;amp;model=&amp;lt;model&amp;gt;";

// Use Entra ID token or api-key header
const ws = new WebSocket(VOICELIVE_URL, {
  headers: {
    // Recommended:
    // Authorization: `Bearer ${process.env.AZURE_AI_TOKEN}`
    // or api-key for non-browser clients
    // "api-key": process.env.AZURE_VOICELIVE_API_KEY
  }
});

ws.on("open", () =&amp;gt; {
  console.log("Connected to Voice Live");

  ws.send(JSON.stringify({
    type: "session.update",
    session: {
      modalities: ["text", "audio"],
      input_audio_format: "pcm16",
      output_audio_format: "pcm16",
      input_audio_sampling_rate: 24000,

      turn_detection: {
        type: "azure_semantic_vad",
        threshold: 0.5,
        prefix_padding_ms: 300,
        silence_duration_ms: 500,
        interrupt_response: true,
        auto_truncate: true
      }
    }
  }));
  console.log("session.update sent");
});&lt;/pre&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Mon, 30 Mar 2026 14:17:45 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/healthcare-and-life-sciences/configuring-noise-detection-and-barge-in-with-azure-voice-live/ba-p/4506916</guid>
      <dc:creator>Yan_Liang</dc:creator>
      <dc:date>2026-03-30T14:17:45Z</dc:date>
    </item>
    <item>
      <title>Image Search Series Part V: Building Histopathology Image Search with Prov-GigaPath</title>
      <link>https://techcommunity.microsoft.com/t5/healthcare-and-life-sciences/image-search-series-part-v-building-histopathology-image-search/ba-p/4501392</link>
      <description>&lt;P&gt;&lt;A class="lia-external-url" href="https://www.linkedin.com/in/alberto-santamaria/" target="_blank" rel="noopener"&gt;@Alberto Santamaria-Pang,&lt;/A&gt; Principal AI Data Scientist, Healthcare ISE and Adjunct Faculty, Johns Hopkins School of Medicine&lt;/P&gt;
&lt;P&gt;&lt;A class="lia-external-url" href="https://www.linkedin.com/in/asmabenabacha/" target="_blank" rel="noopener"&gt;@Asma Ben Abacha,&lt;/A&gt;&amp;nbsp;Senior Applied Scientist, HLS AI&lt;/P&gt;
&lt;P&gt;&lt;A class="lia-external-url" href="https://www.linkedin.com/in/manoj1116/" target="_blank" rel="noopener"&gt;@Manoj Kumar,&lt;/A&gt;&amp;nbsp;Director HLS, Data &amp;amp; AI HLS Frontiers AI&lt;/P&gt;
&lt;P&gt;&lt;A class="lia-external-url" href="https://www.linkedin.com/in/jameson-merkow/" target="_blank" rel="noopener"&gt;@Jameson Merkow,&lt;/A&gt; Principal Applied Data Scientist&lt;/P&gt;
&lt;P&gt;&lt;A class="lia-external-url" href="https://www.linkedin.com/in/mu-wei-038a3849/" target="_blank" rel="noopener"&gt;@Mu Wei,&lt;/A&gt;&amp;nbsp;&lt;SPAN aria-hidden="true"&gt;Principal Applied Science Manager, Health and Life Sciences&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;A class="lia-external-url" href="https://www.linkedin.com/in/itarapov/" target="_blank" rel="noopener"&gt;@Ivan Tarapov,&lt;/A&gt; Senior Director, Multimodal Healthcare AI at Microsoft&lt;/P&gt;
&lt;H1&gt;1. Introduction&lt;/H1&gt;
&lt;P class="lia-align-justify"&gt;In earlier posts, we showed how to build a practical 2D medical image search system: take an image, turn it into an embedding with a foundation model, and use similarity search to find the closest prior cases [1]. We also demonstrated why radiology + pathology together matters for cancer workflows, where imaging findings and tissue evidence complement each other and can be combined in a single pipeline [2,3]. But in real clinical practice, prediction alone isn’t enough. Doctors routinely need to pull up similar prior cases across modalities, compare patterns, and check whether what appears on MRI lines up with what is confirmed under the microscope.&lt;/P&gt;
&lt;P class="lia-align-justify"&gt;This post focuses on making that workflow practical for pathology. Using a pathology foundation model (Prov-GigaPath) as a retrieval backbone, we convert pathology images into searchable embeddings, build an index, and return the most similar slides in seconds—using the same retrieval pattern introduced in the image search series [1]. Because this approach fits naturally alongside radiology representations used in multimodal pipelines, it helps close the radiology–pathology gap and supports diagnostic concordance with evidence clinicians can directly review [2,3]. &lt;STRONG&gt;An overview of the end-to-end workflow is shown in Figure 1.&lt;/STRONG&gt;&lt;/P&gt;
&lt;img /&gt;
&lt;P class="lia-align-center"&gt;&lt;STRONG&gt;Figure 1. Histopathology image search workflow with linked radiology (MRI) context.&lt;/STRONG&gt;&lt;/P&gt;
&lt;H1&gt;2. Histopathology Data&lt;/H1&gt;
&lt;P class="lia-align-justify"&gt;Even with strong foundation models, clinicians still face a practical problem: they need to compare current cases against prior cases across radiology and pathology, not just receive a prediction. In real workflows, diagnostic concordance often hinges on questions like: “Do these MRI findings match what we see in the tissue?” and “Have we seen a similar slide pattern before, and what did it correspond to on imaging?” Our tutorial&amp;nbsp;&lt;A class="lia-external-url" href="https://github.com/microsoft/healthcareai-examples/blob/main/azureml/advanced_demos/image_search/2d_pathology_image_search.ipynb" target="_blank" rel="noopener"&gt;2d_pathology_image_search.ipynb&lt;/A&gt;&amp;nbsp;addresses this gap by treating pathology as a search problem: extract embeddings from pathology images, index them, and retrieve the most similar prior cases so clinicians can review evidence rather than rely only on model outputs.&lt;/P&gt;
&lt;P class="lia-align-justify"&gt;A second problem is interoperability. Clinical systems evolve quickly, and the retrieval layer must remain usable even as models change. The architecture in this workflow is intentionally simple and model-agnostic: any foundation model that produces embeddings can plug into the same pipeline (&lt;STRONG&gt;embed → index → retrieve&lt;/STRONG&gt;). In this tutorial we use pathology embeddings from &lt;STRONG&gt;Prov-GigaPath&lt;/STRONG&gt; and take advantage of an existing radiology–pathology mapping (MRI linked to pathology cases) to make retrieval more impactful: once a relevant pathology case is retrieved, the corresponding radiology context can also be surfaced to support concordance. In this notebook the mapping already exists, but in practice the same idea can be extended to &lt;STRONG&gt;multi-modal indexing&lt;/STRONG&gt;, where both pathology and radiology embeddings are indexed (separately or in an aligned space) so that search can pull relevant information across modalities as part of a single workflow.&lt;/P&gt;
&lt;P class="lia-align-justify"&gt;For this tutorial, we use pre-computed pathology embeddings derived from &lt;STRONG&gt;TCGA-GBMLGG&lt;/STRONG&gt;, a curated cohort of &lt;STRONG&gt;170 subjects&lt;/STRONG&gt; with &lt;STRONG&gt;H&amp;amp;E-stained histopathology slides&lt;/STRONG&gt; and &lt;STRONG&gt;tumor Grade labels (0/1/2)&lt;/STRONG&gt;. We split the cohort into &lt;STRONG&gt;~80% training&lt;/STRONG&gt; (to build the FAISS index) and &lt;STRONG&gt;~20% test&lt;/STRONG&gt; (to evaluate retrieval performance), with each image represented as a &lt;STRONG&gt;1536-dimensional embedding&lt;/STRONG&gt; generated by GigaPath (&lt;STRONG&gt;Table 1&lt;/STRONG&gt;).&lt;/P&gt;
&lt;P class="lia-align-left"&gt;&lt;STRONG&gt;Table 1. TCGA-GBMLGG dataset summary and embedding configuration&lt;/STRONG&gt;&lt;/P&gt;
&lt;DIV class="styles_lia-table-wrapper__h6Xo9 styles_table-responsive__MW0lN lia-align-center"&gt;&lt;table border="1" style="width: 61.8519%; height: 234px; border-width: 1px;"&gt;&lt;thead&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;&lt;STRONG&gt;Property&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;&lt;STRONG&gt;Value&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;/thead&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;Total subjects&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;170&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;Split&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;~80% train / ~20% test&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;Tumor grades&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Grade 0, Grade 1, Grade 2&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;Image type&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;H&amp;amp;E-stained histopathology slides&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;Embedding dimension&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;1536 (GigaPath)&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;colgroup&gt;&lt;col style="width: 50.00%" /&gt;&lt;col style="width: 50.00%" /&gt;&lt;/colgroup&gt;&lt;/table&gt;&lt;/DIV&gt;
&lt;H1&gt;3. Building the Image Search Engine&lt;/H1&gt;
&lt;P class="lia-align-justify"&gt;To build the pathology search engine, we follow the same practical steps described in the 2D image search blog: &lt;STRONG&gt;(1)&lt;/STRONG&gt; turn each image into an embedding using a foundation model, &lt;STRONG&gt;(2)&lt;/STRONG&gt; build a vector index (we use &lt;STRONG&gt;FAISS&lt;/STRONG&gt;) over those embeddings, and &lt;STRONG&gt;(3)&lt;/STRONG&gt; retrieve the nearest neighbors for any new query image. Concretely, we take a pathology image (typically a tile/patch from a whole-slide image), run it through the pathology foundation model to produce a spatial feature map, and then apply &lt;STRONG&gt;adaptive pooling&lt;/STRONG&gt; to convert that variable-sized feature map into a &lt;STRONG&gt;fixed-length embedding&lt;/STRONG&gt;. Adaptive pooling matters because it guarantees a consistent embedding shape even when patch sizes or resolutions vary. Without that, indexing and distance comparisons become brittle and hard to scale.&lt;/P&gt;
&lt;P class="lia-align-justify"&gt;Once we can reliably generate embeddings, the rest of the search engine is straightforward: we compute embeddings for the pathology corpus, build a FAISS index (e.g., flat L2 for a baseline), and then run &lt;STRONG&gt;query → embedding → nearest neighbors&lt;/STRONG&gt; to retrieve similar pathology cases. Example retrieval results across tumor grades (0–2) are shown in &lt;STRONG&gt;Figure 2&lt;/STRONG&gt;. To make “similarity” more clinically meaningful, we optionally apply a &lt;STRONG&gt;lightweight adapter &lt;/STRONG&gt;implemented as a small &lt;STRONG&gt;MLP, &lt;/STRONG&gt;on top of the foundation embeddings. In the notebook, the adapter takes &lt;STRONG&gt;1536-D GigaPath embeddings&lt;/STRONG&gt; as input (in_channels=1536) and produces a compact &lt;STRONG&gt;254-D representation&lt;/STRONG&gt; (adapter_emb_size=254), trained with a simple &lt;STRONG&gt;3-class objective&lt;/STRONG&gt; (num_class=3, Grades 0/1/2). This is intentionally lightweight compared with retraining the foundation model: we only learn a small mapping from embeddings to a better-aligned space, then &lt;STRONG&gt;rebuild the FAISS index&lt;/STRONG&gt; using the adapted vectors (gigapath_adapter_features) to improve retrieval relevance. The effect of this optimization is visualized in &lt;STRONG&gt;Figure 3&lt;/STRONG&gt;, which contrasts the baseline embedding space with the adapter-optimized space.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;img /&gt;
&lt;P class="lia-align-center"&gt;&lt;STRONG&gt;Figure 2. Nearest-neighbor retrieval examples for Grade 0, Grade 1, and Grade 2 queries.&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;img /&gt;
&lt;P class="lia-align-center"&gt;&lt;STRONG&gt;Figure 3. Embedding space before and after lightweight adapter optimization.&lt;/STRONG&gt;&lt;/P&gt;
&lt;H1&gt;4. Results&lt;/H1&gt;
&lt;P class="lia-align-justify"&gt;We evaluated pathology image retrieval using &lt;STRONG&gt;cancer Grade (0/1/2)&lt;/STRONG&gt; as the clinical label. For each query pathology tile/patch in the test set, we searched a &lt;STRONG&gt;FAISS&lt;/STRONG&gt; index built from the &lt;STRONG&gt;training set embeddings&lt;/STRONG&gt; and computed &lt;STRONG&gt;Precision@K&lt;/STRONG&gt;, defined as the fraction of the top-K retrieved items that share the same Grade as the query. In the notebook, we evaluate &lt;STRONG&gt;K = [1, 3, 5]&lt;/STRONG&gt;, comparing baseline embeddings.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Table 2. Overall retrieval precision before and after refinement&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;Overall &lt;STRONG&gt;Precision@K&lt;/STRONG&gt; using (i) baseline GigaPath embeddings and (ii) refined adapter-informed embeddings.&lt;/P&gt;
&lt;DIV class="styles_lia-table-wrapper__h6Xo9 styles_table-responsive__MW0lN"&gt;&lt;table border="1" style="border-width: 1px;"&gt;&lt;thead&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;&lt;STRONG&gt;Embedding space&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;&lt;STRONG&gt;Precision@1&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;&lt;STRONG&gt;Precision@3&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;&lt;STRONG&gt;Precision@5&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;/thead&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;Baseline (GigaPath)&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;0.5795&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;0.5593&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;0.5739&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;Refined (adapter-informed)&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;&lt;STRONG&gt;0.7727&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;&lt;STRONG&gt;0.7967&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;&lt;STRONG&gt;0.7689&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;colgroup&gt;&lt;col style="width: 25.00%" /&gt;&lt;col style="width: 25.00%" /&gt;&lt;col style="width: 25.00%" /&gt;&lt;col style="width: 25.00%" /&gt;&lt;/colgroup&gt;&lt;/table&gt;&lt;/DIV&gt;
&lt;P class="lia-align-justify"&gt;Overall retrieval quality improves substantially after refinement (Table 2), with consistent gains across all K values, indicating that nearest neighbors become more aligned with Grade-consistent similarity.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Table 3. Precision by cancer Grade before and after refinement&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Precision@K&lt;/STRONG&gt; stratified by pathology cancer &lt;STRONG&gt;Grade (0/1/2)&lt;/STRONG&gt; for baseline vs refined embeddings.&lt;/P&gt;
&lt;DIV class="styles_lia-table-wrapper__h6Xo9 styles_table-responsive__MW0lN"&gt;&lt;table border="1" style="border-width: 1px;"&gt;&lt;thead&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;&lt;STRONG&gt;Grade&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;&lt;STRONG&gt;Baseline P@1&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;&lt;STRONG&gt;Baseline P@3&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;&lt;STRONG&gt;Baseline P@5&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;&lt;STRONG&gt;Refined P@1&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;&lt;STRONG&gt;Refined P@3&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;&lt;STRONG&gt;Refined P@5&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;/thead&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;0&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;0.5000&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;0.5000&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;0.5500&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;&lt;STRONG&gt;0.6250&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;&lt;STRONG&gt;0.6667&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;&lt;STRONG&gt;0.6250&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;1&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;0.3636&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;0.3030&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;0.3091&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;&lt;STRONG&gt;0.8182&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;&lt;STRONG&gt;0.8485&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;&lt;STRONG&gt;0.7818&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;2&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;0.8750&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;0.8750&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;0.8625&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;&lt;STRONG&gt;0.8750&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;&lt;STRONG&gt;0.8750&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;&lt;STRONG&gt;0.9000&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;colgroup&gt;&lt;col style="width: 14.29%" /&gt;&lt;col style="width: 14.29%" /&gt;&lt;col style="width: 14.29%" /&gt;&lt;col style="width: 14.29%" /&gt;&lt;col style="width: 14.29%" /&gt;&lt;col style="width: 14.29%" /&gt;&lt;col style="width: 14.29%" /&gt;&lt;/colgroup&gt;&lt;/table&gt;&lt;/DIV&gt;
&lt;P class="lia-align-justify"&gt;Performance differs by Grade (Table 3). Baseline retrieval is strongest for&amp;nbsp;&lt;STRONG&gt;Grade 2&lt;/STRONG&gt;, moderate for &lt;STRONG&gt;Grade 0&lt;/STRONG&gt;, and weakest for &lt;STRONG&gt;Grade 1&lt;/STRONG&gt;, suggesting Grade 1 is the most challenging cohort under raw embeddings. After refinement, Grade 1 improves markedly across all K values, while Grade 2 remains high and improves slightly at deeper retrieval (P@5).&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Table 4. Absolute improvement in precision after refinement&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;Absolute change (&lt;STRONG&gt;Δ = refined − baseline&lt;/STRONG&gt;) in &lt;STRONG&gt;Precision@K&lt;/STRONG&gt;, overall and by Grade.&lt;/P&gt;
&lt;DIV class="styles_lia-table-wrapper__h6Xo9 styles_table-responsive__MW0lN"&gt;&lt;table border="1" style="border-width: 1px;"&gt;&lt;thead&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;&lt;STRONG&gt;Cohort&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;&lt;STRONG&gt;Δ Precision@1&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;&lt;STRONG&gt;Δ Precision@3&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;&lt;STRONG&gt;Δ Precision@5&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;/thead&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;Overall&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;+0.1932&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;+0.2374&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;+0.1951&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;Grade 0&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;+0.1250&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;+0.1667&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;+0.0750&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;Grade 1&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;&lt;STRONG&gt;+0.4545&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;&lt;STRONG&gt;+0.5455&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;&lt;STRONG&gt;+0.4727&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;Grade 2&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;+0.0000&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;+0.0000&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;+0.0375&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;colgroup&gt;&lt;col style="width: 25.00%" /&gt;&lt;col style="width: 25.00%" /&gt;&lt;col style="width: 25.00%" /&gt;&lt;col style="width: 25.00%" /&gt;&lt;/colgroup&gt;&lt;/table&gt;&lt;/DIV&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P class="lia-align-justify"&gt;As summarized in &lt;STRONG&gt;Table 4&lt;/STRONG&gt;, the improvements are driven primarily by &lt;STRONG&gt;Grade 1&lt;/STRONG&gt; (ΔP@1 = +0.4545; ΔP@3 = +0.5455; ΔP@5 = +0.4727). &lt;STRONG&gt;Note on Grade 2:&lt;/STRONG&gt; ΔP@1 and ΔP@3 are 0.0000 because baseline retrieval for Grade 2 is already high (P@1 = 0.8750, P@3 = 0.8750; &lt;STRONG&gt;Table 3&lt;/STRONG&gt;), so the adapter does not change top-1/top-3 correctness for that cohort. The improvement appears at deeper retrieval (ΔP@5 = +0.0375), suggesting the adapter mainly refines ranking beyond the top few results rather than increasing an already strong “hit rate.”&lt;/P&gt;
&lt;P class="lia-align-justify"&gt;Collectively, these results indicate that the refined embedding space makes similarity &lt;STRONG&gt;more grade-consistent&lt;/STRONG&gt;, which is exactly what diagnostic concordance workflows need: when clinicians retrieve “similar” pathology cases, they want those neighbors to reflect clinically relevant groupings (here, tumor grade), and to remain interpretable when linked to the corresponding radiology context. The fact that Grade 1 benefits most is also plausible from a pathology standpoint: intermediate grades often show &lt;STRONG&gt;more overlap in morphology&lt;/STRONG&gt; with both lower and higher grades (i.e., less separable visual patterns), while higher grades may exhibit more distinctive features that are easier to retrieve correctly even without refinement. In that sense, the lightweight adapter acts as a targeted calibration step: shaping the embedding space so that ambiguous, overlapping cases (like Grade 1) are pulled closer to the right neighbors.&lt;/P&gt;
&lt;P&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;/P&gt;
&lt;img /&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P class="lia-align-center"&gt;&lt;STRONG&gt;Figure 4. Histopathology (H&amp;amp;E) retrieval with linked radiology (MRI) context.&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H1&gt;5. Conclusion&lt;/H1&gt;
&lt;P class="lia-align-justify"&gt;We built a practical histopathology image search engine using a simple, reusable pattern: generate &lt;STRONG&gt;Prov-GigaPath embeddings&lt;/STRONG&gt; from pathology tiles (with adaptive pooling to produce fixed-length vectors), index them with &lt;STRONG&gt;FAISS&lt;/STRONG&gt;, and retrieve nearest neighbors for any query. This matters because retrieval makes foundation models actionable for clinicians, returning &lt;STRONG&gt;similar prior examples&lt;/STRONG&gt; that can be reviewed and compared, rather than only producing a prediction score. The implementation is lightweight and interoperable: once embeddings are available, the remaining steps are standard vector indexing and search, and the same design naturally extends to multimodal workflows by linking retrieved H&amp;amp;E cases to their corresponding &lt;STRONG&gt;MRI&lt;/STRONG&gt; context (or by indexing radiology and pathology embeddings side-by-side for cross-modal lookup).&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H5&gt;&lt;STRONG&gt;&lt;U&gt;Image Search Series:&lt;/U&gt;&amp;nbsp;Blog Posts &amp;amp; Jupyter Notebooks&amp;nbsp;&lt;/STRONG&gt;&lt;/H5&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;A href="https://techcommunity.microsoft.com/blog/healthcareandlifesciencesblog/image-search-series-part-1-chest-x-ray-lookup-with-medimageinsight/4372736" data-lia-auto-title="Image Search Series Part 1: Chest X-ray lookup with MedImageInsight | Microsoft Community Hub&amp;nbsp;" data-lia-auto-title-active="0" target="_blank"&gt;Image Search Series Part 1: Chest X-ray lookup with MedImageInsight | Microsoft Community Hub&amp;nbsp;&lt;/A&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;A href="https://github.com/microsoft/healthcareai-examples/blob/main/azureml/advanced_demos/image_search/2d_image_search.ipynb" target="_blank"&gt;2d_image_search.ipynb&lt;/A&gt;&amp;nbsp;&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://techcommunity.microsoft.com/blog/healthcareandlifesciencesblog/image-search-series-part-2-ai-methods-for-the-automation-of-3d-image-retrieval-i/4377103" data-lia-auto-title="Image Search Series Part 2: AI Methods for the Automation of 3D Image Retrieval in Radiology | Microsoft Community Hub&amp;nbsp;" data-lia-auto-title-active="0" target="_blank"&gt;Image Search Series Part 2: AI Methods for the Automation of 3D Image Retrieval in Radiology | Microsoft Community Hub&amp;nbsp;&lt;/A&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;A href="https://github.com/microsoft/healthcareai-examples/blob/main/azureml/advanced_demos/image_search/3d_image_search.ipynb" target="_blank"&gt;3d_image_search.ipynb&lt;/A&gt;&amp;nbsp;&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://techcommunity.microsoft.com/blog/healthcareandlifesciencesblog/image-search-series-part-3-foundation-models-and-retrieval-augmented-generation-/4415832" data-lia-auto-title="Image Search Series Part 3: Foundation Models and Retrieval-Augmented Generation in Dermatology | Microsoft Community Hub&amp;nbsp;" data-lia-auto-title-active="0" target="_blank"&gt;Image Search Series Part 3: Foundation Models and Retrieval-Augmented Generation in Dermatology | Microsoft Community Hub&amp;nbsp;&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://techcommunity.microsoft.com/blog/healthcareandlifesciencesblog/image-search-series-part-4-advancing-wound-care-with-foundation-models-and-conte/4456340" data-lia-auto-title="Image Search Series Part 4: Advancing Wound Care with Foundation Models and Context-Aware Retrieval | Microsoft Community Hub&amp;nbsp;" data-lia-auto-title-active="0" target="_blank"&gt;Image Search Series Part 4: Advancing Wound Care with Foundation Models and Context-Aware Retrieval | Microsoft Community Hub&amp;nbsp;&lt;/A&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;A href="https://github.com/microsoft/healthcareai-examples/blob/main/azureml/advanced_demos/image_search/rag_infection_detection.ipynb" target="_blank"&gt;rag_infection_detection.ipynb&lt;/A&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://techcommunity.microsoft.com/blog/healthcareandlifesciencesblog/image-search-series-part-v-building-histopathology-image-search-with-prov-gigapa/4501392" data-lia-auto-title="Image Search Series Part V: Building Histopathology Image Search with Prov-GigaPath | Microsoft Community Hub" data-lia-auto-title-active="0" target="_blank"&gt;Image Search Series Part V: Building Histopathology Image Search with Prov-GigaPath | Microsoft Community Hub&lt;/A&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;A href="https://github.com/microsoft/healthcareai-examples/blob/main/azureml/advanced_demos/image_search/2d_pathology_image_search.ipynb" target="_blank"&gt;2d_pathology_image_search.ipynb&lt;/A&gt;&amp;nbsp;&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P class="lia-align-justify"&gt;&lt;EM&gt;&lt;SPAN data-contrast="auto"&gt;The Microsoft healthcare AI models, including MedImageInsight, are intended for research and model development exploration. The models are not designed or intended to be deployed in clinical settings as-is nor for use in the diagnosis or treatment of any health or medical condition, and the individual models’ performances for such purposes have not been established. You bear sole responsibility and liability for any use of the healthcare AI models, including verification of outputs and incorporation into any product or service intended for a medical purpose or to inform clinical decision-making, compliance with applicable healthcare laws and regulations, and obtaining any necessary clearances or approvals.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/EM&gt;&lt;/P&gt;
&lt;H4 aria-level="5"&gt;&lt;STRONG&gt;&lt;SPAN data-contrast="none"&gt;&lt;SPAN data-ccp-parastyle="heading 5"&gt;References&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;SPAN data-ccp-props="{&amp;quot;134245418&amp;quot;:true,&amp;quot;134245529&amp;quot;:true,&amp;quot;335557856&amp;quot;:16777215,&amp;quot;335559738&amp;quot;:600,&amp;quot;335559739&amp;quot;:300}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/H4&gt;
&lt;P&gt;&lt;STRONG&gt;[1]&lt;/STRONG&gt;&amp;nbsp;&lt;A href="https://techcommunity.microsoft.com/blog/healthcareandlifesciencesblog/image-search-series-part-1-chest-x-ray-lookup-with-medimageinsight/4372736" target="_blank" rel="noopener" data-lia-auto-title="Image Search Series Part 1: Chest X-ray lookup with MedImageInsight | Microsoft Community Hub" data-lia-auto-title-active="0"&gt;Image Search Series Part 1: Chest X-ray lookup with MedImageInsight | Microsoft Community Hub&lt;/A&gt;&amp;nbsp;&lt;BR /&gt;&lt;STRONG&gt;[2]&lt;/STRONG&gt; &lt;A href="https://techcommunity.microsoft.com/blog/healthcareandlifesciencesblog/cancer-survival-with-radiology-pathology-analysis-and-healthcare-ai-models-in-az/4366241" target="_blank" rel="noopener"&gt;Cancer Survival with Radiology-Pathology Analysis and Healthcare AI Models in Azure AI Foundry (Microsoft Healthcare &amp;amp; Life Sciences Blog)&lt;/A&gt;&lt;BR /&gt;&lt;STRONG&gt;[3]&lt;/STRONG&gt;&amp;nbsp;&lt;A href="https://github.com/microsoft/healthcareai-examples/blob/main/azureml/medimageinsight/adapter-training.ipynb" target="_blank" rel="noopener"&gt;&lt;EM&gt;Adapter training notebook (MedImageInsight).&lt;/EM&gt; Microsoft healthcareai-examples GitHub repository (azureml/medimageinsight/adapter-training.ipynb)&lt;/A&gt;&lt;/P&gt;</description>
      <pubDate>Mon, 16 Mar 2026 22:07:53 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/healthcare-and-life-sciences/image-search-series-part-v-building-histopathology-image-search/ba-p/4501392</guid>
      <dc:creator>Alberto_Santamaria</dc:creator>
      <dc:date>2026-03-16T22:07:53Z</dc:date>
    </item>
    <item>
      <title>Agents: Snack Pack</title>
      <link>https://techcommunity.microsoft.com/t5/healthcare-and-life-sciences/agents-snack-pack/ba-p/4499844</link>
      <description>&lt;P&gt;It’s not just you.&lt;/P&gt;
&lt;P&gt;Agents are everywhere, and everyone else seems to know exactly what they are.&lt;/P&gt;
&lt;P&gt;If you’ve been nodding along while secretly thinking “&lt;EM&gt;I should probably look this up&lt;/EM&gt;"... you've found the right place!&lt;/P&gt;
&lt;P&gt;This Agents Snack Pack contains easy‑to‑digest content designed to help you understand agents, one piece at a time. Content is arranged from left to right in order of &lt;STRONG&gt;increasing complexity&lt;/STRONG&gt;. If you're starting out, begin with Column 1 (Discover) and progress as you build confidence!&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;DIV class="styles_lia-table-wrapper__h6Xo9 styles_table-responsive__MW0lN"&gt;&lt;table style="width: 876px;"&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;&lt;STRONG&gt;#1. Discover&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;&lt;STRONG&gt;#2. Use&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;&lt;STRONG&gt;#3. Build&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Drill in the fundamentals: &lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&lt;A href="https://www.youtube.com/watch?v=HhoBVKs66Ds" target="_blank"&gt;What are Microsoft 365 Copilot agents and how to use them | Microsoft&lt;/A&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Ask big questions, let Researcher do the investigation.&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&lt;A href="https://www.youtube.com/watch?v=lfruwkpqvk4" target="_blank"&gt;Researcher: A reasoning agent in Microsoft 365 Copilot&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Create your first agent:&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&lt;A href="https://www.youtube.com/watch?v=5bYMrKYyxmg" target="_blank"&gt;Microsoft 365 Copilot Chat - Agent builder demo&lt;/A&gt;&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Explore the Agent Store: &lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&lt;A href="https://www.youtube.com/watch?v=0J9jCnrSSoE" target="_blank"&gt;Agent Store in Microsoft 365 Copilot&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;When spreadsheets get messy, call in Analyst. &lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&lt;A href="https://www.youtube.com/watch?v=9O3CoP8rEkY" target="_blank"&gt;Analyst: A reasoning agent in Microsoft 365 Copilot - YouTube&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;*Create an Epic Tip Sheet Agent: &lt;/STRONG&gt;&amp;nbsp;&lt;A href="https://youtu.be/ilL5j5qBdpE" target="_blank"&gt;https://youtu.be/ilL5j5qBdpE&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;*Agent requires premium Copilot license&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Practical use cases: &lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;Sales: &lt;A href="https://www.youtube.com/watch?v=-1Ki0wHUTXg" target="_blank"&gt;Microsoft 365 Copilot Chat - Sales agent demo&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Customer Service: &lt;A href="https://www.youtube.com/watch?v=3gAfo0lGzjE" target="_blank"&gt;Microsoft 365 Copilot Chat - Customer service agent demo&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Demo: Leveraging Analyst to break down hospital occupancy data &lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&lt;A href="https://youtu.be/0OG6yTpZoMU" target="_blank"&gt;https://youtu.be/0OG6yTpZoMU&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;*Build your first autonomous agent:&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&lt;A href="https://www.youtube.com/watch?v=L7u-HcKQ2sE" target="_blank"&gt;How to create an autonomous agent with Copilot Studio&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;*Agent requires premium Copilot license&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;/DIV&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Hungry for more? Join our live, instructor-led &lt;A href="https://aka.ms/HLSAgentRx" target="_blank"&gt;AgentRx&lt;/A&gt; sessions to get hands on guidance with Microsoft experts.&lt;/P&gt;</description>
      <pubDate>Fri, 06 Mar 2026 01:07:32 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/healthcare-and-life-sciences/agents-snack-pack/ba-p/4499844</guid>
      <dc:creator>samhitaraman</dc:creator>
      <dc:date>2026-03-06T01:07:32Z</dc:date>
    </item>
    <item>
      <title>Bringing Organizational Knowledge into the Clinical Workflow</title>
      <link>https://techcommunity.microsoft.com/t5/healthcare-and-life-sciences/bringing-organizational-knowledge-into-the-clinical-workflow/ba-p/4499455</link>
      <description>&lt;P&gt;&lt;SPAN class="lia-text-color-19"&gt;&lt;EM&gt;&lt;STRONG&gt;This blog is co-authored by Hadas Bitran, Partner GM, Health AI, Microsoft Health &amp;amp; Life Sciences&lt;/STRONG&gt;&lt;/EM&gt;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;Every day, clinicians spend valuable time looking for information that lives in different places. An email thread from a specialist colleague. A Microsoft Teams discussion about a complex case. Updated organizational processes buried in SharePoint or OneDrive. This information provides context that could be critical to their workflows or help inform their decisions. But that context is not part of their clinical workflow.&lt;/P&gt;
&lt;P&gt;The result? Clinicians are forced to break their clinical workflow, searching manually across organizational resources and mentally combining scattered data points, all while a patient is waiting. This isn't a knowledge problem. It's a retrieval problem. And it costs time and focus, adds cognitive burden, and erodes clinical confidence every single day.&lt;/P&gt;
&lt;P&gt;That's exactly the gap we're closing by bringing clinical intelligence and your organization's knowledge into one seamless, workflow-native experience.&lt;/P&gt;
&lt;H2&gt;&lt;SPAN class="lia-text-color-15"&gt;Clinical workflow, now with your organizational context&lt;/SPAN&gt;&lt;/H2&gt;
&lt;P&gt;Within Dragon Copilot, clinicians will be able to securely surface relevant information across Microsoft 365, without leaving the clinical workflow:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;Email&lt;/STRONG&gt;: retrieve relevant information exchanged with patients and colleagues, or drawn from specialist correspondence, referral communications, and care coordination threads.&lt;BR /&gt;&lt;BR /&gt;
&lt;BLOCKQUOTE&gt;&lt;EM&gt;find me the email from Dr. Ting that mentioned the latest research about this mutation.&lt;/EM&gt;&lt;/BLOCKQUOTE&gt;
In this example, the chat functionality in Dragon Copilot uses the patient and encounter context to resolve the referenced mutation, then leverages Microsoft 365 Copilot behind the scenes to locate the email from Dr. Ting that mentions it.&lt;BR /&gt;&lt;BR /&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Microsoft Teams&lt;/STRONG&gt;: surface information from Microsoft Teams chats that the clinician had with colleagues, discussions or group chat conversations.&lt;BR /&gt;&lt;EM&gt; &lt;BR /&gt;&lt;/EM&gt;
&lt;BLOCKQUOTE&gt;&lt;EM&gt;The patient is traveling to Florida. Identify dialysis centers near the patient’s destination based on information shared by Dr. Salomon in Microsoft Teams and provide practical travel guidelines I can share with the patient.&lt;BR /&gt;&lt;/EM&gt;&lt;/BLOCKQUOTE&gt;
In this example, Dragon Copilot uses trusted sources for travel guidelines and Microsoft 365 Copilot to retrieve relevant Microsoft Teams messages from Dr. Salomon, identifying nearby dialysis centers in Florida.&lt;BR /&gt;&lt;BR /&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;SharePoint and&lt;/STRONG&gt;&lt;STRONG&gt; OneDrive&lt;/STRONG&gt;: access organizational knowledge on demand: HR policies, facility procedures, compliance guidelines, shift schedules, and more&lt;BR /&gt;&lt;BR /&gt;
&lt;BLOCKQUOTE&gt;&lt;EM&gt;Who is on call for nephrology tonight and who is covering tomorrow morning?&lt;BR /&gt;&lt;/EM&gt;&lt;/BLOCKQUOTE&gt;
&lt;P&gt;In this example, Dragon Copilot leverages Microsoft 365 Copilot behind the scenes to locate the most up‑to‑date Excel file with upcoming shift and coverage information from the hospital’s SharePoint, and surfaces the answer directly in the conversation, without disrupting the clinician’s workflow.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;/LI&gt;
&lt;/UL&gt;
&lt;img /&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;With Microsoft 365 Copilot, work context is available directly inside Dragon Copilot, and clinicians can choose if and when to access their work information. Within Dragon Copilot, they can ask questions in natural language and receive the most relevant information, grounded in patient context, from trusted clinical sources and their Microsoft 365 data. One conversational flow. Full clinical and work context. No tab switching, no manual searching, no lost focus.&lt;/P&gt;
&lt;H2&gt;&lt;SPAN class="lia-text-color-15"&gt;Trusted by design, built for healthcare&lt;/SPAN&gt;&lt;/H2&gt;
&lt;P&gt;Security and privacy are built in from the ground up. Information is always accessed on behalf of the individual user, fully respecting existing Microsoft 365 identity and access management, compliance, and privacy controls, meaning clinicians see only what they're authorized to see, and that Dragon Copilot will only use their work context if the clinician consented to it. This also means no new security risks to manage, and no changes to how your organization governs access to information.&lt;/P&gt;
&lt;P&gt;For healthcare organizations where data sensitivity, regulatory compliance, and patient privacy are non-negotiable, this better-together experience is designed to meet that bar from day one.&lt;/P&gt;
&lt;H2&gt;&lt;SPAN class="lia-text-color-15"&gt;Join the Private Preview&lt;/SPAN&gt;&lt;/H2&gt;
&lt;P&gt;If you're a Dragon Copilot customer, and your organization is using Microsoft 365 Copilot, we invite you to be among the first to experience this new capability. Register now for early access to the private preview and play a role in shaping the future of clinical workflow intelligence.&lt;/P&gt;
&lt;P&gt;&lt;A class="lia-external-url" href="https://forms.microsoft.com/Pages/ResponsePage.aspx?id=v4j5cvGGr0GRqy180BHbR8qzGFSFBvtFt68Uvf6KiOxUOEw3TlBTVVkxWE81UDQzRDlEMkpGVjRGTi4u" target="_blank" rel="noopener"&gt;&lt;STRONG&gt;Register for private preview&lt;/STRONG&gt;&lt;/A&gt;&lt;/P&gt;</description>
      <pubDate>Thu, 05 Mar 2026 17:40:47 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/healthcare-and-life-sciences/bringing-organizational-knowledge-into-the-clinical-workflow/ba-p/4499455</guid>
      <dc:creator>BertHoorne</dc:creator>
      <dc:date>2026-03-05T17:40:47Z</dc:date>
    </item>
    <item>
      <title>Dragon Copilot centralizes trusted medical content and relevant contextual information in-workflow</title>
      <link>https://techcommunity.microsoft.com/t5/healthcare-and-life-sciences/dragon-copilot-centralizes-trusted-medical-content-and-relevant/ba-p/4499011</link>
      <description>&lt;P&gt;&lt;SPAN class="lia-text-color-19"&gt;&lt;EM&gt;This blog is co-authored by Bert Hoorne, Principal Program Manager &amp;amp; Ksenya Kveler, &lt;SPAN data-teams="true"&gt;Principle Medical Science Manager&lt;/SPAN&gt;&lt;/EM&gt;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;Dragon Copilot delivers medical intelligence from trusted sources directly within clinical workflows, giving healthcare organizations one solution.&lt;/P&gt;
&lt;P&gt;We are pleased to announce that we are expanding those knowledge sources with additional best‑in‑class content providers and enabling broader access to your organization’s internal sources with Microsoft 365 Copilot integration.&lt;BR /&gt;&lt;BR /&gt;&lt;/P&gt;
&lt;img /&gt;
&lt;H2&gt;&lt;SPAN class="lia-text-color-15"&gt;Access information from new credible medical content providers&lt;/SPAN&gt;&lt;/H2&gt;
&lt;P&gt;Dragon Copilot users will gain access to an additional robust collection of trusted clinical content from leading evidence-based resources. We are partnering with renowned publishers to bring you the best, most trusted content, safely and securely, within clinicians’ workflows while helping to reduce the use of unauthorized AI tools and applications, commonly referred to as “shadow AI.”&lt;/P&gt;
&lt;H4&gt;Access content from Wolters Kluwer UpToDate&lt;/H4&gt;
&lt;P&gt;We’ve partnered with Wolters Kluwer UpToDate to bring trusted, evidence-based clinical guidance directly into Dragon Copilot. Customers with an active Wolters Kluwer UpToDate license will be able to access UpToDate content in Dragon Copilot, within the context of their clinical workflows.&lt;/P&gt;
&lt;P&gt;This integration allows clinicians to ask both general and patient-specific questions and receive answers grounded in UpToDate evidence, with clear references to supporting sources. Over time, it will also introduce contextual links to UpToDate concepts layered on top of Dragon Copilot–generated notes, further enhancing clinical insight at the point of care.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;BLOCKQUOTE&gt;
&lt;P&gt;“&lt;EM&gt;Clinicians need reliable guidance that supports fast, confident decision-making without disrupting care delivery. We are excited to partner with Microsoft to bring UpToDate’s gold standard evidence and expertise-based clinical insights to Dragon Copilot, helping clinicians quickly access actionable answers that reduce cognitive burden and support better patient care.&lt;/EM&gt;”&lt;/P&gt;
&lt;/BLOCKQUOTE&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;Yaw Fellin, Senior Vice President and General Manager, UpToDate Clinical Decision Support and Provider Solutions&lt;BR /&gt;Wolters Kluwer Health&lt;/P&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Here’s an example of UpToDate content embedded in the Dragon Copilot workflow:&lt;/P&gt;
&lt;img&gt;Wolters Kluwer UpToDate powering Dragon Copilot Chat answers&lt;/img&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H4&gt;Obtain trusted clinical evidence with Elsevier ClinicalKey AI&lt;/H4&gt;
&lt;P&gt;Elsevier’s ClinicalKey AI will be available in Dragon Copilot. This integration enables customers with an active Elsevier ClinicalKey AI license to surface trusted medical literature and clinical evidence directly within clinicians’ workflows.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;BLOCKQUOTE&gt;
&lt;P&gt;“&lt;EM&gt;Clinicians are navigating a complex and rapidly changing healthcare landscape and need solutions they can trust. The ClinicalKey AI extension for Dragon Copilot transforms how clinicians interact with trusted medical literature and clinical answers. The conversational interface makes evidence discovery faster and more intuitive.&lt;/EM&gt;”&lt;/P&gt;
&lt;/BLOCKQUOTE&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;Jukka Valimaki, SVP Clinical Solutions&lt;BR /&gt;Elsevier&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Here’s an example of ClinicalKey AI content embedded in the Dragon Copilot workflow:&lt;/P&gt;
&lt;img /&gt;
&lt;H4&gt;Support clinical decisions with EBMcalc&lt;/H4&gt;
&lt;P&gt;With the integration of EBMcalc medical calculators, Dragon Copilot enables clinicians to use evidence-based calculators directly within their workflows—applied in context to the patient they’re caring for.&lt;/P&gt;
&lt;BLOCKQUOTE&gt;
&lt;P&gt;“&lt;EM&gt;Clinicians need trusted, evidence-based insights exactly at the point of care&lt;/EM&gt;. &lt;EM&gt;By integrating EBMcalc’s rigorously curated clinical calculators and references into Dragon Copilot, we’re helping make high quality medical evidence more accessible, more actionable, and easier to use within everyday clinical workflows”.&lt;/EM&gt;&lt;/P&gt;
&lt;/BLOCKQUOTE&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;Louis Leff, MD, MACP, Founder and CEO&lt;BR /&gt;EBMcalc&lt;/P&gt;
&lt;H4&gt;Access independent evidence in Dragon Copilot with&amp;nbsp;Wiley and Cochrane&lt;/H4&gt;
&lt;P&gt;Wiley and Microsoft are partnering to bring scientific literature and clinical evidence directly into the healthcare workflow, starting with the Cochrane Library. Through this integration, customers with an active Cochrane Library AI license will be able to access Cochrane’s high-quality, independent evidence, systematic reviews, and clinical answers, to inform more reliable and efficient decision-making. This includes the Cochrane Database of Systematic Reviews (CDSR), the home of gold-standard evidence syntheses, widely used to inform clinical guidelines worldwide.&lt;/P&gt;
&lt;BLOCKQUOTE&gt;
&lt;P&gt;&lt;EM&gt;"Working with Microsoft to bring the Cochrane Library into Dragon Copilot reflects a shared commitment to meeting researchers and clinicians where they are.&amp;nbsp; Healthcare Institutions can now access independent, peer-reviewed evidence— right within their clinical workflow” &lt;/EM&gt;&lt;/P&gt;
&lt;/BLOCKQUOTE&gt;
&lt;P class="lia-indent-padding-left-30px"&gt;Josh Jarrett, SVP &amp;amp; GM of AI Growth &lt;BR /&gt;Wiley&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;&lt;SPAN class="lia-text-color-15"&gt;Access work context with Microsoft 365 Copilot in Dragon Copilot&lt;/SPAN&gt;&lt;/H2&gt;
&lt;P&gt;With the Microsoft 365 Copilot integration, Dragon Copilot enables clinicians to seamlessly access information from their emails, chats, OneDrive and SharePoint, within the flow of their clinical work. Clinicians can combine this information with additional questions and actions, all governed by existing organizational and user access controls. Use of this data within Dragon Copilot workflow remains fully at the user’s discretion.&lt;/P&gt;
&lt;P&gt;Here’s an example of content from an email surfaced by Microsoft 365 Copilot accessible through the Dragon Copilot workflow:&lt;/P&gt;
&lt;img&gt;Microsoft 365 organizational context in Dragon Copilot&lt;/img&gt;
&lt;P&gt;Read more for a deeper dive on&amp;nbsp;&lt;A class="lia-external-url" href="https://aka.ms/drcm365copilotblog" target="_blank" rel="noopener"&gt;how Dragon Copilot enables work context access&lt;/A&gt; with Microsoft 365 Copilot integration.&lt;/P&gt;
&lt;H2&gt;&lt;SPAN class="lia-text-color-15"&gt;Safe web search&lt;/SPAN&gt;&lt;/H2&gt;
&lt;P&gt;Dragon Copilot safe web search delivers trusted, evidence-linked answers when curated sources are unavailable—ensuring clinicians continue to receive timely support without disrupting their workflow.&lt;/P&gt;
&lt;P&gt;The goal of safe web search is to prevent broken workflows and eliminate unsafe external browsing. Clinicians remain within their clinical context, focused on the patient—without tab hopping or the risk of landing on unreliable or unverified websites.&lt;/P&gt;
&lt;P&gt;Safe web search eliminates “no response” dead ends by maintaining a seamless conversational experience in Dragon Copilot and reducing unanswered prompts.&lt;/P&gt;
&lt;P&gt;This capability is enabled by using verified, secure, and responsible mechanisms designed for safe clinical experiences. It enforces multilayer protection through evidence validation, provenance-linked responses, content filtering, and regulated search with built-in safeguards.&lt;/P&gt;
&lt;P&gt;Here’s an example of content from a safe web search in the Dragon Copilot workflow:&lt;/P&gt;
&lt;img /&gt;
&lt;H2&gt;&lt;SPAN class="lia-text-color-15"&gt;Conclusion&lt;/SPAN&gt;&lt;/H2&gt;
&lt;P&gt;These advancements represent an important step forward in how Dragon Copilot delivers trusted medical intelligence: bringing together best‑in‑class clinical evidence, organizational knowledge, and safe web access in one governed, in‑workflow experience. We will continue to expand our partner ecosystem, deepen integrations with leading evidence providers, and evolve Dragon Copilot’s conversational extensibility to meet clinicians where they work.&lt;/P&gt;</description>
      <pubDate>Fri, 06 Mar 2026 15:00:20 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/healthcare-and-life-sciences/dragon-copilot-centralizes-trusted-medical-content-and-relevant/ba-p/4499011</guid>
      <dc:creator>hadasb</dc:creator>
      <dc:date>2026-03-06T15:00:20Z</dc:date>
    </item>
    <item>
      <title>Why nursing needs a different kind of AI—and how Dragon Copilot delivers</title>
      <link>https://techcommunity.microsoft.com/t5/healthcare-and-life-sciences/why-nursing-needs-a-different-kind-of-ai-and-how-dragon-copilot/ba-p/4499564</link>
      <description>&lt;P&gt;The Dragon Copilot experience for nurses was made generally available (GA) in December 2025 with a clear mission: help nursing staff focus on care, not the computer.&lt;/P&gt;
&lt;P&gt;From the start, the goal was to create a comprehensive AI clinical assistant—one that works alongside nurses throughout their shift, reduces cognitive load, captures the full scope of care delivered, and translates real clinical work into automated next steps, including documentation—fundamentally transforming workflows to keep patient care at the center.&lt;/P&gt;
&lt;P&gt;Microsoft has continued to execute on that vision. Recent enhancements include extended mobile access with Android support—enabling nurses to record care in Epic Rover on Android devices—as well as significant expansion in ambient documentation coverage. Together, these advances reflect a consistent approach: adoption follows when technology aligns with how nurses work.&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Expansive nursing documentation &lt;/STRONG&gt;&lt;STRONG&gt;coverage&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;Nursing work spans multiple flowsheet templates, assessments, state changes, and, at times, narrative notes. When solutions support only a subset of this work, nurses are left filling gaps after the fact—reintroducing cognitive load and eroding the value of this technology.&lt;/P&gt;
&lt;P&gt;Microsoft has expanded Dragon Copilot’s ambient documentation capabilities by broadening the range of supported nursing value types—and by extending it to deliver complete coverage across all flowsheet templates in supported departments and settings. The result is comprehensive documentation generated from each recording, including:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;Lines, Drains, Airways, and Wounds&lt;/STRONG&gt; (LDAs) documentation, including assessments, additions, and removals&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Nurse notes&lt;/STRONG&gt;, automatically generated from natural nurse-patient conversations and voice memos captured on the go&amp;nbsp;&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG style="color: rgb(30, 30, 30);"&gt;Full flowsheet template&lt;/STRONG&gt;&lt;SPAN style="color: rgb(30, 30, 30);"&gt; &lt;/SPAN&gt;&lt;STRONG style="color: rgb(30, 30, 30);"&gt;coverage&lt;/STRONG&gt;&lt;SPAN style="color: rgb(30, 30, 30);"&gt;—not just a subset—including admission and discharge flowsheets, blood administration, CIWA-Ar, and care plan-related flowsheets&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG style="color: rgb(30, 30, 30);"&gt;Adaptations to each organization’s charting philosophy&lt;/STRONG&gt;&lt;SPAN style="color: rgb(30, 30, 30);"&gt;, including macro support, chart-by-exception, pertinent positives, and more&lt;/SPAN&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;This breadth matters because nursing work is rarely captured within only a narrow set of flowsheets—nor does it typically result in just narrative notes. Yet many solutions labeled “for nurses” prioritize what is easiest to automate, rather than what nurses need. The result can be a false sense of completeness, with nurses still managing gaps across their shift.&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Why nursing ambient documentation is hard—and what makes Dragon Copilot unique&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;Achieving comprehensive, high‑quality nursing documentation has required specialized technology designed to address the structural, workflow, and feedback challenges unique to nursing—challenges that general narrative ambient models and physician‑oriented solutions are not built to solve:&lt;/P&gt;
&lt;OL&gt;
&lt;LI&gt;&lt;STRONG&gt;Flowsheets are messy, complex, and frequently changing&lt;/STRONG&gt;&lt;BR /&gt;Flowsheets are large, hospital-specific, internally ambiguous, and constantly evolving under governance. Complex logic—such as cascading rows, documentation‑by‑exception patterns, and duplicative or overlapping rows—makes it far from straightforward to accurately map a clinical observation to the correct field and value.&lt;EM&gt; &lt;/EM&gt;Microsoft works directly with real hospital schemas, handling hierarchy, ambiguity, and multiple valid documentation destinations—without requiring flowsheet redesign or sacrificing quality.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Nurses don’t speak for documentation&lt;/STRONG&gt;&lt;BR /&gt;Bedside language is optimized for care delivery, not chart completeness. Critical details are often implied or never spoken aloud.&lt;EM&gt; &lt;/EM&gt;Microsoft’s&lt;EM&gt; &lt;/EM&gt;technology translates natural nursing communication into accurate documentation without changing nurse behavior. Built on industry‑leading transcription accuracy and decades of speech recognition expertise, Dragon Copilot is informed by real‑world integration across diverse EHR environments, preserving accurate translation and clinical intent that directly impact downstream documentation accuracy.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Nursing audio is diverse&lt;/STRONG&gt;&lt;BR /&gt;Recordings mix shorthand, dialogue, monologue, and unit-specific language. Dragon Copilot accounts for mixed speaking modes instead of flattening audio through a generic pipeline or requiring nurses to speak in constrained ways.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Feedback loops are noisy&lt;/STRONG&gt;&lt;BR /&gt;Nurse corrections to AI output often reflect hindsight or personal preferences rather than model error. Microsoft’s approach analyzes correction patterns with clinical context, enabling calibration at the institution, department, and even individual user level.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Bedside workflows demand predictability&lt;/STRONG&gt;&lt;BR /&gt;Baseline LLMs are not suited for real-world nursing accuracy, latency, and cost requirements—especially with tens of thousands of possible flowsheet values. Dragon Copilot is optimized for consistent performance across real nursing environments, exceeding the reliability and latency characteristics of baseline models.&lt;/LI&gt;
&lt;/OL&gt;
&lt;P&gt;Beyond specialized nursing architecture, Dragon Copilot enforces strict quality and safety gates for new documentation outputs—including oversight by Microsoft’s internal, nurse-led Clinical Integrity team, phased validation, and Responsible AI review—ensuring new documentation coverage meets defined nursing standards before being introduced at scale.&lt;/P&gt;
&lt;P&gt;Dragon Copilot represents a fundamental shift in how nursing work is supported by AI by meeting the full complexity of bedside care head-on. By delivering comprehensive ambient documentation across live inpatient care environments, Dragon Copilot helps ensure that the care nurses provide is accurately captured, trusted, and usable downstream. The result is an AI clinical assistant that keeps nurses focused on what matters most: their patients.&lt;/P&gt;</description>
      <pubDate>Thu, 05 Mar 2026 14:30:00 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/healthcare-and-life-sciences/why-nursing-needs-a-different-kind-of-ai-and-how-dragon-copilot/ba-p/4499564</guid>
      <dc:creator>Allison_Novick</dc:creator>
      <dc:date>2026-03-05T14:30:00Z</dc:date>
    </item>
    <item>
      <title>From dictation to intelligence at the cursor with Dragon Copilot Desktop</title>
      <link>https://techcommunity.microsoft.com/t5/healthcare-and-life-sciences/from-dictation-to-intelligence-at-the-cursor-with-dragon-copilot/ba-p/4496145</link>
      <description>&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;&lt;EM&gt;This blog is co-authored by Dr. David Ting, Chief Clinical Product Lead, and Sarah Grey, Senior Product Manager.&lt;/EM&gt;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;Today’s patient encounters are packed with friction that breaks clinicians’ flow, clouds clinical thinking, and drains the joy from practice. Instead of focusing on patients, listening, assessing, and collaborating on care, clinicians spend visits staring at a screen, wrestling with bloated EHRs, and acting as data entry clerks. They type into endless fields, check boxes, click buttons, and memorize arcane text shortcuts and key sequences designed to satisfy computer hard-stops, regulatory tests, and payer demands, not to deliver compassionate, high-quality care.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;335557856&amp;quot;:16777215,&amp;quot;335559739&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;And it&amp;nbsp;doesn’t&amp;nbsp;stop with the EHR. Clinicians bounce between imaging systems, referral portals, messaging apps, mobile devices, specialty tools, and the nonstop demands of email, policies, training, credentialing, and CME. Managing care across dozens of disconnected systems every day makes one thing clear: clinicians and healthcare organizations are desperate for relief.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;335557856&amp;quot;:16777215,&amp;quot;335559739&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;Can artificial intelligence help? Of course it can. But hope in AI is too often disappointed by the reality of AI&amp;nbsp;solutions that are poorly integrated with each other and with clinicians’ holistic workflow. AI risks being trapped in a single application like the EHR, leading to disjointed and suboptimal experiences.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;335557856&amp;quot;:16777215,&amp;quot;335559739&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;But if AI lives in a separate window from where clinicians actually work, it risks adding&amp;nbsp;another source of&amp;nbsp;friction,&amp;nbsp;forcing context switching and manual copying and pasting across the EHR and other systems.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;335557856&amp;quot;:16777215,&amp;quot;335559739&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;What&amp;nbsp;clinicians&amp;nbsp;need&amp;nbsp;is a&amp;nbsp;way to&amp;nbsp;access&amp;nbsp;a seamless&amp;nbsp;clinical intelligence&amp;nbsp;everywhere&amp;nbsp;they&amp;nbsp;work.&amp;nbsp;This is&amp;nbsp;precisely what Dragon Copilot brings&amp;nbsp;to the beleaguered&amp;nbsp;physician,&amp;nbsp;nurse,&amp;nbsp;and advanced&amp;nbsp;practice&amp;nbsp;provider.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;335557856&amp;quot;:16777215,&amp;quot;335559739&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;H2 aria-level="2"&gt;&lt;SPAN data-contrast="none"&gt;&lt;SPAN data-ccp-parastyle="heading 2"&gt;Extending the power of AI to&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-ccp-parastyle="heading 2"&gt;remove&lt;/SPAN&gt;&lt;SPAN data-ccp-parastyle="heading 2"&gt;&amp;nbsp;friction&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;134245418&amp;quot;:true,&amp;quot;134245529&amp;quot;:true,&amp;quot;335557856&amp;quot;:16777215,&amp;quot;335559738&amp;quot;:360,&amp;quot;335559739&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/H2&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;The reality for most clinicians is that their patients’ information and care coordination is managed in an EHR and via complementary apps living on the Windows desktop. If an AI clinical assistant is to be helpful, it must be available anywhere the clinician is working, regardless of which of the many desktop applications happen to be in focus. As the clinician moves from the EHR to a PACS viewer, web browser, email client, or Teams meeting, the AI should be able to understand the underlying context and offer ways to streamline the clinician’s flow. &lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;335557856&amp;quot;:16777215,&amp;quot;335559739&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;Dragon Copilot’s&amp;nbsp;AI changes the&amp;nbsp;dynamic.&amp;nbsp;When editing text&amp;nbsp;anywhere&amp;nbsp;– in the EHR, in Outlook,&amp;nbsp;Word,&amp;nbsp;a&amp;nbsp;web browser app, or within Dragon Copilot’s own editor&amp;nbsp;–&amp;nbsp;a&amp;nbsp;clinician&amp;nbsp;simply&amp;nbsp;selects&amp;nbsp;desired text,&amp;nbsp;speaks&amp;nbsp;or types&amp;nbsp;a natural language instruction&amp;nbsp;(e.g., “Expand this paragraph to&amp;nbsp;reflect more&amp;nbsp;of the patient’s description of the car accident”), and&amp;nbsp;receives&amp;nbsp;an inline rewrite or insertion&amp;nbsp;directly in the target application.&amp;nbsp;&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;335557856&amp;quot;:16777215,&amp;quot;335559739&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;A&amp;nbsp;clinician&amp;nbsp;searching for information&amp;nbsp;can simply place their&amp;nbsp;cursor&amp;nbsp;over&amp;nbsp;the text,&amp;nbsp;and&amp;nbsp;Dragon Copilot&amp;nbsp;understands the context, allowing the clinician to&amp;nbsp;make&amp;nbsp;multi-part&amp;nbsp;requests: “Given&amp;nbsp;this diagnosis,&amp;nbsp;what is the&amp;nbsp;recommended&amp;nbsp;treatment, and&amp;nbsp;are any of those covered by the patient’s insurance plan?”&amp;nbsp;&amp;nbsp;Dragon Copilot&amp;nbsp;gathers&amp;nbsp;context, searches through approved, trusted&amp;nbsp;knowledge sources,&amp;nbsp;and&amp;nbsp;provides&amp;nbsp;the&amp;nbsp;answer&amp;nbsp;within&amp;nbsp;the Dragon Copilot&amp;nbsp;workflow.&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;335557856&amp;quot;:16777215,&amp;quot;335559739&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;Here’s&amp;nbsp;another&amp;nbsp;example: while reviewing an online guideline or internal&amp;nbsp;protocol&amp;nbsp;in&amp;nbsp;the organization’s&amp;nbsp;SharePoint,&amp;nbsp;a clinician can select a passage and ask, “Give me the&amp;nbsp;three&amp;nbsp;key takeaways&amp;nbsp;from this reading&amp;nbsp;and&amp;nbsp;summarize&amp;nbsp;them in patient-friendly&amp;nbsp;language&amp;nbsp;to include in&amp;nbsp;the after-visit summary.”&amp;nbsp;The&amp;nbsp;clinician receives&amp;nbsp;the&amp;nbsp;in-context summary&amp;nbsp;directly in&amp;nbsp;workflow.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;335557856&amp;quot;:16777215,&amp;quot;335559739&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;Dragon Copilot&amp;nbsp;overcomes EHR-constrained AI&amp;nbsp;and disconnected&amp;nbsp;tools by&amp;nbsp;unifying and&amp;nbsp;delivering intelligence&amp;nbsp;in one centralized workspace.&amp;nbsp;&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;335557856&amp;quot;:16777215,&amp;quot;335559739&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;H2 aria-level="2"&gt;&lt;SPAN data-contrast="none"&gt;&lt;SPAN data-ccp-parastyle="heading 2"&gt;Dragon Copilot&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-ccp-parastyle="heading 2"&gt;is the&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-ccp-parastyle="heading 2"&gt;AI clinical assistant&lt;/SPAN&gt;&lt;SPAN data-ccp-parastyle="heading 2"&gt;&amp;nbsp;connecting&lt;/SPAN&gt;&lt;SPAN data-ccp-parastyle="heading 2"&gt;&amp;nbsp;fragmented systems&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;134245418&amp;quot;:true,&amp;quot;134245529&amp;quot;:true,&amp;quot;335557856&amp;quot;:16777215,&amp;quot;335559738&amp;quot;:360,&amp;quot;335559739&amp;quot;:120}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/H2&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;Care doesn’t happen in a single app. Even though the EHR is the system of record, clinicians still do significant work outside of it. Yet EHR-based AI &lt;/SPAN&gt;&lt;SPAN style="color: rgb(30, 30, 30);"&gt;cannot reach outside the EHR itself. And&amp;nbsp;most&amp;nbsp;third-party tools&amp;nbsp;don’t&amp;nbsp;(and&amp;nbsp;won’t) ship deep AI integrations quickly.&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;The result is a frustrating array of AI-powered experiences that can only&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;perform a part of the required task – such as an EHR ordering agent that cannot read the hospital formulary in SharePoint, or a third-party coding &lt;/SPAN&gt;&lt;SPAN style="color: rgb(30, 30, 30);" data-contrast="auto"&gt;solution that cannot automatically map the provider’s visit diagnoses.&amp;nbsp;That leaves&amp;nbsp;the clinician in&amp;nbsp;an&amp;nbsp;unhappy position of needing to be&amp;nbsp;data&amp;nbsp;courier, manually copying information from one application to the other, rather than spending time taking care of the patient.&lt;/SPAN&gt;&lt;SPAN style="color: rgb(30, 30, 30);" data-ccp-props="{&amp;quot;335557856&amp;quot;:16777215,&amp;quot;335559739&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;Dragon Copilot acts as&amp;nbsp;the&amp;nbsp;connective tissue across that fragmentation. By working with standard text controls, it can bring a consistent interaction model, including dictation, commands, and&amp;nbsp;cursor-native AI, across the clinical&amp;nbsp;workflow.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;335557856&amp;quot;:16777215,&amp;quot;335559739&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;H2 aria-level="2"&gt;&lt;SPAN data-contrast="none"&gt;&lt;SPAN data-ccp-parastyle="heading 2"&gt;S&lt;/SPAN&gt;&lt;SPAN data-ccp-parastyle="heading 2"&gt;hift&lt;/SPAN&gt;&lt;SPAN data-ccp-parastyle="heading 2"&gt;ing from&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-ccp-parastyle="heading 2"&gt;speech dictation&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-ccp-parastyle="heading 2"&gt;to natural language&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-ccp-parastyle="heading 2"&gt;editing&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;134245418&amp;quot;:true,&amp;quot;134245529&amp;quot;:true,&amp;quot;335557856&amp;quot;:16777215,&amp;quot;335559738&amp;quot;:360,&amp;quot;335559739&amp;quot;:120}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/H2&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;Over&amp;nbsp;650,000&amp;nbsp;providers worldwide have&amp;nbsp;benefited&amp;nbsp;from computerized speech-to-text dictation using&amp;nbsp;Dragon Medical One.&amp;nbsp;Traditional speech-to-text systems, like&amp;nbsp;Dragon Medical&amp;nbsp;One&amp;nbsp;convert spoken audio&amp;nbsp;into text&amp;nbsp;word for word, like a courtroom transcript.&amp;nbsp;But Dragon Copilot&amp;nbsp;provides&amp;nbsp;a&amp;nbsp;new&amp;nbsp;way to turn language into clinical content, in addition to&amp;nbsp;capturing&amp;nbsp;patient&amp;nbsp;encounters ambiently.&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;335557856&amp;quot;:16777215,&amp;quot;335559739&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;With Dragon Copilot, clinicians can just say what&amp;nbsp;they&amp;nbsp;want&amp;nbsp;by speaking naturally.&amp;nbsp;They&amp;nbsp;can instruct Dragon Copilot to perform&amp;nbsp;targeted edits (“Summarize the HPI in two sentences”)&amp;nbsp;and&amp;nbsp;issue&amp;nbsp;high-leverage whole-document edits that used to be tedious.&amp;nbsp;Clinicians&amp;nbsp;can even&amp;nbsp;talk naturally&amp;nbsp;to&amp;nbsp;create&amp;nbsp;documentation from scratch in new ways:&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;335557856&amp;quot;:16777215,&amp;quot;335559739&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;img /&gt;
&lt;UL&gt;
&lt;LI aria-setsize="-1" data-leveltext="" data-font="Symbol" data-listid="26" data-list-defn-props="{&amp;quot;335552541&amp;quot;:1,&amp;quot;335559685&amp;quot;:720,&amp;quot;335559991&amp;quot;:360,&amp;quot;469769226&amp;quot;:&amp;quot;Symbol&amp;quot;,&amp;quot;469769242&amp;quot;:[8226],&amp;quot;469777803&amp;quot;:&amp;quot;left&amp;quot;,&amp;quot;469777804&amp;quot;:&amp;quot;&amp;quot;,&amp;quot;469777815&amp;quot;:&amp;quot;hybridMultilevel&amp;quot;}" data-aria-posinset="1" data-aria-level="1"&gt;&lt;SPAN data-contrast="auto"&gt;“Using only the details already documented, draft an HPI and list a few clarifying questions to ask.”&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;335557856&amp;quot;:16777215,&amp;quot;335559739&amp;quot;:0}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI aria-setsize="-1" data-leveltext="" data-font="Symbol" data-listid="26" data-list-defn-props="{&amp;quot;335552541&amp;quot;:1,&amp;quot;335559685&amp;quot;:720,&amp;quot;335559991&amp;quot;:360,&amp;quot;469769226&amp;quot;:&amp;quot;Symbol&amp;quot;,&amp;quot;469769242&amp;quot;:[8226],&amp;quot;469777803&amp;quot;:&amp;quot;left&amp;quot;,&amp;quot;469777804&amp;quot;:&amp;quot;&amp;quot;,&amp;quot;469777815&amp;quot;:&amp;quot;hybridMultilevel&amp;quot;}" data-aria-posinset="2" data-aria-level="1"&gt;&lt;SPAN data-contrast="auto"&gt;“Update the patient’s pronouns throughout the note (don’t change clinical facts).”&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;335557856&amp;quot;:16777215,&amp;quot;335559739&amp;quot;:0}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI aria-setsize="-1" data-leveltext="" data-font="Symbol" data-listid="26" data-list-defn-props="{&amp;quot;335552541&amp;quot;:1,&amp;quot;335559685&amp;quot;:720,&amp;quot;335559991&amp;quot;:360,&amp;quot;469769226&amp;quot;:&amp;quot;Symbol&amp;quot;,&amp;quot;469769242&amp;quot;:[8226],&amp;quot;469777803&amp;quot;:&amp;quot;left&amp;quot;,&amp;quot;469777804&amp;quot;:&amp;quot;&amp;quot;,&amp;quot;469777815&amp;quot;:&amp;quot;hybridMultilevel&amp;quot;}" data-aria-posinset="3" data-aria-level="1"&gt;&lt;SPAN data-contrast="auto"&gt;“Rewrite the entire note in a more concise style, preserving meaning and keeping all facts the same.”&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;335557856&amp;quot;:16777215,&amp;quot;335559739&amp;quot;:0}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI aria-setsize="-1" data-leveltext="" data-font="Symbol" data-listid="26" data-list-defn-props="{&amp;quot;335552541&amp;quot;:1,&amp;quot;335559685&amp;quot;:720,&amp;quot;335559991&amp;quot;:360,&amp;quot;469769226&amp;quot;:&amp;quot;Symbol&amp;quot;,&amp;quot;469769242&amp;quot;:[8226],&amp;quot;469777803&amp;quot;:&amp;quot;left&amp;quot;,&amp;quot;469777804&amp;quot;:&amp;quot;&amp;quot;,&amp;quot;469777815&amp;quot;:&amp;quot;hybridMultilevel&amp;quot;}" data-aria-posinset="4" data-aria-level="1"&gt;&lt;SPAN data-contrast="auto"&gt;“Write the A&amp;amp;P&amp;nbsp;using&amp;nbsp;my standard&amp;nbsp;knee template with a conservative treatment plan.”&lt;/SPAN&gt;&amp;nbsp;&lt;BR /&gt;&lt;SPAN data-ccp-props="{&amp;quot;335557856&amp;quot;:16777215,&amp;quot;335559739&amp;quot;:0}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;These are new use cases where a short instruction can yield large, efficient edits.&amp;nbsp;And for&amp;nbsp;Dragon Medical One&amp;nbsp;users who have&amp;nbsp;benefitted&amp;nbsp;from&amp;nbsp;Dragon Medical One&amp;nbsp;voice macros (called Step-By-Step commands) and custom&amp;nbsp;vocabularies, these are all&amp;nbsp;transferrable to Dragon Copilot.&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;335557856&amp;quot;:16777215,&amp;quot;335559739&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;H2 aria-level="2"&gt;&lt;SPAN data-contrast="none"&gt;&lt;SPAN data-ccp-parastyle="heading 2"&gt;T&lt;/SPAN&gt;&lt;SPAN data-ccp-parastyle="heading 2"&gt;he&lt;/SPAN&gt;&lt;SPAN data-ccp-parastyle="heading 2"&gt;&amp;nbsp;cursor&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-ccp-parastyle="heading 2"&gt;becomes&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-ccp-parastyle="heading 2"&gt;a reusable (and extensible) primitive&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;134245418&amp;quot;:true,&amp;quot;134245529&amp;quot;:true,&amp;quot;335557856&amp;quot;:16777215,&amp;quot;335559738&amp;quot;:360,&amp;quot;335559739&amp;quot;:120}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/H2&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;The cursor provides a dependable, ubiquitous&amp;nbsp;interface for AI. While EHRs and complementary clinical and non-clinical applications may&amp;nbsp;present&amp;nbsp;vastly different UIs, they&amp;nbsp;generally rely&amp;nbsp;on the underlying operating system to provide the same cursor. Hence, users have come to&amp;nbsp;expect a common behavior and access to a core set of functions associated with the cursor, regardless of the underlying application.&amp;nbsp;&amp;nbsp;Thus, the cursor&amp;nbsp;defines scope (selected text vs. current field), intent (summarize, rewrite, extract), and placement (results appear where the clinician expects them).&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;335557856&amp;quot;:16777215,&amp;quot;335559739&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;In-workflow, cursor-anchored AI like Dragon Copilot’s turn that repeatable pattern into a platform primitive—a foundational, reusable building block that developers can rely on to deliver consistent experiences across applications. For Microsoft developers as well as third-party extension partners, this platform primitive provides experience extensibility: app teams and agent developers will be able to deliver new skills at the cursor through Dragon Copilot. Rapid expansion of Dragon Copilot capabilities becomes possible, because internal and partner developers can focus on the capabilities themselves instead of reinventing the UI. And end-users benefit from a coherent, always-discoverable, always-available experience.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;201341983&amp;quot;:2,&amp;quot;335559740&amp;quot;:300}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;H2 aria-level="2"&gt;&lt;SPAN data-contrast="none"&gt;&lt;SPAN data-ccp-parastyle="heading 2"&gt;Cursor-anchored AI can be safer and more trustworthy, especially with integrations&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;134245418&amp;quot;:true,&amp;quot;134245529&amp;quot;:true,&amp;quot;335557856&amp;quot;:16777215,&amp;quot;335559738&amp;quot;:360,&amp;quot;335559739&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/H2&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;Healthcare demands trust. Cursor-anchored AI keeps actions tied to what the clinician can see and edit, and it makes output reviewable, editable, and reversible.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;335557856&amp;quot;:16777215,&amp;quot;335559739&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;That&amp;nbsp;doesn’t&amp;nbsp;mean ignoring&amp;nbsp;backend-integrated context. The goal is to combine the value of integrations with an interaction model that keeps scope clear: what the AI acted on, what it used, and what it produced, so clinicians stay in control.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;335557856&amp;quot;:16777215,&amp;quot;335559739&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;H2 aria-level="2"&gt;&lt;SPAN data-contrast="none"&gt;&lt;SPAN data-ccp-parastyle="heading 2"&gt;Conclusion: Dragon Copilot&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-ccp-parastyle="heading 2"&gt;on the desktop&lt;/SPAN&gt;&lt;SPAN data-ccp-parastyle="heading 2"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-ccp-parastyle="heading 2"&gt;unifies&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-ccp-parastyle="heading 2"&gt;fragmented systems and delivers&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-ccp-parastyle="heading 2"&gt;a seamless cross-application&lt;/SPAN&gt;&lt;SPAN data-ccp-parastyle="heading 2"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-ccp-parastyle="heading 2"&gt;intelligence&lt;/SPAN&gt;&lt;SPAN data-ccp-parastyle="heading 2"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-ccp-parastyle="heading 2"&gt;in one&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-ccp-parastyle="heading 2"&gt;solution&lt;/SPAN&gt;&lt;SPAN data-ccp-parastyle="heading 2"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;134245418&amp;quot;:true,&amp;quot;134245529&amp;quot;:true,&amp;quot;335557856&amp;quot;:16777215,&amp;quot;335559738&amp;quot;:360,&amp;quot;335559739&amp;quot;:120}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/H2&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;Dragon Copilot&amp;nbsp;represents&amp;nbsp;the&amp;nbsp;next and necessary step in the evolution of healthcare AI.&amp;nbsp;With&amp;nbsp;today’s&amp;nbsp;status quo,&amp;nbsp;clinical use of generative AI is&amp;nbsp;largely restricted&amp;nbsp;to&amp;nbsp;algorithmic clinical decision support or ambient documentation creation. Clinicians are still faced with disparate&amp;nbsp;AI systems that are either tightly bound to EHRs but limited in scope beyond the EHR, or poorly integrated with the EHR, such that the AI systems lack sufficient visibility into patient context or&amp;nbsp;require manual data extract,&amp;nbsp;transform&amp;nbsp;and load. In either case, clinicians&amp;nbsp;are stuck with the role of being human data couriers; and that is still&amp;nbsp;a far cry from&amp;nbsp;the patient-clinician&amp;nbsp;relationship they went to medical school or nursing school to&amp;nbsp;practice.&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;335557856&amp;quot;:16777215,&amp;quot;335559739&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;With Dragon Copilot, AI&amp;nbsp;isn’t&amp;nbsp;“somewhere else.” It is intelligence embedded in the act of&amp;nbsp;information gathering, analyzing, synthesizing,&amp;nbsp;writing, revising, deciding,&amp;nbsp;and executing.&amp;nbsp;It is AI&amp;nbsp;assistance&amp;nbsp;right where you work.&amp;nbsp;It&amp;nbsp;represents&amp;nbsp;less context switching, better notes,&amp;nbsp;and&amp;nbsp;stronger&amp;nbsp;knowledge&amp;nbsp;work support.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;335557856&amp;quot;:16777215,&amp;quot;335559739&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;</description>
      <pubDate>Mon, 09 Mar 2026 13:24:33 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/healthcare-and-life-sciences/from-dictation-to-intelligence-at-the-cursor-with-dragon-copilot/ba-p/4496145</guid>
      <dc:creator>James_Jeffries</dc:creator>
      <dc:date>2026-03-09T13:24:33Z</dc:date>
    </item>
    <item>
      <title>Dragon Copilot brings AI into revenue cycle workflows</title>
      <link>https://techcommunity.microsoft.com/t5/healthcare-and-life-sciences/dragon-copilot-brings-ai-into-revenue-cycle-workflows/ba-p/4499454</link>
      <description>&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;&lt;EM&gt;This blog is co-authored by Josh Waldo, Product Manager, Joeri Van der &lt;/EM&gt;&lt;/SPAN&gt;&lt;EM&gt;Vloet, Principal Research Manager, and Koen Mertens, Senior Research Manager.&lt;/EM&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;Many health systems treat revenue cycle performance solely as a downstream problem. Clinicians complete documentation, teams send notes to coders, queries follow, and everyone works backward to recover what was missed. In some cases, recovery isn’t even possible in this model. With the rise and pervasiveness of AI, there is an opportunity to rethink the model by improving documentation quality and comprehensiveness at the point of care. In this new model, reimbursement accuracy can be captured before the opportunity has been lost.&amp;nbsp; &lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;335557856&amp;quot;:16777215,&amp;quot;335559739&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;In this blog post, we walk through three powerful approaches we use in Dragon Copilot to support revenue cycle management (RCM):&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;335557856&amp;quot;:16777215,&amp;quot;335559739&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;OL&gt;
&lt;LI aria-setsize="-1" data-leveltext="%1." data-font="Segoe UI,Times New Roman" data-listid="18" data-list-defn-props="{&amp;quot;335552541&amp;quot;:0,&amp;quot;335559685&amp;quot;:720,&amp;quot;335559991&amp;quot;:360,&amp;quot;469769242&amp;quot;:[65533,0],&amp;quot;469777803&amp;quot;:&amp;quot;left&amp;quot;,&amp;quot;469777804&amp;quot;:&amp;quot;%1.&amp;quot;,&amp;quot;469777815&amp;quot;:&amp;quot;hybridMultilevel&amp;quot;}" data-aria-posinset="1" data-aria-level="1"&gt;&lt;SPAN data-contrast="auto"&gt;Embed revenue cycle intelligence directly into the clinical workflow&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;, delivering&amp;nbsp;built-in&amp;nbsp;guidance as documentation is created—while the patient is still in the room—to ensure critical details are captured.&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI aria-setsize="-1" data-leveltext="%1." data-font="Segoe UI,Times New Roman" data-listid="18" data-list-defn-props="{&amp;quot;335552541&amp;quot;:0,&amp;quot;335559685&amp;quot;:720,&amp;quot;335559991&amp;quot;:360,&amp;quot;469769242&amp;quot;:[65533,0],&amp;quot;469777803&amp;quot;:&amp;quot;left&amp;quot;,&amp;quot;469777804&amp;quot;:&amp;quot;%1.&amp;quot;,&amp;quot;469777815&amp;quot;:&amp;quot;hybridMultilevel&amp;quot;}" data-aria-posinset="1" data-aria-level="1"&gt;Deliver that intelligence directly&lt;SPAN style="color: rgb(30, 30, 30);" data-contrast="auto"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN style="color: rgb(30, 30, 30);" data-contrast="auto"&gt;into existing EHR workflows&lt;/SPAN&gt;&lt;SPAN style="color: rgb(30, 30, 30);" data-contrast="auto"&gt;&amp;nbsp;wherever possible.&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN style="color: rgb(30, 30, 30);" data-ccp-props="{&amp;quot;201341983&amp;quot;:2,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:300}"&gt; &lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI aria-setsize="-1" data-leveltext="%1." data-font="Segoe UI,Times New Roman" data-listid="18" data-list-defn-props="{&amp;quot;335552541&amp;quot;:0,&amp;quot;335559685&amp;quot;:720,&amp;quot;335559991&amp;quot;:360,&amp;quot;469769242&amp;quot;:[65533,0],&amp;quot;469777803&amp;quot;:&amp;quot;left&amp;quot;,&amp;quot;469777804&amp;quot;:&amp;quot;%1.&amp;quot;,&amp;quot;469777815&amp;quot;:&amp;quot;hybridMultilevel&amp;quot;}" data-aria-posinset="1" data-aria-level="1"&gt;Extend RCM capabilities through an open ecosystem&lt;SPAN style="color: rgb(30, 30, 30);" data-contrast="auto"&gt;, enabling specialized partners to bring their coding&amp;nbsp;expertise&amp;nbsp;directly into the same workflow.&lt;/SPAN&gt;&lt;SPAN style="color: rgb(30, 30, 30);" data-ccp-props="{&amp;quot;201341983&amp;quot;:2,&amp;quot;335559739&amp;quot;:0,&amp;quot;335559740&amp;quot;:300}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;/OL&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;Clinicians document using Dragon Copilot while revenue relevant insights surface in context before notes ever move downstream. The result is better reimbursement accuracy, fewer back and forth exchanges, and less rework for clinicians and revenue cycle teams.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;335557856&amp;quot;:16777215,&amp;quot;335559739&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;H2 aria-level="2"&gt;&lt;SPAN data-contrast="none"&gt;&lt;SPAN data-ccp-parastyle="heading 2"&gt;The problem, the opportunity, and the shape of a&lt;/SPAN&gt;&lt;SPAN data-ccp-parastyle="heading 2"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-ccp-parastyle="heading 2"&gt;workflow solution&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;134245418&amp;quot;:true,&amp;quot;134245529&amp;quot;:true,&amp;quot;335557856&amp;quot;:16777215,&amp;quot;335559738&amp;quot;:360,&amp;quot;335559739&amp;quot;:120}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/H2&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;Most revenue cycle friction traces back to incomplete or imprecise documentation. Missing specificity, unclear assessments, and undocumented work create queries, denials, delays, and rework.&amp;nbsp;The opportunity is to move RCM support upstream to the point where documentation is being created, with help that is context aware, transparent, and clinician controlled.&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;335557856&amp;quot;:16777215,&amp;quot;335559739&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;Solutions must analyze&amp;nbsp;the&amp;nbsp;patient,&amp;nbsp;context&amp;nbsp;and resulting&amp;nbsp;documentation as&amp;nbsp;they&amp;nbsp;are&amp;nbsp;created and surface opportunities related to:&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;335557856&amp;quot;:16777215,&amp;quot;335559739&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI aria-setsize="-1" data-leveltext="" data-font="Symbol" data-listid="14" data-list-defn-props="{&amp;quot;335552541&amp;quot;:1,&amp;quot;335559685&amp;quot;:720,&amp;quot;335559991&amp;quot;:360,&amp;quot;469769226&amp;quot;:&amp;quot;Symbol&amp;quot;,&amp;quot;469769242&amp;quot;:[8226],&amp;quot;469777803&amp;quot;:&amp;quot;left&amp;quot;,&amp;quot;469777804&amp;quot;:&amp;quot;&amp;quot;,&amp;quot;469777815&amp;quot;:&amp;quot;hybridMultilevel&amp;quot;}" data-aria-posinset="1" data-aria-level="1"&gt;&lt;SPAN data-contrast="auto"&gt;Diagnosis specificity and diagnostic comprehensiveness&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;335557856&amp;quot;:16777215,&amp;quot;335559739&amp;quot;:0}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;UL&gt;
&lt;LI aria-setsize="-1" data-leveltext="" data-font="Symbol" data-listid="14" data-list-defn-props="{&amp;quot;335552541&amp;quot;:1,&amp;quot;335559685&amp;quot;:720,&amp;quot;335559991&amp;quot;:360,&amp;quot;469769226&amp;quot;:&amp;quot;Symbol&amp;quot;,&amp;quot;469769242&amp;quot;:[8226],&amp;quot;469777803&amp;quot;:&amp;quot;left&amp;quot;,&amp;quot;469777804&amp;quot;:&amp;quot;&amp;quot;,&amp;quot;469777815&amp;quot;:&amp;quot;hybridMultilevel&amp;quot;}" data-aria-posinset="2" data-aria-level="1"&gt;&lt;SPAN data-contrast="auto"&gt;HCC capture and recapture opportunities&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;335557856&amp;quot;:16777215,&amp;quot;335559739&amp;quot;:0}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;UL&gt;
&lt;LI aria-setsize="-1" data-leveltext="" data-font="Symbol" data-listid="14" data-list-defn-props="{&amp;quot;335552541&amp;quot;:1,&amp;quot;335559685&amp;quot;:720,&amp;quot;335559991&amp;quot;:360,&amp;quot;469769226&amp;quot;:&amp;quot;Symbol&amp;quot;,&amp;quot;469769242&amp;quot;:[8226],&amp;quot;469777803&amp;quot;:&amp;quot;left&amp;quot;,&amp;quot;469777804&amp;quot;:&amp;quot;&amp;quot;,&amp;quot;469777815&amp;quot;:&amp;quot;hybridMultilevel&amp;quot;}" data-aria-posinset="3" data-aria-level="1"&gt;&lt;SPAN data-contrast="auto"&gt;Evaluation and management (E/M) level considerations&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;335557856&amp;quot;:16777215,&amp;quot;335559739&amp;quot;:0}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;UL&gt;
&lt;LI aria-setsize="-1" data-leveltext="" data-font="Symbol" data-listid="14" data-list-defn-props="{&amp;quot;335552541&amp;quot;:1,&amp;quot;335559685&amp;quot;:720,&amp;quot;335559991&amp;quot;:360,&amp;quot;469769226&amp;quot;:&amp;quot;Symbol&amp;quot;,&amp;quot;469769242&amp;quot;:[8226],&amp;quot;469777803&amp;quot;:&amp;quot;left&amp;quot;,&amp;quot;469777804&amp;quot;:&amp;quot;&amp;quot;,&amp;quot;469777815&amp;quot;:&amp;quot;hybridMultilevel&amp;quot;}" data-aria-posinset="4" data-aria-level="1"&gt;&lt;SPAN data-contrast="auto"&gt;Completeness of documentation&amp;nbsp;relative&amp;nbsp;to established criteria, like MEAT (monitor, evaluate, assess, treat)&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;335557856&amp;quot;:16777215,&amp;quot;335559739&amp;quot;:0}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;UL&gt;
&lt;LI aria-setsize="-1" data-leveltext="" data-font="Symbol" data-listid="14" data-list-defn-props="{&amp;quot;335552541&amp;quot;:1,&amp;quot;335559685&amp;quot;:720,&amp;quot;335559991&amp;quot;:360,&amp;quot;469769226&amp;quot;:&amp;quot;Symbol&amp;quot;,&amp;quot;469769242&amp;quot;:[8226],&amp;quot;469777803&amp;quot;:&amp;quot;left&amp;quot;,&amp;quot;469777804&amp;quot;:&amp;quot;&amp;quot;,&amp;quot;469777815&amp;quot;:&amp;quot;hybridMultilevel&amp;quot;}" data-aria-posinset="5" data-aria-level="1"&gt;&lt;SPAN data-contrast="auto"&gt;Prior authorization guidelines&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;335557856&amp;quot;:16777215,&amp;quot;335559739&amp;quot;:0}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;UL&gt;
&lt;LI aria-setsize="-1" data-leveltext="" data-font="Symbol" data-listid="14" data-list-defn-props="{&amp;quot;335552541&amp;quot;:1,&amp;quot;335559685&amp;quot;:720,&amp;quot;335559991&amp;quot;:360,&amp;quot;469769226&amp;quot;:&amp;quot;Symbol&amp;quot;,&amp;quot;469769242&amp;quot;:[8226],&amp;quot;469777803&amp;quot;:&amp;quot;left&amp;quot;,&amp;quot;469777804&amp;quot;:&amp;quot;&amp;quot;,&amp;quot;469777815&amp;quot;:&amp;quot;hybridMultilevel&amp;quot;}" data-aria-posinset="6" data-aria-level="1"&gt;&lt;SPAN data-contrast="auto"&gt;Complete capture of CC’s/MCC’s&amp;nbsp;and other inpatient guidance&amp;nbsp;&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;335557856&amp;quot;:16777215,&amp;quot;335559739&amp;quot;:0}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;The shape of the solution must also provide assistance that shows up inside the documentation flow.&amp;nbsp;It should feel like a light touch and not a separate destination or a stream of alerts. The design goal is to minimize distraction, avoid overwhelming notifications, and make every suggestion easy to review with&amp;nbsp;clear evidence.&amp;nbsp;Further, the nudges need to be actionable.&amp;nbsp;With the click of a button, a clinician should be able to&amp;nbsp;accept/reject the&amp;nbsp;insight and have&amp;nbsp;the&amp;nbsp;appropriate actions&amp;nbsp;then ensue, such as updating documentation automatically.&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;335557856&amp;quot;:16777215,&amp;quot;335559739&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;Small improvements may look modest in a single note, but they compound across clinicians and visits when applied consistently. If we can help clinicians capture the full clinical picture once, downstream processes inherit better inputs without asking clinicians to do a second pass.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;335557856&amp;quot;:16777215,&amp;quot;335559739&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;Recognizing the scope of RCM opportunities&amp;nbsp;in&amp;nbsp;both ambulatory and&amp;nbsp;inpatient,&amp;nbsp;we are taking a three-pronged approach&amp;nbsp;to&amp;nbsp;help&amp;nbsp;healthcare organizations&amp;nbsp;enable the&amp;nbsp;broadest and deepest set of coding skills into their Dragon Copilot workflows.&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;335557856&amp;quot;:16777215,&amp;quot;335559739&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;H2 aria-level="2"&gt;&lt;SPAN data-contrast="none"&gt;&lt;SPAN data-ccp-parastyle="heading 2"&gt;Built-in&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-ccp-parastyle="heading 2"&gt;suggestions&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;134245418&amp;quot;:true,&amp;quot;134245529&amp;quot;:true,&amp;quot;335557856&amp;quot;:16777215,&amp;quot;335559738&amp;quot;:360,&amp;quot;335559739&amp;quot;:120}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/H2&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;Dragon Copilot already&amp;nbsp;provides&amp;nbsp;built-in&amp;nbsp;capabilities that support RCM by improving the note itself, and we are continuing to expand them. We start by capturing&amp;nbsp;all the minute details of&amp;nbsp;what happened during the encounter and helping clinicians turn that into clear&amp;nbsp;and comprehensive&amp;nbsp;documentation. The goal is&amp;nbsp;accurate&amp;nbsp;and complete capture of the work performed and the patient complexity.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;335557856&amp;quot;:16777215,&amp;quot;335559739&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;Dragon&amp;nbsp;Copilot&amp;nbsp;also supports&amp;nbsp;on-demand&amp;nbsp;billing code support.&amp;nbsp;For each documented diagnosis, we can suggest relevant ICD-10 code options with supporting&amp;nbsp;evidence&amp;nbsp;so clinicians can quickly&amp;nbsp;validate&amp;nbsp;what is being proposed. We can also generate a coding summary report for each note to support downstream review.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;335557856&amp;quot;:16777215,&amp;quot;335559739&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;Next,&amp;nbsp;Dragon Copilot&amp;nbsp;includes&amp;nbsp;built‑in,&amp;nbsp;proactive&amp;nbsp;diagnosis&amp;nbsp;specificity&amp;nbsp;suggestions designed to&amp;nbsp;support&amp;nbsp;clinicians in selecting&amp;nbsp;the most&amp;nbsp;accurate&amp;nbsp;and&amp;nbsp;specific diagnoses&amp;nbsp;at&amp;nbsp;the&amp;nbsp;point of&amp;nbsp;documentation. As clinical notes are created, Dragon Copilot analyzes the documented clinical context and&amp;nbsp;identifies&amp;nbsp;opportunities where a diagnosis may be underspecified or incomplete.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;335557856&amp;quot;:16777215,&amp;quot;335559739&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;When applicable, Dragon Copilot surfaces suggestions to refine a diagnosis (for example, laterality, acuity, severity, underlying cause, or associated conditions). For risk‑adjusted conditions,&amp;nbsp;it&amp;nbsp;helps clinicians&amp;nbsp;identify&amp;nbsp;diagnoses that may&amp;nbsp;impact&amp;nbsp;Hierarchical Condition Categories (HCCs), while emphasizing the need for complete, clinically&amp;nbsp;appropriate, and defensible documentation.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;335557856&amp;quot;:16777215,&amp;quot;335559739&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;All suggestions are advisory and designed to fit naturally into the clinician’s workflow. Clinicians&amp;nbsp;remain&amp;nbsp;fully in control, able to review, accept,&amp;nbsp;modify, or ignore suggestions based on their clinical judgment.&amp;nbsp;It&amp;nbsp;does not add diagnoses automatically and does not replace clinical decision‑making. Its goal is to reduce missed specificity, support&amp;nbsp;accurate&amp;nbsp;clinical representation of patient complexity, and help ensure documentation aligns with coding and compliance expectations.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;335557856&amp;quot;:16777215,&amp;quot;335559739&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;At a user experience level, we want this to be simple and predictable&amp;nbsp;as outlined in this workflow:&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;335557856&amp;quot;:16777215,&amp;quot;335559739&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;OL&gt;
&lt;LI aria-setsize="-1" data-leveltext="%1." data-font="Segoe UI" data-listid="2" data-list-defn-props="{&amp;quot;335552541&amp;quot;:0,&amp;quot;335559685&amp;quot;:720,&amp;quot;335559991&amp;quot;:360,&amp;quot;469769242&amp;quot;:[65533,0],&amp;quot;469777803&amp;quot;:&amp;quot;left&amp;quot;,&amp;quot;469777804&amp;quot;:&amp;quot;%1.&amp;quot;,&amp;quot;469777815&amp;quot;:&amp;quot;hybridMultilevel&amp;quot;}" data-aria-posinset="1" data-aria-level="1"&gt;&lt;SPAN data-contrast="auto"&gt;The clinician documents using&amp;nbsp;Dragon Copilot’s suite of&amp;nbsp;documentation tools.&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI aria-setsize="-1" data-leveltext="%1." data-font="Segoe UI" data-listid="2" data-list-defn-props="{&amp;quot;335552541&amp;quot;:0,&amp;quot;335559685&amp;quot;:720,&amp;quot;335559991&amp;quot;:360,&amp;quot;469769242&amp;quot;:[65533,0],&amp;quot;469777803&amp;quot;:&amp;quot;left&amp;quot;,&amp;quot;469777804&amp;quot;:&amp;quot;%1.&amp;quot;,&amp;quot;469777815&amp;quot;:&amp;quot;hybridMultilevel&amp;quot;}" data-aria-posinset="1" data-aria-level="1"&gt;Once the note is drafted, a set of relevant suggestions to refine a diagnosis is surfaced discreetly in the workflow.&lt;SPAN style="color: rgb(30, 30, 30);" data-ccp-props="{&amp;quot;335557856&amp;quot;:16777215,&amp;quot;335559739&amp;quot;:0}"&gt; &lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI aria-setsize="-1" data-leveltext="%1." data-font="Segoe UI" data-listid="2" data-list-defn-props="{&amp;quot;335552541&amp;quot;:0,&amp;quot;335559685&amp;quot;:720,&amp;quot;335559991&amp;quot;:360,&amp;quot;469769242&amp;quot;:[65533,0],&amp;quot;469777803&amp;quot;:&amp;quot;left&amp;quot;,&amp;quot;469777804&amp;quot;:&amp;quot;%1.&amp;quot;,&amp;quot;469777815&amp;quot;:&amp;quot;hybridMultilevel&amp;quot;}" data-aria-posinset="1" data-aria-level="1"&gt;Each opportunity includes the why, the evidence, and a suggested improvement.&lt;SPAN style="color: rgb(30, 30, 30);" data-ccp-props="{&amp;quot;335557856&amp;quot;:16777215,&amp;quot;335559739&amp;quot;:0}"&gt; &lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI aria-setsize="-1" data-leveltext="%1." data-font="Segoe UI" data-listid="2" data-list-defn-props="{&amp;quot;335552541&amp;quot;:0,&amp;quot;335559685&amp;quot;:720,&amp;quot;335559991&amp;quot;:360,&amp;quot;469769242&amp;quot;:[65533,0],&amp;quot;469777803&amp;quot;:&amp;quot;left&amp;quot;,&amp;quot;469777804&amp;quot;:&amp;quot;%1.&amp;quot;,&amp;quot;469777815&amp;quot;:&amp;quot;hybridMultilevel&amp;quot;}" data-aria-posinset="1" data-aria-level="1"&gt;The clinician can accept, ignore, or defer, and stays in control at every step.&lt;SPAN style="color: rgb(30, 30, 30);" data-ccp-props="{&amp;quot;335557856&amp;quot;:16777215,&amp;quot;335559739&amp;quot;:0}"&gt; &lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI aria-setsize="-1" data-leveltext="%1." data-font="Segoe UI" data-listid="2" data-list-defn-props="{&amp;quot;335552541&amp;quot;:0,&amp;quot;335559685&amp;quot;:720,&amp;quot;335559991&amp;quot;:360,&amp;quot;469769242&amp;quot;:[65533,0],&amp;quot;469777803&amp;quot;:&amp;quot;left&amp;quot;,&amp;quot;469777804&amp;quot;:&amp;quot;%1.&amp;quot;,&amp;quot;469777815&amp;quot;:&amp;quot;hybridMultilevel&amp;quot;}" data-aria-posinset="1" data-aria-level="1"&gt;The note is improved before it reaches coding, reducing the need for back-and-forth exchanges later.&lt;SPAN style="color: rgb(30, 30, 30);" data-ccp-props="{&amp;quot;335557856&amp;quot;:16777215,&amp;quot;335559739&amp;quot;:0}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;/OL&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;Done well, this approach improves documentation quality without changing clinical practice. It reduces avoidable queries, supports more consistent coding outcomes, and helps revenue cycle teams spend less time on cleanup.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;335557856&amp;quot;:16777215,&amp;quot;335559739&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;Looking ahead,&amp;nbsp;these built-in proactive capabilities&amp;nbsp;will&amp;nbsp;be&amp;nbsp;extended&amp;nbsp;into&amp;nbsp;additional&amp;nbsp;high value areas such as E/M level considerations and completeness expectations such as MEAT (monitor, evaluate, assess, treat).&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;335557856&amp;quot;:16777215,&amp;quot;335559739&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;H2 aria-level="2"&gt;&lt;SPAN data-contrast="none"&gt;&lt;SPAN data-ccp-parastyle="heading 2"&gt;Integrated&lt;/SPAN&gt;&lt;SPAN data-ccp-parastyle="heading 2"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-ccp-parastyle="heading 2"&gt;EHR context and&lt;/SPAN&gt;&lt;SPAN data-ccp-parastyle="heading 2"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-ccp-parastyle="heading 2"&gt;workflow&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;134245418&amp;quot;:true,&amp;quot;134245529&amp;quot;:true,&amp;quot;335557856&amp;quot;:16777215,&amp;quot;335559738&amp;quot;:360,&amp;quot;335559739&amp;quot;:120}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/H2&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;Deep EHR connectivity is a force multiplier because it provides context and continuity. With the right backend integrations, Dragon Copilot can&amp;nbsp;leverage&amp;nbsp;chart data to understand the&amp;nbsp;patient&amp;nbsp;history, current problems, and the broader clinical picture that surrounds the encounter.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;335557856&amp;quot;:16777215,&amp;quot;335559739&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;For example, this can include access to relevant prior notes, the problem list, medications, allergies, labs, imaging results, and structured diagnoses already documented in the chart.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;335557856&amp;quot;:16777215,&amp;quot;335559739&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;This context helps us enrich the completeness and accuracy of notes before anything is presented to the clinician.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;335557856&amp;quot;:16777215,&amp;quot;335559739&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;Connecting directly to EHR workflows also matters. When insights and supporting evidence can be presented in the same places clinicians already review the chart, reconcile problems, place orders, and&amp;nbsp;finalize&amp;nbsp;documentation, we reduce context switching and increase the likelihood that improvements happen at the right moment.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;335557856&amp;quot;:16777215,&amp;quot;335559739&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;We are already building these powerful workflows with some of the most widely used EHRs in the world&amp;nbsp;and are bringing the ability to get diagnosis specificity directly in workflow as one example.&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;335557856&amp;quot;:16777215,&amp;quot;335559739&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;This is also where we can align RCM support with clinical actions. Suggestions should appear when they are most useful, such as while the diagnoses are reviewed and the&amp;nbsp;note&amp;nbsp;finalized.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;335557856&amp;quot;:16777215,&amp;quot;335559739&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;This strategy is also about trust and safety. Clinicians should be able to see where a suggestion comes from, what it is based on, and why it matters, without leaving their workflow.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;335557856&amp;quot;:16777215,&amp;quot;335559739&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;H2 aria-level="2"&gt;&lt;SPAN data-contrast="none"&gt;&lt;SPAN data-ccp-parastyle="heading 2"&gt;Partner extensions&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;134245418&amp;quot;:true,&amp;quot;134245529&amp;quot;:true,&amp;quot;335557856&amp;quot;:16777215,&amp;quot;335559738&amp;quot;:360,&amp;quot;335559739&amp;quot;:120}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/H2&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;RCM is&amp;nbsp;complex&amp;nbsp;and it looks different across organizations, service lines, and care settings. No single approach fits every health system&amp;nbsp;and covers all the targeted needs. An ecosystem approach allows specialized partners to bring their&amp;nbsp;expertise&amp;nbsp;into the same workflow.&amp;nbsp;With&amp;nbsp;an&amp;nbsp;open ecosystem, any partner&amp;nbsp;can integrate coding into workflow in a self-service and standardized fashion, opening&amp;nbsp;coding at scale and giving health systems the widest range of options to choose from.&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;335557856&amp;quot;:16777215,&amp;quot;335559739&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;We&amp;nbsp;see&amp;nbsp;some of the most developed&amp;nbsp;expertise&amp;nbsp;and exciting innovations&amp;nbsp;coming&amp;nbsp;from our partners.&amp;nbsp;Dragon Copilot now serves&amp;nbsp;as&amp;nbsp;a consistent channel into the workflow, so&amp;nbsp;our&amp;nbsp;partners can deliver value without having to reinvent access to encounter context, user experience patterns, and distribution inside the EHR.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;335557856&amp;quot;:16777215,&amp;quot;335559739&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;Examples of what partner extensions can enable include:&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;335557856&amp;quot;:16777215,&amp;quot;335559739&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;OL&gt;
&lt;LI aria-setsize="-1" data-leveltext="%1." data-font="Segoe UI" data-listid="1" data-list-defn-props="{&amp;quot;335552541&amp;quot;:0,&amp;quot;335559685&amp;quot;:720,&amp;quot;335559991&amp;quot;:360,&amp;quot;469769242&amp;quot;:[65533,0],&amp;quot;469777803&amp;quot;:&amp;quot;left&amp;quot;,&amp;quot;469777804&amp;quot;:&amp;quot;%1.&amp;quot;,&amp;quot;469777815&amp;quot;:&amp;quot;hybridMultilevel&amp;quot;}" data-aria-posinset="1" data-aria-level="1"&gt;&lt;SPAN data-contrast="auto"&gt;Specialty specific documentation guidance&lt;/SPAN&gt;&lt;SPAN data-contrast="auto"&gt;&amp;nbsp;that reflects local policy and payer mix.&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI aria-setsize="-1" data-leveltext="%1." data-font="Segoe UI" data-listid="1" data-list-defn-props="{&amp;quot;335552541&amp;quot;:0,&amp;quot;335559685&amp;quot;:720,&amp;quot;335559991&amp;quot;:360,&amp;quot;469769242&amp;quot;:[65533,0],&amp;quot;469777803&amp;quot;:&amp;quot;left&amp;quot;,&amp;quot;469777804&amp;quot;:&amp;quot;%1.&amp;quot;,&amp;quot;469777815&amp;quot;:&amp;quot;hybridMultilevel&amp;quot;}" data-aria-posinset="1" data-aria-level="1"&gt;HCC program support&lt;SPAN style="color: rgb(30, 30, 30);" data-contrast="auto"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN style="color: rgb(30, 30, 30);" data-contrast="auto"&gt;that includes organization specific rules for capture and recapture.&lt;/SPAN&gt;&lt;SPAN style="color: rgb(30, 30, 30);" data-ccp-props="{&amp;quot;335557856&amp;quot;:16777215,&amp;quot;335559739&amp;quot;:0}"&gt; &lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI aria-setsize="-1" data-leveltext="%1." data-font="Segoe UI" data-listid="1" data-list-defn-props="{&amp;quot;335552541&amp;quot;:0,&amp;quot;335559685&amp;quot;:720,&amp;quot;335559991&amp;quot;:360,&amp;quot;469769242&amp;quot;:[65533,0],&amp;quot;469777803&amp;quot;:&amp;quot;left&amp;quot;,&amp;quot;469777804&amp;quot;:&amp;quot;%1.&amp;quot;,&amp;quot;469777815&amp;quot;:&amp;quot;hybridMultilevel&amp;quot;}" data-aria-posinset="1" data-aria-level="1"&gt;Prior authorization&lt;SPAN style="color: rgb(30, 30, 30);" data-contrast="auto"&gt;&amp;nbsp;support that highlights requirements&amp;nbsp;earlier, when&amp;nbsp;care decisions are being made.&lt;/SPAN&gt;&lt;SPAN style="color: rgb(30, 30, 30);" data-ccp-props="{&amp;quot;335557856&amp;quot;:16777215,&amp;quot;335559739&amp;quot;:0}"&gt; &lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI aria-setsize="-1" data-leveltext="%1." data-font="Segoe UI" data-listid="1" data-list-defn-props="{&amp;quot;335552541&amp;quot;:0,&amp;quot;335559685&amp;quot;:720,&amp;quot;335559991&amp;quot;:360,&amp;quot;469769242&amp;quot;:[65533,0],&amp;quot;469777803&amp;quot;:&amp;quot;left&amp;quot;,&amp;quot;469777804&amp;quot;:&amp;quot;%1.&amp;quot;,&amp;quot;469777815&amp;quot;:&amp;quot;hybridMultilevel&amp;quot;}" data-aria-posinset="1" data-aria-level="1"&gt;CCs/MCCs&lt;SPAN style="color: rgb(30, 30, 30);" data-contrast="auto"&gt;&amp;nbsp;identification to&amp;nbsp;accurately capture observed-to-expected (O/E) ratios.&lt;/SPAN&gt;&lt;SPAN style="color: rgb(30, 30, 30);" data-ccp-props="{&amp;quot;335557856&amp;quot;:16777215,&amp;quot;335559739&amp;quot;:0}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;/OL&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;The goal is to let health systems choose what to enable, where to enable it, and for whom, while keeping the clinician experience consistent and limiting disruption.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;335557856&amp;quot;:16777215,&amp;quot;335559739&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;H2 aria-level="2"&gt;&lt;SPAN data-contrast="none"&gt;&lt;SPAN data-ccp-parastyle="heading 2"&gt;Patient&lt;/SPAN&gt;&lt;SPAN data-ccp-parastyle="heading 2"&gt;-&lt;/SPAN&gt;&lt;SPAN data-ccp-parastyle="heading 2"&gt;first&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;SPAN data-contrast="none"&gt;&lt;SPAN data-ccp-parastyle="heading 2"&gt;&amp;nbsp;is the best revenue cycle strategy&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;134245418&amp;quot;:true,&amp;quot;134245529&amp;quot;:true,&amp;quot;335557856&amp;quot;:16777215,&amp;quot;335559738&amp;quot;:360,&amp;quot;335559739&amp;quot;:120}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/H2&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;The most effective revenue cycle strategy starts with patients, not paperwork.&amp;nbsp;Dragon Copilot augments clinician care,&amp;nbsp;helps capture the full&amp;nbsp;patient&amp;nbsp;story, and supports&amp;nbsp;accurate&amp;nbsp;reimbursement as a byproduct of good clinical documentation.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;335557856&amp;quot;:16777215,&amp;quot;335559739&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN data-contrast="auto"&gt;When technology fades into the background and supports clinicians without interrupting care, everyone&amp;nbsp;benefits:&amp;nbsp;clinicians reclaim time, revenue cycle teams see fewer issues, and health systems are reimbursed accurately for the care they provide.&lt;/SPAN&gt;&lt;SPAN data-ccp-props="{&amp;quot;335557856&amp;quot;:16777215,&amp;quot;335559739&amp;quot;:240}"&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;</description>
      <pubDate>Thu, 05 Mar 2026 17:51:44 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/healthcare-and-life-sciences/dragon-copilot-brings-ai-into-revenue-cycle-workflows/ba-p/4499454</guid>
      <dc:creator>James_Jeffries</dc:creator>
      <dc:date>2026-03-05T17:51:44Z</dc:date>
    </item>
    <item>
      <title>Can you use AI to implement an Enterprise Master Patient Index (EMPI)?</title>
      <link>https://techcommunity.microsoft.com/t5/healthcare-and-life-sciences/can-you-use-ai-to-implement-an-enterprise-master-patient-index/ba-p/4499600</link>
      <description>&lt;H2 data-line="6"&gt;The Short Answer: Yes. And It's Better Than You Think.&lt;/H2&gt;
&lt;P data-line="8"&gt;If you've worked in healthcare IT for any length of time, you've dealt with this problem.&lt;/P&gt;
&lt;P data-line="10"&gt;Patient A shows up at Hospital 1 as "Jonathan Smith, DOB 03/15/1985." Patient B shows up at Hospital 2 as "Jon Smith, DOB 03/15/1985." Patient C shows up at a clinic as "John Smythe, DOB 03/15/1985."&lt;/P&gt;
&lt;P data-line="14"&gt;Same person? Probably. But how do you&amp;nbsp;&lt;STRONG&gt;prove&lt;/STRONG&gt;&amp;nbsp;it at scale — across millions of records, dozens of source systems, and data quality that ranges from pristine to "someone fat-fingered a birth year"?&lt;/P&gt;
&lt;P data-line="16"&gt;That's the problem an&amp;nbsp;&lt;STRONG&gt;Enterprise Master Patient Index (EMPI)&lt;/STRONG&gt;&amp;nbsp;solves. And traditionally, it's been solved with expensive commercial products, rigid rule engines, and a lot of manual review.&lt;/P&gt;
&lt;P data-line="18"&gt;We built one with AI. On Azure. With open-source tooling. And the results are genuinely impressive.&lt;/P&gt;
&lt;P data-line="20"&gt;This post walks through how it works, what the architecture looks like, and why the combination of deterministic matching, probabilistic algorithms, and AI-enhanced scoring produces better results than any single approach alone.&lt;/P&gt;
&lt;H2 data-line="24"&gt;1. Why EMPI Still Matters (More Than Ever)&lt;/H2&gt;
&lt;P data-line="26"&gt;Healthcare organizations don't have a "patient data problem." They have a&amp;nbsp;&lt;STRONG&gt;patient identity problem.&lt;/STRONG&gt;&lt;/P&gt;
&lt;P data-line="28"&gt;Every EHR, lab system, pharmacy platform, and claims processor creates its own patient record. When those systems exchange data via FHIR, HL7, or flat files, there's no universal patient identifier in the U.S. — Congress has blocked funding for one since 1998.&lt;/P&gt;
&lt;P data-line="30"&gt;The result:&lt;/P&gt;
&lt;UL data-line="32"&gt;
&lt;LI data-line="32"&gt;&lt;STRONG&gt;Duplicate records&lt;/STRONG&gt;&amp;nbsp;inflate costs and fragment care history&lt;/LI&gt;
&lt;LI data-line="33"&gt;&lt;STRONG&gt;Missed matches&lt;/STRONG&gt;&amp;nbsp;mean clinicians don't see a patient's full medical picture&lt;/LI&gt;
&lt;LI data-line="34"&gt;&lt;STRONG&gt;False positives&lt;/STRONG&gt;&amp;nbsp;can merge two different patients into one record — a patient safety risk&lt;/LI&gt;
&lt;/UL&gt;
&lt;P data-line="36"&gt;Traditional EMPI solutions use deterministic matching (exact field comparisons) and sometimes probabilistic scoring (fuzzy string matching). They work. But they leave a significant gray zone of records that require human review — and that queue grows faster than teams can process it.&lt;/P&gt;
&lt;P data-line="38"&gt;What if AI could shrink that gray zone?&lt;/P&gt;
&lt;H2 data-line="42"&gt;2. The Architecture: Three Layers of Matching&lt;/H2&gt;
&lt;P data-line="44"&gt;Here's the core insight:&amp;nbsp;&lt;STRONG&gt;no single matching technique is sufficient.&lt;/STRONG&gt;&amp;nbsp;Exact matches miss typos. Fuzzy matches produce false positives. AI alone hallucinates.&lt;/P&gt;
&lt;P data-line="46"&gt;But layer them together with calibrated weights, and you get something remarkably accurate.&lt;/P&gt;
&lt;P data-line="73"&gt;Let's break each layer down.&lt;/P&gt;
&lt;H2 data-line="77"&gt;3. Layer 1: Deterministic Matching — The Foundation&lt;/H2&gt;
&lt;P data-line="79"&gt;Deterministic matching is the bedrock. If two records share an Enterprise ID, they're the same person. Full stop.&lt;/P&gt;
&lt;P data-line="81"&gt;The system assigns trust levels to each identifier type:&lt;/P&gt;
&lt;DIV class="styles_lia-table-wrapper__h6Xo9 styles_table-responsive__MW0lN"&gt;&lt;table border="1" style="border-width: 1px;"&gt;&lt;thead&gt;&lt;tr&gt;&lt;th&gt;Identifier&lt;/th&gt;&lt;th&gt;Weight&lt;/th&gt;&lt;th&gt;Why&lt;/th&gt;&lt;/tr&gt;&lt;/thead&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td&gt;Enterprise ID&lt;/td&gt;&lt;td&gt;1.0&lt;/td&gt;&lt;td&gt;Explicitly assigned by an authority&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;SSN&lt;/td&gt;&lt;td&gt;0.9&lt;/td&gt;&lt;td&gt;Highly reliable when present and accurate&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;MRN&lt;/td&gt;&lt;td&gt;0.8&lt;/td&gt;&lt;td&gt;System-dependent — only valid within the same healthcare system&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;Date of Birth&lt;/td&gt;&lt;td&gt;0.35&lt;/td&gt;&lt;td&gt;Common but not unique — 0.3% of the population shares any given birthday&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;Phone&lt;/td&gt;&lt;td&gt;0.3&lt;/td&gt;&lt;td&gt;Useful signal but changes frequently&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;Email&lt;/td&gt;&lt;td&gt;0.3&lt;/td&gt;&lt;td&gt;Same — supportive evidence, not proof&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;colgroup&gt;&lt;col style="width: 33.33%" /&gt;&lt;col style="width: 33.33%" /&gt;&lt;col style="width: 33.33%" /&gt;&lt;/colgroup&gt;&lt;/table&gt;&lt;/DIV&gt;
&lt;P data-line="92"&gt;The key implementation detail here is&amp;nbsp;&lt;STRONG&gt;MRN system validation.&lt;/STRONG&gt;&amp;nbsp;An MRN of "12345" at Hospital A is completely unrelated to MRN "12345" at Hospital B. The system checks the identifier's source system URI before considering it a match. Without this, you'd get a flood of false positives from coincidental MRN collisions.&lt;/P&gt;
&lt;P data-line="94"&gt;If an Enterprise ID match is found, the system short-circuits — no need for probabilistic or AI scoring. It's a guaranteed match.&lt;/P&gt;
&lt;H2 data-line="98"&gt;4. Layer 2: Probabilistic Matching — Where It Gets Interesting&lt;/H2&gt;
&lt;P data-line="100"&gt;This is where the system earns its keep. Probabilistic matching handles the messy reality of healthcare data: typos, nicknames, transposed digits, abbreviations, and inconsistent formatting.&lt;/P&gt;
&lt;H3 data-line="102"&gt;Name Similarity&lt;/H3&gt;
&lt;P data-line="104"&gt;The system uses a multi-algorithm ensemble for name matching:&lt;/P&gt;
&lt;UL data-line="106"&gt;
&lt;LI data-line="106"&gt;&lt;STRONG&gt;Jaro-Winkler&lt;/STRONG&gt;&amp;nbsp;(60% weight): Optimized for short strings like names. Gives extra credit when strings share a common prefix — so "Jonathan" vs "Jon" scores higher than you'd expect.&lt;/LI&gt;
&lt;LI data-line="107"&gt;&lt;STRONG&gt;Soundex / Metaphone&lt;/STRONG&gt;&amp;nbsp;(phonetic boost): Catches "Smith" vs "Smythe," "Jon" vs "John," and other sound-alike variations that string distance alone would miss.&lt;/LI&gt;
&lt;LI data-line="108"&gt;&lt;STRONG&gt;Levenshtein distance&lt;/STRONG&gt;&amp;nbsp;(typo detection): Handles single-character errors — "Johanson" vs "Johansn."&lt;/LI&gt;
&lt;/UL&gt;
&lt;P data-line="110"&gt;These scores are blended, and first name and last name are scored independently before combining. This prevents a matching last name from compensating for a wildly different first name.&lt;/P&gt;
&lt;H3 data-line="112"&gt;Date of Birth — Smarter Than You'd Think&lt;/H3&gt;
&lt;P data-line="114"&gt;DOB matching goes beyond exact comparison. The system detects&amp;nbsp;&lt;STRONG&gt;month/day transposition&lt;/STRONG&gt;&amp;nbsp;— one of the most common data entry errors in healthcare:&lt;/P&gt;
&lt;DIV class="styles_lia-table-wrapper__h6Xo9 styles_table-responsive__MW0lN"&gt;&lt;table border="1" style="border-width: 1px;"&gt;&lt;thead&gt;&lt;tr&gt;&lt;th&gt;Scenario&lt;/th&gt;&lt;th&gt;Score&lt;/th&gt;&lt;/tr&gt;&lt;/thead&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td&gt;Exact match&lt;/td&gt;&lt;td&gt;1.0&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;Month and day swapped (e.g., 03/15 vs 15/03)&lt;/td&gt;&lt;td&gt;0.8&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;Off by 1 day&lt;/td&gt;&lt;td&gt;0.9&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;Off by 2–30 days&lt;/td&gt;&lt;td&gt;0.5–0.8 (scaled)&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;Different year&lt;/td&gt;&lt;td&gt;0.0&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;colgroup&gt;&lt;col style="width: 50.00%" /&gt;&lt;col style="width: 50.00%" /&gt;&lt;/colgroup&gt;&lt;/table&gt;&lt;/DIV&gt;
&lt;P data-line="124"&gt;This alone catches a category of mismatches that pure deterministic systems miss entirely.&lt;/P&gt;
&lt;H3 data-line="126"&gt;Address Similarity&lt;/H3&gt;
&lt;P data-line="128"&gt;Address matching uses a hybrid approach:&lt;/P&gt;
&lt;UL data-line="130"&gt;
&lt;LI data-line="130"&gt;&lt;STRONG&gt;Jaro-Winkler&lt;/STRONG&gt;&amp;nbsp;on the normalized full address (70% weight)&lt;/LI&gt;
&lt;LI data-line="131"&gt;&lt;STRONG&gt;Token-based Jaccard similarity&lt;/STRONG&gt;&amp;nbsp;(30% weight) to handle word reordering&lt;/LI&gt;
&lt;LI data-line="132"&gt;&lt;STRONG&gt;Bonus scoring&lt;/STRONG&gt;&amp;nbsp;for matching postal codes, city, and state&lt;/LI&gt;
&lt;LI data-line="133"&gt;&lt;STRONG&gt;Abbreviation expansion&lt;/STRONG&gt;&amp;nbsp;— "St" becomes "Street," "Ave" becomes "Avenue"&lt;/LI&gt;
&lt;/UL&gt;
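&lt;P&gt;The token-based half of this hybrid is easy to illustrate. A sketch with a deliberately tiny abbreviation dictionary (the production list would be far longer):&lt;/P&gt;

```python
# Illustrative subset; a real normalizer would cover the full USPS abbreviation list.
ABBREVIATIONS = {"st": "street", "ave": "avenue", "apt": "apartment"}

def normalize(address):
    """Lowercase, strip basic punctuation, expand common abbreviations."""
    tokens = address.lower().replace(",", " ").replace(".", " ").split()
    return [ABBREVIATIONS.get(t, t) for t in tokens]

def jaccard(tokens_a, tokens_b):
    """Token-set overlap: tolerant of word reordering that string distance misses."""
    a, b = set(tokens_a), set(tokens_b)
    if not a or not b:
        return 0.0
    return len(a.intersection(b)) / len(a.union(b))
```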
&lt;H2 data-line="137"&gt;5. Layer 3: AI-Enhanced Matching — The Game Changer&lt;/H2&gt;
&lt;P data-line="139"&gt;This is where the architecture diverges from traditional EMPI solutions.&lt;/P&gt;
&lt;H3 data-line="141"&gt;OpenAI Embeddings (Semantic Similarity)&lt;/H3&gt;
&lt;P data-line="143"&gt;The system generates a text embedding for each patient's complete demographic profile using OpenAI's&amp;nbsp;text-embedding-3-small&amp;nbsp;model. Then it computes cosine similarity between patient pairs.&lt;/P&gt;
&lt;P data-line="145"&gt;Why does this work? Because embeddings capture&amp;nbsp;&lt;STRONG&gt;semantic relationships&lt;/STRONG&gt;&amp;nbsp;that string-matching can't. "123 Main Street, Apt 4B, Springfield, IL" and "123 Main St #4B, Springfield, Illinois" are semantically identical even though they differ character-by-character.&lt;/P&gt;
&lt;P data-line="147"&gt;The embedding score carries only 10% of the total weight — it's a signal, not a verdict. But in ambiguous cases, it's the signal that tips the scale.&lt;/P&gt;
&lt;H3 data-line="149"&gt;GPT-5.2 LLM Analysis (Intelligent Reasoning)&lt;/H3&gt;
&lt;P data-line="151"&gt;For matches that land in the human review zone (0.65–0.85), the system optionally invokes GPT-5.2 to analyze the patient pair and provide structured reasoning:&lt;/P&gt;
&lt;P&gt;{ "match_score": 0.92, "confidence": "high", "reasoning": "Multiple strong signals: identical last name, DOB matches exactly, same city. First name 'Jon' is a common nickname for 'Jonathan'.", "name_analysis": "First name variation is a known nickname pattern.", "potential_issues": [], "recommendation": "merge" }&lt;/P&gt;
&lt;P data-line="165"&gt;The LLM doesn't just produce a number — it explains&amp;nbsp;&lt;STRONG&gt;why&lt;/STRONG&gt;&amp;nbsp;it thinks two records match. This is enormously valuable for the human reviewers who make final decisions on ambiguous cases. Instead of staring at two records and guessing, they get AI-generated reasoning they can evaluate.&lt;/P&gt;
&lt;P data-line="167"&gt;When LLM analysis is enabled, the final score blends traditional and LLM scores:&lt;/P&gt;
&lt;P&gt;Final Score = (Traditional Score × 0.8) + (LLM Score × 0.2)&lt;/P&gt;
&lt;P data-line="173"&gt;The LLM temperature is set to 0.1 for consistency — you want deterministic outputs from your matching engine, not creative ones.&lt;/P&gt;
&lt;H2 data-line="177"&gt;6. The Graph Database: Modeling Patient Relationships&lt;/H2&gt;
&lt;P data-line="179"&gt;Records and scores are only half the story. The real power comes from how the system&amp;nbsp;&lt;STRONG&gt;stores and traverses relationships.&lt;/STRONG&gt;&lt;/P&gt;
&lt;P data-line="181"&gt;We use&amp;nbsp;&lt;STRONG&gt;Azure Cosmos DB with the Gremlin API&lt;/STRONG&gt;&amp;nbsp;— a graph database that models patients, identifiers, addresses, and clinical data as vertices connected by typed edges.&lt;/P&gt;
&lt;PRE&gt;(:Patient)──[:HAS_IDENTIFIER]──▶(:Identifier)
    │
    ├──[:HAS_ADDRESS]──▶(:Address)
    ├──[:HAS_CONTACT]──▶(:ContactPoint)
    ├──[:LINKED_TO]──▶(:EmpiRecord)   ← Golden Record
    ├──[:POTENTIAL_MATCH {score, confidence}]──▶(:Patient)
    ├──[:HAS_ENCOUNTER]──▶(:Encounter)
    └──[:HAS_OBSERVATION]──▶(:Observation)&lt;/PRE&gt;
&lt;H3 data-line="198"&gt;Why a Graph?&lt;/H3&gt;
&lt;P data-line="200"&gt;Three reasons:&lt;/P&gt;
&lt;OL data-line="202"&gt;
&lt;LI data-line="202"&gt;&lt;STRONG&gt;Candidate retrieval is a graph traversal problem.&lt;/STRONG&gt;&amp;nbsp;"Find all patients who share an identifier with Patient X" is a natural graph query — traverse from the patient to their identifiers, then back to other patients who share those same identifiers. In Gremlin, this is a few lines. In SQL, it's a multi-table join with performance that degrades as data grows.&lt;/LI&gt;
&lt;LI data-line="204"&gt;&lt;STRONG&gt;Relationships are first-class citizens.&lt;/STRONG&gt;&amp;nbsp;A&amp;nbsp;POTENTIAL_MATCH&amp;nbsp;edge stores the match score, confidence level, and detailed breakdown directly on the relationship. You can query "show me all high-confidence matches" without any joins.&lt;/LI&gt;
&lt;LI data-line="206"&gt;&lt;STRONG&gt;EMPI records are naturally hierarchical.&lt;/STRONG&gt; A golden record (EmpiRecord) links to multiple source patients via&amp;nbsp;LINKED_TO&amp;nbsp;edges. When you merge two patients, you're adding an edge — not rewriting rows in a relational table.&lt;/LI&gt;
&lt;/OL&gt;
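&lt;P&gt;To make the "few lines of Gremlin" claim concrete, here is what the candidate-retrieval traversal could look like as a query string. The edge labels follow the graph model above; the exact script the service sends is an assumption:&lt;/P&gt;

```python
def shared_identifier_query(patient_id):
    """Build a Gremlin script: patient, out to identifiers, back in to every
    other patient sharing them. Labels are taken from the diagram (assumed)."""
    return (
        f"g.V('{patient_id}').as('self')"
        ".out('HAS_IDENTIFIER')"     # hop to this patient's identifier vertices
        ".in('HAS_IDENTIFIER')"      # hop back to all patients sharing them
        ".where(neq('self'))"        # exclude the starting patient
        ".dedup()"
    )
```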
&lt;H3 data-line="208"&gt;Performance at Scale&lt;/H3&gt;
&lt;P data-line="210"&gt;Cosmos DB's partition strategy uses&amp;nbsp;source_system&amp;nbsp;as the partition key, providing logical isolation between healthcare systems. The system handles Azure's 429 rate-limiting with automatic retry and exponential backoff, and uses batch operations for bulk loads to avoid RU exhaustion.&lt;/P&gt;
&lt;H2 data-line="214"&gt;7. FHIR-Native Data Ingestion&lt;/H2&gt;
&lt;P data-line="216"&gt;The system ingests&amp;nbsp;&lt;STRONG&gt;HL7 FHIR R4 Bundles&lt;/STRONG&gt;&amp;nbsp;— the emerging interoperability standard for healthcare data exchange.&lt;/P&gt;
&lt;P data-line="218"&gt;Each FHIR Bundle is a JSON file containing a complete patient record: demographics, encounters, observations, conditions, procedures, immunizations, medication requests, and diagnostic reports.&lt;/P&gt;
&lt;P data-line="220"&gt;The FHIR loader:&lt;/P&gt;
&lt;UL data-line="222"&gt;
&lt;LI data-line="222"&gt;Maps FHIR identifier systems to internal types (SSN, MRN, Enterprise ID)&lt;/LI&gt;
&lt;LI data-line="223"&gt;Handles all three FHIR date formats (YYYY, YYYY-MM, YYYY-MM-DD)&lt;/LI&gt;
&lt;LI data-line="224"&gt;Extracts clinical data for comprehensive patient profiles&lt;/LI&gt;
&lt;LI data-line="225"&gt;Uses an iterator pattern for memory-efficient processing of thousands of patients&lt;/LI&gt;
&lt;LI data-line="226"&gt;Tracks source system provenance for audit compliance&lt;/LI&gt;
&lt;/UL&gt;
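&lt;P&gt;Handling the three date precisions is a small but essential detail. A sketch (defaulting the missing month or day to 1 is a convention chosen here for illustration, not mandated by FHIR):&lt;/P&gt;

```python
from datetime import date

def parse_fhir_date(value):
    """Parse the three FHIR date precisions: YYYY, YYYY-MM, YYYY-MM-DD.
    Missing parts default to 1 (an assumption for this sketch)."""
    parts = [int(p) for p in value.split("-")]
    year = parts[0]
    month = parts[1] if len(parts) >= 2 else 1
    day = parts[2] if len(parts) >= 3 else 1
    return date(year, month, day)
```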
&lt;P data-line="228"&gt;This means the service can ingest data directly from any FHIR-compliant EHR — Epic, Cerner, MEDITECH, or Synthea-generated test data — without custom integration work.&lt;/P&gt;
&lt;H2 data-line="232"&gt;8. The Conversational Agent: Matching via Natural Language&lt;/H2&gt;
&lt;P data-line="234"&gt;Here's where it gets fun.&lt;/P&gt;
&lt;P data-line="236"&gt;The system includes a&amp;nbsp;&lt;STRONG&gt;conversational AI agent&lt;/STRONG&gt; built on the Azure AI Foundry Agent Service. It's deployed as a GPT-5.2-powered agent with OpenAPI tools that call the matching service's REST API.&lt;/P&gt;
&lt;P data-line="238"&gt;Instead of navigating a complex UI to find matches, a data steward can simply ask:&lt;/P&gt;
&lt;P data-line="240"&gt;"Search patients named Aaron"&lt;/P&gt;
&lt;P data-line="242"&gt;"Compare patient abc-123 with patient xyz-456"&lt;/P&gt;
&lt;P data-line="244"&gt;"What matches are pending review?"&lt;/P&gt;
&lt;P data-line="246"&gt;"Approve the match between patient A and patient B"&lt;/P&gt;
&lt;P data-line="248"&gt;The agent is integrated directly into the Streamlit dashboard's Agent Chat tab, so users never leave their workflow. Under the hood, when the agent decides to call a tool (like "search patients"), Azure AI Foundry makes an HTTP request directly to the Container App API — no local function execution required.&lt;/P&gt;
&lt;H3 data-line="250"&gt;Available Agent Tools&lt;/H3&gt;
&lt;DIV class="styles_lia-table-wrapper__h6Xo9 styles_table-responsive__MW0lN"&gt;&lt;table border="1" style="border-width: 1px;"&gt;&lt;thead&gt;&lt;tr&gt;&lt;th&gt;Tool&lt;/th&gt;&lt;th&gt;What It Does&lt;/th&gt;&lt;/tr&gt;&lt;/thead&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td&gt;searchPatients&lt;/td&gt;&lt;td&gt;Search patients by name, DOB, or identifier&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;getPatientDetails&lt;/td&gt;&lt;td&gt;Get detailed patient demographics and history&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;findPatientMatches&lt;/td&gt;&lt;td&gt;Find potential duplicates for a patient&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;compareTwoPatients&lt;/td&gt;&lt;td&gt;Side-by-side comparison with detailed scoring&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;getPendingReviews&lt;/td&gt;&lt;td&gt;List matches awaiting human decision&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;submitReviewDecision&lt;/td&gt;&lt;td&gt;Approve or reject a match&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;getServiceStatistics&lt;/td&gt;&lt;td&gt;MPI dashboard metrics&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;colgroup&gt;&lt;col style="width: 50.00%" /&gt;&lt;col style="width: 50.00%" /&gt;&lt;/colgroup&gt;&lt;/table&gt;&lt;/DIV&gt;
&lt;P data-line="262"&gt;This same tool set is also exposed via a&amp;nbsp;&lt;STRONG&gt;Model Context Protocol (MCP) server&lt;/STRONG&gt;, making the matching engine accessible from AI-powered IDEs and coding assistants.&lt;/P&gt;
&lt;H2 data-line="266"&gt;9. The Dashboard: Putting It All Together&lt;/H2&gt;
&lt;P data-line="268"&gt;The Patient Matching Service includes a full-featured&amp;nbsp;&lt;STRONG&gt;Streamlit dashboard&lt;/STRONG&gt;&amp;nbsp;for operational management.&lt;/P&gt;
&lt;DIV class="styles_lia-table-wrapper__h6Xo9 styles_table-responsive__MW0lN"&gt;&lt;table border="1" style="border-width: 1px;"&gt;&lt;thead&gt;&lt;tr&gt;&lt;th&gt;Page&lt;/th&gt;&lt;th&gt;What You See&lt;/th&gt;&lt;/tr&gt;&lt;/thead&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td&gt;&lt;STRONG&gt;Dashboard&lt;/STRONG&gt;&lt;/td&gt;&lt;td&gt;Key metrics, score distribution charts, recent match activity&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;&lt;STRONG&gt;Match Results&lt;/STRONG&gt;&lt;/td&gt;&lt;td&gt;Filterable list with score breakdowns — deterministic, probabilistic, AI, and LLM tabs&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;&lt;STRONG&gt;Patients&lt;/STRONG&gt;&lt;/td&gt;&lt;td&gt;Browse and search all loaded patients with clinical data&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;&lt;STRONG&gt;Patient Graph&lt;/STRONG&gt;&lt;/td&gt;&lt;td&gt;Interactive graph visualization of patient relationships using streamlit-agraph&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;&lt;STRONG&gt;Review Queue&lt;/STRONG&gt;&lt;/td&gt;&lt;td&gt;Pending matches with approve/reject actions&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;&lt;STRONG&gt;Agent Chat&lt;/STRONG&gt;&lt;/td&gt;&lt;td&gt;Conversational AI for natural language queries&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;&lt;STRONG&gt;Settings&lt;/STRONG&gt;&lt;/td&gt;&lt;td&gt;Configure match weights, thresholds, and display preferences&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;colgroup&gt;&lt;col style="width: 50.00%" /&gt;&lt;col style="width: 50.00%" /&gt;&lt;/colgroup&gt;&lt;/table&gt;&lt;/DIV&gt;
&lt;P data-line="280"&gt;The match detail view provides&amp;nbsp;&lt;STRONG&gt;six tabs&lt;/STRONG&gt;&amp;nbsp;that walk reviewers through every scoring component: Summary, Deterministic, Probabilistic, AI/Embeddings, LLM Analysis, and Raw Data. Reviewers don't just see a number — they see exactly why the system scored a match the way it did.&lt;/P&gt;
&lt;H2 data-line="284"&gt;10. Azure Architecture&lt;/H2&gt;
&lt;P data-line="286"&gt;The full solution runs on Azure:&lt;/P&gt;
&lt;DIV class="styles_lia-table-wrapper__h6Xo9 styles_table-responsive__MW0lN"&gt;&lt;table border="1" style="border-width: 1px;"&gt;&lt;thead&gt;&lt;tr&gt;&lt;th&gt;Service&lt;/th&gt;&lt;th&gt;Role&lt;/th&gt;&lt;/tr&gt;&lt;/thead&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td&gt;&lt;STRONG&gt;Azure Cosmos DB&lt;/STRONG&gt;&amp;nbsp;(Gremlin + NoSQL)&lt;/td&gt;&lt;td&gt;Patient graph storage and match result persistence&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;&lt;STRONG&gt;Azure OpenAI&lt;/STRONG&gt; (GPT-5.2 + text-embedding-3-small)&lt;/td&gt;&lt;td&gt;LLM analysis and semantic embeddings&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;&lt;STRONG&gt;Azure Container Apps&lt;/STRONG&gt;&lt;/td&gt;&lt;td&gt;Hosts the FastAPI REST API&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;&lt;STRONG&gt;Azure AI Foundry Agent Service&lt;/STRONG&gt;&lt;/td&gt;&lt;td&gt;Conversational agent with OpenAPI tools&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;&lt;STRONG&gt;Azure Log Analytics&lt;/STRONG&gt;&lt;/td&gt;&lt;td&gt;Centralized logging and monitoring&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;colgroup&gt;&lt;col style="width: 50.00%" /&gt;&lt;col style="width: 50.00%" /&gt;&lt;/colgroup&gt;&lt;/table&gt;&lt;/DIV&gt;
&lt;P data-line="296"&gt;The separation between Cosmos DB's Gremlin API (graph traversal) and NoSQL API (match result documents) is intentional. Graph queries excel at relationship traversal — "find all patients connected to this identifier." Document queries excel at filtering and aggregation — "show me all auto-merge matches from the last 24 hours."&lt;/P&gt;
&lt;H2 data-line="300"&gt;11. What We Learned&lt;/H2&gt;
&lt;H3 data-line="302"&gt;AI doesn't replace deterministic matching. It augments it.&lt;/H3&gt;
&lt;P data-line="304"&gt;The three-layer approach works because each layer compensates for the others' weaknesses:&lt;/P&gt;
&lt;UL data-line="306"&gt;
&lt;LI data-line="306"&gt;&lt;STRONG&gt;Deterministic&lt;/STRONG&gt;&amp;nbsp;handles the easy cases quickly and with certainty&lt;/LI&gt;
&lt;LI data-line="307"&gt;&lt;STRONG&gt;Probabilistic&lt;/STRONG&gt;&amp;nbsp;catches the typos, nicknames, and formatting differences that exact matching misses&lt;/LI&gt;
&lt;LI data-line="308"&gt;&lt;STRONG&gt;AI&lt;/STRONG&gt;&amp;nbsp;provides semantic understanding and human-readable reasoning for the ambiguous middle ground&lt;/LI&gt;
&lt;/UL&gt;
&lt;H3 data-line="310"&gt;The LLM is most valuable as a reviewer's assistant, not a decision-maker.&lt;/H3&gt;
&lt;P data-line="312"&gt;We deliberately keep the LLM weight at 20% of the final score. Its real value is the structured reasoning it produces — the "why" behind a match score. Human reviewers process cases faster when they have AI-generated analysis explaining the matching signals.&lt;/P&gt;
&lt;H3 data-line="314"&gt;Graph databases are naturally suited for patient identity.&lt;/H3&gt;
&lt;P data-line="316"&gt;Patient matching is fundamentally a relationship problem. "Who shares identifiers with whom?" "Which patients are linked to this golden record?" "Show me the cluster of records that might all be the same person." These are graph traversal queries. Trying to model this in relational tables works, but you're fighting the data model instead of leveraging it.&lt;/P&gt;
&lt;H3 data-line="318"&gt;FHIR interoperability reduces integration friction to near zero.&lt;/H3&gt;
&lt;P data-line="320"&gt;By accepting FHIR R4 Bundles as the input format, the service can ingest data from any modern EHR without custom connectors. This is a massive practical advantage — the hardest part of any EMPI project is usually getting the data in, not matching it.&lt;/P&gt;
&lt;H2 data-line="324"&gt;12. Try It Yourself&lt;/H2&gt;
&lt;P data-line="326"&gt;The Patient Matching Service is built entirely on Azure services and open-source tooling &lt;A class="lia-external-url" href="https://github.com/dondinulos/patient-matching-service" target="_blank" rel="noopener"&gt;https://github.com/dondinulos/patient-matching-service&lt;/A&gt; :&lt;/P&gt;
&lt;UL data-line="328"&gt;
&lt;LI data-line="328"&gt;&lt;STRONG&gt;Python&lt;/STRONG&gt;&amp;nbsp;with FastAPI, Streamlit, and the Azure AI SDKs&lt;/LI&gt;
&lt;LI data-line="329"&gt;&lt;STRONG&gt;Azure Cosmos DB&lt;/STRONG&gt;&amp;nbsp;(Gremlin API) for graph storage&lt;/LI&gt;
&lt;LI data-line="330"&gt;&lt;STRONG&gt;Azure OpenAI&lt;/STRONG&gt;&amp;nbsp;for embeddings and LLM analysis&lt;/LI&gt;
&lt;LI data-line="331"&gt;&lt;STRONG&gt;Azure AI Foundry&lt;/STRONG&gt;&amp;nbsp;for the conversational agent&lt;/LI&gt;
&lt;LI data-line="332"&gt;&lt;STRONG&gt;Azure Container Apps&lt;/STRONG&gt;&amp;nbsp;for deployment&lt;/LI&gt;
&lt;LI data-line="333"&gt;&lt;STRONG&gt;Synthea&lt;/STRONG&gt;&amp;nbsp;for FHIR test data generation&lt;/LI&gt;
&lt;/UL&gt;
&lt;P data-line="335"&gt;The matching algorithms (Jaro-Winkler, Soundex, Metaphone, Levenshtein) use pure Python implementations — no proprietary matching engines required.&lt;/P&gt;
&lt;P data-line="337"&gt;Whether you're building a new EMPI from scratch or augmenting an existing one with AI capabilities, the three-layer approach gives you the best of all worlds: the certainty of deterministic matching, the flexibility of probabilistic scoring, and the intelligence of AI-enhanced analysis.&lt;/P&gt;
&lt;H2 data-line="341"&gt;Final Thoughts&lt;/H2&gt;
&lt;P data-line="343"&gt;Can you use AI to implement an EMPI?&lt;/P&gt;
&lt;P data-line="345"&gt;Yes. And the answer isn't "replace everything with an LLM." It's "use AI where it adds the most value — semantic understanding, natural language reasoning, and augmenting human reviewers — while keeping deterministic and probabilistic matching as the foundation."&lt;/P&gt;
&lt;P data-line="347"&gt;The combination is more accurate than any single approach. The graph database makes relationships queryable. The conversational agent makes the system accessible. And the whole thing runs on Azure with FHIR-native data ingestion.&lt;/P&gt;
&lt;P data-line="349"&gt;Patient matching isn't a solved problem. But with AI in the stack, it's a much more manageable one.&lt;/P&gt;
&lt;P data-line="353"&gt;&lt;EM&gt;Tags: Healthcare, Azure, AI, EMPI, FHIR, Patient Matching, Azure Cosmos DB, Azure OpenAI, Graph Database, Interoperability&lt;/EM&gt;&lt;/P&gt;</description>
      <pubDate>Thu, 05 Mar 2026 05:54:21 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/healthcare-and-life-sciences/can-you-use-ai-to-implement-an-enterprise-master-patient-index/ba-p/4499600</guid>
      <dc:creator>dondinulos</dc:creator>
      <dc:date>2026-03-05T05:54:21Z</dc:date>
    </item>
    <item>
      <title>🚀 The Great Foundry Shift: Microsoft Foundry New vs Classic Explained</title>
      <link>https://techcommunity.microsoft.com/t5/healthcare-and-life-sciences/the-great-foundry-shift-microsoft-foundry-new-vs-classic/ba-p/4499574</link>
      <description>&lt;H1&gt;&lt;STRONG&gt;Introduction&lt;/STRONG&gt;&lt;/H1&gt;
&lt;P&gt;If you've been working in the Azure AI ecosystem, you've likely noticed a seismic shift happening at &lt;STRONG&gt;ai.azure.com&lt;/STRONG&gt;. What was once &lt;STRONG&gt;Azure AI Studio&lt;/STRONG&gt;, then became &lt;STRONG&gt;Azure AI Foundry&lt;/STRONG&gt;, and has now been rebranded as &lt;STRONG&gt;Microsoft Foundry&lt;/STRONG&gt; — but the rebrand is the least interesting part. The architecture, portal experience, agent capabilities, and developer workflows have been fundamentally redesigned.&lt;/P&gt;
&lt;P&gt;Microsoft now ships two portal experiences side by side — Foundry (New) and Foundry (Classic) — accessible via a toggle in the portal banner. But this isn't just a UI facelift. Under the hood, the resource model, project hierarchy, agent framework, tool ecosystem, and governance surface have all changed in meaningful ways.&lt;/P&gt;
&lt;P&gt;This article breaks down every major difference across every layer of the stack so you can make an informed decision about when to migrate, what you'll gain, and what still requires the Classic experience.&lt;/P&gt;
&lt;H1&gt;1. The Branding &amp;amp; Naming Evolution&lt;/H1&gt;
&lt;DIV class="styles_lia-table-wrapper__h6Xo9 styles_table-responsive__MW0lN"&gt;&lt;table&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;&lt;STRONG&gt;Era&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;&lt;STRONG&gt;Name&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;&lt;STRONG&gt;Portal URL&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;Pre-2024&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Azure AI Studio&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;ai.azure.com&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;Mid-2024 to Late 2025&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Azure AI Foundry&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;ai.azure.com&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;2026 (Current)&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Microsoft Foundry&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;ai.azure.com&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;/DIV&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;EM&gt;💡 Key Point: "Azure AI Foundry" is now "Microsoft Foundry." Screenshots across Microsoft Learn documentation are still being updated. Both portals live at the same URL — a toggle in the top banner lets you switch between (New) and (Classic).&lt;/EM&gt;&lt;/P&gt;
&lt;H1&gt;2. Portal Philosophy: When to Use Which&lt;/H1&gt;
&lt;DIV class="styles_lia-table-wrapper__h6Xo9 styles_table-responsive__MW0lN"&gt;&lt;table&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;&lt;STRONG&gt;Portal&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;&lt;STRONG&gt;Best For&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;Foundry (Classic)&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Working with multiple resource types: Azure OpenAI resources, Foundry resources, hub-based projects, and Foundry projects. Use this when you need features not yet available in the new experience (e.g., Prompt Flow, open-source model deployments on managed compute, Azure ML workloads).&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;Foundry (New)&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;A streamlined, agent-first experience for building, managing, and scaling multi-agent applications. Only Foundry projects are visible. Hub-based projects, Azure OpenAI standalone resources, and legacy project types are not shown.&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;/DIV&gt;
&lt;P&gt;The new portal is not a superset of the classic portal — it is a focused reimagining for the agentic AI era. If you need legacy ML capabilities, you still switch back to Classic.&lt;/P&gt;
&lt;H1&gt;3. Resource Architecture &amp;amp; Project Hierarchy&lt;/H1&gt;
&lt;H2&gt;Foundry (Classic): The Hub-Based Model&lt;/H2&gt;
&lt;P&gt;Classic supports two project types:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;Hub-Based Projects —&lt;/STRONG&gt; Built on the &lt;STRONG&gt;Microsoft.MachineLearningServices&lt;/STRONG&gt; resource provider. A "Hub" (Azure AI Hub) acts as the parent resource, and projects are children of that hub. Hubs require provisioning sibling resources: an Azure Storage account and an Azure Key Vault are mandatory, with Azure Container Registry optional.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Foundry Projects —&lt;/STRONG&gt; A newer project type introduced under the &lt;STRONG&gt;Microsoft.CognitiveServices&lt;/STRONG&gt; provider. These are child resources of a Foundry Resource (kind: AIServices).&lt;/LI&gt;
&lt;/UL&gt;
&lt;H2&gt;Foundry (New): One Project Type to Rule Them All&lt;/H2&gt;
&lt;P&gt;The new portal only surfaces Foundry Projects. The resource hierarchy is simplified:&lt;/P&gt;
&lt;P&gt;Foundry Resource (Microsoft.CognitiveServices/account, kind: AIServices)&lt;BR /&gt;&amp;nbsp; └── Foundry Project (Microsoft.CognitiveServices/account/project)&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; └── Project Assets (agents, evaluations, files, indexes)&lt;/P&gt;
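&lt;P&gt;This hierarchy shows up directly in the endpoint your code connects to. The sketch below illustrates how a Foundry project endpoint combines the resource name and project name; the exact URL shape is an assumption here, so always copy the real endpoint from your project's overview page in the portal:&lt;/P&gt;

```python
def foundry_project_endpoint(resource_name: str, project_name: str) -> str:
    # Assumed endpoint pattern for Foundry projects; confirm the actual
    # value in the portal before relying on this in production code.
    return (
        f"https://{resource_name}.services.ai.azure.com"
        f"/api/projects/{project_name}"
    )

print(foundry_project_endpoint("contoso-foundry", "my-agents"))
# → https://contoso-foundry.services.ai.azure.com/api/projects/my-agents
```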
&lt;DIV class="styles_lia-table-wrapper__h6Xo9 styles_table-responsive__MW0lN"&gt;&lt;table&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;&lt;STRONG&gt;Aspect&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;&lt;STRONG&gt;Classic (Hub-Based)&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;&lt;STRONG&gt;New (Foundry Project)&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;Resource Provider&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Microsoft.MachineLearningServices&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Microsoft.CognitiveServices&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;Parent Resource&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;AI Hub&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Foundry Resource (AIServices)&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;Required Sibling Resources&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Storage Account, Key Vault (mandatory)&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;None required by default&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;Project Isolation&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Via Hub RBAC&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Native project-level RBAC&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;Agent Service GA&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Preview only&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;General Availability&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;Foundry SDK &amp;amp; API&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Limited&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Full support&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;ML Training (AutoML, Pipelines)&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Yes&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;No (use hub-based project)&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;Prompt Flow&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Yes&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;No&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;Managed Compute (HuggingFace)&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Yes&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;No&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;/DIV&gt;
&lt;P&gt;&lt;EM&gt;💡 Critical Takeaway: New generative AI and model-centric features are available only through the Foundry Resource and its Foundry projects. Hub-based projects will not receive new agent or model features.&lt;/EM&gt;&lt;/P&gt;
&lt;H1&gt;4. Resource Provider Unification&lt;/H1&gt;
&lt;P&gt;One of the most impactful architectural changes is the consolidation under the Microsoft.CognitiveServices provider namespace:&lt;/P&gt;
&lt;DIV class="styles_lia-table-wrapper__h6Xo9 styles_table-responsive__MW0lN"&gt;&lt;table&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;&lt;STRONG&gt;Resource&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;&lt;STRONG&gt;Provider&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;&lt;STRONG&gt;Kind&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;Microsoft Foundry&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Microsoft.CognitiveServices/account&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;AIServices&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;Foundry Project&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Microsoft.CognitiveServices/account/project&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;AIServices&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;Azure Speech&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Microsoft.CognitiveServices/account&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Speech&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;Azure Language&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Microsoft.CognitiveServices/account&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Language&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;Azure Vision&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Microsoft.CognitiveServices/account&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Vision&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;/DIV&gt;
&lt;P&gt;This means:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;Unified RBAC:&lt;/STRONG&gt; The same Azure RBAC actions work across Foundry, Azure OpenAI, Speech, Vision, and Language.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Unified Azure Policy:&lt;/STRONG&gt; Existing custom Azure Policies continue to apply if you're upgrading from Azure OpenAI to Foundry.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Unified Networking:&lt;/STRONG&gt; Private Link, VNet configuration, and network isolation share the same management patterns.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;In contrast, hub-based projects under Microsoft.MachineLearningServices had a completely separate RBAC model, networking stack, and policy surface.&lt;/P&gt;
&lt;H1&gt;5. Security &amp;amp; Governance: Separation of Concerns&lt;/H1&gt;
&lt;H2&gt;Foundry (New) — Clear Control Plane vs. Data Plane Separation&lt;/H2&gt;
&lt;DIV class="styles_lia-table-wrapper__h6Xo9 styles_table-responsive__MW0lN"&gt;&lt;table&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;&lt;STRONG&gt;Layer&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;&lt;STRONG&gt;Scope&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;&lt;STRONG&gt;Who&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;&lt;STRONG&gt;Examples&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;Control Plane&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Foundry Resource (top-level)&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;IT Admins&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Create deployments, configure networking, manage projects, set encryption&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;Data Plane&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Foundry Project (child)&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Developers&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Build agents, run evaluations, upload files, test in playground&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;/DIV&gt;
&lt;P&gt;This means IT can set up governance once at the resource level, and developers can self-serve by creating projects as isolated workspaces without needing admin intervention.&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Starter RBAC Assignments:&lt;/STRONG&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Azure AI User for each developer at the Foundry Resource scope&lt;/LI&gt;
&lt;LI&gt;Azure AI User for each project's managed identity at the Foundry Resource scope&lt;/LI&gt;
&lt;/UL&gt;
&lt;H2&gt;Foundry (Classic) — Hub-Centric Governance&lt;/H2&gt;
&lt;P&gt;In Classic, hub-based projects relied on the Hub as the governance boundary. This worked, but it required IT to manage the Hub and its dependent resources (Storage, Key Vault). Foundry projects under Classic already used the newer governance model described above, but the portal experience merged them with hub-based projects, which added confusion.&lt;/P&gt;
&lt;H1&gt;6. Agent Service: The Biggest Leap Forward&lt;/H1&gt;
&lt;P&gt;The Foundry Agent Service is arguably the reason Microsoft rebuilt the portal experience. Here's how agent capabilities differ:&lt;/P&gt;
&lt;H2&gt;Foundry (Classic) — Agent Service in Preview&lt;/H2&gt;
&lt;UL&gt;
&lt;LI&gt;Agents available in preview only within hub-based projects&lt;/LI&gt;
&lt;LI&gt;Agents in GA within Foundry projects (accessed through Classic portal)&lt;/LI&gt;
&lt;LI&gt;Single-agent interactions primarily&lt;/LI&gt;
&lt;LI&gt;Limited tool selection&lt;/LI&gt;
&lt;LI&gt;Basic observability&lt;/LI&gt;
&lt;LI&gt;Required connection strings for SDK authentication&lt;/LI&gt;
&lt;/UL&gt;
&lt;H2&gt;Foundry (New) — Agent Service, Fully Realized&lt;/H2&gt;
&lt;H3&gt;a) Multi-Agent Orchestration &amp;amp; Workflows&lt;/H3&gt;
&lt;P&gt;Build advanced automation with the visual workflow builder in the portal, or programmatically with the C# and Python SDKs. Supported patterns include:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;Sequential workflows —&lt;/STRONG&gt; Agent A → Agent B → Agent C in defined order&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Group Chat —&lt;/STRONG&gt; Dynamic control passing between agents based on context&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Human-in-the-loop —&lt;/STRONG&gt; Approval steps, clarifying questions mid-workflow&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Visual YAML editor —&lt;/STRONG&gt; Edit workflows visually or in YAML; changes sync in real-time&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Power Fx integration —&lt;/STRONG&gt; Excel-like formulas for conditional logic, variable handling, data transformations&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Versioning —&lt;/STRONG&gt; Every save creates an immutable version with full history&lt;/LI&gt;
&lt;/UL&gt;
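&lt;P&gt;The sequential and human-in-the-loop patterns above are orchestration concepts rather than anything Foundry-specific. The following is a minimal, self-contained Python sketch of those two patterns; it does not use the Foundry SDK, and the Agent class and function names are illustrative only:&lt;/P&gt;

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Agent:
    # Illustrative stand-in for an LLM-backed agent: a name plus a step function.
    name: str
    act: Callable[[str], str]

def run_sequential(
    agents: list,
    task: str,
    approve: Optional[Callable[[str, str], bool]] = None,
) -> str:
    """Run agents in order (A -> B -> C); an optional approve() callback
    models a human-in-the-loop checkpoint between steps."""
    result = task
    for agent in agents:
        result = agent.act(result)
        if approve is not None and not approve(agent.name, result):
            raise RuntimeError(f"Step '{agent.name}' rejected by reviewer")
    return result

writer = Agent("writer", lambda t: f"draft({t})")
editor = Agent("editor", lambda t: f"edited({t})")
publisher = Agent("publisher", lambda t: f"published({t})")

print(run_sequential([writer, editor, publisher], "blog post"))
# → published(edited(draft(blog post)))
```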
&lt;H3&gt;b) Agent Types&lt;/H3&gt;
&lt;DIV class="styles_lia-table-wrapper__h6Xo9 styles_table-responsive__MW0lN"&gt;&lt;table&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;&lt;STRONG&gt;Type&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;&lt;STRONG&gt;Kind&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;&lt;STRONG&gt;Description&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;Prompt Agent&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;prompt&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;LLM-backed agent defined declaratively with model config, instructions, tools, and prompts&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;Hosted Agent&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;hosted&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Containerized agent running custom code, deployed and managed by Foundry&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;Workflow&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;YAML-based&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Orchestrates multiple agents together using agentic patterns&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;/DIV&gt;
&lt;H3&gt;c) Memory (Preview — New Only)&lt;/H3&gt;
&lt;P&gt;Long-term agent memory is a brand-new capability:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;User Profile Memory:&lt;/STRONG&gt; Stores preferences, dietary restrictions, language preferences — persists across sessions&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Chat Summary Memory:&lt;/STRONG&gt; Distilled summaries of conversation topics for cross-session continuity&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Three Phases:&lt;/STRONG&gt; Extraction → Consolidation → Retrieval&lt;/LI&gt;
&lt;LI&gt;Memory search tool or low-level Memory Store APIs&lt;/LI&gt;
&lt;LI&gt;Supports up to 10,000 memories per scope and 100 scopes per store&lt;/LI&gt;
&lt;/UL&gt;
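&lt;P&gt;To make the three phases concrete, here is a toy Python sketch of an extraction → consolidation → retrieval loop. It is purely conceptual: the real service uses LLM-based extraction and its own Memory Store APIs, and the class and method names below are invented for illustration:&lt;/P&gt;

```python
class MemoryStore:
    """Toy model of the extraction -> consolidation -> retrieval phases."""

    def __init__(self, max_per_scope: int = 10_000):
        self.scopes: dict = {}
        self.max_per_scope = max_per_scope  # mirrors the 10,000-per-scope limit

    def extract(self, message: str) -> list:
        # Stand-in for LLM extraction: keep "I prefer ..." style statements.
        return [s for s in message.split(". ") if s.lower().startswith("i prefer")]

    def consolidate(self, scope: str, facts: list) -> None:
        # Deduplicate and cap; a real consolidator would also merge updates.
        memories = self.scopes.setdefault(scope, [])
        for fact in facts:
            if fact not in memories and len(memories) < self.max_per_scope:
                memories.append(fact)

    def retrieve(self, scope: str, query: str) -> list:
        return [m for m in self.scopes.get(scope, []) if query.lower() in m.lower()]

store = MemoryStore()
store.consolidate("user-1", store.extract("Hi. I prefer window seats. Thanks"))
print(store.retrieve("user-1", "window"))
# → ['I prefer window seats']
```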
&lt;H3&gt;d) Foundry IQ — Knowledge Integration (Preview — New Only)&lt;/H3&gt;
&lt;P&gt;A managed, multi-source knowledge base for enterprise content:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Connects to Azure Blob Storage, SharePoint, OneLake, and public web&lt;/LI&gt;
&lt;LI&gt;Automated document chunking, vector embedding generation, metadata extraction&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Agentic retrieval engine:&lt;/STRONG&gt; Decomposes complex questions into subqueries, executes in parallel, semantically reranks, returns unified responses with citations&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Permission-aware:&lt;/STRONG&gt; Synchronizes ACLs, honors Microsoft Purview sensitivity labels, enforces permissions at query time&lt;/LI&gt;
&lt;LI&gt;One knowledge base can serve multiple agents&lt;/LI&gt;
&lt;/UL&gt;
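&lt;P&gt;The agentic retrieval flow (decompose → parallel subqueries → rerank → unified answer with citations) can be sketched in plain Python. This is a conceptual toy, not the Foundry IQ API: the corpus, the keyword scoring, and the decomposition by splitting on "and" are all stand-ins for what the managed engine does with an LLM and vector search:&lt;/P&gt;

```python
from concurrent.futures import ThreadPoolExecutor

# Toy corpus standing in for indexed enterprise content (illustrative only).
CORPUS = {
    "vacation policy": "Employees accrue 20 days of PTO per year.",
    "parental leave": "Parental leave is 16 weeks, fully paid.",
    "pto carryover": "Up to 5 unused PTO days carry over each January.",
}

def decompose(question: str) -> list:
    # The real engine uses an LLM; splitting on ' and ' is a crude stand-in.
    return [q.strip() for q in question.split(" and ")]

def search(subquery: str) -> list:
    # Naive keyword scoring standing in for vector search + semantic reranking.
    hits = []
    for source, text in CORPUS.items():
        score = sum(w in source or w in text.lower() for w in subquery.lower().split())
        if score:
            hits.append((score, source, text))
    return hits

def answer(question: str) -> list:
    subqueries = decompose(question)
    with ThreadPoolExecutor() as pool:  # subqueries execute in parallel
        hit_lists = list(pool.map(search, subqueries))
    merged = sorted((h for hits in hit_lists for h in hits), reverse=True)
    seen, passages = set(), []
    for _, source, text in merged:  # deduplicate, keep citations
        if source not in seen:
            seen.add(source)
            passages.append(f"{text} [source: {source}]")
    return passages

for passage in answer("pto carryover and parental leave"):
    print(passage)
```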
&lt;H3&gt;e) Expanded Tool Catalog (Preview — New Only)&lt;/H3&gt;
&lt;P&gt;The new portal introduces a Foundry Tool Catalog with 1,400+ tools:&lt;/P&gt;
&lt;DIV class="styles_lia-table-wrapper__h6Xo9 styles_table-responsive__MW0lN"&gt;&lt;table&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;&lt;STRONG&gt;Tool Category&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;&lt;STRONG&gt;Examples&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;Built-in&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Azure AI Search, Code Interpreter, File Search, Grounding with Bing, Image Generation, Computer Use, SharePoint, Microsoft Fabric, Browser Automation, Web Search&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;MCP Servers (Remote)&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Publisher-hosted servers using Model Context Protocol&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;MCP Servers (Local)&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Self-hosted MCP servers connected to Foundry&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;Custom&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;OpenAPI 3.0 specs, Agent-to-Agent (A2A) endpoints, custom MCP endpoints&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;Private Catalog&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Organization-scoped tools visible only to your team&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;/DIV&gt;
&lt;P&gt;Classic had a much smaller tool surface: primarily Azure AI Search, File Search, Code Interpreter, and custom functions.&lt;/P&gt;
&lt;H3&gt;f) Integration &amp;amp; Publishing Capabilities&lt;/H3&gt;
&lt;P&gt;The new portal supports:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Publish agents to Microsoft 365, Teams, and BizChat&lt;/LI&gt;
&lt;LI&gt;Containerized deployments for portability&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Open protocol support:&lt;/STRONG&gt; MCP and A2A with full authentication&lt;/LI&gt;
&lt;LI&gt;AI Gateway integration (Azure API Management)&lt;/LI&gt;
&lt;LI&gt;Azure Policy integration for agent governance&lt;/LI&gt;
&lt;/UL&gt;
&lt;H1&gt;7. Model Deployment: What Changed?&lt;/H1&gt;
&lt;P&gt;The model deployment story is shared across both portals, but the new portal streamlines the experience significantly.&lt;/P&gt;
&lt;H2&gt;Deployment Types (Available in Both)&lt;/H2&gt;
&lt;DIV class="styles_lia-table-wrapper__h6Xo9 styles_table-responsive__MW0lN"&gt;&lt;table&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;&lt;STRONG&gt;Deployment Type&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;&lt;STRONG&gt;SKU&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;&lt;STRONG&gt;Data Processing&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;&lt;STRONG&gt;Billing&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;Global Standard&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;GlobalStandard&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Any Azure region&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Pay-per-token&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;Global Provisioned&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;GlobalProvisionedManaged&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Any Azure region&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Reserved PTU&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;Global Batch&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;GlobalBatch&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Any Azure region&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;50% discount, 24-hr target&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;Data Zone Standard&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;DataZoneStandard&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Within data zone (EU/US)&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Pay-per-token&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;Data Zone Provisioned&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;DataZoneProvisionedManaged&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Within data zone&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Reserved PTU&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;Data Zone Batch&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;DataZoneBatch&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Within data zone&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;50% discount&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;Standard (Regional)&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Standard&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Single region&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Pay-per-token&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;Regional Provisioned&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;ProvisionedManaged&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Single region&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Reserved PTU&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;Developer&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;DeveloperTier&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Any Azure region&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Fine-tuned eval only, no SLA&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;/DIV&gt;
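&lt;P&gt;When creating deployments through templates or SDKs, the SKU strings from the table above are what you actually specify. A small illustrative helper follows; the mapping logic and function name are invented here, while the SKU names themselves come from the table:&lt;/P&gt;

```python
def pick_deployment_sku(residency: str, billing: str) -> str:
    """Map a data-residency requirement ('global', 'datazone', 'regional')
    and billing model ('paygo', 'ptu', 'batch') to a deployment SKU name."""
    skus = {
        ("global", "paygo"): "GlobalStandard",
        ("global", "ptu"): "GlobalProvisionedManaged",
        ("global", "batch"): "GlobalBatch",
        ("datazone", "paygo"): "DataZoneStandard",
        ("datazone", "ptu"): "DataZoneProvisionedManaged",
        ("datazone", "batch"): "DataZoneBatch",
        ("regional", "paygo"): "Standard",
        ("regional", "ptu"): "ProvisionedManaged",
    }
    return skus[(residency, billing)]

print(pick_deployment_sku("datazone", "ptu"))
# → DataZoneProvisionedManaged
```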
&lt;H2&gt;Key Differences by Portal&lt;/H2&gt;
&lt;DIV class="styles_lia-table-wrapper__h6Xo9 styles_table-responsive__MW0lN"&gt;&lt;table&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;&lt;STRONG&gt;Capability&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;&lt;STRONG&gt;Classic&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;&lt;STRONG&gt;New&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;Models sold directly by Azure (Azure OpenAI, DeepSeek, xAI)&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Via connections in hub-based; native in Foundry projects&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Native&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;Partner/Community Models (via Marketplace)&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Via connections in hub-based; native in Foundry projects&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Native&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;Models on Managed Compute (HuggingFace etc.)&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Hub-based projects only&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Not supported&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;Serverless API Endpoints&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Hub-based projects&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Standard deployment only&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;Model Catalog Browsing&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Available without sign-in&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Requires project context&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;/DIV&gt;
&lt;H1&gt;8. Observability &amp;amp; Monitoring&lt;/H1&gt;
&lt;DIV class="styles_lia-table-wrapper__h6Xo9 styles_table-responsive__MW0lN"&gt;&lt;table&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;&lt;STRONG&gt;Capability&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;&lt;STRONG&gt;Classic&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;&lt;STRONG&gt;New&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;Azure Monitor metrics&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Scoped to resource level&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Scoped to resource + project level&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;Application Insights integration&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Manual setup&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Built-in for Agent Service&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;Conversation-level tracing&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;SDK-based (manual)&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Real-time in portal with built-in metrics&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;Evaluation workflows&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Available (preview)&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Available with continuous evaluation via Python SDK&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;Agent monitoring dashboard&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Not available&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Built-in "Operate" section&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;Model tracking&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Basic&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Enhanced with centralized AI asset management&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;/DIV&gt;
&lt;P&gt;The new portal introduces a dedicated "Operate" section for centralized AI asset management — agents, models, and tools in one place. You can register agents from other clouds, get alerts when agents or models need attention, and manage fleet health at scale.&lt;/P&gt;
&lt;H1&gt;9. SDK &amp;amp; API Experience&lt;/H1&gt;
&lt;DIV class="styles_lia-table-wrapper__h6Xo9 styles_table-responsive__MW0lN"&gt;&lt;table&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;&lt;STRONG&gt;Aspect&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;&lt;STRONG&gt;Classic&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;&lt;STRONG&gt;New&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;Authentication&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Connection strings&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Project endpoint + DefaultAzureCredential&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;SDK&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;azure-ai-projects (preview)&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;azure-ai-projects (GA), unified Foundry SDK&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;Languages&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Python, C# (GA); JS/TS, Java (preview)&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Same&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;API Surface&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Foundry API (limited for hub-based)&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Full Foundry API — agents, evaluations, models, indexes, data&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;VS Code Extension&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Available&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Available (enhanced)&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;/DIV&gt;
&lt;H2&gt;Migration Example&lt;/H2&gt;
&lt;P&gt;&lt;STRONG&gt;Classic (Hub-Based, Preview SDK):&lt;/STRONG&gt;&lt;/P&gt;
&lt;BLOCKQUOTE&gt;
&lt;P&gt;# Classic used connection strings (preview SDK)&lt;BR /&gt;from azure.identity import DefaultAzureCredential&lt;BR /&gt;from azure.ai.projects import AIProjectClient&lt;BR /&gt;&lt;BR /&gt;client = AIProjectClient.from_connection_string(&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp; conn_str="your_connection_string",&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp; credential=DefaultAzureCredential()&lt;BR /&gt;)&lt;/P&gt;
&lt;/BLOCKQUOTE&gt;
&lt;P&gt;&lt;STRONG&gt;New (Foundry Project, GA SDK):&lt;/STRONG&gt;&lt;/P&gt;
&lt;BLOCKQUOTE&gt;
&lt;P&gt;from azure.identity import DefaultAzureCredential&lt;BR /&gt;from azure.ai.projects import AIProjectClient&lt;BR /&gt;&lt;BR /&gt;project = AIProjectClient(&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp; endpoint="your_project_endpoint",&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp; credential=DefaultAzureCredential()&lt;BR /&gt;)&lt;/P&gt;
&lt;/BLOCKQUOTE&gt;
&lt;H1&gt;10. Data Storage &amp;amp; Encryption&lt;/H1&gt;
&lt;DIV class="styles_lia-table-wrapper__h6Xo9 styles_table-responsive__MW0lN"&gt;&lt;table&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;&lt;STRONG&gt;Feature&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;&lt;STRONG&gt;Classic&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;&lt;STRONG&gt;New&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;Default Storage&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Microsoft-managed (logical separation)&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Microsoft-managed (logical separation)&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;Bring Your Own Storage&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Supported&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Supported&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;BYOS for Agent State&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Standard setup available&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Standard setup available&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;Customer-Managed Key Encryption&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Supported (FIPS 140-2, 256-bit AES)&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Supported (same)&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;Bring Your Own Key Vault&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Supported&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Supported&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;BCDR for Agents&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Customer-provisioned Cosmos DB&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Customer-provisioned Cosmos DB&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;/DIV&gt;
&lt;P&gt;The storage layer is largely unchanged between portals — both support the same bring-your-own patterns for compliance.&lt;/P&gt;
&lt;H1&gt;11. Networking &amp;amp; Private Access&lt;/H1&gt;
&lt;DIV class="styles_lia-table-wrapper__h6Xo9 styles_table-responsive__MW0lN"&gt;&lt;table&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;&lt;STRONG&gt;Feature&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;&lt;STRONG&gt;Classic&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;&lt;STRONG&gt;New&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;Private Link&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Fully supported&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Supported (some limitations for end-to-end isolation)&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;VNet Integration (Container Injection)&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Supported&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Supported&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;End-to-End Network Isolation&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Fully supported (SDK, CLI, Portal)&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Partially supported — use Classic, SDK, or CLI for fully isolated deployments&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;/DIV&gt;
&lt;P&gt;&lt;EM&gt;💡 Important: If you require end-to-end network isolation in production, Microsoft currently recommends using the Classic experience, SDK, or CLI until the new portal reaches full parity.&lt;/EM&gt;&lt;/P&gt;
&lt;H1&gt;12. Navigation &amp;amp; UX Differences&lt;/H1&gt;
&lt;H2&gt;Classic Portal&lt;/H2&gt;
&lt;UL&gt;
&lt;LI&gt;Left pane navigation organized by development stages: Define &amp;amp; Explore → Build &amp;amp; Customize → Observe &amp;amp; Improve&lt;/LI&gt;
&lt;LI&gt;Customizable left pane per project, per user (pin/unpin items)&lt;/LI&gt;
&lt;LI&gt;Management Center — centralized hub for projects, quotas, permissions, usage metrics&lt;/LI&gt;
&lt;LI&gt;Breadcrumb navigation showing project type (Hub vs. Foundry)&lt;/LI&gt;
&lt;LI&gt;Supports browsing model catalog without signing in&lt;/LI&gt;
&lt;/UL&gt;
&lt;H2&gt;New Portal&lt;/H2&gt;
&lt;UL&gt;
&lt;LI&gt;Top menu bar with Build/Operate sections&lt;/LI&gt;
&lt;LI&gt;Project selector in upper-left corner — switch between recently used projects&lt;/LI&gt;
&lt;LI&gt;"Operate" section for centralized AI asset management (agents, models, tools)&lt;/LI&gt;
&lt;LI&gt;Streamlined navigation with redesigned interface&lt;/LI&gt;
&lt;LI&gt;Faster load times with dynamic prefetching&lt;/LI&gt;
&lt;LI&gt;Only shows default project per Foundry resource; "View all resources" opens Classic portal&lt;/LI&gt;
&lt;/UL&gt;
&lt;H1&gt;13. Foundry Local — Runs Everywhere&lt;/H1&gt;
&lt;P&gt;A capability that works across both experiences: Foundry Local lets you run LLMs on your own device for free. It integrates with the inference SDKs, supports compiling HuggingFace models for local execution, and provides a fast local development loop.&lt;/P&gt;
&lt;H1&gt;14. Feature Availability Matrix (Comprehensive)&lt;/H1&gt;
&lt;DIV class="styles_lia-table-wrapper__h6Xo9 styles_table-responsive__MW0lN"&gt;&lt;table&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;&lt;STRONG&gt;Feature&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;&lt;STRONG&gt;Classic — Hub-Based&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;&lt;STRONG&gt;Classic — Foundry Project&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;&lt;STRONG&gt;New — Foundry Project&lt;/STRONG&gt;&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;Agents (GA)&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Preview only&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;GA&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;GA&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;Multi-Agent Workflows&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;No&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;No&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Yes&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;Memory (Long-Term)&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;No&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;No&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Yes (Preview)&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;Foundry IQ Knowledge Base&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;No&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;No&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Yes (Preview)&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;Tool Catalog (1,400+ tools)&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;No&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;No&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Yes (Preview)&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;MCP &amp;amp; A2A Protocol Support&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;No&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;No&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Yes&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;Publish to M365/Teams/BizChat&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;No&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;No&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Yes&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;Centralized AI Asset Mgmt&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;No&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;No&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Yes&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;Models (Azure OpenAI, etc.)&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Via connections&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Native&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Native&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;Models on Managed Compute&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Yes&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;No&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;No&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;Prompt Flow&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Yes&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;No&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;No&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;AutoML / ML Pipelines&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Yes&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;No&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;No&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;Evaluations&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Yes&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Yes (Preview)&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Yes (Preview)&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;Playgrounds&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Yes&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Yes&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Yes&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;Content Understanding&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Yes&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Yes&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Yes&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;Fine-Tuning&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Yes&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Yes&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Yes&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;Datasets &amp;amp; Indexes&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Yes&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Yes&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Yes&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;Full Foundry SDK &amp;amp; API&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Limited&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Full&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Full&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;E2E Network Isolation&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Full&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Full&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Partial&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;RBAC (Resource + Project)&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Hub-level&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Resource + Project&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Resource + Project&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;Azure Policy Integration&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Yes&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Yes&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;Enhanced&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;
&lt;P&gt;Disable Preview Features&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;RBAC or Tags&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;RBAC or Tags&lt;/P&gt;
&lt;/td&gt;&lt;td&gt;
&lt;P&gt;RBAC or Tags&lt;/P&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;/DIV&gt;
&lt;H1&gt;15. Migration Path: Classic → New&lt;/H1&gt;
&lt;P&gt;Microsoft provides a clear migration guide:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;Step 1:&lt;/STRONG&gt; Locate your existing Foundry resource (the resource of kind &lt;STRONG&gt;AIServices&lt;/STRONG&gt; that was created alongside your hub)&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Step 2:&lt;/STRONG&gt; Create a new Foundry project under that resource&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Step 3:&lt;/STRONG&gt; What transfers automatically: Model deployments, data files, fine-tuned models, assistants, vector stores&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Step 4:&lt;/STRONG&gt; What doesn't transfer: Preview agent state (threads, messages, files), open-source model deployments, hub project access&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Step 5:&lt;/STRONG&gt; Update SDK code — replace connection strings with project endpoints&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Step 6:&lt;/STRONG&gt; Optionally recreate connections for tools and data sources&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Step 7:&lt;/STRONG&gt; Optionally clean up hub-based projects (keep them if you still need ML training or Prompt Flow)&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&lt;STRONG&gt;Estimated migration time: &lt;/STRONG&gt;5–10 minutes for project creation; additional time for agent code migration depending on complexity.&lt;/P&gt;
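&lt;P&gt;Step 5 is usually the only code change. The sketch below is a hedged illustration, assuming the &lt;STRONG&gt;azure-ai-projects&lt;/STRONG&gt; Python SDK and the commonly documented project-endpoint shape; copy the real endpoint from your project's Overview page in the portal rather than constructing it by hand.&lt;/P&gt;

```python
# Hedged sketch: moving from a classic connection string to a Foundry
# project endpoint. The endpoint shape below is an assumption; the portal's
# project Overview page shows the authoritative value.

def project_endpoint(resource_name: str, project_name: str) -> str:
    """Build a Foundry project endpoint from resource and project names."""
    return (
        f"https://{resource_name}.services.ai.azure.com"
        f"/api/projects/{project_name}"
    )

# Before (hub-based project):
#   client = AIProjectClient.from_connection_string(
#       conn_str=connection_string, credential=DefaultAzureCredential())
#
# After (Foundry project):
#   client = AIProjectClient(
#       endpoint=project_endpoint("my-foundry-resource", "my-project"),
#       credential=DefaultAzureCredential())
```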
&lt;H1&gt;16. When Should You Move to Foundry (New)?&lt;/H1&gt;
&lt;H2&gt;Move now if:&lt;/H2&gt;
&lt;UL&gt;
&lt;LI&gt;You're building agentic applications and want GA Agent Service&lt;/LI&gt;
&lt;LI&gt;You need multi-agent orchestration (workflows, sequential, group chat)&lt;/LI&gt;
&lt;LI&gt;You want the Tool Catalog, Memory, or Foundry IQ&lt;/LI&gt;
&lt;LI&gt;You want to publish agents to Microsoft 365/Teams&lt;/LI&gt;
&lt;LI&gt;You're starting greenfield and want simplified governance&lt;/LI&gt;
&lt;/UL&gt;
&lt;H2&gt;Stay on Classic (for now) if:&lt;/H2&gt;
&lt;UL&gt;
&lt;LI&gt;You depend on Prompt Flow for orchestration&lt;/LI&gt;
&lt;LI&gt;You deploy open-source models on managed compute (HuggingFace, etc.)&lt;/LI&gt;
&lt;LI&gt;You need Azure Machine Learning features (AutoML, ML Pipelines, training)&lt;/LI&gt;
&lt;LI&gt;You require fully isolated end-to-end networking in the portal (not just SDK/CLI)&lt;/LI&gt;
&lt;LI&gt;You have extensive hub-based project investments you're not ready to migrate&lt;/LI&gt;
&lt;/UL&gt;
&lt;H1&gt;Final Thoughts&lt;/H1&gt;
&lt;P&gt;Microsoft Foundry (New) isn't just a portal redesign — it's a platform pivot from "AI Studio for building chatbots" to "the enterprise AI agent factory." The introduction of multi-agent workflows, long-term memory, Foundry IQ knowledge bases, a 1,400+ tool catalog with MCP/A2A support, and centralized fleet management represents a generational leap.&lt;/P&gt;
&lt;P&gt;But it's also an honest work-in-progress. Network isolation parity, Prompt Flow, and managed compute for open-source models are still reasons to keep the Classic experience bookmarked. The good news is that both portals coexist at the same URL, and switching between them is a single toggle.&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;The direction is clear: Foundry (New) is the future. Start building there. Fall back to Classic only when you must.&lt;/STRONG&gt;&lt;/P&gt;
&lt;H1&gt;Useful Links&lt;/H1&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;What is Microsoft Foundry?: &lt;A class="lia-external-url" href="https://learn.microsoft.com/en-us/azure/foundry/what-is-foundry" target="_blank"&gt;https://learn.microsoft.com/en-us/azure/foundry/what-is-foundry&lt;/A&gt;&lt;/STRONG&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Microsoft Foundry Architecture: &lt;A class="lia-external-url" href="https://learn.microsoft.com/en-us/azure/foundry/concepts/architecture" target="_blank"&gt;https://learn.microsoft.com/en-us/azure/foundry/concepts/architecture&lt;/A&gt;&lt;/STRONG&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Foundry Agent Service Overview: &lt;A class="lia-external-url" href="https://learn.microsoft.com/en-us/azure/foundry/agents/overview" target="_blank"&gt;https://learn.microsoft.com/en-us/azure/foundry/agents/overview&lt;/A&gt;&lt;/STRONG&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Migrate from Hub-Based to Foundry Projects: &lt;A class="lia-external-url" href="https://learn.microsoft.com/en-us/azure/foundry-classic/how-to/migrate-project" target="_blank"&gt;https://learn.microsoft.com/en-us/azure/foundry-classic/how-to/migrate-project&lt;/A&gt;&lt;/STRONG&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Foundry Tool Catalog: &lt;A class="lia-external-url" href="https://learn.microsoft.com/en-us/azure/foundry/agents/concepts/tool-catalog" target="_blank"&gt;https://learn.microsoft.com/en-us/azure/foundry/agents/concepts/tool-catalog&lt;/A&gt;&lt;/STRONG&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Foundry IQ Knowledge Base: &lt;A class="lia-external-url" href="https://learn.microsoft.com/en-us/azure/foundry/agents/concepts/what-is-foundry-iq" target="_blank"&gt;https://learn.microsoft.com/en-us/azure/foundry/agents/concepts/what-is-foundry-iq&lt;/A&gt;&lt;/STRONG&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Agent Memory: &lt;A class="lia-external-url" href="https://learn.microsoft.com/en-us/azure/foundry/agents/concepts/what-is-memory" target="_blank"&gt;https://learn.microsoft.com/en-us/azure/foundry/agents/concepts/what-is-memory&lt;/A&gt;&lt;/STRONG&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Workflow Builder: &lt;A class="lia-external-url" href="https://learn.microsoft.com/en-us/azure/foundry/agents/concepts/workflow" target="_blank"&gt;https://learn.microsoft.com/en-us/azure/foundry/agents/concepts/workflow&lt;/A&gt;&lt;/STRONG&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Deployment Types: &lt;A class="lia-external-url" href="https://learn.microsoft.com/en-us/azure/foundry/foundry-models/concepts/deployment-types" target="_blank"&gt;https://learn.microsoft.com/en-us/azure/foundry/foundry-models/concepts/deployment-types&lt;/A&gt;&lt;/STRONG&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Microsoft Foundry Portal: &lt;A class="lia-external-url" href="https://ai.azure.com" target="_blank"&gt;https://ai.azure.com&lt;/A&gt;&lt;/STRONG&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&lt;EM style="color: rgb(30, 30, 30);"&gt;&lt;STRONG&gt;Note:&lt;/STRONG&gt; All information sourced from Microsoft Learn documentation as of March 2026.&amp;nbsp;Feature availability may change as Microsoft continues updating both portal experiences.&lt;/EM&gt;&lt;/P&gt;</description>
      <pubDate>Thu, 05 Mar 2026 00:16:49 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/healthcare-and-life-sciences/the-great-foundry-shift-microsoft-foundry-new-vs-classic/ba-p/4499574</guid>
      <dc:creator>VinodSoni</dc:creator>
      <dc:date>2026-03-05T00:16:49Z</dc:date>
    </item>
    <item>
      <title>Claude in Copilot? Excel on Steroids!!</title>
      <link>https://techcommunity.microsoft.com/t5/healthcare-and-life-sciences/claude-in-copilot-excel-on-steroids/ba-p/4498176</link>
      <description>&lt;P&gt;If you’ve been paying attention to Claude lately, you’ve probably noticed one thing.&lt;/P&gt;
&lt;P&gt;It’s really good at Excel.&lt;/P&gt;
&lt;P&gt;Not formulas.&lt;BR /&gt;Not basic charts.&lt;/P&gt;
&lt;P&gt;I mean understanding structured data. Reasoning across multiple sheets. Spotting correlations. Pulling out insights. The kind of work that usually takes a data analyst or data scientist a lot of time.&lt;/P&gt;
&lt;P&gt;Here’s the part most people miss.&lt;/P&gt;
&lt;P&gt;You already get that capability inside Microsoft 365 Copilot.&lt;/P&gt;
&lt;P&gt;Let me show you what that actually looks like in the real world.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;div data-video-id="https://youtu.be/zkaRUDSDBwk?si=LWtxw-wc8NcyJtne/1772312539193" data-video-remote-vid="https://youtu.be/zkaRUDSDBwk?si=LWtxw-wc8NcyJtne/1772312539193" class="lia-video-container lia-media-is-center lia-media-size-large"&gt;&lt;iframe src="https://cdn.embedly.com/widgets/media.html?src=https%3A%2F%2Fwww.youtube.com%2Fembed%2FzkaRUDSDBwk%3Ffeature%3Doembed&amp;amp;display_name=YouTube&amp;amp;url=https%3A%2F%2Fwww.youtube.com%2Fwatch%3Fv%3DzkaRUDSDBwk&amp;amp;image=https%3A%2F%2Fi.ytimg.com%2Fvi%2FzkaRUDSDBwk%2Fhqdefault.jpg&amp;amp;type=text%2Fhtml&amp;amp;schema=youtube" allowfullscreen="" style="max-width: 100%"&gt;&lt;/iframe&gt;&lt;/div&gt;
&lt;P&gt;&lt;STRONG&gt;If you want to rebuild it yourself or follow along, I’ve published everything on GitHub:&lt;/STRONG&gt;&lt;BR /&gt;- The mock Excel data&lt;BR /&gt;- All of the analyst and executive prompts&lt;BR /&gt;- Step‑by‑step flow used in the demo&lt;/P&gt;
&lt;P&gt;&lt;A class="lia-external-url" href="https://github.com/heyitsgoad/copilot-playground/blob/main/education%20playground/claude%20in%20copilot%20with%20agent%20mode%20for%20excel.md" target="_blank" rel="noopener"&gt;Github Resources&lt;/A&gt;&lt;/P&gt;
&lt;H2&gt;The scenario: a very real executive ask&lt;/H2&gt;
&lt;P&gt;This is how the work showed up for me.&lt;/P&gt;
&lt;P&gt;In this example scenario &lt;EM&gt;(all data is built using Copilot for demonstration purposes),&lt;/EM&gt; an executive leader reached out and said:&lt;/P&gt;
&lt;P&gt;“My CFO wants a Q2 financial summary of paramedic overtime across multiple counties.”&lt;/P&gt;
&lt;P&gt;What I got back was raw data.&lt;/P&gt;
&lt;P&gt;Messy.&lt;BR /&gt;Unstructured.&lt;BR /&gt;No story.&lt;/P&gt;
&lt;P&gt;Normally, this turns into hours of work:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Cleaning data&lt;/LI&gt;
&lt;LI&gt;Building formulas&lt;/LI&gt;
&lt;LI&gt;Creating pivot tables&lt;/LI&gt;
&lt;LI&gt;Designing dashboards&lt;/LI&gt;
&lt;LI&gt;Writing an executive summary&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;Instead, I used Copilot in Excel with Agent Mode.&lt;/P&gt;
&lt;P&gt;Six prompts.&lt;/P&gt;
&lt;P&gt;That’s it.&lt;/P&gt;
&lt;H2&gt;Why Agent Mode matters&lt;/H2&gt;
&lt;P&gt;Copilot in Excel with Agent Mode is where Copilot goes from “helpful assistant” to something much closer to a reasoning partner.&lt;/P&gt;
&lt;P&gt;Under the hood, it’s doing things like:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Reasoning across multiple sheets&lt;/LI&gt;
&lt;LI&gt;Writing and running Python&lt;/LI&gt;
&lt;LI&gt;Catching its own errors and fixing them&lt;/LI&gt;
&lt;LI&gt;Structuring data so it can be reused for dashboards and reports&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;This is also where you might notice model selection.&lt;/P&gt;
&lt;P&gt;If your IT admin has enabled Anthropic, you’ll see Claude as an option. If you don’t, that’s an admin setting, not a missing feature.&lt;/P&gt;
&lt;P&gt;And that matters, because this is exactly the type of work Claude has been getting attention for.&lt;/P&gt;
&lt;H2&gt;From raw data to executive‑ready in minutes&lt;/H2&gt;
&lt;P&gt;Here’s what I built using natural language.&lt;/P&gt;
&lt;H3&gt;Data framing&lt;/H3&gt;
&lt;P&gt;I asked Copilot to take raw overtime data and turn it into structured tables that could actually support dashboards and reporting.&lt;/P&gt;
&lt;H3&gt;Dashboard creation&lt;/H3&gt;
&lt;P&gt;Without telling it which charts to build or how to lay things out, Copilot created a working dashboard and an SBAR‑style report structure.&lt;/P&gt;
&lt;H3&gt;Storytelling&lt;/H3&gt;
&lt;P&gt;I prompted it to explain what was happening in the data. It called out overtime spikes in May and flagged operational risk.&lt;/P&gt;
&lt;H3&gt;Executive brief&lt;/H3&gt;
&lt;P&gt;I asked for CFO talking points.&lt;/P&gt;
&lt;P&gt;Copilot generated:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Key insights&lt;/LI&gt;
&lt;LI&gt;Questions a CFO should be asking&lt;/LI&gt;
&lt;LI&gt;Decisions that needed to be made&lt;/LI&gt;
&lt;/UL&gt;
&lt;H3&gt;What‑if analysis&lt;/H3&gt;
&lt;P&gt;One prompt created an entirely new sheet to stress‑test scenarios.&lt;/P&gt;
&lt;P&gt;All of this happened inside Excel.&lt;/P&gt;
&lt;P&gt;No exporting.&lt;BR /&gt;No rework.&lt;BR /&gt;No separate AI tool.&lt;/P&gt;
&lt;H2&gt;Real Talk: The part people get wrong about AI and jobs&lt;/H2&gt;
&lt;P&gt;This is where the conversation usually goes sideways.&lt;/P&gt;
&lt;P&gt;“Isn’t this replacing data scientists?”&lt;/P&gt;
&lt;P&gt;No.&lt;/P&gt;
&lt;P&gt;What it replaced was busy work.&lt;/P&gt;
&lt;P&gt;AI is very good at:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Processing structured data&lt;/LI&gt;
&lt;LI&gt;Generating insights&lt;/LI&gt;
&lt;LI&gt;Updating models&lt;/LI&gt;
&lt;LI&gt;Iterating quickly&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;Humans are very good at:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Understanding context&lt;/LI&gt;
&lt;LI&gt;Knowing when something feels off&lt;/LI&gt;
&lt;LI&gt;Asking the right questions&lt;/LI&gt;
&lt;LI&gt;Applying judgment&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;In this scenario, I knew what the CFO cared about.&lt;BR /&gt;I knew the story that needed to be told.&lt;BR /&gt;I knew what to stress‑test.&lt;/P&gt;
&lt;P&gt;Copilot didn’t replace that. It sped it up.&lt;/P&gt;
&lt;P&gt;What used to take hours took about half an hour, including review.&lt;/P&gt;
&lt;P&gt;That’s not a loss of skill. That’s leverage.&lt;/P&gt;
&lt;H2&gt;Copilot as a delegation tool&lt;/H2&gt;
&lt;P&gt;This is the mindset shift.&lt;/P&gt;
&lt;P&gt;Stop thinking about Copilot as “AI that answers questions.”&lt;/P&gt;
&lt;P&gt;Start thinking about it as delegation.&lt;/P&gt;
&lt;P&gt;I delegated:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Data cleanup&lt;/LI&gt;
&lt;LI&gt;Analysis&lt;/LI&gt;
&lt;LI&gt;Visualization&lt;/LI&gt;
&lt;LI&gt;First‑draft insights&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;Then I reviewed it, applied judgment, and refined.&lt;/P&gt;
&lt;P&gt;When I handed this back to the CFO, I didn’t even use Agent Mode. I switched to standard Copilot and asked:&lt;/P&gt;
&lt;P&gt;“Summarize this in one paragraph and give me three bullets a CFO would care about.”&lt;/P&gt;
&lt;P&gt;That was it.&lt;/P&gt;
&lt;P&gt;Copilot understood the entire workbook and produced executive‑ready talking points.&lt;/P&gt;
&lt;H2&gt;The real takeaway&lt;/H2&gt;
&lt;P&gt;Claude’s Excel capabilities are impressive.&lt;/P&gt;
&lt;P&gt;What matters more is where you can actually use them.&lt;/P&gt;
&lt;P&gt;Copilot brings that level of reasoning into the tools people already work in. Excel.&lt;BR /&gt;Teams.&lt;BR /&gt;Word.&lt;/P&gt;
&lt;P&gt;AI isn’t here to think for you.&lt;/P&gt;
&lt;P&gt;It’s here to handle the mechanics so you can focus on judgment, context, and decisions.&lt;/P&gt;
&lt;P&gt;That’s the difference.&lt;/P&gt;
&lt;P&gt;Go try it.&lt;BR /&gt;Have some fun.&lt;BR /&gt;And start delegating.&lt;/P&gt;</description>
      <pubDate>Mon, 02 Mar 2026 21:18:15 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/healthcare-and-life-sciences/claude-in-copilot-excel-on-steroids/ba-p/4498176</guid>
      <dc:creator>michaelgoad</dc:creator>
      <dc:date>2026-03-02T21:18:15Z</dc:date>
    </item>
    <item>
      <title>HLS Copilot Use Case Webinar Series</title>
      <link>https://techcommunity.microsoft.com/t5/healthcare-and-life-sciences/hls-copilot-use-case-webinar-series/ba-p/4496177</link>
      <description>&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;Friday 2/27 - Taming the Excel Jungle of Clinician Scheduling&lt;/STRONG&gt;&lt;BR /&gt;&lt;EM&gt;Nurse clinicians face enough pressure without spending hours wrestling with six‑week schedules in Excel.&lt;/EM&gt; Even with time‑tracking systems in place, many teams still export schedules, staff details, and skills into spreadsheets to manually resolve conflicts and coverage gaps—adding frustration and stress to an already demanding role. In this HLS webinar on &lt;STRONG&gt;February 27 at 12:00 PM ET&lt;/STRONG&gt;, Microsoft’s &lt;STRONG&gt;Michael Gannotti&lt;/STRONG&gt; will demonstrate how &lt;STRONG&gt;Microsoft 365 Copilot&lt;/STRONG&gt; can transform clinician scheduling—reducing work from hours or days to mere minutes, and making last‑minute changes easy to reconcile as new schedules go live.&lt;BR /&gt;&lt;STRONG&gt;&lt;EM&gt;&lt;A href="https://teams.microsoft.com/meet/23481078350175?p=yrMxt0yyWoJ6cyn3sv" target="_blank"&gt;Meeting link active on 2/27&lt;/A&gt;&lt;/EM&gt;&lt;/STRONG&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Friday 3/13 - Supercharging Your Marketing Research and Executive Reporting with Microsoft 365 Copilot&lt;/STRONG&gt;&lt;BR /&gt;Marketing and executive teams are under constant pressure to deliver high‑quality research, insights, and reporting—faster than ever. In this webinar on &lt;STRONG&gt;Friday, March 13 at 12:00 PM ET&lt;/STRONG&gt;, Microsoft’s &lt;STRONG&gt;Michael Gannotti&lt;/STRONG&gt; will demonstrate how &lt;STRONG&gt;Microsoft 365 Copilot&lt;/STRONG&gt; can streamline and turbocharge marketing research, report creation, and supporting collateral—condensing days of manual work into minutes. Learn how Copilot helps teams move from raw information to executive‑ready insights with greater speed, consistency, and impact.&lt;BR /&gt;&lt;STRONG&gt;&lt;EM&gt;&lt;A href="https://teams.microsoft.com/meet/28730469924777?p=48hzpGjtPcyykRQxqV" target="_blank"&gt;Meeting link active on 3/13&lt;/A&gt;&lt;/EM&gt;&lt;/STRONG&gt;&lt;/LI&gt;
&lt;/UL&gt;</description>
      <pubDate>Fri, 20 Feb 2026 22:36:29 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/healthcare-and-life-sciences/hls-copilot-use-case-webinar-series/ba-p/4496177</guid>
      <dc:creator>MichaelGannotti</dc:creator>
      <dc:date>2026-02-20T22:36:29Z</dc:date>
    </item>
    <item>
      <title>🚀 Git-Driven Deployments for Microsoft Fabric Using GitHub Actions</title>
      <link>https://techcommunity.microsoft.com/t5/healthcare-and-life-sciences/git-driven-deployments-for-microsoft-fabric-using-github-actions/ba-p/4496125</link>
      <description>&lt;H2&gt;👋 Introduction&lt;/H2&gt;
&lt;P&gt;If you've been working with Microsoft Fabric, you've likely faced this question:&lt;/P&gt;
&lt;P&gt;&lt;EM&gt;"How do we promote Fabric items from DEV → QA → PROD reliably, consistently, and with proper governance?"&lt;/EM&gt;&lt;/P&gt;
&lt;P&gt;Many teams default to the built-in&amp;nbsp;&lt;STRONG&gt;Fabric Deployment Pipelines&lt;/STRONG&gt;&amp;nbsp;— and they work great for simpler scenarios. But what happens when your enterprise demands:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;🔒 Centralized governance across&amp;nbsp;&lt;STRONG&gt;all&lt;/STRONG&gt;&amp;nbsp;platforms (infra, app, and data)&lt;/LI&gt;
&lt;LI&gt;📜 Full audit trail of&amp;nbsp;&lt;STRONG&gt;every change&lt;/STRONG&gt;&amp;nbsp;tied to a Git commit&lt;/LI&gt;
&lt;LI&gt;✅ Approval gates with&amp;nbsp;&lt;STRONG&gt;reviewer-based promotion&lt;/STRONG&gt;&lt;/LI&gt;
&lt;LI&gt;🔑 Per-environment&amp;nbsp;&lt;STRONG&gt;service principal isolation&lt;/STRONG&gt;&lt;/LI&gt;
&lt;LI&gt;🧩 Alignment with your&amp;nbsp;&lt;STRONG&gt;existing DevOps standards&lt;/STRONG&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;That's exactly the problem we set out to solve. In this post, I'll walk you through a&amp;nbsp;&lt;STRONG&gt;production-ready, enterprise-grade CI/CD solution&lt;/STRONG&gt;&amp;nbsp;for Microsoft Fabric using the&amp;nbsp;&lt;A href="https://pypi.org/project/fabric-cicd/" target="_blank" rel="noopener"&gt;fabric-cicd&lt;/A&gt;&amp;nbsp;Python library and&amp;nbsp;&lt;STRONG&gt;GitHub Actions&lt;/STRONG&gt;&amp;nbsp;— with&amp;nbsp;&lt;STRONG&gt;zero dependency&lt;/STRONG&gt;&amp;nbsp;on Fabric Deployment Pipelines.&lt;/P&gt;
&lt;H2&gt;🎯 What Problem Are We Solving?&lt;/H2&gt;
&lt;P&gt;Traditional Fabric promotion workflows often look like this:&lt;/P&gt;
&lt;DIV class="styles_lia-table-wrapper__h6Xo9 styles_table-responsive__MW0lN"&gt;&lt;table&gt;&lt;thead&gt;&lt;tr&gt;&lt;th&gt;Step&lt;/th&gt;&lt;th&gt;Method&lt;/th&gt;&lt;th&gt;Problem&lt;/th&gt;&lt;/tr&gt;&lt;/thead&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td&gt;Build in DEV workspace&lt;/td&gt;&lt;td&gt;Fabric Portal UI&lt;/td&gt;&lt;td&gt;✅ Works fine&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;Promote to QA&lt;/td&gt;&lt;td&gt;Fabric Deployment Pipeline or manual copy&lt;/td&gt;&lt;td&gt;⚠️ No Git traceability&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;Promote to PROD&lt;/td&gt;&lt;td&gt;Fabric Deployment Pipeline with approval&lt;/td&gt;&lt;td&gt;⚠️ Separate governance model from app/infra CI/CD&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;Rollback&lt;/td&gt;&lt;td&gt;🤷 Manual recreation&lt;/td&gt;&lt;td&gt;❌ No deterministic rollback path&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;Audit&lt;/td&gt;&lt;td&gt;"Who clicked what, when?"&lt;/td&gt;&lt;td&gt;❌ Limited trail&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;/DIV&gt;
&lt;H3&gt;The Core Issue&lt;/H3&gt;
&lt;P&gt;Fabric Deployment Pipelines introduce a&amp;nbsp;&lt;STRONG&gt;parallel governance model&lt;/STRONG&gt;&amp;nbsp;that's disconnected from how your platform and application teams already work. You end up with:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;🔀 Two different promotion systems (GitHub Actions for apps, Fabric Pipelines for data)&lt;/LI&gt;
&lt;LI&gt;🕳️ Governance blind spots between the two&lt;/LI&gt;
&lt;LI&gt;😰 Cultural friction ("Why do data teams have a different process?")&lt;/LI&gt;
&lt;/UL&gt;
&lt;H3&gt;Our Approach: Git as the Single Source of Truth 📖&lt;/H3&gt;
&lt;LI-CODE lang=""&gt;┌─────────────┐     push to main     ┌─────────────┐
│  Developer   │ ──────────────────▶  │   GitHub     │
│  commits to  │                      │   Actions    │
│  Git repo    │                      │   Workflow   │
└─────────────┘                      └──────┬──────┘
                                            │
                          ┌─────────────────┼─────────────────┐
                          ▼                 ▼                 ▼
                    ┌──────────┐     ┌──────────┐     ┌──────────┐
                    │ 🟢 DEV   │     │ 🟡 QA    │     │ 🔴 PROD  │
                    │ Auto     │────▶│ Approval │────▶│ Approval │
                    │ Deploy   │     │ Required │     │ Required │
                    └──────────┘     └──────────┘     └──────────┘&lt;/LI-CODE&gt;
&lt;P&gt;Every deployment originates from &lt;STRONG&gt;Git&lt;/STRONG&gt;. Every promotion is&amp;nbsp;&lt;STRONG&gt;traceable to a commit SHA&lt;/STRONG&gt;. Every environment has&amp;nbsp;&lt;STRONG&gt;its own approval gate&lt;/STRONG&gt;. One pipeline model — across everything.&lt;/P&gt;
&lt;H2&gt;🏗️ Solution Architecture&lt;/H2&gt;
&lt;H3&gt;📁 Repository Structure&lt;/H3&gt;
&lt;LI-CODE lang=""&gt;fabric-cicd-project/
│
├── 📂 .github/
│   ├── 📂 workflows/
│   │   └── 📄 fabric-cicd.yml          # GitHub Actions pipeline
│   ├── 📄 CODEOWNERS                    # Review enforcement
│   └── 📄 dependabot.yml               # Automated dependency updates
│
├── 📂 config/
│   └── 📄 parameter.yml                # Environment-specific parameterization
│
├── 📂 deploy/
│   ├── 📄 deploy_workspace.py          # Main deployment entrypoint
│   └── 📄 validate_repo.py            # Pre-deployment validation
│
├── 📂 workspace/                       # Fabric items (Git-integrated / PBIP)
│
├── 📄 .env.example                     # Environment variable template
├── 📄 .gitignore
├── 📄 ruff.toml                        # Python linting config
├── 📄 requirements.txt                 # Pinned dependencies
├── 📄 SECURITY.md                      # Vulnerability disclosure policy
└── 📄 README.md&lt;/LI-CODE&gt;
&lt;H3&gt;🔧 Key Components&lt;/H3&gt;
&lt;DIV class="styles_lia-table-wrapper__h6Xo9 styles_table-responsive__MW0lN"&gt;&lt;table&gt;&lt;thead&gt;&lt;tr&gt;&lt;th&gt;Component&lt;/th&gt;&lt;th&gt;Purpose&lt;/th&gt;&lt;/tr&gt;&lt;/thead&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td&gt;fabric-cicd&amp;nbsp;Python library&lt;/td&gt;&lt;td&gt;Deploys Fabric items from Git to workspaces (handles all Fabric API calls internally)&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;deploy_workspace.py&lt;/td&gt;&lt;td&gt;CLI entrypoint — authenticates, configures, deploys, logs&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;parameter.yml&lt;/td&gt;&lt;td&gt;Find-and-replace rules for environment-specific values (connections, lakehouse IDs, etc.)&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;validate_repo.py&lt;/td&gt;&lt;td&gt;Pre-flight checks — validates repo structure, parameter.yml presence, .platform files&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;fabric-cicd.yml&lt;/td&gt;&lt;td&gt;GitHub Actions workflow — orchestrates validate → DEV → QA → PROD&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;/DIV&gt;
&lt;H2&gt;✨ Feature Deep Dive&lt;/H2&gt;
&lt;H3&gt;1️⃣ Per-Environment Service Principal Isolation 🔐&lt;/H3&gt;
&lt;P&gt;Instead of a single shared service principal, each environment gets its own:&lt;/P&gt;
&lt;LI-CODE lang=""&gt;DEV_TENANT_ID / DEV_CLIENT_ID / DEV_CLIENT_SECRET
QA_TENANT_ID  / QA_CLIENT_ID  / QA_CLIENT_SECRET
PROD_TENANT_ID / PROD_CLIENT_ID / PROD_CLIENT_SECRET&lt;/LI-CODE&gt;
&lt;P&gt;&lt;STRONG&gt;Why this matters:&lt;/STRONG&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;🛡️&amp;nbsp;&lt;STRONG&gt;Least-privilege access&lt;/STRONG&gt;&amp;nbsp;— the DEV SP can't touch PROD&lt;/LI&gt;
&lt;LI&gt;🔍&amp;nbsp;&lt;STRONG&gt;Audit clarity&lt;/STRONG&gt;&amp;nbsp;— you know&amp;nbsp;&lt;EM&gt;which&lt;/EM&gt;&amp;nbsp;identity deployed&amp;nbsp;&lt;EM&gt;where&lt;/EM&gt;&lt;/LI&gt;
&lt;LI&gt;💥&amp;nbsp;&lt;STRONG&gt;Blast radius reduction&lt;/STRONG&gt;&amp;nbsp;— a compromised DEV secret doesn't affect PROD&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;The deploy script automatically resolves the correct credentials based on&amp;nbsp;TARGET_ENVIRONMENT, with fallback to shared&amp;nbsp;FABRIC_*&amp;nbsp;variables for simpler setups.&lt;/P&gt;
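&lt;P&gt;A minimal sketch of that resolution logic (a hypothetical helper for illustration, not the actual &lt;STRONG&gt;deploy_workspace.py&lt;/STRONG&gt; code):&lt;/P&gt;

```python
import os

def resolve_credentials(target_env: str) -> dict:
    """Resolve per-environment service principal variables (e.g. DEV_CLIENT_ID),
    falling back to shared FABRIC_* variables when an env-specific one is unset."""
    creds = {}
    for key in ("TENANT_ID", "CLIENT_ID", "CLIENT_SECRET"):
        value = os.environ.get(f"{target_env}_{key}") or os.environ.get(f"FABRIC_{key}")
        if not value:
            raise RuntimeError(f"No credential found for {key}")
        creds[key.lower()] = value
    return creds
```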
&lt;H3&gt;2️⃣ Environment-Specific Parameterization 🎛️&lt;/H3&gt;
&lt;P&gt;A single&amp;nbsp;&lt;STRONG&gt;parameter.yml&lt;/STRONG&gt;&amp;nbsp;drives all environment differences:&lt;/P&gt;
&lt;LI-CODE lang=""&gt;find_replace:
  - find: "DEV_Lakehouse"
    replace_with:
      DEV: "DEV_Lakehouse"
      QA: "QA_Lakehouse"
      PROD: "PROD_Lakehouse"

  - find: "dev-sql-server.database.windows.net"
    replace_with:
      DEV: "dev-sql-server.database.windows.net"
      QA: "qa-sql-server.database.windows.net"
      PROD: "prod-sql-server.database.windows.net"&lt;/LI-CODE&gt;
&lt;P&gt;✅ Same Git artifacts → different runtime bindings per environment&lt;BR /&gt;✅ No manual edits between promotions&lt;BR /&gt;✅ Easy to review in pull requests&lt;/P&gt;
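&lt;P&gt;Conceptually, the library applies these rules as a per-environment find-and-replace over item definitions before deployment. A simplified sketch of that idea (not the actual &lt;STRONG&gt;fabric-cicd&lt;/STRONG&gt; implementation):&lt;/P&gt;

```python
def apply_find_replace(text: str, rules: list, environment: str) -> str:
    """Apply parameter.yml-style find_replace rules for one target environment.

    Each rule is a dict like:
      {"find": "DEV_Lakehouse",
       "replace_with": {"DEV": "DEV_Lakehouse", "QA": "QA_Lakehouse", ...}}
    """
    for rule in rules:
        replacement = rule["replace_with"].get(environment)
        if replacement is not None:
            text = text.replace(rule["find"], replacement)
    return text
```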
&lt;H3&gt;3️⃣ Approval-Gated Promotions ✅&lt;/H3&gt;
&lt;P&gt;The GitHub Actions workflow uses&amp;nbsp;&lt;STRONG&gt;GitHub Environments&lt;/STRONG&gt;&amp;nbsp;with reviewer requirements:&lt;/P&gt;
&lt;DIV class="styles_lia-table-wrapper__h6Xo9 styles_table-responsive__MW0lN"&gt;&lt;table&gt;&lt;thead&gt;&lt;tr&gt;&lt;th&gt;Environment&lt;/th&gt;&lt;th&gt;Trigger&lt;/th&gt;&lt;th&gt;Approval&lt;/th&gt;&lt;/tr&gt;&lt;/thead&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td&gt;🟢&amp;nbsp;&lt;STRONG&gt;DEV&lt;/STRONG&gt;&lt;/td&gt;&lt;td&gt;Automatic on push to&amp;nbsp;main&lt;/td&gt;&lt;td&gt;None — deploys immediately&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;🟡&amp;nbsp;&lt;STRONG&gt;QA&lt;/STRONG&gt;&lt;/td&gt;&lt;td&gt;After successful DEV deploy&lt;/td&gt;&lt;td&gt;✅ Requires reviewer approval&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;🔴&amp;nbsp;&lt;STRONG&gt;PROD&lt;/STRONG&gt;&lt;/td&gt;&lt;td&gt;After successful QA deploy&lt;/td&gt;&lt;td&gt;✅ Requires reviewer approval&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;/DIV&gt;
&lt;P&gt;Reviewers see a&amp;nbsp;&lt;STRONG&gt;rich job summary&lt;/STRONG&gt;&amp;nbsp;in GitHub showing:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;📌 Git commit SHA being deployed&lt;/LI&gt;
&lt;LI&gt;🎯 Target workspace and environment&lt;/LI&gt;
&lt;LI&gt;📦 Item types in scope&lt;/LI&gt;
&lt;LI&gt;⏱️ Deployment duration&lt;/LI&gt;
&lt;LI&gt;✅ / ❌ Final status&lt;/LI&gt;
&lt;/UL&gt;
&lt;H3&gt;4️⃣ Pre-Deployment Validation 🔍&lt;/H3&gt;
&lt;P&gt;Before any deployment runs, a dedicated&amp;nbsp;&lt;STRONG&gt;validate&lt;/STRONG&gt;&amp;nbsp;job checks:&lt;/P&gt;
&lt;DIV class="styles_lia-table-wrapper__h6Xo9 styles_table-responsive__MW0lN"&gt;&lt;table&gt;&lt;thead&gt;&lt;tr&gt;&lt;th&gt;Check&lt;/th&gt;&lt;th&gt;What It Does&lt;/th&gt;&lt;/tr&gt;&lt;/thead&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td&gt;📂&amp;nbsp;&lt;A href="vscode-file://vscode-app/c:/Users/vinodsoni/AppData/Local/Programs/Microsoft%20VS%20Code/c3a26841a8/resources/app/out/vs/code/electron-browser/workbench/workbench.html" data-href="file:///c%3A/Users/vinodsoni/fabric-cicd-project/workspace/" data-keybinding-context="9639" target="_blank"&gt;workspace&lt;/A&gt;&amp;nbsp;exists&lt;/td&gt;&lt;td&gt;Ensures Fabric items are present&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;📄&amp;nbsp;parameter.yml&amp;nbsp;exists&lt;/td&gt;&lt;td&gt;Ensures parameterization is configured&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;📄&amp;nbsp;.platform&amp;nbsp;files present&lt;/td&gt;&lt;td&gt;Validates Fabric Git integration metadata&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;🐍&amp;nbsp;ruff check deploy/&lt;/td&gt;&lt;td&gt;Lints Python code for syntax errors and bad imports&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;/DIV&gt;
&lt;P&gt;If validation fails,&amp;nbsp;&lt;STRONG&gt;no deployment runs&lt;/STRONG&gt;&amp;nbsp;— across any environment.&lt;/P&gt;
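&lt;P&gt;The gate itself is simple: collect failures and refuse to proceed if any exist. A sketch of the core checks (hypothetical; the real &lt;STRONG&gt;validate_repo.py&lt;/STRONG&gt; does more):&lt;/P&gt;

```python
from pathlib import Path

def validate_repo(root: str) -> list:
    """Return a list of validation errors; an empty list means deployment may proceed."""
    root_path = Path(root)
    errors = []
    if not (root_path / "workspace").is_dir():
        errors.append("workspace/ folder is missing")
    elif not list((root_path / "workspace").glob("*/.platform")):
        errors.append("no .platform files found under workspace/")
    if not (root_path / "config" / "parameter.yml").is_file():
        errors.append("config/parameter.yml is missing")
    return errors
```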
&lt;H3&gt;5️⃣ Full Git SHA Traceability 📜&lt;/H3&gt;
&lt;P&gt;Every deployment logs and surfaces the exact Git commit being deployed:&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Why this matters:&lt;/STRONG&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;🔄&amp;nbsp;&lt;STRONG&gt;Rollback&lt;/STRONG&gt;&amp;nbsp;=&amp;nbsp;git revert &amp;lt;sha&amp;gt;&amp;nbsp;+ push → pipeline redeploys previous state&lt;/LI&gt;
&lt;LI&gt;🕵️&amp;nbsp;&lt;STRONG&gt;Audit&lt;/STRONG&gt;&amp;nbsp;= every PROD deployment tied to a specific commit, reviewer, and timestamp&lt;/LI&gt;
&lt;LI&gt;🔀&amp;nbsp;&lt;STRONG&gt;Diff&lt;/STRONG&gt;&amp;nbsp;=&amp;nbsp;git diff v1..v2&amp;nbsp;shows exactly what changed between deployments&lt;/LI&gt;
&lt;/UL&gt;
&lt;H3&gt;6️⃣ Concurrency Control 🚦&lt;/H3&gt;
&lt;LI-CODE lang=""&gt;concurrency:
  group: fabric-deploy-${{ github.ref }}
  cancel-in-progress: false&lt;/LI-CODE&gt;
&lt;P&gt;Two rapid pushes to&amp;nbsp;main&amp;nbsp;won't cause parallel deployments fighting over the same workspace. The second run&amp;nbsp;&lt;STRONG&gt;queues&lt;/STRONG&gt;&amp;nbsp;until the first completes.&lt;/P&gt;
&lt;H3&gt;7️⃣ Smart Path Filtering 🧠&lt;/H3&gt;
&lt;LI-CODE lang=""&gt;paths-ignore:
  - "**.md"
  - "docs/**"
  - ".vscode/**"&lt;/LI-CODE&gt;
&lt;P&gt;A README-only commit? A docs update?&amp;nbsp;&lt;STRONG&gt;No deployment triggered.&lt;/STRONG&gt;&amp;nbsp;This saves runner minutes and avoids unnecessary approval requests for QA/PROD.&lt;/P&gt;
&lt;H3&gt;8️⃣ Retry Logic with Exponential Backoff 🔁&lt;/H3&gt;
&lt;P&gt;The deploy script wraps&amp;nbsp;fabric-cicd&amp;nbsp;calls with retry logic:&lt;/P&gt;
&lt;LI-CODE lang=""&gt;Attempt 1 → fails (HTTP 429 rate limit)
  ⏳ Wait 5 seconds
Attempt 2 → fails (HTTP 503 transient)
  ⏳ Wait 15 seconds
Attempt 3 → succeeds ✅&lt;/LI-CODE&gt;
&lt;P&gt;Transient Fabric service issues don't break your pipeline — the deployment retries automatically.&lt;/P&gt;
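&lt;P&gt;A minimal sketch of such a wrapper (illustrative only; the function name, attempt count, and delay schedule are assumptions chosen to match the trace above, not the fabric-cicd API):&lt;/P&gt;

```python
import time

def with_retry(operation, attempts=3, base_delay=5, backoff=3, sleep=time.sleep):
    """Run operation(); on failure wait base_delay, then base_delay*backoff, ... and retry."""
    for attempt in range(1, attempts + 1):
        try:
            return operation()
        except Exception:
            if attempt == attempts:
                raise  # out of attempts: surface the error so the pipeline step fails
            sleep(base_delay * backoff ** (attempt - 1))  # 5s, 15s, ...
```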
&lt;H3&gt;9️⃣ Orphan Cleanup 🧹&lt;/H3&gt;
&lt;P&gt;Set&amp;nbsp;CLEAN_ORPHANS=true&amp;nbsp;and items that exist in the workspace but&amp;nbsp;&lt;STRONG&gt;not&lt;/STRONG&gt;&amp;nbsp;in Git get removed:&lt;/P&gt;
&lt;LI-CODE lang=""&gt;Workspace has: Notebook_A, Notebook_B, Notebook_C
Git repo has:  Notebook_A, Notebook_B

→ Notebook_C gets removed (orphan)&lt;/LI-CODE&gt;
&lt;P&gt;This ensures your workspace&amp;nbsp;&lt;STRONG&gt;exactly matches&lt;/STRONG&gt;&amp;nbsp;your Git state — no drift, no surprises.&lt;/P&gt;
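&lt;P&gt;Conceptually, orphan detection is just a set difference between workspace contents and Git contents (a toy sketch; names are illustrative, not the library's internals):&lt;/P&gt;

```python
def find_orphans(workspace_items, git_items):
    """Items deployed in the workspace but no longer tracked in Git."""
    return sorted(set(workspace_items) - set(git_items))
```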
&lt;H3&gt;🔟 Dependency Management with Dependabot 🤖&lt;/H3&gt;
&lt;LI-CODE lang=""&gt;# .github/dependabot.yml
updates:
  - package-ecosystem: "pip"
    schedule:
      interval: "weekly"
  - package-ecosystem: "github-actions"
    schedule:
      interval: "weekly"&lt;/LI-CODE&gt;
&lt;P&gt;fabric-cicd,&amp;nbsp;azure-identity, and GitHub Actions versions are automatically monitored. When updates are available, Dependabot opens a PR — keeping your pipeline secure and current.&lt;/P&gt;
&lt;H3&gt;1️⃣1️⃣ CODEOWNERS Enforcement 👥&lt;/H3&gt;
&lt;LI-CODE lang=""&gt;# .github/CODEOWNERS
/deploy/                    @platform-team
/config/                    @platform-team
/.github/workflows/         @platform-team&lt;/LI-CODE&gt;
&lt;P&gt;Changes to deployment scripts, parameterization, or the workflow&amp;nbsp;&lt;STRONG&gt;require review from the platform team&lt;/STRONG&gt;. No one accidentally modifies the pipeline without oversight.&lt;/P&gt;
&lt;H3&gt;1️⃣2️⃣ Job Timeouts ⏱️&lt;/H3&gt;
&lt;DIV class="styles_lia-table-wrapper__h6Xo9 styles_table-responsive__MW0lN"&gt;&lt;table&gt;&lt;thead&gt;&lt;tr&gt;&lt;th&gt;Job&lt;/th&gt;&lt;th&gt;Timeout&lt;/th&gt;&lt;/tr&gt;&lt;/thead&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td&gt;Validate&lt;/td&gt;&lt;td&gt;10 minutes&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;Deploy (DEV/QA/PROD)&lt;/td&gt;&lt;td&gt;30 minutes&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;/DIV&gt;
&lt;P&gt;A hung process won't burn 6 hours of runner time. It fails fast, alerts the team, and frees the runner.&lt;/P&gt;
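&lt;P&gt;In the workflow file this is a single key per job. A sketch (job names assumed to match the pipeline described here):&lt;/P&gt;

```yaml
jobs:
  validate:
    runs-on: ubuntu-latest
    timeout-minutes: 10   # fail fast if checks hang
  deploy-dev:
    runs-on: ubuntu-latest
    timeout-minutes: 30   # same limit applies to the QA/PROD deploy jobs
```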
&lt;H3&gt;1️⃣3️⃣ Security Policy 🛡️&lt;/H3&gt;
&lt;P&gt;A dedicated&amp;nbsp;&lt;A href="vscode-file://vscode-app/c:/Users/vinodsoni/AppData/Local/Programs/Microsoft%20VS%20Code/c3a26841a8/resources/app/out/vs/code/electron-browser/workbench/workbench.html" data-href="file:///c%3A/Users/vinodsoni/fabric-cicd-project/SECURITY.md" data-keybinding-context="9640" target="_blank"&gt;SECURITY.md&lt;/A&gt;&amp;nbsp;provides:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;📧 Responsible vulnerability disclosure process&lt;/LI&gt;
&lt;LI&gt;⏰ 48-hour acknowledgement SLA&lt;/LI&gt;
&lt;LI&gt;📋 Best practices for contributors (no secrets in code, least-privilege SPs, 90-day rotation)&lt;/LI&gt;
&lt;/UL&gt;
&lt;H2&gt;🔄 The Complete Workflow&lt;/H2&gt;
&lt;P&gt;Here's what happens end-to-end when a developer merges a PR:&lt;/P&gt;
&lt;LI-CODE lang=""&gt;1. 👨‍💻 Developer merges PR to main
         │
2. 🔍 VALIDATE job runs
         │  ✅ Repo structure checks
         │  ✅ Python linting (ruff)
         │  ✅ parameter.yml validation
         │
3. 🟢 DEPLOY-DEV job runs (automatic)
         │  🔑 Authenticates with DEV SP
         │  📦 Deploys all items to DEV workspace
         │  📝 Logs commit SHA + summary
         │
4. 🟡 DEPLOY-QA job waits for approval
         │  👀 Reviewer checks job summary
         │  ✅ Reviewer approves
         │  🔑 Authenticates with QA SP
         │  📦 Deploys all items to QA workspace
         │
5. 🔴 DEPLOY-PROD job waits for approval
         │  👀 Reviewer checks job summary
         │  ✅ Reviewer approves
         │  🔑 Authenticates with PROD SP
         │  📦 Deploys all items to PROD workspace
         │
6. 🎉 Done — all environments in sync with Git&lt;/LI-CODE&gt;
&lt;H2&gt;🆚 Comparison: This Approach vs. Fabric Deployment Pipelines&lt;/H2&gt;
&lt;DIV class="styles_lia-table-wrapper__h6Xo9 styles_table-responsive__MW0lN"&gt;&lt;table&gt;&lt;thead&gt;&lt;tr&gt;&lt;th&gt;Capability&lt;/th&gt;&lt;th&gt;Fabric Deployment Pipelines&lt;/th&gt;&lt;th&gt;This Solution (fabric-cicd + GitHub Actions)&lt;/th&gt;&lt;/tr&gt;&lt;/thead&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td&gt;Source of truth&lt;/td&gt;&lt;td&gt;Workspace&lt;/td&gt;&lt;td&gt;✅ Git&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;Promotion trigger&lt;/td&gt;&lt;td&gt;UI click / API call&lt;/td&gt;&lt;td&gt;✅ Git push + approval&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;Approval gates&lt;/td&gt;&lt;td&gt;Fabric-native&lt;/td&gt;&lt;td&gt;✅ GitHub Environments (same as app teams)&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;Audit trail&lt;/td&gt;&lt;td&gt;Fabric activity log&lt;/td&gt;&lt;td&gt;✅ Git commits + GitHub Actions history&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;Rollback&lt;/td&gt;&lt;td&gt;Manual&lt;/td&gt;&lt;td&gt;✅&amp;nbsp;git revert&amp;nbsp;+ auto-redeploy&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;Cross-platform governance&lt;/td&gt;&lt;td&gt;Separate model&lt;/td&gt;&lt;td&gt;✅ Unified with infra/app CI/CD&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;Parameterization&lt;/td&gt;&lt;td&gt;Deployment rules&lt;/td&gt;&lt;td&gt;✅&amp;nbsp;parameter.yml&amp;nbsp;(reviewable in PR)&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;Secret management&lt;/td&gt;&lt;td&gt;Fabric-managed&lt;/td&gt;&lt;td&gt;✅ GitHub Secrets + per-env SP isolation&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;Drift detection&lt;/td&gt;&lt;td&gt;Limited&lt;/td&gt;&lt;td&gt;✅ Orphan cleanup (CLEAN_ORPHANS=true)&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;/DIV&gt;
&lt;H2&gt;🚀 Getting Started&lt;/H2&gt;
&lt;H3&gt;Prerequisites&lt;/H3&gt;
&lt;UL&gt;
&lt;LI&gt;3 Fabric workspaces (DEV, QA, PROD)&lt;/LI&gt;
&lt;LI&gt;Service principal(s) with&amp;nbsp;&lt;STRONG&gt;Contributor&lt;/STRONG&gt;&amp;nbsp;role on each workspace&lt;/LI&gt;
&lt;LI&gt;GitHub repository with Actions enabled&lt;/LI&gt;
&lt;LI&gt;GitHub Environments configured (dev,&amp;nbsp;qa,&amp;nbsp;prod)&lt;/LI&gt;
&lt;/UL&gt;
&lt;H3&gt;Quick Setup&lt;/H3&gt;
&lt;LI-CODE lang=""&gt;# 1. Clone the repo
git clone https://github.com/&amp;lt;your-org&amp;gt;/fabric-cicd-project.git

# 2. Install dependencies
pip install -r requirements.txt

# 3. Copy and fill environment variables
cp .env.example .env

# 4. Run locally against DEV
python deploy/deploy_workspace.py&lt;/LI-CODE&gt;
&lt;H3&gt;GitHub Actions Setup&lt;/H3&gt;
&lt;OL&gt;
&lt;LI&gt;Create GitHub Environments:&amp;nbsp;dev,&amp;nbsp;qa&amp;nbsp;(add reviewers),&amp;nbsp;prod&amp;nbsp;(add reviewers)&lt;/LI&gt;
&lt;LI&gt;Add secrets to each environment:
&lt;UL&gt;
&lt;LI&gt;DEV_TENANT_ID,&amp;nbsp;DEV_CLIENT_ID,&amp;nbsp;DEV_CLIENT_SECRET&lt;/LI&gt;
&lt;LI&gt;QA_TENANT_ID,&amp;nbsp;QA_CLIENT_ID,&amp;nbsp;QA_CLIENT_SECRET&lt;/LI&gt;
&lt;LI&gt;PROD_TENANT_ID,&amp;nbsp;PROD_CLIENT_ID,&amp;nbsp;PROD_CLIENT_SECRET&lt;/LI&gt;
&lt;LI&gt;DEV_WORKSPACE_ID,&amp;nbsp;QA_WORKSPACE_ID,&amp;nbsp;PROD_WORKSPACE_ID&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;LI&gt;Push to&amp;nbsp;main&amp;nbsp;— the pipeline takes over! 🎉&lt;/LI&gt;
&lt;/OL&gt;
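&lt;P&gt;For reference, wiring those per-environment secrets into a deploy job could look like this (a sketch: the step layout and the&amp;nbsp;AZURE_*&amp;nbsp;/&amp;nbsp;WORKSPACE_ID&amp;nbsp;environment variable names are assumptions for illustration, not taken from the repo):&lt;/P&gt;

```yaml
deploy-qa:
  runs-on: ubuntu-latest
  environment: qa            # pauses here until a qa reviewer approves
  steps:
    - uses: actions/checkout@v4
    - run: pip install -r requirements.txt
    - run: python deploy/deploy_workspace.py
      env:
        AZURE_TENANT_ID: ${{ secrets.QA_TENANT_ID }}
        AZURE_CLIENT_ID: ${{ secrets.QA_CLIENT_ID }}
        AZURE_CLIENT_SECRET: ${{ secrets.QA_CLIENT_SECRET }}
        WORKSPACE_ID: ${{ secrets.QA_WORKSPACE_ID }}
```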
&lt;H2&gt;💡 Lessons Learned&lt;/H2&gt;
&lt;P&gt;After implementing this pattern across several engagements, here are the key takeaways:&lt;/P&gt;
&lt;H3&gt;✅ What Works Well&lt;/H3&gt;
&lt;UL&gt;
&lt;LI&gt;Teams&amp;nbsp;&lt;STRONG&gt;love&lt;/STRONG&gt;&amp;nbsp;the Git traceability once they experience a clean rollback&lt;/LI&gt;
&lt;LI&gt;Approval gates in GitHub feel&amp;nbsp;&lt;STRONG&gt;natural&lt;/STRONG&gt;&amp;nbsp;to platform engineers&lt;/LI&gt;
&lt;LI&gt;parameter.yml changes in PRs create&amp;nbsp;&lt;STRONG&gt;great review conversations&lt;/STRONG&gt;&amp;nbsp;about environment differences&lt;/LI&gt;
&lt;LI&gt;Job summaries give reviewers&amp;nbsp;&lt;STRONG&gt;confidence&lt;/STRONG&gt;&amp;nbsp;to approve without digging into logs&lt;/LI&gt;
&lt;/UL&gt;
&lt;H3&gt;⚠️ Watch Out For&lt;/H3&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;Cultural resistance&lt;/STRONG&gt;&amp;nbsp;is the #1 blocker — invest in enablement, not just automation&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Fabric items with runtime state&lt;/STRONG&gt;&amp;nbsp;(data in lakehouses, refresh history) aren't captured in Git&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Secret rotation&lt;/STRONG&gt;&amp;nbsp;across 3+ environments needs process discipline (consider OIDC federated credentials)&lt;/LI&gt;
&lt;LI&gt;Run a&amp;nbsp;&lt;STRONG&gt;"portal vs. pipeline" side-by-side demo&lt;/STRONG&gt;&amp;nbsp;early — it changes minds fast&lt;/LI&gt;
&lt;/UL&gt;
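&lt;P&gt;On the OIDC point above: GitHub Actions can exchange a short-lived federated token for Entra ID credentials, removing stored client secrets entirely. A minimal sketch (assumes a federated credential is already configured on each service principal):&lt;/P&gt;

```yaml
permissions:
  id-token: write   # required for the OIDC token exchange
  contents: read

steps:
  - uses: azure/login@v2
    with:
      client-id: ${{ secrets.QA_CLIENT_ID }}
      tenant-id: ${{ secrets.QA_TENANT_ID }}
      allow-no-subscriptions: true   # Fabric-only SPs need no Azure subscription
```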
&lt;H2&gt;🤝 For CSAs: Sharing This With Customers&lt;/H2&gt;
&lt;P&gt;This solution is ideal for customers who:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;☑️ Already use GitHub Actions for application or infrastructure CI/CD&lt;/LI&gt;
&lt;LI&gt;☑️ Have&amp;nbsp;&lt;STRONG&gt;governance requirements&lt;/STRONG&gt;&amp;nbsp;that demand Git-based audit trails&lt;/LI&gt;
&lt;LI&gt;☑️ Operate&amp;nbsp;&lt;STRONG&gt;multiple Fabric workspaces&lt;/STRONG&gt;&amp;nbsp;across environments&lt;/LI&gt;
&lt;LI&gt;☑️ Want to&amp;nbsp;&lt;STRONG&gt;standardize&lt;/STRONG&gt;&amp;nbsp;their promotion model across all workloads&lt;/LI&gt;
&lt;LI&gt;☑️ Are moving from Power BI Premium to Fabric and want to modernize their DevOps practices&lt;/LI&gt;
&lt;/UL&gt;
&lt;H3&gt;🗣️ Conversation Starters&lt;/H3&gt;
&lt;P&gt;&lt;EM&gt;"How are you promoting Fabric items between environments today?"&lt;/EM&gt;&lt;/P&gt;
&lt;P&gt;&lt;EM&gt;"Is your data team using the same CI/CD patterns as your app teams?"&lt;/EM&gt;&lt;/P&gt;
&lt;P&gt;&lt;EM&gt;"If something goes wrong in production, how quickly can you roll back to the previous version?"&lt;/EM&gt;&lt;/P&gt;
&lt;H2&gt;📚 Resources&lt;/H2&gt;
&lt;UL&gt;
&lt;LI&gt;📦&amp;nbsp;&lt;A href="vscode-file://vscode-app/c:/Users/vinodsoni/AppData/Local/Programs/Microsoft%20VS%20Code/c3a26841a8/resources/app/out/vs/code/electron-browser/workbench/workbench.html" data-href="https://pypi.org/project/fabric-cicd/" target="_blank"&gt;fabric-cicd on PyPI&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;📖&amp;nbsp;&lt;A href="vscode-file://vscode-app/c:/Users/vinodsoni/AppData/Local/Programs/Microsoft%20VS%20Code/c3a26841a8/resources/app/out/vs/code/electron-browser/workbench/workbench.html" data-href="https://microsoft.github.io/fabric-cicd/" target="_blank"&gt;fabric-cicd Documentation&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;🐙&amp;nbsp;&lt;A href="vscode-file://vscode-app/c:/Users/vinodsoni/AppData/Local/Programs/Microsoft%20VS%20Code/c3a26841a8/resources/app/out/vs/code/electron-browser/workbench/workbench.html" data-href="https://docs.github.com/en/actions" target="_blank"&gt;GitHub Actions Documentation&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;🏗️&amp;nbsp;&lt;A href="vscode-file://vscode-app/c:/Users/vinodsoni/AppData/Local/Programs/Microsoft%20VS%20Code/c3a26841a8/resources/app/out/vs/code/electron-browser/workbench/workbench.html" data-href="https://learn.microsoft.com/en-us/fabric/cicd/git-integration/intro-to-git-integration" target="_blank"&gt;Microsoft Fabric Git Integration&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://github.com/vinod-soni-microsoft/FABRIC-CICD-PROJECT" target="_blank"&gt;🌐Git Repository URL: vinod-soni-microsoft/FABRIC-CICD-PROJECT: Enterprise-grade CI/CD solution for Microsoft Fabric using fabric-cicd Python library and GitHub Actions. Git-driven deployments across DEV → QA → PROD with environment approval gates, per-environment service principal isolation, and parameterized promotion — no Fabric Deployment Pipelines required.&lt;/A&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;H2&gt;🏁 Conclusion&lt;/H2&gt;
&lt;P&gt;The shift from UI-driven promotion to&amp;nbsp;&lt;STRONG&gt;Git-driven CI/CD&lt;/STRONG&gt;&amp;nbsp;for Microsoft Fabric isn't just a technical upgrade — it's a&amp;nbsp;&lt;STRONG&gt;governance and cultural alignment&lt;/STRONG&gt;&amp;nbsp;decision. By using&amp;nbsp;fabric-cicd&amp;nbsp;with GitHub Actions, you get:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;📖&amp;nbsp;&lt;STRONG&gt;One source of truth&lt;/STRONG&gt;&amp;nbsp;(Git)&lt;/LI&gt;
&lt;LI&gt;🔄&amp;nbsp;&lt;STRONG&gt;One promotion model&lt;/STRONG&gt;&amp;nbsp;(GitHub Actions)&lt;/LI&gt;
&lt;LI&gt;✅&amp;nbsp;&lt;STRONG&gt;One approval process&lt;/STRONG&gt;&amp;nbsp;(GitHub Environments)&lt;/LI&gt;
&lt;LI&gt;🔍&amp;nbsp;&lt;STRONG&gt;One audit trail&lt;/STRONG&gt;&amp;nbsp;(Git history + Actions logs)&lt;/LI&gt;
&lt;LI&gt;🔐&amp;nbsp;&lt;STRONG&gt;One security model&lt;/STRONG&gt;&amp;nbsp;(GitHub Secrets + per-env SPs)&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;No parallel governance. No hidden drift. No "who clicked what in the portal."&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Just Git, code, and confidence.&lt;/STRONG&gt;&amp;nbsp;💪&lt;/P&gt;
&lt;P&gt;&lt;EM&gt;Have questions or want to share your experience? Drop a comment below — I'd love to hear how your team is approaching Fabric CI/CD!&lt;/EM&gt; 👇&lt;/P&gt;</description>
      <pubDate>Fri, 20 Feb 2026 15:48:00 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/healthcare-and-life-sciences/git-driven-deployments-for-microsoft-fabric-using-github-actions/ba-p/4496125</guid>
      <dc:creator>VinodSoni</dc:creator>
      <dc:date>2026-02-20T15:48:00Z</dc:date>
    </item>
    <item>
      <title>Compliance Meets AI: What’s New in Insider Risk Management for Copilot</title>
      <link>https://techcommunity.microsoft.com/t5/healthcare-and-life-sciences/compliance-meets-ai-what-s-new-in-insider-risk-management-for/ba-p/4493184</link>
      <description>&lt;P&gt;AI is moving fast—and so are the risks. That’s exactly why our latest&amp;nbsp;&lt;STRONG&gt;Compliance Meets AI&lt;/STRONG&gt; session, &lt;EM&gt;“What’s New – Insider Risk Management for Copilot,”&lt;/EM&gt; focused on how organizations can confidently secure data in an increasingly agent-driven world.&lt;/P&gt;
&lt;P&gt;In this session, &lt;STRONG&gt;Kevin Uy&lt;/STRONG&gt; walked through what’s new (and what’s next) in &lt;STRONG&gt;Microsoft Purview Insider Risk Management&lt;/STRONG&gt;, with a strong spotlight on &lt;STRONG&gt;Copilot, AI agents, and Security Copilot integration&lt;/STRONG&gt;. From real-world risk scenarios to live demos, this was a practical deep dive into protecting your organization while still empowering innovation.&lt;/P&gt;
&lt;P&gt;If you missed the session, we have you covered! You can find the recording below.&lt;/P&gt;
&lt;P&gt;&lt;A class="lia-external-url" href="https://aka.ms/Compliance-Meets-Ai-Insider-Risk-Management" target="_blank"&gt;https://aka.ms/Compliance-Meets-Ai-Insider-Risk-Management&lt;/A&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;🔍 Key highlights included:&lt;/STRONG&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;Insider Risk for Copilot &amp;amp; Agents&lt;/STRONG&gt; – How Microsoft now monitors &lt;EM&gt;both humans and AI agents&lt;/EM&gt; with purpose-built policies.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Risky Agents (Preview)&lt;/STRONG&gt; – New capabilities that provide visibility and governance for agents built in &lt;STRONG&gt;Copilot Studio&lt;/STRONG&gt; and &lt;STRONG&gt;Azure AI Foundry&lt;/STRONG&gt;.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Security Copilot Triage Agents&lt;/STRONG&gt; – A powerful look at AI helping security teams prioritize what truly matters by summarizing and contextualizing Insider Risk and DLP alerts.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;DSPM for AI&lt;/STRONG&gt; – How organizations can gain insight into risky AI usage, prompts, and responses—inside and outside the Microsoft ecosystem.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;💡 One of the biggest takeaways? &lt;STRONG&gt;AI security no longer stops at users.&lt;/STRONG&gt; Agents operate at machine speed, and organizations need the same level of governance, risk scoring, and investigation capabilities to keep pace.&lt;/P&gt;
&lt;P&gt;If you’re responsible for &lt;STRONG&gt;security, compliance, data protection, or AI governance&lt;/STRONG&gt;, this session is packed with insights you can apply right away.&lt;/P&gt;
&lt;P&gt;All past recordings can be found here: &lt;A href="https://www.youtube.com/@JayCottonMicrosoft" target="_blank"&gt;Jay Cotton - YouTube&lt;/A&gt;. And don't forget to register for our sessions over the next three weeks here:&lt;/P&gt;
&lt;P&gt;&lt;A href="https://techcommunity.microsoft.com/blog/healthcareandlifesciencesblog/compliance-meets-ai-2026-microsoft-purview-in-the-age-of-ai/4475027" target="_blank"&gt;Compliance Meets Ai 2026: Microsoft Purview in the Age of Ai | Microsoft Community Hub&lt;/A&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Fri, 06 Feb 2026 18:57:17 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/healthcare-and-life-sciences/compliance-meets-ai-what-s-new-in-insider-risk-management-for/ba-p/4493184</guid>
      <dc:creator>Jay_Cotton</dc:creator>
      <dc:date>2026-02-06T18:57:17Z</dc:date>
    </item>
    <item>
      <title>Strategic Data Replication Enhances a Single Source of Truth Architecture for Analytics &amp; AI</title>
      <link>https://techcommunity.microsoft.com/t5/healthcare-and-life-sciences/strategic-data-replication-enhances-a-single-source-of-truth/ba-p/4493166</link>
      <description>&lt;P&gt;Strategic Data Replication is not Many Sources of Truth, and can be complimentary to a Single Source of Truth architectural design. Even in a highly regulated Healthcare data ecosystem, strategic replication can maintain a compliant source of truth as part of a larger strategy to scale and control costs.&lt;BR /&gt;&lt;BR /&gt;The concepts of single source of truth, data replication, data redundancy, and data duplication are often convoluted and can lead to misunderstandings that result in poorly performing and unnecessarily expensive architectures. I've published a new video that attempts to unwind the problem. No matter what data tools or cloud platforms you work with, hopefully you can find value in the content. I've added some fun analogies to data replication for cryptocurrency (such as Bitcoin) and Biology (DNA) to help explain the benefits of data replication in a multi-cloud world of analytics and AI.&amp;nbsp;&lt;BR /&gt;&lt;BR /&gt;How can you plan for a highly scalable, cost optimized analytics and AI architecture with high query concurrency, global users, and rapidly evolving technology having an AI Agentic future? How did Kimball vs Inmon influence the single source of truth discussion? How do you have a multi-tool environment with both corporate reporting and self-service tools that provide consistent metrics? How can you prevent the self-service spreadmarts and shadow IT nightmares of the past without stifling innovation and progress? Hopefully the content in the video below helps explain the history and frames up the future:&lt;/P&gt;
&lt;div data-video-id="https://youtu.be/Gn7A-wFwvjE/1770400563310" data-video-remote-vid="https://youtu.be/Gn7A-wFwvjE/1770400563310" class="lia-video-container lia-media-is-center lia-media-size-large"&gt;&lt;iframe src="https://cdn.embedly.com/widgets/media.html?src=https%3A%2F%2Fwww.youtube.com%2Fembed%2FGn7A-wFwvjE%3Ffeature%3Doembed&amp;amp;display_name=YouTube&amp;amp;url=https%3A%2F%2Fwww.youtube.com%2Fwatch%3Fv%3DGn7A-wFwvjE&amp;amp;image=https%3A%2F%2Fi.ytimg.com%2Fvi%2FGn7A-wFwvjE%2Fhqdefault.jpg&amp;amp;type=text%2Fhtml&amp;amp;schema=youtube" allowfullscreen="" style="max-width: 100%"&gt;&lt;/iframe&gt;&lt;/div&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Fri, 06 Feb 2026 17:56:16 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/healthcare-and-life-sciences/strategic-data-replication-enhances-a-single-source-of-truth/ba-p/4493166</guid>
      <dc:creator>Greg_Beaumont</dc:creator>
      <dc:date>2026-02-06T17:56:16Z</dc:date>
    </item>
    <item>
      <title>Introduction to Copilot Role Stories</title>
      <link>https://techcommunity.microsoft.com/t5/healthcare-and-life-sciences/introduction-to-copilot-role-stories/ba-p/4490630</link>
      <description>&lt;P&gt;AI is becoming a core part of how people across every industry get work done. From marketers creating content faster, to clinicians improving patient care, to insurance teams automating claims and authorizations, Microsoft Copilot and AI Agents are helping people focus on what matters most.&lt;/P&gt;
&lt;P&gt;In &lt;STRONG&gt;Role Stories &lt;/STRONG&gt;we walk through real-world scenarios in specific industries and job roles using M365 Copilot &amp;amp; Agents.&amp;nbsp; Today we are focused on &lt;STRONG&gt;Marketing &amp;amp; Content Creation&lt;/STRONG&gt;.&lt;/P&gt;
&lt;P&gt;Marketing teams are under constant pressure to produce more content quickly and tell better stories. That’s exactly why&amp;nbsp;&lt;EM&gt;Role Stories&lt;/EM&gt; exists: a program designed to show real, practical examples of how different jobs can use Microsoft Copilot and AI Agents to get meaningful time back.&lt;/P&gt;
&lt;P&gt;Marketing isn’t just making content: it’s shaping brand voice, driving engagement, and keeping teams ahead of tight deadlines. That work takes time. Copilot changes that by helping marketers brainstorm, write, edit, and repurpose content in minutes.&amp;nbsp; &lt;BR /&gt;&lt;STRONG&gt;Check out the story and&amp;nbsp;&lt;A class="lia-external-url" href="https://youtu.be/194Nww7JCjo?si=ZvthYCd4ANKAnYGt" target="_blank"&gt;video&lt;/A&gt; below.&lt;/STRONG&gt;&lt;/P&gt;
&lt;H3&gt;&lt;STRONG&gt;Let's learn about Maya the Marketer&lt;BR /&gt;&lt;/STRONG&gt;&lt;STRONG&gt;The Role Story: Marketing - Create Better Content&lt;/STRONG&gt;&lt;/H3&gt;
&lt;P&gt;On Monday morning, Maya opened a blank doc and felt that familiar knot: new blog due, socials to plan, no time. So she asked Copilot for help.&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Chapter 1: Finding the angle&lt;/STRONG&gt;&lt;BR /&gt;Maya opened &lt;STRONG&gt;Copilot in Word&lt;/STRONG&gt; and prompted: “&lt;SPAN class="lia-text-color-15"&gt;&lt;EM&gt;&lt;STRONG&gt;Give me 10 SEO keywords to attract people ages 20–30 who want to reduce their carbon footprint.&lt;/STRONG&gt;&lt;/EM&gt;&lt;/SPAN&gt;” Copilot gave back a tight list in seconds. Maya picked a theme, and keywords, and the fog started to lift.&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Chapter 2: From idea to draft&lt;/STRONG&gt;&lt;BR /&gt;“&lt;SPAN class="lia-text-color-15"&gt;&lt;EM&gt;&lt;STRONG&gt;Draft a friendly, confident blog post using Chicago Style grammar using these keywords.&lt;/STRONG&gt;&lt;/EM&gt;&lt;/SPAN&gt;"&amp;nbsp;&lt;BR /&gt;Copilot in Word shaped a clean outline, wrote sections, and matched her brand voice. Maya tweaked examples and added a customer quote. The draft felt like&amp;nbsp;&lt;EM&gt;her&lt;/EM&gt;, just faster.&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Chapter 3: One draft, many moments&lt;/STRONG&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;“&lt;SPAN class="lia-text-color-15"&gt;&lt;STRONG&gt;&lt;EM&gt;Turn this into a 100‑word LinkedIn blurb with emojis and a headline.&lt;/EM&gt;&lt;/STRONG&gt;&lt;/SPAN&gt;” Done. 👍&lt;/LI&gt;
&lt;LI&gt;“&lt;EM&gt;&lt;SPAN class="lia-text-color-15"&gt;&lt;STRONG&gt;C&lt;/STRONG&gt;&lt;/SPAN&gt;&lt;STRONG&gt;&lt;SPAN class="lia-text-color-14"&gt;&lt;SPAN class="lia-text-color-15"&gt;reate an email to advertise this blog and give me 3 different subject lines to choose from&lt;/SPAN&gt;.&lt;/SPAN&gt;”&lt;/STRONG&gt;&lt;/EM&gt; Done. 👍&lt;/LI&gt;
&lt;LI&gt;Then she used &lt;STRONG&gt;Copilot Create&lt;/STRONG&gt; to generate a photo‑realistic image that fit the story’s vibe, no endless stock‑photo hunt.&lt;BR /&gt;&lt;EM&gt;&lt;STRONG&gt;"&lt;SPAN class="lia-text-color-14"&gt;&lt;SPAN class="lia-text-color-15"&gt;Generate a photo‑realistic image that fit a story’s vibe about people between ages 20–30 who want to reduce their carbon footprint.&lt;/SPAN&gt;" 👍&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;/EM&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;EM&gt;&lt;STRONG&gt;&lt;SPAN class="lia-text-color-14"&gt;“&lt;SPAN class="lia-text-color-15"&gt;Pull short captions for Instagram.&lt;/SPAN&gt;&lt;/SPAN&gt;”&lt;/STRONG&gt;&lt;/EM&gt; Done. 👍&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&lt;STRONG&gt;Chapter 4: Result: Less work, more impact&lt;/STRONG&gt;&lt;BR /&gt;By lunch, Maya had a polished blog, a LinkedIn teaser, captions, headlines, and a hero image. She saved hours and widened reach without burning out. That afternoon, the post went live, and the team had time left to plan the next story.&amp;nbsp;&lt;/P&gt;
&lt;div data-video-id="https://youtu.be/194Nww7JCjo/1770166384518" data-video-remote-vid="https://youtu.be/194Nww7JCjo/1770166384518" class="lia-video-container lia-media-is-center lia-media-size-large"&gt;&lt;iframe src="https://cdn.embedly.com/widgets/media.html?url=https%3A%2F%2Fwww.youtube.com%2Fwatch%3Fv%3D194Nww7JCjo&amp;amp;type=text%2Fhtml&amp;amp;schema=youtu&amp;amp;display_name=YouTube&amp;amp;src=https%3A%2F%2Fwww.youtube.com%2Fembed%2F194Nww7JCjo" allowfullscreen="" style="max-width: 100%"&gt;&lt;/iframe&gt;&lt;/div&gt;
&lt;P&gt;&lt;STRONG&gt;Marketing - Content Creation:&amp;nbsp;&lt;/STRONG&gt;&lt;A href="https://adoption.microsoft.com/en-us/scenario-library/marketing/content-creation/" target="_blank" rel="noopener"&gt;Marketing scenario: Content creation (Copilot Scenario Library) – Microsoft Adoption&lt;/A&gt;&lt;STRONG&gt;&lt;BR /&gt;&lt;/STRONG&gt;&lt;STRONG&gt;Find Your Role and Your Story:&lt;/STRONG&gt;&amp;nbsp;&lt;A href="https://adoption.microsoft.com/en-us/scenario-library/" target="_blank" rel="noopener"&gt;Microsoft Scenario Library – Microsoft Adoption&lt;/A&gt;&lt;/P&gt;</description>
      <pubDate>Wed, 04 Feb 2026 14:23:51 GMT</pubDate>
      <guid>https://techcommunity.microsoft.com/t5/healthcare-and-life-sciences/introduction-to-copilot-role-stories/ba-p/4490630</guid>
      <dc:creator>Christina_Tillbrook</dc:creator>
      <dc:date>2026-02-04T14:23:51Z</dc:date>
    </item>
  </channel>
</rss>

